How can I select an element by tag name using jQuery?
Using the jQuery element selector, you can select every element on a page with a given tag name; for example, $("div") selects all <div> elements. Run the following code to see how to select elements by tag name using jQuery −
<html>
   <head>
      <title>jQuery Selector Example</title>
      <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>

      <script>
         $(document).ready(function() {
            $("div").css("background-color", "yellow");
         });
      </script>
   </head>

   <body>
      <div class="big" id="div1">
         <p>This is the first division of the DOM.</p>
      </div>

      <div class="medium" id="div2">
         <p>This is the second division of the DOM.</p>
      </div>

      <div class="small" id="div3">
         <p>This is the third division of the DOM.</p>
      </div>
   </body>
</html>
Pandas DataFrame Group by Consecutive Same Values | by Christopher Tao | Towards Data Science
It is very common to want to segment a Pandas DataFrame by runs of consecutive values. However, dealing with consecutive values is rarely easy in any environment, SQL included, and Pandas is no exception. Standard SQL provides a number of window functions to facilitate this kind of manipulation, but Pandas offers far fewer of them. Fortunately, there are workarounds in Python, and they are sometimes even simpler than the classic window functions.
In this article, I'll demonstrate how to group a Pandas DataFrame by runs of consecutive identical values, whether a value repeats once or many times.
If you are still not quite sure what problem we're trying to solve, don't worry; the sample data generated below should make it clear:
import pandas as pd

df = pd.DataFrame({
    'item': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M'],
    'v': [1, 2, 3, 3, 3, 4, 5, 5, 6, 7, 8, 8, 9]
})
The groups we expect are the runs of equal consecutive values: rows C, D and E (v == 3), rows G and H (v == 5), and rows K and L (v == 8); every other row forms a group of its own.
In this example, it is actually very easy to group these values: a simple groupby('v') is enough.
However, this implicitly assumes that the values in v are monotonically non-decreasing.
What if the value column is not monotonically increasing?
df = pd.DataFrame({
    'item': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N'],
    'v': [1, 2, 3, 3, 3, 4, 5, 5, 6, 7, 8, 8, 9, 3]
})
In this new example, we added a 14th row (index 13) whose value, v == 3, appears again. If we simply groupby('v'), this row will be put in the same group as the 2nd, 3rd and 4th rows, which is not what we want.
In other words, the last row should be in a separate group, because its value is not consecutive with the earlier streak of 3s.
The basic idea is to create a column that can be grouped on: it must hold the same value across a run of consecutive identical originals, and change whenever the original value changes. We can build it with cumsum(). Here are the intuitive steps:
1. Create a new column by shifting the original values down by one row.
2. Compare the shifted values with the originals. The result is True where they differ, otherwise False.
3. Apply cumsum() to the boolean column. The resulting column can be used as the grouping key.
Here is a sample DataFrame created for intuition purposes only.
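The intuition table can be reproduced with a few lines of code. The column shifted_not_equal_to_original matches the description above; the names shifted and group_id are my own, added for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    'item': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N'],
    'v': [1, 2, 3, 3, 3, 4, 5, 5, 6, 7, 8, 8, 9, 3]
})

# Step 1: shift the original values down by one row
df['shifted'] = df['v'].shift()

# Step 2: True where the shifted value differs from the original
df['shifted_not_equal_to_original'] = df['shifted'] != df['v']

# Step 3: a cumulative sum over the booleans yields a group id that
# only increases at the start of each new streak
df['group_id'] = df['shifted_not_equal_to_original'].cumsum()

print(df[['item', 'v', 'group_id']])
```

Note that rows C, D and E share group id 3, while the final 3 (row N) gets a fresh id of its own.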
Why does it work? If you focus on each group:
The boolean column shifted_not_equal_to_original is True at the first row of each streak, because the shifted value (the previous row's value) differs from the current one; from the second row of a streak onward it is False, because the value equals that of the preceding row.
Consequently, within each group the first value is True and the rest are False (if the group has multiple rows), so cumsum() increases by exactly one at the start of each group. Every row in a streak therefore shares the same group id, and that id differs from the previous streak's.
I know the intuition looks complicated, but once you understand it, the approach itself is very easy to use:
df.groupby((df['v'].shift() != df['v']).cumsum())
Let’s verify the groups:
for k, v in df.groupby((df['v'].shift() != df['v']).cumsum()):
    print(f'[group {k}]')
    print(v)
Very importantly, the last row (the repeated 3) is grouped separately from the 2nd, 3rd and 4th rows, which is exactly what we expected.
Therefore, this approach groups consecutive values regardless of whether a value reappears later on: as long as a repetition is not consecutive, a new group is created.
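The same group key also makes it easy to summarize each streak. As a quick usage sketch (the summary column names below are my own):

```python
import pandas as pd

df = pd.DataFrame({
    'item': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N'],
    'v': [1, 2, 3, 3, 3, 4, 5, 5, 6, 7, 8, 8, 9, 3]
})

groups = df.groupby((df['v'].shift() != df['v']).cumsum())

# One row per consecutive streak: its value, first/last item, and length
summary = groups.agg(value=('v', 'first'),
                     start=('item', 'first'),
                     end=('item', 'last'),
                     length=('v', 'size'))
print(summary)
```

The streak of 3s spanning items C to E shows up as a single row with length 3, and the final 3 (item N) appears as its own streak of length 1.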
Indeed, I wouldn't say that Pandas provides many useful window functions. However, Pandas lives in Python, a full programming language, unlike SQL. So it offers many more possibilities and can sometimes solve problems in even simpler ways.
Election Surveys with Plotly. Stacked Bars or Divergent Bars? | by Darío Weitz | Towards Data Science
During 2021, national legislative elections will be held in Argentina. The PASO (Primarias Abiertas Simultáneas y Obligatorias, Simultaneous and Mandatory Open Primary Elections) will be held on September 12, while the general elections will be held on November 14. The Chamber of Deputies will renew 127 of its 257 seats and the Senate will renew 24 of its 54 seats. It is considered a pivotal election because the ruling party is only 7 deputies away from obtaining its own majority in the Chamber of Deputies. Since it already has a majority in the Senate, a favorable result would give it total legislative control.
The electoral campaign has not yet begun because the candidates who will compete for the different electoral coalitions have not all been defined yet. Despite this, several polls are published every week in an attempt to gauge the future behavior of voters. Some of these surveys focus on the image of the main political referents of the only two electoral forces with a chance of winning the election, as a proxy for predicting voters' behavior at the moment of casting their ballots.
One of the main pollsters asked about the image of the current president, the two previous presidents, the two main governors in office, and the president of the main political party in the opposition. Of course, the best way to display the results is through some kind of data visualization technique.
In the following, I show two alternatives that Plotly offers for displaying the results of electoral surveys.
A survey scale should be symmetrical, offering as many positive or favorable response options as negative or unfavorable ones. Of course, the neutral response should not be absent (so respondents whose position is strictly neutral are not forced to choose a side). Nor should the "Don't Know / Don't Answer" option be absent, since it captures the respondents' unfamiliarity with the item under study.
One of the most commonly used charts to display survey results is the 100% Stacked Horizontal Bar Chart. A stacked bar chart shows a set of bars or rectangles, each of which represents a main category divided into subcategories. The numerical value of each subcategory is indicated by the length of rectangular segments that are stacked one after the other. In the particular case of survey results, the rectangles are oriented horizontally. In addition, in the case of 100% stacked bars, each segment represents the percentage of each subcategory, with the precaution that the sum of the percentages must equal 100.
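Since the stacked segments must sum to exactly 100, raw response counts are usually normalized row by row first. A minimal Pandas sketch (the counts and labels below are made up for illustration; they are not this article's survey data):

```python
import pandas as pd

# Hypothetical raw response counts per answer category
raw = pd.DataFrame({'Posit': [220, 130],
                    'Neutr': [70, 90],
                    'No_Op': [10, 30],
                    'Negat': [200, 250]},
                   index=['Candidate 1', 'Candidate 2'])

# Divide each row by its total so the segments of each bar sum to 100
pct = raw.div(raw.sum(axis=1), axis=0) * 100
print(pct.round(1))
```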
The following 100% Stacked Bar Chart shows the image level of the main Argentine politicians. The graph is sorted in descending order according to positive image. The order of the stacked bars is as follows: positive image; neutral image; negative image; ignorance of the politician.
The code to obtain Fig.1 is as follows:
First, we import Plotly Express as px, the plotly.graph_objects module as go, and the Pandas and Colorlover libraries as pd and cl respectively:
import plotly.express as px
import plotly.graph_objects as go
import pandas as pd
import colorlover as cl
We then create the dataframe with the survey results and sort it according to the level of positive image:
surv = {'Pol_List': ['PB', 'HRL', 'AF', 'CFK', 'MM', 'AK'],
        'Posit': [43.9, 39.8, 26.8, 26.7, 24.8, 21.7],
        'Neutr': [14.3, 25.5, 10.6, 6.0, 26.4, 9.1],
        'No_Op': [4.8, 4.2, 0.4, 0.5, 0.7, 2.1],
        'Negat': [37.0, 30.5, 62.2, 66.8, 48.1, 67.1]}

df_surv = pd.DataFrame(surv)
df_surv = df_surv.sort_values(by='Posit', ascending=True)
images = ['Negat', 'No_Op', 'Neutr', 'Posit']
Now, we are ready to create our graph.
The usual procedure with Plotly is to use the method .add_trace(go.Bar()) to generate the figure and then manipulate it with the methods .update_layout() and .update_xaxes(). Finally, display the diagram with .show() and export it with .write_image().
We place the .add_trace(go.Bar()) call inside a for loop that traverses the dataframe, extracting the politicians' initials (y = df_surv['Pol_List']) and the quantitative values recorded in the survey (x = df_surv[column]). With orientation = 'h' we indicate that the bars are horizontal, and with text and textposition we include the numerical values inside the rectangular segments. cl.scales selects a color scale that makes it easy to distinguish the positive values from the negative ones.
fig.update_layout() allows us to set the title text and font, as well as the orientation and location of the legend. It is very important to include barmode = 'stack' to ensure that the bar segments are stacked one after the other.
Finally, we hide the x-axis scale and grid, as they are not necessary to our storytelling.
fig = go.Figure()

for column in df_surv.columns[1:5]:
    fig.add_trace(go.Bar(
        x=df_surv[column],
        y=df_surv['Pol_List'],
        name=column,
        orientation='h',
        text=df_surv[column],
        textposition='auto',
        marker_color=cl.scales[str(len(images))]['div']['RdBu'][images.index(column)]
    ))

fig.update_layout(
    title='Image of Argentinian Politicians',
    title_font=dict(family='Arial', size=30),
    barmode='stack',
    legend_orientation='h',
    legend_traceorder='normal',
    legend_x=0.10,
    legend_y=1.1,
)

fig.update_xaxes(visible=False, showgrid=False)

fig.write_image(path + 'Survey1.png')   # 'path' is an output folder defined elsewhere
fig.show()
The diverging stacked bar charts are an extension of the previously described stacked bars with the addition of a vertical baseline. The conceptual idea is that negative responses are placed to the left of that baseline, while positive responses are placed to the right of it. This particular design allows for easy comparison between positive or favorable responses and unfavorable or negative responses for a wide variety of categories.
Each bar is separated into horizontal rectangular segments, stacked one after the other. As is characteristic of all bar charts, the length of each segment is proportional to the numerical value to be represented. This numerical value can be absolute or percentage depending on the characteristics of the data set to be plotted.
In order to place the negative responses to the left of the baseline we proceed to convert the corresponding column into negative values and sort according to that column:
df_surv['Negat'] = df_surv['Negat'] * -1
df_surv = df_surv.sort_values(by=['Negat'], ascending=False)
We proceed to draw the graph with the same method .add_trace(go.Bar()) using a code equivalent to the one previously indicated with the 100% stacked horizontal bars.
fig2 = go.Figure()

for column in df_surv.columns[1:5]:
    fig2.add_trace(go.Bar(
        x=df_surv[column],
        y=df_surv['Pol_List'],
        name=column,
        orientation='h',
        text=df_surv[column],
        textposition='auto',
        marker_color=cl.scales[str(len(images))]['div']['RdBu'][images.index(column)]
    ))
We must make two changes in the .update_layout() call: now we indicate barmode = 'relative' and legend_traceorder = "reversed".
fig2.update_layout(
    title='Image of Argentinian Politicians',
    title_font=dict(family='Arial', size=30),
    barmode='relative',
    legend_orientation='h',
    legend_traceorder='reversed',
    legend_x=0.10,
    legend_y=1.1,
)
Then we draw the central vertical line that characterizes the divergent stacked bars:
# Draw a central vertical line
fig2.add_shape(type='line',
               x0=0.5, y0=-0.5, x1=0.5, y1=5.5,
               line=dict(color='yellow', width=3))
On the x-axis we keep the default options, showing the numerical values and the grid, which now do enhance the storytelling:
fig2.update_xaxes(visible=True, showgrid=True)

fig2.write_image(path + 'Divergent1.png')
fig2.show()
A common alternative among the divergent stacked bars is to place the neutral responses between the positive and negative responses and to draw the vertical line in the center of the segment representing the neutral responses. Fig. 3 shows a diagram made with Matplotlib with these characteristics: SD: Strongly Disagree; D: Disagree; N: Neutral; A: Agree; SA: Strongly Agree. It is evident that this layout makes it difficult to perform a direct comparison between positive and negative responses since they do not begin at the same baseline.
It's up to you which alternative you prefer to tell your story.
If you find this article of interest, please consider reading my previous ones (https://medium.com/@dar.wtz):
Lollipop & Dumbbell Charts with Plotly, Mean or Median?
Slope Charts, Why & How, Storytelling with Slopes
Fix MySQL Database Error #1064?
The Database Error #1064 occurs when a statement contains incorrect syntax. For example, let's say we are creating the table below −
mysql> create table DemoTable
(
UserId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
UserName varchar(100),
UserAge int,
UserAddress varchar(200),
UserCountryName varchar(100) ,
isMarried boolean,
);
This produces the following error −
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 10
To get rid of the above error, you need to remove the trailing comma after the last column definition. The corrected query is as follows −
mysql> create table DemoTable
(
UserId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
UserName varchar(100),
UserAge int,
UserAddress varchar(200),
UserCountryName varchar(100),
isMarried boolean
);
Query OK, 0 rows affected (1.04 sec)
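This class of mistake is also easy to screen for before sending the statement to the server. The following is an illustrative Python helper, not a MySQL feature, that flags a comma sitting directly before the closing parenthesis of a column list:

```python
import re

def has_trailing_comma(create_stmt: str) -> bool:
    """Return True if a comma is immediately followed by a closing
    parenthesis, the syntax slip behind many #1064 reports."""
    return re.search(r',\s*\)', create_stmt) is not None

bad = """create table DemoTable (
    UserId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
    isMarried boolean,
)"""
good = bad.replace("boolean,", "boolean")

print(has_trailing_comma(bad))   # True
print(has_trailing_comma(good))  # False
```

Note this is a rough heuristic for demonstration only; it does not parse SQL and would also flag a comma-paren sequence inside a string literal.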
How to create modules in Python 3 ? - GeeksforGeeks
|
24 Jun, 2021
Modules are simply Python files containing code — functions, classes, and variables. Any Python file with a .py extension can be referenced as a module. Some modules are available through the Python standard library and are installed along with Python itself; other modules can be installed using the pip installer, and we can also create our own Python modules.
In this article, we will see how to create modules in Python.
Any Python code consisting of functions, classes, and variables can be termed a module. We will write a simple module containing a function, a class, and a variable.
Python
# defining a class Age
class Age:

    # defining the __init__ method
    def __init__(self, name, years):
        self.years = years
        self.name = name

    # defining the getAge method
    def getAge(self):
        print("The age of " + self.name + " is " + self.years)

# defining the function hello
def hello():
    print('hello geeks!')

# creating a string s
s = 'I love python!'
In the above module, we define a class ‘Age’ with an __init__() method to initialize the object and a getAge() method to print the age of the object that was passed in. We also define a function hello(), which prints ‘hello geeks!’, and a string s, which is ‘I love python!’.
We will name the above module mod.py and save it.
Now we will discuss how to import a module and use its various functions, classes, variables from other python files. For this, we will make a file main.py in the same directory as mod.py.
We will import the module with an import statement and use the functions, classes, variable as following:-
For using the function from a module we will first import it and then will call the function with module_name.function_name(). For example, if we want to call the function hello() of mod.py in main.py we will first import mod.py and then call it with mod.hello().
Python
# importing the module mod.py
import mod

# calling the function hello() from mod.py
mod.hello()
Output:
hello geeks!
Note: Please make sure that the main.py and mod.py are in the same directory.
We can also import only a separate function from a module to use it by writing: from module_name import function_name
And then call it directly. For example, if we want to call the function hello() of mod.py in main.py using the above method we will write the following code in main.py.
Python
# importing only the function hello from the module mod.py
from mod import hello

# calling the function hello()
hello()
Output:
hello geeks!
We can use the classes to make various objects by module_name.class_name(). For example, if we want to create an object in main.py by the name ‘jack’ of class Age present in mod.py. We will write the following code in main.py:
Python
# importing the module mod.py
import mod

# creating an object jack with jack as name and 21 as age
jack = mod.Age('jack', '21')

# calling the getAge() method for object 'jack'
jack.getAge()
Output:
The age of jack is 21
The way we imported only a particular function in the same way we can import only a particular class. The code for this will be:
Python
# importing only the Age class from mod
from mod import Age

# creating the object jack with jack as name and 21 as age
jack = Age('jack', '21')

# calling the getAge() method for the object jack
jack.getAge()
Output:
The age of jack is 21
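Once a module is imported, you can also inspect what it defines with the built-in dir() function. A small sketch using the standard-library math module (any importable module, including mod.py, works the same way):

```python
import math

# dir() lists every name a module defines; filter out the dunder entries
public_names = [name for name in dir(math) if not name.startswith('_')]

print('pi' in public_names, 'sqrt' in public_names)
```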
For getting any variable from a module we have to first import the module, and then we can simply get it by module_name.variable_name. For example, if we want to print the variable with name s present in our mod.py module we will write the following code:
Python
# importing the module mod.py
import mod

# printing the variable s present in mod.py
print(mod.s)
Output:
I love python!
As we did for functions and classes in the same way we can import a particular variable from a module. For printing variable ‘s’ by this method the code will be:
Python
# importing only the variable s from module mod.py
from mod import s

# printing the variable s
print(s)
Output:
I love python!
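A module can also be imported under a shorter alias with `import ... as`. The sketch below is self-contained: it first writes a minimal, hypothetical mod.py to a temporary directory (so the example runs anywhere), then imports it as m:

```python
import pathlib
import sys
import tempfile

# write a minimal mod.py so the example is runnable anywhere (hypothetical content)
tmp_dir = tempfile.mkdtemp()
pathlib.Path(tmp_dir, "mod.py").write_text(
    "s = 'I love python!'\n"
    "def hello():\n"
    "    print('hello geeks!')\n"
)
sys.path.insert(0, tmp_dir)

# importing the module under the alias m
import mod as m

m.hello()
print(m.s)
```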
Python 3 - time strptime() Method
|
The method strptime() parses a string representing a time according to a format. The return value is a struct_time as returned by gmtime() or localtime().
The format parameter uses the same directives as those used by strftime(); it defaults to "%a %b %d %H:%M:%S %Y" which matches the formatting returned by ctime().
If string cannot be parsed according to format, or if it has excess data after parsing, ValueError is raised.
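For instance (a hedged sketch — the exact error message may vary between Python versions), a string that does not match the supplied format raises ValueError:

```python
import time

# "2015-12-30" uses dashes, but the format expects space-separated fields
try:
    time.strptime("2015-12-30", "%d %m %Y")
except ValueError as exc:
    print("parse failed:", exc)
```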
Following is the syntax for strptime() method −
time.strptime(string[, format])
string − This is the time in string format which would be parsed based on the given format.
format − This is the directive which would be used to parse the given string.
The following directives can be embedded in the format string −
%a − abbreviated weekday name
%A − full weekday name
%b − abbreviated month name
%B − full month name
%c − preferred date and time representation
%C − century number (the year divided by 100, range 00 to 99)
%d − day of the month (01 to 31)
%D − same as %m/%d/%y
%e − day of the month (1 to 31)
%g − like %G, but without the century
%G − 4-digit year corresponding to the ISO week number (see %V).
%h − same as %b
%H − hour, using a 24-hour clock (00 to 23)
%I − hour, using a 12-hour clock (01 to 12)
%j − day of the year (001 to 366)
%m − month (01 to 12)
%M − minute
%n − newline character
%p − either am or pm according to the given time value
%r − time in a.m. and p.m. notation
%R − time in 24 hour notation
%S − second
%t − tab character
%T − current time, equal to %H:%M:%S
%u − weekday as a number (1 to 7), Monday = 1. Warning: In Sun Solaris Sunday = 1
%U − week number of the current year, starting with the first Sunday as the first day of the first week
%V − The ISO 8601 week number of the current year (01 to 53), where week 1 is the first week that has at least 4 days in the current year, and with Monday as the first day of the week
%W − week number of the current year, starting with the first Monday as the first day of the first week
%w − day of the week as a decimal, Sunday = 0
%x − preferred date representation without the time
%X − preferred time representation without the date
%y − year without a century (range 00 to 99)
%Y − year including the century
%Z or %z − time zone or name or abbreviation
%% − a literal % character
This return value is struct_time as returned by gmtime() or localtime().
The following example shows the usage of strptime() method.
#!/usr/bin/python3
import time
struct_time = time.strptime("30 12 2015", "%d %m %Y")
print ("tuple : ", struct_time)
When we run the above program, it produces the following result −
tuple : time.struct_time(tm_year = 2015, tm_mon = 12, tm_mday = 30,
tm_hour = 0, tm_min = 0, tm_sec = 0, tm_wday = 2, tm_yday = 364, tm_isdst = -1)
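The parsed struct_time can be fed straight back into time.strftime() to re-format it. A quick sketch of the round trip (the weekday name printed assumes an English locale):

```python
import time

t = time.strptime("30 12 2015", "%d %m %Y")

# individual fields of the parsed struct_time
print(t.tm_year, t.tm_mon, t.tm_mday)

# format the struct_time back into a human-readable string
print(time.strftime("%A, %d %B %Y", t))
```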
Dictionaries in Python. One of the most important data... | by Ankit Gupta | Towards Data Science
|
In this article, I will talk about dictionaries. This is the second article in the series named “Data Structures in Python”. The first part of this series was about lists.
Dictionaries are important data structures in Python that use keys for indexing. They hold a sequence of key-value pairs; in versions before Python 3.7 the order of items was not guaranteed, while modern dictionaries preserve insertion order. The keys must be immutable (hashable). Just like lists, the values of dictionaries can hold heterogeneous data i.e., integers, floats, strings, NaN, Booleans, lists, arrays, and even nested dictionaries.
This article will provide you a clear understanding and enable you to work proficiently with Python dictionaries.
The following topics are covered in this article:
Creating a dictionary and adding elements
Accessing dictionary elements
Removing dictionary elements
Adding/inserting new elements
Merging/concatenating dictionaries
Modifying a dictionary
Sorting a dictionary
Dictionary comprehension
Alternative ways to create dictionaries
Copying a dictionary
Renaming existing keys
Nested dictionaries
Checking if a key exists in a dictionary
Like lists are initialized using square brackets ([ ]), dictionaries are initialized using curly brackets ({ }). An empty dictionary has, of course, zero length.
dic_a = {}  # An empty dictionary

type(dic_a)
>>> dict

len(dic_a)
>>> 0
A dictionary has two characteristic features: keys and values. Each key has a corresponding value. Both, the key and the value, can be of type string, float, integer, NaN, etc. Adding elements to a dictionary means adding a key-value pair. A dictionary is composed of one or more key-value pairs.
Let us add some elements to our empty dictionary. The following is one way to do so. Here, 'A' is the key and 'Apple' is its value. You can add as many elements as you like.
# Adding the first element
dic_a['A'] = 'Apple'
print(dic_a)
>>> {'A': 'Apple'}

# Adding the second element
dic_a['B'] = 'Ball'
print(dic_a)
>>> {'A': 'Apple', 'B': 'Ball'}
Note: Python being case-sensitive, 'A' and 'a' act as two different keys.
dic_a['a'] = 'apple'
print(dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'a': 'apple'}
If you find the above method of adding elements one-by-one tiresome, you can also initialize the dictionary at once by specifying all the key-value pairs.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}
So far your dictionary had strings as keys and values. A dictionary can also store data of mixed types. The following is a valid Python dictionary.
dic_b = {1: 'Ace', 'B': 123, np.nan: 99.9, 'D': np.nan, 'E': np.inf}
You should, though, use meaningful names for the keys, since they serve as the indices of the dictionary. In particular, avoid using floats and np.nan as keys.
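Why avoid NaN keys? NaN compares unequal to itself, so an entry stored under one NaN object cannot be found through a different NaN object — a brief sketch:

```python
import math

nan_key = float("nan")
d = {nan_key: "value"}

# the original key object is found (dict lookup checks identity first) ...
print(nan_key in d)

# ... but a *different* NaN is not, because nan != nan
print(math.nan in d)
```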
Having created our dictionaries, let us see how we can access their elements.
You can access the keys and values using the functions dict.keys() and dict.values(), respectively. You can also access both keys and values in the form of tuples using the items() function.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

dic_a.keys()
>>> dict_keys(['A', 'B', 'C'])

dic_a.values()
>>> dict_values(['Apple', 'Ball', 'Cat'])

dic_a.items()
>>> dict_items([('A', 'Apple'), ('B', 'Ball'), ('C', 'Cat')])
Alternatively, you can also use a “for” loop to access/print them one at a time.
# Printing keys
for key in dic_a.keys():
    print(key, end=' ')
>>> A B C

# Printing values
for value in dic_a.values():
    print(value, end=' ')
>>> Apple Ball Cat
You can avoid two “for” loops and access the keys and values using items(). The “for” loop will iterate through the key-value pairs returned by items(). Here, the key and value are arbitrary variable names.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

for key, value in dic_a.items():
    print(key, value)
>>> A Apple
>>> B Ball
>>> C Cat
The dictionary items cannot be accessed using list-like indexing.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

dic_a[0]
>>> KeyError: 0
You need to use the keys to access the corresponding values from a dictionary.
# Accessing the value "Apple"
dic_a['A']
>>> 'Apple'

# Accessing the value "Cat"
dic_a['C']
>>> 'Cat'
You will get an error if the key does not exist in the dictionary.
dic_a['Z']
>>> KeyError: 'Z'
If you want to avoid such key errors in case of non-existent keys, you can use the get() function. This returns None when the key does not exist. You can also use a custom message to be returned.
print(dic_a.get('Z'))
>>> None

# Custom return message
print(dic_a.get('Z', 'Key does not exist'))
>>> Key does not exist
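If missing keys routinely need a fallback value, collections.defaultdict from the standard library bakes the default in — a small optional sketch (not part of the original example):

```python
from collections import defaultdict

# every missing key transparently receives the factory's value
dic = defaultdict(lambda: 'Key does not exist')
dic['A'] = 'Apple'

print(dic['A'])
print(dic['Z'])
```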
If you want to access dictionary elements (either keys or values) using indices, you need to first convert them into lists.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

list(dic_a.keys())[0]
>>> 'A'

list(dic_a.keys())[-1]
>>> 'C'

list(dic_a.values())[0]
>>> 'Apple'

list(dic_a.values())[-1]
>>> 'Cat'
Deleting elements from a dictionary means deleting a key-value pair together.
You can delete the dictionary elements using the del keyword and the key whose value you want to delete. The deletion is in-place, which means you do not need to re-assign the value of the dictionary after deletion.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

# Deleting the key-value pair of 'A': 'Apple'
del dic_a['A']
print (dic_a)
>>> {'B': 'Ball', 'C': 'Cat'}

# Deleting the key-value pair of 'C': 'Cat'
del dic_a['C']
print (dic_a)
>>> {'B': 'Ball'}
You can also use the “pop( )” function to delete elements. It returns the value being popped (deleted) and the dictionary is modified in-place.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

dic_a.pop('A')
>>> 'Apple'

print (dic_a)
>>> {'B': 'Ball', 'C': 'Cat'}
In both the above-described methods, you will get a KeyError if the key to be deleted does not exist in the dictionary. In the case of “pop( )”, you can specify the error message to show up if the key does not exist.
key_to_delete = 'E'
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

dic_a.pop(key_to_delete, f'Key {key_to_delete} does not exist.')
>>> 'Key E does not exist.'
There is no direct way but you can use a “for” loop as shown below.
to_delete = ['A', 'C']
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

for key in to_delete:
    del dic_a[key]

print (dic_a)
>>> {'B': 'Ball'}
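As a sketch of an alternative (not shown in the original), you can also build a new dictionary that simply skips the unwanted keys with a dictionary comprehension. Unlike the "for" loop, this does not modify the original dictionary in place.

```python
to_delete = ['A', 'C']
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

# Keep only the key-value pairs whose key is not scheduled for deletion
dic_b = {k: v for k, v in dic_a.items() if k not in to_delete}
print(dic_b)
# {'B': 'Ball'}
```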
You can add one element at a time to an already existing dictionary as shown below.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

dic_a['D'] = 'Dog'
print (dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}

dic_a['E'] = 'Egg'
print (dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog', 'E': 'Egg'}
If the key you are adding already exists, the existing value will be overwritten.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

dic_a['A'] = 'Adam'  # key 'A' already exists with the value 'Apple'
print (dic_a)
>>> {'A': 'Adam', 'B': 'Ball', 'C': 'Cat'}
You can also use the update() function for adding a new key-value pair by passing the pair as an argument.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

dic_a.update({'D': 'Dog'})
print (dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}
The update() function also allows you to add multiple key-value pairs simultaneously to an existing dictionary.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}
dic_b = {'D': 'Dog', 'E': 'Egg'}

dic_a.update(dic_b)
print (dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog', 'E': 'Egg'}
You can merge two or more dictionaries using the unpacking operator (**) starting Python 3.5 onwards.
dic_a = {'A': 'Apple', 'B': 'Ball'}
dic_b = {'C': 'Cat', 'D': 'Dog'}

dic_merged = {**dic_a, **dic_b}
print (dic_merged)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}
If you don’t want to create a new dictionary but just want to add dic_b to the existing dic_a, you can simply update the first dictionary as shown earlier.
dic_a = {'A': 'Apple', 'B': 'Ball'}
dic_b = {'C': 'Cat', 'D': 'Dog'}

dic_a.update(dic_b)
print (dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}
One of the characteristics of Python dictionaries is that they cannot have duplicate keys, i.e., a key cannot appear twice. So what happens if you concatenate two or more dictionaries that share one or more keys?
The answer is that the key-value pair from the last merged dictionary (in the order of merging) survives. In the following example, the key 'A' exists in all three dictionaries, and, therefore, the final dictionary takes its value from the last merged dictionary (dic_c).
dic_a = {'A': 'Apple', 'B': 'Ball'}
dic_b = {'C': 'Cat', 'A': 'Apricot'}
dic_c = {'A': 'Adam', 'E': 'Egg'}

dic_merged = {**dic_a, **dic_b, **dic_c}
print (dic_merged)
>>> {'A': 'Adam', 'B': 'Ball', 'C': 'Cat', 'E': 'Egg'}
I just said that dictionaries cannot have duplicate keys. Strictly speaking, you can define a dictionary with duplicate keys, but when you print it, only the last occurrence of each duplicate key survives. As shown below, only unique keys are returned, and for the duplicated key (‘A’ here), only the last value is kept.
dic_a = {'A': 'Apple', 'B': 'Ball', 'A': 'Apricot', 'A': 'Assault'}

print (dic_a)
>>> {'A': 'Assault', 'B': 'Ball'}
From Python 3.9 onwards, you can use the | operator to concatenate two or more dictionaries.
dic_a = {'A': 'Apple', 'B': 'Ball'}
dic_b = {'C': 'Cat', 'D': 'Dog'}

dic_c = dic_a | dic_b
print (dic_c)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}

# Concatenating more than 2 dictionaries
dic_d = dic_a | dic_b | dic_c
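Python 3.9 also introduced the in-place variant |=, which merges the right-hand dictionary directly into the left-hand one, much like update(). A minimal sketch:

```python
dic_a = {'A': 'Apple', 'B': 'Ball'}
dic_b = {'C': 'Cat', 'D': 'Dog'}

# |= updates dic_a in place instead of building a new dictionary
dic_a |= dic_b
print(dic_a)
# {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}
```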
If you want to change the value of 'A' from 'Apple' to 'Apricot', you can use a simple assignment.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}

dic_a['A'] = 'Apricot'
print (dic_a)
>>> {'A': 'Apricot', 'B': 'Ball', 'C': 'Cat'}
The order is not maintained in a dictionary. You can sort a dictionary using either the keys or the values using the sorted() function.
If the keys are strings (alphabets), they will be sorted alphabetically. In a dictionary, we have two main elements: keys and values. Hence, while sorting with respect to the keys, we use the first element i.e. keys, and, therefore, the index used in the lambda function is “[0]”. You can read this post for more information on the lambda functions.
dic_a = {'B': 100, 'C': 10, 'D': 90, 'A': 40}

sorted(dic_a.items(), key=lambda x: x[0])
>>> [('A', 40), ('B', 100), ('C', 10), ('D', 90)]
The sorting is not in-place. As shown below, if you now print the dictionary, it remains unordered, as initialized originally. You will have to reassign it after sorting.
# The dictionary remains unordered if you print it
print (dic_a)
>>> {'B': 100, 'C': 10, 'D': 90, 'A': 40}
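A minimal sketch of the reassignment step: sorted() returns a list of tuples, so wrap it in dict() and assign the result back. Since Python 3.7, dictionaries preserve insertion order, so the reassigned dictionary stays sorted by key.

```python
dic_a = {'B': 100, 'C': 10, 'D': 90, 'A': 40}

# Rebuild the dictionary from the sorted key-value pairs and reassign it
dic_a = dict(sorted(dic_a.items(), key=lambda x: x[0]))
print(dic_a)
# {'A': 40, 'B': 100, 'C': 10, 'D': 90}
```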
If you want to sort in reverse order, specify the keyword reverse=True.
sorted(dic_a.items(), key=lambda x: x[0], reverse=True)
>>> [('D', 90), ('C', 10), ('B', 100), ('A', 40)]
To sort a dictionary based on its values, you need to use the index “[1]” inside the lambda function.
dic_a = {'B': 100, 'C': 10, 'D': 90, 'A': 40}

sorted(dic_a.items(), key=lambda x: x[1])
>>> [('C', 10), ('A', 40), ('D', 90), ('B', 100)]
It is a very helpful method for creating dictionaries dynamically. Suppose you want to create a dictionary where the key is an integer and the value is its square. A dictionary comprehension would look like the following.
dic_c = {i: i**2 for i in range(5)}

print (dic_c)
>>> {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
If you want your keys to be strings, you can use “f-strings”.
dic_c = {f'{i}': i**2 for i in range(5)}

print (dic_c)
>>> {'0': 0, '1': 1, '2': 4, '3': 9, '4': 16}
Suppose you have two lists and you want to create a dictionary out of them. The simplest way is to use the dict() constructor.
names = ['Sam', 'Adam', 'Tom', 'Harry']
marks = [90, 85, 55, 70]

dic_grades = dict(zip(names, marks))
print (dic_grades)
>>> {'Sam': 90, 'Adam': 85, 'Tom': 55, 'Harry': 70}
You can also zip the two lists together and create the dictionary using “dictionary comprehension” as shown earlier.
dic_grades = {k: v for k, v in zip(names, marks)}

print (dic_grades)
>>> {'Sam': 90, 'Adam': 85, 'Tom': 55, 'Harry': 70}
You can also pass a list of key-value pairs separated by commas to the dict() construct and it will return a dictionary.
dic_a = dict([('A', 'Apple'), ('B', 'Ball'), ('C', 'Cat')])

print (dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}
If your keys are strings, you can use an even simpler initialization, passing the variable names as the keys.
dic_a = dict(A='Apple', B='Ball', C='Cat')

print (dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}
I will explain this point using a simple example. There are more subtleties involved in the copying mechanism of dictionaries, and I would recommend referring to this Stack Overflow post for a detailed explanation.
When you simply reassign an existing dictionary (parent dictionary) to a new dictionary, both point to the same object (“reference assignment”).
Consider the following example where you reassign dic_a to dic_b.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}
dic_b = dic_a  # Simple reassignment (reference assignment)
Now, if you modify dic_b (e.g., by adding a new element), you will notice that the change is also reflected in dic_a.
dic_b['D'] = 'Dog'

print (dic_b)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}

print (dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}
A shallow copy is created using the copy() function. In a shallow copy, the two dictionaries act as two independent objects, with their contents still sharing the same reference. If you add a new key-value pair in the new dictionary (shallow copy), it will not show up in the parent dictionary.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}
dic_b = dic_a.copy()

dic_b['D'] = 'Dog'

# New, shallow copy, has the new key-value pair
print (dic_b)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}

# The parent dictionary does not have the new key-value pair
print (dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}
Whether the contents of the parent dictionary (“dic_a”) change depends on the type of the value. For instance, in the following, the contents are simple strings, which are immutable. So changing the value in “dic_b” for a given key ('A' in this case) will not change the value of the key 'A' in “dic_a”.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}
dic_b = dic_a.copy()

# Replace an existing key with a new value in the shallow copy
dic_b['A'] = 'Adam'
print (dic_b)
>>> {'A': 'Adam', 'B': 'Ball', 'C': 'Cat'}

# Strings are immutable so 'Apple' doesn't change to 'Adam' in dic_a
print (dic_a)
>>> {'A': 'Apple', 'B': 'Ball', 'C': 'Cat'}
However, if the value of the key 'A' in “dic_a” is a list, then changing its value in the “dic_b” will reflect the changes in “dic_a” (parent dictionary), because lists are mutable.
dic_a = {'A': ['Apple'], 'B': 'Ball', 'C': 'Cat'}

# Make a shallow copy
dic_b = dic_a.copy()

dic_b['A'][0] = 'Adam'
print (dic_b)
>>> {'A': ['Adam'], 'B': 'Ball', 'C': 'Cat'}

# Lists are mutable so the changes get reflected in dic_a too
print (dic_a)
>>> {'A': ['Adam'], 'B': 'Ball', 'C': 'Cat'}
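One way to avoid this mutable-value pitfall (not covered above) is a deep copy, which also duplicates nested mutable values. A minimal sketch using the standard-library copy module:

```python
import copy

dic_a = {'A': ['Apple'], 'B': 'Ball', 'C': 'Cat'}

# deepcopy duplicates the inner list too, not just the outer dictionary
dic_b = copy.deepcopy(dic_a)

dic_b['A'][0] = 'Adam'
print(dic_b)
# {'A': ['Adam'], 'B': 'Ball', 'C': 'Cat'}

# This time the parent dictionary is unaffected
print(dic_a)
# {'A': ['Apple'], 'B': 'Ball', 'C': 'Cat'}
```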
Suppose you want to replace the key “Adam” with “Alex”. You can use the pop function because it deletes the passed key (“Adam” here) and returns the deleted value (85 here). So you kill two birds with one stone: use the returned (deleted) value to assign the value to the new key (“Alex” here). There can be more complicated cases where the key is a tuple. Such cases are out of the scope of this article.
dic_a = {'Sam': 90, 'Adam': 85, 'Tom': 55, 'Harry': 70}

dic_a['Alex'] = dic_a.pop('Adam')
print (dic_a)
>>> {'Sam': 90, 'Tom': 55, 'Harry': 70, 'Alex': 85}
A nested dictionary has one or more dictionaries within a dictionary. The following is the simplest example of a nested dictionary with two layers of nesting. Here, the outer dictionary (layer 1) has only one key-value pair. However, the value is now a dictionary itself.
dic_a = {'A': {'B': 'Ball'}}

dic_a['A']
>>> {'B': 'Ball'}

type(dic_a['A'])
>>> dict
If you want to further access the key-value pair of the inner dictionary (layer 2), you now need to use dic_a['A'] as the dictionary.
dic_a['A']['B']
>>> 'Ball'
Let us add an additional layer of the nested dictionary. Now, dic_a['A'] is a nested dictionary in itself, unlike the simplest nested dictionary above.
dic_a = {'A': {'B': {'C': 'Cat'}}}

# Layer 1
dic_a['A']
>>> {'B': {'C': 'Cat'}}

# Layer 2
dic_a['A']['B']
>>> {'C': 'Cat'}

# Layer 3
dic_a['A']['B']['C']
>>> 'Cat'
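When probing such nested layers, indexing with a missing key raises a KeyError. A sketch of a safer lookup: chain get() calls with an empty dictionary as the default, so a missing layer yields None instead of an error.

```python
dic_a = {'A': {'B': {'C': 'Cat'}}}

# Each get() falls back to {} so the next .get() still works
print(dic_a.get('A', {}).get('B', {}).get('C'))
# Cat

# A missing outer key returns None instead of raising a KeyError
print(dic_a.get('X', {}).get('B', {}).get('C'))
# None
```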
You can find if a particular key exists in a dictionary using the in operator.
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}

'A' in dic_a
>>> True

'E' in dic_a
>>> False
In the above code, you do not need to use “in dic_a.keys( )” because “in dic_a” already looks up in the keys.
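A small sketch to make this concrete: membership tests with "in" apply only to the keys, so to check whether something appears among the values you must search dic_a.values() explicitly.

```python
dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}

print('Apple' in dic_a)           # False: 'Apple' is a value, not a key
print('Apple' in dic_a.values())  # True
```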
This brings me to the end of this article. You can access the first part of this series on “Data Structures in Python” here.
},
{
"code": null,
"e": 16539,
"s": 16460,
"text": "You can find if a particular key exists in a dictionary using the in operator."
},
{
"code": null,
"e": 16636,
"s": 16539,
"text": "dic_a = {'A': 'Apple', 'B': 'Ball', 'C': 'Cat', 'D': 'Dog'}'A' in dic_a# True'E' in dic_a# False"
},
{
"code": null,
"e": 16746,
"s": 16636,
"text": "In the above code, you do not need to use “in dic_a.keys( )” because “in dic_a” already looks up in the keys."
}
] |
Find all possible outcomes of a given expression - GeeksforGeeks
|
10 Jun, 2021
Given an arithmetic expression, find all possible outcomes of this expression. Different outcomes are evaluated by putting brackets at different places. We may assume that the numbers are single-digit numbers in the given expression. Examples:
Input: 1+3*2
Output: 8 7
Explanation
(1 + 3)*2 = 8
(1 + (3 * 2)) = 7
Input: 1*2+3*4
Output: 14 20 14 20 20
(1*(2+(3*4))) = 14
(1*((2+3)*4)) = 20
((1*2)+(3*4)) = 14
((1*(2+3))*4) = 20
(((1*2)+3)*4) = 20
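The values above can be verified directly; for example, in Python:

```python
# Evaluate each distinct parenthesization of 1+3*2 and 1*2+3*4 directly
outcomes_a = [(1 + 3) * 2, 1 + (3 * 2)]
print(outcomes_a)  # [8, 7]

outcomes_b = [
    1 * (2 + (3 * 4)),
    1 * ((2 + 3) * 4),
    (1 * 2) + (3 * 4),
    (1 * (2 + 3)) * 4,
    ((1 * 2) + 3) * 4,
]
print(outcomes_b)  # [14, 20, 14, 20, 20]
```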
The idea is to iterate through every operator in the given expression. For every operator, evaluate all possible values of its left and right sides. Apply the current operator to every pair of left-side and right-side values and add all evaluated values to the result.
1) Initialize result 'res' as empty.
2) Do following for every operator 'x'.
a) Recursively evaluate all possible values on left of 'x'.
Let the list of values be 'l'.
   b) Recursively evaluate all possible values on right of 'x'.
Let the list of values be 'r'.
c) Loop through all values in list 'l'
loop through all values in list 'r'
Apply current operator 'x' on current items of
'l' and 'r' and add the evaluated value to 'res'
3) Return 'res'.
Below is the implementation of above algorithm.
C++
Java
Python3
C#
PHP
Javascript
// C++ program to evaluate all possible values of// a expression#include<bits/stdc++.h>using namespace std; // Utility function to evaluate a simple expression// with one operator only.int eval(int a, char op, int b){ if (op=='+') return a+b; if (op=='-') return a-b; if (op == '*') return a*b;} // This function evaluates all possible values and// returns a list of evaluated values.vector<int> evaluateAll(string expr, int low, int high){ // To store result (all possible evaluations of // given expression 'expr') vector<int> res; // If there is only one character, it must // be a digit (or operand), return it. if (low == high) { res.push_back(expr[low] - '0'); return res; } // If there are only three characters, middle // one must be operator and corner ones must be // operand if (low == (high-2)) { int num = eval(expr[low]-'0', expr[low+1], expr[low+2]-'0'); res.push_back(num); return res; } // every i refers to an operator for (int i=low+1; i<=high; i+=2) { // l refers to all the possible values // in the left of operator 'expr[i]' vector<int> l = evaluateAll(expr, low, i-1); // r refers to all the possible values // in the right of operator 'expr[i]' vector<int> r = evaluateAll(expr, i+1, high); // Take above evaluated all possible // values in left side of 'i' for (int s1=0; s1<l.size(); s1++) { // Take above evaluated all possible // values in right side of 'i' for (int s2=0; s2<r.size(); s2++) { // Calculate value for every pair // and add the value to result. int val = eval(l[s1], expr[i], r[s2]); res.push_back(val); } } } return res;} // Driver programint main(){ string expr = "1*2+3*4"; int len = expr.length(); vector<int> ans = evaluateAll(expr, 0, len-1); for (int i=0; i< ans.size(); i++) cout << ans[i] << endl; return 0;}
// Java program to evaluate all possible// values of a expressionimport java.util.*; class GFG{ // Utility function to evaluate a simple expression // with one operator only. static int eval(int a, char op, int b) { if (op == '+') { return a + b; } if (op == '-') { return a - b; } if (op == '*') { return a * b; } return Integer.MAX_VALUE; } // This function evaluates all possible values and // returns a list of evaluated values. static Vector<Integer> evaluateAll(String expr, int low, int high) { // To store result (all possible evaluations of // given expression 'expr') Vector<Integer> res = new Vector<Integer>(); // If there is only one character, it must // be a digit (or operand), return it. if (low == high) { res.add(expr.charAt(low) - '0'); return res; } // If there are only three characters, middle // one must be operator and corner ones must be // operand if (low == (high - 2)) { int num = eval(expr.charAt(low) - '0', expr.charAt(low + 1), expr.charAt(low + 2) - '0'); res.add(num); return res; } // every i refers to an operator for (int i = low + 1; i <= high; i += 2) { // l refers to all the possible values // in the left of operator 'expr[i]' Vector<Integer> l = evaluateAll(expr, low, i - 1); // r refers to all the possible values // in the right of operator 'expr[i]' Vector<Integer> r = evaluateAll(expr, i + 1, high); // Take above evaluated all possible // values in left side of 'i' for (int s1 = 0; s1 < l.size(); s1++) { // Take above evaluated all possible // values in right side of 'i' for (int s2 = 0; s2 < r.size(); s2++) { // Calculate value for every pair // and add the value to result. int val = eval(l.get(s1), expr.charAt(i), r.get(s2)); res.add(val); } } } return res; } // Driver program public static void main(String[] args) { String expr = "1*2+3*4"; int len = expr.length(); Vector<Integer> ans = evaluateAll(expr, 0, len - 1); for (int i = 0; i < ans.size(); i++) { System.out.println(ans.get(i)); } }} // This code has been contributed by 29AjayKumar
# Python3 program to evaluate all# possible values of a expression # Utility function to evaluate a simple# expression with one operator only.def eval(a, op, b): if op == '+': return a + b if op == '-': return a - b if op == '*': return a * b # This function evaluates all possible values# and returns a list of evaluated values.def evaluateAll(expr, low, high): # To store result (all possible # evaluations of given expression 'expr') res = [] # If there is only one character, # it must be a digit (or operand), # return it. if low == high: res.append(int(expr[low])) return res # If there are only three characters, # middle one must be operator and # corner ones must be operand if low == (high - 2): num = eval(int(expr[low]), expr[low + 1], int(expr[low + 2])) res.append(num) return res # every i refers to an operator for i in range(low + 1, high + 1, 2): # l refers to all the possible values # in the left of operator 'expr[i]' l = evaluateAll(expr, low, i - 1) # r refers to all the possible values # in the right of operator 'expr[i]' r = evaluateAll(expr, i + 1, high) # Take above evaluated all possible # values in left side of 'i' for s1 in range(0, len(l)): # Take above evaluated all possible # values in right side of 'i' for s2 in range(0, len(r)): # Calculate value for every pair # and add the value to result. val = eval(l[s1], expr[i], r[s2]) res.append(val) return res # Driver Codeif __name__ == "__main__": expr = "1*2+3*4" length = len(expr) ans = evaluateAll(expr, 0, length - 1) for i in range(0, len(ans)): print(ans[i]) # This code is contributed by Rituraj Jain
// C# program to evaluate all possible// values of a expressionusing System;using System.Collections.Generic; class GFG{ // Utility function to evaluate a simple expression // with one operator only. static int eval(int a, char op, int b) { if (op == '+') { return a + b; } if (op == '-') { return a - b; } if (op == '*') { return a * b; } return int.MaxValue; } // This function evaluates all possible values and // returns a list of evaluated values. static List<int> evaluateAll(String expr, int low, int high) { // To store result (all possible evaluations of // given expression 'expr') List<int> res = new List<int> (); // If there is only one character, it must // be a digit (or operand), return it. if (low == high) { res.Add(expr[low] - '0'); return res; } // If there are only three characters, middle // one must be operator and corner ones must be // operand if (low == (high - 2)) { int num = eval(expr[low] - '0', expr[low + 1], expr[low + 2] - '0'); res.Add(num); return res; } // every i refers to an operator for (int i = low + 1; i <= high; i += 2) { // l refers to all the possible values // in the left of operator 'expr[i]' List<int> l = evaluateAll(expr, low, i - 1); // r refers to all the possible values // in the right of operator 'expr[i]' List<int> r = evaluateAll(expr, i + 1, high); // Take above evaluated all possible // values in left side of 'i' for (int s1 = 0; s1 < l.Count; s1++) { // Take above evaluated all possible // values in right side of 'i' for (int s2 = 0; s2 < r.Count; s2++) { // Calculate value for every pair // and add the value to result. int val = eval(l[s1], expr[i], r[s2]); res.Add(val); } } } return res; } // Driver code public static void Main() { String expr = "1*2+3*4"; int len = expr.Length; List<int> ans = evaluateAll(expr, 0, len - 1); for (int i = 0; i < ans.Count; i++) { Console.WriteLine(ans[i]); } }} /* This code contributed by PrinciRaj1992 */
<?php// PHP program to evaluate all possible// values of a expression // Utility function to eval1uate a simple// expression with one operator only.function eval1($a, $op, $b){ if ($op == '+') return $a + $b; if ($op == '-') return $a - $b; if ($op == '*') return $a * $b;} // This function eval1uates all possible values// and returns a list of eval1uated values.function eval1uateAll($expr, $low, $high){ // To store result (all possible // evaluations of given expression 'expr') $res = array(); // If there is only one character, it must // be a digit (or operand), return it. if ($low == $high) { array_push($res, ord($expr[$low]) - ord('0')); return $res; } // If there are only three characters, // middle one must be operator and // corner ones must be operand if ($low == ($high - 2)) { $num = eval1(ord($expr[$low]) - ord('0'), $expr[$low + 1], ord($expr[$low + 2]) - ord('0')); array_push($res, $num); return $res; } // every i refers to an operator for ($i = $low + 1; $i <= $high; $i += 2) { // l refers to all the possible values // in the left of operator 'expr[i]' $l = eval1uateAll($expr, $low, $i - 1); // r refers to all the possible values // in the right of operator 'expr[i]' $r = eval1uateAll($expr, $i + 1, $high); // Take above eval1uated all possible // values in left side of 'i' for ($s1 = 0; $s1 < count($l); $s1++) { // Take above eval1uated all possible // values in right side of 'i' for ($s2 = 0; $s2 < count($r); $s2++) { // Calculate value for every pair // and add the value to result. $val = eval1($l[$s1], $expr[$i], $r[$s2]); array_push($res, $val); } } } return $res;} // Driver Code$expr = "1*2+3*4";$len = strlen($expr);$ans = eval1uateAll($expr, 0, $len - 1); for ($i = 0; $i < count($ans); $i++) echo $ans[$i] . "\n"; // This code is contributed by mits?>
<script>// Javascript program to evaluate all possible// values of a expression // Utility function to evaluate a simple expression // with one operator only.function eval(a,op,b){ if (op == '+') { return a + b; } if (op == '-') { return a - b; } if (op == '*') { return a * b; } return Number.MAX_VALUE;} // This function evaluates all possible values and // returns a list of evaluated values.function evaluateAll(expr,low,high){ // To store result (all possible evaluations of // given expression 'expr') let res = []; // If there is only one character, it must // be a digit (or operand), return it. if (low == high) { res.push(expr[low] - '0'); return res; } // If there are only three characters, middle // one must be operator and corner ones must be // operand if (low == (high - 2)) { let num = eval(expr[low] - '0', expr[low + 1], expr[low + 2] - '0'); res.push(num); return res; } // every i refers to an operator for (let i = low + 1; i <= high; i += 2) { // l refers to all the possible values // in the left of operator 'expr[i]' let l = evaluateAll(expr, low, i - 1); // r refers to all the possible values // in the right of operator 'expr[i]' let r = evaluateAll(expr, i + 1, high); // Take above evaluated all possible // values in left side of 'i' for (let s1 = 0; s1 < l.length; s1++) { // Take above evaluated all possible // values in right side of 'i' for (let s2 = 0; s2 < r.length; s2++) { // Calculate value for every pair // and add the value to result. let val = eval(l[s1], expr[i], r[s2]); res.push(val); } } } return res;} // Driver programlet expr = "1*2+3*4";let len = expr.length;let ans = evaluateAll(expr, 0, len - 1);for (let i = 0; i < ans.length; i++){ document.write(ans[i]+"<br>");} // This code is contributed by rag2127</script>
Output:
14
20
14
20
20
Exercise: Extend the above solution so that it works for numbers with multiple digits also. For example, expressions like “100*30+20” (Hint: We can create an integer array to store all operands and operators of the given expression). This article is contributed by Ekta Goel. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
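One possible approach to the exercise (a sketch, not the article's solution; the function names are illustrative): first tokenize the expression into an operand list and an operator list, then run the same divide-and-conquer recursion over token indices instead of character indices.

```python
# Sketch: tokenize "100*30+20" into operands and operators, then reuse the
# same recursion on the token lists instead of single characters.
def tokenize(expr):
    operands, operators = [], []
    num = 0
    for ch in expr:
        if ch.isdigit():
            num = num * 10 + int(ch)  # accumulate a multi-digit operand
        else:
            operands.append(num)
            operators.append(ch)
            num = 0
    operands.append(num)
    return operands, operators

def evaluate_all(operands, operators):
    # Base case: a single operand has only one possible value
    if len(operands) == 1:
        return [operands[0]]
    res = []
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    # Split at every operator, exactly as in the character-based version
    for i, op in enumerate(operators):
        left = evaluate_all(operands[:i + 1], operators[:i])
        right = evaluate_all(operands[i + 1:], operators[i + 1:])
        res += [ops[op](a, b) for a in left for b in right]
    return res

print(evaluate_all(*tokenize("100*30+20")))  # [5000, 3020]
```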
Mithun Kumar
29AjayKumar
princiraj1992
rituraj_jain
rag2127
Mathematical
Program to print prime numbers from 1 to N.
Modular multiplicative inverse
Fizz Buzz Implementation
Check if a number is Palindrome
Segment Tree | Set 1 (Sum of given range)
Generate all permutation of a set in Python
How to check if a given point lies inside or outside a polygon?
Merge two sorted arrays with O(1) extra space
Singular Value Decomposition (SVD)
Program to multiply two matrices
|
[
{
"code": null,
"e": 26073,
"s": 26045,
"text": "\n10 Jun, 2021"
},
{
"code": null,
"e": 26313,
"s": 26073,
"text": "Given an arithmetic expression, find all possible outcomes of this expression. Different outcomes are evaluated by putting brackets at different places.We may assume that the numbers are single digit numbers in given expression.Examples: "
},
{
"code": null,
"e": 26536,
"s": 26313,
"text": "Input: 1+3*2\nOutput: 8 7\nExplanation\n(1 + 3)*2 = 80\n(1 + (3 * 2)) = 70\n\nInput: 1*2+3*4\nOutput: 14 20 14 20 20\n (1*(2+(3*4))) = 14\n (1*((2+3)*4)) = 20 \n ((1*2)+(3*4)) = 14\n ((1*(2+3))*4) = 20\n ((1*2)+3)*4) = 20"
},
{
"code": null,
"e": 26801,
"s": 26538,
"text": "The idea is to iterate through every operator in given expression. For every operator, evaluate all possible values of its left and right sides. Apply current operator on every pair of left side and right side values and add all evaluated values to the result. "
},
{
"code": null,
"e": 27324,
"s": 26801,
"text": "1) Initialize result 'res' as empty.\n2) Do following for every operator 'x'.\n a) Recursively evaluate all possible values on left of 'x'.\n Let the list of values be 'l'. \n a) Recursively evaluate all possible values on right of 'x'.\n Let the list of values be 'r'.\n c) Loop through all values in list 'l' \n loop through all values in list 'r'\n Apply current operator 'x' on current items of \n 'l' and 'r' and add the evaluated value to 'res' \n3) Return 'res'."
},
{
"code": null,
"e": 27374,
"s": 27324,
"text": "Below is the implementation of above algorithm. "
},
{
"code": null,
"e": 27378,
"s": 27374,
"text": "C++"
},
{
"code": null,
"e": 27383,
"s": 27378,
"text": "Java"
},
{
"code": null,
"e": 27391,
"s": 27383,
"text": "Python3"
},
{
"code": null,
"e": 27394,
"s": 27391,
"text": "C#"
},
{
"code": null,
"e": 27398,
"s": 27394,
"text": "PHP"
},
{
"code": null,
"e": 27409,
"s": 27398,
"text": "Javascript"
},
{
"code": "// C++ program to evaluate all possible values of// a expression#include<bits/stdc++.h>using namespace std; // Utility function to evaluate a simple expression// with one operator only.int eval(int a, char op, int b){ if (op=='+') return a+b; if (op=='-') return a-b; if (op == '*') return a*b;} // This function evaluates all possible values and// returns a list of evaluated values.vector<int> evaluateAll(string expr, int low, int high){ // To store result (all possible evaluations of // given expression 'expr') vector<int> res; // If there is only one character, it must // be a digit (or operand), return it. if (low == high) { res.push_back(expr[low] - '0'); return res; } // If there are only three characters, middle // one must be operator and corner ones must be // operand if (low == (high-2)) { int num = eval(expr[low]-'0', expr[low+1], expr[low+2]-'0'); res.push_back(num); return res; } // every i refers to an operator for (int i=low+1; i<=high; i+=2) { // l refers to all the possible values // in the left of operator 'expr[i]' vector<int> l = evaluateAll(expr, low, i-1); // r refers to all the possible values // in the right of operator 'expr[i]' vector<int> r = evaluateAll(expr, i+1, high); // Take above evaluated all possible // values in left side of 'i' for (int s1=0; s1<l.size(); s1++) { // Take above evaluated all possible // values in right side of 'i' for (int s2=0; s2<r.size(); s2++) { // Calculate value for every pair // and add the value to result. int val = eval(l[s1], expr[i], r[s2]); res.push_back(val); } } } return res;} // Driver programint main(){ string expr = \"1*2+3*4\"; int len = expr.length(); vector<int> ans = evaluateAll(expr, 0, len-1); for (int i=0; i< ans.size(); i++) cout << ans[i] << endl; return 0;}",
"e": 29520,
"s": 27409,
"text": null
},
{
"code": "// Java program to evaluate all possible// values of a expressionimport java.util.*; class GFG{ // Utility function to evaluate a simple expression // with one operator only. static int eval(int a, char op, int b) { if (op == '+') { return a + b; } if (op == '-') { return a - b; } if (op == '*') { return a * b; } return Integer.MAX_VALUE; } // This function evaluates all possible values and // returns a list of evaluated values. static Vector<Integer> evaluateAll(String expr, int low, int high) { // To store result (all possible evaluations of // given expression 'expr') Vector<Integer> res = new Vector<Integer>(); // If there is only one character, it must // be a digit (or operand), return it. if (low == high) { res.add(expr.charAt(low) - '0'); return res; } // If there are only three characters, middle // one must be operator and corner ones must be // operand if (low == (high - 2)) { int num = eval(expr.charAt(low) - '0', expr.charAt(low + 1), expr.charAt(low + 2) - '0'); res.add(num); return res; } // every i refers to an operator for (int i = low + 1; i <= high; i += 2) { // l refers to all the possible values // in the left of operator 'expr[i]' Vector<Integer> l = evaluateAll(expr, low, i - 1); // r refers to all the possible values // in the right of operator 'expr[i]' Vector<Integer> r = evaluateAll(expr, i + 1, high); // Take above evaluated all possible // values in left side of 'i' for (int s1 = 0; s1 < l.size(); s1++) { // Take above evaluated all possible // values in right side of 'i' for (int s2 = 0; s2 < r.size(); s2++) { // Calculate value for every pair // and add the value to result. 
int val = eval(l.get(s1), expr.charAt(i), r.get(s2)); res.add(val); } } } return res; } // Driver program public static void main(String[] args) { String expr = \"1*2+3*4\"; int len = expr.length(); Vector<Integer> ans = evaluateAll(expr, 0, len - 1); for (int i = 0; i < ans.size(); i++) { System.out.println(ans.get(i)); } }} // This code has been contributed by 29AjayKumar",
"e": 32328,
"s": 29520,
"text": null
},
{
"code": "# Python3 program to evaluate all# possible values of a expression # Utility function to evaluate a simple# expression with one operator only.def eval(a, op, b): if op == '+': return a + b if op == '-': return a - b if op == '*': return a * b # This function evaluates all possible values# and returns a list of evaluated values.def evaluateAll(expr, low, high): # To store result (all possible # evaluations of given expression 'expr') res = [] # If there is only one character, # it must be a digit (or operand), # return it. if low == high: res.append(int(expr[low])) return res # If there are only three characters, # middle one must be operator and # corner ones must be operand if low == (high - 2): num = eval(int(expr[low]), expr[low + 1], int(expr[low + 2])) res.append(num) return res # every i refers to an operator for i in range(low + 1, high + 1, 2): # l refers to all the possible values # in the left of operator 'expr[i]' l = evaluateAll(expr, low, i - 1) # r refers to all the possible values # in the right of operator 'expr[i]' r = evaluateAll(expr, i + 1, high) # Take above evaluated all possible # values in left side of 'i' for s1 in range(0, len(l)): # Take above evaluated all possible # values in right side of 'i' for s2 in range(0, len(r)): # Calculate value for every pair # and add the value to result. val = eval(l[s1], expr[i], r[s2]) res.append(val) return res # Driver Codeif __name__ == \"__main__\": expr = \"1*2+3*4\" length = len(expr) ans = evaluateAll(expr, 0, length - 1) for i in range(0, len(ans)): print(ans[i]) # This code is contributed by Rituraj Jain",
"e": 34278,
"s": 32328,
"text": null
},
{
"code": "// C# program to evaluate all possible// values of a expressionusing System;using System.Collections.Generic; class GFG{ // Utility function to evaluate a simple expression // with one operator only. static int eval(int a, char op, int b) { if (op == '+') { return a + b; } if (op == '-') { return a - b; } if (op == '*') { return a * b; } return int.MaxValue; } // This function evaluates all possible values and // returns a list of evaluated values. static List<int> evaluateAll(String expr, int low, int high) { // To store result (all possible evaluations of // given expression 'expr') List<int> res = new List<int> (); // If there is only one character, it must // be a digit (or operand), return it. if (low == high) { res.Add(expr[low] - '0'); return res; } // If there are only three characters, middle // one must be operator and corner ones must be // operand if (low == (high - 2)) { int num = eval(expr[low] - '0', expr[low + 1], expr[low + 2] - '0'); res.Add(num); return res; } // every i refers to an operator for (int i = low + 1; i <= high; i += 2) { // l refers to all the possible values // in the left of operator 'expr[i]' List<int> l = evaluateAll(expr, low, i - 1); // r refers to all the possible values // in the right of operator 'expr[i]' List<int> r = evaluateAll(expr, i + 1, high); // Take above evaluated all possible // values in left side of 'i' for (int s1 = 0; s1 < l.Count; s1++) { // Take above evaluated all possible // values in right side of 'i' for (int s2 = 0; s2 < r.Count; s2++) { // Calculate value for every pair // and add the value to result. int val = eval(l[s1], expr[i], r[s2]); res.Add(val); } } } return res; } // Driver code public static void Main() { String expr = \"1*2+3*4\"; int len = expr.Length; List<int> ans = evaluateAll(expr, 0, len - 1); for (int i = 0; i < ans.Count; i++) { Console.WriteLine(ans[i]); } }} /* This code contributed by PrinciRaj1992 */",
"e": 36997,
"s": 34278,
"text": null
},
{
"code": "<?php// PHP program to evaluate all possible// values of a expression // Utility function to eval1uate a simple// expression with one operator only.function eval1($a, $op, $b){ if ($op == '+') return $a + $b; if ($op == '-') return $a - $b; if ($op == '*') return $a * $b;} // This function eval1uates all possible values// and returns a list of eval1uated values.function eval1uateAll($expr, $low, $high){ // To store result (all possible // evaluations of given expression 'expr') $res = array(); // If there is only one character, it must // be a digit (or operand), return it. if ($low == $high) { array_push($res, ord($expr[$low]) - ord('0')); return $res; } // If there are only three characters, // middle one must be operator and // corner ones must be operand if ($low == ($high - 2)) { $num = eval1(ord($expr[$low]) - ord('0'), $expr[$low + 1], ord($expr[$low + 2]) - ord('0')); array_push($res, $num); return $res; } // every i refers to an operator for ($i = $low + 1; $i <= $high; $i += 2) { // l refers to all the possible values // in the left of operator 'expr[i]' $l = eval1uateAll($expr, $low, $i - 1); // r refers to all the possible values // in the right of operator 'expr[i]' $r = eval1uateAll($expr, $i + 1, $high); // Take above eval1uated all possible // values in left side of 'i' for ($s1 = 0; $s1 < count($l); $s1++) { // Take above eval1uated all possible // values in right side of 'i' for ($s2 = 0; $s2 < count($r); $s2++) { // Calculate value for every pair // and add the value to result. $val = eval1($l[$s1], $expr[$i], $r[$s2]); array_push($res, $val); } } } return $res;} // Driver Code$expr = \"1*2+3*4\";$len = strlen($expr);$ans = eval1uateAll($expr, 0, $len - 1); for ($i = 0; $i < count($ans); $i++) echo $ans[$i] . \"\\n\"; // This code is contributed by mits?>",
"e": 39211,
"s": 36997,
"text": null
},
{
"code": "<script>// Javascript program to evaluate all possible// values of a expression // Utility function to evaluate a simple expression // with one operator only.function eval(a,op,b){ if (op == '+') { return a + b; } if (op == '-') { return a - b; } if (op == '*') { return a * b; } return Number.MAX_VALUE;} // This function evaluates all possible values and // returns a list of evaluated values.function evaluateAll(expr,low,high){ // To store result (all possible evaluations of // given expression 'expr') let res = []; // If there is only one character, it must // be a digit (or operand), return it. if (low == high) { res.push(expr[low] - '0'); return res; } // If there are only three characters, middle // one must be operator and corner ones must be // operand if (low == (high - 2)) { let num = eval(expr[low] - '0', expr[low + 1], expr[low + 2] - '0'); res.push(num); return res; } // every i refers to an operator for (let i = low + 1; i <= high; i += 2) { // l refers to all the possible values // in the left of operator 'expr[i]' let l = evaluateAll(expr, low, i - 1); // r refers to all the possible values // in the right of operator 'expr[i]' let r = evaluateAll(expr, i + 1, high); // Take above evaluated all possible // values in left side of 'i' for (let s1 = 0; s1 < l.length; s1++) { // Take above evaluated all possible // values in right side of 'i' for (let s2 = 0; s2 < r.length; s2++) { // Calculate value for every pair // and add the value to result. let val = eval(l[s1], expr[i], r[s2]); res.push(val); } } } return res;} // Driver programlet expr = \"1*2+3*4\";let len = expr.length;let ans = evaluateAll(expr, 0, len - 1);for (let i = 0; i < ans.length; i++){ document.write(ans[i]+\"<br>\");} // This code is contributed by rag2127</script>",
"e": 41671,
"s": 39211,
"text": null
},
{
"code": null,
"e": 41679,
"s": 41671,
"text": "Output:"
},
{
"code": null,
"e": 41694,
"s": 41679,
"text": "14\n20\n14\n20\n20"
},
{
"code": null,
"e": 42091,
"s": 41694,
"text": "Exercise: Extend the above solution so that it works for numbers with multiple digits also. For example, expressions like “100*30+20” (Hint: We can create an integer array to store all operands and operators of given expression).This article is contributed by Ekta Goel. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 42104,
"s": 42091,
"text": "Mithun Kumar"
},
{
"code": null,
"e": 42116,
"s": 42104,
"text": "29AjayKumar"
},
{
"code": null,
"e": 42130,
"s": 42116,
"text": "princiraj1992"
},
{
"code": null,
"e": 42143,
"s": 42130,
"text": "rituraj_jain"
},
{
"code": null,
"e": 42151,
"s": 42143,
"text": "rag2127"
},
{
"code": null,
"e": 42164,
"s": 42151,
"text": "Mathematical"
},
{
"code": null,
"e": 42319,
"s": 42275,
"text": "Program to print prime numbers from 1 to N."
},
{
"code": null,
"e": 42350,
"s": 42319,
"text": "Modular multiplicative inverse"
},
{
"code": null,
"e": 42375,
"s": 42350,
"text": "Fizz Buzz Implementation"
},
{
"code": null,
"e": 42407,
"s": 42375,
"text": "Check if a number is Palindrome"
},
{
"code": null,
"e": 42449,
"s": 42407,
"text": "Segment Tree | Set 1 (Sum of given range)"
},
{
"code": null,
"e": 42493,
"s": 42449,
"text": "Generate all permutation of a set in Python"
},
{
"code": null,
"e": 42557,
"s": 42493,
"text": "How to check if a given point lies inside or outside a polygon?"
},
{
"code": null,
"e": 42603,
"s": 42557,
"text": "Merge two sorted arrays with O(1) extra space"
},
{
"code": null,
"e": 42638,
"s": 42603,
"text": "Singular Value Decomposition (SVD)"
}
] |
Image Processing in Java - Colored Image to Sepia Image Conversion - GeeksforGeeks
|
14 Nov, 2021
Prerequisites:
Image Processing in Java – Read and Write
Image Processing In Java – Get and Set Pixels
Image Processing in Java – Colored Image to Grayscale Image Conversion
Image Processing in Java – Colored Image to Negative Image Conversion
Image Processing in Java – Colored to Red Green Blue Image Conversion
In this set, we will be converting a colored image to a sepia image. In a sepia image, the Alpha component will be the same as in the original image (since the alpha component denotes the transparency), but the RGB values will be changed, calculated by the following formula.
newRed = 0.393*R + 0.769*G + 0.189*B
newGreen = 0.349*R + 0.686*G + 0.168*B
newBlue = 0.272*R + 0.534*G + 0.131*B
If any of these output values is greater than 255, simply set it to 255. These specific values are the values for a sepia tone that are recommended by Microsoft.
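To make the arithmetic concrete, here is a small Python sketch of the per-pixel computation (the sample pixel values are arbitrary):

```python
def sepia_pixel(r, g, b):
    """Apply the Microsoft-recommended sepia weights to one RGB pixel."""
    new_red = int(0.393 * r + 0.769 * g + 0.189 * b)
    new_green = int(0.349 * r + 0.686 * g + 0.168 * b)
    new_blue = int(0.272 * r + 0.534 * g + 0.131 * b)
    # Clamp each channel to the valid 0-255 range
    return tuple(min(c, 255) for c in (new_red, new_green, new_blue))

print(sepia_pixel(100, 150, 200))  # (192, 171, 133)
print(sepia_pixel(255, 255, 255))  # (255, 255, 238) - red and green clamp to 255
```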
Get the RGB value of the pixel.Calculate newRed, newGreen, newBlue using the above formula (Take the integer value)Set the new RGB value of the pixel as per the following condition: If newRed > 255 then R = 255 else R = newRedIf newGreen > 255 then G = 255 else G = newGreenIf newBlue > 255 then B = 255 else B = newBlueReplace the value of R, G, and B with the new value that we calculated for the pixel.Repeat Step 1 to Step 4 for each pixel of the image.
Get the RGB value of the pixel.
Calculate newRed, newGreen, newBlue using the above formula (take the integer value)
Set the new RGB value of the pixel as per the following conditions:
If newRed > 255 then R = 255 else R = newRed
If newGreen > 255 then G = 255 else G = newGreen
If newBlue > 255 then B = 255 else B = newBlue
Replace the value of R, G, and B with the new value that we calculated for the pixel.
Repeat Step 1 to Step 4 for each pixel of the image.
Java
// Java program to demonstrate
// colored to sepia conversion

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class Sepia {

    public static void main(String args[]) throws IOException
    {
        BufferedImage img = null;
        File f = null;

        // read image
        try {
            f = new File(
                "C:/Users/hp/Desktop/Image Processing in Java/gfg-logo.png");
            img = ImageIO.read(f);
        }
        catch (IOException e) {
            System.out.println(e);
        }

        // get width and height of the image
        int width = img.getWidth();
        int height = img.getHeight();

        // convert to sepia
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {

                int p = img.getRGB(x, y);

                int a = (p >> 24) & 0xff;
                int R = (p >> 16) & 0xff;
                int G = (p >> 8) & 0xff;
                int B = p & 0xff;

                // calculate newRed, newGreen, newBlue
                int newRed
                    = (int)(0.393 * R + 0.769 * G + 0.189 * B);
                int newGreen
                    = (int)(0.349 * R + 0.686 * G + 0.168 * B);
                int newBlue
                    = (int)(0.272 * R + 0.534 * G + 0.131 * B);

                // check condition
                if (newRed > 255)
                    R = 255;
                else
                    R = newRed;

                if (newGreen > 255)
                    G = 255;
                else
                    G = newGreen;

                if (newBlue > 255)
                    B = 255;
                else
                    B = newBlue;

                // set new RGB value
                p = (a << 24) | (R << 16) | (G << 8) | B;

                img.setRGB(x, y, p);
            }
        }

        // write image
        try {
            f = new File(
                "C:/Users/hp/Desktop/Image Processing in Java/GFG.png");
            ImageIO.write(img, "png", f);
        }
        catch (IOException e) {
            System.out.println(e);
        }
    }
}
Note: This code will not run on an online IDE as it needs an image on a disk.
This article is contributed by Pratik Agarwal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
PHP | utf8_encode() Function - GeeksforGeeks
30 Sep, 2019
The utf8_encode() function is an inbuilt function in PHP which is used to encode an ISO-8859-1 string to UTF-8. Unicode was developed to describe all possible characters of all languages, assigning one unique number to each symbol/character. Since it is not always possible to transfer raw Unicode characters between computers reliably, UTF-8 is used as the encoding for transferring Unicode text from one computer to another.
Syntax:
string utf8_encode( string $string )
Parameters: This function accepts a single parameter $string which is required. It specifies the ISO-8859-1 string which needs to be encoded.
Return Value: This function returns a string representing the encoded string on success or False on failure.
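Internally, the conversion just treats each ISO-8859-1 byte as the Unicode code point with the same number and writes it out as UTF-8. A Python sketch of the equivalent operation (an illustration only; the PHP examples below use utf8_encode() directly):

```python
# Hypothetical Python equivalent of PHP's utf8_encode(): decode the input
# bytes as ISO-8859-1 (Latin-1), then re-encode the text as UTF-8.
def utf8_encode(latin1_bytes: bytes) -> bytes:
    return latin1_bytes.decode("iso-8859-1").encode("utf-8")

print(utf8_encode(b"\x63"))  # b'c'        -- ASCII bytes pass through unchanged
print(utf8_encode(b"\xe9"))  # b'\xc3\xa9' -- Latin-1 'e-acute' becomes two bytes
```

This also shows why the function is lossless for ASCII input: code points below 128 encode to the same single byte in both ISO-8859-1 and UTF-8.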
Note: This function is available for PHP 4.0.0 and newer version.
Example 1:
<?php
// String to encode
$string_to_encode = "\x63";

// Encoding string
echo utf8_encode($string_to_encode);
?>
Output:
c
Example 2:
<?php
// Creating an array of ISO-8859-1 characters
$text = array(
    "\x20", "\x21", "\x22", "\x23", "\x24", "\x25", "\x26", "\x27",
    "\x28", "\x29", "\x2a", "\x2b", "\x2c", "\x2d", "\x2e", "\x2f",
    "\x30", "\x31", "\x32", "\x33", "\x34", "\x35", "\x36", "\x37",
);

// Encoding all characters
foreach ($text as $index) {
    echo utf8_encode($index) . " ";
}
?>
Output:
! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7
Reference: https://www.php.net/manual/en/function.utf8-encode.php
What is the difference between NumPy and pandas?
Both pandas and NumPy are widely used, powerful open-source libraries in Python. These packages have their own applicability. A lot of pandas functionality is built on top of NumPy, and they are both part of the SciPy ecosystem.
NumPy stands for Numerical Python. NumPy is the core library for scientific computing. It can deal with multidimensional data, which is nothing but n-dimensional numerical data. A NumPy array is a powerful N-dimensional array object which is in the form of rows and columns.
Many NumPy operations are implemented in the C language. It is fast and it requires less memory than pandas.
Numpy allows you to do every numerical task like linear algebra and many other advanced linear algebra tasks. These include tasks like inverting a matrix, Singular value decomposition, determinant estimation, etc.
Let’s take an example and see how we gonna do mathematical operations.
import numpy as np
arr = np.array([[2,12,3], [10,5,7],[9,8,11]])
print(arr)
arr_inv = np.linalg.inv(arr)
print(arr_inv)
The first line of the above block imports the NumPy module and np is representing the alias name for the NumPy module. The variable arr is a 2-Dimensional array and it has 3 rows and 3 columns. After that, we are calculating the inverse matrix of our array arr by using the inv() function available in the numpy.linalg (linear algebra) module.
[[ 2 12 3]
[10 5 7]
[ 9 8 11]]
[[ 0.0021692 0.23427332 -0.14967462]
[ 0.10195228 0.01084599 -0.03470716]
[-0.07592191 -0.19956616 0.23861171]]
This output block has two arrays first one is representing the array of values from the arr variable and the second one is an inverted matrix of arr (variable arr_inv).
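A quick sanity check (not part of the original snippet) is to multiply the matrix by its inverse; the product should be the identity matrix up to floating-point error:

```python
import numpy as np

arr = np.array([[2, 12, 3], [10, 5, 7], [9, 8, 11]])
arr_inv = np.linalg.inv(arr)

# arr @ arr_inv should equal the 3x3 identity, up to rounding error
product = arr @ arr_inv
print(np.allclose(product, np.eye(3)))  # True
```

np.allclose is used rather than exact equality because the off-diagonal entries of the product are tiny floating-point residues, not exact zeros.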
Pandas provides high-performance data manipulation in Python and it requires NumPy for operating, as it is built on top of NumPy. The name Pandas is derived from the term "Panel Data", an econometrics term for multidimensional data sets.
Pandas allows you to do most of the things that you can do with a spreadsheet with Python code, and NumPy majorly works with numerical data whereas Pandas works with tabular data. This tabular data can come in many forms, such as a CSV file or SQL data.
The Pandas provides powerful tools like DataFrame and Series that are mainly used for analyzing the data.
Let’s take an example and see how pandas will handle tabular data.
import pandas as pd

data = pd.read_csv('titanic.csv')
print(data.head())
Pandas provides a number of functions to read any type of data into a pandas DataFrame or Series, in this above example we read the titanic data set as pandas dataframe. And displayed the output using the head() method.
PassengerId Survived Pclass \
0 1 0 3
1 2 1 1
2 3 1 3
3 4 1 1
4 5 0 3
Name Gender Age SibSp \
0 Braund, Mr. Owen Harris male 22.0 1
1 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 1
2 Heikkinen, Miss. Laina female 26.0 0
3 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1
4 Allen, Mr. William Henry male 35.0 0
Parch Ticket Fare Cabin Embarked
0 0 A/5 21171 7.2500 NaN S
1 0 PC 17599 71.2833 C85 C
2 0 STON/O2. 3101282 7.9250 NaN S
3 0 113803 53.1000 C123 S
4 0 373450 8.0500 NaN S
As we can see, a pandas data frame can store any type of data whereas NumPy is only dealing with a numerical value.
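That contrast can be demonstrated with a small hand-built table (a hypothetical example, not the Titanic data set): pandas tracks a separate dtype per column, while converting the same rows to a NumPy array forces everything into one common dtype:

```python
import pandas as pd

# Hypothetical mini data set mixing strings, floats and integers
df = pd.DataFrame({
    "Name": ["Braund", "Cumings"],
    "Age": [22.0, 38.0],
    "Survived": [0, 1],
})

print(df.dtypes)            # one dtype per column: object, float64, int64
print(df.to_numpy().dtype)  # object -- NumPy upcasts everything to one dtype
```

Because the "Name" column holds strings, the single NumPy array that can represent the whole table has to fall back to the generic object dtype, losing the per-column numeric types that pandas preserves.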
Empowering Spark with MLflow. This post aims to cover our initial... | by Albert Franzi | Towards Data Science
This post aims to cover our initial experience using MLflow.
We will start discovering MLflow with its own tracking server by logging all the exploratory iterations. Then, we will show our experience linking Spark with MLflow using UDFs.
At Alpha Health we leverage the power of Machine Learning and AI to empower people in taking control of their health and wellbeing. Machine Learning models are hence at the core of the data products we are developing, and this is why MLFLow, an open-source platform covering all aspects of the ML lifecycle drew our attention.
The main goal of MLflow is to provide an extra layer on top of ML, allowing Data Scientists to work with virtually any Machine Learning library (h2o, keras, mleap, pytorch, sklearn and tensorflow) while bringing their work to another level.
MLflow provides three components:
Tracking — Recording and querying experiments: code, data, config and results. Very useful to keep track of your modelling progress.
Projects — Packaging format for reproducible runs on any platform (i.e Sagemaker).
Models — General format for sending models to diverse deployment tools.
MLflow (currently in alpha) is an open source platform to manage the ML lifecycle, including experimentation, reproducibility and deployment.
In order to use MLflow, we first need to set up the Python environment; we will use pyenv (to set up Python on a Mac). This will provide a virtual environment where we can install all the libraries required to run it.
pyenv install 3.7.0
pyenv global 3.7.0 # Use Python 3.7
mkvirtualenv mlflow # Create a Virtual Env with Python 3.7
workon mlflow
Install required libraries.
pip install mlflow==0.7.0 \
Cython==0.29 \
numpy==1.14.5 \
pandas==0.23.4 \
pyarrow==0.11.0
Note: We are using PyArrow to launch models as UDFs. PyArrow and Numpy versions required fixing since the latest ones had some conflicts between them.
MLflow Tracking allows us to log and query experiments using both Python and REST APIs. Besides, it’s possible to define where are we going to store the model artifacts (Localhost, Amazon S3, Azure Blob Storage, Google Cloud Storage or SFTP server). Since we are using AWS in Alpha Health, we will be experimenting with S3 as an artifact storage.
MLflow recommends using a persistent file-store. The file-store is where the server stores run and experiment metadata. So, when running a server, make sure that this points to a persistent file system location. Here, we are just using /tmp for the experiment.
Keep in mind that if we want to use the mlflow server to run old experiments, they must be present in the file-store. However, without them, we would still be able to use them in UDFs since only the model path would be required.
Note: Keep in mind the Tracking UI and the model client has to have access to the artifact location. That means, regardless of the Tracking UI being on an EC2 instance, if we run MLflow locally, our machine should have direct access to S3 to write the artifact models.
Once the Tracking Server is running, we can start training our models.
As an example we’ll use a modification of the wine example provided in the MLflow Sklearn example.
As mentioned before, MLflow allows logging the params, metrics and artifacts of our models, so we can keep track of how these evolve over our different iterations. This feature is quite useful, since we will be able to reproduce our best model by checking the Tracking Server or validate which code was executing the desired iteration since it logs (for free) the git hash commit.
The MLflow tracking server, launched using “mlflow server”, also hosts REST APIs for tracking runs and writing data to the local filesystem. You can specify a tracking server URI with the “MLFLOW_TRACKING_URI” environment variable and MLflow tracking APIs automatically communicate with the tracking server at that URI to create/get run information, log metrics, and so on.
Ref: Docs// Running a tracking server
In order to serve a model, all we need is a Tracking Server running (see Launching UI) and a model run ID.
To serve models with the MLflow serve feature we need access to the Tracking UI, so it can retrieve the model information by just specifying the --run_id.
Once the model is being served by the Tracking Server, we can query our new model endpoint.
Although, it’s quite powerful to be able to serve models in real time by just training the model and using the serve feature (ref: mlflow // docs // models # local), applying models using Spark (batch or streaming) is even more powerful since it joins the distribution force.
Imagine having an offline training step and then applying the output model to all your data in an easier way. This is where Spark and MLflow shine together.
Ref: Get started PySpark — Jupyter
To show how we apply the MLflow models to our Spark DataFrames, we need to set up Jupyter notebooks with PySpark.
Start by downloading the latest stable Apache Spark (current 2.4.3).
cd ~/Downloads/
tar -xzf spark-2.4.3-bin-hadoop2.7.tgz
mv ~/Downloads/spark-2.4.3-bin-hadoop2.7 ~/
ln -s ~/spark-2.4.3-bin-hadoop2.7 ~/spark
Install PySpark and Jupyter in our virtualEnv
pip install pyspark jupyter
Set up environment variables
export SPARK_HOME=~/spark
export PATH=$SPARK_HOME/bin:$PATH
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --notebook-dir=${HOME}/Projects/notebooks"
By defining notebook-dir, we will be able to store & persist our notebooks in the desired folder.
Since we configured Jupyter as the PySpark driver, now we can launch Jupyter with a PySpark context attached to our notebooks.
As we have shown above, MLflow provides a feature to log our model artifacts to S3. So, once we have the chosen one model, we can import it as a UDF by using the mlflow.pyfunc module.
Up to this point, we have shown how to use PySpark with MLflow by running a wine quality prediction across all our wine dataset. But what happens when you need to use the Python MLflow modules from Scala Spark?
We also managed to test this by sharing the Spark Context between Scala and Python. That means we registered the MLflow UDF in Python, and then used it from Scala (yeap, not a nice solution, but at least it’s something 🍭).
For this example, we will add the Toree Kernel to our existing Jupyter.
As you can see in the attached notebook, the UDF is shared between Spark and PySpark. We hope this last part is helpful for those teams that love Scala and have to put ML models into production.
Although MLflow is currently in Alpha (🥁), it looks quite promising. Just with the power of being able to run multiple machine learning frameworks and using them from the same endpoint it brings all the recommendation systems to the next level.
Plus, MLflow brings closer Data Engineers and Data Scientists by having a common layer between them.
After this research about MLflow, we are sure we are going to go further with it and start using it in our Spark pipelines and in our recommendation systems.
It would be nice to have the file-store synchronized with a DB instead of using a FS. This should allow having multiple endpoints using the same file-store. Like having multiple Presto and Athena instances using the same Glue metastore.
Just to wrap up, thanks to all the community behind MLflow to make it possible and making our data lives more interesting.
If you are playing with MLflow, feel free to reach us and provide some feedback about how you are using it! Even more, if you are using MLflow in production.
},
{
"code": null,
"e": 3407,
"s": 3308,
"text": "As an example we’ll use a modification of the wine example provided in the MLflow Sklearn example."
},
{
"code": null,
"e": 3788,
"s": 3407,
"text": "As mentioned before, MLflow allows logging the params, metrics and artifacts of our models, so we can keep track of how these evolve over our different iterations. This feature is quite useful, since we will be able to reproduce our best model by checking the Tracking Server or validate which code was executing the desired iteration since it logs (for free) the git hash commit."
},
{
"code": null,
"e": 4162,
"s": 3788,
"text": "The MLflow tracking server ,launched using “mlflow server”, also hosts REST APIs for tracking runs and writing data to the local filesystem. You can specify a tracking server URI with the “MLFLOW_TRACKING_URI” environment variable and MLflow tracking APIs automatically communicate with the tracking server at that URI to create/get run information, log metrics, and so on."
},
{
"code": null,
"e": 4200,
"s": 4162,
"text": "Ref: Docs// Running a tracking server"
},
{
"code": null,
"e": 4307,
"s": 4200,
"text": "In order to serve a model, all we need is a Tracking Server running (see Launching UI) and a model run ID."
},
{
"code": null,
"e": 4462,
"s": 4307,
"text": "To serve models with the MLflow serve feature we need access to the Tracking UI, so it can retrieve the model information by just specifying the--run_id ."
},
{
"code": null,
"e": 4554,
"s": 4462,
"text": "Once the model is being served by the Tracking Server, we can query our new model endpoint."
},
{
"code": null,
"e": 4830,
"s": 4554,
"text": "Although, it’s quite powerful to be able to serve models in real time by just training the model and using the serve feature (ref: mlflow // docs // models # local), applying models using Spark (batch or streaming) is even more powerful since it joins the distribution force."
},
{
"code": null,
"e": 4998,
"s": 4830,
"text": "Imagine having an offline traning and then applying the output model to all your data in a easier way. This is where Spark and MLflow shine to the perfection together."
},
{
"code": null,
"e": 5033,
"s": 4998,
"text": "Ref: Get started PySpark — Jupyter"
},
{
"code": null,
"e": 5153,
"s": 5033,
"text": "To show how we are applying the MLflow models to our Spark Dataframes. We need to setup Jupyter notebooks with PySpark."
},
{
"code": null,
"e": 5222,
"s": 5153,
"text": "Start by downloading the latest stable Apache Spark (current 2.4.3)."
},
{
"code": null,
"e": 5365,
"s": 5222,
"text": "cd ~/Downloads/\ntar -xzf spark-2.4.3-bin-hadoop2.7.tgz\nmv ~/Downloads/spark-2.4.3-bin-hadoop2.7 ~/\nln -s ~/spark-2.4.3-bin-hadoop2.7 ~/spark̀\n"
},
{
"code": null,
"e": 5411,
"s": 5365,
"text": "Install PySpark and Jupyter in our virtualEnv"
},
{
"code": null,
"e": 5440,
"s": 5411,
"text": "pip install pyspark jupyter\n"
},
{
"code": null,
"e": 5468,
"s": 5440,
"text": "Setup Environmnet variables"
},
{
"code": null,
"e": 5653,
"s": 5468,
"text": "export SPARK_HOME=~/spark\nexport PATH=$SPARK_HOME/bin:$PATH\nexport PYSPARK_DRIVER_PYTHON=jupyter\nexport PYSPARK_DRIVER_PYTHON_OPTS=\"notebook --notebook-dir=${HOME}/Projects/notebooks\"\n"
},
{
"code": null,
"e": 5751,
"s": 5653,
"text": "By defining notebook-dir, we will be able to store & persist our notebooks in the desired folder."
},
{
"code": null,
"e": 5878,
"s": 5751,
"text": "Since we configured Jupyter as the PySpark driver, now we can launch Jupyter with a PySpark context attached to our notebooks."
},
{
"code": null,
"e": 6062,
"s": 5878,
"text": "As we have shown above, MLflow provides a feature to log our model artifacts to S3. So, once we have the chosen one model, we can import it as a UDF by using the mlflow.pyfunc module."
},
{
"code": null,
"e": 6273,
"s": 6062,
"text": "Up to this point, we have shown how to use PySpark with MLflow by running a wine quality prediction across all our wine dataset. But what happens when you need to use the Python MLflow modules from Scala Spark?"
},
{
"code": null,
"e": 6496,
"s": 6273,
"text": "We also managed to test this by sharing the Spark Context between Scala and Python. That means we registered the MLflow UDF in Python, and then used it from Scala (yeap, not a nice solution, but at least it’s something 🍭)."
},
{
"code": null,
"e": 6568,
"s": 6496,
"text": "For this example, we will add the Toree Kernel to our existing Jupyter."
},
{
"code": null,
"e": 6763,
"s": 6568,
"text": "As you can see in the attached notebook, the UDF is shared between Spark and PySpark. We hope this last part is helpful for those teams that love Scala and have to put ML models into production."
},
{
"code": null,
"e": 7008,
"s": 6763,
"text": "Although MLflow is currently in Alpha (🥁), it looks quite promising. Just with the power of being able to run multiple machine learning frameworks and using them from the same endpoint it brings all the recommendation systems to the next level."
},
{
"code": null,
"e": 7109,
"s": 7008,
"text": "Plus, MLflow brings closer Data Engineers and Data Scientists by having a common layer between them."
},
{
"code": null,
"e": 7267,
"s": 7109,
"text": "After this research about MLflow, we are sure we are going to go further with it and start using it in our Spark pipelines and in our recommendation systems."
},
{
"code": null,
"e": 7504,
"s": 7267,
"text": "It would be nice to have the file-store synchronized with a DB instead of using a FS. This should allow having multiple endpoints using the same file-store. Like having multiple Presto and Athena instances using the same Glue metastore."
},
{
"code": null,
"e": 7627,
"s": 7504,
"text": "Just to wrap up, thanks to all the community behind MLflow to make it possible and making our data lives more interesting."
}
] |
Closure properties of Regular languages - GeeksforGeeks
|
15 Jan, 2020
Closure properties on regular languages are defined as certain operations on regular language which are guaranteed to produce regular language. Closure refers to some operation on a language, resulting in a new language that is of same “type” as originally operated on i.e., regular.
Regular languages are closed under the following operations.
Let L and M be regular languages:
Kleene Closure: If R and S are regular expressions whose languages are L and M, then RS is a regular expression whose language is LM, and R* is a regular expression whose language is L*.
Positive Closure: R+ is a regular expression whose language is L+ (one or more repetitions of strings from L).
Complement: The complement of a language L (with respect to an alphabet Σ such that Σ* contains L) is Σ* – L. Since Σ* is surely regular, the complement of a regular language is always regular.
Reverse Operator: Given language L, L^R is the set of strings whose reversal is in L. Example: L = {0, 01, 100}; L^R = {0, 10, 001}. Proof: Let E be a regular expression for L. We show how to reverse E, to provide a regular expression for L^R.
Union: Let L and M be the languages of regular expressions R and S, respectively. Then R+S is a regular expression whose language is (L U M).
Intersection: Let L and M be the languages of regular expressions R and S, respectively; then L ∩ M is also regular. Proof: Let A and B be DFA's whose languages are L and M, respectively. Construct C, the product automaton of A and B, and make the final states of C be the pairs consisting of final states of both A and B.
Set Difference operator:If L and M are regular languages, then so is L – M = strings in L but not M.Proof: Let A and B be DFA’s whose languages are L and M, respectively. Construct C, the product automaton of A and B make the final states of C be the pairs, where A-state is final but B-state is not.
Homomorphism: A homomorphism on an alphabet is a function that gives a string for each symbol in that alphabet. Example: h(0) = ab; h(1) = ε. Extend to strings by h(a1...an) = h(a1)...h(an). Example: h(01010) = ababab. If L is a regular language, and h is a homomorphism on its alphabet, then h(L) = {h(w) | w is in L} is also a regular language. Proof: Let E be a regular expression for L. Apply h to each symbol in E. The language of the resulting expression is h(L).
Inverse Homomorphism: Let h be a homomorphism and L a language whose alphabet is the output language of h. Then h^-1(L) = {w | h(w) is in L} is also regular.
Note: A few more operations, such as the symmetric difference operator, the prefix operator, and substitution, also preserve regularity.
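The product-automaton construction used above for intersection (and, with a different acceptance rule, for set difference) can be sketched in a few lines. The DFA representation, state names, and example machines below are illustrative, not taken from the article:

```python
# Sketch: product-automaton construction for intersection / set difference.
# A DFA is (states, alphabet, delta, start, finals); delta maps (state, symbol) -> state.

def product_dfa(A, B, final_rule):
    """Build the product of DFAs A and B; final_rule decides acceptance per pair."""
    states_A, alpha, dA, sA, fA = A
    states_B, _, dB, sB, fB = B
    delta = {}
    for p in states_A:
        for q in states_B:
            for a in alpha:
                delta[((p, q), a)] = (dA[(p, a)], dB[(q, a)])
    finals = {(p, q) for p in states_A for q in states_B
              if final_rule(p in fA, q in fB)}
    return ([(p, q) for p in states_A for q in states_B], alpha, delta, (sA, sB), finals)

def accepts(dfa, w):
    _, _, delta, state, finals = dfa
    for a in w:
        state = delta[(state, a)]
    return state in finals

# A accepts strings with an even number of 0s; B accepts strings ending in 1.
A = ({"e", "o"}, {"0", "1"},
     {("e", "0"): "o", ("e", "1"): "e", ("o", "0"): "e", ("o", "1"): "o"}, "e", {"e"})
B = ({"x", "y"}, {"0", "1"},
     {("x", "0"): "x", ("x", "1"): "y", ("y", "0"): "x", ("y", "1"): "y"}, "x", {"y"})

# Intersection: a product state is final iff BOTH components are final.
C = product_dfa(A, B, lambda a, b: a and b)
print(accepts(C, "001"))   # True: even number of 0s and ends in 1
print(accepts(C, "01"))    # False: odd number of 0s
```

Changing the acceptance rule to `lambda a, b: a and not b` gives the set-difference automaton (A-state final, B-state not) described in the text.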
Decision Properties: Almost all of the following properties are decidable for finite automata.
(i) Emptiness
(ii) Non-emptiness
(iii) Finiteness
(iv) Infiniteness
(v) Membership
(vi) Equality
These are explained as following below.
(i) Emptiness and Non-emptiness:
Step-1: select the state that cannot be reached from the initial states & delete them (remove unreachable states).
Step-2: if the resulting machine contains at least one final state, then the finite automata accepts a non-empty language.
Step-3: if the resulting machine is free of final states, then the finite automata accepts the empty language.
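The emptiness test amounts to a reachability search: the language is non-empty iff some final state can be reached from the start state. A minimal sketch (the transition-table representation is an assumption, not from the article):

```python
from collections import deque

def is_empty(delta, start, finals):
    """Emptiness test: BFS from the start state; delta maps (state, symbol) -> state."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        if s in finals:
            return False          # a final state is reachable -> language non-empty
        for (p, _sym), q in delta.items():
            if p == s and q not in seen:
                seen.add(q)
                frontier.append(q)
    return True                   # no final state reachable -> language empty

delta = {("q0", "a"): "q1", ("q1", "b"): "q2"}
print(is_empty(delta, "q0", {"q2"}))      # False: q2 is reachable from q0
print(is_empty(delta, "q0", {"dead"}))    # True: 'dead' is never reached
```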
(ii) Finiteness and Infiniteness:
Step-1: select the state that cannot be reached from the initial state & delete them (remove unreachable states).
Step-2: select the state from which we cannot reach the final state & delete them (remove dead states).
Step-3: if the resulting machine contains loops or cycles, then the finite automata accepts an infinite language.
Step-4: if the resulting machine does not contain loops or cycles, then the finite automata accepts a finite language.
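These four steps translate directly into code: trim unreachable and dead states, then look for a cycle among what remains. The graph representation and example automata below are illustrative:

```python
def is_infinite(delta, start, finals):
    """Finiteness test: after trimming unreachable/dead states, infinite iff a cycle remains."""
    succ, pred = {}, {}
    for (p, _a), q in delta.items():
        succ.setdefault(p, set()).add(q)
        pred.setdefault(q, set()).add(p)

    def reach(sources, edges):
        seen, stack = set(sources), list(sources)
        while stack:
            s = stack.pop()
            for t in edges.get(s, ()):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    # Steps 1-2: keep only states reachable from start AND able to reach a final state.
    useful = reach({start}, succ) & reach(set(finals), pred)

    # Steps 3-4: depth-first search for a cycle restricted to the useful states.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {s: WHITE for s in useful}

    def dfs(s):
        color[s] = GRAY
        for t in succ.get(s, ()):
            if t in useful:
                if color[t] == GRAY or (color[t] == WHITE and dfs(t)):
                    return True   # back edge found -> cycle -> infinite language
        color[s] = BLACK
        return False

    return any(color[s] == WHITE and dfs(s) for s in useful)

loop = {("q0", "a"): "q0", ("q0", "b"): "q1"}   # accepts a*b : infinite
line = {("q0", "a"): "q1", ("q1", "b"): "q2"}   # accepts only "ab" : finite
print(is_infinite(loop, "q0", {"q1"}))   # True
print(is_infinite(line, "q0", {"q2"}))   # False
```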
(iii) Membership: Membership is a property to verify whether an arbitrary string is accepted by a finite automaton, i.e. whether it is a member of the language or not.
Let M be a finite automaton that accepts some strings over an alphabet, and let 'w' be any string defined over the alphabet. If there exists a transition path in M which starts at the initial state and ends in any one of the final states, then string 'w' is a member of M; otherwise 'w' is not a member of M.
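The membership test is a straight simulation of the DFA on w. A sketch with an illustrative transition table (not from the article):

```python
def is_member(delta, start, finals, w):
    """Membership test: run the DFA on w; delta maps (state, symbol) -> state."""
    state = start
    for sym in w:
        if (state, sym) not in delta:
            return False          # missing transition: reject
        state = delta[(state, sym)]
    return state in finals        # accept iff the run ends in a final state

# Example DFA accepting strings over {a, b} that end in "ab"
delta = {
    ("q0", "a"): "q1", ("q0", "b"): "q0",
    ("q1", "a"): "q1", ("q1", "b"): "q2",
    ("q2", "a"): "q1", ("q2", "b"): "q0",
}
print(is_member(delta, "q0", {"q2"}, "aab"))   # True
print(is_member(delta, "q0", {"q2"}, "aba"))   # False
```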
(iv) Equality: Two finite state automata M1 and M2 are said to be equal if and only if they accept the same language. Minimise the finite state automata; the minimal DFA will be unique.
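The text decides equality by minimizing both automata and comparing; an equivalent check that is easier to sketch explores the product automaton and rejects as soon as a reachable state pair disagrees on acceptance. Note this is a different technique than minimization, and the representation and example machines are illustrative:

```python
def equivalent(dA, sA, fA, dB, sB, fB, alphabet):
    """L(A) == L(B)?  Each d* maps (state, symbol) -> state; DFAs must be complete."""
    seen = {(sA, sB)}
    stack = [(sA, sB)]
    while stack:
        p, q = stack.pop()
        if (p in fA) != (q in fB):      # a reachable pair disagrees -> languages differ
            return False
        for a in alphabet:
            nxt = (dA[(p, a)], dB[(q, a)])
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return True

# Both DFAs accept strings with an odd number of 1s, under different state names.
d1 = {("e", "0"): "e", ("e", "1"): "o", ("o", "0"): "o", ("o", "1"): "e"}
d2 = {("x", "0"): "x", ("x", "1"): "y", ("y", "0"): "y", ("y", "1"): "x"}
print(equivalent(d1, "e", {"o"}, d2, "x", {"y"}, {"0", "1"}))   # True
print(equivalent(d1, "e", {"e"}, d2, "x", {"y"}, {"0", "1"}))   # False
```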
GATE CS
Theory of Computation & Automata
|
[
{
"code": null,
"e": 24395,
"s": 24367,
"text": "\n15 Jan, 2020"
},
{
"code": null,
"e": 24679,
"s": 24395,
"text": "Closure properties on regular languages are defined as certain operations on regular language which are guaranteed to produce regular language. Closure refers to some operation on a language, resulting in a new language that is of same “type” as originally operated on i.e., regular."
},
{
"code": null,
"e": 24736,
"s": 24679,
"text": "Regular languages are closed under following operations."
},
{
"code": null,
"e": 24776,
"s": 24736,
"text": "Consider L and M are regular languages:"
},
{
"code": null,
"e": 26970,
"s": 24776,
"text": "Kleen Closure:RS is a regular expression whose language is L, M. R* is a regular expression whose language is L*.Positive closure:RS is a regular expression whose language is L, M. is a regular expression whose language is .Complement:The complement of a language L (with respect to an alphabet such that contains L) is –L. Since is surely regular, the complement of a regular language is always regular.Reverse Operator:Given language L, is the set of strings whose reversal is in L.Example: L = {0, 01, 100}; ={0, 10, 001}.Proof: Let E be a regular expression for L. We show how to reverse E, to provide a regular expression for .Complement:The complement of a language L (with respect to an alphabet such that contains L) is –L. Since is surely regular, the complement of a regular language is always regular.Union:Let L and M be the languages of regular expressions R and S, respectively.Then R+S is a regular expression whose language is(L U M).Intersection:Let L and M be the languages of regular expressions R and S, respectively then it a regular expression whose language is L intersection M.proof: Let A and B be DFA’s whose languages are L and M, respectively. Construct C, the product automaton of A and B make the final states of C be the pairs consisting of final states of both A and B.Set Difference operator:If L and M are regular languages, then so is L – M = strings in L but not M.Proof: Let A and B be DFA’s whose languages are L and M, respectively. Construct C, the product automaton of A and B make the final states of C be the pairs, where A-state is final but B-state is not.Homomorphism:A homomorphism on an alphabet is a function that gives a string for each symbol in that alphabet. Example: h(0) = ab; h(1) = . Extend to strings by h(a1...an) =h(a1)...h(an). Example: h(01010) = ababab.If L is a regular language, and h is a homomorphism on its alphabet, then h(L)= {h(w) | w is in L} is also a regular language.Proof: Let E be a regular expression for L. 
Apply h to each symbol in E. Language of resulting R, E is h(L).Inverse Homomorphism : Let h be a homomorphism and L a language whose alphabet is the output language of h. (L) = {w | h(w) is in L}."
},
{
"code": null,
"e": 27084,
"s": 26970,
"text": "Kleen Closure:RS is a regular expression whose language is L, M. R* is a regular expression whose language is L*."
},
{
"code": null,
"e": 27197,
"s": 27084,
"text": "Positive closure:RS is a regular expression whose language is L, M. is a regular expression whose language is ."
},
{
"code": null,
"e": 27381,
"s": 27197,
"text": "Complement:The complement of a language L (with respect to an alphabet such that contains L) is –L. Since is surely regular, the complement of a regular language is always regular."
},
{
"code": null,
"e": 27612,
"s": 27381,
"text": "Reverse Operator:Given language L, is the set of strings whose reversal is in L.Example: L = {0, 01, 100}; ={0, 10, 001}.Proof: Let E be a regular expression for L. We show how to reverse E, to provide a regular expression for ."
},
{
"code": null,
"e": 27796,
"s": 27612,
"text": "Complement:The complement of a language L (with respect to an alphabet such that contains L) is –L. Since is surely regular, the complement of a regular language is always regular."
},
{
"code": null,
"e": 27935,
"s": 27796,
"text": "Union:Let L and M be the languages of regular expressions R and S, respectively.Then R+S is a regular expression whose language is(L U M)."
},
{
"code": null,
"e": 28287,
"s": 27935,
"text": "Intersection:Let L and M be the languages of regular expressions R and S, respectively then it a regular expression whose language is L intersection M.proof: Let A and B be DFA’s whose languages are L and M, respectively. Construct C, the product automaton of A and B make the final states of C be the pairs consisting of final states of both A and B."
},
{
"code": null,
"e": 28588,
"s": 28287,
"text": "Set Difference operator:If L and M are regular languages, then so is L – M = strings in L but not M.Proof: Let A and B be DFA’s whose languages are L and M, respectively. Construct C, the product automaton of A and B make the final states of C be the pairs, where A-state is final but B-state is not."
},
{
"code": null,
"e": 28789,
"s": 28588,
"text": "Proof: Let A and B be DFA’s whose languages are L and M, respectively. Construct C, the product automaton of A and B make the final states of C be the pairs, where A-state is final but B-state is not."
},
{
"code": null,
"e": 29239,
"s": 28789,
"text": "Homomorphism:A homomorphism on an alphabet is a function that gives a string for each symbol in that alphabet. Example: h(0) = ab; h(1) = . Extend to strings by h(a1...an) =h(a1)...h(an). Example: h(01010) = ababab.If L is a regular language, and h is a homomorphism on its alphabet, then h(L)= {h(w) | w is in L} is also a regular language.Proof: Let E be a regular expression for L. Apply h to each symbol in E. Language of resulting R, E is h(L)."
},
{
"code": null,
"e": 29474,
"s": 29239,
"text": "If L is a regular language, and h is a homomorphism on its alphabet, then h(L)= {h(w) | w is in L} is also a regular language.Proof: Let E be a regular expression for L. Apply h to each symbol in E. Language of resulting R, E is h(L)."
},
{
"code": null,
"e": 29609,
"s": 29474,
"text": "Inverse Homomorphism : Let h be a homomorphism and L a language whose alphabet is the output language of h. (L) = {w | h(w) is in L}."
},
{
"code": null,
"e": 29774,
"s": 29609,
"text": "Note: There are few more properties like symmetric difference operator, prefix operator, substitution which are closed under closure properties of regular language."
},
{
"code": null,
"e": 29870,
"s": 29774,
"text": "Decision Properties:Approximately all the properties are decidable in case of finite automaton."
},
{
"code": null,
"e": 29974,
"s": 29870,
"text": "(i) Emptiness \n(ii) Non-emptiness \n(iii) Finiteness \n(iv) Infiniteness \n(v) Membership \n(vi) Equality "
},
{
"code": null,
"e": 30014,
"s": 29974,
"text": "These are explained as following below."
},
{
"code": null,
"e": 30047,
"s": 30014,
"text": "(i) Emptiness and Non-emptiness:"
},
{
"code": null,
"e": 30162,
"s": 30047,
"text": "Step-1: select the state that cannot be reached from the initial states & delete them (remove unreachable states)."
},
{
"code": null,
"e": 30291,
"s": 30162,
"text": "Step 2: if the resulting machine contains at least one final states, so then the finite automata accepts the non-empty language."
},
{
"code": null,
"e": 31543,
"s": 30291,
"text": "Step 3: if the resulting machine is free from final state, then finite automata accepts empty language.(ii) Finiteness and Infiniteness:Step-1: select the state that cannot be reached from the initial state & delete them (remove unreachable states).Step-2: select the state from which we cannot reach the final state & delete them (remove dead states).Step-3: if the resulting machine contains loops or cycles then the finite automata accepts infinite language.Step-4: if the resulting machine do not contain loops or cycles then the finite automata accepts infinite language.(iii) Membership:Membership is a property to verify an arbitrary string is accepted by a finite automaton or not i.e. it is a member of the language or not.Let M is a finite automata that accepts some strings over an alphabet, and let ‘w’ be any string defined over the alphabet, if there exist a transition path in M, which starts at initial state & ends in anyone of the final state, then string ‘w’ is a member of M, otherwise ‘w’ is not a member of M.(iv) Equality:Two finite state automata M1 & M2 is said to be equal if and only if, they accept the same language. Minimise the finite state automata and the minimal DFA will be unique.My Personal Notes\narrow_drop_upSave"
},
{
"code": null,
"e": 31577,
"s": 31543,
"text": "(ii) Finiteness and Infiniteness:"
},
{
"code": null,
"e": 31691,
"s": 31577,
"text": "Step-1: select the state that cannot be reached from the initial state & delete them (remove unreachable states)."
},
{
"code": null,
"e": 31795,
"s": 31691,
"text": "Step-2: select the state from which we cannot reach the final state & delete them (remove dead states)."
},
{
"code": null,
"e": 31905,
"s": 31795,
"text": "Step-3: if the resulting machine contains loops or cycles then the finite automata accepts infinite language."
},
{
"code": null,
"e": 32021,
"s": 31905,
"text": "Step-4: if the resulting machine do not contain loops or cycles then the finite automata accepts infinite language."
},
{
"code": null,
"e": 32178,
"s": 32021,
"text": "(iii) Membership:Membership is a property to verify an arbitrary string is accepted by a finite automaton or not i.e. it is a member of the language or not."
},
{
"code": null,
"e": 32478,
"s": 32178,
"text": "Let M is a finite automata that accepts some strings over an alphabet, and let ‘w’ be any string defined over the alphabet, if there exist a transition path in M, which starts at initial state & ends in anyone of the final state, then string ‘w’ is a member of M, otherwise ‘w’ is not a member of M."
},
{
"code": null,
"e": 32664,
"s": 32478,
"text": "(iv) Equality:Two finite state automata M1 & M2 is said to be equal if and only if, they accept the same language. Minimise the finite state automata and the minimal DFA will be unique."
},
{
"code": null,
"e": 32672,
"s": 32664,
"text": "GATE CS"
},
{
"code": null,
"e": 32705,
"s": 32672,
"text": "Theory of Computation & Automata"
},
{
"code": null,
"e": 32803,
"s": 32705,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 32812,
"s": 32803,
"text": "Comments"
},
{
"code": null,
"e": 32825,
"s": 32812,
"text": "Old Comments"
},
{
"code": null,
"e": 32874,
"s": 32825,
"text": "Page Replacement Algorithms in Operating Systems"
},
{
"code": null,
"e": 32912,
"s": 32874,
"text": "Semaphores in Process Synchronization"
},
{
"code": null,
"e": 32944,
"s": 32912,
"text": "Differences between TCP and UDP"
},
{
"code": null,
"e": 32983,
"s": 32944,
"text": "Data encryption standard (DES) | Set 1"
},
{
"code": null,
"e": 33013,
"s": 32983,
"text": "Caesar Cipher in Cryptography"
},
{
"code": null,
"e": 33044,
"s": 33013,
"text": "Difference between DFA and NFA"
},
{
"code": null,
"e": 33066,
"s": 33044,
"text": "Turing Machine in TOC"
},
{
"code": null,
"e": 33117,
"s": 33066,
"text": "Difference between Mealy machine and Moore machine"
},
{
"code": null,
"e": 33144,
"s": 33117,
"text": "Conversion from NFA to DFA"
}
] |
Flask – SQLite
|
Python has an in-built support for SQlite. SQlite3 module is shipped with Python distribution. For a detailed tutorial on using SQLite database in Python, please refer to this link. In this section we shall see how a Flask application interacts with SQLite.
Create an SQLite database ‘database.db’ and create a students’ table in it.
import sqlite3

conn = sqlite3.connect('database.db')
print("Opened database successfully")

conn.execute('CREATE TABLE students (name TEXT, addr TEXT, city TEXT, pin TEXT)')
print("Table created successfully")
conn.close()
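Before wiring this into Flask, the same schema and a parameterized insert can be verified directly with the sqlite3 module. This sketch uses an in-memory database and made-up sample values so it is self-contained:

```python
import sqlite3

# Verify the students schema and a parameterized INSERT/SELECT round trip.
# ':memory:' keeps the check self-contained; the sample row is illustrative.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE students (name TEXT, addr TEXT, city TEXT, pin TEXT)')
conn.execute('INSERT INTO students (name, addr, city, pin) VALUES (?,?,?,?)',
             ('Asha', '12 Lake Road', 'Pune', '411001'))
conn.commit()
rows = conn.execute('SELECT name, city FROM students').fetchall()
print(rows)        # [('Asha', 'Pune')]
conn.close()
```

The `?` placeholders are the same parameterized style the addrec() view uses, which avoids string interpolation into SQL.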
Our Flask application has three View functions.
First new_student() function is bound to the URL rule (‘/addnew’). It renders an HTML file containing student information form.
@app.route('/enternew')
def new_student():
return render_template('student.html')
The HTML script for ‘student.html’ is as follows −
<html>
<body>
<form action = "{{ url_for('addrec') }}" method = "POST">
<h3>Student Information</h3>
Name<br>
<input type = "text" name = "nm" /></br>
Address<br>
<textarea name = "add" ></textarea><br>
City<br>
<input type = "text" name = "city" /><br>
PINCODE<br>
<input type = "text" name = "pin" /><br>
<input type = "submit" value = "submit" /><br>
</form>
</body>
</html>
As it can be seen, form data is posted to the ‘/addrec’ URL which binds the addrec() function.
This addrec() function retrieves the form’s data by POST method and inserts in students table. Message corresponding to success or error in insert operation is rendered to ‘result.html’.
@app.route('/addrec', methods = ['POST', 'GET'])
def addrec():
   if request.method == 'POST':
      con = sql.connect("database.db")
      try:
         nm = request.form['nm']
         addr = request.form['add']
         city = request.form['city']
         pin = request.form['pin']

         cur = con.cursor()
         cur.execute("INSERT INTO students (name, addr, city, pin) VALUES (?,?,?,?)",
            (nm, addr, city, pin))
         con.commit()
         msg = "Record successfully added"
      except:
         con.rollback()
         msg = "error in insert operation"
      finally:
         con.close()
         return render_template("result.html", msg = msg)
The HTML script of result.html contains an escaping statement {{msg}} that displays the result of Insert operation.
<!doctype html>
<html>
<body>
result of addition : {{ msg }}
<h2><a href = "\">go back to home page</a></h2>
</body>
</html>
The application contains another list() function represented by ‘/list’ URL. It populates ‘rows’ as a MultiDict object containing all records in the students table. This object is passed to the list.html template.
@app.route('/list')
def list():
con = sql.connect("database.db")
con.row_factory = sql.Row
cur = con.cursor()
cur.execute("select * from students")
rows = cur.fetchall();
return render_template("list.html",rows = rows)
This list.html is a template, which iterates over the row set and renders the data in an HTML table.
<!doctype html>
<html>
<body>
<table border = 1>
<thead>
<td>Name</td>
            <td>Address</td>
<td>city</td>
<td>Pincode</td>
</thead>
{% for row in rows %}
<tr>
<td>{{row["name"]}}</td>
<td>{{row["addr"]}}</td>
<td> {{ row["city"]}}</td>
<td>{{row['pin']}}</td>
</tr>
{% endfor %}
</table>
<a href = "/">Go back to home page</a>
</body>
</html>
Finally, the ‘/’ URL rule renders a ‘home.html’ which acts as the entry point of the application.
@app.route('/')
def home():
return render_template('home.html')
Here is the complete code of Flask-SQLite application.
from flask import Flask, render_template, request
import sqlite3 as sql
app = Flask(__name__)
@app.route('/')
def home():
return render_template('home.html')
@app.route('/enternew')
def new_student():
return render_template('student.html')
@app.route('/addrec', methods = ['POST', 'GET'])
def addrec():
   if request.method == 'POST':
      con = sql.connect("database.db")
      try:
         nm = request.form['nm']
         addr = request.form['add']
         city = request.form['city']
         pin = request.form['pin']

         cur = con.cursor()
         cur.execute("INSERT INTO students (name, addr, city, pin) VALUES (?,?,?,?)",
            (nm, addr, city, pin))
         con.commit()
         msg = "Record successfully added"
      except:
         con.rollback()
         msg = "error in insert operation"
      finally:
         con.close()
         return render_template("result.html", msg = msg)
@app.route('/list')
def list():
con = sql.connect("database.db")
con.row_factory = sql.Row
cur = con.cursor()
cur.execute("select * from students")
rows = cur.fetchall();
return render_template("list.html",rows = rows)
if __name__ == '__main__':
app.run(debug = True)
Run this script from the Python shell; the development server starts running. Visit http://localhost:5000/ in a browser, which displays a simple menu like this −
Click ‘Add New Record’ link to open the Student Information Form.
Fill the form fields and submit it. The underlying function inserts the record in the students table.
Go back to the home page and click ‘Show List’ link. The table showing the sample data will be displayed.
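The database layer of this app can also be exercised on its own with the standard-library sqlite3 module. Below is a sketch using an in-memory database and invented sample values ("Alice", "12 High St", etc.), so the real database.db is untouched; it mirrors the parameterized INSERT of addrec() and the row_factory trick of list().

```python
import sqlite3

# In-memory SQLite database: same schema as the tutorial's students table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE students (name TEXT, addr TEXT, city TEXT, pin TEXT)")

# Same parameterized INSERT the addrec() view uses (values are invented).
con.execute("INSERT INTO students (name,addr,city,pin) VALUES (?,?,?,?)",
            ("Alice", "12 High St", "Pune", "411001"))
con.commit()

# Same row_factory trick the list() view uses, so fetched rows support
# dictionary-style access like row["name"] in the template.
con.row_factory = sqlite3.Row
rows = con.execute("SELECT * FROM students").fetchall()
print(rows[0]["name"], rows[0]["city"])
con.close()
```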
|
[
{
"code": null,
"e": 2291,
"s": 2033,
"text": "Python has an in-built support for SQlite. SQlite3 module is shipped with Python distribution. For a detailed tutorial on using SQLite database in Python, please refer to this link. In this section we shall see how a Flask application interacts with SQLite."
},
{
"code": null,
"e": 2367,
"s": 2291,
"text": "Create an SQLite database ‘database.db’ and create a students’ table in it."
},
{
"code": null,
"e": 2591,
"s": 2367,
"text": "import sqlite3\n\nconn = sqlite3.connect('database.db')\nprint \"Opened database successfully\";\n\nconn.execute('CREATE TABLE students (name TEXT, addr TEXT, city TEXT, pin TEXT)')\nprint \"Table created successfully\";\nconn.close()"
},
{
"code": null,
"e": 2639,
"s": 2591,
"text": "Our Flask application has three View functions."
},
{
"code": null,
"e": 2767,
"s": 2639,
"text": "First new_student() function is bound to the URL rule (‘/addnew’). It renders an HTML file containing student information form."
},
{
"code": null,
"e": 2852,
"s": 2767,
"text": "@app.route('/enternew')\ndef new_student():\n return render_template('student.html')"
},
{
"code": null,
"e": 2903,
"s": 2852,
"text": "The HTML script for ‘student.html’ is as follows −"
},
{
"code": null,
"e": 3419,
"s": 2903,
"text": "<html>\n <body>\n <form action = \"{{ url_for('addrec') }}\" method = \"POST\">\n <h3>Student Information</h3>\n Name<br>\n <input type = \"text\" name = \"nm\" /></br>\n \n Address<br>\n <textarea name = \"add\" ></textarea><br>\n \n City<br>\n <input type = \"text\" name = \"city\" /><br>\n \n PINCODE<br>\n <input type = \"text\" name = \"pin\" /><br>\n <input type = \"submit\" value = \"submit\" /><br>\n </form>\n </body>\n</html>"
},
{
"code": null,
"e": 3514,
"s": 3419,
"text": "As it can be seen, form data is posted to the ‘/addrec’ URL which binds the addrec() function."
},
{
"code": null,
"e": 3701,
"s": 3514,
"text": "This addrec() function retrieves the form’s data by POST method and inserts in students table. Message corresponding to success or error in insert operation is rendered to ‘result.html’."
},
{
"code": null,
"e": 4424,
"s": 3701,
"text": "@app.route('/addrec',methods = ['POST', 'GET'])\ndef addrec():\n if request.method == 'POST':\n try:\n nm = request.form['nm']\n addr = request.form['add']\n city = request.form['city']\n pin = request.form['pin']\n \n with sql.connect(\"database.db\") as con:\n cur = con.cursor()\n cur.execute(\"INSERT INTO students (name,addr,city,pin) \n VALUES (?,?,?,?)\",(nm,addr,city,pin) )\n \n con.commit()\n msg = \"Record successfully added\"\n except:\n con.rollback()\n msg = \"error in insert operation\"\n \n finally:\n return render_template(\"result.html\",msg = msg)\n con.close()"
},
{
"code": null,
"e": 4540,
"s": 4424,
"text": "The HTML script of result.html contains an escaping statement {{msg}} that displays the result of Insert operation."
},
{
"code": null,
"e": 4683,
"s": 4540,
"text": "<!doctype html>\n<html>\n <body>\n result of addition : {{ msg }}\n <h2><a href = \"\\\">go back to home page</a></h2>\n </body>\n</html>"
},
{
"code": null,
"e": 4897,
"s": 4683,
"text": "The application contains another list() function represented by ‘/list’ URL. It populates ‘rows’ as a MultiDict object containing all records in the students table. This object is passed to the list.html template."
},
{
"code": null,
"e": 5143,
"s": 4897,
"text": "@app.route('/list')\ndef list():\n con = sql.connect(\"database.db\")\n con.row_factory = sql.Row\n \n cur = con.cursor()\n cur.execute(\"select * from students\")\n \n rows = cur.fetchall(); \n return render_template(\"list.html\",rows = rows)"
},
{
"code": null,
"e": 5244,
"s": 5143,
"text": "This list.html is a template, which iterates over the row set and renders the data in an HTML table."
},
{
"code": null,
"e": 5793,
"s": 5244,
"text": "<!doctype html>\n<html>\n <body>\n <table border = 1>\n <thead>\n <td>Name</td>\n <td>Address>/td<\n <td>city</td>\n <td>Pincode</td>\n </thead>\n \n {% for row in rows %}\n <tr>\n <td>{{row[\"name\"]}}</td>\n <td>{{row[\"addr\"]}}</td>\n <td> {{ row[\"city\"]}}</td>\n <td>{{row['pin']}}</td>\t\n </tr>\n {% endfor %}\n </table>\n \n <a href = \"/\">Go back to home page</a>\n </body>\n</html>"
},
{
"code": null,
"e": 5891,
"s": 5793,
"text": "Finally, the ‘/’ URL rule renders a ‘home.html’ which acts as the entry point of the application."
},
{
"code": null,
"e": 5958,
"s": 5891,
"text": "@app.route('/')\ndef home():\n return render_template('home.html')"
},
{
"code": null,
"e": 6013,
"s": 5958,
"text": "Here is the complete code of Flask-SQLite application."
},
{
"code": null,
"e": 7297,
"s": 6013,
"text": "from flask import Flask, render_template, request\nimport sqlite3 as sql\napp = Flask(__name__)\n\[email protected]('/')\ndef home():\n return render_template('home.html')\n\[email protected]('/enternew')\ndef new_student():\n return render_template('student.html')\n\[email protected]('/addrec',methods = ['POST', 'GET'])\ndef addrec():\n if request.method == 'POST':\n try:\n nm = request.form['nm']\n addr = request.form['add']\n city = request.form['city']\n pin = request.form['pin']\n \n with sql.connect(\"database.db\") as con:\n cur = con.cursor()\n \n cur.execute(\"INSERT INTO students (name,addr,city,pin) \n VALUES (?,?,?,?)\",(nm,addr,city,pin) )\n \n con.commit()\n msg = \"Record successfully added\"\n except:\n con.rollback()\n msg = \"error in insert operation\"\n \n finally:\n return render_template(\"result.html\",msg = msg)\n con.close()\n\[email protected]('/list')\ndef list():\n con = sql.connect(\"database.db\")\n con.row_factory = sql.Row\n \n cur = con.cursor()\n cur.execute(\"select * from students\")\n \n rows = cur.fetchall();\n return render_template(\"list.html\",rows = rows)\n\nif __name__ == '__main__':\n app.run(debug = True)"
},
{
"code": null,
"e": 7458,
"s": 7297,
"text": "Run this script from Python shell and as the development server starts running. Visit http://localhost:5000/ in browser which displays a simple menu like this −"
},
{
"code": null,
"e": 7524,
"s": 7458,
"text": "Click ‘Add New Record’ link to open the Student Information Form."
},
{
"code": null,
"e": 7626,
"s": 7524,
"text": "Fill the form fields and submit it. The underlying function inserts the record in the students table."
},
{
"code": null,
"e": 7732,
"s": 7626,
"text": "Go back to the home page and click ‘Show List’ link. The table showing the sample data will be displayed."
},
{
"code": null,
"e": 7765,
"s": 7732,
"text": "\n 22 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 7781,
"s": 7765,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 7816,
"s": 7781,
"text": "\n 21 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 7827,
"s": 7816,
"text": " Jack Chan"
},
{
"code": null,
"e": 7860,
"s": 7827,
"text": "\n 16 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 7876,
"s": 7860,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 7909,
"s": 7876,
"text": "\n 54 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 7926,
"s": 7909,
"text": " Srikanth Guskra"
},
{
"code": null,
"e": 7961,
"s": 7926,
"text": "\n 88 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 7976,
"s": 7961,
"text": " Jorge Escobar"
},
{
"code": null,
"e": 8010,
"s": 7976,
"text": "\n 80 Lectures \n 12 hours \n"
},
{
"code": null,
"e": 8033,
"s": 8010,
"text": " Stone River ELearning"
},
{
"code": null,
"e": 8040,
"s": 8033,
"text": " Print"
},
{
"code": null,
"e": 8051,
"s": 8040,
"text": " Add Notes"
}
] |
How to avoid multicollinearity in Categorical Data | by Satyam Kumar | Towards Data Science
|
Multicollinearity refers to a condition in which the independent variables are correlated with each other. Multicollinearity can cause problems when you fit the model and interpret the results. The variables of the dataset should be independent of each other to avoid the problem of multicollinearity.
To handle or remove multicollinearity in the dataset, firstly we need to confirm if the dataset is multicollinear in nature. There are various techniques to find the presence of multicollinearity in the data, some of them are:
Getting very high standard errors for regression coefficients
The overall model is significant, but none of the coefficients are significant
Large changes in coefficients when adding predictors
High Variance Inflation Factor (VIF) and Low Tolerance
These are some of the techniques used to detect multicollinearity in the data.
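The VIF check from the list above can be sketched with plain NumPy (toy data invented for illustration; in practice, statsmodels' variance_inflation_factor does this for you). For each feature j, VIF_j = 1 / (1 − R_j²), where R_j² comes from regressing feature j on the remaining features.

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor per column of X (n_samples x n_features)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        # Regress column j on the others, with an intercept column of ones.
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

# Toy data: x2 is x1 plus tiny noise (near-duplicate), x3 is independent.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=1e-3, size=200)
x3 = rng.normal(size=200)
vifs = vif(np.column_stack([x1, x2, x3]))
print([round(v, 1) for v in vifs])   # huge VIFs for x1/x2, near 1 for x3
```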
To read more about how to remove multicollinearity in the dataset using Principal Component Analysis read my below-mentioned article:
towardsdatascience.com
In this article, we will see how to find multicollinearity in categorical features using the Correlation Matrix, and remove it.
For further analysis, the dataset used is Churn Modelling from Kaggle. The problem statement is a binary classification problem and has numerical and categorical columns.
For this article, we will only observe collinearity between categorical features: “Geography”, “Gender”.
Machine Learning models can only be trained on numerical features. To convert categorical features into numerical ones, pd.get_dummies is a powerful technique: it one-hot encodes the categorical variables.
pd.get_dummies one hot encodes the categorical features “Geography”, “Gender”.
Syntax: pandas.get_dummies(data, prefix=None, prefix_sep=’_’, dummy_na=False, columns=None, sparse=False, drop_first=False, dtype=None)
To find the Pearson correlation coefficient between all the numerical variables in the dataset:
data.corr(method='pearson')

Method of correlation:
* pearson (default)
* kendall
* spearman
From the above correlation matrix, it is clearly observed that the one-hot features encoded using pd.get_dummies are highly correlated with others.
Correlation coefficient scale:
+1: highly correlated in positive direction
-1: highly correlated in negative direction
 0: no correlation
To avoid or remove multicollinearity in the dataset after one-hot encoding using pd.get_dummies, you can drop one of the categories and hence removing collinearity between the categorical features. Sklearn provides this feature by including drop_first=True in pd.get_dummies.
For example, if you have a variable gender, you don't need both a male and female dummy. If male=1 then the person is a male and if male=0 then the person is female. For the presence of hundreds of categories, dropping the first column does not affect much.
One category from each categorical column is avoided by using drop_first=True, and it's clearly observed from the correlation heatmap that the categorical features are no more correlated.
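A minimal sketch of this behavior on toy data (the column values below are invented for illustration): without drop_first the two Gender dummies are perfectly redundant, and drop_first=True removes one category per original column.

```python
import pandas as pd

# Hypothetical toy frame with just the two categorical columns.
df = pd.DataFrame({"Gender": ["Male", "Female", "Female", "Male"],
                   "Geography": ["France", "Spain", "Germany", "Spain"]})

# Without drop_first, the Gender dummies are redundant:
# Gender_Male == 1 - Gender_Female, so their correlation is -1.
full = pd.get_dummies(df).astype(int)
corr_val = full["Gender_Female"].corr(full["Gender_Male"])
print(round(corr_val, 3))            # close to -1.0

# drop_first=True keeps one reference category per original column,
# which removes that built-in collinearity.
reduced = pd.get_dummies(df, drop_first=True)
print(sorted(reduced.columns))
```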
In this article, we have discussed how to avoid multicollinearity in categorical data. Lasso regularization can also be used to avoid the effect of multicollinearity. By choosing the correct value of alpha for lasso regularization the weight of the extra features will be equated to zero.
[1] Pandas get_dummies: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html
Thank You for Reading
|
[
{
"code": null,
"e": 349,
"s": 47,
"text": "Multicollinearity refers to a condition in which the independent variables are correlated to each other. Multicollinearity can cause problems when you fit the model and interpret the results. The variables of the dataset should be independent of each other to overdue the problem of multicollinearity."
},
{
"code": null,
"e": 576,
"s": 349,
"text": "To handle or remove multicollinearity in the dataset, firstly we need to confirm if the dataset is multicollinear in nature. There are various techniques to find the presence of multicollinearity in the data, some of them are:"
},
{
"code": null,
"e": 638,
"s": 576,
"text": "Getting very high standard errors for regression coefficients"
},
{
"code": null,
"e": 717,
"s": 638,
"text": "The overall model is significant, but none of the coefficients are significant"
},
{
"code": null,
"e": 770,
"s": 717,
"text": "Large changes in coefficients when adding predictors"
},
{
"code": null,
"e": 825,
"s": 770,
"text": "High Variance Inflation Factor (VIF) and Low Tolerance"
},
{
"code": null,
"e": 900,
"s": 825,
"text": "are some of the techniques or hacks to find multicollinearity in the data."
},
{
"code": null,
"e": 1034,
"s": 900,
"text": "To read more about how to remove multicollinearity in the dataset using Principal Component Analysis read my below-mentioned article:"
},
{
"code": null,
"e": 1057,
"s": 1034,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 1185,
"s": 1057,
"text": "In this article, we will see how to find multicollinearity in categorical features using the Correlation Matrix, and remove it."
},
{
"code": null,
"e": 1356,
"s": 1185,
"text": "For further analysis, the dataset used is Churn Modelling from Kaggle. The problem statement is a binary classification problem and has numerical and categorical columns."
},
{
"code": null,
"e": 1461,
"s": 1356,
"text": "For this article, we will only observe collinearity between categorical features: “Geography”, “Gender”."
},
{
"code": null,
"e": 1723,
"s": 1461,
"text": "Machine Learning models can train only the dataset with numerical features, in order to convert categorical features, pd.get_dummies is a powerful technique to convert categorical variables into numerical variables. It one-hot encodes the categorical variables."
},
{
"code": null,
"e": 1802,
"s": 1723,
"text": "pd.get_dummies one hot encodes the categorical features “Geography”, “Gender”."
},
{
"code": null,
"e": 1939,
"s": 1802,
"text": "Syntax: pandas.get_dummies(data, prefix=None, prefix_sep=’_’, dummy_na=False, columns=None, sparse=False, drop_first=False, dtype=None)"
},
{
"code": null,
"e": 2034,
"s": 1939,
"text": "To find the person correlation coefficient between all the numerical variables in the dataset:"
},
{
"code": null,
"e": 2122,
"s": 2034,
"text": "data.corr(method='pearson')Method of correlation:* pearson (default)* kendall* spearman"
},
{
"code": null,
"e": 2270,
"s": 2122,
"text": "From the above correlation matrix, it is clearly observed that the one-hot features encoded using pd.get_dummies are highly correlated with others."
},
{
"code": null,
"e": 2405,
"s": 2270,
"text": "Correlation coefficient scale:+1: highly correlated in positive direction-1: highly correlated in negative direction 0: No correlation"
},
{
"code": null,
"e": 2681,
"s": 2405,
"text": "To avoid or remove multicollinearity in the dataset after one-hot encoding using pd.get_dummies, you can drop one of the categories and hence removing collinearity between the categorical features. Sklearn provides this feature by including drop_first=True in pd.get_dummies."
},
{
"code": null,
"e": 2939,
"s": 2681,
"text": "For example, if you have a variable gender, you don't need both a male and female dummy. If male=1 then the person is a male and if male=0 then the person is female. For the presence of hundreds of categories, dropping the first column does not affect much."
},
{
"code": null,
"e": 3127,
"s": 2939,
"text": "One category from each categorical column is avoided by using drop_first=True, and it's clearly observed from the correlation heatmap that the categorical features are no more correlated."
},
{
"code": null,
"e": 3416,
"s": 3127,
"text": "In this article, we have discussed how to avoid multicollinearity in categorical data. Lasso regularization can also be used to avoid the effect of multicollinearity. By choosing the correct value of alpha for lasso regularization the weight of the extra features will be equated to zero."
},
{
"code": null,
"e": 3523,
"s": 3416,
"text": "[1] Pandas get_dummies: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html"
}
] |
Python | os.truncate() method - GeeksforGeeks
|
25 Jun, 2019
OS module in Python provides functions for interacting with the operating system. OS comes under Python’s standard utility modules. This module provides a portable way of using operating system dependent functionality. The os.truncate() method truncates the file corresponding to path so that it is at most length bytes in size. This function can support a file descriptor too.
Syntax: os.truncate(path, length)
Parameters:path: This parameter is the path or file descriptor of the file that is to be truncated.length: This is the length of the file upto which file is to be truncated.
Return Value: This method does not return any value.
Example #1: Using os.truncate() method to truncate a file using its path.
# Python program to explain os.truncate() method

# importing os module
import os

# path
path = 'C:/Users/Rajnish/Desktop/testfile.txt'

# Open the file and get the file descriptor
# associated with it using os.open() method
fd = os.open(path, os.O_RDWR | os.O_CREAT)

# String to be written
s = 'GeeksforGeeks - A Computer Science portal'

# Convert the string to bytes
line = str.encode(s)

# Write the bytestring to the file
# associated with the file descriptor fd
os.write(fd, line)

# Using os.truncate() method with path as parameter
os.truncate(path, 10)

# Seek the file from beginning using os.lseek() method
os.lseek(fd, 0, 0)

# Read the file
s = os.read(fd, 15)

# Print string
print(s)

# Close the file descriptor
os.close(fd)
b'GeeksforGe'
Example #2: Using os.truncate() method to truncate a file using a file descriptor.
# Python program to explain os.truncate() method

# importing os module
import os

# path
path = 'C:/Users/Rajnish/Desktop/testfile.txt'

# Open the file and get the file descriptor
# associated with it using os.open() method
fd = os.open(path, os.O_RDWR | os.O_CREAT)

# String to be written
s = 'GeeksforGeeks'

# Convert the string to bytes
line = str.encode(s)

# Write the bytestring to the file
# associated with the file descriptor fd
os.write(fd, line)

# Using os.truncate() method with fd as parameter
os.truncate(fd, 4)

# Seek the file from beginning using os.lseek() method
os.lseek(fd, 0, 0)

# Read the file
s = os.read(fd, 15)

# Print string
print(s)

# Close the file descriptor
os.close(fd)
b'Geek'
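For a portable variant of the examples above, the same calls can be exercised on a temporary file — the file name here comes from tempfile.mkstemp, so nothing depends on the Windows path used in the examples:

```python
import os
import tempfile

# Create a temporary file and get its file descriptor and path.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, str.encode("GeeksforGeeks - A Computer Science portal"))
    os.truncate(path, 10)      # truncate by path, as in Example #1
    os.lseek(fd, 0, 0)         # rewind before reading
    data = os.read(fd, 15)
    print(data)                # b'GeeksforGe'
finally:
    os.close(fd)
    os.remove(path)
```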
|
[
{
"code": null,
"e": 25867,
"s": 25839,
"text": "\n25 Jun, 2019"
},
{
"code": null,
"e": 26238,
"s": 25867,
"text": "OS module in Python provides functions for interacting with the operating system. OS comes under Python’s standard utility modules. This module provides a portable way of using operating system dependent functionality.os.truncate() method truncates the file corresponding to path so that it is at most length bytes in size. This function can support file descriptor too."
},
{
"code": null,
"e": 26272,
"s": 26238,
"text": "Syntax: os.truncate(path, length)"
},
{
"code": null,
"e": 26446,
"s": 26272,
"text": "Parameters:path: This parameter is the path or file descriptor of the file that is to be truncated.length: This is the length of the file upto which file is to be truncated."
},
{
"code": null,
"e": 26500,
"s": 26446,
"text": "Return Value: This method does not returns any value."
},
{
"code": null,
"e": 26513,
"s": 26500,
"text": "Example #1 :"
},
{
"code": null,
"e": 26576,
"s": 26513,
"text": "Using os.truncate() method to truncate a file using it’s path."
},
{
"code": "# Python program to explain os.truncate() method # importing os module import os # path path = 'C:/Users/Rajnish/Desktop/testfile.txt' # Open the file and get# the file descriptor associated# with it using os.open() methodfd = os.open(path, os.O_RDWR|os.O_CREAT) # String to be writtens = 'GeeksforGeeks - A Computer Science portal' # Convert the string to bytes line = str.encode(s) # Write the bytestring to the file # associated with the file # descriptor fd os.write(fd, line) # Using os.truncate() method# Using path as parameteros.truncate(path, 10) # Seek the file from beginning# using os.lseek() methodos.lseek(fd, 0, 0) # Read the files = os.read(fd, 15) # Print stringprint(s) # Close the file descriptor os.close(fd)",
"e": 27326,
"s": 26576,
"text": null
},
{
"code": null,
"e": 27341,
"s": 27326,
"text": "b'GeeksforGe'\n"
},
{
"code": null,
"e": 27423,
"s": 27341,
"text": "Example #2 :Using os.truncate() method to truncate a file using a file descriptor"
},
{
"code": "# Python program to explain os.truncate() method # importing os module import os # path path = 'C:/Users/Rajnish/Desktop/testfile.txt' # Open the file and get# the file descriptor associated# with it using os.open() methodfd = os.open(path, os.O_RDWR|os.O_CREAT) # String to be writtens = 'GeeksforGeeks' # Convert the string to bytes line = str.encode(s) # Write the bytestring to the file # associated with the file # descriptor fd os.write(fd, line) # Using os.truncate() method# Using fd as parameteros.truncate(fd, 4) # Seek the file from beginning# using os.lseek() methodos.lseek(fd, 0, 0) # Read the files = os.read(fd, 15) # Print stringprint(s) # Close the file descriptor os.close(fd)",
"e": 28140,
"s": 27423,
"text": null
},
{
"code": null,
"e": 28149,
"s": 28140,
"text": "b'Geek'\n"
},
{
"code": null,
"e": 28166,
"s": 28149,
"text": "python-os-module"
},
{
"code": null,
"e": 28173,
"s": 28166,
"text": "Python"
},
{
"code": null,
"e": 28271,
"s": 28173,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28289,
"s": 28271,
"text": "Python Dictionary"
},
{
"code": null,
"e": 28321,
"s": 28289,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 28343,
"s": 28321,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 28385,
"s": 28343,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 28411,
"s": 28385,
"text": "Python String | replace()"
},
{
"code": null,
"e": 28455,
"s": 28411,
"text": "Reading and Writing to text files in Python"
},
{
"code": null,
"e": 28484,
"s": 28455,
"text": "*args and **kwargs in Python"
},
{
"code": null,
"e": 28521,
"s": 28484,
"text": "Create a Pandas DataFrame from Lists"
},
{
"code": null,
"e": 28563,
"s": 28521,
"text": "Check if element exists in list in Python"
}
] |
How to print a number with commas as thousands separators in JavaScript? - GeeksforGeeks
|
18 Oct, 2019
Method 1: Using Intl.NumberFormat()
The Intl.NumberFormat() object is used to represent numbers in a language-sensitive formatting. It can be used to represent currency or percentages according to the locale specified.
The locales parameter of this object is used to specify the format of the number. The ‘en-US’ locale is used to specify that the locale takes the format of the United States and the English language, where numbers are represented with a comma between the thousands.
The format() method of this object can be used to return a string of the number in the specified locale and formatting options. This will format the number with commas at the thousands of places and return a string with the formatted number.
Syntax:
nfObject = new Intl.NumberFormat('en-US')
nfObject.format(givenNumber)
Example:
<!DOCTYPE html>
<html>

<head>
    <title>
        Print a number with commas as
        thousands separators in JavaScript.
    </title>
</head>

<body>
    <h1 style="color: green">
        GeeksforGeeks
    </h1>
    <b>
        How to print a number with commas as
        thousands separators in JavaScript?
    </b>
    <p>
        Output for '123456789':
        <span class="output"></span>
    </p>
    <button onclick="separateNumber()">Separate thousands</button>

    <script type="text/javascript">
        function separateNumber() {
            givenNumber = 123456789;
            nfObject = new Intl.NumberFormat('en-US');
            output = nfObject.format(givenNumber);
            document.querySelector('.output').textContent = output;
        }
    </script>
</body>

</html>
Output:
Before clicking the button:
After clicking the button:
Method 2: Using toLocaleString()
The toLocaleString() method is used to return a string with a language-sensitive representation of a number. The optional locales parameter is used to specify the format of the number.
The locale ‘en-US’ is used to specify that the locale takes the format of the United States and the English language, where numbers are represented with a comma between the thousands. This will format the number with commas at the thousands of places and return a string with the formatted number.
Syntax:
givenNumber.toLocaleString('en-US')
Example:
<!DOCTYPE html>
<html>

<head>
    <title>
        Print a number with commas as
        thousands separators in JavaScript.
    </title>
</head>

<body>
    <h1 style="color: green">GeeksforGeeks</h1>
    <b>
        How to print a number with commas as
        thousands separators in JavaScript?
    </b>
    <p>
        Output for '987654321':
        <span class="output"></span>
    </p>
    <button onclick="separateNumber()">
        Separate thousands
    </button>

    <script type="text/javascript">
        function separateNumber() {
            givenNumber = 987654321;
            output = givenNumber.toLocaleString('en-US');
            document.querySelector('.output').textContent = output;
        }
    </script>
</body>

</html>
Output:
Before clicking the button:
After clicking the button:
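As a cross-language aside (an addition for comparison, not part of the original JavaScript examples), Python's format mini-language produces the same comma grouping for the two sample numbers used above:

```python
# "," in a format spec inserts commas as thousands separators.
n = 123456789
print(f"{n:,}")                 # 123,456,789
print(format(987654321, ","))   # 987,654,321
```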
|
[
{
"code": null,
"e": 24658,
"s": 24630,
"text": "\n18 Oct, 2019"
},
{
"code": null,
"e": 24694,
"s": 24658,
"text": "Method 1: Using Intl.NumberFormat()"
},
{
"code": null,
"e": 24877,
"s": 24694,
"text": "The Intl.NumberFormat() object is used to represent numbers in a language-sensitive formatting. It can be used to represent currency or percentages according to the locale specified."
},
{
"code": null,
"e": 25143,
"s": 24877,
"text": "The locales parameter of this object is used to specify the format of the number. The ‘en-US’ locale is used to specify that the locale takes the format of the United States and the English language, where numbers are represented with a comma between the thousands."
},
{
"code": null,
"e": 25385,
"s": 25143,
"text": "The format() method of this object can be used to return a string of the number in the specified locale and formatting options. This will format the number with commas at the thousands of places and return a string with the formatted number."
},
{
"code": null,
"e": 25393,
"s": 25385,
"text": "Syntax:"
},
{
"code": null,
"e": 25465,
"s": 25393,
"text": "nfObject = new Intl.NumberFormat('en-US')\nnfObject.format(givenNumber)\n"
},
{
"code": null,
"e": 25474,
"s": 25465,
"text": "Example:"
},
{
"code": "<!DOCTYPE html><html> <head> <title> Print a number with commas as thousands separators in JavaScript. </title></head> <body> <h1 style=\"color: green\"> GeeksforGeeks </h1> <b> How to print a number with commas as thousands separators in JavaScript? </b> <p>Output for '123456789': <span class=\"output\"></span> </p> <button onclick=\"separateNumber()\">Separate thousands</button> <script type=\"text/javascript\"> function separateNumber() { givenNumber = 123456789; nfObject = new Intl.NumberFormat('en-US'); output = nfObject.format(givenNumber); document.querySelector('.output').textContent = output; } </script></body> </html>",
"e": 26229,
"s": 25474,
"text": null
},
{
"code": null,
"e": 26237,
"s": 26229,
"text": "Output:"
},
{
"code": null,
"e": 26265,
"s": 26237,
"text": "Before clicking the button:"
},
{
"code": null,
"e": 26292,
"s": 26265,
"text": "After clicking the button:"
},
{
"code": null,
"e": 26325,
"s": 26292,
"text": "Method 2: Using toLocaleString()"
},
{
"code": null,
"e": 26510,
"s": 26325,
"text": "The toLocaleString() method is used to return a string with a language-sensitive representation of a number. The optional locales parameter is used to specify the format of the number."
},
{
"code": null,
"e": 26808,
"s": 26510,
"text": "The locale ‘en-US’ is used to specify that the locale takes the format of the United States and the English language, where numbers are represented with a comma between the thousands. This will format the number with commas at the thousands of places and return a string with the formatted number."
},
{
"code": null,
"e": 26816,
"s": 26808,
"text": "Syntax:"
},
{
"code": null,
"e": 26852,
"s": 26816,
"text": "givenNumber.toLocaleString('en-US')"
},
{
"code": null,
"e": 26861,
"s": 26852,
"text": "Example:"
},
{
"code": "<!DOCTYPE html><html> <head> <title> Print a number with commas as thousands separators in JavaScript. </title></head> <body> <h1 style=\"color: green\">GeeksforGeeks</h1> <b> How to print a number with commas as thousands separators in JavaScript? </b> <p>Output for '987654321': <span class=\"output\"></span> </p> <button onclick=\"separateNumber()\"> Separate thousands </button> <script type=\"text/javascript\"> function separateNumber() { givenNumber = 987654321; output = givenNumber.toLocaleString('en-US'); document.querySelector('.output').textContent = output; } </script></body> </html>",
"e": 27572,
"s": 26861,
"text": null
},
{
"code": null,
"e": 27580,
"s": 27572,
"text": "Output:"
},
{
"code": null,
"e": 27608,
"s": 27580,
"text": "Before clicking the button:"
},
{
"code": null,
"e": 27635,
"s": 27608,
"text": "After clicking the button:"
},
{
"code": null,
"e": 27651,
"s": 27635,
"text": "JavaScript-Misc"
},
{
"code": null,
"e": 27658,
"s": 27651,
"text": "Picked"
},
{
"code": null,
"e": 27669,
"s": 27658,
"text": "JavaScript"
},
{
"code": null,
"e": 27686,
"s": 27669,
"text": "Web Technologies"
},
{
"code": null,
"e": 27713,
"s": 27686,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 27811,
"s": 27713,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27820,
"s": 27811,
"text": "Comments"
},
{
"code": null,
"e": 27833,
"s": 27820,
"text": "Old Comments"
},
{
"code": null,
"e": 27878,
"s": 27833,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 27939,
"s": 27878,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 28011,
"s": 27939,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 28063,
"s": 28011,
"text": "How to append HTML code to a div using JavaScript ?"
},
{
"code": null,
"e": 28109,
"s": 28063,
"text": "How to Open URL in New Tab using JavaScript ?"
},
{
"code": null,
"e": 28151,
"s": 28109,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 28184,
"s": 28151,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 28246,
"s": 28184,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 28289,
"s": 28246,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
Program for Armstrong Numbers
|
22 Jun, 2022
Given a number x, determine whether the given number is an Armstrong number or not.
A positive integer of n digits is called an Armstrong number of order n (order is number of digits) if.
abcd... = pow(a,n) + pow(b,n) + pow(c,n) + pow(d,n) + ....
Example:
Input : 153
Output : Yes
153 is an Armstrong number.
1*1*1 + 5*5*5 + 3*3*3 = 153
Input : 120
Output : No
120 is not an Armstrong number.
1*1*1 + 2*2*2 + 0*0*0 = 9
Input : 1253
Output : No
1253 is not an Armstrong number
1*1*1*1 + 2*2*2*2 + 5*5*5*5 + 3*3*3*3 = 723
Input : 1634
Output : Yes
1*1*1*1 + 6*6*6*6 + 3*3*3*3 + 4*4*4*4 = 1634
Approach: The idea is to first count the number of digits (i.e., find the order). Let the number of digits be n. For every digit r in the input number x, compute r^n. If the sum of all such values is equal to x, return true; otherwise return false.
C++
C
Java
Python
Python3
C#
Javascript
// C++ program to determine whether the number is// Armstrong number or not#include <bits/stdc++.h>using namespace std; /* Function to calculate x raised to the power y */int power(int x, unsigned int y){ if (y == 0) return 1; if (y % 2 == 0) return power(x, y / 2) * power(x, y / 2); return x * power(x, y / 2) * power(x, y / 2);} /* Function to calculate order of the number */int order(int x){ int n = 0; while (x) { n++; x = x / 10; } return n;} // Function to check whether the given number is// Armstrong number or notbool isArmstrong(int x){ // Calling order function int n = order(x); int temp = x, sum = 0; while (temp) { int r = temp % 10; sum += power(r, n); temp = temp / 10; } // If satisfies Armstrong condition return (sum == x);} // Driver Programint main(){ int x = 153; cout << boolalpha << isArmstrong(x) << endl; x = 1253; cout << boolalpha << isArmstrong(x) << endl; return 0;}
// C program to find Armstrong number #include <stdio.h> /* Function to calculate x raised to the power y */int power(int x, unsigned int y){ if (y == 0) return 1; if (y % 2 == 0) return power(x, y / 2) * power(x, y / 2); return x * power(x, y / 2) * power(x, y / 2);} /* Function to calculate order of the number */int order(int x){ int n = 0; while (x) { n++; x = x / 10; } return n;} // Function to check whether the given number is// Armstrong number or notint isArmstrong(int x){ // Calling order function int n = order(x); int temp = x, sum = 0; while (temp) { int r = temp % 10; sum += power(r, n); temp = temp / 10; } // If satisfies Armstrong condition if (sum == x) return 1; else return 0;} // Driver Programint main(){ int x = 153; if (isArmstrong(x) == 1) printf("True\n"); else printf("False\n"); x = 1253; if (isArmstrong(x) == 1) printf("True\n"); else printf("False\n"); return 0;}
// Java program to determine whether the number is// Armstrong number or notpublic class Armstrong { /* Function to calculate x raised to the power y */ int power(int x, long y) { if (y == 0) return 1; if (y % 2 == 0) return power(x, y / 2) * power(x, y / 2); return x * power(x, y / 2) * power(x, y / 2); } /* Function to calculate order of the number */ int order(int x) { int n = 0; while (x != 0) { n++; x = x / 10; } return n; } // Function to check whether the given number is // Armstrong number or not boolean isArmstrong(int x) { // Calling order function int n = order(x); int temp = x, sum = 0; while (temp != 0) { int r = temp % 10; sum = sum + power(r, n); temp = temp / 10; } // If satisfies Armstrong condition return (sum == x); } // Driver Program public static void main(String[] args) { Armstrong ob = new Armstrong(); int x = 153; System.out.println(ob.isArmstrong(x)); x = 1253; System.out.println(ob.isArmstrong(x)); }}
# Python program to determine whether the number is# Armstrong number or not # Function to calculate x raised to the power y def power(x, y): if y == 0: return 1 if y % 2 == 0: return power(x, y/2)*power(x, y/2) return x*power(x, y/2)*power(x, y/2) # Function to calculate order of the number def order(x): # variable to store of the number n = 0 while (x != 0): n = n+1 x = x/10 return n # Function to check whether the given number is# Armstrong number or not def isArmstrong(x): n = order(x) temp = x sum1 = 0 while (temp != 0): r = temp % 10 sum1 = sum1 + power(r, n) temp = temp/10 # If condition satisfies return (sum1 == x) # Driver Programx = 153print(isArmstrong(x))x = 1253print(isArmstrong(x))
# python3 >= 3.6 for typehint support# This example derives the order (digit count)# through type conversion & string manipulation def isArmstrong(val: int) -> bool: """val will be tested to see if it is an Armstrong number. Arguments: val {int} -- positive integer only. Returns: bool -- True if it is / False if it isn't """ # break the int into its respective digits parts = [int(_) for _ in str(val)] # the order is the number of digits n = len(parts) # begin test. counter = 0 for _ in parts: counter += _**n return (counter == val) # Check Armstrong numberprint(isArmstrong(153)) print(isArmstrong(1253))
// C# program to determine whether the// number is Armstrong number or notusing System; public class GFG { // Function to calculate x raised // to the power y int power(int x, long y) { if (y == 0) return 1; if (y % 2 == 0) return power(x, y / 2) * power(x, y / 2); return x * power(x, y / 2) * power(x, y / 2); } // Function to calculate // order of the number int order(int x) { int n = 0; while (x != 0) { n++; x = x / 10; } return n; } // Function to check whether the // given number is Armstrong number // or not bool isArmstrong(int x) { // Calling order function int n = order(x); int temp = x, sum = 0; while (temp != 0) { int r = temp % 10; sum = sum + power(r, n); temp = temp / 10; } // If satisfies Armstrong condition return (sum == x); } // Driver Code public static void Main() { GFG ob = new GFG(); int x = 153; Console.WriteLine(ob.isArmstrong(x)); x = 1253; Console.WriteLine(ob.isArmstrong(x)); }} // This code is contributed by Nitin Mittal.
<script> // JavaScript program to determine whether the // number is Armstrong number or not // Function to calculate x raised // to the power y function power(x, y) { if( y == 0) return 1; if (y % 2 == 0) return power(x, parseInt(y / 2, 10)) * power(x, parseInt(y / 2, 10)); return x * power(x, parseInt(y / 2, 10)) * power(x, parseInt(y / 2, 10)); } // Function to calculate // order of the number function order(x) { let n = 0; while (x != 0) { n++; x = parseInt(x / 10, 10); } return n; } // Function to check whether the // given number is Armstrong number // or not function isArmstrong(x) { // Calling order function let n = order(x); let temp = x, sum = 0; while (temp != 0) { let r = temp % 10; sum = sum + power(r, n); temp = parseInt(temp / 10, 10); } // If satisfies Armstrong condition return (sum == x); } let x = 153; if(isArmstrong(x)) { document.write("True" + "</br>"); } else{ document.write("False" + "</br>"); } x = 1253; if(isArmstrong(x)) { document.write("True"); } else{ document.write("False"); } </script>
true
false
The above approach can also be implemented in a shorter way (note that this version hardcodes the cube of each digit, so it only detects 3-digit Armstrong numbers):
C++
C
Java
#include <iostream>using namespace std; int main() { int n = 153; int temp = n; int p = 0; /*function to calculate the sum of individual digits */ while (n > 0) { int rem = n % 10; p = (p) + (rem * rem * rem); n = n / 10; } /* condition to check whether the value of P equals to user input or not. */ if (temp == p) { cout<<("Yes. It is Armstrong No."); } else { cout<<("No. It is not an Armstrong No."); } return 0;} // This code is contributed by sathiyamoorthics19
#include <stdio.h> int main() { int n = 153; int temp = n; int p = 0; /*function to calculate the sum of individual digits */ while (n > 0) { int rem = n % 10; p = (p) + (rem * rem * rem); n = n / 10; } /* condition to check whether the value of P equals to user input or not. */ if (temp == p) { printf("Yes. It is Armstrong No."); } else { printf("No. It is not an Armstrong No."); } return 0;} // This code is contributed by sathiyamoorthics19
// Java program to determine whether the number is// Armstrong number or not public class ArmstrongNumber { public static void main(String[] args) { int n = 153; int temp = n; int p = 0; /*function to calculate the sum of individual digits */ while (n > 0) { int rem = n % 10; p = (p) + (rem * rem * rem); n = n / 10; } /* condition to check whether the value of P equals to user input or not. */ if (temp == p) { System.out.println("Yes. It is Armstrong No."); } else { System.out.println( "No. It is not an Armstrong No."); } }}
Yes. It is Armstrong No.
Find nth Armstrong number
Input : 9
Output : 9
Input : 10
Output : 153
C++
Java
Python3
C#
PHP
Javascript
C
// C++ Program to find// Nth Armstrong Number#include <bits/stdc++.h>#include <math.h>using namespace std; // Function to find Nth Armstrong Numberint NthArmstrong(int n){ int count = 0; // upper limit from integer for (int i = 1; i <= INT_MAX; i++) { int num = i, rem, digit = 0, sum = 0; // Copy the value for num in num num = i; // Find total digits in num digit = (int)log10(num) + 1; // Calculate sum of power of digits while (num > 0) { rem = num % 10; sum = sum + pow(rem, digit); num = num / 10; } // Check for Armstrong number if (i == sum) count++; if (count == n) return i; }} // Driver Functionint main(){ int n = 12; cout << NthArmstrong(n); return 0;} // This Code is Contributed by 'jaingyayak'
// Java Program to find// Nth Armstrong Numberimport java.lang.Math; class GFG { // Function to find Nth Armstrong Number static int NthArmstrong(int n) { int count = 0; // upper limit from integer for (int i = 1; i <= Integer.MAX_VALUE; i++) { int num = i, rem, digit = 0, sum = 0; // Copy the value for num in num num = i; // Find total digits in num digit = (int)Math.log10(num) + 1; // Calculate sum of power of digits while (num > 0) { rem = num % 10; sum = sum + (int)Math.pow(rem, digit); num = num / 10; } // Check for Armstrong number if (i == sum) count++; if (count == n) return i; } return n; } // Driver Code public static void main(String[] args) { int n = 12; System.out.println(NthArmstrong(n)); }} // This code is contributed by Code_Mech.
# Python3 Program to find Nth Armstrong Numberimport mathimport sys # Function to find Nth Armstrong Number def NthArmstrong(n): count = 0 # upper limit from integer for i in range(1, sys.maxsize): num = i rem = 0 digit = 0 sum = 0 # Copy the value for num in num num = i # Find total digits in num digit = int(math.log10(num) + 1) # Calculate sum of power of digits while(num > 0): rem = num % 10 sum = sum + pow(rem, digit) num = num // 10 # Check for Armstrong number if(i == sum): count += 1 if(count == n): return i # Driver Coden = 12print(NthArmstrong(n)) # This code is contributed by chandan_jnu
// C# Program to find// Nth Armstrong Numberusing System; class GFG { // Function to find Nth Armstrong Number static int NthArmstrong(int n) { int count = 0; // upper limit from integer for (int i = 1; i <= int.MaxValue; i++) { int num = i, rem, digit = 0, sum = 0; // Copy the value for num in num num = i; // Find total digits in num digit = (int)Math.Log10(num) + 1; // Calculate sum of power of digits while (num > 0) { rem = num % 10; sum = sum + (int)Math.Pow(rem, digit); num = num / 10; } // Check for Armstrong number if (i == sum) count++; if (count == n) return i; } return n; } // Driver Code public static void Main() { int n = 12; Console.WriteLine(NthArmstrong(n)); }} // This code is contributed by Code_Mech.
<?php// PHP Program to find// Nth Armstrong Number // Function to find// Nth Armstrong Numberfunction NthArmstrong($n){ $count = 0; // upper limit // from integer for($i = 1; $i <= PHP_INT_MAX; $i++) { $num = $i; $rem; $digit = 0; $sum = 0; // Copy the value // for num in num $num = $i; // Find total // digits in num $digit = (int) log10($num) + 1; // Calculate sum of // power of digits while($num > 0) { $rem = $num % 10; $sum = $sum + pow($rem, $digit); $num = (int)$num / 10; } // Check for // Armstrong number if($i == $sum) $count++; if($count == $n) return $i; }} // Driver Code$n = 12;echo NthArmstrong($n); // This Code is Contributed// by akt_mit?>
<script> // Javascript program to find Nth Armstrong Number // Function to find Nth Armstrong Numberfunction NthArmstrong(n){ let count = 0; // Upper limit from integer for(let i = 1; i <= Number.MAX_VALUE; i++) { let num = i, rem, digit = 0, sum = 0; // Copy the value for num in num num = i; // Find total digits in num digit = parseInt(Math.log10(num), 10) + 1; // Calculate sum of power of digits while(num > 0) { rem = num % 10; sum = sum + Math.pow(rem, digit); num = parseInt(num / 10, 10); } // Check for Armstrong number if (i == sum) count++; if (count == n) return i; } return n;} // Driver codelet n = 12; document.write(NthArmstrong(n)); // This code is contributed by rameshtravel07 </script>
// C Program to find// Nth Armstrong Number#include <stdio.h>#include <math.h>#include<limits.h> // Function to find Nth Armstrong Numberint NthArmstrong(int n){ int count = 0; // upper limit from integer for (int i = 1; i <= INT_MAX; i++) { int num = i, rem, digit = 0, sum = 0; // Copy the value for num in num num = i; // Find total digits in num digit = (int)log10(num) + 1; // Calculate sum of power of digits while (num > 0) { rem = num % 10; sum = sum + pow(rem, digit); num = num / 10; } // Check for Armstrong number if (i == sum) count++; if (count == n) return i; }} // Driver Functionint main(){ int n = 12; printf("%d", NthArmstrong(n)); return 0;} // This Code is Contributed by//'sathiyamoorthics19'
371
Using Numeric Strings:
By treating the number as a string, we can access each digit directly and raise it to the power given by the string's length.
Java
Python3
C#
Javascript
public class armstrongNumber{ public void isArmstrong(String n) { char[] s = n.toCharArray(); int size = s.length; int sum = 0; for (char num : s) { int temp = 1; int i = Integer.parseInt(Character.toString(num)); for (int j = 0; j <= size - 1; j++) { temp *= i; } sum += temp; } if (sum == Integer.parseInt(n)) { System.out.println("True"); } else { System.out.println("False"); } } public static void main(String[] args) { armstrongNumber am = new armstrongNumber(); am.isArmstrong("153"); am.isArmstrong("1253"); }}
def armstrong(n): number = str(n) n = len(number) output = 0 for i in number: output = output+int(i)**n if output == int(number): return(True) else: return(False) print(armstrong(153))print(armstrong(1253))
using System;public class armstrongNumber { public void isArmstrong(String n) { char[] s = n.ToCharArray(); int size = s.Length; int sum = 0; foreach(char num in s) { int temp = 1; int i = Int32.Parse(char.ToString(num)); for (int j = 0; j <= size - 1; j++) { temp *= i; } sum += temp; } if (sum == Int32.Parse(n)) { Console.WriteLine("True"); } else { Console.WriteLine("False"); } } public static void Main(String[] args) { armstrongNumber am = new armstrongNumber(); am.isArmstrong("153"); am.isArmstrong("1253"); }} // This code is contributed by umadevi9616
<script>function armstrong(n){ let number = new String(n) n = number.length let output = 0 for(let i of number) output = output + parseInt(i)**n if (output == parseInt(number)) return("True" + "<br>") else return("False" + "<br>")} document.write(armstrong(153))document.write(armstrong(1253)) // This code is contributed by _saurabh_jaiswal.</script>
True
False
Time Complexity: O(n), where n is the number of digits. Auxiliary Space: O(1).
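As a quick sanity check of the order-n definition above, here is a minimal Python sketch (not part of the original article; the helper name is_armstrong is our own) that lists every Armstrong number below a chosen limit:

```python
def is_armstrong(x: int) -> bool:
    # The order n is the digit count; sum each digit raised to the n-th power.
    digits = [int(d) for d in str(x)]
    n = len(digits)
    return x == sum(d ** n for d in digits)

# Every Armstrong number below 10000.
print([x for x in range(1, 10000) if is_armstrong(x)])
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407, 1634, 8208, 9474]
```

Because the order is computed from the digit count, this predicate works for inputs of any length, not just 3-digit numbers.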
References:
http://www.cs.mtu.edu/~shene/COURSES/cs201/NOTES/chap04/arms.html
http://www.programiz.com/c-programming/examples/check-armstrong-number
This article is contributed by Rahul Agrawal.
nitin mittal
jit_t
RishabhPrabhu
jwelt
Chandan_Kumar
Code_Mech
mukesh07
rameshtravel07
shreyaperla
surindertarika1234
asifmustafa7634
_saurabh_jaiswal
umadevi9616
myeducationalarena
sathiyamoorthics19
anandkumarshivam2266
Oracle
VMWare
Mathematical
|
[
{
"code": null,
"e": 52,
"s": 24,
"text": "\n22 Jun, 2022"
},
{
"code": null,
"e": 134,
"s": 52,
"text": "Given a number x, determine whether the given number is Armstrong number or not. "
},
{
"code": null,
"e": 239,
"s": 134,
"text": "A positive integer of n digits is called an Armstrong number of order n (order is number of digits) if. "
},
{
"code": null,
"e": 299,
"s": 239,
"text": "abcd... = pow(a,n) + pow(b,n) + pow(c,n) + pow(d,n) + .... "
},
{
"code": null,
"e": 309,
"s": 299,
"text": "Example: "
},
{
"code": null,
"e": 321,
"s": 309,
"text": "Input : 153"
},
{
"code": null,
"e": 566,
"s": 553,
"text": "Output : Yes"
},
{
"code": null,
"e": 594,
"s": 566,
"text": "153 is an Armstrong number."
},
{
"code": null,
"e": 622,
"s": 594,
"text": "1*1*1 + 5*5*5 + 3*3*3 = 153"
},
{
"code": null,
"e": 634,
"s": 622,
"text": "Input : 120"
},
{
"code": null,
"e": 646,
"s": 634,
"text": "Output : No"
},
{
"code": null,
"e": 677,
"s": 646,
    "text": "120 is not an Armstrong number."
},
{
"code": null,
"e": 703,
"s": 677,
"text": "1*1*1 + 2*2*2 + 0*0*0 = 9"
},
{
"code": null,
"e": 716,
"s": 703,
"text": "Input : 1253"
},
{
"code": null,
"e": 728,
"s": 716,
"text": "Output : No"
},
{
"code": null,
"e": 759,
"s": 728,
    "text": "1253 is not an Armstrong number"
},
{
"code": null,
"e": 803,
"s": 759,
"text": "1*1*1*1 + 2*2*2*2 + 5*5*5*5 + 3*3*3*3 = 723"
},
{
"code": null,
"e": 816,
"s": 803,
"text": "Input : 1634"
},
{
"code": null,
"e": 829,
"s": 816,
"text": "Output : Yes"
},
{
"code": null,
"e": 874,
"s": 829,
"text": "1*1*1*1 + 6*6*6*6 + 3*3*3*3 + 4*4*4*4 = 1634"
},
{
"code": null,
"e": 1093,
"s": 874,
    "text": "Approach: The idea is to first count the number of digits (i.e., find the order). Let the number of digits be n. For every digit r in the input number x, compute r^n. If the sum of all such values is equal to x, return true; otherwise return false."
},
{
"code": null,
"e": 1097,
"s": 1093,
"text": "C++"
},
{
"code": null,
"e": 1099,
"s": 1097,
"text": "C"
},
{
"code": null,
"e": 1104,
"s": 1099,
"text": "Java"
},
{
"code": null,
"e": 1111,
"s": 1104,
"text": "Python"
},
{
"code": null,
"e": 1119,
"s": 1111,
"text": "Python3"
},
{
"code": null,
"e": 1122,
"s": 1119,
"text": "C#"
},
{
"code": null,
"e": 1133,
"s": 1122,
"text": "Javascript"
},
{
"code": "// C++ program to determine whether the number is// Armstrong number or not#include <bits/stdc++.h>using namespace std; /* Function to calculate x raised to the power y */int power(int x, unsigned int y){ if (y == 0) return 1; if (y % 2 == 0) return power(x, y / 2) * power(x, y / 2); return x * power(x, y / 2) * power(x, y / 2);} /* Function to calculate order of the number */int order(int x){ int n = 0; while (x) { n++; x = x / 10; } return n;} // Function to check whether the given number is// Armstrong number or notbool isArmstrong(int x){ // Calling order function int n = order(x); int temp = x, sum = 0; while (temp) { int r = temp % 10; sum += power(r, n); temp = temp / 10; } // If satisfies Armstrong condition return (sum == x);} // Driver Programint main(){ int x = 153; cout << boolalpha << isArmstrong(x) << endl; x = 1253; cout << boolalpha << isArmstrong(x) << endl; return 0;}",
"e": 2139,
"s": 1133,
"text": null
},
{
"code": "// C program to find Armstrong number #include <stdio.h> /* Function to calculate x raised to the power y */int power(int x, unsigned int y){ if (y == 0) return 1; if (y % 2 == 0) return power(x, y / 2) * power(x, y / 2); return x * power(x, y / 2) * power(x, y / 2);} /* Function to calculate order of the number */int order(int x){ int n = 0; while (x) { n++; x = x / 10; } return n;} // Function to check whether the given number is// Armstrong number or notint isArmstrong(int x){ // Calling order function int n = order(x); int temp = x, sum = 0; while (temp) { int r = temp % 10; sum += power(r, n); temp = temp / 10; } // If satisfies Armstrong condition if (sum == x) return 1; else return 0;} // Driver Programint main(){ int x = 153; if (isArmstrong(x) == 1) printf(\"True\\n\"); else printf(\"False\\n\"); x = 1253; if (isArmstrong(x) == 1) printf(\"True\\n\"); else printf(\"False\\n\"); return 0;}",
"e": 3198,
"s": 2139,
"text": null
},
{
"code": "// Java program to determine whether the number is// Armstrong number or notpublic class Armstrong { /* Function to calculate x raised to the power y */ int power(int x, long y) { if (y == 0) return 1; if (y % 2 == 0) return power(x, y / 2) * power(x, y / 2); return x * power(x, y / 2) * power(x, y / 2); } /* Function to calculate order of the number */ int order(int x) { int n = 0; while (x != 0) { n++; x = x / 10; } return n; } // Function to check whether the given number is // Armstrong number or not boolean isArmstrong(int x) { // Calling order function int n = order(x); int temp = x, sum = 0; while (temp != 0) { int r = temp % 10; sum = sum + power(r, n); temp = temp / 10; } // If satisfies Armstrong condition return (sum == x); } // Driver Program public static void main(String[] args) { Armstrong ob = new Armstrong(); int x = 153; System.out.println(ob.isArmstrong(x)); x = 1253; System.out.println(ob.isArmstrong(x)); }}",
"e": 4411,
"s": 3198,
"text": null
},
{
"code": "# Python program to determine whether the number is# Armstrong number or not # Function to calculate x raised to the power y def power(x, y): if y == 0: return 1 if y % 2 == 0: return power(x, y/2)*power(x, y/2) return x*power(x, y/2)*power(x, y/2) # Function to calculate order of the number def order(x): # variable to store of the number n = 0 while (x != 0): n = n+1 x = x/10 return n # Function to check whether the given number is# Armstrong number or not def isArmstrong(x): n = order(x) temp = x sum1 = 0 while (temp != 0): r = temp % 10 sum1 = sum1 + power(r, n) temp = temp/10 # If condition satisfies return (sum1 == x) # Driver Programx = 153print(isArmstrong(x))x = 1253print(isArmstrong(x))",
"e": 5210,
"s": 4411,
"text": null
},
{
"code": "# python3 >= 3.6 for typehint support# This example avoids the complexity of ordering# through type conversions & string manipulation def isArmstrong(val: int) -> bool: \"\"\"val will be tested to see if its an Armstrong number. Arguments: val {int} -- positive integer only. Returns: bool -- true is /false isn't \"\"\" # break the int into its respective digits parts = [int(_) for _ in str(val)] # begin test. counter = 0 for _ in parts: counter += _**3 return (counter == val) # Check Armstrong numberprint(isArmstrong(153)) print(isArmstrong(1253))",
"e": 5813,
"s": 5210,
"text": null
},
{
"code": "// C# program to determine whether the// number is Armstrong number or notusing System; public class GFG { // Function to calculate x raised // to the power y int power(int x, long y) { if (y == 0) return 1; if (y % 2 == 0) return power(x, y / 2) * power(x, y / 2); return x * power(x, y / 2) * power(x, y / 2); } // Function to calculate // order of the number int order(int x) { int n = 0; while (x != 0) { n++; x = x / 10; } return n; } // Function to check whether the // given number is Armstrong number // or not bool isArmstrong(int x) { // Calling order function int n = order(x); int temp = x, sum = 0; while (temp != 0) { int r = temp % 10; sum = sum + power(r, n); temp = temp / 10; } // If satisfies Armstrong condition return (sum == x); } // Driver Code public static void Main() { GFG ob = new GFG(); int x = 153; Console.WriteLine(ob.isArmstrong(x)); x = 1253; Console.WriteLine(ob.isArmstrong(x)); }} // This code is contributed by Nitin Mittal.",
"e": 7053,
"s": 5813,
"text": null
},
{
"code": "<script> // JavaScript program to determine whether the // number is Armstrong number or not // Function to calculate x raised // to the power y function power(x, y) { if( y == 0) return 1; if (y % 2 == 0) return power(x, parseInt(y / 2, 10)) * power(x, parseInt(y / 2, 10)); return x * power(x, parseInt(y / 2, 10)) * power(x, parseInt(y / 2, 10)); } // Function to calculate // order of the number function order(x) { let n = 0; while (x != 0) { n++; x = parseInt(x / 10, 10); } return n; } // Function to check whether the // given number is Armstrong number // or not function isArmstrong(x) { // Calling order function let n = order(x); let temp = x, sum = 0; while (temp != 0) { let r = temp % 10; sum = sum + power(r, n); temp = parseInt(temp / 10, 10); } // If satisfies Armstrong condition return (sum == x); } let x = 153; if(isArmstrong(x)) { document.write(\"True\" + \"</br>\"); } else{ document.write(\"False\" + \"</br>\"); } x = 1253; if(isArmstrong(x)) { document.write(\"True\"); } else{ document.write(\"False\"); } </script>",
"e": 8482,
"s": 7053,
"text": null
},
{
"code": null,
"e": 8493,
"s": 8482,
"text": "true\nfalse"
},
{
"code": null,
"e": 8557,
"s": 8493,
"text": "The above approach can also be implemented in a shorter way as:"
},
{
"code": null,
"e": 8561,
"s": 8557,
"text": "C++"
},
{
"code": null,
"e": 8563,
"s": 8561,
"text": "C"
},
{
"code": null,
"e": 8568,
"s": 8563,
"text": "Java"
},
{
"code": "#include <iostream>using namespace std; int main() { int n = 153; int temp = n; int p = 0; /*function to calculate the sum of individual digits */ while (n > 0) { int rem = n % 10; p = (p) + (rem * rem * rem); n = n / 10; } /* condition to check whether the value of P equals to user input or not. */ if (temp == p) { cout<<(\"Yes. It is Armstrong No.\"); } else { cout<<(\"No. It is not an Armstrong No.\"); } return 0;} // This code is contributed by sathiyamoorthics19",
"e": 9215,
"s": 8568,
"text": null
},
{
"code": "#include <stdio.h> int main() { int n = 153; int temp = n; int p = 0; /*function to calculate the sum of individual digits */ while (n > 0) { int rem = n % 10; p = (p) + (rem * rem * rem); n = n / 10; } /* condition to check whether the value of P equals to user input or not. */ if (temp == p) { printf(\"Yes. It is Armstrong No.\"); } else { printf(\"No. It is not an Armstrong No.\"); } return 0;} // This code is contributed by sathiyamoorthics19",
"e": 9841,
"s": 9215,
"text": null
},
{
"code": "// Java program to determine whether the number is// Armstrong number or not public class ArmstrongNumber { public static void main(String[] args) { int n = 153; int temp = n; int p = 0; /*function to calculate the sum of individual digits */ while (n > 0) { int rem = n % 10; p = (p) + (rem * rem * rem); n = n / 10; } /* condition to check whether the value of P equals to user input or not. */ if (temp == p) { System.out.println(\"Yes. It is Armstrong No.\"); } else { System.out.println( \"No. It is not an Armstrong No.\"); } }}",
"e": 10569,
"s": 9841,
"text": null
},
{
"code": null,
"e": 10594,
"s": 10569,
"text": "Yes. It is Armstrong No."
},
{
"code": null,
"e": 10621,
"s": 10594,
"text": "Find nth Armstrong number "
},
{
"code": null,
"e": 10632,
"s": 10621,
"text": "Input : 9"
},
{
"code": null,
"e": 10643,
"s": 10632,
"text": "Output : 9"
},
{
"code": null,
"e": 10655,
"s": 10643,
"text": "Input : 10"
},
{
"code": null,
"e": 10668,
"s": 10655,
"text": "Output : 153"
},
{
"code": null,
"e": 10672,
"s": 10668,
"text": "C++"
},
{
"code": null,
"e": 10677,
"s": 10672,
"text": "Java"
},
{
"code": null,
"e": 10685,
"s": 10677,
"text": "Python3"
},
{
"code": null,
"e": 10688,
"s": 10685,
"text": "C#"
},
{
"code": null,
"e": 10692,
"s": 10688,
"text": "PHP"
},
{
"code": null,
"e": 10703,
"s": 10692,
"text": "Javascript"
},
{
"code": null,
"e": 10705,
"s": 10703,
"text": "C"
},
{
"code": "// C++ Program to find// Nth Armstrong Number#include <bits/stdc++.h>#include <math.h>using namespace std; // Function to find Nth Armstrong Numberint NthArmstrong(int n){ int count = 0; // upper limit from integer for (int i = 1; i <= INT_MAX; i++) { int num = i, rem, digit = 0, sum = 0; // Copy the value for num in num num = i; // Find total digits in num digit = (int)log10(num) + 1; // Calculate sum of power of digits while (num > 0) { rem = num % 10; sum = sum + pow(rem, digit); num = num / 10; } // Check for Armstrong number if (i == sum) count++; if (count == n) return i; }} // Driver Functionint main(){ int n = 12; cout << NthArmstrong(n); return 0;} // This Code is Contributed by 'jaingyayak'",
"e": 11572,
"s": 10705,
"text": null
},
{
"code": "// Java Program to find// Nth Armstrong Numberimport java.lang.Math; class GFG { // Function to find Nth Armstrong Number static int NthArmstrong(int n) { int count = 0; // upper limit from integer for (int i = 1; i <= Integer.MAX_VALUE; i++) { int num = i, rem, digit = 0, sum = 0; // Copy the value for num in num num = i; // Find total digits in num digit = (int)Math.log10(num) + 1; // Calculate sum of power of digits while (num > 0) { rem = num % 10; sum = sum + (int)Math.pow(rem, digit); num = num / 10; } // Check for Armstrong number if (i == sum) count++; if (count == n) return i; } return n; } // Driver Code public static void main(String[] args) { int n = 12; System.out.println(NthArmstrong(n)); }} // This code is contributed by Code_Mech.",
"e": 12605,
"s": 11572,
"text": null
},
{
"code": "# Python3 Program to find Nth Armstrong Numberimport mathimport sys # Function to find Nth Armstrong Number def NthArmstrong(n): count = 0 # upper limit from integer for i in range(1, sys.maxsize): num = i rem = 0 digit = 0 sum = 0 # Copy the value for num in num num = i # Find total digits in num digit = int(math.log10(num) + 1) # Calculate sum of power of digits while(num > 0): rem = num % 10 sum = sum + pow(rem, digit) num = num // 10 # Check for Armstrong number if(i == sum): count += 1 if(count == n): return i # Driver Coden = 12print(NthArmstrong(n)) # This code is contributed by chandan_jnu",
"e": 13374,
"s": 12605,
"text": null
},
{
"code": "// C# Program to find// Nth Armstrong Numberusing System; class GFG { // Function to find Nth Armstrong Number static int NthArmstrong(int n) { int count = 0; // upper limit from integer for (int i = 1; i <= int.MaxValue; i++) { int num = i, rem, digit = 0, sum = 0; // Copy the value for num in num num = i; // Find total digits in num digit = (int)Math.Log10(num) + 1; // Calculate sum of power of digits while (num > 0) { rem = num % 10; sum = sum + (int)Math.Pow(rem, digit); num = num / 10; } // Check for Armstrong number if (i == sum) count++; if (count == n) return i; } return n; } // Driver Code public static void Main() { int n = 12; Console.WriteLine(NthArmstrong(n)); }} // This code is contributed by Code_Mech.",
"e": 14377,
"s": 13374,
"text": null
},
{
"code": "<?php// PHP Program to find// Nth Armstrong Number // Function to find// Nth Armstrong Numberfunction NthArmstrong($n){ $count = 0; // upper limit // from integer for($i = 1; $i <= PHP_INT_MAX; $i++) { $num = $i; $rem; $digit = 0; $sum = 0; // Copy the value // for num in num $num = $i; // Find total // digits in num $digit = (int) log10($num) + 1; // Calculate sum of // power of digits while($num > 0) { $rem = $num % 10; $sum = $sum + pow($rem, $digit); $num = (int)$num / 10; } // Check for // Armstrong number if($i == $sum) $count++; if($count == $n) return $i; }} // Driver Code$n = 12;echo NthArmstrong($n); // This Code is Contributed// by akt_mit?>",
"e": 15321,
"s": 14377,
"text": null
},
{
"code": "<script> // Javascript program to find Nth Armstrong Number // Function to find Nth Armstrong Numberfunction NthArmstrong(n){ let count = 0; // Upper limit from integer for(let i = 1; i <= Number.MAX_VALUE; i++) { let num = i, rem, digit = 0, sum = 0; // Copy the value for num in num num = i; // Find total digits in num digit = parseInt(Math.log10(num), 10) + 1; // Calculate sum of power of digits while(num > 0) { rem = num % 10; sum = sum + Math.pow(rem, digit); num = parseInt(num / 10, 10); } // Check for Armstrong number if (i == sum) count++; if (count == n) return i; } return n;} // Driver codelet n = 12; document.write(NthArmstrong(n)); // This code is contributed by rameshtravel07 </script>",
"e": 16192,
"s": 15321,
"text": null
},
{
"code": "// C Program to find// Nth Armstrong Number#include <stdio.h>#include <math.h>#include<limits.h> // Function to find Nth Armstrong Numberint NthArmstrong(int n){ int count = 0; // upper limit from integer for (int i = 1; i <= INT_MAX; i++) { int num = i, rem, digit = 0, sum = 0; // Copy the value for num in num num = i; // Find total digits in num digit = (int)log10(num) + 1; // Calculate sum of power of digits while (num > 0) { rem = num % 10; sum = sum + pow(rem, digit); num = num / 10; } // Check for Armstrong number if (i == sum) count++; if (count == n) return i; }} // Driver Functionint main(){ int n = 12; printf(\"%d\", NthArmstrong(n)); return 0;} // This Code is Contributed by//'sathiyamoorthics19'",
"e": 17064,
"s": 16192,
"text": null
},
{
"code": null,
"e": 17068,
"s": 17064,
"text": "371"
},
{
"code": null,
"e": 17091,
"s": 17068,
"text": "Using Numeric Strings:"
},
{
"code": null,
"e": 17186,
"s": 17091,
"text": "When considering the number as a string we can access any digit we want and perform operations"
},
{
"code": null,
"e": 17191,
"s": 17186,
"text": "Java"
},
{
"code": null,
"e": 17199,
"s": 17191,
"text": "Python3"
},
{
"code": null,
"e": 17202,
"s": 17199,
"text": "C#"
},
{
"code": null,
"e": 17213,
"s": 17202,
"text": "Javascript"
},
{
"code": "public class armstrongNumber{ public void isArmstrong(String n) { char[] s = n.toCharArray(); int size = s.length; int sum = 0; for (char num : s) { int temp = 1; int i = Integer.parseInt(Character.toString(num)); for (int j = 0; j <= size - 1; j++) { temp *= i; } sum += temp; } if (sum == Integer.parseInt(n)) { System.out.println(\"True\"); } else { System.out.println(\"False\"); } } public static void main(String[] args) { armstrongNumber am = new armstrongNumber(); am.isArmstrong(\"153\"); am.isArmstrong(\"1253\"); }}",
"e": 17983,
"s": 17213,
"text": null
},
{
"code": "def armstrong(n): number = str(n) n = len(number) output = 0 for i in number: output = output+int(i)**n if output == int(number): return(True) else: return(False) print(armstrong(153))print(armstrong(1253))",
"e": 18232,
"s": 17983,
"text": null
},
{
"code": "using System;public class armstrongNumber { public void isArmstrong(String n) { char[] s = n.ToCharArray(); int size = s.Length; int sum = 0; foreach(char num in s) { int temp = 1; int i = Int32.Parse(char.ToString(num)); for (int j = 0; j <= size - 1; j++) { temp *= i; } sum += temp; } if (sum == Int32.Parse(n)) { Console.WriteLine(\"True\"); } else { Console.WriteLine(\"False\"); } } public static void Main(String[] args) { armstrongNumber am = new armstrongNumber(); am.isArmstrong(\"153\"); am.isArmstrong(\"1253\"); }} // This code is contributed by umadevi9616",
"e": 18997,
"s": 18232,
"text": null
},
{
"code": "<script>function armstrong(n){ let number = new String(n) n = number.length let output = 0 for(let i of number) output = output + parseInt(i)**n if (output == parseInt(number)) return(\"True\" + \"<br>\") else return(\"False\" + \"<br>\")} document.write(armstrong(153))document.write(armstrong(1253)) // This code is contributed by _saurabh_jaiswal.</script>",
"e": 19396,
"s": 18997,
"text": null
},
{
"code": null,
"e": 19407,
"s": 19396,
"text": "True\nFalse"
},
{
"code": null,
"e": 19452,
"s": 19407,
"text": "Time Complexity: O(n).Auxiliary Space: O(1)."
},
{
"code": null,
"e": 20452,
"s": 20302,
"text": "References: http://www.cs.mtu.edu/~shene/COURSES/cs201/NOTES/chap04/arms.html http://www.programiz.com/c-programming/examples/check-armstrong-number "
},
{
"code": null,
"e": 20874,
"s": 20452,
"text": "This article is contributed by Rahul Agrawal .If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 20887,
"s": 20874,
"text": "nitin mittal"
},
{
"code": null,
"e": 20893,
"s": 20887,
"text": "jit_t"
},
{
"code": null,
"e": 20907,
"s": 20893,
"text": "RishabhPrabhu"
},
{
"code": null,
"e": 20913,
"s": 20907,
"text": "jwelt"
},
{
"code": null,
"e": 20927,
"s": 20913,
"text": "Chandan_Kumar"
},
{
"code": null,
"e": 20937,
"s": 20927,
"text": "Code_Mech"
},
{
"code": null,
"e": 20946,
"s": 20937,
"text": "mukesh07"
},
{
"code": null,
"e": 20961,
"s": 20946,
"text": "rameshtravel07"
},
{
"code": null,
"e": 20973,
"s": 20961,
"text": "shreyaperla"
},
{
"code": null,
"e": 20992,
"s": 20973,
"text": "surindertarika1234"
},
{
"code": null,
"e": 21008,
"s": 20992,
"text": "asifmustafa7634"
},
{
"code": null,
"e": 21025,
"s": 21008,
"text": "_saurabh_jaiswal"
},
{
"code": null,
"e": 21037,
"s": 21025,
"text": "umadevi9616"
},
{
"code": null,
"e": 21056,
"s": 21037,
"text": "myeducationalarena"
},
{
"code": null,
"e": 21075,
"s": 21056,
"text": "sathiyamoorthics19"
},
{
"code": null,
"e": 21096,
"s": 21075,
"text": "anandkumarshivam2266"
},
{
"code": null,
"e": 21103,
"s": 21096,
"text": "Oracle"
},
{
"code": null,
"e": 21110,
"s": 21103,
"text": "VMWare"
},
{
"code": null,
"e": 21123,
"s": 21110,
"text": "Mathematical"
},
{
"code": null,
"e": 21130,
"s": 21123,
"text": "VMWare"
},
{
"code": null,
"e": 21137,
"s": 21130,
"text": "Oracle"
},
{
"code": null,
"e": 21150,
"s": 21137,
"text": "Mathematical"
}
] |
How to save a histogram plot in Python?
|
To save a histogram plot in Python, we can take the following steps −
Set the figure size and adjust the padding between and around the subplots.
Create data points "k" for the histogram.
Plot the histogram using hist() method.
To save the histogram, use plt.savefig('image_name').
To display the figure, use show() method.
import matplotlib.pyplot as plt
# Set the figure size
plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True
# Data points for the histogram
k = [1, 3, 2, 5, 4, 7, 5, 1, 0, 4, 1]
# Plot the histogram
plt.hist(k)
# Save the histogram
plt.savefig('hist.png')
# Display the plot
plt.show()
It will save the following plot as 'hist.png' in the project directory.
|
[
{
"code": null,
"e": 1257,
"s": 1187,
"text": "To save a histogram plot in Python, we can take the following steps −"
},
{
"code": null,
"e": 1333,
"s": 1257,
"text": "Set the figure size and adjust the padding between and around the subplots."
},
{
"code": null,
"e": 1375,
"s": 1333,
"text": "Create data points \"k\" for the histogram."
},
{
"code": null,
"e": 1415,
"s": 1375,
"text": "Plot the histogram using hist() method."
},
{
"code": null,
"e": 1469,
"s": 1415,
"text": "To save the histogram, use plt.savefig('image_name')."
},
{
"code": null,
"e": 1511,
"s": 1469,
"text": "To display the figure, use show() method."
},
{
"code": null,
"e": 1835,
"s": 1511,
"text": "import matplotlib.pyplot as plt\n\n# Set the figure size\nplt.rcParams[\"figure.figsize\"] = [7.00, 3.50]\nplt.rcParams[\"figure.autolayout\"] = True\n\n# Data points for the histogram\nk = [1, 3, 2, 5, 4, 7, 5, 1, 0, 4, 1]\n\n# Plot the histogram\nplt.hist(k)\n\n# Save the histogram\nplt.savefig('hist.png')\n\n# Display the plot\nplt.show()"
},
{
"code": null,
"e": 1907,
"s": 1835,
"text": "It will save the following plot as 'hist.png' in the project directory."
}
] |
How to Create a Correlation Matrix using Pandas?
|
08 Oct, 2021
Correlation is a statistical technique that shows how two variables are related. The Pandas dataframe.corr() method is used for creating the correlation matrix. It finds the pairwise correlation of all columns in the dataframe. Any NaN values are automatically excluded, and any non-numeric columns in the dataframe are ignored. To create a correlation matrix using Pandas, these steps should be taken:
Obtain the data.
Create the DataFrame using Pandas.
Create correlation matrix using Pandas
Example 1:
Python3
# import pandas
import pandas as pd

# obtaining the data
data = {'A': [45, 37, 42],
        'B': [38, 31, 26],
        'C': [10, 15, 17]}

# creation of DataFrame
df = pd.DataFrame(data)

# creation of correlation matrix
corrM = df.corr()

corrM
Output:
Values on the diagonal show the correlation of a variable with itself; hence the diagonal shows a correlation of 1.
Example 2:
Python3
import pandas as pd

data = {'A': [45, 37, 42, 50],
        'B': [38, 31, 26, 90],
        'C': [10, 15, 17, 100],
        'D': [60, 99, 23, 56],
        'E': [76, 98, 78, 90]}

df = pd.DataFrame(data)

corrM = df.corr()
corrM
Output:
Example 3:
Python3
import pandas as pd

# Integer and string values can
# never be correlated.
data = {'A': [45, 37, 42, 50],
        'B': ['R', 'O', 'M', 'Y']}

df = pd.DataFrame(data)

corrM = df.corr()
corrM
Output:
Example 4:
Python3
import pandas as pd

data = {'A': [45, 37, 42, 50],
        'B': ['R', 'O', 'M', 'Y'],
        'C': [56, 67, 68, 60]}

df = pd.DataFrame(data)

corrM = df.corr()
corrM
Output:
sagar0719kumar
Python pandas-dataFrame
Python-pandas
Python
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n08 Oct, 2021"
},
{
"code": null,
"e": 447,
"s": 28,
"text": "Correlation is a statistical technique that shows how two variables are related. Pandas dataframe.corr() method is used for creating the correlation matrix. It is used to find the pairwise correlation of all columns in the dataframe. Any na values are automatically excluded. For any non-numeric data type columns in the dataframe it is ignored.To create correlation matrix using pandas, these steps should be taken: "
},
{
"code": null,
"e": 536,
"s": 447,
"text": "Obtain the data.Create the DataFrame using Pandas.Create correlation matrix using Pandas"
},
{
"code": null,
"e": 553,
"s": 536,
"text": "Obtain the data."
},
{
"code": null,
"e": 588,
"s": 553,
"text": "Create the DataFrame using Pandas."
},
{
"code": null,
"e": 627,
"s": 588,
"text": "Create correlation matrix using Pandas"
},
{
"code": null,
"e": 640,
"s": 627,
"text": "Example 1: "
},
{
"code": null,
"e": 648,
"s": 640,
"text": "Python3"
},
{
"code": "# import pandasimport pandas as pd # obtaining the datadata = {'A': [45, 37, 42], 'B': [38, 31, 26], 'C': [10, 15, 17] }# creation of DataFramedf = pd.DataFrame(data) # creation of correlation matrixcorrM = df.corr() corrM",
"e": 892,
"s": 648,
"text": null
},
{
"code": null,
"e": 901,
"s": 892,
"text": "Output: "
},
{
"code": null,
"e": 1015,
"s": 903,
"text": "Values at the diagonal shows the correlation of a variable with itself, hence diagonal shows the correlation 1."
},
{
"code": null,
"e": 1028,
"s": 1015,
"text": "Example 2: "
},
{
"code": null,
"e": 1036,
"s": 1028,
"text": "Python3"
},
{
"code": "import pandas as pd data = {'A': [45, 37, 42, 50], 'B': [38, 31, 26, 90], 'C': [10, 15, 17, 100], 'D': [60, 99, 23, 56], 'E': [76, 98, 78, 90] } df = pd.DataFrame(data) corrM = df.corr()corrM",
"e": 1263,
"s": 1036,
"text": null
},
{
"code": null,
"e": 1273,
"s": 1263,
"text": "Output: "
},
{
"code": null,
"e": 1286,
"s": 1273,
"text": "Example 3: "
},
{
"code": null,
"e": 1294,
"s": 1286,
"text": "Python3"
},
{
"code": "import pandas as pd # Integer and string values can# never be correlated.data = {'A': [45, 37, 42, 50], 'B': ['R', 'O', 'M', 'Y'], } df = pd.DataFrame(data) corrM = df.corr()corrM",
"e": 1488,
"s": 1294,
"text": null
},
{
"code": null,
"e": 1498,
"s": 1488,
"text": "Output: "
},
{
"code": null,
"e": 1511,
"s": 1498,
"text": "Example 4: "
},
{
"code": null,
"e": 1519,
"s": 1511,
"text": "Python3"
},
{
"code": "import pandas as pd data = {'A': [45, 37, 42, 50], 'B': ['R', 'O', 'M', 'Y'], 'C': [56, 67, 68, 60], } df = pd.DataFrame(data) corrM = df.corr()corrM",
"e": 1705,
"s": 1519,
"text": null
},
{
"code": null,
"e": 1715,
"s": 1705,
"text": "Output: "
},
{
"code": null,
"e": 1732,
"s": 1717,
"text": "sagar0719kumar"
},
{
"code": null,
"e": 1756,
"s": 1732,
"text": "Python pandas-dataFrame"
},
{
"code": null,
"e": 1770,
"s": 1756,
"text": "Python-pandas"
},
{
"code": null,
"e": 1777,
"s": 1770,
"text": "Python"
}
] |
Node.js crypto.privateDecrypt() Method
|
11 Oct, 2021
The crypto.privateDecrypt() method is used to decrypt the content of a buffer with privateKey. The buffer was previously encrypted using the corresponding public key, i.e. with crypto.publicEncrypt().
Syntax:
crypto.privateDecrypt( privateKey, buffer )
Parameters: This method accepts two parameters as mentioned above and described below:
privateKey: It can hold Object, string, Buffer, or KeyObject type of data. It accepts the following properties:
oaepHash: It is the hash function of type string which is used for ‘OAEP’ padding. The default value is ‘sha1’.
oaepLabel: It is the label which is used for ‘OAEP’ padding. And if it is not specified, then no label is used. It is of type Buffer, TypedArray or DataView.
padding: It is an optional padding value which is defined in crypto.constants, which can be crypto.constants.RSA_NO_PADDING, crypto.constants.RSA_PKCS1_PADDING, or crypto.constants.RSA_PKCS1_OAEP_PADDING. It is of type crypto.constants.
buffer: It is of type Buffer, TypedArray, or DataView.
Return Value: It returns a new Buffer with the decrypted content.
The below examples illustrate the use of the crypto.privateDecrypt() method in Node.js:
Example 1:
// Node.js program to demonstrate the
// crypto.privateDecrypt() method

// Including the crypto and fs modules
var crypto = require('crypto');
var fs = require('fs');

// Reading the public key
pubK = fs.readFileSync('pub.key').toString();

// Passing the text to be encrypted using the public key
var buf = Buffer.from('This is secret code', 'utf8');

// Encrypting the text
secretData = crypto.publicEncrypt(pubK, buf);

// Printing the encrypted text
console.log(secretData);

// Reading the private key
privK = fs.readFileSync('priv.key').toString();

// Decrypting the text using the private key
origData = crypto.privateDecrypt(privK, secretData);
console.log();

// Printing the original content
console.log(origData);
Output:
<Buffer 58 dd 76 8f cb 25 52 2b e7 3a b2 1b
0f 43 aa e0 df 65 fa 1d 3b 31 6f b7 f9 47 06
d5 f7 72 19 cd 2f 67 66 27 00 bb 43 8e 64 38
07 38 28 aa07 59 b4 60 ... >
<Buffer 54 68 69 73 20 69 73 20 73 65 63 72
65 74 20 63 6f 64 65 >
Example 2:
// Node.js program to demonstrate the
// crypto.privateDecrypt() method

// Including crypto and fs module
const crypto = require('crypto');
const fs = require("fs");

// Using a function generateKeyFiles
function generateKeyFiles() {
    const keyPair = crypto.generateKeyPairSync('rsa', {
        modulusLength: 530,
        publicKeyEncoding: {
            type: 'spki',
            format: 'pem'
        },
        privateKeyEncoding: {
            type: 'pkcs8',
            format: 'pem',
            cipher: 'aes-256-cbc',
            passphrase: ''
        }
    });

    // Creating public and private key file
    fs.writeFileSync("public_key", keyPair.publicKey);
    fs.writeFileSync("private_key", keyPair.privateKey);
}

// Generate keys
generateKeyFiles();

// Creating a function to encrypt string
function encryptString(plaintext, publicKeyFile) {
    const publicKey = fs.readFileSync(publicKeyFile, "utf8");

    // publicEncrypt() method with its parameters
    const encrypted = crypto.publicEncrypt(
        publicKey, Buffer.from(plaintext));
    return encrypted.toString("base64");
}

// Creating a function to decrypt string
function decryptString(ciphertext, privateKeyFile) {
    const privateKey = fs.readFileSync(privateKeyFile, "utf8");

    // privateDecrypt() method with its parameters
    const decrypted = crypto.privateDecrypt(
        {
            key: privateKey,
            passphrase: '',
        },
        Buffer.from(ciphertext, "base64")
    );
    return decrypted.toString("utf8");
}

// Defining a text to be encrypted
const plainText = "Geeks!";

// Defining encrypted text
const encrypted = encryptString(plainText, "./public_key");

// Prints plain text
console.log("Plaintext:", plainText);
console.log();

// Prints encrypted content
console.log("Encrypted Text: ", encrypted);
console.log();

// Prints decrypted content
console.log("Decrypted Text: ",
    decryptString(encrypted, "private_key"));
Output:
Plaintext: Geeks!
Encrypted Text: ACks6H7InpaeGdI4w9MObyD73YB7N1V0nVsG5Jl10SNeH3no6gfgjeD4ZFsSFhCXzFkognMGbRNsg0BReVOHxRs7eQ==
Decrypted Text: Geeks!
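The same public-encrypt / private-decrypt round trip can be illustrated outside Node.js. The sketch below uses textbook RSA with deliberately tiny, insecure primes purely to show why decryption with the private exponent undoes encryption with the public one; real applications must use a vetted library with proper padding, as in the examples above:

```python
# Illustrative sketch only: a textbook RSA key pair with tiny
# primes, showing the publicEncrypt/privateDecrypt round trip.
# Never use such parameters (or unpadded RSA) in real code.

p, q = 61, 53            # two small primes (insecure, demo only)
n = p * q                # modulus, part of both keys
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e

def public_encrypt(m):
    # Encrypt integer m < n with the public key (n, e)
    return pow(m, e, n)

def private_decrypt(c):
    # Decrypt with the private key (n, d);
    # this mirrors what privateDecrypt() undoes
    return pow(c, d, n)

message = 42
ciphertext = public_encrypt(message)
assert private_decrypt(ciphertext) == message
print(message, '->', ciphertext, '->', private_decrypt(ciphertext))
```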
Reference: https://nodejs.org/api/crypto.html#crypto_crypto_privatedecrypt_privatekey_buffer
127ajr
Node.js-crypto-module
Node.js
Web Technologies
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Difference between promise and async await in Node.js
Mongoose | findByIdAndUpdate() Function
JWT Authentication with Node.js
Installation of Node.js on Windows
Difference between dependencies, devDependencies and peerDependencies
Top 10 Projects For Beginners To Practice HTML and CSS Skills
Difference between var, let and const keywords in JavaScript
How to insert spaces/tabs in text using HTML/CSS?
How to fetch data from an API in ReactJS ?
Differences between Functional Components and Class Components in React
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n11 Oct, 2021"
},
{
"code": null,
"e": 227,
"s": 28,
"text": "The crypto.privateDecrypt() method is used to decrypt the content of the buffer with privateKey.buffer which was previously encrypted using the corresponding public key, i.e. crypto.publicEncrypt()."
},
{
"code": null,
"e": 235,
"s": 227,
"text": "Syntax:"
},
{
"code": null,
"e": 279,
"s": 235,
"text": "crypto.privateDecrypt( privateKey, buffer )"
},
{
"code": null,
"e": 365,
"s": 279,
"text": "Parameters: This method accept two parameters as mentioned above and described below:"
},
{
"code": null,
"e": 947,
"s": 365,
"text": "privateKey: It can hold Object, string, Buffer, or KeyObject type of data.oaepHash It is the hash function of type string which is used for ‘OAEP’ padding. And the default value is ‘sha1’.oaepLabel: It is the label which is used for ‘OAEP’ padding. And if it is not specified, then no label is used. It is of type Buffer, TypedArray or DataView.padding: It is an optional padding value which is defined in crypto.constants, which can be crypto.constants.RSA_NO_PADDING, crypto.constants.RSA_PKCS1_PADDING, or crypto.constants.RSA_PKCS1_OAEP_PADDING. It is of type crypto.constants."
},
{
"code": null,
"e": 1455,
"s": 947,
"text": "oaepHash It is the hash function of type string which is used for ‘OAEP’ padding. And the default value is ‘sha1’.oaepLabel: It is the label which is used for ‘OAEP’ padding. And if it is not specified, then no label is used. It is of type Buffer, TypedArray or DataView.padding: It is an optional padding value which is defined in crypto.constants, which can be crypto.constants.RSA_NO_PADDING, crypto.constants.RSA_PKCS1_PADDING, or crypto.constants.RSA_PKCS1_OAEP_PADDING. It is of type crypto.constants."
},
{
"code": null,
"e": 1570,
"s": 1455,
"text": "oaepHash It is the hash function of type string which is used for ‘OAEP’ padding. And the default value is ‘sha1’."
},
{
"code": null,
"e": 1728,
"s": 1570,
"text": "oaepLabel: It is the label which is used for ‘OAEP’ padding. And if it is not specified, then no label is used. It is of type Buffer, TypedArray or DataView."
},
{
"code": null,
"e": 1965,
"s": 1728,
"text": "padding: It is an optional padding value which is defined in crypto.constants, which can be crypto.constants.RSA_NO_PADDING, crypto.constants.RSA_PKCS1_PADDING, or crypto.constants.RSA_PKCS1_OAEP_PADDING. It is of type crypto.constants."
},
{
"code": null,
"e": 2020,
"s": 1965,
"text": "buffer: It is of type Buffer, TypedArray, or DataView."
},
{
"code": null,
"e": 2086,
"s": 2020,
"text": "Return Value: It returns a new Buffer with the decrypted content."
},
{
"code": null,
"e": 2165,
"s": 2086,
"text": "Below example illustrate the use of crypto.privateDecrypt() method in Node.js:"
},
{
"code": null,
"e": 2176,
"s": 2165,
"text": "Example 1:"
},
{
"code": "// Node.js program to demonstrate the // crypto.privateDecrypt() method // Including the crypto and fs modulesvar crypto = require('crypto');var fs = require('fs'); // Reading the Public KeypubK = fs.readFileSync('pub.key').toString(); // Passing the text to be encrypted using private keyvar buf = Buffer.from('This is secret code', 'utf8'); // Encrypting the textsecretData = crypto.publicEncrypt(pubK, buf); // Printing the encrypted textconsole.log(secretData); // Reading the Private keyprivK = fs.readFileSync('priv.key').toString(); // Decrypting the text using public keyorigData = crypto.privateDecrypt(privK, secretData);console.log(); // Printing the original contentconsole.log(origData);",
"e": 2885,
"s": 2176,
"text": null
},
{
"code": null,
"e": 2893,
"s": 2885,
"text": "Output:"
},
{
"code": null,
"e": 3125,
"s": 2893,
"text": "<Buffer 58 dd 76 8f cb 25 52 2b e7 3a b2 1b\n0f 43 aa e0 df 65 fa 1d 3b 31 6f b7 f9 47 06\nd5 f7 72 19 cd 2f 67 66 27 00 bb 43 8e 64 38\n07 38 28 aa07 59 b4 60 ... >\n\n<Buffer 54 68 69 73 20 69 73 20 73 65 63 72\n65 74 20 63 6f 64 65 >\n"
},
{
"code": null,
"e": 3136,
"s": 3125,
"text": "Example 2:"
},
{
"code": "// Node.js program to demonstrate the // crypto.privateDecrypt() method // Including crypto and fs moduleconst crypto = require('crypto');const fs = require(\"fs\"); // Using a function generateKeyFilesfunction generateKeyFiles() { const keyPair = crypto.generateKeyPairSync('rsa', { modulusLength: 530, publicKeyEncoding: { type: 'spki', format: 'pem' }, privateKeyEncoding: { type: 'pkcs8', format: 'pem', cipher: 'aes-256-cbc', passphrase: '' } }); // Creating public and private key file fs.writeFileSync(\"public_key\", keyPair.publicKey); fs.writeFileSync(\"private_key\", keyPair.privateKey);} // Generate keysgenerateKeyFiles(); // Creating a function to encrypt stringfunction encryptString (plaintext, publicKeyFile) { const publicKey = fs.readFileSync(publicKeyFile, \"utf8\"); // publicEncrypt() method with its parameters const encrypted = crypto.publicEncrypt( publicKey, Buffer.from(plaintext)); return encrypted.toString(\"base64\");} // Creating a function to decrypt stringfunction decryptString (ciphertext, privateKeyFile) { const privateKey = fs.readFileSync(privateKeyFile, \"utf8\"); // privateDecrypt() method with its parameters const decrypted = crypto.privateDecrypt( { key: privateKey, passphrase: '', }, Buffer.from(ciphertext, \"base64\") ); return decrypted.toString(\"utf8\");} // Defining a text to be encryptedconst plainText = \"Geeks!\"; // Defining encrypted textconst encrypted = encryptString(plainText, \"./public_key\"); // Prints plain textconsole.log(\"Plaintext:\", plainText);console.log(); // Prints buffer of encrypted contentconsole.log(\"Encrypted Text: \", encrypted);console.log(); // Prints buffer of decrypted contentconsole.log(\"Decrypted Text: \", decryptString(encrypted, \"private_key\"));",
"e": 5051,
"s": 3136,
"text": null
},
{
"code": null,
"e": 5059,
"s": 5051,
"text": "Output:"
},
{
"code": null,
"e": 5213,
"s": 5059,
"text": "Plaintext: Geeks!\n\nEncrypted Text: ACks6H7InpaeGdI4w9MObyD73YB7N1V0nVsG5Jl10SNeH3no6gfgjeD4ZFsSFhCXzFkognMGbRNsg0BReVOHxRs7eQ==\n\nDecrypted Text: Geeks!\n"
},
{
"code": null,
"e": 5306,
"s": 5213,
"text": "Reference: https://nodejs.org/api/crypto.html#crypto_crypto_privatedecrypt_privatekey_buffer"
},
{
"code": null,
"e": 5313,
"s": 5306,
"text": "127ajr"
},
{
"code": null,
"e": 5335,
"s": 5313,
"text": "Node.js-crypto-module"
},
{
"code": null,
"e": 5343,
"s": 5335,
"text": "Node.js"
},
{
"code": null,
"e": 5360,
"s": 5343,
"text": "Web Technologies"
},
{
"code": null,
"e": 5458,
"s": 5360,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 5512,
"s": 5458,
"text": "Difference between promise and async await in Node.js"
},
{
"code": null,
"e": 5552,
"s": 5512,
"text": "Mongoose | findByIdAndUpdate() Function"
},
{
"code": null,
"e": 5584,
"s": 5552,
"text": "JWT Authentication with Node.js"
},
{
"code": null,
"e": 5619,
"s": 5584,
"text": "Installation of Node.js on Windows"
},
{
"code": null,
"e": 5689,
"s": 5619,
"text": "Difference between dependencies, devDependencies and peerDependencies"
},
{
"code": null,
"e": 5751,
"s": 5689,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 5812,
"s": 5751,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 5862,
"s": 5812,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
},
{
"code": null,
"e": 5905,
"s": 5862,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
TCS Coding Practice Question | Check Armstrong Number
|
06 Jul, 2022
Given a number, the task is to check if this number is Armstrong or not using Command Line Arguments. A positive integer of n digits is called an Armstrong number of order n (order is the number of digits) if:
abcd... = pow(a, n) + pow(b, n) + pow(c, n) + pow(d, n) + ....
Example:
Input: 153
Output: Yes
153 is an Armstrong number.
1*1*1 + 5*5*5 + 3*3*3 = 153
Input: 120
Output: No
120 is not an Armstrong number.
1*1*1 + 2*2*2 + 0*0*0 = 9
Input: 1253
Output: No
1253 is not an Armstrong number.
1*1*1*1 + 2*2*2*2 + 5*5*5*5 + 3*3*3*3 = 723
Input: 1634
Output: Yes
1*1*1*1 + 6*6*6*6 + 3*3*3*3 + 4*4*4*4 = 1634
Approach:
Since the number is entered as a command line argument, there is no need for a dedicated input line
Extract the input number from the command line argument
This extracted number will be in string type.
Convert this number into integer type and store it in a variable, say num
Count the number of digits (or find the order) of the number num and store it in a variable, say n.
For every digit r in input number num, compute rn.
Compare the sum of all such values with num
If they are not same, the number is not Armstrong
If they are the same, the number is an Armstrong Number.
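The steps above can be sketched compactly in Python (the number is taken as a function parameter here rather than a command line argument, purely for easy testing):

```python
# Minimal sketch of the approach above: raise each digit of num
# to the power n (the number of digits) and compare the sum
# with num itself.

def is_armstrong(num: int) -> bool:
    digits = str(num)
    n = len(digits)                       # order of the number
    # Sum of each digit raised to the power n
    total = sum(int(d) ** n for d in digits)
    return total == num                   # Armstrong condition

print(is_armstrong(153))   # True: 1^3 + 5^3 + 3^3 = 153
print(is_armstrong(120))   # False: 1^3 + 2^3 + 0^3 = 9
```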
Program:
C
Java
// C program to check if a number is Armstrong
// using command line arguments

#include <stdio.h>
#include <stdlib.h> /* atoi */

// Function to calculate x raised to the power y
int power(int x, unsigned int y)
{
    if (y == 0)
        return 1;
    if (y % 2 == 0)
        return power(x, y / 2) * power(x, y / 2);
    return x * power(x, y / 2) * power(x, y / 2);
}

// Function to calculate order of the number
int order(int x)
{
    int n = 0;
    while (x) {
        n++;
        x = x / 10;
    }
    return n;
}

// Function to check whether the given number is
// Armstrong number or not
int isArmstrong(int x)
{
    // Calling order function
    int n = order(x);
    int temp = x, sum = 0;
    while (temp) {
        int r = temp % 10;
        sum += power(r, n);
        temp = temp / 10;
    }

    // If satisfies Armstrong condition
    if (sum == x)
        return 1;
    else
        return 0;
}

// Driver code
int main(int argc, char* argv[])
{
    int num, res = 0;

    // Check if the length of args array is 1
    if (argc == 1)
        printf("No command line arguments found.\n");
    else {
        // Get the command line argument and
        // convert it from string type to integer type
        // using function "atoi(argument)"
        num = atoi(argv[1]);

        // Check if it is Armstrong
        res = isArmstrong(num);

        // Check if res is 0 or 1
        if (res == 0)
            // Print No
            printf("No\n");
        else
            // Print Yes
            printf("Yes\n");
    }
    return 0;
}
// Java program to check if a number is Armstrong
// using command line arguments

class GFG {

    // Function to calculate x
    // raised to the power y
    public static int power(int x, long y)
    {
        if (y == 0)
            return 1;
        if (y % 2 == 0)
            return power(x, y / 2) * power(x, y / 2);
        return x * power(x, y / 2) * power(x, y / 2);
    }

    // Function to calculate order of the number
    public static int order(int x)
    {
        int n = 0;
        while (x != 0) {
            n++;
            x = x / 10;
        }
        return n;
    }

    // Function to check whether the given number is
    // Armstrong number or not
    public static int isArmstrong(int x)
    {
        // Calling order function
        int n = order(x);
        int temp = x, sum = 0;
        while (temp != 0) {
            int r = temp % 10;
            sum = sum + power(r, n);
            temp = temp / 10;
        }

        // If satisfies Armstrong condition
        if (sum == x)
            return 1;
        else
            return 0;
    }

    // Driver code
    public static void main(String[] args)
    {
        // Check if length of args array is
        // greater than 0
        if (args.length > 0) {

            // Get the command line argument and
            // convert it from string type to integer type
            int num = Integer.parseInt(args[0]);

            // Check if it is Armstrong
            int res = isArmstrong(num);

            // Check if res is 0 or 1
            if (res == 0)
                // Print No
                System.out.println("No\n");
            else
                // Print Yes
                System.out.println("Yes\n");
        }
        else
            System.out.println("No command line "
                               + "arguments found.");
    }
}
Output:
In C:
In Java:
Time Complexity: O(log N)
Auxiliary Space: O(1)
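The digit-power check described above translates directly into a short Python sketch. The helper name `is_armstrong` is illustrative and not part of the original article's code:

```python
# Minimal Python sketch of the Armstrong check: a number is
# Armstrong if it equals the sum of its digits, each raised
# to the power of the digit count (the "order").
def is_armstrong(x: int) -> bool:
    digits = str(x)
    n = len(digits)  # order of the number
    return x == sum(int(d) ** n for d in digits)

print(is_armstrong(153))   # True: 1**3 + 5**3 + 3**3 = 153
print(is_armstrong(1634))  # True: order-4 Armstrong number
print(is_armstrong(120))   # False: 1**3 + 2**3 + 0**3 = 9
```

Converting the number to a string avoids the explicit `order()` loop used in the C and Java versions, at the cost of a string allocation.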
jayanth_mkv
TCS
TCS-coding-questions
C++ Programs
Java Programs
Placements
TCS
|
[
{
"code": null,
"e": 53,
"s": 25,
"text": "\n06 Jul, 2022"
},
{
"code": null,
"e": 263,
"s": 53,
"text": "Given a number, the task is to check if this number is Armstrong or not using Command Line Arguments. A positive integer of n digits is called an Armstrong number of order n (order is the number of digits) if."
},
{
"code": null,
"e": 327,
"s": 263,
"text": "abcd... = pow(a, n) + pow(b, n) + pow(c, n) + pow(d, n) + .... "
},
{
"code": null,
"e": 336,
"s": 327,
"text": "Example:"
},
{
"code": null,
"e": 664,
"s": 336,
"text": "Input: 153\nOutput: Yes\n153 is an Armstrong number.\n1*1*1 + 5*5*5 + 3*3*3 = 153\n\nInput: 120\nOutput: No\n120 is not a Armstrong number.\n1*1*1 + 2*2*2 + 0*0*0 = 9\n\nInput: 1253\nOutput: No\n1253 is not a Armstrong Number\n1*1*1*1 + 2*2*2*2 + 5*5*5*5 + 3*3*3*3 = 723\n\nInput: 1634\nOutput: Yes\n1*1*1*1 + 6*6*6*6 + 3*3*3*3 + 4*4*4*4 = 1634"
},
{
"code": null,
"e": 674,
"s": 664,
"text": "Approach:"
},
{
"code": null,
"e": 772,
"s": 674,
"text": "Since the number is entered as Command line Argument, there is no need for a dedicated input line"
},
{
"code": null,
"e": 828,
"s": 772,
"text": "Extract the input number from the command line argument"
},
{
"code": null,
"e": 874,
"s": 828,
"text": "This extracted number will be in string type."
},
{
"code": null,
"e": 948,
"s": 874,
"text": "Convert this number into integer type and store it in a variable, say num"
},
{
"code": null,
"e": 1048,
"s": 948,
"text": "Count the number of digits (or find the order) of the number num and store it in a variable, say n."
},
{
"code": null,
"e": 1099,
"s": 1048,
"text": "For every digit r in input number num, compute rn."
},
{
"code": null,
"e": 1145,
"s": 1099,
"text": "If the sum of all such values is equal to num"
},
{
"code": null,
"e": 1195,
"s": 1145,
"text": "If they are not same, the number is not Armstrong"
},
{
"code": null,
"e": 1252,
"s": 1195,
"text": "If they are the same, the number is an Armstrong Number."
},
{
"code": null,
"e": 1262,
"s": 1252,
"text": "Program: "
},
{
"code": null,
"e": 1264,
"s": 1262,
"text": "C"
},
{
"code": null,
"e": 1269,
"s": 1264,
"text": "Java"
},
{
"code": "// C program to check if a number is Armstrong// using command line arguments #include <stdio.h>#include <stdlib.h> /* atoi */ // Function to calculate x raised to the power yint power(int x, unsigned int y){ if (y == 0) return 1; if (y % 2 == 0) return power(x, y / 2) * power(x, y / 2); return x * power(x, y / 2) * power(x, y / 2);} // Function to calculate order of the numberint order(int x){ int n = 0; while (x) { n++; x = x / 10; } return n;} // Function to check whether the given number is// Armstrong number or notint isArmstrong(int x){ // Calling order function int n = order(x); int temp = x, sum = 0; while (temp) { int r = temp % 10; sum += power(r, n); temp = temp / 10; } // If satisfies Armstrong condition if (sum == x) return 1; else return 0;} // Driver codeint main(int argc, char* argv[]){ int num, res = 0; // Check if the length of args array is 1 if (argc == 1) printf(\"No command line arguments found.\\n\"); else { // Get the command line argument and // Convert it from string type to integer type // using function \"atoi( argument)\" num = atoi(argv[1]); // Check if it is Armstrong res = isArmstrong(num); // Check if res is 0 or 1 if (res == 0) // Print No printf(\"No\\n\"); else // Print Yes printf(\"Yes\\n\"); } return 0;}",
"e": 2768,
"s": 1269,
"text": null
},
{
"code": "// Java program to check if a number is Armstrong// using command line arguments class GFG { // Function to calculate x // raised to the power y public static int power(int x, long y) { if (y == 0) return 1; if (y % 2 == 0) return power(x, y / 2) * power(x, y / 2); return x * power(x, y / 2) * power(x, y / 2); } // Function to calculate order of the number public static int order(int x) { int n = 0; while (x != 0) { n++; x = x / 10; } return n; } // Function to check whether the given number is // Armstrong number or not public static int isArmstrong(int x) { // Calling order function int n = order(x); int temp = x, sum = 0; while (temp != 0) { int r = temp % 10; sum = sum + power(r, n); temp = temp / 10; } // If satisfies Armstrong condition if (sum == x) return 1; else return 0; } // Driver code public static void main(String[] args) { // Check if length of args array is // greater than 0 if (args.length > 0) { // Get the command line argument and // Convert it from string type to integer type int num = Integer.parseInt(args[0]); // Get the command line argument // and check if it is Armstrong int res = isArmstrong(num); // Check if res is 0 or 1 if (res == 0) // Print No System.out.println(\"No\\n\"); else // Print Yes System.out.println(\"Yes\\n\"); } else System.out.println(\"No command line \" + \"arguments found.\"); }}",
"e": 4609,
"s": 2768,
"text": null
},
{
"code": null,
"e": 4617,
"s": 4609,
"text": "Output:"
},
{
"code": null,
"e": 4624,
"s": 4617,
"text": "In C: "
},
{
"code": null,
"e": 4634,
"s": 4624,
"text": "In Java: "
},
{
"code": null,
"e": 4683,
"s": 4634,
"text": "Time Complexity: O (log N)Auxiliary Space: O (1)"
},
{
"code": null,
"e": 4695,
"s": 4683,
"text": "jayanth_mkv"
},
{
"code": null,
"e": 4699,
"s": 4695,
"text": "TCS"
},
{
"code": null,
"e": 4720,
"s": 4699,
"text": "TCS-coding-questions"
},
{
"code": null,
"e": 4733,
"s": 4720,
"text": "C++ Programs"
},
{
"code": null,
"e": 4747,
"s": 4733,
"text": "Java Programs"
},
{
"code": null,
"e": 4758,
"s": 4747,
"text": "Placements"
},
{
"code": null,
"e": 4762,
"s": 4758,
"text": "TCS"
},
{
"code": null,
"e": 4860,
"s": 4762,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 4894,
"s": 4860,
"text": "Shallow Copy and Deep Copy in C++"
},
{
"code": null,
"e": 4954,
"s": 4894,
"text": "C++ Program to check if a given String is Palindrome or not"
},
{
"code": null,
"e": 5028,
"s": 4954,
"text": "How to find the minimum and maximum element of a Vector using STL in C++?"
},
{
"code": null,
"e": 5054,
"s": 5028,
"text": "C++ Program for QuickSort"
},
{
"code": null,
"e": 5084,
"s": 5054,
"text": "C Program to Swap two Numbers"
},
{
"code": null,
"e": 5112,
"s": 5084,
"text": "Initializing a List in Java"
},
{
"code": null,
"e": 5138,
"s": 5112,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 5182,
"s": 5138,
"text": "Convert a String to Character Array in Java"
},
{
"code": null,
"e": 5216,
"s": 5182,
"text": "Convert Double to Integer in Java"
}
] |
Python | time.gmtime() method
|
03 Jan, 2022
Time module in Python provides various time-related functions. This module comes under Python’s standard utility modules.
The time.gmtime() method of the time module is used to convert a time expressed in seconds since the epoch to a time.struct_time object in UTC, in which the tm_isdst attribute is always 0. To convert the given time in seconds since the epoch to a time.struct_time object in local time, the time.localtime() method is used instead.
This method returns a time.struct_time object with a named tuple interface. The values present in a time.struct_time object are: tm_year, tm_mon, tm_mday, tm_hour, tm_min, tm_sec, tm_wday, tm_yday and tm_isdst.
Syntax: time.gmtime([secs])
Parameter: secs (optional): An integer or float value representing time in seconds. Fractions of specified seconds will be ignored. If the secs parameter is not provided or None, then the current time as returned by the time.time() method is used.
Return type: This method returns an object of class ‘time.struct_time’.
Code #1: Use of time.gmtime() method
Python3
# Python program to explain time.gmtime() method

# importing time module
import time

# If the secs parameter is not given then
# the current time as returned by the
# time.time() method is used

# Convert the current time in seconds
# since the epoch to a
# time.struct_time object in UTC
obj = time.gmtime()

# Print the time.struct_time object
print(obj)
time.struct_time(tm_year=2019, tm_mon=8, tm_mday=22, tm_hour=3, tm_min=53,
tm_sec=32, tm_wday=3, tm_yday=234, tm_isdst=0)
Code #2: Use of time.gmtime() method
Python3
# Python program to explain time.gmtime() method

# importing time module
import time

# Time in seconds
# since the epoch
secs = 40000

# Convert the given time in seconds
# since the epoch to a
# time.struct_time object in UTC
# using time.gmtime() method
obj = time.gmtime(secs)

# Print the time.struct_time object
print("time.struct_time object for seconds =", secs)
print(obj)

# Time in seconds
# since the epoch
secs = 40000.7856

# Convert the given time in seconds
# since the epoch to a
# time.struct_time object in UTC
# using time.gmtime() method
obj = time.gmtime(secs)

# Print the time.struct_time object
print("\ntime.struct_time object for seconds =", secs)
print(obj)

# Output for secs = 40000
# and secs = 40000.7856
# will be the same because
# the fraction in 40000.7856
# i.e. .7856 will be ignored
time.struct_time object for seconds = 40000
time.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=11, tm_min=6,
tm_sec=40, tm_wday=3, tm_yday=1, tm_isdst=0)
time.struct_time object for seconds = 40000.7856
time.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=11, tm_min=6,
tm_sec=40, tm_wday=3, tm_yday=1, tm_isdst=0)
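Because the returned struct_time supports a named tuple interface, its fields can also be read by attribute name or by index. The following sketch (not part of the original article) illustrates this, together with calendar.timegm(), which inverts time.gmtime():

```python
import calendar
import time

# time.gmtime() returns a struct_time with a named tuple
# interface, so fields are accessible by name or by index.
obj = time.gmtime(0)  # the epoch itself: 1970-01-01 00:00:00 UTC

print(obj.tm_year, obj.tm_mon, obj.tm_mday)  # 1970 1 1
print(obj[0] == obj.tm_year)                 # True

# calendar.timegm() is the inverse of time.gmtime(): it maps
# a UTC struct_time back to seconds since the epoch.
print(calendar.timegm(obj))                  # 0
```

Note that time.mktime() is the inverse of time.localtime(), not of time.gmtime(); for UTC round-trips, calendar.timegm() is the right tool.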
Reference: https://docs.python.org/3/library/time.html#time.gmtime
adnanirshad158
python-utility
Python
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n03 Jan, 2022"
},
{
"code": null,
"e": 150,
"s": 28,
"text": "Time module in Python provides various time-related functions. This module comes under Python’s standard utility modules."
},
{
"code": null,
"e": 455,
"s": 150,
"text": "time.gmtime() method of Time module is used to convert a time expressed in seconds since the epoch to a time.struct_time object in UTC in which tm_isdst attribute is always 0.To convert the given time in seconds since the epoch to a time.struct_time object in local time, time.localtime() method is used."
},
{
"code": null,
"e": 592,
"s": 455,
"text": "This method returns a time.struct_time object with a named tuple interface. Following are the values present in time.struct_time object:"
},
{
"code": null,
"e": 620,
"s": 592,
"text": "Syntax: time.gmtime([secs])"
},
{
"code": null,
"e": 862,
"s": 620,
"text": "Parameter:secs (optional): An integer or float value representing time in seconds. Fractions of specified seconds will be ignored. If the secs parameter is not provided or None then the current time as returned by time.time() method is used."
},
{
"code": null,
"e": 934,
"s": 862,
"text": "Return type: This method returns an object of class ‘time.struct_time’."
},
{
"code": null,
"e": 971,
"s": 934,
"text": "Code #1: Use of time.gmtime() method"
},
{
"code": null,
"e": 979,
"s": 971,
"text": "Python3"
},
{
"code": "# Python program to explain time.gmtime() method # importing time moduleimport time # If secs parameter# is not given then# the current time # as returned by time.time() method# is used # Convert the current time in seconds# since the epoch to a# time.struct_time object in UTCobj = time.gmtime() # Print the time.struct.time objectprint(obj)",
"e": 1329,
"s": 979,
"text": null
},
{
"code": null,
"e": 1452,
"s": 1329,
"text": "time.struct_time(tm_year=2019, tm_mon=8, tm_mday=22, tm_hour=3, tm_min=53,\ntm_sec=32, tm_wday=3, tm_yday=234, tm_isdst=0)\n"
},
{
"code": null,
"e": 1489,
"s": 1452,
"text": "Code #2: Use of time.gmtime() method"
},
{
"code": null,
"e": 1497,
"s": 1489,
"text": "Python3"
},
{
"code": "# Python program to explain time.gmtime() method # importing time moduleimport time # Time in seconds# since the epochsecs = 40000 # Convert the given time in seconds# since the epoch to a# time.struct_time object in UTC# using time.gmtime() methodobj = time.gmtime(secs) # Print the time.struct_time objectprint(\"time.struct_time object for seconds =\", secs)print(obj) # Time in seconds# since the epochsecs = 40000.7856 # Convert the given time in seconds# since the epoch to a# time.struct_time object in UTC# using time.gmtime() methodobj = time.gmtime(secs) # Print the time.struct_time objectprint(\"\\ntime.struct_time object for seconds =\", secs)print(obj) # Output for sec = 40000 # and secs = 40000.7856# will be same because# fractions in 40000.7856# i.e .7856 will be ignored",
"e": 2297,
"s": 1497,
"text": null
},
{
"code": null,
"e": 2630,
"s": 2297,
"text": "time.struct_time object for seconds = 40000\ntime.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=11, tm_min=6,\ntm_sec=40, tm_wday=3, tm_yday=1, tm_isdst=0)\n\ntime.struct_time object for seconds = 40000.7856\ntime.struct_time(tm_year=1970, tm_mon=1, tm_mday=1, tm_hour=11, tm_min=6,\ntm_sec=40, tm_wday=3, tm_yday=1, tm_isdst=0)\n"
},
{
"code": null,
"e": 2697,
"s": 2630,
"text": "Reference: https://docs.python.org/3/library/time.html#time.gmtime"
},
{
"code": null,
"e": 2712,
"s": 2697,
"text": "adnanirshad158"
},
{
"code": null,
"e": 2727,
"s": 2712,
"text": "python-utility"
},
{
"code": null,
"e": 2734,
"s": 2727,
"text": "Python"
}
] |
Print the given pattern recursively
|
07 Dec, 2021
Given a positive integer n. Print the inverted triangular pattern (as described in the examples below) using the recursive approach.
Examples:
Input : n = 5
Output :
* * * * *
* * * *
* * *
* *
*
Input : n = 7
Output :
* * * * * * *
* * * * * *
* * * * *
* * * *
* * *
* *
*
Method 1 (Using two recursive functions): One recursive function is used to get the row number and the other recursive function is used to print the stars of that particular row.
Algorithm:
printPatternRowRecur(n)
if n < 1
return
print "* "
printPatternRowRecur(n-1)
printPatternRecur(n)
if n < 1
return
printPatternRowRecur(n)
print "\n"
printPatternRecur(n-1)
C++
Java
Python3
C#
PHP
Javascript
// C++ implementation to print the given
// pattern recursively
#include <bits/stdc++.h>
using namespace std;

// function to print the 'n-th' row of the
// pattern recursively
void printPatternRowRecur(int n)
{
    // base condition
    if (n < 1)
        return;

    // print the remaining stars of the n-th row
    // recursively
    cout << "* ";
    printPatternRowRecur(n - 1);
}

void printPatternRecur(int n)
{
    // base condition
    if (n < 1)
        return;

    // print the stars of the n-th row
    printPatternRowRecur(n);

    // move to next line
    cout << endl;

    // print stars of the remaining rows recursively
    printPatternRecur(n - 1);
}

// Driver program to test above
int main()
{
    int n = 5;
    printPatternRecur(n);
    return 0;
}
// Java implementation to print the given
// pattern recursively
import java.io.*;

class GFG {

    // function to print the 'n-th' row
    // of the pattern recursively
    static void printPatternRowRecur(int n)
    {
        // base condition
        if (n < 1)
            return;

        // print the remaining stars
        // of the n-th row recursively
        System.out.print("* ");
        printPatternRowRecur(n - 1);
    }

    static void printPatternRecur(int n)
    {
        // base condition
        if (n < 1)
            return;

        // print the stars of the n-th row
        printPatternRowRecur(n);

        // move to next line
        System.out.println();

        // print stars of the
        // remaining rows recursively
        printPatternRecur(n - 1);
    }

    // Driver program to test above
    public static void main(String[] args)
    {
        int n = 5;
        printPatternRecur(n);
    }
}
// This code is contributed by vt_m
# Python 3 implementation
# to print the given
# pattern recursively

# function to print the
# 'n-th' row of the
# pattern recursively
def printPatternRowRecur(n):

    # base condition
    if (n < 1):
        return

    # print the remaining
    # stars of the n-th row
    # recursively
    print("*", end = " ")
    printPatternRowRecur(n - 1)

def printPatternRecur(n):

    # base condition
    if (n < 1):
        return

    # print the stars of
    # the n-th row
    printPatternRowRecur(n)

    # move to next line
    print("")

    # print stars of the
    # remaining rows recursively
    printPatternRecur(n - 1)

# Driver Code
n = 5
printPatternRecur(n)

# This code is contributed
# by Smitha
// C# implementation to print the given
// pattern recursively
using System;

class GFG {

    // function to print the 'n-th' row
    // of the pattern recursively
    static void printPatternRowRecur(int n)
    {
        // base condition
        if (n < 1)
            return;

        // print the remaining stars
        // of the n-th row recursively
        Console.Write("* ");
        printPatternRowRecur(n - 1);
    }

    static void printPatternRecur(int n)
    {
        // base condition
        if (n < 1)
            return;

        // print the stars of the n-th row
        printPatternRowRecur(n);

        // move to next line
        Console.WriteLine();

        // print stars of the
        // remaining rows recursively
        printPatternRecur(n - 1);
    }

    // Driver program to test above
    public static void Main()
    {
        int n = 5;
        printPatternRecur(n);
    }
}
// This code is contributed by vt_m
<?php
// PHP implementation to print the given
// pattern recursively

// function to print the 'n-th' row
// of the pattern recursively
function printPatternRowRecur($n)
{
    // base condition
    if ($n < 1)
        return;

    // print the remaining stars of
    // the n-th row recursively
    echo "* ";
    printPatternRowRecur($n - 1);
}

function printPatternRecur($n)
{
    // base condition
    if ($n < 1)
        return;

    // print the stars of the n-th row
    printPatternRowRecur($n);

    // move to next line
    echo "\n";

    // print stars of the remaining
    // rows recursively
    printPatternRecur($n - 1);
}

// Driver code
$n = 5;
printPatternRecur($n);

// This code is contributed by mits
?>
<script>

// JavaScript implementation to print the given
// pattern recursively

// function to print the 'n-th' row
// of the pattern recursively
function printPatternRowRecur(n)
{
    // base condition
    if (n < 1)
        return;

    // print the remaining stars
    // of the n-th row recursively
    document.write("* ");
    printPatternRowRecur(n - 1);
}

function printPatternRecur(n)
{
    // base condition
    if (n < 1)
        return;

    // print the stars of the n-th row
    printPatternRowRecur(n);

    // move to next line
    document.write("<br>");

    // print stars of the
    // remaining rows recursively
    printPatternRecur(n - 1);
}

// Driver program to test above
var n = 5;
printPatternRecur(n);

// This code is contributed by Amit Katiyar

</script>
Output:
* * * * *
* * * *
* * *
* *
*
Method 2 (Using single recursive function): This approach uses a single recursive function to print the entire pattern.
Algorithm:
printPatternRecur(n, i)
if n < 1
return
if i <= n
print "* "
printPatternRecur(n, i+1)
else
print "\n"
printPatternRecur(n-1, 1)
C++
Java
Python3
C#
PHP
Javascript
// C++ implementation to print the given pattern recursively
#include <bits/stdc++.h>
using namespace std;

// function to print the given pattern recursively
void printPatternRecur(int n, int i)
{
    // base condition
    if (n < 1)
        return;

    // to print the stars of a particular row
    if (i <= n) {
        cout << "* ";

        // recursively print rest of the stars
        // of the row
        printPatternRecur(n, i + 1);
    }
    else {
        // change line
        cout << endl;

        // print stars of the remaining rows recursively
        printPatternRecur(n - 1, 1);
    }
}

// Driver program to test above
int main()
{
    int n = 5;
    printPatternRecur(n, 1);
    return 0;
}
// Java implementation to
// print the given pattern recursively
import java.io.*;

class GFG {

    // function to print the
    // given pattern recursively
    static void printPatternRecur(int n, int i)
    {
        // base condition
        if (n < 1)
            return;

        // to print the stars of
        // a particular row
        if (i <= n) {
            System.out.print("* ");

            // recursively print rest
            // of the stars of the row
            printPatternRecur(n, i + 1);
        }
        else {
            // change line
            System.out.println();

            // print stars of the
            // remaining rows recursively
            printPatternRecur(n - 1, 1);
        }
    }

    // Driver program
    public static void main(String[] args)
    {
        int n = 5;
        printPatternRecur(n, 1);
    }
}

// This code is contributed by vt_m
# Python3 implementation to print the
# given pattern recursively

# function to print the given pattern
# recursively
def printPatternRecur(n, i):

    # base condition
    if (n < 1):
        return

    # to print the stars of a
    # particular row
    if (i <= n):
        print("* ", end = "")

        # recursively print rest of
        # the stars of the row
        printPatternRecur(n, i + 1)
    else:

        # change line
        print("")

        # print stars of the remaining
        # rows recursively
        printPatternRecur(n - 1, 1)

# Driver program to test above
n = 5
printPatternRecur(n, 1)

# This code is contributed by Smitha
// C# implementation to
// print the given pattern recursively
using System;

class GFG {

    // function to print the
    // given pattern recursively
    static void printPatternRecur(int n, int i)
    {
        // base condition
        if (n < 1)
            return;

        // to print the stars of
        // a particular row
        if (i <= n) {
            Console.Write("* ");

            // recursively print rest
            // of the stars of the row
            printPatternRecur(n, i + 1);
        }
        else {
            // change line
            Console.WriteLine();

            // print stars of the
            // remaining rows recursively
            printPatternRecur(n - 1, 1);
        }
    }

    // Driver program
    public static void Main()
    {
        int n = 5;
        printPatternRecur(n, 1);
    }
}

// This code is contributed by vt_m
<?php
// PHP implementation to print
// the given pattern recursively

// function to print the given
// pattern recursively
function printPatternRecur($n, $i)
{
    // base condition
    if ($n < 1)
        return;

    // to print the stars of
    // a particular row
    if ($i <= $n) {
        echo "* ";

        // recursively print rest of
        // the stars of the row
        printPatternRecur($n, $i + 1);
    }
    else {
        // change line
        echo "\n";

        // print stars of the remaining
        // rows recursively
        printPatternRecur($n - 1, 1);
    }
}

// Driver code
$n = 5;
printPatternRecur($n, 1);

// This code is contributed by mits
?>
<script>

// JavaScript implementation to print
// the given pattern recursively

// Function to print the given
// pattern recursively
function printPatternRecur(n, i)
{
    // Base condition
    if (n < 1)
        return;

    // To print the stars of
    // a particular row
    if (i <= n) {
        document.write("*" + " ");

        // Recursively print rest of the stars
        // of the row
        printPatternRecur(n, i + 1);
    }
    else {
        // Change line
        document.write("<br>");

        // Print stars of the remaining
        // rows recursively
        printPatternRecur(n - 1, 1);
    }
}

// Driver code
var n = 5;
printPatternRecur(n, 1);

// This code is contributed by rdtank

</script>
Output:
* * * * *
* * * *
* * *
* *
*
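For comparison, the same inverted triangle can be produced without recursion at all. This iterative Python sketch (an alternative approach, not part of the original article) uses string repetition to build each row:

```python
# Iterative alternative to the recursive pattern printers:
# each row r (counting down from n to 1) is just "* "
# repeated r times, matching the recursive versions' output.
def print_pattern(n: int) -> None:
    for row in range(n, 0, -1):
        print("* " * row)

print_pattern(5)
```

The iterative form avoids the O(n) call-stack depth of the recursive solutions, though for small n the difference is negligible.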
This article is contributed by Ayush Jauhari.
Mithun Kumar
Smitha Dinesh Semwal
rdtank
amit143katiyar
khushboogoyal499
simranarora5sos
pattern-printing
School Programming
pattern-printing
|
[
{
"code": null,
"e": 52,
"s": 24,
"text": "\n07 Dec, 2021"
},
{
"code": null,
"e": 185,
"s": 52,
"text": "Given a positive integer n. Print the inverted triangular pattern (as described in the examples below) using the recursive approach."
},
{
"code": null,
"e": 196,
"s": 185,
"text": "Examples: "
},
{
"code": null,
"e": 331,
"s": 196,
"text": "Input : n = 5\nOutput : \n* * * * *\n* * * *\n* * *\n* *\n*\n\nInput : n = 7\nOutput :\n* * * * * * *\n* * * * * *\n* * * * * \n* * * *\n* * *\n* *\n*"
},
{
"code": null,
"e": 510,
"s": 331,
"text": "Method 1 (Using two recursive functions): One recursive function is used to get the row number and the other recursive function is used to print the stars of that particular row."
},
{
"code": null,
"e": 523,
"s": 510,
"text": "Algorithm: "
},
{
"code": null,
"e": 754,
"s": 523,
"text": "printPatternRowRecur(n)\n if n < 1\n return\n \n print \"* \"\n printPatternRowRecur(n-1)\n\nprintPatternRecur(n)\n if n < 1\n return\n \n printPatternRowRecur(n)\n print \"\\n\"\n printPatternRecur(n-1)"
},
{
"code": null,
"e": 758,
"s": 754,
"text": "C++"
},
{
"code": null,
"e": 763,
"s": 758,
"text": "Java"
},
{
"code": null,
"e": 771,
"s": 763,
"text": "Python3"
},
{
"code": null,
"e": 774,
"s": 771,
"text": "C#"
},
{
"code": null,
"e": 778,
"s": 774,
"text": "PHP"
},
{
"code": null,
"e": 789,
"s": 778,
"text": "Javascript"
},
{
"code": "// C++ implementation to print the given// pattern recursively#include <bits/stdc++.h>using namespace std; // function to print the 'n-th' row of the// pattern recursivelyvoid printPatternRowRecur(int n){ // base condition if (n < 1) return; // print the remaining stars of the n-th row // recursively cout << \"* \"; printPatternRowRecur(n-1);} void printPatternRecur(int n){ // base condition if (n < 1) return; // print the stars of the n-th row printPatternRowRecur(n); // move to next line cout << endl; // print stars of the remaining rows recursively printPatternRecur(n-1); } // Driver program to test aboveint main(){ int n = 5; printPatternRecur(n); return 0;}",
"e": 1560,
"s": 789,
"text": null
},
{
"code": "// java implementation to print the given// pattern recursivelyimport java.io.*; class GFG{ // function to print the 'n-th' row // of the pattern recursively static void printPatternRowRecur(int n) { // base condition if (n < 1) return; // print the remaining stars // of the n-th row recursively System.out.print( \"* \"); printPatternRowRecur(n - 1); } static void printPatternRecur(int n) { // base condition if (n < 1) return; // print the stars of the n-th row printPatternRowRecur(n); // move to next line System.out.println (); // print stars of the // remaining rows recursively printPatternRecur(n - 1); } // Driver program to test above public static void main (String[] args) { int n = 5; printPatternRecur(n); }}//This code is contributed by vt_m",
"e": 2558,
"s": 1560,
"text": null
},
{
"code": "# Python 3 implementation# to print the given# pattern recursively # function to print the# 'n-th' row of the# pattern recursivelydef printPatternRowRecur(n): # base condition if (n < 1): return # print the remaining # stars of the n-th row # recursively print(\"*\", end = \" \") printPatternRowRecur(n - 1) def printPatternRecur(n): # base condition if (n < 1): return # print the stars of # the n-th row printPatternRowRecur(n) # move to next line print(\"\") # print stars of the # remaining rows recursively printPatternRecur(n - 1) # Driver Coden = 5printPatternRecur(n) # This code is contributed# by Smitha",
"e": 3262,
"s": 2558,
"text": null
},
{
"code": "// C# implementation to print the given// pattern recursivelyusing System;class GFG{ // function to print the 'n-th' row // of the pattern recursively static void printPatternRowRecur(int n) { // base condition if (n < 1) return; // print the remaining stars // of the n-th row recursively Console.Write( \"* \"); printPatternRowRecur(n - 1); } static void printPatternRecur(int n) { // base condition if (n < 1) return; // print the stars of the n-th row printPatternRowRecur(n); // move to next line Console.WriteLine(); // print stars of the // remaining rows recursively printPatternRecur(n - 1); } // Driver program to test above public static void Main() { int n = 5; printPatternRecur(n); }}//This code is contributed by vt_m",
"e": 4235,
"s": 3262,
"text": null
},
{
"code": "<?php// php implementation to print the given// pattern recursively // function to print the 'n-th' row// of the pattern recursivelyfunction printPatternRowRecur($n){ // base condition if ($n < 1) return; // print the remaining stars of // the n-th row recursively echo \"* \"; printPatternRowRecur($n-1);} function printPatternRecur($n){ // base condition if ($n < 1) return; // print the stars of the n-th row printPatternRowRecur($n); // move to next line echo \"\\n\"; // print stars of the remaining // rows recursively printPatternRecur($n-1); } // Driver code $n = 5; printPatternRecur($n); // This code is contributed by mits?>",
"e": 4968,
"s": 4235,
"text": null
},
{
"code": "<script> // JavaScript implementation to print the given// pattern recursively // function to print the 'n-th' row // of the pattern recursivelyfunction printPatternRowRecur(n){ // base condition if (n < 1) return; // print the remaining stars // of the n-th row recursively document.write( \"* \"); printPatternRowRecur(n - 1);} function printPatternRecur(n){ // base condition if (n < 1) return; // print the stars of the n-th row printPatternRowRecur(n); // move to next line document.write(\"<br>\"); // print stars of the // remaining rows recursively printPatternRecur(n - 1); } // Driver program to test abovevar n = 5;printPatternRecur(n); // This code is contributed by Amit Katiyar </script>",
"e": 5760,
"s": 4968,
"text": null
},
{
"code": null,
"e": 5768,
"s": 5760,
"text": "Output:"
},
{
"code": null,
"e": 5798,
"s": 5768,
"text": "* * * * *\n* * * *\n* * *\n* *\n*"
},
{
"code": null,
"e": 5918,
"s": 5798,
"text": "Method 2 (Using single recursive function): This approach uses a single recursive function to print the entire pattern."
},
{
"code": null,
"e": 5931,
"s": 5918,
"text": "Algorithm: "
},
{
"code": null,
"e": 6126,
"s": 5931,
"text": "printPatternRecur(n, i)\n if n < 1\n return\n \n if i <= n\n print \"* \"\n printPatternRecur(n, i+1)\n \n else\n print \"\\n\"\n printPatternRecur(n-1, 1)"
},
{
"code": null,
"e": 6130,
"s": 6126,
"text": "C++"
},
{
"code": null,
"e": 6135,
"s": 6130,
"text": "Java"
},
{
"code": null,
"e": 6143,
"s": 6135,
"text": "Python3"
},
{
"code": null,
"e": 6146,
"s": 6143,
"text": "C#"
},
{
"code": null,
"e": 6150,
"s": 6146,
"text": "PHP"
},
{
"code": null,
"e": 6161,
"s": 6150,
"text": "Javascript"
},
{
"code": "// C++ implementation to print the given pattern recursively#include <bits/stdc++.h> using namespace std; // function to print the given pattern recursivelyvoid printPatternRecur(int n, int i){ // base condition if (n < 1) return; // to print the stars of a particular row if (i <= n) { cout << \"* \"; // recursively print rest of the stars // of the row printPatternRecur(n, i + 1); } else { // change line cout << endl; // print stars of the remaining rows recursively printPatternRecur(n-1, 1); }} // Driver program to test aboveint main(){ int n = 5; printPatternRecur(n, 1); return 0; }",
"e": 6884,
"s": 6161,
"text": null
},
{
"code": "// java implementation to// print the given pattern recursivelyimport java.io.*; class GFG { // function to print the // given pattern recursively static void printPatternRecur(int n, int i) { // base condition if (n < 1) return; // to print the stars of // a particular row if (i <= n) { System.out.print ( \"* \"); // recursively print rest // of the stars of the row printPatternRecur(n, i + 1); } else { // change line System.out.println( ); // print stars of the // remaining rows recursively printPatternRecur(n - 1, 1); } } // Driver program public static void main (String[] args) { int n = 5; printPatternRecur(n, 1); }} // This code is contributed by vt_m",
"e": 7833,
"s": 6884,
"text": null
},
{
"code": "# Python3 implementation to print the# given pattern recursively # function to print the given pattern# recursivelydef printPatternRecur(n, i): # base condition if (n < 1): return # to print the stars of a # particular row if (i <= n): print(\"* \", end = \"\") # recursively print rest of # the stars of the row printPatternRecur(n, i + 1) else: # change line print(\"\") # print stars of the remaining # rows recursively printPatternRecur(n-1, 1) # Driver program to test aboven = 5printPatternRecur(n, 1) # This code is contributed by Smitha",
"e": 8499,
"s": 7833,
"text": null
},
{
"code": "// C# implementation to// print the given pattern recursivelyusing System;class GFG { // function to print the // given pattern recursively static void printPatternRecur(int n, int i) { // base condition if (n < 1) return; // to print the stars of // a particular row if (i <= n) { Console.Write ( \"* \"); // recursively print rest // of the stars of the row printPatternRecur(n, i + 1); } else { // change line Console.WriteLine( ); // print stars of the // remaining rows recursively printPatternRecur(n - 1, 1); } } // Driver program public static void Main () { int n = 5; printPatternRecur(n, 1); }} // This code is contributed by vt_m",
"e": 9423,
"s": 8499,
"text": null
},
{
"code": "<?php// php implementation to print// the given pattern recursively // function to print the given// pattern recursivelyfunction printPatternRecur($n, $i){ // base condition if ($n < 1) return; // to print the stars of // a particular row if ($i <= $n) { echo \"* \"; // recursively print rest of // the stars of the row printPatternRecur($n, $i + 1); } else { // change line echo \"\\n\"; // print stars of the remaining // rows recursively printPatternRecur($n - 1, 1); }} // Driver code $n = 5; printPatternRecur($n, 1); // This code is contributed by mits?>",
"e": 10124,
"s": 9423,
"text": null
},
{
"code": "<script> // JavaScript implementation to print// the given pattern recursively // Function to print the given// pattern recursivelyfunction printPatternRecur(n, i){ // Base condition if (n < 1) return; // To print the stars of // a particular row if (i <= n) { document.write(\"*\" + \" \"); // Recursively print rest of the stars // of the row printPatternRecur(n, i + 1); } else { // Change line document.write(\"<br>\"); // Print stars of the remaining // rows recursively printPatternRecur(n - 1, 1); }} // Driver codevar n = 5;printPatternRecur(n, 1); // This code is contributed by rdtank </script>",
"e": 10861,
"s": 10124,
"text": null
},
{
"code": null,
"e": 10870,
"s": 10861,
"text": "Output: "
},
{
"code": null,
"e": 10900,
"s": 10870,
"text": "* * * * *\n* * * *\n* * *\n* *\n*"
},
{
"code": null,
"e": 11322,
"s": 10900,
"text": "This article is contributed by Ayush Jauhari. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 11335,
"s": 11322,
"text": "Mithun Kumar"
},
{
"code": null,
"e": 11356,
"s": 11335,
"text": "Smitha Dinesh Semwal"
},
{
"code": null,
"e": 11363,
"s": 11356,
"text": "rdtank"
},
{
"code": null,
"e": 11378,
"s": 11363,
"text": "amit143katiyar"
},
{
"code": null,
"e": 11395,
"s": 11378,
"text": "khushboogoyal499"
},
{
"code": null,
"e": 11411,
"s": 11395,
"text": "simranarora5sos"
},
{
"code": null,
"e": 11428,
"s": 11411,
"text": "pattern-printing"
},
{
"code": null,
"e": 11447,
"s": 11428,
"text": "School Programming"
},
{
"code": null,
"e": 11464,
"s": 11447,
"text": "pattern-printing"
}
] |
Generating a Requirements.txt File from a Jupyter Notebook | by Adam Shafi | Towards Data Science
|
Creating a requirements.txt file is a necessary process, particularly when sharing your code, developing MLOps or just pushing something up into a Docker container.
It is surprisingly tricky to get a clean requirements.txt file from a Jupyter Notebook, so I’ve investigated the different ways to do it...
The most common way I’ve seen on Stack Overflow is to use pip freeze. This will list every package and version in your current virtual environment.
Simply open a terminal and navigate to the folder you want your requirements file to be in.
You can then activate a virtual environment using venv or conda.
conda activate myenv
pip freeze > requirements.txt
The problem with this method
It will take every package you have installed on that environment. If you want to create something lightweight, using only your imports then you’ll need to manually prune this file.
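If you do prune by hand often, a few lines of Python can automate it. This is a minimal sketch under my own naming (`prune_requirements` and the keep-list are illustrative, not from any library): it keeps only the `pip freeze` pins whose package names appear in a whitelist.

```python
# Minimal pruning sketch: keep only the pinned requirements whose package
# names appear in a whitelist of packages you actually import.
# `prune_requirements` is an illustrative helper, not a library API.
def prune_requirements(lines, keep):
    keep = {name.lower() for name in keep}
    kept = []
    for line in lines:
        # pip freeze pins look like "package==1.2.3"
        package = line.split("==")[0].strip().lower()
        if package in keep:
            kept.append(line.strip())
    return kept

frozen = ["numpy==1.21.0", "pandas==1.3.0", "some-transitive-dep==0.0.1"]
print(prune_requirements(frozen, keep=["numpy", "pandas"]))
# ['numpy==1.21.0', 'pandas==1.3.0']
```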
If you want to only use packages in your currently jupyter notebook, you can use the python package session-info.
Install using:
pip install session-info
In your jupyter notebook, after you have imported packages, you can then write:
import session_info
session_info.show()
You’ll get something that looks like this:
You can then copy and paste and replace the spaces with == using find and replace.
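If find-and-replace feels error-prone, the substitution is also a quick Python job. The sketch below assumes the copied output has one whitespace-separated `package version` pair per line (the sample packages are made up for illustration):

```python
# Sketch: turn "package   version" lines, as copied from session_info's
# table, into pip-style "package==version" pins. The sample input below
# is illustrative.
raw = """\
pandas              1.3.0
numpy               1.21.0
"""

pins = []
for line in raw.splitlines():
    parts = line.split()
    if len(parts) == 2:                 # skip blank or malformed lines
        pins.append(f"{parts[0]}=={parts[1]}")

print("\n".join(pins))
# pandas==1.3.0
# numpy==1.21.0
```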
The problem with this method
It doesn’t take into account the dependencies from our .py file, custom_functions. However, these will appear lower down with all the other dependencies.
The big problem is that it takes the name of the imports and not the packages. For example, the command for sklearn is actually pip install scikit-learn, which means that this won’t work as a requirements.txt file.
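A partial workaround is a small alias table that maps import names to their PyPI package names before you write the file. The helper below is a hypothetical convenience, and the mapping lists only a few well-known mismatches:

```python
# A few well-known cases where the import name differs from the PyPI name.
# `to_pypi_name` is an illustrative helper, not part of session-info.
IMPORT_TO_PYPI = {
    "sklearn": "scikit-learn",
    "cv2": "opencv-python",
    "PIL": "Pillow",
}

def to_pypi_name(import_name):
    # Fall back to the import name itself when no alias is known.
    return IMPORT_TO_PYPI.get(import_name, import_name)

print(to_pypi_name("sklearn"))  # scikit-learn
print(to_pypi_name("numpy"))    # numpy
```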
If you only want the requirements.txt file to relate to a single notebook and not your whole environment, this is the best way to do it.
Start by opening a terminal and installing pipreqs and nbconvert.
pip install pipreqs
pip install nbconvert
Then, navigate to the folder where your Jupyter notebook is located.
If we assume the notebook is called “my nb”. You’ll need to escape the space with a backslash (\).
jupyter nbconvert --output-dir="./reqs" --to script my\ nb.ipynb
cd reqs
pipreqs
If this worked, you should receive a success message.
So what we’ve done here is converted our notebook into a .py file in a new directory called reqs, then run pipreqs in the new directory. The reason for this is that pipreqs only works on .py files and I can’t seem to get it to work when there are other files in the folder. The requirements.txt will be generated in the same folder.
You’ll end up with a perfect requirements file which you can use for anything else!
|
[
{
"code": null,
"e": 226,
"s": 172,
"text": "Want to jump straight to the best option? Click here."
},
{
"code": null,
"e": 391,
"s": 226,
"text": "Creating a requirements.txt file is a necessary process, particularly when sharing your code, developing MLOps or just pushing something up into a Docker container."
},
{
"code": null,
"e": 531,
"s": 391,
"text": "It is surprisingly tricky to get a clean requirements.txt file from a Jupyter Notebook, so I’ve investigated the different ways to do it..."
},
{
"code": null,
"e": 687,
"s": 531,
"text": "This is the most common way I’ve seen on Stack Overflow is to use pip freeze. This will list every package and version in your current virtual environment."
},
{
"code": null,
"e": 779,
"s": 687,
"text": "Simply open a terminal and navigate to the folder you want your requirements file to be in."
},
{
"code": null,
"e": 844,
"s": 779,
"text": "You can then activate a virtual environment using venv or conda."
},
{
"code": null,
"e": 894,
"s": 844,
"text": "conda activate myenvpip freeze > requirements.txt"
},
{
"code": null,
"e": 923,
"s": 894,
"text": "The problem with this method"
},
{
"code": null,
"e": 1105,
"s": 923,
"text": "It will take every package you have installed on that environment. If you want to create something lightweight, using only your imports then you’ll need to manually prune this file."
},
{
"code": null,
"e": 1219,
"s": 1105,
"text": "If you want to only use packages in your currently jupyter notebook, you can use the python package session-info."
},
{
"code": null,
"e": 1234,
"s": 1219,
"text": "Install using:"
},
{
"code": null,
"e": 1259,
"s": 1234,
"text": "pip install session-info"
},
{
"code": null,
"e": 1339,
"s": 1259,
"text": "In your jupyter notebook, after you have imported packages, you can then write:"
},
{
"code": null,
"e": 1378,
"s": 1339,
"text": "import session_infosession_info.show()"
},
{
"code": null,
"e": 1421,
"s": 1378,
"text": "You’ll get something that looks like this:"
},
{
"code": null,
"e": 1504,
"s": 1421,
"text": "You can then copy and paste and replace the spaces with == using find and replace."
},
{
"code": null,
"e": 1533,
"s": 1504,
"text": "The problem with this method"
},
{
"code": null,
"e": 1687,
"s": 1533,
"text": "It doesn’t take into account the dependencies from our .py file, custom_functions. However, these will appear lower down with all the other dependencies."
},
{
"code": null,
"e": 1900,
"s": 1687,
"text": "The big problem is that it takes the name of the imports and not the packages. For example, the command for sklearn is actuallypip install scikit-learn which means that this won’t work as a requirements.txt file."
},
{
"code": null,
"e": 2037,
"s": 1900,
"text": "If you only want the requirements.txt file to relate to a single notebook and not your whole environment, this is the best way to do it."
},
{
"code": null,
"e": 2103,
"s": 2037,
"text": "Start by opening a terminal and installing pipreqs and nbconvert."
},
{
"code": null,
"e": 2144,
"s": 2103,
"text": "pip install pipreqspip install nbconvert"
},
{
"code": null,
"e": 2213,
"s": 2144,
"text": "Then, navigate to the folder where your Jupyter notebook is located."
},
{
"code": null,
"e": 2312,
"s": 2213,
"text": "If we assume the notebook is called “my nb”. You’ll need to escape the space with a backslash (\\)."
},
{
"code": null,
"e": 2391,
"s": 2312,
"text": "jupyter nbconvert --output-dir=\"./reqs\" --to script my\\ nb.ipynbcd reqspipreqs"
},
{
"code": null,
"e": 2445,
"s": 2391,
"text": "If this worked, you should receive a success message."
},
{
"code": null,
"e": 2778,
"s": 2445,
"text": "So what we’ve done here is converted our notebook into a .py file in a new directory called reqs, then run pipreqs in the new directory. The reason for this is that pipreqs only works on .py files and I can’t seem to get it to work when there are other files in the folder. The requirements.txt will be generated in the same folder."
}
] |
Ruby Basic Syntax - GeeksforGeeks
|
06 Sep, 2019
Ruby is a pure object-oriented language developed by Yukihiro Matsumoto (also known as Matz in the Ruby community) in the mid-1990s in Japan. Ruby is easy to learn because its syntax is similar to that of already widely used languages. Here, we will learn the basic syntax of the Ruby language. Let us write a simple program to print “Hello World!”.
# this line will print "Hello World!" as output.
puts "Hello World!";
Output:
Hello World!
Ruby interprets newline characters (\n) and semicolons (;) as the end of a statement. Note: if a line has +, - or a backslash at the end, it indicates the continuation of a statement.
Whitespace characters such as spaces and tabs are generally ignored in Ruby code, except when they appear in a string; that is, Ruby ignores all spaces in a statement. But sometimes, whitespace is used to interpret ambiguous statements. Example:
a / b interprets as a/b (here a is a variable)
a b interprets as a(b) (here a is a method)
# declaring a function named 'a' which accepts an
# integer and returns 1
def a(u)
    return 1
end

# driver code
a = 3
b = 2

# this a + b interprets as a + b, so prints 5 as output
puts(a + b)

# this a b interprets as a(b) thus the returned
# value is printed
puts(a b)
Output:
5
1
BEGIN statement is used to declare a part of code which must be called before the program runs.
Syntax:
BEGIN
{
# code written here
}
Similarly, END is used to declare a part of code which must be called at the end of the program. Syntax:
END
{
# code written here
}
Example of BEGIN and END
# Ruby program of BEGIN and END
puts "This is main body of program"

END {
   puts "END of the program"
}
BEGIN {
   puts "BEGINNING of the Program"
}
Output:
BEGINNING of the Program
This is main body of program
A comment hides some part of the code from the Ruby interpreter. Comments can be written in different ways, such as using the hash character (#) at the beginning of a line. Syntax:
#This is a single line comment
#This is multiple
#lines of comment
=begin
This is another
way of writing
comments in a
block fashion
=end
Identifiers are names of variables, constants and functions/methods.
Ruby identifiers are case sensitive.
Ruby identifiers may consist of alphanumeric characters and also the underscore ( _ ).
For example: Man_1, item_01 are examples of identifiers.
The reserved words in Ruby which cannot be used as constant or variable names are called keywords of Ruby.
Akanksha_Rai
Picked
Ruby-Basics
Ruby
|
[
{
"code": null,
"e": 23242,
"s": 23214,
"text": "\n06 Sep, 2019"
},
{
"code": null,
"e": 23594,
"s": 23242,
"text": "Ruby is a pure Object-Oriented language developed by Yukihiro Matsumoto (also known as Matz in the Ruby community) in the mid 1990’s in Japan. To program in Ruby is easy to learn because of its similar syntax to already widely used languages. Here, we will learn the basic syntax of Ruby language.Let us write a simple program to print “Hello World!”."
},
{
"code": "# this line will print \"Hello World!\" as output.puts \"Hello World!\";",
"e": 23663,
"s": 23594,
"text": null
},
{
"code": null,
"e": 23671,
"s": 23663,
"text": "Output:"
},
{
"code": null,
"e": 23684,
"s": 23671,
"text": "Hello World!"
},
{
"code": null,
"e": 23878,
"s": 23684,
"text": "Ruby interprets newline characters(\\n) and semicolons(;) as the end of a statement.Note: If a line has +, – or backslash at the end of a line, then it indicates the continuation of a statement."
},
{
"code": null,
"e": 24119,
"s": 23878,
"text": "Whitespace characters such as spaces and tabs are generally ignored in Ruby code, except when they appear in a string, i.e. it ignores all spaces in a statement But sometimes, whitespaces are used to interpret ambiguous statements.Example :"
},
{
"code": null,
"e": 24208,
"s": 24119,
"text": "a / b interprets as a/b (Here a is a variable)a b interprets as a(b)(Here a is a method)"
},
{
"code": "# declaring a function named 'a' which accepts an# integer and return 1def a(u) return 1 end # driver codea = 3 b = 2 # this a + b interprets as a + b, so prints 5 as outputputs(a + b) # this a b interprets as a(b) thus the returned# value is printedputs(a b)",
"e": 24473,
"s": 24208,
"text": null
},
{
"code": null,
"e": 24481,
"s": 24473,
"text": "Output:"
},
{
"code": null,
"e": 24486,
"s": 24481,
"text": "5\n1\n"
},
{
"code": null,
"e": 24582,
"s": 24486,
"text": "BEGIN statement is used to declare a part of code which must be called before the program runs."
},
{
"code": null,
"e": 24590,
"s": 24582,
"text": "Syntax:"
},
{
"code": null,
"e": 24625,
"s": 24590,
"text": "BEGIN\n{\n # code written here\n}\n"
},
{
"code": null,
"e": 24725,
"s": 24625,
"text": "Similarly, END is used to declare a part of code which must be called at the end of program.Syntax:"
},
{
"code": null,
"e": 24758,
"s": 24725,
"text": "END\n{\n # code written here\n}\n"
},
{
"code": null,
"e": 24783,
"s": 24758,
"text": "Example of BEGIN and END"
},
{
"code": "# Ruby program of BEGIN and ENDputs \"This is main body of program\" END { puts \"END of the program\"}BEGIN { puts \"BEGINNING of the Program\"}",
"e": 24929,
"s": 24783,
"text": null
},
{
"code": null,
"e": 24937,
"s": 24929,
"text": "Output:"
},
{
"code": null,
"e": 24992,
"s": 24937,
"text": "BEGINNING of the Program\nThis is main body of program\n"
},
{
"code": null,
"e": 25157,
"s": 24992,
"text": "A comment hides some part of code, from the Ruby Interpreter. Comments can be written in different ways, using hash character(#) at the beginning of a line.Syntax :"
},
{
"code": null,
"e": 25188,
"s": 25157,
"text": "#This is a single line comment"
},
{
"code": null,
"e": 25224,
"s": 25188,
"text": "#This is multiple\n#lines of comment"
},
{
"code": null,
"e": 25297,
"s": 25224,
"text": "=begin\nThis is another\nway of writing \ncomments in a \nblock fashion\n=end"
},
{
"code": null,
"e": 25366,
"s": 25297,
"text": "Identifiers are names of variables, constants and functions/methods."
},
{
"code": null,
"e": 25403,
"s": 25366,
"text": "Ruby identifiers are case sensitive."
},
{
"code": null,
"e": 25487,
"s": 25403,
"text": "Ruby identifiers may consists of alphanumeric characters and also underscore ( _ )."
},
{
"code": null,
"e": 25544,
"s": 25487,
"text": "For example: Man_1, item_01 are examples of identifiers."
},
{
"code": null,
"e": 25651,
"s": 25544,
"text": "The reserved words in Ruby which cannot be used as constant or variable names are called keywords of Ruby."
},
{
"code": null,
"e": 25664,
"s": 25651,
"text": "Akanksha_Rai"
},
{
"code": null,
"e": 25671,
"s": 25664,
"text": "Picked"
},
{
"code": null,
"e": 25683,
"s": 25671,
"text": "Ruby-Basics"
},
{
"code": null,
"e": 25688,
"s": 25683,
"text": "Ruby"
},
{
"code": null,
"e": 25786,
"s": 25688,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25795,
"s": 25786,
"text": "Comments"
},
{
"code": null,
"e": 25808,
"s": 25795,
"text": "Old Comments"
},
{
"code": null,
"e": 25835,
"s": 25808,
"text": "Include v/s Extend in Ruby"
},
{
"code": null,
"e": 25878,
"s": 25835,
"text": "Ruby | Enumerator each_with_index function"
},
{
"code": null,
"e": 25909,
"s": 25878,
"text": "Ruby | Array select() function"
},
{
"code": null,
"e": 25933,
"s": 25909,
"text": "Global Variable in Ruby"
},
{
"code": null,
"e": 25963,
"s": 25933,
"text": "Ruby | Hash delete() function"
},
{
"code": null,
"e": 25990,
"s": 25963,
"text": "Ruby | String gsub! Method"
},
{
"code": null,
"e": 26024,
"s": 25990,
"text": "Ruby | String capitalize() Method"
},
{
"code": null,
"e": 26070,
"s": 26024,
"text": "How to Make a Custom Array of Hashes in Ruby?"
},
{
"code": null,
"e": 26092,
"s": 26070,
"text": "Ruby | Case Statement"
}
] |
Sum of Leaf Nodes | Practice | GeeksforGeeks
|
Given a Binary Tree of size N. The task is to complete the function sumLeaf(), that should return the sum of all the leaf nodes of the given binary tree.
Input:
First line of input contains number of testcases T. For each testcase, there will be two lines, first of which containing the number of edges (between two nodes) in the tree. Next line contains N pairs (considering a and b) with a 'L' (means node b on left of a) or 'R' (means node b on right of a) after a and b.
Output:
For each testcase, there will be a single line containing the sum of all leaf nodes in the tree.
User Task:
The task is to complete the function sumLeaf() which takes root reference as argument and returns the sum of all leaf nodes.
Constraints:
1 <= T <= 100
1 <= N <= 10^3
Example:
Input:
2
2
1 2 L 1 3 R
5
10 20 L 10 30 R 20 40 L 20 60 R 30 90 L
Output:
5
190
Explanation:
Testcase 1: Leaf nodes in the tree are 2 and 3, and their sum is 5.
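The recursive idea works in any language: a node with no children contributes its own value, and everything else just adds up its subtrees. Below is an illustrative Python sketch; the `Node` class is a minimal stand-in for the judge's node type, not the problem's actual starter code.

```python
# Illustrative sketch of sumLeaf(). `Node` is a minimal stand-in for the
# judge's tree node, defined here only so the example is runnable.
class Node:
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def sum_leaf(root):
    if root is None:
        return 0
    if root.left is None and root.right is None:
        return root.data                     # leaf node: count its value
    return sum_leaf(root.left) + sum_leaf(root.right)

# Testcase 1 from the statement: leaves 2 and 3 sum to 5.
tree = Node(1, Node(2), Node(3))
print(sum_leaf(tree))  # 5
```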
0
harshscode1 week ago
EASY AND EFFICIENT SOLUTION......
if(root==NULL) return 0;
if(root->left==NULL and root->right==NULL)
    return root->data;
int sum1=sumLeaf(root->left);
int sum2=sumLeaf(root->right);
return sum1+sum2;
0
hanumanmanyam8373 weeks ago
class GfG
{
int sum=0;
public int SumofLeafNodes(Node root)
{
// your code here
if(root==null)
{
return 0;
}
if(root.left==null && root.right==null)
{
sum+= root.data;
}
SumofLeafNodes(root.left);
SumofLeafNodes(root.right);
return sum;
}
}
0
roopsaisurampudi1 month ago
if (root == null) return 0;
if (root.left == null && root.right == null) return root.data;
return SumofLeafNodes(root.left) + SumofLeafNodes(root.right);
0
mohitm15092 months ago
class GfG{
    public int SumofLeafNodes(Node root)
    {
        // your code here
        if(root == null) return 0;
        int sum = 0;
        if(root.left == null && root.right == null)
            sum += root.data;
        sum += SumofLeafNodes(root.left);
        sum += SumofLeafNodes(root.right);
        return sum;
    }
}
0
sumandas203 months ago
int sumLeaf(Node* root){
    // Code here
    if(root==NULL) return 0;
    if(root->left==NULL && root->right==NULL){
        return root->data;
    }
    return sumLeaf(root->left)+sumLeaf(root->right);
}
0
sengarharsh9293 months ago
class GfG{
    int sum=0;
    public int SumofLeafNodes(Node root)
    {
        // your code here
        if(root==null) return 0;
        if(root.left==null && root.right==null)
        {
            sum=sum+root.data;
        }
        if(root.left!=null)
        {
            SumofLeafNodes(root.left);
        }
        if(root.right!=null)
        {
            SumofLeafNodes(root.right);
        }
        return sum;
    }
}
+1
shreyash9779663 months ago
int sumLeaf(Node* root)
{
// Code here
if(root==NULL) return 0;
if(root->left==NULL and root->right==NULL){
return root->data;
}
return (sumLeaf(root->left)+sumLeaf(root->right));
}
+1
sarthakkashyap1903 months ago
C++ soln.
void col(Node* root, int &sum){
if(!root){
return;
}
if(root->left==NULL && root->right==NULL){
sum+=root->data;
return;
}
col(root->left, sum);
col(root->right, sum);
return;
}
int sumLeaf(Node* root)
{
// Code here
int sum = 0;
col(root, sum);
return sum;
}
+1
maskiyashank883 months ago
Simple Java Code 0.4 time
public int SumofLeafNodes(Node node)
{
// your code here
if(node==null)
return 0;
if(node.left == null && node.right == null)
return node.data;
return SumofLeafNodes(node.left) + SumofLeafNodes(node.right);
}
-1
dhirunand3 months ago
class GfG
{
public int SumofLeafNodes(Node root)
{
// your code here
count = 0;
solve(root);
return count;
}
static int count;
public static void solve(Node root){
if(root.left == null && root.right == null){
count+=root.data;
return;
}
if(root.left==null && root.right != null)
solve(root.right);
if(root.right==null && root.left != null)
solve(root.left);
if(root.right!=null && root.left != null){
solve(root.left);
solve(root.right);
}
}
}
|
[
{
"code": null,
"e": 392,
"s": 238,
"text": "Given a Binary Tree of size N. The task is to complete the function sumLeaf(), that should return the sum of all the leaf nodes of the given binary tree."
},
{
"code": null,
"e": 713,
"s": 392,
"text": "Input:\nFirst line of input contains number of testcases T. For each testcase, there will be two lines, first of which containing the number of edges (between two nodes) in the tree. Next line contains N pairs (considering a and b) with a 'L' (means node b on left of a) or 'R' (means node b on right of a) after a and b."
},
{
"code": null,
"e": 818,
"s": 713,
"text": "Output:\nFor each testcase, there will be a single line containing the sum of all leaf nodes in the tree."
},
{
"code": null,
"e": 955,
"s": 818,
"text": "User Task: \nThe task is to complete the function sumLeaf() which takes root reference as argument and returns the sum of all leaf nodes."
},
{
"code": null,
"e": 996,
"s": 955,
"text": "Constraints:\n1 <= T <= 100\n1 <= N <= 103"
},
{
"code": null,
"e": 1070,
"s": 996,
"text": "Example:\nInput:\n2\n2\n1 2 L 1 3 R\n5\n10 20 L 10 30 R 20 40 L 20 60 R 30 90 L"
},
{
"code": null,
"e": 1084,
"s": 1070,
"text": "Output:\n5\n190"
},
{
"code": null,
"e": 1167,
"s": 1084,
"text": "Explanation:\nTestcase 1: Leaf nodes in the tree are 2 and 3, and their sum is 5.\n "
},
{
"code": null,
"e": 1169,
"s": 1167,
"text": "0"
},
{
"code": null,
"e": 1190,
"s": 1169,
"text": "harshscode1 week ago"
},
{
"code": null,
"e": 1224,
"s": 1190,
"text": "EASY AND EFFICIENT SOLUTION......"
},
{
"code": null,
"e": 1445,
"s": 1224,
"text": " if(root==NULL) return 0; if(root->left==NULL and root->right==NULL) return root->data; int sum1=sumLeaf(root->left); int sum2=sumLeaf(root->right); return sum1+sum2;"
},
{
"code": null,
"e": 1447,
"s": 1445,
"text": "0"
},
{
"code": null,
"e": 1475,
"s": 1447,
"text": "hanumanmanyam8373 weeks ago"
},
{
"code": null,
"e": 1854,
"s": 1475,
"text": "class GfG\n{\n int sum=0;\n public int SumofLeafNodes(Node root)\n {\n // your code here\n if(root==null)\n {\n return 0;\n }\n if(root.left==null && root.right==null)\n {\n sum+= root.data;\n }\n SumofLeafNodes(root.left);\n SumofLeafNodes(root.right);\n return sum;\n \n \n }\n}"
},
{
"code": null,
"e": 1856,
"s": 1854,
"text": "0"
},
{
"code": null,
"e": 1884,
"s": 1856,
"text": "roopsaisurampudi1 month ago"
},
{
"code": null,
"e": 2054,
"s": 1884,
"text": "if (root == null) return 0;\n if (root.left == null && root.right == null) return root.data;\n return SumofLeafNodes(root.left) + SumofLeafNodes(root.right);"
},
{
"code": null,
"e": 2056,
"s": 2054,
"text": "0"
},
{
"code": null,
"e": 2079,
"s": 2056,
"text": "mohitm15092 months ago"
},
{
"code": null,
"e": 2398,
"s": 2079,
"text": "class GfG{ public int SumofLeafNodes(Node root) { // your code here if(root == null) return 0; int sum = 0; if(root.left == null && root.right == null) sum += root.data; sum += SumofLeafNodes(root.left); sum += SumofLeafNodes(root.right); return sum; }}"
},
{
"code": null,
"e": 2402,
"s": 2400,
"text": "0"
},
{
"code": null,
"e": 2425,
"s": 2402,
"text": "sumandas203 months ago"
},
{
"code": null,
"e": 2615,
"s": 2425,
"text": "int sumLeaf(Node* root){ // Code here if(root==NULL)return 0; if(root->left==NULL&&root->right==NULL){ return root->data; } return sumLeaf(root->left)+sumLeaf(root->right);}"
},
{
"code": null,
"e": 2617,
"s": 2615,
"text": "0"
},
{
"code": null,
"e": 2644,
"s": 2617,
"text": "sengarharsh9293 months ago"
},
{
"code": null,
"e": 3048,
"s": 2644,
"text": "class GfG{ int sum=0; public int SumofLeafNodes(Node root) { // your code here iif(root==null )return 0; if(root.left==null && root.right==null) { sum=sum+root.data; } if(root.left!=null) { SumofLeafNodes(root.left); } if(root.right!=null) { SumofLeafNodes(root.right); } return sum; }}"
},
{
"code": null,
"e": 3051,
"s": 3048,
"text": "+1"
},
{
"code": null,
"e": 3078,
"s": 3051,
"text": "shreyash9779663 months ago"
},
{
"code": null,
"e": 3288,
"s": 3078,
"text": "int sumLeaf(Node* root)\n{\n // Code here\n if(root==NULL) return 0;\n if(root->left==NULL and root->right==NULL){\n return root->data;\n }\n return (sumLeaf(root->left)+sumLeaf(root->right));\n}"
},
{
"code": null,
"e": 3291,
"s": 3288,
"text": "+1"
},
{
"code": null,
"e": 3321,
"s": 3291,
"text": "sarthakkashyap1903 months ago"
},
{
"code": null,
"e": 3331,
"s": 3321,
"text": "C++ soln."
},
{
"code": null,
"e": 3664,
"s": 3331,
"text": "void col(Node* root, int &sum){\n if(!root){\n return;\n }\n if(root->left==NULL && root->right==NULL){\n sum+=root->data;\n return;\n }\n col(root->left, sum);\n col(root->right, sum);\n return;\n}\nint sumLeaf(Node* root)\n{\n // Code here\n int sum = 0;\n col(root, sum);\n return sum;\n \n}"
},
{
"code": null,
"e": 3667,
"s": 3664,
"text": "+1"
},
{
"code": null,
"e": 3694,
"s": 3667,
"text": "maskiyashank883 months ago"
},
{
"code": null,
"e": 3720,
"s": 3694,
"text": "Simple Java Code 0.4 time"
},
{
"code": null,
"e": 3993,
"s": 3720,
"text": "public int SumofLeafNodes(Node node)\n {\n // your code here\n if(node==null)\n return 0;\n if(node.left == null && node.right == null)\n return node.data;\n return SumofLeafNodes(node.left) + SumofLeafNodes(node.right);\n }"
},
{
"code": null,
"e": 3996,
"s": 3993,
"text": "-1"
},
{
"code": null,
"e": 4018,
"s": 3996,
"text": "dhirunand3 months ago"
},
{
"code": null,
"e": 4684,
"s": 4018,
"text": "class GfG\n{\n public int SumofLeafNodes(Node root)\n {\n // your code here\n count = 0;\n solve(root);\n \n return count;\n }\n \n static int count;\n public static void solve(Node root){\n\n if(root.left == null && root.right == null){\n count+=root.data;\n return;\n }\n \n if(root.left==null && root.right != null)\n solve(root.right);\n \n if(root.right==null && root.left != null)\n solve(root.left);\n \n if(root.right!=null && root.left != null){\n solve(root.left);\n solve(root.right);\n }\n }\n}"
},
{
"code": null,
"e": 4830,
"s": 4684,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 4866,
"s": 4830,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 4876,
"s": 4866,
"text": "\nProblem\n"
},
{
"code": null,
"e": 4886,
"s": 4876,
"text": "\nContest\n"
},
{
"code": null,
"e": 4949,
"s": 4886,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 5097,
"s": 4949,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 5305,
"s": 5097,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 5411,
"s": 5305,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
How to Beat Google’s AutoML - Hyperparameter Optimisation with Flair | by Tadej Magajna | Towards Data Science
|
This is a follow-up to our previous post about State of the Art Text Classification. We explain how to do hyperparameter optimisation using Flair Python NLP library to achieve optimal results in text classification outperforming Google’s AutoML Natural Language.
Hyperparameter optimisation (or tuning) is the process of choosing a set of optimal parameters for a machine learning algorithm. Data preprocessors, optimisers and ML algorithms all receive a set of parameters that guide their behaviour. To achieve optimal performance they need to be tuned to fit the statistical properties, type of features and size of the dataset used. The most typical hyperparameters in deep learning include learning rate, number of hidden layers in a deep neural network, batch size, dropout...
In NLP we also encounter a number of other hyperparameters often to do with preprocessing and text embedding such as type of embedding, embedding dimension, number of RNN layers...
Typically, if we’re lucky enough to have a problem simple enough to only require one or two hyperparameters with a few discrete values (like k in k-means for example), we can simply try all possible options. But with increasing number of parameters the trial and error approach becomes difficult.
Our search space grows exponentially with the number of parameters tuned.
Assuming discrete options, this means that if we have 8 parameters where each parameter has 10 discrete options we end up with 10^8 possible combinations of hyperparameters. This makes hand-picking parameters unfeasible assuming training a model usually requires a considerable amount of time and resources.
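The arithmetic is easy to sanity-check in Python; `itertools.product` enumerates exactly this Cartesian space (the parameter grid below is made up for illustration):

```python
import itertools

# Illustrative grid: 8 hyperparameters with 10 discrete options each.
grid = {f"param_{i}": list(range(10)) for i in range(8)}

n_combinations = 1
for options in grid.values():
    n_combinations *= len(options)

print(n_combinations)  # 100000000, i.e. 10**8

# itertools.product would lazily enumerate every single combination:
first = next(itertools.product(*grid.values()))
print(first)  # (0, 0, 0, 0, 0, 0, 0, 0)
```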
There are many hyperparameter optimisation techniques such as grid search, random search, bayesian optimisation, gradient methods and finally TPE. Tree-structured Parzen Estimator (TPE) is the method we used in Flair’s wrapper around Hyperopt - a popular Python hyperparameter optimisation library.
Flair provides a simple API to tune your text classifier parameters. We do, however, need to tell it what kinds of hyperparameters to tune and what values to consider for them. Running the optimiser is no harder than training the classifier itself, but it requires significantly more time and resources, as it essentially executes training a large number of times. It is therefore advisable to run this on GPU-accelerated hardware.
We will perform hyperparameter optimisation of a text classifier model trained on Kaggle SMS Spam Collection Dataset learning to differentiate between spam and not-spam messages.
To prepare the dataset please refer to the “Preprocessing — Building the Dataset” section of State of the Art Text Classification where we obtain train.csv, test.csv and dev.csv. Make sure datasets are stored in same directory as the script running Flair.
You can check whether you have a GPU available for training by running:
import torch
torch.cuda.is_available()
It returns a boolean indicating whether CUDA is available for PyTorch (on top of which Flair is written).
The first step of hyperparameter optimisation will most likely include defining the search space. This means defining all the hyperparameters we want to tune and whether the optimiser should only consider a set of discrete values for them or search in a bounded continuous space.
For discrete parameters use:
search_space.add(Parameter.PARAMNAME, hp.choice, options=[1, 2, ..])
And for uniform continuous parameters use:
search_space.add(Parameter.PARAMNAME, hp.uniform, low=0.0, high=0.5)
A list of all possible parameters can be seen here.
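Conceptually, hp.choice draws from a fixed option list while hp.uniform draws from a bounded interval. A dependency-free sketch of that sampling idea (this is plain random sampling for illustration, not Hyperopt’s actual TPE logic, which models promising regions rather than sampling blindly):

```python
import random

random.seed(42)

# Each entry mirrors a search_space.add(...) call:
# either a list of discrete options or a (low, high) range
search_space = {
    "learning_rate": ("choice", [0.05, 0.1, 0.15, 0.2]),
    "mini_batch_size": ("choice", [16, 32, 64]),
    "dropout": ("uniform", (0.0, 0.5)),
}

def sample_trial(space):
    # Draw one candidate hyperparameter combination
    trial = {}
    for name, (kind, spec) in space.items():
        if kind == "choice":
            trial[name] = random.choice(spec)
        else:
            low, high = spec
            trial[name] = random.uniform(low, high)
    return trial

trial = sample_trial(search_space)
print(trial)
```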
Next you will need to specify some parameters referring to the type of text classifier we want to use and how many training_runs and epochs to run.
param_selector = TextClassifierParamSelector(
    corpus=corpus,
    multi_label=False,
    base_path='resources/results',
    document_embedding_type='lstm',
    max_epochs=10,
    training_runs=1,
    optimization_value=OptimizationValue.DEV_SCORE
)
Note that DEV_SCORE is set as our optimisation value. This is extremely important because we don’t want to optimise our hyperparameters based on the test set as that would cause overfitting.
Finally, we run param_selector.optimize(search_space, max_evals=100), which will execute 100 evaluations of the optimiser and save the results to resources/results/param_selection.txt.
Full source code to run the whole process is as follows:
from flair.hyperparameter.param_selection import TextClassifierParamSelector, OptimizationValue
from hyperopt import hp
from flair.hyperparameter.param_selection import SearchSpace, Parameter
from flair.embeddings import WordEmbeddings, FlairEmbeddings
from flair.data_fetcher import NLPTaskDataFetcher
from pathlib import Path

corpus = NLPTaskDataFetcher.load_classification_corpus(Path('./'), test_file='test.csv', dev_file='dev.csv', train_file='train.csv')

word_embeddings = [[WordEmbeddings('glove'), FlairEmbeddings('news-forward'), FlairEmbeddings('news-backward')]]

search_space = SearchSpace()
search_space.add(Parameter.EMBEDDINGS, hp.choice, options=word_embeddings)
search_space.add(Parameter.HIDDEN_SIZE, hp.choice, options=[32, 64, 128, 256, 512])
search_space.add(Parameter.RNN_LAYERS, hp.choice, options=[1, 2])
search_space.add(Parameter.DROPOUT, hp.uniform, low=0.0, high=0.5)
search_space.add(Parameter.LEARNING_RATE, hp.choice, options=[0.05, 0.1, 0.15, 0.2])
search_space.add(Parameter.MINI_BATCH_SIZE, hp.choice, options=[16, 32, 64])

param_selector = TextClassifierParamSelector(
    corpus=corpus,
    multi_label=False,
    base_path='resources/results',
    document_embedding_type='lstm',
    max_epochs=10,
    training_runs=1,
    optimization_value=OptimizationValue.DEV_SCORE
)

param_selector.optimize(search_space, max_evals=100)
Our search space includes the learning rate, document embedding hidden size, number of document embedding RNN layers, dropout value and batch size. Note that despite using only one type of word embedding (a stack of news-forward, news-backward, and GloVe) we still had to pass it to the search space, as it is a required parameter.
The optimiser ran for about 6 hours on a GPU and executed 100 evaluations. The final results were written to resources/results/param_selection.txt.
The last few lines display the best parameter combination as shown below:
--------
evaluation run 97
dropout: 0.19686569599930906
embeddings: ./glove.gensim, ./english-forward-v0.2rc.pt, lm-news-english-backward-v0.2rc.pt
hidden_size: 256
learning_rate: 0.05
mini_batch_size: 32
rnn_layers: 2
score: 0.009033333333333374
variance: 8.888888888888905e-07
test_score: 0.9923
...
----------
best parameter combination
dropout: 0.19686569599930906
embeddings: 0
hidden_size: 3
learning_rate: 0 <- *this means 0th option*
mini_batch_size: 1
rnn_layers: 1
Based on test_score from the tuning results, confirmed by a few further evaluations, we achieved a test f1-score of 0.9923 (99.23%)!
This means we outperformed Google’s AutoML by a tiny margin.
hint: if precision = recall, then f-score = precision = recall
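The hint can be checked directly from the definition of the f-score as the harmonic mean of precision and recall:

```python
def f_score(precision, recall):
    # Harmonic mean of precision and recall (the F1 score)
    return 2 * precision * recall / (precision + recall)

# When precision equals recall, the harmonic mean
# collapses to that same value
print(f_score(0.9923, 0.9923))
```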
Does this mean I will always be able to achieve state-of-the art results following this guide?
Short answer: no. The guide should give you a good idea about how to use Flair’s hyperparameter optimiser and is not a comprehensive comparison of NLP text classification frameworks. Using the approaches described will certainly yield results comparable to other state-of-the-art frameworks, but they will vary depending on the dataset, the preprocessing methods used and the hyperparameter search space defined.
Note that when choosing the best parameter combination, Flair takes into account both loss and variance of results obtained. Therefore, the model with the lowest loss and highest f1-score will not necessarily be selected as best.
To use the best performing parameters on an actual model you need to read the optimal parameters from param_selection.txt and manually copy them one by one to the code that will train our model just like we did in part 1.
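As a stopgap, the relevant lines can be pulled out with a few lines of Python. This parse_param_lines helper is hypothetical (it is not part of Flair) and assumes the "key: value" layout shown above:

```python
def parse_param_lines(lines):
    # Hypothetical helper: turn "key: value" lines from the
    # "best parameter combination" section into a Python dict.
    params = {}
    for line in lines:
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.split("<-")[0].strip()  # drop trailing annotations
        try:
            params[key.strip()] = float(value) if "." in value else int(value)
        except ValueError:
            params[key.strip()] = value  # keep non-numeric values as strings
    return params

best = parse_param_lines([
    "dropout: 0.19686569599930906",
    "learning_rate: 0 <- *this means 0th option*",
    "rnn_layers: 1",
])
print(best)
```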
While we are extremely happy with the library, it would be much nicer to be able to have the optimal parameters available in a more code-friendly format, or even better, have an option to simply export the optimal model during optimisation.
C++ Program to Implement Variable Length Array
Variable length arrays can have a size as required by the user, i.e., they can have a variable size.
A program to implement variable length arrays in C++ is given as follows −
#include <iostream>
using namespace std;

int main() {
   int *array, size;

   cout<<"Enter size of array: "<<endl;
   cin>>size;

   // Allocate the array on the heap with the size chosen at runtime
   array = new int [size];

   cout<<"Enter array elements: "<<endl;
   for (int i = 0; i < size; i++)
      cin>>array[i];

   cout<<"The array elements are: ";
   for(int i = 0; i < size; i++)
      cout<<array[i]<<" ";
   cout<<endl;

   // Free the dynamically allocated memory
   delete []array;
   return 0;
}
The output of the above program is as follows −
Enter size of array: 10
Enter array elements: 11 54 7 87 90 2 56 12 36 80
The array elements are: 11 54 7 87 90 2 56 12 36 80
In the above program, first the array is initialized. Then the array size and array elements are requested from the user. This is given below −
cout<<"Enter size of array: "<<endl;
cin>>size;
array = new int [size];
cout<<"Enter array elements: "<<endl;
for (int i = 0; i < size; i++)
cin>>array[i];
Finally, the array elements are displayed and the array is deleted. This is given below −
cout<<"The array elements are: ";
for(int i = 0; i < size; i++)
cout<<array[i]<<" ";
cout<<endl;
delete []array;
Collecting news articles through RSS/Atom feeds using Python | by Artem | Towards Data Science
In one of my previous posts I was talking about how you could scrape and analyze news articles with just 5 lines of code:
towardsdatascience.com
This time I will show you how you could set up a pipe to automatically collect all the new articles that have been published by almost any news provider (such as NY Times, CNN, Bloomberg, etc.)
To achieve such a goal I will show you how you could automate news collection using feedparser Python package that helps to normalize RSS/Atom feeds.
For data engineers and data scientists who might want to collect their own data and practice building data pipes.
RSS is an XML formatted plain text that provides a brief summary about articles that have been recently published by some content provider (news, podcasts, personal blog, etc.)
The most common producers of RSS are news publishers.
RSS feed exists to provide access to the latest news (for news aggregators and news syndicators, for example).
RSS feed does not contain entire article text (in most cases) but provides some basic information such as author, title, description, publication time, etc.
Atom is another XML format that has been developed as an alternative to the RSS feed. Atom seems to be more advanced comparing to the RSS, but I am not going to compare those 2 formats in this post.
An example of RSS XML:
<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
 <title>RSS Title</title>
 <description>This is an example of an RSS feed</description>
 <link>http://www.example.com/main.html</link>
 <lastBuildDate>Mon, 06 Sep 2010 00:01:00 +0000</lastBuildDate>
 <pubDate>Sun, 06 Sep 2009 16:20:00 +0000</pubDate>
 <ttl>1800</ttl>
 <item>
  <title>Example entry</title>
  <description>Here is some text containing an interesting description.</description>
  <link>http://www.example.com/blog/post/1</link>
  <guid isPermaLink="false">7bd204c6-1655-4c27-aeee-53f933c5395f</guid>
  <pubDate>Sun, 06 Sep 2009 16:20:00 +0000</pubDate>
 </item>
</channel>
</rss>
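Since RSS is plain XML, a feed like the sample above can be explored with nothing but the Python standard library. The snippet inlines a trimmed copy of the sample feed:

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the sample RSS document above
rss_xml = """<rss version="2.0">
<channel>
 <title>RSS Title</title>
 <description>This is an example of an RSS feed</description>
 <link>http://www.example.com/main.html</link>
 <item>
  <title>Example entry</title>
  <description>Here is some text containing an interesting description.</description>
  <link>http://www.example.com/blog/post/1</link>
  <pubDate>Sun, 06 Sep 2009 16:20:00 +0000</pubDate>
 </item>
</channel>
</rss>"""

channel = ET.fromstring(rss_xml).find("channel")
print(channel.findtext("title"))  # RSS Title
for item in channel.findall("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

In practice a dedicated library like feedparser is preferable, since real-world feeds come in several RSS and Atom flavours, but the snippet shows how little structure there is to a feed.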
So, the only thing left is to collect the urls (endpoints) of the news publishers that we are interested in.
For this article, I take the NY Times feed endpoint. To do so, I had to:
go to the https://www.nytimes.com/
“inspect” the source code of the page
search for “rss” term
grab the first result
Let’s grab that link and check if it looks like something that we need.
Alright, as we can see it is “NYT > Top Stories” RSS.
Under the <channel> section you might find the general information about the feed itself — description, when it was built, language, etc.
Each <item> under this RSS represents the article. The first item represents the article that has a title (<title>)called “Trump Bet He Could Isolate Iran and Charm North Korea. It’s Not That Easy.”
If we take the link (<link>) under this <item> we will be forwarded to the original page of an article:
RSS will not give us the full text of an article but it will propose a short <description> instead.
Now when we know what are the RSS and how we could use it, we may try to automate the way we obtain new articles.
The main drawback of RSS/Atom feeds is that they are not normalized. According to the Wikipedia page, RSS/Atom have only a few mandatory fields (link, title, description).
It means that in case you would like to store data from different news publishers you should either take into account all the possible key-value pairs or use some schema-free technology (elasticsearch, for example).
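A lightweight alternative is to normalise every entry into a fixed schema yourself, treating everything beyond the mandatory fields as optional. A sketch (the schema and the field fallbacks here are illustrative choices, not a standard):

```python
def normalize_entry(raw):
    # Only link, title and description are mandatory in RSS;
    # every other field may be absent depending on the publisher.
    return {
        "link": raw.get("link"),
        "title": raw.get("title"),
        "summary": raw.get("description") or raw.get("summary"),
        "published": raw.get("pubDate") or raw.get("published"),
        "author": raw.get("author"),  # frequently missing
    }

entry = normalize_entry({"link": "https://example.com/1",
                         "title": "Example entry",
                         "description": "Short summary."})
print(entry["author"])  # None: missing fields become explicit nulls
```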
pip install feedparser
import feedparser
feed = feedparser.parse('https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml')
Now our feed is loaded under the feed variable. Under the .feed attribute we might find the main info regarding the feed metadata itself.
feed.feed
Out[171]:
{'title': 'NYT > Top Stories',
 'title_detail': {'type': 'text/plain', 'language': None, 'base': 'https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml', 'value': 'NYT > Top Stories'},
 'links': [{'rel': 'alternate', 'type': 'text/html', 'href': 'https://www.nytimes.com?emc=rss&partner=rss'},
  {'href': 'https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml', 'rel': 'self', 'type': 'application/rss+xml'}],
 'link': 'https://www.nytimes.com?emc=rss&partner=rss',
 'subtitle': '',
 'subtitle_detail': {'type': 'text/html', 'language': None, 'base': 'https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml', 'value': ''},
 'language': 'en-us',
 'rights': 'Copyright 2020 The New York Times Company',
 'rights_detail': {'type': 'text/plain', 'language': None, 'base': 'https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml', 'value': 'Copyright 2020 The New York Times Company'},
 'updated': 'Thu, 02 Jan 2020 15:03:52 +0000',
 'updated_parsed': time.struct_time(tm_year=2020, tm_mon=1, tm_mday=2, tm_hour=15, tm_min=3, tm_sec=52, tm_wday=3, tm_yday=2, tm_isdst=0),
 'published': 'Thu, 02 Jan 2020 15:03:52 +0000',
 'published_parsed': time.struct_time(tm_year=2020, tm_mon=1, tm_mday=2, tm_hour=15, tm_min=3, tm_sec=52, tm_wday=3, tm_yday=2, tm_isdst=0),
 'image': {'title': 'NYT > Top Stories',
  'title_detail': {'type': 'text/plain', 'language': None, 'base': 'https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml', 'value': 'NYT > Top Stories'},
  'href': 'https://static01.nyt.com/images/misc/NYT_logo_rss_250x40.png',
  'links': [{'rel': 'alternate', 'type': 'text/html', 'href': 'https://www.nytimes.com?emc=rss&partner=rss'}],
  'link': 'https://www.nytimes.com?emc=rss&partner=rss'}}
The most important fields are copyright and published.
Feedparser takes care to assign correct values to those attributes so you do not have to waste time on normalizing them yourself.
Same as for the feeds, you might find the information about each article under the .entries attribute.
feed.entries[0].title
Out[5]: 'Trump Bet He Could Isolate Iran and Charm North Korea. It’s Not That Easy.'
That way, we will know the basic information about each element of the feed.
In case you want a full text of the article you have to take the url and use newspaper3k. Check my other article that I have embedded in the beginning of this post.
Try to think about how you could build a data pipe to collect new articles, deduplicate with those that your database has seen already. Also, an additional NLP pipe on top might make lots of useful insights (spaCy python package is perfect for that).
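A minimal sketch of the deduplication step, keyed on the article link (in a real pipe the seen set would live in your database, and the entry's guid, when present, may be a more reliable key):

```python
import hashlib

seen = set()  # in a real pipe this lives in your database

def is_new_article(entry):
    # Hash the link so the store holds fixed-size keys
    key = hashlib.sha256(entry["link"].encode("utf-8")).hexdigest()
    if key in seen:
        return False
    seen.add(key)
    return True

entries = [
    {"link": "https://www.nytimes.com/a"},
    {"link": "https://www.nytimes.com/b"},
    {"link": "https://www.nytimes.com/a"},  # already seen -> dropped
]
fresh = [e for e in entries if is_new_article(e)]
print(len(fresh))  # 2
```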
In my personal blog, I talk about how I build newscatcher — an API to access news data from the most popular news publishers. In case you would like to know how to scale what I have described above into thousands of feeds follow my Medium blog and tune to my twitter.
Docker - Networking
|
Docker takes care of the networking aspects so that the containers can communicate with other containers and also with the Docker Host. If you do an ifconfig on the Docker Host, you will see the Docker Ethernet adapter. This adapter is created when Docker is installed on the Docker Host.
This is a bridge between the Docker Host and the Linux Host. Now let’s look at some commands associated with networking in Docker.
This command can be used to list all the networks associated with Docker on the host.
docker network ls
None
The command will output all the networks on the Docker Host.
sudo docker network ls
The output of the above command is shown below
If you want to see more details on the network associated with Docker, you can use the Docker network inspect command.
docker network inspect networkname
networkname − This is the name of the network you need to inspect.
The command will output all the details about the network.
sudo docker network inspect bridge
The output of the above command is shown below −
Now let’s run a container and see what happens when we inspect the network again. Let’s spin up an Ubuntu container with the following command −
sudo docker run -it ubuntu:latest /bin/bash
Now if we inspect our network name via the following command, you will now see that the container is attached to the bridge.
sudo docker network inspect bridge
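The output of docker network inspect is JSON, so it can also be consumed programmatically. As an illustrative sketch (the sample below is a hypothetical, trimmed-down shape of the real inspect output), a short Python script can list the containers attached to a network:

```python
import json

# Hypothetical sample shaped like `docker network inspect bridge` output:
# the real command prints a JSON array with a "Containers" mapping.
sample = json.loads("""
[
  {
    "Name": "bridge",
    "Driver": "bridge",
    "Containers": {
      "3386a527aa08": {"Name": "ubuntu_c1", "IPv4Address": "172.17.0.2/16"},
      "94447ca47985": {"Name": "ubuntu_c2", "IPv4Address": "172.17.0.3/16"}
    }
  }
]
""")

def attached_containers(inspect_output):
    """Return the container names attached to the first network in the output."""
    containers = inspect_output[0].get("Containers", {})
    return sorted(c["Name"] for c in containers.values())

print(attached_containers(sample))  # ['ubuntu_c1', 'ubuntu_c2']
```

In practice the JSON would come from running the command, e.g. `docker network inspect bridge` piped into the script.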
One can create a network in Docker before launching containers. This can be done with the following command −
docker network create --driver drivername name
drivername − This is the name used for the network driver.
name − This is the name given to the network.
The command will output the long ID for the new network.
sudo docker network create --driver bridge new_nw
The output of the above command is shown below −
You can now attach the new network when launching the container. So let’s spin up an Ubuntu container with the following command −
sudo docker run -it --network=new_nw ubuntu:latest /bin/bash
And now when you inspect the network via the following command, you will see the container attached to the network.
sudo docker network inspect new_nw
|
[
{
"code": null,
"e": 2629,
"s": 2340,
"text": "Docker takes care of the networking aspects so that the containers can communicate with other containers and also with the Docker Host. If you do an ifconfig on the Docker Host, you will see the Docker Ethernet adapter. This adapter is created when Docker is installed on the Docker Host."
},
{
"code": null,
"e": 2760,
"s": 2629,
"text": "This is a bridge between the Docker Host and the Linux Host. Now let’s look at some commands associated with networking in Docker."
},
{
"code": null,
"e": 2846,
"s": 2760,
"text": "This command can be used to list all the networks associated with Docker on the host."
},
{
"code": null,
"e": 2866,
"s": 2846,
"text": "docker network ls \n"
},
{
"code": null,
"e": 2871,
"s": 2866,
"text": "None"
},
{
"code": null,
"e": 2932,
"s": 2871,
"text": "The command will output all the networks on the Docker Host."
},
{
"code": null,
"e": 2955,
"s": 2932,
"text": "sudo docker network ls"
},
{
"code": null,
"e": 3002,
"s": 2955,
"text": "The output of the above command is shown below"
},
{
"code": null,
"e": 3121,
"s": 3002,
"text": "If you want to see more details on the network associated with Docker, you can use the Docker network inspect command."
},
{
"code": null,
"e": 3158,
"s": 3121,
"text": "docker network inspect networkname \n"
},
{
"code": null,
"e": 3225,
"s": 3158,
"text": "networkname − This is the name of the network you need to inspect."
},
{
"code": null,
"e": 3292,
"s": 3225,
"text": "networkname − This is the name of the network you need to inspect."
},
{
"code": null,
"e": 3351,
"s": 3292,
"text": "The command will output all the details about the network."
},
{
"code": null,
"e": 3387,
"s": 3351,
"text": "sudo docker network inspect bridge "
},
{
"code": null,
"e": 3436,
"s": 3387,
"text": "The output of the above command is shown below −"
},
{
"code": null,
"e": 3581,
"s": 3436,
"text": "Now let’s run a container and see what happens when we inspect the network again. Let’s spin up an Ubuntu container with the following command −"
},
{
"code": null,
"e": 3627,
"s": 3581,
"text": "sudo docker run –it ubuntu:latest /bin/bash \n"
},
{
"code": null,
"e": 3752,
"s": 3627,
"text": "Now if we inspect our network name via the following command, you will now see that the container is attached to the bridge."
},
{
"code": null,
"e": 3788,
"s": 3752,
"text": "sudo docker network inspect bridge\n"
},
{
"code": null,
"e": 3898,
"s": 3788,
"text": "One can create a network in Docker before launching containers. This can be done with the following command −"
},
{
"code": null,
"e": 3947,
"s": 3898,
"text": "docker network create –-driver drivername name \n"
},
{
"code": null,
"e": 4006,
"s": 3947,
"text": "drivername − This is the name used for the network driver."
},
{
"code": null,
"e": 4065,
"s": 4006,
"text": "drivername − This is the name used for the network driver."
},
{
"code": null,
"e": 4111,
"s": 4065,
"text": "name − This is the name given to the network."
},
{
"code": null,
"e": 4157,
"s": 4111,
"text": "name − This is the name given to the network."
},
{
"code": null,
"e": 4214,
"s": 4157,
"text": "The command will output the long ID for the new network."
},
{
"code": null,
"e": 4265,
"s": 4214,
"text": "sudo docker network create –-driver bridge new_nw "
},
{
"code": null,
"e": 4314,
"s": 4265,
"text": "The output of the above command is shown below −"
},
{
"code": null,
"e": 4445,
"s": 4314,
"text": "You can now attach the new network when launching the container. So let’s spin up an Ubuntu container with the following command −"
},
{
"code": null,
"e": 4506,
"s": 4445,
"text": "sudo docker run –it –network=new_nw ubuntu:latest /bin/bash\n"
},
{
"code": null,
"e": 4622,
"s": 4506,
"text": "And now when you inspect the network via the following command, you will see the container attached to the network."
},
{
"code": null,
"e": 4659,
"s": 4622,
"text": "sudo docker network inspect new_nw \n"
},
{
"code": null,
"e": 4693,
"s": 4659,
"text": "\n 70 Lectures \n 12 hours \n"
},
{
"code": null,
"e": 4709,
"s": 4693,
"text": " Anshul Chauhan"
},
{
"code": null,
"e": 4742,
"s": 4709,
"text": "\n 41 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 4754,
"s": 4742,
"text": " AR Shankar"
},
{
"code": null,
"e": 4787,
"s": 4754,
"text": "\n 31 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 4804,
"s": 4787,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 4837,
"s": 4804,
"text": "\n 15 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 4877,
"s": 4837,
"text": " Harshit Srivastava, Pranjal Srivastava"
},
{
"code": null,
"e": 4910,
"s": 4877,
"text": "\n 33 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 4930,
"s": 4910,
"text": " Mumshad Mannambeth"
},
{
"code": null,
"e": 4962,
"s": 4930,
"text": "\n 13 Lectures \n 53 mins\n"
},
{
"code": null,
"e": 4978,
"s": 4962,
"text": " Musab Zayadneh"
},
{
"code": null,
"e": 4985,
"s": 4978,
"text": " Print"
},
{
"code": null,
"e": 4996,
"s": 4985,
"text": " Add Notes"
}
] |
How do I install Python SciPy?
|
We can install Python SciPy with the help of following methods −
Scientific Python Distributions − There are various scientific Python distributions that provide the language itself along with the most used packages. The advantage of using these distributions is that they require little configuration and work on almost all the setups. Here we will be discussing three most useful distributions −
Anaconda − Anaconda, a free Python distribution, works well on MS Windows, Mac OS, and Linux. It provides us over 1500 Python and R packages along with a large collection of libraries. This Python distribution is best suited for beginners.
WinPython − It is another free Python distribution that includes scientific packages as well as Spyder IDE. As the name entails, it only works with MS Windows OS.
Pyzo − Pyzo is also a free Python distribution. It is based on Anaconda and the IEP interactive development environment. It supports all the major operating systems such as MS Windows, Mac OS, and Linux.
Via pip − Pip is an inbuilt package management system that comes with Python. You can use pip to install, update, or delete any official package. Below is the command to install SciPy along with other useful packages via pip −
python -m pip install --user numpy scipy matplotlib ipython jupyter pandas sympy nose
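After the pip command finishes, it can be handy to verify which of the packages actually import. The helper below is a small illustrative sketch using only the standard library (the package names passed in are just examples):

```python
from importlib.util import find_spec

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if find_spec(n) is None]

# 'math' ships with Python, so only the made-up name would be reported missing.
print(missing_packages(["math", "definitely_not_installed_pkg_xyz"]))
```

Running it with the SciPy-stack names, e.g. `missing_packages(["numpy", "scipy", "matplotlib"])`, gives a quick post-install check.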
System Package Manager − You can use the system package managers to install the most common Python packages as follows −
Ubuntu and Debian− For Ubuntu and Debian OS, use apt-get as given in the below command -
sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose
Fedora 22 and later− For Fedora 22 and later OS, use dnf as given in the below command
sudo dnf install numpy scipy python-matplotlib ipython pythonpandas
sympy python-nose atlas-devel
Mac OS− If you are using Macports package manager, you can execute the following command−
sudo port install py35-numpy py35-scipy py35-matplotlib py35-
ipython +notebook py35-pandas py35-sympy py35-nose
Whereas if you are using Homebrew (having incomplete coverage of SciPy ecosystem), use the below command −
brew install numpy scipy ipython jupyter
Source packages − This method is best suited for those who are involved in development because with source packages they can get development versions or can alter the source code too. You can get the source packages for SciPy here.
Binaries − You can directly install the packages using its binary files. Binary files can either come from GitHub or PyPi or third-party repositories. For example, Ubuntu OS has package repositories from where you can download individual binaries.
|
[
{
"code": null,
"e": 1127,
"s": 1062,
"text": "We can install Python SciPy with the help of following methods −"
},
{
"code": null,
"e": 2064,
"s": 1127,
"text": "Scientific Python Distributions − There are various scientific Python distributions that provide the language itself along with the most used packages. The advantage of using these distributions is that they require little configuration and work on almost all the setups. Here we will be discussing three most useful distributions −Anaconda − Anaconda, a free Python distribution, works well on MS Windows, Mac OS, and Linux. It provides us over 1500 Python and R packages along with a large collection of libraries. This Python distribution is best suited for beginners.WinPython − It is another free Python distribution that includes scientific packages as well as Spyder IDE. As the name entails, it only works with MS Windows OS.Pyzo − Pyzo is also a free Python distribution. It is based on Anaconda and the IEP interactive development environment. It supports all the major operating systems such as MS Windows, Mac OS, and Linux."
},
{
"code": null,
"e": 2397,
"s": 2064,
"text": "Scientific Python Distributions − There are various scientific Python distributions that provide the language itself along with the most used packages. The advantage of using these distributions is that they require little configuration and work on almost all the setups. Here we will be discussing three most useful distributions −"
},
{
"code": null,
"e": 2637,
"s": 2397,
"text": "Anaconda − Anaconda, a free Python distribution, works well on MS Windows, Mac OS, and Linux. It provides us over 1500 Python and R packages along with a large collection of libraries. This Python distribution is best suited for beginners."
},
{
"code": null,
"e": 2877,
"s": 2637,
"text": "Anaconda − Anaconda, a free Python distribution, works well on MS Windows, Mac OS, and Linux. It provides us over 1500 Python and R packages along with a large collection of libraries. This Python distribution is best suited for beginners."
},
{
"code": null,
"e": 3040,
"s": 2877,
"text": "WinPython − It is another free Python distribution that includes scientific packages as well as Spyder IDE. As the name entails, it only works with MS Windows OS."
},
{
"code": null,
"e": 3203,
"s": 3040,
"text": "WinPython − It is another free Python distribution that includes scientific packages as well as Spyder IDE. As the name entails, it only works with MS Windows OS."
},
{
"code": null,
"e": 3407,
"s": 3203,
"text": "Pyzo − Pyzo is also a free Python distribution. It is based on Anaconda and the IEP interactive development environment. It supports all the major operating systems such as MS Windows, Mac OS, and Linux."
},
{
"code": null,
"e": 3611,
"s": 3407,
"text": "Pyzo − Pyzo is also a free Python distribution. It is based on Anaconda and the IEP interactive development environment. It supports all the major operating systems such as MS Windows, Mac OS, and Linux."
},
{
"code": null,
"e": 3838,
"s": 3611,
"text": "Via pip − Pip is an inbuilt package management system that comes with Python. You can use pip to install, update, or delete any official package. Below is the command to install SciPy along with other useful packages via pip −"
},
{
"code": null,
"e": 4065,
"s": 3838,
"text": "Via pip − Pip is an inbuilt package management system that comes with Python. You can use pip to install, update, or delete any official package. Below is the command to install SciPy along with other useful packages via pip −"
},
{
"code": null,
"e": 4151,
"s": 4065,
"text": "python -m pip install --user numpy scipy matplotlib ipython jupyter pandas sympy nose"
},
{
"code": null,
"e": 4360,
"s": 4151,
"text": "System Package Manager − You can use the system package managers to install the most common Python packages as follows −Ubuntu and Debian− For Ubuntu and Debian OS, use apt-get as given in the below command -"
},
{
"code": null,
"e": 4481,
"s": 4360,
"text": "System Package Manager − You can use the system package managers to install the most common Python packages as follows −"
},
{
"code": null,
"e": 4570,
"s": 4481,
"text": "Ubuntu and Debian− For Ubuntu and Debian OS, use apt-get as given in the below command -"
},
{
"code": null,
"e": 4659,
"s": 4570,
"text": "Ubuntu and Debian− For Ubuntu and Debian OS, use apt-get as given in the below command -"
},
{
"code": null,
"e": 4788,
"s": 4659,
"text": "sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose"
},
{
"code": null,
"e": 4875,
"s": 4788,
"text": "Fedora 22 and later− For Fedora 22 and later OS, use dnf as given in the below command"
},
{
"code": null,
"e": 4962,
"s": 4875,
"text": "Fedora 22 and later− For Fedora 22 and later OS, use dnf as given in the below command"
},
{
"code": null,
"e": 5060,
"s": 4962,
"text": "sudo dnf install numpy scipy python-matplotlib ipython pythonpandas\nsympy python-nose atlas-devel"
},
{
"code": null,
"e": 5150,
"s": 5060,
"text": "Mac OS− If you are using Macports package manager, you can execute the following command−"
},
{
"code": null,
"e": 5240,
"s": 5150,
"text": "Mac OS− If you are using Macports package manager, you can execute the following command−"
},
{
"code": null,
"e": 5353,
"s": 5240,
"text": "sudo port install py35-numpy py35-scipy py35-matplotlib py35-\nipython +notebook py35-pandas py35-sympy py35-nose"
},
{
"code": null,
"e": 5460,
"s": 5353,
"text": "Whereas if you are using Homebrew (having incomplete coverage of SciPy ecosystem), use the below command −"
},
{
"code": null,
"e": 5505,
"s": 5460,
"text": "Sudobrew install numpy scipy ipython jupyter"
},
{
"code": null,
"e": 5737,
"s": 5505,
"text": "Source packages − This method is best suited for those who are involved in development because with source packages they can get development versions or can alter the source code too. You can get the source packages for SciPy here."
},
{
"code": null,
"e": 5969,
"s": 5737,
"text": "Source packages − This method is best suited for those who are involved in development because with source packages they can get development versions or can alter the source code too. You can get the source packages for SciPy here."
},
{
"code": null,
"e": 6217,
"s": 5969,
"text": "Binaries − You can directly install the packages using its binary files. Binary files can either come from GitHub or PyPi or third-party repositories. For example, Ubuntu OS has package repositories from where you can download individual binaries."
},
{
"code": null,
"e": 6465,
"s": 6217,
"text": "Binaries − You can directly install the packages using its binary files. Binary files can either come from GitHub or PyPi or third-party repositories. For example, Ubuntu OS has package repositories from where you can download individual binaries."
}
] |
Deno.js | Introduction - GeeksforGeeks
|
18 Dec, 2021
Introduction: DenoJS is a secure runtime for JavaScript and TypeScript based on the V8 JavaScript Engine (developed by The Chromium Project, Google), the Rust programming language, and Tokio, an asynchronous runtime for Rust. NodeJS is also a JavaScript runtime which uses the V8 engine. DenoJS 1.0.0 was released on May 13th, 2020 and was created by Ryan Dahl, who is also the creator of NodeJS.
DenoJS aims to be a productive and secure scripting environment to provide an easy experience for the modern age developers. The creators of NodeJS had expressed several concerns over the functioning of NodeJS. They had expressed concerns over the security of Node, how Node handled packages and other legacy APIs within Node which will never change, among other things. Node was released in 2009 and since then JavaScript has changed a lot. They wanted to make a better version of NodeJS with modern JavaScript tools and APIs. They also wanted something compatible with the Browser and Server Environments and hence came the realization of DenoJS.
Advantages and Features of DenoJS:
Secure by Default: One of the biggest advantages of DenoJS is that it is secure by default. It runs in a sandbox environment and, unless specifically permitted, Deno isn’t allowed to access files, the environment, or the network, which is not the case in NodeJS. To access the environment or the network, we explicitly need to add security flags and permissions when executing a Deno application. If the respective flags are not added, it will give a PermissionDenied error when running the application. The list of security flags is provided below:
–allow-write: Allow Write Access
–allow-read: Allow Read Access
–allow-net: Allow Network Access
–allow-env: Allow Environment Access
–allow-plugin: Allow loading External Plugins
–allow-hrtime: Allow High Resolution Time Measurement
–allow-run: Allow Subprocesses to run
-A: Allow all Permissions
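As an illustrative sketch (the `deno_run_argv` helper and its mapping are hypothetical, not part of Deno), the flags above can be assembled into a `deno run` invocation programmatically:

```python
# Hypothetical helper: build the argv for `deno run` from permission names,
# using the --allow-* flag names listed above.
FLAGS = {
    "write": "--allow-write",
    "read": "--allow-read",
    "net": "--allow-net",
    "env": "--allow-env",
    "plugin": "--allow-plugin",
    "hrtime": "--allow-hrtime",
    "run": "--allow-run",
}

def deno_run_argv(script, permissions=()):
    """Return the argument vector for `deno run` with explicit permissions."""
    return ["deno", "run", *[FLAGS[p] for p in permissions], script]

print(deno_run_argv("server.ts", ["net", "read"]))
# ['deno', 'run', '--allow-net', '--allow-read', 'server.ts']
```

Omitting the permissions list yields a fully sandboxed run, which is exactly the secure-by-default behaviour described above.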
Typescript: DenoJS supports Typescript out of the Box. We can use TypeScript or JavaScript while developing Deno applications since the TypeScript compiler is also included within Deno. Hence we can simply create a new .ts file and it will be successfully compiled and executed with Deno.
Single Executable File: DenoJS ships as a single executable with no dependencies. Deno attempts to provide a standalone tool for quickly scripting complex functionality. Like a web browser, it knows how to fetch and import external code. In Deno, a single file can define complex behaviour without any other tooling. Given a URL to a Deno program, it is runnable with nothing more than the roughly 15 MB Deno executable. Deno explicitly takes on the role of both a runtime and package manager.
De-centralized Packages: One of the major drawbacks of NodeJS is how the dependencies are handled with NPM packages. For example, If we want to use Express with NodeJS, we would simply install using npm and the dependency would go to node_modules folder. The problem is that when Express is installed, it simply isn’t the Express package. It also includes the dependencies of Express. Hence we end up with lots of folders within node_modules, making the process of handling external packages extremely difficult as well as increasing the size of the application. DenoJS offers De-centralized Packages i.e. Deno does not use npm. There are no NPM packages for Deno and there is no package.json file and no node_modules dependencies folder. It uses a standard browser-compatible protocol for loading modules via URLs. It imports modules which are referenced via URLs or file paths from within the application. If we want to import and use an external module, we can simply Import it from the URL:
https://deno.land/x/[Package_Name]
Browser Compatibility: DenoJS was designed to be browser compatible. Deno scripts that are written completely in JavaScript and do not use the global Deno namespace can also be executed in a modern web browser without any code change. Deno also follows the standardized browser JavaScript APIs. Since Deno is browser compatible, we have access to and can use browser JavaScript APIs like fetch. We also have access to the global JavaScript window object.
ES Modules: Unlike NodeJS, Deno incorporates modern JavaScript syntax. Deno uses ES Modules (the import Syntax) and does not support the common JavaScript require Syntax used in NodeJS. All External ES modules are imported via URLs just like in GoLang, for example:
import { serve } from "https://deno.land/x/http/server.ts"
This Import is used to create a simple HTTP Server in Deno. Another Key feature of Deno is that after this module is imported, it is cached to the Hard drive on load. Remote code which is fetched will be cached to the Hard drive when it is first executed, and it will never be updated until the code is run with the –reload flag. According to official DenoJS documentation, modules and files loaded from remote URLs are intended to be immutable and cacheable.
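The caching behaviour described above can be sketched conceptually. The snippet below is a simplified illustration in Python (the `cache_path` helper, cache layout, and `fake_fetch` are hypothetical; real Deno's on-disk layout differs): a module URL maps to a stable cache key, and the remote source is fetched only on first import.

```python
import hashlib

def cache_path(url, deno_dir="~/.deno"):
    """Map a module URL to a stable cache key (simplified sketch of
    keying a URL-addressed module cache by a hash of the URL)."""
    digest = hashlib.sha256(url.encode()).hexdigest()
    return f"{deno_dir}/deps/{digest}"

cache = {}

def load_module(url, fetch):
    """Fetch a remote module once; subsequent imports hit the local cache."""
    key = cache_path(url)
    if key not in cache:          # only fetched on first use
        cache[key] = fetch(url)
    return cache[key]

calls = []
def fake_fetch(url):
    """Stand-in for a network fetch; records how often it is called."""
    calls.append(url)
    return f"source of {url}"

url = "https://deno.land/x/http/server.ts"
load_module(url, fake_fetch)
load_module(url, fake_fetch)      # served from cache, no second fetch
print(len(calls))  # 1
```

Clearing the cache and refetching corresponds to running Deno with the --reload flag mentioned below.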
Standard Modules: Deno has an extensive set of standard library modules that are audited by the Deno team and are guaranteed to work with Deno. These standard modules are hosted here and are distributed via URLs like all other ES modules that are compatible with Deno. Some of these standard libraries are fs, datetime, http, etc., much like the NodeJS modules. Deno can also import modules from any location on the web, like GitHub, a personal webserver, or a CDN (Content Delivery Network). More standard modules, as well as external modules, are being added to Deno every day.
Top Level Await: Another core feature of Deno is its top-level (first-class) await syntax. In Deno, we can use the await syntax in the global scope without having to wrap it in an async function. Also, all async actions in Deno return a Promise, which avoids the callback hell that nested callbacks can lead to in Node.
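The same pattern can be illustrated outside Deno. The Python sketch below is purely illustrative (`fetch_greeting` is a made-up coroutine): it shows awaiting an async result at the program's entry point without nested callbacks, analogous to awaiting a Promise at the top level of a Deno script.

```python
import asyncio

async def fetch_greeting(name):
    # Simulate an async action that produces a value via a coroutine,
    # analogous to a Deno API returning a Promise.
    await asyncio.sleep(0)
    return f"Hello, {name}"

# asyncio.run plays the role of Deno's top-level await entry point:
# the awaited result is available directly, with no nested callbacks.
result = asyncio.run(fetch_greeting("Deno"))
print(result)  # Hello, Deno
```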
Utilities: Deno provides built-In Testing and has built-in utilities like a dependency inspector (deno info) and a code formatter (deno fmt). Deno also allows direct execution of WebAssembly WASM Binaries. NodeJS is known for its HTTP and Data Streaming capabilities, however Deno is able to serve HTTP more efficiently than Node.
Installation of DenoJS: For the different ways of installing DenoJS, refer to the official Deno installation instructions. We will be installing Deno using Chocolatey for Windows. To install Deno using Chocolatey, run the following command:
choco install deno
This will install Deno on the local System to the default $HOME/.deno directory. This is the default Deno’s Base directory and is referenced via the environmental variable DENO_DIR.
Getting Started: To check DenoJS Installation, run the command:
deno --version
This should provide you with the version of TypeScript, V8 Engine and Deno.
If we simply execute the command:
deno
It runs the command:
deno repl
This opens the REPL (Read-Eval-Print Loop) interface, meaning we can type basic JavaScript code at the command line and it will be compiled and executed, with the result printed.
We will execute a simple Deno script on our local machine, hosted at the standard deno.land/std/ URL. We can run this script directly from the remote URL by using the command:
deno run https://deno.land/std/examples/welcome.ts
This is the link to the welcome.ts script in the official Deno docs examples. We can view the source code by simply navigating to that URL. The deno run command compiles and executes the script and displays the result in your console. Deno automatically detects whether we are running the script via the URL or just viewing it in the browser.
To install the script on the local machine, we can run the command:
deno install https://deno.land/std/examples/welcome.ts
This will install the welcome.ts script to the $Home/.deno/bin directory. The file will be downloaded as a .cmd file on Windows. In case the Installation already Exists, we need to explicitly overwrite it.
Note: Run CMD with Administrator Privileges to avoid unnecessary errors with Deno. We will create a welcome.ts file on the local machine and execute it with Deno.
welcome.ts
Javascript
console.log("Welcome to GeeksForGeeks");
To execute this file, Run the command:
deno run welcome.ts
Output:
|
[
{
"code": null,
"e": 24654,
"s": 24626,
"text": "\n18 Dec, 2021"
},
{
"code": null,
"e": 25054,
"s": 24654,
"text": "Introduction: DenoJS is a Secure runtime for JavaScript and TypeScript based on the V8 JavaScript Engine (developed by The Chromium Project, Google), Rust Programming Language and Tokio which is an asynchronous runtime for Rust. NodeJS is also a JavaScript runtime which uses the V8 engine. DenoJS 1.0.0 was released on May 13th, 2020 and is created by Ryan Dahl, who is also the creator for NodeJS."
},
{
"code": null,
"e": 25703,
"s": 25054,
"text": "DenoJS aims to be a productive and secure scripting environment to provide an easy experience for the modern age developers. The creators of NodeJS had expressed several concerns over the functioning of NodeJS. They had expressed concerns over the security of Node, how Node handled packages and other legacy APIs within Node which will never change, among other things. Node was released in 2009 and since then JavaScript has changed a lot. They wanted to make a better version of NodeJS with modern JavaScript tools and APIs. They also wanted something compatible with the Browser and Server Environments and hence came the realization of DenoJS."
},
{
"code": null,
"e": 25739,
"s": 25703,
"text": "Advantages and Features of DenoJS: "
},
{
"code": null,
"e": 26577,
"s": 25739,
"text": "Secure by Default: One of the biggest advantages of DenoJS is that it is Secure by Default. It runs in a Sandbox environment and unless specifically permitted, Deno isn’t allowed to access files, the environment, or the network which is not the case in NodeJS. To access the environment or the network we explicitly need to add Security flags and Permissions when executing a Deno application. If the respective flags are not added, It will give a PermissionsDenied error when running the application. The list of Security flags are provided below–allow-write: Allow Write Access–allow-read: Allow Read Access–allow-net: Allow Network Access–allow-env: Allow Environment Access–allow-plugin: Allow loading External Plugins–allow-hrtime: Allow High Resolution Time Measurement–allow-run: Allow Subprocesses to run-A: Allow all Permissions"
},
{
"code": null,
"e": 26610,
"s": 26577,
"text": "–allow-write: Allow Write Access"
},
{
"code": null,
"e": 26641,
"s": 26610,
"text": "–allow-read: Allow Read Access"
},
{
"code": null,
"e": 26674,
"s": 26641,
"text": "–allow-net: Allow Network Access"
},
{
"code": null,
"e": 26711,
"s": 26674,
"text": "–allow-env: Allow Environment Access"
},
{
"code": null,
"e": 26757,
"s": 26711,
"text": "–allow-plugin: Allow loading External Plugins"
},
{
"code": null,
"e": 26811,
"s": 26757,
"text": "–allow-hrtime: Allow High Resolution Time Measurement"
},
{
"code": null,
"e": 26849,
"s": 26811,
"text": "–allow-run: Allow Subprocesses to run"
},
{
"code": null,
"e": 26875,
"s": 26849,
"text": "-A: Allow all Permissions"
},
{
"code": null,
"e": 27164,
"s": 26875,
"text": "Typescript: DenoJS supports Typescript out of the Box. We can use TypeScript or JavaScript while developing Deno applications since the TypeScript compiler is also included within Deno. Hence we can simply create a new .ts file and it will be successfully compiled and executed with Deno."
},
{
"code": null,
"e": 27644,
"s": 27164,
"text": "Single Executable File: DenoJS ships as a single executable with no dependencies. Deno attempts to provide a standalone tool for quickly scripting complex functionality. Like a web browser, it knows how to fetch and import external code. In Deno, a single file can define complex behaviour without any other tooling. Given a URL to a Deno program, it is runnable with nothing more than the 15 MB of memory. Deno explicitly takes on the role of both a runtime and package manager."
},
{
"code": null,
"e": 28639,
"s": 27644,
"text": "De-centralized Packages: One of the major drawbacks of NodeJS is how the dependencies are handled with NPM packages. For example, If we want to use Express with NodeJS, we would simply install using npm and the dependency would go to node_modules folder. The problem is that when Express is installed, it simply isn’t the Express package. It also includes the dependencies of Express. Hence we end up with lots of folders within node_modules, making the process of handling external packages extremely difficult as well as increasing the size of the application. DenoJS offers De-centralized Packages i.e. Deno does not use npm. There are no NPM packages for Deno and there is no package.json file and no node_modules dependencies folder. It uses a standard browser-compatible protocol for loading modules via URLs. It imports modules which are referenced via URLs or file paths from within the application. If we want to import and use an external module, we can simply Import it from the URL:"
},
{
"code": null,
"e": 28674,
"s": 28639,
"text": "https://deno.land/x/[Package_Name]"
},
{
"code": null,
"e": 29141,
"s": 28674,
"text": "Browser Compatibility: DenoJS was designed to be Browser Compatible. The set of Deno scripts which are written completely in JavaScript and do not use the global Deno namespace can also be executed using Deno in a modern web browser without any code change. Deno also follows the standardized browser JavaScript APIs. Since Deno is browser compatible, we have access and can use JavaScript APIs like fetch. We also have access to the global JavaScript window object."
},
{
"code": null,
"e": 29407,
"s": 29141,
"text": "ES Modules: Unlike NodeJS, Deno incorporates modern JavaScript syntax. Deno uses ES Modules (the import Syntax) and does not support the common JavaScript require Syntax used in NodeJS. All External ES modules are imported via URLs just like in GoLang, for example:"
},
{
"code": null,
"e": 29466,
"s": 29407,
"text": "import { serve } from \"https://deno.land/x/http/server.ts\""
},
{
"code": null,
"e": 29926,
"s": 29466,
"text": "This Import is used to create a simple HTTP Server in Deno. Another Key feature of Deno is that after this module is imported, it is cached to the Hard drive on load. Remote code which is fetched will be cached to the Hard drive when it is first executed, and it will never be updated until the code is run with the –reload flag. According to official DenoJS documentation, modules and files loaded from remote URLs are intended to be immutable and cacheable."
},
{
"code": null,
"e": 30517,
"s": 29926,
"text": "Standard Modules: Deno has an extensive set of Standard Library Modules that are audited by the Deno team and are guaranteed to work with Deno. These standard modules are hosted here and are distributed via URLs like all other ES modules that are compatible with Deno. Some of these standard libraries are fs, datetime, http, etc much like the NodeJS modules. Deno can also import modules from any location on the web, like GitHub, a personal webserver, or a CDN (Content Delivery Network). There are more number of standard modules as well as external modules being added to Deno everyday."
},
{
"code": null,
"e": 30858,
"s": 30517,
"text": "Top Level Await: Another Core and important feature of Deno is top-level/First Class Await Syntax. In Deno, we can use the Await Syntax in the global Scope without having to wrap it in the Async Function. Also all async actions in Deno return a Promise which can remove/avoid the Callback Hell that Node can lead to due to nested callbacks."
},
{
"code": null,
"e": 31189,
"s": 30858,
"text": "Utilities: Deno provides built-In Testing and has built-in utilities like a dependency inspector (deno info) and a code formatter (deno fmt). Deno also allows direct execution of WebAssembly WASM Binaries. NodeJS is known for its HTTP and Data Streaming capabilities, however Deno is able to serve HTTP more efficiently than Node."
},
{
"code": null,
"e": 31402,
"s": 31189,
"text": "Installation of DenoJS: For the different ways on Installing DenoJS, Refer to this link. We will be Installing Deno using Chocolatey for Windows. For Installing Deno using Chocolatey, run the following command: "
},
{
"code": null,
"e": 31421,
"s": 31402,
"text": "choco install deno"
},
{
"code": null,
"e": 31605,
"s": 31421,
"text": "This will install Deno on the local System to the default $HOME/.deno directory. This is the default Deno’s Base directory and is referenced via the environmental variable DENO_DIR. "
},
{
"code": null,
"e": 31670,
"s": 31605,
"text": "Getting Started: To check DenoJS Installation, run the command: "
},
{
"code": null,
"e": 31685,
"s": 31670,
"text": "deno --version"
},
{
"code": null,
"e": 31762,
"s": 31685,
"text": "This should provide you with the version of TypeScript, V8 Engine and Deno. "
},
{
"code": null,
"e": 31797,
"s": 31762,
"text": "If we simply execute the command: "
},
{
"code": null,
"e": 31802,
"s": 31797,
"text": "deno"
},
{
"code": null,
"e": 31824,
"s": 31802,
"text": "It runs the command: "
},
{
"code": null,
"e": 31834,
"s": 31824,
"text": "deno repl"
},
{
"code": null,
"e": 32042,
"s": 31834,
"text": "Which opens up the REPL interface that stands for READ EVAL PRINT LOOP which means that we can type in basic JavaScript code from within the Command-line and it will compile, executes and prints the result. "
},
{
"code": null,
"e": 32213,
"s": 32042,
"text": "We will execute a simple Deno Script in our local, located at the standard deno.land/std/ URL. We can directly run this Script from the remote URL by using the Command: "
},
{
"code": null,
"e": 32264,
"s": 32213,
"text": "deno run https://deno.land/std/examples/welcome.ts"
},
{
"code": null,
"e": 32615,
"s": 32264,
"text": "This is the link to the welcome.ts script in the official Deno docs Examples. We can view the Source code by simply navigating to that URL. The deno run command compiles the Script and executes the Script to display the result in your console. Deno Automatically knows whether we are running this script via the URL or just viewing it in the Browser."
},
{
"code": null,
"e": 32684,
"s": 32615,
"text": "To install the script on the local machine, we can run the command: "
},
{
"code": null,
"e": 32739,
"s": 32684,
"text": "deno install https://deno.land/std/examples/welcome.ts"
},
{
"code": null,
"e": 32946,
"s": 32739,
"text": "This will install the welcome.ts script to the $Home/.deno/bin directory. The file will be downloaded as a .cmd file on Windows. In case the Installation already Exists, we need to explicitly overwrite it. "
},
{
"code": null,
"e": 33122,
"s": 32948,
"text": "Note: Run CMD with Administrator Privileges to avoid unnecessary errors with Deno.We will create a welcome.ts file on the local machine and execute it with Deno. welcome.ts "
},
{
"code": null,
"e": 33133,
"s": 33122,
"text": "Javascript"
},
{
"code": "console.log(\"Welcome to GeeksForGeeks\");",
"e": 33174,
"s": 33133,
"text": null
},
{
"code": null,
"e": 33214,
"s": 33174,
"text": "To execute this file, Run the command: "
},
{
"code": null,
"e": 33234,
"s": 33214,
"text": "deno run welcome.ts"
},
{
"code": null,
"e": 33244,
"s": 33234,
"text": "Output: "
},
{
"code": null,
"e": 33261,
"s": 33246,
"text": "varshagumber28"
},
{
"code": null,
"e": 33281,
"s": 33261,
"text": "abhishek0719kadiyan"
},
{
"code": null,
"e": 33295,
"s": 33281,
"text": "sumitgumber28"
},
{
"code": null,
"e": 33313,
"s": 33295,
"text": "gulshankumarar231"
},
{
"code": null,
"e": 33323,
"s": 33313,
"text": "as5853535"
},
{
"code": null,
"e": 33339,
"s": 33323,
"text": "JavaScript-Misc"
},
{
"code": null,
"e": 33350,
"s": 33339,
"text": "JavaScript"
},
{
"code": null,
"e": 33358,
"s": 33350,
"text": "Node.js"
},
{
"code": null,
"e": 33375,
"s": 33358,
"text": "Web Technologies"
},
{
"code": null,
"e": 33473,
"s": 33375,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 33518,
"s": 33473,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 33579,
"s": 33518,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 33651,
"s": 33579,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 33703,
"s": 33651,
"text": "How to append HTML code to a div using JavaScript ?"
},
{
"code": null,
"e": 33749,
"s": 33703,
"text": "How to Open URL in New Tab using JavaScript ?"
},
{
"code": null,
"e": 33782,
"s": 33749,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 33830,
"s": 33782,
"text": "How to update Node.js and NPM to next version ?"
},
{
"code": null,
"e": 33850,
"s": 33830,
"text": "How to update NPM ?"
},
{
"code": null,
"e": 33907,
"s": 33850,
"text": "How to install the previous version of node.js and npm ?"
}
] |
Addressing modes of 8051
|
In this section, we will see the different addressing modes of the 8051 microcontroller. In the 8051 there are 1-byte and 2-byte instructions, and very few 3-byte instructions are present. The opcodes are 8-bit long. As the opcodes are 8-bit data, there are 256 possibilities. Among 256, 255 opcodes are implemented.
The clock frequency is 12 MHz, so 64 instruction types are executed in just 1 μs, and the rest take 2 μs. The multiplication and division operations take 4 μs to execute.
In the 8051 there are six types of addressing modes:
Immediate Addressing Mode
Register Addressing Mode
Direct Addressing Mode
Register Indirect Addressing Mode
Indexed Addressing Mode
Implied Addressing Mode
In this Immediate Addressing Mode, the data is provided in the instruction itself. The data is provided immediately after the opcode. These are some examples of Immediate Addressing Mode.
MOV A, #0AFH;
MOV R3, #45H;
MOV DPTR, #0FE00H;
In these instructions, the # symbol is used for immediate data. In the last instruction, there is DPTR. DPTR stands for Data Pointer; using it, we point to an external data memory location. In the first instruction, the immediate data is AFH, but a 0 is added at the beginning: when the data starts with a hex digit from A to F, it should be preceded by a 0.
In the register addressing mode the source or destination data should be present in a register (R0 to R7). These are some examples of Register Addressing Mode.
MOV A, R5;
MOV R2, #45H;
MOV R0, A;
In 8051, there is no instruction like MOV R5, R7. But we can get the same result by using the instruction MOV R5, 07H, or by using MOV 05H, R7. These two instructions will work when the selected register bank is RB0. To use another register bank and get the same effect, we have to add the starting address of that register bank to the register number. For example, if RB2 is selected and we want to access R5, then the address will be (10H + 05H = 15H), so the instruction will look like this: MOV 15H, R7. Here 10H is the starting address of Register Bank 2.
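The bank arithmetic above can be sketched in a couple of lines of Python (this is plain address arithmetic for illustration, not 8051 code; reg_address is a made-up helper):

```python
# Each of the four register banks RB0-RB3 occupies 8 bytes of internal
# RAM, so bank n starts at address n * 8 (00H, 08H, 10H, 18H).
def reg_address(bank, reg):
    """Direct RAM address of register R<reg> in bank RB<bank>."""
    return bank * 8 + reg

print(hex(reg_address(0, 5)))  # 0x5  -> R5 in RB0 lives at 05H
print(hex(reg_address(2, 5)))  # 0x15 -> R5 in RB2 lives at 15H (10H + 05H)
```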
In the Direct Addressing Mode, the source or destination address is specified by using 8-bit data in the instruction. Only the internal data memory can be used in this mode. Here are some examples of Direct Addressing Mode.
MOV 80H, R6;
MOV R2, 45H;
MOV R0, 05H;
The first instruction will send the content of register R6 to port P0 (the address of Port 0 is 80H). The second one is for getting the content of location 45H into R2. The third one is used to get data from location 05H (register R5 when register bank RB0 is selected) into register R0.
In this mode, the source or destination address is given in the register. By using register indirect addressing mode, the internal or external addresses can be accessed. The R0 and R1 are used for 8-bit addresses, and DPTR is used for 16-bit addresses, no other registers can be used for addressing purposes. Let us see some examples of this mode.
MOV 0E5H, @R0;
MOV @R1, 80H;
In the instructions, the @ symbol is used for register indirect addressing. The first instruction shows that the R0 register is used. If the content of R0 is 40H, then that instruction will take the data located at location 40H of the internal RAM. In the second one, if the content of R1 is 30H, then it indicates that the content of port P0 will be stored at location 30H in the internal RAM.
MOVX A, @R1;
MOVX @DPTR, A;
In these two instructions, the X in MOVX indicates the external data memory. The external data memory can only be accessed in register indirect mode. In the first instruction, if R1 is holding 40H, then A will get the content of external RAM location 40H. And in the second one, the content of A is written to the location pointed to by DPTR.
In the indexed addressing mode, the source operand can be accessed from program memory only. The destination operand is always the register A. These are some examples of Indexed Addressing Mode.
MOVC A, @A+PC;
MOVC A, @A+DPTR;
The C in the MOVC instruction refers to a code byte. For the first instruction, let us consider that A holds 30H and the PC value is 1125H. The content of program memory location 1155H (30H + 1125H) is moved to register A.
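The effective-address computation above can be mimicked in a few lines of Python (movc_address is a made-up helper for illustration):

```python
# MOVC A, @A+PC / MOVC A, @A+DPTR: the code byte is fetched from
# base + A, where base is the 16-bit PC or DPTR value.
def movc_address(base, acc):
    return (base + acc) & 0xFFFF  # program addresses wrap at 16 bits

print(hex(movc_address(0x1125, 0x30)))  # 0x1155, as in the example above
```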
In the implied addressing mode, there will be a single operand. These types of instructions work on specific registers only, and are also known as register-specific instructions. Here are some examples of Implied Addressing Mode.
RL A;
SWAP A;
These are 1-byte instructions. The first one is used to rotate the content of register A to the left. The second one is used to swap the nibbles in A.
|
[
{
"code": null,
"e": 1368,
"s": 1062,
"text": "In this section, we will see different addressing modes of the 8051 microcontrollers. In 8051 there are 1-byte, 2-byte instructions and very few 3-byte instructions are present. The opcodes are 8-bit long. As the opcodes are 8-bit data, there are 256 possibilities. Among 256, 255 opcodes are implemented."
},
{
"code": null,
"e": 1539,
"s": 1368,
"text": "The clock frequency is12MHz, so 64 instruction types are executed in just 1 μs, and rest are just 2 μs. The Multiplication and Division operations take 4 μsto to execute."
},
{
"code": null,
"e": 1589,
"s": 1539,
"text": "In 8051 There are six types of addressing modes. "
},
{
"code": null,
"e": 1614,
"s": 1589,
"text": "Immediate AddressingMode"
},
{
"code": null,
"e": 1639,
"s": 1614,
"text": "Immediate AddressingMode"
},
{
"code": null,
"e": 1663,
"s": 1639,
"text": "Register AddressingMode"
},
{
"code": null,
"e": 1687,
"s": 1663,
"text": "Register AddressingMode"
},
{
"code": null,
"e": 1709,
"s": 1687,
"text": "Direct AddressingMode"
},
{
"code": null,
"e": 1731,
"s": 1709,
"text": "Direct AddressingMode"
},
{
"code": null,
"e": 1764,
"s": 1731,
"text": "Register IndirectAddressing Mode"
},
{
"code": null,
"e": 1797,
"s": 1764,
"text": "Register IndirectAddressing Mode"
},
{
"code": null,
"e": 1820,
"s": 1797,
"text": "Indexed AddressingMode"
},
{
"code": null,
"e": 1843,
"s": 1820,
"text": "Indexed AddressingMode"
},
{
"code": null,
"e": 1866,
"s": 1843,
"text": "Implied AddressingMode"
},
{
"code": null,
"e": 1889,
"s": 1866,
"text": "Implied AddressingMode"
},
{
"code": null,
"e": 2077,
"s": 1889,
"text": "In this Immediate Addressing Mode, the data is provided in the instruction itself. The data is provided immediately after the opcode. These are some examples of Immediate Addressing Mode."
},
{
"code": null,
"e": 2120,
"s": 2077,
"text": "MOVA, #0AFH;\nMOVR3, #45H;\nMOVDPTR, #FE00H;"
},
{
"code": null,
"e": 2481,
"s": 2120,
"text": "In these instructions, the # symbol is used for immediate data. In the last instruction, there is DPTR. The DPTR stands for Data Pointer. Using this, it points the external data memory location. In the first instruction, the immediate data is AFH, but one 0 is added at the beginning. So when the data is starting with A to F, the data should be preceded by 0."
},
{
"code": null,
"e": 2640,
"s": 2481,
"text": "In the register addressing mode the source or destination data should be present in a register (R0 to R7). These are some examples of RegisterAddressing Mode."
},
{
"code": null,
"e": 2673,
"s": 2640,
"text": "MOVA, R5;\nMOVR2, #45H;\nMOVR0, A;"
},
{
"code": null,
"e": 3248,
"s": 2673,
"text": "In 8051, there is no instruction like MOVR5, R7. But we can get the same result by using this instruction MOV R5, 07H, or by using MOV 05H, R7. But this two instruction will work when the selected register bank is RB0. To use another register bank and to get the same effect, we have to add the starting address of that register bank with the register number. For an example, if the RB2 is selected, and we want to access R5, then the address will be (10H + 05H = 15H), so the instruction will look like this MOV 15H, R7. Here 10H is the starting address of Register Bank 2."
},
{
"code": null,
"e": 3475,
"s": 3248,
"text": "In the Direct Addressing Mode, the source or destination address is specified by using 8-bit data in the instruction. Only the internal data memory can be used in this mode. Here some of the examples of direct Addressing Mode."
},
{
"code": null,
"e": 3511,
"s": 3475,
"text": "MOV80H, R6;\nMOVR2, 45H;\nMOVR0, 05H;"
},
{
"code": null,
"e": 3765,
"s": 3511,
"text": "The first instruction will send the content of registerR6 to port P0 (Address of Port 0 is 80H). The second one is forgetting content from 45H to R2. The third one is used to get data from Register R5 (When register bank RB0 is selected) to register R5."
},
{
"code": null,
"e": 4113,
"s": 3765,
"text": "In this mode, the source or destination address is given in the register. By using register indirect addressing mode, the internal or external addresses can be accessed. The R0 and R1 are used for 8-bit addresses, and DPTR is used for 16-bit addresses, no other registers can be used for addressing purposes. Let us see some examples of this mode."
},
{
"code": null,
"e": 4139,
"s": 4113,
"text": "MOV0E5H, @R0;\nMOV@R1, 80H"
},
{
"code": null,
"e": 4554,
"s": 4139,
"text": "In the instructions, the @ symbol is used for register indirect addressing. In the first instruction, it is showing that theR0 register is used. If the content of R0 is 40H, then that instruction will take the data which is located at location 40H of the internal RAM. In the second one, if the content of R1 is 30H, then it indicates that the content of port P0 will be stored at location 30H in the internal RAM."
},
{
"code": null,
"e": 4579,
"s": 4554,
"text": "MOVXA, @R1;\nMOV@DPTR, A;"
},
{
"code": null,
"e": 4925,
"s": 4579,
"text": "In these two instructions, the X in MOVX indicates the external data memory. The external data memory can only be accessed in register indirect mode. In the first instruction if the R0 is holding 40H, then A will get the content of external RAM location40H. And in the second one, the content of A is overwritten in the location pointed by DPTR."
},
{
"code": null,
"e": 5124,
"s": 4925,
"text": "In the indexed addressing mode, the source memory can only be accessed from program memory only. The destination operand is always the register A. These are some examples of Indexed addressing mode."
},
{
"code": null,
"e": 5154,
"s": 5124,
"text": "MOVCA, @A+PC;\nMOVCA, @A+DPTR;"
},
{
"code": null,
"e": 5368,
"s": 5154,
"text": "The C in MOVC instruction refers to code byte. For the first instruction, let us consider A holds 30H. And the PC value is1125H. The contents of program memory location 1155H (30H + 1125H) are moved to register A."
},
{
"code": null,
"e": 5624,
"s": 5368,
"text": "In the implied addressing mode, there will be a single operand. These types of instruction can work on specific registers only. These types of instructions are also known as register specific instruction. Here are some examples of Implied Addressing Mode."
},
{
"code": null,
"e": 5636,
"s": 5624,
"text": "RLA;\nSWAPA;"
},
{
"code": null,
"e": 5784,
"s": 5636,
"text": "These are 1- byte instruction. The first one is used to rotate the A register content to the Left. The second one is used to swap the nibbles in A."
}
] |
How to get a number of vowels in a string in JavaScript? - GeeksforGeeks
|
18 Jan, 2021
The approach of this article is to return the number of vowels in a string using JavaScript. A vowel is a letter that represents an open speech sound; the vowels in English are a, e, i, o and u.
Example:
Input:GeeksForGeeks
Output: 5
Input: Hello Geeks
Output: 4
Explanation: Here we create a user-defined function called “getVowels()” which reads a string and compares each of its characters with a list of vowels. Whenever a vowel matches, the value of vowelsCount is incremented.
Example: Below code will illustrate the approach.
HTML
<html>
<head>
    <title>
        How to get a number of vowels in a string in JavaScript?
    </title>
</head>
<body>
    <h1>GeeksforGeeks</h1>
    <h2>
        How to get a number of vowels in a string in JavaScript?
    </h2>
    <script>
        function getVowels(string) {
            var Vowels = 'aAeEiIoOuU';
            var vowelsCount = 0;
            for (var i = 0; i < string.length; i++) {
                if (Vowels.indexOf(string[i]) !== -1) {
                    vowelsCount += 1;
                }
            }
            return vowelsCount;
        }
        document.write("The Number of vowels in -"
            + " A Computer Science Portal for Geeks:"
            + getVowels("A Computer Science Portal for Geeks"));
    </script>
</body>
</html>
Output:
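For comparison, the same count can be obtained with a regular expression instead of the loop. A minimal sketch (the name countVowels is illustrative):

```javascript
// Count vowels with a case-insensitive global match.
// match() returns null when there is no vowel at all,
// so fall back to an empty array before taking the length.
function countVowels(str) {
    return (str.match(/[aeiou]/gi) || []).length;
}

console.log(countVowels("A Computer Science Portal for Geeks")); // 12
console.log(countVowels("GeeksForGeeks")); // 5
```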
javascript-basics
javascript-string
JavaScript
Web Technologies
Web technologies Questions
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Difference between var, let and const keywords in JavaScript
Difference Between PUT and PATCH Request
How to get character array from string in JavaScript?
Remove elements from a JavaScript Array
How to filter object array based on attributes?
Roadmap to Become a Web Developer in 2022
Installation of Node.js on Linux
How to fetch data from an API in ReactJS ?
How to insert spaces/tabs in text using HTML/CSS?
Top 10 Projects For Beginners To Practice HTML and CSS Skills
|
[
{
"code": null,
"e": 25300,
"s": 25272,
"text": "\n18 Jan, 2021"
},
{
"code": null,
"e": 25507,
"s": 25300,
"text": "The Approach of this article is to return the number of vowels in a string using Javascript. A vowel is also a letter that represents a sound produced in this way: The vowels in English are a, e, i, o, u. "
},
{
"code": null,
"e": 25516,
"s": 25507,
"text": "Example:"
},
{
"code": null,
"e": 25582,
"s": 25516,
"text": "Input:GeeksForGeeks\nOutput: 5 \n\nInput: Hello Geeks\nOutput: 4"
},
{
"code": null,
"e": 25836,
"s": 25582,
"text": "Explanation: Here we create a user-defined function called “getvowels()” which reads a string and compare with list of vowels. It compares each character of a string with vowels. When the vowels is matches it will increment the value of the vowelsCount."
},
{
"code": null,
"e": 25886,
"s": 25836,
"text": "Example: Below code will illustrate the approach."
},
{
"code": null,
"e": 25891,
"s": 25886,
"text": "HTML"
},
{
"code": "<html> <head> <title> How to get a number of vowels in a string in JavaScript? </title> </head> <body> <h1> GeeksforGeeks </h1> <h2> How to get a number of vowels in a string in JavaScript? </h2><script> function getVowels(string) { var Vowels = 'aAeEiIoOuU'; var vowelsCount = 0; for(var i = 0; i < string.length ; i++) { if (Vowels.indexOf(string[i]) !== -1) { vowelsCount += 1; } } return vowelsCount; } document.write(\"The Number of vowels in -\"+ \" A Computer Science Portal for Geeks:\" + getVowels(\"A Computer Science Portal for Geeks\"));</script></body></html>",
"e": 26616,
"s": 25891,
"text": null
},
{
"code": null,
"e": 26624,
"s": 26616,
"text": "Output:"
},
{
"code": null,
"e": 26642,
"s": 26624,
"text": "javascript-basics"
},
{
"code": null,
"e": 26660,
"s": 26642,
"text": "javascript-string"
},
{
"code": null,
"e": 26671,
"s": 26660,
"text": "JavaScript"
},
{
"code": null,
"e": 26688,
"s": 26671,
"text": "Web Technologies"
},
{
"code": null,
"e": 26715,
"s": 26688,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 26813,
"s": 26715,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26874,
"s": 26813,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 26915,
"s": 26874,
"text": "Difference Between PUT and PATCH Request"
},
{
"code": null,
"e": 26969,
"s": 26915,
"text": "How to get character array from string in JavaScript?"
},
{
"code": null,
"e": 27009,
"s": 26969,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 27057,
"s": 27009,
"text": "How to filter object array based on attributes?"
},
{
"code": null,
"e": 27099,
"s": 27057,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 27132,
"s": 27099,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 27175,
"s": 27132,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 27225,
"s": 27175,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
Pandas Display Options. When the default setting does not meet... | by Soner Yıldırım | Towards Data Science
|
Pandas is a very powerful Python data analysis library that expedites the preprocessing steps of your project. The core data structure of Pandas is the DataFrame, which represents data in tabular form with labeled rows and columns. Pandas comes with many display options for a DataFrame. The default settings work well in most cases, but you may need to adjust them depending on the characteristics of the dataset. With a wide range of setting options, pandas allows you to create a tailor-made display preference.
Display options can be handled with two functions:
get_option: Shows the option for a setting
set_option: Allows to change the option
Let’s import a sample dataframe and go through with examples:
import pandas as pd
import numpy as np

df = pd.read_csv("/content/Telco-Customer-Churn.csv")

df.shape
(7043, 21)

df.head()
The dataframe has 21 columns but only 10 columns are displayed. The ones in the middle are represented by dots. We can easily adjust this, but let’s first check the option for displaying columns:
pd.get_option("display.max_columns")
10
It is 10, as expected. We can adjust it with a similar syntax using the set_option function, indicating how many columns to display:
pd.set_option("display.max_columns", 25)

df.head()
All of the columns are displayed now. They do not fit the screen but we can see them using the scroll bar at the bottom.
This option is useful when we have wide dataframes that contain many features of an observation.
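The effect is easy to verify on a synthetic frame. Below is a small sketch (the 21 throwaway columns stand in for the wide Telco frame; the names are made up):

```python
import numpy as np
import pandas as pd

# Any frame wider than display.max_columns is truncated in its repr.
wide = pd.DataFrame(np.zeros((3, 21)), columns=[f"col{i}" for i in range(21)])

pd.set_option("display.max_columns", 10)
print("..." in repr(wide))   # True: the middle columns collapse to dots

pd.set_option("display.max_columns", 25)
print("..." in repr(wide))   # False: all 21 columns are printed
```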
A similar option exists for displaying rows:
pd.get_option("display.max_rows")
60
So if we want to view 100 rows, we can adjust this setting similarly:
pd.set_option("display.max_rows", 100)
There is also an option to adjust the displayed width of a column. In some cases, datasets include strings that are too long to display with the default setting. If we want to view the complete string, we can use the max_colwidth parameter. Let’s first see the default option:
pd.get_option("display.max_colwidth")
50
So if a cell includes more than 50 characters, we are not able to see all of it. I created a simple dataframe to show how it looks with truncated view:
Let’s increase the width to see the complete text:
pd.set_option("display.max_colwidth", 150)
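A quick way to see the cut-off in action, using a made-up single-column frame:

```python
import pandas as pd

notes = pd.DataFrame({"comment": ["x" * 70]})  # one 70-character cell

pd.set_option("display.max_colwidth", 50)
print("..." in repr(notes))   # True: the cell is truncated with dots

pd.set_option("display.max_colwidth", 150)
print("..." in repr(notes))   # False: the full 70 characters are shown
```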
Another display option we may need to adjust is the precision of floating point numbers. The default value should work just fine but there might be some extreme cases that require more precision:
pd.get_option("display.precision")
6
The default value is 6. Let’s increase it to 10. We may also want to decrease it to make the output look simpler.
pd.set_option("display.precision", 10)
There are many more Pandas display options that can be adjusted. I wanted to emphasize the ones that you may need more frequently. If you would like to see the whole list, you can always visit Pandas documentation.
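Two related tools from the same options API are worth knowing: pd.option_context applies settings temporarily, and pd.reset_option restores a setting to its default.

```python
import pandas as pd

# Temporarily widen the display inside a with-block only:
with pd.option_context("display.max_rows", 100, "display.precision", 10):
    print(pd.get_option("display.max_rows"))   # 100 inside the block

print(pd.get_option("display.max_rows"))       # back to the previous value

# Restore individual settings to their defaults:
pd.reset_option("display.max_rows")
pd.reset_option("display.precision")
print(pd.get_option("display.max_rows"))       # 60
print(pd.get_option("display.precision"))      # 6
```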
If you would like to learn more about Pandas, here is a list of detailed Pandas posts:
A Complete Pandas Guide
Time Series Analysis with Pandas
Handling Missing Values with Pandas
Pandas — Save Memory with These Simple Tricks
Reshaping Pandas DataFrames
Pandas String Operations — Explained
Pandas Groupby — Explained
How to “read_csv” with Pandas
Thank you for reading. Please let me know if you have any feedback.
|
[
{
"code": null,
"e": 672,
"s": 171,
"text": "Pandas is a very powerful Python data analysis library that expedites the preprocessing steps of your project. The core data structure of Pandas is DataFrame which represents data in tabular form with labeled rows and columns. Pandas comes with many display options for a DataFrame. The default settings work well in most cases but you may need adjust them depending on the characteristics of the dataset. With a wide range of setting options, pandas allows to create a tailormade display preference."
},
{
"code": null,
"e": 723,
"s": 672,
"text": "Display options can be handled with two functions:"
},
{
"code": null,
"e": 766,
"s": 723,
"text": "get_option: Shows the option for a setting"
},
{
"code": null,
"e": 806,
"s": 766,
"text": "set_option: Allows to change the option"
},
{
"code": null,
"e": 868,
"s": 806,
"text": "Let’s import a sample dataframe and go through with examples:"
},
{
"code": null,
"e": 986,
"s": 868,
"text": "import pandas as pdimport numpy as npdf = pd.read_csv(\"/content/Telco-Customer-Churn.csv\")df.shape(7043, 21)df.head()"
},
{
"code": null,
"e": 1179,
"s": 986,
"text": "The dataframe has 21 columns but only 10 columns are displayes. The ones in the middle are represented by dots. We can easily adjust it but let’s first learn the option for displaying columns:"
},
{
"code": null,
"e": 1218,
"s": 1179,
"text": "pd.get_option(\"display.max_columns\")10"
},
{
"code": null,
"e": 1350,
"s": 1218,
"text": "It is 10, as expected. We can adjust it with a similar syntax using set_option function and indicating how many columns to display:"
},
{
"code": null,
"e": 1400,
"s": 1350,
"text": "pd.set_option(\"display.max_columns\", 25)df.head()"
},
{
"code": null,
"e": 1521,
"s": 1400,
"text": "All of the columns are displayed now. They do not fit the screen but we can see them using the scroll bar at the bottom."
},
{
"code": null,
"e": 1618,
"s": 1521,
"text": "This option is useful when we have wide dataframes that contain many features of an observation."
},
{
"code": null,
"e": 1663,
"s": 1618,
"text": "A similar option exists for displaying rows:"
},
{
"code": null,
"e": 1699,
"s": 1663,
"text": "pd.get_option(\"display.max_rows\")60"
},
{
"code": null,
"e": 1769,
"s": 1699,
"text": "So if we want to view 100 rows, we can adjust this setting similarly:"
},
{
"code": null,
"e": 1808,
"s": 1769,
"text": "pd.set_option(\"display.max_rows\", 100)"
},
{
"code": null,
"e": 2083,
"s": 1808,
"text": "There is also an option to adjust the displayed width of a column. In some cases, datasets include long strings which are too long to display with default setting. If we want to view the complete string, we can use max_colwidth parameter. Let’s first see the default option:"
},
{
"code": null,
"e": 2123,
"s": 2083,
"text": "pd.get_option(\"display.max_colwidth\")50"
},
{
"code": null,
"e": 2275,
"s": 2123,
"text": "So if a cell includes more than 50 characters, we are not able to see all of it. I created a simple dataframe to show how it looks with truncated view:"
},
{
"code": null,
"e": 2326,
"s": 2275,
"text": "Let’s increase the width to see the complete text:"
},
{
"code": null,
"e": 2369,
"s": 2326,
"text": "pd.set_option(\"display.max_colwidth\", 150)"
},
{
"code": null,
"e": 2565,
"s": 2369,
"text": "Another display option we may need to adjust is the precision of floating point numbers. The default value should work just fine but there might be some extreme cases that require more precision:"
},
{
"code": null,
"e": 2601,
"s": 2565,
"text": "pd.get_option(\"display.precision\")6"
},
{
"code": null,
"e": 2710,
"s": 2601,
"text": "Default value is 6. Let’s increase it to 10. We may also want to decrease it to make it look simpler though."
},
{
"code": null,
"e": 2749,
"s": 2710,
"text": "pd.set_option(\"display.precision\", 10)"
},
{
"code": null,
"e": 2964,
"s": 2749,
"text": "There are many more Pandas display options that can be adjusted. I wanted to emphasize the ones that you may need more frequently. If you would like to see the whole list, you can always visit Pandas documentation."
},
{
"code": null,
"e": 3051,
"s": 2964,
"text": "If you would like to learn more about Pandas, here is a list of detailed Pandas posts:"
},
{
"code": null,
"e": 3075,
"s": 3051,
"text": "A Complete Pandas Guide"
},
{
"code": null,
"e": 3108,
"s": 3075,
"text": "Time Series Analysis with Pandas"
},
{
"code": null,
"e": 3144,
"s": 3108,
"text": "Handling Missing Values with Pandas"
},
{
"code": null,
"e": 3190,
"s": 3144,
"text": "Pandas — Save Memory with These Simple Tricks"
},
{
"code": null,
"e": 3218,
"s": 3190,
"text": "Reshaping Pandas DataFrames"
},
{
"code": null,
"e": 3255,
"s": 3218,
"text": "Pandas String Operations — Explained"
},
{
"code": null,
"e": 3282,
"s": 3255,
"text": "Pandas Groupby — Explained"
},
{
"code": null,
"e": 3312,
"s": 3282,
"text": "How to “read_csv” with Pandas"
}
] |
Python 3 - os.mkdir() Method
|
The method mkdir() creates a directory named path with the numeric mode mode. The default mode is 0o777 (octal). On some systems, mode is ignored. Where it is used, the current umask value is first masked out.
Following is the syntax for mkdir() method −
os.mkdir(path[, mode])
path − This is the path, which needs to be created.
mode − This is the mode of the directories to be given.
This method does not return any value.
The following example shows the usage of mkdir() method.
#!/usr/bin/python3
import os

# Path to be created (the parent directories must already
# exist, otherwise os.mkdir() raises FileNotFoundError)
path = "/tmp/home/monthly/daily/hourly"
os.mkdir(path, 0o755)

print("Path is created")
When we run the above program, it produces the following result −
Path is created
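In Python 3, octal literals require the 0o prefix, and the standard library also offers os.makedirs() for creating intermediate directories in one call. A minimal, self-contained sketch (the directory names here are just illustrations, not part of the original example):

```python
import os
import tempfile

# Work inside a throwaway directory so the example is self-contained
base = tempfile.mkdtemp()

# os.mkdir() creates a single directory; its parent must already exist
single = os.path.join(base, "daily")
os.mkdir(single, 0o755)  # Python 3 octal literal: 0o755, not 0755

# os.makedirs() creates all intermediate directories as needed
nested = os.path.join(base, "home", "monthly", "daily", "hourly")
os.makedirs(nested, mode=0o755)

print(os.path.isdir(single), os.path.isdir(nested))  # True True
```

Note that on some systems (e.g. Windows) the mode argument is ignored, and on POSIX systems the current umask is masked out of it first.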
|
[
{
"code": null,
"e": 2544,
"s": 2340,
"text": "The method mkdir() create a directory named path with numeric mode mode. The default mode is 0777 (octal). On some systems, mode is ignored. Where it is used, the current umask value is first masked out."
},
{
"code": null,
"e": 2589,
"s": 2544,
"text": "Following is the syntax for mkdir() method −"
},
{
"code": null,
"e": 2613,
"s": 2589,
"text": "os.mkdir(path[, mode])\n"
},
{
"code": null,
"e": 2665,
"s": 2613,
"text": "path − This is the path, which needs to be created."
},
{
"code": null,
"e": 2717,
"s": 2665,
"text": "path − This is the path, which needs to be created."
},
{
"code": null,
"e": 2773,
"s": 2717,
"text": "mode − This is the mode of the directories to be given."
},
{
"code": null,
"e": 2829,
"s": 2773,
"text": "mode − This is the mode of the directories to be given."
},
{
"code": null,
"e": 2868,
"s": 2829,
"text": "This method does not return any value."
},
{
"code": null,
"e": 2925,
"s": 2868,
"text": "The following example shows the usage of mkdir() method."
},
{
"code": null,
"e": 3071,
"s": 2925,
"text": "#!/usr/bin/python3\nimport os, sys\n\n# Path to be created\npath = \"/tmp/home/monthly/daily/hourly\"\nos.mkdir( path, 0755 )\n\nprint (\"Path is created\")"
},
{
"code": null,
"e": 3137,
"s": 3071,
"text": "When we run the above program, it produces the following result −"
},
{
"code": null,
"e": 3154,
"s": 3137,
"text": "Path is created\n"
},
{
"code": null,
"e": 3191,
"s": 3154,
"text": "\n 187 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 3207,
"s": 3191,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 3240,
"s": 3207,
"text": "\n 55 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 3259,
"s": 3240,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 3294,
"s": 3259,
"text": "\n 136 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 3316,
"s": 3294,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 3350,
"s": 3316,
"text": "\n 75 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 3378,
"s": 3350,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 3413,
"s": 3378,
"text": "\n 70 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 3427,
"s": 3413,
"text": " Lets Kode It"
},
{
"code": null,
"e": 3460,
"s": 3427,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 3477,
"s": 3460,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 3484,
"s": 3477,
"text": " Print"
},
{
"code": null,
"e": 3495,
"s": 3484,
"text": " Add Notes"
}
] |
Jackson Annotations - @JsonTypeInfo
|
@JsonTypeInfo is used to indicate details of type information which is to be included in serialization and de-serialization.
import java.io.IOException;
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.annotation.JsonTypeInfo.As;
import com.fasterxml.jackson.annotation.JsonTypeName;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonTester {
   public static void main(String args[]) throws IOException {
      Shape shape = new JacksonTester.Circle("CustomCircle", 1);
      String result = new ObjectMapper()
         .writerWithDefaultPrettyPrinter()
         .writeValueAsString(shape);
      System.out.println(result);
      String json = "{\"name\":\"CustomCircle\",\"radius\":1.0, \"type\":\"circle\"}";
      Circle circle = new ObjectMapper().readerFor(Shape.class).readValue(json);
      System.out.println(circle.name);
   }
   @JsonTypeInfo(use = JsonTypeInfo.Id.NAME,
      include = As.PROPERTY, property = "type")
   @JsonSubTypes({
      @JsonSubTypes.Type(value = Square.class, name = "square"),
      @JsonSubTypes.Type(value = Circle.class, name = "circle")
   })
   static class Shape {
      public String name;
      Shape(String name){
         this.name = name;
      }
   }
   @JsonTypeName("square")
   static class Square extends Shape {
      public double length;
      Square(){
         this(null, 0.0);
      }
      Square(String name, double length){
         super(name);
         this.length = length;
      }
   }
   @JsonTypeName("circle")
   static class Circle extends Shape {
      public double radius;
      Circle(){
         this(null, 0.0);
      }
      Circle(String name, double radius) {
         super(name);
         this.radius = radius;
      }
   }
}
{
"type" : "circle",
"name" : "CustomCircle",
"radius" : 1.0
}
CustomCircle
|
[
{
"code": null,
"e": 2600,
"s": 2475,
"text": "@JsonTypeInfo is used to indicate details of type information which is to be included in serialization and de-serialization."
},
{
"code": null,
"e": 4306,
"s": 2600,
"text": "import java.io.IOException;\nimport com.fasterxml.jackson.annotation.JsonSubTypes;\nimport com.fasterxml.jackson.annotation.JsonTypeInfo;\nimport com.fasterxml.jackson.annotation.JsonTypeInfo.As;\nimport com.fasterxml.jackson.annotation.JsonTypeName;\nimport com.fasterxml.jackson.databind.ObjectMapper;\n\npublic class JacksonTester {\n public static void main(String args[]) throws IOException {\n Shape shape = new JacksonTester.Circle(\"CustomCircle\", 1);\n String result = new ObjectMapper()\n .writerWithDefaultPrettyPrinter()\n .writeValueAsString(shape);\n System.out.println(result); \n String json = \"{\\\"name\\\":\\\"CustomCircle\\\",\\\"radius\\\":1.0, \\\"type\\\":\\\"circle\\\"}\";\n Circle circle = new ObjectMapper().readerFor(Shape.class).readValue(json);\n System.out.println(circle.name);\n }\n @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, \n include = As.PROPERTY, property = \"type\") @JsonSubTypes({\n \n @JsonSubTypes.Type(value = Square.class, name = \"square\"),\n @JsonSubTypes.Type(value = Circle.class, name = \"circle\")\n })\n static class Shape {\n public String name; \n Shape(String name){\n this.name = name;\n }\n }\n @JsonTypeName(\"square\")\n static class Square extends Shape {\n public double length;\n Square(){\n this(null,0.0);\n }\n Square(String name, double length){\n super(name);\n this.length = length;\n }\n }\n @JsonTypeName(\"circle\")\n static class Circle extends Shape {\n public double radius; \n Circle(){\n this(null,0.0);\n }\n Circle(String name, double radius) {\n super(name);\n this.radius = radius;\n }\n }\n}"
},
{
"code": null,
"e": 4392,
"s": 4306,
"text": "{\n \"type\" : \"circle\",\n \"name\" : \"CustomCircle\",\n \"radius\" : 1.0\n}\nCustomCircle\n"
},
{
"code": null,
"e": 4399,
"s": 4392,
"text": " Print"
},
{
"code": null,
"e": 4410,
"s": 4399,
"text": " Add Notes"
}
] |
remainder() in C++
|
20 Aug, 2021
This function is also used to return the remainder of 2 floating point numbers mentioned in its arguments. The quotient used internally is rounded to the nearest integral value: remainder = number - rquot * denom, where rquot is the result of number/denom, rounded toward the nearest integral value (with halfway cases rounded toward the even number).
Syntax :
double remainder(double a, double b)
float remainder(float a, float b)
long double remainder(long double a, long double b)
Parameter:
a and b are the values
of numerator and denominator.
Return:
The remainder() function returns the floating
point remainder of numerator/denominator
rounded to nearest.
Error or Exception: It is mandatory to pass both arguments; otherwise the compiler reports an error such as: no matching function for call to 'remainder()'.
# CODE 1
CPP
// CPP program to demonstrate
// remainder() function
#include <cmath>
#include <iostream>

using namespace std;

int main()
{
    double a, b;
    double answer;

    a = 50.35;
    b = -4.1;

    // here the quotient is -12.2805, which rounds to the
    // nearest integral value, so rquot = -12.
    // remainder = 50.35 - (-12 * -4.1)
    answer = remainder(a, b);
    cout << "Remainder of " << a << "/" << b
         << " is " << answer << endl;

    a = 16.80;
    b = 3.5;

    // here the quotient is 4.8, which rounds to the
    // nearest integral value, so rquot = 5.
    // remainder = 16.80 - (5 * 3.5)
    answer = remainder(a, b);
    cout << "Remainder of " << a << "/" << b
         << " is " << answer << endl;

    a = 16.80;
    b = 0;

    // remainder with a zero denominator is NaN
    answer = remainder(a, b);
    cout << "Remainder of " << a << "/" << b
         << " is " << answer << endl;

    return 0;
}
OUTPUT :
Remainder of 50.35/-4.1 is 1.15
Remainder of 16.8/3.5 is -0.7
Remainder of 16.8/0 is -nan
# CODE 2
CPP
// CPP program to demonstrate
// remainder() function
#include <cmath>
#include <iostream>

using namespace std;

int main()
{
    int a = 50;
    double b = 41.35, answer;

    answer = remainder(a, b);
    cout << "Remainder of " << a << "/" << b
         << " = " << answer << endl;

    return 0;
}
OUTPUT :
Remainder of 50/41.35 = 8.65
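As a quick cross-check (a side note, not part of the original article): Python 3.7+ ships math.remainder(), which implements the same IEEE 754 remainder operation as C++'s remainder(), so the outputs above can be reproduced from Python:

```python
import math

# math.remainder() follows the same IEEE 754 remainder
# semantics as C++'s std::remainder()
print(math.remainder(50.35, -4.1))  # close to 1.15
print(math.remainder(16.8, 3.5))    # close to -0.7
print(math.remainder(50, 41.35))    # close to 8.65
```

Unlike C++, which returns NaN for a zero denominator, Python's math.remainder() raises a ValueError in that case.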
abhishek0719kadiyan
CPP-Library
C++
CPP
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n20 Aug, 2021"
},
{
"code": null,
"e": 351,
"s": 28,
"text": "This function is also used to return the remainder(modulus) of 2 floating point numbers mentioned in its arguments.The quotient computed is rounded.remainder = number – rquot * denom Where rquot is the result of: number/denom, rounded toward the nearest integral value (with halfway cases rounded toward the even number). "
},
{
"code": null,
"e": 362,
"s": 351,
"text": "Syntax : "
},
{
"code": null,
"e": 669,
"s": 362,
"text": "double remainder(double a, double b)\nfloat remainder(float a, float b)\nlong double remainder(long double a, long double b)\nParameter:\na and b are the values \nof numerator and denominator.\n\n\nReturn:\nThe remainder() function returns the floating \npoint remainder of numerator/denominator \nrounded to nearest."
},
{
"code": null,
"e": 830,
"s": 669,
"text": "Error or Exception : It is mandatory to give both the arguments otherwise it will give error – no matching function for call to ‘remainder()’ like this.# CODE 1"
},
{
"code": null,
"e": 834,
"s": 830,
"text": "CPP"
},
{
"code": "// CPP program to demonstrate// remainder() function#include <cmath>#include <iostream> using namespace std; int main(){ double a, b; double answer; a = 50.35; b = -4.1; // here quotient is -12.2805 and rounded to nearest value then // rquot = -12. // remainder = 50.35 – (-12 * -4.1) answer = remainder(a, b); cout << \"Remainder of \" << a << \"/\" << b << \" is \" << answer << endl; a = 16.80; b = 3.5; // here quotient is 4.8 and rounded to nearest value then // rquot = -5. // remainder = 16.80 – (5 * 3.5) answer = remainder(a, b); cout << \"Remainder of \" << a << \"/\" << b << \" is \" << answer << endl; a = 16.80; b = 0; answer = remainder(a, b); cout << \"Remainder of \" << a << \"/\" << b << \" is \" << answer << endl; return 0;}",
"e": 1633,
"s": 834,
"text": null
},
{
"code": null,
"e": 1643,
"s": 1633,
"text": "OUTPUT : "
},
{
"code": null,
"e": 1733,
"s": 1643,
"text": "Remainder of 50.35/-4.1 is 1.15\nRemainder of 16.8/3.5 is -0.7\nRemainder of 16.8/0 is -nan"
},
{
"code": null,
"e": 1744,
"s": 1733,
"text": "# CODE 2 "
},
{
"code": null,
"e": 1748,
"s": 1744,
"text": "CPP"
},
{
"code": "// CPP program to demonstrate// remainder() function#include <cmath>#include <iostream> using namespace std; int main(){ int a = 50; double b = 41.35, answer; answer = remainder(a, b); cout << \"Remainder of \" << a << \"/\" << b << \" = \" << answer << endl; return 0;}",
"e": 2030,
"s": 1748,
"text": null
},
{
"code": null,
"e": 2040,
"s": 2030,
"text": "OUTPUT : "
},
{
"code": null,
"e": 2070,
"s": 2040,
"text": "Remainder of 50/41.35 = 8.65 "
},
{
"code": null,
"e": 2090,
"s": 2070,
"text": "abhishek0719kadiyan"
},
{
"code": null,
"e": 2102,
"s": 2090,
"text": "CPP-Library"
},
{
"code": null,
"e": 2106,
"s": 2102,
"text": "C++"
},
{
"code": null,
"e": 2110,
"s": 2106,
"text": "CPP"
}
] |
Express.js express.text() Function
|
05 May, 2021
The express.text() function is a built-in middleware function in Express. It parses the incoming request payloads into a string and is based on body-parser.
Syntax:
express.text( [options] )
Parameter: The options parameter contains various properties like defaultCharset, inflate, limit, verify, etc.
Return Value: It returns an Object.
Installation of the express module:
You can visit the link to Install express module. You can install this package by using this command.
npm install express
After installing the express module, you can check your express version in command prompt using the command.
npm version express
After that, you can just create a folder and add a file for example, index.js. To run this file you need to run the following command.
node index.js
Example 1: Filename: index.js
javascript
var express = require('express');
var app = express();
var PORT = 3000;

app.use(express.text());

app.post('/', function (req, res) {
    console.log(req.body);
    res.end();
})

app.listen(PORT, function(err){
    if (err) console.log(err);
    console.log("Server listening on PORT", PORT);
});
Steps to run the program:
The project structure will look like this:
Make sure you have installed express module using the following command:
npm install express
Run index.js file using below command:
node index.js
Output:
Server listening on PORT 3000
Now make a POST request to http://localhost:3000/ with header set to ‘content-type: text/plain’ and body {“title”:”GeeksforGeeks”}, then you will see the following output on your console:
Server listening on PORT 3000
{"title":"GeeksforGeeks"}
Example 2: Filename: index.js
javascript
var express = require('express');
var app = express();
var PORT = 3000;

// Without this middleware
// app.use(express.text());
app.post('/', function (req, res) {
    console.log(req.body);
    res.end();
})

app.listen(PORT, function(err){
    if (err) console.log(err);
    console.log("Server listening on PORT", PORT);
});
Run index.js file using below command:
node index.js
Now make a POST request to http://localhost:3000/ with header set to ‘content-type: text/plain’ and body {“title”:”GeeksforGeeks”}, then you will see the following output on your console:
Server listening on PORT 3000
undefined
Reference: https://expressjs.com/en/4x/api.html#express.text
simmytarika5
Express.js
Node.js
Web Technologies
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n05 May, 2021"
},
{
"code": null,
"e": 194,
"s": 28,
"text": "The express.text() function is a built-in middleware function in Express. It parses the incoming request payloads into a string and is based on body-parser.Syntax: "
},
{
"code": null,
"e": 220,
"s": 194,
"text": "express.text( [options] )"
},
{
"code": null,
"e": 396,
"s": 220,
"text": "Parameter: The options parameter contains various property like defaultCharset, inflate, limit, verify etc.Return Value: It returns an Object.Installation of express module: "
},
{
"code": null,
"e": 500,
"s": 396,
"text": "You can visit the link to Install express module. You can install this package by using this command. "
},
{
"code": null,
"e": 604,
"s": 500,
"text": "You can visit the link to Install express module. You can install this package by using this command. "
},
{
"code": null,
"e": 624,
"s": 604,
"text": "npm install express"
},
{
"code": null,
"e": 735,
"s": 624,
"text": "After installing the express module, you can check your express version in command prompt using the command. "
},
{
"code": null,
"e": 846,
"s": 735,
"text": "After installing the express module, you can check your express version in command prompt using the command. "
},
{
"code": null,
"e": 866,
"s": 846,
"text": "npm version express"
},
{
"code": null,
"e": 1003,
"s": 866,
"text": "After that, you can just create a folder and add a file for example, index.js. To run this file you need to run the following command. "
},
{
"code": null,
"e": 1140,
"s": 1003,
"text": "After that, you can just create a folder and add a file for example, index.js. To run this file you need to run the following command. "
},
{
"code": null,
"e": 1154,
"s": 1140,
"text": "node index.js"
},
{
"code": null,
"e": 1186,
"s": 1154,
"text": "Example 1: Filename: index.js "
},
{
"code": null,
"e": 1197,
"s": 1186,
"text": "javascript"
},
{
"code": "var express = require('express');var app = express();var PORT = 3000; app.use(express.text()); app.post('/', function (req, res) { console.log(req.body); res.end();}) app.listen(PORT, function(err){ if (err) console.log(err); console.log(\"Server listening on PORT\", PORT);});",
"e": 1487,
"s": 1197,
"text": null
},
{
"code": null,
"e": 1515,
"s": 1487,
"text": "Steps to run the program: "
},
{
"code": null,
"e": 1560,
"s": 1515,
"text": "The project structure will look like this: "
},
{
"code": null,
"e": 1605,
"s": 1560,
"text": "The project structure will look like this: "
},
{
"code": null,
"e": 1680,
"s": 1605,
"text": "Make sure you have installed express module using the following command: "
},
{
"code": null,
"e": 1755,
"s": 1680,
"text": "Make sure you have installed express module using the following command: "
},
{
"code": null,
"e": 1775,
"s": 1755,
"text": "npm install express"
},
{
"code": null,
"e": 1816,
"s": 1775,
"text": "Run index.js file using below command: "
},
{
"code": null,
"e": 1857,
"s": 1816,
"text": "Run index.js file using below command: "
},
{
"code": null,
"e": 1871,
"s": 1857,
"text": "node index.js"
},
{
"code": null,
"e": 1881,
"s": 1871,
"text": "Output: "
},
{
"code": null,
"e": 1891,
"s": 1881,
"text": "Output: "
},
{
"code": null,
"e": 1921,
"s": 1891,
"text": "Server listening on PORT 3000"
},
{
"code": null,
"e": 2112,
"s": 1921,
"text": " Now make a POST request to http://localhost:3000/ with header set to ‘content-type: text/plain’ and body {“title”:”GeeksforGeeks”}, then you will see the following output on your console: "
},
{
"code": null,
"e": 2304,
"s": 2114,
"text": "Now make a POST request to http://localhost:3000/ with header set to ‘content-type: text/plain’ and body {“title”:”GeeksforGeeks”}, then you will see the following output on your console: "
},
{
"code": null,
"e": 2360,
"s": 2304,
"text": "Server listening on PORT 3000\n{\"title\":\"GeeksforGeeks\"}"
},
{
"code": null,
"e": 2396,
"s": 2364,
"text": "Example 2: Filename: index.js "
},
{
"code": null,
"e": 2407,
"s": 2396,
"text": "javascript"
},
{
"code": "var express = require('express');var app = express();var PORT = 3000; // Without this middleware// app.use(express.text());app.post('/', function (req, res) { console.log(req.body); res.end();}) app.listen(PORT, function(err){ if (err) console.log(err); console.log(\"Server listening on PORT\", PORT);});",
"e": 2723,
"s": 2407,
"text": null
},
{
"code": null,
"e": 2764,
"s": 2723,
"text": "Run index.js file using below command: "
},
{
"code": null,
"e": 2778,
"s": 2764,
"text": "node index.js"
},
{
"code": null,
"e": 2968,
"s": 2778,
"text": "Now make a POST request to http://localhost:3000/ with header set to ‘content-type: text/plain’ and body {“title”:”GeeksforGeeks”}, then you will see the following output on your console: "
},
{
"code": null,
"e": 3008,
"s": 2968,
"text": "Server listening on PORT 3000\nundefined"
},
{
"code": null,
"e": 3070,
"s": 3008,
"text": "Reference: https://expressjs.com/en/4x/api.html#express.text "
},
{
"code": null,
"e": 3083,
"s": 3070,
"text": "simmytarika5"
},
{
"code": null,
"e": 3094,
"s": 3083,
"text": "Express.js"
},
{
"code": null,
"e": 3102,
"s": 3094,
"text": "Node.js"
},
{
"code": null,
"e": 3119,
"s": 3102,
"text": "Web Technologies"
}
] |
Extended Integral Types (Choosing the correct integer size in C/C++)
|
28 Dec, 2021
C/C++ has very loose definitions of its basic integer data types (char, short, int, long, and long long). The language guarantees that they can represent at least some range of values, but on any particular platform (compiler, operating system, hardware) they might be larger than that. A good example is long. On one machine, it might be 32 bits (the minimum required by C). On another, it's 64 bits. What do you do if you want an integer type that's precisely 32 bits long? That's where int32_t comes in: it's an alias for whatever integer type your particular system has that is exactly 32 bits.
Template:
intN_t or uintN_t
Where N is width of integer which can be 8, 16, 32, 64
or any other type width supported by the library.
CPP
// C++ program to show use of extended integral types
#include <cstdint>
#include <iostream>
using namespace std;

int main()
{
    uint8_t i; // i with width of exactly 8 bits

    // Minimum value represented by unsigned 8 bit is 0
    i = 0;
    cout << "Minimum value of i\t: " << (int)i << endl;

    // Maximum value represented by unsigned 8 bit is 255
    i = 255;
    cout << "Maximum value of i\t: " << (int)i << endl;

    // Warning: large integer implicitly truncated to
    // unsigned type. The value wraps around modulo 256
    i = 2436;
    cout << "Beyond range value of i\t: " << (int)i << endl;

    return 0;
}
Output:
In function 'int main()':
19:7: warning: large integer implicitly truncated to unsigned type [-Woverflow]
i = 2436;
^
Minimum value of i : 0
Maximum value of i : 255
Beyond range value of i : 132
Different Variations
1. Fixed width unsigned 8 bit integer: uint8_t. It means give me an unsigned int of exactly 8 bits.
2. Minimum width unsigned 8 bit integer: uint_least8_t. It means give me the smallest type of unsigned int which has at least 8 bits. Optimized for memory consumption.
3. Fastest minimum width unsigned 8 bit integer: uint_fast8_t. It means give me an unsigned int of at least 8 bits which will make my program faster. It may pick a larger data type because of alignment considerations. Optimized for speed.
Thus a uint8_t is guaranteed to be exactly 8 bits wide. A uint_least8_t is the smallest integer type guaranteed to be at least 8 bits wide. A uint_fast8_t is the fastest integer type guaranteed to be at least 8 bits wide. So the extended integral types help us in writing portable and efficient code.
This article is contributed by Rohit Kasle. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
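The modulo-256 wraparound shown in the program above can also be cross-checked from Python, whose standard library ctypes module exposes the same exact-width C integer types (a side illustration, not part of the original article):

```python
import ctypes

# ctypes.c_uint8 is an exact-width unsigned 8-bit integer,
# so out-of-range values wrap around modulo 256
print(ctypes.c_uint8(0).value)     # 0
print(ctypes.c_uint8(255).value)   # 255
print(ctypes.c_uint8(2436).value)  # 132, i.e. 2436 % 256
```

This matches the "Beyond range value" of 132 printed by the C++ program.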
rkbhola5
cpp-data-types
C Language
C++
CPP
|
[
{
"code": null,
"e": 52,
"s": 24,
"text": "\n28 Dec, 2021"
},
{
"code": null,
"e": 653,
"s": 52,
"text": "C/C++ has very loose definitions on its basic integer data types (char, short, int, long, and long long). The language guarantees that they can represent at least some range of values, but any particular platform (compiler, operating system, hardware) might be larger than that.A good example is long. On one machine, it might be 32 bits (the minimum required by C). On another, it’s 64 bits. What do you do if you want an integer type that’s precisely 32 bits long? That’s where int32_t comes in: it’s an alias for whatever integer type your particular system has that is exactly 32 bits.Template: "
},
{
"code": null,
"e": 776,
"s": 653,
"text": "intN_t or uintN_t\nWhere N is width of integer which can be 8, 16, 32, 64\nor any other type width supported by the library."
},
{
"code": null,
"e": 782,
"s": 778,
"text": "CPP"
},
{
"code": "// C++ program to show use of extended integral types#include <iostream>using namespace std; int main(){ uint8_t i; // i with width of exact 8 bits // Minimum value represented by unsigned 8 bit is 0 i = 0; cout << \"Minimum value of i\\t: \"<< (int)i << endl; // Maximum value represented by unsigned 8 bit is 255 i = 255; cout << \"Maximum value of i\\t: \"<< (int)i << endl; // Warning: large integer implicitly truncated to // unsigned type. It will print any garbage value i = 2436; cout << \"Beyond range value of i\\t: \" << (int)i << endl; return 0;}",
"e": 1378,
"s": 782,
"text": null
},
{
"code": null,
"e": 1388,
"s": 1378,
"text": "Output: "
},
{
"code": null,
"e": 1607,
"s": 1388,
"text": " In function 'int main()':\n19:7: warning: large integer implicitly truncated to unsigned type [-overflow]\n i = 2436;\n ^\n\nMinimum value of i : 0\nMaximum value of i : 255\nBeyond range value of i : 132"
},
{
"code": null,
"e": 2588,
"s": 1607,
"text": "Different Variations 1. Fixed width unsigned 8 bit integer: uint8_t It means give me an unsigned int of exactly 8 bits.2. Minimum width unsigned 8 bit integer: uint_least8_t It means give me the smallest type of unsigned int which has at least 8 bits. Optimized for memory consumption.3. Fastest minimum width unsigned 8 bit integer: uint_fast8_t It means give me an unsigned int of at least 8 bits which will make my program faster. It may pick larger data type because of alignment considerations. Optimized for speed.Thus a uint8_t is guaranteed to be exactly 8 bits wide. A uint_least8_t is the smallest integer guaranteed to be at least 8 bits wide. An uint_fast8_t is the fastest integer guaranteed to be at least 8 bits wide. So the extended integral types help us in writing portable and efficient code.This article is contributed by Rohit Kasle. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 2597,
"s": 2588,
"text": "rkbhola5"
},
{
"code": null,
"e": 2612,
"s": 2597,
"text": "cpp-data-types"
},
{
"code": null,
"e": 2623,
"s": 2612,
"text": "C Language"
},
{
"code": null,
"e": 2627,
"s": 2623,
"text": "C++"
},
{
"code": null,
"e": 2631,
"s": 2627,
"text": "CPP"
}
] |
Working with Excel files in Python using Xlwings
|
03 Jan, 2021
Xlwings is a Python library that makes it easy to call Python from Excel and vice versa. It creates reading and writing to and from Excel using Python easily. It can also be modified to act as a Python Server for Excel to synchronously exchange data between Python and Excel. Xlwings makes automating Excel with Python easy and can be used for- generating an automatic report, creating Excel embedded functions, manipulating Excel or CSV databases etc.
Installation: The virtual environment is used to separate a project's environment (libraries, environment variables, etc.) from other projects and from the global environment of the same machine. This step is optional, as it isn't always necessary to create a virtual environment for a project. We will be using the Python package virtualenv for this purpose.
virtualenv env
.\env\scripts\activate
And the virtual environment is ready.
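Since Python 3.3, the standard library also ships a built-in venv module that can be used instead of the third-party virtualenv package (an alternative sketch; activation commands differ by platform):

```shell
# Create a virtual environment with the stdlib venv module
# (--without-pip keeps the example minimal and dependency-free)
python3 -m venv --without-pip env

# Activate it:
#   Windows:     .\env\Scripts\activate
#   Linux/macOS: source env/bin/activate
```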
pip install xlwings
To start using Xlwings, there are certain basic steps which are to be done almost every time. This includes opening an Excel file, viewing the sheets available and then selecting a sheet. The examples below use Sheet 1 of the data.xlsx file.
Excel file used: https://drive.google.com/file/d/1ZoT_y-SccAslpD6HWTgCn9N-iiKqw_pA/view?usp=sharing
Below are some examples which depict how to perform various operations using Xlwings library:
Example 1:
Python3
# Python program to
# access Excel files

# Import required library
import xlwings as xw

# Opening an excel file
wb = xw.Book('data.xlsx')

# Viewing available
# sheets in it
wks = xw.sheets
print("Available sheets :\n", wks)

# Selecting a sheet
ws = wks[0]

# Selecting a value
# from the selected sheet
val = ws.range("A1").value
print("A value in sheet1 :", val)
Output:
Values can be selected from an Excel sheet by specifying a cell, row, column, or area. Selecting the entire row or column is not recommended as an entire row or column in Excel is quite long, so it would result in a long list of trailing None. Selecting a range of 2D data will result in a list of lists of data.
Example 2:
Python3
# Python3 code to select
# data from excel
import xlwings as xw

# Specifying a sheet
ws = xw.Book("data.xlsx").sheets['Sheet1']

# Selecting data from
# a single cell
v1 = ws.range("A2").value
v2 = ws.range("A3").value
print("Result :", v1, v2)

# Selecting entire
# rows and columns
r = ws.range("4:4").value
print("Row :", r)

c = ws.range("C:C").value
print("Column :", c)

# Selecting a 2D
# range of data
table = ws.range("A1:C4").value
print("Table :", table)

# Automatic table
# detection from
# a cell
automatic = ws.range("A1").expand().value
print("Automatic Table :", automatic)
Output:
Xlwings can be used to insert data in an Excel file in much the same way that it reads from one. Data can be provided as a list or as a single value, written to a certain cell or to a selection of cells.
Example 3:
Python3
# Python3 code to write
# data to Excel
import xlwings as xw

# Selecting a sheet in Excel
ws = xw.Book('data.xlsx').sheets("Sheet2")

# Writing one value to
# one cell
ws.range("A1").value = "geeks"

# Writing multiple values
# to a cell for automatic
# data placement
ws.range("B1").value = ["for", "geeks"]

# Writing 2D data to a cell
# to automatically put data
# into correct cells
ws.range("A2").value = [[1, 2, 3], ['a', 'b', 'c']]

# Writing multiple data to
# multiple cells
ws.range("A4:C4").value = ["This", "is", "awesome"]

# Outputting entire table
print("Table :\n", ws.range("A1").expand().value)
Output:
Excel Screenshot:
Python-excel
python-modules
Python
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n03 Jan, 2021"
},
{
"code": null,
"e": 482,
"s": 28,
"text": "Xlwings is a Python library that makes it easy to call Python from Excel and vice versa. It creates reading and writing to and from Excel using Python easily. It can also be modified to act as a Python Server for Excel to synchronously exchange data between Python and Excel. Xlwings makes automating Excel with Python easy and can be used for- generating an automatic report, creating Excel embedded functions, manipulating Excel or CSV databases etc. "
},
{
"code": null,
"e": 839,
"s": 482,
"text": "Installation:The virtual environment is used to separate a project environment (libraries, environment variables etc) etc from other project and from the global environment of the same machine. This step is optional as it isn’t always necessary to create a virtual environment for a project. We will be using a python package virtualenv for this purpose. "
},
{
"code": null,
"e": 877,
"s": 839,
"text": "virtualenv env\n.\\env\\scripts\\activate"
},
{
"code": null,
"e": 917,
"s": 877,
"text": "And the virtual environment is ready. "
},
{
"code": null,
"e": 937,
"s": 917,
"text": "pip install xlwings"
},
{
"code": null,
"e": 1153,
"s": 937,
"text": " To start using Xlwings, there are certain basic steps which are to be done almost every time. This includes opening an Excel file, viewing the sheet available and then selecting a sheet. Sheet 1 of data.xlsx file. "
},
{
"code": null,
"e": 1177,
"s": 1153,
"text": "Excel used: Click here "
},
{
"code": null,
"e": 1260,
"s": 1177,
"text": "https://drive.google.com/file/d/1ZoT_y-SccAslpD6HWTgCn9N-iiKqw_pA/view?usp=sharing"
},
{
"code": null,
"e": 1354,
"s": 1260,
"text": "Below are some examples which depict how to perform various operations using Xlwings library:"
},
{
"code": null,
"e": 1365,
"s": 1354,
"text": "Example 1:"
},
{
"code": null,
"e": 1373,
"s": 1365,
"text": "Python3"
},
{
"code": "# Python program to# access Excel files # Import required libraryimport xlwings as xw # Opening an excel filewb = xw.Book('data.xlsx') # Viewing available# sheets in itwks = xw.sheetsprint(\"Available sheets :\\n\", wks) # Selecting a sheetws = wks[0] # Selecting a value# from the selected sheetval = ws.range(\"A1\").valueprint(\"A value in sheet1 :\", val)",
"e": 1731,
"s": 1373,
"text": null
},
{
"code": null,
"e": 1740,
"s": 1731,
"text": "Output: "
},
{
"code": null,
"e": 2054,
"s": 1740,
"text": "Values can be selected from an Excel sheet by specifying a cell, row, column, or area. Selecting the entire row or column is not recommended as an entire row or column in Excel is quite long, so it would result in a long list of trailing None. Selecting a range of 2D data will result in a list of lists of data. "
},
{
"code": null,
"e": 2065,
"s": 2054,
"text": "Example 2:"
},
{
"code": null,
"e": 2073,
"s": 2065,
"text": "Python3"
},
{
"code": "# Python3 code to select# data from excelimport xlwings as xw # Specifying a sheetws = xw.Book(\"data.xlsx\").sheets['Sheet1'] # Selecting data from# a single cellv1 = ws.range(\"A2\").valuev2 = ws.range(\"A3\").valueprint(\"Result :\", v1, v2) # Selecting entire# rows and columnsr = ws.range(\"4:4\").valueprint(\"Row :\", r) c = ws.range(\"C:C\").valueprint(\"Column :\", c) # Selecting a 2D# range of datatable = ws.range(\"A1:C4\").valueprint(\"Table :\", table) # Automatic table# detection from# a cellautomatic = ws.range(\"A1\").expand().valueprint(\"Automatic Table :\", automatic)",
"e": 2647,
"s": 2073,
"text": null
},
{
"code": null,
"e": 2656,
"s": 2647,
"text": "Output: "
},
{
"code": null,
"e": 2845,
"s": 2656,
"text": "Xlwings can be used to insert data in an Excel file similarly that it reads from an Excel file. Data can be provided as a list or a single input to a certain cell or a selection of cells. "
},
{
"code": null,
"e": 2856,
"s": 2845,
"text": "Example 3:"
},
{
"code": null,
"e": 2864,
"s": 2856,
"text": "Python3"
},
{
"code": "# Python3 code to write# data to Excelimport xlwings as xw # Selecting a sheet in Excelws = xw.Book('data.xlsx').sheets(\"Sheet2\") # Writing one value to# one cellws.range(\"A1\").value = \"geeks\" # Writing multiple values# to a cell for automatic# data placementws.range(\"B1\").value = [\"for\", \"geeks\"] # Writing 2D data to a cell# to automatically put data# into correct cellsws.range(\"A2\").value = [[1, 2, 3], ['a', 'b', 'c']] # Writing multiple data to# multiple cellsws.range(\"A4:C4\").value = [\"This\", \"is\", \"awesome\"] # Outputting entire tableprint(\"Table :\\n\", ws.range(\"A1\").expand().value)",
"e": 3464,
"s": 2864,
"text": null
},
{
"code": null,
"e": 3473,
"s": 3464,
"text": "Output: "
},
{
"code": null,
"e": 3491,
"s": 3473,
"text": "Excel Screenshot:"
},
{
"code": null,
"e": 3504,
"s": 3491,
"text": "Python-excel"
},
{
"code": null,
"e": 3519,
"s": 3504,
"text": "python-modules"
},
{
"code": null,
"e": 3526,
"s": 3519,
"text": "Python"
}
] |
ISRO | ISRO CS 2015 | Question 12
|
03 Apr, 2018
A machine needs a minimum of 100 sec to sort 1000 names by quick sort. The minimum time needed to sort 100 names will be approximately:
(A) 50.2 sec
(B) 6.7 sec
(C) 72.7 sec
(D) 11.2 sec
Answer: (B)
Explanation: The best case time complexity of quick sort, which gives the minimum time to sort the names, is O(n log n).
let t1 = time taken to sort 1000 names = 100 sec;
n1=1000 names
let t2 = time taken to sort 100 names;
n2= 100 names
t1/t2 = n1*log(n1) / n2*log(n2)
100/t2 = 1000*log(1000) / 100*log(100)
100/t2 = 10*log(1000) / log(100)
100/t2 = 10*3 / 2
t2 = 20/3 = 6.66
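Assuming the running time grows as n·log n, the arithmetic above can be double-checked with a short calculation (the log base cancels in the ratio, so natural log works just as well):

```python
import math

def scaled_sort_time(t1, n1, n2):
    # time ∝ n * log(n); the log base cancels when taking the ratio
    return t1 * (n2 * math.log(n2)) / (n1 * math.log(n1))

# 100 sec to sort 1000 names → estimated time to sort 100 names
print(round(scaled_sort_time(100, 1000, 100), 2))  # → 6.67
```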
Option (B) is correct.
ISRO
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n03 Apr, 2018"
},
{
"code": null,
"e": 325,
"s": 28,
"text": "A machine needs a minimum of 100 sec to sort 1000 names by quick sort. The minimum time needed to sort 100 names will be approximately(A) 50.2 sec(B) 6.7 sec(C) 72.7 sec(D) 11.2 secAnswer: (B)Explanation: Best case time complexity of quick sort to take the minimum time to sort names = O(n log n)"
},
{
"code": null,
"e": 584,
"s": 325,
"text": "let t1 = time taken to sort 1000 names = 100 sec;\nn1=1000 names\nlet t2 = time taken to sort 100 names;\nn2= 100 names\n\nt1/t2 = n1*log(n1) / n2*log(n2) \n100/t2 = 1000*log(1000) / 100*log(100)\n100/t2 = 10*log(1000) / log(100)\n100/t2 = 10*3 / 2\nt2 = 20/3 = 6.66 "
},
{
"code": null,
"e": 627,
"s": 584,
"text": "Option (B) is correctQuiz of this Question"
},
{
"code": null,
"e": 632,
"s": 627,
"text": "ISRO"
}
] |
How to validate if input date (start date) in input field must be before a given date (end date) using express-validator ?
|
08 Apr, 2022
In HTML forms, we often require validation of different types. Validating an existing email, validating password length, validating a confirm-password field, or allowing only integer inputs are some examples of validation. In certain cases, we want the user to type a date that must come before some given date (for example, a 'start date' must come before an 'end date'), and based on that we give the user access to the request or deny it. We can also validate these input fields using the express-validator middleware.
Command to install express-validator:
npm install express-validator
Steps to use express-validator to implement the logic:
Install express-validator middleware.
Create a validator.js file to code all the validation logic.
Use custom validator to validate and fetch the end date as the request body.
Convert the date strings to a valid date and compare them according to requirement.
Use the validation name(validateInputField) in the routes as a middleware as an array of validations.
Destructure ‘validationResult’ function from express-validator to use it to find any errors.
If error occurs redirect to the same page passing the error information.
If error list is empty, give access to the user for the subsequent request.
Note: Here we use local or custom database to implement the logic, the same steps can be followed to implement the logic in a regular database like MongoDB or MySql.
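The date handling in the steps above can be sketched on its own: the dd/mm/yyyy strings are split and rebuilt as Date objects before comparison (the dates below are hypothetical values, not part of the project code):

```javascript
// Minimal sketch with hypothetical dates: parse 'dd/mm/yyyy'
// strings and check that the start date comes before the end date.
const [sd, sm, sy] = '01/05/2021'.split('/');
const [ed, em, ey] = '15/04/2021'.split('/');

// Date(year, monthIndex, day) — the month index is zero-based
const startDate = new Date(sy, sm - 1, sd);
const endDate = new Date(ey, em - 1, ed);

console.log(startDate < endDate); // false → validation should reject this input
```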
Example: This example illustrates how to validate an input field so that it only accepts a start date that comes before the given end date.
Filename – index.js
javascript
const express = require('express')
const bodyParser = require('body-parser')
const {validationResult} = require('express-validator')
const repo = require('./repository')
const { validateStartDate } = require('./validator')
const formTemplet = require('./form')

const app = express()
const port = process.env.PORT || 3000

// The body-parser middleware to parse form data
app.use(bodyParser.urlencoded({extended : true}))

// Get route to display HTML form
app.get('/', (req, res) => {
    res.send(formTemplet({}))
})

// Post route to handle form submission logic
app.post(
    '/project',
    [validateStartDate],
    async (req, res) => {
        const errors = validationResult(req)
        if(!errors.isEmpty()) {
            return res.send(formTemplet({errors}))
        }
        const {name, domain, sdate, edate} = req.body

        // Fetch year, month, day of respective dates
        const [sd, sm, sy] = sdate.split('/')
        const [ed, em, ey] = edate.split('/')

        // New record
        await repo.create({
            'Project Name': name,
            'Project Domain': domain,
            'Start Date': new Date(sy, sm-1, sd).toDateString(),
            'End Date': new Date(ey, em-1, ed).toDateString()
        })
        res.send('<strong>Project details stored successfully' +
                 ' in the database</strong>');
    })

// Server setup
app.listen(port, () => {
    console.log(`Server start on port ${port}`)
})
Filename – repository.js: This file contains all the logic to create a local database and interact with it.
javascript
// Importing node.js file system module
const fs = require('fs')

class Repository {
    constructor(filename) {
        // Filename where data are going to store
        if (!filename) {
            throw new Error('Filename is required to create a datastore!')
        }
        this.filename = filename
        try {
            fs.accessSync(this.filename)
        } catch(err) {
            // If file not exist it is created with empty array
            fs.writeFileSync(this.filename, '[]')
        }
    }

    // Get all existing records
    async getAll() {
        return JSON.parse(
            await fs.promises.readFile(this.filename, {
                encoding : 'utf8'
            })
        )
    }

    // Create new record
    async create(attrs) {
        // Fetch all existing records
        const records = await this.getAll()

        // All the existing records with new
        // record push back to database
        records.push(attrs)
        await fs.promises.writeFile(
            this.filename,
            JSON.stringify(records, null, 2)
        )
        return attrs
    }
}

// The 'datastore.json' file created at
// runtime and all the information
// provided via the form is stored
// in this file in JSON format.
module.exports = new Repository('datastore.json')
Filename – form.js: This file contains logic to show the form to submit project data with start and end date.
javascript
const getError = (errors, prop) => {
    try {
        return errors.mapped()[prop].msg
    } catch (error) {
        return ''
    }
}

module.exports = ({errors}) => {
    return `<!DOCTYPE html>
<html>
<head>
    <link rel='stylesheet'
        href='https://cdnjs.cloudflare.com/ajax/libs/bulma/0.9.0/css/bulma.min.css'>
    <style>
        div.columns {
            margin-top: 100px;
        }
        .button {
            margin-top: 10px
        }
    </style>
</head>
<body>
    <div class='container'>
        <div class='columns is-centered'>
            <div class='column is-5'>
                <form action='/project' method='POST'>
                    <div>
                        <div>
                            <label class='label' id='name'>
                                Project Name
                            </label>
                        </div>
                        <input class='input' type='text'
                            name='name' placeholder='Project Name'
                            for='name'>
                    </div>
                    <div>
                        <div>
                            <label class='label' id='domain'>
                                Project Domain
                            </label>
                        </div>
                        <input class='input' type='text'
                            name='domain' placeholder='Project Domain'
                            for='base64data'>
                    </div>
                    <div>
                        <div>
                            <label class='label' id='sdate'>
                                Start Date
                            </label>
                        </div>
                        <input class='input' type='text'
                            name='sdate' placeholder='dd/mm/yyyy'
                            for='sdate'>
                        <p class="help is-danger">
                            ${getError(errors, 'sdate')}
                        </p>
                    </div>
                    <div>
                        <div>
                            <label class='label' id='edate'>
                                End Date
                            </label>
                        </div>
                        <input class='input' type='text'
                            name='edate' placeholder='dd/mm/yyyy'
                            for='edate'>
                    </div>
                    <div>
                        <button class='button is-primary'>
                            Submit
                        </button>
                    </div>
                </form>
            </div>
        </div>
    </div>
</body>
</html>
`
}
Filename – validator.js: This file contains all the validation logic (the logic to validate the input field so that the start date is accepted only if it comes before the given end date).
javascript
const { check } = require('express-validator')
const repo = require('./repository')

module.exports = {
    validateStartDate: check('sdate')
        // To delete leading and trailing space
        .trim()

        // Custom validator
        .custom((sdate, { req }) => {
            // Fetch year, month and day of
            // respective dates
            const [sd, sm, sy] = sdate.split('/')
            const [ed, em, ey] = req.body.edate.split('/')

            // Constructing dates from given
            // string date input
            const startDate = new Date(sy, sm, sd)
            const endDate = new Date(ey, em, ed)

            // Validate start date so that it must
            // come before end date
            if (startDate >= endDate) {
                throw new Error('Start date of project must be before End date')
            }
            return true
        })
}
Filename – package.json
Package.json file
Database:
Database
Output:
Attempt to submit form when start date comes after end date
Response when attempt to submit the form where start date comes after end date
Attempt to submit form when start date comes before end date
Response when attempt to submit the form where end date comes after start date
Database after successful form submission:
Note: We have used some Bulma classes (a CSS framework) in the form.js file to design the content.
anikaseth98
kk9826225
rkbhola5
Node.js-Misc
Node.js
Web Technologies
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n08 Apr, 2022"
},
{
"code": null,
"e": 545,
"s": 28,
"text": "In HTML forms, we often required validation of different types. Validate existing email, validate password length, validate confirm password, validate to allow only integer inputs, these are some examples of validation. In certain cases, we want the user to type a date that must come before some given date (Ex: ‘start date’ must come before ‘end date’), and based on that we give the user access to the request or deny the request access. We can also validate these input fields using express-validator middleware."
},
{
"code": null,
"e": 583,
"s": 545,
"text": "Command to install express-validator:"
},
{
"code": null,
"e": 613,
"s": 583,
"text": "npm install express-validator"
},
{
"code": null,
"e": 668,
"s": 613,
"text": "Steps to use express-validator to implement the logic:"
},
{
"code": null,
"e": 706,
"s": 668,
"text": "Install express-validator middleware."
},
{
"code": null,
"e": 767,
"s": 706,
"text": "Create a validator.js file to code all the validation logic."
},
{
"code": null,
"e": 844,
"s": 767,
"text": "Use custom validator to validate and fetch the end date as the request body."
},
{
"code": null,
"e": 928,
"s": 844,
"text": "Convert the date strings to a valid date and compare them according to requirement."
},
{
"code": null,
"e": 1030,
"s": 928,
"text": "Use the validation name(validateInputField) in the routes as a middleware as an array of validations."
},
{
"code": null,
"e": 1123,
"s": 1030,
"text": "Destructure ‘validationResult’ function from express-validator to use it to find any errors."
},
{
"code": null,
"e": 1196,
"s": 1123,
"text": "If error occurs redirect to the same page passing the error information."
},
{
"code": null,
"e": 1272,
"s": 1196,
"text": "If error list is empty, give access to the user for the subsequent request."
},
{
"code": null,
"e": 1438,
"s": 1272,
"text": "Note: Here we use local or custom database to implement the logic, the same steps can be followed to implement the logic in a regular database like MongoDB or MySql."
},
{
"code": null,
"e": 1543,
"s": 1438,
"text": "Example: This example illustrates how to validate a input field to only allow a date after a given date."
},
{
"code": null,
"e": 1563,
"s": 1543,
"text": "Filename – index.js"
},
{
"code": null,
"e": 1574,
"s": 1563,
"text": "javascript"
},
{
"code": "const express = require('express')const bodyParser = require('body-parser')const {validationResult} = require('express-validator')const repo = require('./repository')const { validateStartDate } = require('./validator')const formTemplet = require('./form') const app = express()const port = process.env.PORT || 3000 // The body-parser middleware to parse form dataapp.use(bodyParser.urlencoded({extended : true})) // Get route to display HTML formapp.get('/', (req, res) => { res.send(formTemplet({}))}) // Post route to handle form submission logic andapp.post( '/project', [validateStartDate], async (req, res) => { const errors = validationResult(req) if(!errors.isEmpty()) { return res.send(formTemplet({errors})) } const {name, domain, sdate, edate, } = req.body // Fetch year, month, day of respective dates const [sd, sm, sy] = sdate.split('/') const [ed, em, ey] = edate.split('/') // New record await repo.create({ 'Project Name': name, 'Project Domain': domain, 'Start Date': new Date(sy, sm-1, sd).toDateString(), 'End Date': new Date(ey, em-1, ed).toDateString() }) res.send('<strong>Project details stored successfully' + ' in the database</strong>');}) // Server setupapp.listen(port, () => { console.log(`Server start on port ${port}`)})",
"e": 2890,
"s": 1574,
"text": null
},
{
"code": null,
"e": 2998,
"s": 2890,
"text": "Filename – repository.js: This file contains all the logic to create a local database and interact with it."
},
{
"code": null,
"e": 3009,
"s": 2998,
"text": "javascript"
},
{
"code": "// Importing node.js file system moduleconst fs = require('fs') class Repository { constructor(filename) { // Filename where data are going to store if (!filename) { throw new Error('Filename is required to create a datastore!') } this.filename = filename try { fs.accessSync(this.filename) } catch(err) { // If file not exist it is created with empty array fs.writeFileSync(this.filename, '[]') } } // Get all existing records async getAll() { return JSON.parse( await fs.promises.readFile(this.filename, { encoding : 'utf8' }) ) } // Create new record async create(attrs) { // Fetch all existing records const records = await this.getAll() // All the existing records with new // record push back to database records.push(attrs) await fs.promises.writeFile( this.filename, JSON.stringify(records, null, 2) ) return attrs }} // The'datastore.json' file created at// runtime and all the information// provided via signup form store// in this file in JSON format.module.exports = new Repository('datastore.json')",
"e": 4140,
"s": 3009,
"text": null
},
{
"code": null,
"e": 4250,
"s": 4140,
"text": "Filename – form.js: This file contains logic to show the form to submit project data with start and end date."
},
{
"code": null,
"e": 4261,
"s": 4250,
"text": "javascript"
},
{
"code": "const getError = (errors, prop) => { try { return errors.mapped()[prop].msg } catch (error) { return '' }} module.exports = ({errors}) => { return `<!DOCTYPE html><html> <head> <link rel='stylesheet' href='https://cdnjs.cloudflare.com/ajax/libs/bulma/0.9.0/css/bulma.min.css'> <style> div.columns { margin-top: 100px; } .button { margin-top: 10px } </style></head> <body> <div class='container'> <div class='columns is-centered'> <div class='column is-5'> <form action='/project' method='POST'> <div> <div> <label class='label' id='name'> Project Name </label> </div> <input class='input' type='text' name='name' placeholder='Project Name' for='name'> </div> <div> <div> <label class='label' id='domain'> Project Domain </label> </div> <input class='input' type='text' name='domain' placeholder='Project Domain' for='base64data'> </div> <div> <div> <label class='label' id='sdate'> Start Date </label> </div> <input class='input' type='text' name='sdate' placeholder='dd/mm/yyyy' for='sdate'> <p class=\"help is-danger\"> ${getError(errors, 'sdate')} </p> </div> <div> <div> <label class='label' id='edate'> End Date </label> </div> <input class='input' type='text' name='edate' placeholder='dd/mm/yyyy' for='edate'> </div> <div> <button class='button is-primary'> Submit </button> </div> </form> </div> </div> </div></body> </html> `}",
"e": 6937,
"s": 4261,
"text": null
},
{
"code": null,
"e": 7081,
"s": 6937,
"text": "Filename – validator.js: This file contain all the validation logic(Logic to validate a input field to accept only a date before a given date)."
},
{
"code": null,
"e": 7092,
"s": 7081,
"text": "javascript"
},
{
"code": "const { check } = require('express-validator')const repo = require('./repository')module.exports = { validateStartDate: check('sdate') // To delete leading and trailing space .trim() // Custom validator .custom((sdate, { req }) => { // Fetch year, month and day of // respective dates const [sd, sm, sy] = sdate.split('/') const [ed, em, ey] = req.body.edate.split('/') // Constructing dates from given // string date input const startDate = new Date(sy, sm, sd) const endDate = new Date(ey, em, ed) // Validate start date so that it must // comes before end date if (startDate >= endDate) { throw new Error('Start date of project must be before End date') } return true })}",
"e": 7981,
"s": 7092,
"text": null
},
{
"code": null,
"e": 8005,
"s": 7981,
"text": "Filename – package.json"
},
{
"code": null,
"e": 8023,
"s": 8005,
"text": "Package.json file"
},
{
"code": null,
"e": 8033,
"s": 8023,
"text": "Database:"
},
{
"code": null,
"e": 8042,
"s": 8033,
"text": "Database"
},
{
"code": null,
"e": 8050,
"s": 8042,
"text": "Output:"
},
{
"code": null,
"e": 8110,
"s": 8050,
"text": "Attempt to submit form when start date comes after end date"
},
{
"code": null,
"e": 8189,
"s": 8110,
"text": "Response when attempt to submit the form where start date comes after end date"
},
{
"code": null,
"e": 8250,
"s": 8189,
"text": "Attempt to submit form when start date comes before end date"
},
{
"code": null,
"e": 8329,
"s": 8250,
"text": "Response when attempt to submit the form where end date comes after start date"
},
{
"code": null,
"e": 8372,
"s": 8329,
"text": "Database after successful form submission:"
},
{
"code": null,
"e": 8470,
"s": 8372,
"text": "Note: We have used some Bulma classes(CSS framework) in the signup.js file to design the content."
},
{
"code": null,
"e": 8482,
"s": 8470,
"text": "anikaseth98"
},
{
"code": null,
"e": 8492,
"s": 8482,
"text": "kk9826225"
},
{
"code": null,
"e": 8501,
"s": 8492,
"text": "rkbhola5"
},
{
"code": null,
"e": 8514,
"s": 8501,
"text": "Node.js-Misc"
},
{
"code": null,
"e": 8522,
"s": 8514,
"text": "Node.js"
},
{
"code": null,
"e": 8539,
"s": 8522,
"text": "Web Technologies"
}
] |
How to convert a file to zip file and download it using Node.js ?
|
07 Oct, 2021
The Zip files are a common way for storing compressed files and folders. In this article, I’ll demonstrate how to convert the file to zip-format by using the adm-zip module (NPM PACKAGE).
Uses of ADM-ZIP
Compress the original files and change them to zip format.
Update/delete the files inside existing archives (.zip format).
Installation of ADM-ZIP:
Step 1: Install the module using the below command in the terminal.
npm install adm-zip
Step 2: Check the version of the installed module by using the below command.
npm version adm-zip
We are going to change this upload_data folder to zip file using the adm-zip module!
upload_data FOLDER
Code for conversion and downloading zip file:
Javascript
// express is a node framework that helps in creating
// 2 or more web-pages application
const express = require('express')

// filesystem is a node module that allows us to work with
// the files that are stored on our pc
const file_system = require('fs')

// it is an npm package. this is to be required in our JS
// file for the conversion of data to a zip file!
const admz = require('adm-zip')

// stores the express module into the app variable!
const app = express()

// this is the name of the specific folder which is to be
// changed into a zip file
var to_zip = file_system.readdirSync(__dirname + '/' + 'upload_data')

// this is used to request the specific file and then print
// the data in it!
app.get('/', function(req, res) {
    res.sendFile(__dirname + '/' + 'index.html')

    // zp is created as an object of class admz() which
    // contains functionalities
    var zp = new admz();

    // this is the main part of our work!
    // here the for loop counts and passes each and every
    // file of our folder "upload_data"
    // and adds each of them to the zip!
    for (var k = 0; k < to_zip.length; k++) {
        zp.addLocalFile(__dirname + '/' + 'upload_data' + '/' + to_zip[k])
    }

    // here we assigned the name to our downloaded file!
    const file_after_download = 'downloaded_file.zip';

    // toBuffer() is used to read the data and save it
    // for the downloading process!
    const data = zp.toBuffer();

    // this is the code for downloading!
    // here we have to specify 3 things:
    // 1. type of content that we are downloading
    // 2. name of file to be downloaded
    // 3. length or size of the downloaded file!
    res.set('Content-Type', 'application/octet-stream');
    res.set('Content-Disposition', `attachment; filename=${file_after_download}`);
    res.set('Content-Length', data.length);
    res.send(data);
})

// this is used to listen on a specific port!
app.listen(7777, function() {
    console.log('port is active at 7777');
})
Steps to run the program:
Our project looks like:
final project
Open the terminal at the desired location and make sure that you have installed the adm-zip package using the following command.
npm install adm-zip
Run app.js file using following command.
node app.js
app is running
Open the browser and go to localhost:7777; the upload_data folder then gets converted to a zip file and downloaded!
changed to zip file
Output: The following gif represents the whole procedure for converting a folder to a zip file. In this way you can change your folder to a zip file and then download it!
file to zip file
abhishek0719kadiyan
NodeJS-Questions
Node.js
Web Technologies
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n07 Oct, 2021"
},
{
"code": null,
"e": 216,
"s": 28,
"text": "The Zip files are a common way for storing compressed files and folders. In this article, I’ll demonstrate how to convert the file to zip-format by using the adm-zip module (NPM PACKAGE)."
},
{
"code": null,
"e": 232,
"s": 216,
"text": "Uses of ADM-ZIP"
},
{
"code": null,
"e": 290,
"s": 232,
"text": "compress the original file and change them to zip format."
},
{
"code": null,
"e": 337,
"s": 290,
"text": "update/delete the existing files(.zip format)."
},
{
"code": null,
"e": 362,
"s": 337,
"text": "Installation of ADM-ZIP:"
},
{
"code": null,
"e": 432,
"s": 362,
"text": "Step 1: Install the module using the below command in the terminal. "
},
{
"code": null,
"e": 452,
"s": 432,
"text": "npm install adm-zip"
},
{
"code": null,
"e": 532,
"s": 452,
"text": "Step 2: Check the version of the installed module by using the below command. "
},
{
"code": null,
"e": 552,
"s": 532,
"text": "npm version adm-zip"
},
{
"code": null,
"e": 639,
"s": 554,
"text": "We are going to change this upload_data folder to zip file using the adm-zip module!"
},
{
"code": null,
"e": 660,
"s": 641,
"text": "upload_data FOLDER"
},
{
"code": null,
"e": 708,
"s": 662,
"text": "Code for conversion and downloading zip file:"
},
{
"code": null,
"e": 721,
"s": 710,
"text": "Javascript"
},
{
"code": "// express is a node framework that is helps in creating// 2 or more web-pages applicationconst express = require('express') // filesystem is a node module that allows us to work with// the files that are stored on our pcconst file_system = require('fs') // it is an npm package.this is to be required in our JS // file for the conversion of data to a zip file!const admz = require('adm-zip') // stores the express module into the app variable!const app = express() // this is the name of specific folder which is to be // changed into zip file1var to_zip = file_system.readdirSync(__dirname+'/'+'upload_data') // this is used to request the specific file and then print // the data in it!app.get('/',function(req,res){ res.sendFile(__dirname+'/'+'index.html') // zp is created as an object of class admz() which // contains functionalities var zp = new admz(); // this is the main part of our work! // here for loop check counts and passes each and every // file of our folder \"upload_data\" // and convert each of them to a zip! for(var k=0 ; k<to_zip.length ; k++){ zp.addLocalFile(__dirname+'/'+'upload_data'+'/'+to_zip[k]) } // here we assigned the name to our downloaded file! const file_after_download = 'downloaded_file.zip'; // toBuffer() is used to read the data and save it // for downloading process! const data = zp.toBuffer(); // this is the code for downloading! // here we have to specify 3 things: // 1. type of content that we are downloading // 2. name of file to be downloaded // 3. length or size of the downloaded file! res.set('Content-Type','application/octet-stream'); res.set('Content-Disposition',`attachment; filename=${file_after_download}`); res.set('Content-Length',data.length); res.send(data); }) // this is used to listen a specific port!app.listen(7777,function(){ console.log('port is active at 7777');})",
"e": 2688,
"s": 721,
"text": null
},
{
"code": null,
"e": 2714,
"s": 2688,
"text": "Steps to run the program:"
},
{
"code": null,
"e": 2738,
"s": 2714,
"text": "Our project looks like:"
},
{
"code": null,
"e": 2762,
"s": 2738,
"text": "Our project looks like:"
},
{
"code": null,
"e": 2776,
"s": 2762,
"text": "final project"
},
{
"code": null,
"e": 2901,
"s": 2776,
"text": "Open the terminal at the desired local and make sure that u have downloaded the adm-zip package using the following command."
},
{
"code": null,
"e": 2925,
"s": 2905,
"text": "npm install adm-zip"
},
{
"code": null,
"e": 2966,
"s": 2925,
"text": "Run app.js file using following command."
},
{
"code": null,
"e": 2982,
"s": 2970,
"text": "node app.js"
},
{
"code": null,
"e": 3001,
"s": 2986,
"text": "app is running"
},
{
"code": null,
"e": 3121,
"s": 3001,
"text": "Open the browser and open localhost:7777 and then the upload_data folder get converted to zip file and gets downloaded!"
},
{
"code": null,
"e": 3145,
"s": 3125,
"text": "changed to zip file"
},
{
"code": null,
"e": 3332,
"s": 3145,
"text": "Output: Representing the whole procedure for converting file to zip file with the help of the following gif, so in this way you can change your folder to a zip file and then download it!"
},
{
"code": null,
"e": 3349,
"s": 3332,
"text": "file to zip file"
},
{
"code": null,
"e": 3369,
"s": 3349,
"text": "abhishek0719kadiyan"
},
{
"code": null,
"e": 3386,
"s": 3369,
"text": "NodeJS-Questions"
},
{
"code": null,
"e": 3394,
"s": 3386,
"text": "Node.js"
},
{
"code": null,
"e": 3411,
"s": 3394,
"text": "Web Technologies"
},
{
"code": null,
"e": 3509,
"s": 3411,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3541,
"s": 3509,
"text": "JWT Authentication with Node.js"
},
{
"code": null,
"e": 3576,
"s": 3541,
"text": "Installation of Node.js on Windows"
},
{
"code": null,
"e": 3646,
"s": 3576,
"text": "Difference between dependencies, devDependencies and peerDependencies"
},
{
"code": null,
"e": 3673,
"s": 3646,
"text": "Mongoose Populate() Method"
},
{
"code": null,
"e": 3712,
"s": 3673,
"text": "How to connect Node.js with React.js ?"
},
{
"code": null,
"e": 3774,
"s": 3712,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 3835,
"s": 3774,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 3885,
"s": 3835,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
},
{
"code": null,
"e": 3928,
"s": 3885,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
Python | Find Mixed Combinations of string and list
|
26 Nov, 2019
Sometimes, while working with Python, we can have a problem in which we need to make combinations of string and character list. This type of problem can come in domains in which we need to interleave the data. Let’s discuss certain ways in which this task can be performed.
Method #2 : Using list comprehension. The same task can be performed with a list comprehension, which provides a one-line alternative.
# Python3 code to demonstrate working of
# Mixed Combinations of string and list
# using loop + enumerate() + replace()

# initialize list and string
test_list = ["a", "b", "c"]
test_str = "gfg"

# printing original list and string
print("The original list : " + str(test_list))
print("The original string : " + test_str)

# Mixed Combinations of string and list
# using loop + enumerate() + replace()
res = []
for idx, ele in enumerate(test_str):
    res += [test_str[ : idx] + test_str[idx : ].replace(ele, k, 1) for k in test_list]

# printing result
print("The list after mixed Combinations : " + str(res))
The original list : ['a', 'b', 'c']
The original string : gfg
The list after mixed Combinations : ['afg', 'bfg', 'cfg', 'gag', 'gbg', 'gcg', 'gfa', 'gfb', 'gfc']
Method #2 : Using list comprehension. The same task can be performed with a list comprehension, which provides a one-line alternative.
# Python3 code to demonstrate working of
# Mixed Combinations of string and list
# using list comprehension

# initialize list and string
test_list = ["a", "b", "c"]
test_str = "gfg"

# printing original list and string
print("The original list : " + str(test_list))
print("The original string : " + test_str)

# Mixed Combinations of string and list
# using list comprehension
res = [test_str[ : idx] + ele + test_str[idx + 1 : ]
       for idx in range(len(test_str)) for ele in test_list]

# printing result
print("The list after mixed Combinations : " + str(res))
The original list : ['a', 'b', 'c']
The original string : gfg
The list after mixed Combinations : ['afg', 'bfg', 'cfg', 'gag', 'gbg', 'gcg', 'gfa', 'gfb', 'gfc']
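As an extra illustration (not one of the article's two methods), the same result can be produced with itertools.product, which pairs every index of the string with every replacement character:

```python
from itertools import product

def mixed_combinations(test_str, test_list):
    # splice each replacement character into every position of the string
    return [test_str[:i] + ch + test_str[i + 1:]
            for i, ch in product(range(len(test_str)), test_list)]

print(mixed_combinations("gfg", ["a", "b", "c"]))
# ['afg', 'bfg', 'cfg', 'gag', 'gbg', 'gcg', 'gfa', 'gfb', 'gfc']
```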
Python list-programs
Python
Python Programs
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n26 Nov, 2019"
},
{
"code": null,
"e": 302,
"s": 28,
"text": "Sometimes, while working with Python, we can have a problem in which we need to make combinations of string and character list. This type of problem can come in domains in which we need to interleave the data. Let’s discuss certain ways in which this task can be performed."
},
{
"code": null,
"e": 522,
"s": 302,
"text": "Method #1 : Using loop + enumerate() + replace()This task can be performed using combination of above functions. In this, we just iterate each element of character list and insert each combination using brute force way."
},
{
"code": "# Python3 code to demonstrate working of# Mixed Combinations of string and list# using loop + enumerate() + replace() # initialize list and string test_list = [\"a\", \"b\", \"c\"]test_str = \"gfg\" # printing original list and stringprint(\"The original list : \" + str(test_list))print(\"The original string : \" + test_str) # Mixed Combinations of string and list# using loop + enumerate() + replace()res = []for idx, ele in enumerate(test_str): res += [test_str[ : idx] + test_str[idx : ].replace(ele, k, 1) for k in test_list] # printing resultprint(\"The list after mixed Combinations : \" + str(res))",
"e": 1134,
"s": 522,
"text": null
},
{
"code": null,
"e": 1294,
"s": 1134,
"text": "The original list : [‘a’, ‘b’, ‘c’]The original string : gfgThe list after mixed Combinations : [‘afg’, ‘bfg’, ‘cfg’, ‘gag’, ‘gbg’, ‘gcg’, ‘gfa’, ‘gfb’, ‘gfc’]"
},
{
"code": null,
"e": 1454,
"s": 1296,
"text": "Method #2 : Using list comprehensionThe above functionality can be used to perform this task. In this, we provide a one-line alternative using comprehension."
},
{
"code": "# Python3 code to demonstrate working of# Mixed Combinations of string and list# using list comprehension # initialize list and string test_list = [\"a\", \"b\", \"c\"]test_str = \"gfg\" # printing original list and stringprint(\"The original list : \" + str(test_list))print(\"The original string : \" + test_str) # Mixed Combinations of string and list# using list comprehensionres = [test_str[ : idx] + ele + test_str[idx + 1 : ]\\ for idx in range(len(test_str)) for ele in test_list] # printing resultprint(\"The list after mixed Combinations : \" + str(res))",
"e": 2013,
"s": 1454,
"text": null
},
{
"code": null,
"e": 2173,
"s": 2013,
"text": "The original list : [‘a’, ‘b’, ‘c’]The original string : gfgThe list after mixed Combinations : [‘afg’, ‘bfg’, ‘cfg’, ‘gag’, ‘gbg’, ‘gcg’, ‘gfa’, ‘gfb’, ‘gfc’]"
},
{
"code": null,
"e": 2194,
"s": 2173,
"text": "Python list-programs"
},
{
"code": null,
"e": 2201,
"s": 2194,
"text": "Python"
},
{
"code": null,
"e": 2217,
"s": 2201,
"text": "Python Programs"
}
] |
Find the number of zeroes
|
23 Jun, 2022
Given an array of 1s and 0s in which all 1s appear first, followed by all 0s, count the number of 0s in the array. Examples:
Input: arr[] = {1, 1, 1, 1, 0, 0}
Output: 2
Input: arr[] = {1, 0, 0, 0, 0}
Output: 4
Input: arr[] = {0, 0, 0}
Output: 3
Input: arr[] = {1, 1, 1, 1}
Output: 0
Approach 1: A simple solution is to traverse the input array. As soon as we find a 0, we return n – (index of first 0), where n is the number of elements in the input array. Time complexity of this solution would be O(n).
Implementation of above approach is below:
C++
#include <bits/stdc++.h>
using namespace std;

int firstzeroindex(int arr[], int n)
{
    for (int i = 0; i < n; i++) {
        if (arr[i] == 0) {
            return i;
        }
    }
    return -1;
}

int main()
{
    int arr[] = { 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = firstzeroindex(arr, n);
    if (x == -1) {
        cout << "Count of zero is 0" << endl;
    }
    else {
        cout << "count of zero is " << n - x << endl;
    }
    return 0;
}
// this code is contributed by machhaliya muhammad
count of zero is 6
Time complexity: O(n) where n is size of arr.
Approach 2: Since the input array is sorted, we can use Binary Search to find the first occurrence of 0. Once we have the index of the first 0, we can return the count as n – (index of first 0).
Implementation:
C
C++
Java
Python3
C#
PHP
Javascript
// A divide and conquer solution to find count of zeroes in an array// where all 1s are present before all 0s#include <stdio.h> /* if 0 is present in arr[] then returns the index of FIRST occurrenceof 0 in arr[low..high], otherwise returns -1 */int firstZero(int arr[], int low, int high){ if (high >= low) { // Check if mid element is first 0 int mid = low + (high - low)/2; if (( mid == 0 || arr[mid-1] == 1) && arr[mid] == 0) return mid; if (arr[mid] == 1) // If mid element is not 0 return firstZero(arr, (mid + 1), high); else // If mid element is 0, but not first 0 return firstZero(arr, low, (mid -1)); } return -1;} // A wrapper over recursive function firstZero()int countZeroes(int arr[], int n){ // Find index of first zero in given array int first = firstZero(arr, 0, n-1); // If 0 is not present at all, return 0 if (first == -1) return 0; return (n - first);} /* Driver program to check above functions */int main(){ int arr[] = {1, 1, 1, 0, 0, 0, 0, 0}; int n = sizeof(arr)/sizeof(arr[0]); printf("Count of zeroes is %d", countZeroes(arr, n)); return 0;}
// A divide and conquer solution to// find count of zeroes in an array// where all 1s are present before all 0s#include <bits/stdc++.h>using namespace std; /* if 0 is present in arr[] thenreturns the index of FIRST occurrenceof 0 in arr[low..high], otherwise returns -1 */int firstZero(int arr[], int low, int high){ if (high >= low) { // Check if mid element is first 0 int mid = low + (high - low) / 2; if ((mid == 0 || arr[mid - 1] == 1) && arr[mid] == 0) return mid; // If mid element is not 0 if (arr[mid] == 1) return firstZero(arr, (mid + 1), high); // If mid element is 0, but not first 0 else return firstZero(arr, low, (mid -1)); } return -1;} // A wrapper over recursive function firstZero()int countZeroes(int arr[], int n){ // Find index of first zero in given array int first = firstZero(arr, 0, n - 1); // If 0 is not present at all, return 0 if (first == -1) return 0; return (n - first);} // Driver Codeint main(){ int arr[] = {1, 1, 1, 0, 0, 0, 0, 0}; int n = sizeof(arr) / sizeof(arr[0]); cout << "Count of zeroes is " << countZeroes(arr, n); return 0;} // This code is contributed by SoumikMondal
// A divide and conquer solution to find count of zeroes in an array// where all 1s are present before all 0s class CountZeros{ /* if 0 is present in arr[] then returns the index of FIRST occurrence of 0 in arr[low..high], otherwise returns -1 */ int firstZero(int arr[], int low, int high) { if (high >= low) { // Check if mid element is first 0 int mid = low + (high - low) / 2; if ((mid == 0 || arr[mid - 1] == 1) && arr[mid] == 0) return mid; if (arr[mid] == 1) // If mid element is not 0 return firstZero(arr, (mid + 1), high); else // If mid element is 0, but not first 0 return firstZero(arr, low, (mid - 1)); } return -1; } // A wrapper over recursive function firstZero() int countZeroes(int arr[], int n) { // Find index of first zero in given array int first = firstZero(arr, 0, n - 1); // If 0 is not present at all, return 0 if (first == -1) return 0; return (n - first); } // Driver program to test above functions public static void main(String[] args) { CountZeros count = new CountZeros(); int arr[] = {1, 1, 1, 0, 0, 0, 0, 0}; int n = arr.length; System.out.println("Count of zeroes is " + count.countZeroes(arr, n)); }}
# A divide and conquer solution to
# find count of zeroes in an array
# where all 1s are present before all 0s

# if 0 is present in arr[] then returns
# the index of FIRST occurrence of 0 in
# arr[low..high], otherwise returns -1
def firstZero(arr, low, high):
    if (high >= low):
        # Check if mid element is first 0
        mid = low + int((high - low) / 2)
        if ((mid == 0 or arr[mid - 1] == 1) and arr[mid] == 0):
            return mid
        # If mid element is not 0
        if (arr[mid] == 1):
            return firstZero(arr, (mid + 1), high)
        # If mid element is 0, but not first 0
        else:
            return firstZero(arr, low, (mid - 1))
    return -1

# A wrapper over recursive
# function firstZero()
def countZeroes(arr, n):
    # Find index of first zero in given array
    first = firstZero(arr, 0, n - 1)
    # If 0 is not present at all, return 0
    if (first == -1):
        return 0
    return (n - first)

# Driver Code
arr = [1, 1, 1, 0, 0, 0, 0, 0]
n = len(arr)
print("Count of zeroes is", countZeroes(arr, n))

# This code is contributed by Smitha Dinesh Semwal
// A divide and conquer solution to find// count of zeroes in an array where all// 1s are present before all 0susing System; class CountZeros{ /* if 0 is present in arr[] then returns the index of FIRST occurrence of 0 in arr[low..high], otherwise returns -1 */ int firstZero(int []arr, int low, int high) { if (high >= low) { // Check if mid element is first 0 int mid = low + (high - low) / 2; if ((mid == 0 || arr[mid - 1] == 1) && arr[mid] == 0) return mid; if (arr[mid] == 1) // If mid element is not 0 return firstZero(arr, (mid + 1), high); else // If mid element is 0, but not first 0 return firstZero(arr, low, (mid - 1)); } return -1; } // A wrapper over recursive function firstZero() int countZeroes(int []arr, int n) { // Find index of first zero in given array int first = firstZero(arr, 0, n - 1); // If 0 is not present at all, return 0 if (first == -1) return 0; return (n - first); } // Driver program to test above functions public static void Main() { CountZeros count = new CountZeros(); int []arr = {1, 1, 1, 0, 0, 0, 0, 0}; int n = arr.Length; Console.Write("Count of zeroes is " + count.countZeroes(arr, n)); }} // This code is contributed by nitin mittal.
<?php// A divide and conquer solution to// find count of zeroes in an array// where all 1s are present before all 0s /* if 0 is present in arr[] then returns the index of FIRST occurrence of 0 in arr[low..high], otherwise returns -1 */function firstZero($arr, $low, $high){ if ($high >= $low) { // Check if mid element is first 0 $mid = $low + floor(($high - $low)/2); if (( $mid == 0 || $arr[$mid-1] == 1) && $arr[$mid] == 0) return $mid; // If mid element is not 0 if ($arr[$mid] == 1) return firstZero($arr, ($mid + 1), $high); // If mid element is 0, // but not first 0 else return firstZero($arr, $low, ($mid - 1)); } return -1;} // A wrapper over recursive// function firstZero()function countZeroes($arr, $n){ // Find index of first // zero in given array $first = firstZero($arr, 0, $n - 1); // If 0 is not present // at all, return 0 if ($first == -1) return 0; return ($n - $first);} // Driver Code $arr = array(1, 1, 1, 0, 0, 0, 0, 0); $n = sizeof($arr); echo("Count of zeroes is "); echo(countZeroes($arr, $n)); // This code is contributed by nitin mittal?>
<script>// A divide and conquer solution to find count of zeroes in an array// where all 1s are present before all 0s /* * if 0 is present in arr then returns the index of FIRST occurrence of 0 in * arr[low..high], otherwise returns -1 */ function firstZero(arr , low , high) { if (high >= low) { // Check if mid element is first 0 var mid = low + parseInt((high - low) / 2); if ((mid == 0 || arr[mid - 1] == 1) && arr[mid] == 0) return mid; if (arr[mid] == 1) // If mid element is not 0 return firstZero(arr, (mid + 1), high); else // If mid element is 0, but not first 0 return firstZero(arr, low, (mid - 1)); } return -1; } // A wrapper over recursive function firstZero() function countZeroes(arr , n) { // Find index of first zero in given array var first = firstZero(arr, 0, n - 1); // If 0 is not present at all, return 0 if (first == -1) return 0; return (n - first); } // Driver program to test above functions var arr = [ 1, 1, 1, 0, 0, 0, 0, 0 ]; var n = arr.length; document.write("Count of zeroes is " + countZeroes(arr, n)); // This code is contributed by gauravrajput1</script>
Count of zeroes is 5
Time Complexity: O(log n), where n is the number of elements in arr[].
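As a side note (not part of the original article), Python's standard bisect module expresses the same boundary search very compactly. Reversing the array makes it ascending (all 0s before all 1s), so bisect_left locates the first 1; the reversed copy costs O(n), so this only illustrates the sorted-boundary idea rather than replacing the O(log n) recursion:

```python
from bisect import bisect_left

def count_zeroes_bisect(arr):
    # arr holds all 1s followed by all 0s, so its reverse is sorted
    # ascending; the index of the first 1 in the reversed copy equals
    # the number of 0s in the original array.
    return bisect_left(arr[::-1], 1)

print(count_zeroes_bisect([1, 1, 1, 0, 0, 0, 0, 0]))  # 5
```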
nitin mittal
SoumikMondal
GauravRajput1
shaheeneallamaiqbal
hardikkoriintern
Yahoo
Arrays
Divide and Conquer
Searching
Yahoo
Arrays
Searching
Divide and Conquer
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
|
[
{
"code": null,
"e": 52,
"s": 24,
"text": "\n23 Jun, 2022"
},
{
"code": null,
"e": 204,
"s": 52,
"text": "Given an array of 1s and 0s which has all 1s first followed by all 0s. Find the number of 0s. Count the number of zeroes in the given array.Examples : "
},
{
"code": null,
"e": 365,
"s": 204,
"text": "Input: arr[] = {1, 1, 1, 1, 0, 0}\nOutput: 2\n\nInput: arr[] = {1, 0, 0, 0, 0}\nOutput: 4\n\nInput: arr[] = {0, 0, 0}\nOutput: 3\n\nInput: arr[] = {1, 1, 1, 1}\nOutput: 0"
},
{
"code": null,
"e": 576,
"s": 365,
"text": "Approach 1: A simple solution is to traverse the input array. As soon as we find a 0, we return n – index of first 0. Here n is number of elements in input array. Time complexity of this solution would be O(n)."
},
{
"code": null,
"e": 619,
"s": 576,
"text": "Implementation of above approach is below:"
},
{
"code": null,
"e": 623,
"s": 619,
"text": "C++"
},
{
"code": "#include <bits/stdc++.h>using namespace std;int firstzeroindex(int arr[], int n){ for (int i = 0; i < n; i++) { if (arr[i] == 0) { return i; } } return -1;}int main(){ int arr[] = { 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0 }; int n = sizeof(arr) / sizeof(arr[0]); int x = firstzeroindex(arr, n); if (x == -1) { cout << \"Count of zero is 0\" << endl; } else { cout << \"count of zero is \" << n - x << endl; } return 0;}// this code is contributed by machhaliya muhammad",
"e": 1153,
"s": 623,
"text": null
},
{
"code": null,
"e": 1172,
"s": 1153,
"text": "count of zero is 6"
},
{
"code": null,
"e": 1218,
"s": 1172,
"text": "Time complexity: O(n) where n is size of arr."
},
{
"code": null,
"e": 1408,
"s": 1218,
"text": "Approach 2: Since the input array is sorted, we can use Binary Search to find the first occurrence of 0. Once we have index of first element, we can return count as n – index of first zero."
},
{
"code": null,
"e": 1424,
"s": 1408,
"text": "Implementation:"
},
{
"code": null,
"e": 1426,
"s": 1424,
"text": "C"
},
{
"code": null,
"e": 1430,
"s": 1426,
"text": "C++"
},
{
"code": null,
"e": 1435,
"s": 1430,
"text": "Java"
},
{
"code": null,
"e": 1443,
"s": 1435,
"text": "Python3"
},
{
"code": null,
"e": 1446,
"s": 1443,
"text": "C#"
},
{
"code": null,
"e": 1450,
"s": 1446,
"text": "PHP"
},
{
"code": null,
"e": 1461,
"s": 1450,
"text": "Javascript"
},
{
"code": "// A divide and conquer solution to find count of zeroes in an array// where all 1s are present before all 0s#include <stdio.h> /* if 0 is present in arr[] then returns the index of FIRST occurrenceof 0 in arr[low..high], otherwise returns -1 */int firstZero(int arr[], int low, int high){ if (high >= low) { // Check if mid element is first 0 int mid = low + (high - low)/2; if (( mid == 0 || arr[mid-1] == 1) && arr[mid] == 0) return mid; if (arr[mid] == 1) // If mid element is not 0 return firstZero(arr, (mid + 1), high); else // If mid element is 0, but not first 0 return firstZero(arr, low, (mid -1)); } return -1;} // A wrapper over recursive function firstZero()int countZeroes(int arr[], int n){ // Find index of first zero in given array int first = firstZero(arr, 0, n-1); // If 0 is not present at all, return 0 if (first == -1) return 0; return (n - first);} /* Driver program to check above functions */int main(){ int arr[] = {1, 1, 1, 0, 0, 0, 0, 0}; int n = sizeof(arr)/sizeof(arr[0]); printf(\"Count of zeroes is %d\", countZeroes(arr, n)); return 0;}",
"e": 2648,
"s": 1461,
"text": null
},
{
"code": "// A divide and conquer solution to// find count of zeroes in an array// where all 1s are present before all 0s#include <bits/stdc++.h>using namespace std; /* if 0 is present in arr[] thenreturns the index of FIRST occurrenceof 0 in arr[low..high], otherwise returns -1 */int firstZero(int arr[], int low, int high){ if (high >= low) { // Check if mid element is first 0 int mid = low + (high - low) / 2; if ((mid == 0 || arr[mid - 1] == 1) && arr[mid] == 0) return mid; // If mid element is not 0 if (arr[mid] == 1) return firstZero(arr, (mid + 1), high); // If mid element is 0, but not first 0 else return firstZero(arr, low, (mid -1)); } return -1;} // A wrapper over recursive function firstZero()int countZeroes(int arr[], int n){ // Find index of first zero in given array int first = firstZero(arr, 0, n - 1); // If 0 is not present at all, return 0 if (first == -1) return 0; return (n - first);} // Driver Codeint main(){ int arr[] = {1, 1, 1, 0, 0, 0, 0, 0}; int n = sizeof(arr) / sizeof(arr[0]); cout << \"Count of zeroes is \" << countZeroes(arr, n); return 0;} // This code is contributed by SoumikMondal",
"e": 3940,
"s": 2648,
"text": null
},
{
"code": "// A divide and conquer solution to find count of zeroes in an array// where all 1s are present before all 0s class CountZeros{ /* if 0 is present in arr[] then returns the index of FIRST occurrence of 0 in arr[low..high], otherwise returns -1 */ int firstZero(int arr[], int low, int high) { if (high >= low) { // Check if mid element is first 0 int mid = low + (high - low) / 2; if ((mid == 0 || arr[mid - 1] == 1) && arr[mid] == 0) return mid; if (arr[mid] == 1) // If mid element is not 0 return firstZero(arr, (mid + 1), high); else // If mid element is 0, but not first 0 return firstZero(arr, low, (mid - 1)); } return -1; } // A wrapper over recursive function firstZero() int countZeroes(int arr[], int n) { // Find index of first zero in given array int first = firstZero(arr, 0, n - 1); // If 0 is not present at all, return 0 if (first == -1) return 0; return (n - first); } // Driver program to test above functions public static void main(String[] args) { CountZeros count = new CountZeros(); int arr[] = {1, 1, 1, 0, 0, 0, 0, 0}; int n = arr.length; System.out.println(\"Count of zeroes is \" + count.countZeroes(arr, n)); }}",
"e": 5328,
"s": 3940,
"text": null
},
{
"code": "# A divide and conquer solution to# find count of zeroes in an array# where all 1s are present before all 0s # if 0 is present in arr[] then returns# the index of FIRST occurrence of 0 in# arr[low..high], otherwise returns -1def firstZero(arr, low, high): if (high >= low): # Check if mid element is first 0 mid = low + int((high - low) / 2) if (( mid == 0 or arr[mid-1] == 1) and arr[mid] == 0): return mid # If mid element is not 0 if (arr[mid] == 1): return firstZero(arr, (mid + 1), high) # If mid element is 0, but not first 0 else: return firstZero(arr, low, (mid - 1)) return -1 # A wrapper over recursive# function firstZero()def countZeroes(arr, n): # Find index of first zero in given array first = firstZero(arr, 0, n - 1) # If 0 is not present at all, return 0 if (first == -1): return 0 return (n - first) # Driver Codearr = [1, 1, 1, 0, 0, 0, 0, 0]n = len(arr)print(\"Count of zeroes is\", countZeroes(arr, n)) # This code is contributed by Smitha Dinesh Semwal",
"e": 6476,
"s": 5328,
"text": null
},
{
"code": "// A divide and conquer solution to find// count of zeroes in an array where all// 1s are present before all 0susing System; class CountZeros{ /* if 0 is present in arr[] then returns the index of FIRST occurrence of 0 in arr[low..high], otherwise returns -1 */ int firstZero(int []arr, int low, int high) { if (high >= low) { // Check if mid element is first 0 int mid = low + (high - low) / 2; if ((mid == 0 || arr[mid - 1] == 1) && arr[mid] == 0) return mid; if (arr[mid] == 1) // If mid element is not 0 return firstZero(arr, (mid + 1), high); else // If mid element is 0, but not first 0 return firstZero(arr, low, (mid - 1)); } return -1; } // A wrapper over recursive function firstZero() int countZeroes(int []arr, int n) { // Find index of first zero in given array int first = firstZero(arr, 0, n - 1); // If 0 is not present at all, return 0 if (first == -1) return 0; return (n - first); } // Driver program to test above functions public static void Main() { CountZeros count = new CountZeros(); int []arr = {1, 1, 1, 0, 0, 0, 0, 0}; int n = arr.Length; Console.Write(\"Count of zeroes is \" + count.countZeroes(arr, n)); }} // This code is contributed by nitin mittal.",
"e": 7983,
"s": 6476,
"text": null
},
{
"code": "<?php// A divide and conquer solution to// find count of zeroes in an array// where all 1s are present before all 0s /* if 0 is present in arr[] then returns the index of FIRST occurrence of 0 in arr[low..high], otherwise returns -1 */function firstZero($arr, $low, $high){ if ($high >= $low) { // Check if mid element is first 0 $mid = $low + floor(($high - $low)/2); if (( $mid == 0 || $arr[$mid-1] == 1) && $arr[$mid] == 0) return $mid; // If mid element is not 0 if ($arr[$mid] == 1) return firstZero($arr, ($mid + 1), $high); // If mid element is 0, // but not first 0 else return firstZero($arr, $low, ($mid - 1)); } return -1;} // A wrapper over recursive// function firstZero()function countZeroes($arr, $n){ // Find index of first // zero in given array $first = firstZero($arr, 0, $n - 1); // If 0 is not present // at all, return 0 if ($first == -1) return 0; return ($n - $first);} // Driver Code $arr = array(1, 1, 1, 0, 0, 0, 0, 0); $n = sizeof($arr); echo(\"Count of zeroes is \"); echo(countZeroes($arr, $n)); // This code is contributed by nitin mittal?>",
"e": 9309,
"s": 7983,
"text": null
},
{
"code": "<script>// A divide and conquer solution to find count of zeroes in an array// where all 1s are present before all 0s /* * if 0 is present in arr then returns the index of FIRST occurrence of 0 in * arr[low..high], otherwise returns -1 */ function firstZero(arr , low , high) { if (high >= low) { // Check if mid element is first 0 var mid = low + parseInt((high - low) / 2); if ((mid == 0 || arr[mid - 1] == 1) && arr[mid] == 0) return mid; if (arr[mid] == 1) // If mid element is not 0 return firstZero(arr, (mid + 1), high); else // If mid element is 0, but not first 0 return firstZero(arr, low, (mid - 1)); } return -1; } // A wrapper over recursive function firstZero() function countZeroes(arr , n) { // Find index of first zero in given array var first = firstZero(arr, 0, n - 1); // If 0 is not present at all, return 0 if (first == -1) return 0; return (n - first); } // Driver program to test above functions var arr = [ 1, 1, 1, 0, 0, 0, 0, 0 ]; var n = arr.length; document.write(\"Count of zeroes is \" + countZeroes(arr, n)); // This code is contributed by gauravrajput1</script>",
"e": 10647,
"s": 9309,
"text": null
},
{
"code": null,
"e": 10668,
"s": 10647,
"text": "Count of zeroes is 5"
},
{
"code": null,
"e": 10733,
"s": 10668,
"text": "Time Complexity: O(Logn) where n is number of elements in arr[]."
},
{
"code": null,
"e": 10746,
"s": 10733,
"text": "nitin mittal"
},
{
"code": null,
"e": 10759,
"s": 10746,
"text": "SoumikMondal"
},
{
"code": null,
"e": 10773,
"s": 10759,
"text": "GauravRajput1"
},
{
"code": null,
"e": 10793,
"s": 10773,
"text": "shaheeneallamaiqbal"
},
{
"code": null,
"e": 10810,
"s": 10793,
"text": "hardikkoriintern"
},
{
"code": null,
"e": 10816,
"s": 10810,
"text": "Yahoo"
},
{
"code": null,
"e": 10823,
"s": 10816,
"text": "Arrays"
},
{
"code": null,
"e": 10842,
"s": 10823,
"text": "Divide and Conquer"
},
{
"code": null,
"e": 10852,
"s": 10842,
"text": "Searching"
},
{
"code": null,
"e": 10858,
"s": 10852,
"text": "Yahoo"
},
{
"code": null,
"e": 10865,
"s": 10858,
"text": "Arrays"
},
{
"code": null,
"e": 10875,
"s": 10865,
"text": "Searching"
},
{
"code": null,
"e": 10894,
"s": 10875,
"text": "Divide and Conquer"
}
] |
Find minimum adjustment cost of an array
|
04 Jul, 2022
Given an array of positive integers, replace each element in the array such that the difference between adjacent elements in the array is less than or equal to a given target. We need to minimize the adjustment cost, that is, the sum of differences between new and old values. We basically need to minimize ∑|A[i] – Anew[i]| where 0 ≤ i ≤ n-1, n is the size of A[] and Anew[] is the array with adjacent differences less than or equal to target.
Assume all elements of the array are less than the constant M = 100.
Examples:
Input: arr = [1, 3, 0, 3], target = 1
Output: Minimum adjustment cost is 3
Explanation: One of the possible solutions
is [2, 3, 2, 3]
Input: arr = [2, 3, 2, 3], target = 1
Output: Minimum adjustment cost is 0
Explanation: All adjacent elements in the input
array are already less than equal to given target
Input: arr = [55, 77, 52, 61, 39, 6,
25, 60, 49, 47], target = 10
Output: Minimum adjustment cost is 75
Explanation: One of the possible solutions is
[55, 62, 52, 49, 39, 29, 30, 40, 49, 47]
In order to minimize the adjustment cost ∑|A[i] – Anew[i]| for all indices i in the array, |A[i] – Anew[i]| should be as close to zero as possible. Also, |Anew[i] – Anew[i+1]| ≤ target. This problem can be solved by dynamic programming.
Let dp[i][j] defines minimal adjustment cost on changing A[i] to j, then the DP relation is defined by –
dp[i][j] = min{dp[i - 1][k]} + |j - A[i]|
for all k's such that |k - j| ≤ target
Here, 0 ≤ i ≤ n-1 and 0 ≤ j ≤ M, where n is the number of elements in the array and M = 100. We have to consider all k such that max(j – target, 0) ≤ k ≤ min(M, j + target). Finally, the minimum adjustment cost of the array will be min{dp[n – 1][j]} for all 0 ≤ j ≤ M.
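The recurrence can also be sketched compactly in Python (an illustrative version, not the article's listing; since row i only reads row i-1, it keeps just two DP rows, reducing space from O(n*M) to O(M)):

```python
def min_adjustment_cost(A, target, M=100):
    # prev[j]: minimal cost so far with the previous element changed to j
    prev = [abs(j - A[0]) for j in range(M + 1)]
    for a in A[1:]:
        cur = []
        for j in range(M + 1):
            lo, hi = max(j - target, 0), min(M, j + target)
            # cheapest way to end the previous position within 'target' of j
            cur.append(min(prev[lo:hi + 1]) + abs(a - j))
        prev = cur
    return min(prev)

print(min_adjustment_cost([55, 77, 52, 61, 39, 6, 25, 60, 49, 47], 10))  # 75
```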
Below is the implementation of the above idea:
C++
Java
Python3
C#
PHP
Javascript
// C++ program to find minimum adjustment cost of an array#include <bits/stdc++.h>using namespace std; #define M 100 // Function to find minimum adjustment cost of an arrayint minAdjustmentCost(int A[], int n, int target){ // dp[i][j] stores minimal adjustment cost on changing // A[i] to j int dp[n][M + 1]; // handle first element of array separately for (int j = 0; j <= M; j++) dp[0][j] = abs(j - A[0]); // do for rest elements of the array for (int i = 1; i < n; i++) { // replace A[i] to j and calculate minimal adjustment // cost dp[i][j] for (int j = 0; j <= M; j++) { // initialize minimal adjustment cost to INT_MAX dp[i][j] = INT_MAX; // consider all k such that k >= max(j - target, 0) and // k <= min(M, j + target) and take minimum for (int k = max(j-target,0); k <= min(M,j+target); k++) dp[i][j] = min(dp[i][j], dp[i - 1][k] + abs(A[i] - j)); } } // return minimum value from last row of dp table int res = INT_MAX; for (int j = 0; j <= M; j++) res = min(res, dp[n - 1][j]); return res;} // Driver Program to test above functionsint main(){ int arr[] = {55, 77, 52, 61, 39, 6, 25, 60, 49, 47}; int n = sizeof(arr) / sizeof(arr[0]); int target = 10; cout << "Minimum adjustment cost is " << minAdjustmentCost(arr, n, target) << endl; return 0;}
// Java program to find minimum adjustment cost of an arrayimport java.io.*;import java.util.*; class GFG{ public static int M = 100; // Function to find minimum adjustment cost of an array static int minAdjustmentCost(int A[], int n, int target) { // dp[i][j] stores minimal adjustment cost on changing // A[i] to j int[][] dp = new int[n][M + 1]; // handle first element of array separately for (int j = 0; j <= M; j++) dp[0][j] = Math.abs(j - A[0]); // do for rest elements of the array for (int i = 1; i < n; i++) { // replace A[i] to j and calculate minimal adjustment // cost dp[i][j] for (int j = 0; j <= M; j++) { // initialize minimal adjustment cost to INT_MAX dp[i][j] = Integer.MAX_VALUE; // consider all k such that k >= max(j - target, 0) and // k <= min(M, j + target) and take minimum int k = Math.max(j-target,0); for ( ; k <= Math.min(M,j+target); k++) dp[i][j] = Math.min(dp[i][j], dp[i - 1][k] + Math.abs(A[i] - j)); } } // return minimum value from last row of dp table int res = Integer.MAX_VALUE; for (int j = 0; j <= M; j++) res = Math.min(res, dp[n - 1][j]); return res; } // Driver program public static void main (String[] args) { int arr[] = {55, 77, 52, 61, 39, 6, 25, 60, 49, 47}; int n = arr.length; int target = 10; System.out.println("Minimum adjustment cost is " +minAdjustmentCost(arr, n, target)); }} // This code is contributed by Pramod Kumar
# Python3 program to find minimum
# adjustment cost of an array
M = 100

# Function to find minimum
# adjustment cost of an array
def minAdjustmentCost(A, n, target):
    # dp[i][j] stores minimal adjustment
    # cost on changing A[i] to j
    dp = [[0 for i in range(M + 1)]
             for i in range(n)]

    # handle first element
    # of array separately
    for j in range(M + 1):
        dp[0][j] = abs(j - A[0])

    # do for rest elements
    # of the array
    for i in range(1, n):
        # replace A[i] to j and
        # calculate minimal adjustment
        # cost dp[i][j]
        for j in range(M + 1):
            # initialize minimal adjustment
            # cost to INT_MAX
            dp[i][j] = 100000000
            # consider all k such that
            # k >= max(j - target, 0) and
            # k <= min(M, j + target) and
            # take minimum
            for k in range(max(j - target, 0),
                           min(M, j + target) + 1):
                dp[i][j] = min(dp[i][j],
                               dp[i - 1][k] + abs(A[i] - j))

    # return minimum value from
    # last row of dp table
    res = 10000000
    for j in range(M + 1):
        res = min(res, dp[n - 1][j])
    return res

# Driver Code
arr = [55, 77, 52, 61, 39, 6,
       25, 60, 49, 47]
n = len(arr)
target = 10
print("Minimum adjustment cost is",
      minAdjustmentCost(arr, n, target), sep = ' ')

# This code is contributed
# by sahilshelangia
// C# program to find minimum adjustment
// cost of an array
using System;

class GFG {

    public static int M = 100;

    // Function to find minimum adjustment
    // cost of an array
    static int minAdjustmentCost(int []A, int n,
                                 int target)
    {
        // dp[i][j] stores minimal adjustment
        // cost on changing A[i] to j
        int[,] dp = new int[n, M + 1];

        // handle first element of array
        // separately
        for (int j = 0; j <= M; j++)
            dp[0, j] = Math.Abs(j - A[0]);

        // do for rest elements of the array
        for (int i = 1; i < n; i++)
        {
            // replace A[i] to j and calculate
            // minimal adjustment cost dp[i][j]
            for (int j = 0; j <= M; j++)
            {
                // initialize minimal adjustment
                // cost to INT_MAX
                dp[i, j] = int.MaxValue;

                // consider all k such that
                // k >= max(j - target, 0) and
                // k <= min(M, j + target) and
                // take minimum
                int k = Math.Max(j - target, 0);
                for ( ; k <= Math.Min(M, j + target); k++)
                    dp[i, j] = Math.Min(dp[i, j],
                               dp[i - 1, k] + Math.Abs(A[i] - j));
            }
        }

        // return minimum value from last
        // row of dp table
        int res = int.MaxValue;
        for (int j = 0; j <= M; j++)
            res = Math.Min(res, dp[n - 1, j]);

        return res;
    }

    // Driver program
    public static void Main ()
    {
        int []arr = {55, 77, 52, 61, 39, 6,
                     25, 60, 49, 47};
        int n = arr.Length;
        int target = 10;
        Console.WriteLine("Minimum adjustment" + " cost is "
                          + minAdjustmentCost(arr, n, target));
    }
}

// This code is contributed by Sam007.
<?php
// PHP program to find minimum
// adjustment cost of an array

$M = 100;

// Function to find minimum
// adjustment cost of an array
function minAdjustmentCost( $A, $n, $target)
{
    // dp[i][j] stores minimal
    // adjustment cost on changing
    // A[i] to j
    global $M;
    $dp = array(array());

    // handle first element
    // of array separately
    for($j = 0; $j <= $M; $j++)
        $dp[0][$j] = abs($j - $A[0]);

    // do for rest
    // elements of the array
    for($i = 1; $i < $n; $i++)
    {
        // replace A[i] to j and
        // calculate minimal adjustment
        // cost dp[i][j]
        for($j = 0; $j <= $M; $j++)
        {
            // initialize minimal adjustment
            // cost to INT_MAX
            $dp[$i][$j] = PHP_INT_MAX;

            // consider all k such that
            // k >= max(j - target, 0) and
            // k <= min(M, j + target) and
            // take minimum
            for($k = max($j - $target, 0);
                $k <= min($M, $j + $target); $k++)
                $dp[$i][$j] = min($dp[$i][$j],
                                  $dp[$i - 1][$k] +
                                  abs($A[$i] - $j));
        }
    }

    // return minimum value
    // from last row of dp table
    $res = PHP_INT_MAX;
    for($j = 0; $j <= $M; $j++)
        $res = min($res, $dp[$n - 1][$j]);

    return $res;
}

// Driver Code
$arr = array(55, 77, 52, 61, 39,
             6, 25, 60, 49, 47);
$n = count($arr);
$target = 10;
echo "Minimum adjustment cost is ",
     minAdjustmentCost($arr, $n, $target);

// This code is contributed by anuj_67.
?>
<script>
    // Javascript program to find minimum adjustment cost of an array
    let M = 100;

    // Function to find minimum adjustment cost of an array
    function minAdjustmentCost(A, n, target)
    {
        // dp[i][j] stores minimal adjustment cost on changing
        // A[i] to j
        let dp = new Array(n);
        for (let i = 0; i < n; i++)
        {
            dp[i] = new Array(n);
            for (let j = 0; j <= M; j++)
            {
                dp[i][j] = 0;
            }
        }

        // handle first element of array separately
        for (let j = 0; j <= M; j++)
            dp[0][j] = Math.abs(j - A[0]);

        // do for rest elements of the array
        for (let i = 1; i < n; i++)
        {
            // replace A[i] to j and calculate minimal adjustment
            // cost dp[i][j]
            for (let j = 0; j <= M; j++)
            {
                // initialize minimal adjustment cost to INT_MAX
                dp[i][j] = Number.MAX_VALUE;

                // consider all k such that k >= max(j - target, 0) and
                // k <= min(M, j + target) and take minimum
                let k = Math.max(j - target, 0);
                for ( ; k <= Math.min(M, j + target); k++)
                    dp[i][j] = Math.min(dp[i][j],
                                        dp[i - 1][k] + Math.abs(A[i] - j));
            }
        }

        // return minimum value from last row of dp table
        let res = Number.MAX_VALUE;
        for (let j = 0; j <= M; j++)
            res = Math.min(res, dp[n - 1][j]);

        return res;
    }

    let arr = [55, 77, 52, 61, 39, 6, 25, 60, 49, 47];
    let n = arr.length;
    let target = 10;
    document.write("Minimum adjustment cost is "
                   + minAdjustmentCost(arr, n, target));

    // This code is contributed by decode2207.
</script>
Minimum adjustment cost is 75
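The bottom-up DP above can also be written top-down with memoization, which follows the same recurrence but only has to spell out the base case once. A minimal Python sketch (function and variable names here are illustrative, not from the article):

```python
from functools import lru_cache

M = 100  # upper bound on array values, as assumed in the article

def min_adjustment_cost(A, target):
    n = len(A)

    @lru_cache(maxsize=None)
    def best(i, j):
        # minimal cost of fixing A[0..i] when A[i] is changed to j
        cost = abs(A[i] - j)
        if i == 0:
            return cost
        # previous value k must satisfy |k - j| <= target
        return cost + min(best(i - 1, k)
                          for k in range(max(j - target, 0),
                                         min(M, j + target) + 1))

    return min(best(n - 1, j) for j in range(M + 1))

print(min_adjustment_cost([55, 77, 52, 61, 39, 6, 25, 60, 49, 47], 10))  # 75
print(min_adjustment_cost([1, 3, 0, 3], 1))  # 3
```

Both versions evaluate the same n * (M + 1) states, each looking at at most 2 * target + 1 predecessors, so the running time is O(n * M * target).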
This article is contributed by Aditya Goel. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.
Sam007
vt_m
sahilshelangia
shubham_singh
decode2207
hardikkoriintern
Arrays
Dynamic Programming
Arrays
Dynamic Programming
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
|
[
{
"code": null,
"e": 52,
"s": 24,
"text": "\n04 Jul, 2022"
},
{
"code": null,
"e": 491,
"s": 52,
"text": "Given an array of positive integers, replace each element in the array such that the difference between adjacent elements in the array is less than or equal to a given target. We need to minimize the adjustment cost, that is the sum of differences between new and old values. We basically need to minimize ∑|A[i] – Anew[i]| where 0 ≤ i ≤ n-1, n is size of A[] and Anew[] is the array with adjacent difference less that or equal to target."
},
{
"code": null,
"e": 555,
"s": 491,
"text": "Assume all elements of the array is less than constant M = 100."
},
{
"code": null,
"e": 566,
"s": 555,
"text": "Examples: "
},
{
"code": null,
"e": 1084,
"s": 566,
"text": "Input: arr = [1, 3, 0, 3], target = 1\nOutput: Minimum adjustment cost is 3\nExplanation: One of the possible solutions \nis [2, 3, 2, 3]\n\nInput: arr = [2, 3, 2, 3], target = 1\nOutput: Minimum adjustment cost is 0\nExplanation: All adjacent elements in the input \narray are already less than equal to given target\n\nInput: arr = [55, 77, 52, 61, 39, 6, \n 25, 60, 49, 47], target = 10\nOutput: Minimum adjustment cost is 75\nExplanation: One of the possible solutions is \n[55, 62, 52, 49, 39, 29, 30, 40, 49, 47]"
},
{
"code": null,
"e": 1317,
"s": 1084,
"text": "In order to minimize the adjustment cost ∑|A[i] – Anew[i]| for all index i in the array, |A[i] – Anew[i]| should be as close to zero as possible. Also, |A[i] – Anew[i+1] ]| ≤ Target.This problem can be solved by dynamic programming."
},
{
"code": null,
"e": 1423,
"s": 1317,
"text": "Let dp[i][j] defines minimal adjustment cost on changing A[i] to j, then the DP relation is defined by – "
},
{
"code": null,
"e": 1515,
"s": 1423,
"text": "dp[i][j] = min{dp[i - 1][k]} + |j - A[i]|\n for all k's such that |k - j| ≤ target"
},
{
"code": null,
"e": 1775,
"s": 1515,
"text": "Here, 0 ≤ i ≤ n and 0 ≤ j ≤ M where n is number of elements in the array and M = 100. We have to consider all k such that max(j – target, 0) ≤ k ≤ min(M, j + target)Finally, the minimum adjustment cost of the array will be min{dp[n – 1][j]} for all 0 ≤ j ≤ M."
},
{
"code": null,
"e": 1819,
"s": 1775,
"text": "Below is the implementation of above idea –"
},
{
"code": null,
"e": 1823,
"s": 1819,
"text": "C++"
},
{
"code": null,
"e": 1828,
"s": 1823,
"text": "Java"
},
{
"code": null,
"e": 1836,
"s": 1828,
"text": "Python3"
},
{
"code": null,
"e": 1839,
"s": 1836,
"text": "C#"
},
{
"code": null,
"e": 1843,
"s": 1839,
"text": "PHP"
},
{
"code": null,
"e": 1854,
"s": 1843,
"text": "Javascript"
},
{
"code": "// C++ program to find minimum adjustment cost of an array#include <bits/stdc++.h>using namespace std; #define M 100 // Function to find minimum adjustment cost of an arrayint minAdjustmentCost(int A[], int n, int target){ // dp[i][j] stores minimal adjustment cost on changing // A[i] to j int dp[n][M + 1]; // handle first element of array separately for (int j = 0; j <= M; j++) dp[0][j] = abs(j - A[0]); // do for rest elements of the array for (int i = 1; i < n; i++) { // replace A[i] to j and calculate minimal adjustment // cost dp[i][j] for (int j = 0; j <= M; j++) { // initialize minimal adjustment cost to INT_MAX dp[i][j] = INT_MAX; // consider all k such that k >= max(j - target, 0) and // k <= min(M, j + target) and take minimum for (int k = max(j-target,0); k <= min(M,j+target); k++) dp[i][j] = min(dp[i][j], dp[i - 1][k] + abs(A[i] - j)); } } // return minimum value from last row of dp table int res = INT_MAX; for (int j = 0; j <= M; j++) res = min(res, dp[n - 1][j]); return res;} // Driver Program to test above functionsint main(){ int arr[] = {55, 77, 52, 61, 39, 6, 25, 60, 49, 47}; int n = sizeof(arr) / sizeof(arr[0]); int target = 10; cout << \"Minimum adjustment cost is \" << minAdjustmentCost(arr, n, target) << endl; return 0;}",
"e": 3292,
"s": 1854,
"text": null
},
{
"code": "// Java program to find minimum adjustment cost of an arrayimport java.io.*;import java.util.*; class GFG{ public static int M = 100; // Function to find minimum adjustment cost of an array static int minAdjustmentCost(int A[], int n, int target) { // dp[i][j] stores minimal adjustment cost on changing // A[i] to j int[][] dp = new int[n][M + 1]; // handle first element of array separately for (int j = 0; j <= M; j++) dp[0][j] = Math.abs(j - A[0]); // do for rest elements of the array for (int i = 1; i < n; i++) { // replace A[i] to j and calculate minimal adjustment // cost dp[i][j] for (int j = 0; j <= M; j++) { // initialize minimal adjustment cost to INT_MAX dp[i][j] = Integer.MAX_VALUE; // consider all k such that k >= max(j - target, 0) and // k <= min(M, j + target) and take minimum int k = Math.max(j-target,0); for ( ; k <= Math.min(M,j+target); k++) dp[i][j] = Math.min(dp[i][j], dp[i - 1][k] + Math.abs(A[i] - j)); } } // return minimum value from last row of dp table int res = Integer.MAX_VALUE; for (int j = 0; j <= M; j++) res = Math.min(res, dp[n - 1][j]); return res; } // Driver program public static void main (String[] args) { int arr[] = {55, 77, 52, 61, 39, 6, 25, 60, 49, 47}; int n = arr.length; int target = 10; System.out.println(\"Minimum adjustment cost is \" +minAdjustmentCost(arr, n, target)); }} // This code is contributed by Pramod Kumar",
"e": 5097,
"s": 3292,
"text": null
},
{
"code": "# Python3 program to find minimum# adjustment cost of an arrayM = 100 # Function to find minimum# adjustment cost of an arraydef minAdjustmentCost(A, n, target): # dp[i][j] stores minimal adjustment # cost on changing A[i] to j dp = [[0 for i in range(M + 1)] for i in range(n)] # handle first element # of array separately for j in range(M + 1): dp[0][j] = abs(j - A[0]) # do for rest elements # of the array for i in range(1, n): # replace A[i] to j and # calculate minimal adjustment # cost dp[i][j] for j in range(M + 1): # initialize minimal adjustment # cost to INT_MAX dp[i][j] = 100000000 # consider all k such that # k >= max(j - target, 0) and # k <= min(M, j + target) and # take minimum for k in range(max(j - target, 0), min(M, j + target) + 1): dp[i][j] = min(dp[i][j], dp[i - 1][k] + abs(A[i] - j)) # return minimum value from # last row of dp table res = 10000000 for j in range(M + 1): res = min(res, dp[n - 1][j]) return res # Driver Codearr= [55, 77, 52, 61, 39, 6, 25, 60, 49, 47]n = len(arr)target = 10print(\"Minimum adjustment cost is\", minAdjustmentCost(arr, n, target), sep = ' ') # This code is contributed# by sahilshelangia",
"e": 6649,
"s": 5097,
"text": null
},
{
"code": "// C# program to find minimum adjustment// cost of an arrayusing System; class GFG { public static int M = 100; // Function to find minimum adjustment // cost of an array static int minAdjustmentCost(int []A, int n, int target) { // dp[i][j] stores minimal adjustment // cost on changing A[i] to j int[,] dp = new int[n,M + 1]; // handle first element of array // separately for (int j = 0; j <= M; j++) dp[0,j] = Math.Abs(j - A[0]); // do for rest elements of the array for (int i = 1; i < n; i++) { // replace A[i] to j and calculate // minimal adjustment cost dp[i][j] for (int j = 0; j <= M; j++) { // initialize minimal adjustment // cost to INT_MAX dp[i,j] = int.MaxValue; // consider all k such that // k >= max(j - target, 0) and // k <= min(M, j + target) and // take minimum int k = Math.Max(j - target, 0); for ( ; k <= Math.Min(M, j + target); k++) dp[i,j] = Math.Min(dp[i,j], dp[i - 1,k] + Math.Abs(A[i] - j)); } } // return minimum value from last // row of dp table int res = int.MaxValue; for (int j = 0; j <= M; j++) res = Math.Min(res, dp[n - 1,j]); return res; } // Driver program public static void Main () { int []arr = {55, 77, 52, 61, 39, 6, 25, 60, 49, 47}; int n = arr.Length; int target = 10; Console.WriteLine(\"Minimum adjustment\" + \" cost is \" + minAdjustmentCost(arr, n, target)); }} // This code is contributed by Sam007.",
"e": 8633,
"s": 6649,
"text": null
},
{
"code": "<?php// PHP program to find minimum// adjustment cost of an array $M = 100; // Function to find minimum// adjustment cost of an arrayfunction minAdjustmentCost( $A, $n, $target){ // dp[i][j] stores minimal // adjustment cost on changing // A[i] to j global $M; $dp = array(array()); // handle first element // of array separately for($j = 0; $j <= $M; $j++) $dp[0][$j] = abs($j - $A[0]); // do for rest // elements of the array for($i = 1; $i < $n; $i++) { // replace A[i] to j and // calculate minimal adjustment // cost dp[i][j] for($j = 0; $j <= $M; $j++) { // initialize minimal adjustment // cost to INT_MAX $dp[$i][$j] = PHP_INT_MAX; // consider all k such that // k >= max(j - target, 0) and // k <= min(M, j + target) and // take minimum for($k = max($j - $target, 0); $k <= min($M, $j + $target); $k++) $dp[$i][$j] = min($dp[$i][$j], $dp[$i - 1][$k] + abs($A[$i] - $j)); } } // return minimum value // from last row of dp table $res = PHP_INT_MAX; for($j = 0; $j <= $M; $j++) $res = min($res, $dp[$n - 1][$j]); return $res;} // Driver Code $arr = array(55, 77, 52, 61, 39, 6, 25, 60, 49, 47); $n = count($arr); $target = 10; echo \"Minimum adjustment cost is \" , minAdjustmentCost($arr, $n, $target); // This code is contributed by anuj_67.?>",
"e": 10284,
"s": 8633,
"text": null
},
{
"code": "<script> // Javascript program to find minimum adjustment cost of an array let M = 100; // Function to find minimum adjustment cost of an array function minAdjustmentCost(A, n, target) { // dp[i][j] stores minimal adjustment cost on changing // A[i] to j let dp = new Array(n); for (let i = 0; i < n; i++) { dp[i] = new Array(n); for (let j = 0; j <= M; j++) { dp[i][j] = 0; } } // handle first element of array separately for (let j = 0; j <= M; j++) dp[0][j] = Math.abs(j - A[0]); // do for rest elements of the array for (let i = 1; i < n; i++) { // replace A[i] to j and calculate minimal adjustment // cost dp[i][j] for (let j = 0; j <= M; j++) { // initialize minimal adjustment cost to INT_MAX dp[i][j] = Number.MAX_VALUE; // consider all k such that k >= max(j - target, 0) and // k <= min(M, j + target) and take minimum let k = Math.max(j-target,0); for ( ; k <= Math.min(M,j+target); k++) dp[i][j] = Math.min(dp[i][j], dp[i - 1][k] + Math.abs(A[i] - j)); } } // return minimum value from last row of dp table let res = Number.MAX_VALUE; for (let j = 0; j <= M; j++) res = Math.min(res, dp[n - 1][j]); return res; } let arr = [55, 77, 52, 61, 39, 6, 25, 60, 49, 47]; let n = arr.length; let target = 10; document.write(\"Minimum adjustment cost is \" +minAdjustmentCost(arr, n, target)); // This code is contributed by decode2207.</script>",
"e": 12134,
"s": 10284,
"text": null
},
{
"code": null,
"e": 12165,
"s": 12134,
"text": "Minimum adjustment cost is 75\n"
},
{
"code": null,
"e": 12461,
"s": 12165,
"text": "This article is contributed by Aditya Goel. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. "
},
{
"code": null,
"e": 12468,
"s": 12461,
"text": "Sam007"
},
{
"code": null,
"e": 12473,
"s": 12468,
"text": "vt_m"
},
{
"code": null,
"e": 12488,
"s": 12473,
"text": "sahilshelangia"
},
{
"code": null,
"e": 12502,
"s": 12488,
"text": "shubham_singh"
},
{
"code": null,
"e": 12513,
"s": 12502,
"text": "decode2207"
},
{
"code": null,
"e": 12530,
"s": 12513,
"text": "hardikkoriintern"
},
{
"code": null,
"e": 12537,
"s": 12530,
"text": "Arrays"
},
{
"code": null,
"e": 12557,
"s": 12537,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 12564,
"s": 12557,
"text": "Arrays"
},
{
"code": null,
"e": 12584,
"s": 12564,
"text": "Dynamic Programming"
}
] |
Matplotlib.pyplot.cla() in Python
|
19 Apr, 2020
Matplotlib is a library in Python that serves as a numerical and mathematical extension of the NumPy library. Pyplot is a state-based interface to Matplotlib that provides a MATLAB-like way of plotting. Among the plots that can be produced with Pyplot are line plots, contours, histograms, scatter plots, 3D plots, etc.
The cla() function in the pyplot module of the matplotlib library is used to clear the current axes. Syntax:
matplotlib.pyplot.cla()
Below examples illustrate the matplotlib.pyplot.cla() function in matplotlib.pyplot:
Example 1:
# Implementation of matplotlib function
import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4], [16, 4, 1, 8])

plt.cla()
plt.title('matplotlib.pyplot.cla Example')
plt.show()
Output:
Example 2:
# Implementation of matplotlib function
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 2.0, 201)
s = np.sin(2 * np.pi * t)

fig, [ax, ax1] = plt.subplots(2, 1)

ax.set_ylabel('y-axis')
ax.plot(t, s)
ax.grid(True)

ax1.set_ylabel('y-axis')
ax1.set_xlabel('x-axis')
ax1.plot(t, s)
ax1.grid(True)
ax1.cla()

fig.suptitle('matplotlib.pyplot.cla Example')
plt.show()
Output:
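A common reason to call cla() is to wipe an axes before redrawing it inside a loop, for example in a simple animation. A minimal sketch of that pattern, assuming the non-interactive Agg backend so no window is required (file names and phases are illustrative):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; no display needed
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 100)

for phase in (0.0, 0.5, 1.0):
    ax.cla()                       # clear the previous frame's artists
    ax.plot(x, np.sin(x + phase))
    ax.set_title(f'phase = {phase}')
    fig.canvas.draw()              # render the frame (save/display in real use)

# after each cla(), the axes holds only the most recently plotted line
print(len(ax.lines))  # 1
```

Without the cla() call, every iteration would add another line on top of the previous ones instead of replacing them.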
Python-matplotlib
Python
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n19 Apr, 2020"
},
{
"code": null,
"e": 333,
"s": 28,
"text": "Matplotlib is a library in Python and it is numerical – mathematical extension for NumPy library. Pyplot is a state-based interface to a Matplotlib module which provides a MATLAB-like interface. There are various plots which can be used in Pyplot are Line Plot, Contour, Histogram, Scatter, 3D Plot, etc."
},
{
"code": null,
"e": 433,
"s": 333,
"text": "The cla() function in pyplot module of matplotlib library is used to clear the current axes.Syntax:"
},
{
"code": null,
"e": 458,
"s": 433,
"text": "matplotlib.pyplot.cla()\n"
},
{
"code": null,
"e": 543,
"s": 458,
"text": "Below examples illustrate the matplotlib.pyplot.cla() function in matplotlib.pyplot:"
},
{
"code": null,
"e": 554,
"s": 543,
"text": "Example 1:"
},
{
"code": "# Implementation of matplotlib function import matplotlib.pyplot as plt plt.plot([1, 2, 3, 4], [16, 4, 1, 8]) plt.cla()plt.title('matplotlib.pyplot.cla Example')plt.show()",
"e": 734,
"s": 554,
"text": null
},
{
"code": null,
"e": 742,
"s": 734,
"text": "Output:"
},
{
"code": null,
"e": 753,
"s": 742,
"text": "Example 2:"
},
{
"code": "# Implementation of matplotlib functionimport numpy as npimport matplotlib.pyplot as plt t = np.linspace(0.0, 2.0, 201)s = np.sin(2 * np.pi * t) fig, [ax, ax1] = plt.subplots(2, 1) ax.set_ylabel('y-axis')ax.plot(t, s)ax.grid(True) ax1.set_ylabel('y-axis')ax1.set_xlabel('x-axis')ax1.plot(t, s)ax1.grid(True)ax1.cla() fig.suptitle('matplotlib.pyplot.cla Example')plt.show()",
"e": 1137,
"s": 753,
"text": null
},
{
"code": null,
"e": 1145,
"s": 1137,
"text": "Output:"
},
{
"code": null,
"e": 1163,
"s": 1145,
"text": "Python-matplotlib"
},
{
"code": null,
"e": 1170,
"s": 1163,
"text": "Python"
}
] |
Java Program to Find out the Area and Perimeter of Rectangle using Class Concept
|
17 Dec, 2020
The perimeter is the length of the outline of a shape; to find the perimeter of a rectangle or square, you add the lengths of all four sides, i.e. twice the length plus twice the width. The area is the measurement of the surface of a shape.
The main task here is to create a class that can be used to find the Area and Perimeter of a rectangle.
By using the formula for Area of Rectangle and Perimeter of Rectangle:
Area of Rectangle = Length * Breadth
Perimeter of Rectangle = 2 * (Length + Breadth)
Example:
Length = 10
Breadth = 20
Area of rectangle is = 200
Perimeter of rectangle is = 60
Approach :
First, we will create a class called Rectangle in which we will define the variables length and breadth, and the methods Area and Perimeter to calculate the area and perimeter of the rectangle.
Then we will create another class in which we will create a main method.
Inside the main method, an object of the Rectangle class is created, using which we can assign values to the variables of the Rectangle class; after assigning the values, the methods of the Rectangle class are called to calculate the area and perimeter of the rectangle.
Both methods of the Rectangle class will use the values of length and breadth that we assigned from the main method using the object of that class, and the result will be printed.
Example:
Java
// Java program to create a class to
// print the area and perimeter of a
// rectangle

import java.util.*;

// Rectangle Class File
public class Rectangle {

    // Variable of data type double
    double length;
    double width;

    // Area Method to calculate the area of Rectangle
    void Area()
    {
        double area;
        area = this.length * this.width;
        System.out.println("Area of rectangle is : " + area);
    }

    // Perimeter Method to calculate the Perimeter of
    // Rectangle
    void Perimeter()
    {
        double perimeter;
        perimeter = 2 * (this.length + this.width);
        System.out.println("Perimeter of rectangle is : "
                           + perimeter);
    }
}

class Use_Rectangle {
    public static void main(String args[])
    {
        // Object of Rectangle class is created
        Rectangle rect = new Rectangle();

        // Assigning the value in the length variable of
        // Rectangle Class
        rect.length = 15.854;

        // Assigning the value in the width variable of
        // Rectangle Class
        rect.width = 22.65;

        System.out.println("Length = " + rect.length);
        System.out.println("Width = " + rect.width);

        // Calling of Area method of Rectangle Class
        rect.Area();

        // Calling of Perimeter method of Rectangle Class
        rect.Perimeter();
    }
}
Length = 15.854
Width = 22.65
Area of rectangle is : 359.09309999999994
Perimeter of rectangle is : 77.008
Picked
Java
Java Programs
Java
|
[
{
"code": null,
"e": 53,
"s": 25,
"text": "\n17 Dec, 2020"
},
{
"code": null,
"e": 342,
"s": 53,
"text": "The perimeter is the length of the outline of a shape. To find the perimeter of a rectangle or square you have to add the lengths of all four sides. x is in this case the length of the rectangle while y is the width of the rectangle. The area is the measurement of the surface of a shape."
},
{
"code": null,
"e": 447,
"s": 342,
"text": "The main task here is to create a class that can be used to find the Area and Perimeter of a rectangle. "
},
{
"code": null,
"e": 518,
"s": 447,
"text": "By using the formula for Area of Rectangle and Perimeter of Rectangle:"
},
{
"code": null,
"e": 555,
"s": 518,
"text": "Area of Rectangle = Length * Breadth"
},
{
"code": null,
"e": 603,
"s": 555,
"text": "Perimeter of Rectangle = 2 * (Length + Breadth)"
},
{
"code": null,
"e": 612,
"s": 603,
"text": "Example:"
},
{
"code": null,
"e": 624,
"s": 612,
"text": "Length = 10"
},
{
"code": null,
"e": 637,
"s": 624,
"text": "Breadth = 20"
},
{
"code": null,
"e": 664,
"s": 637,
"text": "Area of rectangle is = 200"
},
{
"code": null,
"e": 695,
"s": 664,
"text": "Perimeter of rectangle is = 60"
},
{
"code": null,
"e": 706,
"s": 695,
"text": "Approach :"
},
{
"code": null,
"e": 894,
"s": 706,
"text": "First, we will create a class called Rectangle in which we will define the variable length and breadth. And the method Area and Perimeter to calculate the Area and Perimeter of Rectangle."
},
{
"code": null,
"e": 967,
"s": 894,
"text": "Then we will create another class in which we will create a main method."
},
{
"code": null,
"e": 1239,
"s": 967,
"text": "Inside the main method, the object of the rectangle is created using which we can call assign the values to the variable of Rectangle class, and after assigning the values the methods of the Rectangle class will be called to calculate the Area and Perimeter of Rectangle."
},
{
"code": null,
"e": 1426,
"s": 1239,
"text": "Both the methods of the Rectangle class will use the values of length and breadth that we had passed from the main method using the object of that class. And the result will get printed."
},
{
"code": null,
"e": 1435,
"s": 1426,
"text": "Example:"
},
{
"code": null,
"e": 1440,
"s": 1435,
"text": "Java"
},
{
"code": "// Java program to create a class to// print the area and perimeter of a// rectangle import java.util.*; // Rectangle Class Filepublic class Rectangle { // Variable of data type double double length; double width; // Area Method to calculate the area of Rectangle void Area() { double area; area = this.length * this.width; System.out.println(\"Area of rectangle is : \" + area); } // Perimeter Method to calculate the Perimeter of // Rectangle void Perimeter() { double perimeter; perimeter = 2 * (this.length + this.width); System.out.println(\"Perimeter of rectangle is : \" + perimeter); }} class Use_Rectangle { public static void main(String args[]) { // Object of Rectangle class is created Rectangle rect = new Rectangle(); // Assigning the value in the length variable of // Rectangle Class rect.length = 15.854; // Assigning the value in the width variable of // Rectangle Class rect.width = 22.65; System.out.println(\"Length = \" + rect.length); System.out.println(\"Width = \" + rect.width); // Calling of Area method of Rectangle Class rect.Area(); // Calling of Perimeter method of Rectangle Class rect.Perimeter(); }}",
"e": 2828,
"s": 1440,
"text": null
},
{
"code": null,
"e": 2935,
"s": 2828,
"text": "Length = 15.854\nWidth = 22.65\nArea of rectangle is : 359.09309999999994\nPerimeter of rectangle is : 77.008"
},
{
"code": null,
"e": 2942,
"s": 2935,
"text": "Picked"
},
{
"code": null,
"e": 2947,
"s": 2942,
"text": "Java"
},
{
"code": null,
"e": 2961,
"s": 2947,
"text": "Java Programs"
},
{
"code": null,
"e": 2966,
"s": 2961,
"text": "Java"
}
] |
Problem Solving for Minimum Spanning Trees (Kruskal’s and Prim’s) - GeeksforGeeks
|
04 Oct, 2018
A minimum spanning tree (MST) is an important topic for GATE, so we will discuss how to solve different types of questions based on MSTs. Before reading this article, you should understand the basics of MSTs and their algorithms (Kruskal's algorithm and Prim's algorithm).
Type 1. Conceptual questions based on MST –There are some important properties of MST on the basis of which conceptual questions can be asked as:
The number of edges in MST with n nodes is (n-1).
The weight of the MST of a graph is always unique. However, there may be different ways to get this weight (if there are edges with the same weights).
The weight of MST is sum of weights of edges in MST.
Maximum path length between two vertices is (n-1) for MST with n vertices.
There exists only one path from one vertex to another in MST.
Removal of any edge from MST disconnects the graph.
For a graph having edges with distinct weights, MST is unique.
Que – 1. Let G be an undirected connected graph with distinct edge weights. Let emax be the edge with maximum weight and emin the edge with minimum weight. Which of the following statements is false? (GATE CS 2000)
(A) Every minimum spanning tree of G must contain emin.
(B) If emax is in a minimum spanning tree, then its removal must disconnect G.
(C) No minimum spanning tree contains emax.
(D) G has a unique minimum spanning tree.
Solution: As edge weights are unique, there will be only one edge emin and it will be added to the MST; therefore, option (A) is always true. As a spanning tree has the minimum number of edges, removal of any edge will disconnect the graph; therefore, option (B) is also true. As all edge weights are distinct, G will have a unique minimum spanning tree, so option (D) is correct. Option (C) is false, as emax can be part of the MST if the other edges with lesser weights create a cycle and the number of edges before adding emax is less than (n-1).
Type 2. How to find the weight of the minimum spanning tree given the graph – This is the simplest type of question based on MSTs. To solve it using Kruskal's algorithm:
Arrange the edges in non-decreasing order of weights.
Add edges one by one if they don't create a cycle, until we get (n-1) edges, where n is the number of nodes in the graph.
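The two steps above can be sketched directly with a small union-find structure. A minimal Python illustration on a made-up 4-vertex graph (not a graph from any of the questions below):

```python
# Kruskal's algorithm: sort edges by weight, add an edge whenever it joins
# two different components (checked with union-find), stop at n - 1 edges.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def kruskal_mst(n, edges):
    parent = list(range(n))
    mst, total = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:               # edge does not create a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
            if len(mst) == n - 1:  # an MST always has n - 1 edges
                break
    return mst, total

# small illustrative graph: a 4-cycle with one diagonal
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (3, 0, 4), (0, 2, 5)]
mst, total = kruskal_mst(4, edges)
print(total)  # 6
```

Prim's algorithm would yield the same total weight, since the MST weight of a graph is unique.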
Que – 2. Consider a complete undirected graph with vertex set {0, 1, 2, 3, 4}. Entry Wij in the matrix W below is the weight of the edge {i, j}. What is the minimum possible weight of a spanning tree T in this graph such that vertex 0 is a leaf node in the tree T? (GATE CS 2010)
(A) 7
(B) 8
(C) 9
(D) 10
Solution: In the adjacency matrix of the graph with 5 vertices (v1 to v5), the edges arranged in non-decreasing order are:
(v1,v2), (v1,v4), (v4,v5), (v3,v5), (v1,v5),
(v2,v4), (v3,v4), (v1,v3), (v2,v5), (v2,v3)
As it is given, vertex v1 is a leaf node, it should have only one edge incident to it. Therefore, we will consider it in the end. Considering vertices v2 to v5, edges in non decreasing order are:
(v4,v5), (v3,v5), (v2,v4), (v3,v4), (v2,v5), (v2,v3)
Adding the first three edges (v4,v5), (v3,v5), (v2,v4) creates no cycle. Also, we can connect v1 to v2 using edge (v1,v2). The total weight is the sum of the weights of these 4 edges, which is 10.
Type 3. How many minimum spanning trees are possible using Kruskal’s algorithm for a given graph –
If all edge weights are distinct, the minimum spanning tree is unique.
If two edges have the same weight, then we have to consider both possibilities and find all possible minimum spanning trees.
Que – 3. The number of distinct minimum spanning trees for the weighted graph below is ____ (GATE-CS-2014)
(A) 4
(B) 5
(C) 6
(D) 7
Solution: There are 5 edges with weight 1, and adding them all to the MST does not create a cycle. As the graph has 9 vertices, we require a total of 8 edges, out of which 5 have been added. Of the remaining 3, one edge is fixed, represented by f.
For the remaining 2 edges, one is to be chosen from c, d, or e, and another is to be chosen from a or b. The remaining black edges will always create a cycle, so they are not considered. So the number of possible MSTs is 3 * 2 = 6.
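For small graphs, the fact that equal edge weights can yield several minimum spanning trees is easy to check by brute force: enumerate all (n-1)-edge subsets, keep the ones that form a spanning tree, and count those of minimum total weight. A sketch on a made-up triangle with two interchangeable equal-weight edges (not the graph from the question):

```python
from itertools import combinations

def count_msts(n, edges):
    def spans(subset):
        # union-find connectivity check: n - 1 acyclic edges on n
        # vertices form a spanning tree
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v, _ in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False  # cycle, so not a tree
            parent[ru] = rv
        return True

    trees = [s for s in combinations(edges, n - 1) if spans(s)]
    best = min(sum(w for _, _, w in s) for s in trees)
    return sum(1 for s in trees if sum(w for _, _, w in s) == best)

# triangle where the two weight-2 edges are interchangeable:
# two distinct MSTs of weight 3
print(count_msts(3, [(0, 1, 1), (1, 2, 2), (0, 2, 2)]))  # 2
```

This exhaustive check is only feasible for tiny graphs, but it confirms the counting argument used in the solution above.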
Type 4. Out of the given sequences, which one is not the sequence of edges added to the MST using Kruskal's algorithm – To solve this type of question, work out which sequences of edges Kruskal's algorithm can produce. The sequence that cannot be produced is the answer.
Que – 4. Consider the following graph: Which one of the following is NOT the sequence of edges added to the minimum spanning tree using Kruskal's algorithm? (GATE-CS-2009)
(A) (b,e), (e,f), (a,c), (b,c), (f,g), (c,d)
(B) (b,e), (e,f), (a,c), (f,g), (b,c), (c,d)
(C) (b,e), (a,c), (e,f), (b,c), (f,g), (c,d)
(D) (b,e), (e,f), (b,c), (a,c), (f,g), (c,d)
Solution: Kruskal's algorithm adds the edges in non-decreasing order of their weights; therefore, we first sort the edges in non-decreasing order of weight as:
(b,e), (e,f), (a,c), (b,c), (f,g), (a,b), (e,g), (c,d), (b,d), (e,d), (d,f).
First it will add (b,e) to the MST. Then it will add (e,f) as well as (a,c) (either (e,f) followed by (a,c) or vice versa), because both have the same weight and adding both of them will not create a cycle. However, in option (D), (b,c) has been added to the MST before adding (a,c), so it cannot be a sequence produced by Kruskal's algorithm.
KarishmaChadha
MST
GATE CS
Greedy
Greedy
|
[
{
"code": null,
"e": 25729,
"s": 25701,
"text": "\n04 Oct, 2018"
},
{
"code": null,
"e": 26008,
"s": 25729,
"text": "Minimum spanning Tree (MST) is an important topic for GATE. Therefore, we will discuss how to solve different types of questions based on MST. Before understanding this article, you should understand basics of MST and their algorithms (Kruskal’s algorithm and Prim’s algorithm)."
},
{
"code": null,
"e": 26154,
"s": 26008,
"text": "Type 1. Conceptual questions based on MST –There are some important properties of MST on the basis of which conceptual questions can be asked as:"
},
{
"code": null,
"e": 26204,
"s": 26154,
"text": "The number of edges in MST with n nodes is (n-1)."
},
{
"code": null,
"e": 26342,
"s": 26204,
"text": "The weight of MST of a graph is always unique. However there may be different ways to get this weight (if there edges with same weights)."
},
{
"code": null,
"e": 26395,
"s": 26342,
"text": "The weight of MST is sum of weights of edges in MST."
},
{
"code": null,
"e": 26470,
"s": 26395,
"text": "Maximum path length between two vertices is (n-1) for MST with n vertices."
},
{
"code": null,
"e": 26532,
"s": 26470,
"text": "There exists only one path from one vertex to another in MST."
},
{
"code": null,
"e": 26584,
"s": 26532,
"text": "Removal of any edge from MST disconnects the graph."
},
{
"code": null,
"e": 26647,
"s": 26584,
"text": "For a graph having edges with distinct weights, MST is unique."
},
{
"code": null,
"e": 27075,
"s": 26647,
"text": "Que – 1. Let G be an undirected connected graph with distinct edge weight. Let emax be the edge with maximum weight and emin the edge with minimum weight. Which of the following statements is false? (GATE CS 2000)(A) Every minimum spanning tree of G must contain emin.(B) If emax is in a minimum spanning tree, then its removal must disconnect G(C) No minimum spanning tree contains emax(D) G has a unique minimum spanning tree"
},
{
"code": null,
"e": 27603,
"s": 27075,
"text": "Solution: As edge weights are unique, there will be only one edge emin and that will be added to MST, therefore option (A) is always true.As spanning tree has minimum number of edges, removal of any edge will disconnect the graph. Therefore, option (B) is also true.As all edge weights are distinct, G will have a unique minimum spanning tree. So, option (D) is correct.Option C is false as emax can be part of MST if other edges with lesser weights are creating cycle and number of edges before adding emax is less than (n-1)."
},
{
"code": null,
"e": 27769,
"s": 27603,
"text": "Type 2. How to find the weight of minimum spanning tree given the graph –This is the simplest type of question based on MST. To solve this using kruskal’s algorithm,"
},
{
"code": null,
"e": 27823,
"s": 27769,
"text": "Arrange the edges in non-decreasing order of weights."
},
{
"code": null,
"e": 27946,
"s": 27823,
"text": "Add edges one by one if they don’t create cycle until we get n-1 number of edges where n are number of nodes in the graph."
},
{
"code": null,
"e": 28247,
"s": 27946,
"text": "Que – 2. Consider a complete undirected graph with vertex set {0, 1, 2, 3, 4}. Entry Wij in the matrix W below is the weight of the edge {i, j}. What is the minimum possible weight of a spanning tree T in this graph such that vertex 0 is a leaf node in the tree T? (GATE CS 2010)(A) 7(B) 8(C) 9(D) 10"
},
{
"code": null,
"e": 28370,
"s": 28247,
"text": "Solution: In the adjacency matrix of the graph with 5 vertices (v1 to v5), the edges arranged in non-decreasing order are:"
},
{
"code": null,
"e": 28462,
"s": 28370,
"text": "(v1,v2), (v1,v4), (v4,v5), (v3,v5), (v1,v5), \n(v2,v4), (v3,v4), (v1,v3), (v2,v5), (v2,v3) \n"
},
{
"code": null,
"e": 28658,
"s": 28462,
"text": "As it is given, vertex v1 is a leaf node, it should have only one edge incident to it. Therefore, we will consider it in the end. Considering vertices v2 to v5, edges in non decreasing order are:"
},
{
"code": null,
"e": 28712,
"s": 28658,
"text": "(v4,v5), (v3,v5), (v2,v4), (v3,v4), (v2,v5), (v2,v3)\n"
},
{
"code": null,
"e": 28899,
"s": 28712,
"text": "Adding first three edges (v4,v5), (v3,v5), (v2,v4), no cycle is created. Also, we can connect v1 to v2 using edge (v1,v2). The total weight is sum of weight of these 4 edges which is 10."
},
{
"code": null,
"e": 28998,
"s": 28899,
"text": "Type 3. How many minimum spanning trees are possible using Kruskal’s algorithm for a given graph –"
},
{
"code": null,
"e": 29065,
"s": 28998,
"text": "If all edges weight are distinct, minimum spanning tree is unique."
},
{
"code": null,
"e": 29182,
"s": 29065,
"text": "If two edges have same weight, then we have to consider both possibilities and find possible minimum spanning trees."
},
{
"code": null,
"e": 29309,
"s": 29182,
"text": "Que – 3. The number of distinct minimum spanning trees for the weighted graph below is ____ (GATE-CS-2014)(A) 4(B) 5(C) 6(D) 7"
},
{
"code": null,
"e": 29551,
"s": 29309,
"text": "Solution: There are 5 edges with weight 1 and adding them all in MST does not create cycle.As the graph has 9 vertices, therefore we require total 8 edges out of which 5 has been added. Out of remaining 3, one edge is fixed represented by f."
},
{
"code": null,
"e": 29760,
"s": 29551,
"text": "For remaining 2 edges, one is to be chosen from c or d or e and another one is to be chosen from a or b. Remaining black ones will always create cycle so they are not considered. So, possible MST are 3*2 = 6."
},
{
"code": null,
"e": 30034,
"s": 29760,
"text": "Type 4. Out of given sequences, which one is not the sequence of edges added to the MST using Kruskal’s algorithm –To solve this type of questions, try to find out the sequence of edges which can be produced by Kruskal. The sequence which does not match will be the answer."
},
{
"code": null,
"e": 30381,
"s": 30034,
"text": "Que – 4. Consider the following graph:Which one of the following is NOT the sequence of edges added to the minimum spanning tree using Kruskal’s algorithm? (GATE-CS-2009)(A) (b,e), (e,f), (a,c), (b,c), (f,g), (c,d)(B) (b,e), (e,f), (a,c), (f,g), (b,c), (c,d)(C) (b,e), (a,c), (e,f), (b,c), (f,g), (c,d)(D) (b,e), (e,f), (b,c), (a,c), (f,g), (c,d)"
},
{
"code": null,
"e": 30540,
"s": 30381,
"text": "Solution: Kruskal algorithms adds the edges in non-decreasing order of their weights, therefore, we first sort the edges in non-decreasing order of weight as:"
},
{
"code": null,
"e": 30618,
"s": 30540,
"text": "(b,e), (e,f), (a,c), (b,c), (f,g), (a,b), (e,g), (c,d), (b,d), (e,d), (d,f).\n"
},
{
"code": null,
"e": 30953,
"s": 30618,
"text": "First it will add (b,e) in MST. Then, it will add (e,f) as well as (a,c) (either (e,f) followed by (a,c) or vice versa) because of both having same weight and adding both of them will not create cycle.However, in option (D), (b,c) has been added to MST before adding (a,c). So it can’t be the sequence produced by Kruskal’s algorithm."
},
{
"code": null,
"e": 30968,
"s": 30953,
"text": "KarishmaChadha"
},
{
"code": null,
"e": 30972,
"s": 30968,
"text": "MST"
},
{
"code": null,
"e": 30980,
"s": 30972,
"text": "GATE CS"
},
{
"code": null,
"e": 30987,
"s": 30980,
"text": "Greedy"
},
{
"code": null,
"e": 30994,
"s": 30987,
"text": "Greedy"
},
{
"code": null,
"e": 31092,
"s": 30994,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31126,
"s": 31092,
"text": "Differences between IPv4 and IPv6"
},
{
"code": null,
"e": 31167,
"s": 31126,
"text": "Preemptive and Non-Preemptive Scheduling"
},
{
"code": null,
"e": 31220,
"s": 31167,
"text": "Difference between Clustered and Non-clustered index"
},
{
"code": null,
"e": 31260,
"s": 31220,
"text": "Introduction of Process Synchronization"
},
{
"code": null,
"e": 31281,
"s": 31260,
"text": "Phases of a Compiler"
},
{
"code": null,
"e": 31332,
"s": 31281,
"text": "Dijkstra's shortest path algorithm | Greedy Algo-7"
},
{
"code": null,
"e": 31390,
"s": 31332,
"text": "Kruskal’s Minimum Spanning Tree Algorithm | Greedy Algo-2"
},
{
"code": null,
"e": 31441,
"s": 31390,
"text": "Prim’s Minimum Spanning Tree (MST) | Greedy Algo-5"
},
{
"code": null,
"e": 31468,
"s": 31441,
"text": "Program for array rotation"
}
] |
Python | String translate() - GeeksforGeeks
24 Jul, 2020
translate() returns a modified copy of the given string, with each character replaced according to the given translation mappings.
There are two ways to translate: by passing a mapping dictionary directly to translate(), or by first building a translation table with maketrans().
Syntax :

string.translate(mapping)

Parameters :
mapping – a dictionary holding the mapping between characters; keys are Unicode ordinals, and values are ordinals, strings, or None (to delete the character).

Returns : Returns the modified string where each character is mapped to its corresponding character according to the provided mapping table.
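One detail worth noting about the dictionary form: the keys are Unicode ordinals (code points), not characters, which is why the example below uses numbers like 119. Writing the table with ord() is equivalent and easier to read (a small sketch):

```python
# Same mapping as the numeric table below (119 = 'w', 121 = 'y', 117 = 'u'),
# written with ord() for readability; a value of None deletes the character.
table = {ord('w'): ord('g'), ord('y'): ord('f'), ord('u'): None}
result = "weeksyourweeks".translate(table)
print(result)   # geeksforgeeks
```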
# Python3 code to demonstrate
# translations without
# maketrans()

# specifying the mapping
# using ASCII
table = { 119 : 103, 121 : 102, 117 : None }

# target string
trg = "weeksyourweeks"

# Printing original string
print("The string before translating is : ", end ="")
print(trg)

# using translate() to make translations.
print("The string after translating is : ", end ="")
print(trg.translate(table))
The string before translating is : weeksyourweeks
The string after translating is : geeksforgeeks
One more example:
# Python 3 Program to show working
# of translate() method

# specifying the mapping
# using ASCII
translation = {103: None, 101: None}

string = "geeks"
print("Original string:", string)

# translate string
print("Translated string:", string.translate(translation))
Original string: geeks
Translated string: ks
Syntax : maketrans(str1, str2, str3)

Parameters :
str1 : Specifies the list of characters that need to be replaced.
str2 : Specifies the list of characters with which the characters need to be replaced.
str3 : Specifies the list of characters that need to be deleted.
Returns : Returns the translation table which specifies the conversions that can be used by translate()
# Python 3 Program to show working
# of translate() method

# First String
firstString = "gef"

# Second String
secondString = "eks"

# Third String
thirdString = "ge"

# Original String
string = "geeks"
print("Original string:", string)

translation = string.maketrans(firstString, secondString, thirdString)

# Translated String
print("Translated string:", string.translate(translation))
Original string: geeks
Translated string: ks
nidhi_biet
python-string
Python-string-functions
Python
Character.isISOControl() method with examples in Java - GeeksforGeeks
06 Dec, 2018
The java.lang.Character.isISOControl() is an inbuilt method in Java which determines if the specified character is an ISO control character or not. A character is considered to be an ISO control character if its code is in the range ‘\u0000’ through ‘\u001F’ or in the range ‘\u007F’ through ‘\u009F’. The char version of this method cannot handle supplementary characters. In order to support all Unicode characters, including supplementary characters, the parameter can be of int datatype in the above method.
Syntax:
public static boolean isISOControl(data_type ch)
Parameters: The function accepts a single parameter ch which is mandatory. It specifies the character to be tested. The parameter can be of char or int datatype.
Return value: The function returns a boolean value. The boolean value is true if the character is an ISO control character or false otherwise.
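Under the hood this is just a code-point range check. As a language-neutral illustration (not part of the Java API), the same rule can be sketched in Python:

```python
# ISO control character test: code point in U+0000..U+001F or U+007F..U+009F.
def is_iso_control(ch):
    cp = ord(ch) if isinstance(ch, str) else ch   # accept a char or an int
    return 0x00 <= cp <= 0x1F or 0x7F <= cp <= 0x9F

print(is_iso_control('\u0017'))   # True  (in U+0000..U+001F)
print(is_iso_control('-'))        # False
print(is_iso_control(0x008F))     # True  (in U+007F..U+009F)
```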
Below programs illustrate the above method:
Program 1:
// java program to demonstrate
// Character.isISOControl() method
// when the parameter is a character

import java.lang.*;

public class gfg {
    public static void main(String[] args)
    {
        // create 2 char primitives c1, c2 and assign values
        char c1 = '-', c2 = '\u0017';

        // assign isISOControl result of c1
        // to boolean primitive bool1
        boolean bool1 = Character.isISOControl(c1);
        if (bool1)
            System.out.println(c1 + " is an ISO control character");
        else
            System.out.println(c1 + " is not an ISO control character");

        // assign isISOControl result of c2
        // to boolean primitive bool2
        boolean bool2 = Character.isISOControl(c2);
        if (bool2)
            System.out.println(c2 + " is an ISO control character");
        else
            System.out.println(c2 + " is not an ISO control character");
    }
}
- is not an ISO control character
u0017 is an ISO control character
Program 2:
// java program to demonstrate
// Character.isISOControl(int codePoint) method
// when the parameter is an integer

import java.lang.*;

public class gfg {
    public static void main(String[] args)
    {
        // create 2 int primitives c1, c2 and assign values
        int c1 = 0x008f;
        int c2 = 0x0123;

        // assign isISOControl result of c1
        // to boolean primitive bool1
        boolean bool1 = Character.isISOControl(c1);
        if (bool1)
            System.out.println(c1 + " is an ISO control character");
        else
            System.out.println(c1 + " is not an ISO control character");

        // assign isISOControl result of c2
        // to boolean primitive bool2
        boolean bool2 = Character.isISOControl(c2);
        if (bool2)
            System.out.println(c2 + " is an ISO control character");
        else
            System.out.println(c2 + " is not an ISO control character");
    }
}
143 is an ISO control character
291 is not an ISO control character
Reference: https://docs.oracle.com/javase/7/docs/api/java/lang/Character.html#isISOControl(char)
Java-Character
Java-Functions
Java-lang package
Java
How to convert an image to base64 encoding in PHP? - GeeksforGeeks
31 Jul, 2021
The base64_encode() function is an inbuilt function in PHP which is used to convert any data to base64 encoding. In order to convert an image into base64 encoding, we first need to get the contents of the file. This can be done with the help of the file_get_contents() function of PHP. Then pass this raw data to the base64_encode() function to encode it.
Required Functions:
base64_encode() Function The base64_encode() function is an inbuilt function in PHP which is used to encode data with MIME base64. MIME (Multipurpose Internet Mail Extensions) base64 is used to encode the string in base64. The base64-encoded data takes 33% more space than the original data.
file_get_contents() Function The file_get_contents() function in PHP is an inbuilt function which is used to read a file into a string. The function uses memory mapping techniques which are supported by the server and thus enhances the performances making it a preferred way of reading contents of a file.
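For comparison, the same two steps (read the raw bytes, then base64-encode them) can be sketched in Python; the helper name and file path here are illustrative, not part of the article:

```python
import base64

def image_to_base64(path):
    # Step 1: read the raw file contents (PHP: file_get_contents)
    with open(path, "rb") as f:
        raw = f.read()
    # Step 2: base64-encode the bytes (PHP: base64_encode)
    return base64.b64encode(raw).decode("ascii")
```

As in PHP, the encoded string is about 33% larger than the raw data, since every 3 input bytes become 4 output characters.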
Input Image:
Program:
<?php
// Get the image and convert into string
$img = file_get_contents('https://media.geeksforgeeks.org/wp-content/uploads/geeksforgeeks-22.png');

// Encode the image string data into base64
$data = base64_encode($img);

// Display the output
echo $data;
?>
Output:
iVBORw0KGgoAAAANSUhEUgAAApsAAAC4CAYAAACsNSfVAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsQAAA7EAZUrDhsAAAAZdEVYdFNvZnR3YXJdhfdsglgklAEFkb2JlIEltYWdlUmVhZHlxyWqwrwtwefd...TeUsalQKBQKhUKhsBvK2FQoFAqFQqFQ2A1lbCoUCoVCoVAo7IYyNhUKhUKhUCgUdkMZmwqFQKBQKO0H0fxpZ1bfc
Reference:
http://php.net/manual/en/function.base64-encode.php
http://php.net/manual/en/function.file-get-contents.php
Picked
Technical Scripter 2018
PHP
Technical Scripter
Web Technologies
Construct a linked list from 2D matrix in C++
Suppose we have a matrix; we have to convert it to a 2D linked list using a recursive approach.
The list will have the right and down pointer.
So, if the input is the matrix { {10, 20, 30}, {40, 50, 60}, {70, 80, 90} }, then the output will be the corresponding 2D linked list, printed row by row:
10 20 30
40 50 60
70 80 90
To solve this, we will follow these steps −
Define a function make_2d_list(), this will take matrix mat, i, j, m, n:
if i and j are not in the matrix boundary, then return null
temp := create a new node with value mat[i, j]
right of temp := make_2d_list(mat, i, j + 1, m, n)
down of temp := make_2d_list(mat, i + 1, j, m, n)
return temp
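The steps above can be sketched directly in Python before looking at the C++ version (a transcription of the recursion, not the article's code):

```python
# Recursive 2D linked list construction: each node links right to (i, j+1)
# and down to (i+1, j); the recursion stops outside the matrix boundary.
class Node:
    def __init__(self, data):
        self.data = data
        self.right = None
        self.down = None

def make_2d_list(mat, i=0, j=0):
    if i >= len(mat) or j >= len(mat[0]):   # outside the matrix boundary
        return None
    node = Node(mat[i][j])
    node.right = make_2d_list(mat, i, j + 1)
    node.down = make_2d_list(mat, i + 1, j)
    return node

def rows(head):
    # walk down the first column, collecting each row via right pointers
    out = []
    down = head
    while down:
        cur, row = down, []
        while cur:
            row.append(cur.data)
            cur = cur.right
        out.append(row)
        down = down.down
    return out

head = make_2d_list([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
print(rows(head))   # [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
```

Note that this naive recursion rebuilds interior nodes reachable by more than one right/down path, so it is fine for small matrices but exponential in general.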
Let us see the following implementation to get better understanding −
#include <bits/stdc++.h>
using namespace std;
class TreeNode {
public:
int data;
TreeNode *right, *down;
TreeNode(int d){
data = d;
right = down = NULL;
}
};
void show_2d_list(TreeNode* head) {
TreeNode *right_ptr, *down_ptr = head;
while (down_ptr) {
right_ptr = down_ptr;
while (right_ptr) {
cout << right_ptr->data << " ";
right_ptr = right_ptr->right;
}
cout << endl;
down_ptr = down_ptr->down;
}
}
TreeNode* make_2d_list(int mat[][3], int i, int j, int m, int n) {
if (i > n - 1 || j > m - 1)
return NULL;
TreeNode* temp = new TreeNode(mat[i][j]);
temp->right = make_2d_list(mat, i, j + 1, m, n);
temp->down = make_2d_list(mat, i + 1, j, m, n);
return temp;
}
int main() {
int m = 3, n = 3;
int mat[][3] = {
{ 10, 20, 30 },
{ 40, 50, 60 },
{ 70, 80, 90 } };
TreeNode* head = make_2d_list(mat, 0, 0, m, n);
show_2d_list(head);
}
Input:
{ { 10, 20, 30 },
  { 40, 50, 60 },
  { 70, 80, 90 } }

Output:
10 20 30
40 50 60
70 80 90
},
{
"code": null,
"e": 2997,
"s": 2946,
"text": "{ { 10, 20, 30 },\n{ 40, 50, 60 },\n{ 70, 80, 90 } }"
},
{
"code": null,
"e": 3024,
"s": 2997,
"text": "10 20 30\n40 50 60\n70 80 90"
}
] |
BabelJS - Transpile ES6 Modules to ES5
|
In this chapter, we will see how to transpile ES6 modules to ES5 using Babel.
Consider a scenario where parts of JavaScript code need to be reused. ES6 comes to your rescue with the concept of Modules.
A module is nothing more than a chunk of JavaScript code written in a file. The functions or variables in a module are not available for use, unless the module file exports them.
In simpler terms, the modules help you to write the code in your module and expose only those parts of the code that should be accessed by other parts of your code.
Let us consider an example to understand how to use module and how to export it to make use of it in the code.
add.js
var add = (x,y) => {
return x+y;
}
module.exports=add;
multiply.js
var multiply = (x,y) => {
return x*y;
};
module.exports = multiply;
main.js
import add from './add';
import multiply from './multiply'
let a = add(10,20);
let b = multiply(40,10);
console.log("%c"+a,"font-size:30px;color:green;");
console.log("%c"+b,"font-size:30px;color:green;");
I have three files: add.js, which adds two given numbers; multiply.js, which multiplies two given numbers; and main.js, which calls add and multiply and logs the output to the console.
To use add.js and multiply.js in main.js, we have to export them first as shown below −
module.exports = add;
module.exports = multiply;
To use them in main.js, we need to import them as shown below
import add from './add';
import multiply from './multiply'
We need a module bundler to build the files, so that we can execute them in the browser.
We can do that −
Using Webpack
Using Gulp
In this section, we will see what ES6 modules are. We will also learn how to use webpack.
Before we start, we need to install the following packages −
npm install --save-dev webpack
npm install --save-dev webpack-dev-server
npm install --save-dev babel-core
npm install --save-dev babel-loader
npm install --save-dev babel-preset-env
We have added pack and publish tasks to the scripts section of package.json so that they can be run using npm. Here is the webpack.config.js file which will build the final file.
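The pack and publish scripts themselves are not shown in the original; in package.json they would presumably look something like this (the script bodies are an assumption inferred from the packages installed above):

```json
{
   "scripts": {
      "pack": "webpack",
      "publish": "webpack-dev-server --open"
   }
}
```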
var path = require('path');
module.exports = {
entry: {
app: './src/main.js'
},
output: {
path: path.resolve(__dirname, 'dev'),
filename: 'main_bundle.js'
},
mode:'development',
module: {
rules: [
{
test: /\.js$/,
include: path.resolve(__dirname, 'src'),
loader: 'babel-loader',
query: {
presets: ['env']
}
}
]
}
};
Run the command npm run pack to build the files. The final file will be stored in the dev/ folder.
npm run pack
A single bundle file, dev/main_bundle.js, is created; it combines add.js, multiply.js and main.js into one output.
/******/ (function(modules) { // webpackBootstrap
/******/ // The module cache
/******/ var installedModules = {};
/******/
/******/ // The require function
/******/ function __webpack_require__(moduleId) {
/******/
/******/ // Check if module is in cache
/******/ if(installedModules[moduleId]) {
/******/ return installedModules[moduleId].exports;
/******/ }
/******/ // Create a new module (and put it into the cache)
/******/ var module = installedModules[moduleId] = {
/******/ i: moduleId,
/******/ l: false,
/******/ exports: {}
/******/ };
/******/
/******/ // Execute the module function
/******/ modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);
/******/
/******/ // Flag the module as loaded
/******/ module.l = true;
/******/
/******/ // Return the exports of the module
/******/ return module.exports;
/******/ }
/******/
/******/
/******/ // expose the modules object (__webpack_modules__)
/******/ __webpack_require__.m = modules;
/******/
/******/ // expose the module cache
/******/ __webpack_require__.c = installedModules;
/******/
/******/ // define getter function for harmony exports
/******/ __webpack_require__.d = function(exports, name, getter) {
/******/ if(!__webpack_require__.o(exports, name)) {
/******/ Object.defineProperty(exports, name, { enumerable: true, get: getter });
/******/ }
/******/ };
/******/
/******/ // define __esModule on exports
/******/ __webpack_require__.r = function(exports) {
/******/ if(typeof Symbol !== 'undefined' && Symbol.toStringTag) {
/******/ Object.defineProperty(exports, Symbol.toStringTag, { value: 'Module' });
/******/ }
/******/ Object.defineProperty(exports, '__esModule', { value: true });
/******/ };
/******/
/******/ // create a fake namespace object
/******/ // mode & 1: value is a module id, require it
/******/ // mode & 2: merge all properties of value into the ns
/******/ // mode & 4: return value when already ns object
/******/ // mode & 8|1: behave like require
/******/ __webpack_require__.t = function(value, mode) {
/******/ if(mode & 1) value = __webpack_require__(value);
/******/ if(mode & 8) return value;
/******/ if((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;
/******/ var ns = Object.create(null);
/******/ __webpack_require__.r(ns);
/******/ Object.defineProperty(ns, 'default', { enumerable: true, value: value });
/******/ if(mode & 2 && typeof value != 'string')
for(var key in value) __webpack_require__.d(ns, key, function(key) { return value[key]; }.bind(null, key));
/******/ return ns;
/******/ };
/******/
/******/ // getDefaultExport function for compatibility with non-harmony modules
/******/ __webpack_require__.n = function(module) {
/******/ var getter = module && module.__esModule ?
/******/ function getDefault() { return module['default']; } :
/******/ function getModuleExports() { return module; };
/******/ __webpack_require__.d(getter, 'a', getter);
/******/ return getter;
/******/ };
/******/
/******/ // Object.prototype.hasOwnProperty.call
/******/ __webpack_require__.o = function(object, property) {
return Object.prototype.hasOwnProperty.call(object, property);
};
/******/
/******/ // __webpack_public_path__
/******/ __webpack_require__.p = "";
/******/
/******/
/******/ // Load entry module and return exports
/******/ return __webpack_require__(__webpack_require__.s = "./src/main.js");
/******/ })
/************************************************************************/
/******/ ({
/***/ "./src/add.js":
/*!********************!*\
!*** ./src/add.js ***!
\********************/
/*! no static exports found */
/***/ (function(module, exports, __webpack_require__) {
"use strict";
eval(
"\n\nvar add = function add(x, y) {\n return x + y;\n};
\n\nmodule.exports = add;
\n\n//# sourceURL = webpack:///./src/add.js?"
);
/***/ }),
/***/ "./src/main.js":
/*!*********************!*\
!*** ./src/main.js ***!
\*********************/
/*! no static exports found */
/***/ (function(module, exports, __webpack_require__) {
"use strict";
eval(
"\n\nvar _add = __webpack_require__(/*! ./add */ \"./src/add.js\");
\n\nvar _add2 = _interopRequireDefault(_add);
\n\nvar _multiply = __webpack_require__(/*! ./multiply */ \"./src/multiply.js\");
\n\nvar _multiply2 = _interopRequireDefault(_multiply);
\n\nfunction _interopRequireDefault(obj) {
      return obj && obj.__esModule ? obj : { default: obj };
}
\n\nvar a = (0, _add2.default)(10, 20);
\nvar b = (0, _multiply2.default)(40, 10);
\n\nconsole.log(\"%c\" + a, \"font-size:30px;color:green;\");
\nconsole.log(\"%c\" + b, \"font-size:30px;color:green;\");
\n\n//# sourceURL = webpack:///./src/main.js?"
);
/***/ }),
/***/ "./src/multiply.js":
/*!*************************!*\
!*** ./src/multiply.js ***!
\*************************/
/*! no static exports found */
/***/ (function(module, exports, __webpack_require__) {
"use strict";
eval(
"\n\nvar multiply = function multiply(x, y) {\n return x * y;\n};
\n\nmodule.exports = multiply;
\n\n//# sourceURL = webpack:///./src/multiply.js?"
);
/***/ })
/******/ });
Following is the command to test the output in browser −
npm run publish
Add index.html to your project. It loads dev/main_bundle.js.
<html>
<head></head>
<body>
<script type="text/javascript" src="dev/main_bundle.js"></script>
</body>
</html>
To use Gulp to bundle the modules into one file, we will use browserify and babelify. First, we will create the project setup and install the required packages.
npm init
Before we start with the project setup, we need to install the following packages −
npm install --save-dev gulp
npm install --save-dev babelify
npm install --save-dev browserify
npm install --save-dev babel-preset-env
npm install --save-dev babel-core
npm install --save-dev gulp-connect
npm install --save-dev vinyl-buffer
npm install --save-dev vinyl-source-stream
Let us now create the gulpfile.js, which will help run the task to bundle the modules together. We will use the same files used above with webpack.
add.js
var add = (x,y) => {
return x+y;
}
module.exports=add;
multiply.js
var multiply = (x,y) => {
return x*y;
};
module.exports = multiply;
main.js
import add from './add';
import multiply from './multiply'
let a = add(10,20);
let b = multiply(40,10);
console.log("%c"+a,"font-size:30px;color:green;");
console.log("%c"+b,"font-size:30px;color:green;");
The gulpfile.js is created here. We use browserify with a babelify transform; babel-preset-env is used to transpile the code to ES5.
Gulpfile.js
const gulp = require('gulp');
const babelify = require('babelify');
const browserify = require('browserify');
const connect = require("gulp-connect");
const source = require('vinyl-source-stream');
const buffer = require('vinyl-buffer');
gulp.task('build', () => {
browserify('src/main.js')
.transform('babelify', {
presets: ['env']
})
.bundle()
.pipe(source('main.js'))
.pipe(buffer())
.pipe(gulp.dest('dev/'));
});
gulp.task('default', ['build'], () => {
   gulp.watch('src/main.js', ['build'])
});
gulp.task('watch', () => {
gulp.watch('./*.js', ['build']);
});
gulp.task("connect", function () {
connect.server({
root: ".",
livereload: true
});
});
gulp.task('start', ['build', 'watch', 'connect']);
We use browserify and babelify to take care of the module export and import and combine the same to one file as follows −
gulp.task('build', () => {
browserify('src/main.js')
.transform('babelify', {
presets: ['env']
})
.bundle()
.pipe(source('main.js'))
.pipe(buffer())
.pipe(gulp.dest('dev/'));
});
We have used the transform step, in which babelify is called with the env preset.
The src folder with the main.js is given to browserify and saved in the dev folder.
We need to run the start task to compile the files −
npm start
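The npm start command works only if package.json maps the start script to the gulp task; a presumed scripts entry (an assumption, not shown in the original):

```json
{
   "scripts": {
      "start": "gulp start"
   }
}
```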
Here is the final file created in the dev/ folder −
(function() {
function r(e,n,t) {
function o(i,f) {
if(!n[i]) {
if(!e[i]) {
var c = "function"==typeof require&&require;
if(!f&&c)return c(i,!0);if(u)return u(i,!0);
var a = new Error("Cannot find module '"+i+"'");
throw a.code = "MODULE_NOT_FOUND",a
}
var p = n[i] = {exports:{}};
e[i][0].call(
p.exports,function(r) {
var n = e[i][1][r];
return o(n||r)
}
,p,p.exports,r,e,n,t)
}
return n[i].exports
}
      for(var u="function"==typeof require&&require,i = 0;i<t.length;i++)o(t[i]);return o
}
return r
})()
({1:[function(require,module,exports) {
"use strict";
var add = function add(x, y) {
return x + y;
};
module.exports = add;
},{}],2:[function(require,module,exports) {
'use strict';
var _add = require('./add');
var _add2 = _interopRequireDefault(_add);
var _multiply = require('./multiply');
var _multiply2 = _interopRequireDefault(_multiply);
function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; }
var a = (0, _add2.default)(10, 20);
var b = (0, _multiply2.default)(40, 10);
console.log("%c" + a, "font-size:30px;color:green;");
console.log("%c" + b, "font-size:30px;color:green;");
},
{"./add":1,"./multiply":3}],3:[function(require,module,exports) {
"use strict";
var multiply = function multiply(x, y) {
return x * y;
};
module.exports = multiply;
},{}]},{},[2]);
We will reference the bundle in index.html and run it in the browser to get the output −
<html>
<head></head>
<body>
<h1>Modules using Gulp</h1>
<script type="text/javascript" src="dev/main.js"></script>
</body>
</html>
|
[
{
"code": null,
"e": 2173,
"s": 2095,
"text": "In this chapter, we will see how to transpile ES6 modules to ES5 using Babel."
},
{
"code": null,
"e": 2297,
"s": 2173,
"text": "Consider a scenario where parts of JavaScript code need to be reused. ES6 comes to your rescue with the concept of Modules."
},
{
"code": null,
"e": 2476,
"s": 2297,
"text": "A module is nothing more than a chunk of JavaScript code written in a file. The functions or variables in a module are not available for use, unless the module file exports them."
},
{
"code": null,
"e": 2641,
"s": 2476,
"text": "In simpler terms, the modules help you to write the code in your module and expose only those parts of the code that should be accessed by other parts of your code."
},
{
"code": null,
"e": 2752,
"s": 2641,
"text": "Let us consider an example to understand how to use module and how to export it to make use of it in the code."
},
{
"code": null,
"e": 2759,
"s": 2752,
"text": "add.js"
},
{
"code": null,
"e": 2818,
"s": 2759,
"text": "var add = (x,y) => {\n return x+y;\n}\n\nmodule.exports=add;"
},
{
"code": null,
"e": 2830,
"s": 2818,
"text": "multiply.js"
},
{
"code": null,
"e": 2902,
"s": 2830,
"text": "var multiply = (x,y) => {\n return x*y;\n};\n\nmodule.exports = multiply;"
},
{
"code": null,
"e": 2910,
"s": 2902,
"text": "main.js"
},
{
"code": null,
"e": 3118,
"s": 2910,
"text": "import add from './add';\nimport multiply from './multiply'\n\nlet a = add(10,20);\nlet b = multiply(40,10);\n\nconsole.log(\"%c\"+a,\"font-size:30px;color:green;\");\nconsole.log(\"%c\"+b,\"font-size:30px;color:green;\");"
},
{
"code": null,
"e": 3285,
"s": 3118,
"text": "I have three files add.js that adds 2 given numbers, multiply.js that multiplies two given numbers and main.js, which calls add and multiply, and consoles the output."
},
{
"code": null,
"e": 3372,
"s": 3285,
"text": "To give add.js and multiply.js in main.js, we have to export it first as shown below −"
},
{
"code": null,
"e": 3421,
"s": 3372,
"text": "module.exports = add;\nmodule.exports = multiply;"
},
{
"code": null,
"e": 3483,
"s": 3421,
"text": "To use them in main.js, we need to import them as shown below"
},
{
"code": null,
"e": 3542,
"s": 3483,
"text": "import add from './add';\nimport multiply from './multiply'"
},
{
"code": null,
"e": 3629,
"s": 3542,
"text": "We need module bundler to build the files, so that we can execute them in the browser."
},
{
"code": null,
"e": 3646,
"s": 3629,
"text": "We can do that −"
},
{
"code": null,
"e": 3660,
"s": 3646,
"text": "Using Webpack"
},
{
"code": null,
"e": 3671,
"s": 3660,
"text": "Using Gulp"
},
{
"code": null,
"e": 3765,
"s": 3671,
"text": "In this section, we will see what the ES6 modules are. We will also learn how to use webpack."
},
{
"code": null,
"e": 3826,
"s": 3765,
"text": "Before we start, we need to install the following packages −"
},
{
"code": null,
"e": 4010,
"s": 3826,
"text": "npm install --save-dev webpack\nnpm install --save-dev webpack-dev-server\nnpm install --save-dev babel-core\nnpm install --save-dev babel-loader\nnpm install --save-dev babel-preset-env\n"
},
{
"code": null,
"e": 4149,
"s": 4010,
"text": "We have added pack and publish tasks to scripts to run them using npm. Here is the webpack.config.js file which will build the final file."
},
{
"code": null,
"e": 4610,
"s": 4149,
"text": "var path = require('path');\n\nmodule.exports = {\n entry: {\n app: './src/main.js'\n },\n output: {\n path: path.resolve(__dirname, 'dev'),\n filename: 'main_bundle.js'\n },\n mode:'development',\n module: {\n rules: [\n {\n test: /\\.js$/,\n include: path.resolve(__dirname, 'src'),\n loader: 'babel-loader',\n query: {\n presets: ['env']\n }\n }\n ]\n }\n};"
},
{
"code": null,
"e": 4709,
"s": 4610,
"text": "Run the command npm run pack to build the files. The final file will be stored in the dev/ folder."
},
{
"code": null,
"e": 4723,
"s": 4709,
"text": "npm run pack\n"
},
{
"code": null,
"e": 4854,
"s": 4723,
"text": "dev/main_bundle.js common file is created. This file combines add.js, multiply.js and main.js and stores it in dev/main_bundle.js."
},
{
"code": null,
"e": 10471,
"s": 4854,
"text": "/******/ (function(modules) { // webpackBootstrap\n/******/ // The module cache\n/******/ var installedModules = {};\n/******/\n/******/ // The require function\n/******/ function __webpack_require__(moduleId) {\n/******/\n/******/ // Check if module is in cache\n/******/ if(installedModules[moduleId]) {\n/******/ return installedModules[moduleId].exports;\n/******/ }\n/******/ // Create a new module (and put it into the cache)\n/******/ var module = installedModules[moduleId] = {\n/******/ i: moduleId,\n/******/ l: false,\n/******/ exports: {}\n/******/ };\n/******/\n/******/ // Execute the module function\n/******/ modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);\n/******/\n/******/ // Flag the module as loaded\n/******/ module.l = true;\n/******/\n/******/ // Return the exports of the module\n/******/ return module.exports;\n/******/ }\n/******/\n/******/\n/******/ // expose the modules object (__webpack_modules__)\n/******/ __webpack_require__.m = modules;\n/******/\n/******/ // expose the module cache\n/******/ __webpack_require__.c = installedModules;\n/******/\n/******/ // define getter function for harmony exports\n/******/ __webpack_require__.d = function(exports, name, getter) {\n/******/ if(!__webpack_require__.o(exports, name)) {\n/******/ Object.defineProperty(exports, name, { enumerable: true, get: getter });\n/******/ }\n/******/ };\n/******/\n/******/ // define __esModule on exports\n/******/ __webpack_require__.r = function(exports) {\n/******/ if(typeof Symbol !== 'undefined' && Symbol.toStringTag) {\n/******/ Object.defineProperty(exports, Symbol.toStringTag, { value: 'Module' });\n/******/ }\n/******/ Object.defineProperty(exports, '__esModule', { value: true });\n/******/ };\n/******/\n/******/ // create a fake namespace object\n/******/ // mode & 1: value is a module id, require it\n/******/ // mode & 2: merge all properties of value into the ns\n/******/ // mode & 4: return value when already ns 
object\n/******/ // mode & 8|1: behave like require\n/******/ __webpack_require__.t = function(value, mode) {\n/******/ if(mode & 1) value = __webpack_require__(value);\n/******/ if(mode & 8) return value;\n/******/ if((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;\n/******/ var ns = Object.create(null);\n/******/ __webpack_require__.r(ns);\n/******/ Object.defineProperty(ns, 'default', { enumerable: true, value: value });\n/******/ if(mode & 2 && typeof value != 'string')\n for(var key in value) __webpack_require__.d(ns, key, function(key) { return value[key]; }.bind(null, key));\n/******/ return ns;\n/******/ };\n/******/\n/******/ // getDefaultExport function for compatibility with non-harmony modules\n/******/ __webpack_require__.n = function(module) {\n/******/ var getter = module && module.__esModule ?\n/******/ function getDefault() { return module['default']; } :\n/******/ function getModuleExports() { return module; };\n/******/ __webpack_require__.d(getter, 'a', getter);\n/******/ return getter;\n/******/ };\n/******/\n/******/ // Object.prototype.hasOwnProperty.call\n/******/ __webpack_require__.o = function(object, property) {\n return Object.prototype.hasOwnProperty.call(object, property); \n };\n/******/\n/******/ // __webpack_public_path__\n/******/ __webpack_require__.p = \"\";\n/******/\n/******/\n/******/ // Load entry module and return exports\n/******/ return __webpack_require__(__webpack_require__.s = \"./src/main.js\");\n/******/ })\n/************************************************************************/\n/******/ ({\n/***/ \"./src/add.js\":\n/*!********************!*\\\n!*** ./src/add.js ***!\n\\********************/\n/*! 
no static exports found */\n/***/ (function(module, exports, __webpack_require__) {\n \"use strict\";\n\n eval(\n \"\\n\\nvar add = function add(x, y) {\\n return x + y;\\n};\n \\n\\nmodule.exports = add;\n \\n\\n//# sourceURL = webpack:///./src/add.js?\"\n );\n /***/ }),\n/***/ \"./src/main.js\":\n/*!*********************!*\\\n!*** ./src/main.js ***!\n\\*********************/\n/*! no static exports found */\n/***/ (function(module, exports, __webpack_require__) {\n\n \"use strict\";\n eval(\n \"\\n\\nvar _add = __webpack_require__(/*! ./add */ \\\"./src/add.js\\\");\n \\n\\nvar _add2 = _interopRequireDefault(_add);\n \\n\\nvar _multiply = __webpack_require__(/*! ./multiply */ \\\"./src/multiply.js\\\");\n \\n\\nvar _multiply2 = _interopRequireDefault(_multiply);\n \\n\\nfunction _interopRequireDefault(obj) {\n return obj >> obj.__esModule ? obj : { default: obj };\n }\n \\n\\nvar a = (0, _add2.default)(10, 20);\n \\nvar b = (0, _multiply2.default)(40, 10);\n \\n\\nconsole.log(\\\"%c\\\" + a, \\\"font-size:30px;color:green;\\\");\n \\nconsole.log(\\\"%c\\\" + b, \\\"font-size:30px;color:green;\\\");\n \\n\\n//# sourceURL = webpack:///./src/main.js?\"\n );\n\n/***/ }),\n\n/***/ \"./src/multiply.js\":\n/*!*************************!*\\\n !*** ./src/multiply.js ***!\n \\*************************/\n/*! no static exports found */\n/***/ (function(module, exports, __webpack_require__) {\n\n\"use strict\";\neval(\n \"\\n\\nvar multiply = function multiply(x, y) {\\n return x * y;\\n};\n \\n\\nmodule.exports = multiply;\n \\n\\n//# sourceURL = webpack:///./src/multiply.js?\"\n);\n\n/***/ })\n\n/******/ });"
},
{
"code": null,
"e": 10528,
"s": 10471,
"text": "Following is the command to test the output in browser −"
},
{
"code": null,
"e": 10545,
"s": 10528,
"text": "npm run publish\n"
},
{
"code": null,
"e": 10608,
"s": 10545,
"text": "Add index.html in your project. This calls dev/main_bundle.js."
},
{
"code": null,
"e": 10733,
"s": 10608,
"text": "<html>\n <head></head>\n <body>\n <script type=\"text/javascript\" src=\"dev/main_bundle.js\"></script>\n </body>\n</html>"
},
{
"code": null,
"e": 10890,
"s": 10733,
"text": "To use Gulp to bundle the modules into one file, we will use browserify and babelify. First, we will create project setup and install the required packages."
},
{
"code": null,
"e": 10900,
"s": 10890,
"text": "npm init\n"
},
{
"code": null,
"e": 10984,
"s": 10900,
"text": "Before we start with the project setup, we need to install the following packages −"
},
{
"code": null,
"e": 11268,
"s": 10984,
"text": "npm install --save-dev gulp\nnpm install --save-dev babelify\nnpm install --save-dev browserify\nnpm install --save-dev babel-preset-env\nnpm install --save-dev babel-core\nnpm install --save-dev gulp-connect\nnpm install --save-dev vinyl-buffer\nnpm install --save-dev vinyl-source-stream\n"
},
{
"code": null,
"e": 11416,
"s": 11268,
"text": "Let us now create the gulpfile.js, which will help run the task to bundle the modules together. We will use the same files used above with webpack."
},
{
"code": null,
"e": 11423,
"s": 11416,
"text": "add.js"
},
{
"code": null,
"e": 11482,
"s": 11423,
"text": "var add = (x,y) => {\n return x+y;\n}\n\nmodule.exports=add;"
},
{
"code": null,
"e": 11494,
"s": 11482,
"text": "multiply.js"
},
{
"code": null,
"e": 11566,
"s": 11494,
"text": "var multiply = (x,y) => {\n return x*y;\n};\n\nmodule.exports = multiply;"
},
{
"code": null,
"e": 11574,
"s": 11566,
"text": "main.js"
},
{
"code": null,
"e": 11782,
"s": 11574,
"text": "import add from './add';\nimport multiply from './multiply'\n\nlet a = add(10,20);\nlet b = multiply(40,10);\n\nconsole.log(\"%c\"+a,\"font-size:30px;color:green;\");\nconsole.log(\"%c\"+b,\"font-size:30px;color:green;\");"
},
{
"code": null,
"e": 11923,
"s": 11782,
"text": "The gulpfile.js is created here. A user will browserfiy and use tranform to babelify. babel-preset-env is used to transpile the code to es5."
},
{
"code": null,
"e": 11935,
"s": 11923,
"text": "Gulpfile.js"
},
{
"code": null,
"e": 12684,
"s": 11935,
"text": "const gulp = require('gulp');\nconst babelify = require('babelify');\nconst browserify = require('browserify');\nconst connect = require(\"gulp-connect\");\nconst source = require('vinyl-source-stream');\nconst buffer = require('vinyl-buffer');\n\ngulp.task('build', () => {\n browserify('src/main.js')\n .transform('babelify', {\n presets: ['env']\n })\n .bundle()\n .pipe(source('main.js'))\n .pipe(buffer())\n .pipe(gulp.dest('dev/'));\n});\ngulp.task('default', ['es6'],() => {\n gulp.watch('src/app.js',['es6'])\n});\n\ngulp.task('watch', () => {\n gulp.watch('./*.js', ['build']);\n});\n\ngulp.task(\"connect\", function () {\n connect.server({\n root: \".\",\n livereload: true\n });\n});\n\ngulp.task('start', ['build', 'watch', 'connect']);"
},
{
"code": null,
"e": 12806,
"s": 12684,
"text": "We use browserify and babelify to take care of the module export and import and combine the same to one file as follows −"
},
{
"code": null,
"e": 13012,
"s": 12806,
"text": "gulp.task('build', () => {\n browserify('src/main.js')\n .transform('babelify', {\n presets: ['env']\n })\n .bundle()\n .pipe(source('main.js'))\n .pipe(buffer())\n .pipe(gulp.dest('dev/'));\n});"
},
{
"code": null,
"e": 13085,
"s": 13012,
"text": "We have used transform in which babelify is called with the presets env."
},
{
"code": null,
"e": 13169,
"s": 13085,
"text": "The src folder with the main.js is given to browserify and saved in the dev folder."
},
{
"code": null,
"e": 13229,
"s": 13169,
"text": "We need to run the command gulp start to compile the file −"
},
{
"code": null,
"e": 13240,
"s": 13229,
"text": "npm start\n"
},
{
"code": null,
"e": 13292,
"s": 13240,
"text": "Here is the final file created in the dev/ folder −"
},
{
"code": null,
"e": 14917,
"s": 13292,
"text": "(function() {\n function r(e,n,t) {\n function o(i,f) {\n if(!n[i]) {\n if(!e[i]) {\n var c = \"function\"==typeof require&&require;\n if(!f&&c)return c(i,!0);if(u)return u(i,!0);\n var a = new Error(\"Cannot find module '\"+i+\"'\");\n throw a.code = \"MODULE_NOT_FOUND\",a\n }\n var p = n[i] = {exports:{}};\n e[i][0].call(\n p.exports,function(r) {\n var n = e[i][1][r];\n return o(n||r)\n }\n ,p,p.exports,r,e,n,t)\n }\n return n[i].exports\n }\n for(var u=\"function\"==typeof require>>require,i = 0;i<t.length;i++)o(t[i]);return o\n }\n return r\n})()\n({1:[function(require,module,exports) {\n \"use strict\";\n\n var add = function add(x, y) {\n return x + y;\n };\n\n module.exports = add;\n},{}],2:[function(require,module,exports) {\n 'use strict';\n\n var _add = require('./add');\n var _add2 = _interopRequireDefault(_add);\n var _multiply = require('./multiply');\n var _multiply2 = _interopRequireDefault(_multiply);\n function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; }\n var a = (0, _add2.default)(10, 20);\n var b = (0, _multiply2.default)(40, 10);\n\n console.log(\"%c\" + a, \"font-size:30px;color:green;\");\n console.log(\"%c\" + b, \"font-size:30px;color:green;\");\n},\n{\"./add\":1,\"./multiply\":3}],3:[function(require,module,exports) {\n \"use strict\";\n\n var multiply = function multiply(x, y) {\n return x * y;\n };\n\n module.exports = multiply;\n\n},{}]},{},[2]);"
},
{
"code": null,
"e": 15004,
"s": 14917,
"text": "We will use the same in index.html and run the same in the browser to get the output −"
},
{
"code": null,
"e": 15156,
"s": 15004,
"text": "<html>\n <head></head>\n <body>\n <h1>Modules using Gulp</h1>\n <script type=\"text/javascript\" src=\"dev/main.js\"></script>\n </body>\n</html>"
},
{
"code": null,
"e": 15163,
"s": 15156,
"text": " Print"
},
{
"code": null,
"e": 15174,
"s": 15163,
"text": " Add Notes"
}
] |
ReactJS Data Binding - GeeksforGeeks
|
21 May, 2021
Data Binding is the process of connecting the view element or user interface, with the data which populates it.
In ReactJS, components are rendered to the user interface and the component’s logic contains the data to be displayed in the view(UI). The connection between the data to be displayed in the view and the component’s logic is called data binding in ReactJS.
One-way Data Binding: ReactJS uses one-way data binding. In one-way data binding, the data flows in one of the following directions:
Component to View: Any change in component data would get reflected in the view.
View to Component: Any change in View would get reflected in the component’s data.
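The component-to-view direction can be sketched without React at all: the view is a pure function of the component's state, so any state change is reflected on the next render. A framework-free illustration (names are mine, not React APIs):

```javascript
// Component-to-view: the view is a pure function of component state.
const state = { subject: "ReactJS" };

function render(state) {
  return `<p><b>${state.subject}</b> Tutorial</p>`;
}

console.log(render(state)); // <p><b>ReactJS</b> Tutorial</p>

// Changing the state and re-rendering updates the view.
state.subject = "BabelJS";
console.log(render(state)); // <p><b>BabelJS</b> Tutorial</p>
```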
In order to demonstrate the code examples, we have to create a basic React application using the following steps.
Creating React Application:
Step 1: Create a React application using the following command:
npx create-react-app foldername
Step 2: After creating your project folder i.e. foldername, move to it using the following command:
cd foldername
Project Structure: It will look like the following.
Project Structure
Write down the following code in the App.js file. Here, App is our default component where we have written code.
App.js
import React, { Component } from 'react';

class App extends Component {
  constructor() {
    super();
    this.state = {
      subject: "ReactJS"
    };
  }
  render() {
    return (
      <div style={{ textAlign: "center" }}>
        <h4 style={{ color: "#68cf48" }}>GeeksForGeeks</h4>
        <p><b>{this.state.subject}</b> Tutorial</p>
      </div>
    )
  }
}

export default App;
Step to Run Application: Run the application using the following command from the root directory of the project:
npm start
Output: Now open your browser and go to http://localhost:3000/, you will see the following output:
Output after starting the React App
Explanation: The component contains a heading, a paragraph, and a state variable subject. The value of this state variable is bound to both the heading and paragraph element and any change in the state variable i.e. subject will reflect in the view part.
We cannot directly apply View to Component data binding in ReactJS, for this we have to add event handlers to the view element.
Write down the following code in the App.js file. Here, App is our default component where we have written code.
App.js
import React, { Component } from 'react';

class App extends Component {
  constructor() {
    super();
    this.state = {
      subject: ""
    };
  }

  handleChange = event => {
    this.setState({ subject: event.target.value })
  }

  render() {
    return (
      <div>
        <h4 style={{ color: "#68cf48" }}>GeeksForGeeks</h4>
        <input placeholder="Enter Subject"
               onChange={this.handleChange}></input>
        <p><b>{this.state.subject}</b> Tutorial</p>
      </div>
    )
  }
}

export default App;
Step to Run Application: Run the application using the following command from the root directory of the project:
npm start
Output: Now open your browser and go to http://localhost:3000/; you will see the following output:
Explanation: The component contains a heading, a paragraph, an input field, and a state variable subject. Here we are using the onChange event when a user enters the value in the input field the change is reflected in the state variable subject, and we can see the changed value in view.
|
[
{
"code": null,
"e": 26452,
"s": 26424,
"text": "\n21 May, 2021"
},
{
"code": null,
"e": 26564,
"s": 26452,
"text": "Data Binding is the process of connecting the view element or user interface, with the data which populates it."
},
{
"code": null,
"e": 26820,
"s": 26564,
"text": "In ReactJS, components are rendered to the user interface and the component’s logic contains the data to be displayed in the view(UI). The connection between the data to be displayed in the view and the component’s logic is called data binding in ReactJS."
},
{
"code": null,
"e": 26950,
"s": 26820,
"text": "One-way Data Binding: ReactJS uses one-way data binding. In one-way data binding one of the following conditions can be followed:"
},
{
"code": null,
"e": 27031,
"s": 26950,
"text": "Component to View: Any change in component data would get reflected in the view."
},
{
"code": null,
"e": 27114,
"s": 27031,
"text": "View to Component: Any change in View would get reflected in the component’s data."
},
{
"code": null,
"e": 27228,
"s": 27114,
"text": "In order to demonstrate the code examples, we have to create a basic React application using the following steps."
},
{
"code": null,
"e": 27256,
"s": 27228,
"text": "Creating React Application:"
},
{
"code": null,
"e": 27351,
"s": 27256,
"text": "Step 1: Create a React application using the following command:npx create-react-app foldername"
},
{
"code": null,
"e": 27415,
"s": 27351,
"text": "Step 1: Create a React application using the following command:"
},
{
"code": null,
"e": 27447,
"s": 27415,
"text": "npx create-react-app foldername"
},
{
"code": null,
"e": 27560,
"s": 27447,
"text": "Step 2: After creating your project folder i.e. foldername, move to it using the following command:cd foldername"
},
{
"code": null,
"e": 27660,
"s": 27560,
"text": "Step 2: After creating your project folder i.e. foldername, move to it using the following command:"
},
{
"code": null,
"e": 27674,
"s": 27660,
"text": "cd foldername"
},
{
"code": null,
"e": 27726,
"s": 27674,
"text": "Project Structure: It will look like the following."
},
{
"code": null,
"e": 27744,
"s": 27726,
"text": "Project Structure"
},
{
"code": null,
"e": 27857,
"s": 27744,
"text": "Write down the following code in the App.js file. Here, App is our default component where we have written code."
},
{
"code": null,
"e": 27864,
"s": 27857,
"text": "App.js"
},
{
"code": "import React, { Component } from 'react'; class App extends Component { constructor() { super(); this.state = { subject: \"ReactJS\" }; } render() { return ( <div style={{ textAlign: \"center\" }}> <h4 style={{ color: \"#68cf48\" }}>GeeksForGeeks</h4> <p><b>{this.state.subject}</b> Tutorial</p> </div> ) }} export default App;",
"e": 28244,
"s": 27864,
"text": null
},
{
"code": null,
"e": 28357,
"s": 28244,
"text": "Step to Run Application: Run the application using the following command from the root directory of the project:"
},
{
"code": null,
"e": 28367,
"s": 28357,
"text": "npm start"
},
{
"code": null,
"e": 28466,
"s": 28367,
"text": "Output: Now open your browser and go to http://localhost:3000/, you will see the following output:"
},
{
"code": null,
"e": 28502,
"s": 28466,
"text": "Output after starting the React App"
},
{
"code": null,
"e": 28757,
"s": 28502,
"text": "Explanation: The component contains a heading, a paragraph, and a state variable subject. The value of this state variable is bound to both the heading and paragraph element and any change in the state variable i.e. subject will reflect in the view part."
},
{
"code": null,
"e": 28885,
"s": 28757,
"text": "We cannot directly apply View to Component data binding in ReactJS, for this we have to add event handlers to the view element."
},
{
"code": null,
"e": 28998,
"s": 28885,
"text": "Write down the following code in the App.js file. Here, App is our default component where we have written code."
},
{
"code": null,
"e": 29005,
"s": 28998,
"text": "App.js"
},
{
"code": "import React, { Component } from 'react'; class App extends Component { constructor() { super(); this.state = { subject: \"\" }; } handleChange = event => { this.setState({ subject: event.target.value }) } render() { return ( <div> <h4 style={{ color: \"#68cf48\" }}>GeeksForGeeks</h4> <input placeholder=\"Enter Subject\" onChange={this.handleChange}></input> <p><b>{this.state.subject}</b> Tutorial</p> </div> ) }} export default App;",
"e": 29522,
"s": 29005,
"text": null
},
{
"code": null,
"e": 29635,
"s": 29522,
"text": "Step to Run Application: Run the application using the following command from the root directory of the project:"
},
{
"code": null,
"e": 29645,
"s": 29635,
"text": "npm start"
},
{
"code": null,
"e": 29744,
"s": 29645,
"text": "Output: Now open your browser and go to http://localhost:3000/, you will see the following output:"
},
{
"code": null,
"e": 30032,
"s": 29744,
"text": "Explanation: The component contains a heading, a paragraph, an input field, and a state variable subject. Here we are using the onChange event when a user enters the value in the input field the change is reflected in the state variable subject, and we can see the changed value in view."
},
{
"code": null,
"e": 30039,
"s": 30032,
"text": "Picked"
},
{
"code": null,
"e": 30054,
"s": 30039,
"text": "ReactJS-Basics"
},
{
"code": null,
"e": 30062,
"s": 30054,
"text": "ReactJS"
},
{
"code": null,
"e": 30079,
"s": 30062,
"text": "Web Technologies"
},
{
"code": null,
"e": 30177,
"s": 30079,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30222,
"s": 30177,
"text": "How to redirect to another page in ReactJS ?"
},
{
"code": null,
"e": 30290,
"s": 30222,
"text": "How to pass data from one component to other component in ReactJS ?"
},
{
"code": null,
"e": 30317,
"s": 30290,
"text": "ReactJS useNavigate() Hook"
},
{
"code": null,
"e": 30338,
"s": 30317,
"text": "ReactJS defaultProps"
},
{
"code": null,
"e": 30376,
"s": 30338,
"text": "Axios in React: A Guide for Beginners"
},
{
"code": null,
"e": 30416,
"s": 30376,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 30449,
"s": 30416,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 30494,
"s": 30449,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 30544,
"s": 30494,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
Python DateTime - Time Class - GeeksforGeeks
|
03 Sep, 2021
Time class represents the local time of the day, independent of any particular day. This class can have a tzinfo object which represents the timezone of the given time. If tzinfo is None, the time object is a naive object; otherwise it is an aware object.
Syntax:
class datetime.time(hour=0, minute=0, second=0, microsecond=0, tzinfo=None, *, fold=0)
All the arguments are optional. tzinfo can be None otherwise all the attributes must be integer in the following range –
0 <= hour < 24
0 <= minute < 60
0 <= second < 60
0 <= microsecond < 1000000
fold in [0, 1]
Example:
Python3
# Python program to
# demonstrate time class

from datetime import time

# calling the constructor
my_time = time(12, 14, 36)

print("Entered time", my_time)

# calling constructor with 1
# argument
my_time = time(minute = 12)
print("\nTime with one argument", my_time)

# Calling constructor with
# 0 arguments
my_time = time()
print("\nTime without argument", my_time)

# Uncommenting time(hour = 26)
# will raise a ValueError as
# it is out of range

# uncommenting time(hour = '23')
# will raise a TypeError as a
# string is passed instead of int
Entered time 12:14:36
Time with one argument 00:12:00
Time without argument 00:00:00
Let’s see the attributes provided by this class –
Example 1: Getting min and max representable time
Python3
from datetime import time

# Getting min time
mintime = time.min
print("Min Time supported", mintime)

# Getting max time
maxtime = time.max
print("Max Time supported", maxtime)
Min Time supported 00:00:00
Max Time supported 23:59:59.999999
Example 2: Accessing the hour, minutes, seconds, and microseconds attribute from the time class
Python3
from datetime import time

# Creating Time object
Time = time(12, 24, 36, 1212)

# Accessing Attributes
print("Hour:", Time.hour)
print("Minutes:", Time.minute)
print("Seconds:", Time.second)
print("Microseconds:", Time.microsecond)
Hour: 12
Minutes: 24
Seconds: 36
Microseconds: 1212
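Since the tzinfo attribute decides whether a time object is naive or aware, the distinction can be made concrete with the standard timezone and timedelta classes. This short sketch is an addition, not from the original article:

```python
from datetime import time, timezone, timedelta

# Without tzinfo the object is naive;
# with tzinfo it is aware.
naive = time(12, 24, 36)
aware = time(12, 24, 36,
             tzinfo=timezone(timedelta(hours=5, minutes=30)))

print(naive.tzinfo)       # None -> naive object
print(aware.tzinfo)       # fixed-offset timezone -> aware object
print(aware.utcoffset())  # offset of the aware time from UTC
```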
Time class provides various functions: we can get a time from a string or convert a time to a string, format the time according to our needs, and so on. The main functions provided by the time class are replace(), isoformat(), fromisoformat(), strftime(), tzname(), utcoffset(), and dst().
Let’s see certain examples of the above functions
Example 1: Converting time object to string and vice versa
Python3
from datetime import time

# Creating Time object
Time = time(12, 24, 36, 1212)

# Converting Time object to string
Str = Time.isoformat()
print("String Representation:", Str)
print(type(Str))

# Converting string back to Time object
Time = time.fromisoformat(Str)
print("\nTime from String", Time)
print(type(Time))
Output
String Representation: 12:24:36.001212
<class 'str'>
Time from String 12:24:36.001212
<class 'datetime.time'>
Example 2: Changing the value of an already created time object and formatting the time
Python3
from datetime import time

# Creating Time object
Time = time(12, 24, 36, 1212)
print("Original time:", Time)

# Replacing hour and second
Time = Time.replace(hour = 13, second = 12)
print("New Time:", Time)

# Formatting Time
Ftime = Time.strftime("%I:%M %p")
print("Formatted time", Ftime)
Original time: 12:24:36.001212
New Time: 13:24:12.001212
Formatted time 01:24 PM
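Besides conversion and formatting, time objects also support rich comparison, so they can be compared and sorted directly. A small illustrative sketch (an addition, not from the original article):

```python
from datetime import time

# time objects are ordered by hour, minute,
# second, and microsecond, in that priority.
t1 = time(9, 30)
t2 = time(14, 45)

print(t1 < t2)  # earlier time compares as smaller

# Sorting a list of time objects uses the same ordering.
times = [t2, t1, time(11, 0)]
print(sorted(times))
```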
Note: For more information on Python Datetime, refer to Python Datetime Tutorial
|
[
{
"code": null,
"e": 25647,
"s": 25619,
"text": "\n03 Sep, 2021"
},
{
"code": null,
"e": 25923,
"s": 25647,
"text": "Time class represents the local time of the day which is independent of any particular day. This class can have the tzinfo object which represents the timezone of the given time. If the tzinfo is None then the time object is the naive object otherwise it is the aware object."
},
{
"code": null,
"e": 25931,
"s": 25923,
"text": "Syntax:"
},
{
"code": null,
"e": 26018,
"s": 25931,
"text": "class datetime.time(hour=0, minute=0, second=0, microsecond=0, tzinfo=None, *, fold=0)"
},
{
"code": null,
"e": 26139,
"s": 26018,
"text": "All the arguments are optional. tzinfo can be None otherwise all the attributes must be integer in the following range –"
},
{
"code": null,
"e": 26154,
"s": 26139,
"text": "0 <= hour < 24"
},
{
"code": null,
"e": 26171,
"s": 26154,
"text": "0 <= minute < 60"
},
{
"code": null,
"e": 26188,
"s": 26171,
"text": "0 <= second < 60"
},
{
"code": null,
"e": 26215,
"s": 26188,
"text": "0 <= microsecond < 1000000"
},
{
"code": null,
"e": 26230,
"s": 26215,
"text": "fold in [0, 1]"
},
{
"code": null,
"e": 26239,
"s": 26230,
"text": "Example:"
},
{
"code": null,
"e": 26247,
"s": 26239,
"text": "Python3"
},
{
"code": "# Python program to# demonstrate time class from datetime import time # calling the constructormy_time = time(12, 14, 36) print(\"Entered time\", my_time) # calling constructor with 1# argumentmy_time = time(minute = 12)print(\"\\nTime with one argument\", my_time) # Calling constructor with# 0 argumentmy_time = time()print(\"\\nTime without argument\", my_time) # Uncommenting time(hour = 26)# will rase an ValueError as# it is out of range # uncommenting time(hour ='23')# will raise TypeError as# string is passed instead of int",
"e": 26773,
"s": 26247,
"text": null
},
{
"code": null,
"e": 26860,
"s": 26773,
"text": "Entered time 12:14:36\n\nTime with one argument 00:12:00\n\nTime without argument 00:00:00"
},
{
"code": null,
"e": 26911,
"s": 26860,
"text": "Let’s see the attributes provided by this class – "
},
{
"code": null,
"e": 26961,
"s": 26911,
"text": "Example 1: Getting min and max representable time"
},
{
"code": null,
"e": 26969,
"s": 26961,
"text": "Python3"
},
{
"code": "from datetime import time # Getting min timemintime = time.minprint(\"Min Time supported\", mintime) # Getting max timemaxtime = time.maxprint(\"Max Time supported\", maxtime)",
"e": 27141,
"s": 26969,
"text": null
},
{
"code": null,
"e": 27204,
"s": 27141,
"text": "Min Time supported 00:00:00\nMax Time supported 23:59:59.999999"
},
{
"code": null,
"e": 27301,
"s": 27204,
"text": "Example 2: Accessing the hour, minutes, seconds, and microseconds attribute from the time class "
},
{
"code": null,
"e": 27309,
"s": 27301,
"text": "Python3"
},
{
"code": "from datetime import time # Creating Time objectTime = time(12,24,36,1212) # Accessing Attributesprint(\"Hour:\", Time.hour)print(\"Minutes:\", Time.minute)print(\"Seconds:\", Time.second)print(\"Microseconds:\", Time.microsecond)",
"e": 27532,
"s": 27309,
"text": null
},
{
"code": null,
"e": 27584,
"s": 27532,
"text": "Hour: 12\nMinutes: 24\nSeconds: 36\nMicroseconds: 1212"
},
{
"code": null,
"e": 27793,
"s": 27584,
"text": "Time class provides various functions like we can get time from string or convert time to string, format the time according to our need, etc. Let’s see a list of all the functions provided by the time class. "
},
{
"code": null,
"e": 27843,
"s": 27793,
"text": "Let’s see certain examples of the above functions"
},
{
"code": null,
"e": 27903,
"s": 27843,
"text": "Example 1: Converting time object to string and vice versa "
},
{
"code": null,
"e": 27911,
"s": 27903,
"text": "Python3"
},
{
"code": "from datetime import time # Creating Time objectTime = time(12,24,36,1212) # Converting Time object to stringStr = Time.isoformat()print(\"String Representation:\", Str)print(type(Str)) Time = \"12:24:36.001212\" # Converting string to Time objectTime = time.fromisoformat(Str)print(\"\\nTime from String\", Time)print(type(Time))",
"e": 28235,
"s": 27911,
"text": null
},
{
"code": null,
"e": 28243,
"s": 28235,
"text": "Output "
},
{
"code": null,
"e": 28354,
"s": 28243,
"text": "String Representation: 12:24:36.001212\n<class 'str'>\n\nTime from String 12:24:36.001212\n<class 'datetime.time'>"
},
{
"code": null,
"e": 28443,
"s": 28354,
"text": "Example 2: Changing the value of an already created time object and formatting the time "
},
{
"code": null,
"e": 28451,
"s": 28443,
"text": "Python3"
},
{
"code": "from datetime import time # Creating Time objectTime = time(12,24,36,1212)print(\"Original time:\", Time) # Replacing hourTime = Time.replace(hour = 13, second = 12)print(\"New Time:\", Time) # Formatting TimeFtime = Time.strftime(\"%I:%M %p\")print(\"Formatted time\", Ftime)",
"e": 28720,
"s": 28451,
"text": null
},
{
"code": null,
"e": 28801,
"s": 28720,
"text": "Original time: 12:24:36.001212\nNew Time: 13:24:12.001212\nFormatted time 01:24 PM"
},
{
"code": null,
"e": 28882,
"s": 28801,
"text": "Note: For more information on Python Datetime, refer to Python Datetime Tutorial"
},
{
"code": null,
"e": 28904,
"s": 28884,
"text": "abhishek0719kadiyan"
},
{
"code": null,
"e": 28920,
"s": 28904,
"text": "Python-datetime"
},
{
"code": null,
"e": 28927,
"s": 28920,
"text": "Python"
},
{
"code": null,
"e": 29025,
"s": 28927,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29057,
"s": 29025,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 29099,
"s": 29057,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 29141,
"s": 29099,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 29197,
"s": 29141,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 29224,
"s": 29197,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 29255,
"s": 29224,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 29284,
"s": 29255,
"text": "Create a directory in Python"
},
{
"code": null,
"e": 29306,
"s": 29284,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 29342,
"s": 29306,
"text": "Python | Pandas dataframe.groupby()"
}
] |
Finding Quadrant of a Coordinate with respect to a Circle - GeeksforGeeks
|
13 May, 2021
Given the radius and the coordinates of the centre of a circle, find the quadrant in which another given coordinate (X, Y) lies with respect to the centre of the circle if the point lies inside the circle; otherwise print an error “Lies outside the circle”. If the point lies at the centre of the circle, output 0; if the point lies on one of the axes and inside the circle, output the next quadrant in the anticlockwise direction.
Examples:
Input : Centre = (0, 0), Radius = 10 (X, Y) = (10, 10) Output : Lies Outside the Circle
Input : Centre = (0, 3), Radius = 2 (X, Y) = (1, 4) Output : 1 (I quadrant)
Approach: Let the centre be (x′, y′). The equation of the circle is (x − x′)² + (y − y′)² = r² … (Eq. 1). According to this equation, if (x − x′)² + (y − y′)² − r² > 0, the point (x, y) lies outside the circle; if it is equal to 0, the point lies on the circle; and if it is less than 0, the point lies inside the circle. To check the position of a point with respect to the circle:
1. Put given coordinates in equation 1.
2. If it is greater than 0 coordinate lies outside circle.
3. If point lies inside circle find the quadrant within the circle. Check the point
with respect to centre of circle.
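For the second sample input above (centre (0, 3), radius 2, point (1, 4)), the three steps can be traced by hand. The sketch below mirrors that arithmetic; the variable names are illustrative only:

```python
# Worked check for centre (0, 3), radius 2, point (1, 4)
x_c, y_c, r = 0, 3, 2
px, py = 1, 4

# Steps 1-2: plug the point into Eq. 1
val = (px - x_c) ** 2 + (py - y_c) ** 2   # 1 + 1 = 2
print(val, "vs r^2 =", r * r)             # 2 <= 4, so the point is inside

# Step 3: locate the quadrant relative to the centre
inside = val <= r * r
quadrant = 1 if (px > x_c and py >= y_c) else None
print("inside:", inside, "quadrant:", quadrant)
```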
Below is the implementation of above idea :
C++
Java
Python3
C#
PHP
Javascript
// CPP Program to find the quadrant of
// a given coordinate with respect to the
// centre of a circle
#include <bits/stdc++.h>
using namespace std;

// This function returns the quadrant number
int getQuadrant(int X, int Y, int R, int PX, int PY)
{
    // Coincides with centre
    if (PX == X && PY == Y)
        return 0;

    int val = pow((PX - X), 2) + pow((PY - Y), 2);

    // Outside circle
    if (val > pow(R, 2))
        return -1;

    // 1st quadrant
    if (PX > X && PY >= Y)
        return 1;

    // 2nd quadrant
    if (PX <= X && PY > Y)
        return 2;

    // 3rd quadrant
    if (PX < X && PY <= Y)
        return 3;

    // 4th quadrant
    if (PX >= X && PY < Y)
        return 4;
}

// Driver Code
int main()
{
    // Coordinates of centre
    int X = 0, Y = 3;

    // Radius of circle
    int R = 2;

    // Coordinates of the given point
    int PX = 1, PY = 4;

    int ans = getQuadrant(X, Y, R, PX, PY);
    if (ans == -1)
        cout << "Lies Outside the circle" << endl;
    else if (ans == 0)
        cout << "Coincides with centre" << endl;
    else
        cout << ans << " Quadrant" << endl;
    return 0;
}
// Java Program to find the quadrant of
// a given coordinate with respect to the
// centre of a circle
import java.io.*;

class GFG {

    // This function returns
    // the quadrant number
    static int getQuadrant(int X, int Y, int R, int PX, int PY)
    {
        // Coincides with centre
        if (PX == X && PY == Y)
            return 0;

        int val = (int)Math.pow((PX - X), 2) +
                  (int)Math.pow((PY - Y), 2);

        // Outside circle
        if (val > Math.pow(R, 2))
            return -1;

        // 1st quadrant
        if (PX > X && PY >= Y)
            return 1;

        // 2nd quadrant
        if (PX <= X && PY > Y)
            return 2;

        // 3rd quadrant
        if (PX < X && PY <= Y)
            return 3;

        // 4th quadrant
        if (PX >= X && PY < Y)
            return 4;

        return 0;
    }

    // Driver Code
    public static void main(String[] args)
    {
        // Coordinates of centre
        int X = 0, Y = 3;

        // Radius of circle
        int R = 2;

        // Coordinates of the given point
        int PX = 1, PY = 4;

        int ans = getQuadrant(X, Y, R, PX, PY);
        if (ans == -1)
            System.out.println("Lies Outside the circle");
        else if (ans == 0)
            System.out.println("Coincides with centre");
        else
            System.out.println(ans + " Quadrant");
    }
}

// This code is contributed by anuj_67.
# Python3 Program to find the
# quadrant of a given coordinate
# w.r.t. the centre of a circle
import math

# This function returns the
# quadrant number
def getQuadrant(X, Y, R, PX, PY):

    # Coincides with centre
    if (PX == X and PY == Y):
        return 0;

    val = (math.pow((PX - X), 2) +
           math.pow((PY - Y), 2));

    # Outside circle
    if (val > pow(R, 2)):
        return -1;

    # 1st quadrant
    if (PX > X and PY >= Y):
        return 1;

    # 2nd quadrant
    if (PX <= X and PY > Y):
        return 2;

    # 3rd quadrant
    if (PX < X and PY <= Y):
        return 3;

    # 4th quadrant
    if (PX >= X and PY < Y):
        return 4;

# Driver Code
# Coordinates of centre
X = 0;
Y = 3;

# Radius of circle
R = 2;

# Coordinates of the given point
PX = 1;
PY = 4;

ans = getQuadrant(X, Y, R, PX, PY);
if (ans == -1):
    print("Lies Outside the circle");
elif (ans == 0):
    print("Coincides with centre");
else:
    print(ans, "Quadrant");

# This code is contributed by mits
// C# Program to find the quadrant of
// a given coordinate with respect to
// the centre of a circle
using System;

class GFG {

    // This function returns
    // the quadrant number
    static int getQuadrant(int X, int Y, int R, int PX, int PY)
    {
        // Coincides with centre
        if (PX == X && PY == Y)
            return 0;

        int val = (int)Math.Pow((PX - X), 2) +
                  (int)Math.Pow((PY - Y), 2);

        // Outside circle
        if (val > Math.Pow(R, 2))
            return -1;

        // 1st quadrant
        if (PX > X && PY >= Y)
            return 1;

        // 2nd quadrant
        if (PX <= X && PY > Y)
            return 2;

        // 3rd quadrant
        if (PX < X && PY <= Y)
            return 3;

        // 4th quadrant
        if (PX >= X && PY < Y)
            return 4;

        return 0;
    }

    // Driver Code
    public static void Main()
    {
        // Coordinates of centre
        int X = 0, Y = 3;

        // Radius of circle
        int R = 2;

        // Coordinates of the given point
        int PX = 1, PY = 4;

        int ans = getQuadrant(X, Y, R, PX, PY);
        if (ans == -1)
            Console.WriteLine("Lies Outside the circle");
        else if (ans == 0)
            Console.WriteLine("Coincides with centre");
        else
            Console.WriteLine(ans + " Quadrant");
    }
}

// This code is contributed by anuj_67.
<?php
// PHP Program to find the quadrant of
// a given coordinate with respect to the
// centre of a circle

// This function returns the
// quadrant number
function getQuadrant($X, $Y, $R, $PX, $PY)
{
    // Coincides with centre
    if ($PX == $X and $PY == $Y)
        return 0;

    $val = pow(($PX - $X), 2) +
           pow(($PY - $Y), 2);

    // Outside circle
    if ($val > pow($R, 2))
        return -1;

    // 1st quadrant
    if ($PX > $X and $PY >= $Y)
        return 1;

    // 2nd quadrant
    if ($PX <= $X and $PY > $Y)
        return 2;

    // 3rd quadrant
    if ($PX < $X and $PY <= $Y)
        return 3;

    // 4th quadrant
    if ($PX >= $X and $PY < $Y)
        return 4;
}

// Driver Code

// Coordinates of centre
$X = 0;
$Y = 3;

// Radius of circle
$R = 2;

// Coordinates of the given point
$PX = 1;
$PY = 4;

$ans = getQuadrant($X, $Y, $R, $PX, $PY);

if ($ans == -1)
    echo "Lies Outside the circle";
else if ($ans == 0)
    echo "Coincides with centre";
else
    echo $ans, " Quadrant";

// This code is contributed by anuj_67.
?>
<script>
// Javascript Program to find the quadrant of
// a given coordinate with respect to the
// centre of a circle

// This function returns the quadrant number
function getQuadrant(X, Y, R, PX, PY)
{
    // Coincides with centre
    if (PX == X && PY == Y)
        return 0;

    let val = Math.pow((PX - X), 2) +
              Math.pow((PY - Y), 2);

    // Outside circle
    if (val > Math.pow(R, 2))
        return -1;

    // 1st quadrant
    if (PX > X && PY >= Y)
        return 1;

    // 2nd quadrant
    if (PX <= X && PY > Y)
        return 2;

    // 3rd quadrant
    if (PX < X && PY <= Y)
        return 3;

    // 4th quadrant
    if (PX >= X && PY < Y)
        return 4;
}

// Driver Code

// Coordinates of centre
let X = 0, Y = 3;

// Radius of circle
let R = 2;

// Coordinates of the given point
let PX = 1, PY = 4;

let ans = getQuadrant(X, Y, R, PX, PY);
if (ans == -1)
    document.write("Lies Outside the circle" + "</br>");
else if (ans == 0)
    document.write("Coincides with centre" + "</br>");
else
    document.write(ans + " Quadrant" + "</br>");
</script>
Output:
1 Quadrant
|
[
{
"code": null,
"e": 25904,
"s": 25876,
"text": "\n13 May, 2021"
},
{
"code": null,
"e": 26317,
"s": 25904,
"text": "Given the radius and coordinates of the Centre of a circle. Find the quadrant in which another given coordinate (X, Y) lies with respect to the Centre of the circle if the point lies inside the circle. Else print an error “Lies outside the circle”. If the point lies at the Centre of circle output 0 or if the point lies on any of the axes and inside the circle output the next quadrant in anti-clock direction. "
},
{
"code": null,
"e": 26328,
"s": 26317,
"text": "Examples: "
},
{
"code": null,
"e": 26418,
"s": 26328,
"text": "Input : Centre = (0, 0), Radius = 10 (X, Y) = (10, 10) Output : Lies Outside the Circle "
},
{
"code": null,
"e": 26496,
"s": 26418,
"text": "Input : Centre = (0, 3), Radius = 2 (X, Y) = (1, 4) Output : 1 (I quadrant) "
},
{
"code": null,
"e": 26754,
"s": 26496,
"text": "Approach: Let center be (x’, y’) Equation of circle is – (Eq. 1) According to this equation, If point (x, y) lies outside of circle If point (x, y) lies on the circle If point (x, y) lies inside of circleTo check position of point with respect to circle:- "
},
{
"code": null,
"e": 26976,
"s": 26754,
"text": "1. Put given coordinates in equation 1.\n2. If it is greater than 0 coordinate lies outside circle.\n3. If point lies inside circle find the quadrant within the circle. Check the point \n with respect to centre of circle. "
},
{
"code": null,
"e": 27022,
"s": 26976,
"text": "Below is the implementation of above idea : "
},
{
"code": null,
"e": 27026,
"s": 27022,
"text": "C++"
},
{
"code": null,
"e": 27031,
"s": 27026,
"text": "Java"
},
{
"code": null,
"e": 27039,
"s": 27031,
"text": "Python3"
},
{
"code": null,
"e": 27042,
"s": 27039,
"text": "C#"
},
{
"code": null,
"e": 27046,
"s": 27042,
"text": "PHP"
},
{
"code": null,
"e": 27057,
"s": 27046,
"text": "Javascript"
},
{
"code": "// CPP Program to find the quadrant of// a given coordinate with respect to the// centre of a circle#include <bits/stdc++.h> using namespace std; // Thus function returns the quadrant numberint getQuadrant(int X, int Y, int R, int PX, int PY){ // Coincides with center if (PX == X && PY == Y) return 0; int val = pow((PX - X), 2) + pow((PY - Y), 2); // Outside circle if (val > pow(R, 2)) return -1; // 1st quadrant if (PX > X && PY >= Y) return 1; // 2nd quadrant if (PX <= X && PY > Y) return 2; // 3rd quadrant if (PX < X && PY <= Y) return 3; // 4th quadrant if (PX >= X && PY < Y) return 4;} // Driver Codeint main(){ // Coordinates of centre int X = 0, Y = 3; // Radius of circle int R = 2; // Coordinates of the given point int PX = 1, PY = 4; int ans = getQuadrant(X, Y, R, PX, PY); if (ans == -1) cout << \"Lies Outside the circle\" << endl; else if (ans == 0) cout << \"Coincides with centre\" << endl; else cout << ans << \" Quadrant\" << endl; return 0;}",
"e": 28162,
"s": 27057,
"text": null
},
{
"code": "// Java Program to find the quadrant of// a given coordinate with respect to the// centre of a circleimport java.io.*;class GFG { // Thus function returns// the quadrant numberstatic int getQuadrant(int X, int Y, int R, int PX, int PY){ // Coincides with center if (PX == X && PY == Y) return 0; int val = (int)Math.pow((PX - X), 2) + (int)Math.pow((PY - Y), 2); // Outside circle if (val > Math.pow(R, 2)) return -1; // 1st quadrant if (PX > X && PY >= Y) return 1; // 2nd quadrant if (PX <= X && PY > Y) return 2; // 3rd quadrant if (PX < X && PY <= Y) return 3; // 4th quadrant if (PX >= X && PY < Y) return 4; return 0;} // Driver Code public static void main (String[] args) { // Coordinates of centre int X = 0, Y = 3; // Radius of circle int R = 2; // Coordinates of the given point int PX = 1, PY = 4; int ans = getQuadrant(X, Y, R, PX, PY); if (ans == -1) System.out.println( \"Lies Outside the circle\"); else if (ans == 0) System.out.println( \"Coincides with centre\"); else System.out.println( ans +\" Quadrant\"); }} // This code is contributed by anuj_67.",
"e": 29523,
"s": 28162,
"text": null
},
{
"code": "# Python3 Program to find the# quadrant of a given coordinate# w.rt. the centre of a circleimport math # Thus function returns the# quadrant numberdef getQuadrant(X, Y, R, PX, PY): # Coincides with center if (PX == X and PY == Y): return 0; val = (math.pow((PX - X), 2) + math.pow((PY - Y), 2)); # Outside circle if (val > pow(R, 2)): return -1; # 1st quadrant if (PX > X and PY >= Y): return 1; # 2nd quadrant if (PX <= X and PY > Y): return 2; # 3rd quadrant if (PX < X and PY <= Y): return 3; # 4th quadrant if (PX >= X and PY < Y): return 4; # Driver Code# Coordinates of centreX = 0;Y = 3; # Radius of circleR = 2; # Coordinates of the given poPX = 1;PY = 4; ans = getQuadrant(X, Y, R, PX, PY);if (ans == -1) : print(\"Lies Outside the circle\");elif (ans == 0) : print(\"Coincides with centre\");else:print(ans, \"Quadrant\"); # This code is contributed by mits",
"e": 30487,
"s": 29523,
"text": null
},
{
"code": "// C# Program to find the quadrant of// a given coordinate with respect to// the centre of a circleusing System; class GFG { // Thus function returns // the quadrant number static int getQuadrant(int X, int Y, int R, int PX, int PY) { // Coincides with center if (PX == X && PY == Y) return 0; int val = (int)Math.Pow((PX - X), 2) + (int)Math.Pow((PY - Y), 2); // Outside circle if (val > Math.Pow(R, 2)) return -1; // 1st quadrant if (PX > X && PY >= Y) return 1; // 2nd quadrant if (PX <= X && PY > Y) return 2; // 3rd quadrant if (PX < X && PY <= Y) return 3; // 4th quadrant if (PX >= X && PY < Y) return 4; return 0; } // Driver Code public static void Main () { // Coordinates of centre int X = 0, Y = 3; // Radius of circle int R = 2; // Coordinates of the given point int PX = 1, PY = 4; int ans = getQuadrant(X, Y, R, PX, PY); if (ans == -1) Console.WriteLine( \"Lies Outside\" + \" the circle\"); else if (ans == 0) Console.WriteLine( \"Coincides \" + \"with centre\"); else Console.WriteLine( ans + \" Quadrant\"); }} // This code is contributed by anuj_67.",
"e": 32030,
"s": 30487,
"text": null
},
{
"code": "<?php// PHP Program to find the quadrant of// a given coordinate with respect to the// centre of a circle // Thus function returns the// quadrant numberfunction getQuadrant($X, $Y, $R, $PX, $PY){ // Coincides with center if ($PX == $X and $PY == $Y) return 0; $val = pow(($PX - $X), 2) + pow(($PY - $Y), 2); // Outside circle if ($val > pow($R, 2)) return -1; // 1st quadrant if ($PX > $X and $PY >= $Y) return 1; // 2nd quadrant if ($PX <= $X and $PY > $Y) return 2; // 3rd quadrant if ($PX < $X and $PY <= $Y) return 3; // 4th quadrant if ($PX >= $X and $PY < $Y) return 4;} // Driver Code // Coordinates of centre $X = 0; $Y = 3; // Radius of circle $R = 2; // Coordinates of the given po$ $PX = 1; $PY = 4; $ans = getQuadrant($X, $Y, $R, $PX, $PY); if ($ans == -1) echo \"Lies Outside the circle\" ; else if ($ans == 0) echo \"Coincides with centre\" ; else echo $ans , \" Quadrant\" ; // This code is contributed by anuj_67.?>",
"e": 33164,
"s": 32030,
"text": null
},
{
"code": "<script> // Javascript Program to find the quadrant of// a given coordinate with respect to the// centre of a circle // Thus function returns the quadrant numberfunction getQuadrant( X, Y, R, PX, PY){ // Coincides with center if (PX == X && PY == Y) return 0; let val = Math.pow((PX - X), 2) + Math.pow((PY - Y), 2); // Outside circle if (val > Math.pow(R, 2)) return -1; // 1st quadrant if (PX > X && PY >= Y) return 1; // 2nd quadrant if (PX <= X && PY > Y) return 2; // 3rd quadrant if (PX < X && PY <= Y) return 3; // 4th quadrant if (PX >= X && PY < Y) return 4;} // Driver Code // Coordinates of centrelet X = 0, Y = 3; // Radius of circlelet R = 2; // Coordinates of the given pointlet PX = 1, PY = 4; let ans = getQuadrant(X, Y, R, PX, PY);if (ans == -1) document.write(\"Lies Outside the circle\" + \"</br>\");else if (ans == 0) document.write(\"Coincides with centre\" + \"</br>\");else document.write(ans + \" Quadrant\" + \"</br>\"); </script>",
"e": 34206,
"s": 33164,
"text": null
},
{
"code": null,
"e": 34216,
"s": 34206,
"text": "Output: "
},
{
"code": null,
"e": 34227,
"s": 34216,
"text": "1 Quadrant"
},
{
"code": null,
"e": 34234,
"s": 34229,
"text": "vt_m"
},
{
"code": null,
"e": 34247,
"s": 34234,
"text": "Mithun Kumar"
},
{
"code": null,
"e": 34261,
"s": 34247,
"text": "jana_sayantan"
},
{
"code": null,
"e": 34268,
"s": 34261,
"text": "circle"
},
{
"code": null,
"e": 34278,
"s": 34268,
"text": "Geometric"
},
{
"code": null,
"e": 34291,
"s": 34278,
"text": "Mathematical"
},
{
"code": null,
"e": 34304,
"s": 34291,
"text": "Mathematical"
},
{
"code": null,
"e": 34314,
"s": 34304,
"text": "Geometric"
  }
] |
Accessing an array returned by a function in JavaScript
|
Following is the code for accessing an array returned by a function in JavaScript −
Live Demo
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<style>
body {
font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
}
.result {
font-size: 20px;
font-weight: 500;
color: blueviolet;
}
</style>
</head>
<body>
<h1>Accessing an array returned by a function in JavaScript</h1>
<div class="result"></div>
<br />
<button class="Btn">CLICK HERE</button>
<h3>Click on the above button to return an array from retTable() function and access it</h3>
<script>
let resEle = document.querySelector(".result");
let BtnEle = document.querySelector(".Btn");
function retTable(num) {
let tempNum = [];
for (i = 1; i <= 10; i++) {
tempNum.push(i * num);
}
return tempNum;
}
BtnEle.addEventListener("click", () => {
let tableArr = retTable(5);
resEle.innerHTML = "tableArr = " + tableArr;
});
</script>
</body>
</html>
On clicking the ‘CLICK HERE’ button −
|
[
{
"code": null,
"e": 1146,
"s": 1062,
"text": "Following is the code for accessing an array returned by a function in JavaScript −"
},
{
"code": null,
"e": 1157,
"s": 1146,
"text": " Live Demo"
},
{
"code": null,
"e": 2179,
"s": 1157,
"text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\" />\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n<title>Document</title>\n<style>\n body {\n font-family: \"Segoe UI\", Tahoma, Geneva, Verdana, sans-serif;\n }\n .result {\n font-size: 20px;\n font-weight: 500;\n color: blueviolet;\n }\n</style>\n</head>\n<body>\n<h1>Accessing an array returned by a function in JavaScript</h1>\n<div class=\"result\"></div>\n<br />\n<button class=\"Btn\">CLICK HERE</button>\n<h3>Click on the above button to return an array from retTable() function and access it</h3>\n<script>\n let resEle = document.querySelector(\".result\");\n let BtnEle = document.querySelector(\".Btn\");\n function retTable(num) {\n let tempNum = [];\n for (i = 1; i <= 10; i++) {\n tempNum.push(i * num);\n }\n return tempNum;\n }\n BtnEle.addEventListener(\"click\", () => {\n let tableArr = retTable(5);\n resEle.innerHTML = \"tableArr = \" + tableArr;\n });\n</script>\n</body>\n</html>"
},
{
"code": null,
"e": 2217,
"s": 2179,
"text": "On clicking the ‘CLICK HERE’ button −"
}
] |
Natural Language Processing - Quick Guide
|
Language is a method of communication with the help of which we can speak, read and write. For example, we think, we make decisions, plans and more in natural language; precisely, in words. However, the big question that confronts us in this AI era is whether we can communicate with computers in a similar manner. In other words, can human beings communicate with computers in their natural language? It is a challenge for us to develop NLP applications because computers need structured data, but human speech is unstructured and often ambiguous in nature.
In this sense, we can say that Natural Language Processing (NLP) is the sub-field of Computer Science especially Artificial Intelligence (AI) that is concerned about enabling computers to understand and process human language. Technically, the main task of NLP would be to program computers for analyzing and processing huge amount of natural language data.
We have divided the history of NLP into four phases. The phases have distinctive concerns and styles.
The work done in this phase focused mainly on machine translation (MT). This phase was a period of enthusiasm and optimism.
Let us now see all that the first phase had in it −
The research on NLP started in the early 1950s after Booth & Richens’ investigation and Weaver’s memorandum on machine translation in 1949.
1954 was the year when a limited experiment on automatic translation from Russian to English was demonstrated in the Georgetown-IBM experiment.
In the same year, the publication of the journal MT (Machine Translation) started.
The first international conference on Machine Translation (MT) was held in 1952 and the second was held in 1956.
In 1961, the work presented in the Teddington International Conference on Machine Translation of Languages and Applied Language analysis was the high point of this phase.
In this phase, the work done was majorly related to world knowledge and on its role in the construction and manipulation of meaning representations. That is why, this phase is also called AI-flavored phase.
The phase had in it, the following −
In early 1961, the work began on the problems of addressing and constructing data or knowledge bases. This work was influenced by AI.
In the same year, a BASEBALL question-answering system was also developed. The input to this system was restricted and the language processing involved was a simple one.
A much more advanced system was described in Minsky (1968). Compared to the BASEBALL question-answering system, this system recognized and provided for the need of inference on the knowledge base in interpreting and responding to language input.
This phase can be described as the grammatico-logical phase. Due to the failure of practical system building in last phase, the researchers moved towards the use of logic for knowledge representation and reasoning in AI.
The third phase had the following in it −
The grammatico-logical approach, towards the end of decade, helped us with powerful general-purpose sentence processors like SRI’s Core Language Engine and Discourse Representation Theory, which offered a means of tackling more extended discourse.
In this phase we got some practical resources & tools like parsers, e.g. Alvey Natural Language Tools along with more operational and commercial systems, e.g. for database query.
The work on lexicon in 1980s also pointed in the direction of grammatico-logical approach.
We can describe this as a lexical & corpus phase. The phase had a lexicalized approach to grammar that appeared in late 1980s and became an increasing influence. There was a revolution in natural language processing in this decade with the introduction of machine learning algorithms for language processing.
Language is a crucial component for human lives and also the most fundamental aspect of our behavior. We can experience it in mainly two forms - written and spoken. In the written form, it is a way to pass our knowledge from one generation to the next. In the spoken form, it is the primary medium for human beings to coordinate with each other in their day-to-day behavior. Language is studied in various academic disciplines. Each discipline comes with its own set of problems and a set of solution to address those.
Consider the following table to understand this −
Linguists
Problems: How phrases and sentences can be formed with words? What curbs the possible meaning for a sentence?
Tools: Intuitions about well-formedness and meaning. Mathematical models of structure, for example, model theoretic semantics and formal language theory.
Psycholinguists
Problems: How human beings can identify the structure of sentences? How the meaning of words can be identified? When does understanding take place?
Tools: Experimental techniques mainly for measuring the performance of human beings. Statistical analysis of observations.
Philosophers
Problems: How do words and sentences acquire meaning? How are objects identified by words? What is meaning?
Tools: Natural language argumentation by using intuition. Mathematical models like logic and model theory.
Computational Linguists
Problems: How can we identify the structure of a sentence? How can knowledge and reasoning be modeled? How can we use language to accomplish specific tasks?
Tools: Algorithms and data structures. Formal models of representation and reasoning. AI techniques like search and representation methods.
Ambiguity, generally used in natural language processing, can be referred to as the capability of being understood in more than one way. Natural language is very ambiguous. NLP has the following types of ambiguities −
The ambiguity of a single word is called lexical ambiguity. For example, treating the word silver as a noun, an adjective, or a verb.
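As a toy illustration of this idea, the sketch below checks whether a word carries more than one sense; the sense inventory here is an invented assumption for demonstration, not a real lexicon:

```python
# Toy sense inventory -- invented for illustration, not a real lexicon
senses = {
    'silver': ['noun: the metal', 'adjective: silver-coloured',
               'verb: to coat with silver'],
    'bank': ['noun: financial institution', 'noun: edge of a river'],
    'steep': ['adjective: sharply sloped'],
}

def is_lexically_ambiguous(word):
    """A word is lexically ambiguous if it carries more than one sense."""
    return len(senses.get(word, [])) > 1

print(is_lexically_ambiguous('silver'))  # True
print(is_lexically_ambiguous('steep'))   # False
```

A real system would consult a lexical database rather than a hand-made dictionary, but the underlying test is the same: more than one recorded sense means the word is lexically ambiguous.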
This kind of ambiguity occurs when a sentence is parsed in different ways. For example, the sentence “The man saw the girl with the telescope”. It is ambiguous whether the man saw the girl carrying a telescope or he saw her through his telescope.
This kind of ambiguity occurs when the meaning of the words themselves can be misinterpreted. In other words, semantic ambiguity happens when a sentence contains an ambiguous word or phrase. For example, the sentence “The car hit the pole while it was moving” is having semantic ambiguity because the interpretations can be “The car, while moving, hit the pole” and “The car hit the pole while the pole was moving”.
This kind of ambiguity arises due to the use of anaphora entities in discourse. For example, the horse ran up the hill. It was very steep. It soon got tired. Here, the anaphoric reference of “it” in the two situations causes ambiguity.
Such kind of ambiguity refers to the situation where the context of a phrase gives it multiple interpretations. In simple words, we can say that pragmatic ambiguity arises when the statement is not specific. For example, the sentence “I like you too” can have multiple interpretations like I like you (just like you like me), I like you (just like someone else does).
Following diagram shows the phases or logical steps in natural language processing −
It is the first phase of NLP. The purpose of this phase is to break chunks of language input into sets of tokens corresponding to paragraphs, sentences and words. For example, a word like “uneasy” can be broken into two sub-word tokens as “un-easy”.
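A minimal sketch of this phase, using only Python's standard `re` module (the splitting rules below are simplifying assumptions — a real morphological analyzer does much more):

```python
import re

def tokenize(text):
    """Break raw text into sentences, then each sentence into word tokens."""
    # Split on whitespace that follows sentence-final punctuation
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Within each sentence, keep only alphabetic word tokens
    return [re.findall(r"[A-Za-z']+", s) for s in sentences]

tokens = tokenize("The school is big. The boy goes to school.")
print(tokens)
# [['The', 'school', 'is', 'big'], ['The', 'boy', 'goes', 'to', 'school']]
```

Sub-word splitting such as “uneasy” → “un-easy” would additionally require an affix inventory, which is beyond this sketch.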
It is the second phase of NLP. The purpose of this phase is twofold: to check whether a sentence is well formed, and to break it up into a structure that shows the syntactic relationships between the different words. For example, a sentence like “The school goes to the boy” would be rejected by a syntax analyzer or parser.
It is the third phase of NLP. The purpose of this phase is to draw the exact meaning, or you can say the dictionary meaning, from the text. The text is checked for meaningfulness. For example, a semantic analyzer would reject a sentence like “Hot ice-cream”.
It is the fourth phase of NLP. Pragmatic analysis simply fits the actual objects/events, which exist in a given context with object references obtained during the last phase (semantic analysis). For example, the sentence “Put the banana in the basket on the shelf” can have two semantic interpretations and pragmatic analyzer will choose between these two possibilities.
In this chapter, we will learn about the linguistic resources in Natural Language Processing.
A corpus is a large and structured set of machine-readable texts that have been produced in a natural communicative setting. Its plural is corpora. They can be derived in different ways like text that was originally electronic, transcripts of spoken language and optical character recognition, etc.
Language is infinite but a corpus has to be finite in size. For the corpus to be finite in size, we need to sample and proportionally include a wide range of text types to ensure a good corpus design.
Let us now learn about some important elements for corpus design −
Representativeness is a defining feature of corpus design. The following definitions from two great researchers − Leech and Biber, will help us understand corpus representativeness −
According to Leech (1991), “A corpus is thought to be representative of the language variety it is supposed to represent if the findings based on its contents can be generalized to the said language variety”.
According to Biber (1993), “Representativeness refers to the extent to which a sample includes the full range of variability in a population”.
In this way, we can conclude that representativeness of a corpus are determined by the following two factors −
Balance − The range of genres included in a corpus
Sampling − How the chunks for each genre are selected.
Another very important element of corpus design is corpus balance – the range of genre included in a corpus. We have already studied that representativeness of a general corpus depends upon how balanced the corpus is. A balanced corpus covers a wide range of text categories, which are supposed to be representatives of the language. We do not have any reliable scientific measure for balance but the best estimation and intuition works in this concern. In other words, we can say that the accepted balance is determined by its intended uses only.
Another important element of corpus design is sampling. Corpus representativeness and balance are very closely associated with sampling. That is why we can say that sampling is inescapable in corpus building.
According to Biber(1993), “Some of the first considerations in constructing a corpus concern the overall design: for example, the kinds of texts included, the number of texts, the selection of particular texts, the selection of text samples from within texts, and the length of text samples. Each of these involves a sampling decision, either conscious or not.”
While obtaining a representative sample, we need to consider the following −
Sampling unit − It refers to the unit which requires a sample. For example, for written text, a sampling unit may be a newspaper, journal or a book.
Sampling frame − The list of all sampling units is called a sampling frame.
Population − It may be referred to as the assembly of all sampling units. It is defined in terms of language production, language reception or language as a product.
Another important element of corpus design is its size. How large the corpus should be? There is no specific answer to this question. The size of the corpus depends upon the purpose for which it is intended as well as on some practical considerations as follows −
Kind of query anticipated from the user.
The methodology used by the users to study the data.
Availability of the source of data.
With the advancement in technology, the corpus size also increases. The following table of comparison will help you understand how the corpus size works −
In our subsequent sections, we will look at a few examples of corpus.
It may be defined as a linguistically parsed text corpus that annotates syntactic or semantic sentence structure. Geoffrey Leech coined the term ‘treebank’, which represents that the most common way of representing the grammatical analysis is by means of a tree structure. Generally, Treebanks are created on top of a corpus, which has already been annotated with part-of-speech tags.
Semantic and Syntactic Treebanks are the two most common types of Treebanks in linguistics. Let us now learn more about these types −
These Treebanks use a formal representation of sentence’s semantic structure. They vary in the depth of their semantic representation. Robot Commands Treebank, Geoquery, Groningen Meaning Bank, RoboCup Corpus are some of the examples of Semantic Treebanks.
Opposite to the semantic Treebanks, inputs to the Syntactic Treebank systems are expressions of the formal language obtained from the conversion of parsed Treebank data. The outputs of such systems are predicate logic based meaning representation. Various syntactic Treebanks in different languages have been created so far. For example, Penn Arabic Treebank and Columbia Arabic Treebank are syntactic Treebanks created in the Arabic language. The Sinica syntactic Treebank was created in the Chinese language. Lucy, Susanne and BLLIP WSJ are syntactic corpora created in the English language.
Followings are some of the applications of TreeBanks −
If we talk about Computational Linguistics, then the best use of TreeBanks is to engineer state-of-the-art natural language processing systems such as part-of-speech taggers, parsers, semantic analyzers and machine translation systems.
In case of Corpus linguistics, the best use of Treebanks is to study syntactic phenomena.
The best use of Treebanks in theoretical and psycholinguistics is interaction evidence.
PropBank more specifically called “Proposition Bank” is a corpus, which is annotated with verbal propositions and their arguments. The corpus is a verb-oriented resource; the annotations here are more closely related to the syntactic level. Martha Palmer et al., Department of Linguistic, University of Colorado Boulder developed it. We can use the term PropBank as a common noun referring to any corpus that has been annotated with propositions and their arguments.
In Natural Language Processing (NLP), the PropBank project has played a very significant role. It helps in semantic role labeling.
VerbNet(VN) is the hierarchical domain-independent and largest lexical resource present in English that incorporates both semantic as well as syntactic information about its contents. VN is a broad-coverage verb lexicon having mappings to other lexical resources such as WordNet, Xtag and FrameNet. It is organized into verb classes extending Levin classes by refinement and addition of subclasses for achieving syntactic and semantic coherence among class members.
Each VerbNet (VN) class contains −
For depicting the possible surface realizations of the argument structure for constructions such as transitive, intransitive, prepositional phrases, resultatives, and a large set of diathesis alternations.
For constraining the types of thematic roles allowed by the arguments; further restrictions may be imposed. This will help in indicating the syntactic nature of the constituent likely to be associated with the thematic role.
WordNet, created by Princeton is a lexical database for English language. It is the part of the NLTK corpus. In WordNet, nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms called Synsets. All the synsets are linked with the help of conceptual-semantic and lexical relations. Its structure makes it very useful for natural language processing (NLP).
In information systems, WordNet is used for various purposes like word-sense disambiguation, information retrieval, automatic text classification and machine translation. One of the most important uses of WordNet is to find out the similarity among words. For this task, various algorithms have been implemented in various packages like Similarity in Perl, NLTK in Python and ADW in Java.
In this chapter, we will understand word level analysis in Natural Language Processing.
A regular expression (RE) is a language for specifying text search strings. RE helps us to match or find other strings or sets of strings, using a specialized syntax held in a pattern. Regular expressions are used to search texts in UNIX as well as in MS WORD in an identical way. We have various search engines using a number of RE features.
Followings are some of the important properties of RE −
American Mathematician Stephen Cole Kleene formalized the Regular Expression language.
RE is a formula in a special language, which can be used for specifying simple classes of strings, a sequence of symbols. In other words, we can say that RE is an algebraic notation for characterizing a set of strings.
A regular expression requires two things: one is the pattern that we wish to search for, and the other is a corpus of text from which we need to search.
Mathematically, A Regular Expression can be defined as follows −
ε is a Regular Expression, which indicates that the language contains an empty string.
φ is a Regular Expression, which denotes an empty language.
If X and Y are Regular Expressions, then
X, Y
X.Y(Concatenation of XY)
X+Y (Union of X and Y)
X*, Y* (Kleen Closure of X and Y)
are also regular expressions.
If a string is derived from above rules then that would also be a regular expression.
The following table shows a few examples of Regular Expressions −
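Such patterns can also be tried directly with Python's standard `re` module; the pattern/text pairs below are illustrative assumptions chosen for demonstration:

```python
import re

# Illustrative pattern/text pairs (assumptions chosen for demonstration)
examples = [
    (r'ab*',  'abbb'),    # 'a' followed by zero or more 'b'
    (r'ab+',  'a'),       # 'a' followed by one or more 'b' -> no match here
    (r'[ab]', 'cab'),     # a single character: either 'a' or 'b'
    (r'a|b',  'banana'),  # either 'a' or 'b' (union)
]

for pattern, text in examples:
    m = re.search(pattern, text)
    print(f"{pattern!r} on {text!r} ->", m.group() if m else 'no match')
```

The `*` here is exactly the Kleene closure from the formal definition above, `|` is union, and juxtaposition is concatenation.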
It may be defined as the set that represents the value of the regular expression and has specific properties.
If we do the union of two regular sets then the resulting set would also be regular.
If we do the intersection of two regular sets then the resulting set would also be regular.
If we do the complement of regular sets, then the resulting set would also be regular.
If we do the difference of two regular sets, then the resulting set would also be regular.
If we do the reversal of regular sets, then the resulting set would also be regular.
If we take the closure of regular sets, then the resulting set would also be regular.
If we do the concatenation of two regular sets, then the resulting set would also be regular.
The term automata, derived from the Greek word "αὐτόματα" meaning "self-acting", is the plural of automaton which may be defined as an abstract self-propelled computing device that follows a predetermined sequence of operations automatically.
An automaton having a finite number of states is called a Finite Automaton (FA) or Finite State automata (FSA).
Mathematically, an automaton can be represented by a 5-tuple (Q, Σ, δ, q0, F), where −
Q is a finite set of states.
Σ is a finite set of symbols, called the alphabet of the automaton.
δ is the transition function
q0 is the initial state from where any input is processed (q0 ∈ Q).
F is a set of final state/states of Q (F ⊆ Q).
Following points will give us a clear view about the relationship between finite automata, regular grammars and regular expressions −
As we know, finite state automata are the theoretical foundation of computational work and regular expressions are one way of describing them.
We can say that any regular expression can be implemented as FSA and any FSA can be described with a regular expression.
On the other hand, regular expression is a way to characterize a kind of language called regular language. Hence, we can say that regular language can be described with the help of both FSA and regular expression.
Regular grammar, a formal grammar that can be right-regular or left-regular, is another way to characterize regular language.
Following diagram shows that finite automata, regular expressions and regular grammars are the equivalent ways of describing regular languages.
Finite state automata are of two types. Let us see what the types are.
It may be defined as the type of finite automaton wherein, for every input symbol, we can determine the state to which the machine will move. It has a finite number of states; that is why the machine is called a Deterministic Finite Automaton (DFA).
Mathematically, a DFA can be represented by a 5-tuple (Q, Σ, δ, q0, F), where −
Q is a finite set of states.
Q is a finite set of states.
Σ is a finite set of symbols, called the alphabet of the automaton.
δ is the transition function where δ: Q × Σ → Q .
q0 is the initial state from where any input is processed (q0 ∈ Q).
F is a set of final state/states of Q (F ⊆ Q).
Whereas graphically, a DFA can be represented by diagraphs called state diagrams where −
The states are represented by vertices.
The transitions are shown by labeled arcs.
The initial state is represented by an empty incoming arc.
The final state is represented by double circle.
Suppose a DFA is defined as follows −
Q = {a, b, c},
Σ = {0, 1},
q0 = {a},
F = {c},
Transition function δ is shown in the table as follows −
The graphical representation of this DFA would be as follows −
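The 5-tuple above maps naturally onto a small simulator. Since the transition table itself is not reproduced in this text, the table below is an assumed illustration over Q = {a, b, c}, Σ = {0, 1}, q0 = a, F = {c}:

```python
# A minimal simulator for the DFA described above.
# The transition table is an assumed illustration, not the original one.
DELTA = {
    ('a', '0'): 'a', ('a', '1'): 'b',
    ('b', '0'): 'c', ('b', '1'): 'a',
    ('c', '0'): 'b', ('c', '1'): 'c',
}
START, FINAL = 'a', {'c'}

def dfa_accepts(string):
    """Run the DFA over a string of '0'/'1' symbols; accept if it halts in a final state."""
    state = START
    for symbol in string:
        state = DELTA[(state, symbol)]  # deterministic: exactly one next state
    return state in FINAL

print(dfa_accepts('10'))  # a -1-> b -0-> c, a final state: True
print(dfa_accepts('1'))   # a -1-> b, not final: False
```

Determinism is what makes the loop so simple: each (state, symbol) pair has exactly one successor.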
It may be defined as the type of finite automaton where, for every input symbol, we cannot determine the state to which the machine will move, i.e. the machine can move to any combination of the states. It has a finite number of states; that is why the machine is called a Non-deterministic Finite Automaton (NDFA).
Mathematically, NDFA can be represented by a 5-tuple (Q, Σ, δ, q0, F), where −
Q is a finite set of states.
Σ is a finite set of symbols, called the alphabet of the automaton.
δ is the transition function where δ: Q × Σ → 2^Q (the power set of Q).
q0 is the initial state from where any input is processed (q0 ∈ Q).
F is a set of final state/states of Q (F ⊆ Q).
Whereas graphically (same as DFA), a NDFA can be represented by diagraphs called state diagrams where −
The states are represented by vertices.
The transitions are shown by labeled arcs.
The initial state is represented by an empty incoming arc.
The final state is represented by double circle.
Suppose an NDFA is defined as follows −
Q = {a, b, c},
Σ = {0, 1},
q0 = {a},
F = {c},
Transition function δ is shown in the table as follows −
The graphical representation of this NDFA would be as follows −
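Non-determinism can be simulated by tracking the whole set of states the machine could currently be in. As with the DFA sketch, the transition table below is an assumed illustration; note each entry now maps to a set of states:

```python
# A simulator for the NDFA described above. The transition table is an
# assumed illustration; each (state, symbol) pair maps to a SET of states.
DELTA = {
    ('a', '0'): {'a', 'b'}, ('a', '1'): {'b'},
    ('b', '0'): {'c'},      ('b', '1'): {'a', 'c'},
    ('c', '0'): set(),      ('c', '1'): {'c'},
}
START, FINAL = 'a', {'c'}

def nfa_accepts(string):
    """Accept if ANY sequence of non-deterministic choices ends in a final state."""
    states = {START}
    for symbol in string:
        # Union of all successor sets of all currently possible states.
        states = set().union(*(DELTA.get((s, symbol), set()) for s in states))
    return bool(states & FINAL)

print(nfa_accepts('00'))  # {a} -> {a, b} -> {a, b, c}, contains c: True
print(nfa_accepts('1'))   # {a} -> {b}, no final state: False
```

This subset-tracking loop is exactly the idea behind the standard subset construction that converts an NDFA to an equivalent DFA.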
The term morphological parsing is related to the parsing of morphemes. We can define morphological parsing as the problem of recognizing that a word breaks down into smaller meaningful units called morphemes producing some sort of linguistic structure for it. For example, we can break the word foxes into two, fox and -es. We can see that the word foxes, is made up of two morphemes, one is fox and other is -es.
In other words, we can say that morphology is the study of −
The formation of words.
The origin of the words.
Grammatical forms of the words.
Use of prefixes and suffixes in the formation of words.
How parts-of-speech (PoS) of a language are formed.
Morphemes, the smallest meaning-bearing units, can be divided into two types −
Stems
Word Order
It is the core meaningful unit of a word. We can also say that it is the root of the word. For example, in the word foxes, the stem is fox.
Affixes − As the name suggests, they add some additional meaning and grammatical functions to the words. For example, in the word foxes, the affix is − es.
Further, affixes can also be divided into following four types −
Prefixes − As the name suggests, prefixes precede the stem. For example, in the word unbuckle, un is the prefix.
Suffixes − As the name suggests, suffixes follow the stem. For example, in the word cats, -s is the suffix.
Infixes − As the name suggests, infixes are inserted inside the stem. For example, the word cupful, can be pluralized as cupsful by using -s as the infix.
Circumfixes − They precede and follow the stem. There are very few examples of circumfixes in the English language. A very common example is ‘A-ing’, where -A precedes and -ing follows the stem.
The order of the words would be decided by morphological parsing. Let us now see the requirements for building a morphological parser −
The very first requirement for building a morphological parser is lexicon, which includes the list of stems and affixes along with the basic information about them. For example, the information like whether the stem is Noun stem or Verb stem, etc.
It is basically the model of morpheme ordering. In other words, this model explains which classes of morphemes can follow other classes of morphemes inside a word. For example, the morphotactic fact is that the English plural morpheme always follows the noun rather than preceding it.
These spelling rules are used to model the changes occurring in a word. For example, the rule of converting y to ie in word like city+s = cities not citys.
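The three requirements above can be combined into a toy parser: a small lexicon, the morphotactic rule that plural -s/-es follows a noun stem, and the orthographic rule that turns y into ie before -s. The lexicon entries here are illustrative assumptions:

```python
# A toy morphological parser: lexicon + morphotactics + one spelling rule.
# The lexicon below is a hand-made illustrative assumption.
LEXICON = {'fox': 'noun', 'cat': 'noun', 'city': 'noun'}

def parse_noun(word):
    """Return (stem, affix) for a known noun or its plural, else None."""
    if word in LEXICON:
        return word, None
    if word.endswith('ies') and word[:-3] + 'y' in LEXICON:  # spelling rule: cities -> city + -s
        return word[:-3] + 'y', '-s'
    if word.endswith('es') and word[:-2] in LEXICON:         # foxes -> fox + -es
        return word[:-2], '-es'
    if word.endswith('s') and word[:-1] in LEXICON:          # cats -> cat + -s
        return word[:-1], '-s'
    return None

print(parse_noun('foxes'))   # ('fox', '-es')
print(parse_noun('cities'))  # ('city', '-s')
```

Real morphological analyzers implement the same three components as finite-state transducers rather than if-chains, but the division of labour is the same.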
Syntactic analysis or parsing or syntax analysis is the third phase of NLP. The purpose of this phase is to draw exact meaning, or you can say dictionary meaning, from the text. Syntax analysis checks the text for meaningfulness against the rules of formal grammar. For example, a sentence like “hot ice-cream” would be rejected by the semantic analyzer.
In this sense, syntactic analysis or parsing may be defined as the process of analyzing the strings of symbols in natural language conforming to the rules of formal grammar. The origin of the word ‘parsing’ is from Latin word ‘pars’ which means ‘part’.
It is used to implement the task of parsing. It may be defined as the software component designed for taking input data (text) and giving structural representation of the input after checking for correct syntax as per formal grammar. It also builds a data structure generally in the form of parse tree or abstract syntax tree or other hierarchical structure.
The main roles of the parser include −
To report any syntax error.
To recover from commonly occurring error so that the processing of the remainder of program can be continued.
To create parse tree.
To create symbol table.
To produce intermediate representations (IR).
Derivation divides parsing into the followings two types −
Top-down Parsing
Bottom-up Parsing
In this kind of parsing, the parser starts constructing the parse tree from the start symbol and then tries to transform the start symbol to the input. The most common form of top-down parsing uses a recursive procedure to process the input. The main disadvantage of recursive descent parsing is backtracking.
In this kind of parsing, the parser starts with the input symbol and tries to construct the parser tree up to the start symbol.
In order to get the input string, we need a sequence of production rules. Derivation is a set of production rules. During parsing, we need to decide the non-terminal, which is to be replaced along with deciding the production rule with the help of which the non-terminal will be replaced.
In this section, we will learn about the two types of derivations, which can be used to decide which non-terminal to be replaced with production rule −
In the left-most derivation, the sentential form of an input is scanned and replaced from the left to the right. The sentential form in this case is called the left-sentential form.
In the right-most derivation, the sentential form of an input is scanned and replaced from right to left. The sentential form in this case is called the right-sentential form.
It may be defined as the graphical depiction of a derivation. The start symbol of derivation serves as the root of the parse tree. In every parse tree, the leaf nodes are terminals and interior nodes are non-terminals. A property of parse tree is that in-order traversal will produce the original input string.
Grammar is very essential and important to describe the syntactic structure of well-formed programs. In the literary sense, they denote syntactical rules for conversation in natural languages. Linguistics have attempted to define grammars since the inception of natural languages like English, Hindi, etc.
The theory of formal languages is also applicable in the fields of Computer Science mainly in programming languages and data structure. For example, in ‘C’ language, the precise grammar rules state how functions are made from lists and statements.
A mathematical model of grammar was given by Noam Chomsky in 1956, which is effective for writing computer languages.
Mathematically, a grammar G can be formally written as a 4-tuple (N, T, S, P) where −
N or VN = set of non-terminal symbols, i.e., variables.
T or ∑ = set of terminal symbols.
S = Start symbol where S ∈ N
P denotes the Production rules for Terminals as well as Non-terminals. It has the form α → β, where α and β are strings on VN ∪ ∑ and at least one symbol of α belongs to VN.
Phrase structure grammar, introduced by Noam Chomsky, is based on the constituency relation. That is why it is also called constituency grammar. It is opposite to dependency grammar.
Before giving an example of constituency grammar, we need to know the fundamental points about constituency grammar and constituency relation.
All the related frameworks view the sentence structure in terms of constituency relation.
The constituency relation is derived from the subject-predicate division of Latin as well as Greek grammar.
The basic clause structure is understood in terms of noun phrase NP and verb phrase VP.
We can write the sentence “This tree is illustrating the constituency relation” as follows −
It is opposite to the constituency grammar and based on dependency relation. It was introduced by Lucien Tesniere. Dependency grammar (DG) is opposite to the constituency grammar because it lacks phrasal nodes.
Before giving an example of Dependency grammar, we need to know the fundamental points about Dependency grammar and Dependency relation.
In DG, the linguistic units, i.e., words are connected to each other by directed links.
The verb becomes the center of the clause structure.
All other syntactic units are connected to the verb by directed links. These syntactic units are called dependencies.
We can write the sentence “This tree is illustrating the dependency relation” as follows;
Parse tree that uses Constituency grammar is called constituency-based parse tree; and the parse trees that uses dependency grammar is called dependency-based parse tree.
Context free grammar, also called CFG, is a notation for describing languages and a superset of Regular grammar. It can be seen in the following diagram −
CFG consists of finite set of grammar rules with the following four components −
It is denoted by V. The non-terminals are syntactic variables that denote the sets of strings, which further help defining the language, generated by the grammar.
It is also called tokens and defined by Σ. Strings are formed with the basic symbols of terminals.
It is denoted by P. The set defines how the terminals and non-terminals can be combined. Every production (P) consists of non-terminals, an arrow, and terminals (the sequence of terminals). Non-terminals are called the left side of the production and terminals are called the right side of the production.
The production begins from the start symbol. It is denoted by symbol S. Non-terminal symbol is always designated as start symbol.
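The four components can be written down directly as data, with a naive top-down recognizer on top. The toy grammar below is an illustrative assumption built from the constituency example earlier (S → NP VP, etc.):

```python
# A CFG as data: keys are non-terminals (V), values are production rules (P);
# bare strings are terminals (Σ); 'S' is the start symbol.
# The toy grammar is an illustrative assumption.
PRODUCTIONS = {
    'S':  [['NP', 'VP']],
    'NP': [['this', 'tree'], ['the', 'relation']],
    'VP': [['is', 'illustrating', 'NP']],
}

def derives(symbols, words):
    """True if the sequence of grammar symbols can derive exactly `words` (naive top-down search)."""
    if not symbols:
        return not words
    head, rest = symbols[0], symbols[1:]
    if head in PRODUCTIONS:  # non-terminal: try every production rule
        return any(derives(rule + rest, words) for rule in PRODUCTIONS[head])
    # terminal: must match the next input word
    return bool(words) and words[0] == head and derives(rest, words[1:])

print(derives(['S'], 'this tree is illustrating the relation'.split()))  # True
```

This is essentially recursive-descent recognition; production-grade parsers (chart parsers, Earley) avoid its exponential backtracking.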
The purpose of semantic analysis is to draw exact meaning, or you can say dictionary meaning from the text. The work of semantic analyzer is to check the text for meaningfulness.
We already know that lexical analysis also deals with the meaning of the words, so how is semantic analysis different from lexical analysis? Lexical analysis is based on smaller tokens, while semantic analysis focuses on larger chunks. That is why semantic analysis can be divided into the following two parts −
It is the first part of the semantic analysis in which the study of the meaning of individual words is performed. This part is called lexical semantics.
In the second part, the individual words will be combined to provide meaning in sentences.
The most important task of semantic analysis is to get the proper meaning of the sentence. For example, analyze the sentence “Ram is great.” In this sentence, the speaker is talking either about Lord Ram or about a person whose name is Ram. That is why the semantic analyzer's job of getting the proper meaning of the sentence is important.
Followings are some important elements of semantic analysis −
It may be defined as the relationship between a generic term and instances of that generic term. Here the generic term is called hypernym and its instances are called hyponyms. For example, the word color is hypernym and the color blue, yellow etc. are hyponyms.
It may be defined as the words having same spelling or same form but having different and unrelated meaning. For example, the word “Bat” is a homonymy word because bat can be an implement to hit a ball or bat is a nocturnal flying mammal also.
Polysemy is a Greek word, which means “many signs”. It is a word or phrase with different but related sense. In other words, we can say that polysemy has the same spelling but different and related meaning. For example, the word “bank” is a polysemy word having the following meanings −
A financial institution.
The building in which such an institution is located.
A synonym for “to rely on”.
Both polysemy and homonymy words have the same syntax or spelling. The main difference between them is that in polysemy, the meanings of the words are related but in homonymy, the meanings of the words are not related. For example, if we talk about the same word “Bank”, we can write the meaning ‘a financial institution’ or ‘a river bank’. In that case it would be the example of homonym because the meanings are unrelated to each other.
It is the relation between two lexical items having different forms but expressing the same or a close meaning. Examples are ‘author/writer’, ‘fate/destiny’.
It is the relation between two lexical items having symmetry between their semantic components relative to an axis. The scope of antonymy is as follows −
Application of property or not − Example is ‘life/death’, ‘certitude/incertitude’
Application of scalable property − Example is ‘rich/poor’, ‘hot/cold’
Application of a usage − Example is ‘father/son’, ‘moon/sun’.
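The lexical-semantic relations above (hyponymy, synonymy, antonymy) can be sketched as simple Python mappings. The word lists are hand-made illustrative assumptions standing in for a real resource such as WordNet:

```python
# Tiny illustrative stores for the relations discussed above.
HYPERNYMS = {'blue': 'color', 'yellow': 'color', 'rose': 'flower'}  # hyponym -> hypernym
SYNONYMS  = {'author': 'writer', 'fate': 'destiny'}
ANTONYMS  = {'life': 'death', 'rich': 'poor', 'hot': 'cold'}

def hyponyms_of(hypernym):
    """All stored hyponyms (instances) of a generic term (hypernym)."""
    return sorted(word for word, h in HYPERNYMS.items() if h == hypernym)

print(hyponyms_of('color'))  # ['blue', 'yellow']
print(SYNONYMS['fate'])      # destiny
```

A real system would query WordNet for these relations; the lookup pattern is the same.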
Semantic analysis creates a representation of the meaning of a sentence. But before getting into the concept and approaches related to meaning representation, we need to understand the building blocks of semantic system.
In word representation or representation of the meaning of the words, the following building blocks play an important role −
Entities − It represents the individual such as a particular person, location etc. For example, Haryana. India, Ram all are entities.
Concepts − It represents the general category of the individuals such as a person, city, etc.
Relations − It represents the relationship between entities and concept. For example, Ram is a person.
Predicates − It represents the verb structures. For example, semantic roles and case grammar are the examples of predicates.
Now, we can understand that meaning representation shows how to put together the building blocks of semantic systems. In other words, it shows how to put together entities, concepts, relation and predicates to describe a situation. It also enables the reasoning about the semantic world.
Semantic analysis uses the following approaches for the representation of meaning −
First order predicate logic (FOPL)
Semantic Nets
Frames
Conceptual dependency (CD)
Rule-based architecture
Case Grammar
Conceptual Graphs
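To make the building blocks concrete, a FOPL-style representation of facts like "Ram is a person" can be sketched as predicate tuples. The ('predicate', arg1, ...) encoding is an illustrative convention, not a standard library:

```python
# FOPL-style facts: concept membership and relations between entities,
# encoded as ('predicate', arg1, ...) tuples. Illustrative sketch only.
FACTS = {
    ('person', 'Ram'),                   # concept: person(Ram)
    ('located_in', 'Haryana', 'India'),  # relation: located_in(Haryana, India)
}

def holds(fact):
    """Verify whether a fact is true in our tiny semantic world."""
    return fact in FACTS

print(holds(('person', 'Ram')))    # True
print(holds(('person', 'Shyam')))  # False: not asserted
```

Reasoning systems extend this lookup with inference rules, e.g. deriving new facts from asserted ones.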
A question that arises here is why do we need meaning representation? Followings are the reasons for the same −
The very first reason is that with the help of meaning representation the linking of linguistic elements to the non-linguistic elements can be done.
With the help of meaning representation, unambiguous, canonical forms can be represented at the lexical level.
Meaning representation can be used to reason for verifying what is true in the world as well as to infer the knowledge from the semantic representation.
The first part of semantic analysis, studying the meaning of individual words is called lexical semantics. It includes words, sub-words, affixes (sub-units), compound words and phrases also. All the words, sub-words, etc. are collectively called lexical items. In other words, we can say that lexical semantics is the relationship between lexical items, meaning of sentences and syntax of sentence.
Following are the steps involved in lexical semantics −
Classification of lexical items like words, sub-words, affixes, etc. is performed in lexical semantics.
Decomposition of lexical items like words, sub-words, affixes, etc. is performed in lexical semantics.
Differences as well as similarities between various lexical semantic structures are also analyzed.
We understand that words have different meanings based on the context of its usage in the sentence. If we talk about human languages, then they are ambiguous too because many words can be interpreted in multiple ways depending upon the context of their occurrence.
Word sense disambiguation, in natural language processing (NLP), may be defined as the ability to determine which meaning of word is activated by the use of word in a particular context. Lexical ambiguity, syntactic or semantic, is one of the very first problem that any NLP system faces. Part-of-speech (POS) taggers with high level of accuracy can solve Word’s syntactic ambiguity. On the other hand, the problem of resolving semantic ambiguity is called WSD (word sense disambiguation). Resolving semantic ambiguity is harder than resolving syntactic ambiguity.
For example, consider the two examples of the distinct sense that exist for the word “bass” −
I can hear bass sound.
He likes to eat grilled bass.
The occurrence of the word bass clearly denotes the distinct meaning. In first sentence, it means frequency and in second, it means fish. Hence, if it would be disambiguated by WSD then the correct meaning to the above sentences can be assigned as follows −
I can hear bass/frequency sound.
He likes to eat grilled bass/fish.
The evaluation of WSD requires the following two inputs −
The very first input for evaluation of WSD is dictionary, which is used to specify the senses to be disambiguated.
Another input required by WSD is the high-annotated test corpus that has the target or correct-senses. The test corpora can be of two types −
Lexical sample − This kind of corpora is used in the system, where it is required to disambiguate a small sample of words.
All-words − This kind of corpora is used in the system, where it is expected to disambiguate all the words in a piece of running text.
Approaches and methods to WSD are classified according to the source of knowledge used in word disambiguation.
Let us now see the four conventional methods to WSD −
As the name suggests, for disambiguation, these methods primarily rely on dictionaries, thesauruses and lexical knowledge bases. They do not use corpora evidence for disambiguation. The Lesk method is the seminal dictionary-based method introduced by Michael Lesk in 1986. The Lesk definition, on which the Lesk algorithm is based, is “measure overlap between sense definitions for all words in context”. However, in 2000, Kilgarriff and Rosensweig gave the simplified Lesk definition as “measure overlap between sense definitions of word and current context”, which further means identifying the correct sense for one word at a time. Here the current context is the set of words in the surrounding sentence or paragraph.
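The simplified Lesk idea can be sketched in a few lines: choose the sense whose gloss shares the most words with the context. The two-sense mini-dictionary for “bass” below is a hand-made illustrative assumption, not a real lexical resource:

```python
# Simplified Lesk: pick the sense whose gloss overlaps most with the context.
# The glosses below are hand-made illustrative assumptions.
SENSES = {
    'bass/frequency': 'low deep sound frequency tone music',
    'bass/fish':      'freshwater fish eat grilled cooked food',
}

def simplified_lesk(context):
    """Return the sense label with the largest gloss/context word overlap."""
    ctx = set(context.lower().split())
    return max(SENSES, key=lambda sense: len(ctx & set(SENSES[sense].split())))

print(simplified_lesk('I can hear bass sound'))         # bass/frequency
print(simplified_lesk('He likes to eat grilled bass'))  # bass/fish
```

With real dictionary glosses (e.g. WordNet definitions) the overlap counting is identical; only the sense inventory grows.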
For disambiguation, machine learning methods make use of sense-annotated corpora to train. These methods assume that the context can provide enough evidence on its own to disambiguate the sense. In these methods, the words knowledge and reasoning are deemed unnecessary. The context is represented as a set of “features” of the words. It includes the information about the surrounding words also. Support vector machine and memory-based learning are the most successful supervised learning approaches to WSD. These methods rely on substantial amount of manually sense-tagged corpora, which is very expensive to create.
Due to the lack of training corpus, most of the word sense disambiguation algorithms use semi-supervised learning methods. It is because semi-supervised methods use both labelled as well as unlabeled data. These methods require a very small amount of annotated text and a large amount of plain unannotated text. The technique that is used by semi-supervised methods is bootstrapping from seed data.
These methods assume that similar senses occur in similar context. That is why the senses can be induced from text by clustering word occurrences by using some measure of similarity of the context. This task is called word sense induction or discrimination. Unsupervised methods have great potential to overcome the knowledge acquisition bottleneck due to non-dependency on manual efforts.
Word sense disambiguation (WSD) is applied in almost every application of language technology.
Let us now see the scope of WSD −
Machine translation or MT is the most obvious application of WSD. In MT, Lexical choice for the words that have distinct translations for different senses, is done by WSD. The senses in MT are represented as words in the target language. Most of the machine translation systems do not use explicit WSD module.
Information retrieval (IR) may be defined as a software program that deals with the organization, storage, retrieval and evaluation of information from document repositories particularly textual information. The system basically assists users in finding the information they required but it does not explicitly return the answers of the questions. WSD is used to resolve the ambiguities of the queries provided to IR system. As like MT, current IR systems do not explicitly use WSD module and they rely on the concept that user would type enough context in the query to only retrieve relevant documents.
In most of the applications, WSD is necessary to do accurate analysis of text. For example, WSD helps intelligent gathering system to do flagging of the correct words. For example, medical intelligent system might need flagging of “illegal drugs” rather than “medical drugs”
WSD and lexicography can work together in a loop because modern lexicography is corpus-based. With lexicography, WSD provides rough empirical sense groupings as well as statistically significant contextual indicators of sense.
Followings are some difficulties faced by word sense disambiguation (WSD) −
The major problem of WSD is to decide the sense of the word because different senses can be very closely related. Even different dictionaries and thesauruses can provide different divisions of words into senses.
Another problem of WSD is that completely different algorithm might be needed for different applications. For example, in machine translation, it takes the form of target word selection; and in information retrieval, a sense inventory is not required.
Another problem of WSD is that WSD systems are generally tested by having their results on a task compared against the judgments of human beings. This is called the problem of inter-judge variance.
Another difficulty in WSD is that words cannot be easily divided into discrete submeanings.
Processing natural language by computers is among the most difficult problems of artificial intelligence. One of the major problems in NLP is discourse processing − building theories and models of how utterances stick together to form coherent discourse. Language always consists of collocated, structured and coherent groups of sentences rather than isolated and unrelated ones. These coherent groups of sentences are referred to as discourse.
Coherence and discourse structure are interconnected in many ways. Coherence, along with the property of good text, is used to evaluate the output quality of a natural language generation system. The question that arises here is what does it mean for a text to be coherent? Suppose we collected one sentence from every page of the newspaper; would it be a discourse? Of course not. It is because these sentences do not exhibit coherence. The coherent discourse must possess the following properties −
The discourse would be coherent if it has meaningful connections between its utterances. This property is called coherence relation. For example, some sort of explanation must be there to justify the connection between utterances.
Another property that makes a discourse coherent is that there must be a certain kind of relationship with the entities. Such kind of coherence is called entity-based coherence.
An important question regarding discourse is what kind of structure the discourse must have. The answer to this question depends upon the segmentation we applied on discourse. Discourse segmentations may be defined as determining the types of structures for large discourse. It is quite difficult to implement discourse segmentation, but it is very important for information retrieval, text summarization and information extraction kind of applications.
In this section, we will learn about the algorithms for discourse segmentation. The algorithms are described below −
The class of unsupervised discourse segmentation is often represented as linear segmentation. We can understand the task of linear segmentation with the help of an example. In the example, there is a task of segmenting the text into multi-paragraph units; the units represent the passage of the original text. These algorithms are dependent on cohesion, which may be defined as the use of certain linguistic devices to tie the textual units together. Lexical cohesion, in particular, is the cohesion that is indicated by the relationship between two or more words in two units, like the use of synonyms.
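The lexical-cohesion idea behind linear segmentation can be sketched directly: place a boundary where adjacent units share few words. The Jaccard similarity measure and the threshold below are illustrative choices, not part of any standard algorithm:

```python
# Sketch of cohesion-based linear segmentation: a boundary falls where
# adjacent sentences share little vocabulary. Measure and threshold are
# illustrative choices.
def cohesion(s1, s2):
    """Jaccard word overlap between two sentences."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / max(1, len(a | b))

def boundaries(sentences, threshold=0.1):
    """Indices i where a segment boundary falls between sentence i and i+1."""
    return [i for i in range(len(sentences) - 1)
            if cohesion(sentences[i], sentences[i + 1]) < threshold]

sents = [
    'the bank approved the loan',
    'the loan from the bank was large',
    'grilled fish is best with lemon',
]
print(boundaries(sents))  # [1]: topic shift between sentences 1 and 2
```

Algorithms such as TextTiling refine this basic idea with sliding windows, smoothing and depth scoring.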
The earlier method does not have any hand-labeled segment boundaries. On the other hand, supervised discourse segmentation needs to have boundary-labeled training data. It is very easy to acquire the same. In supervised discourse segmentation, discourse marker or cue words play an important role. Discourse marker or cue word is a word or phrase that functions to signal discourse structure. These discourse markers are domain-specific.
Lexical repetition is a way to find the structure in a discourse, but it does not satisfy the requirement of being coherent discourse. To achieve coherent discourse, we must focus on coherence relations specifically. As we know, coherence relations define the possible connections between utterances in a discourse. Hobbs has proposed such kinds of relations as follows −
We are taking two terms S0 and S1 to represent the meaning of the two related sentences −
Result − It infers that the state asserted by term S0 could cause the state asserted by S1. For example, two statements show the relation result: Ram was caught in the fire. His skin burned.

Explanation − It infers that the state asserted by S1 could cause the state asserted by S0. For example, two statements show the relation explanation: Ram fought with Shyam’s friend. He was drunk.

Parallel − It infers p(a1,a2,...) from the assertion of S0 and p(b1,b2,...) from the assertion of S1, where ai and bi are similar for all i. For example, two statements are parallel: Ram wanted a car. Shyam wanted money.

Elaboration − It infers the same proposition P from both the assertions S0 and S1. For example, two statements show the relation elaboration: Ram was from Chandigarh. Shyam was from Kerala.

Occasion − It happens when a change of state can be inferred from the assertion of S0, the final state of which can be inferred from S1, and vice-versa. For example, the two statements show the relation occasion: Ram picked up the book. He gave it to Shyam.
The coherence of entire discourse can also be considered by hierarchical structure between coherence relations. For example, the following passage can be represented as hierarchical structure −
S1 − Ram went to the bank to deposit money.

S2 − He then took a train to Shyam’s cloth shop.

S3 − He wanted to buy some clothes.

S4 − He did not have new clothes for the party.

S5 − He also wanted to talk to Shyam regarding his health.
Interpretation of the sentences from any discourse is another important task, and to achieve this we need to know who or what entity is being talked about. Here, reference interpretation is the key element. Reference may be defined as a linguistic expression used to denote an entity or individual. For example, in the passage Ram, the manager of ABC bank, saw his friend Shyam at a shop. He went to meet him, the linguistic expressions Ram, his and he are references.
On the same note, reference resolution may be defined as the task of determining what entities are referred to by which linguistic expression.
We use the following terminologies in reference resolution −
Referring expression − The natural language expression that is used to perform reference is called a referring expression. For example, the passage used above is a referring expression.

Referent − It is the entity that is referred to. For example, in the last given example, Ram is a referent.

Corefer − When two expressions are used to refer to the same entity, they are called corefers. For example, Ram and he are corefers.

Antecedent − The term that licenses the use of another term. For example, Ram is the antecedent of the reference he.

Anaphora & Anaphoric − It may be defined as the reference to an entity that has been previously introduced into the sentence. The referring expression used is called anaphoric.

Discourse model − The model that contains the representations of the entities that have been referred to in the discourse and the relationships they are engaged in.
Let us now see the different types of referring expressions. The five types of referring expressions are described below −
Indefinite noun phrases − Such kind of reference represents the entities that are new to the hearer in the discourse context. For example, in the sentence Ram had gone around one day to bring him some food, some is an indefinite reference.

Definite noun phrases − Opposite to the above, such kind of reference represents the entities that are not new, or are identifiable to the hearer, in the discourse context. For example, in the sentence I used to read The Times of India, The Times of India is a definite reference.

Pronouns − A form of definite reference. For example, in Ram laughed as loud as he could, the word he is a pronoun referring expression.

Demonstratives − These demonstrate and behave differently than simple definite pronouns. For example, this and that are demonstrative pronouns.

Names − The simplest type of referring expression. It can be the name of a person, organization or location. For example, in the above examples, Ram is a name referring expression.
The two reference resolution tasks are described below.
It is the task of finding referring expressions in a text that refer to the same entity. In simple words, it is the task of finding corefer expressions. A set of coreferring expressions is called a coreference chain. For example, Ram, the manager and his are coreferring expressions in the passage given earlier as an example.
In English, the main problem for coreference resolution is the pronoun it. The reason behind this is that the pronoun it has many uses. For example, it can refer to entities much like he and she. The pronoun it can also refer to things that are not specific entities at all. For example, It’s raining. It is really good.
Unlike coreference resolution, pronominal anaphora resolution may be defined as the task of finding the antecedent for a single pronoun. For example, for the pronoun his in the passage above, the task of pronominal anaphora resolution is to find the word Ram, because Ram is the antecedent.
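A naive recency-based sketch of pronominal anaphora resolution is shown below. The tiny gender lexicon and the "most recent agreeing name" heuristic are illustrative assumptions; real resolvers use syntactic structure and much richer agreement features:

```python
# Hypothetical toy lexicon; a real resolver would derive gender and
# number agreement features from morphological analysis and a parser.
GENDER = {"ram": "male", "shyam": "male", "sita": "female"}
PRONOUNS = {"he": "male", "him": "male", "his": "male",
            "she": "female", "her": "female"}

def resolve_pronouns(tokens):
    """Link each pronoun to the most recent preceding name that agrees
    in gender (a simple recency heuristic)."""
    links, seen = {}, []
    for i, tok in enumerate(tokens):
        word = tok.lower()
        if word in GENDER:
            seen.append((i, word))
        elif word in PRONOUNS:
            for j, name in reversed(seen):
                if GENDER[name] == PRONOUNS[word]:
                    links[i] = j
                    break
    return links

links = resolve_pronouns("Ram saw his friend Shyam at a shop".split())
# links == {2: 0}: the pronoun "his" (token 2) is resolved to "Ram" (token 0)
```

Note that pure recency would prefer Shyam for he in Ram fought with Shyam and he was drunk, which may be wrong; this is why practical systems need more than this heuristic.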
Tagging is a kind of classification that may be defined as the automatic assignment of description to the tokens. Here the descriptor is called tag, which may represent one of the part-of-speech, semantic information and so on.
Now, if we talk about Part-of-Speech (PoS) tagging, then it may be defined as the process of assigning one of the parts of speech to a given word. It is generally called POS tagging. In simple words, we can say that POS tagging is the task of labelling each word in a sentence with its appropriate part of speech. We already know that parts of speech include nouns, verbs, adverbs, adjectives, pronouns, conjunctions and their sub-categories.
Most POS tagging approaches fall under rule-based POS tagging, stochastic POS tagging and transformation-based tagging.
One of the oldest techniques of tagging is rule-based POS tagging. Rule-based taggers use a dictionary or lexicon for getting possible tags for tagging each word. If the word has more than one possible tag, then rule-based taggers use hand-written rules to identify the correct tag. Disambiguation can also be performed in rule-based tagging by analyzing the linguistic features of a word along with its preceding as well as following words. For example, if the preceding word of a word is an article, then the word must be a noun.
As the name suggests, all such kind of information in rule-based POS tagging is coded in the form of rules. These rules may be either −
Context-pattern rules

Or, as regular expressions compiled into finite-state automata, intersected with a lexically ambiguous sentence representation.
We can also understand Rule-based POS tagging by its two-stage architecture −
First stage − In the first stage, it uses a dictionary to assign each word a list of potential parts-of-speech.

Second stage − In the second stage, it uses large lists of hand-written disambiguation rules to sort the list down to a single part-of-speech for each word.
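The two-stage architecture can be sketched as follows; the lexicon and the single disambiguation rule (a word right after an article is tagged as a noun, as quoted above) are toy assumptions, not a real tagset:

```python
# Toy lexicon (stage one) and one hand-written rule (stage two).
LEXICON = {
    "the": ["DET"], "a": ["DET"],
    "book": ["NOUN", "VERB"],
    "can": ["NOUN", "VERB", "AUX"],
}

def rule_based_tag(words):
    tags = []
    for i, w in enumerate(words):
        # Stage one: the dictionary proposes a list of potential tags.
        candidates = LEXICON.get(w.lower(), ["NOUN"])
        if len(candidates) > 1 and i > 0 and tags[i - 1] == "DET" \
                and "NOUN" in candidates:
            # Stage two: hand-written rule − a word right after
            # an article must be a noun.
            tags.append("NOUN")
        else:
            tags.append(candidates[0])
    return tags
```

For example, `rule_based_tag("the can".split())` disambiguates can to NOUN because it follows a determiner.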
Rule-based POS taggers possess the following properties −
These taggers are knowledge-driven taggers.

The rules in rule-based POS tagging are built manually.

The information is coded in the form of rules.

There is a limited number of rules, approximately around 1000.

Smoothing and language modeling are defined explicitly in rule-based taggers.
Another technique of tagging is Stochastic POS Tagging. Now, the question that arises here is which model can be stochastic. The model that includes frequency or probability (statistics) can be called stochastic. Any number of different approaches to the problem of part-of-speech tagging can be referred to as stochastic tagger.
The simplest stochastic tagger applies the following approaches for POS tagging −
In this approach, the stochastic taggers disambiguate the words based on the probability that a word occurs with a particular tag. We can also say that the tag encountered most frequently with the word in the training set is the one assigned to an ambiguous instance of that word. The main issue with this approach is that it may yield an inadmissible sequence of tags.
It is another approach of stochastic tagging, where the tagger calculates the probability of a given sequence of tags occurring. It is also called n-gram approach. It is called so because the best tag for a given word is determined by the probability at which it occurs with the n previous tags.
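The word-frequency approach can be sketched as a unigram tagger trained on a tiny hand-made corpus; the corpus and tagset below are illustrative assumptions:

```python
from collections import Counter, defaultdict

def train_unigram_tagger(tagged_sentences):
    """Count (word, tag) pairs in the training corpus."""
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, tag in sentence:
            counts[word.lower()][tag] += 1
    # For each word, keep the tag it occurred with most frequently.
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(words, model, default="NOUN"):
    """Tag each word with its most frequent training tag."""
    return [model.get(w.lower(), default) for w in words]

corpus = [  # hand-made toy corpus, purely illustrative
    [("the", "DET"), ("dog", "NOUN"), ("runs", "VERB")],
    [("the", "DET"), ("run", "NOUN"), ("was", "VERB"), ("fun", "ADJ")],
    [("dogs", "NOUN"), ("run", "VERB")],
    [("dogs", "NOUN"), ("run", "VERB")],
]
model = train_unigram_tagger(corpus)
# "run" was seen twice as VERB and once as NOUN, so it is tagged VERB
```

This also shows the main issue mentioned above: each word is tagged in isolation, so nothing prevents an inadmissible tag sequence.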
Stochastic POS taggers possess the following properties −
This POS tagging is based on the probability of a tag occurring.

It requires a training corpus.

There would be no probability for the words that do not exist in the training corpus.

It uses a different testing corpus (other than the training corpus).

It is the simplest POS tagging approach because it chooses the most frequent tag associated with a word in the training corpus.
Transformation-based tagging is also called Brill tagging. It is an instance of transformation-based learning (TBL), which is a rule-based algorithm for automatic tagging of POS to the given text. TBL allows us to have linguistic knowledge in a readable form and transforms one state to another state by using transformation rules.
It draws inspiration from both of the previously explained taggers − rule-based and stochastic. If we look at the similarity between rule-based and transformation taggers, then, like rule-based, it is also based on rules that specify what tags need to be assigned to what words. On the other hand, if we look at the similarity between stochastic and transformation taggers, then, like stochastic, it is a machine learning technique in which rules are automatically induced from data.
In order to understand the working and concept of transformation-based taggers, we need to understand the working of transformation-based learning. Consider the following steps to understand the working of TBL −
Start with the solution − The TBL usually starts with some solution to the problem and works in cycles.

Most beneficial transformation chosen − In each cycle, TBL will choose the most beneficial transformation.

Apply to the problem − The transformation chosen in the last step will be applied to the problem.
The algorithm will stop when the transformation selected in step 2 no longer adds value or when there are no more transformations to be selected. Such kind of learning is best suited for classification tasks.
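A minimal sketch of one TBL cycle is shown below, restricted to rules of the form "change tag A to B when the previous tag is C". The words, gold tags and baseline tags are toy assumptions:

```python
from itertools import product

def tbl_learn_rule(gold, current):
    """One TBL cycle: among rules 'change tag A to B when the previous
    tag is C', return the rule that fixes the most errors on the data."""
    tagset = set(gold) | set(current)
    best_rule, best_gain = None, 0
    for a, b, c in product(tagset, repeat=3):
        gain = 0
        for i in range(1, len(gold)):
            if current[i] == a and current[i - 1] == c:
                gain += (b == gold[i]) - (a == gold[i])
        if gain > best_gain:
            best_rule, best_gain = (a, b, c), gain
    return best_rule

def apply_rule(tags, rule):
    a, b, c = rule
    return [b if i > 0 and t == a and tags[i - 1] == c else t
            for i, t in enumerate(tags)]

# Gold tags from annotation; baseline tags from e.g. a unigram tagger.
gold = ["DET", "NOUN", "DET", "NOUN"]       # for: the can ... the can
baseline = ["DET", "VERB", "DET", "VERB"]
rule = tbl_learn_rule(gold, baseline)
# rule == ("VERB", "NOUN", "DET"): retag VERB as NOUN after a determiner
```

A full Brill tagger repeats this cycle, appending the learned rule to an ordered rule list until no transformation improves accuracy.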
The advantages of TBL are as follows −
We learn a small set of simple rules and these rules are enough for tagging.

Development as well as debugging is very easy in TBL because the learned rules are easy to understand.

Complexity in tagging is reduced because in TBL there is an interlacing of machine-learned and human-generated rules.

The transformation-based tagger is much faster than a Markov-model tagger.
The disadvantages of TBL are as follows −
Transformation-based learning (TBL) does not provide tag probabilities.

In TBL, the training time is very long, especially on large corpora.
Before digging deep into HMM POS tagging, we must understand the concept of Hidden Markov Model (HMM).
An HMM model may be defined as the doubly-embedded stochastic model, where the underlying stochastic process is hidden. This hidden stochastic process can only be observed through another set of stochastic processes that produces the sequence of observations.
For example, a sequence of hidden coin tossing experiments is done and we see only the observation sequence consisting of heads and tails. The actual details of the process − how many coins were used, the order in which they were selected − are hidden from us. By observing this sequence of heads and tails, we can build several HMMs to explain the sequence. Following is one form of Hidden Markov Model for this problem −
We assumed that there are two states in the HMM, and each state corresponds to the selection of a different biased coin. The following matrix gives the state transition probabilities −

A = | a11  a12 |
    | a21  a22 |
Here,
aij = the probability of transition from state i to state j.

a11 + a12 = 1 and a21 + a22 = 1

P1 = probability of heads of the first coin, i.e., the bias of the first coin.

P2 = probability of heads of the second coin, i.e., the bias of the second coin.
We can also create an HMM model assuming that there are 3 coins or more.
This way, we can characterize HMM by the following elements −
N, the number of states in the model (in the above example, N = 2: only two states).

M, the number of distinct observations that can appear with each state (in the above example, M = 2, i.e., H or T).

A, the state transition probability distribution (the matrix A in the above example).

P, the probability distribution of the observable symbols in each state (in our example, P1 and P2).

I, the initial state distribution.
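The two-coin model can be written out directly from these five elements; the numeric probability values below are illustrative assumptions, not values given in the text:

```python
# The two-coin HMM written out as plain data structures.
states = ["coin1", "coin2"]                     # N = 2 hidden states
symbols = ["H", "T"]                            # M = 2 observable symbols
A = {"coin1": {"coin1": 0.7, "coin2": 0.3},     # state transitions a_ij
     "coin2": {"coin1": 0.4, "coin2": 0.6}}
P = {"coin1": {"H": 0.9, "T": 0.1},             # P1: bias of the first coin
     "coin2": {"H": 0.2, "T": 0.8}}             # P2: bias of the second coin
I = {"coin1": 0.5, "coin2": 0.5}                # initial state distribution

def sequence_probability(obs, path):
    """Joint probability of an observation sequence and a hidden state path."""
    p = I[path[0]] * P[path[0]][obs[0]]
    for t in range(1, len(obs)):
        p *= A[path[t - 1]][path[t]] * P[path[t]][obs[t]]
    return p
```

Each row of A sums to 1, matching the constraints a11 + a12 = 1 and a21 + a22 = 1 above.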
The POS tagging process is the process of finding the sequence of tags which is most likely to have generated a given word sequence. We can model this POS process by using a Hidden Markov Model (HMM), where tags are the hidden states that produced the observable output, i.e., the words.
Mathematically, in POS tagging, we are always interested in finding a tag sequence (C) which maximizes −
P (C|W)
Where,
C = C1, C2, C3... CT
W = W1, W2, W3, WT
On the other side of the coin, the fact is that we need a lot of statistical data to reasonably estimate such kinds of sequences. However, to simplify the problem, we can apply some mathematical transformations along with some assumptions.
The use of HMM to do a POS tagging is a special case of Bayesian interference. Hence, we will start by restating the problem using Bayes’ rule, which says that the above-mentioned conditional probability is equal to −
(PROB (C1,..., CT) * PROB (W1,..., WT | C1,..., CT)) / PROB (W1,..., WT)
We can eliminate the denominator in all these cases because we are interested in finding the sequence C which maximizes the above value. This will not affect our answer. Now, our problem reduces to finding the sequence C that maximizes −
PROB (C1,..., CT) * PROB (W1,..., WT | C1,..., CT) (1)
Even after reducing the problem in the above expression, it would require large amount of data. We can make reasonable independence assumptions about the two probabilities in the above expression to overcome the problem.
The probability of a tag depends on the previous one (bigram model) or previous two (trigram model) or previous n tags (n-gram model) which, mathematically, can be explained as follows −
PROB (C1,..., CT) = Πi=1..T PROB (Ci|Ci-n+1...Ci-1) (n-gram model)
PROB (C1,..., CT) = Πi=1..T PROB (Ci|Ci-1) (bigram model)
The beginning of a sentence can be accounted for by assuming an initial probability for each tag.
PROB (C1|C0) = PROB initial (C1)
The second probability in equation (1) above can be approximated by assuming that a word appears in a category independent of the words in the preceding or succeeding categories which can be explained mathematically as follows −
PROB (W1,..., WT | C1,..., CT) = Πi=1..T PROB (Wi|Ci)
Now, on the basis of the above two assumptions, our goal reduces to finding a sequence C which maximizes
Πi=1...T PROB(Ci|Ci-1) * PROB(Wi|Ci)
Now, the question that arises here is: has converting the problem to the above form really helped us? The answer is yes, it has. If we have a large tagged corpus, then the two probabilities in the above formula can be calculated as −
PROB (Ci=VERB|Ci-1=NOUN) = (# of instances where Verb follows Noun) / (# of instances where Noun appears) (2)
PROB (Wi|Ci) = (# of instances where Wi appears in Ci) /(# of instances where Ci appears) (3)
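The maximization of Πi PROB(Ci|Ci-1) * PROB(Wi|Ci) is usually carried out with the Viterbi algorithm. Below is a minimal sketch; the toy tagset and probabilities are assumptions rather than estimates obtained from a real corpus via equations (2) and (3):

```python
def viterbi(words, tags, init, trans, emit):
    """Find the tag sequence maximizing
    PROB(C1) * product of PROB(Ci|Ci-1) * PROB(Wi|Ci)."""
    # V[t][c]: probability of the best tag path ending in tag c at position t
    V = [{c: init[c] * emit[c].get(words[0], 0.0) for c in tags}]
    back = [{}]
    for t in range(1, len(words)):
        V.append({})
        back.append({})
        for c in tags:
            prev = max(tags, key=lambda p: V[t - 1][p] * trans[p][c])
            V[t][c] = V[t - 1][prev] * trans[prev][c] * emit[c].get(words[t], 0.0)
            back[t][c] = prev
    # Trace the best path backwards.
    best = max(tags, key=lambda c: V[-1][c])
    path = [best]
    for t in range(len(words) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy model; the probabilities are illustrative assumptions.
tagset = ["NOUN", "VERB"]
init = {"NOUN": 0.8, "VERB": 0.2}
trans = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
         "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit = {"NOUN": {"dogs": 0.4, "run": 0.1},
        "VERB": {"run": 0.5}}
```

On the input dogs run, the tagger prefers NOUN VERB even though run can be a noun, because the bigram transition NOUN→VERB and the emission probabilities jointly favour that path.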
In this chapter, we will discuss the natural language inception in Natural Language Processing. To begin with, let us first understand what is Natural Language Grammar.
For linguistics, language is a group of arbitrary vocal signs. We may say that language is creative, governed by rules, and innate as well as universal. At the same time, it is distinctly human. The nature of language is different for different people, and there is a lot of misconception about it. That is why it is very important to understand the meaning of the ambiguous term ‘grammar’. In linguistics, grammar may be defined as the rules or principles with the help of which a language works. In a broad sense, we can divide grammar into two categories −
The set of rules, where linguistics and grammarians formulate the speaker’s grammar is called descriptive grammar.
It is a very different sense of grammar, which attempts to maintain a standard of correctness in the language. This category has little to do with the actual working of the language.
The language of study is divided into the interrelated components, which are conventional as well as arbitrary divisions of linguistic investigation. The explanation of these components is as follows −
The very first component of language is phonology. It is the study of the speech sounds of a particular language. The origin of the word can be traced to Greek language, where ‘phone’ means sound or voice. Phonetics, a subdivision of phonology is the study of the speech sounds of human language from the perspective of their production, perception or their physical properties. IPA (International Phonetic Alphabet) is a tool that represents human sounds in a regular way while studying phonology. In IPA, every written symbol represents one and only one speech sound and vice-versa.
It may be defined as one of the units of sound that differentiate one word from other in a language. In linguistic, phonemes are written between slashes. For example, phoneme /k/ occurs in the words such as kit, skit.
It is the second component of language. It is the study of the structure and classification of the words in a particular language. The origin of the word is from Greek language, where the word ‘morphe’ means ‘form’. Morphology considers the principles of formation of words in a language. In other words, how sounds combine into meaningful units like prefixes, suffixes and roots. It also considers how words can be grouped into parts of speech.
In linguistics, the abstract unit of morphological analysis that corresponds to a set of forms taken by a single word is called lexeme. The way in which a lexeme is used in a sentence is determined by its grammatical category. Lexeme can be individual word or multiword. For example, the word talk is an example of an individual word lexeme, which may have many grammatical variants like talks, talked and talking. Multiword lexeme can be made up of more than one orthographic word. For example, speak up, pull through, etc. are the examples of multiword lexemes.
It is the third component of language. It is the study of the order and arrangement of words into larger units. The word can be traced to the Greek language, where the word suntassein means ‘to put in order’. It studies the types of sentences and their structure, as well as that of clauses and phrases.
It is the fourth component of language. It is the study of how meaning is conveyed. The meaning can be related to the outside world or can be related to the grammar of the sentence. The word can be traced to the Greek language, where the word semainein means ‘to signify’, ‘show’, ‘signal’.
It is the fifth component of language. It is the study of the functions of the language and its use in context. The origin of the word can be traced to Greek language where the word ‘pragma’ means ‘deed’, ‘affair’.
A grammatical category may be defined as a class of units or features within the grammar of a language. These units are the building blocks of language and share a common set of characteristics. Grammatical categories are also called grammatical features.
The inventory of grammatical categories is described below −
It is the simplest grammatical category. We have two terms related to this category − singular and plural. Singular is the concept of ‘one’ whereas plural is the concept of ‘more than one’. For example, dog/dogs, this/these.
Grammatical gender is expressed by variation in personal pronouns, chiefly in the 3rd person. Examples of grammatical genders are the 3rd person singular forms he, she and it; the first and second person forms I, we and you are of common gender; and the 3rd person plural form they is either common gender or neuter gender.
Another simple grammatical category is person. Under this, following three terms are recognized −
1st person − The person who is speaking is recognized as 1st person.

2nd person − The person who is the hearer or the person spoken to is recognized as 2nd person.

3rd person − The person or thing about whom we are speaking is recognized as 3rd person.
It is one of the most difficult grammatical categories. It may be defined as an indication of the function of a noun phrase (NP) or the relationship of a noun phrase to a verb or to the other noun phrases in the sentence. We have the following three cases expressed in personal and interrogative pronouns −
Nominative case − It is the function of subject. For example, I, we, you, he, she, it, they and who are nominative.

Genitive case − It is the function of possessor. For example, my/mine, our/ours, his, her/hers, its, their/theirs, whose are genitive.

Objective case − It is the function of object. For example, me, us, you, him, her, them, whom are objective.
This grammatical category is related to adjectives and adverbs. It has the following three terms −
Positive degree − It expresses a quality. For example, big, fast, beautiful are positive degrees.

Comparative degree − It expresses greater degree or intensity of the quality in one of two items. For example, bigger, faster, more beautiful are comparative degrees.

Superlative degree − It expresses greatest degree or intensity of the quality in one of three or more items. For example, biggest, fastest, most beautiful are superlative degrees.
Both these concepts are very simple. Definiteness as we know represents a referent, which is known, familiar or identifiable by the speaker or hearer. On the other hand, indefiniteness represents a referent that is not known, or is unfamiliar. The concept can be understood in the co-occurrence of an article with a noun −
definite article − the

indefinite article − a/an
This grammatical category is related to verb and can be defined as the linguistic indication of the time of an action. A tense establishes a relation because it indicates the time of an event with respect to the moment of speaking. Broadly, it is of the following three types −
Present tense − Represents the occurrence of an action in the present moment. For example, Ram works hard.

Past tense − Represents the occurrence of an action before the present moment. For example, it rained.

Future tense − Represents the occurrence of an action after the present moment. For example, it will rain.
This grammatical category may be defined as the view taken of an event. It can be of the following types −
Perfective aspect − The view is taken as whole and complete in the aspect. For example, the simple past tense like yesterday I met my friend, in English is perfective in aspect as it views the event as complete and whole.

Imperfective aspect − The view is taken as ongoing and incomplete in the aspect. For example, the present participle tense like I am working on this problem, in English is imperfective in aspect as it views the event as incomplete and ongoing.
This grammatical category is a bit difficult to define but it can be simply stated as the indication of the speaker’s attitude towards what he/she is talking about. It is also the grammatical feature of verbs. It is distinct from grammatical tenses and grammatical aspect. The examples of moods are indicative, interrogative, imperative, injunctive, subjunctive, potential, optative, gerunds and participles.
It is also called concord. It happens when a word changes form depending on the other words to which it relates. In other words, it involves making the value of some grammatical category agree between different words or parts of speech. Following are the agreements based on other grammatical categories −
Agreement based on Person − It is the agreement between the subject and the verb. For example, we always use “I am” and “He is” but never “He am” and “I is”.

Agreement based on Number − This agreement is between the subject and the verb. In this case, there are specific verb forms for first person singular, first person plural and so on. For example, 1st person singular: I really am, 1st person plural: We really are, 3rd person singular: The boy sings, 3rd person plural: The boys sing.

Agreement based on Gender − In English, there is agreement in gender between pronouns and antecedents. For example, He reached his destination. The ship reached her destination.

Agreement based on Case − This kind of agreement is not a significant feature of English. For example, who came first − he or his sister?
The written English and spoken English grammar have many common features but along with that, they also differ in a number of aspects. The following features distinguish between the spoken and written English grammar −
This striking feature makes spoken and written English grammar different from each other. These phenomena are individually known as disfluencies and collectively as the phenomena of repair. Disfluencies include the use of the following −
Filler words − Sometimes in between the sentence, we use some filler words. They are called fillers or filled pauses. Examples of such words are uh and um.

Reparandum and repair − The repeated segment of words in between the sentence is called the reparandum. In the same segment, the changed word is called the repair. Consider the following example to understand this −
Does ABC airlines offer any one-way flights uh one-way fares for 5000 rupees?
In the above sentence, one-way flights is the reparandum and one-way fares is the repair.
After the filler pause, a restart occurs. For example, in the above sentence, a restart occurs when the speaker starts asking about one-way flights, then stops, corrects himself with a filler pause, and then restarts by asking about one-way fares.
Sometimes we speak sentences with smaller fragments of words. For example, w-wha-what is the time? Here, w- and wha- are word fragments.
Information retrieval (IR) may be defined as a software program that deals with the organization, storage, retrieval and evaluation of information from document repositories particularly textual information. The system assists users in finding the information they require but it does not explicitly return the answers of the questions. It informs the existence and location of documents that might consist of the required information. The documents that satisfy user’s requirement are called relevant documents. A perfect IR system will retrieve only relevant documents.
With the help of the following diagram, we can understand the process of information retrieval (IR) −
It is clear from the above diagram that a user who needs information will have to formulate a request in the form of query in natural language. Then the IR system will respond by retrieving the relevant output, in the form of documents, about the required information.
The main goal of IR research is to develop a model for retrieving information from the repositories of documents. Here, we are going to discuss a classical problem, named ad-hoc retrieval problem, related to the IR system.
In ad-hoc retrieval, the user must enter a query in natural language that describes the required information. Then the IR system will return the required documents related to the desired information. For example, suppose we are searching something on the Internet and it gives some exact pages that are relevant as per our requirement but there can be some non-relevant pages too. This is due to the ad-hoc retrieval problem.
Followings are some aspects of ad-hoc retrieval that are addressed in IR research −
How users with the help of relevance feedback can improve original formulation of a query?
How to implement database merging, i.e., how results from different text databases can be merged into one result set?
How to handle partly corrupted data? Which models are appropriate for the same?
Mathematically, models are used in many scientific areas having objective to understand some phenomenon in the real world. A model of information retrieval predicts and explains what a user will find in relevance to the given query. IR model is basically a pattern that defines the above-mentioned aspects of retrieval procedure and consists of the following −
A model for documents.
A model for queries.
A matching function that compares queries to documents.
Mathematically, a retrieval model consists of −
D − Representation for documents.
Q − Representation for queries.
F − The modeling framework for D, Q along with relationship between them.
R (q,di) − A similarity function which orders the documents with respect to the query. It is also called ranking.
An information retrieval (IR) model can be classified into the following three models −
It is the simplest and easy to implement IR model. This model is based on mathematical knowledge that was easily recognized and understood as well. Boolean, Vector and Probabilistic are the three classical IR models.
It is completely opposite to classical IR model. Such kind of IR models are based on principles other than similarity, probability, Boolean operations. Information logic model, situation theory model and interaction models are the examples of non-classical IR model.
It is the enhancement of classical IR model making use of some specific techniques from some other fields. Cluster model, fuzzy model and latent semantic indexing (LSI) models are the example of alternative IR model.
Let us now learn about the design features of IR systems −
The primary data structure of most IR systems is the inverted index. We can define an inverted index as a data structure that lists, for every word, all documents that contain it and the frequency of its occurrences in each document. It makes it easy to search for ‘hits’ of a query word.
Stop words are those high frequency words that are deemed unlikely to be useful for searching. They have less semantic weights. All such kind of words are in a list called stop list. For example, articles “a”, “an”, “the” and prepositions like “in”, “of”, “for”, “at” etc. are the examples of stop words. The size of the inverted index can be significantly reduced by stop list. As per Zipf’s law, a stop list covering a few dozen words reduces the size of inverted index by almost half. On the other hand, sometimes the elimination of stop word may cause elimination of the term that is useful for searching. For example, if we eliminate the alphabet “A” from “Vitamin A” then it would have no significance.
Stemming, the simplified form of morphological analysis, is the heuristic process of extracting the base form of words by chopping off the ends of words. For example, the words laughing, laughs, laughed would be stemmed to the root word laugh.
In our subsequent sections, we will discuss about some important and useful IR models.
It is the oldest information retrieval (IR) model. The model is based on set theory and the Boolean algebra, where documents are sets of terms and queries are Boolean expressions on terms. The Boolean model can be defined as −
D − A set of words, i.e., the indexing terms present in a document. Here, each term is either present (1) or absent (0).
Q − A Boolean expression, where terms are the index terms and operators are logical products − AND, logical sum − OR and logical difference − NOT
F − Boolean algebra over sets of terms as well as over sets of documents
If we talk about the relevance feedback, then in Boolean IR model the Relevance prediction can be defined as follows −
R − A document is predicted as relevant to the query expression if and only if it satisfies the query expression as −
((text ∨ information) ∧ retrieval ∧ ¬theory)
We can explain this model by a query term as an unambiguous definition of a set of documents.
For example, the query term “economic” defines the set of documents that are indexed with the term “economic”.
Now, what would be the result after combining terms with Boolean AND Operator? It will define a document set that is smaller than or equal to the document sets of any of the single terms. For example, the query with terms “social” and “economic” will produce the documents set of documents that are indexed with both the terms. In other words, document set with the intersection of both the sets.
Now, what would be the result after combining terms with Boolean OR operator? It will define a document set that is bigger than or equal to the document sets of any of the single terms. For example, the query with terms “social” or “economic” will produce the documents set of documents that are indexed with either the term “social” or “economic”. In other words, document set with the union of both the sets.
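The set semantics described above can be sketched directly with Python sets. The inverted index below is a made-up toy example; real systems store postings lists, but AND, OR and NOT behave exactly like intersection, union and difference.

```python
# toy inverted index: term -> set of ids of documents containing that term
index = {
    "social":   {1, 2, 5},
    "economic": {2, 3, 5},
    "theory":   {3, 4},
}

# AND -> intersection: documents indexed with BOTH terms
social_and_economic = index["social"] & index["economic"]

# OR -> union: documents indexed with EITHER term
social_or_economic = index["social"] | index["economic"]

# NOT -> set difference: documents with one term but not the other
economic_not_theory = index["economic"] - index["theory"]

print(social_and_economic)  # {2, 5}
print(social_or_economic)   # {1, 2, 3, 5}
```

Note how the AND result is never larger than either single-term set, while the OR result is never smaller — exactly the behavior described above.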
The advantages of the Boolean model are as follows −
The simplest model, which is based on sets.
Easy to understand and implement.
It only retrieves exact matches
It gives the user, a sense of control over the system.
The disadvantages of the Boolean model are as follows −
The model’s similarity function is Boolean. Hence, there would be no partial matches. This can be annoying for the users.
In this model, the Boolean operator usage has much more influence than a critical word.
The query language is expressive, but it is complicated too.
No ranking for retrieved documents.
Due to the above disadvantages of the Boolean model, Gerard Salton and his colleagues suggested a model, which is based on Luhn’s similarity criterion. The similarity criterion formulated by Luhn states, “the more two representations agreed in given elements and their distribution, the higher would be the probability of their representing similar information.”
Consider the following important points to understand more about the Vector Space Model −
The index representations (documents) and the queries are considered as vectors embedded in a high dimensional Euclidean space.
The similarity measure of a document vector to a query vector is usually the cosine of the angle between them.
Cosine is a normalized dot product, which can be calculated with the help of the following formula −
Score(d⃗, q⃗) = ( Σ_{k=1..m} d_k · q_k ) / ( √(Σ_{k=1..m} d_k²) · √(Σ_{k=1..m} q_k²) )

Score(d⃗, q⃗) = 1 when d⃗ = q⃗

Score(d⃗, q⃗) = 0 when d⃗ and q⃗ share no terms
For example, suppose the query and documents are represented in a two-dimensional vector space whose dimensions are the terms car and insurance. There is one query and three documents in the vector space.
The top ranked document in response to the terms car and insurance will be the document d2 because the angle between q and d2 is the smallest. The reason behind this is that both the concepts car and insurance are salient in d2 and hence have the high weights. On the other side, d1 and d3 also mention both the terms but in each case, one of them is not a centrally important term in the document.
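A minimal sketch of the cosine ranking just described. The term weights below are hypothetical numbers chosen to mirror the car/insurance example, not values from any real collection.

```python
import math

def cosine(d, q):
    """Cosine of the angle between two term-weight vectors stored as dicts."""
    dot = sum(w * q.get(t, 0.0) for t, w in d.items())
    norm_d = math.sqrt(sum(w * w for w in d.values()))
    norm_q = math.sqrt(sum(w * w for w in q.values()))
    return dot / (norm_d * norm_q) if norm_d and norm_q else 0.0

# hypothetical weights on the axes "car" and "insurance"
docs = {
    "d1": {"car": 0.9, "insurance": 0.1},  # car dominates
    "d2": {"car": 0.7, "insurance": 0.7},  # both terms salient
    "d3": {"car": 0.1, "insurance": 0.9},  # insurance dominates
}
q = {"car": 1.0, "insurance": 1.0}

ranked = sorted(docs, key=lambda name: cosine(docs[name], q), reverse=True)
print(ranked[0])  # d2 -- the smallest angle to the query vector
```

Because both concepts are salient in d2, its vector points in almost the same direction as the query, so it receives the highest cosine score.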
Term weighting means the weights on the terms in vector space. Higher the weight of the term, greater would be the impact of the term on cosine. More weights should be assigned to the more important terms in the model. Now the question that arises here is how can we model this.
One way to do this is to count the words in a document as its term weight. However, do you think it would be an effective method?
Another method, which is more effective, is to use term frequency (tfij), document frequency (dfi) and collection frequency (cfi).
It may be defined as the number of occurrences of wi in dj. The information that is captured by term frequency is how salient a word is within the given document or in other words we can say that the higher the term frequency the more that word is a good description of the content of that document.
It may be defined as the total number of documents in the collection in which wi occurs. It is an indicator of informativeness. Semantically focused words will occur several times in the document unlike the semantically unfocused words.
It may be defined as the total number of occurrences of wi in the collection.
Mathematically, df_i ≤ cf_i and Σ_j tf_{ij} = cf_i
Let us now learn about the different forms of document frequency weighting. The forms are described below −
This is also classified as the term frequency factor, which means that if a term t appears often in a document, then a query containing t should retrieve that document. We can combine a word’s term frequency (tf_{ij}) and document frequency (df_i) into a single weight as follows −
weight(i, j) = (1 + log(tf_{ij})) · log(N / df_i)   if tf_{ij} ≥ 1
weight(i, j) = 0                                    if tf_{ij} = 0
Here N is the total number of documents.
This is another form of document frequency weighting and often called idf weighting or inverse document frequency weighting. The important point of idf weighting is that the term’s scarcity across the collection is a measure of its importance and importance is inversely proportional to frequency of occurrence.
Mathematically,
idf_t = log(1 + N / n_t)

or, alternatively:

idf_t = log((N − n_t) / n_t)
Here,
N = the total number of documents in the collection
n_t = the number of documents containing the term t
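The weighting schemes above can be sketched in a few lines. The collection size N and the tf/df values below are invented purely for illustration, and the natural logarithm is used where the formulas leave the base unspecified.

```python
import math

def tfidf_weight(tf, df, N):
    """Combined weight (1 + log tf) * log(N / df) when tf >= 1, else 0,
    following the piecewise definition given above."""
    if tf == 0:
        return 0.0
    return (1 + math.log(tf)) * math.log(N / df)

N = 1000  # hypothetical collection size

# frequent in the document, rare in the collection -> high weight
print(tfidf_weight(tf=10, df=10, N=N))

# present in every document -> log(N/N) = 0, so the weight collapses to zero
print(tfidf_weight(tf=10, df=1000, N=N))
```

This shows the idf intuition directly: the rarer a term is across the collection, the larger log(N / df) becomes, and a term appearing everywhere carries no discriminating weight at all.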
The primary goal of any information retrieval system must be accuracy − to produce relevant documents as per the user’s requirement. However, the question that arises here is how can we improve the output by improving user’s query formation style. Certainly, the output of any IR system is dependent on the user’s query and a well-formatted query will produce more accurate results. The user can improve his/her query with the help of relevance feedback, an important aspect of any IR model.
Relevance feedback takes the output that is initially returned from the given query. This initial output can be used to gather user information and to know whether that output is relevant to perform a new query or not. The feedbacks can be classified as follows −
It may be defined as the feedback that is obtained from the assessors of relevance. These assessors will also indicate the relevance of a document retrieved from the query. In order to improve query retrieval performance, the relevance feedback information needs to be interpolated with the original query.
Assessors or other users of the system may indicate the relevance explicitly by using the following relevance systems −
Binary relevance system − This relevance feedback system indicates that a document is either relevant (1) or irrelevant (0) for a given query.
Graded relevance system − The graded relevance feedback system indicates the relevance of a document, for a given query, on the basis of grading by using numbers, letters or descriptions. The description can be like “not relevant”, “somewhat relevant”, “very relevant” or “relevant”.
It is the feedback that is inferred from user behavior. The behavior includes the duration of time user spent viewing a document, which document is selected for viewing and which is not, page browsing and scrolling actions, etc. One of the best examples of implicit feedback is dwell time, which is a measure of how much time a user spends viewing the page linked to in a search result.
It is also called Blind feedback. It provides a method for automatic local analysis. The manual part of relevance feedback is automated with the help of Pseudo relevance feedback so that the user gets improved retrieval performance without an extended interaction. The main advantage of this feedback system is that it does not require assessors like in explicit relevance feedback system.
Consider the following steps to implement this feedback −
Step 1 − First, the result returned by initial query must be taken as relevant result. The range of relevant result must be in top 10-50 results.
Step 2 − Now, select the top 20-30 terms from the documents using for instance term frequency(tf)-inverse document frequency(idf) weight.
Step 3 − Add these terms to the query and match the returned documents. Then return the most relevant documents.
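The three steps can be sketched as follows. This is a simplified illustration: raw term counts stand in for tf-idf weights, and the document texts, stop list and query are all made up.

```python
from collections import Counter

STOP_WORDS = frozenset({"the", "a", "of", "to", "for", "with"})

def expand_query(query_terms, top_docs, num_terms=3):
    """Pseudo relevance feedback sketch.
    Step 1: assume top_docs (the top-ranked results) are relevant.
    Step 2: pick their most frequent non-stopword terms.
    Step 3: append those terms to the original query."""
    counts = Counter(
        word
        for doc in top_docs
        for word in doc.lower().split()
        if word not in STOP_WORDS and word not in query_terms
    )
    return list(query_terms) + [term for term, _ in counts.most_common(num_terms)]

top_docs = [
    "cheap flights to delhi with low fares",
    "compare fares for cheap flights",
]
expanded = expand_query(["flights"], top_docs)
print(expanded)  # the original query plus three expansion terms
```

The expanded query is then run again, and the documents matching the richer term set are returned — no assessor involvement required, which is exactly the advantage of the pseudo (blind) feedback approach.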
Natural Language Processing (NLP) is an emerging technology that derives various forms of AI that we see in the present times and its use for creating a seamless as well as interactive interface between humans and machines will continue to be a top priority for today’s and tomorrow’s increasingly cognitive applications. Here, we are going to discuss about some of the very useful applications of NLP.
Machine translation (MT), process of translating one source language or text into another language, is one of the most important applications of NLP. We can understand the process of machine translation with the help of the following flowchart −
There are different types of machine translation systems. Let us see what the different types are.
Bilingual MT systems produce translations between two particular languages.
Multilingual MT systems produce translations between any pair of languages. They may be either uni-directional or bi-directional in nature.
Let us now learn about the important approaches to Machine Translation. The approaches to MT are as follows −
It is less popular but the oldest approach of MT. The systems that use this approach are capable of translating SL (source language) directly to TL (target language). Such systems are bi-lingual and uni-directional in nature.
The systems that use Interlingua approach translate SL to an intermediate language called Interlingua (IL) and then translate IL to TL. The Interlingua approach can be understood with the help of the following MT pyramid −
Three stages are involved with this approach.
In the first stage, source language (SL) texts are converted to abstract SL-oriented representations.
In the second stage, SL-oriented representations are converted into equivalent target language (TL)-oriented representations.
In the third stage, the final text is generated.
This is an emerging approach for MT. Basically, it uses a large amount of raw data in the form of parallel corpora. The raw data consists of texts and their translations. Analogy-based, example-based, and memory-based machine translation techniques all use the empirical MT approach.
One of the most common problems these days is unwanted emails. This makes Spam filters all the more important because it is the first line of defense against this problem.
Spam filtering system can be developed by using NLP functionality by considering the major false-positive and false-negative issues.
Followings are some existing NLP models for spam filtering −
An N-gram is an N-character slice of a longer string. In this model, N-grams of several different lengths are used simultaneously in processing and detecting spam emails.
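A sketch of why character N-grams help: an obfuscated spam word still shares most of its N-grams with the original spelling, so an N-gram matcher is harder to evade than exact word matching. The words below are purely illustrative.

```python
def char_ngrams(text, n):
    """All overlapping n-character slices of a string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

a = set(char_ngrams("cheapmeds", 3))
b = set(char_ngrams("ch3apmeds", 3))   # spammer swapped 'e' for '3'

# Jaccard similarity of the trigram sets: still clearly related,
# even though exact word matching would see two different tokens
overlap = len(a & b) / len(a | b)

print(char_ngrams("spam", 2))  # ['sp', 'pa', 'am']
```

A real filter would feed such N-gram features into a classifier rather than compare two words directly, but the evasion-resistance idea is the same.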
Spammers, generators of spam emails, usually change one or more characters of attacking words in their spams so that they can breach content-based spam filters. That is why we can say that content-based filters are not useful if they cannot understand the meaning of the words or phrases in the email. In order to eliminate such issues in spam filtering, a rule-based word stemming technique, that can match words which look alike and sound alike, is developed.
This has now become a widely-used technology for spam filtering. The incidence of the words in an email is measured against its typical occurrence in a database of unsolicited (spam) and legitimate (ham) email messages in a statistical technique.
In this digital era, the most valuable thing is data, or you can say information. However, do we really get useful as well as the required amount of information? The answer is ‘NO’ because the information is overloaded and our access to knowledge and information far exceeds our capacity to understand it. We are in a serious need of automatic text summarization and information because the flood of information over internet is not going to stop.
Text summarization may be defined as the technique to create short, accurate summary of longer text documents. Automatic text summarization will help us with relevant information in less time. Natural language processing (NLP) plays an important role in developing an automatic text summarization.
Another main application of natural language processing (NLP) is question-answering. Search engines put the information of the world at our fingertips, but they are still lacking when it comes to answer the questions posted by human beings in their natural language. We have big tech companies like Google are also working in this direction.
Question-answering is a Computer Science discipline within the fields of AI and NLP. It focuses on building systems that automatically answer questions posted by human beings in their natural language. A computer system that understands the natural language has the capability of a program system to translate the sentences written by humans into an internal representation so that the valid answers can be generated by the system. The exact answers can be generated by doing syntax and semantic analysis of the questions. Lexical gap, ambiguity and multilingualism are some of the challenges for NLP in building good question answering system.
Another important application of natural language processing (NLP) is sentiment analysis. As the name suggests, sentiment analysis is used to identify the sentiments among several posts. It is also used to identify the sentiment where the emotions are not expressed explicitly. Companies are using sentiment analysis, an application of natural language processing (NLP) to identify the opinion and sentiment of their customers online. It will help companies to understand what their customers think about the products and services. Companies can judge their overall reputation from customer posts with the help of sentiment analysis. In this way, we can say that beyond determining simple polarity, sentiment analysis understands sentiments in context to help us better understand what is behind the expressed opinion.
In this chapter, we will learn about language processing using Python.
The following features make Python different from other languages −
Python is interpreted − We do not need to compile our Python program before executing it because the interpreter processes Python at runtime.
Interactive − We can directly interact with the interpreter to write our Python programs.
Object-oriented − Python is object-oriented in nature and it makes this language easier to write programs because with the help of this technique of programming it encapsulates code within objects.
Beginner can easily learn − Python is also called beginner’s language because it is very easy to understand, and it supports the development of a wide range of applications.
The latest released version of Python 3, Python 3.7.1, is available for Windows, Mac OS and most flavors of Linux OS.
For windows, we can go to the link www.python.org/downloads/windows/ to download and install Python.
For MAC OS, we can use the link www.python.org/downloads/mac-osx/.
In case of Linux, different flavors of Linux use different package managers for installation of new packages.
For example, to install Python 3 on Ubuntu Linux, we can use the following command from terminal −
$sudo apt-get install python3-minimal
To study more about Python programming, read the Python 3 basic tutorial.
We will be using the Python library NLTK (Natural Language Toolkit) for doing text analysis in the English language. The Natural Language Toolkit (NLTK) is a collection of Python libraries designed especially for identifying and tagging parts of speech found in texts of a natural language like English.
Before starting to use NLTK, we need to install it. With the help of following command, we can install it in our Python environment −
pip install nltk
If we are using Anaconda, then a Conda package for NLTK can be built by using the following command −
conda install -c anaconda nltk
After installing NLTK, another important task is to download its preset text repositories so that it can be easily used. However, before that we need to import NLTK the way we import any other Python module. The following command will help us in importing NLTK −
import nltk
Now, download NLTK data with the help of the following command −
nltk.download()
It will take some time to install all available packages of NLTK.
Some other Python packages like gensim and pattern are also very necessary for text analysis as well as for building natural language processing applications by using NLTK. The packages can be installed as shown below −
gensim is a robust semantic modeling library which can be used for many applications. We can install it by following command −
pip install gensim
It can be used to make gensim package work properly. The following command helps in installing pattern −
pip install pattern
Tokenization may be defined as the process of breaking the given text into smaller units called tokens. Words, numbers or punctuation marks can be tokens. It may also be called word segmentation.
Input − Bed and chair are types of furniture.

Output − ‘Bed’, ‘and’, ‘chair’, ‘are’, ‘types’, ‘of’, ‘furniture’, ‘.’
We have different packages for tokenization provided by NLTK. We can use these packages based on our requirements. The packages and the details of their installation are as follows −
This package can be used to divide the input text into sentences. We can import it by using the following command −
from nltk.tokenize import sent_tokenize
This package can be used to divide the input text into words. We can import it by using the following command −
from nltk.tokenize import word_tokenize
This package can be used to divide the input text into words and punctuation marks. We can import it by using the following command −
from nltk.tokenize import WordPunctTokenizer
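If NLTK is not at hand, the effect of word-plus-punctuation tokenization can be approximated with a regular expression. This is only a rough sketch of what WordPunctTokenizer produces, not its actual implementation.

```python
import re

def simple_word_tokenize(text):
    """Split text into runs of word characters and single punctuation marks."""
    return re.findall(r"\w+|[^\w\s]", text)

print(simple_word_tokenize("Bed and chair are types of furniture."))
# ['Bed', 'and', 'chair', 'are', 'types', 'of', 'furniture', '.']
```

Note how the final period becomes its own token — that separation is exactly what distinguishes tokenization from a plain whitespace split.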
Due to grammatical reasons, language includes lots of variations. Variations in the sense that the language, English as well as other languages too, have different forms of a word. For example, the words like democracy, democratic, and democratization. For machine learning projects, it is very important for machines to understand that these different words, like above, have the same base form. That is why it is very useful to extract the base forms of the words while analyzing the text.
Stemming is a heuristic process that helps in extracting the base forms of the words by chopping of their ends.
The different packages for stemming provided by NLTK module are as follows −
Porter’s algorithm is used by this stemming package to extract the base form of the words. With the help of the following command, we can import this package −
from nltk.stem.porter import PorterStemmer
For example, ‘write’ would be the output of the word ‘writing’ given as the input to this stemmer.
Lancaster’s algorithm is used by this stemming package to extract the base form of the words. With the help of following command, we can import this package −
from nltk.stem.lancaster import LancasterStemmer
For example, ‘writ’ would be the output of the word ‘writing’ given as the input to this stemmer.
Snowball’s algorithm is used by this stemming package to extract the base form of the words. With the help of following command, we can import this package −
from nltk.stem.snowball import SnowballStemmer
For example, ‘write’ would be the output of the word ‘writing’ given as the input to this stemmer.
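The idea of chopping suffixes can be illustrated in a few lines of plain Python. This toy stripper is NOT the Porter, Lancaster or Snowball algorithm — those add rewrite rules and conditions on the remaining stem — but it shows the basic heuristic they share.

```python
def toy_stem(word, suffixes=("ing", "ed", "es", "s")):
    """Chop the first matching suffix, keeping at least a 3-letter stem."""
    for suffix in suffixes:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print([toy_stem(w) for w in ["laughing", "laughs", "laughed"]])
# ['laugh', 'laugh', 'laugh']
```

All three inflected forms collapse to the same base form, which is exactly what an IR system wants when matching queries against documents.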
It is another way to extract the base form of words, normally aiming to remove inflectional endings by using vocabulary and morphological analysis. After lemmatization, the base form of any word is called lemma.
NLTK module provides the following package for lemmatization −
This package will extract the base form of the word depending upon whether it is used as a noun or as a verb. The following command can be used to import this package −
from nltk.stem import WordNetLemmatizer
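Conceptually, a lemmatizer consults a vocabulary rather than chopping characters. The tiny lookup table below is a stand-in for WordNet, purely for illustration; the real WordNetLemmatizer consults a full lexicon and takes the word's part of speech into account.

```python
# hypothetical mini-lexicon mapping inflected forms to their lemmas
LEMMAS = {"feet": "foot", "geese": "goose", "ran": "run", "better": "good"}

def toy_lemmatize(word):
    """Return the lemma if the word is in the lexicon, else the word itself."""
    return LEMMAS.get(word.lower(), word)

print(toy_lemmatize("feet"))   # foot
print(toy_lemmatize("table"))  # table
```

Unlike stemming, this can handle irregular forms such as feet → foot, which no suffix-chopping rule could recover.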
The identification of parts of speech (POS) and short phrases can be done with the help of chunking. It is one of the important processes in natural language processing. As we are aware about the process of tokenization for the creation of tokens, chunking actually is to do the labeling of those tokens. In other words, we can say that we can get the structure of the sentence with the help of chunking process.
In the following example, we will implement Noun-Phrase chunking, a category of chunking which will find the noun phrase chunks in the sentence, by using NLTK Python module.
Consider the following steps to implement noun-phrase chunking −
Step 1: Chunk grammar definition
In this step, we need to define the grammar for chunking. It would consist of the rules, which we need to follow.
Step 2: Chunk parser creation
Next, we need to create a chunk parser. It would parse the grammar and give the output.
Step 3: The Output
In this step, we will get the output in a tree format.
Start by importing the NLTK package −
import nltk
Now, we need to define the sentence.
Here,
DT is the determiner
VBP is the verb
JJ is the adjective
IN is the preposition
NN is the noun
sentence = [("a", "DT"),("clever","JJ"),("fox","NN"),("was","VBP"),
("jumping","VBP"),("over","IN"),("the","DT"),("wall","NN")]
Next, the grammar should be given in the form of regular expression.
grammar = "NP:{<DT>?<JJ>*<NN>}"
Now, we need to define a parser for parsing the grammar.
parser_chunking = nltk.RegexpParser(grammar)
Now, the parser will parse the sentence as follows −
parser_chunking.parse(sentence)
Next, the output will be stored in a variable as follows −

output = parser_chunking.parse(sentence)
Now, the following code will help you draw your output in the form of a tree.
output.draw()
"text": "The first international conference on Machine Translation (MT) was held in 1952 and second was held in 1956."
},
{
"code": null,
"e": 4024,
"s": 3915,
"text": "The first international conference on Machine Translation (MT) was held in 1952 and second was held in 1956."
},
{
"code": null,
"e": 4191,
"s": 4024,
"text": "In 1961, the work presented in Teddington International Conference on Machine Translation of Languages and Applied Language analysis was the high point of this phase."
},
{
"code": null,
"e": 4358,
"s": 4191,
"text": "In 1961, the work presented in Teddington International Conference on Machine Translation of Languages and Applied Language analysis was the high point of this phase."
},
{
"code": null,
"e": 4565,
"s": 4358,
"text": "In this phase, the work done was majorly related to world knowledge and on its role in the construction and manipulation of meaning representations. That is why, this phase is also called AI-flavored phase."
},
{
"code": null,
"e": 4602,
"s": 4565,
"text": "The phase had in it, the following −"
},
{
"code": null,
"e": 4735,
"s": 4602,
"text": "In early 1961, the work began on the problems of addressing and constructing data or knowledge base. This work was influenced by AI."
},
{
"code": null,
"e": 4868,
"s": 4735,
"text": "In early 1961, the work began on the problems of addressing and constructing data or knowledge base. This work was influenced by AI."
},
{
"code": null,
"e": 5038,
"s": 4868,
"text": "In the same year, a BASEBALL question-answering system was also developed. The input to this system was restricted and the language processing involved was a simple one."
},
{
"code": null,
"e": 5208,
"s": 5038,
"text": "In the same year, a BASEBALL question-answering system was also developed. The input to this system was restricted and the language processing involved was a simple one."
},
{
"code": null,
"e": 5459,
"s": 5208,
"text": "A much advanced system was described in Minsky (1968). This system, when compared to the BASEBALL question-answering system, was recognized and provided for the need of inference on the knowledge base in interpreting and responding to language input."
},
{
"code": null,
"e": 5710,
"s": 5459,
"text": "A much advanced system was described in Minsky (1968). This system, when compared to the BASEBALL question-answering system, was recognized and provided for the need of inference on the knowledge base in interpreting and responding to language input."
},
{
"code": null,
"e": 5931,
"s": 5710,
"text": "This phase can be described as the grammatico-logical phase. Due to the failure of practical system building in last phase, the researchers moved towards the use of logic for knowledge representation and reasoning in AI."
},
{
"code": null,
"e": 5973,
"s": 5931,
"text": "The third phase had the following in it −"
},
{
"code": null,
"e": 6221,
"s": 5973,
"text": "The grammatico-logical approach, towards the end of decade, helped us with powerful general-purpose sentence processors like SRI’s Core Language Engine and Discourse Representation Theory, which offered a means of tackling more extended discourse."
},
{
"code": null,
"e": 6469,
"s": 6221,
"text": "The grammatico-logical approach, towards the end of decade, helped us with powerful general-purpose sentence processors like SRI’s Core Language Engine and Discourse Representation Theory, which offered a means of tackling more extended discourse."
},
{
"code": null,
"e": 6648,
"s": 6469,
"text": "In this phase we got some practical resources & tools like parsers, e.g. Alvey Natural Language Tools along with more operational and commercial systems, e.g. for database query."
},
{
"code": null,
"e": 6827,
"s": 6648,
"text": "In this phase we got some practical resources & tools like parsers, e.g. Alvey Natural Language Tools along with more operational and commercial systems, e.g. for database query."
},
{
"code": null,
"e": 6918,
"s": 6827,
"text": "The work on lexicon in 1980s also pointed in the direction of grammatico-logical approach."
},
{
"code": null,
"e": 7009,
"s": 6918,
"text": "The work on lexicon in 1980s also pointed in the direction of grammatico-logical approach."
},
{
"code": null,
"e": 7318,
"s": 7009,
"text": "We can describe this as a lexical & corpus phase. The phase had a lexicalized approach to grammar that appeared in late 1980s and became an increasing influence. There was a revolution in natural language processing in this decade with the introduction of machine learning algorithms for language processing."
},
{
"code": null,
"e": 7837,
"s": 7318,
"text": "Language is a crucial component for human lives and also the most fundamental aspect of our behavior. We can experience it in mainly two forms - written and spoken. In the written form, it is a way to pass our knowledge from one generation to the next. In the spoken form, it is the primary medium for human beings to coordinate with each other in their day-to-day behavior. Language is studied in various academic disciplines. Each discipline comes with its own set of problems and a set of solutions to address those."
},
{
"code": null,
"e": 7887,
"s": 7837,
"text": "Consider the following table to understand this −"
},
{
"code": null,
"e": 7897,
"s": 7887,
"text": "Linguists"
},
{
"code": null,
"e": 7949,
"s": 7897,
"text": "How phrases and sentences can be formed with words?"
},
{
"code": null,
"e": 7997,
"s": 7949,
"text": "What curbs the possible meaning for a sentence?"
},
{
"code": null,
"e": 8043,
"s": 7997,
"text": "Intuitions about well-formedness and meaning."
},
{
"code": null,
"e": 8140,
"s": 8043,
"text": "Mathematical model of structure. For example, model theoretic semantics, formal language theory."
},
{
"code": null,
"e": 8156,
"s": 8140,
"text": "Psycholinguists"
},
{
"code": null,
"e": 8214,
"s": 8156,
"text": "How human beings can identify the structure of sentences?"
},
{
"code": null,
"e": 8258,
"s": 8214,
"text": "How the meaning of words can be identified?"
},
{
"code": null,
"e": 8294,
"s": 8258,
"text": "When does understanding take place?"
},
{
"code": null,
"e": 8372,
"s": 8294,
"text": "Experimental techniques mainly for measuring the performance of human beings."
},
{
"code": null,
"e": 8410,
"s": 8372,
"text": "Statistical analysis of observations."
},
{
"code": null,
"e": 8423,
"s": 8410,
"text": "Philosophers"
},
{
"code": null,
"e": 8471,
"s": 8423,
"text": "How do words and sentences acquire the meaning?"
},
{
"code": null,
"e": 8516,
"s": 8471,
"text": "How the objects are identified by the words?"
},
{
"code": null,
"e": 8533,
"s": 8516,
"text": "What is meaning?"
},
{
"code": null,
"e": 8584,
"s": 8533,
"text": "Natural language argumentation by using intuition."
},
{
"code": null,
"e": 8633,
"s": 8584,
"text": "Mathematical models like logic and model theory."
},
{
"code": null,
"e": 8657,
"s": 8633,
"text": "Computational Linguists"
},
{
"code": null,
"e": 8705,
"s": 8657,
"text": "How can we identify the structure of a sentence?"
},
{
"code": null,
"e": 8749,
"s": 8705,
"text": "How knowledge and reasoning can be modeled?"
},
{
"code": null,
"e": 8803,
"s": 8749,
"text": "How we can use language to accomplish specific tasks?"
},
{
"code": null,
"e": 8814,
"s": 8803,
"text": "Algorithms"
},
{
"code": null,
"e": 8830,
"s": 8814,
"text": "Data structures"
},
{
"code": null,
"e": 8877,
"s": 8830,
"text": "Formal models of representation and reasoning."
},
{
"code": null,
"e": 8929,
"s": 8877,
"text": "AI techniques like search & representation methods."
},
{
"code": null,
"e": 9244,
"s": 8929,
"text": "Ambiguity, generally used in natural language processing, can be referred as the ability of being understood in more than one way. In simple terms, we can say that ambiguity is the capability of being understood in more than one way. Natural language is very ambiguous. NLP has the following types of ambiguities −"
},
{
"code": null,
"e": 9378,
"s": 9244,
"text": "The ambiguity of a single word is called lexical ambiguity. For example, treating the word silver as a noun, an adjective, or a verb."
},
{
"code": null,
"e": 9625,
"s": 9378,
"text": "This kind of ambiguity occurs when a sentence is parsed in different ways. For example, the sentence “The man saw the girl with the telescope”. It is ambiguous whether the man saw the girl carrying a telescope or he saw her through his telescope."
},
{
"code": null,
"e": 10041,
"s": 9625,
"text": "This kind of ambiguity occurs when the meaning of the words themselves can be misinterpreted. In other words, semantic ambiguity happens when a sentence contains an ambiguous word or phrase. For example, the sentence “The car hit the pole while it was moving” has semantic ambiguity because the interpretations can be “The car, while moving, hit the pole” and “The car hit the pole while the pole was moving”."
},
{
"code": null,
"e": 10272,
"s": 10041,
"text": "This kind of ambiguity arises due to the use of anaphora entities in discourse. For example, the horse ran up the hill. It was very steep. It soon got tired. Here, the anaphoric reference of “it” in two situations cause ambiguity."
},
{
"code": null,
"e": 10640,
"s": 10272,
"text": "Such kind of ambiguity refers to the situation where the context of a phrase gives it multiple interpretations. In simple words, we can say that pragmatic ambiguity arises when the statement is not specific. For example, the sentence “I like you too” can have multiple interpretations like I like you (just like you like me), I like you (just like someone else does)."
},
{
"code": null,
"e": 10725,
"s": 10640,
"text": "Following diagram shows the phases or logical steps in natural language processing −"
},
{
"code": null,
"e": 10975,
"s": 10725,
"text": "It is the first phase of NLP. The purpose of this phase is to break chunks of language input into sets of tokens corresponding to paragraphs, sentences and words. For example, a word like “uneasy” can be broken into two sub-word tokens as “un-easy”."
},
{
"code": null,
"e": 11305,
"s": 10975,
"text": "It is the second phase of NLP. The purpose of this phase is two folds: to check that a sentence is well formed or not and to break it up into a structure that shows the syntactic relationships between the different words. For example, the sentence like “The school goes to the boy” would be rejected by syntax analyzer or parser."
},
{
"code": null,
"e": 11553,
"s": 11305,
"text": "It is the third phase of NLP. The purpose of this phase is to draw exact meaning, or you can say dictionary meaning from the text. The text is checked for meaningfulness. For example, semantic analyzer would reject a sentence like “Hot ice-cream”."
},
{
"code": null,
"e": 11924,
"s": 11553,
"text": "It is the fourth phase of NLP. Pragmatic analysis simply fits the actual objects/events, which exist in a given context with object references obtained during the last phase (semantic analysis). For example, the sentence “Put the banana in the basket on the shelf” can have two semantic interpretations and pragmatic analyzer will choose between these two possibilities."
},
{
"code": null,
"e": 12018,
"s": 11924,
"text": "In this chapter, we will learn about the linguistic resources in Natural Language Processing."
},
{
"code": null,
"e": 12317,
"s": 12018,
"text": "A corpus is a large and structured set of machine-readable texts that have been produced in a natural communicative setting. Its plural is corpora. They can be derived in different ways like text that was originally electronic, transcripts of spoken language and optical character recognition, etc."
},
{
"code": null,
"e": 12518,
"s": 12317,
"text": "Language is infinite but a corpus has to be finite in size. For the corpus to be finite in size, we need to sample and proportionally include a wide range of text types to ensure a good corpus design."
},
{
"code": null,
"e": 12585,
"s": 12518,
"text": "Let us now learn about some important elements for corpus design −"
},
{
"code": null,
"e": 12768,
"s": 12585,
"text": "Representativeness is a defining feature of corpus design. The following definitions from two great researchers − Leech and Biber, will help us understand corpus representativeness −"
},
{
"code": null,
"e": 12977,
"s": 12768,
"text": "According to Leech (1991), “A corpus is thought to be representative of the language variety it is supposed to represent if the findings based on its contents can be generalized to the said language variety”."
},
{
"code": null,
"e": 13186,
"s": 12977,
"text": "According to Leech (1991), “A corpus is thought to be representative of the language variety it is supposed to represent if the findings based on its contents can be generalized to the said language variety”."
},
{
"code": null,
"e": 13329,
"s": 13186,
"text": "According to Biber (1993), “Representativeness refers to the extent to which a sample includes the full range of variability in a population”."
},
{
"code": null,
"e": 13472,
"s": 13329,
"text": "According to Biber (1993), “Representativeness refers to the extent to which a sample includes the full range of variability in a population”."
},
{
"code": null,
"e": 13583,
"s": 13472,
"text": "In this way, we can conclude that representativeness of a corpus is determined by the following two factors −"
},
{
"code": null,
"e": 13632,
"s": 13583,
"text": "Balance − The range of genres included in a corpus"
},
{
"code": null,
"e": 13681,
"s": 13632,
"text": "Balance − The range of genres included in a corpus"
},
{
"code": null,
"e": 13736,
"s": 13681,
"text": "Sampling − How the chunks for each genre are selected."
},
{
"code": null,
"e": 13791,
"s": 13736,
"text": "Sampling − How the chunks for each genre are selected."
},
{
"code": null,
"e": 14339,
"s": 13791,
"text": "Another very important element of corpus design is corpus balance – the range of genre included in a corpus. We have already studied that representativeness of a general corpus depends upon how balanced the corpus is. A balanced corpus covers a wide range of text categories, which are supposed to be representatives of the language. We do not have any reliable scientific measure for balance but the best estimation and intuition works in this concern. In other words, we can say that the accepted balance is determined by its intended uses only."
},
{
"code": null,
"e": 14547,
"s": 14339,
"text": "Another important element of corpus design is sampling. Corpus representativeness and balance is very closely associated with sampling. That is why we can say that sampling is inescapable in corpus building."
},
{
"code": null,
"e": 14909,
"s": 14547,
"text": "According to Biber(1993), “Some of the first considerations in constructing a corpus concern the overall design: for example, the kinds of texts included, the number of texts, the selection of particular texts, the selection of text samples from within texts, and the length of text samples. Each of these involves a sampling decision, either conscious or not.”"
},
{
"code": null,
"e": 15271,
"s": 14909,
"text": "According to Biber(1993), “Some of the first considerations in constructing a corpus concern the overall design: for example, the kinds of texts included, the number of texts, the selection of particular texts, the selection of text samples from within texts, and the length of text samples. Each of these involves a sampling decision, either conscious or not.”"
},
{
"code": null,
"e": 15348,
"s": 15271,
"text": "While obtaining a representative sample, we need to consider the following −"
},
{
"code": null,
"e": 15497,
"s": 15348,
"text": "Sampling unit − It refers to the unit which requires a sample. For example, for written text, a sampling unit may be a newspaper, journal or a book."
},
{
"code": null,
"e": 15646,
"s": 15497,
"text": "Sampling unit − It refers to the unit which requires a sample. For example, for written text, a sampling unit may be a newspaper, journal or a book."
},
{
"code": null,
"e": 15721,
"s": 15646,
"text": "Sampling frame − The list of all sampling units is called a sampling frame."
},
{
"code": null,
"e": 15796,
"s": 15721,
"text": "Sampling frame − The list of all sampling units is called a sampling frame."
},
{
"code": null,
"e": 15959,
"s": 15796,
"text": "Population − It may be referred as the assembly of all sampling units. It is defined in terms of language production, language reception or language as a product."
},
{
"code": null,
"e": 16122,
"s": 15959,
"text": "Population − It may be referred as the assembly of all sampling units. It is defined in terms of language production, language reception or language as a product."
},
{
"code": null,
"e": 16386,
"s": 16122,
"text": "Another important element of corpus design is its size. How large the corpus should be? There is no specific answer to this question. The size of the corpus depends upon the purpose for which it is intended as well as on some practical considerations as follows −"
},
{
"code": null,
"e": 16427,
"s": 16386,
"text": "Kind of query anticipated from the user."
},
{
"code": null,
"e": 16468,
"s": 16427,
"text": "Kind of query anticipated from the user."
},
{
"code": null,
"e": 16521,
"s": 16468,
"text": "The methodology used by the users to study the data."
},
{
"code": null,
"e": 16574,
"s": 16521,
"text": "The methodology used by the users to study the data."
},
{
"code": null,
"e": 16610,
"s": 16574,
"text": "Availability of the source of data."
},
{
"code": null,
"e": 16646,
"s": 16610,
"text": "Availability of the source of data."
},
{
"code": null,
"e": 16801,
"s": 16646,
"text": "With the advancement in technology, the corpus size also increases. The following table of comparison will help you understand how the corpus size works −"
},
{
"code": null,
"e": 16871,
"s": 16801,
"text": "In our subsequent sections, we will look at a few examples of corpus."
},
{
"code": null,
"e": 17258,
"s": 16871,
"text": "It may be defined as linguistically parsed text corpus that annotates syntactic or semantic sentence structure. Geoffrey Leech coined the term ‘treebank’, which represents that the most common way of representing the grammatical analysis is by means of a tree structure. Generally, Treebanks are created on the top of a corpus, which has already been annotated with part-of-speech tags."
},
{
"code": null,
"e": 17392,
"s": 17258,
"text": "Semantic and Syntactic Treebanks are the two most common types of Treebanks in linguistics. Let us now learn more about these types −"
},
{
"code": null,
"e": 17649,
"s": 17392,
"text": "These Treebanks use a formal representation of sentence’s semantic structure. They vary in the depth of their semantic representation. Robot Commands Treebank, Geoquery, Groningen Meaning Bank, RoboCup Corpus are some of the examples of Semantic Treebanks."
},
{
"code": null,
"e": 18215,
"s": 17649,
"text": "Opposite to the semantic Treebanks, inputs to the Syntactic Treebank systems are expressions of the formal language obtained from the conversion of parsed Treebank data. The outputs of such systems are predicate logic based meaning representation. Various syntactic Treebanks in different languages have been created so far. For example, Penn Arabic Treebank and Columbia Arabic Treebank are syntactic Treebanks created for the Arabic language, the Sinica syntactic Treebank was created for the Chinese language, and Lucy, Susanne and BLLIP WSJ are syntactic corpora created for the English language."
},
{
"code": null,
"e": 18270,
"s": 18215,
"text": "Followings are some of the applications of TreeBanks −"
},
{
"code": null,
"e": 18504,
"s": 18270,
"text": "If we talk about Computational Linguistic then the best use of TreeBanks is to engineer state-of-the-art natural language processing systems such as part-of-speech taggers, parsers, semantic analyzers and machine translation systems."
},
{
"code": null,
"e": 18594,
"s": 18504,
"text": "In case of Corpus linguistics, the best use of Treebanks is to study syntactic phenomena."
},
{
"code": null,
"e": 18682,
"s": 18594,
"text": "The best use of Treebanks in theoretical and psycholinguistics is interaction evidence."
},
{
"code": null,
"e": 19149,
"s": 18682,
"text": "PropBank more specifically called “Proposition Bank” is a corpus, which is annotated with verbal propositions and their arguments. The corpus is a verb-oriented resource; the annotations here are more closely related to the syntactic level. Martha Palmer et al., Department of Linguistic, University of Colorado Boulder developed it. We can use the term PropBank as a common noun referring to any corpus that has been annotated with propositions and their arguments."
},
{
"code": null,
"e": 19280,
"s": 19149,
"text": "In Natural Language Processing (NLP), the PropBank project has played a very significant role. It helps in semantic role labeling."
},
{
"code": null,
"e": 19746,
"s": 19280,
"text": "VerbNet(VN) is the hierarchical domain-independent and largest lexical resource present in English that incorporates both semantic as well as syntactic information about its contents. VN is a broad-coverage verb lexicon having mappings to other lexical resources such as WordNet, Xtag and FrameNet. It is organized into verb classes extending Levin classes by refinement and addition of subclasses for achieving syntactic and semantic coherence among class members."
},
{
"code": null,
"e": 19781,
"s": 19746,
"text": "Each VerbNet (VN) class contains −"
},
{
"code": null,
"e": 19987,
"s": 19781,
"text": "For depicting the possible surface realizations of the argument structure for constructions such as transitive, intransitive, prepositional phrases, resultatives, and a large set of diathesis alternations."
},
{
"code": null,
"e": 20217,
"s": 19987,
"text": "For constraining, the types of thematic roles allowed by the arguments, and further restrictions may be imposed. This will help in indicating the syntactic nature of the constituent likely to be associated with the thematic role."
},
{
"code": null,
"e": 20596,
"s": 20217,
"text": "WordNet, created by Princeton is a lexical database for English language. It is the part of the NLTK corpus. In WordNet, nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms called Synsets. All the synsets are linked with the help of conceptual-semantic and lexical relations. Its structure makes it very useful for natural language processing (NLP)."
},
{
"code": null,
"e": 20985,
"s": 20596,
"text": "In information systems, WordNet is used for various purposes like word-sense disambiguation, information retrieval, automatic text classification and machine translation. One of the most important uses of WordNet is to find out the similarity among words. For this task, various algorithms have been implemented in various packages like Similarity in Perl, NLTK in Python and ADW in Java."
},
{
"code": null,
"e": 21074,
"s": 20985,
"text": "In this chapter, we will understand world level analysis in Natural Language Processing."
},
{
"code": null,
"e": 21414,
"s": 21074,
"text": "A regular expression (RE) is a language for specifying text search strings. RE helps us to match or find other strings or sets of strings, using a specialized syntax held in a pattern. Regular expressions are used to search texts in UNIX as well as in MS WORD in identical way. We have various search engines using a number of RE features."
},
{
"code": null,
"e": 21470,
"s": 21414,
"text": "Followings are some of the important properties of RE −"
},
{
"code": null,
"e": 21557,
"s": 21470,
"text": "American Mathematician Stephen Cole Kleene formalized the Regular Expression language."
},
{
"code": null,
"e": 21644,
"s": 21557,
"text": "American Mathematician Stephen Cole Kleene formalized the Regular Expression language."
},
{
"code": null,
"e": 21863,
"s": 21644,
"text": "RE is a formula in a special language, which can be used for specifying simple classes of strings, a sequence of symbols. In other words, we can say that RE is an algebraic notation for characterizing a set of strings."
},
{
"code": null,
"e": 22082,
"s": 21863,
"text": "RE is a formula in a special language, which can be used for specifying simple classes of strings, a sequence of symbols. In other words, we can say that RE is an algebraic notation for characterizing a set of strings."
},
{
"code": null,
"e": 22224,
"s": 22082,
"text": "Regular expression requires two things, one is the pattern that we wish to search and other is a corpus of text from which we need to search."
},
{
"code": null,
"e": 22366,
"s": 22224,
"text": "Regular expression requires two things, one is the pattern that we wish to search and other is a corpus of text from which we need to search."
},
{
"code": null,
"e": 22431,
"s": 22366,
"text": "Mathematically, A Regular Expression can be defined as follows −"
},
{
"code": null,
"e": 22519,
"s": 22431,
"text": "ε is a Regular Expression, which indicates that the language is having an empty string."
},
{
"code": null,
"e": 22607,
"s": 22519,
"text": "ε is a Regular Expression, which indicates that the language is having an empty string."
},
{
"code": null,
"e": 22677,
"s": 22607,
"text": "φ is a Regular Expression which denotes that it is an empty language."
},
{
"code": null,
"e": 22747,
"s": 22677,
"text": "φ is a Regular Expression which denotes that it is an empty language."
},
{
"code": null,
"e": 22878,
"s": 22747,
"text": "If X and Y are Regular Expressions, then\n\nX, Y\nX.Y(Concatenation of XY)\nX+Y (Union of X and Y)\nX*, Y* (Kleene Closure of X and Y)\n\n"
},
{
"code": null,
"e": 22919,
"s": 22878,
"text": "If X and Y are Regular Expressions, then"
},
{
"code": null,
"e": 22924,
"s": 22919,
"text": "X, Y"
},
{
"code": null,
"e": 22929,
"s": 22924,
"text": "X, Y"
},
{
"code": null,
"e": 22954,
"s": 22929,
"text": "X.Y(Concatenation of XY)"
},
{
"code": null,
"e": 22979,
"s": 22954,
"text": "X.Y(Concatenation of XY)"
},
{
"code": null,
"e": 23002,
"s": 22979,
"text": "X+Y (Union of X and Y)"
},
{
"code": null,
"e": 23025,
"s": 23002,
"text": "X+Y (Union of X and Y)"
},
{
"code": null,
"e": 23059,
"s": 23025,
"text": "X*, Y* (Kleene Closure of X and Y)"
},
{
"code": null,
"e": 23093,
"s": 23059,
"text": "X*, Y* (Kleene Closure of X and Y)"
},
{
"code": null,
"e": 23123,
"s": 23093,
"text": "are also regular expressions."
},
{
"code": null,
"e": 23209,
"s": 23123,
"text": "If a string is derived from above rules then that would also be a regular expression."
},
{
"code": null,
"e": 23295,
"s": 23209,
"text": "If a string is derived from above rules then that would also be a regular expression."
},
{
"code": null,
"e": 23361,
"s": 23295,
"text": "The following table shows a few examples of Regular Expressions −"
},
{
"code": null,
"e": 23476,
"s": 23361,
"text": "It may be defined as the set that represents the value of the regular expression and consists of specific properties."
},
{
"code": null,
"e": 23560,
"s": 23476,
"text": "If we do the union of two regular sets then the resulting set would also be regular."
},
{
"code": null,
"e": 23644,
"s": 23560,
"text": "If we do the union of two regular sets then the resulting set would also be regular."
},
{
"code": null,
"e": 23736,
"s": 23644,
"text": "If we do the intersection of two regular sets then the resulting set would also be regular."
},
{
"code": null,
"e": 23828,
"s": 23736,
"text": "If we do the intersection of two regular sets then the resulting set would also be regular."
},
{
"code": null,
"e": 23915,
"s": 23828,
"text": "If we do the complement of regular sets, then the resulting set would also be regular."
},
{
"code": null,
"e": 24002,
"s": 23915,
"text": "If we do the complement of regular sets, then the resulting set would also be regular."
},
{
"code": null,
"e": 24093,
"s": 24002,
"text": "If we do the difference of two regular sets, then the resulting set would also be regular."
},
{
"code": null,
"e": 24184,
"s": 24093,
"text": "If we do the difference of two regular sets, then the resulting set would also be regular."
},
{
"code": null,
"e": 24269,
"s": 24184,
"text": "If we do the reversal of regular sets, then the resulting set would also be regular."
},
{
"code": null,
"e": 24354,
"s": 24269,
"text": "If we do the reversal of regular sets, then the resulting set would also be regular."
},
{
"code": null,
"e": 24440,
"s": 24354,
"text": "If we take the closure of regular sets, then the resulting set would also be regular."
},
{
"code": null,
"e": 24526,
"s": 24440,
"text": "If we take the closure of regular sets, then the resulting set would also be regular."
},
{
"code": null,
"e": 24620,
"s": 24526,
"text": "If we do the concatenation of two regular sets, then the resulting set would also be regular."
},
{
"code": null,
"e": 24714,
"s": 24620,
"text": "If we do the concatenation of two regular sets, then the resulting set would also be regular."
},
{
"code": null,
"e": 24959,
"s": 24714,
"text": "The term automata, derived from the Greek word \"αὐτόματα\" meaning \"self-acting\", is the plural of automaton which may be defined as an abstract self-propelled computing device that follows a predetermined sequence of operations automatically."
},
{
"code": null,
"e": 25071,
"s": 24959,
"text": "An automaton having a finite number of states is called a Finite Automaton (FA) or Finite State automata (FSA)."
},
{
"code": null,
"e": 25158,
"s": 25071,
"text": "Mathematically, an automaton can be represented by a 5-tuple (Q, Σ, δ, q0, F), where −"
},
{
"code": null,
"e": 25187,
"s": 25158,
"text": "Q is a finite set of states."
},
{
"code": null,
"e": 25216,
"s": 25187,
"text": "Q is a finite set of states."
},
{
"code": null,
"e": 25284,
"s": 25216,
"text": "Σ is a finite set of symbols, called the alphabet of the automaton."
},
{
"code": null,
"e": 25352,
"s": 25284,
"text": "Σ is a finite set of symbols, called the alphabet of the automaton."
},
{
"code": null,
"e": 25381,
"s": 25352,
"text": "δ is the transition function"
},
{
"code": null,
"e": 25410,
"s": 25381,
"text": "δ is the transition function"
},
{
"code": null,
"e": 25478,
"s": 25410,
"text": "q0 is the initial state from where any input is processed (q0 ∈ Q)."
},
{
"code": null,
"e": 25546,
"s": 25478,
"text": "q0 is the initial state from where any input is processed (q0 ∈ Q)."
},
{
"code": null,
"e": 25593,
"s": 25546,
"text": "F is a set of final state/states of Q (F ⊆ Q)."
},
{
"code": null,
"e": 25640,
"s": 25593,
"text": "F is a set of final state/states of Q (F ⊆ Q)."
},
{
"code": null,
"e": 25774,
"s": 25640,
"text": "Following points will give us a clear view about the relationship between finite automata, regular grammars and regular expressions −"
},
{
"code": null,
"e": 25920,
"s": 25774,
"text": "As we know that finite state automata are the theoretical foundation of computational work and regular expressions is one way of describing them."
},
{
"code": null,
"e": 26066,
"s": 25920,
"text": "As we know that finite state automata are the theoretical foundation of computational work and regular expressions is one way of describing them."
},
{
"code": null,
"e": 26187,
"s": 26066,
"text": "We can say that any regular expression can be implemented as FSA and any FSA can be described with a regular expression."
},
{
"code": null,
"e": 26308,
"s": 26187,
"text": "We can say that any regular expression can be implemented as FSA and any FSA can be described with a regular expression."
},
{
"code": null,
"e": 26522,
"s": 26308,
"text": "On the other hand, regular expression is a way to characterize a kind of language called regular language. Hence, we can say that regular language can be described with the help of both FSA and regular expression."
},
{
"code": null,
"e": 26736,
"s": 26522,
"text": "On the other hand, regular expression is a way to characterize a kind of language called regular language. Hence, we can say that regular language can be described with the help of both FSA and regular expression."
},
{
"code": null,
"e": 26862,
"s": 26736,
"text": "Regular grammar, a formal grammar that can be right-regular or left-regular, is another way to characterize regular language."
},
{
"code": null,
"e": 26988,
"s": 26862,
"text": "Regular grammar, a formal grammar that can be right-regular or left-regular, is another way to characterize regular language."
},
{
"code": null,
"e": 27132,
"s": 26988,
"text": "Following diagram shows that finite automata, regular expressions and regular grammars are the equivalent ways of describing regular languages."
},
{
"code": null,
"e": 27204,
"s": 27132,
"text": "Finite state automation is of two types. Let us see what the types are."
},
{
"code": null,
"e": 27451,
"s": 27204,
"text": "It may be defined as the type of finite automation wherein, for every input symbol we can determine the state to which the machine will move. It has a finite number of states that is why the machine is called Deterministic Finite Automaton (DFA)."
},
{
"code": null,
"e": 27531,
"s": 27451,
"text": "Mathematically, a DFA can be represented by a 5-tuple (Q, Σ, δ, q0, F), where −"
},
{
"code": null,
"e": 27560,
"s": 27531,
"text": "Q is a finite set of states."
},
{
"code": null,
"e": 27589,
"s": 27560,
"text": "Q is a finite set of states."
},
{
"code": null,
"e": 27657,
"s": 27589,
"text": "Σ is a finite set of symbols, called the alphabet of the automaton."
},
{
"code": null,
"e": 27725,
"s": 27657,
"text": "Σ is a finite set of symbols, called the alphabet of the automaton."
},
{
"code": null,
"e": 27775,
"s": 27725,
"text": "δ is the transition function where δ: Q × Σ → Q ."
},
{
"code": null,
"e": 27825,
"s": 27775,
"text": "δ is the transition function where δ: Q × Σ → Q ."
},
{
"code": null,
"e": 27893,
"s": 27825,
"text": "q0 is the initial state from where any input is processed (q0 ∈ Q)."
},
{
"code": null,
"e": 27961,
"s": 27893,
"text": "q0 is the initial state from where any input is processed (q0 ∈ Q)."
},
{
"code": null,
"e": 28008,
"s": 27961,
"text": "F is a set of final state/states of Q (F ⊆ Q)."
},
{
"code": null,
"e": 28055,
"s": 28008,
"text": "F is a set of final state/states of Q (F ⊆ Q)."
},
{
"code": null,
"e": 28144,
"s": 28055,
"text": "Whereas graphically, a DFA can be represented by diagraphs called state diagrams where −"
},
{
"code": null,
"e": 28184,
"s": 28144,
"text": "The states are represented by vertices."
},
{
"code": null,
"e": 28224,
"s": 28184,
"text": "The states are represented by vertices."
},
{
"code": null,
"e": 28267,
"s": 28224,
"text": "The transitions are shown by labeled arcs."
},
{
"code": null,
"e": 28310,
"s": 28267,
"text": "The transitions are shown by labeled arcs."
},
{
"code": null,
"e": 28369,
"s": 28310,
"text": "The initial state is represented by an empty incoming arc."
},
{
"code": null,
"e": 28428,
"s": 28369,
"text": "The initial state is represented by an empty incoming arc."
},
{
"code": null,
"e": 28477,
"s": 28428,
"text": "The final state is represented by double circle."
},
{
"code": null,
"e": 28526,
"s": 28477,
"text": "The final state is represented by double circle."
},
{
"code": null,
"e": 28543,
"s": 28526,
"text": "Suppose a DFA be"
},
{
"code": null,
"e": 28558,
"s": 28543,
"text": "Q = {a, b, c},"
},
{
"code": null,
"e": 28573,
"s": 28558,
"text": "Q = {a, b, c},"
},
{
"code": null,
"e": 28585,
"s": 28573,
"text": "Σ = {0, 1},"
},
{
"code": null,
"e": 28597,
"s": 28585,
"text": "Σ = {0, 1},"
},
{
"code": null,
"e": 28607,
"s": 28597,
"text": "q0 = {a},"
},
{
"code": null,
"e": 28617,
"s": 28607,
"text": "q0 = {a},"
},
{
"code": null,
"e": 28626,
"s": 28617,
"text": "F = {c},"
},
{
"code": null,
"e": 28635,
"s": 28626,
"text": "F = {c},"
},
{
"code": null,
"e": 28692,
"s": 28635,
"text": "Transition function δ is shown in the table as follows −"
},
{
"code": null,
"e": 28749,
"s": 28692,
"text": "Transition function δ is shown in the table as follows −"
},
{
"code": null,
"e": 28812,
"s": 28749,
"text": "The graphical representation of this DFA would be as follows −"
},
{
"code": null,
"e": 29124,
"s": 28812,
"text": "It may be defined as the type of finite automation where for every input symbol we cannot determine the state to which the machine will move i.e. the machine can move to any combination of the states. It has a finite number of states that is why the machine is called Non-deterministic Finite Automation (NDFA)."
},
{
"code": null,
"e": 29203,
"s": 29124,
"text": "Mathematically, NDFA can be represented by a 5-tuple (Q, Σ, δ, q0, F), where −"
},
{
"code": null,
"e": 29232,
"s": 29203,
"text": "Q is a finite set of states."
},
{
"code": null,
"e": 29261,
"s": 29232,
"text": "Q is a finite set of states."
},
{
"code": null,
"e": 29329,
"s": 29261,
"text": "Σ is a finite set of symbols, called the alphabet of the automaton."
},
{
"code": null,
"e": 29397,
"s": 29329,
"text": "Σ is a finite set of symbols, called the alphabet of the automaton."
},
{
"code": null,
"e": 29450,
"s": 29397,
"text": "δ :-is the transition function where δ: Q × Σ → 2 Q."
},
{
"code": null,
"e": 29503,
"s": 29450,
"text": "δ :-is the transition function where δ: Q × Σ → 2 Q."
},
{
"code": null,
"e": 29573,
"s": 29503,
"text": "q0 :-is the initial state from where any input is processed (q0 ∈ Q)."
},
{
"code": null,
"e": 29643,
"s": 29573,
"text": "q0 :-is the initial state from where any input is processed (q0 ∈ Q)."
},
{
"code": null,
"e": 29692,
"s": 29643,
"text": "F :-is a set of final state/states of Q (F ⊆ Q)."
},
{
"code": null,
"e": 29741,
"s": 29692,
"text": "F :-is a set of final state/states of Q (F ⊆ Q)."
},
{
"code": null,
"e": 29845,
"s": 29741,
"text": "Whereas graphically (same as DFA), a NDFA can be represented by diagraphs called state diagrams where −"
},
{
"code": null,
"e": 29885,
"s": 29845,
"text": "The states are represented by vertices."
},
{
"code": null,
"e": 29925,
"s": 29885,
"text": "The states are represented by vertices."
},
{
"code": null,
"e": 29968,
"s": 29925,
"text": "The transitions are shown by labeled arcs."
},
{
"code": null,
"e": 30011,
"s": 29968,
"text": "The transitions are shown by labeled arcs."
},
{
"code": null,
"e": 30070,
"s": 30011,
"text": "The initial state is represented by an empty incoming arc."
},
{
"code": null,
"e": 30129,
"s": 30070,
"text": "The initial state is represented by an empty incoming arc."
},
{
"code": null,
"e": 30178,
"s": 30129,
"text": "The final state is represented by double circle."
},
{
"code": null,
"e": 30227,
"s": 30178,
"text": "The final state is represented by double circle."
},
{
"code": null,
"e": 30245,
"s": 30227,
"text": "Suppose a NDFA be"
},
{
"code": null,
"e": 30260,
"s": 30245,
"text": "Q = {a, b, c},"
},
{
"code": null,
"e": 30275,
"s": 30260,
"text": "Q = {a, b, c},"
},
{
"code": null,
"e": 30287,
"s": 30275,
"text": "Σ = {0, 1},"
},
{
"code": null,
"e": 30299,
"s": 30287,
"text": "Σ = {0, 1},"
},
{
"code": null,
"e": 30309,
"s": 30299,
"text": "q0 = {a},"
},
{
"code": null,
"e": 30319,
"s": 30309,
"text": "q0 = {a},"
},
{
"code": null,
"e": 30328,
"s": 30319,
"text": "F = {c},"
},
{
"code": null,
"e": 30337,
"s": 30328,
"text": "F = {c},"
},
{
"code": null,
"e": 30394,
"s": 30337,
"text": "Transition function δ is shown in the table as follows −"
},
{
"code": null,
"e": 30451,
"s": 30394,
"text": "Transition function δ is shown in the table as follows −"
},
{
"code": null,
"e": 30515,
"s": 30451,
"text": "The graphical representation of this NDFA would be as follows −"
},
{
"code": null,
"e": 30929,
"s": 30515,
"text": "The term morphological parsing is related to the parsing of morphemes. We can define morphological parsing as the problem of recognizing that a word breaks down into smaller meaningful units called morphemes producing some sort of linguistic structure for it. For example, we can break the word foxes into two, fox and -es. We can see that the word foxes, is made up of two morphemes, one is fox and other is -es."
},
{
"code": null,
"e": 30990,
"s": 30929,
"text": "In other sense, we can say that morphology is the study of −"
},
{
"code": null,
"e": 31014,
"s": 30990,
"text": "The formation of words."
},
{
"code": null,
"e": 31038,
"s": 31014,
"text": "The formation of words."
},
{
"code": null,
"e": 31063,
"s": 31038,
"text": "The origin of the words."
},
{
"code": null,
"e": 31088,
"s": 31063,
"text": "The origin of the words."
},
{
"code": null,
"e": 31120,
"s": 31088,
"text": "Grammatical forms of the words."
},
{
"code": null,
"e": 31152,
"s": 31120,
"text": "Grammatical forms of the words."
},
{
"code": null,
"e": 31208,
"s": 31152,
"text": "Use of prefixes and suffixes in the formation of words."
},
{
"code": null,
"e": 31264,
"s": 31208,
"text": "Use of prefixes and suffixes in the formation of words."
},
{
"code": null,
"e": 31316,
"s": 31264,
"text": "How parts-of-speech (PoS) of a language are formed."
},
{
"code": null,
"e": 31368,
"s": 31316,
"text": "How parts-of-speech (PoS) of a language are formed."
},
{
"code": null,
"e": 31447,
"s": 31368,
"text": "Morphemes, the smallest meaning-bearing units, can be divided into two types −"
},
{
"code": null,
"e": 31453,
"s": 31447,
"text": "Stems"
},
{
"code": null,
"e": 31459,
"s": 31453,
"text": "Stems"
},
{
"code": null,
"e": 31470,
"s": 31459,
"text": "Word Order"
},
{
"code": null,
"e": 31481,
"s": 31470,
"text": "Word Order"
},
{
"code": null,
"e": 31621,
"s": 31481,
"text": "It is the core meaningful unit of a word. We can also say that it is the root of the word. For example, in the word foxes, the stem is fox."
},
{
"code": null,
"e": 31777,
"s": 31621,
"text": "Affixes − As the name suggests, they add some additional meaning and grammatical functions to the words. For example, in the word foxes, the affix is − es."
},
{
"code": null,
"e": 31933,
"s": 31777,
"text": "Affixes − As the name suggests, they add some additional meaning and grammatical functions to the words. For example, in the word foxes, the affix is − es."
},
{
"code": null,
"e": 31998,
"s": 31933,
"text": "Further, affixes can also be divided into following four types −"
},
{
"code": null,
"e": 32111,
"s": 31998,
"text": "Prefixes − As the name suggests, prefixes precede the stem. For example, in the word unbuckle, un is the prefix."
},
{
"code": null,
"e": 32224,
"s": 32111,
"text": "Prefixes − As the name suggests, prefixes precede the stem. For example, in the word unbuckle, un is the prefix."
},
{
"code": null,
"e": 32332,
"s": 32224,
"text": "Suffixes − As the name suggests, suffixes follow the stem. For example, in the word cats, -s is the suffix."
},
{
"code": null,
"e": 32440,
"s": 32332,
"text": "Suffixes − As the name suggests, suffixes follow the stem. For example, in the word cats, -s is the suffix."
},
{
"code": null,
"e": 32595,
"s": 32440,
"text": "Infixes − As the name suggests, infixes are inserted inside the stem. For example, the word cupful, can be pluralized as cupsful by using -s as the infix."
},
{
"code": null,
"e": 32750,
"s": 32595,
"text": "Infixes − As the name suggests, infixes are inserted inside the stem. For example, the word cupful, can be pluralized as cupsful by using -s as the infix."
},
{
"code": null,
"e": 32951,
"s": 32750,
"text": "Circumfixes − They precede and follow the stem. There are very less examples of circumfixes in English language. A very common example is ‘A-ing’ where we can use -A precede and -ing follows the stem."
},
{
"code": null,
"e": 33152,
"s": 32951,
"text": "Circumfixes − They precede and follow the stem. There are very less examples of circumfixes in English language. A very common example is ‘A-ing’ where we can use -A precede and -ing follows the stem."
},
{
"code": null,
"e": 33288,
"s": 33152,
"text": "The order of the words would be decided by morphological parsing. Let us now see the requirements for building a morphological parser −"
},
{
"code": null,
"e": 33536,
"s": 33288,
"text": "The very first requirement for building a morphological parser is lexicon, which includes the list of stems and affixes along with the basic information about them. For example, the information like whether the stem is Noun stem or Verb stem, etc."
},
{
"code": null,
"e": 33822,
"s": 33536,
"text": "It is basically the model of morpheme ordering. In other sense, the model explaining which classes of morphemes can follow other classes of morphemes inside a word. For example, the morphotactic fact is that the English plural morpheme always follows the noun rather than preceding it."
},
{
"code": null,
"e": 33978,
"s": 33822,
"text": "These spelling rules are used to model the changes occurring in a word. For example, the rule of converting y to ie in word like city+s = cities not citys."
},
{
"code": null,
"e": 34335,
"s": 33978,
"text": "Syntactic analysis or parsing or syntax analysis is the third phase of NLP. The purpose of this phase is to draw exact meaning, or you can say dictionary meaning from the text. Syntax analysis checks the text for meaningfulness comparing to the rules of formal grammar. For example, the sentence like “hot ice-cream” would be rejected by semantic analyzer."
},
{
"code": null,
"e": 34588,
"s": 34335,
"text": "In this sense, syntactic analysis or parsing may be defined as the process of analyzing the strings of symbols in natural language conforming to the rules of formal grammar. The origin of the word ‘parsing’ is from Latin word ‘pars’ which means ‘part’."
},
{
"code": null,
"e": 34947,
"s": 34588,
"text": "It is used to implement the task of parsing. It may be defined as the software component designed for taking input data (text) and giving structural representation of the input after checking for correct syntax as per formal grammar. It also builds a data structure generally in the form of parse tree or abstract syntax tree or other hierarchical structure."
},
{
"code": null,
"e": 34985,
"s": 34947,
"text": "The main roles of the parse include −"
},
{
"code": null,
"e": 35013,
"s": 34985,
"text": "To report any syntax error."
},
{
"code": null,
"e": 35041,
"s": 35013,
"text": "To report any syntax error."
},
{
"code": null,
"e": 35151,
"s": 35041,
"text": "To recover from commonly occurring error so that the processing of the remainder of program can be continued."
},
{
"code": null,
"e": 35261,
"s": 35151,
"text": "To recover from commonly occurring error so that the processing of the remainder of program can be continued."
},
{
"code": null,
"e": 35283,
"s": 35261,
"text": "To create parse tree."
},
{
"code": null,
"e": 35305,
"s": 35283,
"text": "To create parse tree."
},
{
"code": null,
"e": 35329,
"s": 35305,
"text": "To create symbol table."
},
{
"code": null,
"e": 35353,
"s": 35329,
"text": "To create symbol table."
},
{
"code": null,
"e": 35399,
"s": 35353,
"text": "To produce intermediate representations (IR)."
},
{
"code": null,
"e": 35445,
"s": 35399,
"text": "To produce intermediate representations (IR)."
},
{
"code": null,
"e": 35504,
"s": 35445,
"text": "Derivation divides parsing into the followings two types −"
},
{
"code": null,
"e": 35521,
"s": 35504,
"text": "Top-down Parsing"
},
{
"code": null,
"e": 35538,
"s": 35521,
"text": "Top-down Parsing"
},
{
"code": null,
"e": 35556,
"s": 35538,
"text": "Bottom-up Parsing"
},
{
"code": null,
"e": 35574,
"s": 35556,
"text": "Bottom-up Parsing"
},
{
"code": null,
"e": 35881,
"s": 35574,
"text": "In this kind of parsing, the parser starts constructing the parse tree from the start symbol and then tries to transform the start symbol to the input. The most common form of topdown parsing uses recursive procedure to process the input. The main disadvantage of recursive descent parsing is backtracking."
},
{
"code": null,
"e": 36009,
"s": 35881,
"text": "In this kind of parsing, the parser starts with the input symbol and tries to construct the parser tree up to the start symbol."
},
{
"code": null,
"e": 36298,
"s": 36009,
"text": "In order to get the input string, we need a sequence of production rules. Derivation is a set of production rules. During parsing, we need to decide the non-terminal, which is to be replaced along with deciding the production rule with the help of which the non-terminal will be replaced."
},
{
"code": null,
"e": 36450,
"s": 36298,
"text": "In this section, we will learn about the two types of derivations, which can be used to decide which non-terminal to be replaced with production rule −"
},
{
"code": null,
"e": 36632,
"s": 36450,
"text": "In the left-most derivation, the sentential form of an input is scanned and replaced from the left to the right. The sentential form in this case is called the left-sentential form."
},
{
"code": null,
"e": 36807,
"s": 36632,
"text": "In the left-most derivation, the sentential form of an input is scanned and replaced from right to left. The sentential form in this case is called the right-sentential form."
},
{
"code": null,
"e": 37118,
"s": 36807,
"text": "It may be defined as the graphical depiction of a derivation. The start symbol of derivation serves as the root of the parse tree. In every parse tree, the leaf nodes are terminals and interior nodes are non-terminals. A property of parse tree is that in-order traversal will produce the original input string."
},
{
"code": null,
"e": 37424,
"s": 37118,
"text": "Grammar is very essential and important to describe the syntactic structure of well-formed programs. In the literary sense, they denote syntactical rules for conversation in natural languages. Linguistics have attempted to define grammars since the inception of natural languages like English, Hindi, etc."
},
{
"code": null,
"e": 37672,
"s": 37424,
"text": "The theory of formal languages is also applicable in the fields of Computer Science mainly in programming languages and data structure. For example, in ‘C’ language, the precise grammar rules state how functions are made from lists and statements."
},
{
"code": null,
"e": 37790,
"s": 37672,
"text": "A mathematical model of grammar was given by Noam Chomsky in 1956, which is effective for writing computer languages."
},
{
"code": null,
"e": 37876,
"s": 37790,
"text": "Mathematically, a grammar G can be formally written as a 4-tuple (N, T, S, P) where −"
},
{
"code": null,
"e": 37932,
"s": 37876,
"text": "N or VN = set of non-terminal symbols, i.e., variables."
},
{
"code": null,
"e": 37988,
"s": 37932,
"text": "N or VN = set of non-terminal symbols, i.e., variables."
},
{
"code": null,
"e": 38022,
"s": 37988,
"text": "T or ∑ = set of terminal symbols."
},
{
"code": null,
"e": 38056,
"s": 38022,
"text": "T or ∑ = set of terminal symbols."
},
{
"code": null,
"e": 38085,
"s": 38056,
"text": "S = Start symbol where S ∈ N"
},
{
"code": null,
"e": 38114,
"s": 38085,
"text": "S = Start symbol where S ∈ N"
},
{
"code": null,
"e": 38284,
"s": 38114,
"text": "P denotes the Production rules for Terminals as well as Non-terminals. It has the form α → β, where α and β are strings on VN ∪ ∑ and least one symbol of α belongs to VN"
},
{
"code": null,
"e": 38454,
"s": 38284,
"text": "P denotes the Production rules for Terminals as well as Non-terminals. It has the form α → β, where α and β are strings on VN ∪ ∑ and least one symbol of α belongs to VN"
},
{
"code": null,
"e": 38637,
"s": 38454,
"text": "Phrase structure grammar, introduced by Noam Chomsky, is based on the constituency relation. That is why it is also called constituency grammar. It is opposite to dependency grammar."
},
{
"code": null,
"e": 38780,
"s": 38637,
"text": "Before giving an example of constituency grammar, we need to know the fundamental points about constituency grammar and constituency relation."
},
{
"code": null,
"e": 38870,
"s": 38780,
"text": "All the related frameworks view the sentence structure in terms of constituency relation."
},
{
"code": null,
"e": 38960,
"s": 38870,
"text": "All the related frameworks view the sentence structure in terms of constituency relation."
},
{
"code": null,
"e": 39068,
"s": 38960,
"text": "The constituency relation is derived from the subject-predicate division of Latin as well as Greek grammar."
},
{
"code": null,
"e": 39176,
"s": 39068,
"text": "The constituency relation is derived from the subject-predicate division of Latin as well as Greek grammar."
},
{
"code": null,
"e": 39264,
"s": 39176,
"text": "The basic clause structure is understood in terms of noun phrase NP and verb phrase VP."
},
{
"code": null,
"e": 39352,
"s": 39264,
"text": "The basic clause structure is understood in terms of noun phrase NP and verb phrase VP."
},
{
"code": null,
"e": 39445,
"s": 39352,
"text": "We can write the sentence “This tree is illustrating the constituency relation” as follows −"
},
{
"code": null,
"e": 39656,
"s": 39445,
"text": "It is opposite to the constituency grammar and based on dependency relation. It was introduced by Lucien Tesniere. Dependency grammar (DG) is opposite to the constituency grammar because it lacks phrasal nodes."
},
{
"code": null,
"e": 39793,
"s": 39656,
"text": "Before giving an example of Dependency grammar, we need to know the fundamental points about Dependency grammar and Dependency relation."
},
{
"code": null,
"e": 39881,
"s": 39793,
"text": "In DG, the linguistic units, i.e., words are connected to each other by directed links."
},
{
"code": null,
"e": 39969,
"s": 39881,
"text": "In DG, the linguistic units, i.e., words are connected to each other by directed links."
},
{
"code": null,
"e": 40022,
"s": 39969,
"text": "The verb becomes the center of the clause structure."
},
{
"code": null,
"e": 40075,
"s": 40022,
"text": "The verb becomes the center of the clause structure."
},
{
"code": null,
"e": 40203,
"s": 40075,
"text": "Every other syntactic units are connected to the verb in terms of directed link. These syntactic units are called dependencies."
},
{
"code": null,
"e": 40331,
"s": 40203,
"text": "Every other syntactic units are connected to the verb in terms of directed link. These syntactic units are called dependencies."
},
{
"code": null,
"e": 40421,
"s": 40331,
"text": "We can write the sentence “This tree is illustrating the dependency relation” as follows;"
},
{
"code": null,
"e": 40592,
"s": 40421,
"text": "Parse tree that uses Constituency grammar is called constituency-based parse tree; and the parse trees that uses dependency grammar is called dependency-based parse tree."
},
{
"code": null,
"e": 40747,
"s": 40592,
"text": "Context free grammar, also called CFG, is a notation for describing languages and a superset of Regular grammar. It can be seen in the following diagram −"
},
{
"code": null,
"e": 40828,
"s": 40747,
"text": "CFG consists of finite set of grammar rules with the following four components −"
},
{
"code": null,
"e": 40991,
"s": 40828,
"text": "It is denoted by V. The non-terminals are syntactic variables that denote the sets of strings, which further help defining the language, generated by the grammar."
},
{
"code": null,
"e": 41090,
"s": 40991,
"text": "It is also called tokens and defined by Σ. Strings are formed with the basic symbols of terminals."
},
{
"code": null,
"e": 41395,
"s": 41090,
"text": "It is denoted by P. The set defines how the terminals and non-terminals can be combined. Every production(P) consists of non-terminals, an arrow, and terminals (the sequence of terminals). Non-terminals are called the left side of the production and terminals are called the right side of the production."
},
{
"code": null,
"e": 41525,
"s": 41395,
"text": "The production begins from the start symbol. It is denoted by symbol S. Non-terminal symbol is always designated as start symbol."
},
{
"code": null,
"e": 41704,
"s": 41525,
"text": "The purpose of semantic analysis is to draw exact meaning, or you can say dictionary meaning from the text. The work of semantic analyzer is to check the text for meaningfulness."
},
{
"code": null,
"e": 42032,
"s": 41704,
"text": "We already know that lexical analysis also deals with the meaning of the words, then how is semantic analysis different from lexical analysis? Lexical analysis is based on smaller token but on the other side semantic analysis focuses on larger chunks. That is why semantic analysis can be divided into the following two parts −"
},
{
"code": null,
"e": 42185,
"s": 42032,
"text": "It is the first part of the semantic analysis in which the study of the meaning of individual words is performed. This part is called lexical semantics."
},
{
"code": null,
"e": 42276,
"s": 42185,
"text": "In the second part, the individual words will be combined to provide meaning in sentences."
},
{
"code": null,
"e": 42616,
"s": 42276,
"text": "The most important task of semantic analysis is to get the proper meaning of the sentence. For example, analyze the sentence “Ram is great.” In this sentence, the speaker is talking either about Lord Ram or about a person whose name is Ram. That is why the job, to get the proper meaning of the sentence, of semantic analyzer is important."
},
{
"code": null,
"e": 42678,
"s": 42616,
"text": "Followings are some important elements of semantic analysis −"
},
{
"code": null,
"e": 42941,
"s": 42678,
"text": "It may be defined as the relationship between a generic term and instances of that generic term. Here the generic term is called hypernym and its instances are called hyponyms. For example, the word color is hypernym and the color blue, yellow etc. are hyponyms."
},
{
"code": null,
"e": 43185,
"s": 42941,
"text": "It may be defined as the words having same spelling or same form but having different and unrelated meaning. For example, the word “Bat” is a homonymy word because bat can be an implement to hit a ball or bat is a nocturnal flying mammal also."
},
{
"code": null,
"e": 43472,
"s": 43185,
"text": "Polysemy is a Greek word, which means “many signs”. It is a word or phrase with different but related sense. In other words, we can say that polysemy has the same spelling but different and related meaning. For example, the word “bank” is a polysemy word having the following meanings −"
},
{
"code": null,
"e": 43497,
"s": 43472,
"text": "A financial institution."
},
{
"code": null,
"e": 43522,
"s": 43497,
"text": "A financial institution."
},
{
"code": null,
"e": 43576,
"s": 43522,
"text": "The building in which such an institution is located."
},
{
"code": null,
"e": 43630,
"s": 43576,
"text": "The building in which such an institution is located."
},
{
"code": null,
"e": 43658,
"s": 43630,
"text": "A synonym for “to rely on”."
},
{
"code": null,
"e": 43686,
"s": 43658,
"text": "A synonym for “to rely on”."
},
{
"code": null,
"e": 44125,
"s": 43686,
"text": "Both polysemy and homonymy words have the same syntax or spelling. The main difference between them is that in polysemy, the meanings of the words are related but in homonymy, the meanings of the words are not related. For example, if we talk about the same word “Bank”, we can write the meaning ‘a financial institution’ or ‘a river bank’. In that case it would be the example of homonym because the meanings are unrelated to each other."
},
{
"code": null,
"e": 44283,
"s": 44125,
"text": "It is the relation between two lexical items having different forms but expressing the same or a close meaning. Examples are ‘author/writer’, ‘fate/destiny’."
},
{
"code": null,
"e": 44437,
"s": 44283,
"text": "It is the relation between two lexical items having symmetry between their semantic components relative to an axis. The scope of antonymy is as follows −"
},
{
"code": null,
"e": 44519,
"s": 44437,
"text": "Application of property or not − Example is ‘life/death’, ‘certitude/incertitude’"
},
{
"code": null,
"e": 44601,
"s": 44519,
"text": "Application of property or not − Example is ‘life/death’, ‘certitude/incertitude’"
},
{
"code": null,
"e": 44671,
"s": 44601,
"text": "Application of scalable property − Example is ‘rich/poor’, ‘hot/cold’"
},
{
"code": null,
"e": 44741,
"s": 44671,
"text": "Application of scalable property − Example is ‘rich/poor’, ‘hot/cold’"
},
{
"code": null,
"e": 44803,
"s": 44741,
"text": "Application of a usage − Example is ‘father/son’, ‘moon/sun’."
},
{
"code": null,
"e": 44865,
"s": 44803,
"text": "Application of a usage − Example is ‘father/son’, ‘moon/sun’."
},
{
"code": null,
"e": 45086,
"s": 44865,
"text": "Semantic analysis creates a representation of the meaning of a sentence. But before getting into the concept and approaches related to meaning representation, we need to understand the building blocks of semantic system."
},
{
"code": null,
"e": 45211,
"s": 45086,
"text": "In word representation or representation of the meaning of the words, the following building blocks play an important role −"
},
{
"code": null,
"e": 45345,
"s": 45211,
"text": "Entities − It represents the individual such as a particular person, location etc. For example, Haryana. India, Ram all are entities."
},
{
"code": null,
"e": 45479,
"s": 45345,
"text": "Entities − It represents the individual such as a particular person, location etc. For example, Haryana. India, Ram all are entities."
},
{
"code": null,
"e": 45573,
"s": 45479,
"text": "Concepts − It represents the general category of the individuals such as a person, city, etc."
},
{
"code": null,
"e": 45667,
"s": 45573,
"text": "Concepts − It represents the general category of the individuals such as a person, city, etc."
},
{
"code": null,
"e": 45770,
"s": 45667,
"text": "Relations − It represents the relationship between entities and concept. For example, Ram is a person."
},
{
"code": null,
"e": 45873,
"s": 45770,
"text": "Relations − It represents the relationship between entities and concept. For example, Ram is a person."
},
{
"code": null,
"e": 45998,
"s": 45873,
"text": "Predicates − It represents the verb structures. For example, semantic roles and case grammar are the examples of predicates."
},
{
"code": null,
"e": 46123,
"s": 45998,
"text": "Predicates − It represents the verb structures. For example, semantic roles and case grammar are the examples of predicates."
},
{
"code": null,
"e": 46411,
"s": 46123,
"text": "Now, we can understand that meaning representation shows how to put together the building blocks of semantic systems. In other words, it shows how to put together entities, concepts, relation and predicates to describe a situation. It also enables the reasoning about the semantic world."
},
{
"code": null,
"e": 46495,
"s": 46411,
"text": "Semantic analysis uses the following approaches for the representation of meaning −"
},
{
"code": null,
"e": 46530,
"s": 46495,
"text": "First order predicate logic (FOPL)"
},
{
"code": null,
"e": 46565,
"s": 46530,
"text": "First order predicate logic (FOPL)"
},
{
"code": null,
"e": 46579,
"s": 46565,
"text": "Semantic Nets"
},
{
"code": null,
"e": 46593,
"s": 46579,
"text": "Semantic Nets"
},
{
"code": null,
"e": 46600,
"s": 46593,
"text": "Frames"
},
{
"code": null,
"e": 46607,
"s": 46600,
"text": "Frames"
},
{
"code": null,
"e": 46634,
"s": 46607,
"text": "Conceptual dependency (CD)"
},
{
"code": null,
"e": 46661,
"s": 46634,
"text": "Conceptual dependency (CD)"
},
{
"code": null,
"e": 46685,
"s": 46661,
"text": "Rule-based architecture"
},
{
"code": null,
"e": 46709,
"s": 46685,
"text": "Rule-based architecture"
},
{
"code": null,
"e": 46722,
"s": 46709,
"text": "Case Grammar"
},
{
"code": null,
"e": 46735,
"s": 46722,
"text": "Case Grammar"
},
{
"code": null,
"e": 46753,
"s": 46735,
"text": "Conceptual Graphs"
},
{
"code": null,
"e": 46771,
"s": 46753,
"text": "Conceptual Graphs"
},
{
"code": null,
"e": 46883,
"s": 46771,
"text": "A question that arises here is why do we need meaning representation? Followings are the reasons for the same −"
},
{
"code": null,
"e": 47032,
"s": 46883,
"text": "The very first reason is that with the help of meaning representation the linking of linguistic elements to the non-linguistic elements can be done."
},
{
"code": null,
"e": 47143,
"s": 47032,
"text": "With the help of meaning representation, unambiguous, canonical forms can be represented at the lexical level."
},
{
"code": null,
"e": 47296,
"s": 47143,
"text": "Meaning representation can be used to reason for verifying what is true in the world as well as to infer the knowledge from the semantic representation."
},
{
"code": null,
"e": 47695,
"s": 47296,
"text": "The first part of semantic analysis, studying the meaning of individual words, is called lexical semantics. It includes words, sub-words, affixes (sub-units), compound words and phrases. All the words, sub-words, etc. are collectively called lexical items. In other words, we can say that lexical semantics is the relationship between lexical items, the meaning of sentences and the syntax of sentences."
},
{
"code": null,
"e": 47751,
"s": 47695,
"text": "Following are the steps involved in lexical semantics −"
},
{
"code": null,
"e": 47855,
"s": 47751,
"text": "Classification of lexical items like words, sub-words, affixes, etc. is performed in lexical semantics."
},
{
"code": null,
"e": 47959,
"s": 47855,
"text": "Classification of lexical items like words, sub-words, affixes, etc. is performed in lexical semantics."
},
{
"code": null,
"e": 48062,
"s": 47959,
"text": "Decomposition of lexical items like words, sub-words, affixes, etc. is performed in lexical semantics."
},
{
"code": null,
"e": 48165,
"s": 48062,
"text": "Decomposition of lexical items like words, sub-words, affixes, etc. is performed in lexical semantics."
},
{
"code": null,
"e": 48263,
"s": 48165,
"text": "Differences as well as similarities between various lexical semantic structures are also analyzed."
},
{
"code": null,
"e": 48361,
"s": 48263,
"text": "Differences as well as similarities between various lexical semantic structures are also analyzed."
},
{
"code": null,
"e": 48626,
"s": 48361,
"text": "We understand that words have different meanings based on the context of its usage in the sentence. If we talk about human languages, then they are ambiguous too because many words can be interpreted in multiple ways depending upon the context of their occurrence."
},
{
"code": null,
"e": 49191,
"s": 48626,
"text": "Word sense disambiguation, in natural language processing (NLP), may be defined as the ability to determine which meaning of a word is activated by its use in a particular context. Lexical ambiguity, syntactic or semantic, is one of the very first problems that any NLP system faces. Part-of-speech (POS) taggers with a high level of accuracy can resolve a word’s syntactic ambiguity. On the other hand, the problem of resolving semantic ambiguity is called WSD (word sense disambiguation). Resolving semantic ambiguity is harder than resolving syntactic ambiguity."
},
{
"code": null,
"e": 49285,
"s": 49191,
"text": "For example, consider the two examples of the distinct sense that exist for the word “bass” −"
},
{
"code": null,
"e": 49308,
"s": 49285,
"text": "I can hear bass sound."
},
{
"code": null,
"e": 49331,
"s": 49308,
"text": "I can hear bass sound."
},
{
"code": null,
"e": 49361,
"s": 49331,
"text": "He likes to eat grilled bass."
},
{
"code": null,
"e": 49391,
"s": 49361,
"text": "He likes to eat grilled bass."
},
{
"code": null,
"e": 49649,
"s": 49391,
"text": "The occurrence of the word bass clearly denotes the distinct meaning. In the first sentence, it means frequency and in the second, it means fish. Hence, if it were disambiguated by WSD, the correct meanings of the above sentences could be assigned as follows −"
},
{
"code": null,
"e": 49682,
"s": 49649,
"text": "I can hear bass/frequency sound."
},
{
"code": null,
"e": 49715,
"s": 49682,
"text": "I can hear bass/frequency sound."
},
{
"code": null,
"e": 49750,
"s": 49715,
"text": "He likes to eat grilled bass/fish."
},
{
"code": null,
"e": 49785,
"s": 49750,
"text": "He likes to eat grilled bass/fish."
},
{
"code": null,
"e": 49843,
"s": 49785,
"text": "The evaluation of WSD requires the following two inputs −"
},
{
"code": null,
"e": 49958,
"s": 49843,
"text": "The very first input for evaluation of WSD is a dictionary, which is used to specify the senses to be disambiguated."
},
{
"code": null,
"e": 50106,
"s": 49958,
"text": "Another input required by WSD is a sense-annotated test corpus that has the target or correct senses. The test corpora can be of two types −"
},
{
"code": null,
"e": 50229,
"s": 50106,
"text": "Lexical sample − This kind of corpora is used in the system, where it is required to disambiguate a small sample of words."
},
{
"code": null,
"e": 50352,
"s": 50229,
"text": "Lexical sample − This kind of corpora is used in the system, where it is required to disambiguate a small sample of words."
},
{
"code": null,
"e": 50487,
"s": 50352,
"text": "All-words − This kind of corpora is used in the system, where it is expected to disambiguate all the words in a piece of running text."
},
{
"code": null,
"e": 50622,
"s": 50487,
"text": "All-words − This kind of corpora is used in the system, where it is expected to disambiguate all the words in a piece of running text."
},
{
"code": null,
"e": 50733,
"s": 50622,
"text": "Approaches and methods to WSD are classified according to the source of knowledge used in word disambiguation."
},
{
"code": null,
"e": 50787,
"s": 50733,
"text": "Let us now see the four conventional methods to WSD −"
},
{
"code": null,
"e": 51499,
"s": 50787,
"text": "As the name suggests, for disambiguation these methods primarily rely on dictionaries, thesauri and lexical knowledge bases. They do not use corpus evidence for disambiguation. The Lesk method is the seminal dictionary-based method, introduced by Michael Lesk in 1986. The Lesk definition, on which the Lesk algorithm is based, is “measure overlap between sense definitions for all words in context”. However, in 2000, Kilgarriff and Rosenzweig gave the simplified Lesk definition as “measure overlap between sense definitions of word and current context”, which further means identifying the correct sense for one word at a time. Here the current context is the set of words in the surrounding sentence or paragraph."
},
{
"code": null,
"e": 52118,
"s": 51499,
"text": "For disambiguation, machine learning methods make use of sense-annotated corpora for training. These methods assume that the context can provide enough evidence on its own to disambiguate the sense. In these methods, world knowledge and reasoning are deemed unnecessary. The context is represented as a set of “features” of the words. It includes information about the surrounding words as well. Support vector machines and memory-based learning are the most successful supervised learning approaches to WSD. These methods rely on a substantial amount of manually sense-tagged corpora, which is very expensive to create."
},
{
"code": null,
"e": 52512,
"s": 52118,
"text": "Due to the lack of training corpora, most word sense disambiguation algorithms use semi-supervised learning methods. This is because semi-supervised methods use both labelled as well as unlabelled data. These methods require a very small amount of annotated text and a large amount of plain unannotated text. The technique used by semi-supervised methods is bootstrapping from seed data."
},
{
"code": null,
"e": 52902,
"s": 52512,
"text": "These methods assume that similar senses occur in similar context. That is why the senses can be induced from text by clustering word occurrences by using some measure of similarity of the context. This task is called word sense induction or discrimination. Unsupervised methods have great potential to overcome the knowledge acquisition bottleneck due to non-dependency on manual efforts."
},
{
"code": null,
"e": 52997,
"s": 52902,
"text": "Word sense disambiguation (WSD) is applied in almost every application of language technology."
},
{
"code": null,
"e": 53031,
"s": 52997,
"text": "Let us now see the scope of WSD −"
},
{
"code": null,
"e": 53341,
"s": 53031,
"text": "Machine translation or MT is the most obvious application of WSD. In MT, lexical choice for the words that have distinct translations for different senses is done by WSD. The senses in MT are represented as words in the target language. Most machine translation systems do not use an explicit WSD module."
},
{
"code": null,
"e": 53945,
"s": 53341,
"text": "Information retrieval (IR) may be defined as a software program that deals with the organization, storage, retrieval and evaluation of information from document repositories, particularly textual information. The system basically assists users in finding the information they require, but it does not explicitly return the answers to the questions. WSD is used to resolve the ambiguities of the queries provided to an IR system. Like MT, current IR systems do not explicitly use a WSD module; they rely on the concept that the user would type enough context in the query to retrieve only relevant documents."
},
{
"code": null,
"e": 54220,
"s": 53945,
"text": "In most of the applications, WSD is necessary for accurate analysis of text. For example, WSD helps an intelligence gathering system to flag the correct words. For example, a medical intelligence system might need flagging of “illegal drugs” rather than “medical drugs”."
},
{
"code": null,
"e": 54444,
"s": 54220,
"text": "WSD and lexicography can work together in a loop because modern lexicography is corpus-based. With lexicography, WSD provides rough empirical sense groupings as well as statistically significant contextual indicators of sense."
},
{
"code": null,
"e": 54520,
"s": 54444,
"text": "Followings are some difficulties faced by word sense disambiguation (WSD) −"
},
{
"code": null,
"e": 54732,
"s": 54520,
"text": "The major problem of WSD is to decide the sense of the word because different senses can be very closely related. Even different dictionaries and thesauruses can provide different divisions of words into senses."
},
{
"code": null,
"e": 54984,
"s": 54732,
"text": "Another problem of WSD is that completely different algorithm might be needed for different applications. For example, in machine translation, it takes the form of target word selection; and in information retrieval, a sense inventory is not required."
},
{
"code": null,
"e": 55176,
"s": 54984,
"text": "Another problem of WSD is that WSD systems are generally tested by having their results on a task compared against the task of human beings. This is called the problem of interjudge variance."
},
{
"code": null,
"e": 55268,
"s": 55176,
"text": "Another difficulty in WSD is that words cannot be easily divided into discrete submeanings."
},
{
"code": null,
"e": 55872,
"s": 55268,
"text": "Processing natural language is among the most difficult problems of artificial intelligence. One of the major problems in NLP is discourse processing − building theories and models of how utterances stick together to form coherent discourse. Actually, language always consists of collocated, structured and coherent groups of sentences rather than isolated and unrelated sentences. These coherent groups of sentences are referred to as discourse."
},
{
"code": null,
"e": 56372,
"s": 55872,
"text": "Coherence and discourse structure are interconnected in many ways. Coherence, a property of good text, is used to evaluate the output quality of a natural language generation system. The question that arises here is: what does it mean for a text to be coherent? Suppose we collected one sentence from every page of a newspaper − would it be a discourse? Of course not, because these sentences do not exhibit coherence. A coherent discourse must possess the following properties −"
},
{
"code": null,
"e": 56603,
"s": 56372,
"text": "The discourse would be coherent if it has meaningful connections between its utterances. This property is called coherence relation. For example, some sort of explanation must be there to justify the connection between utterances."
},
{
"code": null,
"e": 56781,
"s": 56603,
"text": "Another property that makes a discourse coherent is that there must be a certain kind of relationship with the entities. Such kind of coherence is called entity-based coherence."
},
{
"code": null,
"e": 57235,
"s": 56781,
"text": "An important question regarding discourse is what kind of structure the discourse must have. The answer to this question depends upon the segmentation we apply to the discourse. Discourse segmentation may be defined as determining the types of structures for large discourse. It is quite difficult to implement discourse segmentation, but it is very important for applications such as information retrieval, text summarization and information extraction."
},
{
"code": null,
"e": 57352,
"s": 57235,
"text": "In this section, we will learn about the algorithms for discourse segmentation. The algorithms are described below −"
},
{
"code": null,
"e": 57956,
"s": 57352,
"text": "The class of unsupervised discourse segmentation is often represented as linear segmentation. We can understand the task of linear segmentation with the help of an example. In the example, there is a task of segmenting the text into multi-paragraph units; the units represent the passages of the original text. These algorithms are dependent on cohesion, which may be defined as the use of certain linguistic devices to tie the textual units together. On the other hand, lexical cohesion is the cohesion that is indicated by the relationship between two or more words in two units, such as the use of synonyms."
},
{
"code": null,
"e": 58394,
"s": 57956,
"text": "The earlier method does not have any hand-labeled segment boundaries. On the other hand, supervised discourse segmentation needs boundary-labeled training data, which is relatively easy to acquire. In supervised discourse segmentation, discourse markers or cue words play an important role. A discourse marker or cue word is a word or phrase that functions to signal discourse structure. These discourse markers are domain-specific."
},
{
"code": null,
"e": 58770,
"s": 58394,
"text": "Lexical repetition is a way to find the structure in a discourse, but it does not satisfy the requirement of being coherent discourse. To achieve coherent discourse, we must focus on coherence relations in particular. As we know, a coherence relation defines the possible connection between utterances in a discourse. Hobbs proposed such kinds of relations as follows −"
},
{
"code": null,
"e": 58860,
"s": 58770,
"text": "We are taking two terms S0 and S1 to represent the meaning of the two related sentences −"
},
{
"code": null,
"e": 59046,
"s": 58860,
"text": "It infers that the state asserted by term S0 could cause the state asserted by S1. For example, two statements show the result relation: Ram was caught in the fire. His skin burned."
},
{
"code": null,
"e": 59222,
"s": 59046,
"text": "It infers that the state asserted by S1 could cause the state asserted by S0. For example, two statements show the explanation relation: Ram fought with Shyam’s friend. He was drunk."
},
{
"code": null,
"e": 59419,
"s": 59222,
"text": "It infers p(a1,a2,...) from the assertion of S0 and p(b1,b2,...) from the assertion of S1. Here ai and bi are similar for all i. For example, two statements are parallel − Ram wanted a car. Shyam wanted money."
},
{
"code": null,
"e": 59596,
"s": 59419,
"text": "It infers the same proposition P from both the assertions S0 and S1. For example, two statements show the elaboration relation: Ram was from Chandigarh. Shyam was from Kerala."
},
{
"code": null,
"e": 59838,
"s": 59596,
"text": "It happens when a change of state can be inferred from the assertion of S0, the final state of which can be inferred from S1, and vice-versa. For example, the two statements show the occasion relation: Ram picked up the book. He gave it to Shyam."
},
{
"code": null,
"e": 60032,
"s": 59838,
"text": "The coherence of an entire discourse can also be considered in terms of the hierarchical structure between coherence relations. For example, the following passage can be represented as a hierarchical structure −"
},
{
"code": null,
"e": 60076,
"s": 60032,
"text": "S1 − Ram went to the bank to deposit money."
},
{
"code": null,
"e": 60120,
"s": 60076,
"text": "S1 − Ram went to the bank to deposit money."
},
{
"code": null,
"e": 60169,
"s": 60120,
"text": "S2 − He then took a train to Shyam’s cloth shop."
},
{
"code": null,
"e": 60218,
"s": 60169,
"text": "S2 − He then took a train to Shyam’s cloth shop."
},
{
"code": null,
"e": 60254,
"s": 60218,
"text": "S3 − He wanted to buy some clothes."
},
{
"code": null,
"e": 60290,
"s": 60254,
"text": "S3 − He wanted to buy some clothes."
},
{
"code": null,
"e": 60333,
"s": 60290,
"text": "S4 − He did not have new clothes for the party."
},
{
"code": null,
"e": 60376,
"s": 60333,
"text": "S4 − He did not have new clothes for the party."
},
{
"code": null,
"e": 60434,
"s": 60376,
"text": "S5 − He also wanted to talk to Shyam regarding his health."
},
{
"code": null,
"e": 60492,
"s": 60434,
"text": "S5 − He also wanted to talk to Shyam regarding his health."
},
{
"code": null,
"e": 60959,
"s": 60492,
"text": "Interpretation of the sentences from any discourse is another important task, and to achieve this we need to know who or what entity is being talked about. Here, reference interpretation is the key element. Reference may be defined as the linguistic expression used to denote an entity or individual. For example, in the passage Ram, the manager of ABC bank, saw his friend Shyam at a shop. He went to meet him, the linguistic expressions Ram, his and he are references."
},
{
"code": null,
"e": 61102,
"s": 60959,
"text": "On the same note, reference resolution may be defined as the task of determining what entities are referred to by which linguistic expression."
},
{
"code": null,
"e": 61163,
"s": 61102,
"text": "We use the following terminologies in reference resolution −"
},
{
"code": null,
"e": 61349,
"s": 61163,
"text": "Referring expression − The natural language expression that is used to perform reference is called a referring expression. For example, the passage used above is a referring expression."
},
{
"code": null,
"e": 61535,
"s": 61349,
"text": "Referring expression − The natural language expression that is used to perform reference is called a referring expression. For example, the passage used above is a referring expression."
},
{
"code": null,
"e": 61639,
"s": 61535,
"text": "Referent − It is the entity that is referred. For example, in the last given example Ram is a referent."
},
{
"code": null,
"e": 61743,
"s": 61639,
"text": "Referent − It is the entity that is referred. For example, in the last given example Ram is a referent."
},
{
"code": null,
"e": 61876,
"s": 61743,
"text": "Corefer − When two expressions are used to refer to the same entity, they are called corefers. For example, Ram and he are corefers."
},
{
"code": null,
"e": 62009,
"s": 61876,
"text": "Corefer − When two expressions are used to refer to the same entity, they are called corefers. For example, Ram and he are corefers."
},
{
"code": null,
"e": 62124,
"s": 62009,
"text": "Antecedent − The term has the license to use another term. For example, Ram is the antecedent of the reference he."
},
{
"code": null,
"e": 62239,
"s": 62124,
"text": "Antecedent − The term has the license to use another term. For example, Ram is the antecedent of the reference he."
},
{
"code": null,
"e": 62416,
"s": 62239,
"text": "Anaphora & Anaphoric − It may be defined as the reference to an entity that has been previously introduced into the sentence. And, the referring expression is called anaphoric."
},
{
"code": null,
"e": 62593,
"s": 62416,
"text": "Anaphora & Anaphoric − It may be defined as the reference to an entity that has been previously introduced into the sentence. And, the referring expression is called anaphoric."
},
{
"code": null,
"e": 62757,
"s": 62593,
"text": "Discourse model − The model that contains the representations of the entities that have been referred to in the discourse and the relationship they are engaged in."
},
{
"code": null,
"e": 62921,
"s": 62757,
"text": "Discourse model − The model that contains the representations of the entities that have been referred to in the discourse and the relationship they are engaged in."
},
{
"code": null,
"e": 63044,
"s": 62921,
"text": "Let us now see the different types of referring expressions. The five types of referring expressions are described below −"
},
{
"code": null,
"e": 63262,
"s": 63044,
"text": "Such kind of reference introduces entities that are new to the hearer into the discourse context. For example − in the sentence Ram had gone around one day to bring him some food − some is an indefinite reference."
},
{
"code": null,
"e": 63514,
"s": 63262,
"text": "Opposite to the above, such kind of reference represents the entities that are not new but are identifiable to the hearer in the discourse context. For example, in the sentence − I used to read The Times of India − The Times of India is a definite reference."
},
{
"code": null,
"e": 63649,
"s": 63514,
"text": "It is a form of definite reference. For example, Ram laughed as loud as he could. The word he represents pronoun referring expression."
},
{
"code": null,
"e": 63776,
"s": 63649,
"text": "These demonstrate and behave differently than simple definite pronouns. For example, this and that are demonstrative pronouns."
},
{
"code": null,
"e": 63964,
"s": 63776,
"text": "It is the simplest type of referring expression. It can be the name of a person, organization or location. For example, in the above examples, Ram is the name-referring expression."
},
{
"code": null,
"e": 64020,
"s": 63964,
"text": "The two reference resolution tasks are described below."
},
{
"code": null,
"e": 64349,
"s": 64020,
"text": "It is the task of finding referring expressions in a text that refer to the same entity. In simple words, it is the task of finding corefer expressions. A set of coreferring expressions is called a coreference chain. For example, He, Chief Manager and His are referring expressions in the first passage given as an example."
},
{
"code": null,
"e": 64658,
"s": 64349,
"text": "In English, the main problem for coreference resolution is the pronoun it. The reason is that the pronoun it has many uses. For example, it can refer much like he and she. The pronoun it can also be used without referring to any specific thing. For example, It’s raining. It is really good."
},
{
"code": null,
"e": 64933,
"s": 64658,
"text": "Unlike coreference resolution, pronominal anaphora resolution may be defined as the task of finding the antecedent for a single pronoun. For example, if the pronoun is his, the task of pronominal anaphora resolution is to find the word Ram, because Ram is the antecedent."
},
{
"code": null,
"e": 65161,
"s": 64933,
"text": "Tagging is a kind of classification that may be defined as the automatic assignment of descriptors to tokens. Here the descriptor is called a tag, which may represent part-of-speech, semantic information and so on."
},
{
"code": null,
"e": 65602,
"s": 65161,
"text": "Now, if we talk about Part-of-Speech (PoS) tagging, then it may be defined as the process of assigning one of the parts of speech to a given word. It is generally called POS tagging. In simple words, we can say that POS tagging is the task of labelling each word in a sentence with its appropriate part of speech. We already know that parts of speech include nouns, verbs, adverbs, adjectives, pronouns, conjunctions and their sub-categories."
},
{
"code": null,
"e": 65718,
"s": 65602,
"text": "Most POS tagging falls under rule-based POS tagging, stochastic POS tagging and transformation-based tagging."
},
{
"code": null,
"e": 66248,
"s": 65718,
"text": "One of the oldest techniques of tagging is rule-based POS tagging. Rule-based taggers use a dictionary or lexicon for getting possible tags for tagging each word. If the word has more than one possible tag, then rule-based taggers use hand-written rules to identify the correct tag. Disambiguation can also be performed in rule-based tagging by analyzing the linguistic features of a word along with its preceding as well as following words. For example, if the preceding word of a word is an article, then the word must be a noun."
},
{
"code": null,
"e": 66384,
"s": 66248,
"text": "As the name suggests, all such information in rule-based POS tagging is coded in the form of rules. These rules may be either −"
},
{
"code": null,
"e": 66406,
"s": 66384,
"text": "Context-pattern rules"
},
{
"code": null,
"e": 66428,
"s": 66406,
"text": "Context-pattern rules"
},
{
"code": null,
"e": 66553,
"s": 66428,
"text": "Or, regular expressions compiled into finite-state automata, intersected with a lexically ambiguous sentence representation."
},
{
"code": null,
"e": 66678,
"s": 66553,
"text": "Or, regular expressions compiled into finite-state automata, intersected with a lexically ambiguous sentence representation."
},
{
"code": null,
"e": 66756,
"s": 66678,
"text": "We can also understand Rule-based POS tagging by its two-stage architecture −"
},
{
"code": null,
"e": 66868,
"s": 66756,
"text": "First stage − In the first stage, it uses a dictionary to assign each word a list of potential parts-of-speech."
},
{
"code": null,
"e": 66980,
"s": 66868,
"text": "First stage − In the first stage, it uses a dictionary to assign each word a list of potential parts-of-speech."
},
{
"code": null,
"e": 67138,
"s": 66980,
"text": "Second stage − In the second stage, it uses large lists of hand-written disambiguation rules to sort down the list to a single part-of-speech for each word."
},
{
"code": null,
"e": 67296,
"s": 67138,
"text": "Second stage − In the second stage, it uses large lists of hand-written disambiguation rules to sort down the list to a single part-of-speech for each word."
},
{
"code": null,
"e": 67354,
"s": 67296,
"text": "Rule-based POS taggers possess the following properties −"
},
{
"code": null,
"e": 67398,
"s": 67354,
"text": "These taggers are knowledge-driven taggers."
},
{
"code": null,
"e": 67442,
"s": 67398,
"text": "These taggers are knowledge-driven taggers."
},
{
"code": null,
"e": 67498,
"s": 67442,
"text": "The rules in Rule-based POS tagging are built manually."
},
{
"code": null,
"e": 67554,
"s": 67498,
"text": "The rules in Rule-based POS tagging are built manually."
},
{
"code": null,
"e": 67601,
"s": 67554,
"text": "The information is coded in the form of rules."
},
{
"code": null,
"e": 67648,
"s": 67601,
"text": "The information is coded in the form of rules."
},
{
"code": null,
"e": 67712,
"s": 67648,
"text": "There are a limited number of rules, approximately around 1000."
},
{
"code": null,
"e": 67776,
"s": 67712,
"text": "There are a limited number of rules, approximately around 1000."
},
{
"code": null,
"e": 67853,
"s": 67776,
"text": "Smoothing and language modeling are defined explicitly in rule-based taggers."
},
{
"code": null,
"e": 67930,
"s": 67853,
"text": "Smoothing and language modeling are defined explicitly in rule-based taggers."
},
{
"code": null,
"e": 68260,
"s": 67930,
"text": "Another technique of tagging is Stochastic POS Tagging. Now, the question that arises here is which model can be stochastic. The model that includes frequency or probability (statistics) can be called stochastic. Any number of different approaches to the problem of part-of-speech tagging can be referred to as stochastic tagger."
},
{
"code": null,
"e": 68342,
"s": 68260,
"text": "The simplest stochastic tagger applies the following approaches for POS tagging −"
},
{
"code": null,
"e": 68709,
"s": 68342,
"text": "In this approach, the stochastic taggers disambiguate the words based on the probability that a word occurs with a particular tag. We can also say that the tag encountered most frequently with the word in the training set is the one assigned to an ambiguous instance of that word. The main issue with this approach is that it may yield an inadmissible sequence of tags."
},
{
"code": null,
"e": 69005,
"s": 68709,
"text": "It is another approach of stochastic tagging, where the tagger calculates the probability of a given sequence of tags occurring. It is also called n-gram approach. It is called so because the best tag for a given word is determined by the probability at which it occurs with the n previous tags."
},
{
"code": null,
"e": 69063,
"s": 69005,
"text": "Stochastic POS taggers possess the following properties −"
},
{
"code": null,
"e": 69126,
"s": 69063,
"text": "This POS tagging is based on the probability of tag occurring."
},
{
"code": null,
"e": 69189,
"s": 69126,
"text": "This POS tagging is based on the probability of tag occurring."
},
{
"code": null,
"e": 69217,
"s": 69189,
"text": "It requires training corpus"
},
{
"code": null,
"e": 69245,
"s": 69217,
"text": "It requires training corpus"
},
{
"code": null,
"e": 69322,
"s": 69245,
"text": "There would be no probability for the words that do not exist in the corpus."
},
{
"code": null,
"e": 69399,
"s": 69322,
"text": "There would be no probability for the words that do not exist in the corpus."
},
{
"code": null,
"e": 69462,
"s": 69399,
"text": "It uses different testing corpus (other than training corpus)."
},
{
"code": null,
"e": 69525,
"s": 69462,
"text": "It uses different testing corpus (other than training corpus)."
},
{
"code": null,
"e": 69637,
"s": 69525,
"text": "It is the simplest POS tagging because it chooses the most frequent tag associated with a word in the training corpus."
},
{
"code": null,
"e": 69749,
"s": 69637,
"text": "It is the simplest POS tagging because it chooses the most frequent tag associated with a word in the training corpus."
},
{
"code": null,
"e": 70083,
"s": 69749,
"text": "Transformation based tagging is also called Brill tagging. It is an instance of transformation-based learning (TBL), which is a rule-based algorithm for automatic tagging of POS to the given text. TBL allows us to have linguistic knowledge in a readable form and transforms one state to another state by using transformation rules."
},
{
"code": null,
"e": 70547,
"s": 70083,
"text": "It draws inspiration from both of the previously explained taggers − rule-based and stochastic. If we see the similarity between rule-based and transformation taggers, then like rule-based, it is also based on rules that specify what tags need to be assigned to what words. On the other hand, if we see the similarity between stochastic and transformation taggers, then like stochastic, it is a machine learning technique in which rules are automatically induced from data."
},
{
"code": null,
"e": 70759,
"s": 70547,
"text": "In order to understand the working and concept of transformation-based taggers, we need to understand the working of transformation-based learning. Consider the following steps to understand the working of TBL −"
},
{
"code": null,
"e": 70863,
"s": 70759,
"text": "Start with the solution − The TBL usually starts with some solution to the problem and works in cycles."
},
{
"code": null,
"e": 71074,
"s": 70967,
"text": "Most beneficial transformation chosen − In each cycle, TBL will choose the most beneficial transformation."
},
{
"code": null,
"e": 71279,
"s": 71181,
"text": "Apply to the problem − The transformation chosen in the last step will be applied to the problem."
},
{
"code": null,
"e": 71590,
"s": 71377,
"text": "The algorithm stops when the transformation selected in step 2 does not add more value or when there are no more transformations to select. Such learning is best suited for classification tasks."
},
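The three-step TBL cycle above can be sketched as follows. The single rule template used here ("change tag X to Y when the previous tag is Z") and the toy data are illustrative assumptions − Brill's original tagger uses a much richer template set.

```python
# Transformation-based learning sketch: start from a baseline tagging,
# then in each cycle greedily pick the transformation that removes the
# most errors, apply it, and stop when no transformation adds value.

def tbl(words, baseline_tags, gold_tags, max_rounds=10):
    tags = list(baseline_tags)      # step 1: start with a solution
    rules = []
    for _ in range(max_rounds):
        best_rule, best_gain = None, 0
        # Candidate transformations (old_tag, new_tag, previous_tag)
        # proposed from the current tagging errors.
        candidates = {
            (tags[i], gold_tags[i], tags[i - 1])
            for i in range(1, len(words)) if tags[i] != gold_tags[i]
        }
        for old, new, prev in candidates:   # step 2: most beneficial rule
            gain = 0
            for i in range(1, len(words)):
                if tags[i] == old and tags[i - 1] == prev:
                    gain += 1 if gold_tags[i] == new else -1
            if gain > best_gain:
                best_rule, best_gain = (old, new, prev), gain
        if best_rule is None:       # no transformation adds value: stop
            break
        old, new, prev = best_rule  # step 3: apply it to the problem
        for i in range(1, len(words)):
            if tags[i] == old and tags[i - 1] == prev:
                tags[i] = new
        rules.append(best_rule)
    return tags, rules

words = ["the", "can", "rusted"]
baseline = ["DET", "VERB", "VERB"]  # "can" is most often a verb in training
gold = ["DET", "NOUN", "VERB"]
print(tbl(words, baseline, gold))
```

The learned rules stay human-readable ("retag VERB as NOUN after DET"), which is exactly the advantage of TBL listed below.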
{
"code": null,
"e": 71629,
"s": 71590,
"text": "The advantages of TBL are as follows −"
},
{
"code": null,
"e": 71704,
"s": 71629,
"text": "We learn small set of simple rules and these rules are enough for tagging."
},
{
"code": null,
"e": 71882,
"s": 71779,
"text": "Development as well as debugging is very easy in TBL because the learned rules are easy to understand."
},
{
"code": null,
"e": 72099,
"s": 71985,
"text": "Complexity in tagging is reduced because in TBL there is an interlacing of machine-learned and human-generated rules."
},
{
"code": null,
"e": 72282,
"s": 72213,
"text": "Transformation-based tagger is much faster than Markov-model tagger."
},
{
"code": null,
"e": 72393,
"s": 72351,
"text": "The disadvantages of TBL are as follows −"
},
{
"code": null,
"e": 72465,
"s": 72393,
"text": "Transformation-based learning (TBL) does not provide tag probabilities."
},
{
"code": null,
"e": 72605,
"s": 72537,
"text": "In TBL, the training time is very long especially on large corpora."
},
{
"code": null,
"e": 72776,
"s": 72673,
"text": "Before digging deep into HMM POS tagging, we must understand the concept of Hidden Markov Model (HMM)."
},
{
"code": null,
"e": 73036,
"s": 72776,
"text": "An HMM model may be defined as the doubly-embedded stochastic model, where the underlying stochastic process is hidden. This hidden stochastic process can only be observed through another set of stochastic processes that produces the sequence of observations."
},
{
"code": null,
"e": 73453,
"s": 73036,
"text": "For example, a sequence of hidden coin tossing experiments is done and we see only the observation sequence consisting of heads and tails. The actual details of the process − how many coins are used, the order in which they are selected − are hidden from us. By observing this sequence of heads and tails, we can build several HMMs to explain the sequence. Following is one form of Hidden Markov Model for this problem −"
},
{
"code": null,
"e": 73638,
"s": 73453,
"text": "We assumed that there are two states in the HMM and each of the state corresponds to the selection of different biased coin. Following matrix gives the state transition probabilities −"
},
{
"code": null,
"e": 73655,
"s": 73638,
"text": "A = [[a11, a12], [a21, a22]]"
},
{
"code": null,
"e": 73661,
"s": 73655,
"text": "Here,"
},
{
"code": null,
"e": 73732,
"s": 73661,
"text": "aij = probability of transition from state i to state j."
},
{
"code": null,
"e": 73834,
"s": 73803,
"text": "a11 + a12 = 1 and a21 + a22 = 1"
},
{
"code": null,
"e": 73942,
"s": 73865,
"text": "P1 = probability of heads of the first coin i.e. the bias of the first coin."
},
{
"code": null,
"e": 74098,
"s": 74019,
"text": "P2 = probability of heads of the second coin i.e. the bias of the second coin."
},
{
"code": null,
"e": 74250,
"s": 74177,
"text": "We can also create an HMM model assuming that there are 3 coins or more."
},
{
"code": null,
"e": 74312,
"s": 74250,
"text": "This way, we can characterize HMM by the following elements −"
},
{
"code": null,
"e": 74395,
"s": 74312,
"text": "N, the number of states in the model (in the above example N =2, only two states)."
},
{
"code": null,
"e": 74592,
"s": 74478,
"text": "M, the number of distinct observations that can appear with each state (in the above example M = 2, i.e., H or T)."
},
{
"code": null,
"e": 74792,
"s": 74706,
"text": "A, the state transition probability distribution − the matrix A in the above example."
},
{
"code": null,
"e": 74978,
"s": 74878,
"text": "P, the probability distribution of the observable symbols in each state (in our example P1 and P2)."
},
{
"code": null,
"e": 75113,
"s": 75078,
"text": "I, the initial state distribution."
},
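The five elements N, M, A, P and I for the two-coin example can be written down directly as data. The concrete probability values below are illustrative assumptions, chosen only so that each row of A satisfies a11 + a12 = 1 and a21 + a22 = 1.

```python
import random

random.seed(0)

# State transition probabilities: A[i][j] = P(next state j | current state i).
# Each row sums to 1 (a11 + a12 = 1, a21 + a22 = 1).
A = [[0.7, 0.3],
     [0.4, 0.6]]

# Observation probabilities in each state: the bias of each coin,
# P[i] = P(heads | state i), i.e. P1 and P2 from the text.
P = [0.9, 0.2]

# Initial state distribution I.
I = [0.5, 0.5]

def sample_sequence(length):
    """Generate an observation sequence of H/T; the state path stays hidden."""
    state = 0 if random.random() < I[0] else 1
    observations = []
    for _ in range(length):
        observations.append("H" if random.random() < P[state] else "T")
        state = 0 if random.random() < A[state][0] else 1
    return observations

print("".join(sample_sequence(10)))
```

Only the H/T sequence is observable, which is exactly the "doubly-embedded" character of an HMM described above.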
{
"code": null,
"e": 75436,
"s": 75148,
"text": "The POS tagging process is the process of finding the sequence of tags which is most likely to have generated a given word sequence. We can model this POS process by using a Hidden Markov Model (HMM), where tags are the hidden states that produced the observable output, i.e., the words."
},
{
"code": null,
"e": 75541,
"s": 75436,
"text": "Mathematically, in POS tagging, we are always interested in finding a tag sequence (C) which maximizes −"
},
{
"code": null,
"e": 75549,
"s": 75541,
"text": "P (C|W)"
},
{
"code": null,
"e": 75556,
"s": 75549,
"text": "Where,"
},
{
"code": null,
"e": 75577,
"s": 75556,
"text": "C = C1, C2, C3... CT"
},
{
"code": null,
"e": 75596,
"s": 75577,
"text": "W = W1, W2, W3... WT"
},
{
"code": null,
"e": 75831,
"s": 75596,
"text": "On the other side of the coin, the fact is that we need a lot of statistical data to reasonably estimate such sequences. However, to simplify the problem, we can apply some mathematical transformations along with some assumptions."
},
{
"code": null,
"e": 76049,
"s": 75831,
"text": "The use of an HMM to do POS tagging is a special case of Bayesian inference. Hence, we will start by restating the problem using Bayes’ rule, which says that the above-mentioned conditional probability is equal to −"
},
{
"code": null,
"e": 76122,
"s": 76049,
"text": "(PROB (C1,..., CT) * PROB (W1,..., WT | C1,..., CT)) / PROB (W1,..., WT)"
},
{
"code": null,
"e": 76360,
"s": 76122,
"text": "We can eliminate the denominator in all these cases because we are interested in finding the sequence C which maximizes the above value. This will not affect our answer. Now, our problem reduces to finding the sequence C that maximizes −"
},
{
"code": null,
"e": 76415,
"s": 76360,
"text": "PROB (C1,..., CT) * PROB (W1,..., WT | C1,..., CT) (1)"
},
{
"code": null,
"e": 76636,
"s": 76415,
"text": "Even after reducing the problem to the above expression, it would require a large amount of data. We can make reasonable independence assumptions about the two probabilities in the above expression to overcome the problem."
},
{
"code": null,
"e": 76823,
"s": 76636,
"text": "The probability of a tag depends on the previous one (bigram model) or previous two (trigram model) or previous n tags (n-gram model) which, mathematically, can be explained as follows −"
},
{
"code": null,
"e": 76890,
"s": 76823,
"text": "PROB (C1,..., CT) = Πi=1..T PROB (Ci|Ci-n+1...Ci-1) (n-gram model)"
},
{
"code": null,
"e": 76948,
"s": 76890,
"text": "PROB (C1,..., CT) = Πi=1..T PROB (Ci|Ci-1) (bigram model)"
},
{
"code": null,
"e": 77046,
"s": 76948,
"text": "The beginning of a sentence can be accounted for by assuming an initial probability for each tag."
},
{
"code": null,
"e": 77079,
"s": 77046,
"text": "PROB (C1|C0) = PROB initial (C1)"
},
{
"code": null,
"e": 77308,
"s": 77079,
"text": "The second probability in equation (1) above can be approximated by assuming that a word appears in a category independent of the words in the preceding or succeeding categories which can be explained mathematically as follows −"
},
{
"code": null,
"e": 77362,
"s": 77308,
"text": "PROB (W1,..., WT | C1,..., CT) = Πi=1..T PROB (Wi|Ci)"
},
{
"code": null,
"e": 77467,
"s": 77362,
"text": "Now, on the basis of the above two assumptions, our goal reduces to finding a sequence C which maximizes"
},
{
"code": null,
"e": 77504,
"s": 77467,
"text": "Πi=1...T PROB(Ci|Ci-1) * PROB(Wi|Ci)"
},
{
"code": null,
"e": 77738,
"s": 77504,
"text": "Now the question that arises here is whether converting the problem to the above form has really helped us. The answer is − yes, it has. If we have a large tagged corpus, then the two probabilities in the above formula can be calculated as −"
},
{
"code": null,
"e": 77848,
"s": 77738,
"text": "PROB (Ci=VERB|Ci-1=NOUN) = (# of instances where Verb follows Noun) / (# of instances where Noun appears) (2)"
},
{
"code": null,
"e": 77942,
"s": 77848,
"text": "PROB (Wi|Ci) = (# of instances where Wi appears in Ci) /(# of instances where Ci appears) (3)"
},
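Equations (2) and (3) are just relative-frequency counts over a tagged corpus, which can be sketched directly. The toy corpus below is an illustrative assumption; `<s>` marks a sentence start for the bigram counts.

```python
from collections import Counter

# Tiny tagged corpus (illustrative only).
tagged = [
    [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB")],
    [("a", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")],
    [("dogs", "NOUN"), ("bark", "VERB")],
]

tag_unigrams = Counter()   # how often each tag Ci appears
tag_bigrams = Counter()    # how often tag Ci follows tag Ci-1
word_tag = Counter()       # how often word Wi appears with tag Ci
for sent in tagged:
    prev = "<s>"
    for word, tag in sent:
        tag_unigrams[tag] += 1
        tag_bigrams[(prev, tag)] += 1
        word_tag[(word, tag)] += 1
        prev = tag

def trans_prob(prev, tag):
    """Equation (2): count(prev followed by tag) / count(prev appears)."""
    if tag_unigrams[prev] == 0:
        return 0.0
    return tag_bigrams[(prev, tag)] / tag_unigrams[prev]

def emit_prob(word, tag):
    """Equation (3): count(word appears with tag) / count(tag appears)."""
    if tag_unigrams[tag] == 0:
        return 0.0
    return word_tag[(word, tag)] / tag_unigrams[tag]

print(trans_prob("NOUN", "VERB"))  # VERB follows NOUN in every case here
print(emit_prob("dog", "NOUN"))    # "dog" is one of three NOUN instances
```

A real tagger would plug these estimates into the product Πi PROB(Ci|Ci-1) * PROB(Wi|Ci) and search for the maximizing tag sequence, typically with the Viterbi algorithm.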
{
"code": null,
"e": 78111,
"s": 77942,
"text": "In this chapter, we will discuss the natural language inception in Natural Language Processing. To begin with, let us first understand what is Natural Language Grammar."
},
{
"code": null,
"e": 78707,
"s": 78111,
"text": "In linguistics, language is a group of arbitrary vocal signs. We may say that language is creative, governed by rules, and innate as well as universal at the same time. It is also distinctly human. The nature of language is different for different people, and there is a lot of misconception about it. That is why it is very important to understand the meaning of the ambiguous term ‘grammar’. In linguistics, the term grammar may be defined as the rules or principles by which a language works. In a broad sense, we can divide grammar into two categories −"
},
{
"code": null,
"e": 78822,
"s": 78707,
"text": "The set of rules, where linguistics and grammarians formulate the speaker’s grammar is called descriptive grammar."
},
{
"code": null,
"e": 79005,
"s": 78822,
"text": "It is a very different sense of grammar, which attempts to maintain a standard of correctness in the language. This category has little to do with the actual working of the language."
},
{
"code": null,
"e": 79207,
"s": 79005,
"text": "The language of study is divided into the interrelated components, which are conventional as well as arbitrary divisions of linguistic investigation. The explanation of these components is as follows −"
},
{
"code": null,
"e": 79792,
"s": 79207,
"text": "The very first component of language is phonology. It is the study of the speech sounds of a particular language. The origin of the word can be traced to Greek language, where ‘phone’ means sound or voice. Phonetics, a subdivision of phonology is the study of the speech sounds of human language from the perspective of their production, perception or their physical properties. IPA (International Phonetic Alphabet) is a tool that represents human sounds in a regular way while studying phonology. In IPA, every written symbol represents one and only one speech sound and vice-versa."
},
{
"code": null,
"e": 80010,
"s": 79792,
"text": "It may be defined as one of the units of sound that differentiate one word from another in a language. In linguistics, phonemes are written between slashes. For example, the phoneme /k/ occurs in words such as kit and skit."
},
{
"code": null,
"e": 80456,
"s": 80010,
"text": "It is the second component of language. It is the study of the structure and classification of the words in a particular language. The origin of the word is from Greek language, where the word ‘morphe’ means ‘form’. Morphology considers the principles of formation of words in a language. In other words, how sounds combine into meaningful units like prefixes, suffixes and roots. It also considers how words can be grouped into parts of speech."
},
{
"code": null,
"e": 81020,
"s": 80456,
"text": "In linguistics, the abstract unit of morphological analysis that corresponds to a set of forms taken by a single word is called a lexeme. The way in which a lexeme is used in a sentence is determined by its grammatical category. A lexeme can be an individual word or multiword. For example, the word talk is an individual-word lexeme, which may have many grammatical variants like talks, talked and talking. A multiword lexeme can be made up of more than one orthographic word. For example, speak up, pull through, etc. are examples of multiword lexemes."
},
{
"code": null,
"e": 81307,
"s": 81020,
"text": "It is the third component of language. It is the study of the order and arrangement of the words into larger units. The word can be traced to Greek language, where the word suntassein means ‘to put in order’. It studies the type of sentences and their structure, of clauses, of phrases."
},
{
"code": null,
"e": 81600,
"s": 81307,
"text": "It is the fourth component of language. It is the study of how meaning is conveyed. The meaning can be related to the outside world or can be related to the grammar of the sentence. The word can be traced to Greek language, where the word semainein means means ‘to signify’, ‘show’, ‘signal’."
},
{
"code": null,
"e": 81815,
"s": 81600,
"text": "It is the fifth component of language. It is the study of the functions of the language and its use in context. The origin of the word can be traced to Greek language where the word ‘pragma’ means ‘deed’, ‘affair’."
},
{
"code": null,
"e": 82071,
"s": 81815,
"text": "A grammatical category may be defined as a class of units or features within the grammar of a language. These units are the building blocks of language and share a common set of characteristics. Grammatical categories are also called grammatical features."
},
{
"code": null,
"e": 82132,
"s": 82071,
"text": "The inventory of grammatical categories is described below −"
},
{
"code": null,
"e": 82357,
"s": 82132,
"text": "It is the simplest grammatical category. We have two terms related to this category − singular and plural. Singular is the concept of ‘one’ whereas plural is the concept of ‘more than one’. For example, dog/dogs, this/these."
},
{
"code": null,
"e": 82625,
"s": 82357,
"text": "Grammatical gender is expressed by variation in personal pronouns and the 3rd person. Examples of grammatical genders are the 3rd person singular forms − he, she, it; the first and second person forms − I, we and you; and the 3rd person plural form they, which is either common gender or neuter gender."
},
{
"code": null,
"e": 82723,
"s": 82625,
"text": "Another simple grammatical category is person. Under this, following three terms are recognized −"
},
{
"code": null,
"e": 82792,
"s": 82723,
"text": "1st person − The person who is speaking is recognized as 1st person."
},
{
"code": null,
"e": 82956,
"s": 82861,
"text": "2nd person − The person who is the hearer or the person spoken to is recognized as 2nd person."
},
{
"code": null,
"e": 83140,
"s": 83051,
"text": "3rd person − The person or thing about whom we are speaking is recognized as 3rd person."
},
{
"code": null,
"e": 83536,
"s": 83229,
"text": "It is one of the most difficult grammatical categories. It may be defined as an indication of the function of a noun phrase (NP) or the relationship of a noun phrase to a verb or to the other noun phrases in the sentence. We have the following three cases expressed in personal and interrogative pronouns −"
},
{
"code": null,
"e": 83652,
"s": 83536,
"text": "Nominative case − It is the function of subject. For example, I, we, you, he, she, it, they and who are nominative."
},
{
"code": null,
"e": 83903,
"s": 83768,
"text": "Genitive case − It is the function of possessor. For example, my/mine, our/ours, his, her/hers, its, their/theirs, whose are genitive."
},
{
"code": null,
"e": 84147,
"s": 84038,
"text": "Objective case − It is the function of object. For example, me, us, you, him, her, them, whom are objective."
},
{
"code": null,
"e": 84355,
"s": 84256,
"text": "This grammatical category is related to adjectives and adverbs. It has the following three terms −"
},
{
"code": null,
"e": 84453,
"s": 84355,
"text": "Positive degree − It expresses a quality. For example, big, fast, beautiful are positive degrees."
},
{
"code": null,
"e": 84718,
"s": 84551,
"text": "Comparative degree − It expresses greater degree or intensity of the quality in one of two items. For example, bigger, faster, more beautiful are comparative degrees."
},
{
"code": null,
"e": 85065,
"s": 84885,
"text": "Superlative degree − It expresses greatest degree or intensity of the quality in one of three or more items. For example, biggest, fastest, most beautiful are superlative degrees."
},
{
"code": null,
"e": 85568,
"s": 85245,
"text": "Both these concepts are very simple. Definiteness as we know represents a referent, which is known, familiar or identifiable by the speaker or hearer. On the other hand, indefiniteness represents a referent that is not known, or is unfamiliar. The concept can be understood in the co-occurrence of an article with a noun −"
},
{
"code": null,
"e": 85591,
"s": 85568,
"text": "definite article − the"
},
{
"code": null,
"e": 85640,
"s": 85614,
"text": "indefinite article − a/an"
},
{
"code": null,
"e": 85944,
"s": 85666,
"text": "This grammatical category is related to verb and can be defined as the linguistic indication of the time of an action. A tense establishes a relation because it indicates the time of an event with respect to the moment of speaking. Broadly, it is of the following three types −"
},
{
"code": null,
"e": 86051,
"s": 85944,
"text": "Present tense − Represents the occurrence of an action in the present moment. For example, Ram works hard."
},
{
"code": null,
"e": 86261,
"s": 86158,
"text": "Past tense − Represents the occurrence of an action before the present moment. For example, it rained."
},
{
"code": null,
"e": 86471,
"s": 86364,
"text": "Future tense − Represents the occurrence of an action after the present moment. For example, it will rain."
},
{
"code": null,
"e": 86685,
"s": 86578,
"text": "This grammatical category may be defined as the view taken of an event. It can be of the following types −"
},
{
"code": null,
"e": 86907,
"s": 86685,
"text": "Perfective aspect − The view is taken as whole and complete in the aspect. For example, the simple past tense like yesterday I met my friend, in English is perfective in aspect as it views the event as complete and whole."
},
{
"code": null,
"e": 87373,
"s": 87129,
"text": "Imperfective aspect − The view is taken as ongoing and incomplete in the aspect. For example, the present participle tense like I am working on this problem, in English is imperfective in aspect as it views the event as incomplete and ongoing."
},
{
"code": null,
"e": 88026,
"s": 87617,
"text": "This grammatical category is a bit difficult to define but it can be simply stated as the indication of the speaker’s attitude towards what he/she is talking about. It is also the grammatical feature of verbs. It is distinct from grammatical tenses and grammatical aspect. The examples of moods are indicative, interrogative, imperative, injunctive, subjunctive, potential, optative, gerunds and participles."
},
{
"code": null,
"e": 88332,
"s": 88026,
"text": "It is also called concord. It happens when a word changes from depending on the other words to which it relates. In other words, it involves making the value of some grammatical category agree between different words or part of speech. Followings are the agreements based on other grammatical categories −"
},
{
"code": null,
"e": 88486,
"s": 88332,
"text": "Agreement based on Person − It is the agreement between subject and the verb. For example, we always use “I am” and “He is” but never “He am” and “I is”."
},
{
"code": null,
"e": 88970,
"s": 88640,
"text": "Agreement based on Number − This agreement is between subject and the verb. In this case, there are specific verb forms for first person singular, second person plural and so on. For example, 1st person singular: I really am, 2nd person plural: We really are, 3rd person singular: The boy sings, 3rd person plural: The boys sing."
},
{
"code": null,
"e": 89478,
"s": 89300,
"text": "Agreement based on Gender − In English, there is agreement in gender between pronouns and antecedents. For example, He reached his destination. The ship reached her destination."
},
{
"code": null,
"e": 89795,
"s": 89656,
"text": "Agreement based on Case − This kind of agreement is not a significant feature of English. For example, who came first − he or his sister? "
},
{
"code": null,
"e": 90153,
"s": 89934,
"text": "The written English and spoken English grammar have many common features but along with that, they also differ in a number of aspects. The following features distinguish between the spoken and written English grammar −"
},
{
"code": null,
"e": 90382,
"s": 90153,
"text": "This striking feature makes spoken and written English grammar different from each other. It is individually known as phenomena of disfluencies and collectively as phenomena of repair. Disfluencies include the use of following −"
},
{
"code": null,
"e": 90538,
"s": 90382,
"text": "Filler words − Sometimes, in between a sentence, we use some filler words. They are called fillers or filler pauses. Examples of such words are uh and um."
},
{
"code": null,
"e": 90902,
"s": 90694,
"text": "Reparandum and repair − The repeated segment of words in between the sentence is called reparandum. In the same segment, the changed word is called repair. Consider the following example to understand this −"
},
{
"code": null,
"e": 91188,
"s": 91110,
"text": "Does ABC airlines offer any one-way flights uh one-way fares for 5000 rupees?"
},
{
"code": null,
"e": 91274,
"s": 91188,
"text": "In the above sentence, one-way flights is the reparandum and one-way fares is the repair."
},
{
"code": null,
"e": 91511,
"s": 91274,
"text": "A restart occurs after the filler pause. For example, in the above sentence, a restart occurs when the speaker starts asking about one-way flights, then stops, corrects himself with a filler pause, and then restarts, asking about one-way fares."
},
{
"code": null,
"e": 91654,
"s": 91511,
"text": "Sometimes we speak sentences with smaller fragments of words. For example, w-wha-what is the time? Here, the words w-wha are word fragments."
},
{
"code": null,
"e": 92226,
"s": 91654,
"text": "Information retrieval (IR) may be defined as a software program that deals with the organization, storage, retrieval and evaluation of information from document repositories particularly textual information. The system assists users in finding the information they require but it does not explicitly return the answers of the questions. It informs the existence and location of documents that might consist of the required information. The documents that satisfy user’s requirement are called relevant documents. A perfect IR system will retrieve only relevant documents."
},
{
"code": null,
"e": 92328,
"s": 92226,
"text": "With the help of the following diagram, we can understand the process of information retrieval (IR) −"
},
{
"code": null,
"e": 92597,
"s": 92328,
"text": "It is clear from the above diagram that a user who needs information will have to formulate a request in the form of query in natural language. Then the IR system will respond by retrieving the relevant output, in the form of documents, about the required information."
},
{
"code": null,
"e": 92820,
"s": 92597,
"text": "The main goal of IR research is to develop a model for retrieving information from the repositories of documents. Here, we are going to discuss a classical problem, named ad-hoc retrieval problem, related to the IR system."
},
{
"code": null,
"e": 93246,
"s": 92820,
"text": "In ad-hoc retrieval, the user must enter a query in natural language that describes the required information. Then the IR system will return the required documents related to the desired information. For example, suppose we are searching something on the Internet and it gives some exact pages that are relevant as per our requirement but there can be some non-relevant pages too. This is due to the ad-hoc retrieval problem."
},
{
"code": null,
"e": 93330,
"s": 93246,
"text": "Followings are some aspects of ad-hoc retrieval that are addressed in IR research −"
},
{
"code": null,
"e": 93421,
"s": 93330,
"text": "How users with the help of relevance feedback can improve original formulation of a query?"
},
{
"code": null,
"e": 93630,
"s": 93512,
"text": "How to implement database merging, i.e., how results from different text databases can be merged into one result set?"
},
{
"code": null,
"e": 93828,
"s": 93748,
"text": "How to handle partly corrupted data? Which models are appropriate for the same?"
},
{
"code": null,
"e": 94269,
"s": 93908,
"text": "Mathematically, models are used in many scientific areas with the objective of understanding some phenomenon in the real world. A model of information retrieval predicts and explains what a user will find in relevance to the given query. The IR model is basically a pattern that defines the above-mentioned aspects of the retrieval procedure and consists of the following −"
},
{
"code": null,
"e": 94292,
"s": 94269,
"text": "A model for documents."
},
{
"code": null,
"e": 94336,
"s": 94315,
"text": "A model for queries."
},
{
"code": null,
"e": 94413,
"s": 94357,
"text": "A matching function that compares queries to documents."
},
{
"code": null,
"e": 94517,
"s": 94469,
"text": "Mathematically, a retrieval model consists of −"
},
{
"code": null,
"e": 94551,
"s": 94517,
"text": "D − Representation for documents."
},
{
"code": null,
"e": 94583,
"s": 94551,
"text": "R − Representation for queries."
},
{
"code": null,
"e": 94657,
"s": 94583,
"text": "F − The modeling framework for D, Q along with relationship between them."
},
{
"code": null,
"e": 94771,
"s": 94657,
"text": "R (q,di) − A similarity function which orders the documents with respect to the query. It is also called ranking."
},
{
"code": null,
"e": 94855,
"s": 94771,
"text": "An information model (IR) model can be classified into the following three models −"
},
{
"code": null,
"e": 95072,
"s": 94855,
"text": "It is the simplest and easy to implement IR model. This model is based on mathematical knowledge that was easily recognized and understood as well. Boolean, Vector and Probabilistic are the three classical IR models."
},
{
"code": null,
"e": 95339,
"s": 95072,
"text": "It is completely opposite to classical IR model. Such kind of IR models are based on principles other than similarity, probability, Boolean operations. Information logic model, situation theory model and interaction models are the examples of non-classical IR model."
},
{
"code": null,
"e": 95556,
"s": 95339,
"text": "It is the enhancement of classical IR model making use of some specific techniques from some other fields. Cluster model, fuzzy model and latent semantic indexing (LSI) models are the example of alternative IR model."
},
{
"code": null,
"e": 95615,
"s": 95556,
"text": "Let us now learn about the design features of IR systems −"
},
{
"code": null,
"e": 95912,
"s": 95615,
"text": "The primary data structure of most of the IR systems is in the form of inverted index. We can define an inverted index as a data structure that list, for every word, all documents that contain it and frequency of the occurrences in document. It makes it easy to search for ‘hits’ of a query word."
},
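{
"code": null,
"e": null,
"s": null,
"text": "As a sketch (with a two-document toy collection invented for illustration), an inverted index mapping each word to the documents that contain it, together with occurrence counts, can be built in Python as follows −"
},
{
"code": null,
"e": null,
"s": null,
"text": "from collections import defaultdict\n\ndocs = {1: \"text retrieval\", 2: \"information retrieval theory\"}\nindex = defaultdict(dict)\nfor doc_id, content in docs.items():\n   for word in content.split():\n      index[word][doc_id] = index[word].get(doc_id, 0) + 1\n\nprint(dict(index)[\"retrieval\"])   # {1: 1, 2: 1}\n"
},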
{
"code": null,
"e": 96621,
"s": 95912,
"text": "Stop words are those high frequency words that are deemed unlikely to be useful for searching. They have less semantic weights. All such kind of words are in a list called stop list. For example, articles “a”, “an”, “the” and prepositions like “in”, “of”, “for”, “at” etc. are the examples of stop words. The size of the inverted index can be significantly reduced by stop list. As per Zipf’s law, a stop list covering a few dozen words reduces the size of inverted index by almost half. On the other hand, sometimes the elimination of stop word may cause elimination of the term that is useful for searching. For example, if we eliminate the alphabet “A” from “Vitamin A” then it would have no significance."
},
{
"code": null,
"e": 96865,
"s": 96621,
"text": "Stemming, the simplified form of morphological analysis, is the heuristic process of extracting the base form of words by chopping off the ends of words. For example, the words laughing, laughs, laughed would be stemmed to the root word laugh."
},
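{
"code": null,
"e": null,
"s": null,
"text": "For example, NLTK's PorterStemmer can be used to reduce these variants to a common stem −"
},
{
"code": null,
"e": null,
"s": null,
"text": "from nltk.stem import PorterStemmer\n\nstemmer = PorterStemmer()\nprint([stemmer.stem(w) for w in [\"laughing\", \"laughs\", \"laughed\"]])\n"
},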
{
"code": null,
"e": 96952,
"s": 96865,
"text": "In our subsequent sections, we will discuss about some important and useful IR models."
},
{
"code": null,
"e": 97179,
"s": 96952,
"text": "It is the oldest information retrieval (IR) model. The model is based on set theory and the Boolean algebra, where documents are sets of terms and queries are Boolean expressions on terms. The Boolean model can be defined as −"
},
{
"code": null,
"e": 97300,
"s": 97179,
"text": "D − A set of words, i.e., the indexing terms present in a document. Here, each term is either present (1) or absent (0)."
},
{
"code": null,
"e": 97567,
"s": 97421,
"text": "Q − A Boolean expression, where terms are the index terms and operators are logical products − AND, logical sum − OR and logical difference − NOT"
},
{
"code": null,
"e": 97978,
"s": 97905,
"text": "F − Boolean algebra over sets of terms as well as over sets of documents"
},
{
"code": null,
"e": 98097,
"s": 97978,
"text": "If we talk about the relevance feedback, then in Boolean IR model the Relevance prediction can be defined as follows −"
},
{
"code": null,
"e": 98215,
"s": 98097,
"text": "R − A document is predicted as relevant to the query expression if and only if it satisfies the query expression as −"
},
{
"code": null,
"e": 98379,
"s": 98333,
"text": "((text ˅ information) ˄ rerieval ˄ ̃ theory)"
},
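{
"code": null,
"e": null,
"s": null,
"text": "The query above can be evaluated with plain Python sets; the posting lists below are invented for illustration −"
},
{
"code": null,
"e": null,
"s": null,
"text": "index = {\"text\": {1, 2}, \"information\": {2, 3}, \"retrieval\": {1, 2, 3}, \"theory\": {3}}\nall_docs = {1, 2, 3}\n\n# ((text OR information) AND retrieval AND NOT theory)\nresult = (index[\"text\"] | index[\"information\"]) & index[\"retrieval\"] & (all_docs - index[\"theory\"])\nprint(sorted(result))   # [1, 2]\n"
},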
{
"code": null,
"e": 98473,
"s": 98379,
"text": "We can explain this model by a query term as an unambiguous definition of a set of documents."
},
{
"code": null,
"e": 98584,
"s": 98473,
"text": "For example, the query term “economic” defines the set of documents that are indexed with the term “economic”."
},
{
"code": null,
"e": 98981,
"s": 98584,
"text": "Now, what would be the result after combining terms with Boolean AND Operator? It will define a document set that is smaller than or equal to the document sets of any of the single terms. For example, the query with terms “social” and “economic” will produce the documents set of documents that are indexed with both the terms. In other words, document set with the intersection of both the sets."
},
{
"code": null,
"e": 99392,
"s": 98981,
"text": "Now, what would be the result after combining terms with Boolean OR operator? It will define a document set that is bigger than or equal to the document sets of any of the single terms. For example, the query with terms “social” or “economic” will produce the documents set of documents that are indexed with either the term “social” or “economic”. In other words, document set with the union of both the sets."
},
{
"code": null,
"e": 99445,
"s": 99392,
"text": "The advantages of the Boolean model are as follows −"
},
{
"code": null,
"e": 99489,
"s": 99445,
"text": "The simplest model, which is based on sets."
},
{
"code": null,
"e": 99567,
"s": 99533,
"text": "Easy to understand and implement."
},
{
"code": null,
"e": 99633,
"s": 99601,
"text": "It only retrieves exact matches"
},
{
"code": null,
"e": 99720,
"s": 99665,
"text": "It gives the user, a sense of control over the system."
},
{
"code": null,
"e": 99831,
"s": 99775,
"text": "The disadvantages of the Boolean model are as follows −"
},
{
"code": null,
"e": 99953,
"s": 99831,
"text": "The model’s similarity function is Boolean. Hence, there would be no partial matches. This can be annoying for the users."
},
{
"code": null,
"e": 100163,
"s": 100075,
"text": "In this model, the Boolean operator usage has much more influence than a critical word."
},
{
"code": null,
"e": 100312,
"s": 100251,
"text": "The query language is expressive, but it is complicated too."
},
{
"code": null,
"e": 100409,
"s": 100373,
"text": "No ranking for retrieved documents."
},
{
"code": null,
"e": 100808,
"s": 100445,
"text": "Due to the above disadvantages of the Boolean model, Gerard Salton and his colleagues suggested a model, which is based on Luhn’s similarity criterion. The similarity criterion formulated by Luhn states, “the more two representations agreed in given elements and their distribution, the higher would be the probability of their representing similar information.”"
},
{
"code": null,
"e": 100898,
"s": 100808,
"text": "Consider the following important points to understand more about the Vector Space Model −"
},
{
"code": null,
"e": 101026,
"s": 100898,
"text": "The index representations (documents) and the queries are considered as vectors embedded in a high dimensional Euclidean space."
},
{
"code": null,
"e": 101265,
"s": 101154,
"text": "The similarity measure of a document vector to a query vector is usually the cosine of the angle between them."
},
{
"code": null,
"e": 101477,
"s": 101376,
"text": "Cosine is a normalized dot product, which can be calculated with the help of the following formula −"
},
{
"code": null,
"e": 101522,
"s": 101477,
"text": "Score⟮d→q→⟯=∑k=1mdk.qk∑k=1m⟮dk⟯2.∑k=1mm⟮qk⟯2"
},
{
"code": null,
"e": 101543,
"s": 101522,
"text": "Score⟮d→q→⟯=1whend=q"
},
{
"code": null,
"e": 101578,
"s": 101543,
"text": "Score⟮d→q→⟯=0whendandqsharenoitems"
},
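{
"code": null,
"e": null,
"s": null,
"text": "The cosine score can be sketched in Python as follows; the example vectors are arbitrary −"
},
{
"code": null,
"e": null,
"s": null,
"text": "import math\n\ndef cosine(d, q):\n   num = sum(dk * qk for dk, qk in zip(d, q))\n   den = math.sqrt(sum(dk ** 2 for dk in d)) * math.sqrt(sum(qk ** 2 for qk in q))\n   return num / den\n\nprint(cosine([3, 4], [3, 4]))   # 1.0 when d = q\nprint(cosine([1, 0], [0, 1]))   # 0.0 when d and q share no terms\n"
},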
{
"code": null,
"e": 101746,
"s": 101578,
"text": "The query and documents are represented by a two-dimensional vector space. The terms are car and insurance. There is one query and three documents in the vector space."
},
{
"code": null,
"e": 102145,
"s": 101746,
"text": "The top ranked document in response to the terms car and insurance will be the document d2 because the angle between q and d2 is the smallest. The reason behind this is that both the concepts car and insurance are salient in d2 and hence have the high weights. On the other side, d1 and d3 also mention both the terms but in each case, one of them is not a centrally important term in the document."
},
{
"code": null,
"e": 102424,
"s": 102145,
"text": "Term weighting means the weights on the terms in vector space. Higher the weight of the term, greater would be the impact of the term on cosine. More weights should be assigned to the more important terms in the model. Now the question that arises here is how can we model this."
},
{
"code": null,
"e": 102551,
"s": 102424,
"text": "One way to do this is to count the words in a document as its term weight. However, do you think it would be effective method?"
},
{
"code": null,
"e": 102682,
"s": 102551,
"text": "Another method, which is more effective, is to use term frequency (tfij), document frequency (dfi) and collection frequency (cfi)."
},
{
"code": null,
"e": 102982,
"s": 102682,
"text": "It may be defined as the number of occurrences of wi in dj. The information that is captured by term frequency is how salient a word is within the given document or in other words we can say that the higher the term frequency the more that word is a good description of the content of that document."
},
{
"code": null,
"e": 103219,
"s": 102982,
"text": "It may be defined as the total number of documents in the collection in which wi occurs. It is an indicator of informativeness. Semantically focused words will occur several times in the document unlike the semantically unfocused words."
},
{
"code": null,
"e": 103297,
"s": 103219,
"text": "It may be defined as the total number of occurrences of wi in the collection."
},
{
"code": null,
"e": 103334,
"s": 103297,
"text": "Mathematically, dfi≤cfiand∑jtfij=cfi"
},
{
"code": null,
"e": 103442,
"s": 103334,
"text": "Let us now learn about the different forms of document frequency weighting. The forms are described below −"
},
{
"code": null,
"e": 103717,
"s": 103442,
"text": "This is also classified as the term frequency factor, which means that if a term t appears often in a document then a query containing t should retrieve that document. We can combine word’s term frequency (tfij) and document frequency (dfi) into a single weight as follows −"
},
{
"code": null,
"e": 103770,
"s": 103717,
"text": "weight(i,j)={(1+log(tfij))logNdfiiftfi,j≥10iftfi,j=0"
},
{
"code": null,
"e": 103811,
"s": 103770,
"text": "Here N is the total number of documents."
},
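{
"code": null,
"e": null,
"s": null,
"text": "A direct Python translation of this weighting (the tf, df and N values passed in are assumed, not taken from a real collection) −"
},
{
"code": null,
"e": null,
"s": null,
"text": "import math\n\ndef weight(tf, df, N):\n   if tf == 0:\n      return 0.0\n   return (1 + math.log(tf)) * math.log(N / df)\n\n# a frequent term from a rare word gets a large weight\nprint(weight(3, 10, 1000))\n"
},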
{
"code": null,
"e": 104123,
"s": 103811,
"text": "This is another form of document frequency weighting and often called idf weighting or inverse document frequency weighting. The important point of idf weighting is that the term’s scarcity across the collection is a measure of its importance and importance is inversely proportional to frequency of occurrence."
},
{
"code": null,
"e": 104139,
"s": 104123,
"text": "Mathematically,"
},
{
"code": null,
"e": 104155,
"s": 104139,
"text": "idft=log(1+Nnt)"
},
{
"code": null,
"e": 104172,
"s": 104155,
"text": "idft=log(N−ntnt)"
},
{
"code": null,
"e": 104178,
"s": 104172,
"text": "Here,"
},
{
"code": null,
"e": 104210,
"s": 104178,
"text": "N = documents in the collection"
},
{
"code": null,
"e": 104243,
"s": 104210,
"text": "nt = documents containing term t"
},
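{
"code": null,
"e": null,
"s": null,
"text": "Both idf variants can be compared with assumed values, for example N = 1000 documents and n_t = 10 −"
},
{
"code": null,
"e": null,
"s": null,
"text": "import math\n\nN, nt = 1000, 10\nprint(math.log(1 + N / nt))      # smoothed form\nprint(math.log((N - nt) / nt))   # alternative form\n"
},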
{
"code": null,
"e": 104735,
"s": 104243,
"text": "The primary goal of any information retrieval system must be accuracy − to produce relevant documents as per the user’s requirement. However, the question that arises here is how can we improve the output by improving user’s query formation style. Certainly, the output of any IR system is dependent on the user’s query and a well-formatted query will produce more accurate results. The user can improve his/her query with the help of relevance feedback, an important aspect of any IR model."
},
{
"code": null,
"e": 104999,
"s": 104735,
"text": "Relevance feedback takes the output that is initially returned from the given query. This initial output can be used to gather user information and to know whether that output is relevant to perform a new query or not. The feedbacks can be classified as follows −"
},
{
"code": null,
"e": 105306,
"s": 104999,
"text": "It may be defined as the feedback that is obtained from the assessors of relevance. These assessors will also indicate the relevance of a document retrieved from the query. In order to improve query retrieval performance, the relevance feedback information needs to be interpolated with the original query."
},
{
"code": null,
"e": 105426,
"s": 105306,
"text": "Assessors or other users of the system may indicate the relevance explicitly by using the following relevance systems −"
},
{
"code": null,
"e": 105569,
"s": 105426,
"text": "Binary relevance system − This relevance feedback system indicates that a document is either relevant (1) or irrelevant (0) for a given query."
},
{
"code": null,
"e": 105997,
"s": 105712,
"text": "Graded relevance system − The graded relevance feedback system indicates the relevance of a document, for a given query, on the basis of grading by using numbers, letters or descriptions. The description can be like “not relevant”, “somewhat relevant”, “very relevant” or “relevant”. "
},
{
"code": null,
"e": 106669,
"s": 106282,
"text": "It is the feedback that is inferred from user behavior. The behavior includes the duration of time user spent viewing a document, which document is selected for viewing and which is not, page browsing and scrolling actions, etc. One of the best examples of implicit feedback is dwell time, which is a measure of how much time a user spends viewing the page linked to in a search result."
},
{
"code": null,
"e": 107059,
"s": 106669,
"text": "It is also called Blind feedback. It provides a method for automatic local analysis. The manual part of relevance feedback is automated with the help of Pseudo relevance feedback so that the user gets improved retrieval performance without an extended interaction. The main advantage of this feedback system is that it does not require assessors like in explicit relevance feedback system."
},
{
"code": null,
"e": 107117,
"s": 107059,
"text": "Consider the following steps to implement this feedback −"
},
{
"code": null,
"e": 107263,
"s": 107117,
"text": "Step 1 − First, the result returned by initial query must be taken as relevant result. The range of relevant result must be in top 10-50 results."
},
{
"code": null,
"e": 107547,
"s": 107409,
"text": "Step 2 − Now, select the top 20-30 terms from the documents using for instance term frequency(tf)-inverse document frequency(idf) weight."
},
{
"code": null,
"e": 107798,
"s": 107685,
"text": "Step 3 − Add these terms to the query and match the returned documents. Then return the most relevant documents."
},
{
"code": null,
"e": 108314,
"s": 107911,
"text": "Natural Language Processing (NLP) is an emerging technology that derives various forms of AI that we see in the present times and its use for creating a seamless as well as interactive interface between humans and machines will continue to be a top priority for today’s and tomorrow’s increasingly cognitive applications. Here, we are going to discuss about some of the very useful applications of NLP."
},
{
"code": null,
"e": 108560,
"s": 108314,
"text": "Machine translation (MT), process of translating one source language or text into another language, is one of the most important applications of NLP. We can understand the process of machine translation with the help of the following flowchart −"
},
{
"code": null,
"e": 108659,
"s": 108560,
"text": "There are different types of machine translation systems. Let us see what the different types are."
},
{
"code": null,
"e": 108735,
"s": 108659,
"text": "Bilingual MT systems produce translations between two particular languages."
},
{
"code": null,
"e": 108875,
"s": 108735,
"text": "Multilingual MT systems produce translations between any pair of languages. They may be either uni-directional or bi-directional in nature."
},
{
"code": null,
"e": 108985,
"s": 108875,
"text": "Let us now learn about the important approaches to Machine Translation. The approaches to MT are as follows −"
},
{
"code": null,
"e": 109211,
"s": 108985,
"text": "It is less popular but the oldest approach of MT. The systems that use this approach are capable of translating SL (source language) directly to TL (target language). Such systems are bi-lingual and uni-directional in nature."
},
{
"code": null,
"e": 109434,
"s": 109211,
"text": "The systems that use Interlingua approach translate SL to an intermediate language called Interlingua (IL) and then translate IL to TL. The Interlingua approach can be understood with the help of the following MT pyramid −"
},
{
"code": null,
"e": 109480,
"s": 109434,
"text": "Three stages are involved with this approach."
},
{
"code": null,
"e": 109582,
"s": 109480,
"text": "In the first stage, source language (SL) texts are converted to abstract SL-oriented representations."
},
{
"code": null,
"e": 109810,
"s": 109684,
"text": "In the second stage, SL-oriented representations are converted into equivalent target language (TL)-oriented representations."
},
{
"code": null,
"e": 109985,
"s": 109936,
"text": "In the third stage, the final text is generated."
},
{
"code": null,
"e": 110305,
"s": 110034,
"text": "This is an emerging approach for MT. Basically, it uses large amount of raw data in the form of parallel corpora. The raw data consists of the text and their translations. Analogybased, example-based, memory-based machine translation techniques use empirical MTapproach."
},
{
"code": null,
"e": 110477,
"s": 110305,
"text": "One of the most common problems these days is unwanted emails. This makes Spam filters all the more important because it is the first line of defense against this problem."
},
{
"code": null,
"e": 110610,
"s": 110477,
"text": "Spam filtering system can be developed by using NLP functionality by considering the major false-positive and false-negative issues."
},
{
"code": null,
"e": 110671,
"s": 110610,
"text": "Followings are some existing NLP models for spam filtering −"
},
{
"code": null,
"e": 110848,
"s": 110671,
"text": "An N-Gram model is an N-character slice of a longer string. In this model, N-grams of several different lengths are used simultaneously in processing and detecting spam emails."
},
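{
"code": null,
"e": null,
"s": null,
"text": "Character N-grams of a string can be extracted with a short helper; n = 3 here is an arbitrary choice −"
},
{
"code": null,
"e": null,
"s": null,
"text": "def ngrams(s, n):\n   return [s[i:i + n] for i in range(len(s) - n + 1)]\n\nprint(ngrams(\"spam\", 3))   # ['spa', 'pam']\n"
},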
{
"code": null,
"e": 111310,
"s": 110848,
"text": "Spammers, generators of spam emails, usually change one or more characters of attacking words in their spams so that they can breach content-based spam filters. That is why we can say that content-based filters are not useful if they cannot understand the meaning of the words or phrases in the email. In order to eliminate such issues in spam filtering, a rule-based word stemming technique, that can match words which look alike and sound alike, is developed."
},
{
"code": null,
"e": 111557,
"s": 111310,
"text": "This has now become a widely-used technology for spam filtering. The incidence of the words in an email is measured against its typical occurrence in a database of unsolicited (spam) and legitimate (ham) email messages in a statistical technique."
},
{
"code": null,
"e": 112005,
"s": 111557,
"text": "In this digital era, the most valuable thing is data, or you can say information. However, do we really get useful as well as the required amount of information? The answer is ‘NO’ because the information is overloaded and our access to knowledge and information far exceeds our capacity to understand it. We are in a serious need of automatic text summarization and information because the flood of information over internet is not going to stop."
},
{
"code": null,
"e": 112303,
"s": 112005,
"text": "Text summarization may be defined as the technique to create short, accurate summary of longer text documents. Automatic text summarization will help us with relevant information in less time. Natural language processing (NLP) plays an important role in developing an automatic text summarization."
},
{
"code": null,
"e": 112645,
"s": 112303,
"text": "Another main application of natural language processing (NLP) is question-answering. Search engines put the information of the world at our fingertips, but they are still lacking when it comes to answer the questions posted by human beings in their natural language. We have big tech companies like Google are also working in this direction."
},
{
"code": null,
"e": 113290,
"s": 112645,
"text": "Question-answering is a Computer Science discipline within the fields of AI and NLP. It focuses on building systems that automatically answer questions posted by human beings in their natural language. A computer system that understands the natural language has the capability of a program system to translate the sentences written by humans into an internal representation so that the valid answers can be generated by the system. The exact answers can be generated by doing syntax and semantic analysis of the questions. Lexical gap, ambiguity and multilingualism are some of the challenges for NLP in building good question answering system."
},
{
"code": null,
"e": 114109,
"s": 113290,
"text": "Another important application of natural language processing (NLP) is sentiment analysis. As the name suggests, sentiment analysis is used to identify the sentiments among several posts. It is also used to identify the sentiment where the emotions are not expressed explicitly. Companies are using sentiment analysis, an application of natural language processing (NLP) to identify the opinion and sentiment of their customers online. It will help companies to understand what their customers think about the products and services. Companies can judge their overall reputation from customer posts with the help of sentiment analysis. In this way, we can say that beyond determining simple polarity, sentiment analysis understands sentiments in context to help us better understand what is behind the expressed opinion."
},
{
"code": null,
"e": 114180,
"s": 114109,
"text": "In this chapter, we will learn about language processing using Python."
},
{
"code": null,
"e": 114248,
"s": 114180,
"text": "The following features make Python different from other languages −"
},
{
"code": null,
"e": 114390,
"s": 114248,
"text": "Python is interpreted − We do not need to compile our Python program before executing it because the interpreter processes Python at runtime."
},
{
"code": null,
"e": 114622,
"s": 114532,
"text": "Interactive − We can directly interact with the interpreter to write our Python programs."
},
{
"code": null,
"e": 114910,
"s": 114712,
"text": "Object-oriented − Python is object-oriented in nature and it makes this language easier to write programs because with the help of this technique of programming it encapsulates code within objects."
},
{
"code": null,
"e": 115282,
"s": 115108,
"text": "Beginner can easily learn − Python is also called beginner’s language because it is very easy to understand, and it supports the development of a wide range of applications."
},
{
"code": null,
"e": 115582,
"s": 115456,
"text": "The latest version of Python 3 released is Python 3.7.1 is available for Windows, Mac OS and most of the flavors of Linux OS."
},
{
"code": null,
"e": 115683,
"s": 115582,
"text": "For windows, we can go to the link www.python.org/downloads/windows/ to download and install Python."
},
{
"code": null,
"e": 115851,
"s": 115784,
"text": "For MAC OS, we can use the link www.python.org/downloads/mac-osx/."
},
{
"code": null,
"e": 116240,
"s": 116130,
"text": "In case of Linux, different flavors of Linux use different package managers for installation of new packages."
},
{
"code": null,
"e": 116339,
"s": 116240,
"text": "For example, to install Python 3 on Ubuntu Linux, we can use the following command from terminal −"
},
{
"code": null,
"e": 116477,
"s": 116438,
"text": "$sudo apt-get install python3-minimal\n"
},
{
"code": null,
"e": 116557,
"s": 116477,
"text": "To study more about Python programming, read Python 3 basic tutorial – Python 3"
},
{
"code": null,
"e": 116850,
"s": 116557,
"text": "We will be using Python library NLTK (Natural Language Toolkit) for doing text analysis in English Language. The Natural language toolkit (NLTK) is a collection of Python libraries designed especially for identifying and tag parts of speech found in the text of natural language like English."
},
{
"code": null,
"e": 116984,
"s": 116850,
"text": "Before starting to use NLTK, we need to install it. With the help of following command, we can install it in our Python environment −"
},
{
"code": null,
"e": 117002,
"s": 116984,
"text": "pip install nltk\n"
},
{
"code": null,
"e": 117104,
"s": 117002,
"text": "If we are using Anaconda, then a Conda package for NLTK can be built by using the following command −"
},
{
"code": null,
"e": 117136,
"s": 117104,
"text": "conda install -c anaconda nltk\n"
},
{
"code": null,
"e": 117399,
"s": 117136,
"text": "After installing NLTK, another important task is to download its preset text repositories so that it can be easily used. However, before that we need to import NLTK the way we import any other Python module. The following command will help us in importing NLTK −"
},
{
"code": null,
"e": 117412,
"s": 117399,
"text": "import nltk\n"
},
{
"code": null,
"e": 117477,
"s": 117412,
"text": "Now, download NLTK data with the help of the following command −"
},
{
"code": null,
"e": 117494,
"s": 117477,
"text": "nltk.download()\n"
},
{
"code": null,
"e": 117560,
"s": 117494,
"text": "It will take some time to install all available packages of NLTK."
},
{
"code": null,
"e": 117776,
"s": 117560,
"text": "Some other Python packages like gensim and pattern are also very necessary for text analysis as well as building natural language processing applications by using NLTK. the packages can be installed as shown below −"
},
{
"code": null,
"e": 117903,
"s": 117776,
"text": "gensim is a robust semantic modeling library which can be used for many applications. We can install it by following command −"
},
{
"code": null,
"e": 117923,
"s": 117903,
"text": "pip install gensim\n"
},
{
"code": null,
"e": 118028,
"s": 117923,
"text": "It can be used to make gensim package work properly. The following command helps in installing pattern −"
},
{
"code": null,
"e": 118049,
"s": 118028,
"text": "pip install pattern\n"
},
{
"code": null,
"e": 118246,
"s": 118049,
"text": "Tokenization may be defined as the Process of breaking the given text, into smaller units called tokens. Words, numbers or punctuation marks can be tokens. It may also be called word segmentation."
},
{
"code": null,
"e": 118292,
"s": 118246,
"text": "Input − Bed and chair are types of furniture."
},
{
"code": null,
"e": 118475,
"s": 118292,
"text": "We have different packages for tokenization provided by NLTK. We can use these packages based on our requirements. The packages and the details of their installation are as follows −"
},
{
"code": null,
"e": 118591,
"s": 118475,
"text": "This package can be used to divide the input text into sentences. We can import it by using the following command −"
},
{
"code": null,
"e": 118632,
"s": 118591,
"text": "from nltk.tokenize import sent_tokenize\n"
},
{
"code": null,
"e": 118744,
"s": 118632,
"text": "This package can be used to divide the input text into words. We can import it by using the following command −"
},
{
"code": null,
"e": 118785,
"s": 118744,
"text": "from nltk.tokenize import word_tokenize\n"
},
{
"code": null,
"e": 118919,
"s": 118785,
"text": "This package can be used to divide the input text into words and punctuation marks. We can import it by using the following command −"
},
{
"code": null,
"e": 118965,
"s": 118919,
"text": "from nltk.tokenize import WordPuncttokenizer\n"
},
{
"code": null,
"e": 119457,
"s": 118965,
"text": "Due to grammatical reasons, language includes lots of variations. Variations in the sense that the language, English as well as other languages too, have different forms of a word. For example, the words like democracy, democratic, and democratization. For machine learning projects, it is very important for machines to understand that these different words, like above, have the same base form. That is why it is very useful to extract the base forms of the words while analyzing the text."
},
{
"code": null,
"e": 119569,
"s": 119457,
"text": "Stemming is a heuristic process that helps in extracting the base forms of the words by chopping of their ends."
},
{
"code": null,
"e": 119646,
"s": 119569,
"text": "The different packages for stemming provided by NLTK module are as follows −"
},
{
"code": null,
"e": 119806,
"s": 119646,
"text": "Porter’s algorithm is used by this stemming package to extract the base form of the words. With the help of the following command, we can import this package −"
},
{
"code": null,
"e": 119850,
"s": 119806,
"text": "from nltk.stem.porter import PorterStemmer\n"
},
{
"code": null,
"e": 119949,
"s": 119850,
"text": "For example, ‘write’ would be the output of the word ‘writing’ given as the input to this stemmer."
},
{
"code": null,
"e": 120108,
"s": 119949,
"text": "Lancaster’s algorithm is used by this stemming package to extract the base form of the words. With the help of following command, we can import this package −"
},
{
"code": null,
"e": 120158,
"s": 120108,
"text": "from nltk.stem.lancaster import LancasterStemmer\n"
},
{
"code": null,
"e": 120256,
"s": 120158,
"text": "For example, ‘writ’ would be the output of the word ‘writing’ given as the input to this stemmer."
},
{
"code": null,
"e": 120414,
"s": 120256,
"text": "Snowball’s algorithm is used by this stemming package to extract the base form of the words. With the help of following command, we can import this package −"
},
{
"code": null,
"e": 120462,
"s": 120414,
"text": "from nltk.stem.snowball import SnowballStemmer\n"
},
{
"code": null,
"e": 120561,
"s": 120462,
"text": "For example, ‘write’ would be the output of the word ‘writing’ given as the input to this stemmer."
},
{
"code": null,
"e": 120773,
"s": 120561,
"text": "It is another way to extract the base form of words, normally aiming to remove inflectional endings by using vocabulary and morphological analysis. After lemmatization, the base form of any word is called lemma."
},
{
"code": null,
"e": 120836,
"s": 120773,
"text": "NLTK module provides the following package for lemmatization −"
},
{
"code": null,
"e": 121005,
"s": 120836,
"text": "This package will extract the base form of the word depending upon whether it is used as a noun or as a verb. The following command can be used to import this package −"
},
{
"code": null,
"e": 121046,
"s": 121005,
"text": "from nltk.stem import WordNetLemmatizer\n"
},
{
"code": null,
"e": 121459,
"s": 121046,
"text": "The identification of parts of speech (POS) and short phrases can be done with the help of chunking. It is one of the important processes in natural language processing. As we are aware about the process of tokenization for the creation of tokens, chunking actually is to do the labeling of those tokens. In other words, we can say that we can get the structure of the sentence with the help of chunking process."
},
{
"code": null,
"e": 121633,
"s": 121459,
"text": "In the following example, we will implement Noun-Phrase chunking, a category of chunking which will find the noun phrase chunks in the sentence, by using NLTK Python module."
},
{
"code": null,
"e": 121698,
"s": 121633,
"text": "Consider the following steps to implement noun-phrase chunking −"
},
{
"code": null,
"e": 121731,
"s": 121698,
"text": "Step 1: Chunk grammar definition"
},
{
"code": null,
"e": 121845,
"s": 121731,
"text": "In this step, we need to define the grammar for chunking. It would consist of the rules, which we need to follow."
},
{
"code": null,
"e": 121875,
"s": 121845,
"text": "Step 2: Chunk parser creation"
},
{
"code": null,
"e": 121963,
"s": 121875,
"text": "Next, we need to create a chunk parser. It would parse the grammar and give the output."
},
{
"code": null,
"e": 121982,
"s": 121963,
"text": "Step 3: The Output"
},
{
"code": null,
"e": 122037,
"s": 121982,
"text": "In this step, we will get the output in a tree format."
},
{
"code": null,
"e": 122079,
"s": 122037,
"text": "Start by importing the the NLTK package −"
},
{
"code": null,
"e": 122092,
"s": 122079,
"text": "import nltk\n"
},
{
"code": null,
"e": 122129,
"s": 122092,
"text": "Now, we need to define the sentence."
},
{
"code": null,
"e": 122135,
"s": 122129,
"text": "Here,"
},
{
"code": null,
"e": 122157,
"s": 122135,
"text": "DT is the determinant"
},
{
"code": null,
"e": 122179,
"s": 122157,
"text": "DT is the determinant"
},
{
"code": null,
"e": 122195,
"s": 122179,
"text": "VBP is the verb"
},
{
"code": null,
"e": 122211,
"s": 122195,
"text": "VBP is the verb"
},
{
"code": null,
"e": 122231,
"s": 122211,
"text": "JJ is the adjective"
},
{
"code": null,
"e": 122251,
"s": 122231,
"text": "JJ is the adjective"
},
{
"code": null,
"e": 122273,
"s": 122251,
"text": "IN is the preposition"
},
{
"code": null,
"e": 122295,
"s": 122273,
"text": "IN is the preposition"
},
{
"code": null,
"e": 122310,
"s": 122295,
"text": "NN is the noun"
},
{
"code": null,
"e": 122325,
"s": 122310,
"text": "NN is the noun"
},
{
"code": null,
"e": 122457,
"s": 122325,
"text": "sentence = [(\"a\", \"DT\"),(\"clever\",\"JJ\"),(\"fox\",\"NN\"),(\"was\",\"VBP\"),\n (\"jumping\",\"VBP\"),(\"over\",\"IN\"),(\"the\",\"DT\"),(\"wall\",\"NN\")]\n"
},
{
"code": null,
"e": 122526,
"s": 122457,
"text": "Next, the grammar should be given in the form of regular expression."
},
{
"code": null,
"e": 122559,
"s": 122526,
"text": "grammar = \"NP:{<DT>?<JJ>*<NN>}\"\n"
},
{
"code": null,
"e": 122616,
"s": 122559,
"text": "Now, we need to define a parser for parsing the grammar."
},
{
"code": null,
"e": 122662,
"s": 122616,
"text": "parser_chunking = nltk.RegexpParser(grammar)\n"
},
{
"code": null,
"e": 122715,
"s": 122662,
"text": "Now, the parser will parse the sentence as follows −"
},
{
"code": null,
"e": 122748,
"s": 122715,
"text": "parser_chunking.parse(sentence)\n"
},
{
"code": null,
"e": 122802,
"s": 122748,
"text": "Next, the output will be in the variable as follows:-"
},
{
"code": null,
"e": 122844,
"s": 122802,
"text": "Output = parser_chunking.parse(sentence)\n"
},
{
"code": null,
"e": 122922,
"s": 122844,
"text": "Now, the following code will help you draw your output in the form of a tree."
},
{
"code": null,
"e": 122937,
"s": 122922,
"text": "output.draw()\n"
},
{
"code": null,
"e": 122972,
"s": 122937,
"text": "\n 59 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 122983,
"s": 122972,
"text": " Mike West"
},
{
"code": null,
"e": 123016,
"s": 122983,
"text": "\n 17 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 123036,
"s": 123016,
"text": " Pranjal Srivastava"
},
{
"code": null,
"e": 123068,
"s": 123036,
"text": "\n 6 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 123089,
"s": 123068,
"text": " Prabh Kirpa Classes"
},
{
"code": null,
"e": 123122,
"s": 123089,
"text": "\n 12 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 123145,
"s": 123122,
"text": " Stone River ELearning"
},
{
"code": null,
"e": 123152,
"s": 123145,
"text": " Print"
},
{
"code": null,
"e": 123163,
"s": 123152,
"text": " Add Notes"
}
] |
End to End — Predictive model using Python framework | by Sundar Krishnan | Towards Data Science
|
Predictive modeling is always a fun task. Most of the time is spent understanding what the business needs and then framing the problem; the next step is to tailor the solution to those needs. As we solve many problems, we realize that a framework can be used to build our first-cut models. Not only does this framework give you faster results, it also helps you plan next steps based on those results.
In this article, we will see how a Python based framework can be applied to a variety of predictive modeling tasks. This will cover/touch upon most of the areas in the CRISP-DM process. So what is CRISP-DM?
en.wikipedia.org
Here is the link to the code. In this article, I skipped a lot of code for the purpose of brevity. Please follow the Github code on the side while reading this article.
github.com
The framework discussed in this article is spread across 9 different areas, and I have linked each of them to where it falls in the CRISP-DM process.
Load Dataset — Data Understanding
import pandas as pd
df = pd.read_excel("bank.xlsx")
Data Transformation — Data Preparation
Now, we have our dataset in a pandas dataframe. Next, we look at the variable descriptions and the contents of the dataset using df.info() and df.head() respectively. The target variable (‘Yes’/’No’) is converted to (1/0) using the code below.
df['target'] = df['y'].apply(lambda x: 1 if x == 'yes' else 0)
Descriptive Stats — Data Understanding
Exploratory statistics help a modeler understand the data better. A couple of these stats are available in this framework. First, we check the missing values in each column in the dataset by using the below code.
df.isnull().mean().sort_values(ascending=False)*100
Second, we check the correlation between variables using the code below.
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
corr = df.corr()
sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns)
Finally, in the framework, I included a binning algorithm that automatically bins the input variables in the dataset and creates a bivariate plot (inputs vs target).
bar_color = '#058caa'
num_color = '#ed8549'
final_iv, _ = data_vars(df1, df1['target'])
final_iv = final_iv[(final_iv.VAR_NAME != 'target')]
grouped = final_iv.groupby(['VAR_NAME'])
for key, group in grouped:
    ax = group.plot('MIN_VALUE', 'EVENT_RATE', kind='bar', color=bar_color, linewidth=1.0, edgecolor=['black'])
    ax.set_title(str(key) + " vs " + str('target'))
    ax.set_xlabel(key)
    ax.set_ylabel(str('target') + " %")
    rects = ax.patches
    for rect in rects:
        height = rect.get_height()
        ax.text(rect.get_x() + rect.get_width()/2., 1.01*height,
                str(round(height*100, 1)) + '%',
                ha='center', va='bottom', color=num_color, fontweight='bold')
Variable Selection — Data Preparation
Please read my article below on the variable selection process used in this framework. Variables are selected with a voting system: we run several feature-selection algorithms, each algorithm votes for the features it picks, and the final vote count determines the best features for modeling.
medium.com
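The linked article covers the details; as a rough, self-contained sketch of the idea (the toy data and selector choices here are illustrative, not the framework's actual code), three algorithms can each vote for their top features and the majority picks survive:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

# Toy stand-in for the bank dataset
X, y = make_classification(n_samples=500, n_features=8, n_informative=4, random_state=42)
X = pd.DataFrame(X, columns=[f"var_{i}" for i in range(8)])

votes = pd.Series(0, index=X.columns)

# Vote 1: random forest importances (top 4 features)
rf = RandomForestClassifier(random_state=42).fit(X, y)
votes[pd.Series(rf.feature_importances_, index=X.columns).nlargest(4).index] += 1

# Vote 2: recursive feature elimination with logistic regression
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=4).fit(X, y)
votes[X.columns[rfe.support_]] += 1

# Vote 3: univariate ANOVA F-test
votes[X.columns[SelectKBest(f_classif, k=4).fit(X, y).get_support()]] += 1

# Keep features chosen by at least two of the three selectors
selected = votes[votes >= 2].index.tolist()
print(votes.sort_values(ascending=False))
print("Selected:", selected)
```

With 12 votes spread over 8 features, at least one feature is guaranteed a majority; in practice the informative variables tend to collect all three votes.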
Model — Modeling
80% of the predictive modeling work is done so far. To complete the remaining 20%, we split our dataset into train/test sets, try a variety of algorithms on the data, and pick the best one.
# Note: in recent scikit-learn versions, train_test_split lives in sklearn.model_selection
from sklearn.model_selection import train_test_split
train, test = train_test_split(df1, test_size = 0.4)
train = train.reset_index(drop=True)
test = test.reset_index(drop=True)
features_train = train[list(vif['Features'])]
label_train = train['target']
features_test = test[list(vif['Features'])]
label_test = test['target']
We apply different algorithms on the train dataset and evaluate the performance on the test data to make sure the model is stable. The framework includes code for Random Forest, Logistic Regression, Naive Bayes, Neural Network and Gradient Boosting. We can add other models based on our needs. The Random Forest code is provided below.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier()
clf.fit(features_train, label_train)
pred_train = clf.predict(features_train)
pred_test = clf.predict(features_test)

from sklearn.metrics import accuracy_score
accuracy_train = accuracy_score(pred_train, label_train)
accuracy_test = accuracy_score(pred_test, label_test)

from sklearn import metrics
fpr, tpr, _ = metrics.roc_curve(np.array(label_train), clf.predict_proba(features_train)[:, 1])
auc_train = metrics.auc(fpr, tpr)
fpr, tpr, _ = metrics.roc_curve(np.array(label_test), clf.predict_proba(features_test)[:, 1])
auc_test = metrics.auc(fpr, tpr)
Hyper parameter Tuning — Modeling
In addition, the hyperparameters of the models can be tuned to improve performance. Here is the code to do that.
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier

n_estimators = [int(x) for x in np.linspace(start = 10, stop = 500, num = 10)]
max_features = ['auto', 'sqrt']
max_depth = [int(x) for x in np.linspace(3, 10, num = 1)]
max_depth.append(None)
min_samples_split = [2, 5, 10]
min_samples_leaf = [1, 2, 4]
bootstrap = [True, False]
random_grid = {'n_estimators': n_estimators,
               'max_features': max_features,
               'max_depth': max_depth,
               'min_samples_split': min_samples_split,
               'min_samples_leaf': min_samples_leaf,
               'bootstrap': bootstrap}
rf = RandomForestClassifier()
rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid,
                               n_iter = 10, cv = 2, verbose = 2, random_state = 42, n_jobs = -1)
rf_random.fit(features_train, label_train)
Final Model and Model Performance — Evaluation
The final model, the one that gives us the best accuracy, is picked for now. However, we are not done yet. We need to evaluate the model performance based on a variety of metrics. The framework contains code that calculates a cross-tab of actual vs. predicted values, the ROC curve, deciles, the KS statistic, a lift chart, an actual-vs-predicted chart, and a gains chart. We will go through each one of them below.
1. Crosstab
pd.crosstab(label_train,pd.Series(pred_train),rownames=['ACTUAL'],colnames=['PRED'])
2. ROC/AUC curve or c-statistic
from bokeh.plotting import figure
from bokeh.io import push_notebook, show, output_notebook
output_notebook()

from sklearn import metrics
preds = clf.predict_proba(features_train)[:, 1]
fpr, tpr, _ = metrics.roc_curve(np.array(label_train), preds)
auc = metrics.auc(fpr, tpr)

p = figure(title="ROC Curve - Train data")
r = p.line(fpr, tpr, color='#0077bc', legend = 'AUC = ' + str(round(auc, 3)), line_width=2)
s = p.line([0, 1], [0, 1], color='#d15555', line_dash='dotdash', line_width=2)
show(p)
3. Decile Plots and Kolmogorov Smirnov (KS) Statistic
A macro is executed in the backend to generate the plot below. And the number highlighted in yellow is the KS-statistic value.
deciling(scores_train,['DECILE'],'TARGET','NONTARGET')
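The `deciling` macro itself lives in the linked repository; as a hedged, minimal illustration of what it computes (synthetic data, not the macro's actual code), the KS statistic is the largest gap between the cumulative event and non-event percentages across score-ordered deciles:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
# Synthetic scores: events (TARGET=1) tend to score higher than non-events
target = rng.integers(0, 2, size=1000)
score = np.clip(rng.normal(0.3 + 0.4 * target, 0.2), 0, 1)
df = pd.DataFrame({"TARGET": target, "SCORE": score})

# Same decile construction as the scoring macro: decile 1 holds the highest scores
df["DECILE"] = pd.qcut(df["SCORE"].rank(method="first"), 10, labels=list(range(10, 0, -1)))

grp = df.groupby("DECILE", observed=False)["TARGET"].agg(["count", "sum"])
grp["NONEVENTS"] = grp["count"] - grp["sum"]
grp["CUM_EVENT_PCT"] = grp["sum"].cumsum() / grp["sum"].sum()
grp["CUM_NONEVENT_PCT"] = grp["NONEVENTS"].cumsum() / grp["NONEVENTS"].sum()
ks = (grp["CUM_EVENT_PCT"] - grp["CUM_NONEVENT_PCT"]).abs().max()
print(f"KS statistic: {ks:.3f}")
```

A well-separating model pushes events into the top deciles, so the cumulative curves diverge and the KS value grows; a random model keeps the gap near zero.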
4. Lift chart, Actual vs predicted chart, Gains chart
Similar to decile plots, a macro is used to generate the plots below.
gains(lift_train,['DECILE'],'TARGET','SCORE')
Save Model for future use — Deployment
Finally, we have developed our model and evaluated all the different metrics, and now we are ready to deploy the model in production. The last step before deployment is to save our model, which is done using the code below.
from sklearn.externals import joblib   # in newer scikit-learn, use "import joblib" directly
filename = 'final_model.model'
i = [d, clf]
joblib.dump(i, filename)
Here, “clf” is the model classifier object and “d” is the label encoder object used to transform character to numeric variables.
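As an illustration only (the exact shape of `d` is defined in the repository, so treat this mapping as an assumption), one common pattern is a dict of per-column fitted `LabelEncoder`s that is saved with the model and re-applied to new data before scoring:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

train = pd.DataFrame({"job": ["admin", "technician", "admin"],
                      "marital": ["married", "single", "single"]})
new_data = pd.DataFrame({"job": ["technician", "admin"],
                         "marital": ["single", "married"]})

# Fit one encoder per character column at training time (hypothetical shape of "d")
d = {col: LabelEncoder().fit(train[col]) for col in train.columns}

# At scoring time, transform the new data with the saved encoders
encoded = new_data.apply(lambda s: d[s.name].transform(s))
print(encoded)
```

Note that unseen categories in new data would raise an error with this approach, which is one reason to persist the fitted encoders alongside the classifier rather than re-fitting at scoring time.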
Score New data — Deployment
For scoring, we need to load our model object (clf) and the label encoder object back to the python environment.
# Use the code to load the model
from sklearn.externals import joblib
filename = 'final_model.model'
d, clf = joblib.load(filename)
Then, we load our new dataset and pass to the scoring macro.
def score_new(features, clf):
    score = pd.DataFrame(clf.predict_proba(features)[:, 1], columns = ['SCORE'])
    score['DECILE'] = pd.qcut(score['SCORE'].rank(method = 'first'), 10, labels = range(10, 0, -1))
    score['DECILE'] = score['DECILE'].astype(float)
    return score
And we call the macro using the code below
scores = score_new(new_score_data,clf)
That’s it. We have scored our new data. This article provides a high level overview of the technical codes. If you need to discuss anything in particular or you have feedback on any of the modules please leave a comment or reach out to me via LinkedIn.
Have fun!
I released a Python package that performs some of the tasks mentioned in this article: WOE and IV, bivariate charts, and variable selection. If you are interested in using the package version, read the article below.
|
[
{
"code": null,
"e": 573,
"s": 172,
"text": "Predictive modeling is always a fun task. The major time spent is to understand what the business needs and then frame your problem. The next step is to tailor the solution to the needs. As we solve many problems, we understand that a framework can be used to build our first cut models. Not only this framework gives you faster results, it also helps you to plan for next steps based on the results."
},
{
"code": null,
"e": 780,
"s": 573,
"text": "In this article, we will see how a Python based framework can be applied to a variety of predictive modeling tasks. This will cover/touch upon most of the areas in the CRISP-DM process. So what is CRISP-DM?"
},
{
"code": null,
"e": 797,
"s": 780,
"text": "en.wikipedia.org"
},
{
"code": null,
"e": 966,
"s": 797,
"text": "Here is the link to the code. In this article, I skipped a lot of code for the purpose of brevity. Please follow the Github code on the side while reading this article."
},
{
"code": null,
"e": 977,
"s": 966,
"text": "github.com"
},
{
"code": null,
"e": 1113,
"s": 977,
"text": "The framework discussed in this article are spread into 9 different areas and I linked them to where they fall in the CRISP DM process."
},
{
"code": null,
"e": 1147,
"s": 1113,
"text": "Load Dataset — Data Understanding"
},
{
"code": null,
"e": 1198,
"s": 1147,
"text": "import pandas as pddf = pd.read_excel(\"bank.xlsx\")"
},
{
"code": null,
"e": 1237,
"s": 1198,
"text": "Data Transformation — Data Preparation"
},
{
"code": null,
"e": 1481,
"s": 1237,
"text": "Now, we have our dataset in a pandas dataframe. Next, we look at the variable descriptions and the contents of the dataset using df.info() and df.head() respectively. The target variable (‘Yes’/’No’) is converted to (1/0) using the code below."
},
{
"code": null,
"e": 1544,
"s": 1481,
"text": "df['target'] = df['y'].apply(lambda x: 1 if x == 'yes' else 0)"
},
{
"code": null,
"e": 1583,
"s": 1544,
"text": "Descriptive Stats — Data Understanding"
},
{
"code": null,
"e": 1796,
"s": 1583,
"text": "Exploratory statistics help a modeler understand the data better. A couple of these stats are available in this framework. First, we check the missing values in each column in the dataset by using the below code."
},
{
"code": null,
"e": 1848,
"s": 1796,
"text": "df.isnull().mean().sort_values(ascending=False)*100"
},
{
"code": null,
"e": 1921,
"s": 1848,
"text": "Second, we check the correlation between variables using the code below."
},
{
"code": null,
"e": 2092,
"s": 1921,
"text": "import seaborn as snsimport matplotlib.pyplot as plt%matplotlib inlinecorr = df.corr()sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns)"
},
{
"code": null,
"e": 2258,
"s": 2092,
"text": "Finally, in the framework, I included a binning algorithm that automatically bins the input variables in the dataset and creates a bivariate plot (inputs vs target)."
},
{
"code": null,
"e": 2928,
"s": 2258,
"text": "bar_color = '#058caa'num_color = '#ed8549'final_iv,_ = data_vars(df1,df1['target'])final_iv = final_iv[(final_iv.VAR_NAME != 'target')]grouped = final_iv.groupby(['VAR_NAME'])for key, group in grouped: ax = group.plot('MIN_VALUE','EVENT_RATE',kind='bar',color=bar_color,linewidth=1.0,edgecolor=['black']) ax.set_title(str(key) + \" vs \" + str('target')) ax.set_xlabel(key) ax.set_ylabel(str('target') + \" %\") rects = ax.patches for rect in rects: height = rect.get_height() ax.text(rect.get_x()+rect.get_width()/2., 1.01*height, str(round(height*100,1)) + '%', ha='center', va='bottom', color=num_color, fontweight='bold')"
},
{
"code": null,
"e": 2966,
"s": 2928,
"text": "Variable Selection — Data Preparation"
},
{
"code": null,
"e": 3294,
"s": 2966,
"text": "Please read my article below on variable selection process which is used in this framework. The variables are selected based on a voting system. We use different algorithms to select features and then finally each algorithm votes for their selected feature. The final vote count is used to select the best feature for modeling."
},
{
"code": null,
"e": 3305,
"s": 3294,
"text": "medium.com"
},
{
"code": null,
"e": 3322,
"s": 3305,
"text": "Model — Modeling"
},
{
"code": null,
"e": 3501,
"s": 3322,
"text": "80% of the predictive model work is done so far. To complete the rest 20%, we split our dataset into train/test and try a variety of algorithms on the data and pick the best one."
},
{
"code": null,
"e": 3821,
"s": 3501,
"text": "from sklearn.cross_validation import train_test_splittrain, test = train_test_split(df1, test_size = 0.4)train = train.reset_index(drop=True)test = test.reset_index(drop=True)features_train = train[list(vif['Features'])]label_train = train['target']features_test = test[list(vif['Features'])]label_test = test['target']"
},
{
"code": null,
"e": 4158,
"s": 3821,
"text": "We apply different algorithms on the train dataset and evaluate the performance on the test data to make sure the model is stable. The framework includes codes for Random Forest, Logistic Regression, Naive Bayes, Neural Network and Gradient Boosting. We can add other models based on our needs. The Random forest code is provided below."
},
{
"code": null,
"e": 4778,
"s": 4158,
"text": "from sklearn.ensemble import RandomForestClassifierclf = RandomForestClassifier()clf.fit(features_train,label_train)pred_train = clf.predict(features_train)pred_test = clf.predict(features_test)from sklearn.metrics import accuracy_scoreaccuracy_train = accuracy_score(pred_train,label_train)accuracy_test = accuracy_score(pred_test,label_test)from sklearn import metricsfpr, tpr, _ = metrics.roc_curve(np.array(label_train), clf.predict_proba(features_train)[:,1])auc_train = metrics.auc(fpr,tpr)fpr, tpr, _ = metrics.roc_curve(np.array(label_test), clf.predict_proba(features_test)[:,1])auc_test = metrics.auc(fpr,tpr)"
},
{
"code": null,
"e": 4812,
"s": 4778,
"text": "Hyper parameter Tuning — Modeling"
},
{
"code": null,
"e": 4935,
"s": 4812,
"text": "In addition, the hyperparameters of the models can be tuned to improve the performance as well. Here is a code to do that."
},
{
"code": null,
"e": 5796,
"s": 4935,
"text": "from sklearn.model_selection import RandomizedSearchCVfrom sklearn.ensemble import RandomForestClassifiern_estimators = [int(x) for x in np.linspace(start = 10, stop = 500, num = 10)]max_features = ['auto', 'sqrt']max_depth = [int(x) for x in np.linspace(3, 10, num = 1)]max_depth.append(None)min_samples_split = [2, 5, 10]min_samples_leaf = [1, 2, 4]bootstrap = [True, False]random_grid = {'n_estimators': n_estimators, 'max_features': max_features, 'max_depth': max_depth, 'min_samples_split': min_samples_split, 'min_samples_leaf': min_samples_leaf, 'bootstrap': bootstrap}rf = RandomForestClassifier()rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 10, cv = 2, verbose=2, random_state=42, n_jobs = -1)rf_random.fit(features_train, label_train)"
},
{
"code": null,
"e": 5843,
"s": 5796,
"text": "Final Model and Model Performance — Evaluation"
},
{
"code": null,
"e": 6235,
"s": 5843,
"text": "The final model that gives us the better accuracy values is picked for now. However, we are not done yet. We need to evaluate the model performance based on a variety of metrics. The framework contain codes that calculate cross-tab of actual vs predicted values, ROC Curve, Deciles, KS statistic, Lift chart, Actual vs predicted chart, Gains chart. We will go through each one of them below."
},
{
"code": null,
"e": 6244,
"s": 6235,
"text": "Crosstab"
},
{
"code": null,
"e": 6253,
"s": 6244,
"text": "Crosstab"
},
{
"code": null,
"e": 6338,
"s": 6253,
"text": "pd.crosstab(label_train,pd.Series(pred_train),rownames=['ACTUAL'],colnames=['PRED'])"
},
{
"code": null,
"e": 6370,
"s": 6338,
"text": "2. ROC/AUC curve or c-statistic"
},
{
"code": null,
"e": 6912,
"s": 6370,
"text": "from bokeh.charts import Histogramfrom ipywidgets import interactfrom bokeh.plotting import figurefrom bokeh.io import push_notebook, show, output_notebookoutput_notebook()from sklearn import metricspreds = clf.predict_proba(features_train)[:,1]fpr, tpr, _ = metrics.roc_curve(np.array(label_train), preds)auc = metrics.auc(fpr,tpr)p = figure(title=\"ROC Curve - Train data\")r = p.line(fpr,tpr,color='#0077bc',legend = 'AUC = '+ str(round(auc,3)), line_width=2)s = p.line([0,1],[0,1], color= '#d15555',line_dash='dotdash',line_width=2)show(p)"
},
{
"code": null,
"e": 6966,
"s": 6912,
"text": "3. Decile Plots and Kolmogorov Smirnov (KS) Statistic"
},
{
"code": null,
"e": 7093,
"s": 6966,
"text": "A macro is executed in the backend to generate the plot below. And the number highlighted in yellow is the KS-statistic value."
},
{
"code": null,
"e": 7148,
"s": 7093,
"text": "deciling(scores_train,['DECILE'],'TARGET','NONTARGET')"
},
{
"code": null,
"e": 7202,
"s": 7148,
"text": "4. Lift chart, Actual vs predicted chart, Gains chart"
},
{
"code": null,
"e": 7272,
"s": 7202,
"text": "Similar to decile plots, a macro is used to generate the plots below."
},
{
"code": null,
"e": 7318,
"s": 7272,
"text": "gains(lift_train,['DECILE'],'TARGET','SCORE')"
},
{
"code": null,
"e": 7357,
"s": 7318,
"text": "Save Model for future use — Deployment"
},
{
"code": null,
"e": 7570,
"s": 7357,
"text": "Finally, we developed our model and evaluated all the different metrics and now we are ready to deploy model in production. The last step before deployment is to save our model which is done using the code below."
},
{
"code": null,
"e": 7684,
"s": 7570,
"text": "import pandasfrom sklearn.externals import joblibfilename = 'final_model.model'i = [d,clf]joblib.dump(i,filename)"
},
{
"code": null,
"e": 7813,
"s": 7684,
"text": "Here, “clf” is the model classifier object and “d” is the label encoder object used to transform character to numeric variables."
},
{
"code": null,
"e": 7841,
"s": 7813,
"text": "Score New data — Deployment"
},
{
"code": null,
"e": 7954,
"s": 7841,
"text": "For scoring, we need to load our model object (clf) and the label encoder object back to the python environment."
},
{
"code": null,
"e": 8080,
"s": 7954,
"text": "# Use the code to load the modelfilename = 'final_model.model'from sklearn.externals import joblibd,clf=joblib.load(filename)"
},
{
"code": null,
"e": 8141,
"s": 8080,
"text": "Then, we load our new dataset and pass to the scoring macro."
},
{
"code": null,
"e": 8410,
"s": 8141,
"text": "def score_new(features,clf): score = pd.DataFrame(clf.predict_proba(features)[:,1], columns = ['SCORE']) score['DECILE'] = pd.qcut(score['SCORE'].rank(method = 'first'),10,labels=range(10,0,-1)) score['DECILE'] = score['DECILE'].astype(float) return(score)"
},
{
"code": null,
"e": 8453,
"s": 8410,
"text": "And we call the macro using the code below"
},
{
"code": null,
"e": 8492,
"s": 8453,
"text": "scores = score_new(new_score_data,clf)"
},
{
"code": null,
"e": 8745,
"s": 8492,
"text": "That’s it. We have scored our new data. This article provides a high level overview of the technical codes. If you need to discuss anything in particular or you have feedback on any of the modules please leave a comment or reach out to me via LinkedIn."
},
{
"code": null,
"e": 8755,
"s": 8745,
"text": "Have fun!"
}
] |
Electron - Inter Process Communication
|
Electron provides us with 2 IPC (Inter Process Communication) modules called ipcMain and ipcRenderer.
The ipcMain module is used to communicate asynchronously from the main process to renderer processes. When used in the main process, the module handles asynchronous and synchronous messages sent from a renderer process (web page). The messages sent from a renderer will be emitted to this module.
The ipcRenderer module is used to communicate asynchronously from a renderer process to the main process. It provides a few methods so you can send synchronous and asynchronous messages from the renderer process (web page) to the main process. You can also receive replies from the main process.
We will create a main process and a renderer process that will send each other messages using the above modules.
Create a new file called main_process.js with the following contents −
const {app, BrowserWindow} = require('electron')
const url = require('url')
const path = require('path')
const {ipcMain} = require('electron')
let win
function createWindow() {
win = new BrowserWindow({width: 800, height: 600})
win.loadURL(url.format ({
pathname: path.join(__dirname, 'index.html'),
protocol: 'file:',
slashes: true
}))
}
// Event handler for asynchronous incoming messages
ipcMain.on('asynchronous-message', (event, arg) => {
console.log(arg)
// Event emitter for sending asynchronous messages
event.sender.send('asynchronous-reply', 'async pong')
})
// Event handler for synchronous incoming messages
ipcMain.on('synchronous-message', (event, arg) => {
console.log(arg)
   // Synchronous event emission
event.returnValue = 'sync pong'
})
app.on('ready', createWindow)
Now create a new index.html file and add the following code in it.
<!DOCTYPE html>
<html>
<head>
<meta charset = "UTF-8">
<title>Hello World!</title>
</head>
<body>
<script>
const {ipcRenderer} = require('electron')
         // Synchronous message emitter and handler
console.log(ipcRenderer.sendSync('synchronous-message', 'sync ping'))
// Async message handler
ipcRenderer.on('asynchronous-reply', (event, arg) => {
console.log(arg)
})
// Async message sender
ipcRenderer.send('asynchronous-message', 'async ping')
</script>
</body>
</html>
Run the app using the following command −
$ electron ./main_process.js
The above command will generate the following output −
// On your app console
Sync Pong
Async Pong
// On your terminal where you ran the app
Sync Ping
Async Ping
It is recommended not to perform heavy or blocking computations in the renderer process. Always use IPC to delegate such tasks to the main process; this helps keep your application responsive.
VB.Net - Send Email
VB.Net allows sending e-mails from your application. The System.Net.Mail namespace contains classes used for sending e-mails to a Simple Mail Transfer Protocol (SMTP) server for delivery.
The following table lists some of these commonly used classes −
Attachment
Represents an attachment to an e-mail.
AttachmentCollection
Stores attachments to be sent as part of an e-mail message.
MailAddress
Represents the address of an electronic mail sender or recipient.
MailAddressCollection
Stores e-mail addresses that are associated with an e-mail message.
MailMessage
Represents an e-mail message that can be sent using the SmtpClient class.
SmtpClient
Allows applications to send e-mail by using the Simple Mail Transfer Protocol (SMTP).
SmtpException
Represents the exception that is thrown when the SmtpClient is not able to complete a Send or SendAsync operation.
The SmtpClient class allows applications to send e-mail by using the Simple Mail Transfer Protocol (SMTP).
Following are some commonly used properties of the SmtpClient class −
ClientCertificates
Specifies which certificates should be used to establish the Secure Sockets Layer (SSL) connection.
Credentials
Gets or sets the credentials used to authenticate the sender.
EnableSsl
Specifies whether the SmtpClient uses Secure Sockets Layer (SSL) to encrypt the connection.
Host
Gets or sets the name or IP address of the host used for SMTP transactions.
Port
Gets or sets the port used for SMTP transactions.
Timeout
Gets or sets a value that specifies the amount of time after which a synchronous Send call times out.
UseDefaultCredentials
Gets or sets a Boolean value that controls whether the DefaultCredentials are sent with requests.
Following are some commonly used methods of the SmtpClient class −
Dispose
Sends a QUIT message to the SMTP server, gracefully ends the TCP connection, and releases all resources used by the current instance of the SmtpClient class.
Dispose(Boolean)
Sends a QUIT message to the SMTP server, gracefully ends the TCP connection, releases all resources used by the current instance of the SmtpClient class, and optionally disposes of the managed resources.
OnSendCompleted
Raises the SendCompleted event.
Send(MailMessage)
Sends the specified message to an SMTP server for delivery.
Send(String, String, String, String)
Sends the specified e-mail message to an SMTP server for delivery. The message sender, recipients, subject, and message body are specified using String objects.
SendAsync(MailMessage, Object)
Sends the specified e-mail message to an SMTP server for delivery. This method does not block the calling thread and allows the caller to pass an object to the method that is invoked when the operation completes.
SendAsync(String, String, String, String, Object)
Sends an e-mail message to an SMTP server for delivery. The message sender, recipients, subject, and message body are specified using String objects. This method does not block the calling thread and allows the caller to pass an object to the method that is invoked when the operation completes.
SendAsyncCancel
Cancels an asynchronous operation to send an e-mail message.
SendMailAsync(MailMessage)
Sends the specified message to an SMTP server for delivery as an asynchronous operation.
SendMailAsync(String, String, String, String)
Sends the specified message to an SMTP server for delivery as an asynchronous operation. The message sender, recipients, subject, and message body are specified using String objects.
ToString
Returns a string that represents the current object.
The following example demonstrates how to send mail using the SmtpClient class. Following points are to be noted in this respect −
You must specify the SMTP host server that you use to send e-mail. The Host and Port properties will be different for different host server. We will be using gmail server.
You need to give the Credentials for authentication, if required by the SMTP server.
You should also provide the email address of the sender and the e-mail address or addresses of the recipients using the MailMessage.From and MailMessage.To properties, respectively.
You should also specify the message content using the MailMessage.Body property.
In this example, let us create a simple application that would send an e-mail. Take the following steps −
Add three labels, three text boxes and a button control in the form.
Change the text properties of the labels to 'From:', 'To:' and 'Message:' respectively.
Change the name properties of the texts to txtFrom, txtTo and txtMessage respectively.
Change the text property of the button control to 'Send'
Add the following code in the code editor.
Imports System.Net.Mail
Public Class Form1
Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
' Set the caption bar text of the form.
Me.Text = "tutorialspoint.com"
End Sub
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
Try
Dim Smtp_Server As New SmtpClient
Dim e_mail As New MailMessage()
Smtp_Server.UseDefaultCredentials = False
Smtp_Server.Credentials = New Net.NetworkCredential("[email protected]", "password")
Smtp_Server.Port = 587
Smtp_Server.EnableSsl = True
Smtp_Server.Host = "smtp.gmail.com"
e_mail = New MailMessage()
e_mail.From = New MailAddress(txtFrom.Text)
e_mail.To.Add(txtTo.Text)
e_mail.Subject = "Email Sending"
e_mail.IsBodyHtml = False
e_mail.Body = txtMessage.Text
Smtp_Server.Send(e_mail)
MsgBox("Mail Sent")
Catch error_t As Exception
MsgBox(error_t.ToString)
End Try
   End Sub
End Class
You must provide your gmail address and real password for credentials.
When the above code is executed and run using Start button available at the Microsoft Visual Studio tool bar, it will show the following window, which you will use to send your e-mails, try it yourself.
Creating and Managing Elasticsearch Indices with Python | by Niels D. Goet | Towards Data Science
Elasticsearch (ES) is a distributed search engine that is designed for scalability and redundancy. It is fast, and it is suited for storing and handling large volumes of data for analytics, machine learning, and other applications. ES has gained traction in recent years, and is an important technology for any data scientist’s (and data engineer’s) toolbox. In this blog post, we’ll provision our own (free) ES cluster with Bonsai, and use the Python Elasticsearch Client to set up an index, and write and query data. All code for this blog post is available on GitHub.
In a first step, we’ll set up an Elasticsearch cluster. You can do this through a managed Elasticsearch service provider such as AWS’ Amazon Elasticsearch Service, Bonsai, or Elastic.co. How you decide to provision, deploy, and manage your cluster (and where) depends on what kind of solution you’re building and what features are important to you.
Here, we'll go with Bonsai, which gives us a free “Sandbox” tier with 125MB of storage and a 35,000-document limit. To get started, sign up for an account on the Bonsai website and follow the steps there to set up and deploy your cluster. If you already have an account on Heroku, you can also provision an ES cluster for your app directly by selecting Bonsai as an add-on in the Resources tab of your app, without the need for signing up for an account on Bonsai.
After you've set up your Bonsai account and deployed your cluster (or provisioned your Bonsai cluster for your app via Heroku), navigate to the “Credentials” page in the left sidebar of your Bonsai Cluster overview dashboard and copy your host URL, access key, and access secret. We'll use this information later to set up a connection using the Python Elasticsearch Client.
Bonsai’s Sandbox tier is not a production-level deployment and comes with several limitations. For example, the Sandbox tier does not allow you to set up users with limited permissions (e.g. read access only), so keep that in mind if you want to use this setup for an application that is exposed to the Internet. Elastic.co offers a 14-day free trial that you can use instead if you’d like to explore the full set of features that ES has to offer (including, e.g., cluster and index privileges).
After you’ve completed the above steps, you can inspect your cluster health and indices from the console using Elasticsearch’s Compact and Aligned Text (CAT) APIs. The CAT APIs are designed to give you human-readable output. They are therefore not intended to be used by applications; rather they should be used from terminal or in the Kibana console.
On the console (both on Kibana and on Bonsai directly), the following GET request will return information on cluster health (a “green” status means that all your shards are assigned):
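A minimal version of that request, in the console syntax accepted by both Kibana and the Bonsai console:

```
GET /_cluster/health
```

The JSON response includes the status field ("green", "yellow" or "red") alongside node and shard counts.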
Now that we have a cluster up and running, let’s look at the fundamentals of data and indices in ES.
Data in Elasticsearch is in the form of JSON objects called “documents”. A document contains fields, which are key-value pairs that can be a value (e.g. a boolean or an integer) or a nested structure. Documents are organised in indices, which are both logical groupings of data that follow a schema as well as the physical organisation of the data through shards.
Each index consists of one or more physical shards that form a logical group. Each shard, in turn, is a “self-contained index”. The data in an index is partitioned across these shards, and shards are distributed across nodes. Each document has a “primary shard” and a “replica shard” (that is, if the index has been configured to have one or more replica shards).
Data that resides in different shards can be processed in parallel. To the extent that your CPU and memory allow (i.e., depending on the kind of ES cluster you have), more shards means faster search. Note however that shards come with overhead, and that you should carefully consider the appropriate number and size of the shards in your cluster.
Now that we’ve gone over the basics of ES, let’s take a stab at setting up an index of our own. Here, we’ll use data on Netflix shows available on Kaggle. This dataset is in CSV format and contains information on movies and series available on Netflix, including metadata such as their release date, their title, and cast.
We’ll first define a ES mapping for our index. In this case, we’ll go with relatively straightforward field types, defining all fields as text, with the exception of release_year (which we’ll make an integer).
While Elasticsearch is able to infer the mapping of your documents when you write them, using dynamic field mapping, it does not necessarily do so optimally. Typically, you’ll want to spend some time on defining your mapping because the field types (as well as various other options) impact the size of your index and the flexibility you’ll have in querying data. For example, the text field type is broken down into individual terms upon indexing, allowing for partial matching. The keyword type can be used for strings too, but is not analysed (or rather: “tokenised”) when indexing and only allows for exact matching (in this example, we could have used it for the type field, which takes one of two values — “Movie” or “TV Show”).
Another thing to keep in mind here is that ES does not have an array field type: if your document includes a field that consists of an array of, say, integers, the ES equivalent would be the integer data type (for a complete overview of ES field types, see this page). Take for example the following document:
{
    "values": [1, 2, 3, 4, 5]
}
The underlying mapping for this data would simply define the values field type to be an integer:
mapping = {
    "mappings": {
        "properties": {
            "values": {
                "type": "integer"
            }
        }
    }
}
Now that we’ve defined our mapping, we can set up our index on ES using the Python Elasticsearch Client. In the Python code below, I’ve set up a class called EsConnection, with an ES client connection attribute (es_client). Note that I’m storing my access credentials in environment variables here for convenience (and to avoid accidentally sharing them when, e.g., pushing code to GitHub). For real applications you’ll have to use a more secure approach, like AWS Secrets Manager.
The EsConnection class also includes two methods, one to create an index based on a mapping (create_index), and another (populate_index) to take a CSV file, convert its rows to JSONs, and write each entry (“document”) to an ES index. Here, we’ll only use the first function, to create our netflix_movies index:
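The class described above might be sketched as follows (this is not the exact GitHub code; the deferred import, environment-variable names, and client arguments are assumptions):

```python
import csv
import os


class EsConnection:
    """Wrapper around the Python Elasticsearch client (sketch, not the exact repo code)."""

    def __init__(self):
        # Deferred import so the sketch can be read without the package installed;
        # credentials are read from environment variables, as discussed above.
        from elasticsearch import Elasticsearch
        self.es_client = Elasticsearch(
            [os.environ["BONSAI_HOST"]],
            http_auth=(os.environ["BONSAI_ACCESS_KEY"], os.environ["BONSAI_ACCESS_SECRET"]),
        )

    def create_index(self, index_name: str, mapping: dict) -> None:
        # Create an index with an explicit mapping (elasticsearch-py 7.x API).
        self.es_client.indices.create(index=index_name, body=mapping)

    @staticmethod
    def row_to_doc(row: dict) -> dict:
        # Convert one CSV row into a JSON-serialisable document; release_year is
        # cast to int so it matches the integer field type in the mapping.
        doc = dict(row)
        doc["release_year"] = int(doc["release_year"])
        return doc

    def populate_index(self, index_name: str, csv_path: str) -> None:
        # Write each CSV row to the index as a separate document.
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                self.es_client.index(index=index_name, body=self.row_to_doc(row))
```

Creating the netflix_movies index then comes down to instantiating the class and calling create_index with the index name and the mapping dictionary.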
We can now inspect the mapping for our index using the following code, which should return the mapping that we have just written:
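That inspection is a single client call; a sketch (assuming an elasticsearch-py 7.x client object es_client):

```python
def get_index_mapping(es_client, index_name: str) -> dict:
    # Returns the mapping stored for the index, keyed by index name.
    return es_client.indices.get_mapping(index=index_name)
```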
Now that we've defined our mapping, we can write the Netflix movies data to our ES index, using the following code (note that this code expects the Netflix data from Kaggle discussed earlier in this post to be available in a subdirectory called "data"):
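One way to sketch that step is with a generator of bulk actions, which could then be handed to elasticsearch.helpers.bulk (the helper choice and file layout are assumptions):

```python
import csv


def stream_docs(csv_path: str, index_name: str):
    # Yield one bulk action per CSV row; field names follow the CSV header.
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield {"_index": index_name, "_source": dict(row)}

# Usage against a live cluster (sketch):
#   from elasticsearch.helpers import bulk
#   bulk(es_client, stream_docs("data/netflix_titles.csv", "netflix_movies"))
```

Bulk indexing batches many documents per request, which is considerably faster than one POST per document for a dataset of this size.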
While this code is running, we can check whether the data is coming through by reviewing the logs in the Bonsai dashboard. The logs should show POST requests to the netflix_movies index.
We can also check the number of entries in the netflix_movies index using the count method, e.g.:
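A sketch of that check (assuming an elasticsearch-py 7.x client object es_client):

```python
def document_count(es_client, index_name: str) -> int:
    # es.count returns a dict whose "count" key holds the number of documents
    # in the index; the rest of the response (shard info) is ignored here.
    return es_client.count(index=index_name)["count"]
```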
Finally, we can use the CAT indices API in your Kibana console to check the number of documents in your indices, their size, the number of primary and replication shards, and their health status:
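The request in console syntax:

```
GET /_cat/indices?v
```

The v parameter adds a header row naming the columns (health, status, index, docs.count, store.size, pri.store.size, and so on).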
In the resulting output, the store.size field shows the size taken by your primary and replication shards combined, while pri.store.size only shows the size of your primary shard(s). The store.size in this case will be 2x the primary shard size, since our shard health is “green”, which means that the replica shards were properly assigned.
The Python Elasticsearch client can also be used directly with the CAT API, if you’d prefer to use Python throughout.
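For example, the equivalent of the console call can be sketched as follows (assuming an elasticsearch-py 7.x client, whose CAT methods accept the console API's query parameters):

```python
def cat_indices(es_client) -> str:
    # Returns the same tab-aligned, human-readable text as the console;
    # v=True adds the header row with the column names.
    return es_client.cat.indices(v=True)
```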
Now that we have set up our cluster and written our documents, let’s take a brief look at querying data from Elasticsearch. ES queries are written in Elasticsearch Domain Specific Language (DSL). As described in the ES documentation, DSL is based on JSON and should be thought of as an Abstract Syntax Tree (AST) of queries. DSL queries consist of two types of clauses:
leaf query clauses that look for a specific value in a specific field (e.g. a match or range); and
compound query clauses that are used to logically combine multiple queries (such as multiple leaf or compound queries) or to alter the behaviour of these queries.
When you run a query against your index, ES sorts the results by a relevance score (a float) that represents the quality of the match (this value is reported in the _score field associated with a “hit”). ES queries have a query and a filter context. The filter context — as the name implies — filters out documents that do not match the conditions in the syntax. However, it will not affect the relevance score. The match in the bool context does contribute to the relevance score.
Let’s look at an example of a query with a query and a filter context. The example below filters on release year, but runs a match against the movie type.
{
  "query": {
    "bool": {
      "must": [
        {"match": {"type": "TV Show"}}
      ],
      "filter": [
        {"range": {"release_year": {"gte": 2021}}}
      ]
    }
  }
}
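Because DSL queries are JSON, the same query can be written as a plain Python dictionary before being handed to the client:

```python
# The TV-show query from above, expressed as a Python dict.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"type": "TV Show"}}],
            "filter": [{"range": {"release_year": {"gte": 2021}}}],
        }
    }
}
```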
We can run this query on our index directly using our ES Console, or through Python. The example shown below uses the latter approach. This code first sets up the ES client. Subsequently it defines the query, and finally, it runs the query against the netflix_movies ES index. The last line of the code prints the titles of the movies that match our query.
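The search response is a nested dictionary in which matched documents sit under hits.hits, each with its fields in _source. The title-printing step at the end of that code can therefore be sketched as a small helper (the response shape is standard; the field name "title" comes from our mapping):

```python
def extract_titles(search_response):
    """Pull the movie titles out of an Elasticsearch search response."""
    return [hit["_source"]["title"] for hit in search_response["hits"]["hits"]]
```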
Keep in mind that there is a maximum of 10,000 hits per query for a regular search. If you need to extract more than that in a single query (i.e. “deep pagination”), you can use the search_after option (you could also use scroll search, but this is no longer recommended by ES). Also note that since we know that “TV Show” is the value that the type field can take, we may as well have used the filter context. I’ll leave these and other intricacies of DSL for a future blog post.
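A hedged sketch of search_after pagination, written against an injected search function so it can be shown without a live cluster (run_search is a hypothetical callable that wraps es_client.search with a sorted query and returns the list of hits):

```python
def paginate_search_after(run_search):
    """Yield every hit by repeatedly passing the last hit's sort values."""
    search_after = None
    while True:
        hits = run_search(search_after)
        if not hits:
            break  # an empty page means we have seen everything
        yield from hits
        # Each hit carries its sort values; the next page starts after them.
        search_after = hits[-1]["sort"]
```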
And that’s it! We’ve provisioned an ES cluster, inspected our cluster health, defined an index, written documents, and run a simple query against our index. Stay tuned for future blog posts on Elasticsearch, where I’ll dive into the details of DSL and of index store size optimisation.
Thanks for reading!
Support my work: If you liked this article and you’d like to support my work, please consider becoming a paying Medium member via my referral page. The price of the subscription is the same if you sign up via my referral page, but I will receive part of your monthly membership fee.
If you liked this article, here are some other articles you may enjoy:
towardsdatascience.com
towardsdatascience.com
towardsdatascience.com
towardsdatascience.com
Disclaimer: “Elasticsearch” and “Kibana” are trademarks of Elasticsearch BV, registered in the U.S. and in other countries. Description and/or use of any third-party services and/or trademarks in this blog post should not be seen as endorsement for or by their respective rights holders.
Please read this disclaimer carefully before relying on any of the content in my articles on Medium.com.
|
[
{
"code": null,
"e": 743,
"s": 172,
"text": "Elasticsearch (ES) is a distributed search engine that is designed for scalability and redundancy. It is fast, and it is suited for storing and handling large volumes of data for analytics, machine learning, and other applications. ES has gained traction in recent years, and is an important technology for any data scientist’s (and data engineer’s) toolbox. In this blog post, we’ll provision our own (free) ES cluster with Bonsai, and use the Python Elasticsearch Client to set up an index, and write and query data. All code for this blog post is available on GitHub."
},
{
"code": null,
"e": 1092,
"s": 743,
"text": "In a first step, we’ll set up an Elasticsearch cluster. You can do this through a managed Elasticsearch service provider such as AWS’ Amazon Elasticsearch Service, Bonsai, or Elastic.co. How you decide to provision, deploy, and manage your cluster (and where) depends on what kind of solution you’re building and what features are important to you."
},
{
"code": null,
"e": 1579,
"s": 1092,
"text": "Here, we’ll go with Bonsai, which gives us a free “Sandbox” tier with 125mb of storage and a 35,000 documents storage limit. To get started, sign up for an account on the Bonsai and follow the steps on the website to set up and deploy your cluster. If you already have an account on Heroku, you can also provision an ES cluster for your app directly by selecting Bonsai as an add on in the Resources tab of your app, without the need for signing up for an account on Bonsai (see below)."
},
{
"code": null,
"e": 1957,
"s": 1579,
"text": "After you’ve set up your Bonsai account and deployed your cluster (or provisioned your Bonsai cluster for your app via Heroku), nagivate to the “Credentials” page in in the left sidebar of your Bonsai Cluster overview dashboard and copy your host URL, access key, and access secret. We’ll use this information later to set up a connection using the Python Elasticsearch Client."
},
{
"code": null,
"e": 2453,
"s": 1957,
"text": "Bonsai’s Sandbox tier is not a production-level deployment and comes with several limitations. For example, the Sandbox tier does not allow you to set up users with limited permissions (e.g. read access only), so keep that in mind if you want to use this setup for an application that is exposed to the Internet. Elastic.co offers a 14-day free trial that you can use instead if you’d like to explore the full set of features that ES has to offer (including, e.g., cluster and index privileges)."
},
{
"code": null,
"e": 2805,
"s": 2453,
"text": "After you’ve completed the above steps, you can inspect your cluster health and indices from the console using Elasticsearch’s Compact and Aligned Text (CAT) APIs. The CAT APIs are designed to give you human-readable output. They are therefore not intended to be used by applications; rather they should be used from terminal or in the Kibana console."
},
{
"code": null,
"e": 2989,
"s": 2805,
"text": "On the console (both on Kibana and on Bonsai directly), the following GET request will return information on cluster health (a “green” status means that all your shards are assigned):"
},
{
"code": null,
"e": 3090,
"s": 2989,
"text": "Now that we have a cluster up and running, let’s look at the fundamentals of data and indices in ES."
},
{
"code": null,
"e": 3454,
"s": 3090,
"text": "Data in Elasticsearch is in the form of JSON objects called “documents”. A document contains fields, which are key-value pairs that can be a value (e.g. a boolean or an integer) or a nested structure. Documents are organised in indices, which are both logical groupings of data that follow a schema as well as the physical organisation of the data through shards."
},
{
"code": null,
"e": 3818,
"s": 3454,
"text": "Each index consists of one or more physical shards that form a logical group. Each shard, in turn, is a “self-contained index”. The data in an index is partitioned across these shards, and shards are distributed across nodes. Each document has a “primary shard” and a “replica shard” (that is, if the index has been configured to have one or more replica shards)."
},
{
"code": null,
"e": 4165,
"s": 3818,
"text": "Data that resides in different shards can be processed in parallel. To the extent that your CPU and memory allow (i.e., depending on the kind of ES cluster you have), more shards means faster search. Note however that shards come with overhead, and that you should carefully consider the appropriate number and size of the shards in your cluster."
},
{
"code": null,
"e": 4488,
"s": 4165,
"text": "Now that we’ve gone over the basics of ES, let’s take a stab at setting up an index of our own. Here, we’ll use data on Netflix shows available on Kaggle. This dataset is in CSV format and contains information on movies and series available on Netflix, including metadata such as their release date, their title, and cast."
},
{
"code": null,
"e": 4698,
"s": 4488,
"text": "We’ll first define a ES mapping for our index. In this case, we’ll go with relatively straightforward field types, defining all fields as text, with the exception of release_year (which we’ll make an integer)."
},
{
"code": null,
"e": 5433,
"s": 4698,
"text": "While Elasticsearch is able to infer the mapping of your documents when you write them, using dynamic field mapping, it does not necessarily do so optimally. Typically, you’ll want to spend some time on defining your mapping because the field types (as well as various other options) impact the size of your index and the flexibility you’ll have in querying data. For example, the text field type is broken down into individual terms upon indexing, allowing for partial matching. The keyword type can be used for strings too, but is not analysed (or rather: “tokenised”) when indexing and only allows for exact matching (in this example, we could have used it for the type field, which takes one of two values — “Movie” or “TV Show”)."
},
{
"code": null,
"e": 5743,
"s": 5433,
"text": "Another thing to keep in mind here is that ES does not have an array field type: if your document includes a field that consists of an array of, say, integers, the ES equivalent would be the integer data type (for a complete overview of ES field types, see this page). Take for example the following document:"
},
{
"code": null,
"e": 5771,
"s": 5743,
"text": "{ \"values\": [1,2,3,4,5]}"
},
{
"code": null,
"e": 5868,
"s": 5771,
"text": "The underlying mapping for this data would simply define the values field type to be an integer:"
},
{
"code": null,
"e": 6004,
"s": 5868,
"text": "mapping = { \"mappings\": { \"properties\": { \"values\": { \"type\": \"integer\" } } }}"
},
{
"code": null,
"e": 6486,
"s": 6004,
"text": "Now that we’ve defined our mapping, we can set up our index on ES using the Python Elasticsearch Client. In the Python code below, I’ve set up a class called EsConnection, with an ES client connection attribute (es_client). Note that I’m storing my access credentials in environment variables here for convenience (and to avoid accidentally sharing them when, e.g., pushing code to GitHub). For real applications you’ll have to use a more secure approach, like AWS Secrets Manager."
},
{
"code": null,
"e": 6797,
"s": 6486,
"text": "The EsConnection class also includes two methods, one to create an index based on a mapping (create_index), and another (populate_index) to take a CSV file, convert its rows to JSONs, and write each entry (“document”) to an ES index. Here, we’ll only use the first function, to create our netflix_movies index:"
},
{
"code": null,
"e": 6927,
"s": 6797,
"text": "We can now inspect the mapping for our index using the following code, which should return the mapping that we have just written:"
},
{
"code": null,
"e": 7181,
"s": 6927,
"text": "Now that we’ve defined our mapping, we can write the Netflix movies data to our ES index, using the following code (note that this code expects the Netflix data from Kaggle discussed earlier in this post to be available in a subdirectory called “data\"):"
},
{
"code": null,
"e": 7381,
"s": 7181,
"text": "While this code is running, we can check whether the data is coming through by reviewing the logs in your Bonsai dashboard. The logs should show POST requests to the netflix_movies index (see below)."
},
{
"code": null,
"e": 7479,
"s": 7381,
"text": "We can also check the number of entries in the netflix_movies index using the count method, e.g.:"
},
{
"code": null,
"e": 7675,
"s": 7479,
"text": "Finally, we can use the CAT indices API in your Kibana console to check the number of documents in your indices, their size, the number of primary and replication shards, and their health status:"
},
{
"code": null,
"e": 8018,
"s": 7675,
"text": "In the output shown above, the store.size field shows the size taken by your primary and replication shards combined, while pri.store.size only shows the size of your primary shard(s). The store.size in this case will be 2x the primary shard size, since our shard health is “green”, which means that the replica shards were properly assigned."
},
{
"code": null,
"e": 8136,
"s": 8018,
"text": "The Python Elasticsearch client can also be used directly with the CAT API, if you’d prefer to use Python throughout."
},
{
"code": null,
"e": 8506,
"s": 8136,
"text": "Now that we have set up our cluster and written our documents, let’s take a brief look at querying data from Elasticsearch. ES queries are written in Elasticsearch Domain Specific Language (DSL). As described in the ES documentation, DSL is based on JSON and should be thought of as an Abstract Syntax Tree (AST) of queries. DSL queries consist of two types of clauses:"
},
{
"code": null,
"e": 8767,
"s": 8506,
"text": "leaf query clauses that look for a specific value in a specific field (e.g. a match or range); andcompound query clauses that are used to logically combine multiple queries (such as multiple leaf or compound queries) or to alter the behaviour of these queries."
},
{
"code": null,
"e": 8866,
"s": 8767,
"text": "leaf query clauses that look for a specific value in a specific field (e.g. a match or range); and"
},
{
"code": null,
"e": 9029,
"s": 8866,
"text": "compound query clauses that are used to logically combine multiple queries (such as multiple leaf or compound queries) or to alter the behaviour of these queries."
},
{
"code": null,
"e": 9511,
"s": 9029,
"text": "When you run a query against your index, ES sorts the results by a relevance score (a float) that represents the quality of the match (this value is reported in the _score field associated with a “hit”). ES queries have a query and a filter context. The filter context — as the name implies — filters out documents that do not match the conditions in the syntax. However, it will not affect the relevance score. The match in the bool context does contribute to the relevance score."
},
{
"code": null,
"e": 9666,
"s": 9511,
"text": "Let’s look at an example of a query with a query and a filter context. The example below filters on release year, but runs a match against the movie type."
},
{
"code": null,
"e": 9890,
"s": 9666,
"text": "{ \"query\": { \"bool\": { \"must\": [ {\"match\": {\"type\": \"TV Show\"}}, ], \"filter\": [ {\"range\": {\"release_year\": {\"gte\": 2021}}} ] } }}"
},
{
"code": null,
"e": 10247,
"s": 9890,
"text": "We can run this query on our index directly using our ES Console, or through Python. The example shown below uses the latter approach. This code first sets up the ES client. Subsequently it defines the query, and finally, it runs the query against the netflix_movies ES index. The last line of the code prints the titles of the movies that match our query."
},
{
"code": null,
"e": 10728,
"s": 10247,
"text": "Keep in mind that there is a maximum of 10,000 hits per query for a regular search. If you need to extract more than that in a single query (i.e. “deep pagination”), you can use the search_after option (you could also use scroll search, but this is no longer recommended by ES). Also note that since we know that “TV Show” is the value that the type field can take, we may as well have used the filter context. I’ll leave these and other intricacies of DSL for a future blog post."
},
{
"code": null,
"e": 11014,
"s": 10728,
"text": "And that’s it! We’ve provisioned an ES cluster, inspected our cluster health, defined an index, written documents, and run a simple query against our index. Stay tuned for future blog posts on Elasticsearch, where I’ll dive into the details of DSL and of index store size optimisation."
},
{
"code": null,
"e": 11034,
"s": 11014,
"text": "Thanks for reading!"
},
{
"code": null,
"e": 11317,
"s": 11034,
"text": "Support my work: If you liked this article and you’d like to support my work, please consider becoming a paying Medium member via my referral page. The price of the subscription is the same if you sign up via my referral page, but I will receive part of your monthly membership fee."
},
{
"code": null,
"e": 11388,
"s": 11317,
"text": "If you liked this article, here are some other articles you may enjoy:"
},
{
"code": null,
"e": 11411,
"s": 11388,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 11434,
"s": 11411,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 11457,
"s": 11434,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 11480,
"s": 11457,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 11768,
"s": 11480,
"text": "Disclaimer: “Elasticsearch” and “Kibana” are trademarks of Elasticsearch BV, registered in the U.S. and in other countries. Description and/or use of any third-party services and/or trademarks in this blog post should not be seen as endorsement for or by their respective rights holders."
}
] |
Check whether the given string is a valid identifier in Python
|
Suppose we have a string representing an identifier. We have to check whether it is valid or not. There are a few criteria based on which we can determine whether it is valid:
It must start with underscore '_' or any uppercase or lowercase letters
It does not contain any whitespace
All the subsequent characters after the first one must not consist of any special characters like $, #, % etc.
Only if all three of these conditions hold is the string a valid identifier.
So, if the input is like id = "_hello_56", then the output will be True.
To solve this, we will follow these steps −
if first character in s is not alphabetic and not underscore, then
    return False
for each character ch in s[from index 1 to end], do
    if ch is not alphanumeric and ch is not underscore, then
        return False
return True
Let us see the following implementation to get better understanding −
def solve(s):
if not s[0].isalpha() and s[0] != '_':
return False
for ch in s[1:]:
if not ch.isalnum() and ch != '_':
return False
return True
id = "_hello_56"
print(solve(id))
"_hello_56"
True
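As an aside, Python's standard library already provides this check: str.isidentifier() enforces the same three rules, and keyword.iskeyword() can additionally reject reserved words (which solve() would accept):

```python
import keyword

def is_valid_identifier(s):
    # isidentifier() checks the leading character, whitespace, and
    # special-character rules; iskeyword() rejects words like "class".
    return s.isidentifier() and not keyword.iskeyword(s)

print(is_valid_identifier("_hello_56"))  # True
print(is_valid_identifier("9hello"))     # False
print(is_valid_identifier("class"))      # False
```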
|
[
{
"code": null,
"e": 1243,
"s": 1062,
"text": "Suppose we have a string representing an identifier. We have to check whether it is valid or not. There are few criteria based on which we can determine whether it is valid or not."
},
{
"code": null,
"e": 1315,
"s": 1243,
"text": "It must start with underscore '_' or any uppercase or lowercase letters"
},
{
"code": null,
"e": 1350,
"s": 1315,
"text": "It does not contain any whitespace"
},
{
"code": null,
"e": 1461,
"s": 1350,
"text": "All the subsequent characters after the first one must not consist of any special characters like $, #, % etc."
},
{
"code": null,
"e": 1535,
"s": 1461,
"text": "If all of these three are valid then only the string is valid identifier."
},
{
"code": null,
"e": 1608,
"s": 1535,
"text": "So, if the input is like id = \"_hello_56\", then the output will be True."
},
{
"code": null,
"e": 1652,
"s": 1608,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1731,
"s": 1652,
"text": "if first character in s is not alphabetic and not underscore, thenreturn False"
},
{
"code": null,
"e": 1744,
"s": 1731,
"text": "return False"
},
{
"code": null,
"e": 1864,
"s": 1744,
"text": "for each character ch in s[from index 1 to end], doif ch is not alphanumeric and ch is not underscore, thenreturn False"
},
{
"code": null,
"e": 1933,
"s": 1864,
"text": "if ch is not alphanumeric and ch is not underscore, thenreturn False"
},
{
"code": null,
"e": 1946,
"s": 1933,
"text": "return False"
},
{
"code": null,
"e": 1958,
"s": 1946,
"text": "return True"
},
{
"code": null,
"e": 2028,
"s": 1958,
"text": "Let us see the following implementation to get better understanding −"
},
{
"code": null,
"e": 2038,
"s": 2028,
"text": "Live Demo"
},
{
"code": null,
"e": 2248,
"s": 2038,
"text": "def solve(s):\n if not s[0].isalpha() and s[0] != '_':\n return False\n\n for ch in s[1:]:\n if not ch.isalnum() and ch != '_':\n return False\n\n return True\n\nid = \"_hello_56\"\nprint(solve(id))"
},
{
"code": null,
"e": 2260,
"s": 2248,
"text": "\"_hello_56\""
},
{
"code": null,
"e": 2265,
"s": 2260,
"text": "True"
}
] |
C# | Substring() Method - GeeksforGeeks
|
10 May, 2019
In C#, Substring() is a string method. It is used to retrieve a substring from the current instance of the string. This method can be overloaded by passing the different number of parameters to it as follows:
String.Substring(Int32) Method
String.Substring(Int32, Int32) Method
String.Substring Method (startIndex)
This method is used to retrieve a substring from the current instance of the string. The parameter “startIndex” specifies the starting position of the substring, which then continues to the end of the string.
Syntax:
public string Substring(int startIndex)
Parameter: This method accepts one parameter, “startIndex”, which specifies the starting position of the substring to be retrieved. The type of this parameter is System.Int32.
Return Value: This method will return the substring which begins from startIndex and continues to the end of the string. The return value type is System.String.
Exception: If startIndex is less than zero or greater than the length of the current instance, an ArgumentOutOfRangeException is thrown.
Example:
Input : str = "GeeksForGeeks"
str.Substring(5);
Output: ForGeeks
Input : str = "GeeksForGeeks"
str.Substring(8);
Output: Geeks
Below program illustrate the above-discussed method:
// C# program to demonstrate the
// String.Substring Method (startIndex)
using System;

class Geeks {

    // Main Method
    public static void Main()
    {
        // define string
        String str = "GeeksForGeeks";
        Console.WriteLine("String : " + str);

        // retrieve the substring from index 5
        Console.WriteLine("Sub String1: " + str.Substring(5));

        // retrieve the substring from index 8
        Console.WriteLine("Sub String2: " + str.Substring(8));
    }
}
String : GeeksForGeeks
Sub String1: ForGeeks
Sub String2: Geeks
String.Substring Method (int startIndex, int length)
This method is used to extract a substring that begins at the position described by the parameter startIndex and has a specified length. If startIndex is equal to the length of the string and the parameter length is zero, the method returns an empty substring.
Syntax :
public string Substring(int startIndex, int length)
Parameter: This method accepts two parameters, “startIndex” and “length”. The first parameter specifies the starting position of the substring to be retrieved, and the second specifies the length of the substring. The type of both parameters is System.Int32.
Return Value: This method returns the substring which begins at the specified position and has the specified length. The return value type is System.String.
Exception: This method throws an ArgumentOutOfRangeException in two cases:
if the parameter startIndex or length is less than zero.
if startIndex + length indicates a position which is not within the current instance.
Example:
Input : str = "GeeksForGeeks"
str.Substring(0,8);
Output: GeeksFor
Input : str = "GeeksForGeeks"
str.Substring(5,3);
Output: For
Input : str = "Geeks"
str.Substring(4,0);
Output:
Below program illustrate the above-discussed method:
// C# program to demonstrate the
// String.Substring Method
// (int startIndex, int length)
using System;

class Geeks {

    // Main Method
    public static void Main()
    {
        // define string
        String str = "GeeksForGeeks";
        Console.WriteLine("String : " + str);

        // retrieve the substring from index 0 with length 8
        Console.WriteLine("Sub String1: " + str.Substring(0, 8));

        // retrieve the substring from index 5 with length 3
        Console.WriteLine("Sub String2: " + str.Substring(5, 3));
    }
}
String : GeeksForGeeks
Sub String1: GeeksFor
Sub String2: For
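For readers coming from Python, the closest analogue of Substring is slicing, with two caveats: Substring takes (start, length) while a slice takes (start, stop), and an out-of-range slice returns an empty string rather than throwing:

```python
s = "GeeksForGeeks"

assert s[5:] == "ForGeeks"     # equivalent of str.Substring(5)
assert s[8:] == "Geeks"        # equivalent of str.Substring(8)
assert s[0:8] == "GeeksFor"    # equivalent of str.Substring(0, 8)
assert s[5:5 + 3] == "For"     # Substring(5, 3): stop = start + length
assert "Geeks"[4:4 + 0] == ""  # Substring(4, 0) yields the empty string
```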
References:
https://msdn.microsoft.com/en-us/library/system.string.substring1
https://msdn.microsoft.com/en-us/library/system.string.substring2
Akanksha_Rai
CSharp-method
CSharp-string
C#
|
[
{
"code": null,
"e": 24519,
"s": 24491,
"text": "\n10 May, 2019"
},
{
"code": null,
"e": 24728,
"s": 24519,
"text": "In C#, Substring() is a string method. It is used to retrieve a substring from the current instance of the string. This method can be overloaded by passing the different number of parameters to it as follows:"
},
{
"code": null,
"e": 24796,
"s": 24728,
"text": "String.Substring(Int32) MethodString.Substring(Int32, Int32) Method"
},
{
"code": null,
"e": 24827,
"s": 24796,
"text": "String.Substring(Int32) Method"
},
{
"code": null,
"e": 24865,
"s": 24827,
"text": "String.Substring(Int32, Int32) Method"
},
{
"code": null,
"e": 24902,
"s": 24865,
"text": "String.Substring Method (startIndex)"
},
{
"code": null,
"e": 25122,
"s": 24902,
"text": "This method is used to retrieves a substring from the current instance of the string. The parameter “startIndex” will specify the starting position of substring and then substring will continue to the end of the string."
},
{
"code": null,
"e": 25130,
"s": 25122,
"text": "Syntax:"
},
{
"code": null,
"e": 25171,
"s": 25130,
"text": "public string Substring(int startIndex)\n"
},
{
"code": null,
"e": 25366,
"s": 25171,
"text": "Parameter: This method accept one parameter “startIndex”. This parameter will specify the starting position of the substring which has to be retrieve. The type of this parameter is System.Int32."
},
{
"code": null,
"e": 25527,
"s": 25366,
"text": "Return Value: This method will return the substring which begins from startIndex and continues to the end of the string. The return value type is System.String."
},
{
"code": null,
"e": 25665,
"s": 25527,
"text": "Exception: If startIndex is less than zero or greater than the length of current instance then it will arise ArgumentOutOfRangeException."
},
{
"code": null,
"e": 25674,
"s": 25665,
"text": "Example:"
},
{
"code": null,
"e": 25821,
"s": 25674,
"text": "Input : str = \"GeeksForGeeks\"\n str.Substring(5);\nOutput: ForGeeks\n\nInput : str = \"GeeksForGeeks\"\n str.Substring(8);\nOutput: Geeks\n"
},
{
"code": null,
"e": 25874,
"s": 25821,
"text": "Below program illustrate the above-discussed method:"
},
{
"code": "// C# program to demonstrate the // String.Substring Method (startIndex)using System;class Geeks { // Main Method public static void Main() { // define string String str = \"GeeksForGeeks\"; Console.WriteLine(\"String : \" + str); // retrieve the substring from index 5 Console.WriteLine(\"Sub String1: \" + str.Substring(5)); // retrieve the substring from index 8 Console.WriteLine(\"Sub String2: \" + str.Substring(8)); }}",
"e": 26366,
"s": 25874,
"text": null
},
{
"code": null,
"e": 26434,
"s": 26366,
"text": "String : GeeksForGeeks\nSub String1: ForGeeks\nSub String2: Geeks\n"
},
{
"code": null,
"e": 26487,
"s": 26434,
"text": "String.Substring Method (int startIndex, int length)"
},
{
"code": null,
"e": 26743,
"s": 26487,
"text": "This method is used to extract a substring that begins from specified position describe by parameter startIndex and has a specified length. If startIndex is equal to the length of string and parameter length is zero, then it will return nothing substring."
},
{
"code": null,
"e": 26752,
"s": 26743,
"text": "Syntax :"
},
{
"code": null,
"e": 26805,
"s": 26752,
"text": "public string Substring(int startIndex, int length)\n"
},
{
"code": null,
"e": 27080,
"s": 26805,
"text": "Parameter: This method accept two parameters “startIndex” and length. First parameter will specify the starting position of the substring which has to be retrieve and second parameter will specify the length of the substring. The type of both the parameters is System.Int32."
},
{
"code": null,
"e": 27253,
"s": 27080,
"text": "Return Value: This method will return the substring which begins from specified position and substring will have a specified length. The return value type is System.String."
},
{
"code": null,
"e": 27333,
"s": 27253,
"text": "Exception: This method can arise ArgumentOutOfRangeException in two conditions:"
},
{
"code": null,
"e": 27472,
"s": 27333,
"text": "if the parameters startIndex or length is less than zero.If startIndex + length indicates a position which is not within current instance."
},
{
"code": null,
"e": 27530,
"s": 27472,
"text": "if the parameters startIndex or length is less than zero."
},
{
"code": null,
"e": 27612,
"s": 27530,
"text": "If startIndex + length indicates a position which is not within current instance."
},
{
"code": null,
"e": 27621,
"s": 27612,
"text": "Example:"
},
{
"code": null,
"e": 27831,
"s": 27621,
"text": "Input : str = \"GeeksForGeeks\"\n str.Substring(0,8);\nOutput: GeeksFor\n\nInput : str = \"GeeksForGeeks\"\n str.Substring(5,3);\nOutput: For\n\nInput : str = \"Geeks\"\n str.Substring(4,0);\nOutput: \n"
},
{
"code": null,
"e": 27884,
"s": 27831,
"text": "Below program illustrate the above-discussed method:"
},
{
"code": "// C# program to demonstrate the // String.Substring Method // (int startIndex, int length)using System;class Geeks { // Main Method public static void Main() { // define string String str = \"GeeksForGeeks\"; Console.WriteLine(\"String : \" + str); // retrieve the substring from index 0 to length 8 Console.WriteLine(\"Sub String1: \" + str.Substring(0, 8)); // retrieve the substring from index 5 to length 3 Console.WriteLine(\"Sub String2: \" + str.Substring(5, 3)); }}",
"e": 28425,
"s": 27884,
"text": null
},
{
"code": null,
"e": 28491,
"s": 28425,
"text": "String : GeeksForGeeks\nSub String1: GeeksFor\nSub String2: For\n"
},
{
"code": null,
"e": 28503,
"s": 28491,
"text": "References:"
},
{
"code": null,
"e": 28569,
"s": 28503,
"text": "https://msdn.microsoft.com/en-us/library/system.string.substring1"
},
{
"code": null,
"e": 28635,
"s": 28569,
"text": "https://msdn.microsoft.com/en-us/library/system.string.substring2"
},
{
"code": null,
"e": 28648,
"s": 28635,
"text": "Akanksha_Rai"
},
{
"code": null,
"e": 28662,
"s": 28648,
"text": "CSharp-method"
},
{
"code": null,
"e": 28676,
"s": 28662,
"text": "CSharp-string"
},
{
"code": null,
"e": 28679,
"s": 28676,
"text": "C#"
},
{
"code": null,
"e": 28777,
"s": 28679,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28792,
"s": 28777,
"text": "C# | Delegates"
},
{
"code": null,
"e": 28810,
"s": 28792,
"text": "Destructors in C#"
},
{
"code": null,
"e": 28833,
"s": 28810,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 28851,
"s": 28833,
"text": "C# | Constructors"
},
{
"code": null,
"e": 28882,
"s": 28851,
"text": "Introduction to .NET Framework"
},
{
"code": null,
"e": 28904,
"s": 28882,
"text": "C# | Abstract Classes"
},
{
"code": null,
"e": 28926,
"s": 28904,
"text": "C# | Class and Object"
},
{
"code": null,
"e": 28942,
"s": 28926,
"text": "C# | Data Types"
},
{
"code": null,
"e": 28961,
"s": 28942,
"text": "C# | Encapsulation"
}
] |
Git and GitHub basics for Data Scientists | by Philip Wilkinson | Towards Data Science
|
This year, as Head of Science for the UCL Data Science Society, the society is presenting a series of 20 workshops covering topics such as introduction to Python, a Data Scientists toolkit and Machine learning methods, throughout the academic year. For each of these the aim is to create a series of small blogposts that will outline the main points with links to the full workshop for anyone who wishes to follow along. All of these can be found in our GitHub repository, and will be updated throughout the year with new workshops and challenges.
The eighth workshop in the series is an introduction to Git and GitHub, where we will cover what Git and GitHub are, creating your first repository, making your first commit and then linking this to a remote repository. Further details on branching, secret files and going back to a previous commit can also be found on our GitHub page.
If you have missed any of our previous workshops you can find the last three at the following links:
towardsdatascience.com
towardsdatascience.com
towardsdatascience.com
According to the Git Website:
Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
Including:
Acting as an industrial production standard
For use during a Hackathon
For use in writing a book
For keeping lecture and practical notes
And many other uses ...
Git itself differs from GitHub in that Git is a system that manages the version control of a project, while GitHub is a remote platform that hosts projects using Git. This means that while you can be using the Git system, you don’t necessarily have to use GitHub, unless you want a backup of your repository on another platform.
For interacting with Git and GitHub, we will not be using the Jupyter Notebooks we have used so far, but will instead be interacting with the command line/terminal. For this, specifically in relation to GitHub operations, I tend to prefer using Git Bash, which allows me to interact with GitHub nice and easily, but you can use your own terminal on Mac or Bash on Windows.
In order to create a project we need to create a directory/folder for your project. In the command line we can do that using the mkdir command, followed by the name of the folder, which will create a folder on your machine.
mkdir my-novel
We then navigate to that folder through the command line using the cd command which will allow us to change directory as follows:
cd my-novel
Once we are in that directory then we can initialise the project with Git so that Git can be used to manage the versions of the project using the git init command:
git init
Now, you should have a .git folder in your directory, which should currently be hidden. The project folder you just created is known as the local workspace.
Of course, now that we have created the repository, we need to actually fill it! We can go ahead and create the start of our novel, still using the command line, by using the nano command, which opens a text editor to begin creating a file. In our case we can call this intro.txt as:
nano intro.txt
Where you can go ahead and write your own intro to the novel. Once this has been done, you can use Ctrl+O to save the file and then Ctrl+X to leave the file.
Before we can then commit our changes, to make sure we have a record of our initial work as it goes along, we need to add our changes to the stage (otherwise known as staging the changes). This is done using the git add command whereby you can either specify the file names you wish to add or you can use . to specify any changes as follows:
git add intro.txt
We can then check our stage using the status command as:
git status
Where we will see
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)
        new file:   intro.txt
The key thing for us is that this tells us we have changes to be committed, the new file intro.txt, but that so far we have made no commits.
With this in mind, we can commit our changes using the commit command. The key thing here is the -m argument, after which we specify a message string explaining what you are committing, any changes that you made and why. This just ensures that you remember any significant changes, so that if your code ends up breaking, or in our case we want to undo a chapter of our novel, we know which commit to go back to according to the message. Here we can do this as:
git commit -m "Initialise the novel with an intro chapter"
Where we have now made our first contribution to the project!
Of course, having all of the changes on our local machine is all well and good for version control (no more final final final version 27.txt files!), but what if we want to then collaborate with other people or we want to have a remote back-up in case something goes wrong? For this, we can use the magic of GitHub.
For this, we will need to create a GitHub account and initialise a new repository. Once you have set up your account, create a new repository, but make sure that you do not initialise it with a README, licence or .gitignore file, as this will create errors. Don’t worry though, you can create these files after you have pushed to the remote repository.
Since we have already made our commits to our own local repository all we need to do is to link the remote repository to our own. For this we need to use the repository url, which is found here:
on your project page. We can then link this through our terminal using git remote add origin as:
git remote add origin <REMOTE_URL>
making sure you are using your own URL.
We can then verify that this connection has been made using git remote -v which should print the remote url that you have specified. Now we have connected to our remote repository!
One issue here, however, is that we have not set where exactly in the remote origin to push to. While we have our local repository, we don’t have a branch yet in our remote repository. For this, we can specify:
git push --set-upstream origin HEAD:master
Which will tell the current head of the repository to push to the master branch within our repository.
We can then make our first push to our remote repository by specifying we are pushing the HEAD as:
git push origin HEAD
This will mean that in the future, after we have added and committed all of our files, all we need to do is use git push, which is nice and simple! You can check to make sure that your files are on the remote repository by going to the repository within your GitHub account; the files should be there. This will then allow you to not only store your work on a remote repository but to also work collaboratively with other programmers as well!
So there you go, you have now created your first local repository, committed your files, then pushed those to a remote repository. In the practical itself, more details are provided on branching, .gitignore files (to keep things secret!) and going back to previous commits as well, which can all be found HERE.
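As a rough, hypothetical sketch of the branching workflow mentioned above (branch and file names are made up, and the commands run in a throw-away directory), creating a branch, committing on it, and returning to the original branch looks like this:

```shell
# Sketch only: a throw-away demo of branching; names are made up.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"   # local identity just for these commits
git config user.name "Demo Author"
echo "Chapter 1" > intro.txt
git add intro.txt
git commit -q -m "Initialise the novel with an intro chapter"
main=$(git rev-parse --abbrev-ref HEAD)    # default branch name varies (master/main)
git checkout -q -b chapter-two             # create and switch to a new branch
echo "Chapter 2" > chapter2.txt
git add chapter2.txt
git commit -q -m "Draft chapter two"
git checkout -q "$main"                    # switch back to the original branch
git log --oneline                          # only the intro commit is reachable here
```

Work committed on chapter-two stays isolated until it is merged back, which is what makes branches useful for experiments and collaboration.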
If you want any further information on our society feel free to follow us on our socials:
Facebook: https://www.facebook.com/ucldata
Instagram: https://www.instagram.com/ucl.datasci/
LinkedIn: https://www.linkedin.com/company/ucldata/
And if you want to keep up to date with stories from the UCL Data Science Society and other amazing authors, feel free to sign up to Medium using my referral code below.
philip-wilkinson.medium.com
Other medium articles by me can be found at:
|
[
{
"code": null,
"e": 719,
"s": 171,
"text": "This year, as Head of Science for the UCL Data Science Society, the society is presenting a series of 20 workshops covering topics such as introduction to Python, a Data Scientists toolkit and Machine learning methods, throughout the academic year. For each of these the aim is to create a series of small blogposts that will outline the main points with links to the full workshop for anyone who wishes to follow along. All of these can be found in our GitHub repository, and will be updated throughout the year with new workshops and challenges."
},
{
"code": null,
"e": 1051,
"s": 719,
"text": "The eighth workshop in the series is an introduction Git and GitHub where we will cover what is Git and GitHub, creating your first repository, making your first commit and then linking this to a remote repository. Further details on branching, secret files and going back to a previous commit can also be found on our GitHub page."
},
{
"code": null,
"e": 1152,
"s": 1051,
"text": "If you have missed any of our previous workshops you can find the last three at the following links:"
},
{
"code": null,
"e": 1175,
"s": 1152,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 1198,
"s": 1175,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 1221,
"s": 1198,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 1251,
"s": 1221,
"text": "According to the Git Website:"
},
{
"code": null,
"e": 1407,
"s": 1251,
"text": "Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency."
},
{
"code": null,
"e": 1418,
"s": 1407,
"text": "Including:"
},
{
"code": null,
"e": 1462,
"s": 1418,
"text": "Acting as an industrial production standard"
},
{
"code": null,
"e": 1489,
"s": 1462,
"text": "For use during a Hackathon"
},
{
"code": null,
"e": 1515,
"s": 1489,
"text": "For use in writing a book"
},
{
"code": null,
"e": 1555,
"s": 1515,
"text": "For keeping lecture and practical notes"
},
{
"code": null,
"e": 1579,
"s": 1555,
"text": "Any many other uses ..."
},
{
"code": null,
"e": 1903,
"s": 1579,
"text": "Git itself differs from GitHub in that Git is a system that manages the version control of a project, while GitHub is a remote platform that hosts project using Git. This means that while can be using the Git system, you don’t necessarily have to use GitHub, unless you want a backup of your repository on another platform."
},
{
"code": null,
"e": 2265,
"s": 1903,
"text": "For interacting with Git and GitHub, we will not be using Jupyter Notebooks that we have already but instead will be interacting with the comand line/terminal. For this, specifically in relation to GitHub operations, I tend to prefer using GitBash which allows me to interact with GitHub nice and easily, but you can use your own term on Mac or Bash on Windows."
},
{
"code": null,
"e": 2489,
"s": 2265,
"text": "In order to create a project we need to create a directory/folder for your project. In the command line we can do that using the mkdir command, followed by the name of the folder, which will create a folder on your machine."
},
{
"code": null,
"e": 2505,
"s": 2489,
"text": " mkdir my-novel"
},
{
"code": null,
"e": 2635,
"s": 2505,
"text": "We then navigate to that folder through the command line using the cd command which will allow us to change directory as follows:"
},
{
"code": null,
"e": 2647,
"s": 2635,
"text": "cd my-novel"
},
{
"code": null,
"e": 2811,
"s": 2647,
"text": "Once we are in that directory then we can initialise the project with Git so that Git can be used to manage the versions of the project using the git init command:"
},
{
"code": null,
"e": 2820,
"s": 2811,
"text": "git init"
},
{
"code": null,
"e": 2981,
"s": 2820,
"text": "Now, you should have a .git folder in your directory which should current be invisible. Your project that you just created should be called the local workspace."
},
{
"code": null,
"e": 3261,
"s": 2981,
"text": "Of course, now we have created the repository then we need to actually fill it! We can go ahead and create the start of our novel still using the command line by using the nano command which opens a text editor to begin creating a file. In our case we can call this intro.txt as:"
},
{
"code": null,
"e": 3276,
"s": 3261,
"text": "nano intro.txt"
},
{
"code": null,
"e": 3446,
"s": 3276,
"text": "Where you can go ahead and write your own intro to the novel. Once this has been done you can use the commands ctrl o to save the file and then ctrl x to leave the file."
},
{
"code": null,
"e": 3788,
"s": 3446,
"text": "Before we can then commit our changes, to make sure we have a record of our initial work as it goes along, we need to add our changes to the stage (otherwise known as staging the changes). This is done using the git add command whereby you can either specify the file names you wish to add or you can use . to specify any changes as follows:"
},
{
"code": null,
"e": 3806,
"s": 3788,
"text": "git add intro.txt"
},
{
"code": null,
"e": 3863,
"s": 3806,
"text": "We can then check our stage using the status command as:"
},
{
"code": null,
"e": 3874,
"s": 3863,
"text": "git status"
},
{
"code": null,
"e": 3892,
"s": 3874,
"text": "Where we will see"
},
{
"code": null,
"e": 4015,
"s": 3892,
"text": "On branch masterNo commits yetChanges to be committed: (use \"git rm --cached <file>...\" to unstage)\tnew file: intro.txt"
},
{
"code": null,
"e": 4157,
"s": 4015,
"text": "The key thing for us is that this tells us we have changes to be committed, the new file intro.txt , but that so far we have made no commits."
},
{
"code": null,
"e": 4666,
"s": 4157,
"text": "With this in mind then we can commit our changes using the commit command. The key thing here is that we can use the -m argument where after we specify a comment string which explains what you are committing, any changes that you made and why this may be. This just ensures that you remember any significant changes that you made so that if your code ends up breaking, or in our case we want to undo a chapter of our novel, we know which commit to go back to according to the message. Here we can do this as:"
},
{
"code": null,
"e": 4725,
"s": 4666,
"text": "git commit -m \"Initialise the novel with an intro chapter\""
},
{
"code": null,
"e": 4787,
"s": 4725,
"text": "Where we have now made our first contribution to the project!"
},
{
"code": null,
"e": 5103,
"s": 4787,
"text": "Of course, having all of the changes on our local machine is all well and good for version control (no more final final final version 27.txt files!), but what if we want to then collaborate with other people or we want to have a remote back-up in case something goes wrong? For this, we can use the magic of GitHub."
},
{
"code": null,
"e": 5485,
"s": 5103,
"text": "For this, we will need to create a GitHub account and initialise a new repository. For this, once you have set up your account, you need to create a new repository but make sure that you do not initialise this with either a README, licence or gitignore files as this will create errors. Don’t worry though, you can create these files after you have pushed to the remote repository."
},
{
"code": null,
"e": 5680,
"s": 5485,
"text": "Since we have already made our commits to our own local repository all we need to do is to link the remote repository to our own. For this we need to use the repository url, which is found here:"
},
{
"code": null,
"e": 5777,
"s": 5680,
"text": "On your project which we can then link this through our terminal using git remote add origin as:"
},
{
"code": null,
"e": 5812,
"s": 5777,
"text": "git remote add origin <REMOTE_URL>"
},
{
"code": null,
"e": 5856,
"s": 5812,
"text": "which make sure you are using your own URL."
},
{
"code": null,
"e": 6037,
"s": 5856,
"text": "We can then verify that this connection has been made using git remote -v which should print the remote url that you have specified. Now we have connected to our remote repository!"
},
{
"code": null,
"e": 6247,
"s": 6037,
"text": "One issue here however is that we have not set where exactly in the remote origin to push to. While we have our local respository, we don’t have a branch yet in our remote repository. For this, we can specify:"
},
{
"code": null,
"e": 6290,
"s": 6247,
"text": "git push --set-upstream origin HEAD:master"
},
{
"code": null,
"e": 6393,
"s": 6290,
"text": "Which will tell the current head of the repository to push to the master branch within our repository."
},
{
"code": null,
"e": 6492,
"s": 6393,
"text": "We can then make our first push to our remote repository by specifying we are pushing the HEAD as:"
},
{
"code": null,
"e": 6513,
"s": 6492,
"text": "git push origin HEAD"
},
{
"code": null,
"e": 6976,
"s": 6513,
"text": "Which will mean in the future all we need to do, after we have added all of our files and committed them, is to use git push in the future, which is nice and simple! You can check to make sure that your files are on the remote repository by going to the repository within you GitHub account and the files should be there. This will then allow you to not only store your work on a remote repository but to also work collaboratively with other programmers as well!"
},
{
"code": null,
"e": 7287,
"s": 6976,
"text": "So there you go, you have now created your first local repository, committed your files, then pushed those to a remote repository. In the practical itself, more details are provided on branching, .gitignore files (to keep things secret!) and going back to previous commits as well, which can all be found HERE."
},
{
"code": null,
"e": 7377,
"s": 7287,
"text": "If you want any further information on our society feel free to follow us on our socials:"
},
{
"code": null,
"e": 7420,
"s": 7377,
"text": "Facebook: https://www.facebook.com/ucldata"
},
{
"code": null,
"e": 7470,
"s": 7420,
"text": "Instagram: https://www.instagram.com/ucl.datasci/"
},
{
"code": null,
"e": 7522,
"s": 7470,
"text": "LinkedIn: https://www.linkedin.com/company/ucldata/"
},
{
"code": null,
"e": 7688,
"s": 7522,
"text": "And if you want to keep up date with stores from the UCL Data Science Society and other amazing authors, feel free to sign up to medium using my referral code below."
},
{
"code": null,
"e": 7716,
"s": 7688,
"text": "philip-wilkinson.medium.com"
}
] |
Python program to find the smallest number in a list
|
In this article, we will learn about the solution to the problem statement given below.
Problem statement − We are given a list; we need to display the smallest number available in the list
Here we can either sort the list and get the smallest element or use the built-in min() function to get the smallest element.
Now let’s observe the concept in the implementation below −
Live Demo
list1 = [101, 120, 104, 145, 99]
# sorting using built-in function
list1.sort()
print("Smallest element is:", list1[0])
Smallest element is: 99
All the variables are declared in the local scope and their references are seen in the figure above.
Live Demo
list1 = [101, 120, 104, 145, 99]
# using built-in min function
print("Smallest element is:", min(list1))
Smallest element is: 99
All the variables are declared in the local scope and their references are seen in the figure above.
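For illustration, the same result can also be obtained without any built-ins by scanning the list once; the helper below is a sketch and not part of the original examples:

```python
def smallest(items):
    """Return the smallest element by scanning the list once."""
    if not items:
        raise ValueError("cannot find the smallest element of an empty list")
    current = items[0]
    for value in items[1:]:
        if value < current:   # keep the smaller of the two
            current = value
    return current

list1 = [101, 120, 104, 145, 99]
print("Smallest element is:", smallest(list1))   # Smallest element is: 99
```

This is essentially what min() does internally: a single O(n) pass over the list.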
In this article, we have learned about how we can find the smallest number in a list.
|
[
{
"code": null,
"e": 1150,
"s": 1062,
"text": "In this article, we will learn about the solution to the problem statement given below."
},
{
"code": null,
"e": 1253,
"s": 1150,
"text": "Problem statement − We are given al list, we need to display the smallest number available in the list"
},
{
"code": null,
"e": 1379,
"s": 1253,
"text": "Here we can either sort the list and get the smallest element or use the built-in min() function to get the smallest element."
},
{
"code": null,
"e": 1439,
"s": 1379,
"text": "Now let’s observe the concept in the implementation below −"
},
{
"code": null,
"e": 1450,
"s": 1439,
"text": " Live Demo"
},
{
"code": null,
"e": 1570,
"s": 1450,
"text": "list1 = [101, 120, 104, 145, 99]\n# sorting using built-in function\nlist1.sort()\nprint(\"Smallest element is:\", list1[0])"
},
{
"code": null,
"e": 1594,
"s": 1570,
"text": "Smallest element is: 99"
},
{
"code": null,
"e": 1695,
"s": 1594,
"text": "All the variables are declared in the local scope and their references are seen in the figure above."
},
{
"code": null,
"e": 1706,
"s": 1695,
"text": " Live Demo"
},
{
"code": null,
"e": 1810,
"s": 1706,
"text": "list1 = [101, 120, 104, 145, 99]\n#using built-in min fucntion\nprint(\"Smallest element is:\", min(list1))"
},
{
"code": null,
"e": 1834,
"s": 1810,
"text": "Smallest element is: 99"
},
{
"code": null,
"e": 1935,
"s": 1834,
"text": "All the variables are declared in the local scope and their references are seen in the figure above."
},
{
"code": null,
"e": 2021,
"s": 1935,
"text": "In this article, we have learned about how we can find the smallest number in a list."
}
] |
Modeling Price with Regularized Linear Model & Xgboost | by Susan Li | Towards Data Science
|
We would like to model the price of a house, we know that the price depends on the location of the house, square footage of a house, year built, year renovated, number of bedrooms, number of garages, etc. So those factors contribute to the pattern — premium location would typically lead to a higher price. However, all houses within the same area and have same square footage do not have the exact same price. The variation in price is the noise. Our goal in price modeling is to model the pattern and ignore the noise. The same concepts apply to modeling hotel room prices too.
Therefore, to start, we are going to implement regularization techniques for linear regression of house pricing data.
An excellent house prices data set can be found here.
import warnings
def ignore_warn(*args, **kwargs):
    pass
warnings.warn = ignore_warn
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
from scipy.stats import norm, skew
from sklearn import preprocessing
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNetCV, ElasticNet
from xgboost import XGBRegressor, plot_importance
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import StratifiedKFold
pd.set_option('display.float_format', lambda x: '{:.3f}'.format(x))
df = pd.read_csv('house_train.csv')
df.shape
(df.isnull().sum() / len(df)).sort_values(ascending=False)[:20]
The good news is that we have many features to play with (81), the bad news is that 19 features have missing values, and 4 of them have over 80% missing values. For any feature, if it is missing 80% of values, it can’t be that important, therefore, I decided to remove these 4 features.
df.drop(['PoolQC', 'MiscFeature', 'Alley', 'Fence', 'Id'], axis=1, inplace=True)
sns.distplot(df['SalePrice'], fit=norm)
# Get the fitted parameters used by the function
(mu, sigma) = norm.fit(df['SalePrice'])
print('\n mu = {:.2f} and sigma = {:.2f}\n'.format(mu, sigma))
# Now plot the distribution
plt.legend(['Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)], loc='best')
plt.ylabel('Frequency')
plt.title('Sale Price distribution')
# Get also the QQ-plot
fig = plt.figure()
res = stats.probplot(df['SalePrice'], plot=plt)
plt.show()
The target feature, SalePrice, is right-skewed. As linear models like normally distributed data, we will transform SalePrice and make it more normally distributed.
sns.distplot(np.log1p(df['SalePrice']), fit=norm)
# Get the fitted parameters used by the function
(mu, sigma) = norm.fit(np.log1p(df['SalePrice']))
print('\n mu = {:.2f} and sigma = {:.2f}\n'.format(mu, sigma))
# Now plot the distribution
plt.legend(['Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)], loc='best')
plt.ylabel('Frequency')
plt.title('log(Sale Price+1) distribution')
# Get also the QQ-plot
fig = plt.figure()
res = stats.probplot(np.log1p(df['SalePrice']), plot=plt)
plt.show()
pd.set_option('precision', 2)
plt.figure(figsize=(10, 8))
sns.heatmap(df.drop(['SalePrice'], axis=1).corr(), square=True)
plt.suptitle("Pearson Correlation Heatmap")
plt.show()
There exists strong correlations between some of the features. For example, GarageYrBlt and YearBuilt, TotRmsAbvGrd and GrLivArea, GarageArea and GarageCars are strongly correlated. They actually express more or less the same thing. I will let ElasticNetCV to help reduce redundancy later.
corr_with_sale_price = df.corr()["SalePrice"].sort_values(ascending=False)
plt.figure(figsize=(14, 6))
corr_with_sale_price.drop("SalePrice").plot.bar()
plt.show()
The correlation of SalePrice with OverallQual is the greatest (around 0.8). Also GrLivArea presents a correlation of over 0.7, and GarageCars presents a correlation of over 0.6. Let’s look at these 4 features in more detail.
sns.pairplot(df[['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars']])
plt.show()
Log transform features that have a highly skewed distribution (skew > 0.75)
Dummy coding categorical features
Fill NaN with the mean of the column.
Train and test sets split.
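To illustrate the first step, log-transforming highly skewed features, here is a minimal pure-Python sketch; the sample data and the skewness helper are made up for demonstration and are not from the article's notebook:

```python
import math

def skewness(xs):
    """Moment-based sample skewness: m3 / m2**1.5."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # variance (population form)
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    return m3 / (m2 ** 1.5)

raw = [1, 2, 2, 3, 3, 3, 4, 50, 120]            # right-skewed, like SalePrice
# Apply log1p only when the skew exceeds the article's 0.75 threshold.
transformed = [math.log1p(x) for x in raw] if skewness(raw) > 0.75 else raw

print(round(skewness(raw), 2), round(skewness(transformed), 2))
```

In the notebook, np.log1p performs the same transform element-wise on an entire column at once.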
Ridge and Lasso regression are regularized linear regression models.
ElasticNet is essentially a Lasso/Ridge hybrid, that entails the minimization of an objective function that includes both L1 (Lasso) and L2 (Ridge) norms.
ElasticNet is useful when there are multiple features which are correlated with one another.
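The combined penalty can be written down in a few lines; this is a sketch of the regularisation term only (sklearn adds the data-fit term and its own scaling), with made-up weights:

```python
def elastic_net_penalty(weights, alpha=1.0, l1_ratio=0.5):
    """alpha * (l1_ratio * ||w||_1 + 0.5 * (1 - l1_ratio) * ||w||_2^2)."""
    l1 = sum(abs(w) for w in weights)        # Lasso part
    l2 = sum(w * w for w in weights)         # Ridge part
    return alpha * (l1_ratio * l1 + 0.5 * (1.0 - l1_ratio) * l2)

w = [1.0, -2.0]
print(elastic_net_penalty(w, l1_ratio=1.0))  # 3.0 (pure Lasso)
print(elastic_net_penalty(w, l1_ratio=0.0))  # 2.5 (pure Ridge)
```

Setting l1_ratio=1 recovers Lasso and l1_ratio=0 recovers Ridge, which is why a fitted value strictly between 0 and 1 indicates a genuine mix of the two.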
The class ElasticNetCV can be used to set the parameters alpha (α) and l1_ratio (ρ) by cross-validation.
ElasticNetCV: ElasticNet model with best model selection by cross-validation.
Let’s see what ElasticNetCV is going to select for us.
0 < the optimal l1_ratio < 1, indicating the penalty is a combination of L1 and L2, that is, a combination of Lasso and Ridge.
The RMSE here is actually RMSLE (Root Mean Squared Logarithmic Error), because we have taken the log of the actual values. Here is a nice write-up explaining the differences between RMSE and RMSLE.
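A minimal sketch of the metric itself (plain Python, with made-up values) makes the relationship explicit: RMSLE is just RMSE computed on log1p-transformed values:

```python
import math

def rmsle(actual, predicted):
    """Root Mean Squared Logarithmic Error: RMSE on log1p values."""
    sq_diffs = [(math.log1p(p) - math.log1p(a)) ** 2
                for a, p in zip(actual, predicted)]
    return math.sqrt(sum(sq_diffs) / len(sq_diffs))

# log1p maps e-1 -> 1 and e**2-1 -> 2, so the error here is ~1.0.
print(rmsle([math.e - 1], [math.e ** 2 - 1]))
```

Since the notebook already models log1p(SalePrice), an ordinary RMSE on those log targets is an RMSLE on the original prices.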
feature_importance = pd.Series(index=X_train.columns, data=np.abs(cv_model.coef_))
n_selected_features = (feature_importance > 0).sum()
print('{0:d} features, reduction of {1:2.2f}%'.format(
    n_selected_features, (1 - n_selected_features / len(feature_importance)) * 100))
feature_importance.sort_values().tail(30).plot(kind='bar', figsize=(12, 5))
A reduction of 58.91% features looks productive. The top 4 most important features selected by ElasticNetCV are Condition2_PosN, MSZoning_C(all), Exterior1st_BrkComm & GrLivArea. We are going to see how these features compare with those selected by Xgboost.
The first Xgboost model: we start from default parameters.
It is already way better than the model selected by ElasticNetCV!
The second Xgboost model: we gradually add a few parameters that are supposed to improve the model’s accuracy.
There was again an improvement!
The third Xgboost model: we add a learning rate, which will hopefully yield a more accurate model.
Unfortunately, there was no improvement. I concluded that xgb_model2 is the best model.
from collections import OrderedDict
OrderedDict(sorted(xgb_model2.get_booster().get_fscore().items(),
                   key=lambda t: t[1], reverse=True))
The top 4 most important features selected by Xgboost are LotArea, GrLivArea, OverallQual & TotalBsmtSF.
There is only one feature GrLivArea was selected by both ElasticNetCV and Xgboost.
So now we are going to select some relevant features and fit Xgboost again.
Another small improvement!
Jupyter notebook can be found on Github. Enjoy the rest of the week!
|
[
{
"code": null,
"e": 752,
"s": 172,
"text": "We would like to model the price of a house, we know that the price depends on the location of the house, square footage of a house, year built, year renovated, number of bedrooms, number of garages, etc. So those factors contribute to the pattern — premium location would typically lead to a higher price. However, all houses within the same area and have same square footage do not have the exact same price. The variation in price is the noise. Our goal in price modeling is to model the pattern and ignore the noise. The same concepts apply to modeling hotel room prices too."
},
{
"code": null,
"e": 870,
"s": 752,
"text": "Therefore, to start, we are going to implement regularization techniques for linear regression of house pricing data."
},
{
"code": null,
"e": 933,
"s": 870,
"text": "There is an excellent house prices data set can be found here."
},
{
"code": null,
"e": 1673,
"s": 933,
"text": "import warningsdef ignore_warn(*args, **kwargs): passwarnings.warn = ignore_warnimport numpy as np import pandas as pd %matplotlib inlineimport matplotlib.pyplot as plt import seaborn as snsfrom scipy import statsfrom scipy.stats import norm, skewfrom sklearn import preprocessingfrom sklearn.metrics import r2_scorefrom sklearn.metrics import mean_squared_errorfrom sklearn.model_selection import train_test_splitfrom sklearn.linear_model import ElasticNetCV, ElasticNetfrom xgboost import XGBRegressor, plot_importance from sklearn.model_selection import RandomizedSearchCVfrom sklearn.model_selection import StratifiedKFoldpd.set_option('display.float_format', lambda x: '{:.3f}'.format(x))df = pd.read_csv('house_train.csv')df.shape"
},
{
"code": null,
"e": 1737,
"s": 1673,
"text": "(df.isnull().sum() / len(df)).sort_values(ascending=False)[:20]"
},
{
"code": null,
"e": 2024,
"s": 1737,
"text": "The good news is that we have many features to play with (81), the bad news is that 19 features have missing values, and 4 of them have over 80% missing values. For any feature, if it is missing 80% of values, it can’t be that important, therefore, I decided to remove these 4 features."
},
{
"code": null,
"e": 2105,
"s": 2024,
"text": "df.drop(['PoolQC', 'MiscFeature', 'Alley', 'Fence', 'Id'], axis=1, inplace=True)"
},
{
"code": null,
"e": 2586,
"s": 2105,
"text": "sns.distplot(df['SalePrice'] , fit=norm);# Get the fitted parameters used by the function(mu, sigma) = norm.fit(df['SalePrice'])print( '\\n mu = {:.2f} and sigma = {:.2f}\\n'.format(mu, sigma))#Now plot the distributionplt.legend(['Normal dist. ($\\mu=$ {:.2f} and $\\sigma=$ {:.2f} )'.format(mu, sigma)], loc='best')plt.ylabel('Frequency')plt.title('Sale Price distribution')#Get also the QQ-plotfig = plt.figure()res = stats.probplot(df['SalePrice'], plot=plt)plt.show();"
},
{
"code": null,
"e": 2751,
"s": 2586,
"text": "The target feature - SalePrice is right skewed. As linear models like normally distributed data , we will transform SalePrice and make it more normally distributed."
},
{
"code": null,
"e": 3269,
"s": 2751,
"text": "sns.distplot(np.log1p(df['SalePrice']) , fit=norm);# Get the fitted parameters used by the function(mu, sigma) = norm.fit(np.log1p(df['SalePrice']))print( '\\n mu = {:.2f} and sigma = {:.2f}\\n'.format(mu, sigma))#Now plot the distributionplt.legend(['Normal dist. ($\\mu=$ {:.2f} and $\\sigma=$ {:.2f} )'.format(mu, sigma)], loc='best')plt.ylabel('Frequency')plt.title('log(Sale Price+1) distribution')#Get also the QQ-plotfig = plt.figure()res = stats.probplot(np.log1p(df['SalePrice']), plot=plt)plt.show();"
},
{
"code": null,
"e": 3441,
"s": 3269,
"text": "pd.set_option('precision',2)plt.figure(figsize=(10, 8))sns.heatmap(df.drop(['SalePrice'],axis=1).corr(), square=True)plt.suptitle(\"Pearson Correlation Heatmap\")plt.show();"
},
{
"code": null,
"e": 3731,
"s": 3441,
"text": "There exists strong correlations between some of the features. For example, GarageYrBlt and YearBuilt, TotRmsAbvGrd and GrLivArea, GarageArea and GarageCars are strongly correlated. They actually express more or less the same thing. I will let ElasticNetCV to help reduce redundancy later."
},
{
"code": null,
"e": 3892,
"s": 3731,
"text": "corr_with_sale_price = df.corr()[\"SalePrice\"].sort_values(ascending=False)plt.figure(figsize=(14,6))corr_with_sale_price.drop(\"SalePrice\").plot.bar()plt.show();"
},
{
"code": null,
"e": 4117,
"s": 3892,
"text": "The correlation of SalePrice with OverallQual is the greatest (around 0.8). Also GrLivArea presents a correlation of over 0.7, and GarageCars presents a correlation of over 0.6. Let’s look at these 4 features in more detail."
},
{
"code": null,
"e": 4202,
"s": 4117,
"text": "sns.pairplot(df[['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars']])plt.show();"
},
{
"code": null,
"e": 4278,
"s": 4202,
"text": "Log transform features that have a highly skewed distribution (skew > 0.75)"
},
{
"code": null,
"e": 4312,
"s": 4278,
"text": "Dummy coding categorical features"
},
{
"code": null,
"e": 4350,
"s": 4312,
"text": "Fill NaN with the mean of the column."
},
{
"code": null,
"e": 4377,
"s": 4350,
"text": "Train and test sets split."
},
{
"code": null,
"e": 4446,
"s": 4377,
"text": "Ridge and Lasso regression are regularized linear regression models."
},
{
"code": null,
"e": 4601,
"s": 4446,
"text": "ElasticNet is essentially a Lasso/Ridge hybrid, that entails the minimization of an objective function that includes both L1 (Lasso) and L2 (Ridge) norms."
},
{
"code": null,
"e": 4694,
"s": 4601,
"text": "ElasticNet is useful when there are multiple features which are correlated with one another."
},
{
"code": null,
"e": 4799,
"s": 4694,
"text": "The class ElasticNetCV can be used to set the parameters alpha (α) and l1_ratio (ρ) by cross-validation."
},
{
"code": null,
"e": 4877,
"s": 4799,
"text": "ElasticNetCV: ElasticNet model with best model selection by cross-validation."
},
{
"code": null,
"e": 4932,
"s": 4877,
"text": "Let’s see what ElasticNetCV is going to select for us."
},
{
"code": null,
"e": 5060,
"s": 4932,
"text": "0< The optimal l1_ratio <1 , indicating the penalty is a combination of L1 and L2, that is, the combination of Lasso and Ridge."
},
{
"code": null,
"e": 5259,
"s": 5060,
"text": "The RMSE here is actually RMSLE ( Root Mean Squared Logarithmic Error). Because we have taken the log of the actual values. Here is a nice write up explaining the differences between RMSE and RMSLE."
},
{
"code": null,
"e": 5606,
"s": 5259,
"text": "feature_importance = pd.Series(index = X_train.columns, data = np.abs(cv_model.coef_))n_selected_features = (feature_importance>0).sum()print('{0:d} features, reduction of {1:2.2f}%'.format( n_selected_features,(1-n_selected_features/len(feature_importance))*100))feature_importance.sort_values().tail(30).plot(kind = 'bar', figsize = (12,5));"
},
{
"code": null,
"e": 5864,
"s": 5606,
"text": "A reduction of 58.91% features looks productive. The top 4 most important features selected by ElasticNetCV are Condition2_PosN, MSZoning_C(all), Exterior1st_BrkComm & GrLivArea. We are going to see how these features compare with those selected by Xgboost."
},
{
"code": null,
"e": 5923,
"s": 5864,
"text": "The first Xgboost model, we start from default parameters."
},
{
"code": null,
"e": 5987,
"s": 5923,
"text": "It is already way better an the model selected by ElasticNetCV!"
},
{
"code": null,
"e": 6085,
"s": 5987,
"text": "The second Xgboost model, we gradually add a few parameters that suppose to add model’s accuracy."
},
{
"code": null,
"e": 6117,
"s": 6085,
"text": "There was again an improvement!"
},
{
"code": null,
"e": 6213,
"s": 6117,
"text": "The third Xgboost model, we add a learning rate, hopefully it will yield a more accurate model."
},
{
"code": null,
"e": 6301,
"s": 6213,
"text": "Unfortunately, there was no improvement. I concluded that xgb_model2 is the best model."
},
{
"code": null,
"e": 6437,
"s": 6301,
"text": "from collections import OrderedDictOrderedDict(sorted(xgb_model2.get_booster().get_fscore().items(), key=lambda t: t[1], reverse=True))"
},
{
"code": null,
"e": 6542,
"s": 6437,
"text": "The top 4 most important features selected by Xgboost are LotArea, GrLivArea, OverallQual & TotalBsmtSF."
},
{
"code": null,
"e": 6625,
"s": 6542,
"text": "There is only one feature GrLivArea was selected by both ElasticNetCV and Xgboost."
},
{
"code": null,
"e": 6705,
"s": 6625,
"text": "So now we are going to select some relevant features and fit the Xgboost again."
},
{
"code": null,
"e": 6732,
"s": 6705,
"text": "Another small improvement!"
}
] |
Rotation of stepper motor in forward and reverse directions
|
Let us consider ALS-NIFC-01, which is a stepper motor interface. Using 26-core flat cable, it is connected to ALS kit. It will be used for interfacing two stepper motors. In our current experiment, we use only one stepper motor. The motor has a step size of 1.8°. The stepper motor works on a power supply of +12V. Power supply of +5V (white wire), GND (black), and +12V (red) is provided to the interface. Note that -12V supply is not used by the interface. We shall have to make sure that the +12V supply has adequate current rating to drive the stepper motor. With the step motor interface, this is ensured by using the power supply provided.
Using five-way powermate connector, the stepper motor is connected to the interface. The step motor is a two-phase, six-wire motor. The six wires are for the D, B, C, A inputs and VM connection (two wires). The five-way powermate connector is used for connection purpose. Ensure that red wire is connected to A1 on the interface. To provide DBCA inputs to one stepper motor, PC3-0 is used and for the other motor PC7-0 provides DBCA inputs, as shown in Fig. (a). Thus, the 8255 port C should be configured as an output port, while using the stepper motor interface. The physical layout of the interface is provided in Fig. (b).
It is possible to have four-step sequence with ‘one-phase ON’ scheme, as shown in the following. In this case the step size will be 1.8°.
D B C A
1 0 0 0 = 8
0 1 0 0 = 4
0 0 1 0 = 2
0 0 0 1 = 1
The 4 step sequence that we send to the stepper motor interface is 88H, 44H, 22H and 11H instead of 08H, 04H, 02H, 01H so that the step motor can be connected to any one of the two connectors provided on the interface board.
If the sequence is reversed, the rotation will also be reversed.
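The forward and reverse drive sequences produced by the program's RRC and RLC rotations (shown below) can be checked with a short Python sketch. This is an illustrative simulation, not part of the original interface code:

```python
def rrc(value):
    """8-bit rotate right, like the 8085 RRC instruction (bit 0 wraps to bit 7)."""
    return ((value >> 1) | ((value & 1) << 7)) & 0xFF

def rlc(value):
    """8-bit rotate left, like the 8085 RLC instruction (bit 7 wraps to bit 0)."""
    return ((value << 1) | (value >> 7)) & 0xFF

# Forward sequence: start at 88H and rotate right each step
forward = []
v = 0x88
for _ in range(4):
    forward.append(v)
    v = rrc(v)
print([hex(x) for x in forward])   # ['0x88', '0x44', '0x22', '0x11']

# Reverse sequence: start at 88H and rotate left each step
reverse = []
v = 0x88
for _ in range(4):
    reverse.append(v)
    v = rlc(v)
print([hex(x) for x in reverse])   # ['0x88', '0x11', '0x22', '0x44']
```

Rotating right steps through 88H, 44H, 22H, 11H and wraps back to 88H, so outputting the accumulator after each RRC drives the motor in one direction; RLC walks the same sequence the opposite way.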
Let us consider a problem and its solution in this domain. The problem states that the program shown here makes the step motor rotate 100 steps of 1.8 degrees each, resulting in half a revolution. Then, it rotates half a revolution in the opposite direction. This sequence is repeated forever. To stop the operation, we have to reset the microprocessor kit.
The 8085 assembly language program is stated below for rotation in both the forward and reverse directions.
; FILE NAME STEP_MOTOR.ASM
ORG C100H
N DB 100 ; 100 steps of 1.8° = 0.5 Revolution
ORG C000H
PA EQU D8H
PB EQU D9H
PC EQU DAH
CTRL EQU DBH
DELAY EQU 04BEH
MVI A, 80H
OUT CTRL ; Configure 8255 Ports as O/P in Mode 0
BEGIN: LDA N
MOV B, A
MOV C, A ; Step Count Value in B and C Registers
; The next 7 instructions are used for Rotating by 100 Steps in One Direction
MVI A, 88H
LOOP1: OUT PC
LXI D, FFFFH
CALL DELAY ; Generate Delay of 0.5 Secs.
RRC
DCR B
JNZ LOOP1
; The next 7 instructions are used for Rotating by 100 Steps in Opposite Direction
MVI A, 88H
LOOP2: OUT PC
LXI D, FFFFH
CALL DELAY ; Generate delay of 0.5 Secs.
RLC
DCR C
JNZ LOOP2
JMP BEGIN
|
[
{
"code": null,
"e": 1708,
"s": 1062,
"text": "Let us consider ALS-NIFC-01, which is a stepper motor interface. Using 26-core flat cable, it is connected to ALS kit. It will be used for interfacing two stepper motors. In our current experiment, we use only one stepper motor. The motor has a step size of 1.8°. The stepper motor works on a power supply of +12V. Power supply of +5V (white wire), GND (black), and +12V (red) is provided to the interface. Note that -12V supply is not used by the interface. We shall have to make sure that the +12V supply has adequate current rating to drive the stepper motor. With the step motor interface, this is ensured by using the power supply provided."
},
{
"code": null,
"e": 2336,
"s": 1708,
"text": "Using five-way powermate connector, the stepper motor is connected to the interface. The step motor is a two-phase, six-wire motor. The six wires are for the D, B, C, A inputs and VM connection (two wires). The five-way powermate connector is used for connection purpose. Ensure that red wire is connected to A1 on the interface. To provide DBCA inputs to one stepper motor, PC3-0 is used and for the other motor PC7-0 provides DBCA inputs, as shown in Fig. (a). Thus, the 8255 port C should be configured as an output port, while using the stepper motor interface. The physical layout of the interface is provided in Fig. (b)."
},
{
"code": null,
"e": 2474,
"s": 2336,
"text": "It is possible to have four-step sequence with ‘one-phase ON’ scheme, as shown in the following. In this case the step size will be 1.8°."
},
{
"code": null,
"e": 2529,
"s": 2474,
"text": "D B C A"
},
{
"code": null,
"e": 2598,
"s": 2529,
"text": "1 0 0 0 = 8"
},
{
"code": null,
"e": 2667,
"s": 2598,
"text": "0 1 0 0 = 4"
},
{
"code": null,
"e": 2735,
"s": 2667,
"text": "0 0 1 0 = 2"
},
{
"code": null,
"e": 2802,
"s": 2735,
"text": "0 0 0 1 = 1"
},
{
"code": null,
"e": 3027,
"s": 2802,
"text": "The 4 step sequence that we send to the stepper motor interface is 88H, 44H, 22H and 11H instead of 08H, 04H, 02H, 01H so that the step motor can be connected to any one of the two connectors provided on the interface board."
},
{
"code": null,
"e": 3092,
"s": 3027,
"text": "If the sequence is reversed, the rotation will also be reversed."
},
{
"code": null,
"e": 3439,
"s": 3092,
"text": "Let us consider a problem solution in this domain. The problem states that: The program shown here makes the step motor rotate 100 steps of 1.8 degrees each, resulting in half revolution. then, it rotates half revolution in the opposite direction. This sequence is repeated forever. To stop the operation, we have to reset the microprocessor kit."
},
{
"code": null,
"e": 3549,
"s": 3439,
"text": "The 8085 assembly language program is stated below for the rotation of in both forward and reverse direction."
},
{
"code": null,
"e": 4215,
"s": 3549,
"text": "; FILE NAME STEP_MOTOR.ASM\nORG C100H\nN DB 100 ; 100 steps of 1.8° = 0.5 Revolution\n\nORG C000H\nPA EQU D8H\nPB EQU D9H\nPC EQU DAH\nCTRL EQU DBH\nDELAY EQU 04BEH\n\nMVI A, 80H\nOUT CTRL ; Configure 8255 Ports as O/P in Mode 0\n\nBEGIN: LDA N\nMOV B, A\nMOV C, A ; Step Count Value in B and C Registers\n; The next 7 instructions are used for Rotating by 100 Steps in One Direction\n\nMVI A, 88H;\nLOOP1:OUT PC\n\nLXI D, FFFFH\nCALL DELAY ; Generate Delay of 0.5 Secs.\n\nRRC\nDCR B\nJNZ LOOP1\n; The next 7 instructions are used for Rotating by 100 Steps in Opposite Direction\n\nMVI A, 88H\nLOOP2: OUT PC\n\nLXI D, FFFFH\nCALL DELAY ; Generate delay of 0.5 Secs.\n\nRLC\nDCR C\nJNZ LOOP2\n\nJMP BEGIN1"
}
] |
Docker - Configuring
|
In this chapter, we will look at the different options to configure Docker.
This command is used to stop the Docker daemon process.
service docker stop
None
A message showing that the Docker process has stopped.
sudo service docker stop
When we run the above command, it will produce the following result −
This command is used to start the Docker daemon process.
service docker start
None
A message showing that the Docker process has started.
sudo service docker start
When we run the above command, it will produce the following result −
|
[
{
"code": null,
"e": 2416,
"s": 2340,
"text": "In this chapter, we will look at the different options to configure Docker."
},
{
"code": null,
"e": 2472,
"s": 2416,
"text": "This command is used to stop the Docker daemon process."
},
{
"code": null,
"e": 2494,
"s": 2472,
"text": "service docker stop \n"
},
{
"code": null,
"e": 2499,
"s": 2494,
"text": "None"
},
{
"code": null,
"e": 2554,
"s": 2499,
"text": "A message showing that the Docker process has stopped."
},
{
"code": null,
"e": 2580,
"s": 2554,
"text": "sudo service docker stop "
},
{
"code": null,
"e": 2650,
"s": 2580,
"text": "When we run the above command, it will produce the following result −"
},
{
"code": null,
"e": 2707,
"s": 2650,
"text": "This command is used to start the Docker daemon process."
},
{
"code": null,
"e": 2730,
"s": 2707,
"text": "service docker start \n"
},
{
"code": null,
"e": 2735,
"s": 2730,
"text": "None"
},
{
"code": null,
"e": 2790,
"s": 2735,
"text": "A message showing that the Docker process has started."
},
{
"code": null,
"e": 2816,
"s": 2790,
"text": "sudo service docker start"
},
{
"code": null,
"e": 2886,
"s": 2816,
"text": "When we run the above command, it will produce the following result −"
    }
] |
The Prosecutor’s Fallacy. Conditional Probability in the... | by Ray Johns | Towards Data Science
|
You know that you are innocent, but physical evidence at the scene of the crime matches your description. The prosecutor argues that you are guilty because the odds of finding this evidence given that you are innocent are so small that the jury should discard the probability that you did not actually commit the crime.
But those numbers don’t add up. The prosecutor has misapplied conditional probability and neglected the prior odds of you, the defendant, being guilty before they introduced the evidence.
The prosecutor’s fallacy is a courtroom misapplication of Bayes’ Theorem. Rather than ask the probability that the defendant is innocent given all the evidence, the prosecution, judge, and jury make the mistake of asking what the probability is that the evidence would occur if the defendant were innocent (a much smaller number):
P(defendant is guilty|all the evidence)
P(all the evidence|defendant is innocent)
To illustrate why this difference can spell life or death, imagine yourself the defendant again. You want to prove to the court that you’re really telling the truth, so you agree to a polygraph test.
Coincidentally, the same man who invented the lie detector later created Wonder Woman and her lasso of truth.
William Moulton Marston debuted his invention in the case of James Alphonso Frye, who was accused of murder in 1922.
For our simulation, we’ll take the mean of a more modern polygraph from this paper (“Accuracy estimates of the CQT range from 74% to 89% for guilty examinees, with 1% to 13% false-negatives, and 59% to 83% for innocent examinees, with a false-positive ratio varying from 10% to 23%...”)
Examine these percentages a moment. Given that this study found that a vast majority of people are honest most of the time, and that “big lies” are things like “not telling your partner who you have really been with”, let’s generously assume that 15% of people would lie about murder under a polygraph test, and 85% would tell the truth.
If we tested 10,000 people with this lie detector under these assumptions...
1500 people out of 10000 are_lying
1215 people out of 10000 are_true_positives
120 people out of 10000 are_false_negatives
8500 people out of 10000 are_not_lying
1445 people out of 10000 are_false_positives
6035 people out of 10000 are_true_negatives
The important distinctions to know before we apply Bayes’ Theorem are these:
The true positives are the people who lied and failed the polygraph (they were screened correctly)
The false negatives are the people who lied and beat the polygraph (they were screened incorrectly)
The false positives are the people who told the truth but failed the polygraph anyway
The true negatives are the people who told the truth and passed the polygraph
Got it? Good.
What the polygraph examiner really wants to know is not P(+|L), which is the accuracy of the test, but rather P(L|+), the probability that you were lying given that the test was positive. We know how P(+|L) relates to P(L|+).
P(L|+) = P(+|L)P(L) / P(+)
To figure out what P(+) is independent of our prior knowledge of whether or not someone was lying, we need to compute the total sample space of the event of testing positive using the Law of Total Probability:
P(L|+) = P(+|L)P(L) / [P(+|L)P(L) + P(+|L^c)P(L^c)]
That is to say, we need to know not only the probability of testing positive given that you are lying, but also the probability of testing positive given that you’re not lying (our false positive rate). The sum of those two terms gives us the total probability of testing positive. That allows us to finally determine the conditional probability that you are lying:
The probability that you are actually lying, given that you tested positive on the polygraph, is 45.68%.
The probability of a false positive is 54.32%.
The probability that you’re actually lying, given a positive test result, is only 45.68%. That’s worse than chance. Note how it differs from the test’s accuracy levels (81% true positives and 71% true negatives). Meanwhile, your risk of being falsely accused of lying, even if you’re telling the truth, is also close to-indeed, slightly higher than-chance, at 54.32%. Not reassuring.
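The arithmetic behind these two numbers can be reproduced in a few lines of Python. The rates below are the ones implied by the simulated counts above (1215/1500 true positives, 1445/8500 false positives, 15% base rate of lying); this is an illustrative sketch, not the author's original code:

```python
# Rates implied by the simulated counts above
p_lie = 0.15                        # prior P(L): proportion of people lying
p_pos_given_lie = 1215 / 1500       # P(+|L): true-positive rate (0.81)
p_pos_given_truth = 1445 / 8500     # P(+|L^c): false-positive rate (0.17)

# Law of Total Probability: P(+) summed over both branches
p_pos = p_pos_given_lie * p_lie + p_pos_given_truth * (1 - p_lie)

# Bayes' Theorem: P(L|+)
p_lie_given_pos = p_pos_given_lie * p_lie / p_pos

print(f"P(lying | positive test) = {p_lie_given_pos:.2%}")        # 45.68%
print(f"P(truthful | positive test) = {1 - p_lie_given_pos:.2%}") # 54.32%
```

Note how the posterior (45.68%) differs from the test's headline accuracy (81%): the large pool of truthful examinees makes false positives almost as common as true positives.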
Marston was, in fact, a notorious fraud.
The Frye court ruled that the polygraph test could not be trusted as evidence. To this day, lie detector tests are inadmissible in court because of their unreliability. But that does not stop the prosecutor’s fallacy from creeping in to court in other, more insidious ways.
This statistical reasoning error runs rampant in the criminal justice system and corrupts criminal cases that rely on everything from fingerprints to DNA evidence to cell tower data. What’s worse, courts often reject the expert testimony of statisticians because “it’s not rocket science”-it’s “common sense”:
In the Netherlands, a nurse named Lucia de Berk went to prison for life because she had been proximate to “suspicious” deaths that a statistical expert calculated had less than a 1 in 342 million chance of being random. The calculation, tainted by the prosecutor’s fallacy, was incorrect. The true figure was more like 1 in 50 (or even 1 in 5). What’s more, many of the “incidents” were only marked suspicious after investigators knew that she had been close by.
A British nurse, Ben Geen, was accused of inducing respiratory arrest for the “thrill” of reviving his patients, on the claim that respiratory arrest was too rare a phenomenon to occur by chance given that Geen was near.
Mothers in the U.K. have been prosecuted for murdering their children, when really they died of SIDS, after experts erroneously quoted the odds of two children in the same family dying of SIDS as 1 in 73 million
The data in Ben Geen’s case are available thanks to Freedom of Information requests — so I have briefly analyzed them.
# Hospital data file from the expert in Ben Geen's exoneration case# Data acquired through FOI requests# Admissions: no. patients admitted to ED by month# CardioED: no. patients admitted to CC from ED by month with cardio-respiratory arrest# RespED: no. patients admitted to CC from ED by month with respiratory arrest
The most comparable hospitals to the one in which Geen worked are large hospitals that saw at least one case of respiratory arrest (although “0” in the data most likely means “missing data” and not that zero incidents occurred).
ax = sns.boxplot(x='Year', y='CardioED', data=df)
ax = sns.pairplot(df, x_vars=['Year'], y_vars=['CardioED', 'RespED', 'Admissions'])
The four hospitals that are comparable to the one where Geen worked are Hexham, Solihull, Wansbeck, and Wycombe. The data for Solihull (for both CardioED and RespED) are extremely anomalous:
After accounting for the discrepancies in the data, we can calculate that cardiac events happen, on average, roughly 4.7 times as often as respiratory events without accompanying cardiac events (4.669 CardioED admissions on average per RespED admission).
The average number of respiratory arrests per month unaccompanied by cardiac failure is approximately 1–2, with large fluctuations. That’s not particularly rare, and certainly not rare enough to send a nurse to prison for life. (You can read more about the case and this data here and see my jupyter notebook here.)
Common sense, it would seem, is hardly common — a problem which the judicial system should take much more seriously than it does.
Originally published at https://www.espritdecorpus.com/posts/the-prosecutors-fallacy/.
|
[
{
"code": null,
"e": 492,
"s": 172,
"text": "You know that you are innocent, but physical evidence at the scene of the crime matches your description. The prosecutor argues that you are guilty because the odds of finding this evidence given that you are innocent are so small that the jury should discard the probability that you did not actually commit the crime."
},
{
"code": null,
"e": 680,
"s": 492,
"text": "But those numbers don’t add up. The prosecutor has misapplied conditional probability and neglected the prior odds of you, the defendant, being guilty before they introduced the evidence."
},
{
"code": null,
"e": 1011,
"s": 680,
"text": "The prosecutor’s fallacy is a courtroom misapplication of Bayes’ Theorem. Rather than ask the probability that the defendant is innocent given all the evidence, the prosecution, judge, and jury make the mistake of asking what the probability is that the evidence would occur if the defendant were innocent (a much smaller number):"
},
{
"code": null,
"e": 1051,
"s": 1011,
"text": "P(defendant is guilty|all the evidence)"
},
{
"code": null,
"e": 1093,
"s": 1051,
"text": "P(all the evidence|defendant is innocent)"
},
{
"code": null,
"e": 1293,
"s": 1093,
"text": "To illustrate why this difference can spell life or death, imagine yourself the defendant again. You want to prove to the court that you’re really telling the truth, so you agree to a polygraph test."
},
{
"code": null,
"e": 1403,
"s": 1293,
"text": "Coincidentally, the same man who invented the lie detector later created Wonder Woman and her lasso of truth."
},
{
"code": null,
"e": 1520,
"s": 1403,
"text": "William Moulton Marston debuted his invention in the case of James Alphonso Frye, who was accused of murder in 1922."
},
{
"code": null,
"e": 1807,
"s": 1520,
"text": "For our simulation, we’ll take the mean of a more modern polygraph from this paper (“Accuracy estimates of the CQT range from 74% to 89% for guilty examinees, with 1% to 13% false-negatives, and 59% to 83% for innocent examinees, with a false-positive ratio varying from 10% to 23%...”)"
},
{
"code": null,
"e": 2145,
"s": 1807,
"text": "Examine these percentages a moment. Given that this study found that a vast majority of people are honest most of the time, and that “big lies” are things like “not telling your partner who you have really been with”, let’s generously assume that 15% of people would lie about murder under a polygraph test, and 85% would tell the truth."
},
{
"code": null,
"e": 2222,
"s": 2145,
"text": "If we tested 10,000 people with this lie detector under these assumptions..."
},
{
"code": null,
"e": 2257,
"s": 2222,
"text": "1500 people out of 10000 are_lying"
},
{
"code": null,
"e": 2301,
"s": 2257,
"text": "1215 people out of 10000 are_true_positives"
},
{
"code": null,
"e": 2345,
"s": 2301,
"text": "120 people out of 10000 are_false_negatives"
},
{
"code": null,
"e": 2384,
"s": 2345,
"text": "8500 people out of 10000 are_not_lying"
},
{
"code": null,
"e": 2429,
"s": 2384,
"text": "1445 people out of 10000 are_false_positives"
},
{
"code": null,
"e": 2473,
"s": 2429,
"text": "6035 people out of 10000 are_true_negatives"
},
{
"code": null,
"e": 2550,
"s": 2473,
"text": "The important distinctions to know before we apply Bayes’ Theorem are these:"
},
{
"code": null,
"e": 2649,
"s": 2550,
"text": "The true positives are the people who lied and failed the polygraph (they were screened correctly)"
},
{
"code": null,
"e": 2749,
"s": 2649,
"text": "The false negatives are the people who lied and beat the polygraph (they were screened incorrectly)"
},
{
"code": null,
"e": 2835,
"s": 2749,
"text": "The false positives are the people who told the truth but failed the polygraph anyway"
},
{
"code": null,
"e": 2913,
"s": 2835,
"text": "The true negatives are the people who told the truth and passed the polygraph"
},
{
"code": null,
"e": 2927,
"s": 2913,
"text": "Got it? Good."
},
{
"code": null,
"e": 3151,
"s": 2927,
"text": "What the polygraph examiner really wants to know is not P(+|L), which is the accuracy of the test; but rather P(L|+), or the probability you were lying given that the test was positive. We know how P(+|L) relates to P(L|+)."
},
{
"code": null,
"e": 3178,
"s": 3151,
"text": "P(L|+) = P(+|L)P(L) / P(+)"
},
{
"code": null,
"e": 3388,
"s": 3178,
"text": "To figure out what P(+) is independent of our prior knowledge of whether or not someone was lying, we need to compute the total sample space of the event of testing positive using the Law of Total Probability:"
},
{
"code": null,
"e": 3438,
"s": 3388,
"text": "P(L|+) = P(+|L)P(L) / P(+|L)P(L) + P(+|L^c)P(L^c)"
},
{
"code": null,
"e": 3804,
"s": 3438,
"text": "That is to say, we need to know not only the probability of testing positive given that you are lying, but also the probability of testing positive given that you’re not lying (our false positive rate). The sum of those two terms gives us the total probability of testing positive. That allows us to finally determine the conditional probability that you are lying:"
},
{
"code": null,
"e": 3955,
"s": 3804,
"text": "The probability that you are actually lying, given that you tested positive on the polygraph, is 45.68%.The probability of a false positive is 54.32%."
},
{
"code": null,
"e": 4339,
"s": 3955,
"text": "The probability that you’re actually lying, given a positive test result, is only 45.68%. That’s worse than chance. Note how it differs from the test’s accuracy levels (81% true positives and 71% true negatives). Meanwhile, your risk of being falsely accused of lying, even if you’re telling the truth, is also close to-indeed, slightly higher than-chance, at 54.32%. Not reassuring."
},
{
"code": null,
"e": 4380,
"s": 4339,
"text": "Marston was, in fact, a notorious fraud."
},
{
"code": null,
"e": 4654,
"s": 4380,
"text": "The Frye court ruled that the polygraph test could not be trusted as evidence. To this day, lie detector tests are inadmissible in court because of their unreliability. But that does not stop the prosecutor’s fallacy from creeping in to court in other, more insidious ways."
},
{
"code": null,
"e": 4964,
"s": 4654,
"text": "This statistical reasoning error runs rampant in the criminal justice system and corrupts criminal cases that rely on everything from fingerprints to DNA evidence to cell tower data. What’s worse, courts often reject the expert testimony of statisticians because “it’s not rocket science”-it’s “common sense”:"
},
{
"code": null,
"e": 5427,
"s": 4964,
"text": "In the Netherlands, a nurse named Lucia de Berk went to prison for life because she had been proximate to “suspicious” deaths that a statistical expert calculated had less than a 1 in 342 million chance of being random. The calculation, tainted by the prosecutor’s fallacy, was incorrect. The true figure was more like 1 in 50 (or even 1 in 5). What’s more, many of the “incidents” were only marked suspicious after investigators knew that she had been close by."
},
{
"code": null,
"e": 5649,
"s": 5427,
"text": "A British nurse, Ben Geen, was accused of inducing respiratory arrest for the “thrill” of reviving his patients, on the claim that respiratory arrest was too rare a phenomenon to occur by chance given that Green was near."
},
{
"code": null,
"e": 5861,
"s": 5649,
"text": "Mothers in the U.K. have been prosecuted for murdering their children, when really they died of SIDS, after experts erroneously quoted the odds of two children in the same family dying of SIDS as 1 in 73 million"
},
{
"code": null,
"e": 5980,
"s": 5861,
"text": "The data in Ben Geen’s case are available thanks to Freedom of Information requests — so I have briefly analyzed them."
},
{
"code": null,
"e": 6299,
"s": 5980,
"text": "# Hospital data file from the expert in Ben Geen's exoneration case# Data acquired through FOI requests# Admissions: no. patients admitted to ED by month# CardioED: no. patients admitted to CC from ED by month with cardio-respiratory arrest# RespED: no. patients admitted to CC from ED by month with respiratory arrest"
},
{
"code": null,
"e": 6528,
"s": 6299,
"text": "The most comparable hospitals to the one in which Geen worked are large hospitals that saw at least one case of respiratory arrest (although “0” in the data most likely means “missing data” and not that zero incidents occurred)."
},
{
"code": null,
"e": 6578,
"s": 6528,
"text": "ax = sns.boxplot(x='Year', y='CardioED', data=df)"
},
{
"code": null,
"e": 6662,
"s": 6578,
"text": "ax = sns.pairplot(df, x_vars=['Year'], y_vars=['CardioED', 'RespED', 'Admissions'])"
},
{
"code": null,
"e": 6853,
"s": 6662,
"text": "The four hospitals that are comparable to the one where Geen worked are Hexham, Solihull, Wansbeck, and Wycombe. The data for Solihull (for both CardioED and RespED) are extremely anomalous:"
},
{
"code": null,
"e": 7121,
"s": 6853,
"text": "After accounting for the discrepancies in the data, we can calculate that respiratory events without accompanying cardiac events happen, on average, roughly a little under 5 times as often as cardiac events (4.669 CardioED admissions on average per RespED admission)."
},
{
"code": null,
"e": 7437,
"s": 7121,
"text": "The average number of respiratory arrests per month unaccompanied by cardiac failure is approximately 1–2, with large fluctuations. That’s not particularly rare, and certainly not rare enough to send a nurse to prison for life. (You can read more about the case and this data here and see my jupyter notebook here.)"
},
{
"code": null,
"e": 7567,
"s": 7437,
"text": "Common sense, it would seem, is hardly common — a problem which the judicial system should take much more seriously than it does."
}
] |
Division Operators in Python?
|
Generally, the data type of an expression depends on the types of the arguments. This rule applies to most of the operators: when we add two integers, the result should be an integer. However, in the case of division this doesn’t work out well, because there are two different expectations. Sometimes we expect division to produce a precise floating-point number, and other times we want a rounded-down integer result.
In general, the python definition of division(/) depended solely on the arguments. For example in python 2.7, dividing 20/7 was 2 because both arguments were integers. However, 20./7 will generate 2.857142857142857 as output because the arguments were floating point numbers.
The above definition of ‘/’ often caused problems for applications that used data types the author hadn’t expected.
Consider a simple program converting temperature from Celsius to Fahrenheit. The conversion will produce two different results depending on the input. If one user provides an integer argument (18) and another a floating-point argument (18.0), the answers are totally different, even though all of the inputs have equal numeric values.
#Conversion of Celsius to Fahrenheit in python 2.6
>>> print 18*9/5 + 32
64
>>> print 18.0*9/5 + 32
64.4
>>> 18 == 18.0
True
From the above we can see that when we pass 18.0 we get the correct output, and when we pass 18 we get an incorrect output. This behaviour is because in Python 2.x the “/” operator works as floor division when all the arguments are integers. However, if one of the arguments is a float value, the “/” operator returns a float value.
An explicit conversion function (like float(x)) can help prevent this. The idea, however, is for Python to be a simple and sparse language, without a dense clutter of conversions to cover the rare case of an unexpected data type. Starting with Python 2.2, a new division operator was added to clarify what was expected: the ordinary / operator would, in the future, return floating-point results, while a special division operator, //, would return rounded-down results.
>>> # Python 2.7 program to demonstrate the use of "//" for both integers and floating point number
>>> print 9//2
4
>>> print -9//2
-5
>>> print 9.0//2
4.0
>>> print -9.0//2
-5.0
In Python 3.x, the flaws mentioned above were removed, and the ‘/’ operator performs floating point division for both integer and floating point arguments.
>>> #Conversion of Celsius to Fahrenheit in Python 3.x
>>> #Passing 18 (integer)
>>> print (18*9/5 + 32)
64.4
>>> #Passing 18.0(float)
>>> print(18.0*9/5 + 32)
64.4
Also, there is no difference in behavior when we pass positive (+ve) or negative (–ve) arguments.
>>> print(9/2)
4.5
>>> print(-9/2)
-4.5
>>> print(9.0/2)
4.5
>>> print(-9.0/2)
-4.5
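The two operators are tied together by the identity a == (a // b) * b + a % b, and the built-in divmod() returns both parts in one call. A small sketch (not from the original article) verifying the invariant for the values used above:

```python
# divmod(a, b) returns (a // b, a % b) in a single call
for a, b in [(9, 2), (-9, 2), (9.0, 2), (-9.0, 2)]:
    q, r = divmod(a, b)
    # floor division and modulo always satisfy this identity
    assert a == q * b + r
    print(a, b, "->", q, r)
```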
|
[
{
"code": null,
"e": 1489,
"s": 1062,
"text": "Generally, the data type of an expression depends on the types of the arguments. This rule is applied to most of the operators: like when we add two integers ,the result should be an integer. However, in case of division this doesn’t work out well because there are two different expectations. Sometimes we expect division to generate create precise floating point number and other times we want a rounded-down integer result."
},
{
"code": null,
"e": 1766,
"s": 1489,
"text": "In general, the python definition of division(/) depended solely on the arguments. For example in python 2.7, dividing 20/7 was 2 because both arguments where integers. However, 20./7 will generate 2.857142857142857 as output because the arguments were floating point numbers."
},
{
"code": null,
"e": 1889,
"s": 1766,
"text": "Above definition of ‘/’ often caused problems for applications where data types were used that the author hadn’t expected."
},
{
"code": null,
"e": 2219,
"s": 1889,
"text": "Consider a simple program of converting temperature from Celsius to Fahrenheit conversion will produce two different results depending on the input. If one user provide integer argument(18) and another floating point argument(18.0), then answers were totally different, even though all of the inputs had the equal numeric values."
},
{
"code": null,
"e": 2345,
"s": 2219,
"text": "#Conversion of celcius to Fahrendheit in python 2.6\n>>> print 18*9/5 + 32\n64\n>>> print 18.0*9/5 + 32\n64.4\n>>> 18 == 18.0\nTrue"
},
{
"code": null,
"e": 2682,
"s": 2345,
"text": "From above we can see, when we pass the 18.0, we get the correct output and when we pass 18, we are getting incorrect output. This behaviour is because in python 2.x, the “/” operator works as a floor division in case all the arguments are integers. However, if one of the argument is float value the “/” operator returns a float value."
},
{
"code": null,
"e": 3147,
"s": 2682,
"text": "An explicit conversion function(like float(x)) can help prevent this. The idea however, is for python be simple and sparse language, without a dense clutter of conversions to cover the rare case of an unexpected data type. Starting with Python 2.2 version, a new division operator was added to clarify what was expectred. The ordinary / operator will, in the future, return floating-point results. A special division operator, //, will return rounded-down results."
},
{
"code": null,
"e": 3327,
"s": 3147,
"text": ">>> # Python 2.7 program to demonstrate the use of \"//\" for both integers and floating point number\n>>> print 9//2\n4\n>>> print -9//2\n-5\n>>> print 9.0//2\n4.0\n>>> print -9.0//2\n-5.0"
},
{
"code": null,
"e": 3474,
"s": 3327,
"text": "In python 3.x, above mentioned flaws were removed and the ‘/’ operator does floating point division for both integer and floating point arguments."
},
{
"code": null,
"e": 3640,
"s": 3474,
"text": ">>> #Conversion of celcius to Fahrendheit in python 3.x\n>>> #Passing 18 (integer)\n>>> print (18*9/5 + 32)\n64.4\n>>> #Passing 18.0(float)\n>>> print(18.0*9/5 + 32)\n64.4"
},
{
"code": null,
"e": 3703,
"s": 3640,
"text": "Also there is no difference when we pass +ve or –ve arguments."
},
{
"code": null,
"e": 3787,
"s": 3703,
"text": ">>> print(9/2)\n4.5\n>>> print(-9/2)\n-4.5\n>>> print(9.0/2)\n4.5\n>>> print(-9.0/2)\n-4.5"
}
] |
Add two numbers without using arithmetic operators - GeeksforGeeks
|
27 Apr, 2022
C++
C
Java
Python3
C#
Javascript
#include <iostream>using namespace std; int add(int a, int b){ // for loop will start from 1 and move till the value of // second number , first number(a) is incremented in for // loop for (int i = 1; i <= b; i++) a++; return a;} int main(){ // first number is 10 and second number is 32 , for loop // will start from 1 and move till 32 and the value of a // is incremented 32 times which will give us the total // sum of two numbers int a = add(10, 32); cout << a; return 0;} // This code is contributed by Aditya Kumar (adityakumar129)
#include <stdio.h> int add(int a, int b){ // for loop will start from 1 and move till the value of // second number , first number(a) is incremented in for // loop for (int i = 1; i <= b; i++) a++; return a;} int main(){ // first number is 10 and second number is 32 , for loop // will start from 1 and move till 32 and the value of a // is incremented 32 times which will give us the total // sum of two numbers int a = add(10, 32); printf("%d", a); return 0;} // This code is contributed by Aditya Kumar (adityakumar129)
import java.util.*; class GFG { static int add(int a, int b) { // for loop will start from 1 and move till the // value of second number , first number(a) is // incremented in for loop for (int i = 1; i <= b; i++) a++; return a; } public static void main(String[] args) { // first number is 10 and second number is 32 , for // loop will start from 1 and move till 32 and the // value of a is incremented 32 times which will // give us the total sum of two numbers int a = add(10, 32); System.out.print(a); }} // This code is contributed by Aditya Kumar (adityakumar129)
# Python implementationdef add(a, b): # for loop will start from 1 and move till the value of second number , # first number(a) is incremented in for loop for i in range(1, b + 1): a = a + 1 return a # driver code# first number is 10 and second number is 32 , for loop# will start from 1 and move till 32 and the value of a# is incremented 32 times which will give us the total# sum of two numbersa = add(10, 32)print(a) # This code is contributed by Aditya Kumar (adityakumar129)
using System;public class GFG { static int add(int a, int b) { for (int i = 1; i <= b; i++) // for loop will start from 1 and move till the value of second // number , first number(a) is incremented in for loop { a++; } return a; } public static void Main(String[] args) { int a = add(10, 32); // first number is 10 and second number is 32 , for loop will start Console.Write(a); // from 1 and move till 32 and the value of a is incremented 32 times // which will give us the total sum of two numbers }} // This code is contributed by Rajput-Ji
<script> function add(a , b) { // for loop will start from 1 and move till the value of second // number , first number(a) is incremented in for loop for (i = 1; i <= b; i++) { a++; } return a; } // first number is 10 and second number is 32 , for loop will start var a = add(10, 32); // from 1 and move till 32 and the value of a is incremented 32 times // which will give us the total sum of two numbers document.write(a); // This code is contributed by Rajput-Ji</script>
Write a function Add() that returns the sum of two integers. The function should not use any of the arithmetic operators (+, ++, –, -, etc.). The sum of two bits can be obtained by performing XOR (^) of the two bits, and the carry bit can be obtained by performing AND (&) of the two bits.
This is the simple half-adder logic used to add two single bits, and we can extend it to integers. If x and y don’t have set bits at the same position(s), then the bitwise XOR (^) of x and y gives the sum of x and y. To incorporate common set bits as well, bitwise AND (&) is used: the AND of x and y gives all the carry bits. We calculate (x & y) << 1 and add it to x ^ y to get the required result.
C++
C
Java
Python3
C#
PHP
Javascript
// C++ Program to add two numbers// without using arithmetic operator#include <bits/stdc++.h>using namespace std; int Add(int x, int y){ // Iterate till there is no carry while (y != 0) { // carry should be unsigned to // deal with -ve numbers // carry now contains common //set bits of x and y unsigned carry = x & y; // Sum of bits of x and y where at //least one of the bits is not set x = x ^ y; // Carry is shifted by one so that adding // it to x gives the required sum y = carry << 1; } return x;} // Driver codeint main(){ cout << Add(15, 32); return 0;} // This code is contributed by rathbhupendra
// C Program to add two numbers// without using arithmetic operator#include<stdio.h> int Add(int x, int y){ // Iterate till there is no carry while (y != 0) { // carry now contains common //set bits of x and y unsigned carry = x & y; // Sum of bits of x and y where at //least one of the bits is not set x = x ^ y; // Carry is shifted by one so that adding // it to x gives the required sum y = carry << 1; } return x;} int main(){ printf("%d", Add(15, 32)); return 0;}
// Java Program to add two numbers// without using arithmetic operatorimport java.io.*; class GFG{ static int Add(int x, int y) { // Iterate till there is no carry while (y != 0) { // carry now contains common // set bits of x and y int carry = x & y; // Sum of bits of x and // y where at least one // of the bits is not set x = x ^ y; // Carry is shifted by // one so that adding it // to x gives the required sum y = carry << 1; } return x; } // Driver code public static void main(String arg[]) { System.out.println(Add(15, 32)); }} // This code is contributed by Anant Agarwal.
# Python3 Program to add two numbers# without using arithmetic operatordef Add(x, y): # Iterate till there is no carry while (y != 0): # carry now contains common # set bits of x and y carry = x & y # Sum of bits of x and y where at # least one of the bits is not set x = x ^ y # Carry is shifted by one so that # adding it to x gives the required sum y = carry << 1 return x print(Add(15, 32)) # This code is contributed by# Smitha Dinesh Semwal
// C# Program to add two numbers// without using arithmetic operatorusing System; class GFG{ static int Add(int x, int y) { // Iterate till there is no carry while (y != 0) { // carry now contains common // set bits of x and y int carry = x & y; // Sum of bits of x and // y where at least one // of the bits is not set x = x ^ y; // Carry is shifted by // one so that adding it // to x gives the required sum y = carry << 1; } return x; } // Driver code public static void Main() { Console.WriteLine(Add(15, 32)); }} // This code is contributed by vt_m.
<?php// PHP Program to add two numbers// without using arithmetic operator function Add( $x, $y){ // Iterate till there is // no carry while ($y != 0) { // carry now contains common //set bits of x and y $carry = $x & $y; // Sum of bits of x and y where at //least one of the bits is not set $x = $x ^ $y; // Carry is shifted by one // so that adding it to x // gives the required sum $y = $carry << 1; } return $x;} // Driver Code echo Add(15, 32); // This code is contributed by anuj_67.?>
<script> // Javascript Program to add two numbers// without using arithmetic operator function Add(x, y) { // Iterate till there is no carry while (y != 0) { // carry now contains common //set bits of x and y let carry = x & y; // Sum of bits of x and y where at //least one of the bits is not set x = x ^ y; // Carry is shifted by one so that adding // it to x gives the required sum y = carry << 1; } return x; } //driver code document.write(Add(15, 32)); // This code is contributed by Surbhi Tyagi</script>
Output :
47
Time Complexity: O(log y)
Auxiliary Space: O(1)
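To see why the loop terminates, it helps to trace the carry propagation by hand. Here is a short sketch (plain Python, mirroring the iterative Add above, with names of my own) that traces 5 + 3:

```python
def add_traced(x, y):
    # same loop as the iterative Add above, with each step printed
    while y != 0:
        carry = x & y        # common set bits become the carry
        x = x ^ y            # sum of bits, ignoring the carry
        y = carry << 1       # carry moves one position left each round
        print(f"x={x:04b} y={y:04b}")
    return x

print(add_traced(5, 3))  # 8
```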
Following is the recursive implementation for the same approach.
C++
C
Java
Python3
C#
Javascript
int Add(int x, int y)
{
    if (y == 0)
        return x;
    else
        return Add(x ^ y, (unsigned)(x & y) << 1);
}
// This code is contributed by shubhamsingh10

int Add(int x, int y)
{
    if (y == 0)
        return x;
    else
        return Add(x ^ y, (unsigned)(x & y) << 1);
}

static int Add(int x, int y)
{
    if (y == 0)
        return x;
    else
        return Add(x ^ y, (x & y) << 1);
}
// This code is contributed by subham348

def Add(x, y):
    if (y == 0):
        return x
    else:
        return Add(x ^ y, (x & y) << 1)

# This code is contributed by subhammahato348

static int Add(int x, int y)
{
    if (y == 0)
        return x;
    else
        return Add(x ^ y, (x & y) << 1);
}
// This code is contributed by subhammahato348

function Add(x, y)
{
    if (y == 0)
        return x;
    else
        return Add(x ^ y, (x & y) << 1);
}
// This code is contributed by Ankita saini
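One caveat the article does not mention: in Python, integers have arbitrary precision, so for a negative operand the carry in Add keeps shifting left forever and the loop (or recursion) never terminates. A common workaround — a sketch of my own, assuming 32-bit two's-complement semantics — is to mask every intermediate value to a fixed width:

```python
MASK = 0xFFFFFFFF   # keep values within 32 bits
SIGN = 0x7FFFFFFF   # largest positive 32-bit value

def add32(x, y):
    x &= MASK
    y &= MASK
    while y != 0:
        carry = (x & y) & MASK
        x = (x ^ y) & MASK
        y = (carry << 1) & MASK
    # reinterpret the 32-bit pattern as a signed integer
    return x if x <= SIGN else ~(x ^ MASK)

print(add32(15, 32))   # 47
print(add32(-9, 2))    # -7
```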
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
vt_m
rathbhupendra
surbhityagi15
subham348
subhammahato348
ankita_saini
SHUBHAMSINGH10
piyusht20
mozasuvesh
souravmahato348
sayam10
Rajput-Ji
GauravRajput1
shinjanpatra
adityakumar129
Bitwise-XOR
Bit Magic
Mathematical
Mathematical
Bit Magic
Bitwise Operators in C/C++
Left Shift and Right Shift Operators in C/C++
Travelling Salesman Problem | Set 1 (Naive and Dynamic Programming)
Count set bits in an integer
Cyclic Redundancy Check and Modulo-2 Division
C++ Data Types
Set in C++ Standard Template Library (STL)
Merge two sorted arrays
Program to find GCD or HCF of two numbers
Modulo Operator (%) in C/C++ with Examples
|
[
{
"code": null,
"e": 34594,
"s": 34566,
"text": "\n27 Apr, 2022"
},
{
"code": null,
"e": 34598,
"s": 34594,
"text": "C++"
},
{
"code": null,
"e": 34600,
"s": 34598,
"text": "C"
},
{
"code": null,
"e": 34605,
"s": 34600,
"text": "Java"
},
{
"code": null,
"e": 34613,
"s": 34605,
"text": "Python3"
},
{
"code": null,
"e": 34616,
"s": 34613,
"text": "C#"
},
{
"code": null,
"e": 34627,
"s": 34616,
"text": "Javascript"
},
{
"code": "#include <iostream>using namespace std; int add(int a, int b){ // for loop will start from 1 and move till the value of // second number , first number(a) is incremented in for // loop for (int i = 1; i <= b; i++) a++; return a;} int main(){ // first number is 10 and second number is 32 , for loop // will start from 1 and move till 32 and the value of a // is incremented 32 times which will give us the total // sum of two numbers int a = add(10, 32); cout << a; return 0;} // This code is contributed by Aditya Kumar (adityakumar129)",
"e": 35209,
"s": 34627,
"text": null
},
{
"code": "#include <stdio.h> int add(int a, int b){ // for loop will start from 1 and move till the value of // second number , first number(a) is incremented in for // loop for (int i = 1; i <= b; i++) a++; return a;} int main(){ // first number is 10 and second number is 32 , for loop // will start from 1 and move till 32 and the value of a // is incremented 32 times which will give us the total // sum of two numbers int a = add(10, 32); printf(\"%d\", a); return 0;} // This code is contributed by Aditya Kumar (adityakumar129)",
"e": 35776,
"s": 35209,
"text": null
},
{
"code": "import java.util.*; class GFG { static int add(int a, int b) { // for loop will start from 1 and move till the // value of second number , first number(a) is // incremented in for loop for (int i = 1; i <= b; i++) a++; return a; } public static void main(String[] args) { // first number is 10 and second number is 32 , for // loop will start from 1 and move till 32 and the // value of a is incremented 32 times which will // give us the total sum of two numbers int a = add(10, 32); System.out.print(a); }} // This code is contributed by Aditya Kumar (adityakumar129)",
"e": 36454,
"s": 35776,
"text": null
},
{
"code": "# Python implementationdef add(a, b): # for loop will start from 1 and move till the value of second number , # first number(a) is incremented in for loop for i in range(1, b + 1): a = a + 1 return a # driver code# first number is 10 and second number is 32 , for loop# will start from 1 and move till 32 and the value of a# is incremented 32 times which will give us the total# sum of two numbersa = add(10, 32)print(a) # This code is contributed by Aditya Kumar (adityakumar129)",
"e": 36955,
"s": 36454,
"text": null
},
{
"code": "using System;public class GFG { static int add(int a, int b) { for (int i = 1; i <= b; i++) // for loop will start from 1 and move till the value of second // number , first number(a) is incremented in for loop { a++; } return a; } public static void Main(String[] args) { int a = add(10, 32); // first number is 10 and second number is 32 , for loop will start Console.Write(a); // from 1 and move till 32 and the value of a is incremented 32 times // which will give us the total sum of two numbers }} // This code is contributed by Rajput-Ji",
"e": 37539,
"s": 36955,
"text": null
},
{
"code": "<script> function add(a , b) { // for loop will start from 1 and move till the value of second // number , first number(a) is incremented in for loop for (i = 1; i <= b; i++) { a++; } return a; } // first number is 10 and second number is 32 , for loop will start var a = add(10, 32); // from 1 and move till 32 and the value of a is incremented 32 times // which will give us the total sum of two numbers document.write(a); // This code is contributed by Rajput-Ji</script>",
"e": 38098,
"s": 37539,
"text": null
},
{
"code": null,
"e": 38370,
"s": 38098,
"text": "Write a function Add() that returns sum of two integers. The function should not use any of the arithmetic operators (+, ++, –, -, .. etc).Sum of two bits can be obtained by performing XOR (^) of the two bits. Carry bit can be obtained by performing AND (&) of two bits. "
},
{
"code": null,
"e": 38773,
"s": 38370,
"text": "Above is simple Half Adder logic that can be used to add 2 single bits. We can extend this logic for integers. If x and y don’t have set bits at same position(s), then bitwise XOR (^) of x and y gives the sum of x and y. To incorporate common set bits also, bitwise AND (&) is used. Bitwise AND of x and y gives all carry bits. We calculate (x & y) << 1 and add it to x ^ y to get the required result. "
},
{
"code": null,
"e": 38777,
"s": 38773,
"text": "C++"
},
{
"code": null,
"e": 38779,
"s": 38777,
"text": "C"
},
{
"code": null,
"e": 38784,
"s": 38779,
"text": "Java"
},
{
"code": null,
"e": 38792,
"s": 38784,
"text": "Python3"
},
{
"code": null,
"e": 38795,
"s": 38792,
"text": "C#"
},
{
"code": null,
"e": 38799,
"s": 38795,
"text": "PHP"
},
{
"code": null,
"e": 38810,
"s": 38799,
"text": "Javascript"
},
{
"code": "// C++ Program to add two numbers// without using arithmetic operator#include <bits/stdc++.h>using namespace std; int Add(int x, int y){ // Iterate till there is no carry while (y != 0) { // carry should be unsigned to // deal with -ve numbers // carry now contains common //set bits of x and y unsigned carry = x & y; // Sum of bits of x and y where at //least one of the bits is not set x = x ^ y; // Carry is shifted by one so that adding // it to x gives the required sum y = carry << 1; } return x;} // Driver codeint main(){ cout << Add(15, 32); return 0;} // This code is contributed by rathbhupendra",
"e": 39518,
"s": 38810,
"text": null
},
{
"code": "// C Program to add two numbers// without using arithmetic operator#include<stdio.h> int Add(int x, int y){ // Iterate till there is no carry while (y != 0) { // carry now contains common //set bits of x and y unsigned carry = x & y; // Sum of bits of x and y where at //least one of the bits is not set x = x ^ y; // Carry is shifted by one so that adding // it to x gives the required sum y = carry << 1; } return x;} int main(){ printf(\"%d\", Add(15, 32)); return 0;}",
"e": 40076,
"s": 39518,
"text": null
},
{
"code": "// Java Program to add two numbers// without using arithmetic operatorimport java.io.*; class GFG{ static int Add(int x, int y) { // Iterate till there is no carry while (y != 0) { // carry now contains common // set bits of x and y int carry = x & y; // Sum of bits of x and // y where at least one // of the bits is not set x = x ^ y; // Carry is shifted by // one so that adding it // to x gives the required sum y = carry << 1; } return x; } // Driver code public static void main(String arg[]) { System.out.println(Add(15, 32)); }} // This code is contributed by Anant Agarwal.",
"e": 40850,
"s": 40076,
"text": null
},
{
"code": "# Python3 Program to add two numbers# without using arithmetic operatordef Add(x, y): # Iterate till there is no carry while (y != 0): # carry now contains common # set bits of x and y carry = x & y # Sum of bits of x and y where at # least one of the bits is not set x = x ^ y # Carry is shifted by one so that # adding it to x gives the required sum y = carry << 1 return x print(Add(15, 32)) # This code is contributed by# Smitha Dinesh Semwal",
"e": 41384,
"s": 40850,
"text": null
},
{
"code": "// C# Program to add two numbers// without using arithmetic operatorusing System; class GFG{ static int Add(int x, int y) { // Iterate till there is no carry while (y != 0) { // carry now contains common // set bits of x and y int carry = x & y; // Sum of bits of x and // y where at least one // of the bits is not set x = x ^ y; // Carry is shifted by // one so that adding it // to x gives the required sum y = carry << 1; } return x; } // Driver code public static void Main() { Console.WriteLine(Add(15, 32)); }} // This code is contributed by vt_m.",
"e": 42130,
"s": 41384,
"text": null
},
{
"code": "<?php// PHP Program to add two numbers// without using arithmetic operator function Add( $x, $y){ // Iterate till there is // no carry while ($y != 0) { // carry now contains common //set bits of x and y $carry = $x & $y; // Sum of bits of x and y where at //least one of the bits is not set $x = $x ^ $y; // Carry is shifted by one // so that adding it to x // gives the required sum $y = $carry << 1; } return $x;} // Driver Code echo Add(15, 32); // This code is contributed by anuj_67.?>",
"e": 42733,
"s": 42130,
"text": null
},
{
"code": "<script> // Javascript Program to add two numbers// without using arithmetic operator function Add(x, y) { // Iterate till there is no carry while (y != 0) { // carry now contains common //set bits of x and y let carry = x & y; // Sum of bits of x and y where at //least one of the bits is not set x = x ^ y; // Carry is shifted by one so that adding // it to x gives the required sum y = carry << 1; } return x; } //driver code document.write(Add(15, 32)); // This code is contributed by Surbhi Tyagi</script>",
"e": 43412,
"s": 42733,
"text": null
},
{
"code": null,
"e": 43422,
"s": 43412,
"text": "Output : "
},
{
"code": null,
"e": 43425,
"s": 43422,
"text": "47"
},
{
"code": null,
"e": 43451,
"s": 43425,
"text": "Time Complexity: O(log y)"
},
{
"code": null,
"e": 43473,
"s": 43451,
"text": "Auxiliary Space: O(1)"
},
{
"code": null,
"e": 43538,
"s": 43473,
"text": "Following is the recursive implementation for the same approach."
},
{
"code": null,
"e": 43542,
"s": 43538,
"text": "C++"
},
{
"code": null,
"e": 43544,
"s": 43542,
"text": "C"
},
{
"code": null,
"e": 43549,
"s": 43544,
"text": "Java"
},
{
"code": null,
"e": 43557,
"s": 43549,
"text": "Python3"
},
{
"code": null,
"e": 43560,
"s": 43557,
"text": "C#"
},
{
"code": null,
"e": 43571,
"s": 43560,
"text": "Javascript"
},
{
"code": "int Add(int x, int y){ if (y == 0) return x; else return Add( x ^ y,(unsigned) (x & y) << 1);} // This code is contributed by shubhamsingh10",
"e": 43732,
"s": 43571,
"text": null
},
{
"code": "int Add(int x, int y){ if (y == 0) return x; else return Add( x ^ y, (unsigned)(x & y) << 1);}",
"e": 43847,
"s": 43732,
"text": null
},
{
"code": "static int Add(int x, int y){ if (y == 0) return x; else return Add(x ^ y, (x & y) << 1);} // This code is contributed by subham348",
"e": 43987,
"s": 43847,
"text": null
},
{
"code": "def Add(x, y): if (y == 0): return x else return Add( x ^ y, (x & y) << 1) # This code is contributed by subhammahato348",
"e": 44135,
"s": 43987,
"text": null
},
{
"code": "static int Add(int x, int y){ if (y == 0) return x; else return Add(x ^ y, (x & y) << 1);} // This code is contributed by subhammahato348",
"e": 44281,
"s": 44135,
"text": null
},
{
"code": "function Add(x, y){ if (y == 0) return x; else return Add(x ^ y, (x & y) << 1);} // This code is contributed by Ankita saini",
"e": 44426,
"s": 44281,
"text": null
},
{
"code": null,
"e": 44552,
"s": 44426,
"text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 44557,
"s": 44552,
"text": "vt_m"
},
{
"code": null,
"e": 44571,
"s": 44557,
"text": "rathbhupendra"
},
{
"code": null,
"e": 44585,
"s": 44571,
"text": "surbhityagi15"
},
{
"code": null,
"e": 44595,
"s": 44585,
"text": "subham348"
},
{
"code": null,
"e": 44611,
"s": 44595,
"text": "subhammahato348"
},
{
"code": null,
"e": 44624,
"s": 44611,
"text": "ankita_saini"
},
{
"code": null,
"e": 44639,
"s": 44624,
"text": "SHUBHAMSINGH10"
},
{
"code": null,
"e": 44649,
"s": 44639,
"text": "piyusht20"
},
{
"code": null,
"e": 44660,
"s": 44649,
"text": "mozasuvesh"
},
{
"code": null,
"e": 44676,
"s": 44660,
"text": "souravmahato348"
},
{
"code": null,
"e": 44684,
"s": 44676,
"text": "sayam10"
},
{
"code": null,
"e": 44694,
"s": 44684,
"text": "Rajput-Ji"
},
{
"code": null,
"e": 44708,
"s": 44694,
"text": "GauravRajput1"
},
{
"code": null,
"e": 44721,
"s": 44708,
"text": "shinjanpatra"
},
{
"code": null,
"e": 44736,
"s": 44721,
"text": "adityakumar129"
},
{
"code": null,
"e": 44748,
"s": 44736,
"text": "Bitwise-XOR"
},
{
"code": null,
"e": 44758,
"s": 44748,
"text": "Bit Magic"
},
{
"code": null,
"e": 44771,
"s": 44758,
"text": "Mathematical"
},
{
"code": null,
"e": 44784,
"s": 44771,
"text": "Mathematical"
},
{
"code": null,
"e": 44794,
"s": 44784,
"text": "Bit Magic"
},
{
"code": null,
"e": 44892,
"s": 44794,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 44919,
"s": 44892,
"text": "Bitwise Operators in C/C++"
},
{
"code": null,
"e": 44965,
"s": 44919,
"text": "Left Shift and Right Shift Operators in C/C++"
},
{
"code": null,
"e": 45033,
"s": 44965,
"text": "Travelling Salesman Problem | Set 1 (Naive and Dynamic Programming)"
},
{
"code": null,
"e": 45062,
"s": 45033,
"text": "Count set bits in an integer"
},
{
"code": null,
"e": 45108,
"s": 45062,
"text": "Cyclic Redundancy Check and Modulo-2 Division"
},
{
"code": null,
"e": 45123,
"s": 45108,
"text": "C++ Data Types"
},
{
"code": null,
"e": 45166,
"s": 45123,
"text": "Set in C++ Standard Template Library (STL)"
},
{
"code": null,
"e": 45190,
"s": 45166,
"text": "Merge two sorted arrays"
},
{
"code": null,
"e": 45232,
"s": 45190,
"text": "Program to find GCD or HCF of two numbers"
}
] |
Optimal Strategy For A Game | Practice | GeeksforGeeks
|
You are given an array A of size N. The array contains integers and is of even length. The elements of the array represent N coins of values V1, V2, ..., Vn. You play against an opponent in an alternating way.
In each turn, a player selects either the first or last coin from the row, removes it from the row permanently, and receives the value of the coin.
You need to determine the maximum possible amount of money you can win if you go first.
Note: Both the players are playing optimally.
Example 1:
Input:
N = 4
A[] = {5,3,7,10}
Output: 15
Explanation: The user collects maximum
value as 15(10 + 5)
Example 2:
Input:
N = 4
A[] = {8,15,3,7}
Output: 22
Explanation: The user collects maximum
value as 22(7 + 15)
Your Task:
Complete the function maximumAmount() which takes an array arr[] (represent values of N coins) and N as number of coins as a parameter and returns the maximum possible amount of money you can win if you go first.
Expected Time Complexity : O(N*N)
Expected Auxiliary Space: O(N*N)
Constraints:
2 <= N <= 10^3
1 <= Ai <= 10^6
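Before the community solutions below, here is one common way to meet the expected O(N*N) bounds with bottom-up interval DP (a sketch in Python; the function and variable names are my own, not part of the judge's template):

```python
def maximum_amount(arr):
    n = len(arr)
    # dp[i][j]: best total for the player whose turn it is on coins arr[i..j]
    dp = [[0] * n for _ in range(n)]
    for i in range(n):
        dp[i][i] = arr[i]                       # one coin left: take it
    for i in range(n - 1):
        dp[i][i + 1] = max(arr[i], arr[i + 1])  # two coins: take the bigger
    for length in range(3, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            # the opponent plays optimally, so we are left with the minimum
            take_left = arr[i] + min(dp[i + 2][j], dp[i + 1][j - 1])
            take_right = arr[j] + min(dp[i + 1][j - 1], dp[i][j - 2])
            dp[i][j] = max(take_left, take_right)
    return dp[0][n - 1]

print(maximum_amount([5, 3, 7, 10]))   # 15
print(maximum_amount([8, 15, 3, 7]))   # 22
```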
0
milindprajapatmst195 days ago
Total Time Taken: 0.33/1.14
Time Complexity: O(n*n)
Space Complexity: O(n*n)
# define ll long long
# define pi pair<ll, ll>
# define mp make_pair
# define f first
# define s second
const int N = 1e3;
pi dp[N][N];
class Solution {
public:
pi help(int k1, int k2, int arr[]) {
if (k1 == k2)
return mp(arr[k1], 0);
if (dp[k1][k2].f == -1) {
pi p1 = help(k1 + 1, k2, arr);
pi p2 = help(k1, k2 - 1, arr);
if (p1.s + arr[k1] < p2.s + arr[k2])
dp[k1][k2] = mp(p2.s + arr[k2], p2.f);
else
dp[k1][k2] = mp(p1.s + arr[k1], p1.f);
}
return dp[k1][k2];
}
ll maximumAmount(int arr[], int n) {
for (int i = 0; i < n; i++)
for (int j = 0; j < n; j++)
dp[i][j] = mp(-1, -1);
pi p = help(0, n - 1, arr);
return p.f;
}
};
0
maheshahirwar2042 weeks ago
Here is amazing java solution
class solve {
    // Function to find the maximum possible amount of money we can win.
    static long[][] dp = new long[1001][1001];

    static long countMaximum(int arr[], int n) {
        for (long[] a : dp) Arrays.fill(a, -1);
        return solve(arr, 0, n - 1);
    }

    static long solve(int[] arr, int l, int r) {
        if (l + 1 == r) {
            return dp[l][r] = Math.max(arr[l], arr[r]);
        }
        if (dp[l][r] != -1) return dp[l][r];
        long tem1, tem2, tem3, tem4;
        long first, second;
        if (dp[l + 2][r] != -1) {
            tem1 = dp[l + 2][r];
        } else {
            tem1 = solve(arr, l + 2, r);
            dp[l + 2][r] = tem1;
        }
        if (dp[l + 1][r - 1] != -1) {
            tem3 = tem2 = dp[l + 1][r - 1];
        } else {
            tem3 = tem2 = solve(arr, l + 1, r - 1);
            dp[l + 1][r - 1] = tem2;
        }
        if (dp[l][r - 2] != -1) {
            tem4 = dp[l][r - 2];
        } else {
            tem4 = solve(arr, l, r - 2);
            dp[l][r - 2] = tem4;
        }
        first = arr[l] + Math.min(tem1, tem2);
        second = arr[r] + Math.min(tem3, tem4);
        return dp[l][r] = Math.max(first, second);
    }
}
0
ashutosh0920003 weeks ago
public class OptimalStrategyForAGame {
    public static int optimalStrategy(int[] arr) {
        int[][] dp = new int[arr.length][arr.length];
        // g => gap (we use the gap strategy here,
        // like in count palindromic subsequences)
        for (int g = 0; g < dp.length; g++) {
            for (int i = 0, j = g; j < dp.length; i++, j++) {
                if (g == 0) {
                    // gap => 0
                    dp[i][j] = arr[i];
                } else if (g == 1) {
                    dp[i][j] = Math.max(arr[i], arr[j]);
                } else {
                    // If we take i, then j remains for the opponent; if we
                    // take j, then i remains for the opponent. The opponent's
                    // choice happens to us, so we assume the minimum.
                    int val1 = arr[i] + Math.min(dp[i + 2][j], dp[i + 1][j - 1]);
                    int val2 = arr[j] + Math.min(dp[i + 1][j - 1], dp[i][j - 2]);
                    // This is our own move, so we take the maximum
                    int val = Math.max(val1, val2);
                    dp[i][j] = val;
                }
            }
        }
        return dp[0][arr.length - 1]; // answer is in the top-right cell
    }

    public static void main(String[] args) {
        int[] arr = {20, 30, 2, 10};
        System.out.println(optimalStrategy(arr));
    }
}
0
ruchitchudasama1232 months ago
public:
long long go(int arr[],int l,int h,vector<vector<long long>> &dp){
if(l+1>h){
return 0;
}
if(dp[l][h]!=-1)return dp[l][h];
long long first=arr[l]+min(go(arr,l+2,h,dp),go(arr,l+1,h-1,dp));
long long second=arr[h]+min(go(arr,l,h-2,dp),go(arr,l+1,h-1,dp));
return dp[l][h]=max(first,second);
}
long long maximumAmount(int arr[], int n){
vector<vector<long long>> dp(n+1,vector<long long>(n+1,-1));
return go(arr,0,n-1,dp);
}
+1
aloksinghbais022 months ago
C++ solution using memoization having time complexity as O(N*N) and space complexity as O(N*N) is as follows :-
Execution Time :- 0.1 / 1.1 sec
long long int dp[1001][1001];

long long int max(long long int a, long long int b) {
    return a > b ? a : b;
}

long long int helper(int arr[], int i, int j) {
    if (i == j) {
        return arr[i];
    }
    if (i > j)
        return (0);
    if (dp[i][j] != -1)
        return (dp[i][j]);
    long long int res1 = arr[j] + min(helper(arr, i, j - 2), helper(arr, i + 1, j - 1));
    long long int res2 = arr[i] + min(helper(arr, i + 2, j), helper(arr, i + 1, j - 1));
    return dp[i][j] = max(res1, res2);
}

long long maximumAmount(int arr[], int n) {
    memset(dp, -1, sizeof(dp));
    return helper(arr, 0, n - 1);
}
-1
vishalpandey100220002 months ago
static long countMaximum(int arr[], int n) {
    long dp[][] = new long[n][n];
    for (int i = 0; i < n - 1; i++) {
        dp[i][i + 1] = Math.max(arr[i], arr[i + 1]);
    }
    for (int gap = 3; gap < n; gap = gap + 2) {
        for (int i = 0; i + gap < n; i++) {
            int j = i + gap;
            dp[i][j] = Math.max(arr[i] + Math.min(dp[i + 2][j], dp[i + 1][j - 1]),
                                arr[j] + Math.min(dp[i + 1][j - 1], dp[i][j - 2]));
        }
    }
    return dp[0][n - 1];
}
victor1910866, 2 months ago
Memoization Approach Easy to Understand || c++
long long int helper(int arr[], int i, int n, vector<vector<long long int>> &dp) {
    if (i > n) return 0;
    if (i == n) return arr[i];
    if (dp[i][n] != -1) return dp[i][n];
    return dp[i][n] = max(arr[i] + min(helper(arr, i + 1, n - 1, dp), helper(arr, i + 2, n, dp)),
                          arr[n] + min(helper(arr, i + 1, n - 1, dp), helper(arr, i, n - 2, dp)));
}

long long maximumAmount(int arr[], int n) {
    vector<vector<long long int>> dp(n + 1, vector<long long int>(n + 1, -1));
    return helper(arr, 0, n - 1, dp);
}
amishasahu328, 3 months ago
class Solution{
public:
// Memoization method
long long dp[1001][1001];
long long solve(int arr[], int n, int i, int j)
{
// below diagonal values zero
if(i > j)
dp[i][j] = 0;
// filling the central diagonal value
if(i == j)
dp[i][j] = arr[i];
if(dp[i][j] != -1)
return dp[i][j];
long long int front_pick = arr[i]+ min(solve(arr,n,i+2,j),solve(arr,n,i+1,j-1));
long long int back_pick = arr[j]+ min(solve(arr,n,i,j-2),solve(arr,n,i+1,j-1));
return dp[i][j] = max(front_pick, back_pick);
}
long long maximumAmount(int arr[], int n){
memset(dp, -1, sizeof(dp));
return solve(arr, n, 0, n-1);
}
};
How can I get the current time and date in Kotlin?
This example demonstrates how to get the current time and date in an Android app Kotlin.
Step 1 − Create a new project in Android Studio, go to File ⇉ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<TextView
android:id="@+id/dateAndTime"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
android:layout_centerHorizontal="true"
android:layout_marginBottom="36dp"
android:textAlignment="center"
android:textSize="24sp"
android:textStyle="bold" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.kt
import android.os.Bundle
import android.widget.TextView
import androidx.appcompat.app.AppCompatActivity
import java.text.SimpleDateFormat
import java.util.*
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
val textView: TextView = findViewById(R.id.dateAndTime)
val simpleDateFormat = SimpleDateFormat("yyyy.MM.dd G 'at' HH:mm:ss z")
val currentDateAndTime: String = simpleDateFormat.format(Date())
textView.text = currentDateAndTime
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="app.com.kotlipapp">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run the application. I assume you have connected an actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon in the toolbar. Select your mobile device as an option; the device will then display the app's screen showing the current date and time.
Click here to download the project code.
Progress bars for Python with tqdm | by Doug Steen | Towards Data Science
Not long after I began working on machine learning projects in Python, I ran into computationally-intensive tasks that just took a long time to run. Usually this was associated with some kind of iterable process. A couple that immediately come to mind are (1) running a grid search on p, d, and q orders to fit ARIMA models on large data sets, and (2) grid searching hyperparameters while training machine learning algorithms. In both cases, you can potentially spend hours (or more!) waiting for your code to finish running. Desperate for some kind of indicator to show the progress of these tasks, I found tqdm.
tqdm is a Python library that allows you to output a smart progress bar by wrapping around any iterable. A tqdm progress bar not only shows you how much time has elapsed, but also shows the estimated time remaining for the iterable.
Since tqdm is part of the Python Package Index (PyPI), it can be installed using the pip install tqdm command.
I tend to work often in IPython/Jupyter notebooks, and tqdm provides excellent support for this. To begin playing with tqdm in a notebook, you can import the following:
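The exact import line is not preserved in this copy, so the following is a sketch based on the tqdm package layout rather than a quote from the original:

```python
# The console classes work in scripts and notebooks alike; inside Jupyter,
# `from tqdm.notebook import tqdm, trange` gives widget-based bars instead.
from tqdm import tqdm, trange
```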
For the sake of clarity, I won’t get into a computationally-intensive grid search in this post — instead I’ll use a few simple examples to demonstrate the use of tqdm.
For-loop progress
Let’s say we wanted to simulate flipping a fair coin 100,000,000 times while tracking the results, and we also wanted to see how long these iterations will take to run in Python. We can wrap the tqdm function around the iterable (range(100000000)), which will generate a progress bar while our for-loop is running. We can also assign a name to the progress bar using the desc keyword argument.
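The loop described above might look like the following sketch. The flip count is cut to 1,000 (the article uses 100,000,000) so it finishes instantly, and the results dictionary is my own bookkeeping, not taken from the original:

```python
import random

from tqdm import tqdm

# Wrap tqdm around the iterable; desc= names the progress bar.
results = {"heads": 0, "tails": 0}
for _ in tqdm(range(1_000), desc="Flipping coins"):
    if random.random() < 0.5:
        results["heads"] += 1
    else:
        results["tails"] += 1
```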
The resulting tqdm progress bar gives us information that includes the task completion percentage, number of iterations complete, time elapsed, estimated time remaining, and the iterations completed per second.
In this case, tqdm allows for further optimization by using trange(100000000) in place of the tqdm(range(100000000)).
Nested for-loop progress
If you have a situation that calls for a nested for-loop, tqdm allows you to track the progress of these loops at multiple levels. For example, let’s take our coin-flip example, but this time we want to play three separate “games” of 10,000,000 flips each while tracking the results. We can create a tqdm progress bar for “Overall Progress”, as well as progress bars for each of the three games.
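One way to sketch those nested bars (again with far fewer flips for speed; the `leave=False` cleanup flag is my assumption, not the article's):

```python
from tqdm import trange

# Outer bar tracks the three games; the inner bar tracks each game's flips.
# leave=False clears each inner bar once its game finishes.
flips_per_game = []
for game in trange(3, desc="Overall Progress"):
    flips = 0
    for _ in trange(1_000, desc=f"Game {game + 1}", leave=False):
        flips += 1
    flips_per_game.append(flips)
```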
A slightly different implementation of tqdm involves integration with pandas. tqdm can provide additional functionality for the .apply() method of a pandas dataframe. The pandas .progress_apply() method must first be ‘registered’ with tqdm using the code below. Then, the .progress_apply() method is used instead of the traditional .apply() method — the difference is, we now have a smart progress bar included in the method’s output.
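A hedged sketch of that registration and call pattern (the 1,000-row frame and the squaring function are invented for illustration):

```python
import pandas as pd
from tqdm import tqdm

# Register .progress_apply() on pandas objects; desc= names the bar.
tqdm.pandas(desc="Processing Dataframe")

df = pd.DataFrame({"x": range(1_000)})
# Use .progress_apply() where .apply() would normally go.
df["x_squared"] = df["x"].progress_apply(lambda v: v ** 2)
```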
Processing Dataframe: 100%|██████████| 1000/1000 [00:02<00:00, 336.21it/s]
In addition to being integrated with IPython/Jupyter and pandas, tqdm offers integration with Keras and experimental modules for itertools, concurrent, Discord, and Telegram. This post only scratches the surface of the capabilities of tqdm, so be sure to check out the documentation to learn more about how to include smart progress bars in your Python code!
Program for Fibonacci numbers in C
Given a number ‘n’, the task is to generate the Fibonacci series from 0 up to the nth term, where the Fibonacci series is of the form
0, 1, 1, 2, 3, 5, 8, 13, 21, 34
Here the integers 0 and 1 occupy fixed positions; after that, each term is the sum of the previous two, for example,
0+1=1(3rd place)
1+1=2(4th place)
2+1=3(5th place) and So on
The sequence F(n) of the Fibonacci series has the recurrence relation −
Fn = Fn-1 + Fn-2
Where, F(0)=0 and F(1)=1 are always fixed
There are multiple approaches that can be used to generate the Fibonacci series −
Recursive approach − in this approach, the function calls itself for the two preceding terms. It is simple and easy to implement, but it leads to exponential time complexity, which makes this approach ineffective.
Using a for loop − by generating the Fibonacci series iteratively with a for loop, the time complexity is reduced to O(n), which makes this approach effective.
Input-: n=10
Output-: 0 1 1 2 3 5 8 13 21 34
Start
Step 1 -> Declare function for Fibonacci series
Void Fibonacci(int n)
Declare variables as int a=0,b=1,c,i
Print a and b
Loop For i=2 and i<n and ++i
Set c=a+b
Print c
Set a=b
Set b=c
End
Step 2 -> In main()
Declare int as 10
Call Fibonacci(n)
Stop
#include<stdio.h>
void fibonacci(int n){
int a=0,b=1,c,i;
printf("fibonacci series till %d is ",n);
printf("\n%d %d",a,b);//it will print 0 and 1
   for(i=2;i<n;++i){ //loop starts from 2 because 0 and 1 are the fixed values that the series will take
c=a+b;
printf(" %d",c);
a=b;
b=c;
}
}
int main(){
int n=10;
fibonacci(n);
return 0;
}
fibonacci series till 10 is
0 1 1 2 3 5 8 13 21 34
Django - Caching
To cache something is to save the result of an expensive calculation, so that you don’t perform it the next time you need it. Following is a pseudo code that explains how caching works −
given a URL, try finding that page in the cache
if the page is in the cache:
return the cached page
else:
generate the page
save the generated page in the cache (for next time)
return the generated page
Django comes with its own caching system that lets you save your dynamic pages, to avoid calculating them again when needed. The good point in Django Cache framework is that you can cache −
The output of a specific view.
A part of a template.
Your entire site.
To use cache in Django, first thing to do is to set up where the cache will stay. The cache framework offers different possibilities - cache can be saved in database, on file system or directly in memory. Setting is done in the settings.py file of your project.
Just add the following in the project settings.py file −
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
'LOCATION': 'my_table_name',
}
}
For this to work and to complete the setting, we need to create the cache table 'my_table_name'. For this, you need to do the following −
python manage.py createcachetable
Just add the following in the project settings.py file −
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
'LOCATION': '/var/tmp/django_cache',
}
}
This is the most efficient way of caching, to use it you can use one of the following options depending on the Python binding library you choose for the memory cache −
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
Or
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'unix:/tmp/memcached.sock',
}
}
The simplest way of using cache in Django is to cache the entire site. This is done by editing the MIDDLEWARE_CLASSES option in the project settings.py. The following need to be added to the option −
MIDDLEWARE_CLASSES += (
'django.middleware.cache.UpdateCacheMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.cache.FetchFromCacheMiddleware',
)
Note that the order is important here, Update should come before Fetch middleware.
Then in the same file, you need to set −
CACHE_MIDDLEWARE_ALIAS – The cache alias to use for storage.
CACHE_MIDDLEWARE_SECONDS – The number of seconds each page should be cached.
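For illustration, those two settings might look like this in settings.py (the alias name and the 600-second lifetime are example values, not defaults mandated by Django):

```python
CACHE_MIDDLEWARE_ALIAS = 'default'   # cache alias to use for storage
CACHE_MIDDLEWARE_SECONDS = 600       # cache each page for 10 minutes
```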
If you don’t want to cache the entire site you can cache a specific view. This is done by using the cache_page decorator that comes with Django. Let us say we want to cache the result of the viewArticles view −
from django.views.decorators.cache import cache_page
@cache_page(60 * 15)
def viewArticles(request, year, month):
text = "Displaying articles of : %s/%s"%(year, month)
return HttpResponse(text)
As you can see cache_page takes the number of seconds you want the view result to be cached as parameter. In our example above, the result will be cached for 15 minutes.
Note − As we have seen before, the above view was mapped to −
urlpatterns = patterns('myapp.views',
url(r'^articles/(?P<month>\d{2})/(?P<year>\d{4})/', 'viewArticles', name = 'articles'),)
Since the URL is taking parameters, each different call will be cached separately. For example, request to /myapp/articles/02/2007 will be cached separately to /myapp/articles/03/2008.
Caching a view can also directly be done in the url.py file. Then the following has the same result as the above. Just edit your myapp/url.py file and change the related mapped URL (above) to be −
urlpatterns = patterns('myapp.views',
url(r'^articles/(?P<month>\d{2})/(?P<year>\d{4})/',
cache_page(60 * 15)('viewArticles'), name = 'articles'),)
And, of course, it's no longer needed in myapp/views.py.
You can also cache parts of a template, this is done by using the cache tag. Let's take our hello.html template −
{% extends "main_template.html" %}
{% block title %}My Hello Page{% endblock %}
{% block content %}
Hello World!!!<p>Today is {{today}}</p>
We are
{% if today.day == 1 %}
the first day of month.
{% elif today == 30 %}
the last day of month.
{% else %}
I don't know.
{%endif%}
<p>
{% for day in days_of_week %}
{{day}}
</p>
{% endfor %}
{% endblock %}
And to cache the content block, our template will become −
{% load cache %}
{% extends "main_template.html" %}
{% block title %}My Hello Page{% endblock %}
{% cache 500 content %}
{% block content %}
Hello World!!!<p>Today is {{today}}</p>
We are
{% if today.day == 1 %}
the first day of month.
{% elif today == 30 %}
the last day of month.
{% else %}
I don't know.
{%endif%}
<p>
{% for day in days_of_week %}
{{day}}
</p>
{% endfor %}
{% endblock %}
{% endcache %}
As you can see above, the cache tag will take 2 parameters − the time you want the block to be cached (in seconds) and the name to be given to the cache fragment.
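Because the fragment name must be unique, you can also pass extra variables after the name to cache one copy of the fragment per value — for example, one copy per user (the fragment and variable names below are hypothetical):

```html
{% load cache %}
{% cache 500 sidebar request.user.username %}
   <!-- one cached copy of this fragment per username -->
{% endcache %}
```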
Jupyter workflow for data scientists | by Sophia Yang | Towards Data Science
Many data scientists like to use Jupyter Notebook or JupyterLab to do their data explorations, visualizations, and model building. I know some data scientists refuse to use Jupyter Notebook. But, I love to use Jupyter Notebook/Lab to do my experiments and explorations. Here is a Jupyter Notebook workflow that might be helpful.
Whenever you are working on a new project, you should always have a new fresh environment to start with. If you are lazy and don’t want to create a new environment for every project, you should at least create one new environment that’s separate from your base environment and do your work in this new environment. To create a new environment, simply run conda create -n new_project python=3.8 . Here I created a new environment called new_project . I specified the Python version when I create the environment, but you don't have to. Now I can activate this environment by running conda activate new_project , and then install the necessary packages I need for this environment, e.g., conda install jupyter notebook pandas numpy . If you would like to use JupyterLab, make sure to run conda install jupyterlab .
To start Jupyter Notebook, run jupyter notebook in the command line. If you would like to start JupyterLab, run jupyter lab . JupyterLab is a web-based interactive development environment for Jupyter notebooks. It has a different interface and more (easy-to-use) plugins, and is thus more flexible than Jupyter Notebook. The workflow between Jupyter Notebook and JupyterLab is similar. This article applies to both Jupyter Notebook and JupyterLab.
A lot of times we want to run Jupyter from a remote server and work on it from a local browser. First, we need to set up the same environment on this remote server. Next, we need to set up port forwarding. Here we forward port 10000 from the remote machine to our local machine and start the notebook on this port. Note that the remote port and the local port don’t have to be the same, but I like to use the same port for simplicity. If port 10000 is not available, you can use other port numbers. Now you should see a link on your command line starting with localhost:10000. Copy and paste this link to your web browser, and you are all set to use the Jupyter Notebook from the remote machine on your local machine.
ssh -L 10000:localhost:10000 user@host jupyter notebook --ip 0.0.0.0 --port 10000 --no-browser
A : insert cell above
B : insert cell below
X : cut cell
C : copy cell
V : paste cell
D,D : delete cell
Z : undo delete cells
⌘ S : save and checkpoint
⌘ enter : run selected cell
shift enter : run current cell
The built-in magic function %%time outputs time execution of a Python statement or expression. %%timeit measures time execution of a Python statement or expression by running it many times for better accuracy.
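Outside of IPython, the standard-library timeit module gives the same kind of measurement as %%timeit — a small sketch:

```python
import timeit

# Time a small expression over many runs, as the %%timeit magic does.
elapsed = timeit.timeit("sum(range(1000))", number=1000)
print(f"1000 runs took {elapsed:.4f} s total")
```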
JupyterLab has a built-in dark mode. You can find it at Settings - Jupyter Lab Theme - Jupyter Lab Dark . I don't think Jupyter Notebook has a built-in dark mode yet. But you can use the Python package jupyterthemes to change your notebook theme. Check out jupyterthemes for details.
Project Jupyter announced their Jupyter visual debugger back in March this year. Here is how to install the debugger if you are using JupyterLab 2.xx.
conda install nodejs xeus-python -c defaults -c conda-forge
jupyter labextension install @jupyterlab/debugger
Note that the debugger is already included in JupyterLab 3.0. If you are using JupyterLab 3.0 and above, you will only need to install xeus-python:
conda install xeus-python -c defaults -c conda-forge
To use the Jupyter visual debugger, click XPython in the JupyterLab New Launcher:
And then activate the debugger by drag the right corner button to orange, select a breakpoint with a red dot in front of the line, and play the callstack:
More information about the Jupyter visual debugger can be found in this blog post.
Whenever we see an exception, we can use the %debug magic command to open up an interactive Python debugger.
Alternatively, we can also turn on the %pdb magic command to automatically run the debugger on an exception.
We can also use breakpoint and run debugger line by line. breakpoint also works for other code editors like VSCode.
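Here is a small sketch of breakpoint() in a plain script. The PYTHONBREAKPOINT environment variable is set to "0" so the call becomes a no-op and the script runs unattended; remove that line to actually drop into pdb at the marked line:

```python
import os

os.environ["PYTHONBREAKPOINT"] = "0"  # make breakpoint() a no-op for this demo

def double(x):
    breakpoint()     # with the env var unset, execution pauses in pdb here
    return x * 2

print(double(21))    # -> 42
```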
jupytext and VSCode made version control so much easier for Jupyter notebooks. Before we jump into jupytext and VSCode, let me remind you why version control is annoying with Jupyter:
Jupyter Notebooks can be very large. If you have to push your Jupyter notebooks to Github, please remove all of your output. Otherwise, your Github repository can explode very quickly.
Jupyter Notebooks are saved as json files. It is difficult to read your notebook diffs on Github.
Now let’s talk about solutions. There are two approaches:
The first approach lets you still use your .ipynb file. But when we work on those .ipynb files, we use jupytext ( conda install jupytext -c defaults -c conda-forge ) to pair your notebook with a .py file. Click "File - Jupytext - Pair Notebook with percent Script". I prefer the percent script, which indicates cells in .py with # %% comment. But you can also use other formats. Now whenever we make any changes in the .ipynb file, the changes will automatically show up in the corresponding .py file. Then we can just do version control with this .py file.
The second approach lets you see Jupyter interface with .py files. With this approach, we don’t work with .ipynb files anymore. Instead, we work with .py files directly.
To see the Jupyter interface with .py files, we need to use the command # %% for each code cell. In VSCode, we can run this cell by clicking the Run Cell button (see image below). Similarly, when we have # %% in the .py files and have jupytext installed, Jupyter Notebook or JupyterLab can open the .py file as the regular Jupyter files you normally see.
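For reference, a percent-format .py file is just an ordinary Python script whose cells are marked with # %% comments — for example:

```python
# %% first cell: compute something
total = sum(range(10))

# %% second cell: show the result
print(total)   # -> 45
```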
Now all of our work will be in .py files, we can do our normal version control with those files.
There are a lot of platforms and tools that can help you deploy your Jupyter Notebooks. For example, you can host your notebooks on binder for free. And there are a lot of other commercial products out there for deploying your notebooks into production.
However, what I would like to say is DO NOT deploy Jupyter Notebook .ipynb files directly in production unless your infrastructure is designed to deploy Jupyter notebooks. Please convert your notebooks to a normal Python script if you can.
Hopefully, you find this article useful! Thanks!
References:
https://blog.jupyter.org/a-visual-debugger-for-jupyter-914e61716559
https://ipython.readthedocs.io/en/stable/api/generated/IPython.core.debugger.html
By Sophia Yang on December 30, 2020
Add New Line to Text File in R - GeeksforGeeks
21 Apr, 2021
In this article, we are going to add a new line to the Text file using the R Programming language.
It can be done in different ways:
Using write() function.
Using lapply() function.
Using cat() function.
Method 1: Adding text using write() function.
Now we are going to select the line which is going to be added to the text file.
line <- "because it has perfect solutions"
At last, we use the write() function to append the above line in the text file.
The write() function takes three arguments. The first is the line we want to add to the text file, which is stored in the variable line. The second is the file name, which establishes a connection between our code and the text file. The last is the append argument, which tells write() to append the text to the file. Putting these together:
write(line,file="Text.txt",append=TRUE)
Below is the implementation:
R
# line which we need to add
line = "because it has perfect solutions"

# write function to add the line to the file
write(line, file = "Text.txt", append = TRUE, sep = "\n")
Output:
The Red highlighted line is the line that is appended.
Note: If you put append = FALSE then the whole text file gets over-written.
R
# line which we need to add to the text file
line = "because it has perfect solutions"

# write function helps us to add text
write(line, file = "Text.txt", append = FALSE)
Output:
Method 2: Adding text using lapply() function.
The lapply() function uses the vector to add the text to the text file. So, we create a vector of the text we want to add to the text file
a=c("also gfg has many approaches to solve a problem")
A vector named "a" is created that holds the line we want to add to our text file.
Now we use the lapply() function together with write() and an anonymous function. The anonymous function takes each element of the vector as an argument, and append = TRUE appends the text to the text file.
lapply(a, function(anyNameofVect) {
  write(anyNameofVect, file = "Text.txt", sep = "\n",
        append = TRUE, ncolumns = 100000)
})
Below is the implementation:
R
# vector of text
a = c("also gfg has many approaches to solve a problem")

# lapply function with anonymous function
lapply(a, function(c) {
  write(c, file = "Text.txt", sep = "\n",
        append = TRUE, ncolumns = 100000)
})
Output:
Method 3: Adding text using cat() function:
The cat() function does the same thing as the lapply() approach: it takes the text, establishes a connection with the text file through the file name, and uses the append argument to append the text to the file.
R
# cat function to append the text to the text file
cat("That's why gfg make Data Structures easy", sep = "\n",
    file = "Text.txt", append = TRUE)
Output:
},
{
"code": null,
"e": 28149,
"s": 28112,
"text": "How to import an Excel File into R ?"
}
] |
Data Science Best Practices: Python Environments | by Jerry Yu | Towards Data Science
|
Virtual environments are isolated coding spaces where Python packages can be installed, upgraded, and used, instead of packages being installed globally in your working environment. Virtual environments can be installed with a .txt or .yml file, that lists the package names and their versions, through the terminal/command prompt/anaconda prompt. When you install or upgrade a package, the old versions stay installed in your directory, so different virtual environments can utilize different package versions.
Picture this: you spend months cleaning, manipulating, creating features, training your model, improving the model, and evaluating your model performance; you operationalize your model and put it into production. At this point in time, you haven't heard about virtual environments, so you move on to the next project. You find a Python package you need for the next project, but you first have to update, say, Pandas, so you decide to update all of your packages in one go using:
conda update --all
You go about your day, but get a warning from your last project, which is currently in production, telling you something is broken. If you’re lucky and the bug isn’t obscure, you are able to fix it. However, if you’re unlucky like my coworkers, and happen to run your production models on a server, AND you break the whole Python path environment, well, bad luck. Time to wipe it clean and host a new server.
You would also want to use project-specific virtual environments to ensure code reproducibility by indicating all packages with their specified versions in the environment. You can also bypass admin rights that might be needed when installing a package, by using a virtual environment. Lastly, using environments is another step you can take at better organizing your projects, or avoid installing packages to your default path when you only need it for one project.
Here is one way to create an environment using conda:
conda create --name myenv ## Replace myenv with environment name
The above code will create the environment in the current path using the same version of Python that you are currently using since you did not specify a version. You can activate this environment by:
conda activate myenv ## Replace myenv with environment name
Once the environment is activated, you can install packages normally and it will only be added to that environment. If you want to specify the packages before creating and activating the environment, you can create a file and change the file-extension to .yml with the format below. The name section will be the environment name used when you activate the environment.
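A minimal environment.yml along these lines would work; the environment name, Python version, and package pins below are illustrative placeholders, not requirements from the post:

```yaml
name: myenv
channels:
  - defaults
dependencies:
  - python=3.8
  - pandas=1.2.4
  - numpy=1.20.2
```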
After you have your .yml file ready, do this to create the environment from the file:
conda env create -f environment.yml
environment.yml is the name of the .yml file.
Now, this is a very important step, for the individuals that like working with Jupyter Notebooks.
python -m ipykernel install --user --name myenv
If this line is not executed, you will not be able to use the environment you just created in your Jupyter Notebook.
If you need to update your environment due to:
a new version of one of your core dependencies was released
need to install additional packages
or you no longer need a package
Edit your .yml file, edit the package names, then save it. Open terminal/command prompt and navigate to the directory of your .yml file and run the following code:
conda env update --file environment.yml --prune
You can make an exact copy of an environment by using the following code in the terminal/command prompt:
conda create --name clonename --clone myenv
clonename will be the name you want to name your cloned environment, and myenv is the name of the environment that you want to clone.
If you found this content helpful and would like to read my other content, feel free to follow me on Twitter, Instagram, and Facebook. I post data science tips and tricks that you may find useful!
|
[
{
"code": null,
"e": 684,
"s": 172,
"text": "Virtual environments are isolated coding spaces where Python packages can be installed, upgraded, and used, instead of packages being installed globally in your working environment. Virtual environments can be installed with a .txt or .yml file, that lists the package names and their versions, through the terminal/command prompt/anaconda prompt. When you install or upgrade a package, the old versions stay installed in your directory, so different virtual environments can utilize different package versions."
},
{
"code": null,
"e": 1167,
"s": 684,
"text": "Picture this, you spend months cleaning, manipulating, creating features, training your model, improving the model, and evaluating your model performance; you operationalize your model and put it into production. At this point in time, you haven’t heard about virtual environments, so you move onto the next project. You find a Python package you need for the next project, but you first have to update say, Pandas, so you decided to update all of your packages all in one go using:"
},
{
"code": null,
"e": 1186,
"s": 1167,
"text": "conda update --all"
},
{
"code": null,
"e": 1595,
"s": 1186,
"text": "You go about your day, but get a warning from your last project, which is currently in production, telling you something is broken. If you’re lucky and the bug isn’t obscure, you are able to fix it. However, if you’re unlucky like my coworkers, and happen to run your production models on a server, AND you break the whole Python path environment, well, bad luck. Time to wipe it clean and host a new server."
},
{
"code": null,
"e": 2062,
"s": 1595,
"text": "You would also want to use project-specific virtual environments to ensure code reproducibility by indicating all packages with their specified versions in the environment. You can also bypass admin rights that might be needed when installing a package, by using a virtual environment. Lastly, using environments is another step you can take at better organizing your projects, or avoid installing packages to your default path when you only need it for one project."
},
{
"code": null,
"e": 2122,
"s": 2062,
"text": "Here is one of the way to create an environment using conda"
},
{
"code": null,
"e": 2190,
"s": 2122,
"text": "conda create --name myenv ## Replace myenv with environment name"
},
{
"code": null,
"e": 2390,
"s": 2190,
"text": "The above code will create the environment in the current path using the same version of Python that you are currently using since you did not specify a version. You can activate this environment by:"
},
{
"code": null,
"e": 2451,
"s": 2390,
"text": "conda activate myenv ## Replace myenv with environment name"
},
{
"code": null,
"e": 2820,
"s": 2451,
"text": "Once the environment is activated, you can install packages normally and it will only be added to that environment. If you want to specify the packages before creating and activating the environment, you can create a file and change the file-extension to .yml with the format below. The name section will be the environment name used when you activate the environment."
},
{
"code": null,
"e": 2906,
"s": 2820,
"text": "After you have your .yml file ready, do this to create the environment from the file:"
},
{
"code": null,
"e": 2942,
"s": 2906,
"text": "conda env create -f environment.yml"
},
{
"code": null,
"e": 2988,
"s": 2942,
"text": "environment.yml is the name of the .yml file."
},
{
"code": null,
"e": 3086,
"s": 2988,
"text": "Now, this is a very important step, for the individuals that like working with Jupyter Notebooks."
},
{
"code": null,
"e": 3134,
"s": 3086,
"text": "python -m ipykernel install --user --name myenv"
},
{
"code": null,
"e": 3251,
"s": 3134,
"text": "If this line is not executed, you will not be able to use the environment you just created in your Jupyter Notebook."
},
{
"code": null,
"e": 3298,
"s": 3251,
"text": "If you need to update your environment due to:"
},
{
"code": null,
"e": 3358,
"s": 3298,
"text": "a new version of one of your core dependencies was released"
},
{
"code": null,
"e": 3394,
"s": 3358,
"text": "need to install additional packages"
},
{
"code": null,
"e": 3426,
"s": 3394,
"text": "or you no longer need a package"
},
{
"code": null,
"e": 3590,
"s": 3426,
"text": "Edit your .yml file, edit the package names, then save it. Open terminal/command prompt and navigate to the directory of your .yml file and run the following code:"
},
{
"code": null,
"e": 3638,
"s": 3590,
"text": "conda env update --file environment.yml --prune"
},
{
"code": null,
"e": 3743,
"s": 3638,
"text": "You can make an exact copy of an environment by using the following code in the terminal/command prompt:"
},
{
"code": null,
"e": 3787,
"s": 3743,
"text": "conda create --name clonename --clone myenv"
},
{
"code": null,
"e": 3921,
"s": 3787,
"text": "clonename will be the name you want to name your cloned environment, and myenv is the name of the environment that you want to clone."
}
] |
Return the input of the Entry widget in Tkinter
|
An Entry widget in Tkinter is nothing but an input widget that accepts single-line user input in a text field. To return the data entered in an Entry widget, we have to use the get() method. It returns the data of the entry widget, which can then be printed on the console.
The following example will return the input data that can be used to display in the window with the help of a Label Widget as well.
#Import the required libraries
from tkinter import *
from tkinter import ttk
#Create an instance of a Tkinter window
win = Tk()
#Set the geometry
win.geometry("700x250")
# Define a function to return the Input data
def get_data():
label.config(text= entry.get(), font= ('Helvetica 13'))
#Create an Entry Widget
entry = Entry(win, width= 42)
entry.place(relx= .5, rely= .5, anchor= CENTER)
#Initialize a Label widget
label= Label(win, text="", font=('Helvetica 13'))
label.pack()
#Create a Button to get the input data
ttk.Button(win, text= "Click to Show", command= get_data).place(relx= .7, rely= .5, anchor= CENTER)
win.mainloop()
If we execute the above code, it will display a window with an Entry widget and a button to display the input on the screen.
Now, click the "Click to Show" button and it will display the user input on the canvas.
|
[
{
"code": null,
"e": 1338,
"s": 1062,
"text": "An Entry widget in Tkinter is nothing but an input widget that accepts single-line user input in a text field. To return the data entered in an Entry widget, we have to use the get() method. It returns the data of the entry widget which further can be printed on the console."
},
{
"code": null,
"e": 1470,
"s": 1338,
"text": "The following example will return the input data that can be used to display in the window with the help of a Label Widget as well."
},
{
"code": null,
"e": 2111,
"s": 1470,
"text": "#Import the required libraries\nfrom tkinter import *\nfrom tkinter import ttk\n\n#Create an instance of Tkinter Frame\nwin = Tk()\n\n#Set the geometry\nwin.geometry(\"700x250\")\n\n# Define a function to return the Input data\ndef get_data():\n label.config(text= entry.get(), font= ('Helvetica 13'))\n\n#Create an Entry Widget\nentry = Entry(win, width= 42)\nentry.place(relx= .5, rely= .5, anchor= CENTER)\n\n#Inititalize a Label widget\nlabel= Label(win, text=\"\", font=('Helvetica 13'))\nlabel.pack()\n\n#Create a Button to get the input data\nttk.Button(win, text= \"Click to Show\", command= get_data).place(relx= .7, rely= .5, anchor= CENTER)\n\nwin.mainloop()"
},
{
"code": null,
"e": 2241,
"s": 2111,
"text": "If we will execute the above code, it will display a window with an Entry widget and a button to display the input on the screen."
},
{
"code": null,
"e": 2329,
"s": 2241,
"text": "Now, click the \"Click to Show\" button and it will display the user input on the canvas."
}
] |
How to Fuzzy Match Datasets in Amazon Redshift | by Lewis Gavin | Towards Data Science
|
If you’re lucky, when working with multiple datasets within your data warehouse there will be some kind of join column available for tables you want to bring together.
When this is the case, life is good. However modern Big Data solutions are opening up the use case for bringing all sorts of data together from disparate sources.
Although we can easily store this data in one place, joining them for analysis isn’t always as simple, as these datasets often haven’t been produced by the same source system so clean ID columns to join on aren’t always available. Even columns that provide you with the same information don’t always do so in the same format — and if it’s user captured you can never guarantee consistency.
One way of joining datasets together based on similarity is Fuzzy Matching — especially when you have text based fields in each datasets you know are almost similar, e.g. a user entered company name or product name.
This post won't go into every detail of fuzzy matching but will show you how to utilise a Python implementation within Redshift.
At its simplest level, fuzzy matching looks to produce a similarity score of how similar two things are. I’ll focus on comparing strings to explain the concept.
As humans we can easily spot typing errors or conceptually understand when two similar things are the same. Fuzzy matching algorithms try to help a computer do this. Rather than a match between two strings being a boolean true or false i.e. exactly the same or not the same, fuzzy matching gives a closeness score.
Take the strings ‘brooklyn bridge’ and ‘brooklin bridge’. A human would easily be able to spot that these are the same thing even though there’s a spelling error in the second string.
A fuzzy matching algorithm such as Levenshtein distance that gives a percentage score of similarity would probably score these two strings as at least 90% similar. We can use this to set a threshold of what we want “similar” to be, i.e. any two strings with a fuzzy score over 80% is a match.
These days you can pretty much find a Python package for anything so I'm not going to reinvent the wheel. A decent Python package that implements fuzzy matching using Levenshtein distance is fuzzywuzzy.
Following the installation process stated in the docs you end up with a bunch of functions for comparing strings.
The one I’ll be using for this example is the simple ratio function that takes two strings and gives a closeness ratio. Here’s an example implementation showing the Brooklyn bridge example from earlier.
>>> from fuzzywuzzy import fuzz
>>> fuzz.ratio("brooklyn bridge", "brooklin bridge")
93
As expected this returns a fairly high score as the two are very similar.
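If you just want to experiment with the closeness-score idea before installing anything, Python's standard library difflib offers a similar ratio. The sketch below is a stand-in, not fuzzywuzzy's actual implementation:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> int:
    """Return a 0-100 closeness score, in the same spirit as fuzz.ratio."""
    return round(SequenceMatcher(None, a, b).ratio() * 100)

print(similarity("brooklyn bridge", "brooklin bridge"))  # 93
```

Since fuzz.ratio is based on Levenshtein distance and SequenceMatcher on longest matching blocks, the two scores will not always agree exactly, though they are close for short strings like these.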
User Defined Functions allow you to add repeatable code blocks to Redshift using either SQL or Python. The python support will allow us to take the implementation from the previous section and add to Redshift so we can simply call it like any other native SQL function.
First of all we need to add the fuzzywuzzy library to Redshift. There is some thorough documentation but I’ll outline the basic steps below.
Download the fuzzywuzzy repo from github
Take a copy of the fuzzywuzzy folder within the repo and zip it up.
Copy this zipped folder to an S3 bucket
In Redshift run the following to import the fuzzywuzzy library
CREATE LIBRARY fuzzywuzzy
LANGUAGE plpythonu
FROM 's3://<bucket_name>/fuzzywuzzy.zip'
CREDENTIALS 'aws_access_key_id=<access key id>;aws_secret_access_key=<secret key>'
Once done we can now go ahead and create the function using this library in Redshift.
CREATE FUNCTION fuzzy_test (string_a TEXT, string_b TEXT) RETURNS FLOAT IMMUTABLE
AS $$
  from fuzzywuzzy import fuzz
  return fuzz.ratio(string_a, string_b)
$$ LANGUAGE plpythonu;
We can now test it and check we see the same results as we did locally.
SELECT fuzzy_test('brooklyn bridge', 'brooklin bridge');
> 93
And it’s as simple as that. It’s a nice feature allowing Python UDF’s giving you a lot of flexibility due to the range of libraries available in Python.
Due to the power of the Redshift cluster this means large scale fuzzy matches are possible that would probably never finish on a laptop. However...
One thing to consider if you’re going to use this for joins is that it will obviously be slower than usual as there’s not much optimisation that can take place on the join. So if you’re matching large datasets of strings then prepare for a long wait :)
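As a sketch, a fuzzy join using this UDF could look like the following. The table and column names (companies_a, companies_b, name) are hypothetical, and 80 is just one possible similarity threshold:

```sql
SELECT a.name, b.name, fuzzy_test(a.name, b.name) AS score
FROM companies_a a
JOIN companies_b b
  ON fuzzy_test(a.name, b.name) > 80;
```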
|
[
{
"code": null,
"e": 340,
"s": 172,
"text": "If you’re lucky, when working with multiple datasets within your data warehouse there will be some kind of join column available for tables you want to bring together."
},
{
"code": null,
"e": 503,
"s": 340,
"text": "When this is the case, life is good. However modern Big Data solutions are opening up the use case for bringing all sorts of data together from disparate sources."
},
{
"code": null,
"e": 893,
"s": 503,
"text": "Although we can easily store this data in one place, joining them for analysis isn’t always as simple, as these datasets often haven’t been produced by the same source system so clean ID columns to join on aren’t always available. Even columns that provide you with the same information don’t always do so in the same format — and if it’s user captured you can never guarantee consistency."
},
{
"code": null,
"e": 1109,
"s": 893,
"text": "One way of joining datasets together based on similarity is Fuzzy Matching — especially when you have text based fields in each datasets you know are almost similar, e.g. a user entered company name or product name."
},
{
"code": null,
"e": 1253,
"s": 1109,
"text": "This post wont go into detail about all the details of fuzzy matching but will show you how to utilise a Python implementation within Redshift."
},
{
"code": null,
"e": 1414,
"s": 1253,
"text": "At its simplest level, fuzzy matching looks to produce a similarity score of how similar two things are. I’ll focus on comparing strings to explain the concept."
},
{
"code": null,
"e": 1729,
"s": 1414,
"text": "As humans we can easily spot typing errors or conceptually understand when two similar things are the same. Fuzzy matching algorithms try to help a computer do this. Rather than a match between two strings being a boolean true or false i.e. exactly the same or not the same, fuzzy matching gives a closeness score."
},
{
"code": null,
"e": 1913,
"s": 1729,
"text": "Take the strings ‘brooklyn bridge’ and ‘brooklin bridge’. A human would easily be able to spot that these are the same thing even though there’s a spelling error in the second string."
},
{
"code": null,
"e": 2206,
"s": 1913,
"text": "A fuzzy matching algorithm such as Levenshtein distance that gives a percentage score of similarity would probably score these two strings as at least 90% similar. We can use this to set a threshold of what we want “similar” to be, i.e. any two strings with a fuzzy score over 80% is a match."
},
{
"code": null,
"e": 2401,
"s": 2206,
"text": "These days you can pretty much find a Python package for anything so I’m not going to reinvent the wheel. A decent python package that implements fuzzy matching using Levenshtein is fuzzy wuzzy."
},
{
"code": null,
"e": 2515,
"s": 2401,
"text": "Following the installation process stated in the docs you end up with a bunch of functions for comparing strings."
},
{
"code": null,
"e": 2718,
"s": 2515,
"text": "The one I’ll be using for this example is the simple ratio function that takes two strings and gives a closeness ratio. Here’s an example implementation showing the Brooklyn bridge example from earlier."
},
{
"code": null,
"e": 2800,
"s": 2718,
"text": ">from fuzzywuzzy import fuzz>fuzz.ratio(“brooklyn bridge”, “brooklin bridge”)> 93"
},
{
"code": null,
"e": 2874,
"s": 2800,
"text": "As expected this returns a fairly high score as the two are very similar."
},
{
"code": null,
"e": 3144,
"s": 2874,
"text": "User Defined Functions allow you to add repeatable code blocks to Redshift using either SQL or Python. The python support will allow us to take the implementation from the previous section and add to Redshift so we can simply call it like any other native SQL function."
},
{
"code": null,
"e": 3285,
"s": 3144,
"text": "First of all we need to add the fuzzywuzzy library to Redshift. There is some thorough documentation but I’ll outline the basic steps below."
},
{
"code": null,
"e": 3494,
"s": 3285,
"text": "Download the fuzzywuzzy repo from githubTake a copy of the fuzzywuzzy folder within the repo and zip it up.Copy this zipped folder to an S3 bucketIn Redshift run the following to import the fuzzywuzzy library"
},
{
"code": null,
"e": 3535,
"s": 3494,
"text": "Download the fuzzywuzzy repo from github"
},
{
"code": null,
"e": 3603,
"s": 3535,
"text": "Take a copy of the fuzzywuzzy folder within the repo and zip it up."
},
{
"code": null,
"e": 3643,
"s": 3603,
"text": "Copy this zipped folder to an S3 bucket"
},
{
"code": null,
"e": 3706,
"s": 3643,
"text": "In Redshift run the following to import the fuzzywuzzy library"
},
{
"code": null,
"e": 3875,
"s": 3706,
"text": "CREATE LIBRARY fuzzywuzzy LANGUAGE plpythonu FROM 's3://<bucket_name>/fuzzywuzzy.zip' CREDENTIALS 'aws_access_key_id=<access key id>;aws_secret_access_key=<secret key>'"
},
{
"code": null,
"e": 3961,
"s": 3875,
"text": "Once done we can now go ahead and create the function using this library in Redshift."
},
{
"code": null,
"e": 4138,
"s": 3961,
"text": "CREATE FUNCTION fuzzy_test (string_a TEXT,string_b TEXT) RETURNS FLOAT IMMUTABLEAS$$ FROM fuzzywuzzy import fuzz RETURN fuzz.ratio (string_a,string_b) $$ LANGUAGE plpythonu;"
},
{
"code": null,
"e": 4210,
"s": 4138,
"text": "We can now test it and check we see the same results as we did locally."
},
{
"code": null,
"e": 4271,
"s": 4210,
"text": "SELECT fuzzy_test('brooklyn bridge', 'brooklin bridge');> 93"
},
{
"code": null,
"e": 4424,
"s": 4271,
"text": "And it’s as simple as that. It’s a nice feature allowing Python UDF’s giving you a lot of flexibility due to the range of libraries available in Python."
},
{
"code": null,
"e": 4572,
"s": 4424,
"text": "Due to the power of the Redshift cluster this means large scale fuzzy matches are possible that would probably never finish on a laptop. However..."
}
] |
How to Run PostgreSQL and pgAdmin Using Docker | by Mahbub Zaman | Towards Data Science
|
You can use pgAdmin as an alternate solution if you don’t like managing your database using the command-line interface. It’s a web-based front-end for the PostgreSQL database server.
We will be using Docker for our setup since we don’t want to worry about environment management. By using Docker, we don’t have to worry about the installation of the PostgreSQL or pgAdmin. Moreover, you can use Docker to run this project on macOS, Windows, and Linux distributions.
From this post, you’ll learn how to connect pgAdmin to a PostgreSQL database server using Docker.
First, you will need to install Docker. I’ll use macOS for demonstration purposes.
Method 1
We will use a Docker compose file for our first method, and we need to put the docker-compose.yml inside a folder. In this case, the name of the folder is pgAdmin. Let’s break down the individual ingredients of the docker-compose.yml file.
version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: test_db
    ports:
      - "5432:5432"
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: root
    ports:
      - "5050:80"
First, we are using a version tag to define the Compose file format, which is 3.8. There are other file formats — 1, 2, 2.x, and 3.x. You can read more from Docker’s documentation.
Then we have a hash called services. Inside this, we have to define the services we want to use for our application. For our application, we have two services, db, and pgadmin.
We have used the tag container_name for both services to change the default container name to pg_container and pgadmin4_container for our convenience.
The second tag image is used to define the Docker image for the db and pgadmin service. To make our setup process quick and easy, we will use the pre-built official image of PostgreSQL and pgAdmin.
In my previous Docker posts, I’ve talked about the use of restart, environment, and ports tags. Have a look at the following post, where you can learn these tags.
towardsdatascience.com
Now run the following command from the same directory where the docker-compose.yml file is located.
cd pgAdmin
docker compose up
The command docker compose up starts and runs the entire app. Congratulations!, you are successfully running a PostgreSQL database and pgadmin4 on your machine using Docker. Now let’s connect pgadmin4 to our PostgreSQL database server.
First, access pgadmin4 via your favorite web browser by visiting the URL http://localhost:5050/. Use [email protected] for the email address and root as the password to log in.
Click Servers > Create > Server to create a new server.
Select the General tab. For the name field, use any name. In this case, I'll use my_db. Now move to the Connection tab. To get the value of Host, run the following commands.
docker ps
The command ps will show you brief information about all the running containers.
Please read the Update section first. Now we can grab the container id for the PostgreSQL container.
docker inspect fcc97e066cc8 | grep IPAddress
The command inspect shows detailed information about a running container. Also, we pipe the output to the grep command to extract the IPAddress information.
Finally, we have the value of Host, which in this case is 192.168.80.2. Use the value root for Username and root for Password, also tick the Save password? box if you don’t want to type the password every time you log in to pgadmin4.
Update [16 April 2021]
I received an email this morning from Dosenwerfer, who had some issues when running the services again: restarting changed the IP address of the PostgreSQL container, so the configuration had to be set up again. The recommended solution is to use the container name, because the container name is identical to the hostname (you can read more from here). So our current configuration would be the following.
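A sketch of that configuration, reconstructed from the compose file earlier in the post (the host value pg_container comes from the db service's container_name, and the credentials from its environment variables):

```
Host name/address: pg_container
Port: 5432
Username: root
Password: root
```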
You can find the PostgreSQL database server’s container name using the docker ps command and grab the name from the NAMES column. In this post, we have explicitly named the container in the docker-compose.yml file, so you can refer to that as well. Many thanks to Dosenwerfer.
For the second method, you can connect a pgAdmin docker container to a running PostgreSQL container. It is helpful when you don’t have a pgAdmin service in your docker-compose.yml file. Have a look at the following post where you can learn this method.
towardsdatascience.com
If you want to import some data for testing purposes, you can use the SQL query I’ve already prepared. Click Servers > my_db > Databases > test_db > Schemas > Tables. Right-click on tables and select Query tool. Copy-paste the SQL query from my GitHub repository to Query Editor and click the play button. This action will create two tables called students and marks with some test data.
Database management via a command-line interface can be nerve-racking. To solve this issue, we can use a tool with an interface. The pgAdmin solves this problem. Moreover, Docker makes the entire process smoother. Also, you can use the test data I’ve provided to play around with PostgreSQL queries. I hope this will help you get started with PostgreSQL, pgAdmin, and Docker. Happy coding!
|
[
{
"code": null,
"e": 355,
"s": 172,
"text": "You can use pgAdmin as an alternate solution if you don’t like managing your database using the command-line interface. It’s a web-based front-end for the PostgreSQL database server."
},
{
"code": null,
"e": 638,
"s": 355,
"text": "We will be using Docker for our setup since we don’t want to worry about environment management. By using Docker, we don’t have to worry about the installation of the PostgreSQL or pgAdmin. Moreover, you can use Docker to run this project on macOS, Windows, and Linux distributions."
},
{
"code": null,
"e": 736,
"s": 638,
"text": "From this post, you’ll learn how to connect pgAdmin to a PostgreSQL database server using Docker."
},
{
"code": null,
"e": 819,
"s": 736,
"text": "First, you will need to install Docker. I’ll use macOS for demonstration purposes."
},
{
"code": null,
"e": 828,
"s": 819,
"text": "Method 1"
},
{
"code": null,
"e": 1068,
"s": 828,
"text": "We will use a Docker compose file for our first method, and we need to put the docker-compose.yml inside a folder. In this case, the name of the folder is pgAdmin. Let’s break down the individual ingredients of the docker-compose.yml file."
},
{
"code": null,
"e": 1507,
"s": 1068,
"text": "version: '3.8'services: db: container_name: pg_container image: postgres restart: always environment: POSTGRES_USER: root POSTGRES_PASSWORD: root POSTGRES_DB: test_db ports: - \"5432:5432\" pgadmin: container_name: pgadmin4_container image: dpage/pgadmin4 restart: always environment: PGADMIN_DEFAULT_EMAIL: [email protected] PGADMIN_DEFAULT_PASSWORD: root ports: - \"5050:80\""
},
{
"code": null,
"e": 1688,
"s": 1507,
"text": "First, we are using a version tag to define the Compose file format, which is 3.8. There are other file formats — 1, 2, 2.x, and 3.x. You can read more from Docker’s documentation."
},
{
"code": null,
"e": 1865,
"s": 1688,
"text": "Then we have a hash called services. Inside this, we have to define the services we want to use for our application. For our application, we have two services, db, and pgadmin."
},
{
"code": null,
"e": 2016,
"s": 1865,
"text": "We have used the tag container_name for both services to change the default container name to pg_container and pgadmin4_container for our convenience."
},
{
"code": null,
"e": 2214,
"s": 2016,
"text": "The second tag image is used to define the Docker image for the db and pgadmin service. To make our setup process quick and easy, we will use the pre-built official image of PostgreSQL and pgAdmin."
},
{
"code": null,
"e": 2377,
"s": 2214,
"text": "In my previous Docker posts, I’ve talked about the use of restart, environment, and ports tags. Have a look at the following post, where you can learn these tags."
},
{
"code": null,
"e": 2400,
"s": 2377,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 2500,
"s": 2400,
"text": "Now run the following command from the same directory where the docker-compose.yml file is located."
},
{
"code": null,
"e": 2528,
"s": 2500,
"text": "cd pgAdmindocker compose up"
},
{
"code": null,
"e": 2764,
"s": 2528,
"text": "The command docker compose up starts and runs the entire app. Congratulations!, you are successfully running a PostgreSQL database and pgadmin4 on your machine using Docker. Now let’s connect pgadmin4 to our PostgreSQL database server."
},
{
"code": null,
"e": 2945,
"s": 2764,
"text": "First, access the pgadmin4 via your favorite web browser by vising the URL http://localhost:5050/. Use the [email protected] for the email address and root as the password to log in."
},
{
"code": null,
"e": 3001,
"s": 2945,
"text": "Click Servers > Create > Server to create a new server."
},
{
"code": null,
"e": 3179,
"s": 3001,
"text": "Select the General tab. For the name field, use any name. In this case, I’ll use my_db. Now move to the Connection tab. To get the value for of Host, run the following commands."
},
{
"code": null,
"e": 3189,
"s": 3179,
"text": "docker ps"
},
{
"code": null,
"e": 3270,
"s": 3189,
"text": "The command ps will show you brief information about all the running containers."
},
{
"code": null,
"e": 3371,
"s": 3270,
"text": "Please read the Update section first. Now we can grab the container id for the PostgreSQL container."
},
{
"code": null,
"e": 3416,
"s": 3371,
"text": "docker inspect fcc97e066cc8 | grep IPAddress"
},
{
"code": null,
"e": 3573,
"s": 3416,
"text": "The command inspect shows detailed information about a running container. Also, we are using the pipeline and grep command to extract IPAddress information."
},
{
"code": null,
"e": 3807,
"s": 3573,
"text": "Finally, we have the value of Host, which in this case is 192.168.80.2. Use the value root for Username and root for Password, also tick the Save password? box if you don’t want to type the password every time you log in to pgadmin4."
},
{
"code": null,
"e": 3830,
"s": 3807,
"text": "Update [16 April 2021]"
},
{
"code": null,
"e": 4251,
"s": 3830,
"text": "I received an email this morning from Dosenwerfer, that the person had some issues when running the services again. Because it changed the IP address of the PostgreSQL container, and you have to set up the configuration again. The recommended solution is to use the container name. Because the container name is identical to the hostname, you can read more from here. So our current configuration would be the following."
},
{
"code": null,
"e": 4528,
"s": 4251,
"text": "You can find the PostgreSQL database server’s container name using the docker ps command and grab the name from the NAMES column. In this post, we have explicitly named the container in the docker-compose.yml file, so you can refer to that as well. Many thanks to Dosenwerfer."
},
{
"code": null,
"e": 4781,
"s": 4528,
"text": "For the second method, you can connect a pgAdmin docker container to a running PostgreSQL container. It is helpful when you don’t have a pgAdmin service in your docker-compose.yml file. Have a look at the following post where you can learn this method."
},
{
"code": null,
"e": 4804,
"s": 4781,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 5192,
"s": 4804,
"text": "If you want to import some data for testing purposes, you can use the SQL query I’ve already prepared. Click Servers > my_db > Databases > test_db > Schemas > Tables. Right-click on tables and select Query tool. Copy-paste the SQL query from my GitHub repository to Query Editor and click the play button. This action will create two tables called students and marks with some test data."
}
] |
Custom Multiplication in list of lists in Python
|
Multiplying two lists in Python can be a necessity in many data analysis calculations. In this article we will see how to multiply the elements of a list of lists (also called a nested list) by the respective elements of another list.
In this approach we design two for loops, one nested inside the other. The outer loop keeps track of the rows of the nested list and the inner loop keeps track of each element inside a row. We use the * operator to multiply the elements of the second list with the respective elements of the nested list.
listA = [[2, 11, 5], [3, 2, 8], [11, 9, 8]]
multipliers = [5, 11, 0]
# Original list
print("The given list: " ,listA)
# Multiplier list
print(" Multiplier list : " ,multipliers )
# using loops
res = [[] for idx in range(len(listA))]
for i in range(len(listA)):
for j in range(len(multipliers)):
res[i] += [multipliers[i] * listA[i][j]]
#Result
print("Result of multiplication : ",res)
Running the above code gives us the following result −
The given list: [[2, 11, 5], [3, 2, 8], [11, 9, 8]]
Multiplier list : [5, 11, 0]
Result of multiplication : [[10, 55, 25], [33, 22, 88], [0, 0, 0]]
The built-in enumerate function can be used to fetch each row of the nested list along with its index, and list comprehensions can then be used to do the multiplication.
listA = [[2, 11, 5], [3, 2, 8], [11, 9, 8]]
multipliers = [5, 11, 0]
# Original list
print("The given list: " + str(listA))
# Multiplier list
print(" Multiplier list : " ,multipliers )
# Using enumerate
res = [[multipliers[i] * j for j in x]
for i, x in enumerate(listA)]
#Result
print("Result of multiplication : ",res)
Running the above code gives us the following result −
The given list: [[2, 11, 5], [3, 2, 8], [11, 9, 8]]
Multiplier list : [5, 11, 0]
Result of multiplication : [[10, 55, 25], [33, 22, 88], [0, 0, 0]]
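For completeness, here is a more compact variant (not from the original article) that pairs each multiplier with its row using the built-in zip, so no index bookkeeping is needed:

```python
listA = [[2, 11, 5], [3, 2, 8], [11, 9, 8]]
multipliers = [5, 11, 0]

# zip pairs each multiplier with the corresponding row of the nested list
res = [[m * v for v in row] for m, row in zip(multipliers, listA)]

print("Result of multiplication : ", res)
```

Running it produces the same result as the two approaches above.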
|
[
{
"code": null,
"e": 1270,
"s": 1062,
"text": "Multiplying two lists in python can be a necessity in many data analysis calculations. In this article we will see how to multiply the elements of a list of lists also called a nested list with another list."
},
{
"code": null,
"e": 1575,
"s": 1270,
"text": "In this approach we design tow for loops, one inside another. The outer loop keeps track of number of elements in the list and the inner loop keeps track of each element inside the nested list. We use the * operator to multiply the elements of the second list with respective elements of the nested list."
},
{
"code": null,
"e": 1586,
"s": 1575,
"text": " Live Demo"
},
{
"code": null,
"e": 1994,
"s": 1586,
"text": "listA = [[2, 11, 5], [3, 2, 8], [11, 9, 8]]\n\nmultipliers = [5, 11, 0]\n\n# Original list\nprint(\"The given list: \" ,listA)\n\n# Multiplier list\nprint(\" Multiplier list : \" ,multipliers )\n\n# using loops\nres = [[] for idx in range(len(listA))]\n for i in range(len(listA)):\n for j in range(len(multipliers)):\n res[i] += [multipliers[i] * listA[i][j]]\n\n#Result\nprint(\"Result of multiplication : \",res)"
},
{
"code": null,
"e": 2049,
"s": 1994,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 2197,
"s": 2049,
"text": "The given list: [[2, 11, 5], [3, 2, 8], [11, 9, 8]]\nMultiplier list : [5, 11, 0]\nResult of multiplication : [[10, 55, 25], [33, 22, 88], [0, 0, 0]]"
},
{
"code": null,
"e": 2328,
"s": 2197,
"text": "The enumerate method can be used to fetch each element of the nested list and then for loops can be used to do the multiplication."
},
{
"code": null,
"e": 2339,
"s": 2328,
"text": " Live Demo"
},
{
"code": null,
"e": 2674,
"s": 2339,
"text": "listA = [[2, 11, 5], [3, 2, 8], [11, 9, 8]]\n\nmultipliers = [5, 11, 0]\n\n# Original list\nprint(\"The given list: \" + str(listA))\n\n# Multiplier list\nprint(\" Multiplier list : \" ,multipliers )\n\n# Using enumerate\nres = [[multipliers[i] * j for j in x]\n for i, x in enumerate(listA)]\n\n #Result\nprint(\"Result of multiplication : \",res)"
},
{
"code": null,
"e": 2729,
"s": 2674,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 2877,
"s": 2729,
"text": "The given list: [[2, 11, 5], [3, 2, 8], [11, 9, 8]]\nMultiplier list : [5, 11, 0]\nResult of multiplication : [[10, 55, 25], [33, 22, 88], [0, 0, 0]]"
}
] |
Check whether second string can be formed from characters of first string - GeeksforGeeks
|
12 May, 2021
Given two strings str1 and str2, check if str2 can be formed from the characters of str1. Example:
Input : str1 = geekforgeeks, str2 = geeks
Output : Yes
Here, string2 can be formed from string1.
Input : str1 = geekforgeeks, str2 = and
Output : No
Here string2 cannot be formed from string1.
Input : str1 = geekforgeeks, str2 = geeeek
Output : Yes
Here string2 can be formed from string1
as string1 contains 'e' comes 4 times in
string2 which is present in string1.
The idea is to count the frequencies of the characters of str1 in a count array. Then traverse str2 and decrease the frequency of each of its characters in the count array. If the count of a character is already zero when it is needed, return false. Below is the implementation of the above approach:
C++
Java
Python3
C#
PHP
Javascript
// CPP program to check whether second string// can be formed from first string#include <bits/stdc++.h>using namespace std;const int MAX = 256; bool canMakeStr2(string str1, string str2){ // Create a count array and count frequencies // characters in str1. int count[MAX] = {0}; for (int i = 0; i < str1.length(); i++) count[str1[i]]++; // Now traverse through str2 to check // if every character has enough counts for (int i = 0; i < str2.length(); i++) { if (count[str2[i]] == 0) return false; count[str2[i]]--; } return true;} // Driver Codeint main(){ string str1 = "geekforgeeks"; string str2 = "for"; if (canMakeStr2(str1, str2)) cout << "Yes"; else cout << "No"; return 0;}
// Java program to check whether second string// can be formed from first string class GFG { static int MAX = 256; static boolean canMakeStr2(String str1, String str2) { // Create a count array and count frequencies // characters in str1. int[] count = new int[MAX]; char []str3 = str1.toCharArray(); for (int i = 0; i < str3.length; i++) count[str3[i]]++; // Now traverse through str2 to check // if every character has enough counts char []str4 = str2.toCharArray(); for (int i = 0; i < str4.length; i++) { if (count[str4[i]] == 0) return false; count[str4[i]]--; } return true; } // Driver Code static public void main(String []args) { String str1 = "geekforgeeks"; String str2 = "for"; if (canMakeStr2(str1, str2)) System.out.println("Yes"); else System.out.println("No"); }}
# Python program to check whether second string# can be formed from first stringdef canMakeStr2(s1, s2): # Create a count array and count # frequencies characters in s1 count = {s1[i] : 0 for i in range(len(s1))} for i in range(len(s1)): count[s1[i]] += 1 # Now traverse through str2 to check # if every character has enough counts for i in range(len(s2)): if (count.get(s2[i]) == None or count[s2[i]] == 0): return False count[s2[i]] -= 1 return True # Driver Codes1 = "geekforgeeks"s2 = "for" if canMakeStr2(s1, s2): print("Yes")else: print("No")
// C# program to check whether second string// can be formed from first stringusing System; class GFG { static int MAX = 256; static bool canMakeStr2(string str1, string str2) { // Create a count array and count frequencies // characters in str1. int[] count = new int[MAX]; for (int i = 0; i < str1.Length; i++) count[str1[i]]++; // Now traverse through str2 to check // if every character has enough counts for (int i = 0; i < str2.Length; i++) { if (count[str2[i]] == 0) return false; count[str2[i]]--; } return true; } // Driver Code static public void Main() { string str1 = "geekforgeeks"; string str2 = "for"; if (canMakeStr2(str1, str2)) Console.WriteLine("Yes"); else Console.WriteLine("No"); }} // This code is contributed by vt_m.
<?php// PHP program to check whether// second string can be formed// from first string$MAX = 256; function canMakeStr2($str1, $str2){ // Create a count array and // count frequencies of characters // in str1. $count = array_fill(0, 256, 0); for ($i = 0; $i < strlen($str1); $i++) $count[ord($str1[$i])]++; // Now traverse through str2 // to check if every character // has enough counts for ($i = 0; $i < strlen($str2); $i++) { if ($count[ord($str2[$i])] == 0) return false; $count[ord($str2[$i])]--; } return true;} // Driver Code$str1 = "geekforgeeks";$str2 = "for";if (canMakeStr2($str1, $str2))echo "Yes";elseecho "No"; // This code is contributed by ajit?>
<script> // Javascript program to check whether second string // can be formed from first string let MAX = 256; function canMakeStr2(str1, str2) { // Create a count array and count frequencies // characters in str1. let count = new Array(MAX); count.fill(0); for (let i = 0; i < str1.length; i++) count[str1[i]]++; // Now traverse through str2 to check // if every character has enough counts for (let i = 0; i < str2.length; i++) { if (count[str2[i]] == 0) return false; count[str2[i]]--; } return true; } let str1 = "geekforgeeks"; let str2 = "for"; if (canMakeStr2(str1, str2)) document.write("Yes"); else document.write("No"); </script>
Yes
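As an extra sketch that is not part of the original article, the same frequency check can be written very compactly in Python with collections.Counter; subtracting one counter from another keeps only positive counts, so an empty result means every character of str2 is available in str1:

```python
from collections import Counter

def can_make_str2(str1, str2):
    # Counter subtraction drops non-positive counts, so any character
    # needed more often than str1 provides leaves a positive remainder.
    return not (Counter(str2) - Counter(str1))

print(can_make_str2("geekforgeeks", "geeks"))  # True
print(can_make_str2("geekforgeeks", "and"))    # False
```

Like the count-array versions above, this runs in time linear in the combined length of the two strings.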
|
[
{
"code": null,
"e": 25042,
"s": 25014,
"text": "\n12 May, 2021"
},
{
"code": null,
"e": 25124,
"s": 25042,
"text": "Given two strings str1 and str2, check if str2 can be formed from str1Example : "
},
{
"code": null,
"e": 25496,
"s": 25124,
"text": "Input : str1 = geekforgeeks, str2 = geeks\nOutput : Yes\nHere, string2 can be formed from string1.\n\nInput : str1 = geekforgeeks, str2 = and\nOutput : No\nHere string2 cannot be formed from string1.\n\nInput : str1 = geekforgeeks, str2 = geeeek\nOutput : Yes\nHere string2 can be formed from string1\nas string1 contains 'e' comes 4 times in\nstring2 which is present in string1. "
},
{
"code": null,
"e": 25778,
"s": 25498,
"text": "The idea is to count frequencies of characters of str1 in a count array. Then traverse str2 and decrease frequency of characters of str2 in the count array. If frequency of a characters becomes negative at any point, return false.Below is the implementation of above approach : "
},
{
"code": null,
"e": 25782,
"s": 25778,
"text": "C++"
},
{
"code": null,
"e": 25787,
"s": 25782,
"text": "Java"
},
{
"code": null,
"e": 25795,
"s": 25787,
"text": "Python3"
},
{
"code": null,
"e": 25798,
"s": 25795,
"text": "C#"
},
{
"code": null,
"e": 25802,
"s": 25798,
"text": "PHP"
},
{
"code": null,
"e": 25813,
"s": 25802,
"text": "Javascript"
},
{
"code": "// CPP program to check whether second string// can be formed from first string#include <bits/stdc++.h>using namespace std;const int MAX = 256; bool canMakeStr2(string str1, string str2){ // Create a count array and count frequencies // characters in str1. int count[MAX] = {0}; for (int i = 0; i < str1.length(); i++) count[str1[i]]++; // Now traverse through str2 to check // if every character has enough counts for (int i = 0; i < str2.length(); i++) { if (count[str2[i]] == 0) return false; count[str2[i]]--; } return true;} // Driver Codeint main(){ string str1 = \"geekforgeeks\"; string str2 = \"for\"; if (canMakeStr2(str1, str2)) cout << \"Yes\"; else cout << \"No\"; return 0;}",
"e": 26591,
"s": 25813,
"text": null
},
{
"code": "// Java program to check whether second string// can be formed from first string class GFG { static int MAX = 256; static boolean canMakeStr2(String str1, String str2) { // Create a count array and count frequencies // characters in str1. int[] count = new int[MAX]; char []str3 = str1.toCharArray(); for (int i = 0; i < str3.length; i++) count[str3[i]]++; // Now traverse through str2 to check // if every character has enough counts char []str4 = str2.toCharArray(); for (int i = 0; i < str4.length; i++) { if (count[str4[i]] == 0) return false; count[str4[i]]--; } return true; } // Driver Code static public void main(String []args) { String str1 = \"geekforgeeks\"; String str2 = \"for\"; if (canMakeStr2(str1, str2)) System.out.println(\"Yes\"); else System.out.println(\"No\"); }}",
"e": 27588,
"s": 26591,
"text": null
},
{
"code": "# Python program to check whether second string# can be formed from first stringdef canMakeStr2(s1, s2): # Create a count array and count # frequencies characters in s1 count = {s1[i] : 0 for i in range(len(s1))} for i in range(len(s1)): count[s1[i]] += 1 # Now traverse through str2 to check # if every character has enough counts for i in range(len(s2)): if (count.get(s2[i]) == None or count[s2[i]] == 0): return False count[s2[i]] -= 1 return True # Driver Codes1 = \"geekforgeeks\"s2 = \"for\" if canMakeStr2(s1, s2): print(\"Yes\")else: print(\"No\")",
"e": 28210,
"s": 27588,
"text": null
},
{
"code": "// C# program to check whether second string// can be formed from first stringusing System; class GFG { static int MAX = 256; static bool canMakeStr2(string str1, string str2) { // Create a count array and count frequencies // characters in str1. int[] count = new int[MAX]; for (int i = 0; i < str1.Length; i++) count[str1[i]]++; // Now traverse through str2 to check // if every character has enough counts for (int i = 0; i < str2.Length; i++) { if (count[str2[i]] == 0) return false; count[str2[i]]--; } return true; } // Driver Code static public void Main() { string str1 = \"geekforgeeks\"; string str2 = \"for\"; if (canMakeStr2(str1, str2)) Console.WriteLine(\"Yes\"); else Console.WriteLine(\"No\"); }} // This code is contributed by vt_m.",
"e": 29141,
"s": 28210,
"text": null
},
{
"code": "<?php// PHP program to check whether// second string can be formed// from first string$MAX = 256; function canMakeStr2($str1, $str2){ // Create a count array and // count frequencies characters // in str1. $count = (0); for ($i = 0; $i < strlen($str1); $i++) // Now traverse through str2 // to check if every character // has enough counts for ($i = 0; $i < strlen($str2); $i++) { if ($count[$str2[$i]] == 0) return -1; } return true;} // Driver Code$str1 = \"geekforgeeks\";$str2 = \"for\";if (canMakeStr2($str1, $str2))echo \"Yes\";elseecho \"No\"; // This code is contributed by ajit?>",
"e": 29777,
"s": 29141,
"text": null
},
{
"code": "<script> // Javascript program to check whether second string // can be formed from first string let MAX = 256; function canMakeStr2(str1, str2) { // Create a count array and count frequencies // characters in str1. let count = new Array(MAX); count.fill(0); for (let i = 0; i < str1.length; i++) count[str1[i]]++; // Now traverse through str2 to check // if every character has enough counts for (let i = 0; i < str2.length; i++) { if (count[str2[i]] == 0) return false; count[str2[i]]--; } return true; } let str1 = \"geekforgeeks\"; let str2 = \"for\"; if (canMakeStr2(str1, str2)) document.write(\"Yes\"); else document.write(\"No\"); </script>",
"e": 30604,
"s": 29777,
"text": null
},
{
"code": null,
"e": 30608,
"s": 30604,
"text": "Yes"
},
{
"code": null,
"e": 30619,
"s": 30608,
"text": "Output : "
},
{
"code": null,
"e": 30623,
"s": 30619,
"text": "Yes"
},
{
"code": null,
"e": 31497,
"s": 30625,
"text": "YouTubeGeeksforGeeks500K subscribersCheck whether second string can be formed from characters of first string | GeeksforGeeksWatch laterShareCopy linkInfoShoppingTap to unmuteIf playback doesn't begin shortly, try restarting your device.You're signed outVideos you watch may be added to the TV's watch history and influence TV recommendations. To avoid this, cancel and sign in to YouTube on your computer.CancelConfirmMore videosMore videosSwitch cameraShareInclude playlistAn error occurred while retrieving sharing information. Please try again later.Watch on0:000:000:00 / 1:44•Live•<div class=\"player-unavailable\"><h1 class=\"message\">An error occurred.</h1><div class=\"submessage\"><a href=\"https://www.youtube.com/watch?v=2J9r0lvAQjI\" target=\"_blank\">Try watching this video on www.youtube.com</a>, or enable JavaScript if it is disabled in your browser.</div></div>"
},
{
"code": null,
"e": 31505,
"s": 31499,
"text": "jit_t"
},
{
"code": null,
"e": 31511,
"s": 31505,
"text": "ukasp"
},
{
"code": null,
"e": 31526,
"s": 31511,
"text": "mohit kumar 29"
},
{
"code": null,
"e": 31537,
"s": 31526,
"text": "nidhi_biet"
},
{
"code": null,
"e": 31546,
"s": 31537,
"text": "mukesh07"
},
{
"code": null,
"e": 31553,
"s": 31546,
"text": "atarax"
},
{
"code": null,
"e": 31572,
"s": 31553,
"text": "frequency-counting"
},
{
"code": null,
"e": 31579,
"s": 31572,
"text": "Arrays"
},
{
"code": null,
"e": 31587,
"s": 31579,
"text": "Strings"
},
{
"code": null,
"e": 31606,
"s": 31587,
"text": "Technical Scripter"
},
{
"code": null,
"e": 31613,
"s": 31606,
"text": "Arrays"
},
{
"code": null,
"e": 31621,
"s": 31613,
"text": "Strings"
},
{
"code": null,
"e": 31719,
"s": 31621,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31728,
"s": 31719,
"text": "Comments"
},
{
"code": null,
"e": 31741,
"s": 31728,
"text": "Old Comments"
},
{
"code": null,
"e": 31764,
"s": 31741,
"text": "Introduction to Arrays"
},
{
"code": null,
"e": 31832,
"s": 31764,
"text": "Maximum and minimum of an array using minimum number of comparisons"
},
{
"code": null,
"e": 31864,
"s": 31832,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 31918,
"s": 31864,
"text": "Queue | Set 1 (Introduction and Array Implementation)"
},
{
"code": null,
"e": 31939,
"s": 31918,
"text": "Linked List vs Array"
},
{
"code": null,
"e": 31964,
"s": 31939,
"text": "Reverse a string in Java"
},
{
"code": null,
"e": 31998,
"s": 31964,
"text": "Longest Common Subsequence | DP-4"
},
{
"code": null,
"e": 32013,
"s": 31998,
"text": "C++ Data Types"
},
{
"code": null,
"e": 32073,
"s": 32013,
"text": "Write a program to print all permutations of a given string"
}
] |
Working with Datetime Index in pandas | by Sergi Lehkyi | Towards Data Science
|
As you may understand from the title, this is not a complete guide on Time Series or the Datetime data type in Python. So if you expect an in-depth explanation from A to Z, it's the wrong place. Seriously. There is a fantastic article on this topic, well explained, detailed and quite straightforward. Don't waste your time on this one.
For those who have reached this part, I can tell you that you will find something useful here for sure. Again, seriously. I found my notes on Time Series and decided to organize them into a little article with general tips, which are applicable, I guess, in 80 to 90% of the cases when you work with dates. So it's worth sharing, isn't it?
I have a dataset with air pollutant measurements for every hour since 2016 in Madrid, so I will use it as an example.
By default pandas creates a plain integer index while importing a csv file with read_csv(), so to use your datetime column as the index you will need to specify it explicitly with index_col='date'.
The beauty of pandas is that it can preprocess your datetime data during import. By specifying parse_dates=True, pandas will try parsing the index; if we pass a list of ints or names, e.g. [1, 2, 3], it will try parsing columns 1, 2 and 3 each as a separate date column; a list of lists, e.g. [[1, 3]], will combine columns 1 and 3 and parse them as a single date column; and a dict, e.g. {'foo' : [1, 3]}, will parse columns 1 and 3 as a date and call the result 'foo'. If you are using another method to import data you can always apply pd.to_datetime after it.
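As a minimal sketch of the pd.to_datetime route (using made-up toy data rather than the Madrid dataset):

```python
import pandas as pd

# A tiny frame where the dates arrive as plain strings
df = pd.DataFrame({'date': ['2016-11-01 01:00:00', '2016-11-01 02:00:00'],
                   'O_3': [4.0, 4.0]})

df['date'] = pd.to_datetime(df['date'])  # strings -> Timestamps
df = df.set_index('date')                # promote to a DatetimeIndex

print(df.index.dtype)  # datetime64[ns]
```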
I have imported my data using the following code:
import pandas as pdimport globpattern = 'data/madrid*.csv'csv_files = glob.glob(pattern)frames = []for csv in csv_files: df = pd.read_csv(csv, index_col='date', parse_dates=True) frames.append(df)df = pd.concat(frames)df.head()Out[4]: BEN CH4 CO EBE NMHC NO NO_2 NOx date 2016-11-01 01:00:00 NaN NaN 0.7 NaN NaN 153.0 77.0 NaN 2016-11-01 01:00:00 3.1 NaN 1.1 2.0 0.53 260.0 144.0 NaN 2016-11-01 01:00:00 5.9 NaN NaN 7.5 NaN 297.0 139.0 NaN 2016-11-01 01:00:00 NaN NaN 1.0 NaN NaN 154.0 113.0 NaN 2016-11-01 01:00:00 NaN NaN NaN NaN NaN 275.0 127.0 NaN
The data is gathered from 24 different stations about 14 different pollutants. We are not going to analyze this data, and to make it a little bit simpler we will choose only one station and two pollutants, and remove all NaN values (DANGER! please, do not repeat it at home without understanding the consequences).
df_time = df[['O_3', 'PM10']][df['station'] == 28079008].dropna() df_time.head() Out[9]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.02016-11-01 05:00:00 6.0 27.0
Now when we have our data prepared we can play with Datetime Index.
Although the default pandas datetime format is ISO8601 (“yyyy-mm-dd hh:mm:ss”) when selecting data using partial string indexing it understands a lot of other different formats. For example:
df_time.loc['2016-11-01'].head()Out[17]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.02016-11-01 05:00:00 6.0 27.0df_time.loc['November 1, 2016'].head()Out[18]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.02016-11-01 05:00:00 6.0 27.0df_time.loc['2016-Nov-1'].head()Out[19]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.0
All produce the same output. So we are free to use whatever is more comfortable for us. Also we can select data for entire month:
df_time.loc['2016-11'].head()Out[20]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.02016-11-01 05:00:00 6.0 27.0df_time.loc['2016-11'].count()Out[24]: O_3 715PM10 715dtype: int64
The same works if we want to select entire year:
df_time.loc['2016'].head()Out[31]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.02016-11-01 05:00:00 6.0 27.0df_time.loc['2016'].count()Out[32]: O_3 8720PM10 8720dtype: int64
If we want to slice the data and find records for some specific period of time, we continue to use the loc accessor; all the rules are the same as for a regular index:
df_time.loc['2017-11-02 23:00' : '2017-12-01'].head()Out[34]: O_3 PM10date 2017-11-02 23:00:00 5.0 30.02017-11-03 00:00:00 5.0 25.02017-11-03 01:00:00 5.0 12.02017-11-03 02:00:00 6.0 8.02017-11-03 03:00:00 7.0 14.0df_time.loc['2017-11-02 23:00' : '2017-12-01'].count()Out[35]: O_3 690PM10 690dtype: int64
Pandas has a simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications.
resample() is a time-based groupby, followed by a reduction method on each of its groups.
The resample function is very flexible and allows us to specify many different parameters to control the frequency conversion and resampling operation. sum, mean, std, sem, max, min, median, first, last and ohlc are available as methods of the object returned by resample().
# Converting hourly data into monthly datadf_time.resample('M').mean().head()Out[46]: O_3 PM10date 2016-01-31 21.871622 19.9905412016-02-29 32.241679 25.8538352016-03-31 44.234014 16.9523812016-04-30 46.845938 12.1890762016-05-31 53.136671 14.671177
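The reduction methods listed above can also be combined in a single pass with agg; here is a small sketch on toy data (assumed inputs, not the Madrid dataset):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5, 6],
              index=pd.date_range('2016-01-01', periods=6, freq='D'))

# Downsample daily data into two-day buckets, computing two statistics at once
out = s.resample('2D').agg(['mean', 'max'])
print(out)
```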
For upsampling, we can specify a method of filling or interpolating over the gaps that are created:
# Converting hourly data into 10-minutely datadf_time.resample('10Min').mean().head()Out[47]: O_3 PM10date 2016-01-01 01:00:00 8.0 17.02016-01-01 01:10:00 NaN NaN2016-01-01 01:20:00 NaN NaN2016-01-01 01:30:00 NaN NaN2016-01-01 01:40:00 NaN NaNdf_time.resample('10Min').mean().ffill().head()Out[48]: O_3 PM10date 2016-01-01 01:00:00 8.0 17.02016-01-01 01:10:00 8.0 17.02016-01-01 01:20:00 8.0 17.02016-01-01 01:30:00 8.0 17.02016-01-01 01:40:00 8.0 17.0
We can use the following methods to fill the NaN values: ‘pad’, ‘backfill’, ‘ffill’, ‘bfill’, ‘nearest’. More details on this can be found in documentation. Or we can do it using interpolation with following methods: ‘linear’, ‘time’, ‘index’, ‘values’, ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘barycentric’, ‘krogh’, ‘polynomial’, ‘spline’, ‘piecewise_polynomial’, ‘from_derivatives’, ‘pchip’, ‘akima’. And again, deeper explanation on this can be found in pandas docs.
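A tiny illustration of the linear interpolation option on toy data (assumed inputs, not the Madrid measurements):

```python
import pandas as pd

idx = pd.to_datetime(['2016-01-01 01:00', '2016-01-01 02:00',
                      '2016-01-01 03:00', '2016-01-01 04:00'])
s = pd.Series([1.0, None, None, 4.0], index=idx)

# Linear interpolation fills the two inner gaps with 2.0 and 3.0
filled = s.interpolate(method='linear')
print(filled.tolist())  # [1.0, 2.0, 3.0, 4.0]
```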
And the most common resampling frequency aliases are: 'T' or 'min' for minutes, 'H' for hours, 'D' for calendar days, 'W' for weeks, 'M' for month end, 'Q' for quarter end and 'A' for year end.
And one more awesome feature of the Datetime Index is simplicity in plotting, as matplotlib will automatically treat it as the x axis, so we don't need to specify anything explicitly.
import seaborn as snsimport matplotlib.pyplot as pltsns.set()df_plot = df_time.resample('M').mean()plt.plot(df_plot)plt.title('Air pollution by O3 and PM10')plt.ylabel('micrograms per cubic meter (mg/m3)')plt.xticks(rotation=45)plt.show()
As promised in the beginning: a few tips that help in the majority of situations when working with datetime data. For me it is one more refresher and organizer of thoughts that converts into knowledge. Everybody wins. Someone will find it useful, someone might not (I warned you in the first paragraph :D), but actually I expect everyone reading this will find it useful.
This is the most exciting feature of knowledge: when you share it, you don't lose anything, you only gain. Writing an article requires some research, some verification, some learning; basically, you get even more knowledge in the end.
Knowledge is just a tool. And it's your responsibility to apply it or not. At the end of the day it doesn't matter how much you know, it's about how you use that knowledge. But that's already another story...
Thank you for reading, have an incredible week, learn, spread the knowledge, use it wisely and use it for good deeds 🙂
Originally published at sergilehkyi.com.
|
[
{
"code": null,
"e": 503,
"s": 172,
"text": "As you may understand from the title it is not a complete guide on Time Series or Datetime data type in Python. So if you expect to get in-depth explanation from A to Z it’s a wrong place. Seriously. There is a fantastic article on this topic, well explained, detailed and quite straightforward. Don’t waste your time on this one."
},
{
"code": null,
"e": 827,
"s": 503,
"text": "For those who have reached this part I will tell that you will find something useful here for sure. Again, seriously. I found my notes on Time Series and decided to organize it into a little article with general tips, which are aplicable, I guess, in 80 to 90% of times you work with dates. So it’s worth sharing, isn’t it?"
},
{
"code": null,
"e": 946,
"s": 827,
"text": "I have a dataset with air pollutants measurements for every hour since 2016 in Madrid, so I will use it as an example."
},
{
"code": null,
"e": 1137,
"s": 946,
"text": "By default pandas will use the first column as index while importing csv file with read_csv(), so if your datetime column isn’t first you will need to specify it explicitly index_col='date'."
},
{
"code": null,
"e": 1666,
"s": 1137,
"text": "The beauty of pandas is that it can preprocess your datetime data during import. By specifying parse_dates=True pandas will try parsing the index, if we pass list of ints or names e.g. if [1, 2, 3] – it will try parsing columns 1, 2, 3 each as a separate date column, list of lists e.g. if [[1, 3]] – combine columns 1 and 3 and parse as a single date column, dict, e.g. {‘foo’ : [1, 3]} – parse columns 1, 3 as date and call result ‘foo’. If you are using other method to import data you can always use pd.to_datetime after it."
},
{
"code": null,
"e": 1716,
"s": 1666,
"text": "I have imported my data using the following code:"
},
{
"code": null,
"e": 2433,
"s": 1716,
"text": "import pandas as pdimport globpattern = 'data/madrid*.csv'csv_files = glob.glob(pattern)frames = []for csv in csv_files: df = pd.read_csv(csv, index_col='date', parse_dates=True) frames.append(df)df = pd.concat(frames)df.head()Out[4]: BEN CH4 CO EBE NMHC NO NO_2 NOx date 2016-11-01 01:00:00 NaN NaN 0.7 NaN NaN 153.0 77.0 NaN 2016-11-01 01:00:00 3.1 NaN 1.1 2.0 0.53 260.0 144.0 NaN 2016-11-01 01:00:00 5.9 NaN NaN 7.5 NaN 297.0 139.0 NaN 2016-11-01 01:00:00 NaN NaN 1.0 NaN NaN 154.0 113.0 NaN 2016-11-01 01:00:00 NaN NaN NaN NaN NaN 275.0 127.0 NaN"
},
{
"code": null,
"e": 2742,
"s": 2433,
"text": "The data is gathered from 24 different stations about 14 different pollutants. We are not going to analyze this data, and to make it little bit simpler we will choose only one station, two pollutants and remove all NaN values (DANGER! please, do not repeat it at home without understanding the consequences)."
},
{
"code": null,
"e": 3042,
"s": 2742,
"text": "df_time = df[['O_3', 'PM10']][df['station'] == 28079008].dropna() df_time.head() Out[9]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.02016-11-01 05:00:00 6.0 27.0"
},
{
"code": null,
"e": 3110,
"s": 3042,
"text": "Now when we have our data prepared we can play with Datetime Index."
},
{
"code": null,
"e": 3301,
"s": 3110,
"text": "Although the default pandas datetime format is ISO8601 (“yyyy-mm-dd hh:mm:ss”) when selecting data using partial string indexing it understands a lot of other different formats. For example:"
},
{
"code": null,
"e": 4031,
"s": 3301,
"text": "df_time.loc['2016-11-01'].head()Out[17]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.02016-11-01 05:00:00 6.0 27.0df_time.loc['November 1, 2016'].head()Out[18]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.02016-11-01 05:00:00 6.0 27.0df_time.loc['2016-Nov-1'].head()Out[19]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.0"
},
{
"code": null,
"e": 4161,
"s": 4031,
"text": "All produce the same output. So we are free to use whatever is more comfortable for us. Also we can select data for entire month:"
},
{
"code": null,
"e": 4483,
"s": 4161,
"text": "df_time.loc['2016-11'].head()Out[20]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.02016-11-01 05:00:00 6.0 27.0df_time.loc['2016-11'].count()Out[24]: O_3 715PM10 715dtype: int64"
},
{
"code": null,
"e": 4532,
"s": 4483,
"text": "The same works if we want to select entire year:"
},
{
"code": null,
"e": 4850,
"s": 4532,
"text": "df_time.loc['2016'].head()Out[31]: O_3 PM10date 2016-11-01 01:00:00 4.0 46.02016-11-01 02:00:00 4.0 37.02016-11-01 03:00:00 4.0 31.02016-11-01 04:00:00 5.0 31.02016-11-01 05:00:00 6.0 27.0df_time.loc['2016'].count()Out[32]: O_3 8720PM10 8720dtype: int64"
},
{
"code": null,
"e": 5007,
"s": 4850,
"text": "If we want to slice data and find records for some specific period of time we continue to use loc accessor, all the rules are the same as for regular index:"
},
{
"code": null,
"e": 5377,
"s": 5007,
"text": "df_time.loc['2017-11-02 23:00' : '2017-12-01'].head()Out[34]: O_3 PM10date 2017-11-02 23:00:00 5.0 30.02017-11-03 00:00:00 5.0 25.02017-11-03 01:00:00 5.0 12.02017-11-03 02:00:00 6.0 8.02017-11-03 03:00:00 7.0 14.0df_time.loc['2017-11-02 23:00' : '2017-12-01'].count()Out[35]: O_3 690PM10 690dtype: int64"
},
{
"code": null,
"e": 5629,
"s": 5377,
"text": "Pandas has a simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications."
},
{
"code": null,
"e": 5719,
"s": 5629,
"text": "resample() is a time-based groupby, followed by a reduction method on each of its groups."
},
{
"code": null,
"e": 5989,
"s": 5719,
"text": "The resample function is very flexible and allows us to specify many different parameters to control the frequency conversion and resampling operation. sum, mean, std, sem,max, min, median, first, last, ohlcare available as a method of the returned object by resample()"
},
{
"code": null,
"e": 6300,
"s": 5989,
"text": "# Converting hourly data into monthly datadf_time.resample('M').mean().head()Out[46]: O_3 PM10date 2016-01-31 21.871622 19.9905412016-02-29 32.241679 25.8538352016-03-31 44.234014 16.9523812016-04-30 46.845938 12.1890762016-05-31 53.136671 14.671177"
},
{
"code": null,
"e": 6396,
"s": 6300,
"text": "For upsampling, we can specify a way to upsample to interpolate over the gaps that are created:"
},
{
"code": null,
"e": 6967,
"s": 6396,
"text": "# Converting hourly data into 10-minutely datadf_time.resample('10Min').mean().head()Out[47]: O_3 PM10date 2016-01-01 01:00:00 8.0 17.02016-01-01 01:10:00 NaN NaN2016-01-01 01:20:00 NaN NaN2016-01-01 01:30:00 NaN NaN2016-01-01 01:40:00 NaN NaNdf_time.resample('10Min').mean().ffill().head()Out[48]: O_3 PM10date 2016-01-01 01:00:00 8.0 17.02016-01-01 01:10:00 8.0 17.02016-01-01 01:20:00 8.0 17.02016-01-01 01:30:00 8.0 17.02016-01-01 01:40:00 8.0 17.0"
},
{
"code": null,
"e": 7450,
"s": 6967,
"text": "We can use the following methods to fill the NaN values: ‘pad’, ‘backfill’, ‘ffill’, ‘bfill’, ‘nearest’. More details on this can be found in documentation. Or we can do it using interpolation with following methods: ‘linear’, ‘time’, ‘index’, ‘values’, ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘barycentric’, ‘krogh’, ‘polynomial’, ‘spline’, ‘piecewise_polynomial’, ‘from_derivatives’, ‘pchip’, ‘akima’. And again, deeper explanation on this can be found in pandas docs."
},
{
"code": null,
"e": 7478,
"s": 7450,
"text": "And resampling frequencies:"
},
{
"code": null,
"e": 7657,
"s": 7478,
"text": "And another one awesome feature of Datetime Index is simplicity in plotting, as matplotlib will automatically treat it as x axis, so we don’t need to explicitly specify anything."
},
{
"code": null,
"e": 7864,
"s": 7657,
"text": "import seaborn as snssns.set()df_plot = df_time.resample('M').mean()plt.plot(df_plot)plt.title('Air polution by O3 and PM10')plt.ylabel('micrograms per cubic meter (mg/m3)')plt.xticks(rotation=45)plt.show()"
},
{
"code": null,
"e": 8220,
"s": 7864,
"text": "As promised in the beginning — few tips, that help in the majority of situations when working with datetime data. For me — one more refresher and organizer of thoughts that converts into knowledge. All win. Someone will find it useful, someone might not (I warned in the first paragraph :D), so actually I expect everyone reading this will find it useful."
},
{
"code": null,
"e": 8462,
"s": 8220,
"text": "This is the most exciting feature of knowledge — when you share it, you don’t loose anything, you only gain. To write an article, it requires some research, some verification, some learning — basically you get even more knowledge in the end."
},
{
"code": null,
"e": 8671,
"s": 8462,
"text": "Knowledge is just a tool. And it’s your responsibility to apply it or not. In the end of the day it doesn’t matter how much you know, it’s about how you use that knowledge. But that’s already another story..."
},
{
"code": null,
"e": 8790,
"s": 8671,
"text": "Thank you for reading, have an incredible week, learn, spread the knowledge, use it wisely and use it for good deeds 🙂"
}
] |
Do Double Equal Sign exist in MySQL?
|
There is no double equal sign concept in MySQL. A single equal sign (=) is used to compare two values. If you use a double equal sign (==) in MySQL, you will get an error message.
Let us verify whether this is true. First, declare a variable −
mysql> set @Number=10;
Query OK, 0 rows affected (0.00 sec)
Now, compare the above variable value with 10. If both values are the same, the result will be 1; otherwise 0.
Using the double equal sign −
mysql> select 10==@Number;
This will produce the following output i.e. an error −
ERROR 1064 (42000): You have an error in your SQL syntax;
check the manual that corresponds to your MySQL server version
for the right syntax to use near '==@Number' at line 1
Let us now change the double equal sign (==) to a single equal sign (=) −
mysql> select 10=@Number;
This will produce the following output −
+------------+
| 10=@Number |
+------------+
| 1 |
+------------+
1 row in set (0.00 sec)
|
[
{
"code": null,
"e": 1213,
"s": 1062,
"text": "There is no double equal sign concept. It can be used to compare two values. If you use double equal sign(==) in MySQL, you will get an error message."
},
{
"code": null,
"e": 1276,
"s": 1213,
"text": "Let us verify the concept is true or not. Declare a variable −"
},
{
"code": null,
"e": 1336,
"s": 1276,
"text": "mysql> set @Number=10;\nQuery OK, 0 rows affected (0.00 sec)"
},
{
"code": null,
"e": 1450,
"s": 1336,
"text": "Now, compare the above variable value with 10. If both the values are same then the result will be 1 otherwise 0."
},
{
"code": null,
"e": 1476,
"s": 1450,
"text": "Using double equal sign −"
},
{
"code": null,
"e": 1503,
"s": 1476,
"text": "mysql> select 10==@Number;"
},
{
"code": null,
"e": 1558,
"s": 1503,
"text": "This will produce the following output i.e. an error −"
},
{
"code": null,
"e": 1736,
"s": 1558,
"text": "ERROR 1064 (42000): You have an error in your SQL syntax; \ncheck the manual that corresponds to your MySQL server version \nfor the right syntax to use near '==@Number' at line 1"
},
{
"code": null,
"e": 1806,
"s": 1736,
"text": "Let us now change the double equal sign(==) to single equal sign(=) −"
},
{
"code": null,
"e": 1832,
"s": 1806,
"text": "mysql> select 10=@Number;"
},
{
"code": null,
"e": 1873,
"s": 1832,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1972,
"s": 1873,
"text": "+------------+\n| 10=@Number |\n+------------+\n| 1 |\n+------------+\n1 row in set (0.00 sec)"
}
] |
How to add components with relative X and Y Coordinates in Java?
|
To add components with relative X and Y coordinates, you need to set the gridx and gridy fields of the GridBagConstraints object −
GridBagConstraints constraints = new GridBagConstraints();
constraints.gridy = GridBagConstraints.RELATIVE;
constraints.gridx = GridBagConstraints.RELATIVE;
The following is an example to add components with relative X and Y Coordinates −
package my;
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JPanel;
public class SwingDemo {
public static void main(String[] args) {
JFrame frame = new JFrame();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
JPanel panel = new JPanel();
panel.setLayout(new GridBagLayout());
GridBagConstraints constraints = new GridBagConstraints();
constraints.gridx = 1;
constraints.gridy = GridBagConstraints.RELATIVE;
panel.add(new JButton("1st row 1st column"), constraints);
panel.add(new JButton("2nd row"), constraints);
panel.add(new JButton("3rd row"), constraints);
constraints.gridx = GridBagConstraints.RELATIVE;
panel.add(new JButton("1st row 2nd column"), constraints);
panel.add(new JButton("1st row 3rd column"), constraints);
frame.add(panel);
frame.setSize(550, 300);
frame.setVisible(true);
}
}
|
[
{
"code": null,
"e": 1169,
"s": 1062,
"text": "To add components with relative X and Y coordinates, you need to set both the gridx and gridy components −"
},
{
"code": null,
"e": 1326,
"s": 1169,
"text": "GridBagConstraints constraints = new GridBagConstraints();\nconstraints.gridy = GridBagConstraints.RELATIVE;\nconstraints.gridx = GridBagConstraints.RELATIVE;"
},
{
"code": null,
"e": 1408,
"s": 1326,
"text": "The following is an example to add components with relative X and Y Coordinates −"
},
{
"code": null,
"e": 2411,
"s": 1408,
"text": "package my;\nimport java.awt.GridBagConstraints;\nimport java.awt.GridBagLayout;\nimport javax.swing.JButton;\nimport javax.swing.JFrame;\nimport javax.swing.JPanel;\npublic class SwingDemo {\n public static void main(String[] args) {\n JFrame frame = new JFrame();\n frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n JPanel panel = new JPanel();\n panel.setLayout(new GridBagLayout());\n GridBagConstraints constraints = new GridBagConstraints();\n constraints.gridx = 1;\n constraints.gridy = GridBagConstraints.RELATIVE;\n panel.add(new JButton(\"1st row 1st column\"), constraints);\n panel.add(new JButton(\"2nd row\"), constraints);\n panel.add(new JButton(\"3rd row\"), constraints);\n constraints.gridx = GridBagConstraints.RELATIVE;\n panel.add(new JButton(\"1st row 2nd column\"), constraints);\n panel.add(new JButton(\"1st row 3rd column\"), constraints);\n frame.add(panel);\n frame.setSize(550, 300);\n frame.setVisible(true);\n }\n}"
}
] |
Materialize - File
|
The following example demonstrates different types of File Upload Controls.
materialize_file.htm
<html>
<head>
<title>The Materialize File Example</title>
<meta name = "viewport" content = "width = device-width, initial-scale = 1">
<link rel = "stylesheet"
href = "https://fonts.googleapis.com/icon?family=Material+Icons">
<link rel = "stylesheet"
href = "https://cdnjs.cloudflare.com/ajax/libs/materialize/0.97.3/css/materialize.min.css">
<script type = "text/javascript"
src = "https://code.jquery.com/jquery-2.1.1.min.js"></script>
<script src = "https://cdnjs.cloudflare.com/ajax/libs/materialize/0.97.3/js/materialize.min.js">
</script>
</head>
<body class = "container">
<div class = "row">
<form class = "col s12">
<div class = "row">
<label>Materialize File Input</label>
<div class = "file-field input-field">
<div class = "btn">
<span>Browse</span>
<input type = "file" />
</div>
<div class = "file-path-wrapper">
<input class = "file-path validate" type = "text"
placeholder = "Upload file" />
</div>
</div>
</div>
<div class = "row">
<label>Materialize Multi File Input</label>
<div class = "file-field input-field">
<div class = "btn">
<span>Browse</span>
<input type = "file" multiple />
</div>
<div class = "file-path-wrapper">
<input class = "file-path validate" type = "text"
placeholder = "Upload multiple files" />
</div>
</div>
</div>
</form>
</div>
</body>
</html>
Verify the result.
|
[
{
"code": null,
"e": 2263,
"s": 2187,
"text": "The following example demonstrates different types of File Upload Controls."
},
{
"code": null,
"e": 2284,
"s": 2263,
"text": "materialize_file.htm"
},
{
"code": null,
"e": 4230,
"s": 2284,
"text": "<html>\n <head>\n <title>The Materialize File Example</title>\n <meta name = \"viewport\" content = \"width = device-width, initial-scale = 1\"> \n <link rel = \"stylesheet\"\n href = \"https://fonts.googleapis.com/icon?family=Material+Icons\">\n <link rel = \"stylesheet\"\n href = \"https://cdnjs.cloudflare.com/ajax/libs/materialize/0.97.3/css/materialize.min.css\">\n <script type = \"text/javascript\"\n src = \"https://code.jquery.com/jquery-2.1.1.min.js\"></script> \n <script src = \"https://cdnjs.cloudflare.com/ajax/libs/materialize/0.97.3/js/materialize.min.js\">\n </script> \n </head>\n \n <body class = \"container\"> \n <div class = \"row\">\n <form class = \"col s12\">\n <div class = \"row\">\n <label>Materialize File Input</label>\n <div class = \"file-field input-field\">\n <div class = \"btn\">\n <span>Browse</span>\n <input type = \"file\" />\n </div>\n \n <div class = \"file-path-wrapper\">\n <input class = \"file-path validate\" type = \"text\"\n placeholder = \"Upload file\" />\n </div>\n </div>\n </div>\n \n <div class = \"row\">\n <label>Materialize Multi File Input</label>\n <div class = \"file-field input-field\">\n <div class = \"btn\">\n <span>Browse</span>\n <input type = \"file\" multiple />\n </div>\n \n <div class = \"file-path-wrapper\">\n <input class = \"file-path validate\" type = \"text\"\n placeholder = \"Upload multiple files\" />\n </div>\n </div> \n </div>\n </form> \n </div>\n </body> \n</html>"
},
{
"code": null,
"e": 4249,
"s": 4230,
"text": "Verify the result."
},
{
"code": null,
"e": 4256,
"s": 4249,
"text": " Print"
},
{
"code": null,
"e": 4267,
"s": 4256,
"text": " Add Notes"
}
] |
Date.now() function in JavaScript
|
The Date object is a data type built into the JavaScript language. Date objects are created with the new Date( ) constructor.
Once a Date object is created, a number of methods allow you to operate on it. Most methods simply allow you to get and set the year, month, day, hour, minute, second, and millisecond fields of the object, using either local time or UTC (universal, or GMT) time.
The now() function of the Date object returns the number of milliseconds since 1st Jan 1970.
Its syntax is as follows
Date.now();
Live Demo
<html>
<head>
<title>JavaScript Example</title>
</head>
<body>
<script type="text/javascript">
var currentDate = Date.now();
var birthDate = Date.parse('29 sep 1989 00:4:00 GMT');
document.write(currentDate);
document.write(", "+birthDate+", ");
document.write(currentDate - birthDate);
</script>
</body>
</html>
1537780591654, 623030640000, 914749951654
You can get the current date using this function, but since it returns the number of milliseconds from 1st Jan 1970, to get a formatted date, pass the value obtained to the Date constructor.
Live Demo
<html>
<head>
<title>JavaScript Example</title>
</head>
<body>
<script type="text/javascript">
var currentDate = Date.now();
document.write(currentDate);
document.write("<br>");
      document.write(new Date(currentDate).toString());
</script>
</body>
</html>
1539758885099
Wed Oct 17 2018 12:18:05 GMT+0530 (India Standard Time)
|
[
{
"code": null,
"e": 1191,
"s": 1062,
"text": "The Date object is a data type built into the JavaScript language. Date objects are created with the new Date( ) as shown below."
},
{
"code": null,
"e": 1454,
"s": 1191,
"text": "Once a Date object is created, a number of methods allow you to operate on it. Most methods simply allow you to get and set the year, month, day, hour, minute, second, and millisecond fields of the object, using either local time or UTC (universal, or GMT) time."
},
{
"code": null,
"e": 1547,
"s": 1454,
"text": "The now() function of the Date object returns the number of milliseconds since 1st Jan 1970."
},
{
"code": null,
"e": 1572,
"s": 1547,
"text": "Its syntax is as follows"
},
{
"code": null,
"e": 1584,
"s": 1572,
"text": "Date.now();"
},
{
"code": null,
"e": 1595,
"s": 1584,
"text": " Live Demo"
},
{
"code": null,
"e": 1947,
"s": 1595,
"text": "<html>\n<head>\n <title>JavaScript Example</title>\n</head>\n<body>\n <script type=\"text/javascript\">\n var currentDate = Date.now();\n var birthDate = Date.parse('29 sep 1989 00:4:00 GMT');\n document.write(currentDate);\n document.write(\", \"+birthDate+\", \");\n document.write(currentDate - birthDate);\n </script>\n</body>\n</html>"
},
{
"code": null,
"e": 1989,
"s": 1947,
"text": "1537780591654, 623030640000, 914749951654"
},
{
"code": null,
"e": 2181,
"s": 1989,
"text": "You can get the current date using this function but, since it returns the nu8mber of milliseconds from 1st Jan 1970 to get the formatted date pass the value obtained to the Date constructor."
},
{
"code": null,
"e": 2192,
"s": 2181,
"text": " Live Demo"
},
{
"code": null,
"e": 2475,
"s": 2192,
"text": "<html>\n<head>\n <title>JavaScript Example</title>\n</head>\n<body>\n <script type=\"text/javascript\">\n var currentDate = Date.now();\n document.write(currentDate);\n document.write(\"<br>\");\n document.write(Date(currentDate).toString());\n </script>\n</body>\n</html>"
},
{
"code": null,
"e": 2545,
"s": 2475,
"text": "1539758885099\nWed Oct 17 2018 12:18:05 GMT+0530 (India Standard Time)"
}
] |
Move Column to First Position of DataFrame in R - GeeksforGeeks
|
23 Aug, 2021
In this article, we are going to see how to move a particular column of a dataframe to the first position in the R programming language.
Create dataframe for demonstration:
R
# create a dataframe with 4 columns
# they are id, name, age and address
data = data.frame(id=c(1, 2, 3),
                  name=c("sravan", "bobby", "satwik"),
                  age=c(23, 21, 17),
                  address=c("kakumanu", "ponnur", "hyd"))

# display
data
Output:
In this method, we will move the column to the first position using base R.
Syntax: dataframe[ , c(“column_name”, names(dataframe)[names(dataframe) != “column_name”])]
where
dataframe is the input dataframe
column_name is the name of the column to be shifted to the first
We are going to move a particular column to the first position using the index operator; with the c() function, we combine the chosen column name with the remaining column names so that it is pushed to the first position in the dataframe.
Example: R program to shift the columns to first for the above-created dataframe.
R
# create a dataframe with 4 columns
# they are id, name, age and address
data = data.frame(id = c(1,2,3),
                  name = c("sravan","bobby","satwik"),
                  age = c(23,21,17),
                  address = c("kakumanu","ponnur","hyd"))

# display
print("Original Dataframe")
data

print("After moving age to first column : ")

# move age to first column
data_Sorted = data[ , c("age", names(data)[names(data) != "age"])]

# display sorted data
data_Sorted

print("After moving address to first column : ")

# move address to first column
data_Sorted1 = data[ , c("address", names(data)[names(data) != "address"])]

# display sorted data
data_Sorted1
Output:
By using the dplyr package we can shift a particular column to the first position. Here we use the %>% operator to pipe the dataframe into the select() function, which takes the name of the column to be shifted first, while everything() keeps the remaining columns in their original order.
Syntax: dataframe%>% dplyr::select(“column_name”, everything())
where
dataframe is the input dataframe
column_name is the column to be shifted to first
Example: R program to shift particular column to first
R
# load the dplyr package
library("dplyr")

# create a dataframe with 4 columns
# they are id, name, age and address
data = data.frame(id = c(1,2,3),
                  name = c("sravan","bobby","satwik"),
                  age = c(23,21,17),
                  address = c("kakumanu","ponnur","hyd"))

# display
print("Original Dataframe")
data

print("After moving age to first column : ")

# move age to first column
data_Sorted = data %>% dplyr::select("age", everything())

# display sorted data
data_Sorted
Output:
|
[
{
"code": null,
"e": 24851,
"s": 24823,
"text": "\n23 Aug, 2021"
},
{
"code": null,
"e": 25001,
"s": 24851,
"text": "In this article, we are going to see how to move particular column in the dataframe to the first position of the dataframe in R Programming language."
},
{
"code": null,
"e": 25037,
"s": 25001,
"text": "Create dataframe for demonstration:"
},
{
"code": null,
"e": 25039,
"s": 25037,
"text": "R"
},
{
"code": "# create a dataframe with 4 columns# they are id,name,age and addressdata = data.frame(id=c(1, 2, 3), name=c(\"sravan\", \"bobby\", \"satwik\"), age=c(23, 21, 17), address=c(\"kakumanu\", \"ponnur\", \"hyd\")) # displaydata",
"e": 25327,
"s": 25039,
"text": null
},
{
"code": null,
"e": 25335,
"s": 25327,
"text": "Output:"
},
{
"code": null,
"e": 25420,
"s": 25335,
"text": "In this method we will move the columns to the first position using base R language."
},
{
"code": null,
"e": 25512,
"s": 25420,
"text": "Syntax: dataframe[ , c(“column_name”, names(dataframe)[names(dataframe) != “column_name”])]"
},
{
"code": null,
"e": 25518,
"s": 25512,
"text": "where"
},
{
"code": null,
"e": 25551,
"s": 25518,
"text": "dataframe is the input dataframe"
},
{
"code": null,
"e": 25616,
"s": 25551,
"text": "column_name is the name of the column to be shifted to the first"
},
{
"code": null,
"e": 25801,
"s": 25616,
"text": "We are going to move a particular column to first position using index operator and by c() function, we will combine the column name and then pushed to first position in the dataframe."
},
{
"code": null,
"e": 25883,
"s": 25801,
"text": "Example: R program to shift the columns to first for the above-created dataframe."
},
{
"code": null,
"e": 25885,
"s": 25883,
"text": "R"
},
{
"code": "# create a dataframe with 4 columns# they are id,name,age and address data = data.frame(id = c(1,2,3), name = c(\"sravan\",\"bobby\", \"satwik\"), age = c(23,21,17), address = c(\"kakumanu\",\"ponnur\",\"hyd\")) # displayprint(\"Original Dataframe\")data print(\"After moving age to first column : \") # move age to first columndata_Sorted = data[ , c(\"age\", names(data)[names(data) != \"age\"])] # display sorted datadata_Sorted print(\"After move address to first column : \") # move address to first columndata_Sorted1 = data[ , c(\"address\", names(data)[names(data) != \"address\"])] # display sorted datadata_Sorted1",
"e": 26617,
"s": 25885,
"text": null
},
{
"code": null,
"e": 26625,
"s": 26617,
"text": "Output:"
},
{
"code": null,
"e": 26898,
"s": 26625,
"text": "By using this package we can shift particular column to the first, Here we are using %>% operator to load the shifted data into dataframe and use the select () function that takes the particular column name to be shifted and every() thing is used to get the dataframe data"
},
{
"code": null,
"e": 26962,
"s": 26898,
"text": "Syntax: dataframe%>% dplyr::select(“column_name”, everything())"
},
{
"code": null,
"e": 26968,
"s": 26962,
"text": "where"
},
{
"code": null,
"e": 27001,
"s": 26968,
"text": "dataframe is the input dataframe"
},
{
"code": null,
"e": 27050,
"s": 27001,
"text": "column_name is the column to be shifted to first"
},
{
"code": null,
"e": 27105,
"s": 27050,
"text": "Example: R program to shift particular column to first"
},
{
"code": null,
"e": 27107,
"s": 27105,
"text": "R"
},
{
"code": "# load the dplyr packagelibrary(\"dplyr\") # create a dataframe with 4 columns# they are id,name,age and address data = data.frame(id = c(1,2,3), name = c(\"sravan\",\"bobby\", \"satwik\"), age = c(23,21,17), address = c(\"kakumanu\",\"ponnur\",\"hyd\")) # displayprint(\"Original Dataframe\")data print(\"After moving age to first column : \") # move age to first columndata_Sorted = data %>% dplyr::select(\"age\", everything()) # display sorted datadata_Sorted",
"e": 27671,
"s": 27107,
"text": null
},
{
"code": null,
"e": 27679,
"s": 27671,
"text": "Output:"
},
{
"code": null,
"e": 27686,
"s": 27679,
"text": "Picked"
},
{
"code": null,
"e": 27707,
"s": 27686,
"text": "R DataFrame-Programs"
},
{
"code": null,
"e": 27719,
"s": 27707,
"text": "R-DataFrame"
},
{
"code": null,
"e": 27730,
"s": 27719,
"text": "R Language"
},
{
"code": null,
"e": 27741,
"s": 27730,
"text": "R Programs"
},
{
"code": null,
"e": 27839,
"s": 27741,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27848,
"s": 27839,
"text": "Comments"
},
{
"code": null,
"e": 27861,
"s": 27848,
"text": "Old Comments"
},
{
"code": null,
"e": 27913,
"s": 27861,
"text": "Change Color of Bars in Barchart using ggplot2 in R"
},
{
"code": null,
"e": 27951,
"s": 27913,
"text": "How to Change Axis Scales in R Plots?"
},
{
"code": null,
"e": 27986,
"s": 27951,
"text": "Group by function in R using Dplyr"
},
{
"code": null,
"e": 28044,
"s": 27986,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 28093,
"s": 28044,
"text": "How to filter R DataFrame by values in a column?"
},
{
"code": null,
"e": 28151,
"s": 28093,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 28200,
"s": 28151,
"text": "How to filter R DataFrame by values in a column?"
},
{
"code": null,
"e": 28243,
"s": 28200,
"text": "Replace Specific Characters in String in R"
},
{
"code": null,
"e": 28293,
"s": 28243,
"text": "How to filter R dataframe by multiple conditions?"
}
] |
Policy-Based Methods. Hill Climbing algorithm | by Jordi TORRES.AI | Towards Data Science
|
This is a new post devoted to Policy-Based Methods, in the “Deep Reinforcement Learning Explained” series. Here we will introduce a class of algorithms that allow us to approximate the policy function, π, instead of the value functions (V or Q). Remember that we defined the policy as the entity that tells us what to do in every state. That is, instead of training a network that outputs action values, we will train a network to output (the probability of) actions, as we previewed with one example in Post 6.
The central topic in Value Iteration and Q-learning, introduced in previous Posts, is the value of the state (denoted by V) or the value of the state-action pair (denoted by Q). Remember that the value is defined as the discounted total reward that we can gather from a state or by issuing a particular action from the state. If the value is known, the decision on every step becomes simple and obvious: act greedily in terms of value, and that guarantees a good total reward at the end of the episode. To obtain these values, we have used the Bellman equation, which expresses the value on the current step via the values on the next step (it makes a prediction from a prediction).
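As a quick reminder of what that backup looks like in practice, here is a toy value-iteration loop; the two-state MDP below is invented purely for illustration, it is not from the series:

```python
# Tiny deterministic MDP: transitions[state][action] = (reward, next_state)
GAMMA = 0.9
transitions = {0: {'left': (0.0, 0), 'right': (1.0, 1)},
               1: {'left': (0.0, 0), 'right': (2.0, 1)}}
V = {0: 0.0, 1: 0.0}

# Bellman backup: the value of a state is expressed via the values
# of the states reachable on the next step
for _ in range(200):
    V = {s: max(r + GAMMA * V[s2] for r, s2 in transitions[s].values())
         for s in transitions}

print(V)  # converges toward V(1) = 2/(1-0.9) = 20 and V(0) = 1 + 0.9*20 = 19
```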
Reinforcement learning is ultimately about learning an optimal policy, denoted by π*, from interacting with the Environment. So far, we’ve been looking at value-based methods, where we first find an estimate of the optimal action-value function q*, from which we obtain the optimal policy π*.
For small state spaces, like the Frozen-Lake example introduced in Post 1, this optimal value function q* can be represented in a table, the Q-table, with one row for each state and one column for each action. At each time step, for a given state, we only need to pull its corresponding row from the table, and the optimal action is just the action with the largest value entry.
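In code, that table lookup is just an argmax over the state's row; the numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical Q-table: 4 states x 2 actions
Q = np.array([[0.1, 0.5],
              [0.3, 0.2],
              [0.0, 0.9],
              [0.7, 0.6]])

state = 2
best_action = int(np.argmax(Q[state]))  # pull the row, take the largest entry
print(best_action)  # 1
```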
But what about environments with much larger state spaces, like the Pong Environment introduced in the previous Posts? There’s a vast number of possible states, and that would make the table way too big to be useful in practice. So, we presented how to represent the optimal action-value function q* with a neural network. In this case, the neural network is fed with the Environment state as input, and it returns as output the value of each possible action.
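A minimal stand-in for such a network, using a single linear layer with made-up dimensions instead of a real deep net, keeps the same interface: state in, one value estimate per action out:

```python
import numpy as np

rng = np.random.default_rng(1)

# One linear layer mapping a 4-dimensional state to 2 action values.
# A real DQN stacks several nonlinear layers, but the contract is identical.
W = rng.normal(size=(2, 4))
b = np.zeros(2)

def q_values(state):
    return W @ state + b  # one value estimate per action

state = np.array([0.1, -0.2, 0.05, 0.3])
values = q_values(state)
print(values.shape)            # (2,)
print(int(np.argmax(values)))  # greedy action for this state
```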
But it is important to note that in both cases, whether we used a table or a neural network, we had to first estimate the optimal action-value function before we could tackle the optimal policy π*. Then, an interesting question arises: can we directly find the optimal policy without first having to deal with a value function? The answer is yes, and the class of algorithms to accomplish this are known as policy-based methods.
With value-based methods, the Agent uses its experience with the Environment to maintain an estimate of the optimal action-value function. The optimal policy is then obtained from the optimal action-value function estimate (e.g., using e-greedy).
Instead, Policy-based methods directly learn the optimal policy from the interactions with the environment, without having to maintain a separate value function estimate.
An example of a policy-based method was already introduced at the beginning of this series, when the cross-entropy method was presented in Post 6. We introduced that a policy, denoted by π(a|s), says which action the Agent should take for every state observed. In practice, the policy is usually represented as a probability distribution over actions (that the Agent can take at a given state), with the number of classes equal to the number of actions we can carry out. We refer to it as a stochastic policy because it returns a probability distribution over actions rather than returning a single deterministic action.
Policy-based methods offer a few advantages over value-prediction methods like DQN, presented in the previous three Posts. One is that, as we already discussed, we no longer have to worry about devising an action-selection strategy like the ε-greedy policy; instead, we directly sample actions from the policy. And this is important; remember that we spent a lot of time fixing up methods to improve the stability of training our DQN. For instance, we had to use experience replay and target networks, and there are several other methods in the academic literature that help. A policy network tends to simplify some of that complexity.
In Deep Reinforcement Learning, it is common to represent the policy with a neural network (as we did for the first time in Post 6). Let's consider the Cart-Pole balancing problem from Post 12 of this series as an example to introduce how we can represent a policy with a neural network.
Remember that a cart is positioned on a frictionless track along the horizontal axis in this example, and a pole is anchored to the top of the cart. The objective is to keep the pole from falling over by moving the cart either left or right, and without falling off the track.
The system is controlled by applying a force of +1 (left) or -1 (right) to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every time-step that the pole remains upright, including the episode’s final step. The episode ends when the pole is more than 15 degrees from the vertical, or the cart moves more than 2.4 units from the center.
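These termination rules can be sketched as a simple check. This is a toy illustration using the thresholds stated above, not the Gym environment's internal code:

```python
# Toy check of the episode-termination rules described in the text:
# the episode ends when the pole tilts more than 15 degrees from
# vertical, or the cart drifts more than 2.4 units from the center.
def episode_done(pole_angle_deg, cart_position):
    return abs(pole_angle_deg) > 15 or abs(cart_position) > 2.4

print(episode_done(16.0, 0.0))   # pole fell too far: True
print(episode_done(5.0, 1.0))    # still balanced: False
print(episode_done(0.0, 3.0))    # cart left the track: True
```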
The observation space for this Environment at each time point is an array of 4 numbers. At every time step, you can observe its position, velocity, angle, and angular velocity. These are the observable states of this world. You can look up what each of these numbers represents in this document. Notice the minimum (-Inf) and maximum (Inf) values for both Cart Velocity and the Pole Velocity at Tip. Since the entry in the array corresponding to each of these indices can be any real number, that means the state space is infinite!
At any state, the cart only has two possible actions: move to the left or move to the right. In other words, the state-space of the Cart-Pole has four dimensions of continuous values, and the action-space has one dimension of two discrete values.
We can construct a neural network that approximates the policy that takes a state as input. In this example, the output layer will have two nodes that return, respectively, the probability for each action. In general, if the Environment has discrete action space, as in this example, the output layer has a node for each possible action and contains the probability that the Agent should select each possible action.
The way to use the network is that the Agent feeds the current Environment state, and then the Agent samples from the probabilities of actions (left or right in this case) to select its next action.
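A minimal sketch of this sampling step, assuming hypothetical action probabilities as the network's output for one state:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical action probabilities produced by a policy network
# for one observed state (e.g., left and right in Cart-Pole).
probs = np.array([0.3, 0.7])

# A stochastic policy samples the next action from this distribution
# instead of always taking the argmax.
action = rng.choice(len(probs), p=probs)
print(action)
```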
Then, the objective is to determine appropriate values for the network weights represented by θ (Theta). θ encodes the policy that for each state that we pass into the network, it returns action probabilities where the optimal action is most likely to be selected. The actions chosen influences the Rewards obtained that are used to get the return.
Remember that the Agent’s goal is always to maximize expected return. In our case, let’s denote the expected return as J. The main idea is that it is possible to write the expected return J as a function of θ. Later we will see how we can express this relationship, J(θ), in a more “mathematical” way to find the values for the weights that maximize the expected return.
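In standard reinforcement-learning notation (not spelled out in this post, but consistent with the definition of return used in this series), this relationship is usually written as an expectation over trajectories τ obtained by following the policy π with weights θ:

```latex
J(\theta) \;=\; \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ R(\tau) \right]
\;=\; \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \sum_{t=0}^{T} \gamma^{t}\, r_{t} \right]
```

The next posts develop how to express and maximize this objective more formally.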
In the previous section, we have seen how a neural network can represent a policy. The weights in this neural network are initially set to random values. Then, the Agent updates the weights as it interacts with the Environment. This section will give an overview of approaches that we can take towards optimizing these weights, derivative-free methods, also known as zero-order methods.
Derivative-free methods directly search in parameter space for the vector of weights that maximizes the returns obtained by a policy; by evaluating only some positions of the parameter space, without derivatives that compute the gradients. Let’s explain the most straightforward algorithm in this category that will help us to understand later how policy gradient methods work, the Hill Climbing.
Hill Climbing is an iterative algorithm that can be used to find the weights θ for an optimal policy. It is a relatively simple algorithm that the Agent can use to gradually improve the weights θ in its policy network while interacting with the Environment.
As the name indicates, intuitively, we can visualize that the algorithm draws up a strategy to reach the highest point of a hill, where θ indicates the coordinates of where we are at a given moment and G indicates the altitude at which we are at that point:
This visual example represents a function of two parameters, but the same idea extends to more than two parameters. The algorithm begins with an initial guess for the value of θ (random set of weights). We collect a single episode with the policy that corresponds to those weights θ and then record the return G.
This return is an estimate of what the surface looks like at that value of Theta. It will not be a perfect estimate because the return we just collected is unlikely to be equal to the expected return. This is because due to randomness in the Environment (and the policy, if it is stochastic), it is highly likely that if we collect a second episode with the same values for θ, we’ll likely get a different value for the return G. But in practice, even though the (sampled) return is not a perfect estimate for the expected return estimates, it often turns out to be good enough.
At each iteration, we slightly perturb the values (add a little bit of random noise) of the current best estimate for the weights θ, to yield a new set of candidate weights we can try. These new weights are then used to collect an episode. To see how good those new weights are, we’ll use the policy that they give us to again interact with the Environment for an episode and add up the return.
If the new weights give us more return than our current best estimate, we focus our attention on that new value, and then we just repeat by iteratively proposing new policies in the hope that they outperform the existing policy. In the event that they don’t, we go back to our last best guess for the optimal policy and iterate until we end up with the optimal policy.
Now that we have an intuitive understanding of how the hill climbing algorithm should work, we can summarize it in the following pseudocode:
1. Initialize policy π with random weights θ
2. Initialize θbest (our best guess for the weights θ)
3. Initialize Gbest (our highest return G we have gotten so far)
4. Collect a single episode with θbest, and record the return G
5. If G > Gbest then θbest ← θ and Gbest ← G
6. Add a little bit of random noise to θbest, to get a new set of weights θ
7. Repeat steps 4–6 until Environment solved.
In our example, we assumed a surface with only one maximum, for which hill-climbing algorithms are well-suited. Note that the algorithm is not guaranteed to always yield the weights of the optimal policy on a surface with more than one local maximum. This is because if the algorithm begins in a poor location, it may converge to a lower local maximum.
This section will explore an implementation of Hill Climbing applied to the Cart-Pole Environment, based on the previous pseudocode. The neural network model here is so simple (just a weight matrix of shape [4x2], state_space x action_space) that it does not use tensors at all: no PyTorch is required, nor even a GPU.
The code presented in this section can be found on GitHub (and can be run as a Colab google notebook using this link).
Since the code repeats many of the things that we have been using in previous posts, we will not describe it in detail; I think it is quite self-explanatory.
As always, we will start by importing the required packages and create the Environment:
import gym
import numpy as np
from collections import deque
import matplotlib.pyplot as plt

env = gym.make('CartPole-v0')
The policy π (and its initialization with random weights θ) can be coded as:
class Policy():
    def __init__(self, s_size=4, a_size=2):
        # 1. Initialize policy π with random weights
        self.θ = 1e-4*np.random.rand(s_size, a_size)

    def forward(self, state):
        x = np.dot(state, self.θ)
        return np.exp(x)/sum(np.exp(x))

    def act(self, state):
        probs = self.forward(state)
        action = np.argmax(probs)   # deterministic policy
        return action
To visualize the training’s effect, we print the weights θ before and after the training and render how the Agent applies the policy:
from IPython import display   # needed for the inline rendering below

def watch_agent():
    env = gym.make('CartPole-v0')
    state = env.reset()
    rewards = []
    img = plt.imshow(env.render(mode='rgb_array'))
    for t in range(2000):
        action = policy.act(state)
        img.set_data(env.render(mode='rgb_array'))
        plt.axis('off')
        display.display(plt.gcf())
        display.clear_output(wait=True)
        state, reward, done, _ = env.step(action)
        rewards.append(reward)
        if done:
            print("Reward:", sum([r for r in rewards]))
            break
    env.close()

policy = Policy()
print("Policy weights θ before train:\n", policy.θ)
watch_agent()

Policy weights θ before train:
 [[6.30558674e-06 2.13219853e-05]
 [2.32801200e-05 5.86359967e-05]
 [1.33454380e-05 6.69857175e-05]
 [9.39527443e-05 6.65193884e-05]]
Reward: 9.0
The following code defines the function that trains the Agent:
def hill_climbing(n_episodes=10000, gamma=1.0, noise=1e-2):
    """Implementation of hill climbing.

    Params
    ======
        n_episodes (int): maximum number of training episodes
        gamma (float): discount rate
        noise (float): standard deviation of additive noise
    """
    scores_deque = deque(maxlen=100)
    scores = []
    # 2. Initialize θbest
    θbest = policy.θ
    # 3. Initialize Gbest
    Gbest = -np.Inf
    for i_episode in range(1, n_episodes+1):
        rewards = []
        state = env.reset()
        # 4. Collect a single episode with θ, and record the return G
        while True:
            action = policy.act(state)
            state, reward, done, _ = env.step(action)
            rewards.append(reward)
            if done:
                break
        scores_deque.append(sum(rewards))
        scores.append(sum(rewards))
        discounts = [gamma**i for i in range(len(rewards)+1)]
        G = sum([a*b for a, b in zip(discounts, rewards)])
        # 5. If G >= Gbest then θbest ← θ and Gbest ← G
        if G >= Gbest:
            Gbest = G
            θbest = policy.θ
        # 6. Add a little bit of random noise to θbest
        policy.θ = θbest + noise * np.random.rand(*policy.θ.shape)
        if i_episode % 10 == 0:
            print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
        # 7. Repeat steps 4-6 until Environment solved.
        if np.mean(scores_deque) >= env.spec.reward_threshold:
            print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
            policy.θ = θbest
            break
    return scores
The code does not require much explanation, as it is quite explicit and is annotated with the corresponding pseudocode steps. Still, a few details are worth noting. For instance, the algorithm seeks to maximize the cumulative discounted reward, which in Python looks as follows:
discounts = [gamma**i for i in range(len(rewards)+1)]
G = sum([a*b for a, b in zip(discounts, rewards)])
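As a quick sanity check, here is the same computation on made-up rewards:

```python
# Toy check of the discounted-return computation, with made-up rewards
# and gamma = 0.9. Note that zip() truncates to the shorter list, so
# the extra entry in discounts is simply unused.
gamma = 0.9
rewards = [1.0, 1.0, 1.0]

discounts = [gamma**i for i in range(len(rewards)+1)]
G = sum(a*b for a, b in zip(discounts, rewards))
print(G)  # 1.0 + 0.9 + 0.81 ≈ 2.71
```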
Remember that Hill Climbing is a simple gradient-free algorithm (i.e., we do not use gradient ascent/descent). We try to climb to the top of the curve by only changing the argument of the target function G, the weight matrix θ that defines the neural network underlying our model, by adding a certain amount of noise:
policy.θ = θbest + noise * np.random.rand(*policy.θ.shape)
As in some previous examples, we try to exceed a certain threshold to consider the Environment solved. For Cartpole-v0, this threshold score is 195, indicated by env.spec.reward_threshold. In the example that we used to write this post, we only needed 215 episodes to solve the Environment:
scores = hill_climbing(gamma=0.9)

Episode 10	Average Score: 59.50
Episode 20	Average Score: 95.45
Episode 30	Average Score: 122.37
Episode 40	Average Score: 134.60
Episode 50	Average Score: 145.60
Episode 60	Average Score: 149.38
Episode 70	Average Score: 154.33
Episode 80	Average Score: 160.04
Episode 90	Average Score: 163.56
Episode 100	Average Score: 166.87
Episode 110	Average Score: 174.70
Episode 120	Average Score: 168.54
Episode 130	Average Score: 170.92
Episode 140	Average Score: 173.79
Episode 150	Average Score: 174.83
Episode 160	Average Score: 178.00
Episode 170	Average Score: 179.60
Episode 180	Average Score: 179.58
Episode 190	Average Score: 180.41
Episode 200	Average Score: 180.74
Episode 210	Average Score: 186.96
Environment solved in 215 episodes!	Average Score: 195.65
With the following code, we can plot the scores obtained in each episode during training:
fig = plt.figure()
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
Now we print the weights θ again after the training, and watch how the Agent applies this policy; it clearly behaves more intelligently:
print("Policy weights θ after train:\n", policy.θ)
watch_agent()

Policy weights θ after train:
 [[0.83126272 0.83426041]
 [0.83710884 0.86015151]
 [0.84691878 0.89171965]
 [0.80911446 0.87010399]]
Reward: 200.0
Although in this example, we have coded a deterministic policy for simplicity, Policy-based methods can learn either stochastic or deterministic policies, and they can be used to solve Environments with either finite or continuous action spaces.
The Hill Climbing algorithm does not require the target function to be differentiable or even continuous, but because it takes random steps, it may not follow the most efficient path up the hill. The literature offers many improvements to this approach: adaptive noise scaling, steepest-ascent hill climbing, random restarts, simulated annealing, evolution strategies, or the cross-entropy method (presented in Post 6).
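To give an intuition for one of those variants, here is a hedged sketch of adaptive noise scaling. This is not the code used in this post; the variable names and thresholds are illustrative assumptions. The idea is to shrink the noise radius when a candidate improves the return (narrowing the search around the best weights) and widen it otherwise (to escape plateaus):

```python
import numpy as np

rng = np.random.default_rng(1)

noise, noise_min, noise_max = 1e-2, 1e-3, 2.0
θbest = rng.random((4, 2))   # current best weights (random placeholder)

def propose(θbest, noise):
    # Candidate weights: a small random perturbation around the best ones.
    return θbest + noise * rng.standard_normal(θbest.shape)

candidate = propose(θbest, noise)

improved = True  # placeholder for the real test "G >= Gbest"
if improved:
    noise = max(noise_min, noise / 2)   # narrow the search around θbest
else:
    noise = min(noise_max, noise * 2)   # widen the search to escape plateaus

print(noise)  # 0.005
```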
However, the usual solutions to this problem consider Policy Gradient Methods that estimate an optimal policy’s weights through gradient ascent. Policy gradient methods are a subclass of policy-based methods that we will present in the next post.
In this post, we introduced the concept of Policy-Based Methods. There are several reasons to consider policy-based methods instead of the value-based methods that seemed to work so well in the previous posts. Primarily, policy-based methods directly tackle the problem at hand (estimating the optimal policy) without having to store additional data, i.e., action values that may not be useful. A further advantage over value-based methods is that policy-based methods are well-suited for continuous action spaces. And, as we will see in future posts, unlike value-based methods, policy-based methods can learn true stochastic policies.
See you in the next post!
by UPC Barcelona Tech and Barcelona Supercomputing Center
A relaxed introductory series that gradually and with a practical approach introduces the reader to this exciting technology that is the real enabler of the latest disruptive advances in the field of Artificial Intelligence.
I started to write this series in May, during the period of lockdown in Barcelona. Honestly, writing these posts in my spare time helped me to #StayAtHome because of the lockdown. Thank you for reading this publication in those days; it justifies the effort I made.
Disclaimers: These posts were written during this period of lockdown in Barcelona as a personal distraction and as a way to disseminate scientific knowledge, in case it could be of help to someone, but without the purpose of being an academic reference document in the DRL area. If the reader needs a more rigorous document, the last post in the series offers an extensive list of academic resources and books to consult. The author is aware that this series of posts may contain some errors, and that the English text would need a revision if the purpose were an academic document. But although the author would like to improve the content in quantity and quality, his professional commitments do not leave him free time to do so. However, the author commits to refining all the errors that readers report as soon as he can.
},
{
"code": null,
"e": 14086,
"s": 14023,
"text": "The following code defines the function that trains the Agent:"
},
{
"code": null,
"e": 15803,
"s": 14086,
"text": "def hill_climbing(n_episodes=10000, gamma=1.0, noise=1e-2): \"\"\"Implementation of hill climbing. Params ====== n_episodes (int): maximum number of training episodes gamma (float): discount rate noise(float): standard deviation of additive noise \"\"\" scores_deque = deque(maxlen=100) scores = [] #2. Initialize θbest Gbest = -np.Inf #3. Initialize Gbest θbest = policy.θ for i_episode in range(1, n_episodes+1): rewards = [] state = env.reset() while True: #4.Collect a single episode with θ,and record the return G action = policy.act(state) state, reward, done, _ = env.step(action) rewards.append(reward) if done: break scores_deque.append(sum(rewards)) scores.append(sum(rewards)) discounts = [gamma**i for i in range(len(rewards)+1)] G = sum([a*b for a,b in zip(discounts, rewards)]) if G >= Gbest: # 5. If G>Gbest then θbest←θ & Gbest←G Gbest = G θbest = policy.θ #6. Add a little bit of random noise to θbes policy.θ = θbest + noise * np.random.rand(*policy.θ.shape) if i_episode % 10 == 0: print('Episode {}\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque))) # 7. Repeat steps 4-6 until Environment solved. if np.mean(scores_deque)>=env.spec.reward_threshold: print('Environment solved in {:d} episodes!\\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque))) policy.θ = θbest break return scores"
},
{
"code": null,
"e": 16076,
"s": 15803,
"text": "The code does not require too much explanation as it is quite explicit, and it is annotated with the corresponding pseudocode steps. Maybe note some details. For instance, the algorithm seeks to maximize the cumulative discounted reward, and it looks in Python as follows:"
},
{
"code": null,
"e": 16179,
"s": 16076,
"text": "discounts = [gamma**i for i in range(len(rewards)+1)]G = sum([a*b for a,b in zip(discounts, rewards)])"
},
{
"code": null,
"e": 16510,
"s": 16179,
"text": "Remember that Hill Climbing is a simple gradient-free algorithm (i.e., we do not use the gradient ascent/gradient descent methods). We try to climb to the top of the curve by only changing the arguments of the target function G, the weight matrix θ determining the neural network that underlies in our mode, using a certain noise:"
},
{
"code": null,
"e": 16569,
"s": 16510,
"text": "policy.θ = θbest + noise * np.random.rand(*policy.θ.shape)"
},
{
"code": null,
"e": 16860,
"s": 16569,
"text": "As in some previous examples, we try to exceed a certain threshold to consider the Environment solved. For Cartpole-v0, this threshold score is 195, indicated by env.spec.reward_threshold. In the example that we used to write this post, we only needed 215 episodes to solve the Environment:"
},
{
"code": null,
"e": 17633,
"s": 16860,
"text": "scores = hill_climbing(gamma=0.9)Episode 10\tAverage Score: 59.50Episode 20\tAverage Score: 95.45Episode 30\tAverage Score: 122.37Episode 40\tAverage Score: 134.60Episode 50\tAverage Score: 145.60Episode 60\tAverage Score: 149.38Episode 70\tAverage Score: 154.33Episode 80\tAverage Score: 160.04Episode 90\tAverage Score: 163.56Episode 100\tAverage Score: 166.87Episode 110\tAverage Score: 174.70Episode 120\tAverage Score: 168.54Episode 130\tAverage Score: 170.92Episode 140\tAverage Score: 173.79Episode 150\tAverage Score: 174.83Episode 160\tAverage Score: 178.00Episode 170\tAverage Score: 179.60Episode 180\tAverage Score: 179.58Episode 190\tAverage Score: 180.41Episode 200\tAverage Score: 180.74Episode 210\tAverage Score: 186.96Environment solved in 215 episodes!\tAverage Score: 195.65"
},
{
"code": null,
"e": 17723,
"s": 17633,
"text": "With the following code, we can plot the scores obtained in each episode during training:"
},
{
"code": null,
"e": 17839,
"s": 17723,
"text": "fig = plt.figure()plt.plot(np.arange(1, len(scores)+1), scores)plt.ylabel('Score')plt.xlabel('Episode #')plt.show()"
},
{
"code": null,
"e": 17953,
"s": 17839,
"text": "Now we plot the weights θ again, after the training and also how the Agent applies this policy and seems smarter:"
},
{
"code": null,
"e": 18158,
"s": 17953,
"text": "print (\"Policy weights θ after train:\\n\", policy.θ)watch_agent()Policy weights θ after train: [[0.83126272 0.83426041] [0.83710884 0.86015151] [0.84691878 0.89171965] [0.80911446 0.87010399]]Reward: 200.0"
},
{
"code": null,
"e": 18404,
"s": 18158,
"text": "Although in this example, we have coded a deterministic policy for simplicity, Policy-based methods can learn either stochastic or deterministic policies, and they can be used to solve Environments with either finite or continuous action spaces."
},
{
"code": null,
"e": 18810,
"s": 18404,
"text": "Hill Climbing algorithm does not need to be differentiable or even continuous, but because it is taking random steps, this may not result in the most efficient path up the hill. There are in the literature many improvements to this approach: adaptive noise scaling, Steepest ascent hill climbing, random restarts, simulated annealing, evolution strategies, or cross-entropy method (presented in Post 6 ) ."
},
{
"code": null,
"e": 19057,
"s": 18810,
"text": "However, the usual solutions to this problem consider Policy Gradient Methods that estimate an optimal policy’s weights through gradient ascent. Policy gradient methods are a subclass of policy-based methods that we will present in the next post."
},
{
"code": null,
"e": 19700,
"s": 19057,
"text": "In this post, we introduced the concept of Policy-Based Methods. There are several reasons why we consider policy-based methods instead of value-based methods that seem to work so well as we saw in the previous post. Primarily because Policy-based methods directly get to the problem at hand (estimating the optimal policy) without having to store additional data, i.e., the action values that may not be useful. A further advantage over value-based methods, Policy-based methods are well-suited for continuous action spaces. As we will see in future posts, unlike value-based methods, policy-based methods can learn true stochastic policies."
},
{
"code": null,
"e": 19726,
"s": 19700,
"text": "See you in the next post!"
},
{
"code": null,
"e": 19784,
"s": 19726,
"text": "by UPC Barcelona Tech and Barcelona Supercomputing Center"
},
{
"code": null,
"e": 20009,
"s": 19784,
"text": "A relaxed introductory series that gradually and with a practical approach introduces the reader to this exciting technology that is the real enabler of the latest disruptive advances in the field of Artificial Intelligence."
},
{
"code": null,
"e": 20275,
"s": 20009,
"text": "I started to write this series in May, during the period of lockdown in Barcelona. Honestly, writing these posts in my spare time helped me to #StayAtHome because of the lockdown. Thank you for reading this publication in those days; it justifies the effort I made."
}
] |
list::pop_front() and list::pop_back() in C++ STL - GeeksforGeeks
|
12 Dec, 2017
Lists are containers used in C++ to store data in a non-contiguous fashion. Arrays and vectors are contiguous in nature; therefore, their insertion and deletion operations are costlier than the corresponding insertion and deletion operations on lists.
pop_front() function is used to pop or remove elements from a list from the front. The value is removed from the list from the beginning, and the container size is decreased by 1.
Syntax :
listname.pop_front()
Parameters :
No argument is passed as parameter.
Result :
Removes the value present at the front
of the given list named as listname
Examples:
Input : list<int> list{1, 2, 3, 4, 5};
list.pop_front();
Output : 2, 3, 4, 5
Input : list<int> list{5, 4, 3, 2, 1};
list.pop_front();
Output : 4, 3, 2, 1
Errors and Exceptions
No-Throw-Guarantee – if an exception is thrown, there are no changes in the container.
If the list is empty, it shows undefined behaviour.
// CPP program to illustrate
// pop_front() function
#include <iostream>
#include <list>
using namespace std;

int main()
{
    list<int> mylist{ 1, 2, 3, 4, 5 };
    mylist.pop_front();
    // list becomes 2, 3, 4, 5
    for (auto it = mylist.begin(); it != mylist.end(); ++it)
        cout << ' ' << *it;
}
Output:
2, 3, 4, 5
Application : Starting with an empty list, insert the following numbers in the given order using the push_front() function, and print the reverse of the list.
Input : 1, 2, 3, 4, 5, 6, 7, 8
Output: 8, 7, 6, 5, 4, 3, 2, 1
// CPP program to illustrate
// application of pop_front() function
#include <iostream>
#include <list>
using namespace std;

int main()
{
    list<int> mylist{}, newlist{};
    mylist.push_front(8);
    mylist.push_front(7);
    mylist.push_front(6);
    mylist.push_front(5);
    mylist.push_front(4);
    mylist.push_front(3);
    mylist.push_front(2);
    mylist.push_front(1);
    // list becomes 1, 2, 3, 4, 5, 6, 7, 8
    while (!mylist.empty()) {
        newlist.push_front(mylist.front());
        mylist.pop_front();
    }
    for (auto it = newlist.begin(); it != newlist.end(); ++it)
        cout << ' ' << *it;
}
Output:
8, 7, 6, 5, 4, 3, 2, 1
pop_back() function is used to pop or remove elements from a list from the back. The value is removed from the list from the end, and the container size is decreased by 1.
Syntax :
listname.pop_back()
Parameters :
No argument is passed as parameter.
Result :
Removes the value present at the end or back
of the given list named as listname
Examples:
Input : list<int> list{1, 2, 3, 4, 5};
list.pop_back();
Output : 1, 2, 3, 4
Input : list<int> list{5, 4, 3, 2, 1};
list.pop_back();
Output : 5, 4, 3, 2
Errors and Exceptions
No-Throw-Guarantee – if an exception is thrown, there are no changes in the container.
If the list is empty, it shows undefined behaviour.
// CPP program to illustrate
// pop_back() function
#include <iostream>
#include <list>
using namespace std;

int main()
{
    list<int> mylist{ 1, 2, 3, 4, 5 };
    mylist.pop_back();
    // list becomes 1, 2, 3, 4
    for (auto it = mylist.begin(); it != mylist.end(); ++it)
        cout << ' ' << *it;
}
Output:
1, 2, 3, 4
Application : Starting with an empty list, insert the following numbers in the given order using the push_front() function, and print the reverse of the list.
Input : 1, 20, 39, 43, 57, 64, 73, 82
Output: 82, 73, 64, 57, 43, 39, 20, 1
// CPP program to illustrate
// application of pop_back() function
#include <iostream>
#include <list>
using namespace std;

int main()
{
    list<int> mylist{}, newlist{};
    mylist.push_front(82);
    mylist.push_front(73);
    mylist.push_front(64);
    mylist.push_front(57);
    mylist.push_front(43);
    mylist.push_front(39);
    mylist.push_front(20);
    mylist.push_front(1);
    // list becomes 1, 20, 39, 43, 57, 64, 73, 82
    while (!mylist.empty()) {
        newlist.push_back(mylist.back());
        mylist.pop_back();
    }
    for (auto it = newlist.begin(); it != newlist.end(); ++it)
        cout << ' ' << *it;
}
Output:
82, 73, 64, 57, 43, 39, 20, 1
CPP-Library
STL
C++
STL
CPP
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Inheritance in C++
Map in C++ Standard Template Library (STL)
C++ Classes and Objects
Operator Overloading in C++
Socket Programming in C/C++
Bitwise Operators in C/C++
Multidimensional Arrays in C / C++
Virtual Function in C++
Constructors in C++
Templates in C++ with Examples
|
[
{
"code": null,
"e": 24701,
"s": 24673,
"text": "\n12 Dec, 2017"
},
{
"code": null,
"e": 24952,
"s": 24701,
"text": "Lists are containers used in C++ to store data in a non contiguous fashion, Normally, Arrays and Vectors are contiguous in nature, therefore the insertion and deletion operations are costlier as compared to the insertion and deletion option in Lists."
},
{
"code": null,
"e": 25132,
"s": 24952,
"text": "pop_front() function is used to pop or remove elements from a list from the front. The value is removed from the list from the beginning, and the container size is decreased by 1."
},
{
"code": null,
"e": 25141,
"s": 25132,
"text": "Syntax :"
},
{
"code": null,
"e": 25297,
"s": 25141,
"text": "listname.pop_front()\nParameters :\nNo argument is passed as parameter.\nResult :\nRemoves the value present at the front \nof the given list named as listname\n"
},
{
"code": null,
"e": 25307,
"s": 25297,
"text": "Examples:"
},
{
"code": null,
"e": 25473,
"s": 25307,
"text": "Input : list list{1, 2, 3, 4, 5};\n list.pop_front();\nOutput : 2, 3, 4, 5\n\nInput : list list{5, 4, 3, 2, 1};\n list.pop_front();\nOutput : 4, 3, 2, 1\n"
},
{
"code": null,
"e": 25495,
"s": 25473,
"text": "Errors and Exceptions"
},
{
"code": null,
"e": 25633,
"s": 25495,
"text": "No-Throw-Guarantee – if an exception is thrown, there are no changes in the container.If the list is empty, it shows undefined behaviour."
},
{
"code": null,
"e": 25720,
"s": 25633,
"text": "No-Throw-Guarantee – if an exception is thrown, there are no changes in the container."
},
{
"code": null,
"e": 25772,
"s": 25720,
"text": "If the list is empty, it shows undefined behaviour."
},
{
"code": "// CPP program to illustrate// pop_front() function#include <iostream>#include <list>using namespace std; int main(){ list<int> mylist{ 1, 2, 3, 4, 5 }; mylist.pop_front(); // list becomes 2, 3, 4, 5 for (auto it = mylist.begin(); it != mylist.end(); ++it) cout << ' ' << *it;}",
"e": 26074,
"s": 25772,
"text": null
},
{
"code": null,
"e": 26082,
"s": 26074,
"text": "Output:"
},
{
"code": null,
"e": 26094,
"s": 26082,
"text": "2, 3, 4, 5\n"
},
{
"code": null,
"e": 26228,
"s": 26094,
"text": "Application : Input an empty list with the following numbers and order using push_front() function and print the reverse of the list."
},
{
"code": null,
"e": 26291,
"s": 26228,
"text": "Input : 1, 2, 3, 4, 5, 6, 7, 8\nOutput: 8, 7, 6, 5, 4, 3, 2, 1\n"
},
{
"code": "// CPP program to illustrate// application Of pop_front() function#include <iostream>#include <list>using namespace std; int main(){ list<int> mylist{}, newlist{}; mylist.push_front(8); mylist.push_front(7); mylist.push_front(6); mylist.push_front(5); mylist.push_front(4); mylist.push_front(3); mylist.push_front(2); mylist.push_front(1); // list becomes 1, 2, 3, 4, 5, 6, 7, 8 while (!mylist.empty()) { newlist.push_front(mylist.front()); mylist.pop_front(); } for (auto it = newlist.begin(); it != newlist.end(); ++it) cout << ' ' << *it;}",
"e": 26899,
"s": 26291,
"text": null
},
{
"code": null,
"e": 26907,
"s": 26899,
"text": "Output:"
},
{
"code": null,
"e": 26931,
"s": 26907,
"text": "8, 7, 6, 5, 4, 3, 2, 1\n"
},
{
"code": null,
"e": 27103,
"s": 26931,
"text": "pop_back() function is used to pop or remove elements from a list from the back. The value is removed from the list from the end, and the container size is decreased by 1."
},
{
"code": null,
"e": 27112,
"s": 27103,
"text": "Syntax :"
},
{
"code": null,
"e": 27273,
"s": 27112,
"text": "listname.pop_back()\nParameters :\nNo argument is passed as parameter.\nResult :\nRemoves the value present at the end or back \nof the given list named as listname\n"
},
{
"code": null,
"e": 27283,
"s": 27273,
"text": "Examples:"
},
{
"code": null,
"e": 27447,
"s": 27283,
"text": "Input : list list{1, 2, 3, 4, 5};\n list.pop_back();\nOutput : 1, 2, 3, 4\n\nInput : list list{5, 4, 3, 2, 1};\n list.pop_back();\nOutput : 5, 4, 3, 2\n"
},
{
"code": null,
"e": 27469,
"s": 27447,
"text": "Errors and Exceptions"
},
{
"code": null,
"e": 27607,
"s": 27469,
"text": "No-Throw-Guarantee – if an exception is thrown, there are no changes in the container.If the list is empty, it shows undefined behaviour."
},
{
"code": null,
"e": 27694,
"s": 27607,
"text": "No-Throw-Guarantee – if an exception is thrown, there are no changes in the container."
},
{
"code": null,
"e": 27746,
"s": 27694,
"text": "If the list is empty, it shows undefined behaviour."
},
{
"code": "// CPP program to illustrate// pop_back() function#include <iostream>#include <list>using namespace std; int main(){ list<int> mylist{ 1, 2, 3, 4, 5 }; mylist.pop_back(); // list becomes 1, 2, 3, 4 for (auto it = mylist.begin(); it != mylist.end(); ++it) cout << ' ' << *it;}",
"e": 28046,
"s": 27746,
"text": null
},
{
"code": null,
"e": 28054,
"s": 28046,
"text": "Output:"
},
{
"code": null,
"e": 28066,
"s": 28054,
"text": "1, 2, 3, 4\n"
},
{
"code": null,
"e": 28200,
"s": 28066,
"text": "Application : Input an empty list with the following numbers and order using push_front() function and print the reverse of the list."
},
{
"code": null,
"e": 28277,
"s": 28200,
"text": "Input : 1, 20, 39, 43, 57, 64, 73, 82\nOutput: 82, 73, 64, 57, 43, 39, 20, 1\n"
},
{
"code": "// CPP program to illustrate// application Of pop_back() function#include <iostream>#include <list>using namespace std; int main(){ list<int> mylist{}, newlist{}; mylist.push_front(82); mylist.push_front(73); mylist.push_front(64); mylist.push_front(57); mylist.push_front(43); mylist.push_front(39); mylist.push_front(20); mylist.push_front(1); // list becomes 1, 20, 39, 43, 57, 64, 73, 82 while (!mylist.empty()) { newlist.push_back(mylist.back()); mylist.pop_back(); } for (auto it = newlist.begin(); it != newlist.end(); ++it) cout << ' ' << *it;}",
"e": 28895,
"s": 28277,
"text": null
},
{
"code": null,
"e": 28903,
"s": 28895,
"text": "Output:"
},
{
"code": null,
"e": 28934,
"s": 28903,
"text": "82, 73, 64, 57, 43, 39, 20, 1\n"
},
{
"code": null,
"e": 28946,
"s": 28934,
"text": "CPP-Library"
},
{
"code": null,
"e": 28950,
"s": 28946,
"text": "STL"
},
{
"code": null,
"e": 28954,
"s": 28950,
"text": "C++"
},
{
"code": null,
"e": 28958,
"s": 28954,
"text": "STL"
},
{
"code": null,
"e": 28962,
"s": 28958,
"text": "CPP"
},
{
"code": null,
"e": 29060,
"s": 28962,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29079,
"s": 29060,
"text": "Inheritance in C++"
},
{
"code": null,
"e": 29122,
"s": 29079,
"text": "Map in C++ Standard Template Library (STL)"
},
{
"code": null,
"e": 29146,
"s": 29122,
"text": "C++ Classes and Objects"
},
{
"code": null,
"e": 29174,
"s": 29146,
"text": "Operator Overloading in C++"
},
{
"code": null,
"e": 29202,
"s": 29174,
"text": "Socket Programming in C/C++"
},
{
"code": null,
"e": 29229,
"s": 29202,
"text": "Bitwise Operators in C/C++"
},
{
"code": null,
"e": 29264,
"s": 29229,
"text": "Multidimensional Arrays in C / C++"
},
{
"code": null,
"e": 29288,
"s": 29264,
"text": "Virtual Function in C++"
},
{
"code": null,
"e": 29308,
"s": 29288,
"text": "Constructors in C++"
}
] |
Convert dictionary object into string in Python
|
For data manipulation in Python, we may come across situations where we need to convert a dictionary object into a string object. This can be achieved in the following ways.
In this straightforward method, we simply apply str() by passing the dictionary object as a parameter. We can check the type of the objects using type() before and after the conversion.
DictA = {"Mon": "2 pm","Wed": "9 am","Fri": "11 am"}
print("Given dictionary : \n", DictA)
print("Type : ", type(DictA))
# using str
res = str(DictA)
# Print result
print("Result as string:\n", res)
print("Type of Result: ", type(res))
Running the above code gives us the following result −
Given dictionary :
{'Mon': '2 pm', 'Wed': '9 am', 'Fri': '11 am'}
Type : <class 'dict'>
Result as string:
{'Mon': '2 pm', 'Wed': '9 am', 'Fri': '11 am'}
Type of Result: <class 'str'>
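Note that the str() output uses single quotes, so it is not valid JSON; to turn such a string back into a dictionary, the standard-library ast.literal_eval can parse it safely. A short sketch of mine:

```python
import ast

d = {"Mon": "2 pm", "Wed": "9 am", "Fri": "11 am"}
s = str(d)                      # "{'Mon': '2 pm', ...}" with single quotes
restored = ast.literal_eval(s)  # safely parse the Python-literal string back
print(restored == d)            # True
```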
The json module gives us the dumps() method. Through this method, the dictionary object is directly converted into a string.
import json
DictA = {"Mon": "2 pm","Wed": "9 am","Fri": "11 am"}
print("Given dictionary : \n", DictA)
print("Type : ", type(DictA))
# using json.dumps
res = json.dumps(DictA)
# Print result
print("Result as string:\n", res)
print("Type of Result: ", type(res))
Running the above code gives us the following result −
Given dictionary :
{'Mon': '2 pm', 'Wed': '9 am', 'Fri': '11 am'}
Type : <class 'dict'>
Result as string:
{"Mon": "2 pm", "Wed": "9 am", "Fri": "11 am"}
Type of Result: <class 'str'>
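Because json.dumps produces a string with double quotes, it is valid JSON and can be parsed back into a dictionary with json.loads, giving a clean round trip. A short sketch of mine:

```python
import json

d = {"Mon": "2 pm", "Wed": "9 am", "Fri": "11 am"}
s = json.dumps(d)         # '{"Mon": "2 pm", ...}' with double quotes
restored = json.loads(s)  # parse the JSON string back into a dict
print(restored == d)      # True
```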
|
[
{
"code": null,
"e": 1220,
"s": 1062,
"text": "For data manipulation in python we may come across situation to convert a dictionary object into a string object. This can be achieved in the following ways."
},
{
"code": null,
"e": 1410,
"s": 1220,
"text": "In this straight forward method we simple apply the str() by passing the dictionary object as a parameter. We can check the type of the objects using the type() before and after conversion."
},
{
"code": null,
"e": 1421,
"s": 1410,
"text": " Live Demo"
},
{
"code": null,
"e": 1657,
"s": 1421,
"text": "DictA = {\"Mon\": \"2 pm\",\"Wed\": \"9 am\",\"Fri\": \"11 am\"}\nprint(\"Given dictionary : \\n\", DictA)\nprint(\"Type : \", type(DictA))\n# using str\nres = str(DictA)\n# Print result\nprint(\"Result as string:\\n\", res)\nprint(\"Type of Result: \", type(res))"
},
{
"code": null,
"e": 1712,
"s": 1657,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 1866,
"s": 1712,
"text": "Given dictionary :\n{'Mon': '2 pm', 'Wed': '9 am', 'Fri': '11 am'}\nType :\nResult as string:\n{'Mon': '2 pm', 'Wed': '9 am', 'Fri': '11 am'}\nType of Result:"
},
{
"code": null,
"e": 1982,
"s": 1866,
"text": "The json module gives us dumps method. Through this method the dictionary object is directly converted into string."
},
{
"code": null,
"e": 1993,
"s": 1982,
"text": " Live Demo"
},
{
"code": null,
"e": 2248,
"s": 1993,
"text": "import json\nDictA = {\"Mon\": \"2 pm\",\"Wed\": \"9 am\",\"Fri\": \"11 am\"}\nprint(\"Given dictionary : \\n\", DictA)\nprint(\"Type : \", type(DictA))\n# using str\nres = json.dumps(DictA)\n# Print result\nprint(\"Result as string:\\n\", res)\nprint(\"Type of Result: \", type(res))"
},
{
"code": null,
"e": 2303,
"s": 2248,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 2457,
"s": 2303,
"text": "Given dictionary :\n{'Mon': '2 pm', 'Wed': '9 am', 'Fri': '11 am'}\nType :\nResult as string:\n{\"Mon\": \"2 pm\", \"Wed\": \"9 am\", \"Fri\": \"11 am\"}\nType of Result:"
}
] |
How to prevent a dialog from closing when a button is clicked using Kotlin?
|
This example demonstrates how to prevent a dialog from closing when a button is clicked using Kotlin.
Step 1 − Create a new project in Android Studio, go to File → New Project and fill in all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_marginTop="50dp"
android:text="Tutorials Point"
android:textAlignment="center"
android:textColor="@android:color/holo_green_dark"
android:textSize="32sp"
android:textStyle="bold" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.kt
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.widget.Button
import android.widget.Toast
import androidx.appcompat.app.AlertDialog
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
title = "KotlinApp"
val dialog: AlertDialog = AlertDialog.Builder(this)
.setTitle("My Alert Dialog")
.setMessage("Do you wanna dismiss this Dialog")
.setPositiveButton("Ok", null)
.setNegativeButton("Cancel", null)
.show();
val positiveButton: Button = dialog.getButton(AlertDialog.BUTTON_POSITIVE);
positiveButton.setOnClickListener {
Toast.makeText(this, "Do not dismiss", Toast.LENGTH_SHORT).show();
}
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.q11">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
Click here to download the project code.
|
[
{
"code": null,
"e": 1164,
"s": 1062,
"text": "This example demonstrates how to prevent a dialog from closing when a button is clicked using Kotlin."
},
{
"code": null,
"e": 1293,
"s": 1164,
"text": "Step 1 − Create a new project in Android Studio, go to File ? New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1358,
"s": 1293,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 1999,
"s": 1358,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n tools:context=\".MainActivity\">\n<TextView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_centerHorizontal=\"true\"\n android:layout_marginTop=\"50dp\"\n android:text=\"Tutorials Point\"\n android:textAlignment=\"center\"\n android:textColor=\"@android:color/holo_green_dark\"\n android:textSize=\"32sp\"\n android:textStyle=\"bold\" />\n</RelativeLayout>"
},
{
"code": null,
"e": 2054,
"s": 1999,
"text": "Step 3 − Add the following code to src/MainActivity.kt"
},
{
"code": null,
"e": 2892,
"s": 2054,
"text": "import androidx.appcompat.app.AppCompatActivity\nimport android.os.Bundle\nimport android.widget.Button\nimport android.widget.Toast\nimport androidx.appcompat.app.AlertDialog\nclass MainActivity : AppCompatActivity() {\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n val dialog: AlertDialog = AlertDialog.Builder(this)\n .setTitle(\"My Alert Dialog\")\n .setMessage(\"Do you wanna dismiss this Dialog\")\n .setPositiveButton(\"Ok\", null)\n .setNegativeButton(\"Cancel\", null)\n .show();\n val positiveButton: Button = dialog.getButton(AlertDialog.BUTTON_POSITIVE);\n positiveButton.setOnClickListener {\n Toast.makeText(this, \"Do not dismiss\", Toast.LENGTH_SHORT).show();\n }\n }\n}"
},
{
"code": null,
"e": 2947,
"s": 2892,
"text": "Step 4 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 3618,
"s": 2947,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"com.example.q11\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 3966,
"s": 3618,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen"
},
{
"code": null,
"e": 4007,
"s": 3966,
"text": "Click here to download the project code."
}
] |
Deep learning pipeline for Natural Language Processing (NLP) | by Bauyrjan Jyenis | Towards Data Science
|
In this article, I will explore the basics of the Natural Language Processing (NLP) and demonstrate how to implement a pipeline that combines a traditional unsupervised learning algorithm with a deep learning algorithm to train unlabeled large text data. Hence, the main objective is going to be to demonstrate how to set up that pipeline that facilitates collection and creation of raw text data, preprocessing and categorizing an unlabeled text data to finally training and evaluating deep learning models in Keras.
After reading this tutorial, you will be able to perform the following:
How to collect data from Twitter via Twitter API and Tweepy Python package
How to efficiently read and clean up large text dataset with pandas
How to preprocess text data using basic NLP techniques and generate features
How to categorize unlabeled text data
How to train, compile, fit and evaluate deep learning models in Keras
Find my Jupyter notebooks with Python code in my GitHub here.
Roll up your sleeves, we have a lot of work to do, let’s get started.....
At the time of doing this project, the US 2020 election was just around the corner and it made sense to do sentiment analysis of tweets related to the upcoming election to learn about the kind of opinions and topics being discussed in Twitter world just about 2 weeks prior to the election day. Twitter is a great source for unfiltered opinions as opposed to the typical filtered news we see from the major media outlets. As such, we are going to build our own dataset by collecting tweets from Twitter using Twitter API and the python package Tweepy.
Before getting started with streaming data from Twitter, you must have the following:
Twitter account and Twitter API consumer keys (access token key, access token secret key, consumer key and consumer secret key)
Tweepy package installed in your Jupyter notebook
Setting up a Twitter account and retrieving your Twitter API consumer keys is out of scope of this article. Should you need help with those, check out this post.
Tweepy could be installed via pip install in Jupyter notebook, the following one line code will do the trick.
# Install Tweepy
!pip install tweepy
Once installed, go ahead and import the package into your notebook.
In this section, I will show you how to set up your data streaming pipeline using the Twitter API, Tweepy and a custom function. We can achieve this in 3 steps:
Set up your Twitter API consumer keys
Set up a Twitter API authorization handler
Write a custom function that listens and streams live tweets
# Twitter API consumer keys
access_token = " insert your key here "
access_token_secret = " insert your key here "
consumer_key = " insert your key here "
consumer_secret = " insert your key here "

# Twitter API authorization
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

# Custom listener that streams data from Twitter (20,000 tweets at most per instance)
class MyStreamListener(tweepy.StreamListener):
    """Listens to and streams Twitter data."""

    def __init__(self, api=None):
        super(MyStreamListener, self).__init__()
        self.num_tweets = 0
        self.file = open("tweets.txt", "w")

    def on_status(self, status):
        tweet = status._json
        self.file.write(json.dumps(tweet) + '\n')
        self.num_tweets += 1
        if self.num_tweets < 20000:
            return True
        else:
            self.file.close()  # close the file once the limit is reached
            return False

    def on_error(self, status):
        print(status)
Now that the environment is set up, you are ready to start streaming live tweets from Twitter. Before doing that, identify some keywords that you would like to use to collect the relevant tweets of interest for you. Since I will be streaming tweets related to the US election, I have picked some relevant keywords such as “US election”, “Trump”, “Biden” etc.
Our goal is to collect at least 400,000 tweets so that we have a sufficiently large text dataset, and it is computationally taxing to collect all of that in one go. Thus, I will set up the pipeline to stream data efficiently in chunks. Notice from the custom function above that it listens and streams up to 20,000 tweets per chunk. As such, in order to collect over 400,000 tweets, we will need to run at least 20 chunks.
Here is how the code looks for the chunk that listens and streams live tweets into a pandas DataFrame:
# Listen and stream live tweets
listener = MyStreamListener()
stream = tweepy.Stream(auth, listener)
stream.filter(track=['US Election', 'election', 'trump', 'Mike Pence',
                     'biden', 'Kamala Harris', 'Donald Trump', 'Joe Biden'])

# Read the tweets into a list
tweets_data_path = 'tweets.txt'
tweets_data = []
tweets_file_1 = open(tweets_data_path, 'r')

# Read in tweets and store in list: tweets_data
for line in tweets_file_1:
    tweet = json.loads(line)
    tweets_data.append(tweet)

# Close connection to file
tweets_file_1.close()

# Print the keys of the first tweet dict
print(tweets_data[0].keys())

# Read the data into a pandas DataFrame
names = tweets_data[0].keys()
df1 = pd.DataFrame(tweets_data, columns=names)
As mentioned, in order to collect 400,000 tweets, you will have to run the above code at least 20 times and save the collected tweets in separate pandas dataframe that will be concatenated later to consolidate all of the tweets into a single dataset.
# Concatenate dataframes into a single pandas dataframe
list_of_dataChunks = [df1, df2, df3, df4, df5, df6, df7, df8, df9, df10,
                      df11, df12, df13, df14, df15, df16, df17, df18, df19, df20]
df = pd.concat(list_of_dataChunks, ignore_index=True)

# Export the dataset into a CSV file
df.to_csv('tweets.csv', index=False)
Now that you have exported your combined dataset into a CSV file, you can use it for the next steps of cleaning and visualizing the data.
I have donated the dataset I created and made it publicly available on Kaggle.
In this section, we will clean the data we just collected. Before the data can be visualized, the dataset must be cleaned and transformed into a format that can be visualized efficiently. Given a dataset with 440,000 rows, one has to find an efficient way of reading and cleaning it. To do that, pandas' chunksize attribute can be used to read data from the CSV file into a pandas dataframe in chunks. Also, we can specify the names of the columns we are interested in rather than reading the dataset with all of its columns. With chunksize and a smaller set of columns of interest, the large dataset can be read into a dataframe efficiently and quickly, without needing alternatives such as distributed computing with PySpark on a cluster.
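As a minimal sketch of this chunked-reading pattern (with a small in-memory CSV standing in for the real tweets.csv):

```python
import io
import pandas as pd

# A tiny stand-in for the real tweets.csv file
csv_data = io.StringIO(
    "text,lang,extra\n"
    "hello world,en,1\n"
    "bonjour,fr,2\n"
    "good morning,en,3\n"
    "hola,es,4\n"
)

# Read only the columns of interest, two rows at a time
chunks = []
for chunk in pd.read_csv(csv_data, usecols=["text", "lang"], chunksize=2):
    chunks.append(chunk[chunk["lang"] == "en"])  # e.g. keep English rows only

result = pd.concat(chunks, ignore_index=True)
print(result)
```

Each chunk is a regular DataFrame, so any cleaning function can be applied to it before the next batch is read.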
To transform the dataset into a shape required for visualization, the following basic NLP techniques will be applied:
Extract the only tweets that are in English language
Drop duplicates if any
Drop missing values if any
Tokenize (break the tweets into single words)
Convert the words into lowercase
Remove punctuations
Remove stopwords
Remove URLs, the word “twitter” and other acronyms
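These cleaning steps can be sketched with the standard library alone (the stopword list here is a tiny illustrative stand-in for NLTK's full English list):

```python
import string

STOPWORDS = {"the", "is", "a", "rt", "https", "twitter"}  # tiny illustrative list

def clean_tweet(text):
    # Tokenize on whitespace and lowercase
    tokens = text.lower().split()
    # Strip surrounding punctuation and keep alphabetic tokens only
    tokens = [t.strip(string.punctuation) for t in tokens]
    tokens = [t for t in tokens if t.isalpha()]
    # Drop stopwords and platform acronyms
    return [t for t in tokens if t not in STOPWORDS]

print(clean_tweet("RT @user: The election IS coming!! https"))
# → ['user', 'election', 'coming']
```

The real pipeline below does the same thing with NLTK's tokenizer and stopword corpus, applied per chunk.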
The approach I will follow to implement the above steps is as follows:
Write a custom function that tokenizes the tweets
Write another custom function that applies all of the above mentioned cleaning steps on the data.
Finally, read the data in chunks and apply these wrangling steps via the custom functions to each of the chunks of the data as they get read.
Let’s see all of these in action...
# Function to tokenize the tweets
def custom_tokenize(text):
    """Tokenize a tweet into words."""
    from nltk.tokenize import word_tokenize
    if not text:
        print('The text to be tokenized is a None type. Defaulting to blank string.')
        text = ''
    return word_tokenize(text)

# Function that applies the cleaning steps
def clean_up(data):
    """Clean up the data into a shape that can be further used for modeling."""
    english = data[data['lang'] == 'en']                  # extract only tweets in English
    english = english.drop_duplicates()                   # drop duplicate tweets
    english['text'].dropna(inplace=True)                  # drop any rows with missing tweets
    tokenized = english['text'].apply(custom_tokenize)    # tokenize tweets
    lower_tokens = tokenized.apply(lambda x: [t.lower() for t in x])          # lowercase tokens
    alpha_only = lower_tokens.apply(lambda x: [t for t in x if t.isalpha()])  # remove punctuation
    no_stops = alpha_only.apply(
        lambda x: [t for t in x if t not in stopwords.words('english')])      # remove stop words
    # Remove acronyms and platform words; filtering (rather than list.remove())
    # avoids mutating the list while iterating over it
    no_stops = no_stops.apply(
        lambda x: [t for t in x if t not in ('rt', 'https', 'twitter', 'retweet')])
    return no_stops

# Read and clean the data
warnings.filterwarnings("ignore")
use_cols = ['text', 'lang']   # specify the columns
path = 'tweets.csv'           # path to the raw dataset
data_iterator = pd.read_csv(path, usecols=use_cols, chunksize=50000)
chunk_list = []
for data_chunk in data_iterator:
    filtered_chunk = clean_up(data_chunk)
    chunk_list.append(filtered_chunk)
tidy_data = pd.concat(chunk_list)
The chunksize in this case is 50,000: pandas reads 50,000 tweets in each chunk and applies the cleaning steps to them before reading the next batch, and so on.
After this process, the dataset will be clean and ready for visualization. To avoid repeating the data wrangling steps each time you open your notebook, you can simply export the tidy data to an external file and use that in the future. For a large dataset, it is more efficient to export it to a JSON file as opposed to CSV.
# Export the tidy data to a json file for ease of use in the next steps
tidy_data.to_json('tidy_tweets.json', orient='table')
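Assuming the same orient is used on both sides, the file can be read back in a later session; a minimal round-trip sketch (with a toy dataframe standing in for the real tidy data):

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({"text": ["vote early", "count every vote"]})
path = os.path.join(tempfile.gettempdir(), "tidy_tweets_demo.json")

df.to_json(path, orient="table")               # write, preserving schema and index
restored = pd.read_json(path, orient="table")  # reload in a later session

print(restored["text"].tolist())
```

The `table` orient stores a JSON Table Schema alongside the data, so dtypes and the index survive the round trip.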
Here is how the tidy data looks like:
Now that the data is clean, let’s visualize and understand the nature of our data. A few obvious things we can look at are as follows:
Number of words in each tweet
Average length of word in a tweet
Unigram
Bigram
Trigram
Wordcloud
It appears that the number of words in each tweet range from 1 to 19 words and on average falls between 10 to 12 words.
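The words-per-tweet distribution behind this plot comes straight from the token lists; a minimal sketch with toy data in place of the full dataset:

```python
import pandas as pd

# Toy stand-in for the tokenized tweets
tokens = pd.Series([
    ["vote", "early", "today"],
    ["election", "day"],
    ["count", "every", "vote", "now"],
])

# Number of words in each tweet
words_per_tweet = tokens.apply(len)
print(words_per_tweet.describe())
```

On the real data, `words_per_tweet.hist()` produces the histogram described above.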
The average number of characters in a word in a tweet appear to range from 3 to 14 characters and on average occurring between 5 to 7 characters. People probably choose short words to express their opinions in the best way they can within the 280 character limit set by Twitter.
As expected, the words “trump” and “biden” dominate the 2020 US election related tweets that were pulled between Oct 15 and Oct 16.
From visualizing the data, notice that the words are not lemmatized. Lemmatization is a process of turning the words into their base or dictionary form. It is a common technique used in NLP and in machine learning in general. So in the next step, we are going to lemmatize the tokens with the following code.
# Convert tokens into format required for lemmatization
from nltk.corpus import wordnet

def get_wordnet_pos(word):
    """Map POS tag to first character lemmatize() accepts"""
    tag = nltk.pos_tag([word])[0][1][0].upper()
    tag_dict = {"J": wordnet.ADJ,
                "N": wordnet.NOUN,
                "V": wordnet.VERB,
                "R": wordnet.ADV}
    return tag_dict.get(tag, wordnet.NOUN)

# Lemmatize tokens
lemmatizer = WordNetLemmatizer()
tidy_tweets['lemmatized'] = tidy_tweets['text'].apply(
    lambda x: [lemmatizer.lemmatize(word, get_wordnet_pos(word)) for word in x])

# Convert the lemmatized words back to the text format
tidy_tweets['tokens_back_to_text'] = [' '.join(map(str, l)) for l in tidy_tweets['lemmatized']]
Now, let’s save the lemmatized tokens into another JSON file to make it easy to use in the next step in the pipeline.
# Export the lemmatized data to a json file for ease of use in the next steps
tidy_tweets.to_json('lemmatized.json', orient='table')
Before undertaking the preprocessing and modeling steps, let's review and be clear on our approach for the rest of the pipeline. Before we can predict which category a tweet belongs to, we must first tag the raw tweets with categories. Remember, we streamed our data as raw tweets from Twitter, so it did not come labeled. Therefore, it is appropriate to implement the following approach:
Label the dataset with k-means clustering algorithm
Train deep learning models to predict the categories of the tweets
Evaluate the models and identify potential improvements
In this section, the objective is going to be to tag the tweets with 2 labels corresponding to positive or negative sentiments. Then further preprocess and transform the labeled text data into a format that can be further used to train deep learning models.
There are many different ways to categorize unlabeled text data, and such methods include, but are not limited to, SVM, hierarchical clustering, cosine similarity and even Amazon Mechanical Turk. In this example, I will show you another simpler, perhaps not the most accurate, quick-and-dirty way of categorizing the text data. To do that, I will first conduct a sentiment analysis with VADER to determine whether the tweets are positive, negative or neutral. Next, I will use a simple k-means clustering algorithm to cluster the tweets based on the compound score computed from how positive, negative and neutral each tweet is.
Let’s look at the dataset first
The column “tokens_back_to_text” holds the lemmatized tokens transformed back to text format, and I am going to use this column from the tidy dataset to create the sentiments with SentimentIntensityAnalyzer from the VADER package.
# Extract lemmatized text into a list
tweets = list(df['tokens_back_to_text'])

# Create sentiments with SentimentIntensityAnalyzer
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
sentiment = [sid.polarity_scores(tweet) for tweet in tweets]
Here is how the first 5 rows of the sentiments look like
Now, I will take the “compound” column from the above dataframe and feed it into a k-means clustering algorithm to categorize the tweets with 0 or 1, representing “negative” or “positive” sentiment respectively. That is, tweets with a compound value greater than or equal to 0.05 will be tagged as positive sentiment, while those with a value less than 0.05 will be tagged as negative. There is no hard rule here; it is just how I set up my experiment.
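As a sanity check on that rule, the 0.05 cut-off can also be applied directly with a one-line comprehension (the compound scores below are made up for illustration):

```python
# Made-up VADER compound scores for illustration
compound = [0.62, -0.31, 0.05, 0.01, -0.8]

# 1 = positive (compound >= 0.05), 0 = negative
labels = [1 if c >= 0.05 else 0 for c in compound]
print(labels)  # → [1, 0, 1, 0, 0]
```

k-means on the one-dimensional compound score ends up splitting the data at a similar boundary, but the threshold version makes the labeling rule explicit.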
Here is how you can implement a text labeling job with k-means clustering algorithm from scikit-learn in python. Remember to give the same index to both labels and the original dataframe where you have your tweets/texts.
# Tag the tweets with labels using the k-means clustering algorithm
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=2, random_state=0).fit(compound)
labels = pd.DataFrame(kmeans.labels_, columns=['label'], index=df.index)
Looking at the counts of labels for 0 and 1, notice that the dataset is imbalanced where more than twice the tweets are labeled as 1. This is going to impact the performance of the model, so we have to balance our dataset prior to training our models.
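A quick way to quantify this imbalance (the label vector below is illustrative, not the real dataset):

```python
from collections import Counter

labels = [1, 1, 0, 1, 1, 0, 1, 1]  # made-up label vector
counts = Counter(labels)
ratio = counts[1] / counts[0]
print(counts, f"positive/negative ratio = {ratio:.1f}")
```

A ratio well above 1 signals that a naive model could score deceptively high accuracy by always predicting the majority class, which is why resampling comes next.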
In addition, we could also possibly identify topics of the tweets from each of the categories with the help of a powerful NLP algorithm called “Latent Dirichlet Allocation” which could provide an intuition of topics in the negative and positive tweets. I will show this in a separate article at a later time. For now, let’s use the categories 0s and 1s for the sake of this exercise. So now, we have successfully converted our problem to a supervised learning problem and next we will proceed onto training deep learning models using our now labeled text data.
We have a pretty large dataset, with over 400,000 tweets and more than 60,000 unique words. Training RNNs with multiple hidden layers on such a large dataset is computationally taxing and may take days (if not weeks) if you attempt to train them on a CPU. One common approach for training deep learning models is to use GPU-optimized machines for higher training performance. In this exercise, we are going to use an Amazon SageMaker p2.xlarge instance that comes pre-loaded with the TensorFlow backend and CUDA. We will be using the Keras interface to TensorFlow.
Let’s get started, we will apply the following steps.
Tokenizing, padding and sequencing the dataset
Balance the dataset with SMOTE
Split the dataset into training and test sets
Train SimpleRNN and LSTM models
Evaluate models
The dataset must be transformed into a numerical format as machine learning algorithms do not understand natural language. Before vectorizing the data, let’s look at the text format of the data.
tweets.head()
# Prepare tokenizer
tokenizer = Tokenizer()
tokenizer.fit_on_texts(tweets)

# Integer encode the documents
sequences = tokenizer.texts_to_sequences(tweets)

# Pad documents to a max length of 14 words
maxlen = 14
X = pad_sequences(sequences, maxlen=maxlen)
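For intuition, here is a dependency-free sketch of what fit_on_texts, texts_to_sequences and pad_sequences accomplish (note: Keras assigns word ids by frequency, while this sketch assigns them by first occurrence):

```python
# Minimal sketch of integer encoding + left-padding
tweets = ["vote early", "count every vote"]

# Build a word index (ids start at 1; 0 is reserved for padding)
vocab = {}
for tweet in tweets:
    for word in tweet.split():
        vocab.setdefault(word, len(vocab) + 1)

# Map each tweet to a sequence of integer ids
sequences = [[vocab[w] for w in t.split()] for t in tweets]

# Left-pad (and truncate) every sequence to a fixed length
maxlen = 4
padded = [[0] * (maxlen - len(s)) + s[-maxlen:] for s in sequences]
print(padded)  # → [[0, 0, 1, 2], [0, 3, 4, 1]]
```

The fixed-length integer matrix is what the Embedding layer expects as input.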
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline

# Define pipeline
over = SMOTE(sampling_strategy=0.5)
under = RandomUnderSampler(sampling_strategy=0.8)
steps = [('o', over), ('u', under)]
pipeline = Pipeline(steps=steps)

# Transform the dataset
X, y = pipeline.fit_resample(X, labels['label'])

# One-hot encoding of labels
from keras.utils.np_utils import to_categorical
y = to_categorical(y)
As seen from the distribution of data between 0 and 1 from above, the data now looks to be pretty balanced compared to what it was before.
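SMOTE synthesizes new minority-class samples by interpolating between neighbors; the underlying balancing idea can be illustrated more simply with plain random oversampling (toy data, standard library only):

```python
import random

random.seed(0)
X = [[1], [2], [3], [4], [5], [6], [7], [8]]
y = [1, 1, 1, 1, 1, 1, 0, 0]           # imbalanced: six 1s, two 0s

minority = [i for i, label in enumerate(y) if label == 0]
needed = y.count(1) - y.count(0)       # how many samples to add

for i in random.choices(minority, k=needed):  # duplicate minority rows
    X.append(X[i])
    y.append(0)

print(y.count(0), y.count(1))  # → 6 6
```

Unlike this sketch, SMOTE creates new (interpolated) points rather than exact duplicates, which reduces the risk of the model simply memorizing the repeated rows.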
Now that the data is balanced, we are ready to split the data into training and test sets. I am going to put away 30% of the dataset for testing.
# Split the dataset into train and test sets
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=43)
In this section, I will show you how to implement a couple of variants of the RNN deep learning architecture: a 3-layer SimpleRNN and a 3-layer LSTM. The activation function is set to “tanh” for both SimpleRNN and LSTM layers by default, so let's leave it at that setting. I will use all 65,125 unique words as the size of the vocabulary, limit the maximum length of each input to 14 words, consistent with the tweet lengths observed earlier, and set the output dimension of the embedding matrix to 32.
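With these settings, the size of the embedding matrix alone can be sanity-checked by hand (vocab_size × output_dim trainable weights):

```python
vocab_size = 65125   # unique words in the corpus
output_dim = 32      # embedding dimension
maxlen = 14          # words per input sequence

# Each vocabulary entry gets one 32-dimensional vector
embedding_params = vocab_size * output_dim
print(embedding_params)  # → 2084000
```

This count should match the Embedding layer's row in the Keras model summaries below; the embedding dominates the parameter budget of both models.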
SimpleRNN
The dropout layers will be used as a regularization term to control overfitting. Since the dataset has binary labels, I will use binary cross-entropy as the loss function. As for the optimizer, Adam is a good choice, and I will include accuracy as the metric. I will run 10 epochs on the training set, in which 70% will be used to train the model while the remaining 30% will be used for validation. This is not to be confused with the test set we set aside.
# SimpleRNN
model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=output_dim,
                    input_length=maxlen, embeddings_constraint=maxnorm(3)))
model.add(SimpleRNN(output_dim=output_dim, return_sequences=True, kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(SimpleRNN(output_dim=output_dim, return_sequences=True, kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(SimpleRNN(output_dim=output_dim))
model.add(Dense(2, activation='softmax'))
Model summary as follows:
The SimpleRNN model results are shown as follows — 10 epochs:
LSTM
3 layer LSTM model will be trained with dropout layers. I will run 10 epochs on the training set in which 70% of the training set will be used to train the model while the remaining 30% will be used for validation. This is not to be mixed up with the test set we kept away.
# LSTM
model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=output_dim,
                    input_length=maxlen, embeddings_constraint=maxnorm(3)))
model.add(LSTM(output_dim=output_dim, return_sequences=True, kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(LSTM(output_dim=output_dim, return_sequences=True, kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(LSTM(output_dim=output_dim, kernel_constraint=maxnorm(3)))
model.add(Dense(2, activation='softmax'))
Model summary is shown as follows:
LSTM model results are shown as follows — 10 epochs:
Now, let’s plot the models’ performances over time and look at their accuracies and losses across 10 epochs.
SimpleRNN: Accuracy
SimpleRNN: Loss
Notice from the training accuracy that the SimpleRNN model quickly starts overfitting, and the validation accuracy shows high variance for the same reason.
LSTM: Accuracy
LSTM: Loss
As seen from the accuracy and loss plots of the LSTM, the model is overfitting, and the validation accuracy not only has high variance but is also dropping quickly for the same reason.
In this project, I attempted to demonstrate how to set up a deep learning pipeline that predicts the sentiments of the tweets related to the 2020 US election. To do that, I first created my own dataset by scraping raw tweets via Twitter API and Tweepy package.
Over 440,000 tweets were streamed via Twitter API and stored into a CSV file. After wrangling and visualizing the data, a traditional clustering algorithm, k-means clustering in this case, was used to tag the tweets with two different labels, representing positive or negative sentiments. That is, the problem was converted into a supervised learning problem before training the deep learning models with the data. Then the dataset was split into training and test sets.
Later, the training set was used to train the SimpleRNN and LSTM models respectively, which were evaluated using the loss and accuracy curves from each epoch. Overall, both models train as expected but are clearly overfitting the data, as seen in the accuracy plots, and as such I suggest the following recommendations for the next step.
Find another approach or different learning algorithm to label the dataset
Try Amazon Mechanical Turk or Ground Truth to label the data set
Try different RNN architectures
Perform more advanced hyperparameter tuning of the RNN architectures
Perform cross-validation
Make it a multi-class problem
1. How to efficiently collect data from Twitter via Tweepy and the Twitter API

2. How to efficiently work with large datasets
3. How to build deep learning architectures, compile and fit in Keras
4. How to apply basic NLP concepts and techniques to a text data
Again, find my Jupyter notebooks with Python code in my GitHub here and let’s connect on LinkedIn.
Enjoy deep learning :) I’d love to hear your feedback and suggestions, so please either use the clap button or comment in the section below.
Thank you!
NLP to Using RNN and LSTM
Sequence Classification with LSTM
Understanding LSTM
Word Embeddings in Gensim
|
[
{
"code": null,
"e": 690,
"s": 172,
"text": "In this article, I will explore the basics of the Natural Language Processing (NLP) and demonstrate how to implement a pipeline that combines a traditional unsupervised learning algorithm with a deep learning algorithm to train unlabeled large text data. Hence, the main objective is going to be to demonstrate how to set up that pipeline that facilitates collection and creation of raw text data, preprocessing and categorizing an unlabeled text data to finally training and evaluating deep learning models in Keras."
},
{
"code": null,
"e": 762,
"s": 690,
"text": "After reading this tutorial, you will be able to perform the following:"
},
{
"code": null,
"e": 1086,
"s": 762,
"text": "How to collect data from Twitter via Twitter API and Tweepy Python packageHow to efficiently read and clean up large text dataset with pandasHow to preprocess text data using basic NLP techniques and generate featuresHow to categorize unlabeled text dataHow to train, compile, fit and evaluate deep learning models in Keras"
},
{
"code": null,
"e": 1161,
"s": 1086,
"text": "How to collect data from Twitter via Twitter API and Tweepy Python package"
},
{
"code": null,
"e": 1229,
"s": 1161,
"text": "How to efficiently read and clean up large text dataset with pandas"
},
{
"code": null,
"e": 1306,
"s": 1229,
"text": "How to preprocess text data using basic NLP techniques and generate features"
},
{
"code": null,
"e": 1344,
"s": 1306,
"text": "How to categorize unlabeled text data"
},
{
"code": null,
"e": 1414,
"s": 1344,
"text": "How to train, compile, fit and evaluate deep learning models in Keras"
},
{
"code": null,
"e": 1476,
"s": 1414,
"text": "Find my Jupyter notebooks with Python code in my GitHub here."
},
{
"code": null,
"e": 1550,
"s": 1476,
"text": "Roll up your sleeves, we have a lot of work to do, let’s get started....."
},
{
"code": null,
"e": 2102,
"s": 1550,
"text": "At the time of doing this project, the US 2020 election was just around the corner and it made sense to do sentiment analysis of tweets related to the upcoming election to learn about the kind of opinions and topics being discussed in Twitter world just about 2 weeks prior to the election day. Twitter is a great source for unfiltered opinions as opposed to the typical filtered news we see from the major media outlets. As such, we are going to build our own dataset by collecting tweets from Twitter using Twitter API and the python package Tweepy."
},
{
"code": null,
"e": 2188,
"s": 2102,
"text": "Before getting started with streaming data from Twitter, you must have the following:"
},
{
"code": null,
"e": 2365,
"s": 2188,
"text": "Twitter account and Twitter API consumer keys (access token key, access token secret key, consumer key and consumer secret key)Tweepy package installed in your Jupyter notebook"
},
{
"code": null,
"e": 2493,
"s": 2365,
"text": "Twitter account and Twitter API consumer keys (access token key, access token secret key, consumer key and consumer secret key)"
},
{
"code": null,
"e": 2543,
"s": 2493,
"text": "Tweepy package installed in your Jupyter notebook"
},
{
"code": null,
"e": 2705,
"s": 2543,
"text": "Setting up a Twitter account and retrieving your Twitter API consumer keys is out of scope of this article. Should you need help with those, check out this post."
},
{
"code": null,
"e": 2815,
"s": 2705,
"text": "Tweepy could be installed via pip install in Jupyter notebook, the following one line code will do the trick."
},
{
"code": null,
"e": 2851,
"s": 2815,
"text": "# Install Tweepy!pip install tweepy"
},
{
"code": null,
"e": 2919,
"s": 2851,
"text": "Once installed, go ahead and import the package into your notebook."
},
{
"code": null,
"e": 3083,
"s": 2919,
"text": "In this section, I will show you how to set your data streaming pipeline with the use of Twitter API, Tweepy and a custom function. We can achieve this in 3 steps:"
},
{
"code": null,
"e": 3223,
"s": 3083,
"text": "Set up your Twitter API consumer keysSet up a Twitter API authorization handlerWrite a custom function that listens and streams live tweets"
},
{
"code": null,
"e": 3261,
"s": 3223,
"text": "Set up your Twitter API consumer keys"
},
{
"code": null,
"e": 3304,
"s": 3261,
"text": "Set up a Twitter API authorization handler"
},
{
"code": null,
"e": 3365,
"s": 3304,
"text": "Write a custom function that listens and streams live tweets"
},
{
"code": null,
"e": 4345,
"s": 3365,
"text": "# Twitter API consumer keysaccess_token = \" insert your key here \"access_token_secret = \" insert your key here \"consumer_key = \" insert your key here \"consumer_secret = \" insert your key here \"# Twitter API authorizationauth = tweepy.OAuthHandler(consumer_key, consumer_secret)auth.set_access_token(access_token, access_token_secret)# Custom function that streams data from Twitter (20,000 tweets at most per instance)class MyStreamListener(tweepy.StreamListener): \"\"\"Function to listen and stream Twitter data\"\"\" def __init__(self, api=None): super(MyStreamListener, self).__init__() self.num_tweets = 0 self.file = open(\"tweets.txt\", \"w\")def on_status(self, status): tweet = status._json self.file.write( json.dumps(tweet) + '\\n' ) self.num_tweets += 1 if self.num_tweets < 20000: return True else: return False self.file.close()def on_error(self, status): print(status)"
},
{
"code": null,
"e": 4704,
"s": 4345,
"text": "Now that the environment is set up, you are ready to start streaming live tweets from Twitter. Before doing that, identify some keywords that you would like to use to collect the relevant tweets of interest for you. Since I will be streaming tweets related to the US election, I have picked some relevant keywords such as “US election”, “Trump”, “Biden” etc."
},
{
"code": null,
"e": 5149,
"s": 4704,
"text": "Our goal is going to be to collect at least 400,000 tweets to make it a large enough text data and it is computationally taxing to collect all of that at one go. Thus, I will be setting up a pipeline in a way to efficiently stream data in chunks. Notice from the above custom function, it will listen and stream up to 20,000 tweets at most in each chunk. As such, in order to collect over 400,000 tweets, we will need to run at least 20 chunks."
},
{
"code": null,
"e": 5252,
"s": 5149,
"text": "Here is how the code looks for the chunk that listens and streams live tweets into a pandas DataFrame:"
},
{
"code": null,
"e": 5956,
"s": 5252,
"text": "# Listen and stream live tweetslistener = MyStreamListener()stream = tweepy.Stream(auth, listener)stream.filter(track = ['US Election', 'election', 'trump', 'Mike Pence', 'biden', 'Kamala Harris', 'Donald Trump', 'Joe Biden'])# Read the tweets into a listtweets_data_path = 'tweets.txt'tweets_data=[]tweets_file_1 = open(tweets_data_path, 'r')# Read in tweets and store in list: tweets_datafor line in tweets_file_1: tweet = json.loads(line) tweets_data.append(tweet)# Close connection to filetweets_file_1.close()# Print the keys of the first tweet dictprint(tweets_data[0].keys())# Read the data into a pandas DataFramenames = tweets_data[0].keys()df1 = pd.DataFrame(tweets_data, columns= names)"
},
{
"code": null,
"e": 6207,
"s": 5956,
"text": "As mentioned, in order to collect 400,000 tweets, you will have to run the above code at least 20 times and save the collected tweets in separate pandas dataframe that will be concatenated later to consolidate all of the tweets into a single dataset."
},
{
"code": null,
"e": 6520,
"s": 6207,
"text": "# Concatenate dataframes into a single pandas dataframelist_of_dataChunks = [df1, df2, df3, df4, df5, df6, df7, df8, df9, df10, df11, df12, df13, df14, df15, df16, df17, df18, df19, df20]df = pd.concat(list_of_dataChunks, ignore_index=True)# Export the dataset into a CSV filedf.to_csv('tweets.csv', index=False)"
},
{
"code": null,
"e": 6658,
"s": 6520,
"text": "Now that you have exported your combined dataset into a CSV file, you can use it for the next steps of cleaning and visualizing the data."
},
{
"code": null,
"e": 6742,
"s": 6658,
"text": "I have donated the dataset I have created and made it publicly available in Kaggle."
},
{
"code": null,
"e": 7526,
"s": 6742,
"text": "In this section, we will be cleaning the data we just collected. Before the data could be visualized, the dataset must be cleaned and transformed into a format that can be efficiently visualized. Given the dataset with 440,000 rows, one has to find an efficient way of reading and cleaning it. To do that, pandas chunksize attribute could be used to read data from CSV file into a pandas dataframe in chunks. Also, we can specify the names of columns we are interested in reading as opposed to reading the dataset with all of its columns. With the chunksize and smaller amount of columns of interest, the large dataset could be read into a dataframe rather efficiently and quickly without having to need for other alternatives such as distributed computing with PySpark on a cluster."
},
{
"code": null,
"e": 7644,
"s": 7526,
"text": "To transform the dataset into a shape required for visualization, the following basic NLP techniques will be applied:"
},
{
"code": null,
"e": 7907,
"s": 7644,
"text": "Extract the only tweets that are in English languageDrop duplicates if anyDrop missing values if anyTokenize (break the tweets into single words)Convert the words into lowercaseRemove punctuationsRemove stopwordsRemove URLs, the word “twitter” and other acronyms"
},
{
"code": null,
"e": 7960,
"s": 7907,
"text": "Extract the only tweets that are in English language"
},
{
"code": null,
"e": 7983,
"s": 7960,
"text": "Drop duplicates if any"
},
{
"code": null,
"e": 8010,
"s": 7983,
"text": "Drop missing values if any"
},
{
"code": null,
"e": 8056,
"s": 8010,
"text": "Tokenize (break the tweets into single words)"
},
{
"code": null,
"e": 8089,
"s": 8056,
"text": "Convert the words into lowercase"
},
{
"code": null,
"e": 8109,
"s": 8089,
"text": "Remove punctuations"
},
{
"code": null,
"e": 8126,
"s": 8109,
"text": "Remove stopwords"
},
{
"code": null,
"e": 8177,
"s": 8126,
"text": "Remove URLs, the word “twitter” and other acronyms"
},
{
"code": null,
"e": 8244,
"s": 8177,
"text": "The approach I will follow to implement above steps is as follows:"
},
{
"code": null,
"e": 8534,
"s": 8244,
"text": "Write a custom function that tokenizes the tweetsWrite a another custom function that applies all of the above mentioned cleaning steps on the data.Finally, read the data in chunks and apply these wrangling steps via the custom functions to each of the chunks of the data as they get read."
},
{
"code": null,
"e": 8584,
"s": 8534,
"text": "Write a custom function that tokenizes the tweets"
},
{
"code": null,
"e": 8684,
"s": 8584,
"text": "Write a another custom function that applies all of the above mentioned cleaning steps on the data."
},
{
"code": null,
"e": 8826,
"s": 8684,
"text": "Finally, read the data in chunks and apply these wrangling steps via the custom functions to each of the chunks of the data as they get read."
},
{
"code": null,
"e": 8862,
"s": 8826,
"text": "Let’s see all of these in action..."
},
{
"code": null,
"e": 10674,
"s": 8862,
"text": "# Function to tokenize the tweetsdef custom_tokenize(text): \"\"\"Function that tokenizes text\"\"\" from nltk.tokenize import word_tokenize if not text: print('The text to be tokenized is a None type. Defaulting to blank string.') text = '' return word_tokenize(text)# Function that applies the cleaning stepsdef clean_up(data): \"\"\"Function that cleans up the data into a shape that can be further used for modeling\"\"\" english = data[data['lang']=='en'] # extract only tweets in english language english.drop_duplicates() # drop duplicate tweets english['text'].dropna(inplace=True) # drop any rows with missing tweets tokenized = english['text'].apply(custom_tokenize) # Tokenize tweets lower_tokens = tokenized.apply(lambda x: [t.lower() for t in x]) # Convert tokens into lower case alpha_only = lower_tokens.apply(lambda x: [t for t in x if t.isalpha()]) # Remove punctuations no_stops = alpha_only.apply(lambda x: [t for t in x if t not in stopwords.words('english')]) # remove stop words no_stops.apply(lambda x: [x.remove(t) for t in x if t=='rt']) # remove acronym \"rt\" no_stops.apply(lambda x: [x.remove(t) for t in x if t=='https']) # remove acronym \"https\" no_stops.apply(lambda x: [x.remove(t) for t in x if t=='twitter']) # remove the word \"twitter\" no_stops.apply(lambda x: [x.remove(t) for t in x if t=='retweet']) # remove the word \"retweet\" return no_stops# Read and clean the datawarnings.filterwarnings(\"ignore\")use_cols = ['text', 'lang'] # specify the columnspath = 'tweets.csv' # path to the raw datasetdata_iterator = pd.read_csv(path, usecols=use_cols, chunksize=50000)chunk_list = []for data_chunk in data_iterator: filtered_chunk = clean_up(data_chunk) chunk_list.append(filtered_chunk)tidy_data = pd.concat(chunk_list)"
},
{
"code": null,
"e": 10863,
"s": 10674,
"text": "The chunksize in this case is 50,000 and that is how pandas will read 50,000 tweets in each chunk and apply the cleaning steps on them before reading the next batch and so on and so forth."
},
{
"code": null,
"e": 11209,
"s": 10863,
"text": "After this process, the dataset is going to be clean and be ready for visualization. To avoid performing the data wrangling steps each time you open up your notebook, you could simply export the tidy data to an external file and use that in the future. For the large dataset, it is more efficient to export them into JSON file as opposed to CSV."
},
{
"code": null,
"e": 11333,
"s": 11209,
"text": "# Explort the tidy data to json file for ease of use in the next stepstidy_data.to_json('tidy_tweets.json', orient='table')"
},
{
"code": null,
"e": 11371,
"s": 11333,
"text": "Here is how the tidy data looks like:"
},
{
"code": null,
"e": 11506,
"s": 11371,
"text": "Now that the data is clean, let’s visualize and understand the nature of our data. A few obvious things we can look at are as follows:"
},
{
"code": null,
"e": 11598,
"s": 11506,
"text": "Number of words in each tweetAverage length of word in a tweetUnigramBigramTrigramWordcloud"
},
{
"code": null,
"e": 11628,
"s": 11598,
"text": "Number of words in each tweet"
},
{
"code": null,
"e": 11662,
"s": 11628,
"text": "Average length of word in a tweet"
},
{
"code": null,
"e": 11670,
"s": 11662,
"text": "Unigram"
},
{
"code": null,
"e": 11677,
"s": 11670,
"text": "Bigram"
},
{
"code": null,
"e": 11685,
"s": 11677,
"text": "Trigram"
},
{
"code": null,
"e": 11695,
"s": 11685,
"text": "Wordcloud"
},
{
"code": null,
"e": 11815,
"s": 11695,
"text": "It appears that the number of words in each tweet range from 1 to 19 words and on average falls between 10 to 12 words."
},
{
"code": null,
"e": 12094,
"s": 11815,
"text": "The average number of characters in a word in a tweet appear to range from 3 to 14 characters and on average occurring between 5 to 7 characters. People probably choose short words to express their opinions in the best way they can within the 280 character limit set by Twitter."
},
{
"code": null,
"e": 12226,
"s": 12094,
"text": "As expected, the words “trump” and “biden” dominate the 2020 US election related tweets that were pulled between Oct 15 and Oct 16."
},
{
"code": null,
"e": 12535,
"s": 12226,
"text": "From visualizing the data, notice that the words are not lemmatized. Lemmatization is a process of turning the words into their base or dictionary form. It is a common technique used in NLP and in machine learning in general. So in the next step, we are going to lemmatize the tokens with the following code."
},
{
"code": null,
"e": 13256,
"s": 12535,
"text": "# Convert tokens into format required for lemmatizationfrom nltk.corpus import wordnetdef get_wordnet_pos(word): \"\"\"Map POS tag to first character lemmatize() accepts\"\"\" tag = nltk.pos_tag([word])[0][1][0].upper() tag_dict = {\"J\": wordnet.ADJ, \"N\": wordnet.NOUN, \"V\": wordnet.VERB, \"R\": wordnet.ADV}return tag_dict.get(tag, wordnet.NOUN)# Lemmatize tokenslemmatizer = WordNetLemmatizer()tidy_tweets['lemmatized'] = tidy_tweets['text'].apply(lambda x: [lemmatizer.lemmatize(word, get_wordnet_pos(word)) for word in x])# Convert the lemmatized words back to the text formattidy_tweets['tokens_back_to_text'] = [' '.join(map(str, l)) for l in tidy_tweets['lemmatized']]"
},
{
"code": null,
"e": 13374,
"s": 13256,
"text": "Now, let’s save the lemmatized tokens into another JSON file to make it easy to use in the next step in the pipeline."
},
{
"code": null,
"e": 13505,
"s": 13374,
"text": "# Explort the lemmatized data to json file for ease of use in the next stepstidy_tweets.to_json('lemmatized.json', orient='table')"
},
{
"code": null,
"e": 13934,
"s": 13505,
"text": "Prior to undertaking the steps in the preprocessing and modeling, let’s review and be clear on our approach for the next steps in the pipeline. Before we can make predictions as to which category a tweet belongs to, we must first tag the raw tweets with categories. Remember, we streamed our data as raw tweets from Twitter hence the data didn’t come as labeled. Therefore, it is appropriate to implement the following approach:"
},
{
"code": null,
"e": 14107,
"s": 13934,
"text": "Label the dataset with k-means clustering algorithmTrain deep learning models to predict the categories of the tweetsEvaluate the models and identify potential improvements"
},
{
"code": null,
"e": 14159,
"s": 14107,
"text": "Label the dataset with k-means clustering algorithm"
},
{
"code": null,
"e": 14226,
"s": 14159,
"text": "Train deep learning models to predict the categories of the tweets"
},
{
"code": null,
"e": 14282,
"s": 14226,
"text": "Evaluate the models and identify potential improvements"
},
{
"code": null,
"e": 14540,
"s": 14282,
"text": "In this section, the objective is going to be to tag the tweets with 2 labels corresponding to positive or negative sentiments. Then further preprocess and transform the labeled text data into a format that can be further used to train deep learning models."
},
{
"code": null,
"e": 15208,
"s": 14540,
"text": "There are many different ways to categorize unlabeled text data and such methods include, but not limited to, using SVM, hierarchical clustering, cosine similarity and even Amazon Mechanical Turk. In this example, I will show you another simpler, perhaps not the most accurate, way of doing a quick and dirty way of categorizing the text data. To do that, I will first conduct a sentiment analysis with VADER to determine whether the tweets are positive, negative or neutral. Next, I will use a simple k-means clustering algorithm to cluster the tweets based on the calculated compound value drawn from the values of how positive, negative and neutral the tweets are."
},
{
"code": null,
"e": 15240,
"s": 15208,
"text": "Let’s look at the dataset first"
},
{
"code": null,
"e": 15482,
"s": 15240,
"text": "The column “tokens_back_to_text” is the lemmatized tokens transformed back to the text format and I am going to be using this column from the tidy dataset for the creation of the sentiments with SenitmentIntensityAnalyzer from VADER package."
},
{
"code": null,
"e": 15774,
"s": 15482,
"text": "# Extract lemmatized text into a listtweets = list(df['tokens_back_to_text'])# Create sentiments with SentimentIntensityAnalyzerfrom vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer sid = SentimentIntensityAnalyzer()sentiment = [sid.polarity_scores(tweet) for tweet in tweets]"
},
{
"code": null,
"e": 15831,
"s": 15774,
"text": "Here is how the first 5 rows of the sentiments look like"
},
{
"code": null,
"e": 16306,
"s": 15831,
"text": "Now, I will use the column “compound” from the above dataframe and will feed it into a k-means clustering algorithm to categorize the tweets with 0 or 1 representing “negative” or “positive” sentiment respectively. That is, I will tag the tweets with the corresponding compound value of greater and equal to 0.05 as a positive sentiment while the value less than 0.05 will be tagged as a negative sentiment. There is no hard rule here, it is just how I set up my experiment."
},
{
"code": null,
"e": 16527,
"s": 16306,
"text": "Here is how you can implement a text labeling job with k-means clustering algorithm from scikit-learn in python. Remember to give the same index to both labels and the original dataframe where you have your tweets/texts."
},
{
"code": null,
"e": 16756,
"s": 16527,
"text": "# Tag the tweets with labels using k-means clustering algorithmfrom sklearn.cluster import KMeanskmeans = KMeans(n_clusters=2, random_state=0).fit(compound)labels = pd.DataFrame(kmeans.labels_, columns=['label'], index=df.index)"
},
{
"code": null,
"e": 17008,
"s": 16756,
"text": "Looking at the counts of labels for 0 and 1, notice that the dataset is imbalanced where more than twice the tweets are labeled as 1. This is going to impact the performance of the model, so we have to balance our dataset prior to training our models."
},
{
"code": null,
"e": 17569,
"s": 17008,
"text": "In addition, we could also possibly identify topics of the tweets from each of the categories with the help of a powerful NLP algorithm called “Latent Dirichlet Allocation” which could provide an intuition of topics in the negative and positive tweets. I will show this in a separate article at a later time. For now, let’s use the categories 0s and 1s for the sake of this exercise. So now, we have successfully converted our problem to a supervised learning problem and next we will proceed onto training deep learning models using our now labeled text data."
},
{
"code": null,
"e": 18129,
"s": 17569,
"text": "We have a pretty large dataset with over 400,000 tweets with more than 60,000 unique words. Training RNNs with multiple hidden layers on such a large dataset is computationally taxing and may take days (if not weeks) if you attempted to train them on a CPU. One common approach for training deep learning models is to use GPU optimized machines for higher training performance. In this exercise, we are going to use Amazon SageMaker p2.xlarge instance that comes pre-loaded with TensorFlow backend and CUDA. We will be using Keras interface to the TensorFlow."
},
{
"code": null,
"e": 18183,
"s": 18129,
"text": "Let’s get started, we will apply the following steps."
},
{
"code": null,
"e": 18351,
"s": 18183,
"text": "Tokenizing, padding and sequencing the datasetBalance the dataset with SMOTESplit the dataset into training and test setsTrain SimpleRNN and LSTM modelsEvaluate models"
},
{
"code": null,
"e": 18398,
"s": 18351,
"text": "Tokenizing, padding and sequencing the dataset"
},
{
"code": null,
"e": 18429,
"s": 18398,
"text": "Balance the dataset with SMOTE"
},
{
"code": null,
"e": 18475,
"s": 18429,
"text": "Split the dataset into training and test sets"
},
{
"code": null,
"e": 18507,
"s": 18475,
"text": "Train SimpleRNN and LSTM models"
},
{
"code": null,
"e": 18523,
"s": 18507,
"text": "Evaluate models"
},
{
"code": null,
"e": 18718,
"s": 18523,
"text": "The dataset must be transformed into a numerical format as machine learning algorithms do not understand natural language. Before vectorizing the data, let’s look at the text format of the data."
},
{
"code": null,
"e": 18732,
"s": 18718,
"text": "tweets.head()"
},
{
"code": null,
"e": 18984,
"s": 18732,
"text": "# prepare tokenizer tokenizer = Tokenizer() tokenizer.fit_on_texts(tweets)# integer encode the documentssequences = tokenizer.texts_to_sequences(tweets)# pad documents to a max length of 14 words maxlen = 14 X = pad_sequences(sequences, maxlen=maxlen)"
},
{
"code": null,
"e": 19452,
"s": 18984,
"text": "from imblearn.over_sampling import SMOTEfrom imblearn.under_sampling import RandomUnderSamplerfrom imblearn.pipeline import Pipeline# define pipelineover = SMOTE(sampling_strategy=0.5)under = RandomUnderSampler(sampling_strategy=0.8)steps = [('o', over), ('u', under)]pipeline = Pipeline(steps=steps)# transform the datasetX, y = pipeline.fit_resample(X, labels['label'])# One-hot encoding of labelsfrom keras.utils.np_utils import to_categoricaly = to_categorical(y)"
},
{
"code": null,
"e": 19591,
"s": 19452,
"text": "As seen from the distribution of data between 0 and 1 from above, the data now looks to be pretty balanced compared to what it was before."
},
{
"code": null,
"e": 19737,
"s": 19591,
"text": "Now that the data is balanced, we are ready to split the data into training and test sets. I am going to put away 30% of the dataset for testing."
},
{
"code": null,
"e": 19923,
"s": 19737,
"text": "# Split the dataset into train and test setsfrom sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=43)"
},
{
"code": null,
"e": 20468,
"s": 19923,
"text": "In this section, I will show you how to implement a couple of variants of RNN deep learning architecture, 3 layer SimpleRNN and 3 layer LSTM architectures. The activation function is set to “tanh” for both SimpleRNN and LSTM layers by default, so let’s just leave it at its default setting. I will use all the 65,125 unique words as the size of the vocabulary, will limit the maximum length of each input to 14 words as it is consistent with the maximum length of word in a tweet and will set the output dimension of the embedding matrix to 32."
},
{
"code": null,
"e": 20478,
"s": 20468,
"text": "SimpleRNN"
},
{
"code": null,
"e": 20980,
"s": 20478,
"text": "The dropout layers will be used to enforce regularization terms to control overfitting. As my dataset is labeled in binary class, I will use binary crossentropy as the loss function. In terms of an optimizer, Adam optimizer is a good choice and I will include accuracy as the metric. I will run 10 epochs on the training set in which 70% of the training set will be used to train the model while the remaining 30% will be used for validation. This is not to be mixed up with the test set we kept away."
},
{
"code": null,
"e": 21459,
"s": 20980,
"text": "# SimpleRNNmodel = Sequential()model.add(Embedding(input_dim = vocab_size, output_dim = output_dim, input_length = maxlen, embeddings_constraint=maxnorm(3)))model.add(SimpleRNN(output_dim=output_dim, return_sequences=True, kernel_constraint=maxnorm(3)))model.add(Dropout(0.2))model.add(SimpleRNN(output_dim=output_dim, return_sequences=True, kernel_constraint=maxnorm(3)))model.add(Dropout(0.2))model.add(SimpleRNN(output_dim=output_dim))model.add(Dense(2,activation='softmax'))"
},
{
"code": null,
"e": 21485,
"s": 21459,
"text": "Model summary as follows:"
},
{
"code": null,
"e": 21547,
"s": 21485,
"text": "The SimpleRNN model results are shown as follows — 10 epochs:"
},
{
"code": null,
"e": 21552,
"s": 21547,
"text": "LSTM"
},
{
"code": null,
"e": 21826,
"s": 21552,
"text": "3 layer LSTM model will be trained with dropout layers. I will run 10 epochs on the training set in which 70% of the training set will be used to train the model while the remaining 30% will be used for validation. This is not to be mixed up with the test set we kept away."
},
{
"code": null,
"e": 22295,
"s": 21826,
"text": "# LSTMmodel.add(Embedding(input_dim = vocab_size, output_dim = output_dim, input_length = maxlen, embeddings_constraint=maxnorm(3)))model.add(LSTM(output_dim=output_dim, return_sequences=True, kernel_constraint=maxnorm(3)))model.add(Dropout(0.2))model.add(LSTM(output_dim=output_dim, return_sequences=True, kernel_constraint=maxnorm(3)))model.add(Dropout(0.2))model.add(LSTM(output_dim=output_dim, kernel_constraint=maxnorm(3)))model.add(Dense(2,activation='softmax'))"
},
{
"code": null,
"e": 22330,
"s": 22295,
"text": "Model summary is shown as follows:"
},
{
"code": null,
"e": 22383,
"s": 22330,
"text": "LSTM model results are shown as follows — 10 epochs:"
},
{
"code": null,
"e": 22492,
"s": 22383,
"text": "Now, let’s plot the models’ performances over time and look at their accuracies and losses across 10 epochs."
},
{
"code": null,
"e": 22512,
"s": 22492,
"text": "SimpleRNN: Accuracy"
},
{
"code": null,
"e": 22528,
"s": 22512,
"text": "SimpleRNN: Loss"
},
{
"code": null,
"e": 22672,
"s": 22528,
"text": "Notice the training accuracy, the SimpleRNN model quickly starts overfitting and the validation accuracy has high variance for the same reason."
},
{
"code": null,
"e": 22687,
"s": 22672,
"text": "LSTM: Accuracy"
},
{
"code": null,
"e": 22698,
"s": 22687,
"text": "LSTM: Loss"
},
{
"code": null,
"e": 22874,
"s": 22698,
"text": "As seen from the accuracy and loss plot of LSTM, the model is overfitting and the validation accuracy not only has high variance but also dropping quickly for the same reason."
},
{
"code": null,
"e": 23135,
"s": 22874,
"text": "In this project, I attempted to demonstrate how to set up a deep learning pipeline that predicts the sentiments of the tweets related to the 2020 US election. To do that, I first created my own dataset by scraping raw tweets via Twitter API and Tweepy package."
},
{
"code": null,
"e": 23606,
"s": 23135,
"text": "Over 440,000 tweets were streamed via Twitter API and stored into a CSV file. After wrangling and visualizing the data, a traditional clustering algorithm, k-means clustering in this case, was used to tag the tweets with two different labels, representing positive or negative sentiments. That is, the problem was converted into a supervised learning problem before training the deep learning models with the data. Then the dataset was split into training and test sets."
},
{
"code": null,
"e": 24003,
"s": 23606,
"text": "Later, the training set was used to train SimpleRNN and LSTM models respectively and were evaluated using the loss and accuracy curves from the model performances in each epoch. Overall, both models appear to be performing as they should and are likely to be overfitting the data according to what is seen from accuracy plots and as such I suggest the following recommendations for the next step."
},
{
"code": null,
"e": 24078,
"s": 24003,
"text": "Find another approach or different learning algorithm to label the dataset"
},
{
"code": null,
"e": 24143,
"s": 24078,
"text": "Try Amazon Mechanical Turk or Ground Truth to label the data set"
},
{
"code": null,
"e": 24175,
"s": 24143,
"text": "Try different RNN architectures"
},
{
"code": null,
"e": 24244,
"s": 24175,
"text": "Perform more advanced hyperparameter tuning of the RNN architectures"
},
{
"code": null,
"e": 24269,
"s": 24244,
"text": "Perform cross-validation"
},
{
"code": null,
"e": 24303,
"s": 24269,
"text": "Make the data multi-class problem"
},
{
"code": null,
"e": 24417,
"s": 24303,
"text": "How to efficiently collect data from Twitter via Tweepy and Twitter APIHow to efficiently work with large dataset"
},
{
"code": null,
"e": 24489,
"s": 24417,
"text": "How to efficiently collect data from Twitter via Tweepy and Twitter API"
},
{
"code": null,
"e": 24532,
"s": 24489,
"text": "How to efficiently work with large dataset"
},
{
"code": null,
"e": 24602,
"s": 24532,
"text": "3. How to build deep learning architectures, compile and fit in Keras"
},
{
"code": null,
"e": 24667,
"s": 24602,
"text": "4. How to apply basic NLP concepts and techniques to a text data"
},
{
"code": null,
"e": 24766,
"s": 24667,
"text": "Again, find my Jupyter notebooks with Python code in my GitHub here and let’s connect on LinkedIn."
},
{
"code": null,
"e": 24907,
"s": 24766,
"text": "Enjoy deep learning :) I’d love to hear your feedback and suggestions, so please either use the clap button or comment in the section below."
},
{
"code": null,
"e": 24918,
"s": 24907,
"text": "Thank you!"
},
{
"code": null,
"e": 24944,
"s": 24918,
"text": "NLP to Using RNN and LSTM"
},
{
"code": null,
"e": 24978,
"s": 24944,
"text": "Sequence Classification with LSTM"
},
{
"code": null,
"e": 24997,
"s": 24978,
"text": "Understanding LSTM"
}
] |
How to set default parameter values to a function in Python?
|
Python’s handling of default parameter values is one of a few things that can bother most new Python programmers.
What causes issues is using a “mutable” object as a default value; that is, a value that can be modified in place, like a list or a dictionary.
A new list is created each time the function is called if a second argument isn’t provided, so that the EXPECTED OUTPUT is:
[12]
[43]
In reality, however, a new list is created only once, when the function is defined, and that same list is used in each successive call.
Python’s default arguments are evaluated once when the function is defined, not each time the function is called. This means that if you use a mutable default argument and mutate it, you will have mutated that object for all future calls to the function as well.
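This evaluate-once behavior is easy to observe directly: the default object is stored on the function object itself, in its __defaults__ attribute, so every call that omits the argument reuses the very same list. A minimal sketch (the function name demo is invented for illustration):

```python
def demo(data=[]):
    # the default list is created ONCE, at definition time
    data.append(1)
    return data

print(demo())             # [1]
print(demo())             # [1, 1] - the same list keeps growing
print(demo.__defaults__)  # ([1, 1],) - the shared default, stored on the function
print(demo() is demo())   # True - every call returns the identical object
```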
What we should do
Create a new object each time the function is called, by using a default arg to signal that no argument was provided (None is often a good choice).
def func(data=[]):
    data.append(1)
    return data
func()   # returns [1]
func()   # returns [1, 1] - the same default list is reused
def append2(element, foo=None):
    if foo is None:
        foo = []
foo.append(element)
return foo
print(append2(12))
print(append2(43))
[12]
[43]
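The None-sentinel idiom shown above works for any mutable default, not just lists. As a hedged sketch (the tally function below is invented for illustration), here is the same pattern with a dict, including the case where sharing is explicitly wanted:

```python
def tally(word, counts=None):
    # None as a sentinel: a fresh dict is created on every call
    if counts is None:
        counts = {}
    counts[word] = counts.get(word, 0) + 1
    return counts

print(tally('vote'))   # {'vote': 1} - fresh dict each time
print(tally('vote'))   # {'vote': 1} - no state leaks between calls

shared = {}
tally('vote', shared)
tally('vote', shared)
print(shared)          # {'vote': 2} - explicit sharing still works when desired
```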
|
[
{
"code": null,
"e": 1176,
"s": 1062,
"text": "Python’s handling of default parameter values is one of a few things that can bother most new Python programmers."
},
{
"code": null,
"e": 1320,
"s": 1176,
"text": "What causes issues is using a “mutable” object as a default value; that is, a value that can be modified in place, like a list or a dictionary."
},
{
"code": null,
"e": 1444,
"s": 1320,
"text": "A new list is created each time the function is called if a second argument isn’t provided, so that the EXPECTED OUTPUT is:"
},
{
"code": null,
"e": 1454,
"s": 1444,
"text": "[12]\n[43]"
},
{
"code": null,
"e": 1562,
"s": 1454,
"text": "A new list is created once when the function is defined, and the same list is used in each successive call."
},
{
"code": null,
"e": 1829,
"s": 1562,
"text": "Python’s default arguments are evaluated once when the function is defined, not each time the function is called. This means that if you use a mutable default argument and mutate it, you will and have mutated that object for all future calls to the function as well."
},
{
"code": null,
"e": 1847,
"s": 1829,
"text": "What we should do"
},
{
"code": null,
"e": 1995,
"s": 1847,
"text": "Create a new object each time the function is called, by using a default arg to signal that no argument was provided (None is often a good choice)."
},
{
"code": null,
"e": 2209,
"s": 1995,
"text": "def func(data=[]):\n data.append(1)\n return data\nfunc()\nfunc()\ndef append2(element, foo=None):\n if foo is None:\n foo = []\n foo.append(element)\n return foo\nprint(append2(12))\nprint(append2(43))"
},
{
"code": null,
"e": 2219,
"s": 2209,
"text": "[12]\n[43]"
}
] |