SQL - LEFT JOINS

The SQL LEFT JOIN returns all rows from the left table, even if there are no matches in the right table. This means that if the ON clause matches 0 (zero) records in the right table, the join will still return a row in the result, but with NULL in each column from the right table.
In other words, a left join returns all the rows from the left table, plus the matched values from the right table, or NULL where there is no matching join predicate.
The basic syntax of a LEFT JOIN is as follows.
SELECT table1.column1, table2.column2...
FROM table1
LEFT JOIN table2
ON table1.common_field = table2.common_field;
Here, the join condition can be any expression, based on your requirement.
Consider the following two tables.
Table 1 − CUSTOMERS Table is as follows.
+----+----------+-----+-----------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+-----------+----------+
| 1 | Ramesh | 32 | Ahmedabad | 2000.00 |
| 2 | Khilan | 25 | Delhi | 1500.00 |
| 3 | kaushik | 23 | Kota | 2000.00 |
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 5 | Hardik | 27 | Bhopal | 8500.00 |
| 6 | Komal | 22 | MP | 4500.00 |
| 7 | Muffy | 24 | Indore | 10000.00 |
+----+----------+-----+-----------+----------+
Table 2 − Orders Table is as follows.
+-----+---------------------+-------------+--------+
| OID | DATE | CUSTOMER_ID | AMOUNT |
+-----+---------------------+-------------+--------+
| 102 | 2009-10-08 00:00:00 | 3 | 3000 |
| 100 | 2009-10-08 00:00:00 | 3 | 1500 |
| 101 | 2009-11-20 00:00:00 | 2 | 1560 |
| 103 | 2008-05-20 00:00:00 | 4 | 2060 |
+-----+---------------------+-------------+--------+
Now, let us join these two tables using the LEFT JOIN as follows.
SQL> SELECT ID, NAME, AMOUNT, DATE
FROM CUSTOMERS
LEFT JOIN ORDERS
ON CUSTOMERS.ID = ORDERS.CUSTOMER_ID;
This would produce the following result −
+----+----------+--------+---------------------+
| ID | NAME | AMOUNT | DATE |
+----+----------+--------+---------------------+
| 1 | Ramesh | NULL | NULL |
| 2 | Khilan | 1560 | 2009-11-20 00:00:00 |
| 3 | kaushik | 3000 | 2009-10-08 00:00:00 |
| 3 | kaushik | 1500 | 2009-10-08 00:00:00 |
| 4 | Chaitali | 2060 | 2008-05-20 00:00:00 |
| 5 | Hardik | NULL | NULL |
| 6 | Komal | NULL | NULL |
| 7 | Muffy | NULL | NULL |
+----+----------+--------+---------------------+
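Because unmatched rows come back with NULL, a LEFT JOIN can also be used to find rows that have no match at all. The following query is a sketch of this pattern (my own addition, reusing the same tables): it returns only the customers who have placed no orders.

SELECT ID, NAME
FROM CUSTOMERS
LEFT JOIN ORDERS
ON CUSTOMERS.ID = ORDERS.CUSTOMER_ID
WHERE ORDERS.CUSTOMER_ID IS NULL;

With the sample data above, this would return Ramesh, Hardik, Komal and Muffy.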
GATE | GATE-CS-2014-(Set-3) | Question 65
A system uses 3 page frames for storing process pages in main memory. It uses the Least Recently Used (LRU) page replacement policy. Assume that all the page frames are initially empty. What is the total number of page faults that will occur while processing the page reference string given below?
4, 7, 6, 1, 7, 6, 1, 2, 7, 2
(A) 4
(B) 5
(C) 6
(D) 7
Answer: (C)
Explanation: What is a page fault? An interrupt that occurs when a program requests data that is not currently in real memory. The interrupt triggers the operating system to fetch the data from virtual memory and load it into RAM.
Now, 4, 7, 6, 1, 7, 6, 1, 2, 7, 2 is the reference string, you can think of it as data requests made by a program.
Now the system uses 3 page frames for storing process pages in main memory. It uses the Least Recently Used (LRU) page replacement policy.
[ ] - Initially, page frames are empty, i.e. no
      process pages in main memory.
[ 4 ] - Now 4 is brought into 1st frame (1st
page fault)
Explanation: Process page 4 was requested by the program, but it was not in the main memory(in form of page frames),which resulted in a page fault, after that process page 4 was brought in the main memory by the operating system.
[ 4 7 ] - Now 7 is brought into 2nd frame
(2nd page fault) - Same explanation.
[ 4 7 6 ] - Now 6 is brought into 3rd frame
(3rd page fault)
[ 1 7 6 ] - Now 1 is brought into 1st frame, as 1st
frame was least recently used (4th page fault).
After this, 7, 6 and 1 are already present in the frames, hence no page replacements occur for these references.
[ 1 2 6 ] - Now 2 is brought into 2nd frame, as 2nd
frame was least recently used (5th page fault).
[ 1 2 7 ] - Now 7 is brought into 3rd frame, as 3rd frame
was least recently used (6th page fault).
Hence, the total number of page faults is 6. Therefore, (C) is the answer.
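As a sanity check, the count can be reproduced with a small Python simulation of LRU replacement (a sketch of my own, not part of the original answer; the function name is made up):

def lru_page_faults(refs, frames):
    # Keep pages ordered from least recently used to most recently used.
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # hit: refresh to the most-recent position
        else:
            faults += 1           # miss: page fault
            if len(memory) == frames:
                memory.pop(0)     # evict the least recently used page
        memory.append(page)
    return faults

print(lru_page_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3))  # prints 6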
Batch Script - Logical Operators

Logical operators are used to evaluate Boolean expressions. Following are the logical operators available.
The batch language is equipped with a full set of bitwise Boolean operators such as AND, OR and XOR, but only for numbers (via SET /A). There are no built-in values for TRUE or FALSE. The only logical operator available for IF conditions is the NOT operator.
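For instance, here is a minimal sketch of the bitwise operators applied through SET /A (my own example, not from the original text; the expressions are quoted so that &, | and ^ are not interpreted by the command shell):

@echo off
SET /A a = 5
SET /A b = 3
REM Quoting keeps the shell from treating & | ^ as command operators
SET /A "AndResult = a & b"
SET /A "OrResult = a | b"
SET /A "XorResult = a ^ b"
ECHO AND: %AndResult%, OR: %OrResult%, XOR: %XorResult%

For a = 5 and b = 3, this prints AND: 1, OR: 7, XOR: 6.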
The easiest way to implement the AND/OR operator for non-binary numbers is to use the nested IF condition. The following example shows how this can be implemented.
@echo off
SET /A a = 5
SET /A b = 10
IF %a% LSS 10 (IF %b% GTR 0 (ECHO %a% is less than 10 AND %b% is greater than 0))
The above command produces the following output.
5 is less than 10 AND 10 is greater than 0
Following is an example of the OR operation that can be implemented using the IF...ELSE statement.
@echo off
SET /A a = 5
SET /A b = 10
IF %a% GEQ 10 (
   IF %b% LEQ 0 (
      ECHO %a% is NOT less than 10 OR %b% is NOT greater than 0
   ) ELSE (
      ECHO %a% is less than 10 OR %b% is greater than 0
   )
) ELSE (
   ECHO %a% is less than 10 OR %b% is greater than 0
)
The above command produces the following output.
5 is less than 10 OR 10 is greater than 0
Following is an example of how the NOT operator can be used.
@echo off
SET /A a = 5
IF NOT %a%==6 echo "A is not equal to 6"
The above command produces the following output.
"A is equal to 5"
Bayesian regression with implementation in R | by Liyi Zhang | Towards Data Science

Linear regression can be established and interpreted from a Bayesian perspective. The first parts discuss theory and assumptions pretty much from scratch, and later parts include an R implementation and remarks. Readers can feel free to copy the two blocks of code into an R notebook and play around with it.
Recall that in linear regression, we are given target values y, data X, and we use the model

y = Xw + ε

where y is an N*1 vector, X is an N*D matrix, w is a D*1 vector, and the error ε is an N*1 vector. We have N data points. Dimension D is understood in terms of features, so if we use a list of x, a list of x² (and a list of 1's corresponding to w_0), we say D=3. If you don't like matrix form, think of it as just a condensed form of the following, where everything is a scalar instead of a vector or matrix:

y_i = w_0 + w_1 x_i + w_2 x_i² + ε_i
In classical linear regression, the error term is assumed to have a Normal distribution, and so it immediately follows that y is normally distributed with mean Xw, and variance of whatever variance the error term has (denote it by σ², or a diagonal matrix with entries σ²). The normal assumption turns out well in most cases, and this normal model is also what we use in Bayesian regression.
We are now faced with two problems: inference of w, and prediction of y for any new X. Using the well-known Bayes rule and the above assumptions, we are only steps away towards not only solving these two problems, but also giving a full probability distribution of y for any new X. Here is the Bayes rule using our notations, which expresses the posterior distribution of parameter w given data:

π(w | y, X) = f(y | w, X) π(w) / f(y | X)
π and f are probability density functions. Since the result is a function of w, we can ignore the denominator, knowing that the numerator is proportional to the left-hand side by a constant. We know from assumptions that the likelihood function f(y|w,X) follows the normal distribution. The other term is the prior distribution of w, and this reflects, as the name suggests, prior knowledge of the parameters.
Prior Distribution. Defining the prior is an interesting part of the Bayesian workflow. For convenience we let w ~ N(m_0, S_0), and the hyperparameters m_0 and S_0 now reflect prior knowledge of w. If you have little knowledge of w, or find any assignment of m_0 and S_0 too subjective, 'non-informative' priors are an amendment. In this case, we set m_0 to 0 and more importantly set S_0 as a diagonal matrix with very large values. We are saying that w has a very high variance, and so we have little knowledge of what w will be.
With all these probability functions defined, a few lines of simple algebraic manipulations (quite a few lines in fact) will give the posterior after observation of N data points:

w | y ~ N(m_N, S_N), with
S_N = (S_0^(-1) + X^T X / σ²)^(-1)
m_N = S_N (S_0^(-1) m_0 + X^T y / σ²)

It looks like a bunch of symbols, but they are all defined already, and you can compute this distribution once this theoretical result is implemented in code. (N(m,S) means normal distribution with mean m and covariance matrix S.)
A full Bayesian approach means not only getting a single prediction (denote a new pair of data by y_0, x_0), but also acquiring the distribution of this new point:

f(y_0 | y) = ∫ f(y_0, w | y) dw = ∫ f(y_0 | w) π(w | y) dw

What we have done is the reverse of marginalizing from joint to get marginal distribution on the first line, and using Bayes rule inside the integral on the second line, where we have also removed unnecessary dependences. Notice that we know what the last two probability functions are. The result of the full predictive distribution is:

y_0 | y ~ N(x_0^T m_N, σ² + x_0^T S_N x_0)
Implementation in R is quite convenient. Backed up with the above theoretical results, we just input matrix multiplications into our code and get results of both predictions and predictive distributions. To illustrate with an example, we use a toy problem: X is from -1 to 1, evenly spaced, and y is constructed as the following additions of sinusoidal curves with normal noise (see graph below for illustration of y).
The following code gets this data.
library(ggplot2)

# ---------- Get data ----------
X <- (-30:30)/30
N <- length(X)
D <- 10
var <- 0.15*0.15
e <- rnorm(N, 0, var^0.5)
EY <- sin(2*pi*X)*(X<=0) + 0.5*sin(4*pi*X)*(X>0)
Y <- EY + e
data <- data.frame(X, Y)
g1 <- ggplot(data=data) + geom_point(mapping=aes(x=X, y=Y))
The following code (under section 'Inference') implements the above theoretical results. We also expand features of x (denoted in code as phi_X, under section 'Construct basis functions'). Just as we would expand x into x², etc., we now expand it into 9 radial basis functions, each of the form

φ_i(x) = exp(-(x - μ_i)² / (2σ²))

Note that although these look like normal densities, they are not interpreted as probabilities.
One advantage of radial basis functions is that radial basis functions can fit a variety of curves, including polynomial and sinusoidal.
# ---------- Construct basis functions ----------
phi_X <- matrix(0, nrow=N, ncol=D)
phi_X[,1] <- X
mu <- seq(min(X), max(X), length.out=D+1)
mu <- mu[c(-1, -length(mu))]
for(i in 2:D){
  phi_X[,i] <- exp(-(X - mu[i-1])^2/(2*var))
}

# ---------- Inference ----------
# Commented out is the general prior:
# m0 <- matrix(0, D, 1)
# S0 <- diag(x=1000, D, D)
# SN <- solve(solve(S0) + t(phi_X) %*% phi_X/var)
# mN <- SN %*% (solve(S0) %*% m0 + t(phi_X) %*% Y/var)
# Y_hat <- t(mN) %*% t(phi_X)

# We use the non-informative prior for now
m0 <- matrix(0, D, 1)
SN <- solve(t(phi_X) %*% phi_X/var)
mN <- SN %*% t(phi_X) %*% Y/var
Y_hat <- t(mN) %*% t(phi_X)
var_hat <- array(0, N)
for(i in 1:N){
  var_hat[i] <- var + phi_X[i,] %*% SN %*% phi_X[i,]
}
g_bayes <- g1 + geom_line(mapping=aes(x=X, y=Y_hat[1,]), color='#0000FF')
g_bayes_full <- g_bayes +
  geom_ribbon(mapping=aes(x=X, y=Y_hat[1,],
                          ymin=Y_hat[1,]-1.96*var_hat^0.5,
                          ymax=Y_hat[1,]+1.96*var_hat^0.5,
                          alpha=0.1),
              fill='#9999FF')
One detail to note in these computations is that we use a non-informative prior. The commented-out section is exactly the theoretical result above, while for the non-informative prior we use a covariance matrix with diagonal entries approaching infinity, so its inverse is directly taken as 0 in this code. If you'd like to use this code, make sure you install the ggplot2 package for plotting.
The following illustration aims at representing a full predictive distribution and giving a sense of how well the data is fit.
Multiple linear regression gives the same result as Bayesian regression using an improper prior with an infinite covariance matrix. Generally, it is good practice to obtain some empirical knowledge regarding the parameters, and use an informative prior. Bayesian regression can then quantify and show how different prior knowledge impacts predictions. In any case, the Bayesian view can conveniently interpret the range of y predictions as a probability, different from the confidence interval computed from classical linear regression.
Data fitting in this perspective also makes it easy for you to ‘learn as you go’. Say I first observed 10000 data points, and computed a posterior of parameter w. After that, I somehow managed to acquire 1000 more data points, and instead of running the whole regression again, I can use the previously computed posterior as my prior for these 1000 points. This sequential process yields the same result as using the whole data all over again. I like this idea in that it’s very intuitive, in the manner as a learned opinion is proportional to previously learned opinions plus new observations, and the learning goes on. A joke says that a Bayesian who dreams of a horse and observes a donkey, will call it a mule. But if he takes more observations of it, eventually he will say it is indeed a donkey. | [
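To make the sequential idea concrete, here is a minimal R sketch (my own illustration, reusing phi_X, Y, N, D and var from the code above, with a hypothetical split after the first 30 points): the posterior from the first batch becomes the prior for the second batch.

idx1 <- 1:30
idx2 <- 31:N
S0 <- diag(x=1000, D, D)
m0 <- matrix(0, D, 1)

# Batch 1: prior (m0, S0) -> posterior (m1, S1)
S1 <- solve(solve(S0) + t(phi_X[idx1,]) %*% phi_X[idx1,]/var)
m1 <- S1 %*% (solve(S0) %*% m0 + t(phi_X[idx1,]) %*% Y[idx1]/var)

# Batch 2: yesterday's posterior is today's prior
S2 <- solve(solve(S1) + t(phi_X[idx2,]) %*% phi_X[idx2,]/var)
m2 <- S2 %*% (solve(S1) %*% m1 + t(phi_X[idx2,]) %*% Y[idx2]/var)

# m2 matches (up to numerical error) the posterior mean computed
# from all N points at once with prior (m0, S0)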
{
"code": null,
"e": 481,
"s": 172,
"text": "Linear regression can be established and interpreted from a Bayesian perspective. The first parts discuss theory and assumptions pretty much from scratch, and later parts include an R implementation and remarks. Readers can feel free to copy the two blocks of code into an R notebook and play around with it."
},
{
"code": null,
"e": 574,
"s": 481,
"text": "Recall that in linear regression, we are given target values y, data X, and we use the model"
},
{
"code": null,
"e": 970,
"s": 574,
"text": "where y is N*1 vector, X is N*D matrix, w is D*1 vector, and the error is N*1 vector. We have N data points. Dimension D is understood in terms of features, so if we use a list of x, a list of x2 (and a list of 1’s corresponding to w_0), we say D=3. If you don’t like matrix form, think of it as just a condensed form of the following, where everything is a scaler instead of a vector or matrix:"
},
{
"code": null,
"e": 1355,
"s": 970,
"text": "In classical linear regression, the error term is assumed to have Normal distribution, and so it immediately follows that y is normally distributed with mean Xw, and variance of whatever variance the error term has (denote by σ2, or diagonal matrix with entries σ2). The normal assumption turns out well in most cases, and this normal model is also what we use in Bayesian regression."
},
{
"code": null,
"e": 1751,
"s": 1355,
"text": "We are now faced with two problems: inference of w, and prediction of y for any new X. Using the well-known Bayes rule and the above assumptions, we are only steps away towards not only solving these two problems, but also giving a full probability distribution of y for any new X. Here is the Bayes rule using our notations, which expresses the posterior distribution of parameter w given data:"
},
{
"code": null,
"e": 2152,
"s": 1751,
"text": "π and f are probability density functions. Since the result is a function of w, we can ignore the denominator, knowing that the numerator is proportional to lefthand side by a constant. We know from assumptions that the likelihood function f(y|w,x) follows the normal distribution. The other term is prior distribution of w, and this reflects, as the name suggests, prior knowledge of the parameters."
},
{
"code": null,
"e": 2672,
"s": 2152,
"text": "Prior Distribution. Defining the prior is an interesting part of the Bayesian workflow. For convenience we let w ~ N(m_o, S_o), and the hyperparameters m and S now reflect prior knowledge of w. If you have little knowledge of w, or find any assignment of m and S too subjective, ‘non-informative’ priors are an amendment. In this case, we set m to 0 and more importantly set S as a diagonal matrix with very large values. We are saying that w has a very high variance, and so we have little knowledge of what w will be."
},
{
"code": null,
"e": 2852,
"s": 2672,
"text": "With all these probability functions defined, a few lines of simply algebraic manipulations (quite a few lines in fact) will give the posterior after observation of N data points:"
},
{
"code": null,
"e": 3083,
"s": 2852,
"text": "It looks like a bunch of symbols, but they are all defined already, and you can compute this distribution once this theoretical result is implemented in code. (N(m,S) means normal distribution with mean m and covariance matrix S.)"
},
{
"code": null,
"e": 3245,
"s": 3083,
"text": "A full Bayesian approach means not only getting a single prediction (denote new pair of data by y_o, x_o), but also acquiring the distribution of this new point."
},
{
"code": null,
"e": 3579,
"s": 3245,
"text": "What we have done is the reverse of marginalizing from joint to get marginal distribution on the first line, and using Bayes rule inside the integral on the second line, where we have also removed unnecessary dependences. Notice that we know what the last two probability functions are. The result of full predictive distribution is:"
},
{
"code": null,
"e": 3998,
"s": 3579,
"text": "Implementation in R is quite convenient. Backed up with the above theoretical results, we just input matrix multiplications into our code and get results of both predictions and predictive distributions. To illustrate with an example, we use a toy problem: X is from -1 to 1, evenly spaced, and y is constructed as the following additions of sinusoidal curves with normal noise (see graph below for illustration of y)."
},
{
"code": null,
"e": 4033,
"s": 3998,
"text": "The following code gets this data."
},
{
"code": null,
"e": 4378,
"s": 4033,
"text": "library(ggplot2) # — — — — — Get data — — — — — — — — — — — — — — — — — — — — —+X <- (-30:30)/30 N <- length(X) D <- 10 var <- 0.15*0.15 e <- rnorm(N,0,var^0.5) EY <- sin(2*pi*X)*(X<=0) + 0.5*sin(4*pi*X)*(X>0) Y <- sin(2*pi*X)*(X<=0) + 0.5*sin(4*pi*X)*(X>0) + e data <- data.frame(X,Y) g1 <- ggplot(data=data) + geom_point(mapping=aes(x=X,y=Y))"
},
{
"code": null,
"e": 4783,
"s": 4378,
"text": "The following code (under section ‘Inference’) implements the above theoretical results. We also expand features of x (denoted in code as phi_X, under section Construct basis functions). Just as we would expand x into x2, etc., we now expand it into 9 radial basis functions, each one looking like the follows. Note that although these look like normal density, they are not interpreted as probabilities."
},
{
"code": null,
"e": 4920,
"s": 4783,
"text": "One advantage of radial basis functions is that radial basis functions can fit a variety of curves, including polynomial and sinusoidal."
},
{
"code": null,
"e": 5932,
"s": 4920,
"text": "# — — — — — Construct basis functions — — — — — — — — — — — —+phi_X <- matrix(0, nrow=N, ncol=D)phi_X[,1] <- Xmu <- seq(min(X),max(X),length.out=D+1)mu <- mu[c(-1,-length(mu))]for(i in 2:D){ phi_X[,i] <- exp(-(X-mu[i-1])^2/(2*var))}# — — — — — Inference — — — — — — — — — — — — — — — — — — — —+# Commented out is general prior# m0 <- matrix(0,D,1)# S0 <- diag(x=1000,D,D) # SN <- inv(inv(S0)+t(phi_X)%*%phi_X/var)# mN <- SN%*%(inv(S0)%*%m0 + t(phi_X)%*%Y/var)# Y_hat <- t(mN) %*% t(phi_X)# We use non-informative prior for nowm0 <- matrix(0,D,1)SN <- solve(t(phi_X)%*%phi_X/var)mN <- SN%*%t(phi_X)%*%Y/varY_hat <- t(mN) %*% t(phi_X)var_hat <- array(0, N)for(i in 1:N){ var_hat[i] <- var + phi_X[i,]%*%SN%*%phi_X[i,]}g_bayes <- g1 + geom_line(mapping=aes(x=X,y=Y_hat[1,]),color=’#0000FF’)g_bayes_full <- g_bayes + geom_ribbon(mapping=aes(x=X,y=Y_hat[1,], ymin=Y_hat[1,]-1.96*var_hat^0.5, ymax=Y_hat[1,]+1.96*var_hat^0.5, alpha=0.1), fill=’#9999FF’)"
},
{
"code": null,
"e": 6330,
"s": 5932,
"text": "One detail to note in these computations, is that we use non-informative prior. The commented out section is exactly the theoretical results above, while for non-informative prior we use covariance matrix with diagonal entries approaching infinity, so the inverse of that is directly considered as 0 in this code. If you’d like to use this code, make sure you install ggplot2 package for plotting."
},
{
"code": null,
"e": 6457,
"s": 6330,
"text": "The following illustration aims at representing a full predictive distribution and giving a sense of how well the data is fit."
},
{
"code": null,
"e": 6995,
"s": 6457,
"text": "Multiple linear regression result is same as the case of Bayesian regression using improper prior with an infinite covariance matrix. Generally, it is good practice to obtain some empirical knowledge regarding the parameters, and use an informative prior. Bayesian regression can then quantify and show how different prior knowledge impact predictions. In any case, the Bayesian view can conveniently interpret the range of y predictions as a probability, different from the Confidence Interval computed from classical linear regression."
}
]
|
Create Database in MariaDB using PyMySQL in Python
MariaDB is an open source Database Management System and a community-developed fork of MySQL. The pymysql client can be used to interact with MariaDB similar to that of MySQL using Python.
In this article we will look into the process of creating a database using pymysql. To create a database use the below syntax:
Syntax:
CREATE DATABASE databaseName;
Example :
In this example we will be using the pymysql client to create a database named “GFG”:
Python
# import the mysql client for python
import pymysql

# Create a connection object
# IP address of the MySQL database server
Host = "localhost"

# User name of the database server
User = "user"

# Password for the database user
Password = ""

conn = pymysql.connect(host=Host, user=User, password=Password)

# Create a cursor object
cur = conn.cursor()

# creating database
cur.execute("CREATE DATABASE GFG")

cur.execute("SHOW DATABASES")
databaseList = cur.fetchall()

for database in databaseList:
    print(database)

conn.close()
Output :
The above program illustrates the creation of MariaDB database “GFG” in which host-name is ‘localhost‘, the username is ‘user’ and password is ‘your password’.
Let’s suppose we want to create a table in the database, then we need to connect to a database. Below is a program to create a table in the GFG database which was created in the above program.
Example :
Python3
import pymysql

conn = pymysql.connect('localhost', 'user', 'password', 'GFG')
cur = conn.cursor()
cur.execute("DROP TABLE IF EXISTS PRODUCT")

query = """CREATE TABLE PRODUCT (
   PRODUCT_ID CHAR(20) NOT NULL,
   price int(10),
   PRODUCT_TYPE VARCHAR(64)
)"""

# To execute the SQL query
cur.execute(query)

# To commit the changes
conn.commit()

conn.close()
Output :
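Since the output screenshot is not reproduced here, one quick way to verify that the table was created is to reconnect and list the tables (a sketch of my own, assuming the same connection parameters as above):

import pymysql

conn = pymysql.connect('localhost', 'user', 'password', 'GFG')
cur = conn.cursor()
cur.execute("SHOW TABLES")
print(cur.fetchall())  # expected to contain ('PRODUCT',)
conn.close()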
Exploring the Softmax Function. Developing Intuition With the Wolfram... | by Arnoud Buzing | Towards Data Science

In machine learning, classification problems are often solved with neural networks which give probabilities for each class or type it is trained to recognize. A typical example is image classification, where the input to a neural network is an image and the output is a list of possible things that image represents, with probabilities.
The Wolfram Language (WL) comes with a large library of pre-trained neural networks including ones that solve classification problems. For example, the built-in system function ImageIdentify uses a pre-trained network that can recognize over 4,000 objects in images.
Side note: Because of the unique typesetting capabilities of the Wolfram notebook interface (such as mixing code with images), all code is shown with screen captures. A notebook with full code is included at the end of this story.
You can use the underlying neural network directly to access the probabilities for each of the 4,000+ possible objects. Clearly “domestic cat” wins hands down in this case with a probability of almost 1. Other types of cat follow with lower probabilities. The result for “shower curtain” is probably because of the background of the image. Summing up all the 4,000+ probabilities gives the number 1.0.
When you examine the neural network in detail and look at the layers it consists of, you will notice that the final layer is something called SoftmaxLayer. This layer is very commonly used in neural networks to assign a list of probabilities to a list of objects.
The SoftmaxLayer uses the softmax function, which takes a list of numbers as input and gives a normalized list of numbers as output. More specifically, each element of the input list is exponentiated and divided, or normalized, by the sum of all exponentiated elements:

softmax(z)_i = Exp(z_i) / (Exp(z_1) + Exp(z_2) + ... + Exp(z_n))
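As a one-line sketch in the Wolfram Language (my own definition, equivalent to what SoftmaxLayer computes):

softmax[v_] := Exp[v]/Total[Exp[v]]
softmax[{1., 2., 3.}]  (* {0.0900306, 0.244728, 0.665241} *)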
It is clear from the function definition that the sum of the output elements is always 1. The reason for this is that each element in the output is a fraction where the denominator is the sum of all numerators. What is less clear is how an arbitrary input list relates to an output list, because the softmax function is nonlinear.
To help with this and gain intuition, I wrote a WL function to understand softmax function inputs and outputs. It simply creates two bar charts, one charting the input list, and one charting the output list.
understand[list_List] := Row[{ BarChart[list], Style[" \[Rule] ", 32], BarChart[SoftmaxLayer[][list]]}]
Let’s start with a very simple input of three zeros. In this case, the output has three equal elements as well, and because they have to add up to 1 they are all 0.333...
And this is true for any list where all elements are the same. For example, a four-element list of 7s will yield a result where all elements are 0.25:
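The reason is that softmax is shift-invariant: adding the same constant c to every element multiplies each exponential by e^c, which cancels between the numerator and the denominator. A quick check with the sketch above:

softmax[{7., 7., 7., 7.}] == softmax[{0., 0., 0., 0.}]  (* True *)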
Things get more interesting when the input elements are not all equal. Let’s start with a list of linearly increasing elements. The output is a scaled-down version of the exponential function.
Similarly, a list of linearly decreasing elements yields a decreasing exponential function:
A downward opening parabola yields an output “curve” that looks like a normal distribution, and in fact it is exactly that shape: exponentiating a parabola -x² gives e^(-x²), the Gaussian form (up to normalization).
An upward opening parabola gives a much more extreme output, with the endpoint values dominating.
Finally, and mostly for fun, periodic functions maintain their periodicity in some rescaled form:
Exploring this and more in a notebook is very educational. Understanding how the softmax function works helps to understand how neural networks compute their final classification probability assignments. If you want to experiment more yourself, download this notebook from the Wolfram Cloud. If you’re completely new to WL, I recommend reading my recent post titled “Learning Wolfram: From Zero to Hero”. | [
{
"code": null,
"e": 507,
"s": 172,
"text": "In machine learning, classification problems are often solved with neural networks which give probabilities for each class or type it is trained to recognize. A typical example is image classification where the input to a neural network is an image and the output is a list of possible things that image represents with probabilities."
},
{
"code": null,
"e": 774,
"s": 507,
"text": "The Wolfram Language (WL) comes with a large library of pre-trained neural networks including ones that solve classification problems. For example, the built-in system function ImageIdentify uses a pre-trained network that can recognize over 4,000 objects in images."
},
{
"code": null,
"e": 1005,
"s": 774,
"text": "Side note: Because of the unique typesetting capabilities of the Wolfram notebook interface (such as mixing code with images), all code is shown with screen captures. A notebook with full code is included at the end of this story."
},
{
"code": null,
"e": 1407,
"s": 1005,
"text": "You can use the underlying neural network directly to access the probabilities for each of the 4,000+ possible objects. Clearly “domestic cat” wins hands down in this case with a probability of almost 1. Other types of cat follow with lower probabilities. The result for “shower curtain” is probably because of the background of the image. Summing up all the 4,000+ probabilities gives the number 1.0."
},
{
"code": null,
"e": 1671,
"s": 1407,
"text": "When you examine the neural network in detail and look at the layers it consists of, you will notice that the final layer is something called SoftmaxLayer. This layer is very commonly used in neural networks to assign a list of probabilities to a list of objects."
},
{
"code": null,
"e": 1938,
"s": 1671,
"text": "The SoftmaxLayer uses the softmax function which takes a list of numbers as input and gives a normalized list of numbers as output. More specifically, each element of the input list is exponentiated and divided or normalized by the sum of all exponentiated elements."
},
{
"code": null,
"e": 2269,
"s": 1938,
"text": "It is clear from the function definition that the sum of the output elements is always 1. The reason for this is that each element in the output is a fraction where the denominator is the sum of all numerators. What is less clear is how an arbitrary input list relates to an output list, because the softmax function is nonlinear."
},
{
"code": null,
"e": 2477,
"s": 2269,
"text": "To help with this and gain intuition, I wrote a WL function to understand softmax function inputs and outputs. It simply creates two bar charts, one charting the input list, and one charting the output list."
},
{
"code": null,
"e": 2582,
"s": 2477,
"text": "understand[list_List] := Row[{ BarChart[list], Style[\" \\[Rule] \", 32], BarChart[SoftmaxLayer[][list]]}]"
},
{
"code": null,
"e": 2753,
"s": 2582,
"text": "Let’s start with a very simple input of three zeros. In this case, the output has three equal elements as well, and because they have to add up to 1 they are all 0.333..."
},
{
"code": null,
"e": 2904,
"s": 2753,
"text": "And this is true for any list where all elements are the same. For example, a four-element list of 7s will yield a result where all elements are 0.25:"
},
{
"code": null,
"e": 3097,
"s": 2904,
"text": "Things get more interesting when the input elements are not all equal. Let’s start with a list of linearly increasing elements. The output is a scaled-down version of the exponential function."
},
{
"code": null,
"e": 3189,
"s": 3097,
"text": "Similarly, a list of linearly decreasing elements yields a decreasing exponential function:"
},
{
"code": null,
"e": 3309,
"s": 3189,
"text": "A downward opening parabola yields an output “curve” that looks like a normal distribution (it could be exactly that?)."
},
{
"code": null,
"e": 3407,
"s": 3309,
"text": "An upward opening parabola gives a much more extreme output, with the endpoint values dominating."
},
{
"code": null,
"e": 3505,
"s": 3407,
"text": "Finally, and mostly for fun, periodic functions maintain their periodicity in some rescaled form:"
}
]
|
Understanding Singular Value Decomposition and its Application in Data Science | by Reza Bagheri | Towards Data Science

In linear algebra, the Singular Value Decomposition (SVD) of a matrix is a factorization of that matrix into three matrices. It has some interesting algebraic properties and conveys important geometrical and theoretical insights about linear transformations. It also has some important applications in data science. In this article, I will try to explain the mathematical intuition behind SVD and its geometrical meaning. Instead of manual calculations, I will use the Python libraries to do the calculations and later give you some examples of using SVD in data science applications. In this article, bold-face lower-case letters (like a) refer to vectors. Bold-face capital letters (like A) refer to matrices, and italic lower-case letters (like a) refer to scalars.
To understand SVD we need to first understand the Eigenvalue Decomposition of a matrix. We can think of a matrix A as a transformation that acts on a vector x by multiplication to produce a new vector Ax. We use [A]ij or aij to denote the element of matrix A at row i and column j. If A is an m×p matrix and B is a p×n matrix, the matrix product C=AB (which is an m×n matrix) is defined as:

[C]_ij = [A]_i1 [B]_1j + [A]_i2 [B]_2j + ... + [A]_ip [B]_pj
For example, the rotation matrix in a 2-d space can be defined as:

A = [[cos θ, -sin θ],
     [sin θ,  cos θ]]
This matrix rotates a vector about the origin by the angle θ (with counterclockwise rotation for a positive θ). Another example is the stretching matrix B in a 2-d space which is defined as:

B = [[k, 0],
     [0, 1]]
This matrix stretches a vector along the x-axis by a constant factor k but does not affect it in the y-direction. Similarly, we can have a stretching matrix in the y-direction:

[[1, 0],
 [0, k]]
As an example, if we have a vector x, then y=Ax is the vector which results after rotation of x by θ, and Bx is a vector which is the result of stretching x in the x-direction by a constant factor k.
Listing 1 shows how these matrices can be applied to a vector x and visualized in Python. We can use the NumPy arrays as vectors and matrices.
Here the rotation matrix is calculated for θ=30° and in the stretching matrix k=3. y is the transformed vector of x. To plot the vectors, the quiver() function in matplotlib has been used. Figure 1 shows the output of the code.
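The listings were embedded in the original article and are not reproduced here; the following is a minimal sketch of what Listing 1 plausibly contained (the sample vector x is my own choice):

import numpy as np
import matplotlib.pyplot as plt

theta = np.pi/6                                  # 30 degrees
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation matrix
k = 3
B = np.array([[k, 0],
              [0, 1]])                           # stretching matrix

x = np.array([1, 1]) / np.sqrt(2)                # hypothetical sample vector
y = A @ x                                        # x rotated by 30 degrees
z = B @ x                                        # x stretched along the x-axis

ax = plt.axes()
ax.quiver([0, 0, 0], [0, 0, 0],
          [x[0], y[0], z[0]], [x[1], y[1], z[1]],
          color=['black', 'blue', 'red'],
          angles='xy', scale_units='xy', scale=1)
ax.set_xlim(-1, 3)
ax.set_ylim(-1, 2)
ax.set_aspect('equal')
plt.show()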
The matrices are represented by a 2-d array in NumPy. We can use the np.matmul(a,b) function to the multiply matrix a by b However, it is easier to use the @ operator to do that. The vectors can be represented either by a 1-d array or a 2-d array with a shape of (1,n) which is a row vector or (n,1) which is a column vector.
Now we are going to try a different transformation matrix. Suppose that

A = [[3, 2],
     [0, 2]]

However, we don’t apply it to just one vector. Initially, we have a circle that contains all the vectors that are one unit away from the origin. These vectors have the general form of

x = [cos φ, sin φ]^T
Now we calculate t=Ax. So t is the set of all the vectors in x which have been transformed by A. Listing 2 shows how this can be done in Python.
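A minimal sketch of what Listing 2 plausibly did (using the matrix A above):

import numpy as np
import matplotlib.pyplot as plt

A = np.array([[3, 2],
              [0, 2]])

# All the vectors one unit away from the origin (the unit circle)
phi = np.linspace(0, 2*np.pi, 200)
x = np.vstack([np.cos(phi), np.sin(phi)])   # shape (2, 200)

t = A @ x                                   # every column transformed by A

plt.plot(x[0], x[1], label='x (circle)')
plt.plot(t[0], t[1], label='t = Ax (ellipse)')
plt.gca().set_aspect('equal')
plt.legend()
plt.show()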
Figure 2 shows the plots of x and t and the effect of transformation on two sample vectors x1 and x2 in x.
The initial vectors (x) on the left side form a circle as mentioned before, but the transformation matrix somehow changes this circle and turns it into an ellipse.
The sample vectors x1 and x2 in the circle are transformed into t1=Ax1 and t2=Ax2 respectively.
Eigenvalues and Eigenvectors
A vector is a quantity which has both magnitude and direction. The general effect of matrix A on the vectors in x is a combination of rotation and stretching. For example, it changes both the direction and magnitude of the vector x1 to give the transformed vector t1. However, for vector x2 only the magnitude changes after transformation. In fact, x2 and t2 have the same direction. Matrix A only stretches x2 in the same direction and gives the vector t2 which has a bigger magnitude. The only way to change the magnitude of a vector without changing its direction is by multiplying it with a scalar. So if we have a vector u, and λ is a scalar quantity then λu has the same direction and a different magnitude. So for a vector like x2 in figure 2, the effect of multiplying by A is like multiplying it with a scalar quantity like λ.
This is not true for all the vectors in x. In fact, for each matrix A, only some of the vectors have this property. These special vectors are called the eigenvectors of A and their corresponding scalar quantity λ is called an eigenvalue of A for that eigenvector. So the eigenvector of an n×n matrix A is defined as a nonzero vector u such that:

Au = λu
where λ is a scalar and is called the eigenvalue of A, and u is the eigenvector corresponding to λ. In addition, if you have any other vectors in the form of au where a is a scalar, then by placing it in the previous equation we get:

A(au) = a(Au) = a(λu) = λ(au)
which means that any vector which has the same direction as the eigenvector u (or the opposite direction if a is negative) is also an eigenvector with the same corresponding eigenvalue.
For example, the eigenvalues of

B = [[-1,  1],
     [ 0, -2]]

are λ1=-1 and λ2=-2 and their corresponding eigenvectors are:

u1 = [1, 0]^T,  u2 = [-1, 1]^T

and we have:

Bu1 = -u1 = λ1 u1,  Bu2 = -2u2 = λ2 u2
This means that when we apply matrix B to all the possible vectors, it does not change the direction of these two vectors (or any vectors which have the same or opposite direction) and only stretches them. So for the eigenvectors, the matrix multiplication turns into a simple scalar multiplication. Here I am not going to explain how the eigenvalues and eigenvectors can be calculated mathematically. Instead, I will show you how they can be obtained in Python.
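A sketch of what Listing 3 likely looked like (with matrix B as reconstructed above):

import numpy as np
from numpy import linalg as LA

B = np.array([[-1, 1],
              [0, -2]])

lam, u = LA.eig(B)        # eigenvalues and normalized eigenvectors
print("lam=", lam)
print("u=", np.round(u, 4))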
We can use the LA.eig() function in NumPy to calculate the eigenvalues and eigenvectors. It returns a tuple. The first element of this tuple is an array that stores the eigenvalues, and the second element is a 2-d array that stores the corresponding eigenvectors. In fact, in Listing 3 the column u[:,i] is the eigenvector corresponding to the eigenvalue lam[i]. Now if we check the output of Listing 3, we get:
lam= [-1. -2.]
u= [[ 1.     -0.7071]
    [ 0.      0.7071]]
You may have noticed that the eigenvector for λ=-1 is the same as u1, but the other one is different. That is because LA.eig() returns the normalized eigenvector. A normalized vector is a unit vector whose length is 1. But before explaining how the length can be calculated, we need to get familiar with the transpose of a matrix and the dot product.
Transpose
The transpose of the column vector u (which is shown by u superscript T) is the row vector of u (in this article sometimes I show it as u^T). The transpose of an m×n matrix A is an n×m matrix whose columns are formed from the corresponding rows of A. For example, if we have (an illustrative 3×2 matrix of my own choosing)

C = [[1, 2],
     [3, 4],
     [5, 6]]

then the transpose of C is:

C^T = [[1, 3, 5],
       [2, 4, 6]]
So the transpose of a row vector becomes a column vector with the same elements and vice versa. In fact, the element in the i-th row and j-th column of the transposed matrix is equal to the element in the j-th row and i-th column of the original matrix. So
In NumPy you can use the transpose() method to calculate the transpose. For example to calculate the transpose of matrix C we write C.transpose(). We can also use the transpose attribute T, and write C.T to get its transpose. The transpose has some important properties. First, the transpose of the transpose of A is A. So:
In addition, the transpose of a product is the product of the transposes in the reverse order.
To prove it remember the matrix multiplication definition:

[AB]_ij = Σ_k [A]_ik [B]_kj

and based on the definition of matrix transpose, the left side is:

[(AB)^T]_ij = [AB]_ji = Σ_k [A]_jk [B]_ki

and the right side is

[B^T A^T]_ij = Σ_k [B^T]_ik [A^T]_kj = Σ_k [B]_ki [A]_jk

so both sides of the equation are equal.
Dot product
If we have two vectors u and v:

u = [u1, u2, ..., un]^T,  v = [v1, v2, ..., vn]^T

The dot product (or inner product) of these vectors is defined as the transpose of u multiplied by v:

u.v = u^T v = u1v1 + u2v2 + ... + unvn

Based on this definition the dot product is commutative, so:

u.v = u^T v = v^T u = v.u
Partitioned matrix
When calculating the transpose of a matrix, it is usually useful to show it as a partitioned matrix. For example, the matrix C from above

C = [[1, 2],
     [3, 4],
     [5, 6]]

can be also written as:

C = [c1  c2]

where

c1 = [1, 3, 5]^T,  c2 = [2, 4, 6]^T
So we can think of each column of C as a column vector, and C can be thought of as a matrix with just one row. Now to write the transpose of C, we can simply turn this row into a column, similar to what we do for a row vector. The only difference is that each element in C is now a vector itself and should be transposed too.
Now we know that the transpose of each column vector ci is the row vector ci^T. So:

C^T = [c1^T
       c2^T]

Now each row of C^T is the transpose of the corresponding column of the original matrix C.
Now let matrix A be a partitioned column matrix and matrix B be a partitioned row matrix:

A = [a1  a2  ...  ap],   B = [b1^T
                              b2^T
                              ...
                              bp^T]

where each column vector ai is defined as the i-th column of A:

ai = [a_1i, a_2i, ..., a_mi]^T
Here for each element, the first subscript refers to the row number and the second subscript to the column number. So A is an m×p matrix. In addition, B is a p×n matrix where each row vector bi^T is the i-th row of B:

bi^T = [b_i1, b_i2, ..., b_in]
Again, the first subscript refers to the row number and the second subscript to the column number. Please note that by convention, a vector is written as a column vector. So to write a row vector, we write it as the transpose of a column vector. So bi is a column vector, and its transpose is a row vector that captures the i-th row of B. Now we can calculate AB:

AB = a1 b1^T + a2 b2^T + ... + ap bp^T
so the product of the i-th column of A and the i-th row of B gives an m×n matrix, and all these matrices are added together to give AB which is also an m×n matrix. In fact, we can simply assume that we are multiplying a row vector A by a column vector B. As a special case, suppose that x is a column vector. Now we can calculate Ax similarly:

Ax = x1 a1 + x2 a2 + ... + xp ap
So Ax is simply a linear combination of the columns of A.
To calculate the dot product of two vectors a and b in NumPy, we can write np.dot(a,b) if both are 1-d arrays, or simply use the definition of the dot product and write a.T @ b .
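For example (a trivial sketch of both forms):

import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(np.dot(a, b))   # 32
print(a @ b)          # 32, the same dot product via the @ operator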
Now that we are familiar with the transpose and dot product, we can define the length (also called the 2-norm) of the vector u as:

||u|| = sqrt(u^T u) = sqrt(u1² + u2² + ... + un²)
To normalize a vector u, we simply divide it by its length to have the normalized vector n:

n = u / ||u||
The normalized vector n is still in the same direction of u, but its length is 1. Now we can normalize the eigenvector of λ=-2 that we saw before:

n = u2 / ||u2|| = [-1, 1]^T / sqrt(2) ≈ [-0.7071, 0.7071]^T
which is the same as the output of Listing 3. As shown before, if you multiply (or divide) an eigenvector by a constant, the new vector is still an eigenvector for the same eigenvalue, so by normalizing an eigenvector corresponding to an eigenvalue, you still have an eigenvector for that eigenvalue.
But why are eigenvectors important to us? As mentioned before, an eigenvector simplifies the matrix multiplication into a scalar multiplication. In addition, they have some more interesting properties. Let me go back to matrix A that was used in Listing 2 and calculate its eigenvectors:

A = [[3, 2],
     [0, 2]]
As you remember this matrix transformed a set of vectors forming a circle into a new set forming an ellipse (Figure 2). We will use LA.eig() to calculate the eigenvectors in Listing 4.
The output is:
lam= [3. 2.]
u= [[ 1.     -0.8944]
 [ 0.      0.4472]]
So we have two eigenvectors:
and the corresponding eigenvalues are:
Now we plot the eigenvectors on top of the transformed vectors:
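Listing 5 is not reproduced in this text; a minimal plotting sketch, continuing from the previous one, could look like this:

import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 200)
x = np.vstack([np.cos(theta), np.sin(theta)])   # vectors on the unit circle
t = A @ x                                       # transformed vectors
plt.plot(t[0], t[1])
for i in range(2):                              # draw each eigenvector as an arrow
    plt.quiver(0, 0, u[0, i], u[1, i], angles='xy',
               scale_units='xy', scale=1, color='red')
plt.axis('equal')
plt.show()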
There is nothing special about these eigenvectors in Figure 3. Now let me try another matrix:
Here we have two eigenvectors:
and the corresponding eigenvalues are:
Now we can plot the eigenvectors on top of the transformed vectors by replacing this new matrix in Listing 5. The result is shown in Figure 4.
This time the eigenvectors have an interesting property. We see that the eigenvectors are along the major and minor axes of the ellipse (principal axes). An ellipse can be thought of as a circle stretched or shrunk along its principal axes as shown in Figure 5, and matrix B transforms the initial circle by stretching it along u1 and u2, the eigenvectors of B.
But why did the eigenvectors of A not have this property? That is because B is a symmetric matrix. A symmetric matrix is a matrix that is equal to its transpose. So the elements on the main diagonal are arbitrary, but for the other elements, each element on row i and column j is equal to the element on row j and column i (aij = aji). Here is an example of a symmetric matrix:
A symmetric matrix is always a square matrix (n×n). You can now easily see that A was not symmetric. A symmetric matrix transforms a vector by stretching or shrinking it along its eigenvectors. In addition, we know that all the matrices transform an eigenvector by multiplying its length (or magnitude) by the corresponding eigenvalue. We know that the initial vectors in the circle have a length of 1 and both u1 and u2 are normalized, so they are part of the initial vectors x. Now their transformed vectors are:
So the amount of stretching or shrinking along each eigenvector is proportional to the corresponding eigenvalue as shown in Figure 6.
So when you have more stretching in the direction of an eigenvector, the eigenvalue corresponding to that eigenvector will be greater. In fact, if the absolute value of an eigenvalue is greater than 1, the circle x stretches along it, and if the absolute value is less than 1, it shrinks along it. Let me try this matrix:
The eigenvectors and corresponding eigenvalues are:
Now if we plot the transformed vectors we get:
As you see now we have stretching along u1 and shrinking along u2. The other important thing about these eigenvectors is that they can form a basis for a vector space.
Basis
A set of vectors {v1, v2, v3 ..., vn} form a basis for a vector space V, if they are linearly independent and span V. A vector space is a set of vectors that can be added together or multiplied by scalars. This is a closed set, so when the vectors are added or multiplied by a scalar, the result still belongs to the set. The operations of vector addition and scalar multiplication must satisfy certain requirements which are not discussed here. Euclidean space R2 (in which we are plotting our vectors) is an example of a vector space.
When a set of vectors is linearly independent, it means that no vector in the set can be written as a linear combination of the other vectors. So it is not possible to write
when some of a1, a2, ..., an are not zero. In other words, none of the vi vectors in this set can be expressed in terms of the other vectors. A set of vectors spans a space if every other vector in the space can be written as a linear combination of the spanning set. So every vector s in V can be written as:
A vector space V can have many different vector bases, but each basis always has the same number of basis vectors. The number of basis vectors of vector space V is called the dimension of V. In Euclidean space R2, the vectors:
are the simplest example of a basis since they are linearly independent and every vector in R2 can be expressed as a linear combination of them. They are called the standard basis for R2. As a result, the dimension of R2 is 2. It can have other bases, but all of them have two vectors that are linearly independent and span it. For example, the vectors:
can also form a basis for R2. An important reason to find a basis for a vector space is to have a coordinate system on it. If the set of vectors B = {v1, v2, v3 ..., vn} form a basis for a vector space, then every vector x in that space can be uniquely specified using those basis vectors:
Now the coordinate of x relative to this basis B is:
In fact, when we are writing a vector in R2, we are already expressing its coordinate relative to the standard basis. That is because any vector
can be written as
Now a question comes up. If we know the coordinate of a vector relative to the standard basis, how can we find its coordinate relative to a new basis?
The equation:
can be also written as:
The matrix:
is called the change-of-coordinate matrix. The columns of this matrix are the vectors in basis B. The equation
gives the coordinate of x in R^n if we know its coordinate in basis B. If we need the opposite we can multiply both sides of this equation by the inverse of the change-of-coordinate matrix to get:
Now if we know the coordinate of x in R^n (which is simply x itself), we can multiply it by the inverse of the change-of-coordinate matrix to get its coordinate relative to basis B. For example, suppose that our basis set B is formed by the vectors:
and we have a vector:
To calculate the coordinate of x in B, first, we form the change-of-coordinate matrix:
Now the coordinate of x relative to B is:
Listing 6 shows how this can be calculated in NumPy. To calculate the inverse of a matrix, the function np.linalg.inv() can be used.
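Listing 6 is not reproduced in this text; here is a minimal sketch of the calculation. Note that u1, u2, and x below are placeholder values, not the article's actual numbers (those are given in the omitted equations above):

import numpy as np

u1 = np.array([[1], [0]])        # placeholder basis vector
u2 = np.array([[1], [1]])        # placeholder basis vector
x = np.array([[3], [2]])         # placeholder vector

P = np.hstack([u1, u2])          # change-of-coordinate matrix
x_B = np.linalg.inv(P) @ x       # coordinate of x relative to basis B
print("x_B=", np.round(x_B, 2))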
The output shows the coordinate of x in B:
x_B= [[4.  ]
 [2.83]]
Figure 8 shows the effect of changing the basis.
To find the u1-coordinate of x in basis B, we can draw a line passing from x and parallel to u2 and see where it intersects the u1 axis. u2-coordinate can be found similarly as shown in Figure 8. In an n-dimensional space, to find the coordinate of ui, we need to draw a hyper-plane passing from x and parallel to all other eigenvectors except ui and see where it intersects the ui axis. As Figure 8 (left) shows when the eigenvectors are orthogonal (like i and j in R2), we just need to draw a line that passes through point x and is perpendicular to the axis that we want to find its coordinate.
Properties of symmetric matrices
As figures 5 to 7 show the eigenvectors of the symmetric matrices B and C are perpendicular to each other and form orthogonal vectors. This is not a coincidence and is a property of symmetric matrices.
An important property of the symmetric matrices is that an n×n symmetric matrix has n linearly independent and orthogonal eigenvectors, and it has n real eigenvalues corresponding to those eigenvectors. It is important to note that these eigenvalues are not necessarily different from each other and some of them can be equal. Another important property of symmetric matrices is that they are orthogonally diagonalizable.
Eigendecomposition
A symmetric matrix is orthogonally diagonalizable. It means that if we have an n×n symmetric matrix A, we can decompose it as
where D is an n×n diagonal matrix comprised of the n eigenvalues of A. P is also an n×n matrix, and the columns of P are the n linearly independent eigenvectors of A that correspond to those eigenvalues in D respectively. In other words, if u1, u2, u3 ..., un are the eigenvectors of A, and λ1, λ2, ..., λn are their corresponding eigenvalues respectively, then A can be written as
This can also be written as
You should notice that each ui is considered a column vector and its transpose is a row vector. So the transpose of P has been written in terms of the transpose of the columns of P. This factorization of A is called the eigendecomposition of A.
Let me clarify it by an example. Suppose that
It has two eigenvectors:
and the corresponding eigenvalues were:
So D can be defined as
Now the columns of P are the eigenvectors of A that correspond to those eigenvalues in D respectively. So
The transpose of P is
So A can be written as
It is important to note that if you do the multiplications on the right side of the above equation, you will not get A exactly. That is because of the rounding errors in NumPy when calculating the irrational numbers that usually show up in the eigenvalues and eigenvectors, and because we have also rounded the values of the eigenvalues and eigenvectors here; in theory, both sides should be equal. But what does it mean? To understand the eigendecomposition better, we can take a look at its geometrical interpretation.
Geometrical interpretation of eigendecomposition
To better understand the eigendecomposition equation, we need to first simplify it. If we assume that each eigenvector ui is an n × 1 column vector
then the transpose of ui is a 1 × n row vector
and their multiplication
becomes an n×n matrix. First, we calculate DP^T to simplify the eigendecomposition equation:
Now the eigendecomposition equation becomes:
So the n×n matrix A can be broken into n matrices with the same shape (n×n), and each of these matrices has a multiplier which is equal to the corresponding eigenvalue λi. Each of the matrices
is called a projection matrix. Imagine that we have a vector x and a unit vector v. The inner product of v and x which is equal to v.x=v^T x gives the scalar projection of x onto v (which is the length of the vector projection of x into v), and if we multiply it by v again, it gives a vector which is called the orthogonal projection of x onto v. This is shown in Figure 9.
So when v is a unit vector, multiplying
by x, will give the orthogonal projection of x onto v, and that is why it is called the projection matrix. So multiplying ui ui^T by x, we get the orthogonal projection of x onto ui.
Now let me calculate the projection matrices of matrix A mentioned before.
We had already calculated the eigenvalues and eigenvectors of A.
Using the output of Listing 7, we get the first term in the eigendecomposition equation (we call it A1 here):
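A sketch of that calculation follows. The matrix A is not reproduced in this text, but [[3, 1], [1, 2]] is consistent with the eigenvalues (≈3.618 and ≈1.382) and eigenvectors printed in this example, so it is used here as an assumption:

import numpy as np
from numpy import linalg as LA

A = np.array([[3, 1],
              [1, 2]])
lam, u = LA.eig(A)                          # Listing 7: eigenvalues and eigenvectors of A
i = np.argmax(lam)                          # LA.eig() does not guarantee ordering, so pick the largest
A1 = lam[i] * np.outer(u[:, i], u[:, i])    # first term: lambda_1 * u1 u1^T
print("A1=", np.round(A1, 3))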
As you see it is also a symmetric matrix. In fact, all the projection matrices in the eigendecomposition equation are symmetric. That is because the element in row m and column n of each matrix
is equal to
and the element at row n and column m has the same value which makes it a symmetric matrix. This projection matrix has some interesting properties. First, we can calculate its eigenvalues and eigenvectors:
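A one-line check, continuing the sketch above:

lam_A1, u_A1 = LA.eig(A1)
print("lam=", np.round(lam_A1, 3))
print("u=", np.round(u_A1, 4))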
lam= [ 3.618  0.   ]
u= [[ 0.8507 -0.5257]
 [ 0.5257  0.8507]]
As you see, it has two eigenvalues (since it is a 2×2 symmetric matrix). One of them is zero and the other is equal to λ1 of the original matrix A. In addition, the eigenvectors are exactly the same eigenvectors of A. This is not a coincidence. Suppose we get the i-th term in the eigendecomposition equation and multiply it by ui.
We know that ui is an eigenvector and it is normalized, so its length and its inner product with itself are both equal to 1. So:
Now if you look at the definition of the eigenvectors, this equation means that one of the eigenvalues of the matrix
is λi and the corresponding eigenvector is ui. But this matrix is an n×n symmetric matrix and should have n eigenvalues and eigenvectors. Now we can multiply it by any of the remaining (n-1) eigenvectors of A to get:
where i ≠ j. We know that the eigenvectors of A are orthogonal, which means each pair of them is perpendicular. The inner product of two perpendicular vectors is zero (since the scalar projection of one onto the other should be zero). So the inner product of ui and uj is zero, and we get
which means that uj is also an eigenvector and its corresponding eigenvalue is zero. So we conclude that each matrix
in the eigendecomposition equation is a symmetric n×n matrix with n eigenvectors. The eigenvectors are the same as the original matrix A which are u1, u2, ... un. The corresponding eigenvalue of ui is λi (which is the same as A), but all the other eigenvalues are zero. Now, remember how a symmetric matrix transforms a vector. It will stretch or shrink the vector along its eigenvectors, and the amount of stretching or shrinking is proportional to the corresponding eigenvalue. So this matrix will stretch a vector along ui. But since the other eigenvalues are zero, it will shrink it to zero in those directions. Let me go back to matrix A and plot the transformation effect of A1 using Listing 9.
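Listing 9 is not reproduced in this text; a minimal sketch of that plot, reusing A1 from the sketch above, could be:

import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 200)
x = np.vstack([np.cos(theta), np.sin(theta)])   # the unit circle
t1 = A1 @ x                                     # transformation by A1
plt.plot(t1[0], t1[1])
plt.axis('equal')
plt.show()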
As you see, the initial circle is stretched along u1 and shrunk to zero along u2. So the result of this transformation is a straight line, not an ellipse. This is consistent with the fact that A1 is a projection matrix and should project everything onto u1, so the result should be a straight line along u1.
Rank
Figure 10 shows an interesting example in which the 2×2 matrix A1 is multiplied by a 2-d vector x, but the transformed vector Ax is a straight line. Here is another example. Suppose that we have a matrix:
Figure 11 shows how it transforms the unit vectors x.
So it acts as a projection matrix and projects all the vectors in x on the line y=2x. That is because the columns of F are not linearly independent. In fact, if the columns of F are called f1 and f2 respectively, then we have f1=2f2. Remember that we write the multiplication of a matrix and a vector as:
So unlike the vectors in x which need two coordinates, Fx only needs one coordinate and exists in a 1-d space. In general, an m×n matrix does not necessarily transform an n-dimensional vector into a vector that needs all m dimensions; the transformed vectors can live in a lower-dimensional subspace if the columns of that matrix are not linearly independent.
The column space of matrix A written as Col A is defined as the set of all linear combinations of the columns of A, and since Ax is also a linear combination of the columns of A, Col A is the set of all vectors in Ax. The number of basis vectors of Col A or the dimension of Col A is called the rank of A. So the rank of A is the dimension of Ax.
The rank of A is also the maximum number of linearly independent columns of A. That is because we can write all the dependent columns as a linear combination of these linearly independent columns, and Ax, which is a linear combination of all the columns, can be written as a linear combination of these linearly independent columns. So they span Ax and form a basis for Col A, and the number of these vectors becomes the dimension of Col A, or the rank of A.
In the previous example, the rank of F is 1. In addition, in the eigendecomposition equation, the rank of each matrix
is 1. Remember that they only have one non-zero eigenvalue and that is not a coincidence. It can be shown that the rank of a symmetric matrix is equal to the number of its non-zero eigenvalues.
Now we go back to the eigendecomposition equation again. Suppose that we apply our symmetric matrix A to an arbitrary vector x. Now the eigendecomposition equation becomes:
Each of the eigenvectors ui is normalized, so they are unit vectors. Now in each term of the eigendecomposition equation
gives a new vector which is the orthogonal projection of x onto ui. Then this vector is multiplied by λi. Since λi is a scalar, multiplying it by a vector, only changes the magnitude of that vector, not its direction. So λi only changes the magnitude of
Finally all the n vectors
are summed together to give Ax. This process is shown in Figure 12.
So the eigendecomposition mathematically explains an important property of the symmetric matrices that we saw in the plots before. A symmetric matrix transforms a vector by stretching or shrinking it along its eigenvectors, and the amount of stretching or shrinking along each eigenvector is proportional to the corresponding eigenvalue.
In addition, the eigendecomposition can break an n×n symmetric matrix into n matrices with the same shape (n×n) multiplied by one of the eigenvalues. The eigenvalues play an important role here since they can be thought of as a multiplier. The projection matrix only projects x onto each ui, but the eigenvalue scales the length of the vector projection (ui ui^Tx). The bigger the eigenvalue, the bigger the length of the resulting vector (λiui ui^Tx) is, and the more weight is given to its corresponding matrix (ui ui^T). So we can approximate our original symmetric matrix A by summing the terms which have the highest eigenvalues. For example, if we assume the eigenvalues λi have been sorted in descending order,
then we can only take the first k terms in the eigendecomposition equation to have a good approximation for the original matrix:
where Ak is the approximation of A with the first k terms. If we only include the first k eigenvalues and eigenvectors in the original eigendecomposition equation, we get the same result:
Now Dk is a k×k diagonal matrix comprised of the first k eigenvalues of A, Pk is an n×k matrix comprised of the first k eigenvectors of A, and its transpose becomes a k×n matrix. So their multiplication still gives an n×n matrix which is the same approximation of A.
If in the original matrix A, the other (n-k) eigenvalues that we leave out are very small and close to zero, then the approximated matrix is very similar to the original matrix, and we have a good approximation. Matrix
with
is an example. Here λ2 is rather small. We call the vectors in the unit circle x, and plot the transformation of them by the original matrix (Cx). Then we approximate matrix C with the first term in its eigendecomposition equation which is:
and plot the transformation of x by it. As you see in Figure 13, the result of the approximated matrix, which is a straight line, is very close to that of the original matrix.
Why is the eigendecomposition equation valid, and why does it need a symmetric matrix? Remember the important property of symmetric matrices. Suppose that x is an n×1 column vector. If A is an n×n symmetric matrix, then it has n linearly independent and orthogonal eigenvectors which can be used as a new basis. So we can now write the coordinate of x relative to this new basis:
and based on the definition of basis, any vector x can be uniquely written as a linear combination of the eigenvectors of A.
But the eigenvectors of a symmetric matrix are orthogonal too. So to find each coordinate ai, we just need to draw a line perpendicular to an axis of ui through point x and see where it intersects it (refer to Figure 8). As mentioned before this can be also done using the projection matrix. So each term ai is equal to the dot product of x and ui (refer to Figure 9), and x can be written as
So we need a symmetric matrix to express x as a linear combination of the eigenvectors in the above equation. Now if we multiply A by x, we can factor out the ai terms since they are scalar quantities. So we get:
and since the ui vectors are the eigenvectors of A, we finally get:
which is the eigendecomposition equation. Whatever happens after the multiplication by A is true for all matrices and does not need a symmetric matrix. We need an n×n symmetric matrix since it has n real eigenvalues plus n linearly independent and orthogonal eigenvectors that can be used as a new basis for x. When you have a non-symmetric matrix, you do not have such a combination. For example, suppose that you have a non-symmetric matrix:
If you calculate the eigenvalues and eigenvectors of this matrix, you get:
lam= [2.5+0.866j 2.5-0.866j]
u= [[0.7071+0.j     0.7071-0.j    ]
 [0.3536-0.6124j 0.3536+0.6124j]]
which means you have no real eigenvalues to do the decomposition. Another example is:
and you get:
lam= [2. 2.]
u= [[ 1. -1.]
 [ 0.  0.]]
Here the eigenvectors are not linearly independent. In fact u1= -u2. So you cannot reconstruct A like Figure 11 using only one eigenvector. In addition, it does not show a direction of stretching for this matrix as shown in Figure 14.
Finally, remember that for
we had:
lam= [ 7.8151 -2.8151]
u= [[ 0.639  -0.5667]
 [ 0.7692  0.8239]]
Here the eigenvectors are linearly independent, but they are not orthogonal (refer to Figure 3), and they do not show the correct direction of stretching for this matrix after transformation.
The eigendecomposition method is very useful, but only works for a symmetric matrix. A symmetric matrix is always a square matrix, so if you have a matrix that is not square, or a square but non-symmetric matrix, then you cannot use the eigendecomposition method to approximate it with other matrices. SVD can overcome this problem.
Singular Values
Before talking about SVD, we should find a way to calculate the stretching directions for a non-symmetric matrix. Suppose that A is an m×n matrix which is not necessarily symmetric. Then it can be shown that
is an n×n symmetric matrix. Remember that the transpose of a product is the product of the transposes in the reverse order. So
So A^T A is equal to its transpose, and it is a symmetric matrix. We want to calculate the stretching directions for a non-symmetric matrix, but how can we define the stretching directions mathematically?
So far, we only focused on the vectors in a 2-d space, but we can use the same concepts in an n-d space. Here I focus on a 3-d space to be able to visualize the concepts. Now the column vectors have 3 elements. Initially, we have a sphere that contains all the vectors that are one unit away from the origin as shown in Figure 15. If we call these vectors x then ||x||=1. Now if we multiply them by a 3×3 symmetric matrix, Ax becomes a 3-d oval. The first direction of stretching can be defined as the direction of the vector which has the greatest length in this oval (Av1 in Figure 15). In fact, Av1 is the maximum of ||Ax|| over all unit vectors x. This vector is the transformation of the vector v1 by A.
The second direction of stretching is along the vector Av2. Av2 is the maximum of ||Ax|| over all vectors in x which are perpendicular to v1. So among all the vectors in x, we maximize ||Ax|| with this constraint that x is perpendicular to v1. Finally, v3 is the vector that is perpendicular to both v1 and v2 and gives the greatest length of Ax with these constraints. The direction of Av3 determines the third direction of stretching. So generally in an n-dimensional space, the i-th direction of stretching is the direction of the vector Avi which has the greatest length and is perpendicular to the previous (i-1) directions of stretching.
Now let A be an m×n matrix. We showed that A^T A is a symmetric matrix, so it has n real eigenvalues and n linearly independent and orthogonal eigenvectors which can form a basis for the n-element vectors that it can transform (in R^n space). We call these eigenvectors v1, v2, ... vn and we assume they are normalized. For each of these eigenvectors, we can use the definition of length and the rule for the product of transposed matrices to have:
Now we assume that the corresponding eigenvalue of vi is λi
But vi is normalized, so
As a result:
This result shows that all the eigenvalues are non-negative. Now assume that we label them in decreasing order, so:
Now we define the singular value of A as the square root of λi (the eigenvalue of A^T A), and we denote it with σi.
So the singular values of A are the length of vectors Avi. Now we can summarize an important result which forms the backbone of the SVD method. It can be shown that the maximum value of ||Ax|| subject to the constraints
is σk, and this maximum is attained at vk. For the constraints, we used the fact that when x is perpendicular to vi, their dot product is zero.
So if vi is the eigenvector of A^T A (ordered based on its corresponding singular value), and assuming that ||x||=1, then Avi is showing a direction of stretching for Ax, and the corresponding singular value σi gives the length of Avi.
The singular values can also determine the rank of A. Suppose that the number of non-zero singular values is r. Since they are positive and labeled in decreasing order, we can write them as
which correspond to
and each λi is the corresponding eigenvalue of vi. Then it can be shown that rank A which is the number of vectors that form the basis of Ax is r. It can be also shown that the set {Av1, Av2, ..., Avr} is an orthogonal basis for Ax (the Col A). So the vectors Avi are perpendicular to each other as shown in Figure 15.
Now we go back to the non-symmetric matrix
We plotted the eigenvectors of A in Figure 3, and it was mentioned that they do not show the directions of stretching for Ax. In Figure 16 the eigenvectors of A^T A have been plotted on the left side (v1 and v2). Since A^T A is a symmetric matrix, these vectors show the directions of stretching for it. On the right side, the vectors Av1 and Av2 have been plotted, and it is clear that these vectors show the directions of stretching for Ax.
So Avi shows the direction of stretching of A whether A is symmetric or not.
Now imagine that matrix A is symmetric and is equal to its transpose. In addition, suppose that its i-th eigenvector is ui and the corresponding eigenvalue is λi. If we multiply A^T A by ui we get:
which means that ui is also an eigenvector of A^T A, but its corresponding eigenvalue is λi^2. So when A is symmetric, instead of calculating Avi (where vi is the eigenvector of A^T A) we can simply use ui (the eigenvector of A) to have the directions of stretching, and this is exactly what we did for the eigendecomposition process. Now that we know how to calculate the directions of stretching for a non-symmetric matrix, we are ready to see the SVD equation.
Singular Value Decomposition (SVD)
Let A be an m×n matrix and rank A = r. So the number of non-zero singular values of A is r. Since they are positive and labeled in decreasing order, we can write them as
where
We know that each singular value σi is the square root of the λi (eigenvalue of A^TA), and corresponds to an eigenvector vi with the same order. Now we can write the singular value decomposition of A as:
where V is an n×n matrix whose columns are the vi. So:
We call a set of orthogonal and normalized vectors an orthonormal set. So the set {vi} is an orthonormal set. A matrix whose columns are an orthonormal set is called an orthogonal matrix, and V is an orthogonal matrix.
Σ is an m×n diagonal matrix of the form:
So we first make an r × r diagonal matrix with diagonal entries of σ1, σ2, ..., σr. Then we pad it with zero to make it an m × n matrix.
We also know that the set {Av1, Av2, ..., Avr} is an orthogonal basis for Col A, and σi = ||Avi||. So we can normalize the Avi vectors by dividing them by their length:
Now we have a set {u1, u2, ..., ur} which is an orthonormal basis for Ax which is r-dimensional. We know that A is an m×n matrix, and the rank of A can be m at most (when all the columns of A are linearly independent). Since we need an m×m matrix for U, we add (m-r) vectors to the set of ui to make it an orthonormal basis for the m-dimensional space R^m (there are several methods that can be used for this purpose; for example, we can use the Gram-Schmidt process, but explaining it is beyond the scope of this article). So now we have an orthonormal basis {u1, u2, ... ,um}. These vectors will be the columns of U which is an orthogonal m×m matrix
So in the end, we can decompose A as
To better understand this equation, we need to simplify it:
We know that σi is a scalar; ui is an m-dimensional column vector, and vi is an n-dimensional column vector. So each σiui vi^T is an m×n matrix, and the SVD equation decomposes the matrix A into r matrices with the same shape (m×n).
First, let me show why this equation is valid. If we multiply both sides of the SVD equation by x we get:
We know that the set {u1, u2, ..., ur} is an orthonormal basis for Ax. So the vector Ax can be written as a linear combination of them.
and since ui vectors are orthogonal, each term ai is equal to the dot product of Ax and ui (scalar projection of Ax onto ui):
but we also know that
So by replacing that into the previous equation, we have:
We also know that vi is the eigenvector of A^T A and its corresponding eigenvalue λi is the square of the singular value σi
But dot product is commutative, so
Notice that vi^Tx gives the scalar projection of x onto vi, and the length is scaled by the singular value. Now if we replace the ai value into the equation for Ax, we get the SVD equation:
So each ai = σi vi^T x is the scalar projection of Ax onto ui, and if it is multiplied by ui, the result is a vector which is the orthogonal projection of Ax onto ui. The singular value σi scales the length of this vector along ui. Remember that in the eigendecomposition equation, each ui ui^T was a projection matrix that would give the orthogonal projection of x onto ui. Here σi vi^T can be thought of as a projection matrix that takes x, but projects Ax onto ui. Since it projects all the vectors on ui, its rank is 1. Figure 17 summarizes all the steps required for SVD. We start by picking a random 2-d vector x1 from all the vectors that have a length of 1 in x (Figure 17–1). Then we try to calculate Ax1 using the SVD method.
First, we calculate the eigenvalues (λ1, λ2) and eigenvectors (v1, v2) of A^T A. We know that the singular values are the square root of the eigenvalues (σi^2 = λi) as shown in (Figure 17–2). Av1 and Av2 show the directions of stretching of Ax, and u1 and u2 are the unit vectors of Av1 and Av2 (Figure 17–4). The orthogonal projection of Ax1 onto u1 and u2 are
respectively (Figure 17–5), and by simply adding them together we get Ax1
as shown in (Figure 17–6).
Here is an example showing how to calculate the SVD of a matrix in Python. We want to find the SVD of
This is a 2×3 matrix. So x is a 3-d column vector, but Ax is a 2-d column vector, and x and Ax exist in different vector spaces. First, we calculate the eigenvalues and eigenvectors of A^T A.
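A sketch of Listing 10 (the matrix A = [[4, 1, 3], [8, 3, -2]] is recovered from the reconstruction output shown later in this section):

import numpy as np
from numpy import linalg as LA

A = np.array([[4, 1, 3],
              [8, 3, -2]])
lam, v = LA.eig(A.T @ A)          # eigenvalues and eigenvectors of A^T A
print("lam=", np.round(lam, 4))
print("v=", np.round(v, 4))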
The output is:
lam= [90.1167  0.     12.8833]
v= [[ 0.9415  0.3228  0.0969]
 [ 0.3314 -0.9391 -0.0906]
 [-0.0617 -0.1174  0.9912]]
As you see the 2nd eigenvalue is zero. Since A^T A is a symmetric matrix and has two non-zero eigenvalues, its rank is 2. Figure 18 shows two plots of A^T Ax from different angles. Since the rank of A^TA is 2, all the vectors A^TAx lie on a plane.
Listing 11 shows how to construct the matrices Σ and V. We first sort the eigenvalues in descending order. The columns of V are the corresponding eigenvectors in the same order.
Then we filter the non-zero eigenvalues and take the square root of them to get the non-zero singular values. We know that Σ should be a 2×3 matrix. So we place the two non-zero singular values in a 2×2 diagonal matrix and pad it with zeros to have a 2×3 matrix.
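A sketch of Listing 11, continuing from the previous one (the explicit sort is needed because LA.eig() does not guarantee any ordering):

order = np.argsort(lam)[::-1]              # sort eigenvalues in descending order
lam = lam[order]
V = v[:, order]                            # columns of V in the same order
sigma = np.sqrt(lam[lam > 1e-10])          # non-zero singular values
Sigma = np.zeros(A.shape)                  # a 2x3 matrix padded with zeros
Sigma[:len(sigma), :len(sigma)] = np.diag(sigma)
print("Sigma=", np.round(Sigma, 4))
print("V=", np.round(V, 4))

The output is: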
Sigma= [[9.493  0.     0.    ]
 [0.     3.5893 0.    ]]
V= [[ 0.9415  0.0969  0.3228]
 [ 0.3314 -0.0906 -0.9391]
 [-0.0617  0.9912 -0.1174]]
To construct U, we take the vectors Avi corresponding to the r non-zero singular values of A and divide them by their corresponding singular values. Since A is a 2×3 matrix, U should be a 2×2 matrix. We have 2 non-zero singular values, so the rank of A is 2 and r=2. As a result, we already have enough ui vectors to form U.
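A sketch of Listing 12, continuing from above:

r = len(sigma)                           # number of non-zero singular values
U = np.zeros((A.shape[0], A.shape[0]))
for i in range(r):
    U[:, i] = (A @ V[:, i]) / sigma[i]   # u_i = A v_i / sigma_i
print("U=", np.round(U, 4))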
The output is:
U= [[ 0.4121  0.9111]
 [ 0.9111 -0.4121]]
Finally, we get the decomposition of A:
We really did not need to follow all these steps. NumPy has a function called svd() which can do the same thing for us. Listing 13 shows how we can use this function to calculate the SVD of matrix A easily.
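A sketch of Listing 13:

U, s, VT = LA.svd(A)            # s holds the singular values; VT is V transposed
print("U=", np.round(U, 4))
print("s=", np.round(s, 4))
print("V=", np.round(VT.T, 4))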
The output is:
U= [[-0.4121 -0.9111]
 [-0.9111  0.4121]]
s= [9.493  3.5893]
V= [[-0.9415 -0.0969 -0.3228]
 [-0.3314  0.0906  0.9391]
 [ 0.0617 -0.9912  0.1174]]
You should notice a few things in the output. First, this function returns an array of the singular values that are on the main diagonal of Σ, not the matrix Σ. In addition, it returns V^T, not V, so I have printed the transpose of the array VT that it returns. Finally, the ui and vi vectors reported by svd() have the opposite sign of the ui and vi vectors that were calculated in Listings 10–12. Remember that if vi is an eigenvector for an eigenvalue, then (-1)vi is also an eigenvector for the same eigenvalue, and its length is also the same. So if vi is normalized, (-1)vi is normalized too. In fact, in Listing 10 we calculated vi with a different method, and svd() is just reporting (-1)vi which is still correct. Since ui = Avi/σi, the set of ui reported by svd() will have the opposite sign too.
You can easily construct the matrix Σ and check that multiplying these matrices gives A.
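For example, continuing from the svd() call above (a sketch):

Sigma = np.zeros(A.shape)                # rebuild the 2x3 Sigma from s
Sigma[:len(s), :len(s)] = np.diag(s)
print("Reconstructed A=", np.round(U @ Sigma @ VT, 1))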
Reconstructed A= [[ 4.  1.  3.]
 [ 8.  3. -2.]]
In Figure 19, you see a plot of x which is the vectors in a unit sphere and Ax which is the set of 2-d vectors produced by A. The vectors u1 and u2 show the directions of stretching. The ellipse produced by Ax is not hollow like the ones that we saw before (for example in Figure 6), and the transformed vectors fill it completely.
Similar to the eigendecomposition method, we can approximate our original matrix A by summing the terms which have the highest singular values. So we can use the first k terms in the SVD equation, using the k highest singular values which means we only include the first k vectors in U and V matrices in the decomposition equation:
We know that the set {u1, u2, ..., ur} forms a basis for Ax. So when we pick k vectors from this set, Ak x is written as a linear combination of u1, u2, ... uk. So they span Ak x, and since they are linearly independent they form a basis for Ak x (or Col Ak). So the rank of Ak is k, and by picking the first k singular values, we approximate A with a rank-k matrix.
As an example, suppose that we want to calculate the SVD of matrix
Again x is the vectors in a unit sphere (Figure 19 left). The singular values are σ1=11.97, σ2=5.57, σ3=3.25, and the rank of A is 3. So Ax is an ellipsoid in 3-d space as shown in Figure 20 (left). If we approximate it using the first singular value, the rank of Ak will be one and Ak multiplied by x will be a line (Figure 20 right). If we only use the first two singular values, the rank of Ak will be 2 and Ak multiplied by x will be a plane (Figure 20 middle).
It is important to note that if we have a symmetric matrix, the SVD equation is simplified into the eigendecomposition equation. Suppose that the symmetric matrix A has eigenvectors vi with the corresponding eigenvalues λi, so that Avi = λivi.
We already showed that for a symmetric matrix, vi is also an eigenvector of A^T A with the corresponding eigenvalue of λi^2. So the singular values of A are the square roots of λi^2, i.e. σi = |λi|, which equals λi when the eigenvalues are non-negative (as assumed here). Now we can calculate ui:
So ui is the eigenvector of A corresponding to λi (and σi). Now we can simplify the SVD equation to get the eigendecomposition equation:
Finally, it can be shown that SVD is the best way to approximate A with a rank-k matrix. The Frobenius norm of an m × n matrix A is defined as the square root of the sum of the absolute squares of its elements:
So this is like the generalization of the vector length for a matrix. Now if the m×n matrix Ak is the approximated rank-k matrix by SVD, we can think of
as the distance between A and Ak. The smaller this distance, the better Ak approximates A. Now if B is any m×n rank-k matrix, it can be shown that
In other words, the difference between A and its rank-k approximation generated by SVD has the minimum Frobenius norm, and no other rank-k matrix can give a better approximation for A (with a closer distance in terms of the Frobenius norm).
Now that we are familiar with SVD, we can see some of its applications in data science.
Dimensionality reduction
We can store an image in a matrix. Every image consists of a set of pixels which are the building blocks of that image. Each pixel represents the color or the intensity of light in a specific location in the image. In a grayscale image with PNG format, each pixel has a value between 0 and 1, where zero corresponds to black and 1 corresponds to white. So a grayscale image with m×n pixels can be stored in an m×n matrix or NumPy array. Here we use the imread() function to load a grayscale image of Einstein which has 480 × 423 pixels into a 2-d array. Then we use SVD to decompose the matrix and reconstruct it using the first 30 singular values.
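A sketch of that code ('einstein.png' is a placeholder file name; any grayscale PNG that imread() returns as a 2-d array in [0, 1] works):

import numpy as np
import matplotlib.pyplot as plt
from numpy import linalg as LA

img = plt.imread('einstein.png')                 # placeholder path; shape (480, 423)
U, s, VT = LA.svd(img)
k = 30
img_k = U[:, :k] @ np.diag(s[:k]) @ VT[:k, :]    # rank-30 approximation
plt.imshow(img_k, cmap='gray')
plt.axis('off')
plt.show()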
The original matrix is 480×423. So we need to store 480×423=203040 values. After SVD each ui has 480 elements and each vi has 423 elements. To be able to reconstruct the image using the first 30 singular values we only need to keep the first 30 σi, ui, and vi which means storing 30×(1+480+423)=27120 values. This is roughly 13% of the number of values required for the original image. So using SVD we can have a good approximation of the original image and save a lot of memory. Listing 16 calculates the matrices corresponding to the first 6 singular values. Each matrix σi ui vi^T has a rank of 1 and has the same number of rows and columns as the original matrix. Figure 22 shows the result.
Please note that unlike the original grayscale image, the value of the elements of these rank-1 matrices can be greater than 1 or less than zero, and they should not be interpreted as a grayscale image. So I did not use cmap='gray' and did not display them as grayscale images. When plotting them we do not care about the absolute value of the pixels. Instead, we care about their values relative to each other.
To understand how the image information is stored in each of these matrices, we can study a much simpler image. In Listing 17, we read a binary image with five simple shapes: a rectangle and 4 circles. The result is shown in Figure 23.
The image has been reconstructed using the first 2, 4, and 6 singular values. Now we plot the matrices corresponding to the first 6 singular values:
Each matrix (σi ui vi^T) has a rank of 1, which means it only has one independent column and all the other columns are a scalar multiplication of that one. So if we call the independent column c1 (it can be any of the other columns), the columns have the general form of:
where ai is a scalar multiplier. In addition, this matrix projects all the vectors on ui, so every column is also a scalar multiplication of ui. This can be seen in Figure 25. Two columns of the matrix σ2u2 v2^T are shown versus u2. Both columns have the same pattern of u2 with different values (ai for column #300 has a negative value).
So using the values of c1 and ai (or u2 and its multipliers), each matrix captures some details of the original image. In Figure 24, the first 2 matrices can capture almost all the information about the left rectangle in the original image. The 4 circles are roughly captured as four rectangles in the first 2 matrices in Figure 24, and more details on them are added in the last 4 matrices. This can be also seen in Figure 23 where the circles in the reconstructed image become rounder as we add more singular values. These rank-1 matrices may look simple, but they are able to capture some information about the repeating patterns in the image. For example in Figure 26, we have the image of the national monument of Scotland which has 6 pillars (in the image), and the matrix corresponding to the first singular value can capture the number of pillars in the original image.
Eigenfaces
In this example, we are going to use the Olivetti faces dataset in the Scikit-learn library. This data set contains 400 images. The images were taken between April 1992 and April 1994 at AT&T Laboratories Cambridge. The images show the face of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions, and facial details. These images are grayscale and each image has 64×64 pixels. The intensity of each pixel is a number on the interval [0, 1]. First, we load the dataset:
The fetch_olivetti_faces() function has already been imported in Listing 1. We call it to read the data and store the images in the imgs array. This is a (400, 64, 64) array which contains 400 grayscale 64×64 images. We can show some of them as an example here:
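A sketch of that code (the import is repeated here so the sketch is self-contained; the choice of showing the first 4 faces is arbitrary):

from sklearn.datasets import fetch_olivetti_faces
import matplotlib.pyplot as plt

faces = fetch_olivetti_faces()
imgs = faces.images                  # shape (400, 64, 64)
fig, axes = plt.subplots(1, 4)
for i, ax in enumerate(axes):        # show the first 4 faces as an example
    ax.imshow(imgs[i], cmap='gray')
    ax.axis('off')
plt.show()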
In the previous example, we stored our original image in a matrix and then used SVD to decompose it. Here we take another approach. We know that we have 400 images, so we give each image a label from 1 to 400. Now we use one-hot encoding to represent these labels by a vector. We use a column vector with 400 elements. For each label k, all the elements are zero except the k-th element. So label k will be represented by the vector:
Now we store each image in a column vector. Each image has 64 × 64 = 4096 pixels. So we can flatten each image and place the pixel values into a column vector f with 4096 elements as shown in Figure 28:
So each image with label k will be stored in the vector fk, and we need 400 fk vectors to keep all the images. Now we define a transformation matrix M which transforms the label vector ik to its corresponding image vector fk. The vectors fk will be the columns of matrix M:
This matrix has 4096 rows and 400 columns. We can simply use y=Mx to find the corresponding image of each label (x can be any vectors ik, and y will be the corresponding fk). For example for the third image of this dataset, the label is 3, and all the elements of i3 are zero except the third element which is 1. Now, remember the multiplication of partitioned matrices. When we multiply M by i3, all the columns of M are multiplied by zero except the third column f3, so:
Listing 21 shows how we can construct M and use it to show a certain image from the dataset.
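A sketch of Listing 21 (the label 160 below is a hypothetical choice):

import numpy as np

M = imgs.reshape(400, 64 * 64).T   # each column is one flattened image; M has shape (4096, 400)
k = 160                            # a hypothetical label to display
i_k = np.zeros((400, 1))
i_k[k - 1] = 1                     # one-hot label vector i_k
f_k = M @ i_k                      # multiplying M by i_k picks out column k
plt.imshow(f_k.reshape(64, 64), cmap='gray')
plt.axis('off')
plt.show()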
The length of each label vector ik is one and these label vectors form a standard basis for a 400-dimensional space. In this space, each axis corresponds to one of the labels with the restriction that its value can be either zero or one. The vectors fk live in a 4096-dimensional space in which each axis corresponds to one pixel of the image, and matrix M maps ik to fk. Now we can use SVD to decompose M. Remember that when we decompose M (with rank r) to
the set {u1, u2, ..., ur} which are the first r columns of U will be a basis for Mx. Each vector ui will have 4096 elements. Since y=Mx is the space in which our image vectors live, the vectors ui form a basis for the image vectors as shown in Figure 29. In this figure, I have tried to visualize an n-dimensional vector space. This is, of course, impossible when n≥3, but this is just a fictitious illustration to help you understand this method.
So we can reshape ui into a 64 ×64 pixel array and try to plot it like an image. The value of the elements of these vectors can be greater than 1 or less than zero, and when reshaped they should not be interpreted as a grayscale image. So I did not use cmap='gray' when displaying them.
The output is:
You can check that the array s in Listing 22 has 400 elements, so we have 400 non-zero singular values and the rank of the matrix is 400. As a result, we need the first 400 vectors of U to reconstruct the matrix completely. We can easily reconstruct one of the images using the basis vectors:
Here we take image #160 and reconstruct it using different numbers of singular values:
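A sketch of that reconstruction, continuing with M from above (the list of k values is a hypothetical choice):

from numpy import linalg as LA

U, s, VT = LA.svd(M, full_matrices=False)   # Listing 22: SVD of M (thin form to save memory)
col = 159                                   # image #160 is column index 159
for k in (1, 10, 50, 400):
    f_approx = U[:, :k] @ (s[:k] * VT[:k, col])   # sum of the first k SVD terms for this column
    plt.imshow(f_approx.reshape(64, 64), cmap='gray')
    plt.axis('off')
    plt.show()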
The vectors ui are called the eigenfaces and can be used for face recognition. As you see in Figure 30, each eigenface captures some information of the image vectors. For example, u1 is mostly about the eyes, or u6 captures part of the nose. When reconstructing the image in Figure 31, the first singular value adds the eyes, but the rest of the face is vague. By increasing k, nose, eyebrows, beard, and glasses are added to the face. Some people believe that the eyes are the most important feature of your face. It seems that SVD agrees with them since the first eigenface which has the highest singular value captures the eyes.
Reducing noise
SVD can be used to reduce the noise in the images. Listing 24 shows an example:
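A sketch of that example (the image path and the noise level are placeholder assumptions):

import numpy as np
import matplotlib.pyplot as plt
from numpy import linalg as LA

img = plt.imread('image.png')                      # placeholder grayscale image
noisy = img + 0.2 * np.random.randn(*img.shape)    # add Gaussian noise (assumed level)
U, s, VT = LA.svd(noisy)
for k in (20, 55, 200):
    denoised = U[:, :k] @ np.diag(s[:k]) @ VT[:k, :]   # rank-k reconstruction
    plt.imshow(denoised, cmap='gray')
    plt.axis('off')
    plt.show()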
Here we first load the image and add some noise to it. Then we reconstruct the image using the first 20, 55 and 200 singular values. As you see in Figure 32, the amount of noise increases as we increase the rank of the reconstructed matrix. So if we use a lower rank like 20 we can significantly reduce the noise in the image. It is important to understand why it works much better at lower ranks.
Here is a simple example to show how SVD reduces the noise. Imagine that we have the 3×15 matrix defined in Listing 25:
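A sketch of that matrix follows. The exact element values in Listing 25 are not shown in this text, so the numbers below, including the noisy entries in column #12, are illustrative:

import numpy as np
from numpy import linalg as LA

A = np.zeros((3, 15))
A[0, :5] = 1            # first category: only the first element is non-zero
A[1:, 5:] = 1           # second category: only the first element is zero
A[0, 11] = 0.4          # noise in the first element of column #12 (index 11)
A[2, 11] = 0.6          # noise in the last element of column #12
print(LA.matrix_rank(A))                      # 3, because of the noisy column
U, s, VT = LA.svd(A)
A2 = U[:, :2] @ np.diag(s[:2]) @ VT[:2, :]    # rank-2 reconstruction reduces the noise
A3 = U[:, :3] @ np.diag(s[:3]) @ VT[:3, :]    # rank-3 reconstruction recovers the noisy matrix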
A color map of this matrix is shown below:
The matrix columns can be divided into two categories. In the first 5 columns, only the first element is not zero, and in the last 10 columns, only the first element is zero. We also have a noisy column (column #12) which should belong to the second category, but its first and last elements do not have the right values. We can assume that these two elements contain some noise. Now we decompose this matrix using SVD. The rank of the matrix is 3, and it only has 3 non-zero singular values. Now we reconstruct it using the first 2 and 3 singular values.
As Figure 34 shows, by using the first 2 singular values column #12 changes and follows the same pattern of the columns in the second category. However, the actual values of its elements are a little lower now. If we use all the 3 singular values, we get back the original noisy column. Figure 35 shows a plot of these columns in 3-d space.
First look at the ui vectors generated by SVD. u1 shows the average direction of the column vectors in the first category. Of course, it has the opposite direction, but it does not matter (Remember that if vi is an eigenvector for an eigenvalue, then (-1)vi is also an eigenvector for the same eigenvalue, and since ui=Avi/σi, then its sign depends on vi). What is important is the stretching direction not the sign of the vector. Similarly, u2 shows the average direction for the second category.
The noisy column is shown by the vector n. It is not along u1 and u2. Now if we use ui as a basis, we can decompose n and find its orthogonal projection onto ui. As you see it has a component along u3 (in the opposite direction) which is the noise direction. This direction represents the noise present in the third element of n. It has the lowest singular value which means it is not considered an important feature by SVD. When we reconstruct n using the first two singular values, we ignore this direction and the noise present in the third element is eliminated. Now we only have the vector projections along u1 and u2. But the scalar projection along u1 has a much higher value. That is because vector n is more similar to the first category.
So the projection of n in the u1-u2 plane is almost along u1, and the reconstruction of n using the first two singular values gives a vector which is more similar to the first category. It is important to note that the noise in the first element which is represented by u2 is not eliminated. In addition, though the direction of the reconstructed n is almost correct, its magnitude is smaller compared to the vectors in the first category. In fact, in the reconstructed vector, the second element (which did not contain noise) has now a lower value compared to the original vector (Figure 36).
So SVD assigns most of the noise (but not all of that) to the vectors represented by the lower singular values. If we reconstruct a low-rank matrix (ignoring the lower singular values), the noise will be reduced, however, the correct part of the matrix changes too. The result is a matrix that is only an approximation of the noiseless matrix that we are looking for. This can be seen in Figure 32. The image background is white and the noisy pixels are black. When we reconstruct the low-rank image, the background is much more uniform but it is gray now. In fact, what we get is a less noisy approximation of the white background that we expect to have if there is no noise in the image.
I hope that you enjoyed reading this article. Please let me know if you have any questions or suggestions. All the Code Listings in this article are available for download as a Jupyter notebook from GitHub at: https://github.com/reza-bagheri/SVD_article
Further reading:
Eigendecomposition and SVD can be also used for the Principal Component Analysis (PCA). PCA is very useful for dimensionality reduction. To learn more about the application of eigendecomposition and SVD in PCA, you can read these articles:
{
"code": null,
"e": 940,
"s": 171,
"text": "In linear algebra, the Singular Value Decomposition (SVD) of a matrix is a factorization of that matrix into three matrices. It has some interesting algebraic properties and conveys important geometrical and theoretical insights about linear transformations. It also has some important applications in data science. In this article, I will try to explain the mathematical intuition behind SVD and its geometrical meaning. Instead of manual calculations, I will use the Python libraries to do the calculations and later give you some examples of using SVD in data science applications. In this article, bold-face lower-case letters (like a) refer to vectors. Bold-face capital letters (like A) refer to matrices, and italic lower-case letters (like a) refer to scalars."
},
{
"code": null,
"e": 1331,
"s": 940,
"text": "To understand SVD we need to first understand the Eigenvalue Decomposition of a matrix. We can think of a matrix A as a transformation that acts on a vector x by multiplication to produce a new vector Ax. We use [A]ij or aij to denote the element of matrix A at row i and column j. If A is an m×p matrix and B is a p×n matrix, the matrix product C=AB (which is an m×n matrix) is defined as:"
},
{
"code": null,
"e": 1398,
"s": 1331,
"text": "For example, the rotation matrix in a 2-d space can be defined as:"
},
{
"code": null,
"e": 1589,
"s": 1398,
"text": "This matrix rotates a vector about the origin by the angle θ (with counterclockwise rotation for a positive θ). Another example is the stretching matrix B in a 2-d space which is defined as:"
},
{
"code": null,
"e": 1762,
"s": 1589,
"text": "This matrix stretches a vector along the x-axis by a constant factor k but does not affect it in the y-direction. Similarly, we can have a stretching matrix in y-direction:"
},
{
"code": null,
"e": 1797,
"s": 1762,
"text": "As an example, if we have a vector"
},
{
"code": null,
"e": 1959,
"s": 1797,
"text": "then y=Ax is the vector which results after rotation of x by θ, and Bx is a vector which is the result of stretching x in the x-direction by a constant factor k."
},
{
"code": null,
"e": 2102,
"s": 1959,
"text": "Listing 1 shows how these matrices can be applied to a vector x and visualized in Python. We can use the NumPy arrays as vectors and matrices."
},
{
"code": null,
"e": 2330,
"s": 2102,
"text": "Here the rotation matrix is calculated for θ=300 and in the stretching matrix k=3. y is the transformed vector of x. To plot the vectors, the quiver() function in matplotlib has been used. Figure 1 shows the output of the code."
},
{
"code": null,
"e": 2656,
"s": 2330,
"text": "The matrices are represented by a 2-d array in NumPy. We can use the np.matmul(a,b) function to the multiply matrix a by b However, it is easier to use the @ operator to do that. The vectors can be represented either by a 1-d array or a 2-d array with a shape of (1,n) which is a row vector or (n,1) which is a column vector."
},
{
"code": null,
"e": 2728,
"s": 2656,
"text": "Now we are going to try a different transformation matrix. Suppose that"
},
{
"code": null,
"e": 2912,
"s": 2728,
"text": "However, we don’t apply it to just one vector. Initially, we have a circle that contains all the vectors that are one unit away from the origin. These vectors have the general form of"
},
{
"code": null,
"e": 3057,
"s": 2912,
"text": "Now we calculate t=Ax. So t is the set of all the vectors in x which have been transformed by A. Listing 2 shows how this can be done in Python."
},
{
"code": null,
"e": 3164,
"s": 3057,
"text": "Figure 2 shows the plots of x and t and the effect of transformation on two sample vectors x1 and x2 in x."
},
{
"code": null,
"e": 3328,
"s": 3164,
"text": "The initial vectors (x) on the left side form a circle as mentioned before, but the transformation matrix somehow changes this circle and turns it into an ellipse."
},
{
"code": null,
"e": 3420,
"s": 3328,
"text": "The sample vectors x1 and x2 in the circle are transformed into t1 and t2 respectively. So:"
},
{
"code": null,
"e": 3449,
"s": 3420,
"text": "Eigenvalues and Eigenvectors"
},
{
"code": null,
"e": 4285,
"s": 3449,
"text": "A vector is a quantity which has both magnitude and direction. The general effect of matrix A on the vectors in x is a combination of rotation and stretching. For example, it changes both the direction and magnitude of the vector x1 to give the transformed vector t1. However, for vector x2 only the magnitude changes after transformation. In fact, x2 and t2 have the same direction. Matrix A only stretches x2 in the same direction and gives the vector t2 which has a bigger magnitude. The only way to change the magnitude of a vector without changing its direction is by multiplying it with a scalar. So if we have a vector u, and λ is a scalar quantity then λu has the same direction and a different magnitude. So for a vector like x2 in figure 2, the effect of multiplying by A is like multiplying it with a scalar quantity like λ."
},
{
"code": null,
"e": 4631,
"s": 4285,
"text": "This is not true for all the vectors in x. In fact, for each matrix A, only some of the vectors have this property. These special vectors are called the eigenvectors of A and their corresponding scalar quantity λ is called an eigenvalue of A for that eigenvector. So the eigenvector of an n×n matrix A is defined as a nonzero vector u such that:"
},
{
"code": null,
"e": 4865,
"s": 4631,
"text": "where λ is a scalar and is called the eigenvalue of A, and u is the eigenvector corresponding to λ. In addition, if you have any other vectors in the form of au where a is a scalar, then by placing it in the previous equation we get:"
},
{
"code": null,
"e": 5051,
"s": 4865,
"text": "which means that any vector which has the same direction as the eigenvector u (or the opposite direction if a is negative) is also an eigenvector with the same corresponding eigenvalue."
},
{
"code": null,
"e": 5083,
"s": 5051,
"text": "For example, the eigenvalues of"
},
{
"code": null,
"e": 5145,
"s": 5083,
"text": "are λ1=-1 and λ2=-2 and their corresponding eigenvectors are:"
},
{
"code": null,
"e": 5158,
"s": 5145,
"text": "and we have:"
},
{
"code": null,
"e": 5621,
"s": 5158,
"text": "This means that when we apply matrix B to all the possible vectors, it does not change the direction of these two vectors (or any vectors which have the same or opposite direction) and only stretches them. So for the eigenvectors, the matrix multiplication turns into a simple scalar multiplication. Here I am not going to explain how the eigenvalues and eigenvectors can be calculated mathematically. Instead, I will show you how they can be obtained in Python."
},
{
"code": null,
"e": 6033,
"s": 5621,
"text": "We can use the LA.eig() function in NumPy to calculate the eigenvalues and eigenvectors. It returns a tuple. The first element of this tuple is an array that stores the eigenvalues, and the second element is a 2-d array that stores the corresponding eigenvectors. In fact, in Listing 3 the column u[:,i] is the eigenvector corresponding to the eigenvalue lam[i]. Now if we check the output of Listing 3, we get:"
},
{
"code": null,
"e": 6091,
"s": 6033,
"text": "lam= [-1. -2.]u= [[ 1. -0.7071] [ 0. 0.7071]]"
},
{
"code": null,
"e": 6442,
"s": 6091,
"text": "You may have noticed that the eigenvector for λ=-1 is the same as u1, but the other one is different. That is because LA.eig() returns the normalized eigenvector. A normalized vector is a unit vector whose length is 1. But before explaining how the length can be calculated, we need to get familiar with the transpose of a matrix and the dot product."
},
{
"code": null,
"e": 6452,
"s": 6442,
"text": "Transpose"
},
{
"code": null,
"e": 6726,
"s": 6452,
"text": "The transpose of the column vector u (which is shown by u superscript T) is the row vector of u (in this article sometimes I show it as u^T). The transpose of an m×n matrix A is an n×m matrix whose columns are formed from the corresponding rows of A. For example if we have"
},
{
"code": null,
"e": 6754,
"s": 6726,
"text": "then the transpose of C is:"
},
{
"code": null,
"e": 7011,
"s": 6754,
"text": "So the transpose of a row vector becomes a column vector with the same elements and vice versa. In fact, the element in the i-th row and j-th column of the transposed matrix is equal to the element in the j-th row and i-th column of the original matrix. So"
},
{
"code": null,
"e": 7335,
"s": 7011,
"text": "In NumPy you can use the transpose() method to calculate the transpose. For example to calculate the transpose of matrix C we write C.transpose(). We can also use the transpose attribute T, and write C.T to get its transpose. The transpose has some important properties. First, the transpose of the transpose of A is A. So:"
},
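{
"code": null,
"e": null,
"s": null,
"text": "A quick check of these NumPy calls (the matrix C here is illustrative, not necessarily the C shown in the figures):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\n\nC = np.array([[1, 2],\n              [3, 4],\n              [5, 6]])\n\nprint(C.transpose())              # method form\nprint(C.T)                        # attribute form, same result\nprint(np.array_equal(C.T.T, C))   # True: the transpose of the transpose is C"
},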
{
"code": null,
"e": 7430,
"s": 7335,
"text": "In addition, the transpose of a product is the product of the transposes in the reverse order."
},
{
"code": null,
"e": 7489,
"s": 7430,
"text": "To prove it remember the matrix multiplication definition:"
},
{
"code": null,
"e": 7556,
"s": 7489,
"text": "and based on the definition of matrix transpose, the left side is:"
},
{
"code": null,
"e": 7578,
"s": 7556,
"text": "and the right side is"
},
{
"code": null,
"e": 7619,
"s": 7578,
"text": "so both sides of the equation are equal."
},
{
"code": null,
"e": 7631,
"s": 7619,
"text": "Dot product"
},
{
"code": null,
"e": 7663,
"s": 7631,
"text": "If we have two vectors u and v:"
},
{
"code": null,
"e": 7765,
"s": 7663,
"text": "The dot product (or inner product) of these vectors is defined as the transpose of u multiplied by v:"
},
{
"code": null,
"e": 7825,
"s": 7765,
"text": "Based on this definition the dot product is commutative so:"
},
{
"code": null,
"e": 7844,
"s": 7825,
"text": "Partitioned matrix"
},
{
"code": null,
"e": 7969,
"s": 7844,
"text": "When calculating the transpose of a matrix, it is usually useful to show it as a partitioned matrix. For example, the matrix"
},
{
"code": null,
"e": 7993,
"s": 7969,
"text": "can be also written as:"
},
{
"code": null,
"e": 7999,
"s": 7993,
"text": "where"
},
{
"code": null,
"e": 8325,
"s": 7999,
"text": "So we can think of each column of C as a column vector, and C can be thought of as a matrix with just one row. Now to write the transpose of C, we can simply turn this row into a column, similar to what we do for a row vector. The only difference is that each element in C is now a vector itself and should be transposed too."
},
{
"code": null,
"e": 8342,
"s": 8325,
"text": "Now we know that"
},
{
"code": null,
"e": 8346,
"s": 8342,
"text": "So:"
},
{
"code": null,
"e": 8441,
"s": 8346,
"text": "Now each row of the C^T is the transpose of the corresponding column of the original matrix C."
},
{
"code": null,
"e": 8531,
"s": 8441,
"text": "Now let matrix A be a partitioned column matrix and matrix B be a partitioned row matrix:"
},
{
"code": null,
"e": 8595,
"s": 8531,
"text": "where each column vector ai is defined as the i-th column of A:"
},
{
"code": null,
"e": 8816,
"s": 8595,
"text": "Here for each element, the first subscript refers to the row number and the second subscript to the column number. So A is an m×p matrix. In addition, B is a p×n matrix where each row vector in bi^T is the i-th row of B:"
},
{
"code": null,
"e": 9180,
"s": 8816,
"text": "Again, the first subscript refers to the row number and the second subscript to the column number. Please note that by convection, a vector is written as a column vector. So to write a row vector, we write it as the transpose of a column vector. So bi is a column vector, and its transpose is a row vector that captures the i-th row of B. Now we can calculate AB:"
},
{
"code": null,
"e": 9524,
"s": 9180,
"text": "so the product of the i-th column of A and the i-th row of B gives an m×n matrix, and all these matrices are added together to give AB which is also an m×n matrix. In fact, we can simply assume that we are multiplying a row vector A by a column vector B. As a special case, suppose that x is a column vector. Now we can calculate Ax similarly:"
},
{
"code": null,
"e": 9582,
"s": 9524,
"text": "So Ax is simply a linear combination of the columns of A."
},
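{
"code": null,
"e": null,
"s": null,
"text": "A short numerical check of this column-times-row view of matrix multiplication (the matrices are illustrative):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\n\nA = np.array([[1, 2],\n              [3, 4],\n              [5, 6]])     # 3x2\nB = np.array([[1, 0, 2],\n              [0, 1, 3]])  # 2x3\n\n# AB as the sum of the outer products of A's columns with B's rows\nAB = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))\nprint(np.array_equal(AB, A @ B))   # True"
},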
{
"code": null,
"e": 9761,
"s": 9582,
"text": "To calculate the dot product of two vectors a and b in NumPy, we can write np.dot(a,b) if both are 1-d arrays, or simply use the definition of the dot product and write a.T @ b ."
},
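{
"code": null,
"e": null,
"s": null,
"text": "For example, with two illustrative vectors:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\n\na = np.array([1, 2, 3])\nb = np.array([4, 5, 6])\n\nprint(np.dot(a, b))      # 32: dot product of two 1-d arrays\n\n# The same value from the transpose definition, using column vectors\na_col = a.reshape(-1, 1)\nb_col = b.reshape(-1, 1)\nprint(a_col.T @ b_col)   # [[32]]"
},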
{
"code": null,
"e": 9892,
"s": 9761,
"text": "Now that we are familiar with the transpose and dot product, we can define the length (also called the 2-norm) of the vector u as:"
},
{
"code": null,
"e": 9984,
"s": 9892,
"text": "To normalize a vector u, we simply divide it by its length to have the normalized vector n:"
},
{
"code": null,
"e": 10131,
"s": 9984,
"text": "The normalized vector n is still in the same direction of u, but its length is 1. Now we can normalize the eigenvector of λ=-2 that we saw before:"
},
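{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of the normalization, using the eigenvector of λ=-2 from the earlier example (the unnormalized form (-1, 1) is assumed here):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\n\nu2 = np.array([[-1.0], [1.0]])     # unnormalized eigenvector for lam = -2\nlength = np.sqrt(u2.T @ u2)[0, 0]  # 2-norm via the dot product definition\nn = u2 / length\nprint(np.round(n, 4))              # [[-0.7071], [0.7071]], as LA.eig() reports"
},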
{
"code": null,
"e": 10432,
"s": 10131,
"text": "which is the same as the output of Listing 3. As shown before, if you multiply (or divide) an eigenvector by a constant, the new vector is still an eigenvector for the same eigenvalue, so by normalizing an eigenvector corresponding to an eigenvalue, you still have an eigenvector for that eigenvalue."
},
{
"code": null,
"e": 10719,
"s": 10432,
"text": "But why eigenvectors are important to us? As mentioned before an eigenvector simplifies the matrix multiplication into a scalar multiplication. In addition, they have some more interesting properties. Let me go back to matrix A that was used in Listing 2 and calculate its eigenvectors:"
},
{
"code": null,
"e": 10904,
"s": 10719,
"text": "As you remember this matrix transformed a set of vectors forming a circle into a new set forming an ellipse (Figure 2). We will use LA.eig() to calculate the eigenvectors in Listing 4."
},
{
"code": null,
"e": 10920,
"s": 10904,
"text": "The output is :"
},
{
"code": null,
"e": 10976,
"s": 10920,
"text": "lam= [3. 2.]u= [[ 1. -0.8944] [ 0. 0.4472]]"
},
{
"code": null,
"e": 11005,
"s": 10976,
"text": "So we have two eigenvectors:"
},
{
"code": null,
"e": 11044,
"s": 11005,
"text": "and the corresponding eigenvalues are:"
},
{
"code": null,
"e": 11108,
"s": 11044,
"text": "Now we plot the eigenvectors on top of the transformed vectors:"
},
{
"code": null,
"e": 11202,
"s": 11108,
"text": "There is nothing special about these eigenvectors in Figure 3. Now let me try another matrix:"
},
{
"code": null,
"e": 11233,
"s": 11202,
"text": "Here we have two eigenvectors:"
},
{
"code": null,
"e": 11272,
"s": 11233,
"text": "and the corresponding eigenvalues are:"
},
{
"code": null,
"e": 11415,
"s": 11272,
"text": "Now we can plot the eigenvectors on top of the transformed vectors by replacing this new matrix in Listing 5. The result is shown in Figure 4."
},
{
"code": null,
"e": 11777,
"s": 11415,
"text": "This time the eigenvectors have an interesting property. We see that the eigenvectors are along the major and minor axes of the ellipse (principal axes). An ellipse can be thought of as a circle stretched or shrunk along its principal axes as shown in Figure 5, and matrix B transforms the initial circle by stretching it along u1 and u2, the eigenvectors of B."
},
{
"code": null,
"e": 12154,
"s": 11777,
"text": "But why the eigenvectors of A did not have this property? That is because B is a symmetric matrix. A symmetric matrix is a matrix that is equal to its transpose. So the elements on the main diagonal are arbitrary but for the other elements, each element on row i and column j is equal to the element on row j and column i (aij = aji). Here is an example of a symmetric matrix:"
},
{
"code": null,
"e": 12669,
"s": 12154,
"text": "A symmetric matrix is always a square matrix (n×n). You can now easily see that A was not symmetric. A symmetric matrix transforms a vector by stretching or shrinking it along its eigenvectors. In addition, we know that all the matrices transform an eigenvector by multiplying its length (or magnitude) by the corresponding eigenvalue. We know that the initial vectors in the circle have a length of 1 and both u1 and u2 are normalized, so they are part of the initial vectors x. Now their transformed vectors are:"
},
{
"code": null,
"e": 12803,
"s": 12669,
"text": "So the amount of stretching or shrinking along each eigenvector is proportional to the corresponding eigenvalue as shown in Figure 6."
},
{
"code": null,
"e": 13125,
"s": 12803,
"text": "So when you have more stretching in the direction of an eigenvector, the eigenvalue corresponding to that eigenvector will be greater. In fact, if the absolute value of an eigenvalue is greater than 1, the circle x stretches along it, and if the absolute value is less than 1, it shrinks along it. Let me try this matrix:"
},
{
"code": null,
"e": 13177,
"s": 13125,
"text": "The eigenvectors and corresponding eigenvalues are:"
},
{
"code": null,
"e": 13224,
"s": 13177,
"text": "Now if we plot the transformed vectors we get:"
},
{
"code": null,
"e": 13392,
"s": 13224,
"text": "As you see now we have stretching along u1 and shrinking along u2. The other important thing about these eigenvectors is that they can form a basis for a vector space."
},
{
"code": null,
"e": 13398,
"s": 13392,
"text": "Basis"
},
{
"code": null,
"e": 13935,
"s": 13398,
"text": "A set of vectors {v1, v2, v3 ..., vn} form a basis for a vector space V, if they are linearly independent and span V. A vector space is a set of vectors that can be added together or multiplied by scalars. This is a closed set, so when the vectors are added or multiplied by a scalar, the result still belongs to the set. The operations of vector addition and scalar multiplication must satisfy certain requirements which are not discussed here. Euclidean space R2 (in which we are plotting our vectors) is an example of a vector space."
},
{
"code": null,
"e": 14109,
"s": 13935,
"text": "When a set of vectors is linearly independent, it means that no vector in the set can be written as a linear combination of the other vectors. So it is not possible to write"
},
{
"code": null,
"e": 14418,
"s": 14109,
"text": "when some of a1, a2, .., an are not zero. In other words, none of the vi vectors in this set can be expressed in terms of the other vectors. A set of vectors spans a space if every other vector in the space can be written as a linear combination of the spanning set. So every vector s in V can be written as:"
},
{
"code": null,
"e": 14645,
"s": 14418,
"text": "A vector space V can have many different vector bases, but each basis always has the same number of basis vectors. The number of basis vectors of vector space V is called the dimension of V. In Euclidean space R2, the vectors:"
},
{
"code": null,
"e": 14994,
"s": 14645,
"text": "is the simplest example of a basis since they are linearly independent and every vector in R2 can be expressed as a linear combination of them. They are called the standard basis for R2. As a result, the dimension of R2 is 2. It can have other bases, but all of them have two vectors that are linearly independent and span it. For example, vectors:"
},
{
"code": null,
"e": 15286,
"s": 14994,
"text": "can also form a basis for R2. An important reason to find a basis for a vector space is to have a coordinate system on that. If the set of vectors B ={v1, v2, v3 ..., vn} form a basis for a vector space, then every vector x in that space can be uniquely specified using those basis vectors :"
},
{
"code": null,
"e": 15339,
"s": 15286,
"text": "Now the coordinate of x relative to this basis B is:"
},
{
"code": null,
"e": 15484,
"s": 15339,
"text": "In fact, when we are writing a vector in R2, we are already expressing its coordinate relative to the standard basis. That is because any vector"
},
{
"code": null,
"e": 15502,
"s": 15484,
"text": "can be written as"
},
{
"code": null,
"e": 15653,
"s": 15502,
"text": "Now a question comes up. If we know the coordinate of a vector relative to the standard basis, how can we find its coordinate relative to a new basis?"
},
{
"code": null,
"e": 15667,
"s": 15653,
"text": "The equation:"
},
{
"code": null,
"e": 15691,
"s": 15667,
"text": "can be also written as:"
},
{
"code": null,
"e": 15703,
"s": 15691,
"text": "The matrix:"
},
{
"code": null,
"e": 15814,
"s": 15703,
"text": "is called the change-of-coordinate matrix. The columns of this matrix are the vectors in basis B. The equation"
},
{
"code": null,
"e": 16011,
"s": 15814,
"text": "gives the coordinate of x in R^n if we know its coordinate in basis B. If we need the opposite we can multiply both sides of this equation by the inverse of the change-of-coordinate matrix to get:"
},
{
"code": null,
"e": 16261,
"s": 16011,
"text": "Now if we know the coordinate of x in R^n (which is simply x itself), we can multiply it by the inverse of the change-of-coordinate matrix to get its coordinate relative to basis B. For example, suppose that our basis set B is formed by the vectors:"
},
{
"code": null,
"e": 16283,
"s": 16261,
"text": "and we have a vector:"
},
{
"code": null,
"e": 16370,
"s": 16283,
"text": "To calculate the coordinate of x in B, first, we form the change-of-coordinate matrix:"
},
{
"code": null,
"e": 16412,
"s": 16370,
"text": "Now the coordinate of x relative to B is:"
},
{
"code": null,
"e": 16545,
"s": 16412,
"text": "Listing 6 shows how this can be calculated in NumPy. To calculate the inverse of a matrix, the function np.linalg.inv() can be used."
},
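{
"code": null,
"e": null,
"s": null,
"text": "A sketch of what such a listing could look like; the basis vectors and x are assumptions chosen here to be consistent with the printed output below, since the article shows them only as images:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\n\n# Assumed basis vectors and x (chosen to match the output below)\nv1 = np.array([[1.0], [0.0]])\nv2 = np.array([[0.7071], [0.7071]])\nx = np.array([[6.0], [2.0]])\n\nP = np.hstack([v1, v2])        # change-of-coordinate matrix\nx_B = np.linalg.inv(P) @ x     # coordinate of x relative to basis B\nprint('x_B=', np.round(x_B, 2))"
},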
{
"code": null,
"e": 16588,
"s": 16545,
"text": "The output shows the coordinate of x in B:"
},
{
"code": null,
"e": 16616,
"s": 16588,
"text": "x_B= [[4. ] [2.83]]"
},
{
"code": null,
"e": 16665,
"s": 16616,
"text": "Figure 8 shows the effect of changing the basis."
},
{
"code": null,
"e": 17263,
"s": 16665,
"text": "To find the u1-coordinate of x in basis B, we can draw a line passing from x and parallel to u2 and see where it intersects the u1 axis. u2-coordinate can be found similarly as shown in Figure 8. In an n-dimensional space, to find the coordinate of ui, we need to draw a hyper-plane passing from x and parallel to all other eigenvectors except ui and see where it intersects the ui axis. As Figure 8 (left) shows when the eigenvectors are orthogonal (like i and j in R2), we just need to draw a line that passes through point x and is perpendicular to the axis that we want to find its coordinate."
},
{
"code": null,
"e": 17296,
"s": 17263,
"text": "Properties of symmetric matrices"
},
{
"code": null,
"e": 17498,
"s": 17296,
"text": "As figures 5 to 7 show the eigenvectors of the symmetric matrices B and C are perpendicular to each other and form orthogonal vectors. This is not a coincidence and is a property of symmetric matrices."
},
{
"code": null,
"e": 17920,
"s": 17498,
"text": "An important property of the symmetric matrices is that an n×n symmetric matrix has n linearly independent and orthogonal eigenvectors, and it has n real eigenvalues corresponding to those eigenvectors. It is important to note that these eigenvalues are not necessarily different from each other and some of them can be equal. Another important property of symmetric matrices is that they are orthogonally diagonalizable."
},
{
"code": null,
"e": 17939,
"s": 17920,
"text": "Eigendecomposition"
},
{
"code": null,
"e": 18065,
"s": 17939,
"text": "A symmetric matrix is orthogonally diagonalizable. It means that if we have an n×n symmetric matrix A, we can decompose it as"
},
{
"code": null,
"e": 18447,
"s": 18065,
"text": "where D is an n×n diagonal matrix comprised of the n eigenvalues of A. P is also an n×n matrix, and the columns of P are the n linearly independent eigenvectors of A that correspond to those eigenvalues in D respectively. In other words, if u1, u2, u3 ..., un are the eigenvectors of A, and λ1, λ2, ..., λn are their corresponding eigenvalues respectively, then A can be written as"
},
{
"code": null,
"e": 18475,
"s": 18447,
"text": "This can also be written as"
},
{
"code": null,
"e": 18720,
"s": 18475,
"text": "You should notice that each ui is considered a column vector and its transpose is a row vector. So the transpose of P has been written in terms of the transpose of the columns of P. This factorization of A is called the eigendecomposition of A."
},
{
"code": null,
"e": 18766,
"s": 18720,
"text": "Let me clarify it by an example. Suppose that"
},
{
"code": null,
"e": 18791,
"s": 18766,
"text": "It has two eigenvectors:"
},
{
"code": null,
"e": 18831,
"s": 18791,
"text": "and the corresponding eigenvalues were:"
},
{
"code": null,
"e": 18854,
"s": 18831,
"text": "So D can be defined as"
},
{
"code": null,
"e": 18960,
"s": 18854,
"text": "Now the columns of P are the eigenvectors of A that correspond to those eigenvalues in D respectively. So"
},
{
"code": null,
"e": 18982,
"s": 18960,
"text": "The transpose of P is"
},
{
"code": null,
"e": 19005,
"s": 18982,
"text": "So A can be written as"
},
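{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of this factorization in NumPy; the matrix A is an assumption here, chosen to be consistent with the eigenvalues printed later in the article:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\nfrom numpy import linalg as LA\n\n# Assumed symmetric matrix (the article shows A only as an image)\nA = np.array([[3.0, 1.0],\n              [1.0, 2.0]])\n\nlam, P = LA.eig(A)   # columns of P are the eigenvectors of A\nD = np.diag(lam)     # diagonal matrix of the eigenvalues\n\n# For a symmetric A, P is orthogonal, so its inverse is its transpose\nprint(np.round(P @ D @ P.T, 4))   # recovers A up to rounding"
},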
{
"code": null,
"e": 19528,
"s": 19005,
"text": "It is important to note that if you do the multiplications on the right side of the above equation, you will not get A exactly. That is because we have the rounding errors in NumPy to calculate the irrational numbers that usually show up in the eigenvalues and eigenvectors, and we have also rounded the values of the eigenvalues and eigenvectors here, however, in theory, both sides should be equal. But what does it mean? To understand the eigendecomposition better, we can take a look at its geometrical interpretation."
},
{
"code": null,
"e": 19577,
"s": 19528,
"text": "Geometrical interpretation of eigendecomposition"
},
{
"code": null,
"e": 19725,
"s": 19577,
"text": "To better understand the eigendecomposition equation, we need to first simplify it. If we assume that each eigenvector ui is an n × 1 column vector"
},
{
"code": null,
"e": 19772,
"s": 19725,
"text": "then the transpose of ui is a 1 × n row vector"
},
{
"code": null,
"e": 19797,
"s": 19772,
"text": "and their multiplication"
},
{
"code": null,
"e": 19890,
"s": 19797,
"text": "becomes an n×n matrix. First, we calculate DP^T to simplify the eigendecomposition equation:"
},
{
"code": null,
"e": 19935,
"s": 19890,
"text": "Now the eigendecomposition equation becomes:"
},
{
"code": null,
"e": 20128,
"s": 19935,
"text": "So the n×n matrix A can be broken into n matrices with the same shape (n×n), and each of these matrices has a multiplier which is equal to the corresponding eigenvalue λi. Each of the matrices"
},
{
"code": null,
"e": 20503,
"s": 20128,
"text": "is called a projection matrix. Imagine that we have a vector x and a unit vector v. The inner product of v and x which is equal to v.x=v^T x gives the scalar projection of x onto v (which is the length of the vector projection of x into v), and if we multiply it by v again, it gives a vector which is called the orthogonal projection of x onto v. This is shown in Figure 9."
},
{
"code": null,
"e": 20543,
"s": 20503,
"text": "So when v is a unit vector, multiplying"
},
{
"code": null,
"e": 20726,
"s": 20543,
"text": "by x, will give the orthogonal projection of x onto v, and that is why it is called the projection matrix. So multiplying ui ui^T by x, we get the orthogonal projection of x onto ui."
},
{
"code": null,
"e": 20801,
"s": 20726,
"text": "Now let me calculate the projection matrices of matrix A mentioned before."
},
{
"code": null,
"e": 20866,
"s": 20801,
"text": "We already had calculated the eigenvalues and eigenvectors of A."
},
{
"code": null,
"e": 20976,
"s": 20866,
"text": "Using the output of Listing 7, we get the first term in the eigendecomposition equation (we call it A1 here):"
},
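{
"code": null,
"e": null,
"s": null,
"text": "Continuing the sketch above, the first term can be computed like this (it assumes lam[0] is the larger eigenvalue; reorder if LA.eig() returns them differently):"
},
{
"code": null,
"e": null,
"s": null,
"text": "u1 = P[:, 0].reshape(-1, 1)   # eigenvector for the larger eigenvalue\nA1 = lam[0] * (u1 @ u1.T)     # lam1 * u1 u1^T, the first rank-1 term\nprint(np.round(A1, 4))"
},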
{
"code": null,
"e": 21170,
"s": 20976,
"text": "As you see it is also a symmetric matrix. In fact, all the projection matrices in the eigendecomposition equation are symmetric. That is because the element in row m and column n of each matrix"
},
{
"code": null,
"e": 21182,
"s": 21170,
"text": "is equal to"
},
{
"code": null,
"e": 21388,
"s": 21182,
"text": "and the element at row n and column m has the same value which makes it a symmetric matrix. This projection matrix has some interesting properties. First, we can calculate its eigenvalues and eigenvectors:"
},
{
"code": null,
"e": 21452,
"s": 21388,
"text": "lam= [ 3.618 0. ]u= [[ 0.8507 -0.5257] [ 0.5257 0.8507]]"
},
{
"code": null,
"e": 21784,
"s": 21452,
"text": "As you see, it has two eigenvalues (since it is a 2×2 symmetric matrix). One of them is zero and the other is equal to λ1 of the original matrix A. In addition, the eigenvectors are exactly the same eigenvectors of A. This is not a coincidence. Suppose we get the i-th term in the eigendecomposition equation and multiply it by ui."
},
{
"code": null,
"e": 21913,
"s": 21784,
"text": "We know that ui is an eigenvector and it is normalized, so its length and its inner product with itself are both equal to 1. So:"
},
{
"code": null,
"e": 22030,
"s": 21913,
"text": "Now if you look at the definition of the eigenvectors, this equation means that one of the eigenvalues of the matrix"
},
{
"code": null,
"e": 22246,
"s": 22030,
"text": "is λi and the corresponding eigenvector is ui. But this matrix is an n×n symmetric matrix and should have n eigenvalues and eigenvectors. Now we can multiply it by any of the remaining (n-1) eigenvalues of A to get:"
},
{
"code": null,
"e": 22535,
"s": 22246,
"text": "where i ≠ j. We know that the eigenvalues of A are orthogonal which means each pair of them are perpendicular. The inner product of two perpendicular vectors is zero (since the scalar projection of one onto the other should be zero). So the inner product of ui and uj is zero, and we get"
},
{
"code": null,
"e": 22652,
"s": 22535,
"text": "which means that uj is also an eigenvector and its corresponding eigenvalue is zero. So we conclude that each matrix"
},
{
"code": null,
"e": 23353,
"s": 22652,
"text": "in the eigendecomposition equation is a symmetric n×n matrix with n eigenvectors. The eigenvectors are the same as the original matrix A which are u1, u2, ... un. The corresponding eigenvalue of ui is λi (which is the same as A), but all the other eigenvalues are zero. Now, remember how a symmetric matrix transforms a vector. It will stretch or shrink the vector along its eigenvectors, and the amount of stretching or shrinking is proportional to the corresponding eigenvalue. So this matrix will stretch a vector along ui. But since the other eigenvalues are zero, it will shrink it to zero in those directions. Let me go back to matrix A and plot the transformation effect of A1 using Listing 9."
},
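{
"code": null,
"e": null,
"s": null,
"text": "A sketch of such a plot, continuing with A1 from the snippet above:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\nimport matplotlib.pyplot as plt\n\nt = np.linspace(0, 2 * np.pi, 200)\nx = np.vstack([np.cos(t), np.sin(t)])   # unit circle, one vector per column\n\nAx = A1 @ x                             # transform every vector by A1\nplt.plot(x[0], x[1], label='x')\nplt.plot(Ax[0], Ax[1], label='A1 x')\nplt.axis('equal')\nplt.legend()\nplt.show()"
},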
{
"code": null,
"e": 23661,
"s": 23353,
"text": "As you see, the initial circle is stretched along u1 and shrunk to zero along u2. So the result of this transformation is a straight line, not an ellipse. This is consistent with the fact that A1 is a projection matrix and should project everything onto u1, so the result should be a straight line along u1."
},
{
"code": null,
"e": 23666,
"s": 23661,
"text": "Rank"
},
{
"code": null,
"e": 23871,
"s": 23666,
"text": "Figure 10 shows an interesting example in which the 2×2 matrix A1 is multiplied by a 2-d vector x, but the transformed vector Ax is a straight line. Here is another example. Suppose that we have a matrix:"
},
{
"code": null,
"e": 23925,
"s": 23871,
"text": "Figure 11 shows how it transforms the unit vectors x."
},
{
"code": null,
"e": 24228,
"s": 23925,
"text": "So it acts as a projection matrix and projects all the vectors in x on the line y=2x. That is because the columns of F are not linear independent. In fact, if the columns of F are called f1 and f2 respectively, then we have f1=2f2. Remember that we write the multiplication of a matrix and a vector as:"
},
{
"code": null,
"e": 24567,
"s": 24228,
"text": "So unlike the vectors in x which need two coordinates, Fx only needs one coordinate and exists in a 1-d space. In general, an m×n matrix does not necessarily transform an n-dimensional vector into anther m-dimensional vector. The dimension of the transformed vector can be lower if the columns of that matrix are not linearly independent."
},
{
"code": null,
"e": 24914,
"s": 24567,
"text": "The column space of matrix A written as Col A is defined as the set of all linear combinations of the columns of A, and since Ax is also a linear combination of the columns of A, Col A is the set of all vectors in Ax. The number of basis vectors of Col A or the dimension of Col A is called the rank of A. So the rank of A is the dimension of Ax."
},
{
"code": null,
"e": 25369,
"s": 24914,
"text": "The rank of A is also the maximum number of linearly independent columns of A. That is because we can write all the dependent columns as a linear combination of these linearly independent columns, and Ax which is a linear combination of all the columns can be written as a linear combination of these linearly independent columns. So they span Ax and form a basis for col A, and the number of these vectors becomes the dimension of col of A or rank of A."
},
{
"code": null,
"e": 25487,
"s": 25369,
"text": "In the previous example, the rank of F is 1. In addition, in the eigendecomposition equation, the rank of each matrix"
},
{
"code": null,
"e": 25681,
"s": 25487,
"text": "is 1. Remember that they only have one non-zero eigenvalue and that is not a coincidence. It can be shown that the rank of a symmetric matrix is equal to the number of its non-zero eigenvalues."
},
{
"code": null,
"e": 25854,
"s": 25681,
"text": "Now we go back to the eigendecomposition equation again. Suppose that we apply our symmetric matrix A to an arbitrary vector x. Now the eigendecomposition equation becomes:"
},
{
"code": null,
"e": 25975,
"s": 25854,
"text": "Each of the eigenvectors ui is normalized, so they are unit vectors. Now in each term of the eigendecomposition equation"
},
{
"code": null,
"e": 26229,
"s": 25975,
"text": "gives a new vector which is the orthogonal projection of x onto ui. Then this vector is multiplied by λi. Since λi is a scalar, multiplying it by a vector, only changes the magnitude of that vector, not its direction. So λi only changes the magnitude of"
},
{
"code": null,
"e": 26255,
"s": 26229,
"text": "Finally all the n vectors"
},
{
"code": null,
"e": 26323,
"s": 26255,
"text": "are summed together to give Ax. This process is shown in Figure 12."
},
{
"code": null,
"e": 26661,
"s": 26323,
"text": "So the eigendecomposition mathematically explains an important property of the symmetric matrices that we saw in the plots before. A symmetric matrix transforms a vector by stretching or shrinking it along its eigenvectors, and the amount of stretching or shrinking along each eigenvector is proportional to the corresponding eigenvalue."
},
{
"code": null,
"e": 27379,
"s": 26661,
"text": "In addition, the eigendecomposition can break an n×n symmetric matrix into n matrices with the same shape (n×n) multiplied by one of the eigenvalues. The eigenvalues play an important role here since they can be thought of as a multiplier. The projection matrix only projects x onto each ui, but the eigenvalue scales the length of the vector projection (ui ui^Tx). The bigger the eigenvalue, the bigger the length of the resulting vector (λiui ui^Tx) is, and the more weight is given to its corresponding matrix (ui ui^T). So we can approximate our original symmetric matrix A by summing the terms which have the highest eigenvalues. For example, if we assume the eigenvalues λi have been sorted in descending order,"
},
{
"code": null,
"e": 27508,
"s": 27379,
"text": "then we can only take the first k terms in the eigendecomposition equation to have a good approximation for the original matrix:"
},
{
"code": null,
"e": 27696,
"s": 27508,
"text": "where Ak is the approximation of A with the first k terms. If we only include the first k eigenvalues and eigenvectors in the original eigendecomposition equation, we get the same result:"
},
{
"code": null,
"e": 27963,
"s": 27696,
"text": "Now Dk is a k×k diagonal matrix comprised of the first k eigenvalues of A, Pk is an n×k matrix comprised of the first k eigenvectors of A, and its transpose becomes a k×n matrix. So their multiplication still gives an n×n matrix which is the same approximation of A."
},
{
"code": null,
"e": 28182,
"s": 27963,
"text": "If in the original matrix A, the other (n-k) eigenvalues that we leave out are very small and close to zero, then the approximated matrix is very similar to the original matrix, and we have a good approximation. Matrix"
},
{
"code": null,
"e": 28187,
"s": 28182,
"text": "with"
},
{
"code": null,
"e": 28428,
"s": 28187,
"text": "is an example. Here λ2 is rather small. We call the vectors in the unit circle x, and plot the transformation of them by the original matrix (Cx). Then we approximate matrix C with the first term in its eigendecomposition equation which is:"
},
{
"code": null,
"e": 28596,
"s": 28428,
"text": "and plot the transformation of s by that. As you see in Figure 13, the result of the approximated matrix which is a straight line is very close to the original matrix."
},
{
"code": null,
"e": 28971,
"s": 28596,
"text": "Why the eigendecomposition equation is valid and why it needs a symmetric matrix? Remember the important property of symmetric matrices. Suppose that x is an n×1 column vector. If A is an n×n symmetric matrix, then it has n linearly independent and orthogonal eigenvectors which can be used as a new basis. So we can now write the coordinate of x relative to this new basis:"
},
{
"code": null,
"e": 29096,
"s": 28971,
"text": "and based on the definition of basis, any vector x can be uniquely written as a linear combination of the eigenvectors of A."
},
{
"code": null,
"e": 29489,
"s": 29096,
"text": "But the eigenvectors of a symmetric matrix are orthogonal too. So to find each coordinate ai, we just need to draw a line perpendicular to an axis of ui through point x and see where it intersects it (refer to Figure 8). As mentioned before this can be also done using the projection matrix. So each term ai is equal to the dot product of x and ui (refer to Figure 9), and x can be written as"
},
{
"code": null,
"e": 29702,
"s": 29489,
"text": "So we need a symmetric matrix to express x as a linear combination of the eigenvectors in the above equation. Now if we multiply A by x, we can factor out the ai terms since they are scalar quantities. So we get:"
},
{
"code": null,
"e": 29770,
"s": 29702,
"text": "and since the ui vectors are the eigenvectors of A, we finally get:"
},
{
"code": null,
"e": 30212,
"s": 29770,
"text": "which is the eigendecomposition equation. Whatever happens after the multiplication by A is true for all matrices, and does not need a symmetric matrix. We need an n×n symmetric matrix since it has n real eigenvalues plus n linear independent and orthogonal eigenvectors that can be used as a new basis for x. When you have a non-symmetric matrix you do not have such a combination. For example, suppose that you have a non-symmetric matrix:"
},
{
"code": null,
"e": 30287,
"s": 30212,
"text": "If you calculate the eigenvalues and eigenvectors of this matrix, you get:"
},
{
"code": null,
"e": 30387,
"s": 30287,
"text": "lam= [2.5+0.866j 2.5-0.866j]u= [[0.7071+0.j 0.7071-0.j ] [0.3536-0.6124j 0.3536+0.6124j]]"
},
{
"code": null,
"e": 30473,
"s": 30387,
"text": "which means you have no real eigenvalues to do the decomposition. Another example is:"
},
{
"code": null,
"e": 30486,
"s": 30473,
"text": "and you get:"
},
{
"code": null,
"e": 30526,
"s": 30486,
"text": "lam= [2. 2.]u= [[ 1. -1.] [ 0. 0.]]"
},
{
"code": null,
"e": 30761,
"s": 30526,
"text": "Here the eigenvectors are not linearly independent. In fact u1= -u2. So you cannot reconstruct A like Figure 11 using only one eigenvector. In addition, it does not show a direction of stretching for this matrix as shown in Figure 14."
},
{
"code": null,
"e": 30788,
"s": 30761,
"text": "Finally, remember that for"
},
{
"code": null,
"e": 30796,
"s": 30788,
"text": "we had:"
},
{
"code": null,
"e": 30862,
"s": 30796,
"text": "lam= [ 7.8151 -2.8151]u= [[ 0.639 -0.5667] [ 0.7692 0.8239]]"
},
{
"code": null,
"e": 31054,
"s": 30862,
"text": "Here the eigenvectors are linearly independent, but they are not orthogonal (refer to Figure 3), and they do not show the correct direction of stretching for this matrix after transformation."
},
{
"code": null,
"e": 31387,
"s": 31054,
"text": "The eigendecomposition method is very useful, but only works for a symmetric matrix. A symmetric matrix is always a square matrix, so if you have a matrix that is not square, or a square but non-symmetric matrix, then you cannot use the eigendecomposition method to approximate it with other matrices. SVD can overcome this problem."
},
{
"code": null,
"e": 31403,
"s": 31387,
"text": "Singular Values"
},
{
"code": null,
"e": 31611,
"s": 31403,
"text": "Before talking about SVD, we should find a way to calculate the stretching directions for a non-symmetric matrix. Suppose that A is an m×n matrix which is not necessarily symmetric. Then it can be shown that"
},
{
"code": null,
"e": 31738,
"s": 31611,
"text": "is an n×n symmetric matrix. Remember that the transpose of a product is the product of the transposes in the reverse order. So"
},
{
"code": null,
"e": 31944,
"s": 31738,
"text": "So A^T A is equal to its transpose, and it is a symmetric matrix. we want to calculate the stretching directions for a non-symmetric matrix., but how can we define the stretching directions mathematically?"
},
{
"code": null,
"e": 32653,
"s": 31944,
"text": "So far, we only focused on the vectors in a 2-d space, but we can use the same concepts in an n-d space. Here I focus on a 3-d space to be able to visualize the concepts. Now the column vectors have 3 elements. Initially, we have a sphere that contains all the vectors that are one unit away from the origin as shown in Figure 15. If we call these vectors x then ||x||=1. Now if we multiply them by a 3×3 symmetric matrix, Ax becomes a 3-d oval. The first direction of stretching can be defined as the direction of the vector which has the greatest length in this oval (Av1 in Figure 15). In fact, Av1 is the maximum of ||Ax|| over all unit vectors x. This vector is the transformation of the vector v1 by A."
},
{
"code": null,
"e": 33297,
"s": 32653,
"text": "The second direction of stretching is along the vector Av2. Av2 is the maximum of ||Ax|| over all vectors in x which are perpendicular to v1. So among all the vectors in x, we maximize ||Ax|| with this constraint that x is perpendicular to v1. Finally, v3 is the vector that is perpendicular to both v1 and v2 and gives the greatest length of Ax with these constraints. The direction of Av3 determines the third direction of stretching. So generally in an n-dimensional space, the i-th direction of stretching is the direction of the vector Avi which has the greatest length and is perpendicular to the previous (i-1) directions of stretching."
},
{
"code": null,
"e": 33743,
"s": 33297,
"text": "Now let A be an m×n matrix. We showed that A^T A is a symmetric matrix, so it has n real eigenvalues and n linear independent and orthogonal eigenvectors which can form a basis for the n-element vectors that it can transform (in R^n space). We call these eigenvectors v1, v2, ... vn and we assume they are normalized. For each of these eigenvectors we can use the definition of length and the rule for the product of transposed matrices to have:"
},
{
"code": null,
"e": 33803,
"s": 33743,
"text": "Now we assume that the corresponding eigenvalue of vi is λi"
},
{
"code": null,
"e": 33828,
"s": 33803,
"text": "But vi is normalized, so"
},
{
"code": null,
"e": 33841,
"s": 33828,
"text": "As a result:"
},
{
"code": null,
"e": 33953,
"s": 33841,
"text": "This result shows that all the eigenvalues are positive. Now assume that we label them in decreasing order, so:"
},
{
"code": null,
"e": 34069,
"s": 33953,
"text": "Now we define the singular value of A as the square root of λi (the eigenvalue of A^T A), and we denote it with σi."
},
{
"code": null,
"e": 34289,
"s": 34069,
"text": "So the singular values of A are the length of vectors Avi. Now we can summarize an important result which forms the backbone of the SVD method. It can be shown that the maximum value of ||Ax|| subject to the constraints"
},
{
"code": null,
"e": 34433,
"s": 34289,
"text": "is σk, and this maximum is attained at vk. For the constraints, we used the fact that when x is perpendicular to vi, their dot product is zero."
},
{
"code": null,
"e": 34669,
"s": 34433,
"text": "So if vi is the eigenvector of A^T A (ordered based on its corresponding singular value), and assuming that ||x||=1, then Avi is showing a direction of stretching for Ax, and the corresponding singular value σi gives the length of Avi."
},
{
"code": null,
"e": 34859,
"s": 34669,
"text": "The singular values can also determine the rank of A. Suppose that the number of non-zero singular values is r. Since they are positive and labeled in decreasing order, we can write them as"
},
{
"code": null,
"e": 34879,
"s": 34859,
"text": "which correspond to"
},
{
"code": null,
"e": 35198,
"s": 34879,
"text": "and each λi is the corresponding eigenvalue of vi. Then it can be shown that rank A which is the number of vectors that form the basis of Ax is r. It can be also shown that the set {Av1, Av2, ..., Avr} is an orthogonal basis for Ax (the Col A). So the vectors Avi are perpendicular to each other as shown in Figure 15."
},
{
"code": null,
"e": 35241,
"s": 35198,
"text": "Now we go back to the non-symmetric matrix"
},
{
"code": null,
"e": 35684,
"s": 35241,
"text": "We plotted the eigenvectors of A in Figure 3, and it was mentioned that they do not show the directions of stretching for Ax. In Figure 16 the eigenvectors of A^T A have been plotted on the left side (v1 and v2). Since A^T A is a symmetric matrix, these vectors show the directions of stretching for it. On the right side, the vectors Av1 and Av2 have been plotted, and it is clear that these vectors show the directions of stretching for Ax."
},
{
"code": null,
"e": 35763,
"s": 35684,
"text": "So Avi shows the direction of stretching of A no matter A is symmetric or not."
},
{
"code": null,
"e": 35961,
"s": 35763,
"text": "Now imagine that matrix A is symmetric and is equal to its transpose. In addition, suppose that its i-th eigenvector is ui and the corresponding eigenvalue is λi. If we multiply A^T A by ui we get:"
},
{
"code": null,
"e": 36424,
"s": 35961,
"text": "which means that ui is also an eigenvector of A^T A, but its corresponding eigenvalue is λi2. So when A is symmetric, instead of calculating Avi (where vi is the eigenvector of A^T A) we can simply use ui (the eigenvector of A) to have the directions of stretching, and this is exactly what we did for the eigendecomposition process. Now that we know how to calculate the directions of stretching for a non-symmetric matrix, we are ready to see the SVD equation."
},
{
"code": null,
"e": 36459,
"s": 36424,
"text": "Singular Value Decomposition (SVD)"
},
{
"code": null,
"e": 36629,
"s": 36459,
"text": "Let A be an m×n matrix and rank A = r. So the number of non-zero singular values of A is r. Since they are positive and labeled in decreasing order, we can write them as"
},
{
"code": null,
"e": 36635,
"s": 36629,
"text": "where"
},
{
"code": null,
"e": 36839,
"s": 36635,
"text": "We know that each singular value σi is the square root of the λi (eigenvalue of A^TA), and corresponds to an eigenvector vi with the same order. Now we can write the singular value decomposition of A as:"
},
{
"code": null,
"e": 36893,
"s": 36839,
"text": "where V is an n×n matrix that its columns are vi. So:"
},
{
"code": null,
"e": 37112,
"s": 36893,
"text": "We call a set of orthogonal and normalized vectors an orthonormal set. So the set {vi} is an orthonormal set. A matrix whose columns are an orthonormal set is called an orthogonal matrix, and V is an orthogonal matrix."
},
{
"code": null,
"e": 37153,
"s": 37112,
"text": "Σ is an m×n diagonal matrix of the form:"
},
{
"code": null,
"e": 37290,
"s": 37153,
"text": "So we first make an r × r diagonal matrix with diagonal entries of σ1, σ2, ..., σr. Then we pad it with zero to make it an m × n matrix."
},
{
"code": null,
"e": 37459,
"s": 37290,
"text": "We also know that the set {Av1, Av2, ..., Avr} is an orthogonal basis for Col A, and σi = ||Avi||. So we can normalize the Avi vectors by dividing them by their length:"
},
{
"code": null,
"e": 38114,
"s": 37459,
"text": "Now we have a set {u1, u2, ..., ur} which is an orthonormal basis for Ax which is r-dimensional. We know that A is an m × n matrix, and the rank of A can be m at most (when all the columns of A are linearly independent). Since we need an m×m matrix for U, we add (m-r) vectors to the set of ui to make it a normalized basis for an m-dimensional space R^m (There are several methods that can be used for this purpose. For example we can use the Gram-Schmidt Process. However, explaining it is beyond the scope of this article). So now we have an orthonormal basis {u1, u2, ... ,um}. These vectors will be the columns of U which is an orthogonal m×m matrix"
},
{
"code": null,
"e": 38151,
"s": 38114,
"text": "So in the end, we can decompose A as"
},
{
"code": null,
"e": 38211,
"s": 38151,
"text": "To better understand this equation, we need to simplify it:"
},
{
"code": null,
"e": 38444,
"s": 38211,
"text": "We know that σi is a scalar; ui is an m-dimensional column vector, and vi is an n-dimensional column vector. So each σiui vi^T is an m×n matrix, and the SVD equation decomposes the matrix A into r matrices with the same shape (m×n)."
},
{
"code": null,
"e": 38550,
"s": 38444,
"text": "First, let me show why this equation is valid. If we multiply both sides of the SVD equation by x we get:"
},
{
"code": null,
"e": 38686,
"s": 38550,
"text": "We know that the set {u1, u2, ..., ur} is an orthonormal basis for Ax. So the vector Ax can be written as a linear combination of them."
},
{
"code": null,
"e": 38812,
"s": 38686,
"text": "and since ui vectors are orthogonal, each term ai is equal to the dot product of Ax and ui (scalar projection of Ax onto ui):"
},
{
"code": null,
"e": 38834,
"s": 38812,
"text": "but we also know that"
},
{
"code": null,
"e": 38892,
"s": 38834,
"text": "So by replacing that into the previous equation, we have:"
},
{
"code": null,
"e": 39016,
"s": 38892,
"text": "We also know that vi is the eigenvector of A^T A and its corresponding eigenvalue λi is the square of the singular value σi"
},
{
"code": null,
"e": 39051,
"s": 39016,
"text": "But dot product is commutative, so"
},
{
"code": null,
"e": 39241,
"s": 39051,
"text": "Notice that vi^Tx gives the scalar projection of x onto vi, and the length is scaled by the singular value. Now if we replace the ai value into the equation for Ax, we get the SVD equation:"
},
{
"code": null,
"e": 39973,
"s": 39241,
"text": "So each ai = σivi ^Tx is the scalar projection of Ax onto ui, and if it is multiplied by ui, the result is a vector which is the orthogonal projection of Ax onto ui. The singular value σi scales the length of this vector along ui. Remember that in the eigendecomposition equation, each ui ui^T was a projection matrix that would give the orthogonal projection of x onto ui. Here σivi ^T can be thought as a projection matrix that takes x, but projects Ax onto ui. Since it projects all the vectors on ui, its rank is 1. Figure 17 summarizes all the steps required for SVD. We start by picking a random 2-d vector x1 from all the vectors that have a length of 1 in x (Figure 17–1). Then we try to calculate Ax1 using the SVD method."
},
{
"code": null,
"e": 40331,
"s": 39973,
"text": "First, we calculate the eigenvalues (λ1, λ2) and eigenvectors (v1, v2) of A^TA. We know that the singular values are the square root of the eigenvalues (σi2=λi) as shown in (Figure 17–2). Av1 and Av2 show the directions of stretching of Ax, and u1 and u2 are the unit vectors of Av1 and Av2 (Figure 17–4). The orthogonal projection of Ax1 onto u1 and u2 are"
},
{
"code": null,
"e": 40405,
"s": 40331,
"text": "respectively (Figure 17–5), and by simply adding them together we get Ax1"
},
{
"code": null,
"e": 40432,
"s": 40405,
"text": "as shown in (Figure 17–6)."
},
{
"code": null,
"e": 40534,
"s": 40432,
"text": "Here is an example showing how to calculate the SVD of a matrix in Python. We want to find the SVD of"
},
{
"code": null,
"e": 40733,
"s": 40534,
"text": "This is a 2×3 matrix. So x is a 3-d column vector, but Ax is a not 3-dimensional vector, and x and Ax exist in different vector spaces. First, we calculate the eigenvalues and eigenvectors of A^T A."
},
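{
"code": null,
"e": null,
"s": null,
"text": "A sketch of this step; the entries of A are taken from the reconstruction output shown at the end of this example:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\nfrom numpy import linalg as LA\n\nA = np.array([[4.0, 1.0, 3.0],\n              [8.0, 3.0, -2.0]])\n\nlam, v = LA.eig(A.T @ A)   # A^T A is symmetric, so lam is real\nprint('lam=', np.round(lam, 4))\nprint('v=', np.round(v, 4))"
},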
{
"code": null,
"e": 40748,
"s": 40733,
"text": "The output is:"
},
{
"code": null,
"e": 40867,
"s": 40748,
"text": "lam= [90.1167 0. 12.8833]v= [[ 0.9415 0.3228 0.0969] [ 0.3314 -0.9391 -0.0906] [-0.0617 -0.1174 0.9912]]"
},
{
"code": null,
"e": 41115,
"s": 40867,
"text": "As you see the 2nd eigenvalue is zero. Since A^T A is a symmetric matrix and has two non-zero eigenvalues, its rank is 2. Figure 18 shows two plots of A^T Ax from different angles. Since the rank of A^TA is 2, all the vectors A^TAx lie on a plane."
},
{
"code": null,
"e": 41293,
"s": 41115,
"text": "Listing 11 shows how to construct the matrices Σ and V. We first sort the eigenvalues in descending order. The columns of V are the corresponding eigenvectors in the same order."
},
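{
"code": null,
"e": null,
"s": null,
"text": "A sketch of these steps, continuing from the previous snippet (the tolerance used to filter out the zero eigenvalue is an assumed detail):"
},
{
"code": null,
"e": null,
"s": null,
"text": "order = np.argsort(lam)[::-1]        # indices of eigenvalues, descending\nlam_sorted = lam[order]\nV = v[:, order]                      # reorder the eigenvectors to match\n\n# Non-zero singular values are the square roots of the non-zero eigenvalues\nsigma = np.sqrt(lam_sorted[lam_sorted > 1e-10])\n\n# Sigma has the same shape as A: a 2x2 diagonal block padded with zeros\nSigma = np.zeros(A.shape)\nSigma[:len(sigma), :len(sigma)] = np.diag(sigma)\nprint('Sigma=', Sigma)\nprint('V=', np.round(V, 4))"
},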
{
"code": null,
"e": 41572,
"s": 41293,
"text": "Then we filter the non-zero eigenvalues and take the square root of them to get the non-zero singular values. We know that Σ should be a 3×3 matrix. So we place the two non-zero singular values in a 2×2 diagonal matrix and pad it with zero to have a 3 × 3 matrix. The output is:"
},
{
"code": null,
"e": 41722,
"s": 41572,
"text": "Sigma= [[9.493 0. 0. ] [0. 3.5893 0. ]]V= [[ 0.9415 0.0969 0.3228] [ 0.3314 -0.0906 -0.9391] [-0.0617 0.9912 -0.1174]]"
},
{
"code": null,
"e": 42046,
"s": 41722,
"text": "To construct V, we take the vi vectors corresponding to the r non-zero singular values of A and divide them by their corresponding singular values. Since A is a 2×3 matrix, U should be a 2×2 matrix. We have 2 non-zero singular values, so the rank of A is 2 and r=2. As a result, we already have enough vi vectors to form U."
},
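{
"code": null,
"e": null,
"s": null,
"text": "Continuing the sketch, U can be built directly from this relation:"
},
{
"code": null,
"e": null,
"s": null,
"text": "# ui = A vi / sigma_i for each non-zero singular value\nU = np.column_stack([A @ V[:, i] / sigma[i] for i in range(len(sigma))])\nprint('U=', np.round(U, 4))"
},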
{
"code": null,
"e": 42061,
"s": 42046,
"text": "The output is:"
},
{
"code": null,
"e": 42105,
"s": 42061,
"text": "U= [[ 0.4121 0.9111] [ 0.9111 -0.4121]]"
},
{
"code": null,
"e": 42145,
"s": 42105,
"text": "Finally, we get the decomposition of A:"
},
{
"code": null,
"e": 42352,
"s": 42145,
"text": "We really did not need to follow all these steps. NumPy has a function called svd() which can do the same thing for us. Listing 13 shows how we can use this function to calculate the SVD of matrix A easily."
},
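{
"code": null,
"e": null,
"s": null,
"text": "A sketch of what that listing does:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\n\nA = np.array([[4.0, 1.0, 3.0],\n              [8.0, 3.0, -2.0]])\n\nU, s, VT = np.linalg.svd(A)   # s holds the singular values; VT is V transposed\nprint('U=', np.round(U, 4))\nprint('s=', np.round(s, 4))\nprint('V=', np.round(VT.T, 4))"
},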
{
"code": null,
"e": 42367,
"s": 42352,
"text": "The output is:"
},
{
"code": null,
"e": 42514,
"s": 42367,
"text": "U= [[-0.4121 -0.9111] [-0.9111 0.4121]]s= [9.493 3.5893]V [[-0.9415 -0.0969 -0.3228] [-0.3314 0.0906 0.9391] [ 0.0617 -0.9912 0.1174]]"
},
{
"code": null,
"e": 43313,
"s": 42514,
"text": "You should notice a few things in the output. First, This function returns an array of singular values that are on the main diagonal of Σ, not the matrix Σ. In addition, it returns V^T, not V, so I have printed the transpose of the array VT that it returns. Finally, the ui and vi vectors reported by svd() have the opposite sign of the ui and vi vectors that were calculated in Listing 10-12. Remember that if vi is an eigenvector for an eigenvalue, then (-1)vi is also an eigenvector for the same eigenvalue, and its length is also the same. So if vi is normalized, (-1)vi is normalized too. In fact, in Listing 10 we calculated vi with a different method and svd() is just reporting (-1)vi which is still correct. Since ui=Avi/σi, the set of ui reported by svd() will have the opposite sign too."
},
{
"code": null,
"e": 43402,
"s": 43313,
"text": "You can easily construct the matrix Σ and check that multiplying these matrices gives A."
},
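{
"code": null,
"e": null,
"s": null,
"text": "For example, continuing from the previous snippet:"
},
{
"code": null,
"e": null,
"s": null,
"text": "# Rebuild the 2x3 Sigma from s and check that U Sigma V^T gives A back\nSigma = np.zeros(A.shape)\nSigma[:len(s), :len(s)] = np.diag(s)\nprint('Reconstructed A=', np.round(U @ Sigma @ VT, 4))"
},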
{
"code": null,
"e": 43466,
"s": 43402,
"text": "Reconstructed A= [[ 4. 1. 3.] [ 8. 3. -2.]]"
},
{
"code": null,
"e": 43798,
"s": 43466,
"text": "In Figure 19, you see a plot of x which is the vectors in a unit sphere and Ax which is the set of 2-d vectors produced by A. The vectors u1 and u2 show the directions of stretching. The ellipse produced by Ax is not hollow like the ones that we saw before (for example in Figure 6), and the transformed vectors fill it completely."
},
{
"code": null,
"e": 44130,
"s": 43798,
"text": "Similar to the eigendecomposition method, we can approximate our original matrix A by summing the terms which have the highest singular values. So we can use the first k terms in the SVD equation, using the k highest singular values which means we only include the first k vectors in U and V matrices in the decomposition equation:"
},
{
"code": null,
"e": 44495,
"s": 44130,
"text": "We know that the set {u1, u2, ..., ur} forms a basis for Ax. So when we pick k vectors from this set, Ak x is written as a linear combination of u1, u2, ... uk. So they span Ak x and since they are linearly independent they form a basis for Ak x (or col A). So the rank of Ak is k, and by picking the first k singular values, we approximate A with a rank-k matrix."
},
{
"code": null,
"e": 44562,
"s": 44495,
"text": "As an example, suppose that we want to calculate the SVD of matrix"
},
{
"code": null,
"e": 45028,
"s": 44562,
"text": "Again x is the vectors in a unit sphere (Figure 19 left). The singular values are σ1=11.97, σ2=5.57, σ3=3.25, and the rank of A is 3. So Ax is an ellipsoid in 3-d space as shown in Figure 20 (left). If we approximate it using the first singular value, the rank of Ak will be one and Ak multiplied by x will be a line (Figure 20 right). If we only use the first two singular values, the rank of Ak will be 2 and Ak multiplied by x will be a plane (Figure 20 middle)."
},
{
"code": null,
"e": 45258,
"s": 45028,
"text": "It is important to note that if we have a symmetric matrix, the SVD equation is simplified into the eigendecomposition equation. Suppose that the symmetric matrix A has eigenvectors vi with the corresponding eigenvalues λi. So we"
},
{
"code": null,
"e": 45472,
"s": 45258,
"text": "We already showed that for a symmetric matrix, vi is also an eigenvector of A^TA with the corresponding eigenvalue of λi2. So the singular values of A are the square root of λi2 and σi=λi. now we can calculate ui:"
},
{
"code": null,
"e": 45609,
"s": 45472,
"text": "So ui is the eigenvector of A corresponding to λi (and σi). Now we can simplify the SVD equation to get the eigendecomposition equation:"
},
{
"code": null,
"e": 45820,
"s": 45609,
"text": "Finally, it can be shown that SVD is the best way to approximate A with a rank-k matrix. The Frobenius norm of an m × n matrix A is defined as the square root of the sum of the absolute squares of its elements:"
},
{
"code": null,
"e": 45973,
"s": 45820,
"text": "So this is like the generalization of the vector length for a matrix. Now if the m×n matrix Ak is the approximated rank-k matrix by SVD, we can think of"
},
{
"code": null,
"e": 46120,
"s": 45973,
"text": "as the distance between A and Ak. The smaller this distance, the better Ak approximates A. Now if B is any m×n rank-k matrix, it can be shown that"
},
{
"code": null,
"e": 46361,
"s": 46120,
"text": "In other words, the difference between A and its rank-k approximation generated by SVD has the minimum Frobenius norm, and no other rank-k matrix can give a better approximation for A (with a closer distance in terms of the Frobenius norm)."
},
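{
"code": null,
"e": null,
"s": null,
"text": "A small numerical illustration of this point, reusing the 2×3 matrix A from the earlier example:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\n\nA = np.array([[4.0, 1.0, 3.0],\n              [8.0, 3.0, -2.0]])\nU, s, VT = np.linalg.svd(A)\n\n# Rank-1 approximation built from the largest singular value\nA1 = s[0] * np.outer(U[:, 0], VT[0, :])\n\n# The Frobenius norm of the error equals the dropped singular value\nprint(np.round(np.linalg.norm(A - A1, 'fro'), 4))   # 3.5893"
},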
{
"code": null,
"e": 46449,
"s": 46361,
"text": "Now that we are familiar with SVD, we can see some of its applications in data science."
},
{
"code": null,
"e": 46474,
"s": 46449,
"text": "Dimensionality reduction"
},
{
"code": null,
"e": 47123,
"s": 46474,
"text": "We can store an image in a matrix. Every image consists of a set of pixels which are the building blocks of that image. Each pixel represents the color or the intensity of light in a specific location in the image. In a grayscale image with PNG format, each pixel has a value between 0 and 1, where zero corresponds to black and 1 corresponds to white. So a grayscale image with m×n pixels can be stored in an m×n matrix or NumPy array. Here we use the imread() function to load a grayscale image of Einstein which has 480 × 423 pixels into a 2-d array. Then we use SVD to decompose the matrix and reconstruct it using the first 30 singular values."
},
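{
"code": null,
"e": null,
"s": null,
"text": "A sketch of this process; the file name is a placeholder for any grayscale PNG:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.image import imread\n\nimg = imread('einstein.png')   # placeholder path; grayscale PNG values in [0, 1]\nU, s, VT = np.linalg.svd(img)\n\nk = 30                          # keep only the first 30 singular values\nimg_k = U[:, :k] @ np.diag(s[:k]) @ VT[:k, :]\n\nplt.imshow(img_k, cmap='gray')\nplt.axis('off')\nplt.show()"
},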
{
"code": null,
"e": 47823,
"s": 47123,
"text": "The original matrix is 480×423. So we need to store 480×423=203040 values. After SVD each ui has 480 elements and each vi has 423 elements. To be able to reconstruct the image using the first 30 singular values we only need to keep the first 30 σi, ui, and vi which means storing 30×(1+480+423)=27120 values. This is roughly 13% of the number of values required for the original image. So using SVD we can have a good approximation of the original image and save a lot of memory. Listing 16 and calculates the matrices corresponding to the first 6 singular values. Each matrix σiui vi ^T has a rank of 1 and has the same number of rows and columns as the original matrix. Figure 22 shows the result."
},
{
"code": null,
"e": 48235,
"s": 47823,
"text": "Please note that unlike the original grayscale image, the value of the elements of these rank-1 matrices can be greater than 1 or less than zero, and they should not be interpreted as a grayscale image. So I did not use cmap='gray' and did not display them as grayscale images. When plotting them we do not care about the absolute value of the pixels. Instead, we care about their values relative to each other."
},
{
"code": null,
"e": 48471,
"s": 48235,
"text": "To understand how the image information is stored in each of these matrices, we can study a much simpler image. In Listing 17, we read a binary image with five simple shapes: a rectangle and 4 circles. The result is shown in Figure 23."
},
{
"code": null,
"e": 48620,
"s": 48471,
"text": "The image has been reconstructed using the first 2, 4, and 6 singular values. Now we plot the matrices corresponding to the first 6 singular values:"
},
{
"code": null,
"e": 48891,
"s": 48620,
"text": "Each matrix (σi ui vi ^T) has a rank of 1 which means it only has one independent column and all the other columns are a scalar multiplication of that one. So if call the independent column c1 (or it can be any of the other column), the columns have the general form of:"
},
{
"code": null,
"e": 49230,
"s": 48891,
"text": "where ai is a scalar multiplier. In addition, this matrix projects all the vectors on ui, so every column is also a scalar multiplication of ui. This can be seen in Figure 25. Two columns of the matrix σ2u2 v2^T are shown versus u2. Both columns have the same pattern of u2 with different values (ai for column #300 has a negative value)."
},
{
"code": null,
"e": 50108,
"s": 49230,
"text": "So using the values of c1 and ai (or u2 and its multipliers), each matrix captures some details of the original image. In figure 24, the first 2 matrices can capture almost all the information about the left rectangle in the original image. The 4 circles are roughly captured as four rectangles in the first 2 matrices in Figure 24, and more details on them are added in the last 4 matrices. This can be also seen in Figure 23 where the circles in the reconstructed image become rounder as we add more singular values. These rank-1 matrices may look simple, but they are able to capture some information about the repeating patterns in the image. For example in Figure 26, we have the image of the national monument of Scotland which has 6 pillars (in the image), and the matrix corresponding to the first singular value can capture the number of pillars in the original image."
},
{
"code": null,
"e": 50119,
"s": 50108,
"text": "Eigenfaces"
},
{
"code": null,
"e": 50660,
"s": 50119,
"text": "In this example, we are going to use the Olivetti faces dataset in the Scikit-learn library. This data set contains 400 images. The images were taken between April 1992 and April 1994 at AT&T Laboratories Cambridge. The images show the face of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions, and facial details. These images are grayscale and each image has 64×64 pixels. The intensity of each pixel is a number on the interval [0, 1]. First, we load the dataset:"
},
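{
"code": null,
"e": null,
"s": null,
"text": "For example (the import is repeated here so the snippet is self-contained):"
},
{
"code": null,
"e": null,
"s": null,
"text": "from sklearn.datasets import fetch_olivetti_faces\n\nfaces = fetch_olivetti_faces()\nimgs = faces.images   # shape (400, 64, 64), pixel values in [0, 1]\nprint(imgs.shape)"
},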
{
"code": null,
"e": 50923,
"s": 50660,
"text": "The fetch_olivetti_faces() function has been already imported in Listing 1. We call it to read the data and stores the images in the imgs array. This is a (400, 64, 64) array which contains 400 grayscale 64×64 images. We can show some of them as an example here:"
},
{
"code": null,
"e": 51357,
"s": 50923,
"text": "In the previous example, we stored our original image in a matrix and then used SVD to decompose it. Here we take another approach. We know that we have 400 images, so we give each image a label from 1 to 400. Now we use one-hot encoding to represent these labels by a vector. We use a column vector with 400 elements. For each label k, all the elements are zero except the k-th element. So label k will be represented by the vector:"
},
{
"code": null,
"e": 51560,
"s": 51357,
"text": "Now we store each image in a column vector. Each image has 64 × 64 = 4096 pixels. So we can flatten each image and place the pixel values into a column vector f with 4096 elements as shown in Figure 28:"
},
{
"code": null,
"e": 51834,
"s": 51560,
"text": "So each image with label k will be stored in the vector fk, and we need 400 fk vectors to keep all the images. Now we define a transformation matrix M which transforms the label vector ik to its corresponding image vector fk. The vectors fk will be the columns of matrix M:"
},
{
"code": null,
"e": 52307,
"s": 51834,
"text": "This matrix has 4096 rows and 400 columns. We can simply use y=Mx to find the corresponding image of each label (x can be any vectors ik, and y will be the corresponding fk). For example for the third image of this dataset, the label is 3, and all the elements of i3 are zero except the third element which is 1. Now, remember the multiplication of partitioned matrices. When we multiply M by i3, all the columns of M are multiplied by zero except the third column f3, so:"
},
{
"code": null,
"e": 52400,
"s": 52307,
"text": "Listing 21 shows how we can construct M and use it to show a certain image from the dataset."
},
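{
"code": null,
"e": null,
"s": null,
"text": "(Listing 21 itself is missing from this copy. The snippet below is a minimal sketch of the idea, reusing imgs from above; the names M, i_k, and f_k are assumptions based on the text.)"
},
{
"code": "import numpy as np\nimport matplotlib.pyplot as plt\n\n# Flatten each 64x64 image into a 4096-element column of M\nM = imgs.reshape(400, 4096).T  # shape: (4096, 400)\n\n# One-hot label vector for image k (label k maps to index k-1)\nk = 3\ni_k = np.zeros((400, 1))\ni_k[k - 1] = 1\n\n# y = M @ i_k recovers the k-th image vector f_k\nf_k = M @ i_k\nplt.imshow(f_k.reshape(64, 64), cmap='gray')\nplt.show()",
"e": null,
"s": null,
"text": null
},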
{
"code": null,
"e": 52858,
"s": 52400,
"text": "The length of each label vector ik is one and these label vectors form a standard basis for a 400-dimensional space. In this space, each axis corresponds to one of the labels with the restriction that its value can be either zero or one. The vectors fk live in a 4096-dimensional space in which each axis corresponds to one pixel of the image, and matrix M maps ik to fk. Now we can use SVD to decompose M. Remember that when we decompose M (with rank r) to"
},
{
"code": null,
"e": 53306,
"s": 52858,
"text": "the set {u1, u2, ..., ur} which are the first r columns of U will be a basis for Mx. Each vector ui will have 4096 elements. Since y=Mx is the space in which our image vectors live, the vectors ui form a basis for the image vectors as shown in Figure 29. In this figure, I have tried to visualize an n-dimensional vector space. This is, of course, impossible when n≥3, but this is just a fictitious illustration to help you understand this method."
},
{
"code": null,
"e": 53593,
"s": 53306,
"text": "So we can reshape ui into a 64 ×64 pixel array and try to plot it like an image. The value of the elements of these vectors can be greater than 1 or less than zero, and when reshaped they should not be interpreted as a grayscale image. So I did not use cmap='gray' when displaying them."
},
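{
"code": null,
"e": null,
"s": null,
"text": "(Listing 22 is missing here; the following is a sketch of the decomposition and the eigenface plot, under the same assumptions as the snippets above.)"
},
{
"code": "# Decompose M and plot the first 6 left singular vectors as images\nU, s, Vt = np.linalg.svd(M, full_matrices=False)\n\nfig, axes = plt.subplots(1, 6, figsize=(15, 3))\nfor i, ax in enumerate(axes):\n    # Values may fall outside [0, 1], so cmap='gray' is deliberately omitted\n    ax.imshow(U[:, i].reshape(64, 64))\n    ax.axis('off')\nplt.show()",
"e": null,
"s": null,
"text": null
},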
{
"code": null,
"e": 53608,
"s": 53593,
"text": "The output is:"
},
{
"code": null,
"e": 53901,
"s": 53608,
"text": "You can check that the array s in Listing 22 has 400 elements, so we have 400 non-zero singular values and the rank of the matrix is 400. As a result, we need the first 400 vectors of U to reconstruct the matrix completely. We can easily reconstruct one of the images using the basis vectors:"
},
{
"code": null,
"e": 53988,
"s": 53901,
"text": "Here we take image #160 and reconstruct it using different numbers of singular values:"
},
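{
"code": null,
"e": null,
"s": null,
"text": "(A sketch of the reconstruction step; the k values below are illustrative, not necessarily the article's original choices.)"
},
{
"code": "# Reconstruct image #160 from the first k singular values\nj = 160 - 1  # column index of image #160\nfig, axes = plt.subplots(1, 4, figsize=(12, 3))\nfor ax, k in zip(axes, [10, 50, 150, 400]):\n    # Rank-k approximation of M, restricted to column j\n    approx = (U[:, :k] * s[:k]) @ Vt[:k, j]\n    ax.imshow(approx.reshape(64, 64), cmap='gray')\n    ax.set_title('k = ' + str(k))\n    ax.axis('off')\nplt.show()",
"e": null,
"s": null,
"text": null
},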
{
"code": null,
"e": 54620,
"s": 53988,
"text": "The vectors ui are called the eigenfaces and can be used for face recognition. As you see in Figure 30, each eigenface captures some information of the image vectors. For example, u1 is mostly about the eyes, or u6 captures part of the nose. When reconstructing the image in Figure 31, the first singular value adds the eyes, but the rest of the face is vague. By increasing k, nose, eyebrows, beard, and glasses are added to the face. Some people believe that the eyes are the most important feature of your face. It seems that SVD agrees with them since the first eigenface which has the highest singular value captures the eyes."
},
{
"code": null,
"e": 54635,
"s": 54620,
"text": "Reducing noise"
},
{
"code": null,
"e": 54715,
"s": 54635,
"text": "SVD can be used to reduce the noise in the images. Listing 24 shows an example:"
},
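{
"code": null,
"e": null,
"s": null,
"text": "(Listing 24 is missing from this copy. A sketch of the same idea on one of the 64x64 face images; the article's original photo was larger, so the rank-200 case is clamped to the maximum of 64 here.)"
},
{
"code": "# Add noise to an image, then reconstruct it at increasing ranks\nimg = imgs[0]\nnoisy = img + 0.2 * np.random.randn(*img.shape)\n\nUn, sn, Vtn = np.linalg.svd(noisy)\nfig, axes = plt.subplots(1, 3, figsize=(9, 3))\nfor ax, k in zip(axes, [20, 55, 200]):\n    k = min(k, len(sn))  # a 64x64 matrix has at most 64 singular values\n    denoised = (Un[:, :k] * sn[:k]) @ Vtn[:k, :]\n    ax.imshow(denoised, cmap='gray')\n    ax.set_title('rank ' + str(k))\n    ax.axis('off')\nplt.show()",
"e": null,
"s": null,
"text": null
},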
{
"code": null,
"e": 55113,
"s": 54715,
"text": "Here we first load the image and add some noise to it. Then we reconstruct the image using the first 20, 55 and 200 singular values. As you see in Figure 32, the amount of noise increases as we increase the rank of the reconstructed matrix. So if we use a lower rank like 20 we can significantly reduce the noise in the image. It is important to understand why it works much better at lower ranks."
},
{
"code": null,
"e": 55229,
"s": 55113,
"text": "Here is a simple example to show how SVD reduces the noise. Imagine that we have 3×15 matrix defined in Listing 25:"
},
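{
"code": null,
"e": null,
"s": null,
"text": "(Listing 25 is missing; below is a plausible stand-in matrix with the structure described below. The exact values are assumptions.)"
},
{
"code": "# A 3x15 matrix: first 5 columns have only the first element nonzero,\n# last 10 columns have only the first element zero, and column #12 is noisy\nA = np.zeros((3, 15))\nA[0, :5] = 3                       # first category\nA[1:, 5:] = np.array([[2], [1]])   # second category\nA[0, 11] = 0.5                     # noise in the first element of column #12\nA[2, 11] = 2.5                     # noise in the last element of column #12\nprint(np.linalg.matrix_rank(A))    # 3",
"e": null,
"s": null,
"text": null
},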
{
"code": null,
"e": 55272,
"s": 55229,
"text": "A color map of this matrix is shown below:"
},
{
"code": null,
"e": 55827,
"s": 55272,
"text": "The matrix columns can be divided into two categories. In the first 5 columns, only the first element is not zero, and in the last 10 columns, only the first element is zero. We also have a noisy column (column #12) which should belong to the second category, bit its first and last element do not have the right values. We can assume that these two elements contain some noise. Now we decompose this matrix using SVD. The rank of the matrix is 3, and it only has 3 non-zero singular values. Now we reconstruct it using the first 2 and 3 singular values."
},
{
"code": null,
"e": 56168,
"s": 55827,
"text": "As Figure 34 shows, by using the first 2 singular values column #12 changes and follows the same pattern of the columns in the second category. However, the actual values of its elements are a little lower now. If we use all the 3 singular values, we get back the original noisy column. Figure 35 shows a plot of these columns in 3-d space."
},
{
"code": null,
"e": 56666,
"s": 56168,
"text": "First look at the ui vectors generated by SVD. u1 shows the average direction of the column vectors in the first category. Of course, it has the opposite direction, but it does not matter (Remember that if vi is an eigenvector for an eigenvalue, then (-1)vi is also an eigenvector for the same eigenvalue, and since ui=Avi/σi, then its sign depends on vi). What is important is the stretching direction not the sign of the vector. Similarly, u2 shows the average direction for the second category."
},
{
"code": null,
"e": 57414,
"s": 56666,
"text": "The noisy column is shown by the vector n. It is not along u1 and u2. Now if we use ui as a basis, we can decompose n and find its orthogonal projection onto ui. As you see it has a component along u3 (in the opposite direction) which is the noise direction. This direction represents the noise present in the third element of n. It has the lowest singular value which means it is not considered an important feature by SVD. When we reconstruct n using the first two singular values, we ignore this direction and the noise present in the third element is eliminated. Now we only have the vector projections along u1 and u2. But the scalar projection along u1 has a much higher value. That is because vector n is more similar to the first category."
},
{
"code": null,
"e": 58008,
"s": 57414,
"text": "So the projection of n in the u1-u2 plane is almost along u1, and the reconstruction of n using the first two singular values gives a vector which is more similar to the first category. It is important to note that the noise in the first element which is represented by u2 is not eliminated. In addition, though the direction of the reconstructed n is almost correct, its magnitude is smaller compared to the vectors in the first category. In fact, in the reconstructed vector, the second element (which did not contain noise) has now a lower value compared to the original vector (Figure 36)."
},
{
"code": null,
"e": 58698,
"s": 58008,
"text": "So SVD assigns most of the noise (but not all of that) to the vectors represented by the lower singular values. If we reconstruct a low-rank matrix (ignoring the lower singular values), the noise will be reduced, however, the correct part of the matrix changes too. The result is a matrix that is only an approximation of the noiseless matrix that we are looking for. This can be seen in Figure 32. The image background is white and the noisy pixels are black. When we reconstruct the low-rank image, the background is much more uniform but it is gray now. In fact, what we get is a less noisy approximation of the white background that we expect to have if there is no noise in the image."
},
{
"code": null,
"e": 58952,
"s": 58698,
"text": "I hope that you enjoyed reading this article. Please let me know if you have any questions or suggestions. All the Code Listings in this article are available for download as a Jupyter notebook from GitHub at: https://github.com/reza-bagheri/SVD_article"
},
{
"code": null,
"e": 58969,
"s": 58952,
"text": "Further reading:"
}
]
|
SQL Query to Check if Date is Greater Than Today in SQL - GeeksforGeeks | 15 Oct, 2021
In this article, we will see the SQL query to check if a DATE is greater than today’s date by comparing it with today’s date using the GETDATE() function. This function in SQL Server is used to return the present date and time of the database system in the ‘YYYY-MM-DD hh:mm:ss.mmm’ pattern.
Features:
This function is used to find the present date and time of the database system.
This function comes under Date Functions.
This function doesn’t accept any parameter.
This function returns output in ‘YYYY-MM-DD hh:mm:ss.mmm’ format.
To check the current date, we simply use the GETDATE() function.
Query:
SELECT GETDATE();
Output:
Now, let us take an example to check if a date is greater than today’s date in MS SQL Server. For this, we follow the steps given below:
Step 1: Create a database
We can use the following command to create a database called geeks.
Query:
CREATE DATABASE geeks;
Step 2: Use database
Use the below SQL statement to switch the database context to geeks:
Query:
USE geeks;
Step 3: Table definition
We have the following geeks for geeks in our geek’s database.
Query:
CREATE TABLE geeksforgeeks(
NAME VARCHAR(20),
Ordered DATE,
Deliver DATE);
Step 4: Insert data into a table
Query:
INSERT INTO geeksforgeeks VALUES
('ROMY', '2021-01-16', '2021-03-12'),
('AVINAV', '2021-11-12', '2021-12-12'),
('PUSHKAR', '2021-06-23', '2021-10-13');
Step 5: View the table data
To see the contents of the table, run the below command.
Query:
SELECT * FROM geeksforgeeks;
Output:
Step 6: Check whether a date is greater than today’s date
For this, we will check from the table, which row has delivered a value greater than today’s date.
Query:
SELECT * FROM geeksforgeeks WHERE Deliver > GETDATE();
Output:
The returned rows have Deliver dates of 2021-12-12 and 2021-10-13, which are greater than 2021-09-22 (today’s date).
Now, check which rows have an Ordered date greater than today’s date.
Query:
SELECT * FROM geeksforgeeks WHERE Ordered > GETDATE();
Output:
| [
What is Cursor in SQL ? | [
{
"code": null,
"e": 23928,
"s": 23900,
"text": "\n15 Oct, 2021"
},
{
"code": null,
"e": 24220,
"s": 23928,
"text": "In this article, we will see the SQL query to check if DATE is greater than today’s date by comparing date with today’s date using the GETDATE() function. This function in SQL Server is used to return the present date and time of the database system in a ‘YYYY-MM-DD hh:mm: ss. mmm’ pattern."
},
{
"code": null,
"e": 24230,
"s": 24220,
"text": "Features:"
},
{
"code": null,
"e": 24310,
"s": 24230,
"text": "This function is used to find the present date and time of the database system."
},
{
"code": null,
"e": 24352,
"s": 24310,
"text": "This function comes under Date Functions."
},
{
"code": null,
"e": 24396,
"s": 24352,
"text": "This function doesn’t accept any parameter."
},
{
"code": null,
"e": 24464,
"s": 24396,
"text": "This function returns output in ‘YYYY-MM-DD hh:mm: ss. mmm‘ format."
},
{
"code": null,
"e": 24523,
"s": 24464,
"text": "To check a current date we use simply GETDATE( ) function."
},
{
"code": null,
"e": 24530,
"s": 24523,
"text": "Query:"
},
{
"code": null,
"e": 24556,
"s": 24530,
"text": "SELECT GETDATE(); "
},
{
"code": null,
"e": 24564,
"s": 24556,
"text": "Output:"
},
{
"code": null,
"e": 24691,
"s": 24564,
"text": "Now, take an example to check if the date is greater than today’s date in MS SQL Server. For this we follow given below steps:"
},
{
"code": null,
"e": 24717,
"s": 24691,
"text": "Step 1: Create a database"
},
{
"code": null,
"e": 24785,
"s": 24717,
"text": "we can use the following command to create a database called geeks."
},
{
"code": null,
"e": 24792,
"s": 24785,
"text": "Query:"
},
{
"code": null,
"e": 24815,
"s": 24792,
"text": "CREATE DATABASE geeks;"
},
{
"code": null,
"e": 24836,
"s": 24815,
"text": "Step 2: Use database"
},
{
"code": null,
"e": 24905,
"s": 24836,
"text": "Use the below SQL statement to switch the database context to geeks:"
},
{
"code": null,
"e": 24912,
"s": 24905,
"text": "Query:"
},
{
"code": null,
"e": 24923,
"s": 24912,
"text": "USE geeks;"
},
{
"code": null,
"e": 24948,
"s": 24923,
"text": "Step 3: Table definition"
},
{
"code": null,
"e": 25010,
"s": 24948,
"text": "We have the following geeks for geeks in our geek’s database."
},
{
"code": null,
"e": 25017,
"s": 25010,
"text": "Query:"
},
{
"code": null,
"e": 25092,
"s": 25017,
"text": "CREATE TABLE geeksforgeeks(\nNAME VARCHAR(20),\nOrdered DATE,\nDeliver DATE);"
},
{
"code": null,
"e": 25125,
"s": 25092,
"text": "Step 4: Insert data into a table"
},
{
"code": null,
"e": 25132,
"s": 25125,
"text": "Query:"
},
{
"code": null,
"e": 25287,
"s": 25132,
"text": "INSERT INTO geeksforgeeks VALUES\n ('ROMY', '2021-01-16', '2021-03-12'),\n('AVINAV', '2021-11-12', '2021-12-12'),\n ('PUSHKAR', '2021-06-23', '2021-10-13');"
},
{
"code": null,
"e": 25319,
"s": 25287,
"text": "Step 5: For a view a table data"
},
{
"code": null,
"e": 25374,
"s": 25319,
"text": "To see the content of the table, run the below command"
},
{
"code": null,
"e": 25381,
"s": 25374,
"text": "Query:"
},
{
"code": null,
"e": 25410,
"s": 25381,
"text": "SELECT * FROM geeksforgeeks;"
},
{
"code": null,
"e": 25418,
"s": 25410,
"text": "Output:"
},
{
"code": null,
"e": 25469,
"s": 25418,
"text": "Step 6: Check date greater than today date or not"
},
{
"code": null,
"e": 25568,
"s": 25469,
"text": "For this, we will check from the table, which row has delivered a value greater than today’s date."
},
{
"code": null,
"e": 25575,
"s": 25568,
"text": "Query:"
},
{
"code": null,
"e": 25630,
"s": 25575,
"text": "SELECT * FROM geeksforgeeks WHERE Deliver > GETDATE();"
},
{
"code": null,
"e": 25638,
"s": 25630,
"text": "Output:"
},
{
"code": null,
"e": 25741,
"s": 25638,
"text": "Returned value whose date is 2021-12-12 and 2021-10-13 which is greater than 2021-09-22 (Today’s date)"
},
{
"code": null,
"e": 25796,
"s": 25741,
"text": "Check whose ordered date is greater than today’s date."
},
{
"code": null,
"e": 25803,
"s": 25796,
"text": "Query:"
},
{
"code": null,
"e": 25858,
"s": 25803,
"text": "SELECT * FROM geeksforgeeks WHERE Ordered > GETDATE();"
},
{
"code": null,
"e": 25866,
"s": 25858,
"text": "Output:"
},
{
"code": null,
"e": 25878,
"s": 25866,
"text": "kashishsoda"
},
{
"code": null,
"e": 25885,
"s": 25878,
"text": "Picked"
},
{
"code": null,
"e": 25895,
"s": 25885,
"text": "SQL-Query"
},
{
"code": null,
"e": 25906,
"s": 25895,
"text": "SQL-Server"
},
{
"code": null,
"e": 25910,
"s": 25906,
"text": "SQL"
},
{
"code": null,
"e": 25914,
"s": 25910,
"text": "SQL"
},
{
"code": null,
"e": 26012,
"s": 25914,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26021,
"s": 26012,
"text": "Comments"
},
{
"code": null,
"e": 26034,
"s": 26021,
"text": "Old Comments"
},
{
"code": null,
"e": 26065,
"s": 26034,
"text": "SQL Trigger | Student Database"
},
{
"code": null,
"e": 26110,
"s": 26065,
"text": "Difference between DELETE, DROP and TRUNCATE"
},
{
"code": null,
"e": 26122,
"s": 26110,
"text": "SQL | Views"
},
{
"code": null,
"e": 26146,
"s": 26122,
"text": "SQL Interview Questions"
},
{
"code": null,
"e": 26185,
"s": 26146,
"text": "Difference between DDL and DML in DBMS"
},
{
"code": null,
"e": 26196,
"s": 26185,
"text": "CTE in SQL"
},
{
"code": null,
"e": 26262,
"s": 26196,
"text": "How to Update Multiple Columns in Single Update Statement in SQL?"
},
{
"code": null,
"e": 26281,
"s": 26262,
"text": "SQL | TRANSACTIONS"
},
{
"code": null,
"e": 26307,
"s": 26281,
"text": "SQL Correlated Subqueries"
}
]
|
C++ Program to Implement Graham Scan Algorithm to Find the Convex Hull | Convex hull is the minimum closed area which can cover all given data points.
Graham's Scan algorithm will find the corner points of the convex hull. In this algorithm, at first the lowest point is chosen. That point is the starting point of the convex hull. Remaining n-1 vertices are sorted based on the anti-clock wise direction from the start point. If two or more points are forming same angle, then remove all points of same angle except the farthest point from start.
From the remaining points, push the first few into the stack. Then, for each newly selected point points[i], pop items from the stack while the second-top point, the top point, and points[i] do not make an anti-clockwise turn; after that, push points[i] into the stack.
Input: Set of points: {(-7,8), (-4,6), (2,6), (6,4), (8,6), (7,-2), (4,-6), (8,-7),(0,0), (3,-2),(6,-10),(0,-6),(-9,-5),(-8,-2),(-8,0),(-10,3),(-2,2),(-10,4)}
Output: Boundary points of convex hull are: (-9, -5) (-10, 3) (-10, 4) (-7, 8) (8, 6) (8, -7) (6, -10)
Input: The set of points, number of points.
Output: The boundary points of convex hull.
Begin
minY := points[0].y
min := 0
for i := 1 to n-1 do
y := points[i].y
if y < minY or minY = y and points[i].x < points[min].x, then
minY := points[i].y
min := i
done
swap points[0] and points[min]
p0 := points[0]
sort points from points[1] to end
arrSize := 1
for i := 1 to n, do
when i < n-1 and (p0, points[i], points[i+1]) are collinear, do
i := i + 1
done
points[arrSize] := points[i]
arrSize := arrSize + 1
done
if arrSize < 3, then
return cHullPoints
push points[0] into stack
push points[1] into stack
push points[2] into stack
for i := 3 to arrSize, do
while top of stack, item below the top and points[i] is not in
anticlockwise rotation, do
delete top element from stack
done
push points[i] into stack
done
while stack is not empty, do
      insert stack top element into cHullPoints
pop from stack
done
End
#include<iostream>
#include<stack>
#include<algorithm>
#include<vector>
using namespace std;
struct point { //define points for 2d plane
int x, y;
};
point p0; //used to another two points
point secondTop(stack<point> &stk) {
point tempPoint = stk.top();
stk.pop();
point res = stk.top(); //get the second top element
stk.push(tempPoint); //push previous top again
return res;
}
int squaredDist(point p1, point p2) {
return ((p1.x-p2.x)*(p1.x-p2.x) + (p1.y-p2.y)*(p1.y-p2.y));
}
int direction(point a, point b, point c) {
int val = (b.y-a.y)*(c.x-b.x)-(b.x-a.x)*(c.y-b.y);
if (val == 0)
return 0; //colinear
else if(val < 0)
return 2; //anti-clockwise direction
return 1; //clockwise direction
}
int comp(const void *point1, const void*point2) {
point *p1 = (point*)point1;
point *p2 = (point*)point2;
int dir = direction(p0, *p1, *p2);
if(dir == 0)
return (squaredDist(p0, *p2) >= squaredDist(p0, *p1))?-1 : 1;
return (dir==2)? -1 : 1;
}
vector<point> findConvexHull(point points[], int n) {
vector<point> convexHullPoints;
int minY = points[0].y, min = 0;
for(int i = 1; i<n; i++) {
int y = points[i].y;
//find bottom most or left most point
      if((y < minY) || ((minY == y) && (points[i].x < points[min].x))) {
minY = points[i].y;
min = i;
}
}
swap(points[0], points[min]); //swap min point to 0th location
p0 = points[0];
qsort(&points[1], n-1, sizeof(point), comp); //sort points from 1 place to end
int arrSize = 1; //used to locate items in modified array
for(int i = 1; i<n; i++) {
//when the angle of ith and (i+1)th elements are same, remove points
while(i < n-1 && direction(p0, points[i], points[i+1]) == 0)
i++;
points[arrSize] = points[i];
arrSize++;
}
if(arrSize < 3)
return convexHullPoints; //there must be at least 3 points, return empty list.
//create a stack and add first three points in the stack
stack<point> stk;
stk.push(points[0]); stk.push(points[1]); stk.push(points[2]);
for(int i = 3; i<arrSize; i++) { //for remaining vertices
while(direction(secondTop(stk), stk.top(), points[i]) != 2)
stk.pop(); //when top, second top and ith point are not making left turn, remove point
stk.push(points[i]);
}
   while(!stk.empty()) {
      convexHullPoints.push_back(stk.top()); //add points from stack
      stk.pop();
   }
   return convexHullPoints; //return the boundary points of the hull
}
int main() {
point points[] = {{-7,8},{-4,6},{2,6},{6,4},{8,6},{7,-2},{4,-6},{8,-7},{0,0},
{3,-2},{6,-10},{0,-6},{-9,-5},{-8,-2},{-8,0},{-10,3},{-2,2},{-10,4}};
int n = 18;
vector<point> result;
result = findConvexHull(points, n);
cout << "Boundary points of convex hull are: "<<endl;
vector<point>::iterator it;
for(it = result.begin(); it!=result.end(); it++)
cout << "(" << it->x << ", " <<it->y <<") ";
}
Boundary points of convex hull are:
(-9, -5) (-10, 3) (-10, 4) (-7, 8) (8, 6) (8, -7) (6, -10) | [
{
"code": null,
"e": 1140,
"s": 1062,
"text": "Convex hull is the minimum closed area which can cover all given data points."
},
{
"code": null,
"e": 1537,
"s": 1140,
"text": "Graham's Scan algorithm will find the corner points of the convex hull. In this algorithm, at first the lowest point is chosen. That point is the starting point of the convex hull. Remaining n-1 vertices are sorted based on the anti-clock wise direction from the start point. If two or more points are forming same angle, then remove all points of same angle except the farthest point from start."
},
{
"code": null,
"e": 1792,
"s": 1537,
"text": "From the remaining points, push them into the stack. And remove items from stack one by one, when orientation is not anti-clockwise for stack top point, second top point and newly selected point points[i], after checking, insert points[i] into the stack."
},
{
"code": null,
"e": 2054,
"s": 1792,
"text": "Input: Set of points: {(-7,8), (-4,6), (2,6), (6,4), (8,6), (7,-2), (4,-6), (8,-7),(0,0), (3,-2),(6,-10),(0,-6),(-9,-5),(-8,-2),(-8,0),(-10,3),(-2,2),(-10,4)}\nOutput: Boundary points of convex hull are: (-9, -5) (-10, 3) (-10, 4) (-7, 8) (8, 6) (8, -7) (6, -10)"
},
{
"code": null,
"e": 2098,
"s": 2054,
"text": "Input: The set of points, number of points."
},
{
"code": null,
"e": 2142,
"s": 2098,
"text": "Output: The boundary points of convex hull."
},
{
"code": null,
"e": 3117,
"s": 2142,
"text": "Begin\n minY := points[0].y\n min := 0\n for i := 1 to n-1 do\n y := points[i].y\n if y < minY or minY = y and points[i].x < points[min].x, then\n minY := points[i].y\n min := i\n done\n swap points[0] and points[min]\n p0 := points[0]\n sort points from points[1] to end\n arrSize := 1\n for i := 1 to n, do\n when i < n-1 and (p0, points[i], points[i+1]) are collinear, do\n i := i + 1\n done\n points[arrSize] := points[i]\n arrSize := arrSize + 1\n done\n if arrSize < 3, then\n return cHullPoints\n push points[0] into stack\n push points[1] into stack\n push points[2] into stack\n for i := 3 to arrSize, do\n while top of stack, item below the top and points[i] is not in\n anticlockwise rotation, do\n delete top element from stack\n done\n push points[i] into stack\n done\n while stack is not empty, do\n item stack top element into cHullPoints\n pop from stack\n done\nEnd"
},
{
"code": null,
"e": 6077,
"s": 3117,
"text": "#include<iostream>\n#include<stack>\n#include<algorithm>\n#include<vector>\nusing namespace std;\nstruct point { //define points for 2d plane\n int x, y;\n};\npoint p0; //used to another two points\npoint secondTop(stack<point> &stk) {\n point tempPoint = stk.top(); \n stk.pop();\n point res = stk.top(); //get the second top element\n stk.push(tempPoint); //push previous top again\n return res;\n}\nint squaredDist(point p1, point p2) {\n return ((p1.x-p2.x)*(p1.x-p2.x) + (p1.y-p2.y)*(p1.y-p2.y));\n}\nint direction(point a, point b, point c) {\n int val = (b.y-a.y)*(c.x-b.x)-(b.x-a.x)*(c.y-b.y);\n if (val == 0)\n return 0; //colinear\n else if(val < 0)\n return 2; //anti-clockwise direction\n return 1; //clockwise direction\n}\nint comp(const void *point1, const void*point2) {\n point *p1 = (point*)point1;\n point *p2 = (point*)point2;\n int dir = direction(p0, *p1, *p2);\n if(dir == 0)\n return (squaredDist(p0, *p2) >= squaredDist(p0, *p1))?-1 : 1;\n return (dir==2)? -1 : 1;\n}\nvector<point> findConvexHull(point points[], int n) {\n vector<point> convexHullPoints;\n int minY = points[0].y, min = 0;\n for(int i = 1; i<n; i++) {\n int y = points[i].y;\n //find bottom most or left most point\n if((y < minY) || (minY == y) && points[i].x < points[min].x) {\n minY = points[i].y;\n min = i;\n }\n }\n swap(points[0], points[min]); //swap min point to 0th location\n p0 = points[0];\n qsort(&points[1], n-1, sizeof(point), comp); //sort points from 1 place to end\n int arrSize = 1; //used to locate items in modified array\n for(int i = 1; i<n; i++) {\n //when the angle of ith and (i+1)th elements are same, remove points\n while(i < n-1 && direction(p0, points[i], points[i+1]) == 0)\n i++;\n points[arrSize] = points[i];\n arrSize++;\n }\n if(arrSize < 3)\n return convexHullPoints; //there must be at least 3 points, return empty list.\n //create a stack and add first three points in the stack\n stack<point> stk;\n stk.push(points[0]); stk.push(points[1]); stk.push(points[2]);\n for(int i = 3; i<arrSize; i++) { //for remaining vertices\n while(direction(secondTop(stk), stk.top(), points[i]) != 2)\n stk.pop(); //when top, second top and ith point are not making left turn, remove point\n stk.push(points[i]);\n }\n while(!stk.empty()) {\n convexHullPoints.push_back(stk.top()); //add points from stack\n stk.pop();\n }\n}\nint main() {\n point points[] = {{-7,8},{-4,6},{2,6},{6,4},{8,6},{7,-2},{4,-6},{8,-7},{0,0},\n {3,-2},{6,-10},{0,-6},{-9,-5},{-8,-2},{-8,0},{-10,3},{-2,2},{-10,4}};\n int n = 18;\n vector<point> result;\n result = findConvexHull(points, n);\n cout << \"Boundary points of convex hull are: \"<<endl;\n vector<point>::iterator it;\n for(it = result.begin(); it!=result.end(); it++)\n cout << \"(\" << it->x << \", \" <<it->y <<\") \";\n}"
},
{
"code": null,
"e": 6172,
"s": 6077,
"text": "Boundary points of convex hull are:\n(-9, -5) (-10, 3) (-10, 4) (-7, 8) (8, 6) (8, -7) (6, -10)"
}
]
|
C++ Program For Deleting A Node In A Doubly Linked List - GeeksforGeeks | 15 Dec, 2021
Pre-requisite: Doubly Linked List | Set 1 (Introduction and Insertion)
Write a function to delete a given node in a doubly-linked list. Original Doubly Linked List
Approach: The deletion of a node in a doubly-linked list can be divided into three main categories:
After the deletion of the head node.
After the deletion of the middle node.
After the deletion of the last node.
All three mentioned cases can be handled in two steps if the pointer of the node to be deleted and the head pointer is known.
If the node to be deleted is the head node then make the next node as head.
If a node is deleted, connect the next and previous node of the deleted node.
Algorithm
Let the node to be deleted be del.
If node to be deleted is head node, then change the head pointer to next current head.
if headnode == del then
headnode = del.nextNode
Set next of previous to del, if previous to del exists.
if del.nextNode != none
del.nextNode.previousNode = del.previousNode
Set prev of next to del, if next to del exists.
if del.previousNode != none
del.previousNode.nextNode = del.next
C++
// C++ program to delete a node from
// Doubly Linked List
#include <bits/stdc++.h>
using namespace std;

// A node of the doubly linked list
class Node {
public:
    int data;
    Node* next;
    Node* prev;
};

/* Function to delete a node in a Doubly Linked List.
   head_ref --> pointer to head node pointer.
   del --> pointer to node to be deleted. */
void deleteNode(Node** head_ref, Node* del)
{
    // Base case
    if (*head_ref == NULL || del == NULL)
        return;

    // If node to be deleted is head node
    if (*head_ref == del)
        *head_ref = del->next;

    /* Change next only if node to be deleted
       is NOT the last node */
    if (del->next != NULL)
        del->next->prev = del->prev;

    /* Change prev only if node to be deleted
       is NOT the first node */
    if (del->prev != NULL)
        del->prev->next = del->next;

    /* Finally, free the memory occupied by del */
    free(del);
    return;
}

// UTILITY FUNCTIONS

/* Function to insert a node at the beginning
   of the Doubly Linked List */
void push(Node** head_ref, int new_data)
{
    // Allocate node
    Node* new_node = new Node();

    // Put in the data
    new_node->data = new_data;

    /* Since we are adding at the beginning,
       prev is always NULL */
    new_node->prev = NULL;

    /* Link the old list off the new node */
    new_node->next = (*head_ref);

    /* Change prev of head node to new node */
    if ((*head_ref) != NULL)
        (*head_ref)->prev = new_node;

    /* Move the head to point to the new node */
    (*head_ref) = new_node;
}

/* Function to print nodes in a given doubly linked list.
   This function is same as printList() of singly linked list */
void printList(Node* node)
{
    while (node != NULL) {
        cout << node->data << " ";
        node = node->next;
    }
}

// Driver code
int main()
{
    // Start with the empty list
    Node* head = NULL;

    /* Let us create the doubly linked list 10<->8<->4<->2 */
    push(&head, 2);
    push(&head, 4);
    push(&head, 8);
    push(&head, 10);

    cout << "Original Linked list ";
    printList(head);

    /* Delete nodes from the doubly linked list */
    deleteNode(&head, head);       // Delete first node
    deleteNode(&head, head->next); // Delete middle node
    deleteNode(&head, head->next); // Delete last node

    /* Modified linked list will be NULL<-8->NULL */
    cout << "Modified Linked list ";
    printList(head);

    return 0;
}

// This code is contributed by rathbhupendra
Output:
Original Linked list 10 8 4 2
Modified Linked list 8
Complexity Analysis:
Time Complexity: O(1). Since traversal of the linked list is not required, the time complexity is constant.
Space Complexity: O(1). As no extra space is required, so the space complexity is constant.
Please refer complete article on Delete a node in a Doubly Linked List for more details!
| [
Linked List | Set 3 (Deleting a node) | [
{
"code": null,
"e": 24215,
"s": 24187,
"text": "\n15 Dec, 2021"
},
{
"code": null,
"e": 24281,
"s": 24215,
"text": "Pre-requisite: Doubly Link List Set 1| Introduction and Insertion"
},
{
"code": null,
"e": 24375,
"s": 24281,
"text": "Write a function to delete a given node in a doubly-linked list. Original Doubly Linked List "
},
{
"code": null,
"e": 24476,
"s": 24375,
"text": "Approach: The deletion of a node in a doubly-linked list can be divided into three main categories: "
},
{
"code": null,
"e": 24514,
"s": 24476,
"text": "After the deletion of the head node. "
},
{
"code": null,
"e": 24554,
"s": 24514,
"text": "After the deletion of the middle node. "
},
{
"code": null,
"e": 24591,
"s": 24554,
"text": "After the deletion of the last node."
},
{
"code": null,
"e": 24718,
"s": 24591,
"text": "All three mentioned cases can be handled in two steps if the pointer of the node to be deleted and the head pointer is known. "
},
{
"code": null,
"e": 24947,
"s": 24871,
"text": "If the node to be deleted is the head node then make the next node as head."
},
{
"code": null,
"e": 25025,
"s": 24947,
"text": "If a node is deleted, connect the next and previous node of the deleted node."
},
{
"code": null,
"e": 25036,
"s": 25025,
"text": "Algorithm "
},
{
"code": null,
"e": 25071,
"s": 25036,
"text": "Let the node to be deleted be del."
},
{
"code": null,
"e": 25158,
"s": 25071,
"text": "If node to be deleted is head node, then change the head pointer to next current head."
},
{
"code": null,
"e": 25213,
"s": 25158,
"text": "if headnode == del then\n headnode = del.nextNode"
},
{
"code": null,
"e": 25269,
"s": 25213,
"text": "Set next of previous to del, if previous to del exists."
},
{
"code": null,
"e": 25346,
"s": 25269,
"text": "if del.nextNode != none \n del.nextNode.previousNode = del.previousNode "
},
{
"code": null,
"e": 25394,
"s": 25346,
"text": "Set prev of next to del, if next to del exists."
},
{
"code": null,
"e": 25466,
"s": 25394,
"text": "if del.previousNode != none \n del.previousNode.nextNode = del.next"
},
{
"code": null,
"e": 25470,
"s": 25466,
"text": "C++"
},
{
"code": "// C++ program to delete a node from// Doubly Linked List#include <bits/stdc++.h>using namespace std; // Anode of the doubly linked listclass Node { public: int data; Node* next; Node* prev; }; /* Function to delete a node in a Doubly Linked List. head_ref --> pointer to head node pointer. del --> pointer to node to be deleted. */void deleteNode(Node** head_ref, Node* del) { // Base case if (*head_ref == NULL || del == NULL) return; // If node to be deleted is head node if (*head_ref == del) *head_ref = del->next; /* Change next only if node to be deleted is NOT the last node */ if (del->next != NULL) del->next->prev = del->prev; /* Change prev only if node to be deleted is NOT the first node */ if (del->prev != NULL) del->prev->next = del->next; /* Finally, free the memory occupied by del*/ free(del); return; } // UTILITY FUNCTIONS /* Function to insert a node at the beginning of the Doubly Linked List */void push(Node** head_ref, int new_data) { // Allocate node Node* new_node = new Node(); // Put in the data new_node->data = new_data; /* Since we are adding at the beginning, prev is always NULL */ new_node->prev = NULL; /* Link the old list off the new node */ new_node->next = (*head_ref); /* Change prev of head node to new node */ if ((*head_ref) != NULL) (*head_ref)->prev = new_node; /* Move the head to point to the new node */ (*head_ref) = new_node; } /* Function to print nodes in a given doubly linked list. This function is same as printList() of singly linked list */void printList(Node* node) { while (node != NULL) { cout << node->data << \" \"; node = node->next; } } // Driver codeint main() { // Start with the empty list Node* head = NULL; /* Let us create the doubly linked list 10<->8<->4<->2 */ push(&head, 2); push(&head, 4); push(&head, 8); push(&head, 10); cout << \"Original Linked list \"; printList(head); /* Delete nodes from the doubly linked list */ // Delete first node deleteNode(&head, head); // Delete middle node deleteNode(&head, head->next); // Delete last node deleteNode(&head, head->next); /* Modified linked list will be NULL<-8->NULL */ cout << \"Modified Linked list \"; printList(head); return 0;} // This code is contributed by rathbhupendra",
"e": 28040,
"s": 25470,
"text": null
},
{
"code": null,
"e": 28048,
"s": 28040,
"text": "Output:"
},
{
"code": null,
"e": 28102,
"s": 28048,
"text": "Original Linked list 10 8 4 2 \nModified Linked list 8"
},
{
"code": null,
"e": 28124,
"s": 28102,
"text": "Complexity Analysis: "
},
{
"code": null,
"e": 28234,
"s": 28124,
"text": "Time Complexity: O(1). Since traversal of the linked list is not required so the time complexity is constant."
},
{
"code": null,
"e": 28326,
"s": 28234,
"text": "Space Complexity: O(1). As no extra space is required, so the space complexity is constant."
},
{
"code": null,
"e": 28415,
"s": 28326,
"text": "Please refer complete article on Delete a node in a Doubly Linked List for more details!"
},
{
"code": null,
"e": 28422,
"s": 28415,
"text": "Amazon"
},
{
"code": null,
"e": 28441,
"s": 28422,
"text": "doubly linked list"
},
{
"code": null,
"e": 28454,
"s": 28441,
"text": "C++ Programs"
},
{
"code": null,
"e": 28466,
"s": 28454,
"text": "Linked List"
},
{
"code": null,
"e": 28473,
"s": 28466,
"text": "Amazon"
},
{
"code": null,
"e": 28485,
"s": 28473,
"text": "Linked List"
},
{
"code": null,
"e": 28583,
"s": 28485,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28592,
"s": 28583,
"text": "Comments"
},
{
"code": null,
"e": 28605,
"s": 28592,
"text": "Old Comments"
},
{
"code": null,
"e": 28676,
"s": 28605,
"text": "Difference between user defined function and library function in C/C++"
},
{
"code": null,
"e": 28735,
"s": 28676,
"text": "Program to implement Singly Linked List in C++ using class"
},
{
"code": null,
"e": 28784,
"s": 28735,
"text": "Count substrings that contain all vowels | SET 2"
},
{
"code": null,
"e": 28805,
"s": 28784,
"text": "Const keyword in C++"
},
{
"code": null,
"e": 28817,
"s": 28805,
"text": "cout in C++"
},
{
"code": null,
"e": 28852,
"s": 28817,
"text": "Linked List | Set 1 (Introduction)"
},
{
"code": null,
"e": 28891,
"s": 28852,
"text": "Linked List | Set 2 (Inserting a node)"
},
{
"code": null,
"e": 28939,
"s": 28891,
"text": "Stack Data Structure (Introduction and Program)"
},
{
"code": null,
"e": 28961,
"s": 28939,
"text": "Reverse a linked list"
}
]
|
SQL on Ethereum: How to Work With All the Data from a Transaction | by Andrew Hong | Towards Data Science | This post was first published on ath.mirror.xyz, be sure to subscribe there and follow me on twitter to get my most up-to-date crypto and data science content. All images in this article are created by the author.
if you’re looking for more web3 data content, check out my 30-day free course (with videos)!
If you’ve ever made a transaction on Ethereum (or any smart contract enabled blockchain), then you’ve probably looked it up on a block explorer and seen this heap of information:
Learning to read the details of a transaction will be the foundation for all your Ethereum data analysis and knowledge, so let’s cover all the pieces and how to work with them in SQL. I’ll be using Dune Analytics to run my queries, but there are many other tools you can use to query the chain such as Big Query and Flipside Crypto.
If you’re completely new to SQL and Ethereum I recommend starting with my full beginners’ overview first.
We’re going to cover transactions in four layers:
Transaction basics
Function calls and state
Internal Transactions (Traces)
Logs (Events Emitted)
As the base for our transaction examples, we’ll be using the Mirror Crowdfund contract. Put simply, this is a smart contract that allows you to get ERC20 (fungible) or ERC721 (NFTs) tokens in exchange for donating ETH to the contract. The creator of the contract can then withdraw those funds by closing the crowdfund. This is by no means a simple contract, but the point I want to make here is that you don’t need to understand all the solidity code to start your analysis — you just need to know how to navigate the four layers above.
The three transactions we’ll study are:
Creation/deployment of the crowdfund contract
Contributions of ETH to the contract
Closing and withdrawing funds from the contract
Side note, we also just opened up crowdfunds for anyone to use, so if you’re curious or want to create a crowdfund head to mirror.xyz/dashboard to get started. Hop into our discord while you’re at it!
First Transaction: 0x5e5ef5dd9d147028f9bc21127e3de774a80c56a2e510d95f41984e6b7af1b8db
Let’s start with the transaction basics.
Each transaction has a unique keccak256 transaction hash of a few different variables
There’s a blocknumber associated based on when the transaction was mined, typically a new block is created every 15 seconds.
From is the one who signed the transaction, To is the contract address that was interacted with
Value is the ETH value that was transferred from the signer's wallet. Even if that value is 0 that doesn't mean that no ETH was transferred during the transaction.
Gas is a bit complicated (especially with EIP-1559), but just keep this formula in mind: Gas Price * Gas Used by Transaction = Transaction Fee
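To make the formula concrete, here is a quick back-of-the-envelope check in Python (made-up numbers, not taken from this transaction):

# Illustrative gas math (made-up numbers)
gas_price_gwei = 100          # 1 gwei = 10**9 wei
gas_used = 550_000
fee_eth = gas_price_gwei * 1e9 * gas_used / 1e18
print(fee_eth)  # 0.055 ETH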
Now for the meat and bones, the input data of a transaction:
This is just bytecode for any function call and the parameters passed in. The first 8 characters (4 bytes) are the function signature 0x849a3aa3, essentially a hash of the function name and parameter types. And no, these are not always unique which can lead to hacks/security issues. In this case, this function calls the factory contract to create the crowdfund contract (it's a proxy, but we won't get into that).
createCrowdfund((uint256,uint256,bytes32)[], (address,uint256), string, string, address, address, uint256, uint256, uint256)
This shows up if you click “decode input data”, and you can see the various variable values set as well. Every subsequent 64 characters (32 bytes) is a different input variable. The crowdfund comes with three tiers of editions. In this crowdfund for BLVKHVND they used quantities of 1000, 250, and 50 with prices of 0.1, 0.3, and 1 ETH respectively.
Notice that the price actually shows up as 100000000000000000, because ETH values are stored in wei with 18 decimal places. We'll have to do the conversion by dividing by 10^18 in our data.
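Both of these details are easy to sanity-check in Python with web3.py (a sketch; whether the selector matches depends on the exact canonical signature string):

from web3 import Web3

# 4-byte function selector = first 4 bytes of keccak256(canonical signature)
sig = ("createCrowdfund((uint256,uint256,bytes32)[],(address,uint256),"
       "string,string,address,address,uint256,uint256,uint256)")
print(Web3.keccak(text=sig).hex()[:10])  # expect 0x849a3aa3 if canonical

# Converting the raw integer into ETH: shift 18 decimal places
print(100000000000000000 / 10**18)  # 0.1 ETH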
That was a lot, let’s get to querying. Dune has a table called ethereum.transactions which has all the variables we've talked about above for every transaction since the first block. We can query this table for the appearance of 0x849a3aa3 in the last few months:
SELECT * FROM ethereum.transactions
WHERE "block_time" > now() - interval '3 months'
AND "data" is not null
AND SUBSTRING ( encode("data", 'hex'), 1, 8 ) = '849a3aa3'
ethereum.transactions is a very large table, so if you query without filters the query is going to timeout (taking more than 30 minutes). Filtering by block_time is usually most useful, and in this case we're taking all the rows that have occurred within 3 months. Also, many transactions are just ETH transfers without any data attached so we'll filter that out by only keeping data is not null. Now for checking for the function signature, we need to encode the data into a string from hexadecimal, then take only the characters from position 1 to position 8 using SUBSTRING.
Now the complicated parts, internal transactions and events emitted. For this, it’ll be easier to look at the code. If you go to the contract tab on etherscan and do a ctrl+f on file 1 of 10 you'll find the following code (I've edited out some bits to make this more readable).
function createCrowdfund(
    ...variables...
) external returns (address crowdfundProxy) {
    ...some variable prep code...
    crowdfundProxy = address(
        new CrowdfundWithPodiumEditionsProxy{
            salt: keccak256(abi.encode(symbol_, operator_))
        }(treasuryConfig, operator_)
    );
    emit CrowdfundDeployed(crowdfundProxy, name_, symbol_, operator_);
    ...register to treasury code...
}
The first key line here is crowdfundProxy = address(contract_to_be_created), which is what deploys the new contract and creates an internal transaction of type CREATE 0. Transferring ETH also creates an internal transaction of type CALL , which we'll see in the next transaction we study.
We can query for all the crowdfund contracts created with:
SELECT tx."block_time", tx."from", tr."type", tr."code"FROM ethereum.transactions tx LEFT JOIN ethereum.traces tr ON tx."hash"=tr."tx_hash" --tracks internal transactionsWHERE tx."to" = '\x15312b97389a1dc3bcaba7ae58ebbd552e606ed2' -- crowdfund podiums editionAND tr."type" = 'create'
We need ethereum.transactions because we want to filter for traces (internal transactions) only related to transactions on the factory contract. We need this since an internal transaction will not always have the same to as that of the overall transaction. We can JOIN the tables on the transaction hash, and then filter for only internal transactions of the create type.
The second key line here is emit CrowdfundDeployed, which creates a log that is stored in the node but not in the block. If you look at the logs, you'll notice that EditionCreated events are also emitted, but this is from another contract that actually creates the ERC721 tokens (hence a different address).
Similar to a function signature, events have a unique hash as well that sits in Topic 0. So in the events above, 0x5133bb164b64ffa4461bc0c782a5c0e71cdc9d6c6ef5aa9af84f7fd2cd966d8e is the hash for CrowdfundDeployed and 0xbaf1f6ab5aa5406df2735e70c52585e630f9744f4ecdedd8b619e983e927f0b6 is the hash for EditionCreated.
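Topic 0 is just the keccak256 of the event signature (name plus parameter types), which you can verify yourself. A web3.py sketch using the well-known ERC20 Transfer event:

from web3 import Web3

print(Web3.keccak(text="Transfer(address,address,uint256)").hex())
# 0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef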
We can query the ethereum.logs table in dune to see all crowdfunds created as well:
SELECT * FROM ethereum.logs
WHERE "topic1" = '\x5133bb164b64ffa4461bc0c782a5c0e71cdc9d6c6ef5aa9af84f7fd2cd966d8e'::bytea
topic2 and topic3 typically hold the data for ETH transfers, otherwise, event data will show up in the data column. We'll get more into how to work with this later.
Logs are very helpful, as they can be used to emit state variables instead of just the function call values (TheGraph uses logs to model subgraphs for GraphQL queries). Next, we’ll utilize everything we’ve covered to study the contributions of ETH to our newly created crowdfund contract (sitting at the address 0x320d83769eb64096ea74b686eb586e197997f930 ).
If you’ve made it this far, then you’re already through all the tough concepts. Give yourself a pat on the back! We’ll really be getting into the details in the next two sections, so take a breather if you need to.
Second Transaction: 0xd4ce80a5ee62190c5f5d5a5a7e95ba7751c8f3ef63ea0e4b65a1abfdbbb9d1ef
This one is fairly simple to read. Jesse paid 1 ETH to mint an edition of tokenId 167 from the BLVKHVND crowdfund. He also got 1000 HVND, the ERC20 token the crowdfund gives out based on the size of the donation.
But what if we wanted to see how much ETH has been contributed over time, or how many editions have been sold? Sometimes contracts will have a view function in Read Contract on etherscan where you can get total balances. But in this case, the contract doesn't have that.
Remember that function calls change the state data, which we’ll need to piece together the overall state data by aggregating over transaction history. Sometimes the overall state of a contract can be emitted in events, such as with Compound V2’s AccrueInterest event.
In our case, we’ll need to do two things in one query to get to total ETH contributed:
get the transactions that have the “contribute” method called
sum the total ETH transferred by filtering for internal transactions which have the type CALL
Remember, I can get the method function signature by decoding the input data on etherscan.
SELECT SUM(tr."value"/1e18) as contribution FROM ethereum.transactions tx LEFT JOIN ethereum.traces tr ON tx."hash" = tr."tx_hash"--transactions filtering WHERE tx."to" = '\x320d83769eb64096ea74b686eb586e197997f930'::byteaAND tx."data" is not nullAND SUBSTRING ( encode(tx."data", 'hex'), 1, 8 ) IN ('a08f793c', 'ce4661bb')--traces filtering AND tr."success"AND tr."value" > 0AND tr."call_type" = 'call'
There was technically another method called contributeForPodium, which is why we check for two function signatures above. The CALL type actually has subtypes as well at the opcode level, so we need the specific base call_type of call (if you're familiar with a delegatecall, then you'll know that would give us a double count). We joined on transaction hash, and then divided by 10^18 to get the right decimals of ETH value.
Let’s move on to the last transaction, where the data starts to get really tricky on us.
Third Transaction: 0xe9d5fefde77d4086d0f64dd1403f9b6e8e12aac74db238ebf11252740c3f65a8
Here, we can see that 337 ETH was transferred and 1,012,965 HVND tokens (the latter of which was decided by operatorPercent_ in the first transaction). After this function is called, the contract just operates the way any normal ERC20 would.
In the case that a crowdfund was already closed, we could have gotten the total raised from the data in this transaction — such as value transferred in an internal transaction of CALL type. It's better to tie this to an event though, in case there are some transfer behaviors that we don't know about. But wait, why are the logs not readable?
Well, this is where we start to get into some pretty confusing patterns. Earlier I mentioned that this crowdfund is deployed as a proxy — that means it’s just like an empty USB that plugs into a computer that actually holds the logic. It’s much cheaper to create USBs than computers — and that logic holds for on-chain too (except the cost is in gas). If you want to read about proxy patterns, I’d check out this great article by the OpenZeppelin team.
The computer in this case is known as the logic and is only deployed once. The proxy is deployed many times, and it doesn’t have the logic functions or events in the contract code. Therefore, etherscan isn’t equipped to show the decoded data in logs. So then how do we piece this together? We could take the keccak256 hash of the event, just like we did for function signatures. But here’s where reading the code will help save you some time. If you go to Read Contract on the factory contract, you'll see the address of the logic contract:
From there, we can look for the closeFunding() function in the code:
function closeFunding() external onlyOperator nonReentrant {
    ...code...
    _mint(operator, operatorTokens);
    // Announce that funding has been closed.
    emit FundingClosed(address(this).balance, operatorTokens);
    ...ETH value transfers...
}
ETH value transfers don’t emit events since they are just internal transactions. And if you are familiar with how the ERC20 standard works, you’ll know that _mint actually creates a Transfer event (meaning that covers our first event). That means that FundingClosed must be the second log, with the topic of 0x352ce94da8e3109dc06c05ed84e8a0aaf9ce2c4329dfd10ad1190cf620048972. Can you figure out why else it couldn't be the third log (hint: what's a key difference between the first two logs and the third log)?
With that knowledge, we can query this just like any other event, with some fancy data decoding (remember, parameters are every 64 characters, i.e. 32 bytes). We have to turn the data into a string to slice it, and then we change each slice into a number and divide by 10^18 to get rid of the decimals.
SELECT "contract_address", bytea2numeric( decode ( SUBSTRING ( encode("data", 'hex') , 1, 64 ), 'hex'))/1e18 as eth_raised, bytea2numeric ( decode ( SUBSTRING ( encode("data", 'hex') , 65 , 64 ), 'hex'))/1e18 as tokens_allocated_ownedFROM ethereum.logsWHERE "topic1"='\x352ce94da8e3109dc06c05ed84e8a0aaf9ce2c4329dfd10ad1190cf620048972'::byteaAND "contract_address"='\x320d83769eb64096ea74b686eb586e197997f930'::bytea
Congrats, you now know your way around ethereum.transactions, ethereum.traces, and ethereum.logs. They can always be joined by transaction hash, and then the rest is just knowing how to manipulate the data with encode/decode, substring, and some bytea operators. Woohoo!
We could have done this exercise for the contribute method in the last transaction too, since this is all happening on the proxy contract.
Now, if we had to go and keep track of function signatures and event topics — as well as decoding all the variables in each query — I think we would have all quit data analysis by now. Luckily, most data services have some variation of contract decoding, meaning I can give a contract address and the ABI and Dune will take care of the decoding for me. That way, events/functions become their own tables and I can easily make the same “total contributions” query from earlier with this:
WITH union_sum as (
    SELECT SUM("amount")/1e18 as raised
    FROM mirror."CrowdfundWithPodiumEditionsLogic_evt_Contribution"
    WHERE "contract_address" = '\x320d83769eb64096ea74b686eb586e197997f930'

    UNION ALL

    SELECT SUM("amount")/1e18 as raised
    FROM mirror."CrowdfundWithPodiumEditionsLogic_evt_ContributionForEdition"
    WHERE "contract_address" = '\x320d83769eb64096ea74b686eb586e197997f930'
)

SELECT SUM("raised") FROM union_sum
link to query
Thankfully this query is much more readable and easier to write. They even take care of proxy/factory logic patterns — thanks team! Without this abstraction, I guarantee that data analysis would be ten times messier to write and one hundred times worse to debug. Dune has plenty of other useful tables as well, such as prices.usd for daily token prices and dex.trades for all token trades across all main exchanges (and even nft.trades for OpenSea NFT actions).
While most of the time you’ll be playing with decoded data, knowing what really sits underneath it all will help you level up a lot faster in Web3! Plus, you’re now etherscan fluent — which I promise will be a part of every crypto job description in the future. I hope you found this helpful, and as always feel free to reach out if you need some help getting started. | [
{
"code": null,
"e": 385,
"s": 171,
"text": "This post was first published on ath.mirror.xyz, be sure to subscribe there and follow me on twitter to get my most up-to-date crypto and data science content. All images in this article are created by the author."
},
{
"code": null,
"e": 478,
"s": 385,
"text": "if you’re looking for more web3 data content, check out my 30-day free course (with videos)!"
},
{
"code": null,
"e": 657,
"s": 478,
"text": "If you’ve ever made a transaction on Ethereum (or any smart contract enabled blockchain), then you’ve probably looked it up on a block explorer and seen this heap of information:"
},
{
"code": null,
"e": 990,
"s": 657,
"text": "Learning to read the details of a transaction will be the foundation for all your Ethereum data analysis and knowledge, so let’s cover all the pieces and how to work with them in SQL. I’ll be using Dune Analytics to run my queries, but there are many other tools you can use to query the chain such as Big Query and Flipside Crypto."
},
{
"code": null,
"e": 1096,
"s": 990,
"text": "If you’re completely new to SQL and Ethereum I recommend starting with my full beginners’ overview first."
},
{
"code": null,
"e": 1146,
"s": 1096,
"text": "We’re going to cover transactions in four layers:"
},
{
"code": null,
"e": 1259,
"s": 1240,
"text": "Transaction basics"
},
{
"code": null,
"e": 1284,
"s": 1259,
"text": "Function calls and state"
},
{
"code": null,
"e": 1315,
"s": 1284,
"text": "Internal Transactions (Traces)"
},
{
"code": null,
"e": 1337,
"s": 1315,
"text": "Logs (Events Emitted)"
},
{
"code": null,
"e": 1874,
"s": 1337,
"text": "As the base for our transaction examples, we’ll be using the Mirror Crowdfund contract. Put simply, this is a smart contract that allows you to get ERC20 (fungible) or ERC721 (NFTs) tokens in exchange for donating ETH to the contract. The creator of the contract can then withdraw those funds by closing the crowdfund. This is by no means a simple contract, but the point I want to make here is that you don’t need to understand all the solidity code to start your analysis — you just need to know how to navigate the four layers above."
},
{
"code": null,
"e": 1914,
"s": 1874,
"text": "The three transactions we’ll study are:"
},
{
"code": null,
"e": 2043,
"s": 1914,
"text": "Creation/deployment of the crowdfund contractContributions of ETH to the contractClosing and withdrawing funds from the contract"
},
{
"code": null,
"e": 2089,
"s": 2043,
"text": "Creation/deployment of the crowdfund contract"
},
{
"code": null,
"e": 2126,
"s": 2089,
"text": "Contributions of ETH to the contract"
},
{
"code": null,
"e": 2174,
"s": 2126,
"text": "Closing and withdrawing funds from the contract"
},
{
"code": null,
"e": 2375,
"s": 2174,
"text": "Side note, we also just opened up crowdfunds for anyone to use, so if you’re curious or want to create a crowdfund head to mirror.xyz/dashboard to get started. Hop into our discord while you’re at it!"
},
{
"code": null,
"e": 2461,
"s": 2375,
"text": "First Transaction: 0x5e5ef5dd9d147028f9bc21127e3de774a80c56a2e510d95f41984e6b7af1b8db"
},
{
"code": null,
"e": 2502,
"s": 2461,
"text": "Let’s start with the transaction basics."
},
{
"code": null,
"e": 3112,
"s": 2502,
"text": "Each transaction has a unique keccak256 transaction hash of a few different variablesThere’s a blocknumber associated based on when the transaction was mined, typically a new block is created every 15 seconds.From is the one who signed the transaction, To is the contract address that was interacted withValue is the ETH value that was transferred from the signer's wallet. Even if that value is 0 that doesn't mean that no ETH was transferred during the transaction.Gas is a bit complicated (especially with EIP-1559), but just keep this formula in mind: Gas Price * Gas Used by Transaction = Transaction Fee"
},
{
"code": null,
"e": 3198,
"s": 3112,
"text": "Each transaction has a unique keccak256 transaction hash of a few different variables"
},
{
"code": null,
"e": 3323,
"s": 3198,
"text": "There’s a blocknumber associated based on when the transaction was mined, typically a new block is created every 15 seconds."
},
{
"code": null,
"e": 3419,
"s": 3323,
"text": "From is the one who signed the transaction, To is the contract address that was interacted with"
},
{
"code": null,
"e": 3583,
"s": 3419,
"text": "Value is the ETH value that was transferred from the signer's wallet. Even if that value is 0 that doesn't mean that no ETH was transferred during the transaction."
},
{
"code": null,
"e": 3726,
"s": 3583,
"text": "Gas is a bit complicated (especially with EIP-1559), but just keep this formula in mind: Gas Price * Gas Used by Transaction = Transaction Fee"
},
{
"code": null,
"e": 3787,
"s": 3726,
"text": "Now for the meat and bones, the input data of a transaction:"
},
{
"code": null,
"e": 4203,
"s": 3787,
"text": "This is just bytecode for any function call and the parameters passed in. The first 8 characters (4 bytes) are the function signature 0x849a3aa3, essentially a hash of the function name and parameter types. And no, these are not always unique which can lead to hacks/security issues. In this case, this function calls the factory contract to create the crowdfund contract (it's a proxy, but we won't get into that)."
},
{
"code": null,
"e": 4328,
"s": 4203,
"text": "createCrowdfund((uint256,uint256,bytes32)[], (address,uint256), string, string, address, address, uint256, uint256, uint256)"
},
{
"code": null,
"e": 4679,
"s": 4328,
"text": "This shows up if you click “decode input data”, and you can see the various variables values set as well. Every subsequent 64 characters (32 bytes) is a different input variable. The crowdfund comes with three tiers of editions. In this crowdfund for BLVKHVND they used quantities of 1000, 250, and 50 with prices of 0.1, 0.3, and 1 ETH respectively."
},
{
"code": null,
"e": 4867,
"s": 4679,
"text": "Notice that the price actually shows up as 100000000000000000 , which is because the first 18 zeroes represent decimals. We'll have to do the conversions by dividing by 10^18 in our data."
},
{
"code": null,
"e": 5131,
"s": 4867,
"text": "That was a lot, let’s get to querying. Dune has a table called ethereum.transactions which has all the variables we've talked about above for every transaction since the first block. We can query this table for the appearance of 0x849a3aa3 in the last few months:"
},
{
"code": null,
"e": 5296,
"s": 5131,
"text": "SELECT * FROM ethereum.transactions WHERE \"block_time\" > now() - interval '3 months'AND \"data\" is not nullAND SUBSTRING ( encode(\"data\", 'hex'), 1, 8 ) = '849a3aa3'"
},
{
"code": null,
"e": 5874,
"s": 5296,
"text": "ethereum.transactions is a very large table, so if you query without filters the query is going to timeout (taking more than 30 minutes). Filtering by block_time is usually most useful, and in this case we're taking all the rows that have occurred within 3 months. Also, many transactions are just ETH transfers without any data attached so we'll filter that out by only keeping data is not null. Now for checking for the function signature, we need to encode the data into a string from hexadecimal, then take only the characters from position 1 to position 8 using SUBSTRING."
},
{
"code": null,
"e": 6152,
"s": 5874,
"text": "Now the complicated parts, internal transactions and events emitted. For this, it’ll be easier to look at the code. If you go to the contract tab on etherscan and do a ctrl+f on file 1 of 10 you'll find the following code (I've edited out some bits to make this more readable)."
},
{
"code": null,
"e": 6600,
"s": 6152,
"text": "function createCrowdfund( ...variables... ) external returns (address crowdfundProxy) { ...some variable prep code... crowdfundProxy = address( new CrowdfundWithPodiumEditionsProxy{ salt: keccak256(abi.encode(symbol_, operator_)) }(treasuryConfig, operator_) ); emit CrowdfundDeployed(crowdfundProxy, name_, symbol_, operator_); ...register to treasury code... }"
},
{
"code": null,
"e": 6889,
"s": 6600,
"text": "The first key line here is crowdfundProxy = address(contract_to_be_created), which is what deploys the new contract and creates an internal transaction of type CREATE 0. Transferring ETH also creates an internal transaction of type CALL , which we'll see in the next transaction we study."
},
{
"code": null,
"e": 6948,
"s": 6889,
"text": "We can query for all the crowdfund contracts created with:"
},
{
"code": null,
"e": 7232,
"s": 6948,
"text": "SELECT tx.\"block_time\", tx.\"from\", tr.\"type\", tr.\"code\"FROM ethereum.transactions tx LEFT JOIN ethereum.traces tr ON tx.\"hash\"=tr.\"tx_hash\" --tracks internal transactionsWHERE tx.\"to\" = '\\x15312b97389a1dc3bcaba7ae58ebbd552e606ed2' -- crowdfund podiums editionAND tr.\"type\" = 'create'"
},
{
"code": null,
"e": 7604,
"s": 7232,
"text": "We need ethereum.transactions because we want to filter for traces (internal transactions) only related to transactions on the factory contract. We need this since an internal transaction will not always have the same to as that of the overall transaction. We can JOIN the tables on the transaction hash, and then filter for only internal transactions of the create type."
},
{
"code": null,
"e": 7912,
"s": 7604,
"text": "The second key line here is emit CrowdfundDeployed, which creates a log that is stored in the node but not in the block. If you look at the logs, you'll notice that EditionCreated events are also emitted, but this is from another contract that actually creates the ERC721 tokens (hence a different address)."
},
{
"code": null,
"e": 8229,
"s": 7912,
"text": "Similar to a function signature, events have a unique hash as well that sits in Topic 0. So in the events above, 0x5133bb164b64ffa4461bc0c782a5c0e71cdc9d6c6ef5aa9af84f7fd2cd966d8e is the hash for CrowdfundDeployed and 0xbaf1f6ab5aa5406df2735e70c52585e630f9744f4ecdedd8b619e983e927f0b6 is the hash for EditionCreated."
},
{
"code": null,
"e": 8313,
"s": 8229,
"text": "We can query the ethereum.logs table in dune to see all crowdfunds created as well:"
},
{
"code": null,
"e": 8431,
"s": 8313,
"text": "SELECT * FROM ethereum.logsWHERE \"topic1\"='\\x5133bb164b64ffa4461bc0c782a5c0e71cdc9d6c6ef5aa9af84f7fd2cd966d8e'::bytea"
},
{
"code": null,
"e": 8596,
"s": 8431,
"text": "topic2 and topic3 typically hold the data for ETH transfers, otherwise, event data will show up in the data column. We'll get more into how to work with this later."
},
{
"code": null,
"e": 8954,
"s": 8596,
"text": "Logs are very helpful, as they can be used to emit state variables instead of just the function call values (TheGraph uses logs to model subgraphs for GraphQL queries). Next, we’ll utilize everything we’ve covered to study the contributions of ETH to our newly created crowdfund contract (sitting at the address 0x320d83769eb64096ea74b686eb586e197997f930 )."
},
{
"code": null,
"e": 9169,
"s": 8954,
"text": "If you’ve made it this far, then you’re already through all the tough concepts. Give yourself a pat on the back! We’ll really be getting into the details in the next two sections, so take a breather if you need to."
},
{
"code": null,
"e": 9256,
"s": 9169,
"text": "Second Transaction: 0xd4ce80a5ee62190c5f5d5a5a7e95ba7751c8f3ef63ea0e4b65a1abfdbbb9d1ef"
},
{
"code": null,
"e": 9469,
"s": 9256,
"text": "This one is fairly simple to read. Jesse paid 1 ETH to mint an edition of tokenId 167 from the BLVKHVND crowdfund. He also got 1000 HVND, the ERC20 token the crowdfund gives out based on the size of the donation."
},
{
"code": null,
"e": 9740,
"s": 9469,
"text": "But what if we wanted to see how much ETH has been contributed over time, or how many editions have been sold? Sometimes contracts will have a view function in Read Contract on etherscan where you can get total balances. But in this case, the contract doesn't have that."
},
{
"code": null,
"e": 10008,
"s": 9740,
"text": "Remember that function calls change the state data, which we’ll need to piece together the overall state data by aggregating over transaction history. Sometimes the overall state of a contract can be emitted in events, such as with Compound V2’s AccrueInterest event."
},
{
"code": null,
"e": 10095,
"s": 10008,
"text": "In our case, we’ll need to do two things in one query to get to total ETH contributed:"
},
{
"code": null,
"e": 10250,
"s": 10095,
"text": "get the transactions that have the “contribute” method calledsum the total ETH transferred by filtering for internal transactions which have the type CALL"
},
{
"code": null,
"e": 10312,
"s": 10250,
"text": "get the transactions that have the “contribute” method called"
},
{
"code": null,
"e": 10406,
"s": 10312,
"text": "sum the total ETH transferred by filtering for internal transactions which have the type CALL"
},
{
"code": null,
"e": 10497,
"s": 10406,
"text": "Remember, I can get the method function signature by decoding the input data on etherscan."
},
{
"code": null,
"e": 10901,
"s": 10497,
"text": "SELECT SUM(tr.\"value\"/1e18) as contribution FROM ethereum.transactions tx LEFT JOIN ethereum.traces tr ON tx.\"hash\" = tr.\"tx_hash\"--transactions filtering WHERE tx.\"to\" = '\\x320d83769eb64096ea74b686eb586e197997f930'::byteaAND tx.\"data\" is not nullAND SUBSTRING ( encode(tx.\"data\", 'hex'), 1, 8 ) IN ('a08f793c', 'ce4661bb')--traces filtering AND tr.\"success\"AND tr.\"value\" > 0AND tr.\"call_type\" = 'call'"
},
{
"code": null,
"e": 11326,
"s": 10901,
"text": "There was technically another method called contributeForPodium, which is why we check for two function signatures above. The CALL type actually has subtypes as well at the opcode level, so we need the specific base call_type of call (if you're familiar with a delegatecall, then you'll know that would give us a double count). We joined on transaction hash, and then divided by 10^18 to get the right decimals of ETH value."
},
{
"code": null,
"e": 11415,
"s": 11326,
"text": "Let’s move on to the last transaction, where the data starts to get really tricky on us."
},
{
"code": null,
"e": 11501,
"s": 11415,
"text": "Third Transaction: 0xe9d5fefde77d4086d0f64dd1403f9b6e8e12aac74db238ebf11252740c3f65a8"
},
{
"code": null,
"e": 11743,
"s": 11501,
"text": "Here, we can see that 337 ETH was transferred and 1,012,965 HVND tokens (the latter of which was decided by operatorPercent_ in the first transaction). After this function is called, the contract just operates the way any normal ERC20 would."
},
{
"code": null,
"e": 12086,
"s": 11743,
"text": "In the case that a crowdfund was already closed, we could have gotten the total raised from the data in this transaction — such as value transferred in an internal transaction of CALL type. It's better to tie this to an event though, in case there are some transfer behaviors that we don't know about. But wait, why are the logs not readable?"
},
{
"code": null,
"e": 12539,
"s": 12086,
"text": "Well, this is where we start to get into some pretty confusing patterns. Earlier I mentioned that this crowdfund is deployed as a proxy — that means it’s just like an empty USB that plugs into a computer that actually holds the logic. It’s much cheaper to create USBs than computers — and that logic holds for on-chain too (except the cost is in gas). If you want to read about proxy patterns, I’d check out this great article by the OpenZeppelin team."
},
{
"code": null,
"e": 13080,
"s": 12539,
"text": "The computer in this case is known as the logic and is only deployed once. The proxy is deployed many times, and it doesn’t have the logic functions or events in the contract code. Therefore, etherscan isn’t equipped to show the decoded data in logs. So then how do we piece this together? We could take the keccak256 hash of the event, just like we did for function signatures. But here’s where reading the code will help save you some time. If you go to Read Contract on the factory contract, you'll see the address of the logic contract:"
},
{
"code": null,
"e": 13149,
"s": 13080,
"text": "From there, we can look for the closeFunding() function in the code:"
},
{
"code": null,
"e": 13421,
"s": 13149,
"text": "function closeFunding() external onlyOperator nonReentrant { ...code... _mint(operator, operatorTokens); // Announce that funding has been closed. emit FundingClosed(address(this).balance, operatorTokens); ...ETH value transfers... }"
},
{
"code": null,
"e": 13932,
"s": 13421,
"text": "ETH value transfers don’t emit events since they are just internal transactions. And if you are familiar with how the ERC20 standard works, you’ll know that _mint actually creates a Transfer event (meaning that covers our first event). That means that FundingClosed must be the second log, with the topic of 0x352ce94da8e3109dc06c05ed84e8a0aaf9ce2c4329dfd10ad1190cf620048972. Can you figure out why else it couldn't be the third log (hint: what's a key difference between the first two logs and the third log)?"
},
{
"code": null,
"e": 14210,
"s": 13932,
"text": "With that knowledge, we can query this just like any other event, with some fancy data decoding (remember parameters are every 64 characters (32 bytes). We have to turn it into a string to slice it, and then we change it into a number and divide by 1018 to get rid of decimals."
},
{
"code": null,
"e": 14643,
"s": 14210,
"text": "SELECT \"contract_address\", bytea2numeric( decode ( SUBSTRING ( encode(\"data\", 'hex') , 1, 64 ), 'hex'))/1e18 as eth_raised, bytea2numeric ( decode ( SUBSTRING ( encode(\"data\", 'hex') , 65 , 64 ), 'hex'))/1e18 as tokens_allocated_ownedFROM ethereum.logsWHERE \"topic1\"='\\x352ce94da8e3109dc06c05ed84e8a0aaf9ce2c4329dfd10ad1190cf620048972'::byteaAND \"contract_address\"='\\x320d83769eb64096ea74b686eb586e197997f930'::bytea"
},
{
"code": null,
"e": 14914,
"s": 14643,
"text": "Congrats, you now know your way around ethereum.transactions, ethereum.traces, and ethereum.logs. They can always be joined by transaction hash, and then the rest is just knowing how to manipulate the data with encode/decode, substring, and some bytea operators. Woohoo!"
},
{
"code": null,
"e": 15053,
"s": 14914,
"text": "We could have done this exercise for the contribute method in the last transaction too. Since this is all happening on the proxy contract."
},
{
"code": null,
"e": 15540,
"s": 15053,
"text": "Now, if we had to go and keep track of function signatures and event topics — as well as decoding all the variables in each query — I think we would have all quit data analysis by now. Luckily, most data services have some variation of contract decoding, meaning I can give a contract address and the ABI and Dune will take care of the decoding for me. That way, events/functions become their own tables and I can easily make the same “total contributions” query from earlier with this:"
},
{
"code": null,
"e": 16020,
"s": 15540,
"text": "WITH union_sum as ( SELECT SUM(\"amount\")/1e18 as raised FROM mirror.\"CrowdfundWithPodiumEditionsLogic_evt_Contribution\" WHERE \"contract_address\"='\\x320d83769eb64096ea74b686eb586e197997f930' UNION ALL SELECT SUM(\"amount\")/1e18 as raised FROM mirror.\"CrowdfundWithPodiumEditionsLogic_evt_ContributionForEdition\" WHERE \"contract_address\"='\\x320d83769eb64096ea74b686eb586e197997f930' ) SELECT SUM(\"raised\") FROM union_sum"
},
{
"code": null,
"e": 16034,
"s": 16020,
"text": "link to query"
},
{
"code": null,
"e": 16497,
"s": 16034,
"text": "Thankfully this query is much more readable and easier to write. They even take care of proxy/factory logic patterns — thanks team! Without this abstraction, I guarantee that data analysis would be ten times messier to write and one hundred times worse to debug. Dune has plenty of other useful tables as well, such as prices.usd for daily token prices and dex.trades for all token trades across all main exchanges (and event nft.trades for OpenSea NFT actions)."
}
]
|
Intro to Open Database for Geoscience Computing: Part 1 of 2 | by Yohanes Nuwara | Towards Data Science
Nowadays, open geoscience data are very important for research, benchmarking, and project purposes. They are so useful that people working in the energy industries (oil and gas, and geothermal) and in academia use these datasets to benchmark their methodologies, and also as reproducible teaching material for university students. At the same time, not all institutions have access to a wide range of commercial industrial software or applications for the computation and processing of these datasets. The purpose of this article is to introduce a tutorial on how to make the most of the available open databases.
I have compiled at least three massive open geoscience databases on the internet, which I believe everyone can now access “under the same roof”! Here are the three open geoscience databases.
Public geoscience Data in Google Drive
SEG Wiki Open Data
Geothermal Data Repository (GDR) OpenEI
Public geoscience Data is an open database available as a Google Drive, created by Peter Amstrand. This database contains a wide range of reproducible datasets, such as the 3D seismic data of the Netherlands F3 and Canning projects, well-log files of North Sea projects compiled by GEOLINK, a variety of geoscience images used as training and testing datasets in machine learning research, and many more.
SEG Wiki Open Data is a catalog of available open geophysical data owned by the Society of Exploration Geophysicists (SEG). This catalog contains more than 30 geophysical datasets, such as offshore and onshore seismic data and well-log files, the New Zealand 3D projects, geophysical synthetic benchmark models such as BP Benchmark Models, Marmousi Model, and KFUPM-KAUST Red Sea model, gravity-magnetic data, and topographic-bathymetric data.
Geothermal Data Repository or GDR OpenEI is an open geothermal energy data portal provided by the United States Department of Energy (DoE) and developed by the National Renewable Energy Laboratory (NREL). As is apparent from the name, this data portal mainly focuses on geothermal and hydrothermal exploration data, for instance, the FORGE Project data near Milford in Utah.
The use of Python as an open-source programming language enables people to develop a variety of powerful programs, and its use for geoscientific purposes is also very well recognized. The Python programming language is now becoming a “communication language” among developers. Without a doubt, people with limited access to commercial software can develop their own programs using Python.
In addition, running in a cloud service is a seamless opportunity to increase computing efficiency. One does not need to take up large memory space on the local computer. Google Colaboratory (or Colab) is a research initiative of Google to make computing in Python possible in the cloud. The type of Python shell we work with in Google Colab is an IPython notebook.
This article consists of 3 tutorials, starting from how to access the Public geoscience Data Google Drive, then how to unzip and open simple files, and finally how to access the SEG Wiki and GDR OpenEI directly from the website in Google Colab.
Open the Google Drive Public geoscience Data link. Once you click the link, you will have a copy of that database stored in your Shared with Me directory. Your next move is to transfer Public geoscience Data from your Shared with Me directory to your My Drive. You may not know how to perform this action in Google Drive. The following GIF animation shows you the way!
The Public geoscience Data is now stored in your My Drive directory. Make sure it is stored by visiting your My Drive directory and searching for the Public geoscience Data folder. Now open the folder. You will find 12 folders and 2 files inside.
However, not all folders and files inside that Public geoscience Data contain geoscience datasets for your further processing, analysis, and computation. For instance, the report for images folder contains geoscience documents (in PDF and image format) published by the University of Stavanger and the University of Oslo. These documents should be very valuable as research references, but not for computations (seismic processing, well log analysis, et cetera). If you are curious about the content, here is the list that I made.
In the above list, you find that some folders contain ZIP files and some simple files such as images in PNG and JPG format. In Tutorial 2 of this article, you will learn how to unzip files and open these images in Google Colab. However, the tutorial on how to access big files (which are the geoscience datasets) such as the seismic data SEGY or SGY and well-log data LAS files won’t be covered here, since it requires a much longer discussion. But don’t worry, we will cover this in the next Part 2 of this tutorial series. Later in Part 2, we will cover more exposure to Python code.
Another tutorial article on the way: Intro to Open Database for Geoscience Computing, Part 2 of 2
After sorting them out, I have screened 6 folders that contain datasets, as follows:
Canning 3D TDQ seismic
Dutch F3 seismic data
GEOLINK North Sea wells
Poseidon seismic
Core images
48 well composite logs
Next, if you check the details of just one of the 6 folders, for instance Canning 3D TDQ, you will find a seismic data file named canning3d_GDA94_UTM50s.sgy that has a file size of 103 gigabytes! You won’t be surprised if you are in the oil and gas industry because, as with most seismic data, the file size is normally very large. It won’t be a wise decision to download this very large seismic data to your local PC. That’s why I introduce you to an effective way to open this file in the Cloud. The Cloud service that I mention here is Google Colab.
So now let’s go to Google Colab. Visit this link to be redirected to Google Colab. In the very first window that pops up when you visit Google Colab, click New Notebook to create a new IPython notebook.
Then you will be redirected to your new notebook. Now you have a new blank notebook, where you will put the code to open the Public geoscience Data. The following is the structure of a notebook.
On top of your new notebook, you see Untitled.ipynb. It is the name of your notebook, so change it to your preferred name, but remember not to remove the extension .ipynb, because it’s the typical extension of an IPython notebook. For instance, name our new notebook Public-geoscience-Data.ipynb.
You also see Code. The Code button is used to create a new code cell in the notebook, so whenever you want to write a code script, click on it to add a new cell. For warming up, you could write some simple code, as shown in the sketch below. To run your code, click the Play button on the left of the cell, or simply press CTRL + ENTER.
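The original screenshot isn’t reproduced here, so the lines below are just an illustrative warm-up cell; any simple Python statements will do.
# A warm-up cell: any simple Python works here
x = 10
y = 32
print("Warming up in Colab:", x + y)  # prints: Warming up in Colab: 42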
At the far left of your notebook, you also find three symbols. The bottom-most symbol is Folder, where you can navigate the directories you are currently working in. By default, if you click it, there is only one folder named sample_data. It is created for you by Colab, but we don’t use it here.
Now, let’s start to access your Public geoscience Data folder in Google Drive from your Google Colab. Add a new cell by clicking the Code button, and run the following script.
from google.colab import drive
drive.mount('/content/drive')
The following message will appear. The message provides a Google Drive link. Click that link and follow the instructions to grant access to your Google Drive account. An authorization code will then be provided to you; copy it and paste it back into your notebook cell after “Enter your authorization code:”. Then, press ENTER. Please wait until the authorization process is finished.
If you have finished the authorization, you can check the Google Drive folders available under the Folder button (previously discussed). Click REFRESH if you cannot find them. Then navigate through your Google Drive folders to find the Public geoscience Data. This is the path of that folder.
"/content/My Drive/Public geoscience Data"
The folder should be there. If you do not find that folder, you might not have succeeded in moving the Public geoscience Data from your Shared with Me to your My Drive. In that case, you should check the tutorial above on how to do so again.
Another way to ensure that Public geoscience Data is in the path is to run the following script.
cd "/content/drive/My Drive/Public geoscience Data"
If it runs without an error, fine, no problem. If the message “No such file or directory” comes out, it means you need to make sure you have successfully moved the folder to your My Drive.
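If you prefer to check from Python instead, a quick existence test is an optional alternative sketch:
import os
# True only if the folder was successfully moved to My Drive and Drive is mounted
print(os.path.exists("/content/drive/My Drive/Public geoscience Data"))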
Next, see the content of the folder. You have 2 ways to do so. The easiest way is to navigate to the Folder section at the far left of your notebook and expand the folder, just like when you operate your PC. Another, more elegant, way is to run the following script.
ls "/content/drive/My Drive/Public geoscience Data"
You will find the following contents.
The script ls is used to inspect a folder, whereas the previously mentioned cd is used to go to a certain folder in our working directory. These are Bash commands, from a command-line language used in Linux. Bash is an interesting topic in itself, but we don’t need to focus on it for the time being!
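As a side note, if you would rather stay in pure Python than use Bash, the standard os module offers a rough equivalent of ls; this optional sketch assumes the Drive has been mounted as in the earlier step.
import os
# Rough Python equivalent of the ls command above
folder = "/content/drive/My Drive/Public geoscience Data"
for name in sorted(os.listdir(folder)):
    print(name)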
Congratulations! You now have your open database in your Google Colab! As previously discussed, only 6 folders inside this Public geoscience Data are considered datasets.
Further, you can look into some of the folders (recall Table 1), such as the GEOLINK North Sea Wells. Again, run ls to inspect the files inside this folder.
ls "/content/drive/My Drive/Public geoscience Data/GEOLINK North sea wells with Lithology interpretation"
There is one ZIP file named GEOLINK_Lithology and wells NORTH SEA.zip, and also an image file in PNG format named Lithology code in the well.png. In the next Tutorial 2, you will learn how to unzip this file and open an image in your Google Colab notebook.
Another advantage of using Google Colab: you can unzip a file without downloading it to your local PC and without using any unzipper program. For the above GEOLINK North Sea ZIP file, run the following script to unzip it.
!unzip '/content/drive/My Drive/Public geoscience Data/GEOLINK North sea wells with Lithology interpretation/GEOLINK_Lithology and wells NORTH SEA.zip' -d '/content/GEOLINK North Sea'
It will take some time to unzip the files. When it is finished, the unzipped files will appear in a new folder under your working directory /content, called GEOLINK North Sea.
The above script can be broken down into four parts: !unzip ‘zip_file_path’ -d ‘to_new_folder’. The first part, !unzip, instructs Colab to unzip the file; the second part, ‘zip_file_path’, is the path of the ZIP file; the third part, -d, tells Colab to store the unzipped files in a new directory; and the new directory path is given in the fourth part, ‘to_new_folder’.
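For completeness, the same extraction can be done without Bash at all, using Python’s built-in zipfile module; this is only an alternative sketch, since the !unzip command above already does the job.
import zipfile
# Pure-Python alternative to the !unzip command above
zip_path = ("/content/drive/My Drive/Public geoscience Data/"
            "GEOLINK North sea wells with Lithology interpretation/"
            "GEOLINK_Lithology and wells NORTH SEA.zip")
with zipfile.ZipFile(zip_path) as z:
    z.extractall("/content/GEOLINK North Sea")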
Congratulations! You now know how to unzip a file. Because the unzipped files are now stored in a new directory, run ls to inspect the contents.
ls '/content/GEOLINK North Sea'
You will find approximately 200 individual well-log LAS files inside.
These are the wells drilled in the North Sea. Again, the tutorial on how to open these LAS files will be covered in the sequel Part 2 article.
We will now focus on another task: opening a simple file such as a PNG or JPG image. If you go back to the original GEOLINK North Sea Wells folder, you will find an image file named location of geolink wells.png. It is the location map of these wells.
We will use a Python library called Pillow, or PIL. First off, we need to import a module from PIL named Image in our Colab notebook (PIL is already available in Colab, so we do not need to install it).
from PIL import Image
Next, run the following script to open the image. The image file path is /content/GEOLINK North Sea/location of wells.png. We use the open function from the Image module to do so.
img = Image.open("/content/GEOLINK North Sea/location of wells.png")
img
When you run the script, an image will appear in your Colab notebook! As you may have already guessed, this is the North Sea map that contains the well locations. This is an offshore field near the Snorre Field.
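If you also want to inspect the image programmatically, Pillow exposes a few basic attributes; the sketch below simply prints metadata of the img object we opened (the exact values depend on the file).
# Inspect basic properties of the opened image
print(img.format)  # file format, e.g. PNG
print(img.size)    # (width, height) in pixels
print(img.mode)    # color mode, e.g. RGB or RGBA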
Congratulations! You already know how to open an image in Colab. Up to this point, you know enough to access some datasets from the Public geoscience Data Google Drive. Of course, there are still many more files left untouched in this Part 1 article, such as seismic data and well-log files, so stay tuned for the Part 2 article. At the very least, you now know how to access a geoscience database in Google Drive directly from Google Colab.
Next, in Tutorial 3, we will discuss how to access the other open databases, namely the SEG Wiki Open Data and GDR OpenEI.
To access both of these databases, we do not need the above Google Drive workflow anymore! We can stream and download the datasets directly from the webpage to our Colab notebook.
Let’s take a peek at what both of these open database webpages look like. First off, we will visit the SEG Wiki Open Data webpage.
In the contents, you will see 30+ open datasets. We recommend first visiting the content link of the data that you would like to access in Google Colab. There you will find the details of the data; then click the associated link. For instance, visit this link: Stratton 3D seismic dataset.
Scroll down the Stratton 3D webpage to the section How to download, where there are files with links. For now, we will use the 3D filtered migration file in the third row; right-click on it and “Copy link address”.
The link has been copied; now go back to your Colab notebook, paste the copied URL into the following script, and run the script. Remember to paste the copied URL inside the single quotation marks.
!wget 'your_copied_URL' -P '/content/Stratton/seismic'
Here, we use the !wget syntax to download the dataset at the URL. The next argument, -P, is passed to store the downloaded data in a new folder, which is named in the next argument, '/content/Stratton/seismic’. This !wget syntax is universal, meaning that you can download any dataset available in the form of a URL.
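If you ever want the same download in pure Python, urllib.request.urlretrieve from the standard library is a rough equivalent of !wget; in this sketch, the URL placeholder and the output file name are assumptions that you should replace with your own copied link.
import os
import urllib.request
# Rough Python equivalent of the !wget command above
url = "your_copied_URL"  # paste the copied link address here
target_dir = "/content/Stratton/seismic"
os.makedirs(target_dir, exist_ok=True)
# The output file name below is an assumption; name it as you like
urllib.request.urlretrieve(url, os.path.join(target_dir, "Stratton3D_32bit.sgy"))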
As we notice while downloading the dataset from the URL, the file size is 423 megabytes! Imagine we downloaded this dataset to our local computer; it would surely take up a lot of space.
If we navigate to the newly created folder seismic inside Stratton, we will see a seismic data file named Stratton3D_32bit.sgy. In the Part 2 article, we will discuss how to open this seismic data.
Congratulations! You now know how to download a dataset from the SEG Wiki Open Data directly to Google Colab. We will apply the same !wget approach to the other database, GDR OpenEI.
Next, let’s visit the Geothermal Data Repository (GDR) OpenEI webpage. On your first visit, you need to browse for a dataset with a certain search keyword. Just type gravity forge. As an instance, we will use the 3D gravity data of the FORGE site near Milford in Utah.
Visit the Utah FORGE: 3D Gravity Data entry that appears in the search results. If you can’t find it, just type the complete name in the “Search” box. Inside the page, scroll down to find the ZIP file FORGE_3D_gravity.zip, right-click on “Download”, and similarly “Copy link address”.
Again, go back to the Colab notebook, and this time we will leave it to you! If you still remember (we are sure you do), use !wget to download the URL and then specify the target folder.
!wget 'https://gdr.openei.org/files/1144/FORGE_3D_gravity%20(1).zip' -P /content/FORGE
You now have the downloaded file in the newly created folder FORGE. And as you realize, it is a ZIP file. You already know how to unzip it for sure. Run the following script.
!unzip '/content/FORGE/FORGE_3D_gravity (1).zip' -d '/content/FORGE'
Congratulations! You have accessed some datasets from GDR OpenEI using !wget.
Throughout this first part of the introductory article series on Open Geoscience Computing using Python in the Google Colab cloud environment, we have discussed three tutorials that cover the basic workflow for accessing the open datasets.
To wrap up, I created a Colab notebook that contains all the code discussed in this tutorial. Open the notebook here.
Along with this article, I also made a GitHub repository that contains step-by-step tutorials about Open Geoscience Computing. Visit this link to start the journey.
Last but not least, anyone who is interested in using open datasets is highly recommended to cite these data with respect to the data owners. The Citation Wiki is available on my GitHub wiki page. The citation content will also be updated frequently.
Enjoy your Open Geoscience Computing, and stay tuned for Part 2!
Reach my GitHub: github.com/yohanesnuwara
Reach my email: [email protected]
Reach my LinkedIn: https://www.linkedin.com/in/yohanes-nuwara-5492b4118/
3Sum in Python
Suppose we have an array that stores n integers. We want to find elements a, b and c in the array such that a + b + c = 0, and return all unique triplets in the array which satisfy this condition. So if the array is like [-1,0,1,2,-1,-4], then the result will be [[-1, -1, 2], [-1, 0, 1]]
To solve this, we will follow these steps −
Sort the array nums, and define an array res
for i in range 0 to length of nums – 3
   if i > 0 and nums[i] = nums[i - 1], then skip the next part and continue
   l := i + 1 and r := length of nums – 1
   while l < r
      sum := sum of nums[i], nums[l] and nums[r]
      if sum < 0, then l := l + 1, otherwise when sum > 0, then r := r – 1
      otherwise insert nums[i], nums[l], nums[r] into the res array, then
         while l < length of nums – 1 and nums[l] = nums[l + 1], increase l by 1
         while r > 0 and nums[r] = nums[r - 1], decrease r by 1
         increase l by 1 and decrease r by 1
return res
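For example, with nums = [-1,0,1,2,-1,-4] (a quick dry run of the steps above): after sorting, nums = [-4,-1,-1,0,1,2]. For i = 0 (-4), the two pointers never reach a zero sum. For i = 1 (-1), the pointers find -1 + -1 + 2 = 0 and then -1 + 0 + 1 = 0, so [-1,-1,2] and [-1,0,1] are inserted into res. i = 2 repeats the previous -1 and is skipped, and the remaining values of i yield no further triplets.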
Let us see the following implementation to get better understanding −
class Solution(object):
def threeSum(self, nums):
nums.sort()
result = []
for i in range(len(nums)-2):
if i> 0 and nums[i] == nums[i-1]:
continue
l = i+1
r = len(nums)-1
while(l<r):
sum = nums[i] + nums[l] + nums[r]
if sum<0:
l+=1
elif sum >0:
r-=1
else:
result.append([nums[i],nums[l],nums[r]])
while l<len(nums)-1 and nums[l] == nums[l + 1] : l += 1
while r>0 and nums[r] == nums[r - 1]: r -= 1
l+=1
r-=1
return result
ob1 = Solution()
print(ob1.threeSum([-1,0,1,2,-1,-4]))
[-1,0,1,2,-1,-4]
[[-1,-1,2],[-1,0,1]] | [
{
"code": null,
"e": 1354,
"s": 1062,
"text": "Suppose we have an array of numbers. It stores n integers, there are there elements a, b, c in the array, such that a + b + c = 0. Find all unique triplets in the array which satisfies the situation. So if the array is like [-1,0,1,2,-1,-4], then the result will be [[-1, 1, 0], [-1, -1, 2]]"
},
{
"code": null,
"e": 1398,
"s": 1354,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1443,
"s": 1398,
"text": "Sort the array nums, and define an array res"
},
{
"code": null,
"e": 1930,
"s": 1443,
"text": "for i in range 0 to length of nums – 3if i > 0 and nums[i] = nums[i - 1], then skip the next part and continuel := i + 1 and r := length of nums – 1while l < rsum := sum of nums[i], nums[l] and nums[r]if sum < 0, then l := l + 1, otherwise when sum > 0, then r := r – 1otherwise insert nums[i], nums[l], nums[r] into the res arraywhile l < length of nums – 1 and nums[l] = nums[l + 1]increase l by 1while r > 0 and nums[r] = nums[r - 1]decrease r by 1increase l by 1 and decrease r by 1"
},
{
"code": null,
"e": 2003,
"s": 1930,
"text": "if i > 0 and nums[i] = nums[i - 1], then skip the next part and continue"
},
{
"code": null,
"e": 2042,
"s": 2003,
"text": "l := i + 1 and r := length of nums – 1"
},
{
"code": null,
"e": 2381,
"s": 2042,
"text": "while l < rsum := sum of nums[i], nums[l] and nums[r]if sum < 0, then l := l + 1, otherwise when sum > 0, then r := r – 1otherwise insert nums[i], nums[l], nums[r] into the res arraywhile l < length of nums – 1 and nums[l] = nums[l + 1]increase l by 1while r > 0 and nums[r] = nums[r - 1]decrease r by 1increase l by 1 and decrease r by 1"
},
{
"code": null,
"e": 2424,
"s": 2381,
"text": "sum := sum of nums[i], nums[l] and nums[r]"
},
{
"code": null,
"e": 2493,
"s": 2424,
"text": "if sum < 0, then l := l + 1, otherwise when sum > 0, then r := r – 1"
},
{
"code": null,
"e": 2555,
"s": 2493,
"text": "otherwise insert nums[i], nums[l], nums[r] into the res array"
},
{
"code": null,
"e": 2625,
"s": 2555,
"text": "while l < length of nums – 1 and nums[l] = nums[l + 1]increase l by 1"
},
{
"code": null,
"e": 2641,
"s": 2625,
"text": "increase l by 1"
},
{
"code": null,
"e": 2694,
"s": 2641,
"text": "while r > 0 and nums[r] = nums[r - 1]decrease r by 1"
},
{
"code": null,
"e": 2710,
"s": 2694,
"text": "decrease r by 1"
},
{
"code": null,
"e": 2746,
"s": 2710,
"text": "increase l by 1 and decrease r by 1"
},
{
"code": null,
"e": 2757,
"s": 2746,
"text": "return res"
},
{
"code": null,
"e": 2827,
"s": 2757,
"text": "Let us see the following implementation to get better understanding −"
},
{
"code": null,
"e": 2838,
"s": 2827,
"text": " Live Demo"
},
{
"code": null,
"e": 3542,
"s": 2838,
"text": "class Solution(object):\n def threeSum(self, nums):\n nums.sort()\n result = []\n for i in range(len(nums)-2):\n if i> 0 and nums[i] == nums[i-1]:\n continue\n l = i+1\n r = len(nums)-1\n while(l<r):\n sum = nums[i] + nums[l] + nums[r]\n if sum<0:\n l+=1\n elif sum >0:\n r-=1\n else:\n result.append([nums[i],nums[l],nums[r]])\n while l<len(nums)-1 and nums[l] == nums[l + 1] : l += 1\n while r>0 and nums[r] == nums[r - 1]: r -= 1\n l+=1\n r-=1\n return result\nob1 = Solution()\nprint(ob1.threeSum([-1,0,1,2,-1,-4]))"
},
{
"code": null,
"e": 3559,
"s": 3542,
"text": "[-1,0,1,2,-1,-4]"
},
{
"code": null,
"e": 3580,
"s": 3559,
"text": "[[-1,-1,2],[-1,0,1]]"
}
]
|
C# Program to Overload Unary Increment (++) and Decrement (--) Operators - GeeksforGeeks | 16 Nov, 2021
In C#, overloading is the common way of implementing polymorphism. It is the ability to redefine a function in more than one form. A user can implement method overloading by defining two or more methods in a class sharing the same name but with different method signatures. So in this article, we will learn how to overload unary increment and decrement operators.
In C#, the decrement operator (--) is used to decrement an integer value by one. It comes in two forms: the pre-decrement operator and the post-decrement operator. When this operator is placed before a variable name, it is known as the pre-decrement operator, e.g., --y, whereas when the operator is placed after a variable name, it is known as the post-decrement operator, e.g., y--. We can also overload the decrement operator using the following syntax. Here we will pass the object as the parameter, set the decremented value on the object, and return it.
Syntax:
public static GFG operator --(GFG obj)
{
obj.value = --obj.value;
return obj;
}
Example:
Input : 50
Output : 49
Input : 79
Output : 78
Example:
C#
// C# program to demonstrate overloading decrement operator
using System;

class GFG{

// Declare integer variable
private int value;

// Initialize data members
public GFG(int value)
{
    this.value = value;
}

// Overload unary decrement operator
public static GFG operator--(GFG obj)
{
    obj.value = --obj.value;
    return obj;
}

// Display method to display the value
public void Display()
{
    Console.WriteLine("Values : " + value);
    Console.WriteLine();
}
}

class Geeks{

// Driver code
static void Main(string[] args)
{
    // Declare the object and assign
    // the value to 50
    GFG obj = new GFG(50);

    // Call the unary decrement overload method
    obj--;

    // Call the display method
    obj.Display();
}
}
Output:
Values : 49
In C#, the increment operator (++) is used to increment an integer value by one. It comes in two forms: the pre-increment operator and the post-increment operator. When this operator is placed before a variable name, it is known as the pre-increment operator, e.g., ++y, whereas when the operator is placed after a variable name, it is known as the post-increment operator, e.g., y++. We can also overload the increment operator using the following syntax. Here we will pass the object as the parameter, set the incremented value on the object, and return it.
Syntax:
public static GFG operator ++(GFG obj)
{
obj.value = ++obj.value;
return obj;
}
Example:
Input : 50
Output : 51
Input : 79
Output : 80
Example:
C#
// C# program to demonstrate overloading
// increment operator
using System;

class GFG{

// Declare integer variable
private int value;

// Initialize data members
public GFG(int value)
{
    this.value = value;
}

// Overload unary increment operator
public static GFG operator ++(GFG obj)
{
    obj.value = ++obj.value;
    return obj;
}

// Display method to display the value
public void Display()
{
    Console.WriteLine("Values : " + value);
    Console.WriteLine();
}
}

class Geeks{

// Driver code
static void Main(string[] args)
{
    // Declare the object and assign the value to 50
    GFG obj = new GFG(50);

    // Call the unary increment overload method
    obj++;

    // Call the display method
    obj.Display();
}
}
Output:
Values : 51
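One detail worth noting: in C#, a single operator ++ (or operator --) overload serves both the prefix and postfix forms; the compiler decides which value the surrounding expression sees. A short sketch reusing the GFG class above (and since this overload mutates the object in place, both forms leave the object in the same final state):

GFG obj = new GFG(50);
obj++;           // postfix form resolves to the same overload
++obj;           // prefix form
obj.Display();   // prints: Values : 52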
[
{
"code": null,
"e": 24222,
"s": 24194,
"text": "\n16 Nov, 2021"
},
{
"code": null,
"e": 24587,
"s": 24222,
"text": "In C#, overloading is the common way of implementing polymorphism. It is the ability to redefine a function in more than one form. A user can implement method overloading by defining two or more methods in a class sharing the same name but with different method signatures. So in this article, we will learn how to overload unary increment and decrement operators."
},
{
"code": null,
"e": 25215,
"s": 24587,
"text": "In C#, the decrement operator(–) is used to decrement an integer value by one. It is of two types pre-decrement operator and post decrement operator. When this operator is placed before any variable name then such type of operator is known as pre-decrement operator, e.g., –y whereas when the operator is placed after any variable name then such type of operator is known as post-decrement operator, e.g., y–. We can also overload the decrement operator using the following syntax. Here we will pass the object as the parameter and then set the decrement value to object value and this method will return the decremented value."
},
{
"code": null,
"e": 25223,
"s": 25215,
"text": "Syntax:"
},
{
"code": null,
"e": 25311,
"s": 25223,
"text": "public static GFG operator --(GFG obj)\n{\n obj.value = --obj.value;\n return obj;\n}"
},
{
"code": null,
"e": 25320,
"s": 25311,
"text": "Example:"
},
{
"code": null,
"e": 25369,
"s": 25320,
"text": "Input : 50\nOutput : 49\n\nInput : 79\nOutput : 78"
},
{
"code": null,
"e": 25378,
"s": 25369,
"text": "Example:"
},
{
"code": null,
"e": 25381,
"s": 25378,
"text": "C#"
},
{
"code": "// C# program to demonstrate overloading decrement operatorusing System; class GFG{ // Declare integer variableprivate int value; // Initialize data memberspublic GFG(int value){this.value = value;} // Overload unary decrement operatorpublic static GFG operator--(GFG obj){ obj.value = --obj.value; return obj;} // Display method to display the valuepublic void Display(){ Console.WriteLine(\"Values : \" + value); Console.WriteLine();}} class Geeks{ // Driver codestatic void Main(string[] args){ // Declare the object and assign // the value to 50 GFG obj = new GFG(50); // Call the unary decrement overload method obj--; // Call the display method obj.Display();}}",
"e": 26105,
"s": 25381,
"text": null
},
{
"code": null,
"e": 26113,
"s": 26105,
"text": "Output:"
},
{
"code": null,
"e": 26125,
"s": 26113,
"text": "Values : 49"
},
{
"code": null,
"e": 26756,
"s": 26125,
"text": "In C#, the increment operator(++) is used to increment an integer value by one. It is of two types pre-increment operator and post-increment operator. When this operator is placed before any variable name then such type of operator is known as pre-increment operator, e.g., ++y whereas when the operator is placed after any variable name then such type of operator is known as post-increment operator, e.g., y++. We can also overload the increment operator using the following syntax. Here we will pass the object as the parameter and then set the increment value to object value and this method will return the incremented value."
},
{
"code": null,
"e": 26764,
"s": 26756,
"text": "Syntax:"
},
{
"code": null,
"e": 26852,
"s": 26764,
"text": "public static GFG operator ++(GFG obj)\n{\n obj.value = ++obj.value;\n return obj;\n}"
},
{
"code": null,
"e": 26861,
"s": 26852,
"text": "Example:"
},
{
"code": null,
"e": 26910,
"s": 26861,
"text": "Input : 50\nOutput : 51\n\nInput : 79\nOutput : 80"
},
{
"code": null,
"e": 26919,
"s": 26910,
"text": "Example:"
},
{
"code": null,
"e": 26922,
"s": 26919,
"text": "C#"
},
{
"code": "// C# program to demonstrate overloading// increment operatorusing System; class GFG{ // Declare integer variableprivate int value; // Initialize data memberspublic GFG(int value){ this.value = value;} // Overload unary increment operatorpublic static GFG operator ++(GFG obj){ obj.value = ++obj.value; return obj;} // Display method to display the valuepublic void Display(){ Console.WriteLine(\"Values : \" + value); Console.WriteLine();}} class Geeks{ // Driver codestatic void Main(string[] args){ // Declare the object and assign the value to 50 GFG obj = new GFG(50); // Call the unary increment overload method obj++; // Call the display method obj.Display();}}",
"e": 27651,
"s": 26922,
"text": null
},
{
"code": null,
"e": 27659,
"s": 27651,
"text": "Output:"
},
{
"code": null,
"e": 27671,
"s": 27659,
"text": "Values : 51"
}
]
|
6 Lesser-Known Pandas Aggregate Functions | by Soner Yıldırım | Towards Data Science | The groupby is one of the most frequently used Pandas functions for data analysis. It first divides the data points (i.e. rows in a data frame) into groups based on the distinct values in a column. Then, it calculates aggregated values for each group.
Consider we have a dataset that contains brands and prices of cars. In order to calculate the average price for each brand, we group the rows based on the brand column and then apply the mean function on the price column.
Pandas provides several aggregate functions that can be used along with the groupby function such as mean, min, max, sum, and so on. In this article, we will see some of the lesser-known aggregate functions that make the groupby function even more useful.
The functions we will cover are:
first
last
nth
nunique
describe
quantile
Let’s start with creating a sample data frame.
import numpy as npimport pandas as pddf = pd.DataFrame({ "Brand": ["Ford","Honda","Toyota","Seat"] * 25, "Price": np.random.randint(10000, 30000, size=100)})df.head()
We have a data frame that contains the price and brand information of 100 cars.
The first function, as its name suggests, returns the first value for each group.
df.groupby("Brand", as_index=False).first()
The last function returns the last value for each group.
df.groupby("Brand", as_index=False).last()
The first and last functions may not seem very useful for this dataset. However, there will be cases where you need a simple solution to find the first or last entry for each group. When you work with date or time-based data, the order matters even more.
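As a quick illustration (a minimal sketch with a made-up Date column, not part of the dataset above), sorting by date before grouping makes first and last return the earliest and latest record per brand:

import pandas as pd

df_dates = pd.DataFrame({
    "Brand": ["Ford", "Ford", "Honda", "Honda"],
    "Date": pd.to_datetime(["2021-03-01", "2021-01-15",
                            "2021-02-10", "2021-04-05"]),
    "Price": [21000, 20000, 18000, 19000]
})

# Sort chronologically so first()/last() are meaningful
df_dates = df_dates.sort_values("Date")

df_dates.groupby("Brand", as_index=False).first()  # earliest entry per brand
df_dates.groupby("Brand", as_index=False).last()   # latest entry per brand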
The nth function extends the capabilities of the first and last functions. It allows for getting the nth row for each group.
df.groupby("Brand", as_index=False).nth(2)
nth(0) is the same as first()
nth(-1) is the same as last()
The nunique function returns the number of distinct values for each group. It will probably be 25 for each brand in our dataset because we generated 25 random integers in a large range.
When working with real life datasets, the unique values per category or group might be a valuable insight.
df.groupby("Brand", as_index=False).nunique()
The describe function returns several statistics for each group. It is usually used to get an overview about the entire data frame. We can also use it with the groupby function to compare the groups from a few different perspectives.
df.groupby("Brand", as_index=False).describe()
The 25%, 50%, and 75% values are the first, second, and third quartiles, respectively. Together with the other statistics, they provide a structured overview of the distribution of values.
The first quartile (25%) means that 25% of the values are below this value. Similarly, 50% of the values are below the second quartile, so the second quartile is the median value.
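For instance (a tiny illustration, not from the dataset above): for the values [1, 2, 3, 4], the 0.5 quantile computed with pandas’ default linear interpolation is 2.5, the value that splits the data in half.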
We get the 25%, 50%, and 75% quartiles with the describe function. The quantile function offers more flexibility because it accepts a parameter.
In order to find the 40% quantile, we pass 0.4 as a parameter to the quantile function.
df.groupby("Brand", as_index=False).quantile(0.4)
The groupby function is a life saver in exploratory data analysis. The mean, sum, min, and max are the commonly used aggregate functions with the groupby.
The functions we have covered in this article are not so commonly used but there will be cases where they come in quite handy.
Last but not least, if you are not a Medium member yet and plan to become one, I kindly ask you to do so using the following link. I will receive a portion from your membership fee with no additional cost to you.
sonery.medium.com
Thank you for reading. Please let me know if you have any feedback. | [
{
"code": null,
"e": 423,
"s": 171,
"text": "The groupby is one of the most frequently used Pandas functions for data analysis. It first divides the data points (i.e. rows in a data frame) into groups based on the distinct values in a column. Then, it calculates aggregated values for each group."
},
{
"code": null,
"e": 646,
"s": 423,
"text": "Consider we have a dataset that contains brands and prices of cars. In order to calculate the average price for each branch, we group the rows based on the brand column and then apply the mean function on the price column."
},
{
"code": null,
"e": 902,
"s": 646,
"text": "Pandas provides several aggregate functions that can be used along with the groupby function such as mean, min, max, sum, and so on. In this article, we will see some of the lesser-known aggregate functions that make the groupby function even more useful."
},
{
"code": null,
"e": 935,
"s": 902,
"text": "The functions we will cover are:"
},
{
"code": null,
"e": 941,
"s": 935,
"text": "first"
},
{
"code": null,
"e": 946,
"s": 941,
"text": "last"
},
{
"code": null,
"e": 950,
"s": 946,
"text": "nth"
},
{
"code": null,
"e": 958,
"s": 950,
"text": "nunique"
},
{
"code": null,
"e": 967,
"s": 958,
"text": "describe"
},
{
"code": null,
"e": 976,
"s": 967,
"text": "quantile"
},
{
"code": null,
"e": 1023,
"s": 976,
"text": "Let’s start with creating a sample data frame."
},
{
"code": null,
"e": 1196,
"s": 1023,
"text": "import numpy as npimport pandas as pddf = pd.DataFrame({ \"Brand\": [\"Ford\",\"Honda\",\"Toyota\",\"Seat\"] * 25, \"Price\": np.random.randint(10000, 30000, size=100)})df.head()"
},
{
"code": null,
"e": 1276,
"s": 1196,
"text": "We have a data frame that contains the price and brand information of 100 cars."
},
{
"code": null,
"e": 1358,
"s": 1276,
"text": "The first function, as its name suggests, returns the first value for each group."
},
{
"code": null,
"e": 1402,
"s": 1358,
"text": "df.groupby(\"Brand\", as_index=False).first()"
},
{
"code": null,
"e": 1459,
"s": 1402,
"text": "The last function returns the last value for each group."
},
{
"code": null,
"e": 1502,
"s": 1459,
"text": "df.groupby(\"Brand\", as_index=False).last()"
},
{
"code": null,
"e": 1757,
"s": 1502,
"text": "The first and last functions may not seem very useful for this dataset. However, there will be cases where you need a simple solution to find the first or last entry for each group. When you work with date or time-based data, the order matters even more."
},
{
"code": null,
"e": 1882,
"s": 1757,
"text": "The nth function extends the capabilities of the first and last functions. It allows for getting the nth row for each group."
},
{
"code": null,
"e": 1925,
"s": 1882,
"text": "df.groupby(\"Brand\", as_index=False).nth(2)"
},
{
"code": null,
"e": 1955,
"s": 1925,
"text": "nth(0) is the same as first()"
},
{
"code": null,
"e": 1985,
"s": 1955,
"text": "nth(-1) is the same as last()"
},
{
"code": null,
"e": 2171,
"s": 1985,
"text": "The nunique function returns the number of distinct values for each group. It will probably be 25 for each brand in our dataset because we generated 25 random integers in a large range."
},
{
"code": null,
"e": 2278,
"s": 2171,
"text": "When working with real life datasets, the unique values per category or group might be a valuable insight."
},
{
"code": null,
"e": 2324,
"s": 2278,
"text": "df.groupby(\"Brand\", as_index=False).nunique()"
},
{
"code": null,
"e": 2558,
"s": 2324,
"text": "The describe function returns several statistics for each group. It is usually used to get an overview about the entire data frame. We can also use it with the groupby function to compare the groups from a few different perspectives."
},
{
"code": null,
"e": 2605,
"s": 2558,
"text": "df.groupby(\"Brand\", as_index=False).describe()"
},
{
"code": null,
"e": 2794,
"s": 2605,
"text": "The 25%, 50%, and 75% values are the first, second, and third quartiles, respectively. Together with the other statistics, they provide a structured overview of the distribution of values."
},
{
"code": null,
"e": 2965,
"s": 2794,
"text": "The first quantile (25%) means that 25% of values are below this value. Similarly, 50% of values are below the second quantile so the second quantile is the median value."
},
{
"code": null,
"e": 3110,
"s": 2965,
"text": "We get the 25%, 50%, and 75% quantiles with the describe function. The quantile function offers more flexibility because it accepts a parameter."
},
{
"code": null,
"e": 3198,
"s": 3110,
"text": "In order to find the 40% quantile, we pass 0.4 as a parameter to the quantile function."
},
{
"code": null,
"e": 3248,
"s": 3198,
"text": "df.groupby(\"Brand\", as_index=False).quantile(0.4)"
},
{
"code": null,
"e": 3403,
"s": 3248,
"text": "The groupby function is a life saver in exploratory data analysis. The mean, sum, min, and max are the commonly used aggregate functions with the groupby."
},
{
"code": null,
"e": 3530,
"s": 3403,
"text": "The functions we have covered in this article are not so commonly used but there will be cases where they come in quite handy."
},
{
"code": null,
"e": 3743,
"s": 3530,
"text": "Last but not least, if you are not a Medium member yet and plan to become one, I kindly ask you to do so using the following link. I will receive a portion from your membership fee with no additional cost to you."
},
{
"code": null,
"e": 3761,
"s": 3743,
"text": "sonery.medium.com"
}
]
|
MySQL Tryit Editor v1.0 | SELECT CAST("2017-08-29" AS DATE);
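For reference (the original page renders the result inside the editor): this statement converts the string literal to a DATE value, so running it returns 2017-08-29.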
Edit the SQL Statement, and click "Run SQL" to see the result.
This SQL-Statement is not supported in the WebSQL Database.
The example still works, because it uses a modified version of SQL.
Your browser does not support WebSQL.
You are now using a light-version of the Try-SQL Editor, with a read-only Database.
If you switch to a browser with WebSQL support, you can try any SQL statement, and play with the Database as much as you like. The Database can also be restored at any time.
Our Try-SQL Editor uses WebSQL to demonstrate SQL.
A Database-object is created in your browser, for testing purposes.
You can try any SQL statement, and play with the Database as much as you like. The Database can be restored at any time, simply by clicking the "Restore Database" button.
WebSQL stores a Database locally, on the user's computer. Each user gets their own Database object.
WebSQL is supported in Chrome, Safari, and Opera.
If you use another browser you will still be able to use our Try SQL Editor, but a different version, using a server-based ASP application, with a read-only Access Database, where users are not allowed to make any changes to the data. | [
{
"code": null,
"e": 35,
"s": 0,
"text": "SELECT CAST(\"2017-08-29\" AS DATE);"
},
{
"code": null,
"e": 37,
"s": 35,
"text": ""
},
{
"code": null,
"e": 109,
"s": 46,
"text": "Edit the SQL Statement, and click \"Run SQL\" to see the result."
},
{
"code": null,
"e": 169,
"s": 109,
"text": "This SQL-Statement is not supported in the WebSQL Database."
},
{
"code": null,
"e": 237,
"s": 169,
"text": "The example still works, because it uses a modified version of SQL."
},
{
"code": null,
"e": 275,
"s": 237,
"text": "Your browser does not support WebSQL."
},
{
"code": null,
"e": 360,
"s": 275,
"text": "Your are now using a light-version of the Try-SQL Editor, with a read-only Database."
},
{
"code": null,
"e": 534,
"s": 360,
"text": "If you switch to a browser with WebSQL support, you can try any SQL statement, and play with the Database as much as you like. The Database can also be restored at any time."
},
{
"code": null,
"e": 585,
"s": 534,
"text": "Our Try-SQL Editor uses WebSQL to demonstrate SQL."
},
{
"code": null,
"e": 653,
"s": 585,
"text": "A Database-object is created in your browser, for testing purposes."
},
{
"code": null,
"e": 824,
"s": 653,
"text": "You can try any SQL statement, and play with the Database as much as you like. The Database can be restored at any time, simply by clicking the \"Restore Database\" button."
},
{
"code": null,
"e": 924,
"s": 824,
"text": "WebSQL stores a Database locally, on the user's computer. Each user gets their own Database object."
},
{
"code": null,
"e": 974,
"s": 924,
"text": "WebSQL is supported in Chrome, Safari, and Opera."
}
]
|
PostgreSQL- LOWER function - GeeksforGeeks | 08 Oct, 2021
In PostgreSQL, the LOWER function is used to convert a string, an expression, or values in a column to lower case.
Syntax: LOWER(string or value or expression)
Let’s analyze the above syntax:
The LOWER function takes a value whose characters are fully or partially upper case and converts them to lower case, returning a result of the same type.
If the supplied argument is string-convertible, one can make use of the CAST function which converts a non-string value to a string.
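For instance (an illustrative statement, not from the original page), a non-string value can be cast to text before lowering:

SELECT LOWER(CAST(101 AS TEXT) || '-GEEKS');
-- returns: 101-geeks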
Example 1:
The below statement uses the LOWER function to get the lower-cased titles of the films from the Film table of the sample database, i.e., dvdrental:
SELECT LOWER(title) from film;
Output:
Example 2:
The below statement converts an upper case string to lower case:
SELECT LOWER('GEEKSFORGEEKS');
Output:
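geeksforgeeks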
[
{
"code": null,
"e": 23747,
"s": 23719,
"text": "\n08 Oct, 2021"
},
{
"code": null,
"e": 23862,
"s": 23747,
"text": "In PostgreSQL, the LOWER function is used to convert a string, an expression, or values in a column to lower case."
},
{
"code": null,
"e": 23907,
"s": 23862,
"text": "Syntax: LOWER(string or value or expression)"
},
{
"code": null,
"e": 23939,
"s": 23907,
"text": "Let’s analyze the above syntax:"
},
{
"code": null,
"e": 24092,
"s": 23939,
"text": "The LOWER function takes in value with either all uppercase or partial uppercase values or characters and convert them into lower case of the same type."
},
{
"code": null,
"e": 24225,
"s": 24092,
"text": "If the supplied argument is string-convertible, one can make use of the CAST function which converts a non-string value to a string."
},
{
"code": null,
"e": 24236,
"s": 24225,
"text": "Example 1:"
},
{
"code": null,
"e": 24370,
"s": 24236,
"text": "The below statement uses LOWER function to get the full names of the films from the Film table of the sample database, ie, dvdrental:"
},
{
"code": null,
"e": 24401,
"s": 24370,
"text": "SELECT LOWER(title) from film;"
},
{
"code": null,
"e": 24409,
"s": 24401,
"text": "Output:"
},
{
"code": null,
"e": 24420,
"s": 24409,
"text": "Example 2:"
},
{
"code": null,
"e": 24485,
"s": 24420,
"text": "The below statement converts an upper case string to lower case:"
},
{
"code": null,
"e": 24516,
"s": 24485,
"text": "SELECT LOWER('GEEKSFORGEEKS');"
},
{
"code": null,
"e": 24524,
"s": 24516,
"text": "Output:"
}
]
|
C Program - Print the sum of digits of given number - onlinetutorialspoint |
C Program to print the sum of digits of a given number.
#include<stdio.h>
int main(void)
{
int n,sum=0,rem;
printf("Enter a number to Calculate Sum: ");
scanf("%d",&n);
while(n>0)
{
rem = n%10;
sum+=rem;
n/=10;
}
printf("Sum of Digits=%d\n",sum);
return 0;
}
Output:
Enter a number to Calculate Sum: 56489
Sum of Digits=32
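To trace the loop for 56489 (a quick dry run): rem = n%10 peels off the digits 9, 8, 4, 6, 5 in turn while n/=10 drops them, so sum grows 9 → 17 → 21 → 27 → 32.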
Happy Learning 🙂
[
{
"code": null,
"e": 454,
"s": 398,
"text": "C Program to print the sum of digits of a given number."
},
{
"code": null,
"e": 739,
"s": 454,
"text": "#include<stdio.h> \nint main(void) \n{ \n int n,sum=0,rem; \n printf(\"Enter a number to Calculate Sum: \"); \n scanf(\"%d\",&n); \n while(n>0) \n { \n rem = n%10; \n sum+=rem; \n n/=10; \n } \n printf(\"Sum of Digits=%d\\n\",sum); \n return 0; \n}"
},
{
"code": null,
"e": 747,
"s": 739,
"text": "Output:"
},
{
"code": null,
"e": 803,
"s": 747,
"text": "Enter a number to Calculate Sum: 56489\nSum of Digits=32"
},
{
"code": null,
"e": 820,
"s": 803,
"text": "Happy Learning 🙂"
}
]
|
Symfony - Logging | Logging is very important for a web application. Web applications are used by hundreds to thousands of users at a time. To get a sneak preview of what is happening inside a web application, logging should be enabled. Without logging, the developer will not be able to find the status of the application. Let us consider that an end customer reports an issue or a project stakeholder reports a performance issue; then the first tool for the developer is logging. By checking the log information, one can get some idea about the possible reason for the issue.
Symfony provides an excellent logging feature by integrating the Monolog logging framework. Monolog is a de-facto standard for logging in the PHP environment. Logging is enabled in every Symfony web application and it is provided as a service. Simply get the logger object using the base controller as follows.
$logger = $this->get('logger');
Once the logger object is fetched, we can log information, warning, and error using it.
$logger->info('Hi, It is just a information. Nothing to worry.');
$logger->warn('Hi, Something is fishy. Please check it.');
$logger->error('Hi, Some error occured. Check it now.');
$logger->critical('Hi, Something catastrophic occured. Hurry up!');
Symfony web application configuration file app/config/config.yml has a separate section for the logger framework. It can be used to update the working of the logger framework.
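A minimal sketch of that section (illustrative handler name, path, and level; adjust them to your setup):

monolog:
   handlers:
      main:
         type: stream
         path: '%kernel.logs_dir%/%kernel.environment%.log'
         level: debug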
[
{
"code": null,
"e": 2749,
"s": 2203,
"text": "Logging is very important for a web application. Web applications are used by hundreds to thousands of users at a time. To get sneak preview of happenings around a web application, Logging should be enabled. Without logging, the developer will not be able to find the status of the application. Let us consider that an end customer reports an issue or a project stackholder reports performance issue, then the first tool for the developer is Logging. By checking the log information, one can get some idea about the possible reason of the issue."
},
{
"code": null,
"e": 3048,
"s": 2749,
"text": "Symfony provides an excellent logging feature by integrating Monolog logging framework. Monolog is a de-facto standard for logging in PHP environment. Logging is enabled in every Symfony web application and it is provided as a Service. Simply get the logger object using base controller as follows."
},
{
"code": null,
"e": 3082,
"s": 3048,
"text": "$logger = $this->get('logger'); \n"
},
{
"code": null,
"e": 3170,
"s": 3082,
"text": "Once the logger object is fetched, we can log information, warning, and error using it."
},
{
"code": null,
"e": 3424,
"s": 3170,
"text": "$logger->info('Hi, It is just a information. Nothing to worry.'); \n$logger->warn('Hi, Something is fishy. Please check it.'); \n$logger->error('Hi, Some error occured. Check it now.'); \n$logger->critical('Hi, Something catastrophic occured. Hurry up!');\n"
},
{
"code": null,
"e": 3600,
"s": 3424,
"text": "Symfony web application configuration file app/config/config.yml has a separate section for the logger framework. It can be used to update the working of the logger framework."
}
]
|
gRPC - Client Streaming RPC | Let us now see how client streaming works in gRPC communication. In this case, the client will search for and add books to the cart. Once the client is done adding all the books, the server will provide the checkout cart value to the client.
First let us define the bookstore.proto file in common_proto_files −
syntax = "proto3";
option java_package = "com.tp.bookstore";
service BookStore {
rpc totalCartValue (stream Book) returns (Cart) {}
}
message Book {
string name = 1;
string author = 2;
int32 price = 3;
}
message Cart {
int32 books = 1;
int32 price = 2;
}
Here, the following block represents the name of the service "BookStore" and the function name "totalCartValue" which can be called. The "totalCartValue" function takes in the input of type "Book" which is a stream. And the function returns an object of type "Cart". So, effectively, we let the client add books in a streaming fashion and once the client is done, the server provides the total cart value to the client.
service BookStore {
rpc totalCartValue (stream Book) returns (Cart) {}
}
Now let us look at these types.
message Book {
string name = 1;
string author = 2;
int32 price = 3;
}
The client would send in the "Book" it wants to buy. It may not be the complete book info; it can simply be the title of the book.
message Cart {
int32 books = 1;
int32 price = 2;
}
The server, on getting the list of books, would return the "Cart" object which is nothing but the total number of books the client has purchased and the total price.
Note that we already had the Maven setup done for auto-generating our class files as well as our RPC code. So, now we can simply compile our project −
mvn clean install
This should auto-generate the source code required for us to use gRPC. The source code would be placed under −
Protobuf class code: target/generated-sources/protobuf/java/com.tp.bookstore
Protobuf gRPC code: target/generated-sources/protobuf/grpc-java/com.tp.bookstore
Now that we have defined the proto file which contains the function definition, let us setup a server which can serve call these functions.
Let us write our server code to serve the above function and save it in com.tp.bookstore.BookeStoreServerClientStreaming.java −
package com.tp.bookstore;
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.concurrent.TimeUnit;
import java.util.logging.Logger;
import java.util.stream.Collectors;
import com.tp.bookstore.BookStoreOuterClass.Book;
import com.tp.bookstore.BookStoreOuterClass.BookSearch;
import com.tp.bookstore.BookStoreOuterClass.Cart;
public class BookeStoreServerClientStreaming {
   private static final Logger logger = Logger.getLogger(BookeStoreServerClientStreaming.class.getName());
static Map<String, Book> bookMap = new HashMap<>();
static {
bookMap.put("Great Gatsby", Book.newBuilder().setName("Great Gatsby")
.setAuthor("Scott Fitzgerald")
.setPrice(300).build());
bookMap.put("To Kill MockingBird", Book.newBuilder().setName("To Kill MockingBird")
.setAuthor("Harper Lee")
.setPrice(400).build());
bookMap.put("Passage to India", Book.newBuilder().setName("Passage to India")
.setAuthor("E.M.Forster")
.setPrice(500).build());
bookMap.put("The Side of Paradise", Book.newBuilder().setName("The Side of Paradise")
.setAuthor("Scott Fitzgerald")
.setPrice(600).build());
bookMap.put("Go Set a Watchman", Book.newBuilder().setName("Go Set a Watchman")
.setAuthor("Harper Lee")
.setPrice(700).build());
}
private Server server;
private void start() throws IOException {
int port = 50051;
server = ServerBuilder.forPort(port)
.addService(new BookStoreImpl()).build().start();
logger.info("Server started, listening on " + port);
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
System.err.println("Shutting down gRPC server");
try {
server.shutdown().awaitTermination(30, TimeUnit.SECONDS);
} catch (InterruptedException e) {
e.printStackTrace(System.err);
}
}
});
}
public static void main(String[] args) throws IOException, InterruptedException {
final BookeStoreServerClientStreaming greetServer = new BookeStoreServerClientStreaming();
greetServer.start();
greetServer.server.awaitTermination();
}
static class BookStoreImpl extends BookStoreGrpc.BookStoreImplBase {
@Override
public StreamObserver<Book> totalCartValue(StreamObserver<Cart> responseObserver) {
return new StreamObserver<Book>() {
ArrayList<Book> bookCart = new ArrayList<Book>();
@Override
            public void onNext(Book book) {
logger.info("Searching for book with title starting with: " + book.getName());
for (Entry<String, Book> bookEntry : bookMap.entrySet()) {
if(bookEntry.getValue().getName().startsWith(book.getName())){
logger.info("Found book, adding to cart:....");
bookCart.add(bookEntry.getValue());
}
}
}
@Override
public void onError(Throwable t) {
logger.info("Error while reading book stream: " + t);
}
@Override
public void onCompleted() {
int cartValue = 0;
for (Book book : bookCart) {
cartValue += book.getPrice();
}
responseObserver.onNext(Cart.newBuilder()
.setPrice(cartValue)
.setBooks(bookCart.size()).build());
responseObserver.onCompleted();
}
};
}
}
The above code starts a gRPC server at a specified port and serves the functions and services which we had written in our proto file. Let us walk through the above code −
Starting from the main method, we create a gRPC server at a specified port.
But before starting the server, we assign the server the service which we want to run, i.e., in our case, the BookStore service.
For this purpose, we need to pass the service instance to the server, so we go ahead and create a service instance, i.e., in our case, the BookStoreImpl
The service instance needs to provide an implementation of the method/function which is present in the .proto file, i.e., in our case, the totalCartValue method.
Now, given that this is the case of client streaming, the server will get a list of Book (defined in the proto file) as the client adds them. The server thus returns a custom stream observer. This stream observer implements what happens when a new Book is found and what happens when the stream is closed.
The onNext() method would be called by the gRPC framework when the client adds a Book. At this point, the server adds that to the cart. In case of streaming, the server does not wait for all the books to be available.
When the client is done with the addition of Books, the stream observer's onCompleted() method is called. This method implements what the server wants to send when the client is done adding Books, i.e., it returns the Cart object to the client.
Finally, we also have a shutdown hook to ensure clean shutting down of the server when we are done executing our code.
Now that we have written the code for the server, let us setup a client which can call these functions.
Let us write our client code to call the above function and save it in com.tp.bookstore.BookStoreClientStreamingClient.java −
package com.tp.bookstore;
import io.grpc.Channel;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.StatusRuntimeException;
import io.grpc.stub.StreamObserver;
import java.util.Iterator;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;
import com.tp.bookstore.BookStoreGrpc.BookStoreFutureStub;
import com.tp.bookstore.BookStoreGrpc.BookStoreStub;
import com.tp.bookstore.BookStoreOuterClass.Book;
import com.tp.bookstore.BookStoreOuterClass.BookSearch;
import com.tp.bookstore.BookStoreOuterClass.Cart;
public class BookStoreClientStreamingClient {
   private static final Logger logger = Logger.getLogger(BookStoreClientStreamingClient.class.getName());
private final BookStoreStub stub;
private boolean serverResponseCompleted = false;
StreamObserver<Book> streamClientSender;
public BookStoreClientStreamingClient(Channel channel) {
stub = BookStoreGrpc.newStub(channel);
}
public StreamObserver<Cart> getServerResponseObserver(){
StreamObserver<Cart> observer = new StreamObserver<Cart>(){
@Override
public void onNext(Cart cart) {
logger.info("Order summary:" + "\nTotal number of Books:" + cart.getBooks() +
"\nTotal Order Value:" + cart.getPrice());
}
@Override
public void onCompleted() {
//logger.info("Server: Done reading orderreading cart");
serverResponseCompleted = true;
}
};
return observer;
}
public void addBook(String book) {
logger.info("Adding book with title starting with: " + book);
Book request = Book.newBuilder().setName(book).build();
if(streamClientSender == null) {
streamClientSender = stub.totalCartValue(getServerResponseObserver());
}
try {
streamClientSender.onNext(request);
}
catch (StatusRuntimeException e) {
logger.log(Level.WARNING, "RPC failed: {0}", e.getStatus());
}
}
public void completeOrder() {
logger.info("Done, waiting for server to create order summary...");
if(streamClientSender != null);
streamClientSender.onCompleted();
}
public static void main(String[] args) throws Exception {
String serverAddress = "localhost:50051";
ManagedChannel channel = ManagedChannelBuilder.forTarget(serverAddress)
.usePlaintext()
.build();
try {
BookStoreClientStreamingClient client = new BookStoreClientStreamingClient(channel);
String bookName = "";
while(true) {
System.out.println("Type book name to be added to the cart....");
bookName = System.console().readLine();
if(bookName.equals("EXIT")) {
client.completeOrder();
break;
}
client.addBook(bookName);
}
while(client.serverResponseCompleted == false) {
Thread.sleep(2000);
}
} finally {
channel.shutdownNow().awaitTermination(5, TimeUnit.SECONDS);
}
}
}
The above code connects to the gRPC server and streams the books to be added to the cart, using the service we defined in our proto file. Let us walk through the above code −
Starting from the main method, we set up the server address and read the titles of the books to be added from the console in a loop; typing "EXIT" completes the order.
We set up a Channel for gRPC communication with our server.
Next, we create a non-blocking stub using the channel we created. This is where we are choosing the service "BookStore" whose functions we plan to call. (A short note on why a non-blocking stub is needed follows this walkthrough.)
Then, we simply create the expected input defined in the .proto file, i.e., in our case, Book, and we add the title that we want the server to add.
But given this is the case of client streaming, we first create a stream observer for the server. This server stream observer lists the behavior of what needs to be done when the server responds, i.e., onNext() and onCompleted().
And using the stub, we also get the client stream observer. We use this stream observer for sending the data, i.e., each Book to be added to the cart.
And once our order is complete, we ensure that the client stream observer is closed. This tells the server to calculate the Cart value and provide that as the output.
Finally, we close the channel to avoid any resource leak.
So, that is our client code.
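One note on the stub choice mentioned above: client-streaming calls can only be issued through the asynchronous stub. As a minimal illustration (assuming the BookStoreGrpc class generated from our proto file):

// Blocking stub: suitable for unary and server-streaming calls only.
BookStoreGrpc.BookStoreBlockingStub blockingStub = BookStoreGrpc.newBlockingStub(channel);

// Async stub: required for client-streaming calls such as totalCartValue.
BookStoreGrpc.BookStoreStub asyncStub = BookStoreGrpc.newStub(channel);

This is why the client above uses BookStoreGrpc.newStub(channel) rather than a blocking stub.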
To sum up, what we want to do is the following −
Start the gRPC server.
The Client adds a stream of books by notifying them to the server.
The Server searches each book in its store and adds it to the cart.
When the client is done ordering, the Server responds with the total cart value of the client.
Now that we have defined our proto file and written our server and client code, let us proceed to execute this code and see things in action.
For running the code, fire up two shells. Start the server on the first shell by executing the following command −
java -cp .\target\grpc-point-1.0.jar
com.tp.bookstore.BookeStoreServerClientStreaming
We would see the following output −
Jul 03, 2021 10:37:21 PM
com.tp.bookstore.BookeStoreServerClientStreaming start
INFO: Server started, listening on 50051
The above output means the server has started.
Now, let us start the client.
java -cp .\target\grpc-point-1.0.jar
com.tp.bookstore.BookStoreClientStreamingClient
Let us add a few books to our client.
Type book name to be added to the cart....
Gr
Jul 24, 2021 5:53:07 PM
com.tp.bookstore.BookStoreClientStreamingClient addBook
INFO: Adding book with title starting with: Gr
Type book name to be added to the cart....
Pa
Jul 24, 2021 5:53:20 PM
com.tp.bookstore.BookStoreClientStreamingClient addBook
INFO: Adding book with title starting with: Pa
Type book name to be added to the cart....
Once we have added the books and we input "EXIT", the server then calculates the cart value and here is the output we get −
EXIT
Jul 24, 2021 5:53:33 PM
com.tp.bookstore.BookStoreClientStreamingClient completeOrder
INFO: Done, waiting for server to create order summary...
Jul 24, 2021 5:53:33 PM
com.tp.bookstore.BookStoreClientStreamingClient$1 onNext
INFO: Order summary:
Total number of Books: 2
Total Order Value: 800
So, as we can see, the client was able to add books. And once all the books were added, the server responds with the total number of books and the total price.
Spring Batch - XML to MySQL

In this chapter, we will create a Spring Batch application which uses an XML Reader and a MySQL Writer.
Reader − The reader we are using in the application is StaxEventItemReader to read data from XML documents.
Following is the input XML document we are using in this application. This document holds data records which specify details like tutorial id, tutorial author, tutorial title, submission date, tutorial icon, and tutorial description.
<?xml version="1.0" encoding="UTF-8"?>
<tutorials>
<tutorial>
<tutorial_id>1001</tutorial_id>
<tutorial_author>Sanjay</tutorial_author>
<tutorial_title>Learn Java</tutorial_title>
<submission_date>06-05-2007</submission_date>
<tutorial_icon>https://www.tutorialspoint.com/java/images/java-minilogo.jpg</tutorial_icon>
<tutorial_description>Java is a high-level programming language originally
developed by Sun Microsystems and released in 1995.
Java runs on a variety of platforms.
This tutorial gives a complete understanding of Java.</tutorial_description>
</tutorial>
<tutorial>
<tutorial_id>1002</tutorial_id>
<tutorial_author>Abdul S</tutorial_author>
<tutorial_title>Learn MySQL</tutorial_title>
<submission_date>19-04-2007</submission_date>
<tutorial_icon>https://www.tutorialspoint.com/mysql/images/mysql-minilogo.jpg</tutorial_icon>
<tutorial_description>MySQL is the most popular
Open Source Relational SQL database management system.
MySQL is one of the best RDBMS being used for developing web-based software applications.
This tutorial will give you quick start with MySQL
and make you comfortable with MySQL programming.</tutorial_description>
</tutorial>
<tutorial>
<tutorial_id>1003</tutorial_id>
<tutorial_author>Krishna Kasyap</tutorial_author>
<tutorial_title>Learn JavaFX</tutorial_title>
<submission_date>06-07-2017</submission_date>
<tutorial_icon>https://www.tutorialspoint.com/javafx/images/javafx-minilogo.jpg</tutorial_icon>
<tutorial_description>JavaFX is a Java library used to build Rich Internet Applications.
The applications developed using JavaFX can run on various devices
such as Desktop Computers, Mobile Phones, TVs, Tablets, etc.
This tutorial, discusses all the necessary elements of JavaFX that are required
to develop effective Rich Internet Applications</tutorial_description>
</tutorial>
</tutorials>
Writer − The writer we are using in the application is JdbcBatchItemWriter to write the data to MySQL database. Assume we have created a table in MySQL inside a database called "details".
CREATE TABLE details.TUTORIALS(
tutorial_id int(10) NOT NULL,
tutorial_author VARCHAR(20),
tutorial_title VARCHAR(50),
submission_date VARCHAR(20),
tutorial_icon VARCHAR(200),
tutorial_description VARCHAR(1000)
);
Processor − The processor we are using in the application is a custom processor which writes the data of each record onto a PDF document.
In a batch process, if "n" records or data elements are read, then for each record the job reads the data, processes it, and writes it out via the Writer. To process the data, it relies on the processor passed to it. In this case, the custom processor class loads a particular PDF document, creates a new page, and writes the data item onto the PDF in a tabular format. The sketch below illustrates this chunk-oriented flow.
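The following is a minimal, self-contained sketch of that flow. It is illustrative only — the Reader, Processor, and Writer interfaces here are simplified stand-ins, not the actual Spring Batch interfaces:

import java.util.ArrayList;
import java.util.List;

public class ChunkFlowSketch {
   interface Reader<T> { T read(); }                                  // stand-in for ItemReader
   interface Processor<I, O> { O process(I item) throws Exception; }  // stand-in for ItemProcessor
   interface Writer<T> { void write(List<? extends T> items); }      // stand-in for ItemWriter

   static <I, O> void run(Reader<I> reader, Processor<I, O> processor,
         Writer<O> writer, int commitInterval) throws Exception {
      List<O> chunk = new ArrayList<>();
      I item;
      while ((item = reader.read()) != null) {    // read items one at a time
         chunk.add(processor.process(item));      // e.g. CustomItemProcessor writes the PDF page
         if (chunk.size() == commitInterval) {
            writer.write(chunk);                  // e.g. JdbcBatchItemWriter inserts the rows
            chunk.clear();                        // one transaction per chunk
         }
      }
      if (!chunk.isEmpty()) {
         writer.write(chunk);                     // flush the final partial chunk
      }
   }
}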
Finally, if you execute this application, it reads all the data items from the XML document, stores them in the MySQL database, and prints them in the given PDF document on individual pages.
Following is the configuration file of our sample Spring Batch application. In this file, we will define the Job and the steps. In addition to these, we also define the beans for ItemReader, ItemProcessor, and ItemWriter. (Here, we associate them with their respective classes and pass the values for the required properties to configure them.)
<beans xmlns = "http://www.springframework.org/schema/beans"
xmlns:batch = "http://www.springframework.org/schema/batch"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xmlns:util = "http://www.springframework.org/schema/util"
xsi:schemaLocation = "http://www.springframework.org/schema/batch
http://www.springframework.org/schema/batch/spring-batch-2.2.xsd
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util-3.0.xsd ">
<import resource = "../jobs/context.xml" />
<bean id = "itemProcessor" class = "CustomItemProcessor" />
<batch:job id = "helloWorldJob">
<batch:step id = "step1">
<batch:tasklet>
<batch:chunk reader = "xmlItemReader" writer = "mysqlItemWriter" processor = "itemProcessor" commit-interval = "10">
</batch:chunk>
</batch:tasklet>
</batch:step>
</batch:job>
<bean id = "xmlItemReader"
class = "org.springframework.batch.item.xml.StaxEventItemReader">
<property name = "fragmentRootElementName" value = "tutorial" />
<property name = "resource" value = "classpath:resources/tutorial.xml" />
<property name = "unmarshaller" ref = "customUnMarshaller" />
</bean>
<bean id = "customUnMarshaller" class = "org.springframework.oxm.xstream.XStreamMarshaller">
<property name = "aliases">
<util:map id = "aliases">
<entry key = "tutorial" value = "Tutorial" />
</util:map>
</property>
</bean>
<bean id = "mysqlItemWriter" class = "org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name = "dataSource" ref = "dataSource" />
<property name = "sql">
<value>
<![CDATA[insert into details.tutorials (tutorial_id, tutorial_author, tutorial_title,
submission_date, tutorial_icon, tutorial_description)
values (:tutorial_id, :tutorial_author, :tutorial_title, :submission_date,
:tutorial_icon, :tutorial_description);]]>
</value>
</property>
<property name = "itemSqlParameterSourceProvider">
<bean class = "org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider" />
</property>
</bean>
</beans>
Following is the context.xml of our Spring Batch application. In this file, we will define the beans like job repository, job launcher, and transaction manager.
<beans xmlns = "http://www.springframework.org/schema/beans"
xmlns:jdbc = "http://www.springframework.org/schema/jdbc"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
http://www.springframework.org/schema/jdbc
http://www.springframework.org/schema/jdbc/spring-jdbc-3.2.xsd">
<!-- stored job-meta in database -->
<bean id = "jobRepository"
class = "org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
<property name = "dataSource" ref = "dataSource" />
<property name = "transactionManager" ref = "transactionManager" />
<property name = "databaseType" value = "mysql" />
</bean>
<bean id = "transactionManager"
class = "org.springframework.batch.support.transaction.ResourcelessTransactionMana ger" />
<bean id = "jobLauncher"
class = "org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name = "jobRepository" ref = "jobRepository" />
</bean>
<!-- connect to MySQL database -->
<bean id = "dataSource"
class = "org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name = "driverClassName" value = "com.mysql.jdbc.Driver" />
<property name = "url" value = "jdbc:mysql://localhost:3306/details" />
<property name = "username" value = "myuser" />
<property name = "password" value = "password" />
</bean>
<!-- create job-meta tables automatically -->
<jdbc:initialize-database data-source = "dataSource">
<jdbc:script location = "org/springframework/batch/core/schema-drop-mysql.sql"/>
<jdbc:script location = "org/springframework/batch/core/schema-mysql.sql"/>
</jdbc:initialize-database>
</beans>
Following is the processor class. In this class, we write the processing code of the application. Here, we load a PDF document, create a new page, draw a table, and insert the following values for each record into the table: tutorial id, tutorial title, author, and date of submission.
import java.io.File;
import java.io.IOException;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDPageContentStream;
import org.apache.pdfbox.pdmodel.font.PDType1Font;
import org.springframework.batch.item.ItemProcessor;
public class CustomItemProcessor implements ItemProcessor<Tutorial, Tutorial> {
public static void drawTable(PDPage page, PDPageContentStream contentStream,
float y, float margin, String[][] content) throws IOException {
final int rows = content.length;
final int cols = content[0].length;
final float rowHeight = 50;
final float tableWidth = page.getMediaBox().getWidth()-(2*margin);
final float tableHeight = rowHeight * rows;
final float colWidth = tableWidth/(float)cols;
final float cellMargin=5f;
// draw the rows
float nexty = y ;
for (int i = 0; i <= rows; i++) {
contentStream.drawLine(margin,nexty,margin+tableWidth,nexty);
nexty-= rowHeight;
}
//draw the columns
float nextx = margin;
for (int i = 0; i <= cols; i++) {
contentStream.drawLine(nextx,y,nextx,y-tableHeight);
nextx += colWidth;
}
// now add the text
contentStream.setFont(PDType1Font.HELVETICA_BOLD,12);
float textx = margin+cellMargin;
float texty = y-15;
for(int i = 0; i < content.length; i++){
for(int j = 0 ; j < content[i].length; j++){
String text = content[i][j];
contentStream.beginText();
contentStream.moveTextPositionByAmount(textx,texty);
contentStream.drawString(text);
contentStream.endText();
textx += colWidth;
}
texty-=rowHeight;
textx = margin+cellMargin;
}
}
@Override
public Tutorial process(Tutorial item) throws Exception {
System.out.println("Processing..." + item);
// Creating PDF document object
PDDocument doc = PDDocument.load(new File("C:/Examples/test.pdf"));
// Creating a blank page
PDPage page = new PDPage();
doc.addPage( page );
PDPageContentStream contentStream = new PDPageContentStream(doc, page);
String[][] content = {{"Id",""+item.getTutorial_id()},
{"Title", item.getTutorial_title()},
{"Authour", item.getTutorial_author()},
{"Submission Date", item.getSubmission_date()}} ;
drawTable(page, contentStream, 700, 100, content);
contentStream.close();
doc.save("C:/Examples/test.pdf" );
System.out.println("Hello");
return item;
}
}
Following is the TutorialFieldSetMapper class which sets the data to the Tutorial class.
import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;
public class TutorialFieldSetMapper implements FieldSetMapper<Tutorial> {
@Override
public Tutorial mapFieldSet(FieldSet fieldSet) throws BindException {
// instantiating the Tutorial class
Tutorial tutorial = new Tutorial();
// Setting the fields from XML
tutorial.setTutorial_id(fieldSet.readInt(0));
tutorial.setTutorial_author(fieldSet.readString(1));
tutorial.setTutorial_title(fieldSet.readString(2));
tutorial.setSubmission_date(fieldSet.readString(3));
tutorial.setTutorial_icon(fieldSet.readString(4));
tutorial.setTutorial_description(fieldSet.readString(5));
return tutorial;
}
}
Following is the Tutorial class. It is a simple class with setter and getter methods.
public class Tutorial {
private int tutorial_id;
private String tutorial_author;
private String tutorial_title;
private String submission_date;
private String tutorial_icon;
private String tutorial_description;
@Override
public String toString() {
return " [id=" + tutorial_id + ", author=" + tutorial_author
+ ", title=" + tutorial_title + ", date=" + submission_date + ", icon ="
+tutorial_icon +", description = "+tutorial_description+"]";
}
public int getTutorial_id() {
return tutorial_id;
}
public void setTutorial_id(int tutorial_id) {
this.tutorial_id = tutorial_id;
}
public String getTutorial_author() {
return tutorial_author;
}
public void setTutorial_author(String tutorial_author) {
this.tutorial_author = tutorial_author;
}
public String getTutorial_title() {
return tutorial_title;
}
public void setTutorial_title(String tutorial_title) {
this.tutorial_title = tutorial_title;
}
public String getSubmission_date() {
return submission_date;
}
public void setSubmission_date(String submission_date) {
this.submission_date = submission_date;
}
public String getTutorial_icon() {
return tutorial_icon;
}
public void setTutorial_icon(String tutorial_icon) {
this.tutorial_icon = tutorial_icon;
}
public String getTutorial_description() {
return tutorial_description;
}
public void setTutorial_description(String tutorial_description) {
this.tutorial_description = tutorial_description;
}
}
Following is the code which launches the batch process. In this class, we will launch the Batch Application by running the JobLauncher.
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class App {
public static void main(String[] args) throws Exception {
String[] springConfig = { "jobs/job_hello_world.xml" };
// Creating the application context object
ApplicationContext context = new ClassPathXmlApplicationContext(springConfig);
// Creating the job launcher
JobLauncher jobLauncher = (JobLauncher) context.getBean("jobLauncher");
// Creating the job
Job job = (Job) context.getBean("helloWorldJob");
// Executing the JOB
JobExecution execution = jobLauncher.run(job, new JobParameters());
System.out.println("Exit Status : " + execution.getStatus());
}
}
On executing this application, it will produce the following output.
May 05, 2017 4:39:22 PM org.springframework.context.support.ClassPathXmlApplicationContext
prepareRefresh
INFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@306a30c7:
startup date [Fri May 05 16:39:22 IST 2017]; root of context hierarchy
May 05, 2017 4:39:23 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
May 05, 2017 4:39:32 PM org.springframework.batch.core.job.SimpleStepHandler handleStep
INFO: Executing step: [step1]
Processing... [id=1001, author=Sanjay, title=Learn Java, date=06-05-2007,
icon =https://www.tutorialspoint.com/java/images/java-mini-logo.jpg,
description = Java is a high-level programming language originally developed by Sun Microsystems
and released in 1995. Java runs on a variety of platforms.
This tutorial gives a complete understanding of Java.]
Hello
Processing... [id=1002, author=Abdul S, title=Learn MySQL, date=19-04-2007,
icon =https://www.tutorialspoint.com/mysql/images/mysql-mini-logo.jpg,
description = MySQL is the most popular Open Source Relational SQL database management system.
MySQL is one of the best RDBMS being used for developing web-based software applications.
This tutorial will give you quick start with MySQL and make you comfortable with MySQL programming.]
Hello
Processing... [id=1003, author=Krishna Kasyap, title=Learn JavaFX, date=06-07-2017,
icon =https://www.tutorialspoint.com/javafx/images/javafx-mini-logo.jpg,
description = JavaFX is a Java library used to build Rich Internet Applications.
The applications developed using JavaFX can run on various devices
such as Desktop Computers, Mobile Phones, TVs, Tablets, etc.
This tutorial, discusses all the necessary elements of JavaFX
that are required to develop effective Rich Internet Applications]
Hello
May 05, 2017 4:39:36 PM org.springframework.batch.core.launch.support.SimpleJobLauncher run
INFO: Job: [FlowJob: [name=helloWorldJob]] completed with the following parameters: [{}]
and the following status: [COMPLETED]
Exit Status : COMPLETED
If you verify the details.TUTORIALS table in the database, it will show you the records inserted by the job.
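For instance, a query such as the following (against the details.TUTORIALS table created earlier) lists the inserted records:

SELECT tutorial_id, tutorial_author, tutorial_title, submission_date
FROM details.TUTORIALS;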
This will also generate a PDF, with the records of each item printed on a separate page.
102 Lectures
8 hours
Karthikeya T
39 Lectures
5 hours
Chaand Sheikh
73 Lectures
5.5 hours
Senol Atac
62 Lectures
4.5 hours
Senol Atac
67 Lectures
4.5 hours
Senol Atac
69 Lectures
5 hours
Senol Atac
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2032,
"s": 1928,
"text": "In this chapter, we will create a Spring Batch application which uses an XML Reader and a MySQL Writer."
},
{
"code": null,
"e": 2140,
"s": 2032,
"text": "Reader − The reader we are using in the application is StaxEventItemReader to read data from XML documents."
},
{
"code": null,
"e": 2374,
"s": 2140,
"text": "Following is the input XML document we are using in this application. This document holds data records which specify details like tutorial id, tutorial author, tutorial title, submission date, tutorial icon, and tutorial description."
},
{
"code": null,
"e": 4501,
"s": 2374,
"text": "<?xml version=\"1.0\" encoding=\"UTF-8\"?> \n<tutorials> \n <tutorial> \n <tutorial_id>1001</tutorial_id> \n <tutorial_author>Sanjay</tutorial_author> \n <tutorial_title>Learn Java</tutorial_title> \n <submission_date>06-05-2007</submission_date> \n <tutorial_icon>https://www.tutorialspoint.com/java/images/java-minilogo.jpg</tutorial_icon> \n <tutorial_description>Java is a high-level programming language originally \n developed by Sun Microsystems and released in 1995. \n Java runs on a variety of platforms. \n This tutorial gives a complete understanding of Java.');</tutorial_description> \n </tutorial> \n \n <tutorial> \n <tutorial_id>1002</tutorial_id> \n <tutorial_author>Abdul S</tutorial_author> \n <tutorial_title>Learn MySQL</tutorial_title> \n <submission_date>19-04-2007</submission_date> \n <tutorial_icon>https://www.tutorialspoint.com/mysql/images/mysql-minilogo.jpg</tutorial_icon> \n <tutorial_description>MySQL is the most popular \n Open Source Relational SQL database management system. \n MySQL is one of the best RDBMS being used for developing web-based software applications. \n This tutorial will give you quick start with MySQL \n and make you comfortable with MySQL programming.</tutorial_description> \n </tutorial> \n \n <tutorial>\n <tutorial_id>1003</tutorial_id> \n <tutorial_author>Krishna Kasyap</tutorial_author> \n <tutorial_title>Learn JavaFX</tutorial_title> \n <submission_date>06-07-2017</submission_date> \n <tutorial_icon>https://www.tutorialspoint.com/javafx/images/javafx-minilogo.jpg</tutorial_icon> \n <tutorial_description>JavaFX is a Java library used to build Rich Internet Applications. \n The applications developed using JavaFX can run on various devices \n such as Desktop Computers, Mobile Phones, TVs, Tablets, etc. \n This tutorial, discusses all the necessary elements of JavaFX that are required\n to develop effective Rich Internet Applications</tutorial_description> \n </tutorial> \n</tutorials>"
},
{
"code": null,
"e": 4690,
"s": 4501,
"text": "Writer − The writer we are using in the application is JdbcBatchItemWriter to write the data to MySQL database. Assume we have created a table in MySQL inside a database called \"details\". "
},
{
"code": null,
"e": 4929,
"s": 4690,
"text": "CREATE TABLE details.TUTORIALS( \n tutorial_id int(10) NOT NULL, \n tutorial_author VARCHAR(20), \n tutorial_title VARCHAR(50), \n submission_date VARCHAR(20), \n tutorial_icon VARCHAR(200), \n tutorial_description VARCHAR(1000) \n);"
},
{
"code": null,
"e": 5067,
"s": 4929,
"text": "Processor − The processor we are using in the application is a custom processor which writes the data of each record on the PDF document."
},
{
"code": null,
"e": 5449,
"s": 5067,
"text": "In batch process, if \"n\" records or data elements were read, then for each record, it will read the data, process it, and write the data in the Writer. To process the data, it relays on the processor passed. In this case, in the custom processor class, we have written code to load a particular PDF document, create a new page, write the data item onto the PDF in a tabular format."
},
{
"code": null,
"e": 5640,
"s": 5449,
"text": "Finally, if you execute this application, it reads all the data items from the XML document, stores them in the MySQL database, and prints them in the given PDF document in individual pages."
},
{
"code": null,
"e": 5985,
"s": 5640,
"text": "Following is the configuration file of our sample Spring Batch application. In this file, we will define the Job and the steps. In addition to these, we also define the beans for ItemReader, ItemProcessor, and ItemWriter. (Here, we associate them with their respective classes and pass the values for the required properties to configure them.)"
},
{
"code": null,
"e": 8499,
"s": 5985,
"text": "<beans xmlns = \"http://www.springframework.org/schema/beans\" \n xmlns:batch = \"http://www.springframework.org/schema/batch\" \n xmlns:xsi = \"http://www.w3.org/2001/XMLSchema-instance\" \n xmlns:util = \"http://www.springframework.org/schema/util\" \n xsi:schemaLocation = \"http://www.springframework.org/schema/batch \n \n http://www.springframework.org/schema/batch/spring-batch-2.2.xsd \n http://www.springframework.org/schema/beans \n http://www.springframework.org/schema/beans/spring-beans-3.2.xsd \n http://www.springframework.org/schema/util \n http://www.springframework.org/schema/util/spring-util-3.0.xsd \"> \n \n <import resource = \"../jobs/context.xml\" /> \n \n <bean id = \"itemProcessor\" class = \"CustomItemProcessor\" /> \n <batch:job id = \"helloWorldJob\"> \n <batch:step id = \"step1\"> \n <batch:tasklet> \n <batch:chunk reader = \"xmlItemReader\" writer = \"mysqlItemWriter\" processor = \"itemProcessor\">\n </batch:chunk> \n </batch:tasklet> \n </batch:step> \n </batch:job> \n \n <bean id = \"xmlItemReader\" \n class = \"org.springframework.batch.item.xml.StaxEventItemReader\"> \n <property name = \"fragmentRootElementName\" value = \"tutorial\" /> \n <property name = \"resource\" value = \"classpath:resources/tutorial.xml\" /> \n <property name = \"unmarshaller\" ref = \"customUnMarshaller\" /> \n </bean> \n \n <bean id = \"customUnMarshaller\" class = \"org.springframework.oxm.xstream.XStreamMarshaller\">\n <property name = \"aliases\"> \n <util:map id = \"aliases\"> \n <entry key = \"tutorial\" value = \"Tutorial\" /> \n </util:map> \n </property> \n </bean> \n <bean id = \"mysqlItemWriter\" class = \"org.springframework.batch.item.database.JdbcBatchItemWriter\"> \n <property name = \"dataSource\" ref = \"dataSource\" /> \n <property name = \"sql\"> \n <value> \n <![CDATA[insert into details.tutorials (tutorial_id, tutorial_author, tutorial_title, \n submission_date, tutorial_icon, tutorial_description) \n values (:tutorial_id, :tutorial_author, :tutorial_title, :submission_date, \n :tutorial_icon, :tutorial_description);]]>\n </value> \n </property> \n \n <property name = \"itemSqlParameterSourceProvider\"> \n <bean class = \"org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider\" /> \n </property> \n </bean> \n</beans> "
},
{
"code": null,
"e": 8660,
"s": 8499,
"text": "Following is the context.xml of our Spring Batch application. In this file, we will define the beans like job repository, job launcher, and transaction manager."
},
{
"code": null,
"e": 10561,
"s": 8660,
"text": "<beans xmlns = \"http://www.springframework.org/schema/beans\" \n xmlns:jdbc = \"http://www.springframework.org/schema/jdbc\" \n xmlns:xsi = \"http://www.w3.org/2001/XMLSchema-instance\" \n xsi:schemaLocation = \"http://www.springframework.org/schema/beans \n http://www.springframework.org/schema/beans/spring-beans-3.2.xsd \n http://www.springframework.org/schema/jdbc \n http://www.springframework.org/schema/jdbc/spring-jdbc-3.2.xsd\"> \n \n <!-- stored job-meta in database -->\n <bean id = \"jobRepository\" \n class = \"org.springframework.batch.core.repository.support.JobRepositoryFactoryBean\"> \n <property name = \"dataSource\" ref = \"dataSource\" /> \n <property name = \"transactionManager\" ref = \"transactionManager\" /> \n <property name = \"databaseType\" value = \"mysql\" /> \n </bean> \n \n <bean id = \"transactionManager\" \n class = \"org.springframework.batch.support.transaction.ResourcelessTransactionMana ger\" /> \n <bean id = \"jobLauncher\" \n class = \"org.springframework.batch.core.launch.support.SimpleJobLauncher\"> \n <property name = \"jobRepository\" ref = \"jobRepository\" /> \n </bean> \n \n <!-- connect to MySQL database --> \n <bean id = \"dataSource\" \n class = \"org.springframework.jdbc.datasource.DriverManagerDataSource\"> \n <property name = \"driverClassName\" value = \"com.mysql.jdbc.Driver\" /> \n <property name = \"url\" value = \"jdbc:mysql://localhost:3306/details\" /> \n <property name = \"username\" value = \"myuser\" /> \n <property name = \"password\" value = \"password\" /> \n </bean> \n \n <!-- create job-meta tables automatically --> \n <jdbc:initialize-database data-source = \"dataSource\"> \n <jdbc:script location = \"org/springframework/batch/core/schema-drop-mysql.sql\"/> \n <jdbc:script location = \"org/springframework/batch/core/schema-mysql.sql\"/> \n </jdbc:initialize-database> \n</beans> "
},
{
"code": null,
"e": 10859,
"s": 10561,
"text": "Following is the processor class. In this class, we write the code of processing in the application. Here, we are loading a PDF document, creating a new page, creating a table, and inserting the following values for each record: tutorial id, tutorial name, author, date of submission in the table."
},
{
"code": null,
"e": 13657,
"s": 10859,
"text": "import java.io.File; \nimport java.io.IOException; \n\nimport org.apache.pdfbox.pdmodel.PDDocument; \nimport org.apache.pdfbox.pdmodel.PDPage; \nimport org.apache.pdfbox.pdmodel.PDPageContentStream; \nimport org.apache.pdfbox.pdmodel.font.PDType1Font; \nimport org.springframework.batch.item.ItemProcessor; \n\npublic class CustomItemProcessor implements ItemProcessor<Tutorial, Tutorial> { \n \n public static void drawTable(PDPage page, PDPageContentStream contentStream, \n float y, float margin, String[][] content) throws IOException { \n final int rows = content.length; \n final int cols = content[0].length; \n final float rowHeight = 50; \n final float tableWidth = page.getMediaBox().getWidth()-(2*margin); \n final float tableHeight = rowHeight * rows; \n final float colWidth = tableWidth/(float)cols; \n final float cellMargin=5f; \n \n // draw the rows \n float nexty = y ; \n for (int i = 0; i <= rows; i++) { \n contentStream.drawLine(margin,nexty,margin+tableWidth,nexty); \n nexty-= rowHeight; \n } \n \n //draw the columns \n float nextx = margin; \n for (int i = 0; i <= cols; i++) {\n contentStream.drawLine(nextx,y,nextx,y-tableHeight); \n nextx += colWidth; \n } \n \n // now add the text \n contentStream.setFont(PDType1Font.HELVETICA_BOLD,12); \n \n float textx = margin+cellMargin; \n float texty = y-15; \n for(int i = 0; i < content.length; i++){ \n for(int j = 0 ; j < content[i].length; j++){ \n String text = content[i][j]; \n contentStream.beginText(); \n contentStream.moveTextPositionByAmount(textx,texty); \n contentStream.drawString(text); \n contentStream.endText(); \n textx += colWidth; \n } \n \n texty-=rowHeight; \n textx = margin+cellMargin; \n } \n } \n \n @Override \n public Tutorial process(Tutorial item) throws Exception { \n System.out.println(\"Processing...\" + item); \n \n // Creating PDF document object \n PDDocument doc = PDDocument.load(new File(\"C:/Examples/test.pdf\")); \n \n // Creating a blank page \n PDPage page = new PDPage(); \n doc.addPage( page ); \n PDPageContentStream contentStream = new PDPageContentStream(doc, page); \n \n String[][] content = {{\"Id\",\"\"+item.getTutorial_id()},\n {\"Title\", item.getTutorial_title()}, \n {\"Authour\", item.getTutorial_author()}, \n {\"Submission Date\", item.getSubmission_date()}} ; \n drawTable(page, contentStream, 700, 100, content); \n \n contentStream.close(); \n doc.save(\"C:/Examples/test.pdf\" ); \n System.out.println(\"Hello\"); \n return item; \n } \n} "
},
{
"code": null,
"e": 13744,
"s": 13657,
"text": "Following is the ReportFieldSetMapper class which sets the data to the Tutorial class."
},
{
"code": null,
"e": 14563,
"s": 13744,
"text": "import org.springframework.batch.item.file.mapping.FieldSetMapper; \nimport org.springframework.batch.item.file.transform.FieldSet; \nimport org.springframework.validation.BindException; \n\npublic class TutorialFieldSetMapper implements FieldSetMapper<Tutorial> { \n \n @Override \n public Tutorial mapFieldSet(FieldSet fieldSet) throws BindException { \n // instantiating the Tutorial class \n Tutorial tutorial = new Tutorial(); \n \n // Setting the fields from XML \n tutorial.setTutorial_id(fieldSet.readInt(0)); \n tutorial.setTutorial_title(fieldSet.readString(1)); \n tutorial.setTutorial_author(fieldSet.readString(2)); \n tutorial.setTutorial_icon(fieldSet.readString(3)); \n tutorial.setTutorial_description(fieldSet.readString(4)); \n return tutorial; \n } \n} "
},
{
"code": null,
"e": 14649,
"s": 14563,
"text": "Following is the Tutorial class. It is a simple class with setter and getter methods."
},
{
"code": null,
"e": 16360,
"s": 14649,
"text": "public class Tutorial { \n private int tutorial_id; \n private String tutorial_author; \n private String tutorial_title; \n private String submission_date; \n private String tutorial_icon; \n private String tutorial_description; \n \n @Override \n public String toString() { \n return \" [id=\" + tutorial_id + \", author=\" + tutorial_author \n + \", title=\" + tutorial_title + \", date=\" + submission_date + \", icon =\" \n +tutorial_icon +\", description = \"+tutorial_description+\"]\"; \n } \n \n public int getTutorial_id() { \n return tutorial_id; \n } \n \n public void setTutorial_id(int tutorial_id) { \n this.tutorial_id = tutorial_id; \n } \n \n public String getTutorial_author() { \n return tutorial_author; \n } \n \n public void setTutorial_author(String tutorial_author) { \n this.tutorial_author = tutorial_author; \n } \n \n public String getTutorial_title() { \n return tutorial_title; \n } \n \n public void setTutorial_title(String tutorial_title) { \n this.tutorial_title = tutorial_title; \n } \n \n public String getSubmission_date() { \n return submission_date; \n } \n \n public void setSubmission_date(String submission_date) { \n this.submission_date = submission_date; \n } \n \n public String getTutorial_icon() { \n return tutorial_icon; \n } \n \n public void setTutorial_icon(String tutorial_icon) { \n this.tutorial_icon = tutorial_icon; \n } \n \n public String getTutorial_description() { \n return tutorial_description; \n } \n \n public void setTutorial_description(String tutorial_description) { \n this.tutorial_description = tutorial_description; \n } \n}"
},
{
"code": null,
"e": 16495,
"s": 16360,
"text": "Following is the code which launces the batch process. In this class, we will launch the Batch Application by running the JobLauncher."
},
{
"code": null,
"e": 17188,
"s": 16495,
"text": "public class App { \n public static void main(String[] args) throws Exception { \n String[] springConfig = { \"jobs/job_hello_world.xml\" }; \n \n // Creating the application context object \n ApplicationContext context = new ClassPathXmlApplicationContext(springConfig); \n \n // Creating the job launcher \n JobLauncher jobLauncher = (JobLauncher) context.getBean(\"jobLauncher\"); \n \n // Creating the job \n Job job = (Job) context.getBean(\"helloWorldJob\"); \n \n // Executing the JOB \n JobExecution execution = jobLauncher.run(job, new JobParameters()); \n System.out.println(\"Exit Status : \" + execution.getStatus()); \n } \n} "
},
{
"code": null,
"e": 17257,
"s": 17188,
"text": "On executing this application, it will produce the following output."
},
{
"code": null,
"e": 19327,
"s": 17257,
"text": "May 05, 2017 4:39:22 PM org.springframework.context.support.ClassPathXmlApplicationContext \nprepareRefresh \nINFO: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@306a30c7: \nstartup date [Fri May 05 16:39:22 IST 2017]; root of context hierarchy \nMay 05, 2017 4:39:23 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions \nMay 05, 2017 4:39:32 PM org.springframework.batch.core.job.SimpleStepHandler handleStep \nINFO: Executing step: [step1] \nProcessing... [id=1001, author=Sanjay, title=Learn Java, date=06-05-2007, \nicon =https://www.tutorialspoint.com/java/images/java-mini-logo.jpg, \ndescription = Java is a high-level programming language originally developed by Sun Microsystems \nand released in 1995. Java runs on a variety of platforms. \nThis tutorial gives a complete understanding of Java.');] \nHello \nProcessing.. [id=1002, author=Abdul S, title=Learn MySQL, date=19-04-2007, \nicon =https://www.tutorialspoint.com/mysql/images/mysql-mini-logo.jpg, \ndescription = MySQL is the most popular Open Source Relational SQL database management system. \nMySQL is one of the best RDBMS being used for developing web-based software applications. \nThis tutorial will give you quick start with MySQL and make you comfortable with MySQL programming.] \nHello \nProcessing... [id=1003, author=Krishna Kasyap, title=Learn JavaFX, date=06-072017, \nicon =https://www.tutorialspoint.com/javafx/images/javafx-mini-logo.jpg,\ndescription = JavaFX is a Java library used to build Rich Internet Applications. \nThe applications developed using JavaFX can run on various devices \nsuch as Desktop Computers, Mobile Phones, TVs, Tablets, etc. \nThis tutorial, discusses all the necessary elements of JavaFX \nthat are required to develop effective Rich Internet Applications] \nHello \nMay 05, 2017 4:39:36 PM org.springframework.batch.core.launch.support.SimpleJobLauncher run \nINFO: Job: [FlowJob: [name=helloWorldJob]] completed with the following parameters: [{}] \nand the following status: [COMPLETED] \nExit Status : COMPLETED \n"
},
{
"code": null,
"e": 19425,
"s": 19327,
"text": "If you verify the details.tutorial table in the database, it will show you the following output −"
},
{
"code": null,
"e": 19496,
"s": 19425,
"text": "This will generate a PDF with the records on each page as shown below."
}
]
|
Build your first Anomaly Detector in Power BI using PyCaret | by Moez Ali | Towards Data Science | In our last post, Machine Learning in Power BI using PyCaret, we presented a step-by-step tutorial on how PyCaret can be integrated within Power BI, thus allowing analysts and data scientists to add a layer of machine learning to their Dashboards and Reports without any additional license costs.
In this post, we will dive deeper and implement an Anomaly Detector in Power BI using PyCaret. If you haven’t heard about PyCaret before, please read this announcement to learn more.
What is Anomaly Detection? Types of Anomaly Detection?
Train and implement an unsupervised anomaly detector in Power BI.
Analyze results and visualize information in a dashboard.
How to deploy the anomaly detector in Power BI production?
If you have used Python before, it is likely that you already have Anaconda Distribution installed on your computer. If not, click here to download Anaconda Distribution with Python 3.7 or greater.
Before we start using PyCaret’s machine learning capabilities in Power BI we have to create a virtual environment and install pycaret. It’s a three-step process:
✅ Step 1 — Create an anaconda environment
Open Anaconda Prompt from start menu and execute the following code:
conda create --name myenv python=3.7
✅ Step 2 — Install PyCaret
Execute the following code in Anaconda Prompt:
pip install pycaret
Installation may take 15–20 minutes. If you are having issues with installation, please see our GitHub page for known issues and resolutions.
✅Step 3 — Set Python Directory in Power BI
The virtual environment created must be linked with Power BI. This can be done using Global Settings in Power BI Desktop (File → Options → Global → Python scripting). Anaconda Environment by default is installed under:
C:\Users\username\AppData\Local\Continuum\anaconda3\envs\myenv
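To double-check that Power BI has picked up this environment, you can run a tiny Python script step (Power Query Editor → Transform → Run python script) that surfaces the interpreter path as a table. This is just a sanity check, not part of the original setup; any DataFrame defined in the script becomes selectable as output:

# Shows which Python interpreter Power BI is actually using.
import sys
import pandas as pd
dataset = pd.DataFrame({'python_path': [sys.executable]})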
Anomaly Detection is a technique in machine learning used for identifying rare items, events or observations which raise suspicions by differing significantly from the majority of the data.
Typically, the anomalous items will translate to some kind of problem such as bank fraud, a structural defect, medical problems or error. There are three ways to implement an anomaly detector:
(a) Supervised: Used when the data set has labels identifying which transactions are anomaly and which are normal. (this is similar to a supervised classification problem).
(b) Semi-Supervised: The idea behind semi-supervised anomaly detection is to train a model on normal data only (without any anomalies). When the trained model is then used on unseen data points, it can predict whether the new data point is normal or not (based on the distribution of the data in the trained model).
(c) Unsupervised: Exactly as it sounds, unsupervised means no labels and therefore no training and test data set. In unsupervised learning, a model is trained on the complete dataset and assumes that the majority of the instances are normal, while looking for instances that seem to fit least with the remainder. There are several unsupervised anomaly detection algorithms, such as Isolation Forest or One-Class Support Vector Machine. Each has its own method of identifying anomalies in the dataset.
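To make the unsupervised idea concrete, here is a minimal scikit-learn sketch (independent of PyCaret and Power BI) using one of the algorithms mentioned above; the DataFrame and its columns are made up for illustration:

# A tiny unsupervised example: no labels, the model flags what fits least.
import pandas as pd
from sklearn.ensemble import IsolationForest

df = pd.DataFrame({'amount': [12, 15, 11, 14, 900],
                   'items': [1, 2, 1, 2, 40]})
model = IsolationForest(contamination=0.2, random_state=42)
df['label'] = model.fit_predict(df)   # -1 = outlier, 1 = inlier
print(df)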
This tutorial is about implementing unsupervised anomaly detection in Power BI using a Python library called PyCaret. Discussion of the specific details and mathematics behind these algorithms are out-of-scope for this tutorial.
Many companies issue corporate credit cards (also known as purchase cards or p-cards) to employees for effectively managing operational purchasing. Normally there is a process in place for employees to submit those claims electronically. The data collected is typically transactional and likely to include date of transaction, vendor name, type of expense, merchant and amount.
In this tutorial we will use State Employees Credit Card Transactions from 2014–2019 for the Department of Education in the State of Delaware, US. The data is available online on their open data platform.
Disclaimer: This tutorial demonstrates the use of PyCaret in Power BI to build an anomaly detector. The sample dashboard that is built in this tutorial by no means reflects actual anomalies or is meant to identify anomalies.
Now that you have setup the Anaconda Environment, installed PyCaret, understand the basics of Anomaly Detection and have the business context for this tutorial, let’s get started.
The first step is importing the dataset into Power BI Desktop. You can load the data using a web connector. (Power BI Desktop → Get Data → From Web).
Link to csv file: https://raw.githubusercontent.com/pycaret/pycaret/master/datasets/delaware_anomaly.csv
To train an anomaly detector in Power BI we will have to execute a Python script in Power Query Editor (Power Query Editor → Transform → Run python script). Run the following code as a Python script:
from pycaret.anomaly import *
dataset = get_outliers(dataset, ignore_features=['DEPT_NAME', 'MERCHANT', 'TRANS_DT'])
We have ignored a few columns in the dataset by passing them under the ignore_features parameter. There could be many reasons why you might not want to use certain columns for training a machine learning algorithm.
PyCaret allows you to hide, rather than drop, unneeded columns from a dataset, as you might require those columns for later analysis. For example, in this case we don't want to use the transaction date for training the algorithm, and hence we have passed it under ignore_features.
There are over 10 ready-to-use anomaly detection algorithms in PyCaret.
By default, PyCaret trains a K-Nearest Neighbors Anomaly Detector with 5% fraction (i.e. 5% of the total number of rows in the table will be flagged as outliers). Default values can be changed easily:
To change the fraction value you can use the fraction parameter within the get_outliers( ) function.
To change the model type use the model parameter within get_outliers().
See an example code for training an Isolation Forest detector with 0.1 fraction:
from pycaret.anomaly import *
dataset = get_outliers(dataset, model = 'iforest', fraction = 0.1, ignore_features=['DEPT_NAME', 'MERCHANT', 'TRANS_DT'])
Output:
Two new columns are attached to the original table. Label (1 = outlier, 0 = inlier) and Score (data points with high scores are categorized as outlier). Apply the query to see results in Power BI data set.
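If you want to inspect the flagged transactions before building visuals, a short follow-up line like this (using the Label and Score columns described above) can be appended to the same Python script step:

# Keep only the rows flagged as outliers, highest anomaly scores first.
dataset = dataset[dataset['Label'] == 1].sort_values('Score', ascending=False)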
Once you have Outlier labels in Power BI, here’s an example of how you can visualize it in dashboard:
You can download the PBIX file and the data set from our GitHub.
What has been demonstrated above was one simple way to implement Anomaly Detection in Power BI. However, it is important to note that the method shown above trains the anomaly detector every time the Power BI dataset is refreshed. This may be a problem for two reasons:
When the model is re-trained with new data, the anomaly labels may change (some transactions that were labeled as outliers earlier may not be considered outliers anymore)
You don’t want to spend hours of time everyday re-training the model.
An alternative way to implement anomaly detection in Power BI when it is intended to be used in production is to pass the pre-trained model to Power BI for labeling instead of training the model in Power BI itself.
You can use any Integrated Development Environment (IDE) or Notebook for training machine learning models. In this example, we have used Visual Studio Code to train an anomaly detection model.
A trained model is then saved as a pickle file and imported into Power Query for generating anomaly labels (1 or 0).
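A minimal training script along these lines might look as follows. The file names are assumptions for illustration, and save_model is the call that produces the pickle file mentioned above:

# A sketch of training and saving an anomaly model with PyCaret in an IDE.
import pandas as pd
from pycaret.anomaly import setup, create_model, save_model

data = pd.read_csv('delaware_anomaly.csv')
anomaly_setup = setup(data, ignore_features=['DEPT_NAME', 'MERCHANT', 'TRANS_DT'])
iforest = create_model('iforest', fraction=0.1)
save_model(iforest, 'anomaly_deployment_13052020')   # writes the .pkl file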
If you would like to learn more about implementing Anomaly Detection in Jupyter notebook using PyCaret, watch this 2 minute video tutorial:
Execute the below code as a Python script to generate labels from the pre-trained model.
from pycaret.anomaly import *
dataset = predict_model('c:/.../anomaly_deployment_13052020', data = dataset)
The output of this will be the same as the one we saw above. However, the difference is that when you use a pre-trained model, the label is generated on a new dataset using the same model instead of re-training the model every time you refresh the Power BI dataset.
Once you’ve uploaded the .pbix file to the Power BI service, a couple more steps are necessary to enable seamless integration of the machine learning pipeline into your data pipeline. These include:
Enable scheduled refresh for the dataset — to enable a scheduled refresh for the workbook that contains your dataset with Python scripts, see Configuring scheduled refresh, which also includes information about Personal Gateway.
Install the Personal Gateway — you need a Personal Gateway installed on the machine where the file is located, and where Python is installed; the Power BI service must have access to that Python environment. You can get more information on how to install and configure Personal Gateway.
If you are Interested in learning more about Anomaly Detection, checkout our Notebook Tutorial.
We have received overwhelming support and feedback from the community. We are actively working on improving PyCaret and preparing for our next release. PyCaret 1.0.1 will be bigger and better. If you would like to share your feedback and help us improve further, you may fill this form on the website or leave a comment on our GitHub or LinkedIn page.
Follow our LinkedIn and subscribe to our Youtube channel to learn more about PyCaret.
User Guide / Documentation, GitHub Repository, Install PyCaret, Notebook Tutorials, Contribute in PyCaret
As of the first release 1.0.0, PyCaret has the following modules available for use. Click on the links below to see the documentation and working examples in Python.
Classification, Regression, Clustering, Anomaly Detection, Natural Language Processing, Association Rule Mining
PyCaret getting started tutorials in Notebook:
Clustering, Anomaly Detection, Natural Language Processing, Association Rule Mining, Regression, Classification
PyCaret is an open source project. Everybody is welcome to contribute. If you would like to contribute, please feel free to work on open issues. Pull requests are accepted with unit tests on the dev-1.0.1 branch.
Please give us ⭐️ on our GitHub repo if you like PyCaret.
Medium : https://medium.com/@moez_62905/
LinkedIn : https://www.linkedin.com/in/profile-moez/
Twitter : https://twitter.com/moezpycaretorg1 | [
{
"code": null,
"e": 469,
"s": 172,
"text": "In our last post, Machine Learning in Power BI using PyCaret, we presented a step-by-step tutorial on how PyCaret can be integrated within Power BI, thus allowing analysts and data scientists to add a layer of machine learning to their Dashboards and Reports without any additional license costs."
},
{
"code": null,
"e": 652,
"s": 469,
"text": "In this post, we will dive deeper and implement an Anomaly Detector in Power BI using PyCaret. If you haven’t heard about PyCaret before, please read this announcement to learn more."
},
{
"code": null,
"e": 707,
"s": 652,
"text": "What is Anomaly Detection? Types of Anomaly Detection?"
},
{
"code": null,
"e": 773,
"s": 707,
"text": "Train and implement an unsupervised anomaly detector in Power BI."
},
{
"code": null,
"e": 831,
"s": 773,
"text": "Analyze results and visualize information in a dashboard."
},
{
"code": null,
"e": 890,
"s": 831,
"text": "How to deploy the anomaly detector in Power BI production?"
},
{
"code": null,
"e": 1088,
"s": 890,
"text": "If you have used Python before, it is likely that you already have Anaconda Distribution installed on your computer. If not, click here to download Anaconda Distribution with Python 3.7 or greater."
},
{
"code": null,
"e": 1250,
"s": 1088,
"text": "Before we start using PyCaret’s machine learning capabilities in Power BI we have to create a virtual environment and install pycaret. It’s a three-step process:"
},
{
"code": null,
"e": 1292,
"s": 1250,
"text": "✅ Step 1 — Create an anaconda environment"
},
{
"code": null,
"e": 1361,
"s": 1292,
"text": "Open Anaconda Prompt from start menu and execute the following code:"
},
{
"code": null,
"e": 1398,
"s": 1361,
"text": "conda create --name myenv python=3.7"
},
{
"code": null,
"e": 1425,
"s": 1398,
"text": "✅ Step 2 — Install PyCaret"
},
{
"code": null,
"e": 1472,
"s": 1425,
"text": "Execute the following code in Anaconda Prompt:"
},
{
"code": null,
"e": 1492,
"s": 1472,
"text": "pip install pycaret"
},
{
"code": null,
"e": 1634,
"s": 1492,
"text": "Installation may take 15–20 minutes. If you are having issues with installation, please see our GitHub page for known issues and resolutions."
},
{
"code": null,
"e": 1677,
"s": 1634,
"text": "✅Step 3 — Set Python Directory in Power BI"
},
{
"code": null,
"e": 1896,
"s": 1677,
"text": "The virtual environment created must be linked with Power BI. This can be done using Global Settings in Power BI Desktop (File → Options → Global → Python scripting). Anaconda Environment by default is installed under:"
},
{
"code": null,
"e": 1959,
"s": 1896,
"text": "C:\\Users\\username\\AppData\\Local\\Continuum\\anaconda3\\envs\\myenv"
},
{
"code": null,
"e": 2149,
"s": 1959,
"text": "Anomaly Detection is a technique in machine learning used for identifying rare items, events or observations which raise suspicions by differing significantly from the majority of the data."
},
{
"code": null,
"e": 2342,
"s": 2149,
"text": "Typically, the anomalous items will translate to some kind of problem such as bank fraud, a structural defect, medical problems or error. There are three ways to implement an anomaly detector:"
},
{
"code": null,
"e": 2515,
"s": 2342,
"text": "(a) Supervised: Used when the data set has labels identifying which transactions are anomaly and which are normal. (this is similar to a supervised classification problem)."
},
{
"code": null,
"e": 2831,
"s": 2515,
"text": "(b) Semi-Supervised: The idea behind semi-supervised anomaly detection is to train a model on normal data only (without any anomalies). When the trained model is then used on unseen data points, it can predict whether the new data point is normal or not (based on the distribution of the data in the trained model)."
},
{
"code": null,
"e": 3330,
"s": 2831,
"text": "(c) Unsupervised: Exactly as it sounds, unsupervised means no labels and therefore no training and test data set. In unsupervised learning a model is trained on the complete dataset and assumes that the majority of the instances are normal. While looking for instances that seem to fit least to the remainder. There are several unsupervised anomaly detection algorithms such as Isolation Forest or One-Class Support Vector Machine. Each has their own method of identifying anomalies in the dataset."
},
{
"code": null,
"e": 3559,
"s": 3330,
"text": "This tutorial is about implementing unsupervised anomaly detection in Power BI using a Python library called PyCaret. Discussion of the specific details and mathematics behind these algorithms are out-of-scope for this tutorial."
},
{
"code": null,
"e": 3937,
"s": 3559,
"text": "Many companies issue corporate credit cards (also known as purchase cards or p-cards) to employees for effectively managing operational purchasing. Normally there is a process in place for employees to submit those claims electronically. The data collected is typically transactional and likely to include date of transaction, vendor name, type of expense, merchant and amount."
},
{
"code": null,
"e": 4142,
"s": 3937,
"text": "In this tutorial we will use State Employees Credit Card Transactions from 2014–2019 for the Department of Education in the State of Delaware, US. The data is available online on their open data platform."
},
{
"code": null,
"e": 4367,
"s": 4142,
"text": "Disclaimer: This tutorial demonstrates the use of PyCaret in Power BI to build an anomaly detector. The sample dashboard that is built in this tutorial by no means reflects actual anomalies or is meant to identify anomalies."
},
{
"code": null,
"e": 4547,
"s": 4367,
"text": "Now that you have setup the Anaconda Environment, installed PyCaret, understand the basics of Anomaly Detection and have the business context for this tutorial, let’s get started."
},
{
"code": null,
"e": 4697,
"s": 4547,
"text": "The first step is importing the dataset into Power BI Desktop. You can load the data using a web connector. (Power BI Desktop → Get Data → From Web)."
},
{
"code": null,
"e": 4802,
"s": 4697,
"text": "Link to csv file: https://raw.githubusercontent.com/pycaret/pycaret/master/datasets/delaware_anomaly.csv"
},
{
"code": null,
"e": 5002,
"s": 4802,
"text": "To train an anomaly detector in Power BI we will have to execute a Python script in Power Query Editor (Power Query Editor → Transform → Run python script). Run the following code as a Python script:"
},
{
"code": null,
"e": 5118,
"s": 5002,
"text": "from pycaret.anomaly import *dataset = get_outliers(dataset, ignore_features=['DEPT_NAME', 'MERCHANT', 'TRANS_DT'])"
},
{
"code": null,
"e": 5335,
"s": 5118,
"text": "We have ignored a few columns in the dataset by passing them under ignore_features parameter. There could be many reasons for which you might not want to use certain columns for training a machine learning algorithm."
},
{
"code": null,
"e": 5608,
"s": 5335,
"text": "PyCaret allows you to hide instead of drop unneeded columns from a dataset as you might require those columns for later analysis. For example, in this case we don't want to use transactional date for training an algorithm and hence we have passed it under ignore_features."
},
{
"code": null,
"e": 5680,
"s": 5608,
"text": "There are over 10 ready-to-use anomaly detection algorithms in PyCaret."
},
{
"code": null,
"e": 5881,
"s": 5680,
"text": "By default, PyCaret trains a K-Nearest Neighbors Anomaly Detector with 5% fraction (i.e. 5% of the total number of rows in the table will be flagged as outliers). Default values can be changed easily:"
},
{
"code": null,
"e": 5982,
"s": 5881,
"text": "To change the fraction value you can use the fraction parameter within the get_outliers( ) function."
},
{
"code": null,
"e": 6054,
"s": 5982,
"text": "To change the model type use the model parameter within get_outliers()."
},
{
"code": null,
"e": 6135,
"s": 6054,
"text": "See an example code for training an Isolation Forest detector with 0.1 fraction:"
},
{
"code": null,
"e": 6286,
"s": 6135,
"text": "from pycaret.anomaly import *dataset = get_outliers(dataset, model = 'iforest', fraction = 0.1, ignore_features=['DEPT_NAME', 'MERCHANT', 'TRANS_DT'])"
},
{
"code": null,
"e": 6294,
"s": 6286,
"text": "Output:"
},
{
"code": null,
"e": 6500,
"s": 6294,
"text": "Two new columns are attached to the original table. Label (1 = outlier, 0 = inlier) and Score (data points with high scores are categorized as outlier). Apply the query to see results in Power BI data set."
},
{
"code": null,
"e": 6602,
"s": 6500,
"text": "Once you have Outlier labels in Power BI, here’s an example of how you can visualize it in dashboard:"
},
{
"code": null,
"e": 6667,
"s": 6602,
"text": "You can download the PBIX file and the data set from our GitHub."
},
{
"code": null,
"e": 6936,
"s": 6667,
"text": "What has been demonstrated above was one simple way to implement Anomaly Detection in Power BI. However, it is important to note that the method shown above train the anomaly detector every time the Power BI dataset is refreshed. This may be a problem for two reasons:"
},
{
"code": null,
"e": 7107,
"s": 6936,
"text": "When the model is re-trained with new data, the anomaly labels may change (some transactions that were labeled as outliers earlier may not be considered outliers anymore)"
},
{
"code": null,
"e": 7177,
"s": 7107,
"text": "You don’t want to spend hours of time everyday re-training the model."
},
{
"code": null,
"e": 7392,
"s": 7177,
"text": "An alternative way to implement anomaly detection in Power BI when it is intended to be used in production is to pass the pre-trained model to Power BI for labeling instead of training the model in Power BI itself."
},
{
"code": null,
"e": 7584,
"s": 7392,
"text": "You can use any Integrated Development Environment (IDE)or Notebook for training machine learning models. In this example, we have used Visual Studio Code to train an anomaly detection model."
},
{
"code": null,
"e": 7701,
"s": 7584,
"text": "A trained model is then saved as a pickle file and imported into Power Query for generating anomaly labels (1 or 0)."
},
{
"code": null,
"e": 7841,
"s": 7701,
"text": "If you would like to learn more about implementing Anomaly Detection in Jupyter notebook using PyCaret, watch this 2 minute video tutorial:"
},
{
"code": null,
"e": 7930,
"s": 7841,
"text": "Execute the below code as a Python script to generate labels from the pre-trained model."
},
{
"code": null,
"e": 8036,
"s": 7930,
"text": "from pycaret.anomaly import *dataset = predict_model('c:/.../anomaly_deployment_13052020, data = dataset)"
},
{
"code": null,
"e": 8302,
"s": 8036,
"text": "The output of this will be the same as the one we saw above. However, the difference is that when you use a pre-trained model, the label is generated on a new dataset using the same model instead of re-training the model every time you refresh the Power BI dataset."
},
{
"code": null,
"e": 8501,
"s": 8302,
"text": "Once you’ve uploaded the .pbix file to the Power BI service, a couple more steps are necessary to enable seamless integration of the machine learning pipeline into your data pipeline. These include:"
},
{
"code": null,
"e": 8730,
"s": 8501,
"text": "Enable scheduled refresh for the dataset — to enable a scheduled refresh for the workbook that contains your dataset with Python scripts, see Configuring scheduled refresh, which also includes information about Personal Gateway."
},
{
"code": null,
"e": 9017,
"s": 8730,
"text": "Install the Personal Gateway — you need a Personal Gateway installed on the machine where the file is located, and where Python is installed; the Power BI service must have access to that Python environment. You can get more information on how to install and configure Personal Gateway."
},
{
"code": null,
"e": 9113,
"s": 9017,
"text": "If you are Interested in learning more about Anomaly Detection, checkout our Notebook Tutorial."
},
{
"code": null,
"e": 9465,
"s": 9113,
"text": "We have received overwhelming support and feedback from the community. We are actively working on improving PyCaret and preparing for our next release. PyCaret 1.0.1 will be bigger and better. If you would like to share your feedback and help us improve further, you may fill this form on the website or leave a comment on our GitHub or LinkedIn page."
},
{
"code": null,
"e": 9551,
"s": 9465,
"text": "Follow our LinkedIn and subscribe to our Youtube channel to learn more about PyCaret."
},
{
"code": null,
"e": 9649,
"s": 9551,
"text": "User Guide / DocumentationGitHub RepositoryInstall PyCaretNotebook TutorialsContribute in PyCaret"
},
{
"code": null,
"e": 9815,
"s": 9649,
"text": "As of the first release 1.0.0, PyCaret has the following modules available for use. Click on the links below to see the documentation and working examples in Python."
},
{
"code": null,
"e": 9917,
"s": 9815,
"text": "ClassificationRegressionClusteringAnomaly DetectionNatural Language ProcessingAssociation Rule Mining"
},
{
"code": null,
"e": 9964,
"s": 9917,
"text": "PyCaret getting started tutorials in Notebook:"
},
{
"code": null,
"e": 10066,
"s": 9964,
"text": "ClusteringAnomaly DetectionNatural Language ProcessingAssociation Rule MiningRegressionClassification"
},
{
"code": null,
"e": 10272,
"s": 10066,
"text": "PyCaret is an open source project. Everybody is welcome to contribute. If you would like contribute, please feel free to work on open issues. Pull requests are accepted with unit tests on dev-1.0.1 branch."
},
{
"code": null,
"e": 10330,
"s": 10272,
"text": "Please give us ⭐️ on our GitHub repo if you like PyCaret."
},
{
"code": null,
"e": 10371,
"s": 10330,
"text": "Medium : https://medium.com/@moez_62905/"
},
{
"code": null,
"e": 10424,
"s": 10371,
"text": "LinkedIn : https://www.linkedin.com/in/profile-moez/"
}
]
|
gRPC - Unary gRPC | We will now look at various types of communication that the gRPC framework supports. We will use an example of Bookstore where the client can search and place an order for book delivery.
Let us see unary gRPC communication, where we let the client search for a title and have the server return one of the books matching the title queried for.
First let us define the bookstore.proto file in common_proto_files −
syntax = "proto3";
option java_package = "com.tp.bookstore";
service BookStore {
rpc first (BookSearch) returns (Book) {}
}
message BookSearch {
string name = 1;
string author = 2;
string genre = 3;
}
message Book {
string name = 1;
string author = 2;
int32 price = 3;
}
Let us now take a closer look at each of the lines in the above block.
syntax = "proto3";
The "syntax" here represents the version of Protobuf we are using. We are using the latest version 3 and the schema thus can use all the syntax which is valid for version 3.
package tutorial;
A package declaration like the one above is used for conflict resolution if, say, we have multiple classes/members with the same name. (Note that our bookstore.proto does not declare a package; for the generated Java code, the java_package option described next plays that role.)
option java_package = "com.tp.bookstore";
This option is specific to Java; it specifies the package in which the code auto-generated from the .proto file will be placed.
service BookStore {
rpc first (BookSearch) returns (Book) {}
}
This represents the name of the service "BookStore" and the function name "first" which can be called. The "first" function takes in input of type "BookSearch" and returns output of type "Book". So, effectively, we let the client search for a title and have the server return one of the books matching the title queried for.
Now let us look at these types.
message BookSearch {
string name = 1;
string author = 2;
string genre = 3;
}
In the above block, we have defined BookSearch, which contains attributes like name, author and genre. The client is supposed to send an object of type "BookSearch" to the server.
message Book {
string name = 1;
string author = 2;
int32 price = 3;
}
Here, we have also defined that, given a "BookSearch", the server would return a "Book", which contains the book's attributes along with its price. The server is supposed to send an object of type "Book" to the client.
Note that we already had the Maven setup done for auto-generating our class files as well as our RPC code. So, now we can simply compile our project −
mvn clean install
This should auto-generate the source code required for us to use gRPC. The source code would be placed under −
Protobuf class code: target/generated-sources/protobuf/java/com.tp.bookstore
Protobuf gRPC code: target/generated-sources/protobuf/grpc-java/com.tp.bookstore
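For a quick sense of what these generated classes look like in use, here is a minimal sketch (values are illustrative, and it assumes the generated com.tp.bookstore.BookStoreOuterClass types are on the classpath):

// Building a request with the generated builder API.
BookSearch searchQuery = BookSearch.newBuilder()
   .setName("Great Gatsby")
   .setGenre("Classic")
   .build();

// Reading fields back from a generated message.
Book book = Book.newBuilder().setName("Great Gatsby").setPrice(300).build();
System.out.println(book.getName() + " costs " + book.getPrice());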
Now that we have defined the proto file which contains the function definition, let us set up a server which can serve these functions.
Let us write our server code to serve the above function and save it in com.tp.bookstore.BookeStoreServerUnary.java −
package com.tp.bookstore;
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.logging.Logger;
import java.util.stream.Collectors;
import com.tp.bookstore.BookStoreOuterClass.Book;
import com.tp.bookstore.BookStoreOuterClass.BookSearch;
public class BookeStoreServerUnary {
private static final Logger logger = Logger.getLogger(BookeStoreServerUnary.class.getName());
static Map<String, Book> bookMap = new HashMap<>();
static {
bookMap.put("Great Gatsby", Book.newBuilder().setName("Great Gatsby")
.setAuthor("Scott Fitzgerald")
.setPrice(300).build());
bookMap.put("To Kill MockingBird", Book.newBuilder().setName("To Kill MockingBird")
.setAuthor("Harper Lee")
.setPrice(400).build());
bookMap.put("Passage to India", Book.newBuilder().setName("Passage to India")
.setAuthor("E.M.Forster")
.setPrice(500).build());
bookMap.put("The Side of Paradise", Book.newBuilder().setName("The Side of Paradise")
.setAuthor("Scott Fitzgerald")
.setPrice(600).build());
bookMap.put("Go Set a Watchman", Book.newBuilder().setName("Go Set a Watchman")
.setAuthor("Harper Lee")
.setPrice(700).build());
}
private Server server;
private void start() throws IOException {
int port = 50051;
server = ServerBuilder.forPort(port)
.addService(new BookStoreImpl()).build().start();
logger.info("Server started, listening on " + port);
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
System.err.println("Shutting down gRPC server");
try {
server.shutdown().awaitTermination(30, TimeUnit.SECONDS);
} catch (InterruptedException e) {
e.printStackTrace(System.err);
}
}
});
}
public static void main(String[] args) throws IOException, InterruptedException {
final BookeStoreServerUnary greetServer = new BookeStoreServerUnary();
greetServer.start();
greetServer.server.awaitTermination();
}
static class BookStoreImpl extends BookStoreGrpc.BookStoreImplBase {
@Override
public void first(BookSearch searchQuery, StreamObserver<Book> responseObserver) {
logger.info("Searching for book with title: " + searchQuery.getName());
List<String> matchingBookTitles = bookMap.keySet().stream().filter(title ->
title.startsWith(searchQuery.getName().trim())).collect(Collectors.toList());
Book foundBook = null;
if(matchingBookTitles.size() > 0) {
foundBook = bookMap.get(matchingBookTitles.get(0));
}
responseObserver.onNext(foundBook);
responseObserver.onCompleted();
}
}
}
The above code starts a gRPC server at a specified port and serves the functions and services which we had written in our proto file. Let us walk through the above code −
Starting from the main method, we create a gRPC server at a specified port.
But before starting the server, we assign the server the service which we want to run, i.e., in our case, the BookStore service.
For this purpose, we need to pass the service instance to the server, so we go ahead and create a service instance, i.e., in our case, the BookStoreImpl.
The service instance needs to provide an implementation of the method/function which is present in the .proto file, i.e., in our case, the first method.
The method expects an object of the type defined in the .proto file, i.e., for us, BookSearch.
The method searches for the book in the available bookMap and then returns the Book by calling the onNext() method. Once done, the server announces that it is done with the output by calling onCompleted(). (A defensive variation for the no-match case is sketched after this list.)
Finally, we also have a shutdown hook to ensure clean shutting down of the server when we are done executing our code.
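One caveat: when no title matches, the code above ends up passing null to onNext(), which grpc-java rejects at runtime. A minimal defensive variation (a sketch, not part of the original tutorial) would report the no-match case as a gRPC status instead:

// Inside first(): report "no match" via a status instead of onNext(null).
if (foundBook == null) {
   responseObserver.onError(io.grpc.Status.NOT_FOUND
      .withDescription("No book found for title: " + searchQuery.getName())
      .asRuntimeException());
   return;
}
responseObserver.onNext(foundBook);
responseObserver.onCompleted();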
Now that we have written the code for the server, let us set up a client which can call these functions.
Let us write our client code to call the above function and save it in com.tp.bookstore.BookStoreClientUnaryBlocking.java −
package com.tp.bookstore;
import io.grpc.Channel;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.StatusRuntimeException;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;
import com.tp.bookstore.BookStoreOuterClass.Book;
import com.tp.bookstore.BookStoreOuterClass.BookSearch;
public class BookStoreClientUnaryBlocking {
private static final Logger logger = Logger.getLogger(BookStoreClientUnaryBlocking.class.getName());
private final BookStoreGrpc.BookStoreBlockingStub blockingStub;
public BookStoreClientUnaryBlocking(Channel channel) {
blockingStub = BookStoreGrpc.newBlockingStub(channel);
}
public void getBook(String bookName) {
logger.info("Querying for book with title: " + bookName);
BookSearch request = BookSearch.newBuilder().setName(bookName).build();
Book response;
try {
response = blockingStub.first(request);
} catch (StatusRuntimeException e) {
logger.log(Level.WARNING, "RPC failed: {0}", e.getStatus());
return;
}
logger.info("Got following book from server: " + response);
}
public static void main(String[] args) throws Exception {
String bookName = args[0];
String serverAddress = "localhost:50051";
ManagedChannel channel = ManagedChannelBuilder.forTarget(serverAddress)
.usePlaintext()
.build();
try {
BookStoreClientUnaryBlocking client = new
BookStoreClientUnaryBlocking(channel);
client.getBook(bookName);
} finally {
channel.shutdownNow().awaitTermination(5,
TimeUnit.SECONDS);
}
}
}
The above code sets up a gRPC client, connects it to our server, and calls the service and function we defined in our proto file. Let us walk through the above code −
Starting from the main method, we accept one argument, i.e., the title of the book we want to search for.
We set up a Channel for gRPC communication with our server.
And then, we create a blocking stub using the channel. This is where we choose the service "BookStore" whose functions we plan to call. A "stub" is nothing but a wrapper which hides the complexity of the remote call from the caller. (A non-blocking alternative is sketched after this list.)
Then, we simply create the expected input defined in the .proto file, i.e., in our case BookSearch, and we add the title name we want the server to search for.
We ultimately make the call and await the result from the server.
Finally, we close the channel to avoid any resource leak.
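As an aside, the generated BookStoreGrpc class also exposes a non-blocking stub via newStub(channel). The sketch below (a hypothetical variation, assuming io.grpc.stub.StreamObserver is imported) makes the same call asynchronously:

// Non-blocking variant of the same call using the generated async stub.
BookStoreGrpc.BookStoreStub asyncStub = BookStoreGrpc.newStub(channel);
asyncStub.first(request, new StreamObserver<Book>() {
   @Override public void onNext(Book book) { System.out.println("Got book: " + book.getName()); }
   @Override public void onError(Throwable t) { t.printStackTrace(); }
   @Override public void onCompleted() { System.out.println("Call completed"); }
});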
So, that is our client code.
To sum up, what we want to do is the following −
Start the gRPC server.
The Client queries the Server for a book with a given name/title.
The Server searches for the book in its store.
The Server then responds with the book and its other attributes.
Now that we have defined our proto file and written our server and client code, let us proceed to execute this code and see things in action.
For running the code, fire up two shells. Start the server on the first shell by executing the following command −
java -cp .\target\grpc-point-1.0.jar
com.tp.bookstore.BookeStoreServerUnary
We would see the following output −
Jul 03, 2021 7:21:58 PM
com.tp.bookstore.BookeStoreServerUnary start
INFO: Server started, listening on 50051
The above output means the server has started.
Now, let us start the client.
java -cp .\target\grpc-point-1.0.jar
com.tp.bookstore.BookStoreClientUnaryBlocking "To Kill"
We would see the following output −
Jul 03, 2021 7:22:03 PM
com.tp.bookstore.BookStoreClientUnaryBlocking getBook
INFO: Querying for book with title: To Kill
Jul 03, 2021 7:22:04 PM
com.tp.bookstore.BookStoreClientUnaryBlocking getBook
INFO: Got following book from server: name: "To Kill
MockingBird"
author: "Harper Lee"
price: 400
So, as we see, the client was able to get the book details by querying the server with the name of the book. | [
{
"code": null,
"e": 2024,
"s": 1837,
"text": "We will now look at various types of communication that the gRPC framework supports. We will use an example of Bookstore where the client can search and place an order for book delivery."
},
{
"code": null,
"e": 2171,
"s": 2024,
"text": "Let us see unary gRPC communication where we let the client search for a title and return randomly one of the book matching the title queried for."
},
{
"code": null,
"e": 2240,
"s": 2171,
"text": "First let us define the bookstore.proto file in common_proto_files −"
},
{
"code": null,
"e": 2534,
"s": 2240,
"text": "syntax = \"proto3\";\noption java_package = \"com.tp.bookstore\";\n\nservice BookStore {\n rpc first (BookSearch) returns (Book) {}\n}\nmessage BookSearch {\n string name = 1;\n string author = 2;\n string genre = 3;\n}\nmessage Book {\n string name = 1;\n string author = 2;\n int32 price = 3;\n}\n"
},
{
"code": null,
"e": 2605,
"s": 2534,
"text": "Let us now take a closer look at each of the lines in the above block."
},
{
"code": null,
"e": 2625,
"s": 2605,
"text": "syntax = \"proto3\";\n"
},
{
"code": null,
"e": 2799,
"s": 2625,
"text": "The \"syntax\" here represents the version of Protobuf we are using. We are using the latest version 3 and the schema thus can use all the syntax which is valid for version 3."
},
{
"code": null,
"e": 2818,
"s": 2799,
"text": "package tutorial;\n"
},
{
"code": null,
"e": 2929,
"s": 2818,
"text": "The package here is used for conflict resolution if, say, we have multiple classes/members with the same name."
},
{
"code": null,
"e": 2972,
"s": 2929,
"text": "option java_package = \"com.tp.bookstore\";\n"
},
{
"code": null,
"e": 3079,
"s": 2972,
"text": "This argument is specific to Java, i.e., the package where to auto-generate the code from the .proto file."
},
{
"code": null,
"e": 3146,
"s": 3079,
"text": "service BookStore {\n rpc first (BookSearch) returns (Book) {}\n}\n"
},
{
"code": null,
"e": 3462,
"s": 3146,
"text": "This represents the name of the service \"BookStore\" and the function name \"first\" which can be called. The \"first\" function takes in the input of type \"BookSearch\" and returns the output of type \"Book\". So, effectively, we let the client search for a title and return one of the book matching the title queried for."
},
{
"code": null,
"e": 3494,
"s": 3462,
"text": "Now let us look at these types."
},
{
"code": null,
"e": 3581,
"s": 3494,
"text": "message BookSearch {\n string name = 1;\n string author = 2;\n string genre = 3;\n}\n"
},
{
"code": null,
"e": 3772,
"s": 3581,
"text": "In the above block, we have defined the BookSearch which contains the attributes like name, author and genre. The client is supposed to send the object of type of \"BookSearch\" to the server."
},
{
"code": null,
"e": 3852,
"s": 3772,
"text": "message Book {\n string name = 1;\n string author = 2;\n int32 price = 3;\n}\n"
},
{
"code": null,
"e": 4082,
"s": 3852,
"text": "Here, we have also defined that, given a \"BookSearch\", the server would return the \"Book\" which contains book attributes along with the price of the book. The server is supposed to send the object of type of \"Book\" to the client."
},
{
"code": null,
"e": 4233,
"s": 4082,
"text": "Note that we already had the Maven setup done for auto-generating our class files as well as our RPC code. So, now we can simply compile our project −"
},
{
"code": null,
"e": 4252,
"s": 4233,
"text": "mvn clean install\n"
},
{
"code": null,
"e": 4363,
"s": 4252,
"text": "This should auto-generate the source code required for us to use gRPC. The source code would be placed under −"
},
{
"code": null,
"e": 4522,
"s": 4363,
"text": "Protobuf class code: target/generated-sources/protobuf/java/com.tp.bookstore\nProtobuf gRPC code: target/generated-sources/protobuf/grpc-java/com.tp.bookstore\n"
},
{
"code": null,
"e": 4656,
"s": 4522,
"text": "Now that we have defined the proto file which contains the function definition, let us setup a server which can call these functions."
},
{
"code": null,
"e": 4774,
"s": 4656,
"text": "Let us write our server code to serve the above function and save it in com.tp.bookstore.BookeStoreServerUnary.java −"
},
{
"code": null,
"e": 7746,
"s": 4774,
"text": "package com.tp.bookstore;\n\nimport io.grpc.Server;\nimport io.grpc.ServerBuilder;\nimport io.grpc.stub.StreamObserver;\nimport java.io.IOException;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.concurrent.TimeUnit;\nimport java.util.logging.Logger;\nimport java.util.stream.Collectors;\n\nimport com.tp.bookstore.BookStoreOuterClass.Book;\nimport com.tp.bookstore.BookStoreOuterClass.BookSearch;\npublic class BookeStoreServerUnary {\n private static final Logger logger = Logger.getLogger(BookeStoreServerUnary.class.getName());\n \n static Map<String, Book> bookMap = new HashMap<>();\n static {\n bookMap.put(\"Great Gatsby\", Book.newBuilder().setName(\"Great Gatsby\")\n .setAuthor(\"Scott Fitzgerald\")\n .setPrice(300).build());\n bookMap.put(\"To Kill MockingBird\", Book.newBuilder().setName(\"To Kill MockingBird\")\n .setAuthor(\"Harper Lee\")\n .setPrice(400).build());\n bookMap.put(\"Passage to India\", Book.newBuilder().setName(\"Passage to India\")\n .setAuthor(\"E.M.Forster\")\n .setPrice(500).build());\n bookMap.put(\"The Side of Paradise\", Book.newBuilder().setName(\"The Side of Paradise\")\n .setAuthor(\"Scott Fitzgerald\")\n .setPrice(600).build());\n bookMap.put(\"Go Set a Watchman\", Book.newBuilder().setName(\"Go Set a Watchman\")\n .setAuthor(\"Harper Lee\")\n .setPrice(700).build());\n }\n private Server server;\n private void start() throws IOException {\n int port = 50051;\n server = ServerBuilder.forPort(port)\n .addService(new BookStoreImpl()).build().start();\n \n logger.info(\"Server started, listening on \" + port);\n \n Runtime.getRuntime().addShutdownHook(new Thread() {\n @Override\n public void run() {\n System.err.println(\"Shutting down gRPC server\");\n try {\n server.shutdown().awaitTermination(30, TimeUnit.SECONDS);\n } catch (InterruptedException e) {\n e.printStackTrace(System.err);\n }\n }\n });\n }\n public static void main(String[] args) throws IOException, InterruptedException {\n final BookeStoreServerUnary greetServer = new BookeStoreServerUnary();\n greetServer.start();\n greetServer.server.awaitTermination();\n }\n static class BookStoreImpl extends BookStoreGrpc.BookStoreImplBase {\n @Override\n public void first(BookSearch searchQuery, StreamObserver<Book> responseObserver) {\n logger.info(\"Searching for book with title: \" + searchQuery.getName());\n List<String> matchingBookTitles = bookMap.keySet().stream().filter(title ->\n title.startsWith(searchQuery.getName().trim())).collect(Collectors.toList());\n\n Book foundBook = null;\n if(matchingBookTitles.size() > 0) {\n foundBook = bookMap.get(matchingBookTitles.get(0));\n }\n responseObserver.onNext(foundBook);\n responseObserver.onCompleted();\n }\n }\n}"
},
{
"code": null,
"e": 7917,
"s": 7746,
"text": "The above code starts a gRPC server at a specified port and serves the functions and services which we had written in our proto file. Let us walk through the above code −"
},
{
"code": null,
"e": 7993,
"s": 7917,
"text": "Starting from the main method, we create a gRPC server at a specified port."
},
{
"code": null,
"e": 8069,
"s": 7993,
"text": "Starting from the main method, we create a gRPC server at a specified port."
},
{
"code": null,
"e": 8198,
"s": 8069,
"text": "But before starting the server, we assign the server the service which we want to run, i.e., in our case, the BookStore service."
},
{
"code": null,
"e": 8327,
"s": 8198,
"text": "But before starting the server, we assign the server the service which we want to run, i.e., in our case, the BookStore service."
},
{
"code": null,
"e": 8480,
"s": 8327,
"text": "For this purpose, we need to pass the service instance to the server, so we go ahead and create a service instance, i.e., in our case, the BookStoreImpl"
},
{
"code": null,
"e": 8633,
"s": 8480,
"text": "For this purpose, we need to pass the service instance to the server, so we go ahead and create a service instance, i.e., in our case, the BookStoreImpl"
},
{
"code": null,
"e": 8785,
"s": 8633,
"text": "The service instance need to provide an implementation of the method/function which is present in the .proto file, i.e., in our case, the first method."
},
{
"code": null,
"e": 8937,
"s": 8785,
"text": "The service instance need to provide an implementation of the method/function which is present in the .proto file, i.e., in our case, the first method."
},
{
"code": null,
"e": 9032,
"s": 8937,
"text": "The method expects an object of type as defined in the .proto file, i.e.,for us the BookSearch"
},
{
"code": null,
"e": 9127,
"s": 9032,
"text": "The method expects an object of type as defined in the .proto file, i.e.,for us the BookSearch"
},
{
"code": null,
"e": 9332,
"s": 9127,
"text": "The method searches for the book in the available bookMap and then returns the Book by calling the onNext() method. Once done, the server announces that it is done with the output by calling onCompleted()"
},
{
"code": null,
"e": 9537,
"s": 9332,
"text": "The method searches for the book in the available bookMap and then returns the Book by calling the onNext() method. Once done, the server announces that it is done with the output by calling onCompleted()"
},
{
"code": null,
"e": 9656,
"s": 9537,
"text": "Finally, we also have a shutdown hook to ensure clean shutting down of the server when we are done executing our code."
},
{
"code": null,
"e": 9775,
"s": 9656,
"text": "Finally, we also have a shutdown hook to ensure clean shutting down of the server when we are done executing our code."
},
{
"code": null,
"e": 9879,
"s": 9775,
"text": "Now that we have written the code for the server, let us setup a client which can call these functions."
},
{
"code": null,
"e": 10003,
"s": 9879,
"text": "Let us write our client code to call the above function and save it in com.tp.bookstore.BookStoreClientUnaryBlocking.java −"
},
{
"code": null,
"e": 11840,
"s": 10003,
"text": "package com.tp.bookstore;\n\nimport io.grpc.Channel;\nimport io.grpc.ManagedChannel;\nimport io.grpc.ManagedChannelBuilder;\nimport io.grpc.StatusRuntimeException;\nimport java.util.concurrent.TimeUnit;\nimport java.util.logging.Level;\nimport java.util.logging.Logger;\n\nimport com.tp.bookstore.BookStoreOuterClass.Book;\nimport com.tp.bookstore.BookStoreOuterClass.BookSearch;\nimport com.tp.greeting.GreeterGrpc;\nimport com.tp.greeting.Greeting.ServerOutput;\nimport com.tp.greeting.Greeting.ClientInput;\n\npublic class BookStoreClientUnaryBlocking {\n private static final Logger logger = Logger.getLogger(BookStoreClientUnaryBlocking.class.getName());\n private final BookStoreGrpc.BookStoreBlockingStub blockingStub;\n\t\n public BookStoreClientUnaryBlocking(Channel channel) {\n blockingStub = BookStoreGrpc.newBlockingStub(channel);\n }\n public void getBook(String bookName) {\n logger.info(\"Querying for book with title: \" + bookName);\n BookSearch request = BookSearch.newBuilder().setName(bookName).build();\n \n Book response; \n try {\n response = blockingStub.first(request);\n } catch (StatusRuntimeException e) {\n logger.log(Level.WARNING, \"RPC failed: {0}\", e.getStatus());\n return;\n }\n logger.info(\"Got following book from server: \" + response);\n }\n public static void main(String[] args) throws Exception {\n String bookName = args[0];\n String serverAddress = \"localhost:50051\";\n\t \n ManagedChannel channel = ManagedChannelBuilder.forTarget(serverAddress)\n .usePlaintext()\n .build();\n \n try {\n BookStoreClientUnaryBlocking client = new \n BookStoreClientUnaryBlocking(channel);\n client.getBook(bookName);\n } finally {\n channel.shutdownNow().awaitTermination(5, \n TimeUnit.SECONDS);\n }\n }\n}"
},
{
"code": null,
"e": 12011,
"s": 11840,
"text": "The above code starts a gRPC server at a specified port and serves the functions and services which we had written in our proto file. Let us walk through the above code −"
},
{
"code": null,
"e": 12117,
"s": 12011,
"text": "Starting from the main method, we accept one argument, i.e., the title of the book we want to search for."
},
{
"code": null,
"e": 12223,
"s": 12117,
"text": "Starting from the main method, we accept one argument, i.e., the title of the book we want to search for."
},
{
"code": null,
"e": 12282,
"s": 12223,
"text": "We setup a Channel for gRPC communication with our server."
},
{
"code": null,
"e": 12341,
"s": 12282,
"text": "We setup a Channel for gRPC communication with our server."
},
{
"code": null,
"e": 12574,
"s": 12341,
"text": "And then, we create a blocking stub using the channel. This is where we choose the service \"BookStore\" whose functions we plan to call. A \"stub\" is nothing but a wrapper which hides the complexity of the remote call from the caller."
},
{
"code": null,
"e": 12807,
"s": 12574,
"text": "And then, we create a blocking stub using the channel. This is where we choose the service \"BookStore\" whose functions we plan to call. A \"stub\" is nothing but a wrapper which hides the complexity of the remote call from the caller."
},
{
"code": null,
"e": 12965,
"s": 12807,
"text": "Then, we simply create the expected input defined in the .proto file,i.e., in our case BookSearch and we add the title name we want the server to search for."
},
{
"code": null,
"e": 13123,
"s": 12965,
"text": "Then, we simply create the expected input defined in the .proto file,i.e., in our case BookSearch and we add the title name we want the server to search for."
},
{
"code": null,
"e": 13189,
"s": 13123,
"text": "We ultimately make the call and await the result from the server."
},
{
"code": null,
"e": 13255,
"s": 13189,
"text": "We ultimately make the call and await the result from the server."
},
{
"code": null,
"e": 13313,
"s": 13255,
"text": "Finally, we close the channel to avoid any resource leak."
},
{
"code": null,
"e": 13371,
"s": 13313,
"text": "Finally, we close the channel to avoid any resource leak."
},
{
"code": null,
"e": 13400,
"s": 13371,
"text": "So, that is our client code."
},
{
"code": null,
"e": 13449,
"s": 13400,
"text": "To sum up, what we want to do is the following −"
},
{
"code": null,
"e": 13472,
"s": 13449,
"text": "Start the gRPC server."
},
{
"code": null,
"e": 13495,
"s": 13472,
"text": "Start the gRPC server."
},
{
"code": null,
"e": 13561,
"s": 13495,
"text": "The Client queries the Server for a book with a given name/title."
},
{
"code": null,
"e": 13627,
"s": 13561,
"text": "The Client queries the Server for a book with a given name/title."
},
{
"code": null,
"e": 13670,
"s": 13627,
"text": "The Server searches the book in its store."
},
{
"code": null,
"e": 13713,
"s": 13670,
"text": "The Server searches the book in its store."
},
{
"code": null,
"e": 13778,
"s": 13713,
"text": "The Server then responds with the book and its other attributes."
},
{
"code": null,
"e": 13843,
"s": 13778,
"text": "The Server then responds with the book and its other attributes."
},
{
"code": null,
"e": 13987,
"s": 13843,
"text": "Now, that we have defined our proto file, written our server and the client code, let us proceed to execute this code and see things in action."
},
{
"code": null,
"e": 14102,
"s": 13987,
"text": "For running the code, fire up two shells. Start the server on the first shell by executing the following command −"
},
{
"code": null,
"e": 14180,
"s": 14102,
"text": "java -cp .\\target\\grpc-point-1.0.jar \ncom.tp.bookstore.BookeStoreServerUnary\n"
},
{
"code": null,
"e": 14216,
"s": 14180,
"text": "We would see the following output −"
},
{
"code": null,
"e": 14328,
"s": 14216,
"text": "Jul 03, 2021 7:21:58 PM \ncom.tp.bookstore.BookeStoreServerUnary start\nINFO: Server started, listening on 50051\n"
},
{
"code": null,
"e": 14375,
"s": 14328,
"text": "The above output means the server has started."
},
{
"code": null,
"e": 14405,
"s": 14375,
"text": "Now, let us start the client."
},
{
"code": null,
"e": 14500,
"s": 14405,
"text": "java -cp .\\target\\grpc-point-1.0.jar \ncom.tp.bookstore.BookStoreClientUnaryBlocking \"To Kill\"\n"
},
{
"code": null,
"e": 14536,
"s": 14500,
"text": "We would see the following output −"
},
{
"code": null,
"e": 14840,
"s": 14536,
"text": "Jul 03, 2021 7:22:03 PM \ncom.tp.bookstore.BookStoreClientUnaryBlocking getBook\nINFO: Querying for book with title: To Kill\n\nJul 03, 2021 7:22:04 PM \ncom.tp.bookstore.BookStoreClientUnaryBlocking getBook\nINFO: Got following book from server: name: \"To Kill \n\nMockingBird\"\nauthor: \"Harper Lee\"\nprice: 400\n"
},
{
"code": null,
"e": 14949,
"s": 14840,
"text": "So, as we see, the client was able to get the book details by querying the server with the name of the book."
}
]
|
Ruby - Built-in Functions | Since the Kernel module is included by the Object class, its methods are available everywhere in the Ruby program. They can be called without a receiver (functional form). Therefore, they are often called functions.
abort
Terminates program. If an exception is raised (i.e., $! isn't nil), its error message is displayed.
Array( obj)
Returns obj after converting it to an array using to_ary or to_a.
at_exit {...}
Registers a block for execution when the program exits. Similar to END statement, but END statement registers the block only once.
autoload( classname, file)
Registers a class classname to be loaded from file the first time it's used. classname may be a string or a symbol.
binding
Returns the current variable and method bindings. The Binding object that is returned may be passed to the eval method as its second argument.
block_given?
Returns true if the method was called with a block.
callcc {| c|...}
Passes a Continuation object c to the block and executes the block. callcc can be used for global exit or loop construct.
caller([ n])
Returns the current execution stack in an array of the strings in the form file:line. If n is specified, returns stack entries from nth level on down.
catch( tag) {...}
Catches a nonlocal exit by a throw called during the execution of its block.
chomp([ rs = $/])
Returns the value of variable $_ with the ending newline removed, assigning the result back to $_. The value of the newline string can be specified with rs.
chomp!([ rs = $/])
Removes newline from $_, modifying the string in place.
chop
Returns the value of $_ with its last character (one byte) removed, assigning the result back to $_.
chop!
Removes the last character from $_, modifying the string in place.
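For a quick illustration, the equivalent String methods behave the same way on an explicit receiver (the Kernel forms above act on the predefined variable $_):

#!/usr/bin/ruby

puts "hello\n".chomp # => "hello" (trailing newline removed)
puts "hello".chop # => "hell" (last character removed)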
eval( str[, scope[, file, line]])
Executes str as Ruby code. The binding in which to perform the evaluation may be specified with scope. The filename and line number of the code to be compiled may be specified using file and line.
exec( cmd[, arg...])
Replaces the current process by running the command cmd. If multiple arguments are specified, the command is executed with no shell expansion.
exit([ result = 0])
Exits program, with result as the status code returned.
exit!([ result = 0])
Kills the program bypassing exit handling such as ensure, etc.
fail(...)
See raise(...)
Float( obj)
Returns obj after converting it to a float. Numeric objects are converted directly; nil is converted to 0.0; strings are converted considering 0x, 0b radix prefix. The rest are converted using obj.to_f.
fork
fork {...}
Creates a child process. nil is returned in the child process and the child process' ID (integer) is returned in the parent process. If a block is specified, it's run in the child process.
format( fmt[, arg...])
See sprintf.
gets([ rs = $/])
Reads the filename specified in the command line or one line from standard input. The record separator string can be specified explicitly with rs.
global_variables
Returns an array of global variable names.
gsub( x, y)
gsub( x) {...}
Replaces all strings matching x in $_ with y. If a block is specified, matched strings are replaced with the result of the block. The modified result is assigned to $_.
gsub!( x, y)
gsub!( x) {...}
Performs the same substitution as gsub, except the string is changed in place.
Integer( obj)
Returns obj after converting it to an integer. Numeric objects are converted directly; nil is converted to 0; strings are converted considering 0x, 0b radix prefix. The rest are converted using obj.to_i.
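As a minimal illustration of the conversion functions described above (expected output shown in comments):

#!/usr/bin/ruby

puts Array(nil).inspect # => []
puts Float("1.5") + 1 # => 2.5
puts Integer("0x1A") # => 26 (the 0x radix prefix is honored)
puts String(42) + "!" # => "42!"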
lambda {| x|...}
proc {| x|...}
lambda
proc
Converts a block into a Proc object. If no block is specified, the block associated with the calling method is converted.
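For example, a minimal use of lambda:

#!/usr/bin/ruby

square = lambda {|x| x * x}
puts square.call(4) # => 16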
load( file[, private = false])
Loads a Ruby program from file. Unlike require, it doesn't load extension libraries. If private is true, the program is loaded into an anonymous module, thus protecting the namespace of the calling program.
local_variables
Returns an array of local variable names.
loop {...}
Repeats a block of code.
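Since loop repeats forever, you normally exit it with break, as in this small sketch:

#!/usr/bin/ruby

count = 0
loop do
   count += 1
   break if count == 3
end
puts count # => 3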
open( path[, mode = "r"])
open( path[, mode = "r"]) {| f|...}
Opens a file. If a block is specified, the block is executed with the opened stream passed as an argument. The file is closed automatically when the block exits. If path begins with a pipe |, the following string is run as a command, and the stream associated with that process is returned.
p( obj)
Displays obj using its inspect method (often used for debugging).
print([ arg...])
Prints arg to $defout. If no arguments are specified, the value of $_ is printed.
printf( fmt[, arg...])
Formats arg according to fmt using sprintf and prints the result to $defout. For formatting specifications, see sprintf for detail.
proc {| x|...}
proc
See lambda.
putc( c)
Prints one character to the default output ($defout).
puts([ str])
Prints string to the default output ($defout). If the string doesn't end with a newline, a newline is appended to the string.
raise(...)
fail(...)
Raises an exception. Assumes RuntimeError if no exception class is specified. Calling raise
without arguments in a rescue clause re-raises the exception. Doing so outside a rescue clause raises a message-less RuntimeError. fail is an obsolete name for raise.
rand([ max = 0])
Generates a pseudo-random number greater than or equal to 0 and less than max. If max is either not specified or is set to 0, a random number is returned as a floating-point number greater than or equal to 0 and less than 1. srand may be used to initialize pseudo-random stream.
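For example (the exact values printed depend on your Ruby version's random number generator):

#!/usr/bin/ruby

srand(1234) # seed the generator for reproducible runs
puts rand # a float greater than or equal to 0 and less than 1
puts rand(100) # an integer greater than or equal to 0 and less than 100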
readline([ rs = $/])
Equivalent to gets except it raises an EOFError exception on reading EOF.
readlines([ rs = $/])
Returns an array of strings holding either the filenames specified as command-line arguments or the contents of standard input.
require( lib)
Loads the library (including extension libraries) lib when it's first called. require will not load the same library more than once. If no extension is specified in lib, require tries to add .rb,.so, etc., to it.
scan( re)
scan( re) {|x|...}
Equivalent to $_.scan.
select( reads[, writes = nil[, excepts = nil[, timeout = nil]]])
Checks for changes in the status of three types of IO objects input, output, and exceptions which are passed as arrays of IO objects. nil is passed for arguments that don't need checking. A three-element array containing arrays of the IO objects for which there were changes in status is returned. nil is returned on timeout.
set_trace_func( proc)
Sets a handler for tracing. proc may be a string or proc object. set_trace_func is used by the debugger and profiler.
sleep([ sec])
Suspends program execution for sec seconds. If sec isn't specified, the program is suspended forever.
split([ sep[, max]])
Equivalent to $_.split.
sprintf( fmt[, arg...])
format( fmt[, arg...])
Returns a string in which arg is formatted according to fmt. Formatting specifications are essentially the same as those for sprintf in the C programming language. Conversion specifiers (% followed by conversion field specifier) in fmt are replaced by the formatted string of the corresponding argument. A list of conversion fields is given below in the next section.
srand([ seed])
Initializes an array of random numbers. If seed isn't specified, initialization is performed using the time and other system information for the seed.
String( obj)
Returns obj after converting it to a string using obj.to_s.
syscall( sys[, arg...])
Calls an operating system call function specified by number sys. The numbers and meanings of sys are system-dependent.
system( cmd[, arg...])
Executes cmd as a call to the command line. If multiple arguments are specified, the command is run directly with no shell expansion. Returns true if the return status is 0 (success).
sub( x, y)
sub( x) {...}
Replaces the first string matching x in $_ with y. If a block is specified, matched strings are replaced with the result of the block. The modified result is assigned to $_.
sub!( x, y)
sub!( x) {...}
Performs the same replacement as sub, except the string is changed in place.
test( test, f1[, f2])
Performs various file tests specified by the character test. In order to improve readability, you should use File class methods (for example File::readable?) rather than this function. A list of arguments is given below in the next section.
throw( tag[, value = nil])
Jumps to the catch function waiting with the symbol or string tag. value is the return value to be used by catch.
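For example, catch and throw together provide a nonlocal exit:

#!/usr/bin/ruby

result = catch(:done) do
   [1, 2, 3, 4].each do |i|
      throw :done, i if i > 2
   end
   "no throw happened" # returned by catch only if nothing is thrown
end
puts result # => 3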
trace_var( var, cmd)
trace_var( var) {...}
Sets tracing for a global variable. The variable name is specified as a symbol. cmd may be a string or Proc object.
trap( sig, cmd)
trap( sig) {...}
Sets a signal handler. sig may be a string (like SIGUSR1) or an integer. SIG may be omitted from signal name. Signal handler for EXIT signal or signal number 0 is invoked just before process termination.
untrace_var( var[, cmd])
Removes tracing for a global variable. If cmd is specified, only that command is removed.
Here is a list of Built-in Functions related to number. They should be used as follows −
#!/usr/bin/ruby
num = 12.40
puts num.floor # 12
puts num + 10 # 22.40
puts num.integer? # false as num is a float.
This will produce the following result −
12
22.4
false
n + num
n - num
n * num
n / num
Performs arithmetic operations: addition, subtraction, multiplication, and division.
n % num
Returns the modulus of n.
n ** num
Exponentiation.
n.abs
Returns the absolute value of n.
n.ceil
Returns the smallest integer greater than or equal to n.
n.coerce( num)
Returns an array containing num and n both possibly converted to a type that allows them to be operated on mutually. Used in automatic type conversion in numeric operators.
n.divmod( num)
Returns an array containing the quotient and modulus from dividing n by num.
n.floor
Returns the largest integer less than or equal to n.
n.integer?
Returns true if n is an integer.
n.modulo( num)
Returns the modulus obtained by dividing n by num and rounding the quotient with floor.
n.nonzero?
Returns n if it isn't zero, otherwise nil.
n.remainder( num)
Returns the remainder obtained by dividing n by num and removing decimals from the quotient. The result and n always have same sign.
n.round
Returns n rounded to the nearest integer.
n.truncate
Returns n as an integer with decimals removed.
n.zero?
Returns true if n is 0.
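For example, the division-related methods above differ in how they treat signs (expected output shown in comments):

#!/usr/bin/ruby

puts 13.divmod(4).inspect # => [3, 1]
n = -13
puts n.modulo(4) # => 3 (result follows the sign of num)
puts n.remainder(4) # => -1 (result follows the sign of n)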
n & num
n | num
n ^ num
Bitwise operations: AND, OR, and XOR.
n << num
n >> num
Bitwise left shift and right shift.
n[num]
Returns the value of the numth bit from the least significant bit, which is n[0].
n.chr
Returns a string containing the character for the character code n.
n.next
n.succ
Returns the next integer following n. Equivalent to n + 1.
n.size
Returns the number of bytes in the machine representation of n.
n.step( upto, step) {|n| ...}
Iterates the block from n to upto, incrementing by step each time.
n.times {|n| ...}
Iterates the block n times.
n.to_f
Converts n into a floating point number. Float conversion may lose precision information.
n.to_int
Returns n after converting it into an integer.
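For example, step and times provide simple counted iteration (expected output shown in comments):

#!/usr/bin/ruby

1.step(10, 3) {|i| print i, " "} # => 1 4 7 10
print "\n"
3.times {|i| print i, " "} # => 0 1 2
print "\n"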
Float::induced_from(num)
Returns the result of converting num to a floating-point number.
f.finite?
Returns true if f isn't infinite and f.nan? is false.
f.infinite?
Returns 1 if f is positive infinity, -1 if negative infinity, or nil if anything else.
f.nan?
Returns true if f isn't a valid IEEE floating point number.
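For example, these predicates can be exercised with values produced by floating-point division:

#!/usr/bin/ruby

inf = 1.0 / 0.0 # positive infinity
nan = 0.0 / 0.0 # not a number
puts inf.infinite? # => 1
puts inf.finite? # => false
puts nan.nan? # => true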
atan2( x, y)
Calculates the arc tangent.
cos( x)
Calculates the cosine of x.
exp( x)
Calculates an exponential function (e raised to the power of x).
frexp( x)
Returns a two-element array containing the normalized fraction and exponent of x.
ldexp( x, exp)
Returns the value of x times 2 to the power of exp.
log( x)
Calculates the natural logarithm of x.
log10( x)
Calculates the base 10 logarithm of x.
sin( x)
Calculates the sine of x.
sqrt( x)
Returns the square root of x. x must be positive.
tan( x)
Calculates the tangent of x.
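In modern Ruby these functions are reached through the Math module; a minimal example (expected output shown in comments):

#!/usr/bin/ruby

include Math # mix the Math module's functions into the top level

puts sqrt(16.0) # => 4.0
puts log10(1000.0) # => 3.0
puts atan2(1.0, 1.0) # => 0.7853981633974483 (pi / 4)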
The functions sprintf( fmt[, arg...]) and format( fmt[, arg...]) return a string in which arg is formatted according to fmt. Formatting specifications are essentially the same as those for sprintf in the C programming language. Conversion specifiers (% followed by conversion field specifier) in fmt are replaced by the formatted string of the corresponding argument.
b
Binary integer
c
Single character
d,i
Decimal integer
e
Exponential notation (e.g., 2.44e6)
E
Exponential notation (e.g., 2.44E6)
f
Floating-point number (e.g., 2.44)
g
Uses %e if the exponent is less than -4, %f otherwise
G
Uses %E if the exponent is less than -4, %f otherwise
o
Octal integer
s
String or any object converted using to_s
u
Unsigned decimal integer
x
Hexadecimal integer (e.g., 39ff)
X
Hexadecimal integer (e.g., 39FF)
Following is the usage example −
#!/usr/bin/ruby
str = sprintf("%s\n", "abc") # => "abc\n" (simplest form)
puts str
str = sprintf("d=%d", 42) # => "d=42" (decimal output)
puts str
str = sprintf("%04x", 255) # => "00ff" (width 4, zero padded)
puts str
str = sprintf("%8s", "hello") # => " hello" (space padded)
puts str
str = sprintf("%.2s", "hello") # => "he" (trimmed by precision)
puts str
This will produce the following result −
abc
d = 42
00ff
hello
he
The function test( test, f1[, f2]) performs one of the following file tests specified by the character test. In order to improve readability, you should use File class methods (for example, File::readable?) rather than this function.
?r
Is f1 readable by the effective uid of caller?
?w
Is f1 writable by the effective uid of caller?
?x
Is f1 executable by the effective uid of caller?
?o
Is f1 owned by the effective uid of caller?
?R
Is f1 readable by the real uid of caller?
?W
Is f1 writable by the real uid of caller?
?X
Is f1 executable by the real uid of caller?
?O
Is f1 owned by the real uid of caller?
?e
Does f1 exist?
?z
Does f1 have zero length?
?s
File size of f1 (nil if 0)
?f
Is f1 a regular file?
?d
Is f1 a directory?
?l
Is f1 a symbolic link?
?p
Is f1 a named pipe (FIFO)?
?S
Is f1 a socket?
?b
Is f1 a block device?
?c
Is f1 a character device?
?u
Does f1 have the setuid bit set?
?g
Does f1 have the setgid bit set?
?k
Does f1 have the sticky bit set?
?M
Last modification time for f1.
?A
Last access time for f1.
?C
Last inode change time for f1.
?=
Are modification times of f1 and f2 equal?
?>
Is the modification time of f1 more recent than f2 ?
?<
Is the modification time of f1 older than f2 ?
?-
Is f1 a hard link to f2 ?
Following is the usage example. Assuming main.rb exists with read and write but not execute permissions −
#!/usr/bin/ruby
puts test(?r, "main.rb" ) # => true
puts test(?w, "main.rb" ) # => true
puts test(?x, "main.rb" ) # => false
This will produce the following result −
true
false
false
| [
{
"code": null,
"e": 2506,
"s": 2294,
"text": "Since the Kernel module is included by Object class, its methods are available everywhere in the Ruby program. They can be called without a receiver (functional form). Therefore, they are often called functions."
},
{
"code": null,
"e": 2512,
"s": 2506,
"text": "abort"
},
{
"code": null,
"e": 2612,
"s": 2512,
"text": "Terminates program. If an exception is raised (i.e., $! isn't nil), its error message is displayed."
},
{
"code": null,
"e": 2624,
"s": 2612,
"text": "Array( obj)"
},
{
"code": null,
"e": 2690,
"s": 2624,
"text": "Returns obj after converting it to an array using to_ary or to_a."
},
{
"code": null,
"e": 2704,
"s": 2690,
"text": "at_exit {...}"
},
{
"code": null,
"e": 2835,
"s": 2704,
"text": "Registers a block for execution when the program exits. Similar to END statement, but END statement registers the block only once."
},
{
"code": null,
"e": 2862,
"s": 2835,
"text": "autoload( classname, file)"
},
{
"code": null,
"e": 2978,
"s": 2862,
"text": "Registers a class classname to be loaded from file the first time it's used. classname may be a string or a symbol."
},
{
"code": null,
"e": 2986,
"s": 2978,
"text": "binding"
},
{
"code": null,
"e": 3129,
"s": 2986,
"text": "Returns the current variable and method bindings. The Binding object that is returned may be passed to the eval method as its second argument."
},
{
"code": null,
"e": 3142,
"s": 3129,
"text": "block_given?"
},
{
"code": null,
"e": 3194,
"s": 3142,
"text": "Returns true if the method was called with a block."
},
{
"code": null,
"e": 3211,
"s": 3194,
"text": "callcc {| c|...}"
},
{
"code": null,
"e": 3333,
"s": 3211,
"text": "Passes a Continuation object c to the block and executes the block. callcc can be used for global exit or loop construct."
},
{
"code": null,
"e": 3346,
"s": 3333,
"text": "caller([ n])"
},
{
"code": null,
"e": 3497,
"s": 3346,
"text": "Returns the current execution stack in an array of the strings in the form file:line. If n is specified, returns stack entries from nth level on down."
},
{
"code": null,
"e": 3515,
"s": 3497,
"text": "catch( tag) {...}"
},
{
"code": null,
"e": 3592,
"s": 3515,
"text": "Catches a nonlocal exit by a throw called during the execution of its block."
},
{
"code": null,
"e": 3610,
"s": 3592,
"text": "chomp([ rs = $/])"
},
{
"code": null,
"e": 3767,
"s": 3610,
"text": "Returns the value of variable $_ with the ending newline removed, assigning the result back to $_. The value of the newline string can be specified with rs."
},
{
"code": null,
"e": 3786,
"s": 3767,
"text": "chomp!([ rs = $/])"
},
{
"code": null,
"e": 3842,
"s": 3786,
"text": "Removes newline from $_, modifying the string in place."
},
{
"code": null,
"e": 3847,
"s": 3842,
"text": "chop"
},
{
"code": null,
"e": 3948,
"s": 3847,
"text": "Returns the value of $_ with its last character (one byte) removed, assigning the result back to $_."
},
{
"code": null,
"e": 3954,
"s": 3948,
"text": "chop!"
},
{
"code": null,
"e": 4021,
"s": 3954,
"text": "Removes the last character from $_, modifying the string in place."
},
{
"code": null,
"e": 4055,
"s": 4021,
"text": "eval( str[, scope[, file, line]])"
},
{
"code": null,
"e": 4252,
"s": 4055,
"text": "Executes str as Ruby code. The binding in which to perform the evaluation may be specified with scope. The filename and line number of the code to be compiled may be specified using file and line."
},
{
"code": null,
"e": 4273,
"s": 4252,
"text": "exec( cmd[, arg...])"
},
{
"code": null,
"e": 4416,
"s": 4273,
"text": "Replaces the current process by running the command cmd. If multiple arguments are specified, the command is executed with no shell expansion."
},
{
"code": null,
"e": 4436,
"s": 4416,
"text": "exit([ result = 0])"
},
{
"code": null,
"e": 4492,
"s": 4436,
"text": "Exits program, with result as the status code returned."
},
{
"code": null,
"e": 4513,
"s": 4492,
"text": "exit!([ result = 0])"
},
{
"code": null,
"e": 4576,
"s": 4513,
"text": "Kills the program bypassing exit handling such as ensure, etc."
},
{
"code": null,
"e": 4586,
"s": 4576,
"text": "fail(...)"
},
{
"code": null,
"e": 4601,
"s": 4586,
"text": "See raise(...)"
},
{
"code": null,
"e": 4613,
"s": 4601,
"text": "Float( obj)"
},
{
"code": null,
"e": 4816,
"s": 4613,
"text": "Returns obj after converting it to a float. Numeric objects are converted directly; nil is converted to 0.0; strings are converted considering 0x, 0b radix prefix. The rest are converted using obj.to_f."
},
{
"code": null,
"e": 4821,
"s": 4816,
"text": "fork"
},
{
"code": null,
"e": 4832,
"s": 4821,
"text": "fork {...}"
},
{
"code": null,
"e": 5021,
"s": 4832,
"text": "Creates a child process. nil is returned in the child process and the child process' ID (integer) is returned in the parent process. If a block is specified, it's run in the child process."
},
{
"code": null,
"e": 5044,
"s": 5021,
"text": "format( fmt[, arg...])"
},
{
"code": null,
"e": 5057,
"s": 5044,
"text": "See sprintf."
},
{
"code": null,
"e": 5074,
"s": 5057,
"text": "gets([ rs = $/])"
},
{
"code": null,
"e": 5221,
"s": 5074,
"text": "Reads the filename specified in the command line or one line from standard input. The record separator string can be specified explicitly with rs."
},
{
"code": null,
"e": 5238,
"s": 5221,
"text": "global_variables"
},
{
"code": null,
"e": 5281,
"s": 5238,
"text": "Returns an array of global variable names."
},
{
"code": null,
"e": 5293,
"s": 5281,
"text": "gsub( x, y)"
},
{
"code": null,
"e": 5308,
"s": 5293,
"text": "gsub( x) {...}"
},
{
"code": null,
"e": 5477,
"s": 5308,
"text": "Replaces all strings matching x in $_ with y. If a block is specified, matched strings are replaced with the result of the block. The modified result is assigned to $_."
},
{
"code": null,
"e": 5490,
"s": 5477,
"text": "gsub!( x, y)"
},
{
"code": null,
"e": 5506,
"s": 5490,
"text": "gsub!( x) {...}"
},
{
"code": null,
"e": 5585,
"s": 5506,
"text": "Performs the same substitution as gsub, except the string is changed in place."
},
{
"code": null,
"e": 5599,
"s": 5585,
"text": "Integer( obj)"
},
{
"code": null,
"e": 5803,
"s": 5599,
"text": "Returns obj after converting it to an integer. Numeric objects are converted directly; nil is converted to 0; strings are converted considering 0x, 0b radix prefix. The rest are converted using obj.to_i."
},
{
"code": null,
"e": 5820,
"s": 5803,
"text": "lambda {| x|...}"
},
{
"code": null,
"e": 5835,
"s": 5820,
"text": "proc {| x|...}"
},
{
"code": null,
"e": 5842,
"s": 5835,
"text": "lambda"
},
{
"code": null,
"e": 5847,
"s": 5842,
"text": "proc"
},
{
"code": null,
"e": 5969,
"s": 5847,
"text": "Converts a block into a Proc object. If no block is specified, the block associated with the calling method is converted."
},
{
"code": null,
"e": 6000,
"s": 5969,
"text": "load( file[, private = false])"
},
{
"code": null,
"e": 6207,
"s": 6000,
"text": "Loads a Ruby program from file. Unlike require, it doesn't load extension libraries. If private is true, the program is loaded into an anonymous module, thus protecting the namespace of the calling program."
},
{
"code": null,
"e": 6223,
"s": 6207,
"text": "local_variables"
},
{
"code": null,
"e": 6265,
"s": 6223,
"text": "Returns an array of local variable names."
},
{
"code": null,
"e": 6276,
"s": 6265,
"text": "loop {...}"
},
{
"code": null,
"e": 6301,
"s": 6276,
"text": "Repeats a block of code."
},
{
"code": null,
"e": 6327,
"s": 6301,
"text": "open( path[, mode = \"r\"])"
},
{
"code": null,
"e": 6363,
"s": 6327,
"text": "open( path[, mode = \"r\"]) {| f|...}"
},
{
"code": null,
"e": 6654,
"s": 6363,
"text": "Opens a file. If a block is specified, the block is executed with the opened stream passed as an argument. The file is closed automatically when the block exits. If path begins with a pipe |, the following string is run as a command, and the stream associated with that process is returned."
},
{
"code": null,
"e": 6662,
"s": 6654,
"text": "p( obj)"
},
{
"code": null,
"e": 6728,
"s": 6662,
"text": "Displays obj using its inspect method (often used for debugging)."
},
{
"code": null,
"e": 6745,
"s": 6728,
"text": "print([ arg...])"
},
{
"code": null,
"e": 6827,
"s": 6745,
"text": "Prints arg to $defout. If no arguments are specified, the value of $_ is printed."
},
{
"code": null,
"e": 6850,
"s": 6827,
"text": "printf( fmt[, arg...])"
},
{
"code": null,
"e": 6982,
"s": 6850,
"text": "Formats arg according to fmt using sprintf and prints the result to $defout. For formatting specifications, see sprintf for detail."
},
{
"code": null,
"e": 6997,
"s": 6982,
"text": "proc {| x|...}"
},
{
"code": null,
"e": 7002,
"s": 6997,
"text": "proc"
},
{
"code": null,
"e": 7013,
"s": 7002,
"text": "See lamda."
},
{
"code": null,
"e": 7022,
"s": 7013,
"text": "putc( c)"
},
{
"code": null,
"e": 7076,
"s": 7022,
"text": "Prints one character to the default output ($defout)."
},
{
"code": null,
"e": 7089,
"s": 7076,
"text": "puts([ str])"
},
{
"code": null,
"e": 7215,
"s": 7089,
"text": "Prints string to the default output ($defout). If the string doesn't end with a newline, a newline is appended to the string."
},
{
"code": null,
"e": 7226,
"s": 7215,
"text": "raise(...)"
},
{
"code": null,
"e": 7236,
"s": 7226,
"text": "fail(...)"
},
{
"code": null,
"e": 7495,
"s": 7236,
"text": "Raises an exception. Assumes RuntimeError if no exception class is specified. Calling raise\nwithout arguments in a rescue clause re-raises the exception. Doing so outside a rescue clause raises a message-less RuntimeError. fail is an obsolete name for raise."
},
{
"code": null,
"e": 7512,
"s": 7495,
"text": "rand([ max = 0])"
},
{
"code": null,
"e": 7791,
"s": 7512,
"text": "Generates a pseudo-random number greater than or equal to 0 and less than max. If max is either not specified or is set to 0, a random number is returned as a floating-point number greater than or equal to 0 and less than 1. srand may be used to initialize pseudo-random stream."
},
{
"code": null,
"e": 7812,
"s": 7791,
"text": "readline([ rs = $/])"
},
{
"code": null,
"e": 7886,
"s": 7812,
"text": "Equivalent to gets except it raises an EOFError exception on reading EOF."
},
{
"code": null,
"e": 7908,
"s": 7886,
"text": "readlines([ rs = $/])"
},
{
"code": null,
"e": 8036,
"s": 7908,
"text": "Returns an array of strings holding either the filenames specified as command-line arguments or the contents of standard input."
},
{
"code": null,
"e": 8050,
"s": 8036,
"text": "require( lib)"
},
{
"code": null,
"e": 8263,
"s": 8050,
"text": "Loads the library (including extension libraries) lib when it's first called. require will not load the same library more than once. If no extension is specified in lib, require tries to add .rb,.so, etc., to it."
},
{
"code": null,
"e": 8273,
"s": 8263,
"text": "scan( re)"
},
{
"code": null,
"e": 8292,
"s": 8273,
"text": "scan( re) {|x|...}"
},
{
"code": null,
"e": 8315,
"s": 8292,
"text": "Equivalent to $_.scan."
},
{
"code": null,
"e": 8380,
"s": 8315,
"text": "select( reads[, writes = nil[, excepts = nil[, timeout = nil]]])"
},
{
"code": null,
"e": 8706,
"s": 8380,
"text": "Checks for changes in the status of three types of IO objects input, output, and exceptions which are passed as arrays of IO objects. nil is passed for arguments that don't need checking. A three-element array containing arrays of the IO objects for which there were changes in status is returned. nil is returned on timeout."
},
{
"code": null,
"e": 8728,
"s": 8706,
"text": "set_trace_func( proc)"
},
{
"code": null,
"e": 8846,
"s": 8728,
"text": "Sets a handler for tracing. proc may be a string or proc object. set_trace_func is used by the debugger and profiler."
},
{
"code": null,
"e": 8860,
"s": 8846,
"text": "sleep([ sec])"
},
{
"code": null,
"e": 8962,
"s": 8860,
"text": "Suspends program execution for sec seconds. If sec isn't specified, the program is suspended forever."
},
{
"code": null,
"e": 8983,
"s": 8962,
"text": "split([ sep[, max]])"
},
{
"code": null,
"e": 9007,
"s": 8983,
"text": "Equivalent to $_.split."
},
{
"code": null,
"e": 9031,
"s": 9007,
"text": "sprintf( fmt[, arg...])"
},
{
"code": null,
"e": 9054,
"s": 9031,
"text": "format( fmt[, arg...])"
},
{
"code": null,
"e": 9409,
"s": 9054,
"text": "Returns a string in which arg is formatted according to fmt. Formatting specifications are essentially the same as those for sprintf in the C programming language. Conversion specifiers (% followed by conversion field specifier) in fmt are replaced by formatted string of corresponding argument. A list of conversion filed is given below in next section."
},
{
"code": null,
"e": 9424,
"s": 9409,
"text": "srand([ seed])"
},
{
"code": null,
"e": 9575,
"s": 9424,
"text": "Initializes an array of random numbers. If seed isn't specified, initialization is performed using the time and other system information for the seed."
},
{
"code": null,
"e": 9588,
"s": 9575,
"text": "String( obj)"
},
{
"code": null,
"e": 9648,
"s": 9588,
"text": "Returns obj after converting it to a string using obj.to_s."
},
{
"code": null,
"e": 9672,
"s": 9648,
"text": "syscall( sys[, arg...])"
},
{
"code": null,
"e": 9789,
"s": 9672,
"text": "Calls an operating system call function specified by number sys. The numbers and meaning of sys is system-dependant."
},
{
"code": null,
"e": 9812,
"s": 9789,
"text": "system( cmd[, arg...])"
},
{
"code": null,
"e": 9996,
"s": 9812,
"text": "Executes cmd as a call to the command line. If multiple arguments are specified, the command is run directly with no shell expansion. Returns true if the return status is 0 (success)."
},
{
"code": null,
"e": 10007,
"s": 9996,
"text": "sub( x, y)"
},
{
"code": null,
"e": 10021,
"s": 10007,
"text": "sub( x) {...}"
},
{
"code": null,
"e": 10195,
"s": 10021,
"text": "Replaces the first string matching x in $_ with y. If a block is specified, matched strings are replaced with the result of the block. The modified result is assigned to $_."
},
{
"code": null,
"e": 10207,
"s": 10195,
"text": "sub!( x, y)"
},
{
"code": null,
"e": 10222,
"s": 10207,
"text": "sub!( x) {...}"
},
{
"code": null,
"e": 10299,
"s": 10222,
"text": "Performs the same replacement as sub, except the string is changed in place."
},
{
"code": null,
"e": 10321,
"s": 10299,
"text": "test( test, f1[, f2])"
},
{
"code": null,
"e": 10558,
"s": 10321,
"text": "Performs various file tests specified by the character test. In order to improve readability, you should use File class methods (for example File::readable?) rather than this function. A list of arguments is given below in next section."
},
{
"code": null,
"e": 10585,
"s": 10558,
"text": "throw( tag[, value = nil])"
},
{
"code": null,
"e": 10699,
"s": 10585,
"text": "Jumps to the catch function waiting with the symbol or string tag. value is the return value to be used by catch."
},
{
"code": null,
"e": 10720,
"s": 10699,
"text": "trace_var( var, cmd)"
},
{
"code": null,
"e": 10742,
"s": 10720,
"text": "trace_var( var) {...}"
},
{
"code": null,
"e": 10858,
"s": 10742,
"text": "Sets tracing for a global variable. The variable name is specified as a symbol. cmd may be a string or Proc object."
},
{
"code": null,
"e": 10874,
"s": 10858,
"text": "trap( sig, cmd)"
},
{
"code": null,
"e": 10891,
"s": 10874,
"text": "trap( sig) {...}"
},
{
"code": null,
"e": 11095,
"s": 10891,
"text": "Sets a signal handler. sig may be a string (like SIGUSR1) or an integer. SIG may be omitted from signal name. Signal handler for EXIT signal or signal number 0 is invoked just before process termination."
},
{
"code": null,
"e": 11120,
"s": 11095,
"text": "untrace_var( var[, cmd])"
},
{
"code": null,
"e": 11210,
"s": 11120,
"text": "Removes tracing for a global variable. If cmd is specified, only that command is removed."
},
{
"code": null,
"e": 11299,
"s": 11210,
"text": "Here is a list of Built-in Functions related to number. They should be used as follows −"
},
{
"code": null,
"e": 11429,
"s": 11299,
"text": "#!/usr/bin/ruby\n\nnum = 12.40\nputs num.floor # 12\nputs num + 10 # 22.40\nputs num.integer? # false as num is a float."
},
{
"code": null,
"e": 11470,
"s": 11429,
"text": "This will produce the following result −"
},
{
"code": null,
"e": 11485,
"s": 11470,
"text": "12\n22.4\nfalse\n"
},
{
"code": null,
"e": 11493,
"s": 11485,
"text": "n + num"
},
{
"code": null,
"e": 11501,
"s": 11493,
"text": "n - num"
},
{
"code": null,
"e": 11509,
"s": 11501,
"text": "n * num"
},
{
"code": null,
"e": 11517,
"s": 11509,
"text": "n / num"
},
{
"code": null,
"e": 11602,
"s": 11517,
"text": "Performs arithmetic operations: addition, subtraction, multiplication, and division."
},
{
"code": null,
"e": 11610,
"s": 11602,
"text": "n % num"
},
{
"code": null,
"e": 11636,
"s": 11610,
"text": "Returns the modulus of n."
},
{
"code": null,
"e": 11645,
"s": 11636,
"text": "n ** num"
},
{
"code": null,
"e": 11661,
"s": 11645,
"text": "Exponentiation."
},
{
"code": null,
"e": 11667,
"s": 11661,
"text": "n.abs"
},
{
"code": null,
"e": 11700,
"s": 11667,
"text": "Returns the absolute value of n."
},
{
"code": null,
"e": 11707,
"s": 11700,
"text": "n.ceil"
},
{
"code": null,
"e": 11764,
"s": 11707,
"text": "Returns the smallest integer greater than or equal to n."
},
{
"code": null,
"e": 11779,
"s": 11764,
"text": "n.coerce( num)"
},
{
"code": null,
"e": 11952,
"s": 11779,
"text": "Returns an array containing num and n both possibly converted to a type that allows them to be operated on mutually. Used in automatic type conversion in numeric operators."
},
{
"code": null,
"e": 11967,
"s": 11952,
"text": "n.divmod( num)"
},
{
"code": null,
"e": 12044,
"s": 11967,
"text": "Returns an array containing the quotient and modulus from dividing n by num."
},
{
"code": null,
"e": 12052,
"s": 12044,
"text": "n.floor"
},
{
"code": null,
"e": 12105,
"s": 12052,
"text": "Returns the largest integer less than or equal to n."
},
{
"code": null,
"e": 12116,
"s": 12105,
"text": "n.integer?"
},
{
"code": null,
"e": 12149,
"s": 12116,
"text": "Returns true if n is an integer."
},
{
"code": null,
"e": 12164,
"s": 12149,
"text": "n.modulo( num)"
},
{
"code": null,
"e": 12251,
"s": 12164,
"text": "Returns the modulus obtained by dividing n by num and rounding the quotient with floor"
},
{
"code": null,
"e": 12262,
"s": 12251,
"text": "n.nonzero?"
},
{
"code": null,
"e": 12305,
"s": 12262,
"text": "Returns n if it isn't zero, otherwise nil."
},
{
"code": null,
"e": 12323,
"s": 12305,
"text": "n.remainder( num)"
},
{
"code": null,
"e": 12456,
"s": 12323,
"text": "Returns the remainder obtained by dividing n by num and removing decimals from the quotient. The result and n always have same sign."
},
{
"code": null,
"e": 12464,
"s": 12456,
"text": "n.round"
},
{
"code": null,
"e": 12506,
"s": 12464,
"text": "Returns n rounded to the nearest integer."
},
{
"code": null,
"e": 12517,
"s": 12506,
"text": "n.truncate"
},
{
"code": null,
"e": 12564,
"s": 12517,
"text": "Returns n as an integer with decimals removed."
},
{
"code": null,
"e": 12572,
"s": 12564,
"text": "n.zero?"
},
{
"code": null,
"e": 12596,
"s": 12572,
"text": "Returns zero if n is 0."
},
{
"code": null,
"e": 12604,
"s": 12596,
"text": "n & num"
},
{
"code": null,
"e": 12612,
"s": 12604,
"text": "n | num"
},
{
"code": null,
"e": 12620,
"s": 12612,
"text": "n ^ num"
},
{
"code": null,
"e": 12669,
"s": 12620,
"text": "Bitwise operations: AND, OR, XOR, and inversion."
},
{
"code": null,
"e": 12678,
"s": 12669,
"text": "n << num"
},
{
"code": null,
"e": 12687,
"s": 12678,
"text": "n >> num"
},
{
"code": null,
"e": 12723,
"s": 12687,
"text": "Bitwise left shift and right shift."
},
{
"code": null,
"e": 12730,
"s": 12723,
"text": "n[num]"
},
{
"code": null,
"e": 12812,
"s": 12730,
"text": "Returns the value of the numth bit from the least significant bit, which is n[0]."
},
{
"code": null,
"e": 12818,
"s": 12812,
"text": "n.chr"
},
{
"code": null,
"e": 12886,
"s": 12818,
"text": "Returns a string containing the character for the character code n."
},
{
"code": null,
"e": 12893,
"s": 12886,
"text": "n.next"
},
{
"code": null,
"e": 12900,
"s": 12893,
"text": "n.succ"
},
{
"code": null,
"e": 12959,
"s": 12900,
"text": "Returns the next integer following n. Equivalent to n + 1."
},
{
"code": null,
"e": 12966,
"s": 12959,
"text": "n.size"
},
{
"code": null,
"e": 13030,
"s": 12966,
"text": "Returns the number of bytes in the machine representation of n."
},
{
"code": null,
"e": 13060,
"s": 13030,
"text": "n.step( upto, step) {|n| ...}"
},
{
"code": null,
"e": 13127,
"s": 13060,
"text": "Iterates the block from n to upto, incrementing by step each time."
},
{
"code": null,
"e": 13145,
"s": 13127,
"text": "n.times {|n| ...}"
},
{
"code": null,
"e": 13173,
"s": 13145,
"text": "Iterates the block n times."
},
{
"code": null,
"e": 13180,
"s": 13173,
"text": "n.to_f"
},
{
"code": null,
"e": 13270,
"s": 13180,
"text": "Converts n into a floating point number. Float conversion may lose precision information."
},
{
"code": null,
"e": 13279,
"s": 13270,
"text": "n.to_int"
},
{
"code": null,
"e": 13328,
"s": 13279,
"text": "Returns n after converting into interger number."
},
{
"code": null,
"e": 13353,
"s": 13328,
"text": "Float::induced_from(num)"
},
{
"code": null,
"e": 13418,
"s": 13353,
"text": "Returns the result of converting num to a floating-point number."
},
{
"code": null,
"e": 13428,
"s": 13418,
"text": "f.finite?"
},
{
"code": null,
"e": 13481,
"s": 13428,
"text": "Returns true if f isn't infinite and f.nan is false."
},
{
"code": null,
"e": 13493,
"s": 13481,
"text": "f.infinite?"
},
{
"code": null,
"e": 13580,
"s": 13493,
"text": "Returns 1 if f is positive infinity, -1 if negative infinity, or nil if anything else."
},
{
"code": null,
"e": 13587,
"s": 13580,
"text": "f.nan?"
},
{
"code": null,
"e": 13647,
"s": 13587,
"text": "Returns true if f isn't a valid IEEE floating point number."
},
{
"code": null,
"e": 13660,
"s": 13647,
"text": "atan2( x, y)"
},
{
"code": null,
"e": 13688,
"s": 13660,
"text": "Calculates the arc tangent."
},
{
"code": null,
"e": 13696,
"s": 13688,
"text": "cos( x)"
},
{
"code": null,
"e": 13724,
"s": 13696,
"text": "Calculates the cosine of x."
},
{
"code": null,
"e": 13732,
"s": 13724,
"text": "exp( x)"
},
{
"code": null,
"e": 13797,
"s": 13732,
"text": "Calculates an exponential function (e raised to the power of x)."
},
{
"code": null,
"e": 13807,
"s": 13797,
"text": "frexp( x)"
},
{
"code": null,
"e": 13890,
"s": 13807,
"text": "Returns a two-element array containing the nominalized fraction and exponent of x."
},
{
"code": null,
"e": 13905,
"s": 13890,
"text": "ldexp( x, exp)"
},
{
"code": null,
"e": 13957,
"s": 13905,
"text": "Returns the value of x times 2 to the power of exp."
},
{
"code": null,
"e": 13965,
"s": 13957,
"text": "log( x)"
},
{
"code": null,
"e": 14004,
"s": 13965,
"text": "Calculates the natural logarithm of x."
},
{
"code": null,
"e": 14014,
"s": 14004,
"text": "log10( x)"
},
{
"code": null,
"e": 14053,
"s": 14014,
"text": "Calculates the base 10 logarithm of x."
},
{
"code": null,
"e": 14061,
"s": 14053,
"text": "sin( x)"
},
{
"code": null,
"e": 14087,
"s": 14061,
"text": "Calculates the sine of x."
},
{
"code": null,
"e": 14096,
"s": 14087,
"text": "sqrt( x)"
},
{
"code": null,
"e": 14146,
"s": 14096,
"text": "Returns the square root of x. x must be positive."
},
{
"code": null,
"e": 14154,
"s": 14146,
"text": "tan( x)"
},
{
"code": null,
"e": 14183,
"s": 14154,
"text": "Calculates the tangent of x."
},
{
"code": null,
"e": 14543,
"s": 14183,
"text": "The function sprintf( fmt[, arg...]) and format( fmt[, arg...]) returns a string in which arg is formatted according to fmt. Formatting specifications are essentially the same as those for sprintf in the C programming language. Conversion specifiers (% followed by conversion field specifier) in fmt are replaced by formatted string of corresponding argument."
},
{
"code": null,
"e": 14545,
"s": 14543,
"text": "b"
},
{
"code": null,
"e": 14560,
"s": 14545,
"text": "Binary integer"
},
{
"code": null,
"e": 14562,
"s": 14560,
"text": "c"
},
{
"code": null,
"e": 14579,
"s": 14562,
"text": "Single character"
},
{
"code": null,
"e": 14583,
"s": 14579,
"text": "d,i"
},
{
"code": null,
"e": 14599,
"s": 14583,
"text": "Decimal integer"
},
{
"code": null,
"e": 14601,
"s": 14599,
"text": "e"
},
{
"code": null,
"e": 14637,
"s": 14601,
"text": "Exponential notation (e.g., 2.44e6)"
},
{
"code": null,
"e": 14639,
"s": 14637,
"text": "E"
},
{
"code": null,
"e": 14675,
"s": 14639,
"text": "Exponential notation (e.g., 2.44E6)"
},
{
"code": null,
"e": 14677,
"s": 14675,
"text": "f"
},
{
"code": null,
"e": 14712,
"s": 14677,
"text": "Floating-point number (e.g., 2.44)"
},
{
"code": null,
"e": 14714,
"s": 14712,
"text": "g"
},
{
"code": null,
"e": 14763,
"s": 14714,
"text": "use %e if exponent is less than -4, %f otherwise"
},
{
"code": null,
"e": 14765,
"s": 14763,
"text": "G"
},
{
"code": null,
"e": 14814,
"s": 14765,
"text": "use %E if exponent is less than -4, %f otherwise"
},
{
"code": null,
"e": 14816,
"s": 14814,
"text": "o"
},
{
"code": null,
"e": 14830,
"s": 14816,
"text": "Octal integer"
},
{
"code": null,
"e": 14832,
"s": 14830,
"text": "s"
},
{
"code": null,
"e": 14874,
"s": 14832,
"text": "String or any object converted using to_s"
},
{
"code": null,
"e": 14876,
"s": 14874,
"text": "u"
},
{
"code": null,
"e": 14901,
"s": 14876,
"text": "Unsigned decimal integer"
},
{
"code": null,
"e": 14903,
"s": 14901,
"text": "x"
},
{
"code": null,
"e": 14936,
"s": 14903,
"text": "Hexadecimal integer (e.g., 39ff)"
},
{
"code": null,
"e": 14938,
"s": 14936,
"text": "X"
},
{
"code": null,
"e": 14971,
"s": 14938,
"text": "Hexadecimal integer (e.g., 39FF)"
},
{
"code": null,
"e": 15004,
"s": 14971,
"text": "Following is the usage example −"
},
{
"code": null,
"e": 15385,
"s": 15004,
"text": "#!/usr/bin/ruby\n\nstr = sprintf(\"%s\\n\", \"abc\") # => \"abc\\n\" (simplest form)\nputs str \n\nstr = sprintf(\"d=%d\", 42) # => \"d=42\" (decimal output)\nputs str \n\nstr = sprintf(\"%04x\", 255) # => \"00ff\" (width 4, zero padded)\nputs str \n\nstr = sprintf(\"%8s\", \"hello\") # => \" hello\" (space padded)\nputs str \n\nstr = sprintf(\"%.2s\", \"hello\") # => \"he\" (trimmed by precision)\nputs str "
},
{
"code": null,
"e": 15426,
"s": 15385,
"text": "This will produce the following result −"
},
{
"code": null,
"e": 15455,
"s": 15426,
"text": "abc\nd = 42\n00ff\n hello\nhe\n"
},
{
"code": null,
"e": 15689,
"s": 15455,
"text": "The function test( test, f1[, f2]) performs one of the following file tests specified by the character test. In order to improve readability, you should use File class methods (for example, File::readable?) rather than this function."
},
{
"code": null,
"e": 15692,
"s": 15689,
"text": "?r"
},
{
"code": null,
"e": 15739,
"s": 15692,
"text": "Is f1 readable by the effective uid of caller?"
},
{
"code": null,
"e": 15742,
"s": 15739,
"text": "?w"
},
{
"code": null,
"e": 15789,
"s": 15742,
"text": "Is f1 writable by the effective uid of caller?"
},
{
"code": null,
"e": 15792,
"s": 15789,
"text": "?x"
},
{
"code": null,
"e": 15841,
"s": 15792,
"text": "Is f1 executable by the effective uid of caller?"
},
{
"code": null,
"e": 15844,
"s": 15841,
"text": "?o"
},
{
"code": null,
"e": 15888,
"s": 15844,
"text": "Is f1 owned by the effective uid of caller?"
},
{
"code": null,
"e": 15891,
"s": 15888,
"text": "?R"
},
{
"code": null,
"e": 15933,
"s": 15891,
"text": "Is f1 readable by the real uid of caller?"
},
{
"code": null,
"e": 15936,
"s": 15933,
"text": "?W"
},
{
"code": null,
"e": 15978,
"s": 15936,
"text": "Is f1 writable by the real uid of caller?"
},
{
"code": null,
"e": 15981,
"s": 15978,
"text": "?X"
},
{
"code": null,
"e": 16025,
"s": 15981,
"text": "Is f1 executable by the real uid of caller?"
},
{
"code": null,
"e": 16028,
"s": 16025,
"text": "?O"
},
{
"code": null,
"e": 16067,
"s": 16028,
"text": "Is f1 owned by the real uid of caller?"
},
{
"code": null,
"e": 16070,
"s": 16067,
"text": "?e"
},
{
"code": null,
"e": 16085,
"s": 16070,
"text": "Does f1 exist?"
},
{
"code": null,
"e": 16088,
"s": 16085,
"text": "?z"
},
{
"code": null,
"e": 16114,
"s": 16088,
"text": "Does f1 have zero length?"
},
{
"code": null,
"e": 16117,
"s": 16114,
"text": "?s"
},
{
"code": null,
"e": 16143,
"s": 16117,
"text": "File size of f1(nil if 0)"
},
{
"code": null,
"e": 16146,
"s": 16143,
"text": "?f"
},
{
"code": null,
"e": 16168,
"s": 16146,
"text": "Is f1 a regular file?"
},
{
"code": null,
"e": 16171,
"s": 16168,
"text": "?d"
},
{
"code": null,
"e": 16190,
"s": 16171,
"text": "Is f1 a directory?"
},
{
"code": null,
"e": 16193,
"s": 16190,
"text": "?l"
},
{
"code": null,
"e": 16216,
"s": 16193,
"text": "Is f1 a symbolic link?"
},
{
"code": null,
"e": 16219,
"s": 16216,
"text": "?p"
},
{
"code": null,
"e": 16246,
"s": 16219,
"text": "Is f1 a named pipe (FIFO)?"
},
{
"code": null,
"e": 16249,
"s": 16246,
"text": "?S"
},
{
"code": null,
"e": 16265,
"s": 16249,
"text": "Is f1 a socket?"
},
{
"code": null,
"e": 16268,
"s": 16265,
"text": "?b"
},
{
"code": null,
"e": 16290,
"s": 16268,
"text": "Is f1 a block device?"
},
{
"code": null,
"e": 16293,
"s": 16290,
"text": "?c"
},
{
"code": null,
"e": 16319,
"s": 16293,
"text": "Is f1 a character device?"
},
{
"code": null,
"e": 16322,
"s": 16319,
"text": "?u"
},
{
"code": null,
"e": 16355,
"s": 16322,
"text": "Does f1 have the setuid bit set?"
},
{
"code": null,
"e": 16358,
"s": 16355,
"text": "?g"
},
{
"code": null,
"e": 16391,
"s": 16358,
"text": "Does f1 have the setgid bit set?"
},
{
"code": null,
"e": 16394,
"s": 16391,
"text": "?k"
},
{
"code": null,
"e": 16427,
"s": 16394,
"text": "Does f1 have the sticky bit set?"
},
{
"code": null,
"e": 16430,
"s": 16427,
"text": "?M"
},
{
"code": null,
"e": 16461,
"s": 16430,
"text": "Last modification time for f1."
},
{
"code": null,
"e": 16464,
"s": 16461,
"text": "?A"
},
{
"code": null,
"e": 16489,
"s": 16464,
"text": "Last access time for f1."
},
{
"code": null,
"e": 16492,
"s": 16489,
"text": "?C"
},
{
"code": null,
"e": 16523,
"s": 16492,
"text": "Last inode change time for f1."
},
{
"code": null,
"e": 16526,
"s": 16523,
"text": "?="
},
{
"code": null,
"e": 16569,
"s": 16526,
"text": "Are modification times of f1 and f2 equal?"
},
{
"code": null,
"e": 16572,
"s": 16569,
"text": "?>"
},
{
"code": null,
"e": 16625,
"s": 16572,
"text": "Is the modification time of f1 more recent than f2 ?"
},
{
"code": null,
"e": 16628,
"s": 16625,
"text": "?<"
},
{
"code": null,
"e": 16675,
"s": 16628,
"text": "Is the modification time of f1 older than f2 ?"
},
{
"code": null,
"e": 16678,
"s": 16675,
"text": "?-"
},
{
"code": null,
"e": 16704,
"s": 16678,
"text": "Is f1 a hard link to f2 ?"
},
{
"code": null,
"e": 16806,
"s": 16704,
"text": "Following is the usage example. Assuming main.rb exist with read, write and not execute permissions −"
},
{
"code": null,
"e": 16938,
"s": 16806,
"text": "#!/usr/bin/ruby\n\nputs test(?r, \"main.rb\" ) # => true\nputs test(?w, \"main.rb\" ) # => true\nputs test(?x, \"main.rb\" ) # => false"
},
{
"code": null,
"e": 16979,
"s": 16938,
"text": "This will produce the following result −"
},
{
"code": null,
"e": 16997,
"s": 16979,
"text": "true\nfalse\nfalse\n"
}
]
|
C# Type Casting | Type casting is when you assign a value of one data type to another type.
In C#, there are two types of casting:
Implicit Casting (automatically) - converting a smaller type
to a larger type size
char -> int -> long -> float -> double
Explicit Casting (manually) - converting a larger type
to a smaller size type
double -> float -> long -> int -> char
Implicit casting is done automatically when passing a smaller size type to a
larger size type:
int myInt = 9;
double myDouble = myInt; // Automatic casting: int to double
Console.WriteLine(myInt); // Outputs 9
Console.WriteLine(myDouble); // Outputs 9
Explicit casting must be done manually by placing the type in parentheses
in front of the value:
double myDouble = 9.78;
int myInt = (int) myDouble; // Manual casting: double to int
Console.WriteLine(myDouble); // Outputs 9.78
Console.WriteLine(myInt); // Outputs 9
It is also possible to convert data types explicitly by using built-in methods, such as Convert.ToBoolean, Convert.ToDouble, Convert.ToString, Convert.ToInt32 (int) and Convert.ToInt64 (long):
int myInt = 10;
double myDouble = 5.25;
bool myBool = true;
Console.WriteLine(Convert.ToString(myInt)); // convert int to string
Console.WriteLine(Convert.ToDouble(myInt)); // convert int to double
Console.WriteLine(Convert.ToInt32(myDouble)); // convert double to int
Console.WriteLine(Convert.ToString(myBool)); // convert bool to string
Many times, there's no need for type conversion. But sometimes you have to. Take a look at the next chapter, when working with user input, to see an example of this.
| [
{
"code": null,
"e": 74,
"s": 0,
"text": "Type casting is when you assign a value of one data type to another type."
},
{
"code": null,
"e": 113,
"s": 74,
"text": "In C#, there are two types of casting:"
},
{
"code": null,
"e": 237,
"s": 113,
"text": "Implicit Casting (automatically) - converting a smaller type \nto a larger type size\nchar -> int -> long -> float -> double\n"
},
{
"code": null,
"e": 355,
"s": 237,
"text": "Explicit Casting (manually) - converting a larger type \nto a smaller size type\ndouble -> float -> long -> int -> char"
},
{
"code": null,
"e": 451,
"s": 355,
"text": "Implicit casting is done automatically when passing a smaller size type to a \nlarger size type:"
},
{
"code": null,
"e": 623,
"s": 451,
"text": "int myInt = 9;\ndouble myDouble = myInt; // Automatic casting: int to double\n\nConsole.WriteLine(myInt); // Outputs 9\nConsole.WriteLine(myDouble); // Outputs 9\n"
},
{
"code": null,
"e": 741,
"s": 643,
"text": "Explicit casting must be done manually by placing the type in parentheses \nin front of the value:"
},
{
"code": null,
"e": 922,
"s": 741,
"text": "double myDouble = 9.78;\nint myInt = (int) myDouble; // Manual casting: double to int\n\nConsole.WriteLine(myDouble); // Outputs 9.78\nConsole.WriteLine(myInt); // Outputs 9\n"
},
{
"code": null,
"e": 1135,
"s": 942,
"text": "It is also possible to convert data types explicitly by using built-in methods, such as Convert.ToBoolean, Convert.ToDouble, Convert.ToString, Convert.ToInt32 (int) and Convert.ToInt64 (long):"
},
{
"code": null,
"e": 1485,
"s": 1135,
"text": "int myInt = 10;\ndouble myDouble = 5.25;\nbool myBool = true;\n\nConsole.WriteLine(Convert.ToString(myInt)); // convert int to string\nConsole.WriteLine(Convert.ToDouble(myInt)); // convert int to double\nConsole.WriteLine(Convert.ToInt32(myDouble)); // convert double to int\nConsole.WriteLine(Convert.ToString(myBool)); // convert bool to string"
},
{
"code": null,
"e": 1671,
"s": 1505,
"text": "Many times, there's no need for type conversion. But sometimes you have to. Take a look at the next chapter, when working with user input, to see an example of this."
}
]
|
How to create image overlay hover effect with CSS? | Following is the code to produce an image overlay hover effect with CSS −
Live Demo
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
.card-container {
display: inline-block;
position: relative;
width: 50%;
}
img {
opacity: 1;
display: block;
width: 100%;
transition: .5s ease;
backface-visibility: hidden;
}
.hoverText {
transition: .5s ease;
opacity: 0;
position: absolute;
top: 50%;
left: 40%;
text-align: center;
}
.card-container:hover img {
opacity: 0.4;
}
.card-container:hover .hoverText {
opacity: 1;
}
.caption {
background-color: rgb(18, 53, 131);
color: white;
font-size: 30px;
padding: 20px;
border-radius: 6px;
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
font-weight: bolder;
}
</style>
</head>
<body>
<h1>Image Overlay effect Example</h1>
<div class="card-container">
<img src="https://i.picsum.photos/id/237/536/354.jpg">
<div class="hoverText">
<div class="caption">Dog</div>
</div>
</div>
</body>
</html>
The above code will produce the following output −
On hovering above the image the caption will be shown as follows − | [
{
"code": null,
"e": 1129,
"s": 1062,
"text": "Following is the code to produce bottom navigation menu with CSS −"
},
{
"code": null,
"e": 1140,
"s": 1129,
"text": " Live Demo"
},
{
"code": null,
"e": 2123,
"s": 1140,
"text": "<!DOCTYPE html>\n<html>\n<head>\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n<style>\n.card-container {\n display: inline-block;\n position: relative;\n width: 50%;\n}\nimg {\n opacity: 1;\n display: block;\n width: 100%;\n transition: .5s ease;\n backface-visibility: hidden;\n}\n.hoverText {\n transition: .5s ease;\n opacity: 0;\n position: absolute;\n top: 50%;\n left: 40%;\n text-align: center;\n}\n.card-container:hover img {\n opacity: 0.4;\n}\n.card-container:hover .hoverText {\n opacity: 1;\n}\n.caption {\n background-color: rgb(18, 53, 131);\n color: white;\n font-size: 30px;\n padding: 20px;\n border-radius: 6px;\n font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;\n font-weight: bolder;\n}\n</style>\n</head>\n<body>\n<h1>Image Overlay effect Example</h1>\n<div class=\"card-container\">\n<img src=\"https://i.picsum.photos/id/237/536/354.jpg\">\n<div class=\"hoverText\">\n<div class=\"caption\">Dog</div>\n</div>\n</div>\n</body>\n</html>"
},
{
"code": null,
"e": 2174,
"s": 2123,
"text": "The above code will produce the following output −"
},
{
"code": null,
"e": 2241,
"s": 2174,
"text": "On hovering above the image the caption will be shown as follows −"
}
]
|
Classify images with CNN. Getting young children to tidy up their... | by 1Regina | Towards Data Science | Getting young children to tidy up their rooms is often challenging. What I insist is messy, they will insist is clean enough. After all, all adjectives are subjective and I want my children to grow up respecting others’ opinion in our inclusive society. How do you put some definition around differences in opinion? An objective way to achieve this distinction is using image classification to differentiate between a clean versus a messy room.
This application can also be extended to cleaning service agencies or Airbnb, especially since more than 10% of its user complaints pertain to dirty and messy conditions. (see source)
The process of building my application comprises 5 stages: 1) Web-Scraping, 2) Dataset Preparation, 3) Models Building, 4) Model Testing and 5) Front End Flask App.
1. Web-Scraping
There was no dataset of the size I wanted, so I had to turn to web-scraping (such is life!), using keywords with google_images_download to obtain images from Google.
For the “Clean” category, words like “clean bedroom”, “tidy room”, “ hotel room”, “clean and neat” and “declutter” were used.
For the “Messy” category, words used include “clutter”, “mess” and “disorganized room”.
I created a third category for fun to cater to the snapshots of excuse notes my kids will leave me to escape their chores. (Yup, inertia actually begins at a tender age.) For this category/class, I used words like “kids’ handwritten note”, “love letters”, “handwritten notes” etc to scrape google images.
2. Dataset Preparation
After obtaining more than 3,000 images, it is important to ensure the images are correct. Going into each category, I eye-balled the pictures, removing i) conflicting pictures, ii) contrasting pictures showing before-and-after, iii) duplicates which won’t help in training, iv) cleaning agents pictures, v) cartoon images, vi) pictures of the words, vii) company logos.
This dataset spring-cleaning is essential. Remember the principle: garbage in, garbage out.
At the end of the day, I was left with more than 600 pictures for “Messy” and “Excuse” and more than 900 for “Clean”.
3. Models Building
Starting with machine-learning, I used 8 different algorithms.
K-Nearest Neighbors
Gaussian Naive Bayes
Logistic Regression
Support Vector Classifier
Decision Tree
Multi-Layer Perceptron
Random Forest
XGBoost
Using the F1 score as the evaluation metric for all the models, XGBoost is the best performer at 0.76. (For those with an appetite for technicals, I will elaborate them in italics in this article. You can skip them if your interest is only in the outcome. F1 is the harmonic mean of precision and recall: i) Precision is the ratio of True Positives out of all the predicted positives (comprising True Positives and False Positives), whereas ii) Recall is the ratio of True Positives out of True Positives and False Negatives.)
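For a concrete feel of these metrics, here is a small sketch using scikit-learn; the labels below are made up for illustration, and the weighted averaging scheme is an assumption, since multi-class F1 depends on which average you choose:

# illustrative only: hypothetical labels for the three classes
from sklearn.metrics import f1_score, classification_report

y_true = ["clean", "messy", "excuse", "messy", "clean"]   # hypothetical ground truth
y_pred = ["clean", "messy", "excuse", "clean", "clean"]   # hypothetical predictions

# 'weighted' averages the per-class F1 scores by class support (an assumption)
print(f1_score(y_true, y_pred, average="weighted"))
print(classification_report(y_true, y_pred))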
Curious to know if this can be improved in a substantial manner, let’s explore Deep Learning using Convolutional Neural Network (CNN) which is reputable for working with images.
The CNN architecture comprises many layers. Each layer extracts certain features (e.g. contrast, shapes, edges, texture) from the training pictures for each class. The trained model is then applied to unseen pictures, which are classified using the learned features.
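The training snippet further below assumes a model object has already been built. As a rough sketch of what such a three-class network could look like (the layer sizes and the 96x96 input shape are illustrative assumptions, not the exact architecture used here):

# a minimal illustrative CNN; hyperparameters are assumptions
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(96, 96, 3)),  # assumed input size
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(3, activation="softmax")  # three classes: clean, messy, excuse
])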
Indeed, the CNN model with the Adam optimizer and early stopping is the best model, beating XGBoost with a new F1 score of 0.84 for the trained network. The superiority of the Adam optimizer lies in its adaptive learning rate, and it is favoured for requiring relatively little parameter tuning. Below is the code for my model and the classification report for the associated F1 score.
# compile & train model
# initialize # of epochs to train for, and batch size
EPOCHS = 30
BS = 128
# initialize the model and optimizer
model.compile(loss="categorical_crossentropy", optimizer='adam', metrics=["accuracy"])
# train the network
import keras
H = model.fit_generator(aug.flow(trainX, trainY, batch_size=BS),
    validation_data=(testX, testY),
    steps_per_epoch=len(trainX) // BS,
    epochs=EPOCHS,
    # inject callbacks with best weights restoration
    callbacks=[
        keras.callbacks.EarlyStopping(patience=8, verbose=1, restore_best_weights=True),
        keras.callbacks.ReduceLROnPlateau(factor=.5, patience=3, verbose=1),
    ])
“restore_best_weights” ensures that the model retrieves the best weights achieved across the epoch runs.
4. Model Testing
Putting the model to the test with 3 unseen pictures, one from each class, here are the results:
Taking a step further, even if my creative kids were to someday write me a note on a decorated piece of paper or couple a “gift” with their notes, the model can still confidently classify these as excuses.
5. Front End Application
Image classification is a subset of image recognition which has widespread use in the security industry (facial recognition), virtual search engine (object finder in stores), healthcare (emotion detection in patients) and gaming and augmented reality. In a way, smartphone cameras have made all these advances possible with multitudes of pictures that can be easily created. This project demonstrates an application of image classification in our homes :) | [
{
"code": null,
"e": 617,
"s": 172,
"text": "Getting young children to tidy up their rooms is often challenging. What I insist is messy, they will insist is clean enough. After all, all adjectives are subjective and I want my children to grow up respecting others’ opinion in our inclusive society. How do you put some definition around differences in opinion? An objective way to achieve this distinction is using image classification to differentiate between a clean versus a messy room."
},
{
"code": null,
"e": 806,
"s": 617,
"text": "This application can also be extended to cleaning services agencies or Airbnb especially so with more than 10% of its user complaints pertaining to dirty and messy conditions. (see source)"
},
{
"code": null,
"e": 969,
"s": 806,
"text": "The process to building my application comprises of 5 stages: 1) Web-scraping 2) Dataset preparation 3)Models Building 4) Model Testing and 5)Front end Flask App."
},
{
"code": null,
"e": 985,
"s": 969,
"text": "1. Web-Scraping"
},
{
"code": null,
"e": 1145,
"s": 985,
"text": "There was no dataset of the size I want, so I had to turn to web-scraping (such is life!) with keywords on google_images_download to obtain images from Google."
},
{
"code": null,
"e": 1271,
"s": 1145,
"text": "For the “Clean” category, words like “clean bedroom”, “tidy room”, “ hotel room”, “clean and neat” and “declutter” were used."
},
{
"code": null,
"e": 1359,
"s": 1271,
"text": "For the “Messy” category, words used include “clutter”, “mess” and “disorganized room”."
},
{
"code": null,
"e": 1664,
"s": 1359,
"text": "I created a third category for fun to cater to the snapshots of excuse notes my kids will leave me to escape their chores. (Yup, inertia actually begins at a tender age.) For this category/class, I used words like “kids’ handwritten note”, “love letters”, “handwritten notes” etc to scrape google images."
},
{
"code": null,
"e": 1687,
"s": 1664,
"text": "2. Dataset Preparation"
},
{
"code": null,
"e": 2057,
"s": 1687,
"text": "After obtaining more than 3,000 images, it is important to ensure the images are correct. Going into each category, I eye-balled the pictures, removing i) conflicting pictures, ii) contrasting pictures showing before-and-after, iii) duplicates which won’t help in training, iv) cleaning agents pictures, v) cartoon images, vi) pictures of the words, vii) company logos."
},
{
"code": null,
"e": 2149,
"s": 2057,
"text": "This dataset spring-cleaning is essential. Remember the principle: garbage in, garbage out."
},
{
"code": null,
"e": 2267,
"s": 2149,
"text": "At the end of the day, I was left with more than 600 pictures for ”Messy” and “Excuse” and more than 900 for “Clean”."
},
{
"code": null,
"e": 2286,
"s": 2267,
"text": "3. Models Building"
},
{
"code": null,
"e": 2349,
"s": 2286,
"text": "Starting with machine-learning, I used 8 different algorithms."
},
{
"code": null,
"e": 2488,
"s": 2349,
"text": "K-Nearest NeighborsGaussian Naive BayesLogistic RegressionSupport Vector ClassifierDecision TreeMulti-Layer PerceptronRandom ForestXGBoost"
},
{
"code": null,
"e": 2508,
"s": 2488,
"text": "K-Nearest Neighbors"
},
{
"code": null,
"e": 2529,
"s": 2508,
"text": "Gaussian Naive Bayes"
},
{
"code": null,
"e": 2549,
"s": 2529,
"text": "Logistic Regression"
},
{
"code": null,
"e": 2575,
"s": 2549,
"text": "Support Vector Classifier"
},
{
"code": null,
"e": 2589,
"s": 2575,
"text": "Decision Tree"
},
{
"code": null,
"e": 2612,
"s": 2589,
"text": "Multi-Layer Perceptron"
},
{
"code": null,
"e": 2626,
"s": 2612,
"text": "Random Forest"
},
{
"code": null,
"e": 2634,
"s": 2626,
"text": "XGBoost"
},
{
"code": null,
"e": 3170,
"s": 2634,
"text": "Using the F1 score as the evaluation metric for all the models, XGBoost is the best performer at 0.76. (For those with an appetite for technicals, I will elaborate them in italics in this article. You can skip them if your interest is only on the outcome. F1 is the harmonic balance between precision and recall. i) Precision is the ratio of True Positives out of all the predicted positives (comprising of True Positives and False Positives) whereas ii) Recall is the ratio of True Positives out of True Positives and False Negatives."
},
{
"code": null,
"e": 3348,
"s": 3170,
"text": "Curious to know if this can be improved in a substantial manner, let’s explore Deep Learning using Convolutional Neural Network (CNN) which is reputable for working with images."
},
{
"code": null,
"e": 3647,
"s": 3348,
"text": "The CNN architecture comprises of many layers. Each layer will extract certain features (e.g contrast, shapes, edges, texture) from the training pictures for each class . The trained model is then subsequently applied to unseen pictures which are then classified using the trained feature elements."
},
{
"code": null,
"e": 4033,
"s": 3647,
"text": "Indeed, the CNN model with the Adam optimizer and early stopping is the best model, beating the XGBoost with a new 0.84 F1 score associated with the trained network. The superiority of the Adam optimizer lies in its adaptive learning rate and favoured due to its relatively less parameters tuning. Below are the codes for my model and classification report for the associated F1 score."
},
{
"code": null,
"e": 4640,
"s": 4033,
"text": "# compile & train model# initialize # of epochs to train for, and batch sizeEPOCHS = 30BS = 128# initialize the model and optimizer model.compile(loss=”categorical_crossentropy”, optimizer=’adam’,metrics=[“accuracy”])# train the networkimport kerasH = model.fit_generator(aug.flow(trainX, trainY, batch_size=BS), validation_data=(testX, testY), steps_per_epoch=len(trainX) // BS, epochs=EPOCHS,# inject callbacks with best weights restorationcallbacks=[ keras.callbacks.EarlyStopping(patience=8, verbose=1, restore_best_weights=True), keras.callbacks.ReduceLROnPlateau(factor=.5, patience=3, verbose=1), ])"
},
{
"code": null,
"e": 4749,
"s": 4640,
"text": "“restore_best_weight” ensures that the model will retrieve the best weights achieved through the epoch runs."
},
{
"code": null,
"e": 4766,
"s": 4749,
"text": "4. Model Testing"
},
{
"code": null,
"e": 4863,
"s": 4766,
"text": "Putting the model to the test with 3 unseen pictures, one from each class, here are the results:"
},
{
"code": null,
"e": 5069,
"s": 4863,
"text": "Taking a step further, even if my creative kids were to someday write me a note on a decorated piece of paper or couple a “gift” with their notes, the model can still confidently classify these as excuses."
},
{
"code": null,
"e": 5094,
"s": 5069,
"text": "5. Front End Application"
}
]
|
How to set auto-increment to an existing column in a table using JDBC API? | You can add/set an auto increment constraint to a column in a table using the ALTER TABLE command.
ALTER TABLE table_name ADD id INT PRIMARY KEY AUTO_INCREMENT
Assume we have a table named Sales in the database with 6 columns, namely ProductName, CustomerName, DispatchDate, DeliveryTime, Price and Location. After adding the auto-increment ID column, its description is as shown below:
+--------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+-------+
| ProductName | varchar(255) | YES | UNI | NULL | |
| CustomerName | varchar(255) | YES | | NULL | |
| DispatchDate | date | YES | | NULL | |
| DeliveryTime | time | YES | | NULL | |
| Price | int(11) | YES | | NULL | |
| Location | text | YES | | NULL | |
| ID | int(11) | NO | PRI | NULL | |
+--------------+--------------+------+-----+---------+-------+
The following JDBC program establishes a connection with the MySQL database, adds a column named id to the Sales table and sets its values to auto-increment.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
public class SettingAutoIncrement {
   public static void main(String args[]) throws SQLException {
      //Registering the Driver
      DriverManager.registerDriver(new com.mysql.jdbc.Driver());
      //Getting the connection
      String mysqlUrl = "jdbc:mysql://localhost/mydatabase";
      Connection con = DriverManager.getConnection(mysqlUrl, "root", "password");
      System.out.println("Connection established......");
      //Creating the Statement
      Statement stmt = con.createStatement();
      //Setting the column id auto-increment
      String query = "ALTER TABLE Sales ADD id INT PRIMARY KEY AUTO_INCREMENT";
      stmt.execute(query);
      System.out.println("Table altered......");
      //Closing the connection
      con.close();
   }
}
Connection established......
Table altered......
If you retrieve the contents of the Sales table using the SELECT command, you can observe that a column named id has been added to the table with auto-incremented integer values.
mysql> select * from Sales;
+-------------+--------------+--------------+--------------+-------+----------------+----+
| ProductName | CustomerName | DispatchDate | DeliveryTime | Price | Location | id |
+-------------+--------------+--------------+--------------+-------+----------------+----+
| Key-Board | Raja | 2019-09-01 | 08:51:36 | 7000 | Hyderabad | 1 |
| Earphones | Roja | 2019-05-01 | 05:54:28 | 2000 | Vishakhapatnam | 2 |
| Mouse | Puja | 2019-03-01 | 04:26:38 | 3000 | Vijayawada | 3 |
| Mobile | Vanaja | 2019-03-01 | 04:26:35 | 9000 | Vijayawada | 4 |
| Headset | Jalaja | 2019-03-01 | 05:19:16 | 6000 | Vijayawada | 5 |
+-------------+--------------+--------------+--------------+-------+----------------+----+
5 rows in set (0.00 sec) | [
{
"code": null,
"e": 1161,
"s": 1062,
"text": "You can add/set an auto increment constraint to a column in a table using the ALTER TABLE command."
},
{
"code": null,
"e": 1222,
"s": 1161,
"text": "ALTER TABLE table_name ADD id INT PRIMARY KEY AUTO_INCREMENT"
},
{
"code": null,
"e": 1399,
"s": 1222,
"text": "Assume we have a table named Dispatches in the database with 7 columns namely id, CustomerName, DispatchDate, DeliveryTime, Price and, Location with description as shown below:"
},
{
"code": null,
"e": 2092,
"s": 1399,
"text": "+--------------+--------------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+--------------+--------------+------+-----+---------+-------+\n| ProductName | varchar(255) | YES | UNI | NULL | |\n| CustomerName | varchar(255) | YES | | NULL | |\n| DispatchDate | date | YES | | NULL | |\n| DeliveryTime | time | YES | | NULL | |\n| Price | int(11) | YES | | NULL | |\n| Location | text | YES | | NULL | |\n| ID | int(11) | NO | PRI | NULL | |\n+--------------+--------------+------+-----+---------+-------+"
},
{
"code": null,
"e": 2238,
"s": 2092,
"text": "Following JDBC program establishes connection with MySQL database ,adds a column named Id and sets the values to the id column as auto-increment."
},
{
"code": null,
"e": 3094,
"s": 2238,
"text": "import java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.SQLException;\nimport java.sql.Statement;\npublic class SettingAutoIncrement {\n public static void main(String args[]) throws SQLException {\n //Registering the Driver\n DriverManager.registerDriver(new com.mysql.jdbc.Driver());\n //Getting the connection\n String mysqlUrl = \"jdbc:mysql://localhost/mydatabase\";\n Connection con = DriverManager.getConnection(mysqlUrl, \"root\", \"password\");\n System.out.println(\"Connection established......\");\n //Creating the Statement\n Statement stmt = con.createStatement();\n //Setting the column id auto-increment\n String query = \"ALTER TABLE Sales ADD id INT PRIMARY KEY AUTO_INCREMENT\";\n stmt.execute(query);\n stmt.executeBatch();\n System.out.println(\"Table altered......\");\n }\n}"
},
{
"code": null,
"e": 3143,
"s": 3094,
"text": "Connection established......\nTable altered......"
},
{
"code": null,
"e": 3312,
"s": 3143,
"text": "If you retrieve the contents of the Sales table using the Select command you can observe that a column named id added to the table with auto-incremented integer values."
},
{
"code": null,
"e": 4184,
"s": 3312,
"text": "mysql> select * from Sales;\n+-------------+--------------+--------------+--------------+-------+----------------+----+\n| ProductName | CustomerName | DispatchDate | DeliveryTime | Price | Location | id |\n+-------------+--------------+--------------+--------------+-------+----------------+----+\n| Key-Board | Raja | 2019-09-01 | 08:51:36 | 7000 | Hyderabad | 1 |\n| Earphones | Roja | 2019-05-01 | 05:54:28 | 2000 | Vishakhapatnam | 2 |\n| Mouse | Puja | 2019-03-01 | 04:26:38 | 3000 | Vijayawada | 3 |\n| Mobile | Vanaja | 2019-03-01 | 04:26:35 | 9000 | Vijayawada | 4 |\n| Headset | Jalaja | 2019-03-01 | 05:19:16 | 6000 | Vijayawada | 5 |\n+-------------+--------------+--------------+--------------+-------+----------------+----+\n5 rows in set (0.00 sec)"
}
]
|
How to use the TextWatcher class in kotlin? | This example demonstrates how to use the TextWatcher class in kotlin.
Step 1 − Create a new project in Android Studio, go to File ⇒New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="12dp"
tools:context=".MainActivity">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_marginTop="50dp"
android:text="Tutorials Point"
android:textAlignment="center"
android:textColor="@android:color/holo_green_dark"
android:textSize="32sp"
android:textStyle="bold" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_above="@id/etInput"
android:layout_centerInParent="true"
android:layout_marginBottom="30dp"
android:text="Android Text Watcher"
android:textColor="@android:color/holo_orange_dark"
android:textSize="24sp"
android:textStyle="bold" />
<EditText
android:id="@+id/etInput"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
android:hint="Input"
android:maxLength="15" />
<TextView
android:id="@+id/textView"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_below="@id/etInput"
android:layout_centerHorizontal="true"
android:layout_marginTop="12dp"
android:textColor="@android:color/holo_red_dark"
android:textSize="24sp"
android:textStyle="bold|italic" />
</RelativeLayout>
Step 3 − Add the following code to MainActivity.kt
import android.os.Bundle
import android.text.Editable
import android.text.TextWatcher
import android.widget.EditText
import android.widget.TextView
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
class MainActivity : AppCompatActivity() {
   lateinit var input: EditText
   lateinit var output: TextView
   override fun onCreate(savedInstanceState: Bundle?) {
      super.onCreate(savedInstanceState)
      setContentView(R.layout.activity_main)
      title = "KotlinApp"
      input = findViewById(R.id.etInput)
      output = findViewById(R.id.textView)
      // Attach the watcher so we are notified whenever the EditText changes
      input.addTextChangedListener(textWatcher)
   }
   private val textWatcher = object : TextWatcher {
      // Called after the text has been changed; unused here
      override fun afterTextChanged(s: Editable?) {
      }
      // Called before the text is about to change; unused here
      override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {
      }
      // Called while the text is changing; mirror it into the TextView
      override fun onTextChanged(s: CharSequence?, start: Int, before: Int, count: Int) {
         output.text = s
         // The EditText declares android:maxLength="15"
         if (s?.length == 15) {
            Toast.makeText(applicationContext, "Maximum Limit Reached", Toast.LENGTH_SHORT)
               .show()
         }
      }
   }
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="app.com.kotlipapp">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −
Click here to download the project code. | [
{
"code": null,
"e": 1132,
"s": 1062,
"text": "This example demonstrates how to use the TextWatcher class in kotlin."
},
{
"code": null,
"e": 1260,
"s": 1132,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1325,
"s": 1260,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 3021,
"s": 1325,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:padding=\"12dp\"\n tools:context=\".MainActivity\">\n <TextView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_centerHorizontal=\"true\"\n android:layout_marginTop=\"50dp\"\n android:text=\"Tutorials Point\"\n android:textAlignment=\"center\"\n android:textColor=\"@android:color/holo_green_dark\"\n android:textSize=\"32sp\"\n android:textStyle=\"bold\" />\n <TextView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_above=\"@id/etInput\"\n android:layout_centerInParent=\"true\"\n android:layout_marginBottom=\"30dp\"\n android:text=\"Android Text Watcher\"\n android:textColor=\"@android:color/holo_orange_dark\"\n android:textSize=\"24sp\"\n android:textStyle=\"bold\" />\n <EditText\n android:id=\"@+id/etInput\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_centerInParent=\"true\"\n android:hint=\"Input\"\n android:maxLength=\"15\" />\n <TextView\n android:id=\"@+id/textView\"\n android:layout_width=\"fill_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_below=\"@id/etInput\"\n android:layout_centerHorizontal=\"true\"\n android:layout_marginTop=\"12dp\"\n android:textColor=\"@android:color/holo_red_dark\"\n android:textSize=\"24sp\"\n android:textStyle=\"bold|italic\" />\n</RelativeLayout>"
},
{
"code": null,
"e": 3072,
"s": 3021,
"text": "Step 3 − Add the following code to MainActivity.kt"
},
{
"code": null,
"e": 4203,
"s": 3072,
"text": "import android.os.Bundle\nimport android.text.Editable\nimport android.text.TextWatcher\nimport android.widget.EditText\nimport android.widget.TextView\nimport android.widget.Toast\nimport androidx.appcompat.app.AppCompatActivity\nclass MainActivity : AppCompatActivity() {\n lateinit var input: EditText\n lateinit var output: TextView\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n input = findViewById(R.id.etInput)\n output = findViewById(R.id.textView)\n input.addTextChangedListener(textWatcher)\n }\n private val textWatcher = object : TextWatcher {\n override fun afterTextChanged(s: Editable?) {\n }\n override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {\n }\n override fun onTextChanged(s: CharSequence?, start: Int, before: Int, count: Int) {\n output.text = s\n if (start == 12) {\n Toast.makeText(applicationContext, \"Maximum Limit Reached\", Toast.LENGTH_SHORT)\n .show()\n }\n }\n }\n}"
},
{
"code": null,
"e": 4258,
"s": 4203,
"text": "Step 4 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 4934,
"s": 4258,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"app.com.kotlipapp\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 5285,
"s": 4934,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −"
},
{
"code": null,
"e": 5326,
"s": 5285,
"text": "Click here to download the project code."
}
]
|
How to Find HCF or GCD using Python? | Highest Common Factor or Greatest Common Divisor of two or more integers is the largest positive integer that evenly divides the numbers without a remainder. For example, the GCD of 8 and 12 is 4.
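Python's standard library also exposes this directly through the math.gcd function, which is usually the simplest route:

import math
# math.gcd returns the greatest common divisor of its arguments
print(math.gcd(8, 12)) # 4

The brute-force approach below instead checks every candidate divisor up to the smaller of the two inputs: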
x = int(input("Enter first number: "))
y = int(input("Enter second number: "))
if x > y:
    smaller = y
else:
    smaller = x
for i in range(1, smaller + 1):
    # i divides both x and y evenly
    if((x % i == 0) and (y % i == 0)):
        hcf = i
print("The H.C.F. of", x, "and", y, "is", hcf) | [
{
"code": null,
"e": 1259,
"s": 1062,
"text": "Highest Common Factor or Greatest Common Divisor of two or more integers is the largest positive integer that evenly divides the numbers without a remainder. For example, the GCD of 8 and 12 is 4."
},
{
"code": null,
"e": 1531,
"s": 1259,
"text": "x = int(input(\"Enter first number: \")) \ny = int(input(\"Enter second number: \")) \nif x > y: \n smaller = y \nelse: \n smaller = x \nfor i in range(1,smaller + 1): \nif((x % i == 0) and (y % i == 0)): \n hcf = i \n\nprint(\"The H.C.F. of\", x,\"and\", x,\"is\", hcf) \n"
}
]
|
Program to print path from root to a given node in a binary tree using C++ | In this tutorial, we will be discussing a program to print the path from root to a given node in a binary tree.
For a given binary tree having distinct nodes, we have to print the complete path to reach a given node from the root node of the binary tree.
To solve this problem, we will use recursion. While traversing the binary tree, we will recursively search for the particular element, and alongside we will store the path taken to reach it.
#include <bits/stdc++.h>
using namespace std;
struct Node{
int data;
Node *left, *right;
};
struct Node* create_node(int data){
struct Node *new_node = new Node;
new_node->data = data;
new_node->left = new_node->right = NULL;
return new_node;
}
//checks if a path from root node to element exists
bool is_path(Node *root, vector<int>& arr, int x){
if (!root)
return false;
arr.push_back(root->data);
if (root->data == x)
return true;
if (is_path(root->left, arr, x) || is_path(root->right, arr, x))
return true;
arr.pop_back();
return false;
}
//printing the path from the root node to the element
void print_path(Node *root, int x){
vector<int> arr;
if (is_path(root, arr, x)){
for (int i=0; i<arr.size()-1; i++)
cout << arr[i] << " -> ";
cout << arr[arr.size() - 1];
}
else
cout << "Path doesn't exists" << endl;
}
int main(){
struct Node *root = create_node(13);
root->left = create_node(21);
root->right = create_node(43);
root->left->left = create_node(34);
root->left->right = create_node(55);
root->right->left = create_node(68);
root->right->right = create_node(79);
int x = 68;
print_path(root, x);
return 0;
}
13 -> 43 -> 68 | [
{
"code": null,
"e": 1174,
"s": 1062,
"text": "In this tutorial, we will be discussing a program to print the path from root to a given node in a binary tree."
},
{
"code": null,
"e": 1330,
"s": 1174,
"text": "For a given binary tree having distinct nodes, we have to print the complete path to reach a particularly given node from the root node of the binary tree."
},
{
"code": null,
"e": 1557,
"s": 1330,
"text": "To solve this problem, we will use recursion. While traversing the binary tree, we will recursively search for the particular element to be found. Also alongside we will be storing the path to reach the element to be searched."
},
{
"code": null,
"e": 2803,
"s": 1557,
"text": "#include <bits/stdc++.h>\nusing namespace std;\nstruct Node{\n int data;\n Node *left, *right;\n};\nstruct Node* create_node(int data){\n struct Node *new_node = new Node;\n new_node->data = data;\n new_node->left = new_node->right = NULL;\n return new_node;\n}\n//checks if a path from root node to element exists\nbool is_path(Node *root, vector<int>& arr, int x){\n if (!root)\n return false;\n arr.push_back(root->data);\n if (root->data == x)\n return true;\n if (is_path(root->left, arr, x) || is_path(root->right, arr, x))\n return true;\n arr.pop_back();\n return false;\n}\n//printing the path from the root node to the element\nvoid print_path(Node *root, int x){\n vector<int> arr;\n if (is_path(root, arr, x)){\n for (int i=0; i<arr.size()-1; i++)\n cout << arr[i] << \" -> \";\n cout << arr[arr.size() - 1];\n }\n else\n cout << \"Path doesn't exists\" << endl;\n}\nint main(){\n struct Node *root = create_node(13);\n root->left = create_node(21);\n root->right = create_node(43);\n root->left->left = create_node(34);\n root->left->right = create_node(55);\n root->right->left = create_node(68);\n root->right->right = create_node(79);\n int x = 68;\n print_path(root, x);\n return 0;\n}"
},
{
"code": null,
"e": 2818,
"s": 2803,
"text": "13 -> 43 -> 68"
}
]
|
Creating Interactive Radar Charts with Python | by M Khorasani | Towards Data Science | Perhaps one of the unsung heroes of data visualization is the benevolent and graceful radar plot. We’ve grown accustomed to a whole slew of other visualizations, choropleths, donuts and heatmaps to name a few, but radar charts are largely missing from our dazzling dashboards. Granted, there are only particular use cases for such charts, namely, visualizing wind roses, geographical data and some other types of multivariate data. But when you do use one, it is remarkably effective at visualizing outliers or commonality amongst numerical and ordinal datasets.
For this tutorial, we will be generating a dashboard with an interactive and dynamic radar chart that will render in real-time. To that end, we will be using Plotly and Streamlit as our formidable Python stack. If you haven’t already done so, fire up Anaconda or any other Python IDE of your choice and install the following packages:
pip install plotly
pip install streamlit
Then proceed with importing the following packages into your script:
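A minimal set of imports for this stack might look like the following (a sketch; depending on your script you may need more):

# minimal imports for the Plotly + Streamlit stack (a sketch)
import plotly.graph_objects as go
import streamlit as st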
Plotly is one of the most versatile and interactive data visualization tools currently out there. It is armed with bindings for Python and R, and its integration into dashboards is seamless. Plotly is able to generate interactive radar charts with practically no overhead for you as the developer. All you need to do is to input an array of numbers corresponding to the value of each of your variables, and leave the rest to Plotly as shown below:
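As a sketch of such a chart, the snippet below builds a radar (polar) figure; the function name radar_chart matches how it is referred to later, while the category labels and axis range are purely illustrative assumptions:

# illustrative radar chart builder; categories and range are assumptions
def radar_chart(values):
    categories = ["var A", "var B", "var C", "var D", "var E"]  # hypothetical variables
    fig = go.Figure(go.Scatterpolar(r=values, theta=categories, fill="toself"))
    fig.update_layout(polar=dict(radialaxis=dict(visible=True, range=[0, 10])))
    return fig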
For our dashboard, we will be using Streamlit. Long story short, Streamlit is a pure Python web framework that renders dashboards as web apps in real-time. Once your dashboard is ready, you can port-forward it to one of your local TCP ports and open it on your browser. Alternatively you can deploy your dashboard to the cloud with the likes of Heroku, AWS or Streamlit’s own one-click deployment service, if you have the stomach for more headache.
For the Streamlit dashboard, we will create a slider that will enable us to input any value we like into our radar_chart function.
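One way to wire this up, reusing the radar_chart function sketched above (the slider bounds and defaults here are illustrative):

# one slider per variable; bounds and defaults are assumptions
values = [st.slider(f"Variable {i + 1}", 0, 10, 5) for i in range(5)]
st.plotly_chart(radar_chart(values))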
Once the script is completed, you can render the dashboard/app by opening Anaconda prompt and typing in the following commands.
Initially, change the directory to the location of your script:
cd C:/Users/.../script_directory
Then run your app by typing:
streamlit run script_name.py
Streamlit will then automatically generate your app and will forward it to your local host, which you can open in any browser of your choice.
And there you have it, an interactive radar chart where you can change the values shown in the visual by modifying the positions of the slider.
You can also play around with all the interactive bells and whistles afforded by Plotly.
If you want to learn more about data visualization and Python, then feel free to check out the following (affiliate linked) courses: | [
{
"code": null,
"e": 734,
"s": 171,
"text": "Perhaps one of the unsung heroes of data visualization is the benevolent and graceful radar plot. We’ve grown accustomed to a whole slew of other visualizations, choropleths, donuts and heatmaps to name a few, but radar charts are largely missing from our dazzling dashboards. Granted, there are only particular use cases for such charts, namely, visualizing wind roses, geographical data and some other types of multivariate data. But when you do use one, it is remarkably effective at visualizing outliers or commonality amongst numerical and ordinal datasets."
},
{
"code": null,
"e": 1069,
"s": 734,
"text": "For this tutorial, we will be generating a dashboard with an interactive and dynamic radar chart that will render in real-time. To that end, we will be using Plotly and Streamlit as our formidable Python stack. If you haven’t already done so, fire up Anaconda or any other Python IDE of your choice and install the following packages:"
},
{
"code": null,
"e": 1109,
"s": 1069,
"text": "pip install plotlypip install streamlit"
},
{
"code": null,
"e": 1178,
"s": 1109,
"text": "Then proceed with importing the following packages into your script:"
},
{
"code": null,
"e": 1626,
"s": 1178,
"text": "Plotly is one of the most versatile and interactive data visualization tools currently out there. It is armed with bindings for Python and R, and its integration into dashboards is seamless. Plotly is able to generate interactive radar charts with practically no overhead for you as the developer. All you need to do is to input an array of numbers corresponding to the value of each of your variables, and leave the rest to Plotly as shown below:"
},
{
"code": null,
"e": 2075,
"s": 1626,
"text": "For our dashboard, we will be using Streamlit. Long story short, Streamlit is a pure Python web framework that renders dashboards as web apps in real-time. Once your dashboard is ready, you can port-forward it to one of your local TCP ports and open it on your browser. Alternatively you can deploy your dashboard to the cloud with the likes of Heroku, AWS or Streamlit’s own one-click deployment service, if you have the stomach for more headache."
},
{
"code": null,
"e": 2206,
"s": 2075,
"text": "For the Streamlit dashboard, we will create a slider that will enable us to input any value we like into our radar_chart function."
},
{
"code": null,
"e": 2334,
"s": 2206,
"text": "Once the script is completed, you can render the dashboard/app by opening Anaconda prompt and typing in the following commands."
},
{
"code": null,
"e": 2398,
"s": 2334,
"text": "Initially, change the directory to the location of your script:"
},
{
"code": null,
"e": 2431,
"s": 2398,
"text": "cd C:/Users/.../script_directory"
},
{
"code": null,
"e": 2460,
"s": 2431,
"text": "Then run your app by typing:"
},
{
"code": null,
"e": 2489,
"s": 2460,
"text": "streamlit run script_name.py"
},
{
"code": null,
"e": 2631,
"s": 2489,
"text": "Streamlit will then automatically generate your app and will forward it to your local host, which you can open in any browser of your choice."
},
{
"code": null,
"e": 2775,
"s": 2631,
"text": "And there you have it, an interactive radar chart where you can change the values shown in the visual by modifying the positions of the slider."
},
{
"code": null,
"e": 2864,
"s": 2775,
"text": "You can also play around with all the interactive bells and whistles afforded by Plotly."
}
]
|
How to validate if input in input field has alphabets only using express-validator ? - GeeksforGeeks | 24 Dec, 2021
In HTML forms, we often require validation of different types. Validating an existing email, validating password length, validating a confirm-password field, and allowing only integer inputs are some examples of validation. In certain input fields, only alphabets are allowed, i.e. no number or special character is permitted in them. We can validate these input fields to accept only alphabets using the express-validator middleware.
Command to install express-validator:
npm install express-validator
Steps to use express-validator to implement the logic:
Install express-validator middleware.
Create a validator.js file to code all the validation logic.
Validate the input with validateInputField: check(input field name) and chain on the isAlpha() validation with ‘ . ‘
Use the validation name(validateInputField) in the routes as a middleware as an array of validations.
Destructure ‘validationResult’ function from express-validator to use it to find any errors.
If error occurs redirect to the same page passing the error information.
If error list is empty, give access to the user for the subsequent request.
Note: Here we use local or custom database to implement the logic, the same steps can be followed to implement the logic in a regular database like MongoDB or MySql.
Example: This example illustrates how to validate an input field to allow only alphabets.
Filename – index.js
javascript
const express = require('express')
const bodyParser = require('body-parser')
const {validationResult} = require('express-validator')
const repo = require('./repository')
const { validateFirstName, validateLastName } = require('./validator')
const signupTemplet = require('./signup')

const app = express()
const port = process.env.PORT || 3000

// The body-parser middleware to parse form data
app.use(bodyParser.urlencoded({extended : true}))

// Get route to display HTML form to sign up
app.get('/signup', (req, res) => {
    res.send(signupTemplet({}))
})

// Post route to handle form submission and validation logic
app.post(
    '/signup',
    [validateFirstName, validateLastName],
    async (req, res) => {
        const errors = validationResult(req)
        if(!errors.isEmpty()){
            return res.send(signupTemplet({errors}))
        }
        const {email, fn, ln, password} = req.body
        await repo.create({
            email,
            'First Name': fn,
            'Last Name': ln,
            password
        })
        res.send('Sign Up successfully')
    })

// Server setup
app.listen(port, () => {
    console.log(`Server start on port ${port}`)
})
Filename – repository.js: This file contains all the logic to create a local database and interact with it.
javascript
// Importing node.js file system module
const fs = require('fs')

class Repository {
    constructor(filename) {
        // The filename where data is going to be stored
        if(!filename) {
            throw new Error('Filename is required to create a datastore!')
        }
        this.filename = filename
        try {
            fs.accessSync(this.filename)
        } catch(err) {
            // If the file does not exist, it is created with an empty array
            fs.writeFileSync(this.filename, '[]')
        }
    }

    // Get all existing records
    async getAll(){
        return JSON.parse(
            await fs.promises.readFile(this.filename, {
                encoding : 'utf8'
            })
        )
    }

    // Create new record
    async create(attrs){
        const records = await this.getAll()
        records.push(attrs)
        await fs.promises.writeFile(
            this.filename,
            JSON.stringify(records, null, 2)
        )
        return attrs
    }
}

// The 'datastore.json' file is created at runtime
// and all the information provided via the signup form
// is stored in this file in JSON format.
module.exports = new Repository('datastore.json')
Filename – signup.js: This file contains logic to show sign up form.
javascript
const getError = (errors, prop) => {
    try {
        return errors.mapped()[prop].msg
    } catch (error) {
        return ''
    }
}

module.exports = ({errors}) => {
    return `
    <!DOCTYPE html>
    <html>
    <head>
        <link rel='stylesheet' href='https://cdnjs.cloudflare.com/ajax/libs/bulma/0.9.0/css/bulma.min.css'>
        <style>
            div.columns{
                margin-top: 100px;
            }
            .button{
                margin-top : 10px
            }
        </style>
    </head>
    <body>
        <div class='container'>
            <div class='columns is-centered'>
                <div class='column is-5'>
                    <h1 class='title'>Sign Up</h1>
                    <form method='POST'>
                        <div>
                            <div>
                                <label class='label' id='email'>Username</label>
                            </div>
                            <input class='input' type='text' name='email' placeholder='Email' for='email'>
                        </div>
                        <div>
                            <div>
                                <label class='label' id='fn'>First Name</label>
                            </div>
                            <input class='input' type='text' name='fn' placeholder='First Name' for='fn'>
                            <p class="help is-danger">${getError(errors, 'fn')}</p>
                        </div>
                        <div>
                            <div>
                                <label class='label' id='ln'>Last Name</label>
                            </div>
                            <input class='input' type='text' name='ln' placeholder='Last Name' for='ln'>
                            <p class="help is-danger">${getError(errors, 'ln')}</p>
                        </div>
                        <div>
                            <div>
                                <label class='label' id='password'>Password</label>
                            </div>
                            <input class='input' type='password' name='password' placeholder='Password' for='password'>
                        </div>
                        <div>
                            <button class='button is-primary'>Sign Up</button>
                        </div>
                    </form>
                </div>
            </div>
        </div>
    </body>
    </html>
    `
}
Filename – validator.js: This file contain all the validation logic(Logic to validate a input field to only allow the alphabets).
javascript
const {check} = require('express-validator')
const repo = require('./repository')

module.exports = {
    validateFirstName : check('fn')
        // To delete leading and trailing space
        .trim()
        // Validate the minimum length of the name
        // Optional for this context
        .isLength({min:3})
        // Custom message
        .withMessage('First Name must be 3 characters long')
        // Name must contain only alphabets
        .isAlpha()
        // Custom message
        .withMessage('First Name must be alphabetic'),

    validateLastName : check('ln')
        // To delete leading and trailing space
        .trim()
        // Validate the minimum length of the name
        // Optional for this context
        .isLength({min:2})
        // Custom message
        .withMessage('Last Name must be 2 characters long')
        // Name must contain only alphabets
        .isAlpha()
        // Custom message
        .withMessage('Last Name must be alphabetic')
}
Filename – package.json
package.json file
Output:
Attempt to sign up when first name input field not contain only alphabets
Response when attempt to sign up with input field ‘first name’ which not contain only alphabets
Attempt to sign up when first name and last name input fields that contains only alphabets
Response when attempt to sign up with input field ‘first name’, ‘last name’ that contains only alphabets
Database after successful Sign Up:
Database after successful Sign Up
Note: We have used some Bulma classes (CSS framework) in the signup.js file to design the content. | [
{
"code": null,
"e": 24725,
"s": 24697,
"text": "\n24 Dec, 2021"
},
{
"code": null,
"e": 25155,
"s": 24725,
"text": "In HTML forms, we often required validation of different types. Validate existing email, validate password length, validate confirm password, validate to allow only integer inputs, these are some examples of validation. In a certain input field, only alphabets are allowed i.e. there not allowed any number or special character. We can also validate these input fields to only accept alphabets using express-validator middleware."
},
{
"code": null,
"e": 25193,
"s": 25155,
"text": "Command to install express-validator:"
},
{
"code": null,
"e": 25223,
"s": 25193,
"text": "npm install express-validator"
},
{
"code": null,
"e": 25278,
"s": 25223,
"text": "Steps to use express-validator to implement the logic:"
},
{
"code": null,
"e": 25316,
"s": 25278,
"text": "Install express-validator middleware."
},
{
"code": null,
"e": 25377,
"s": 25316,
"text": "Create a validator.js file to code all the validation logic."
},
{
"code": null,
"e": 25488,
"s": 25377,
"text": "Validate input by validateInputField: check(input field name) and chain on the validation isAlpha() with ‘ . ‘"
},
{
"code": null,
"e": 25590,
"s": 25488,
"text": "Use the validation name(validateInputField) in the routes as a middleware as an array of validations."
},
{
"code": null,
"e": 25683,
"s": 25590,
"text": "Destructure ‘validationResult’ function from express-validator to use it to find any errors."
},
{
"code": null,
"e": 25756,
"s": 25683,
"text": "If error occurs redirect to the same page passing the error information."
},
{
"code": null,
"e": 25832,
"s": 25756,
"text": "If error list is empty, give access to the user for the subsequent request."
},
{
"code": null,
"e": 25998,
"s": 25832,
"text": "Note: Here we use local or custom database to implement the logic, the same steps can be followed to implement the logic in a regular database like MongoDB or MySql."
},
{
"code": null,
"e": 26092,
"s": 25998,
"text": "Example: This example illustrates how to validate an input field to only allow the alphabets."
},
{
"code": null,
"e": 26112,
"s": 26092,
"text": "Filename – index.js"
},
{
"code": null,
"e": 26123,
"s": 26112,
"text": "javascript"
},
{
"code": "const express = require('express')const bodyParser = require('body-parser')const {validationResult} = require('express-validator')const repo = require('./repository')const { validateFirstName, validateLastName } = require('./validator')const signupTemplet = require('./signup') const app = express()const port = process.env.PORT || 3000 // The body-parser middleware to parse form dataapp.use(bodyParser.urlencoded({extended : true})) // Get route to display HTML form to sign upapp.get('/signup', (req, res) => { res.send(signupTemplet({}))}) // Post route to handle form submission logic andapp.post( '/signup', [validateFirstName, validateLastName], async (req, res) => { const errors = validationResult(req) if(!errors.isEmpty()){ return res.send(signupTemplet({errors})) } const {email, fn, ln, password} = req.body await repo.create({ email, 'First Name':fn, 'Last Name': ln, password }) res.send('Sign Up successfully')}) // Server setupapp.listen(port, () => { console.log(`Server start on port ${port}`)})",
"e": 27189,
"s": 26123,
"text": null
},
{
"code": null,
"e": 27297,
"s": 27189,
"text": "Filename – repository.js: This file contains all the logic to create a local database and interact with it."
},
{
"code": null,
"e": 27308,
"s": 27297,
"text": "javascript"
},
{
"code": "// Importing node.js file system moduleconst fs = require('fs') class Repository { constructor(filename) { // The filename where datas are going to store if(!filename) { throw new Error('Filename is required to create a datastore!') } this.filename = filename try { fs.accessSync(this.filename) } catch(err) { // If file not exist it is created with empty array fs.writeFileSync(this.filename, '[]') } } // Get all existing records async getAll(){ return JSON.parse( await fs.promises.readFile(this.filename, { encoding : 'utf8' }) ) } // Create new record async create(attrs){ const records = await this.getAll() records.push(attrs) await fs.promises.writeFile( this.filename, JSON.stringify(records, null, 2) ) return attrs }} // The 'datastore.json' file created at runtime// and all the information provided via signup form// store in this file in JSON format.module.exports = new Repository('datastore.json')",
"e": 28327,
"s": 27308,
"text": null
},
{
"code": null,
"e": 28396,
"s": 28327,
"text": "Filename – signup.js: This file contains logic to show sign up form."
},
{
"code": null,
"e": 28407,
"s": 28396,
"text": "javascript"
},
{
"code": "const getError = (errors, prop) => { try { return errors.mapped()[prop].msg } catch (error) { return '' }} module.exports = ({errors}) => { return ` <!DOCTYPE html> <html> <head> <link rel='stylesheet'href='https://cdnjs.cloudflare.com/ajax/libs/bulma/0.9.0/css/bulma.min.css'> <style> div.columns{ margin-top: 100px; } .button{ margin-top : 10px } </style> </head> <body> <div class='container'> <div class='columns is-centered'> <div class='column is-5'> <h1 class='title'>Sign Up<h1> <form method='POST'> <div> <div> <label class='label' id='email'>Username</label> </div> <input class='input' type='text' name='email' placeholder='Email' for='email'> </div> <div> <div> <label class='label' id='fn'>First Name</label> </div> <input class='input' type='text' name='fn' placeholder='First Name' for='fn'> <p class=\"help is-danger\">${getError(errors, 'fn')}</p> </div> <div> <div> <label class='label' id='ln'>Last Name</label> </div> <input class='input' type='text' name='ln' placeholder='Last Name' for='ln'> <p class=\"help is-danger\">${getError(errors, 'ln')}</p> </div> <div> <div> <label class='label' id='password'>Password</label> </div> <input class='input' type='password' name='password' placeholder='Password' for='password'> </div> <div> <button class='button is-primary'>Sign Up</button> </div> </form> </div> </div> </div> </body> </html> `}",
"e": 30540,
"s": 28407,
"text": null
},
{
"code": null,
"e": 30670,
"s": 30540,
"text": "Filename – validator.js: This file contain all the validation logic(Logic to validate a input field to only allow the alphabets)."
},
{
"code": null,
"e": 30681,
"s": 30670,
"text": "javascript"
},
{
"code": "const {check} = require('express-validator')const repo = require('./repository')module.exports = { validateFirstName : check('fn') // To delete leading and trailing space .trim() // Validate the minimum length of the password // Optional for this context .isLength({min:3}) // Custom message .withMessage('First Name must be 3 characters long') // Name must contains only alphabets .isAlpha() // Custom message .withMessage('First Name must be alphabetic'), validateLastName : check('ln') // To delete leading and trailing space .trim() // Validate the minimum length of the password // Optional for this context .isLength({min:2}) // Custom message .withMessage('Last Name must be 2 characters long') // Name must contains only alphabets .isAlpha() // Custom message .withMessage('Last Name must be alphabetic')}",
"e": 31581,
"s": 30681,
"text": null
},
{
"code": null,
"e": 31605,
"s": 31581,
"text": "Filename – package.json"
},
{
"code": null,
"e": 31623,
"s": 31605,
"text": "package.json file"
},
{
"code": null,
"e": 31631,
"s": 31623,
"text": "Output:"
},
{
"code": null,
"e": 31706,
"s": 31631,
"text": "Attempt to sign up when first name input field not contain only alphabets"
},
{
"code": null,
"e": 31802,
"s": 31706,
"text": "Response when attempt to sign up with input field ‘first name’ which not contain only alphabets"
},
{
"code": null,
"e": 31893,
"s": 31802,
"text": "Attempt to sign up when first name and last name input fields that contains only alphabets"
},
{
"code": null,
"e": 31998,
"s": 31893,
"text": "Response when attempt to sign up with input field ‘first name’, ‘last name’ that contains only alphabets"
},
{
"code": null,
"e": 32033,
"s": 31998,
"text": "Database after successful Sign Up:"
},
{
"code": null,
"e": 32067,
"s": 32033,
"text": "Database after successful Sign Up"
},
{
"code": null,
"e": 32165,
"s": 32067,
"text": "Note: We have used some Bulma classes(CSS framework) in the signup.js file to design the content."
},
{
"code": null,
"e": 32175,
"s": 32165,
"text": "kk9826225"
},
{
"code": null,
"e": 32186,
"s": 32175,
"text": "Express.js"
},
{
"code": null,
"e": 32199,
"s": 32186,
"text": "Node.js-Misc"
},
{
"code": null,
"e": 32207,
"s": 32199,
"text": "Node.js"
},
{
"code": null,
"e": 32224,
"s": 32207,
"text": "Web Technologies"
},
{
"code": null,
"e": 32251,
"s": 32224,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 32349,
"s": 32251,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 32358,
"s": 32349,
"text": "Comments"
},
{
"code": null,
"e": 32371,
"s": 32358,
"text": "Old Comments"
},
{
"code": null,
"e": 32400,
"s": 32371,
"text": "Node.js fs.readFile() Method"
},
{
"code": null,
"e": 32430,
"s": 32400,
"text": "Node.js fs.writeFile() Method"
},
{
"code": null,
"e": 32487,
"s": 32430,
"text": "How to install the previous version of node.js and npm ?"
},
{
"code": null,
"e": 32541,
"s": 32487,
"text": "Difference between promise and async await in Node.js"
},
{
"code": null,
"e": 32578,
"s": 32541,
"text": "How to use an ES6 import in Node.js?"
},
{
"code": null,
"e": 32634,
"s": 32578,
"text": "Top 10 Front End Developer Skills That You Need in 2022"
},
{
"code": null,
"e": 32696,
"s": 32634,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 32739,
"s": 32696,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 32789,
"s": 32739,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
]
|
Extract rows from R DataFrame based on factors - GeeksforGeeks | 23 May, 2021
In this article, we will discuss how to extract rows from dataframe based on factors in R Programming Language.
The data frame column can be accessed using its name (df$col-name) or by its index (df[[ col-indx ]]) to access a particular column. The data frame columns may contain values as factors by explicit conversion using the factor() method. The specific rows can then be accessed using indexing methods.
Syntax:
df[ df$col-name == val , ]
The rows whose value in that column satisfies the condition will be returned as the output.
Example:
R
# declaring a data frame
data_frame = data.frame(col1 = factor(c("A","z","z","c","e")),
                        col2 = factor(c(4:8)))

print ("Original dataframe")
print (data_frame)

sapply(data_frame , class)

# rows where col1 has the factor level "z"
data_frame_mod <- data_frame[data_frame$col1=="z",]

print ("Modified dataframe")
print (data_frame_mod)

sapply(data_frame_mod , class)
Output
[1] "Original dataframe"
col1 col2
1 A 4
2 z 5
3 z 6
4 c 7
5 e 8
col1 col2
"factor" "factor"
[1] "Modified dataframe"
col1 col2
2 z 5
3 z 6
col1 col2
"factor" "factor"
Rows for multiple factor levels can also be accessed using the indexing method. The factor column values can also be validated against a vector of values using the %in% operator, which checks whether a value exists in the input vector. It returns the boolean value TRUE in case the value is contained in the vector.
Syntax:
val %in% vec
Example:
R
# declaring a data frame
data_frame = data.frame(col1 = factor(letters[1:5]),
                        col2 = factor(c(4:8)))

print ("Original dataframe")
print (data_frame)

sapply(data_frame , class)

# rows where col2 is either 4 or 6
data_frame_mod <- data_frame[data_frame$col2 %in% c(4 , 6),]

print ("Modified dataframe")
print (data_frame_mod)
sapply(data_frame_mod , class)
Output
[1] "Original dataframe"
col1 col2
1 a 4
2 b 5
3 c 6
4 d 7
5 e 8
col1 col2
"factor" "factor"
[1] "Modified dataframe"
col1 col2
1 a 4
3 c 6
col1 col2
"factor" "factor"
The subset() method in R is used to return the rows satisfying the constraints mentioned. Both single and multiple factor levels can be returned using this method. The row numbers in the original data frame are retained in order. The factor column values can be validated for a mentioned condition. The output has to be stored in a variable in order to preserve the changes.
Syntax:
subset ( df , condition )
Conditions may contain logical operators such as ==, !=, >, < to compare the factor levels contained within the columns.
Example:
R
# declaring a data frame
data_frame = data.frame(col1 = factor(letters[1:5]),
                        col2 = factor(c(4:8)))

print ("Original dataframe")
print (data_frame)

sapply(data_frame , class)

# rows where col2 is either 4 or 6
data_frame_mod <- subset(data_frame, col2 %in% c(4 , 6))

print ("Modified dataframe")
print (data_frame_mod)
sapply(data_frame_mod , class)
Output
[1] "Original dataframe"
col1 col2
1 a 4
2 b 5
3 c 6
4 d 7
5 e 8
col1 col2
"factor" "factor"
[1] "Modified dataframe"
col1 col2
1 a 4
3 c 6
col1 col2
"factor" "factor"
Picked
R DataFrame-Programs
R-DataFrame
R Language
R Programs
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
Change Color of Bars in Barchart using ggplot2 in R
How to Change Axis Scales in R Plots?
Group by function in R using Dplyr
How to Split Column Into Multiple Columns in R DataFrame?
How to filter R DataFrame by values in a column?
How to Split Column Into Multiple Columns in R DataFrame?
How to filter R DataFrame by values in a column?
Replace Specific Characters in String in R
How to filter R dataframe by multiple conditions?
Convert Matrix to Dataframe in R | [
{
"code": null,
"e": 24851,
"s": 24823,
"text": "\n23 May, 2021"
},
{
"code": null,
"e": 24963,
"s": 24851,
"text": "In this article, we will discuss how to extract rows from dataframe based on factors in R Programming Language."
},
{
"code": null,
"e": 25263,
"s": 24963,
"text": "The data frame column can be accessed using its name (df$col-name) or by its index (df[[ col-indx ]]) to access a particular column. The data frame columns may contain values as factors by explicit conversion using the factor() method. The specific rows can then be accessed using indexing methods. "
},
{
"code": null,
"e": 25271,
"s": 25263,
"text": "Syntax:"
},
{
"code": null,
"e": 25298,
"s": 25271,
"text": "df[ df$col-name == val , ]"
},
{
"code": null,
"e": 25391,
"s": 25298,
"text": "The rows which satisfy this particular column condition value will be returned as an output."
},
{
"code": null,
"e": 25401,
"s": 25391,
"text": "Example: "
},
{
"code": null,
"e": 25403,
"s": 25401,
"text": "R"
},
{
"code": "# declaring a data framedata_frame = data.frame(col1 = factor(c(\"A\",\"z\",\"z\",\"c\",\"e\")), col2 = factor(c(4:8))) print (\"Original dataframe\")print (data_frame) sapply(data_frame , class) # where column sum is greater than 10data_frame_mod <- data_frame[data_frame$col1==\"z\",] print (\"Modified dataframe\")print (data_frame_mod) sapply(data_frame_mod , class)",
"e": 25787,
"s": 25403,
"text": null
},
{
"code": null,
"e": 25794,
"s": 25787,
"text": "Output"
},
{
"code": null,
"e": 26021,
"s": 25794,
"text": "[1] \"Original dataframe\"\n col1 col2\n1 A 4\n2 z 5\n3 z 6\n4 c 7\n5 e 8\n col1 col2\n\"factor\" \"factor\"\n[1] \"Modified dataframe\"\n col1 col2\n2 z 5\n3 z 6\n col1 col2\n\"factor\" \"factor\" "
},
{
"code": null,
"e": 26361,
"s": 26021,
"text": "Multiple factor level rows can also be accessed using indexing method. The factor column values can be also validated against a vector containing values using the %in% operator, which is used to check the existence of the value encountered in the input vector. It returns a boolean value TRUE in case the value is contained in the vector. "
},
{
"code": null,
"e": 26369,
"s": 26361,
"text": "Syntax:"
},
{
"code": null,
"e": 26382,
"s": 26369,
"text": "val %in% vec"
},
{
"code": null,
"e": 26391,
"s": 26382,
"text": "Example:"
},
{
"code": null,
"e": 26393,
"s": 26391,
"text": "R"
},
{
"code": "# declaring a data framedata_frame = data.frame(col1 = factor(letters[1:5]), col2 = factor(c(4:8))) print (\"Original dataframe\")print (data_frame) sapply(data_frame , class) # where column sum is greater than 10data_frame_mod <- data_frame[data_frame$col2 %in% c(4 , 6),] print (\"Modified dataframe\")print (data_frame_mod)sapply(data_frame_mod , class)",
"e": 26774,
"s": 26393,
"text": null
},
{
"code": null,
"e": 26781,
"s": 26774,
"text": "Output"
},
{
"code": null,
"e": 27004,
"s": 26781,
"text": "[1] \"Original dataframe\"\ncol1 col2\n1 a 4\n2 b 5\n3 c 6\n4 d 7\n5 e 8\n col1 col2\n\"factor\" \"factor\"\n[1] \"Modified dataframe\"\ncol1 col2\n1 a 4\n3 c 6\n col1 col2\n\"factor\" \"factor\" "
},
{
"code": null,
"e": 27380,
"s": 27004,
"text": "The subset() method in R is used to return the rows satisfying the constraints mentioned. Both single and multiple factor levels can be returned using this method. The row numbers in the original data frame are retained in order. The factor column values can be validated for a mentioned condition. The output has to be stored in a variable in order to preserve the changes. "
},
{
"code": null,
"e": 27388,
"s": 27380,
"text": "Syntax:"
},
{
"code": null,
"e": 27414,
"s": 27388,
"text": "subset ( df , condition )"
},
{
"code": null,
"e": 27540,
"s": 27414,
"text": "Conditions may contain logical operators == , != , > , < operators to compare the factor levels contained within the columns."
},
{
"code": null,
"e": 27549,
"s": 27540,
"text": "Example:"
},
{
"code": null,
"e": 27551,
"s": 27549,
"text": "R"
},
{
"code": "# declaring a data framedata_frame = data.frame(col1 = factor(letters[1:5]), col2 = factor(c(4:8))) print (\"Original dataframe\")print (data_frame) sapply(data_frame , class) # where column sum is greater than 10data_frame_mod <- subset(data_frame, col2 %in% c(4 , 6))print (\"Modified dataframe\")print (data_frame_mod)sapply(data_frame_mod , class)",
"e": 27926,
"s": 27551,
"text": null
},
{
"code": null,
"e": 27933,
"s": 27926,
"text": "Output"
},
{
"code": null,
"e": 28160,
"s": 27933,
"text": "[1] \"Original dataframe\"\n col1 col2\n1 a 4\n2 b 5\n3 c 6\n4 d 7\n5 e 8\n col1 col2\n\"factor\" \"factor\"\n[1] \"Modified dataframe\"\n col1 col2\n1 a 4\n3 c 6\n col1 col2\n\"factor\" \"factor\" "
},
{
"code": null,
"e": 28167,
"s": 28160,
"text": "Picked"
},
{
"code": null,
"e": 28188,
"s": 28167,
"text": "R DataFrame-Programs"
},
{
"code": null,
"e": 28200,
"s": 28188,
"text": "R-DataFrame"
},
{
"code": null,
"e": 28211,
"s": 28200,
"text": "R Language"
},
{
"code": null,
"e": 28222,
"s": 28211,
"text": "R Programs"
},
{
"code": null,
"e": 28320,
"s": 28222,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28329,
"s": 28320,
"text": "Comments"
},
{
"code": null,
"e": 28342,
"s": 28329,
"text": "Old Comments"
},
{
"code": null,
"e": 28394,
"s": 28342,
"text": "Change Color of Bars in Barchart using ggplot2 in R"
},
{
"code": null,
"e": 28432,
"s": 28394,
"text": "How to Change Axis Scales in R Plots?"
},
{
"code": null,
"e": 28467,
"s": 28432,
"text": "Group by function in R using Dplyr"
},
{
"code": null,
"e": 28525,
"s": 28467,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 28574,
"s": 28525,
"text": "How to filter R DataFrame by values in a column?"
},
{
"code": null,
"e": 28632,
"s": 28574,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 28681,
"s": 28632,
"text": "How to filter R DataFrame by values in a column?"
},
{
"code": null,
"e": 28724,
"s": 28681,
"text": "Replace Specific Characters in String in R"
},
{
"code": null,
"e": 28774,
"s": 28724,
"text": "How to filter R dataframe by multiple conditions?"
}
]
|
Create a mirror image with CSS | The flip effect is used to create a mirror image of the object. Two parameters can be used in this filter: FlipH, which flips the element horizontally, and FlipV, which flips it vertically. Note that this Filter property is a legacy, Internet Explorer-only feature; modern browsers achieve the same mirror effect with transform: scaleX(-1) or scaleY(-1).
You can try to run the following code to create a mirror image.
<html>
<head>
</head>
<body>
<img src="/css/images/logo.png" alt="CSS Logo" style="Filter: FlipH">
<img src="/css/images/logo.png" alt="CSS Logo" style="Filter: FlipV">
<p>Text Example:</p>
<div style="width: 300;
height: 50;
font-size: 30pt;
font-family: Arial Black;
color: red;
Filter: FlipV">CSS Tutorials</div>
</body>
</html>
Machine Learning Resampling Techniques for Class Imbalances | by Allison Kelly | Towards Data Science | Let’s face it. Inequality sucks. And I’m not even talking about the fact that African Americans and Hispanics make up 56% of the American prison population despite being 32% of the total population or that the combined wealth of Bill Gates, Jeff Bezos, and Warren Buffet is more than the combined wealth of the bottom 50% of Americans.
As much as I’d like to rage about those facts, for now I’m talking about class imbalance in the context of machine learning classification models. According to Wikipedia, algorithmic bias “can emerge due to many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm.” Training your model with highly imbalanced data can favor the majority class which can have serious implications.
Take Kaggle’s Cervical Cancer Risk Classification dataset.
Only 2% of all diagnoses included in the dataset were found to be cancer. After fitting a simple Random Forest classification model, the accuracy of the model was 99%!
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score, classification_report
from sklearn.model_selection import train_test_split

X = df.drop('Cancer', axis=1).dropna()
y = df['Cancer'].dropna()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

random_forest = RandomForestClassifier(n_estimators=500).fit(X_train, y_train)
y_pred = random_forest.predict(X_test)

print('Accuracy score: ' + str(accuracy_score(y_test, y_pred)))
print('Recall score: ' + str(recall_score(y_test, y_pred)))
print(classification_report(y_test, y_pred))
That can’t quite be possible. Let’s dig a little deeper. Let’s only include two of the 36 columns in the model.
X = df.loc[:, ['Smokes', 'Hormonal Contraceptives']]
y = df['Cancer'].dropna()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

random_forest = RandomForestClassifier(n_estimators=500).fit(X_train, y_train)
y_pred = random_forest.predict(X_test)

print('Accuracy score: ' + str(accuracy_score(y_test, y_pred)))
print('Recall score: ' + str(recall_score(y_test, y_pred)))
print(classification_report(y_test, y_pred))
But the accuracy only dropped to 97%! There’s very little chance that the two features we chose randomly can predict cancer with 97% accuracy.
We already know that 2% of patients were diagnosed with cancer, but our second model predicts none of the patients would have cancer. See how this can become a problem?
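To make the trap explicit, here is a minimal sketch (my addition, not part of the original analysis) that scores a baseline which always predicts the majority class; it reuses the X_train, y_train, X_test and y_test splits created above:

from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, recall_score

# A "classifier" that always predicts the majority class ("no cancer")
dummy = DummyClassifier(strategy='most_frequent').fit(X_train, y_train)
dummy_preds = dummy.predict(X_test)

# Accuracy looks great because roughly 98% of the labels are negative...
print('Baseline accuracy:', accuracy_score(y_test, dummy_preds))

# ...but recall is zero: the baseline never catches a single cancer case
print('Baseline recall:', recall_score(y_test, dummy_preds))

Any model that cannot beat this baseline on recall is, for diagnostic purposes, useless.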
In serious medical diagnosis models, punitive models such as COMPAS, the recidivism risk model, and fraud detection, there are human lives that hang in the balance. In these cases, it’s best to err on the side of caution and protect as many people as possible — whether that means lowering the number of false positives or false negatives.
When thinking about how to optimize these kinds of models, there are a few metrics we can interpret.
Recall: Taking the cervical cancer risk dataset as an example, you can ask yourself of the model — out of all the patients that were actually diagnosed with cancer, what percentage did our model predict as having cancer?
However, recall doesn’t give you the entire picture. If your model categorized everyone as having cancer, including those who don’t, your recall score would be 100%. A low recall score is indicative of a high number of false negatives.
Precision: Precision asks the opposite question — out of all the patients the model predicted as having cancer, how many actually did have cancer?
If your model predicted 10 patients had cancer our score would be 100% if all of the predictions were correct, even if a thousand more patients went undiagnosed. A low precision score is indicative of a high number of false positives.
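Both metrics fall straight out of the confusion matrix. As a quick sketch (reusing the y_test and y_pred arrays from the models above), you can compute them by hand and check them against scikit-learn's helpers:

from sklearn.metrics import confusion_matrix, precision_score, recall_score

# For a binary problem, confusion_matrix returns
# [[TN, FP],
#  [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

# Recall: of all patients who actually have cancer, how many did we flag?
print('Recall:', tp / (tp + fn))

# Precision: of all patients we flagged, how many actually have cancer?
# (This divides by zero if the model never predicts the positive class.)
print('Precision:', tp / (tp + fp))

# scikit-learn computes the same quantities
print(recall_score(y_test, y_pred), precision_score(y_test, y_pred))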
Accuracy: One of the more robust evaluation metrics is Accuracy, as it measures the total number of true predictions, both positive and negative. It’s the most common metric for classification tasks.
F1 Score: The F1 Score is another extremely informative metric. Because it measures the “harmonic mean of precision and recall,” it cannot be high without both precision and recall being high, indicative of an overall well-performing model. However, the F1 score can be formatted to account for binary, multiclass, and imbalanced classification problems with the following parameters in the sklearn.metrics.f1_score method (a short comparison sketch follows the list):
Binary — To be used with binary classification problems.
Micro — Counts the total true positives, false negatives and false positives.
Macro — Calculates the unweighted mean of all classes (for multiclass problems.)
Weighted — Accounts for class imbalances by weighing the true positives for each class and taking the average score.
Samples — Finds the average score of each metric for each class.
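Here is the comparison sketch promised above: a toy example with made-up labels (not the cervical cancer data) that shows how the averaging modes behave on the same predictions:

from sklearn.metrics import f1_score

# Toy, heavily imbalanced labels: eight negatives, two positives
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_hat  = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

# 'binary' scores only the positive class; 'micro' pools all counts;
# 'macro' averages the per-class scores equally; 'weighted' weights
# them by class frequency ('samples' applies only to multilabel data)
for avg in ['binary', 'micro', 'macro', 'weighted']:
    print(avg, f1_score(y_true, y_hat, average=avg))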
Equipped with this information, we’ll optimize for the binary F1 score, since we only have two classes. You may be thinking “why not use the weighted F1 score if our data is so severely imbalanced?” That’s where resampling methods come in!
We’ll explore three methods (though there are many more out there) that are simple and useful: undersampling the majority, oversampling the minority, and SMOTE (synthetic minority oversampling technique). Each method we’ll be using aims to create a training set with a 50–50 distribution, since we’re working with a binary classification problem. These methods can be used to create a 25–25–25–25 distribution for a four-class multi-class problem, regardless of the initial distribution of classes, or another ratio that may train your model with better results.
Make sure to split your data into training and testing sets BEFORE resampling! If you don’t, you’ll be compromising the quality of your model from data leakage, causing overfitting and poor generalization.
# Import the resampling package
from sklearn.utils import resample

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# Returning to one dataframe
training_set = pd.concat([X_train, y_train], axis=1)

# Separating classes
cancer = training_set[training_set.Cancer == 1]
not_cancer = training_set[training_set.Cancer == 0]
Undersampling the Majority
Undersampling can be defined as reducing the number of the majority class. This technique is best used on data where you have thousands if not millions of datapoints. Typically, you wouldn’t want to reduce the amount of data you’re working with, but if you can sacrifice some training data, this technique will be useful. Here’s how it works on the cervical cancer dataset:
# Undersampling the majority
undersample = resample(not_cancer,
                       replace=True,
                       # set the number of samples to equal
                       # the number of the minority class
                       n_samples=len(cancer),
                       random_state=42)

# Returning to new training set
undersample_train = pd.concat([cancer, undersample])

undersample_train.Cancer.value_counts(normalize=True)
We’ve got our evenly distributed classes! Now we can test it on the Random Forest classifier.
# Separate undersampled data into X and y sets
undersample_x_train = undersample_train.drop('Cancer', axis=1)
undersample_y_train = undersample_train.Cancer

# Fit model on undersampled data
undersample_rf = RandomForestClassifier(n_estimators=500).fit(
    undersample_x_train, undersample_y_train)

# Make predictions on the test set with the undersampled model
y_pred = undersample_rf.predict(X_test)

print('Accuracy score: ' + str(accuracy_score(y_test, y_pred)))
print('Average Recall score: ' + str(recall_score(y_test, y_pred, average='macro')))
print(classification_report(y_test, y_pred))
Not exactly the best results, but because we reduced the number of the majority class, we trained our model on only 28 instances, which is far too small a sample size. Next, we’ll try oversampling the minority.
Oversampling the Minority
Oversampling the minority will increase the number of datapoints in the minority class, again aiming to evenly distribute the classes in the training set. We’ll repeat the same process as before.
# Oversampling the minority
oversample = resample(cancer,
                      replace=True,
                      # set the number of samples to equal
                      # the number of the majority class
                      n_samples=len(not_cancer),
                      random_state=42)

# Returning to new training set
oversample_train = pd.concat([not_cancer, oversample])

oversample_train.Cancer.value_counts(normalize=True)
# Separate oversampled data into X and y sets
oversample_x_train = oversample_train.drop('Cancer', axis=1)
oversample_y_train = oversample_train.Cancer

# Fit model on oversampled data
oversample_rf = RandomForestClassifier(n_estimators=500).fit(
    oversample_x_train, oversample_y_train)

# Make predictions on the test set
y_pred = oversample_rf.predict(X_test)

print('Accuracy score: ' + str(accuracy_score(y_test, y_pred)))
print('Average Recall score: ' + str(recall_score(y_test, y_pred, average='macro')))
print(classification_report(y_test, y_pred))
Unfortunately, our results are only marginally better. We have one more technique to try.
SMOTE (synthetic minority oversampling technique)
SMOTE synthesizes datapoints from the existing pool of the minority class and adds them to the dataset. This technique ensures there’s very little data leakage by creating new, unseen datapoints for the model to train on.
# Import the SMOTE package
from imblearn.over_sampling import SMOTE

# Synthesize minority class datapoints using SMOTE
sm = SMOTE(random_state=42, sampling_strategy='minority')
smote_x_train, smote_y_train = sm.fit_resample(X_train, y_train)

# Return the resampled arrays to DataFrames
smote_x_train = pd.DataFrame(smote_x_train, columns=X_train.columns)
smote_y_train = pd.DataFrame(smote_y_train, columns=['Cancer'])

smote = RandomForestClassifier(n_estimators=1000).fit(smote_x_train, smote_y_train)

# Predict on the test set
smote_preds = smote.predict(X_test)

# Checking accuracy and recall
print('Accuracy Score: ', accuracy_score(y_test, smote_preds), '\n\n')
print('Averaged Recall Score: ', recall_score(y_test, smote_preds, average='macro'), '\n\n')

print(classification_report(y_test, smote_preds))
Accuracy and f1 score increased, however the recall score dropped slightly. Depending on your use case for the model, at this point you’ll have to decide which model protects the most people.
One of the best ways to visualize the quality of your model is by examining the ROC curve. ROC (Receiver Operating Characteristic) curves plot the true positive rate, TPR, against the false positive rate, FPR. The best prediction would be found at the point (0,1) where there are no false positives (costs) and 100% true positives (benefits).
Included in the plot is the line of no-discrimination, which demonstrates random guessing, similar to the probability of a coin flip. Points above the line are “good” guesses, as they are closer to a perfect outcome at the point (0,1). The opposite is also true, where points below the line indicate poor predictions.
AUC, or area under the curve, is a quantitative measure of the degree of separability. The closer the predictions are to the ideal outcome, the larger the AUC would be. When AOC tells us not to settle for less, she’s telling us to aim for an AUC close to 1 or 100%.
Here is an example with the Random Forest Classifier trained with the SMOTE resampled training set (read the documentation here):
from sklearn.metrics import roc_curve, auc

y_score = smote.fit(smote_x_train, smote_y_train).predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, y_score)
print('AUC: {}'.format(auc(fpr, tpr)))
Not too bad! There’s still plenty of room for improvement, but we can see what this looks like in a plot:
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 8))
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve')
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.yticks([i / 20.0 for i in range(21)])
plt.xticks([i / 20.0 for i in range(21)])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic (ROC) Curve')
plt.legend(loc='lower right')
plt.show()
Ideally, we’d like to see the orange line far steeper and not level out until it’s much closer to (0,1) but this is a great starting point.
As with all machine learning projects, the process is iterative. There are still other ways to resample and validate your model that should be explored before you decide which to move forward with. Think thoroughly about which performance metric fits your purpose, choose an algorithm that excels when it comes to your type of data, and evaluate your model based on the one that can do the least harm once it’s deployed.
Comment below and tell me other ways you’ve resampled your data!
Number Of Open Doors | Consider a long alley with N doors on one side. All the doors are closed initially. You move to and fro in the alley, changing the states of the doors as follows: you open a door that is already closed, and you close a door that is already opened. You start at one end, go on altering the state of the doors till you reach the other end, and then you come back and start altering the states of the doors again.
In the first go, you alter the states of doors numbered 1, 2, 3, ... , n.
In the second go, you alter the states of doors numbered 2, 4, 6...
In the third go, you alter the states of doors numbered 3, 6, 9 ...
You continue this till the Nth go in which you alter the state of the door numbered N.
You have to find the number of open doors at the end of the procedure.
Example 1:
Input:
N = 2
Output:
1
Explanation:
Initially all doors are closed.
After 1st go, all doors will be opened.
After 2nd go second door will be closed.
So, Only 1st door will remain Open.
Example 2:
Input:
N = 4
Output:
2
Explanation:
Following the sequence 4 times, we can
see that only 1st and 4th doors will
remain open.
Your Task:
You don't need to read input or print anything. Your task is to complete the function noOfOpenDoors() which takes an Integer N as input and returns the answer.
Expected Time Complexity: O(1)
Expected Auxiliary Space: O(1)
Constraints:
1 <= N <= 10^12
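Why the answer is simply floor(sqrt(N)): in the i-th go you toggle door k exactly when i divides k, so door k is toggled once per divisor of k and ends up open only when k has an odd number of divisors, which happens precisely when k is a perfect square. The perfect squares up to N number floor(sqrt(N)). A brute-force Python sketch (illustrative only, much slower than the expected O(1) solution) confirms this for small N:

import math

def open_doors_brute_force(n):
    doors = [False] * (n + 1)          # False = closed; index 0 is unused
    for step in range(1, n + 1):       # the i-th go toggles doors i, 2i, 3i, ...
        for k in range(step, n + 1, step):
            doors[k] = not doors[k]
    return sum(doors)

# The simulation agrees with floor(sqrt(N)) for every small N
# (math.isqrt needs Python 3.8+)
for n in range(1, 200):
    assert open_doors_brute_force(n) == math.isqrt(n)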
0
badgujarsachin83, 1 month ago
int noOfOpenDoors(long long N) {
// code here
int res=sqrt(N);
return res;
}
0
aakasshuit, 2 months ago
//Java Solution
int ans =(int) Math.sqrt(N);
return ans;
0
gayathrisrujanareddy, 2 months ago
class Solution:
def noOfOpenDoors(self, N):
# code here
ans=math.sqrt(N)
return int(ans)
0
ramratan70047, 3 months ago
# Python solution
#Divyanshu Kumar(Asansol Engineering college,ECE) Madhubani wala
def noOfOpenDoors(self, N):
# code here
t=math.sqrt(N)
return int(t)
-1
sairish2001, 4 months ago
int noOfOpenDoors(long long N) {
return pow(N, 0.5);
}
0
lindan123, 6 months ago
int noOfOpenDoors(long long N) {
// code here
int ans= sqrt(N);
return ans;
}
0
sangramranshing57, 6 months ago
class Solution {
public:
int noOfOpenDoors(long long N) {
int a = sqrt(N);
return a;
}
};
0
as7663, 6 months ago
return sqrt(N);
A gate will be open in the end if and only if it has an odd number of factors.
Only perfect squares have the odd number of factors.
0
euhidaman, 6 months ago
My JAVA solution -->
class Solution {
static int noOfOpenDoors(Long N) {
// code here
return (int)Math.sqrt(N);
}
};
0
suryabhanmaurya571, 7 months ago
Time Complexity - O(1)
class Solution:
    def noOfOpenDoors(self, N):
        # code here
        sq = math.sqrt(N)
        return int(sq)
We take the square root because, for example, every N from 4 to 8 gives the answer 2, and every N from 9 to 15 gives the answer 3.
How to disable browser autofill on input fields using jQuery? | 21 Dec, 2020
In this article, we will see how to disable the browser autofill property on input fields. For that, an HTML page is created in which the jQuery CDN is imported, and the jQuery code is written in it.
Approach:
Create a basic HTML page having at least one input field in it with the “id” attribute.
Import jQuery CDN from script tag to use jQuery on the page.
Then write the jQuery code in the script tag for disabling autofill on the input field.
To achieve this, we use two methods of jQuery to set the attribute value to the field:
attr() Method
prop() method
Example 1: Using attr() method
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">

    <!-- Import jQuery cdn library -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js">
    </script>

    <script>
        // Execute this code when page is totally loaded
        $(document).ready(function () {

            /* Setting the autocomplete of input field
               to off to make autofill to disable */
            $("#name").attr("autocomplete", "off");
        });
    </script>
</head>

<body>
    <label for="name">Name:</label>
    <input type="text" name="name" id="name">
</body>

</html>
Output:
This will be the output of the code.
Example 2: Using prop() method
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Disable Autofill</title>

    <!-- Import jQuery cdn library -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js">
    </script>

    <script>
        // Execute this code when page is totally loaded
        $(document).ready(function () {

            /* Setting the autocomplete of input field
               to off to make autofill to disable */
            $("#name").prop("autocomplete", "off");
        });
    </script>
</head>

<body>
    <label for="name">Name:</label>
    <input type="text" name="name" id="name">
</body>

</html>
Output:
Output of the code; the field no longer shows autofill suggestions.
How to Produce a DeepFake Video in 5 Minutes | by Dimitris Poulopoulos | Towards Data Science | Do you dance? Do you have a favourite dancer or performer that you want to see yourself copying their moves? Well, now you can!
Imagine having a full-body picture of yourself. Just a still image. Then all you need is a solo video of your favourite dancer performing some moves. Not that hard now that TikTok is taking over the world...
Image animation uses a video sequence to drive the motion of an object in a picture. In this story, we see how image animation technology is now ridiculously easy to use, and how you can animate almost anything you can think of. To this end, I transformed the source code of a relevant publication into a simple script, creating a thin wrapper that anyone can use to produce DeepFakes. With a source image and the right driving video, everything is possible.
Learning Rate is my weekly newsletter for those who are curious about the world of AI and MLOps. You’ll hear from me every Friday with updates and thoughts on the latest AI news, research, repos and books. Subscribe here!
In this article, we talk about a new publication (2019), part of Advances in Neural Information Processing Systems 32 (NIPS 2019), called “First Order Motion Model for Image Animation” [1]. In this paper, the authors, Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci and Nicu Sebe, present a novel way to animate a source image given a driving video, without any additional information or annotation about the object to animate.
Under the hood, they use a neural network trained to reconstruct a video, given a source frame (still image) and a latent representation of the motion in the video, which is learned during training. At test time, the model takes as input a new source image and a driving video (e.g. a sequence of frames) and predicts how the object in the source image moves according to the motion depicted in these frames.
The model tracks everything that is interesting in an animation: head movements, talking, eye tracking and even body action. For example, let us look at the GIF below: president Trump drives the cast of Game of Thrones to talk and move like him.
Before creating our own sequences, let us explore this approach a bit further. First, the training data set is a large collection of videos. During training, the authors extract frame pairs from the same video and feed them to the model. The model tries to reconstruct the video by somehow learning what are the key points in the pairs and how to represent the motion between them.
To this end, the framework consists of two models: the motion estimator and the video generator. Initially, the motion estimator tries to learn a latent representation of the motion in the video. This is encoded as motion-specific key point displacements (where key points can be the position of eyes or mouth) and local affine transformations. This combination can model a larger family of transformations instead of only using the key point displacements. The output of the model is two-fold: a dense motion field and an occlusion mask. This mask defines which parts of the driving video can be reconstructed by warping the source image, and which parts should be inferred by the context because they are not present in the source image (e.g. the back of the head). For instance, consider the fashion GIF below. The back of each model is not present in the source picture, thus, it should be inferred by the model.
Next, the video generator takes as input the output of the motion estimator and the source image and animates it according to the driving video; it warps the source image in ways that resemble the driving video and inpaints the parts that are occluded. Figure 1 depicts the framework architecture.
The source code of this paper is on GitHub. What I did is create a simple shell script, a thin wrapper, that utilizes the source code and can be used easily by everyone for quick experimentation.
To use it, first, you need to install the module. Run pip install deep-animator to install the library in your environment. Then, we need four items:
The model weights; of course, we do not want to train the model from scratch. Thus, we need the weights to load a pre-trained model.
A YAML configuration file for our model.
A source image; this could be for example a portrait.
A driving video; to start, it is best to download a video with a clearly visible face.
To get some results quickly and test the performance of the algorithm you can use this source image and this driving video. The model weights can be found here. A simple YAML configuration file is given below. Open a text editor, copy and paste the following lines and save it as conf.yml.
model_params:
  common_params:
    num_kp: 10
    num_channels: 3
    estimate_jacobian: True
  kp_detector_params:
    temperature: 0.1
    block_expansion: 32
    max_features: 1024
    scale_factor: 0.25
    num_blocks: 5
  generator_params:
    block_expansion: 64
    max_features: 512
    num_down_blocks: 2
    num_bottleneck_blocks: 6
    estimate_occlusion_map: True
    dense_motion_params:
      block_expansion: 64
      max_features: 1024
      num_blocks: 5
      scale_factor: 0.25
  discriminator_params:
    scales: [1]
    block_expansion: 32
    max_features: 512
    num_blocks: 4
Now, we are ready to have a statue mimic Leonardo DiCaprio! To get your results just run the following command.
deep_animate <path_to_the_source_image> <path_to_the_driving_video> <path_to_yaml_conf> <path_to_model_weights>
For example, if you have downloaded everything in the same folder, cd to that folder and run:
deep_animate 00.png 00.mp4 conf.yml deep_animator_model.pth.tar
On my CPU, it takes around five minutes to get the generated video. This will be saved into the same folder unless specified otherwise by the --dest option. Also, you can use GPU acceleration with the --device cuda option. Finally, we are ready to see the result. Pretty awesome!
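For instance, to run on a GPU and write the output to a specific folder, the two options can presumably be combined as below (generated/ is just a hypothetical destination folder; the exact flag behaviour depends on the deep-animator version you installed):

deep_animate 00.png 00.mp4 conf.yml deep_animator_model.pth.tar --device cuda --dest generated/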
In this story, we presented the work done by A. Siarohin et al. and how to use it to obtain great results with no effort. Finally, we used deep-animator, a thin wrapper, to animate a statue.
Although there are some concerns about such technologies, they can have various applications; they also show how easy it is nowadays to generate fake stories, which raises awareness about the issue.
My name is Dimitris Poulopoulos, and I’m a machine learning engineer working for Arrikto. I have designed and implemented AI and software solutions for major clients such as the European Commission, Eurostat, IMF, the European Central Bank, OECD, and IKEA.
If you are interested in reading more posts about Machine Learning, Deep Learning, Data Science, and DataOps, follow me on Medium, LinkedIn, or @james2pl on Twitter.
Opinions expressed are solely my own and do not express the views or opinions of my employer. Also, visit the resources page on my website, a place for great books and top-rated courses, to start building your own Data Science curriculum!
[1] A. Siarohin, S. Lathuilière, S. Tulyakov, E. Ricci, and N. Sebe, “First-order motion model for image animation,” in Conference on Neural Information Processing Systems (NeurIPS), December 2019.
Easy Sentiment Analysis with Sentimentr in R | Towards Data Science | The Sentimentr package for R is beneficial in analyzing text for psychological or sociological studies. Its first big advantage is that it makes sentiment analysis simple and achievable within a few lines of code. Its second big advantage is that it corrects for inversions, meaning that while a more basic sentiment analysis would judge “I am not good” as positive due to the adjective good, Sentimentr recognizes the inversion of good and classifies it as negative.
All in all, Sentimentr allows you to quickly do a sophisticated sentiment analysis and directly use it as an input for your regression or any other further analysis.
This article covers how to get started. Please refer to other articles such as Tyler Rinker’s Github Repo’s Readme if you are looking for advanced analyzing techniques. For this tutorial, I will be analyzing Amazon Reviews on Beauty products from the He & McAuley (2016) Dataset. However, you can easily adapt the code to make it fit your own dataset.
By default, Sentimentr uses the Jockers (2017) dictionary, which should be perfect for most circumstances.
install.packages("sentimentr")library(sentimentr)
The first two commands install and load the Sentimentr package. Next, I am loading the data. As it is in JSON format, I need to load the ndjson package. I can then use the package’s stream_in function to load the Amazon Beauty Data.
install.packages("ndjson")library(ndjson)df = stream_in("AmazonBeauty.json")head(df)
I also used the head function to quickly look at the first couple of rows of the data. As you will see when performing this on your own machine, there is a column called reviewText that contains the reviews.
sentiment=sentiment_by(df$reviewText)
This command runs the sentiment analysis. In this case, I used the sentiment_by command to get an aggregate sentiment measure for the entire review. In other cases, you could use the sentiment command (without _by) to get the sentiment per sentence.
While this command runs (it does take a while), I will discuss what the function will return. The sentiment object in this example will be a data.table including the following columns:
element_id — The id number of the review
word_count — The word count of the review
sd — The standard deviation of the sentiment score of the sentences in the review
ave_sentiment — The average sentiment score of the sentences in the review
The most interesting variable is the ave_sentiment, which is the sentiment of the review in one number. The number can take positive or negative values and expresses the valence and the polarity of the sentiment.
We can look at some summary statistics of the calculated sentiment scores.
summary(sentiment$ave_sentiment)
As you can see, most reviews tend to be moderately positive, but there are some extreme outliers, with the most positive review being 3.44 and the most negative being -1.88. These are quite far from the mean and the median, and one should consider removing them for further analysis.
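As a quick sketch of how one could drop such extreme reviews before further analysis (the cut-off values below are illustrative assumptions, not part of the original tutorial):

sentiment_clean = subset(sentiment, ave_sentiment > -1 & ave_sentiment < 1.5)
summary(sentiment_clean$ave_sentiment)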
I also did a quick histogram to look at the sentiment of the reviews.
library(ggplot2)qplot(sentiment$ave_sentiment, geom="histogram",binwidth=0.1,main="Review Sentiment Histogram")
As I am most interested in the sentiment scores, I will conclude this tutorial by integrating the sentiment scores and their standard deviation back into the main dataset.
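A minimal sketch of that final step, assuming the rows of sentiment are still aligned with the rows of df (which holds here, since sentiment_by was called on df$reviewText):

df$ave_sentiment = sentiment$ave_sentiment
df$sd_sentiment = sentiment$sd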
First Normal Form (1NF) - GeeksforGeeks | 28 Mar, 2022
If a table has data redundancy and is not properly normalized, then it will be difficult to handle and update the database without facing data loss. It will also eat up extra memory space, and insertion, update and deletion anomalies become very frequent if the database is not normalized.
Normalization is the process of minimizing redundancy from a relation or set of relations. Redundancy in a relation may cause insertion, deletion and update anomalies, so normalization helps to minimize the redundancy in relations. Normal forms are used to eliminate or reduce redundancy in database tables.
There are various levels of normalization. These are some of them:
1. First Normal Form (1NF)
2. Second Normal Form (2NF)
3. Third Normal Form (3NF)
4. Boyce-Codd Normal Form (BCNF)
5. Fourth Normal Form (4NF)
6. Fifth Normal Form (5NF)
In this article, we will discuss First Normal Form (1NF).
First Normal Form (1NF): A relation is in first normal form if it does not contain any composite or multi-valued attribute; if it contains such an attribute, it violates the first normal form. In other words, a relation is in first normal form if every attribute in that relation is a single-valued attribute.
A table is in 1NF iff:
There are only Single Valued Attributes.
Attribute Domain does not change.
There is a unique name for every Attribute/Column.
The order in which data is stored does not matter.
Consider the examples given below.
Example-1: The relation STUDENT in Table 1 is not in 1NF because of the multi-valued attribute STUD_PHONE. Its decomposition into 1NF is shown in Table 2.
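The original tables were images and are not reproduced here; the following is a plausible reconstruction of the idea, with hypothetical STUD_PHONE values used purely for illustration:

Table 1: STUDENT (not in 1NF, STUD_PHONE is multi-valued)

STUD_ID  STUD_NAME  STUD_PHONE
------------------------------------------
1        RAM        9716271721, 9871717178
2        SAM        9898297281

Table 2: STUDENT decomposed into 1NF (one phone number per row)

STUD_ID  STUD_NAME  STUD_PHONE
------------------------------
1        RAM        9716271721
1        RAM        9871717178
2        SAM        9898297281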
Example-2:
ID Name Courses
------------------
1 A c1, c2
2 E c3
3 M C2, c3
In the above table, Course is a multi-valued attribute so it is not in 1NF.
Below Table is in 1NF as there is no multi-valued attribute:
ID Name Course
------------------
1 A c1
1 A c2
2 E c3
3 M c2
3 M c3
Note: A database design is considered bad if it is not even in the First Normal Form (1NF). https://youtu.be/-JOpBzyrZ_8
Count pairs in an array whose absolute difference is divisible by K | Using Map - GeeksforGeeks | 30 Nov, 2021
Given an array, arr[] of N elements and an integer K, the task is to find the number of pairs (i, j) such that the absolute value of (arr[i] – arr[j]) is a multiple of K.
Examples:
Input: N = 4, K = 2, arr[] = {1, 2, 3, 4}
Output: 2
Explanation: In total, 2 pairs exist in the array with absolute difference divisible by 2. The pairs are: (1, 3), (2, 4).

Input: N = 3, K = 3, arr[] = {3, 3, 3}
Output: 3
Explanation: In total, 3 pairs exist in this array with absolute difference divisible by 3. The pairs are: (3, 3), (3, 3), (3, 3).
Naive approach: The easiest way is to iterate through every possible pair in the array and, if the absolute difference of the numbers is a multiple of K, increase the count by 1. Print the value of the count after all pairs are processed (a sketch is given below).
Time Complexity: O(N²)
Auxiliary Space: O(1)
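A minimal sketch of this brute-force idea in C++ (not part of the original article) could look like this:

// Naive O(N^2) approach: check every pair explicitly.
#include <bits/stdc++.h>
using namespace std;

int countPairsNaive(int arr[], int N, int K)
{
    int count = 0;
    for (int i = 0; i < N; ++i) {
        for (int j = i + 1; j < N; ++j) {
            // Count the pair if the absolute
            // difference is a multiple of K.
            if (abs(arr[i] - arr[j]) % K == 0)
                ++count;
        }
    }
    return count;
}

int main()
{
    int arr[] = { 1, 2, 3, 4 };
    int N = sizeof(arr) / sizeof(arr[0]);
    cout << countPairsNaive(arr, N, 2) << endl; // prints 2
    return 0;
}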
Frequency Array Approach: The approach to solving this problem using a frequency array is discussed in Set 1 of this article. Here, we discuss how to solve it using a map.
Efficient Approach: To optimize the above approach, the idea is to observe the fact that for two numbers a[i] and a[j], if a[i] % k = a[j] % k, then (a[i] – a[j]) ≡ 0 (mod k), so abs(a[i] – a[j]) is a multiple of K. Follow the below steps to solve the problem:
Initialize the variable ans as 0 to store the answer.
Declare an unordered_map<int, int> count_map[] which stores the count of remainders of array elements with K.
Iterate over the range [1, N] using the variable index and increment the value arr[index]%k in the count_map by 1 for every index.
Iterate over all the key-value pairs in the count_map. For each key-value pair:
The value count_map[rem] is the number of elements whose remainder with K is equal to ‘rem‘.
For a valid pair to be formed, select any two numbers from the count_map[rem] numbers.
The number of ways to select two numbers from ‘N‘ numbers is Nc2 = N * (N – 1) / 2.
Add the answer of all key-value pairs and print ans.
Below is the implementation of the above approach.
C++
Java
Python3
C#
Javascript
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to count number of pairs
// (i, j) such that abs(arr[i] - arr[j])
// is divisible by k.
void countOfPairs(int* arr, int N, int K)
{
    // Frequency Map to keep count of
    // remainders of array elements with K.
    unordered_map<int, int> count_map;
    for (int index = 0; index < N; ++index) {
        count_map[arr[index] % K]++;
    }

    // To store the final answer.
    int ans = 0;

    for (auto it : count_map) {

        // Number of ways of selecting any two
        // numbers from all numbers having the
        // same remainder is Nc2 = N
        // * (N - 1) / 2
        ans += (it.second * (it.second - 1)) / 2;
    }

    // Output the answer.
    cout << ans << endl;
}

// Driver Code
int main()
{
    int K = 2;

    // Input array
    int arr[] = { 1, 2, 3, 4 };

    // Size of array
    int N = sizeof arr / sizeof arr[0];

    countOfPairs(arr, N, K);
    return 0;
}
// Java program for the above approach
import java.util.*;

class GFG {

    // Function to count number of pairs
    // (i, j) such that Math.abs(arr[i] - arr[j])
    // is divisible by k.
    static void countOfPairs(int[] arr, int N, int K)
    {
        // Frequency Map to keep count of
        // remainders of array elements with K.
        HashMap<Integer, Integer> count_map
            = new HashMap<Integer, Integer>();
        for (int index = 0; index < N; ++index) {
            if (count_map.containsKey(arr[index] % K)) {
                count_map.put(arr[index] % K,
                              count_map.get(arr[index] % K) + 1);
            }
            else {
                count_map.put(arr[index] % K, 1);
            }
        }

        // To store the final answer.
        int ans = 0;

        for (Map.Entry<Integer, Integer> it : count_map.entrySet()) {

            // Number of ways of selecting any two
            // numbers from all numbers having the
            // same remainder is Nc2 = N
            // * (N - 1) / 2
            ans += (it.getValue() * (it.getValue() - 1)) / 2;
        }

        // Output the answer.
        System.out.print(ans + "\n");
    }

    // Driver Code
    public static void main(String[] args)
    {
        int K = 2;

        // Input array
        int arr[] = { 1, 2, 3, 4 };

        // Size of array
        int N = arr.length;

        countOfPairs(arr, N, K);
    }
}

// This code is contributed by shikhasingrajput
# Python Program to implement
# the above approach

# Function to count number of pairs
# (i, j) such that abs(arr[i] - arr[j])
# is divisible by k.
def countOfPairs(arr, N, K):

    # Frequency Map to keep count of
    # remainders of array elements with K.
    count_map = {}
    for index in range(N):
        if (not arr[index] % K in count_map):
            count_map[arr[index] % K] = 1
        else:
            count_map[arr[index] % K] += 1

    # To store the final answer.
    ans = 0

    for val in count_map.values():

        # Number of ways of selecting any two
        # numbers from all numbers having the
        # same remainder is Nc2 = N
        # * (N - 1) / 2
        ans += (val * (val - 1)) // 2

    # Output the answer.
    print(ans)

# Driver Code
K = 2

# Input array
arr = [1, 2, 3, 4]

# Size of array
N = len(arr)

countOfPairs(arr, N, K)

# This code is contributed by Saurabh Jaiswal
// C# program for the above approach
using System;
using System.Collections.Generic;

public class GFG {

    // Function to count number of pairs
    // (i, j) such that Math.Abs(arr[i] - arr[j])
    // is divisible by k.
    static void countOfPairs(int[] arr, int N, int K)
    {
        // Frequency Map to keep count of
        // remainders of array elements with K.
        Dictionary<int, int> count_map
            = new Dictionary<int, int>();
        for (int index = 0; index < N; ++index) {
            if (count_map.ContainsKey(arr[index] % K)) {
                count_map[arr[index] % K]
                    = count_map[arr[index] % K] + 1;
            }
            else {
                count_map.Add(arr[index] % K, 1);
            }
        }

        // To store the final answer.
        int ans = 0;

        foreach (KeyValuePair<int, int> it in count_map) {

            // Number of ways of selecting any two
            // numbers from all numbers having the
            // same remainder is Nc2 = N
            // * (N - 1) / 2
            ans += (it.Value * (it.Value - 1)) / 2;
        }

        // Output the answer.
        Console.Write(ans + "\n");
    }

    // Driver Code
    public static void Main(String[] args)
    {
        int K = 2;

        // Input array
        int []arr = { 1, 2, 3, 4 };

        // Size of array
        int N = arr.Length;

        countOfPairs(arr, N, K);
    }
}

// This code is contributed by shikhasingrajput
<script>
    // JavaScript Program to implement
    // the above approach

    // Function to count number of pairs
    // (i, j) such that abs(arr[i] - arr[j])
    // is divisible by k.
    function countOfPairs(arr, N, K) {

        // Frequency Map to keep count of
        // remainders of array elements with K.
        let count_map = new Map();
        for (let index = 0; index < N; ++index) {
            if (!count_map.has(arr[index] % K))
                count_map.set(arr[index] % K, 1);
            else
                count_map.set(arr[index] % K,
                    count_map.get(arr[index] % K) + 1)
        }

        // To store the final answer.
        let ans = 0;

        for (let [key, value] of count_map) {

            // Number of ways of selecting any two
            // numbers from all numbers having the
            // same remainder is Nc2 = N
            // * (N - 1) / 2
            ans += (value * (value - 1)) / 2;
        }

        // Output the answer.
        document.write(ans + '<br>');
    }

    // Driver Code
    let K = 2;

    // Input array
    let arr = [1, 2, 3, 4];

    // Size of array
    let N = arr.length;

    countOfPairs(arr, N, K);

    // This code is contributed by Potta Lokesh
</script>
2
Time Complexity: O(N) on average, since each hash map operation takes expected constant time (it would be O(N log N) with an ordered map).
Auxiliary Space: O(N)
Build, Develop and Deploy a Machine Learning Model to predict cars price using Gradient Boosting. | by Ayoub RMIDI | Towards Data Science | In this blog post I will cover a Data Science process to build and deploy a Machine Learning Model that can predict a car price, by following the steps below :
Setting the research goal
Retrieving Data
Data Preprocessing & Cleansing
Data Exploration & Visualization
Data Modeling
Model Deployment
The aim of this work is to get familiarized with the Data Science process described above by building and deploying a Machine Learning Model that can predict a car price based on its features, trying 4 regression models and choosing the one with the highest R2 score and the lowest Root Mean Squared Error.
Retrieving Data is the important step that comes after setting the research goal. For this purpose I used the famous Python library specialized in these kinds of tasks, “BeautifulSoup”. The process for this task is quite simple: first we loop over the ads pages in order to collect the ads URLs, by incrementing the page number parameter called o in the base URL, which looks like this:
basic_url = "https://www.avito.ma/fr/maroc/voitures-à_vendre&o="
Once the URLs are collected, I save them in a csv file called “ads_urls.csv”, which looks like this:
https://www.avito.ma/fr/temara/voitures/peugeot_508_26753561.htm
https://www.avito.ma/fr/safi/voitures/renualt_kasket_26753552.htm
https://www.avito.ma/fr/oued_fes/voitures/Citroen_C3_Diesel_26753551.htm
....
Each of the links above contains data about the car posted in that ad, such as ‘Model Year’, ‘Mileage’, ‘Fuel Type’ and ‘Price’.
Then, I read this file containing the list of URLs, crawl the ad pages and extract the necessary data; a brief sketch of how this works is given below.
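The original code snapshot was an embedded gist; the following is only a minimal sketch of such a crawler, where the CSS selector span.price is a hypothetical placeholder (the real avito.ma markup differs):

import requests
from bs4 import BeautifulSoup

# Read the previously collected ad URLs.
with open("ads_urls.csv") as f:
    urls = [line.strip() for line in f if line.strip()]

rows = []
for url in urls:
    html = requests.get(url).text
    soup = BeautifulSoup(html, "html.parser")
    # Hypothetical selector: the real page uses its own markup.
    price_tag = soup.select_one("span.price")
    if price_tag:
        rows.append({"url": url, "price": price_tag.get_text(strip=True)})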
Finally I get the following data schema :
price, year_model, mileage, fuel_type, mark, model, fiscal_power, sector, type, city
135 000 DH,Année-Modèle:2013,Kilométrage:160 000–169 999,Type de carburant:Diesel,Marque:Peugeot,Modèle:508,Puissance fiscale:-,-,”Type:Voitures, Offre”, Temara
60 000 DH,Année-Modèle:2009,Kilométrage:130 000 - 139 999,Type de carburant:Essence,Marque:Ford,Modèle:Fiesta,Puissance fiscale:7 CV,Secteur:saies,"Type:Voitures, Offre", Fès
....
The next step, Data Preprocessing & Cleansing, is the most time-consuming one in every Data Science process.
The reason behind the Data Preprocessing is to transform our raw data into a useful form, and also to reduce the data size so that it becomes easier to analyse. In this case we need to:
price : remove the “DH” (Dirham currency) character and spaces from the price then transform it to integer.
year_model : remove unwanted string from the year_model column such as : “Année-Modèle:” or “ou plus ancien” then transform it to integer.
mileage : for this field things are a bit different: we have a range of mileage; for example, an observation may state a mileage between 160 000 and 169 999 KM. So, after removing the “KM” and “-” characters, I had to reduce this range to a single value; we could use the min or the max value, but I chose the mean so that we stay in the middle of that range.
fuel_type : remove unwanted string from the fuel_type column like “Type de carburant:”.
mark : remove the “Marque:” string from the mark column.
fiscal_power : for this field I had to handle 2 cases: remove the unwanted strings such as “Puissance fiscale:” and “CV”, and deal with missing values; one common tip to handle missing values is to fill them with the mean of the fiscal_power column.
For the Data Exploration and analysis we do not need some columns, such as “sector” and “type”, so we end up dropping them from the data frame. A condensed pandas sketch of all these cleaning steps is given below.
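This sketch is illustrative rather than the original notebook code: the file name cars_data.csv is hypothetical, the mileage parsing assumes a simple “low - high” range, and the raw label strings come from the schema shown earlier.

import pandas as pd

# Hypothetical dump of the scraped rows.
df = pd.read_csv("cars_data.csv")

# price: remove "DH" and spaces, then cast to integer.
df["price"] = (df["price"].str.replace("DH", "")
                          .str.replace(" ", "")
                          .astype(int))

# year_model: remove the labels, then cast to integer.
df["year_model"] = (df["year_model"]
                    .str.replace("Année-Modèle:", "")
                    .str.replace("ou plus ancien", "")
                    .str.strip()
                    .astype(int))

# mileage: keep the mean of the "low - high" range.
def range_mean(value):
    low, high = value.replace("Kilométrage:", "").split("-")
    return (int(low.replace(" ", "")) + int(high.replace(" ", ""))) / 2

df["mileage"] = df["mileage"].apply(range_mean)

# fiscal_power: strip the labels, then fill missing values with the mean.
fiscal = (df["fiscal_power"].str.replace("Puissance fiscale:", "")
                            .str.replace("CV", "")
                            .str.strip())
df["fiscal_power"] = pd.to_numeric(fiscal, errors="coerce")
df["fiscal_power"] = df["fiscal_power"].fillna(df["fiscal_power"].mean())

# Drop the columns we do not need for the analysis.
df = df.drop(columns=["sector", "type"])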
After the Data Extraction and Data Preprocessing steps, I should now visualize my data set so that I can have more insights about what is happening under the hood, and how my data is distributed.
In the following section, I will run a Q&A session in order to answer many questions based on histograms & plot.
Q : How is the price distributed over the model year ?
A : As we can see from the plot above, car prices increase with the model year; more explicitly, the more recently a car was released, the higher its price, while the oldest cars keep a low price. This is quite logical, since a car’s price starts decreasing as it gets older from its release date.
Q : How is the price distributed over fiscal power ?
A : From the plot above we can clearly notice a huge concentration of points in the ranges [2800 DH, 800000 DH] and [3 CV, 13 CV], which can be interpreted in two ways: first, medium fiscal power cars at a reasonable price dominate the market; secondly, the higher the fiscal power, the higher the price.
Q : Is there a relationship between the model year and the price ?
A : Although weak, it appears that there is a positive relationship between year_model and price. Let’s see what the actual correlation between price and the other data points is; we will confirm it by looking at the heatmap correlation matrix.
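The heatmap itself can be produced with a couple of lines (a standard seaborn sketch; numeric_only restricts the correlation to numeric columns):

import seaborn as sns
import matplotlib.pyplot as plt

sns.heatmap(df.corr(numeric_only=True), annot=True, cmap="coolwarm")
plt.show()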
As we can see, there is a strong correlation between the price and year_model features, with a correlation score of 0.47, which confirms the strong relationship between the price and year_model columns.
Now we come to the main task in all this process, which is Data Modeling. For this purpose I will use 4 Machine Learning models dedicated to regression problems, and at the end I will draw a benchmarking table that summarizes each model’s R2 score and selects the best one. The models used are: K Nearest Neighbors regression, Multiple Linear Regression, Decision Tree Regression and Gradient Boosting Regression.
Data Transformation
I intentionally left this part until the Data Modeling step, instead of doing it during Data Preprocessing, for visualization purposes.
At the moment I still have 2 categorical features, fuel_type and mark. The aim of this section is to preprocess those features in order to make them numerical so that they fit into our model. In the literature there are two famous ways of transforming categorical variables: the first one is label encoding, and the second one is one hot encoding. For this use case we will use one hot encoding. The reason why I chose this kind of data labeling is that I will not need any kind of data normalization later; it also has the benefit of not weighting a value improperly, but does have the downside of adding more columns to the data set.
As shown in the figure above, after transforming the categorical features to numerical ones by using the one hot encoding, we got a wide data frame.
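A minimal sketch of this transformation with pandas, assuming fuel_type and mark are the remaining categorical columns:

import pandas as pd

# One hot encode the categorical features into 0/1 columns.
df = pd.get_dummies(df, columns=["fuel_type", "mark"])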
Data Splitting
Usually we split our data into three parts: training, validation and testing sets. For simplicity, we will use only train and test splits, with 20% of the data for testing and the rest for training.
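A minimal sketch of that split with scikit-learn (random_state is an arbitrary choice for reproducibility):

from sklearn.model_selection import train_test_split

X = df.drop(columns=["price"])
y = df["price"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)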
Gradient Boosting Regression
Boosting is another ensemble technique for creating a collection of powerful predictors, and Gradient Boosting is a technique for producing regression models consisting of collections of regressors.
An ensemble is a collection of predictors whose predictions are combined, usually by some sort of weighted average or vote, in order to provide an overall prediction that takes its guidance from the collection itself. So boosting is an ensemble technique in which learners are learned sequentially: early learners fit simple models to the data, and the data is then analyzed for errors. Those errors identify problems, or particular instances of the data, that are difficult or hard to fit; as a consequence, later models focus primarily on those examples, trying to get them right.
At the end, all the models contribute with weights and the ensemble is combined into an overall predictor. Boosting is thus a method of converting a sequence of weak learners into a very complex predictor; it is a way of increasing the complexity of a particular model, where initial learners tend to be very simple and the weighted combination grows more and more complex as learners are added.
The Math behind Gradient Boosting
This algorithm is an instance of gradient boosting; it is called gradient boosting because it is related to a gradient-descent-like procedure.
First we make a set of predictions ŷ(i) for each data point.
We can calculate the error in our predictions; let’s call it J(y, ŷ), where J just measures how well ŷ models y.
For mean squared error MSE: J(y, ŷ) = Σᵢ (y(i) − ŷ(i))².
So now we can try to adjust our prediction ŷ to reduce the error above: ŷ(i) ← ŷ(i) + α · (y(i) − ŷ(i)). The residual y(i) − ŷ(i) is, up to a constant factor, the negative gradient of J with respect to ŷ(i), so this update is a gradient descent step on J.
Each learner is estimating the gradient of the loss function.
Gradient Descent : take sequence of steps to reduce J .
Sum of predictors, weighted by step size alpha.
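In more compact notation, the bullets above amount to the standard gradient boosting update, restated here in LaTeX:

F_m(x) = F_{m-1}(x) + \alpha \, h_m(x), \qquad
h_m \approx \arg\min_{h} \sum_i \left( r_i^{(m)} - h(x_i) \right)^2, \qquad
r_i^{(m)} = y_i - F_{m-1}(x_i)

where each weak learner h_m is fitted to the current residuals r^{(m)}, i.e. to the negative gradient of the squared-error loss, and \alpha is the step size.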
Gradient Boosting Regressor — Code Snapshot
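The original code snapshot was an embedded gist; below is a minimal scikit-learn sketch of such a model, where the hyper-parameter values are illustrative assumptions rather than the ones used in the notebook:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score

gbr = GradientBoostingRegressor(
    n_estimators=300,   # number of boosting stages (assumed)
    learning_rate=0.1,  # the step size alpha (assumed)
    max_depth=4,        # depth of each weak learner (assumed)
)
gbr.fit(X_train, y_train)

y_pred = gbr.predict(X_test)
print("R2:", r2_score(y_test, y_pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, y_pred)))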
Interpreting Residual VS Predicted values
After building the GBR model, it is mostly recommended to plot the error distribution to verify the following assumptions :
Normally distributed
Homoscedastic (The same variance at every X)
Independent
which is already verified in our case, as you can check in the notebook for more details; however, we need to observe the residual plot to make sure it doesn’t follow a non-linear or heteroscedastic pattern.
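Such a residual plot can be drawn in a few lines (a matplotlib sketch, reusing y_test and y_pred from above):

import matplotlib.pyplot as plt

residuals = y_test - y_pred
plt.scatter(y_pred, residuals, alpha=0.3)
plt.axhline(y=0, color="red")
plt.xlabel("Predicted values")
plt.ylabel("Residuals")
plt.title("Residuals VS Predicted values")
plt.show()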
As we can see from the plot above, the residuals roughly form a “horizontal band” around the 0 line, which suggests that the variances of the error terms are equal; furthermore, no residual “stands out” from the basic random pattern of residuals, which implies that there are no outliers.
Models Benchmarking
So after trying many regression models to fit our data set, it’s time to draw a Benchmarking table that will summarize all the results we have got.
╔═══════════════════╦════════════════════╦═══════════════╗
║ Model             ║ R^2 score          ║ RMSE          ║
╠═══════════════════╬════════════════════╬═══════════════╣
║ KNN               ║ 0.56               ║ 37709.67      ║
║ Linear Regression ║ 0.62               ║ 34865.07      ║
║ Gradient Boosting ║ 0.80               ║ 25176.16      ║
║ Decision Tree     ║ 0.63               ║ 34551.17      ║
╚═══════════════════╩════════════════════╩═══════════════╝
It appears that the Gradient Boosting model won the battle, as expected, with the lowest RMSE value and the highest R2 score.
1. Creating the Flask web app
Flask is a “micro” framework for Python. It is called a micro framework because they want to keep the core simple but expandable. While confusing at first, it is relatively easy to set up a website on Flask using Jinja2 templating.
The flask app consists of 2 main components: the python app (app.py) and the HTML templates.
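The original app.py was embedded as a gist and is not reproduced here; the sketch below is only an illustration of the general shape such an app could take. The route name, form field names and the model file name (model.pkl) are all assumptions:

# app.py -- minimal illustrative sketch, not the original file
import pickle
import numpy as np
from flask import Flask, render_template, request, jsonify

app = Flask(__name__)

# Load the trained model once at start-up (file name assumed).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/")
def index():
    # Serves the HTML form (templates/index.html).
    return render_template("index.html")

@app.route("/predict", methods=["POST"])
def predict():
    # Field names here are hypothetical; they must match the form.
    # In the real app the categorical features would also have to be
    # one hot encoded exactly as during training.
    features = [float(request.form["year_model"]),
                float(request.form["mileage"]),
                float(request.form["fiscal_power"])]
    prediction = model.predict(np.array(features).reshape(1, -1))
    return jsonify({"price": float(prediction[0])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)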
Once app.py is built, we can run it from the terminal. If everything goes right, we will get the index.html page running on the '/' route at http://localhost:8080/; we can then fill the given form with the right values and get the result as a sweet alert pop-up.
2. Deploy the app to Heroku
For this part, I will need a Heroku account (free) and the Heroku CLI. I could also use GitHub for this task, but let’s keep things simple.
Set the Procfile: A Procfile is a mechanism for declaring what commands are run by your application’s dynos on the Heroku platform. Create a file called “Procfile” and put the following in it:
web: gunicorn app:app --log-file -
Create the python requirements file by running the following command in your Terminal at the root of your Flask app :
$ pipreqs ./
If you’re working in a Python virtual environment, this will do the job:
$ pip3 freeze > requirements.txt
Create a new app on the Heroku Website by logging into your account.
Once the app is created, we are redirected to a page showing the instructions for a successful deployment. After logging in with the Heroku CLI, I change the directory to the Flask app and run the following commands:
$ heroku git:clone -a cars-price-prediction
$ git add .
$ git commit -am "make it better"
$ git push heroku master
The app should now be live at my-app-name.herokuapp.com! Check out a working version of the app here.
In this blog post I tried to cover a data science process by building a model that can predict the price of cars based on their features, starting from data collection, moving through data modeling and a comparison between the built models, and finishing with model deployment as a web app.
Full notebook : https://github.com/PaacMaan/cars-price-predictor/blob/master/cars_price_predictor.ipynb
GitHub repository : https://github.com/PaacMaan/cars-price-predictor
Web Application link : https://cars-price-prediction.herokuapp.com
Shared with ❤.
|
How to count elements in a nested Python dictionary? | It is possible to iterate over each key-value pair in a dictionary with the expression
for k,v in students.items():
Since the value component of each item is itself a dictionary in a nested Python dictionary, the length of each sub-dictionary is len(v). Perform cumulative addition over the loop to obtain the count of all elements
>>> students={"student1":{"name":"Raaj", "age":23, "subjects":["Phy", "Che", "maths"],"GPA":8.5},"student2":{"name":"Kiran", "age":21, "subjects":["Phy", "Che", "bio"],"GPA":8.25}}
>>> s=0
>>> for k,v in students.items():
s=s+len(v)
>>> s
8
A more compact representation of the above would be −
>>> sum(len(v) for v in students.values())
8
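For dictionaries nested to arbitrary depth, a recursive variant (a sketch, not part of the original answer) would be −
>>> def count_elements(d):
...     return sum(count_elements(v) if isinstance(v, dict) else 1 for v in d.values())
>>> count_elements(students)
8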
|
Counters in Python | Set 1 (Initialization and Updation) - GeeksforGeeks | 03 Mar, 2022
Counter is a container included in the collections module. Now you must be wondering what a container is. Don’t worry, let’s first discuss containers.
Containers are objects that hold other objects. They provide a way to access the contained objects and iterate over them. Examples of built-in containers are tuple, list, and dictionary. Others are included in the collections module. A Counter is a subclass of dict. Therefore it is an unordered collection where elements and their respective counts are stored as a dictionary. This is equivalent to a bag or multiset in other languages. Syntax:
class collections.Counter([iterable-or-mapping])
Initialization: The constructor of Counter can be called in any one of the following ways:
With sequence of items
With dictionary containing keys and counts
With keyword arguments mapping string names to counts
Example of each type of initialization :
Python3
# A Python program to show different ways to create
# Counter
from collections import Counter

# With sequence of items
print(Counter(['B','B','A','B','C','A','B','B','A','C']))

# with dictionary
print(Counter({'A':3, 'B':5, 'C':2}))

# with keyword arguments
print(Counter(A=3, B=5, C=2))
Output of all the three lines is same :
Counter({'B': 5, 'A': 3, 'C': 2})
Counter({'B': 5, 'A': 3, 'C': 2})
Counter({'B': 5, 'A': 3, 'C': 2})
Updation: We can also create an empty counter in the following manner:
coun = collections.Counter()
And it can be updated via the update() method. The syntax for the same:
coun.update(Data)
Python3
# A Python program to demonstrate update()
from collections import Counter

coun = Counter()

coun.update([1, 2, 3, 1, 2, 1, 1, 2])
print(coun)

coun.update([1, 2, 4])
print(coun)
Output :
Counter({1: 4, 2: 3, 3: 1})
Counter({1: 5, 2: 4, 3: 1, 4: 1})
Data can be provided in any of the three ways as mentioned in initialization and the counter’s data will be increased not replaced.
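A quick illustration of this additive behaviour (not from the original article):

from collections import Counter

c = Counter(A=1)
c.update({'A': 2})   # the counts are added to the existing ones, not replaced
c.update(B=3)
print(c)             # Counter({'A': 3, 'B': 3})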
Counts can be zero and negative also.
Python3
# Python program to demonstrate that counts in
# Counter can be 0 and negative
from collections import Counter

c1 = Counter(A=4, B=3, C=10)
c2 = Counter(A=10, B=3, C=4)

c1.subtract(c2)
print(c1)
Output :
Counter({'C': 6, 'B': 0, 'A': -6})
We can use Counter to count distinct elements of a list or other collections.
Python3
# An example program where different list items are
# counted using Counter
from collections import Counter

# Create a list
z = ['blue', 'red', 'blue', 'yellow', 'blue', 'red']

# Count distinct elements and print the Counter object
print(Counter(z))
Output:
Counter({'blue': 3, 'red': 2, 'yellow': 1})
Video: Python Programming Tutorial | Counters in Python - Part 1 | GeeksforGeeks (https://www.youtube.com/watch?v=3qdRTlWpdGo)
This article is contributed by Mayank Rawat. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
|
Alphabet Recognition System|Image Classification|CNN|Convolutional Neural Network | Towards Data Science | In this article, I am going to show you how to build an Alphabet Recognition System using Convolutional Neural Networks (CNNs) and deploy it using anvil.works. At the end of this post, you will be able to create an exact replica of the system shown above.
Convolutional Neural Network
CNN Implementation
Anvil Integration
Let’s start by understanding what exactly a Convolutional Neural Network is. A Convolutional Neural Network (CNN) is a type of neural network widely used for image recognition and classification.
CNNs are regularised versions of multilayer perceptrons. Multilayer perceptrons usually mean fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer.
CNNs consists of the following layers:
Convolution layer: A “kernel” of size, for example, 3X3 or 5X5, is passed over the image, and a dot product of the original pixel values with the weights defined in the kernel is calculated. This matrix is then passed through an activation function “ReLU” that converts every negative value in the matrix to zero.
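As a tiny numeric illustration of that dot product (not from the original post):

import numpy as np

patch  = np.array([[1, 0, 2], [3, 1, 0], [0, 2, 1]])     # one 3X3 region of the image
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])  # 3X3 kernel weights

value = np.sum(patch * kernel)   # element-wise product, then sum
print(max(value, 0))             # ReLU: negative values become zero; prints 1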
Pooling layer: A “pooling matrix” of size, for example, 2X2 or 4X4, is passed over the matrix to reduce the size of the matrix so as to highlight only the important features of the image.
There are 2 types of pooling operations:
Max Pooling is a type of pooling in which the maximum value present inside the pooling matrix is put inside the final matrix.
Average Pooling is a type of pooling in which the average of all the values present inside the pooling kernel is calculated and put inside the final matrix.
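To make the pooling step concrete, here is a small sketch (illustrative, not from the original post) of 2X2 max pooling over a 4X4 matrix:

import numpy as np

m = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 1],
              [0, 4, 3, 8]])

# 2x2 max pooling with stride 2: keep the maximum of each 2x2 block
pooled = m.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # [[6 4]
                #  [7 9]]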
(Note: There can be more than one combination of Convolution and Pooling layer in a CNN architecture to improve its performance.)
Fully connected layer: The final matrix is then flattened into a one-dimensional vector. This vector is then passed into the neural network. Finally, the output layer is a list of probabilities for different possible labels attached to the image (e.g. alphabets a,b,c). The label that receives the highest probability is the classification decision.
Let’s start the implementation by importing the libraries inside a Jupyter Notebook as shown below:
import numpy as np
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Activation
import os
import pickle
Then, let us import the 2 datasets containing images from a to z for training and testing our model. You can download the datasets from my GitHub repository linked below.
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

train_generator = train_datagen.flow_from_directory(
    directory = 'Training',
    target_size = (32,32),
    batch_size = 32,
    class_mode = 'categorical')

test_generator = test_datagen.flow_from_directory(
    directory = 'Testing',
    target_size = (32,32),
    batch_size = 32,
    class_mode = 'categorical')
ImageDataGenerator generates batches of tensor image data, converting the RGB coefficients in the range 0–255 to target values between 0 and 1 by scaling with a 1/255 factor using rescale.
shear_range is used for randomly applying shearing transformations.
zoom_range is used for randomly zooming inside pictures.
horizontal_flip is used for randomly flipping half of the images horizontally.
Then we import the images one by one from the directories using .flow_from_directory and apply the ImageDataGenerator on it.
We then convert the images from its original size to our target_size and declare the batch_size count which refers to the number of training examples used in one iteration.
Then we set the class_mode to categorical indicating that we have multiple classes (a to z) to predict from.
Next we build our CNN architecture.
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape = (32,32,3), activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Conv2D(32, (3, 3), activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Flatten())
model.add(Dense(units = 128, activation = 'relu'))
model.add(Dense(units = 26, activation = 'softmax'))
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
model.summary()
We start by creating a Sequential model which allows us to define the CNN architecture layer by layer using the .add function.
We first add a convolution layer with 32 filters of size 3X3 on the input images and pass it through the ‘relu’ activation function.
We then perform MaxPooling operations using a pool of size 2X2.
These layers are then repeated once again to improve the performance of the model.
Finally we flatten our resultant matrix and pass it through a dense layer consisting of 128 nodes. This is then connected to the output layer consisting of 26 nodes, each node representing an alphabet. We use the softmax activation which converts the scores to a normalised probability distribution, and the node with the highest probability is selected as the output.
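For reference, softmax maps the raw scores z to probabilities as softmax(z)i = e^(z(i)) / Σj e^(z(j)); a tiny sketch (not from the original post):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))   # the outputs sum to 1; the largest score wins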
Once our CNN architecture is defined, we compile the model using adam optimizer.
Lastly, we train our model as follows.
model.fit_generator(train_generator,
                    steps_per_epoch = 16,
                    epochs = 3,
                    validation_data = test_generator,
                    validation_steps = 16)
The accuracy achieved after training the model is: 93.42%
Let’s now try testing our model. But before we do that, we need to define a function that gives us the associated alphabet with the result.
def get_result(result):
    # result is a one-hot row vector; map the position of the 1 to its letter
    alphabet = 'abcdefghijklmnopqrstuvwxyz'
    for i in range(26):
        if result[0][i] == 1:
            return alphabet[i]
Finally, let us test our model as follows:
filename = r'Testing\e\25.png'
test_image = image.load_img(filename, target_size = (32,32))
plt.imshow(test_image)
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = model.predict(test_image)
result = get_result(result)
print('Predicted Alphabet is: {}'.format(result))
The model correctly predicts the input image alphabet to be ‘e’.
Anvil is a platform that allows us to build full-stack web applications with Python. It makes it easier for us to turn a machine learning model from a Jupyter notebook into a web application.
Let’s start by creating an account on Anvil. Once done, create a new blank app with material design.
Check out this link for a step-by-step tutorial on how to use Anvil.
The toolbox on the right contains all the components that can be dragged onto the website.
Components needed:
2 Labels (For the heading and sub heading)
Image (To display the input image)
FileLoader (To upload the input image)
Highlighted Button (To predict the results)
Label (To view the results)
Drag and drop these components and arrange them as per your requirement.
In order to add the heading and subheading, select the label, go to the option named ‘text’ in the properties section on the right side as shown below (highlighted in red), and type the heading/subheading.
Once the User Interface is completed, go inside the Code section as shown above (highlighted in green) and create a new function as follows
def primary_color_1_click(self, **event_args):
    file = self.file_loader_1.file
    self.image_1.source = file
    result = anvil.server.call('model_run', file)
    self.label_3.text = result
This function will execute when we press the PREDICT button. It will take the input image uploaded through the file loader and pass it to the Jupyter notebook’s ‘model_run’ function. This function returns the predicted alphabet, which is displayed via the label component (label_3).
All that is left to do now is connecting our Anvil website to the Jupyter notebook.
This requires the implementation of 2 steps as follows:
Import the Anvil uplink key: click on the settings button and then click on uplink, click on enable uplink key and copy the key.
Inside your Jupyter notebook, paste the following:
import anvil.server
import anvil.media

anvil.server.connect("paste your anvil uplink key here")
2. Create a function ‘model_run’ which predicts the alphabet for the image uploaded on the website.
@anvil.server.callable
def model_run(path):
    with anvil.media.TempFile(path) as filename:
        test_image = image.load_img(filename, target_size = (32,32))
        test_image = image.img_to_array(test_image)
        test_image = np.expand_dims(test_image, axis = 0)
        result = model.predict(test_image)
        result = get_result(result)
        return ('Predicted Alphabet is: {}'.format(result))
And, yes! Now you can go back to Anvil and hit the run button to discover a fully accomplished Alphabet Recognition System.
You can find the source code and the datasets in my GitHub repository.
|
Get the substring before the last occurrence of a separator in Java | We have the following string with a separator.
String str = "David-Warner";
We want the substring before the last occurrence of a separator. Use the lastIndexOf() method.
For that, you need to get the index of the last occurrence of the separator using lastIndexOf() −
String separator ="-";
int sepPos = str.lastIndexOf(separator);
System.out.println("Substring before last separator = "+str.substring(0,sepPos));
The following is an example.
public class Demo {
   public static void main(String[] args) {
      String str = "David-Warner";
      String separator = "-";
      int sepPos = str.lastIndexOf(separator);
      if (sepPos == -1) {
         System.out.println("Separator not found");
         return;
      }
      System.out.println("Substring before last separator = " + str.substring(0, sepPos));
   }
}
Substring before last separator = David
|
Concatenate Multiple Strings in Java. | You can concatenate multiple strings using the ‘+’ operator of Java.
public class Test {
public static void main(String args[]) {
String st1 = "Hello";
String st2 = "How";
String st3 = "You";
String res = st1+st2+st3;
System.out.println(res);
}
}
HelloHowYou | [
{
"code": null,
"e": 1131,
"s": 1062,
"text": "You can concatenate multiple strings using the ‘+’ operator of Java."
},
{
"code": null,
"e": 1345,
"s": 1131,
"text": "public class Test {\n public static void main(String args[]) {\n String st1 = \"Hello\";\n String st2 = \"How\";\n String st3 = \"You\";\n String res = st1+st2+st3;\n System.out.println(res);\n }\n}"
},
{
"code": null,
"e": 1357,
"s": 1345,
"text": "HelloHowYou"
}
]
|
How to import and export a module/library in JavaScript? | Note − To run this example you will need to run a localhost server.
Following is the code for importing and exporting a module/library in JavaScript −
INDEX.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<style>
body {
font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
}
.result {
font-size: 18px;
font-weight: 500;
}
</style>
</head>
<body>
<h1>JavaScript Importing and Exporting Modules</h1>
<button class="Btn">IMPORT</button>
<div class="result"></div>
<h3>Click on the above button to import module </h3>
<script src="script.js" type="module"></script>
<script src="sample.js" type="module">
</script>
</body>
</html>
script.js
import test from './sample.js';
document.querySelector('.Btn').addEventListener('click',()=>{
test();
})
sample.js
let resultEle = document.querySelector(".result");
export default function testImport(){
resultEle.innerHTML = 'Module testImport has been imported';
}
The above code will produce the following output −
On clicking the ‘IMPORT’ button − | [
{
"code": null,
"e": 1130,
"s": 1062,
"text": "Note − To run this example you will need to run a localhost server."
},
{
"code": null,
"e": 1213,
"s": 1130,
"text": "Following is the code for importing and exporting a module/library in JavaScript −"
},
{
"code": null,
"e": 1224,
"s": 1213,
"text": "INDEX.html"
},
{
"code": null,
"e": 1858,
"s": 1235,
"text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\" />\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n<title>Document</title>\n<style>\n body {\n font-family: \"Segoe UI\", Tahoma, Geneva, Verdana, sans-serif;\n }\n .result {\n font-size: 18px;\n font-weight: 500;\n }\n</style>\n</head>\n<body>\n<h1>JavaScript Importing and Exporting Modules</h1>\n<button class=\"Btn\">IMPORT</button>\n<div class=\"result\"></div>\n<h3>Click on the above button to import module </h3>\n<script src=\"script.js\" type=\"module\"></script>\n<script src=\"sample.js\" type=\"module\">\n</script>\n</body>\n</html>"
},
{
"code": null,
"e": 1868,
"s": 1858,
"text": "script.js"
},
{
"code": null,
"e": 1976,
"s": 1868,
"text": "import test from './sample.js';\ndocument.querySelector('.Btn').addEventListener('click',()=>{\n test();\n})"
},
{
"code": null,
"e": 1986,
"s": 1976,
"text": "sample.js"
},
{
"code": null,
"e": 2141,
"s": 1986,
"text": "let resultEle = document.querySelector(\".result\");\nexport default function testImport(){\n resultEle.innerHTML = 'Module testImport has been imported';\n}"
},
{
"code": null,
"e": 2192,
"s": 2141,
"text": "The above code will produce the following output −"
},
{
"code": null,
"e": 2226,
"s": 2192,
"text": "On clicking the ‘IMPORT’ button −"
}
]
|
Last Minute Notes – Operating Systems - GeeksforGeeks | 28 Jun, 2021
See Last Minute Notes for all subjects here.
Operating Systems: It is the interface between the user and the computer hardware.
Types of Operating System (OS):
Batch OS – A set of similar jobs are stored in the main memory for execution. A job gets assigned to the CPU, only when the execution of the previous job completes.
Multiprogramming OS – The main memory consists of jobs waiting for CPU time. The OS selects one of the processes and assigns it to the CPU. Whenever the executing process needs to wait for any other operation (like I/O), the OS selects another process from the job queue and assigns it to the CPU. This way, the CPU is never kept idle and the user gets the flavor of getting multiple tasks done at once.
Multitasking OS – Multitasking OS combines the benefits of Multiprogramming OS and CPU scheduling to perform quick switches between jobs. The switch is so quick that the user can interact with each program as it runs
Time Sharing OS – Time-sharing systems require interaction with the user to instruct the OS to perform various tasks. The OS responds with an output. The instructions are usually given through an input device like the keyboard.
Real Time OS – Real-Time OS are usually built for dedicated systems to accomplish a specific set of tasks within deadlines.
Threads: A thread is a lightweight process and forms the basic unit of CPU utilization. A process can perform more than one task at the same time by including multiple threads.
A thread has its own program counter, register set, and stack
A thread shares resources with other threads of the same process: the code section, the data section, files and signals.
A new thread, or a child process of a given process, can be introduced by using the fork() system call. A process with n fork() system calls generates 2^n – 1 child processes.There are two types of threads:
User threads
Kernel threads
Examples of user threads: Java threads, POSIX threads. Examples of kernel threads: Windows, Solaris.
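As a quick illustration (my own sketch, not from the original notes), the following Python snippet shows one process running two tasks at the same time with threads; both threads share the process's data while keeping their own stacks:

import threading

def task(name):
    # Each thread has its own stack and program counter,
    # but shares the process's code, data and open files.
    print(f"{name} running")

t1 = threading.Thread(target=task, args=("thread-1",))
t2 = threading.Thread(target=task, args=("thread-2",))
t1.start(); t2.start()
t1.join(); t2.join()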
Process: A process is a program under execution. The value of the program counter (PC) indicates the address of the next instruction of the process being executed. Each process is represented by a Process Control Block (PCB).
Process Scheduling: Below are different times with respect to a process.
Arrival Time – Time at which the process arrives in the ready queue.
Completion Time – Time at which process completes its execution.
Burst Time – Time required by a process for CPU execution.
Turn Around Time – Time difference between completion time and arrival time.
Turn Around Time = Completion Time - Arrival Time
Waiting Time (WT) – Time difference between turn around time and burst time.
Waiting Time = Turn Around Time - Burst Time
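To make the two formulas above concrete, here is a small illustrative Python sketch (the arrival and burst values are made up) that computes completion, turnaround and waiting times for processes served in FCFS order:

# Each tuple is (arrival_time, burst_time); processes are served in FCFS order.
processes = [(0, 4), (1, 3), (2, 1)]

time = 0
for i, (arrival, burst) in enumerate(processes):
    time = max(time, arrival) + burst   # completion time of this process
    turnaround = time - arrival         # Turn Around Time = Completion - Arrival
    waiting = turnaround - burst        # Waiting Time = Turn Around Time - Burst
    print(f"P{i+1}: completion={time}, turnaround={turnaround}, waiting={waiting}")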
Why do we need scheduling? A typical process involves both I/O time and CPU time. In a uniprogramming system like MS-DOS, time spent waiting for I/O is wasted and the CPU is free during this time. In multiprogramming systems, one process can use the CPU while another is waiting for I/O. This is possible only with process scheduling.
Objectives of Process Scheduling Algorithm:
Max CPU utilization (Keep CPU as busy as possible)
Fair allocation of CPU.
Max throughput (Number of processes that complete their execution per time unit)
Min turnaround time (Time taken by a process to finish execution)
Min waiting time (Time for which a process waits in ready queue)
Min response time (Time when a process produces first response)
Different Scheduling Algorithms:
First Come First Serve (FCFS) : Simplest scheduling algorithm that schedules according to arrival times of processes.
Shortest Job First (SJF): Processes which have the shortest burst time are scheduled first.
Shortest Remaining Time First (SRTF): It is preemptive mode of SJF algorithm in which jobs are scheduled according to the shortest remaining time.
Round Robin (RR) Scheduling: Each process is assigned a fixed time, in cyclic way.
Priority Based scheduling (Non Preemptive): In this scheduling, processes are scheduled according to their priorities, i.e., the highest priority process is scheduled first. If the priorities of two processes match, then scheduling is according to the arrival time.
Highest Response Ratio Next (HRRN): In this scheduling, the process with the highest response ratio is scheduled first. This algorithm avoids starvation.
Response Ratio = (Waiting Time + Burst time) / Burst time
Multilevel Queue Scheduling (MLQ): According to the priority of process, processes are placed in the different queues. Generally high priority process are placed in the top level queue. Only after completion of processes from top level queue, lower level queued processes are scheduled.
Multi level Feedback Queue (MLFQ) Scheduling: It allows the process to move in between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it is moved to a lower-priority queue.
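As a rough sketch of one of these policies (my own illustration; the burst times are made up and all jobs are assumed to arrive at time 0), Round Robin can be simulated with a queue and a fixed time quantum:

from collections import deque

def round_robin(bursts, quantum):
    # queue holds (pid, remaining_burst) pairs
    queue = deque(enumerate(bursts))
    time = 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((pid, remaining - run))   # unfinished: back of the queue
        else:
            print(f"P{pid} finishes at t={time}")

round_robin([5, 3, 8], quantum=2)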
Some useful facts about Scheduling Algorithms:
FCFS can cause long waiting times, especially when the first job takes too much CPU time.
Both SJF and Shortest Remaining time first algorithms may cause starvation. Consider a situation when a long process is there in the ready queue and shorter processes keep coming.
If time quantum for Round Robin scheduling is very large, then it behaves same as FCFS scheduling.
SJF is optimal in terms of average waiting time for a given set of processes. SJF gives minimum average waiting time, but problems with SJF is how to know/predict the time of next job.
The Critical Section Problem:
Critical Section – The portion of the code in the program where shared variables are accessed and/or updated.
Remainder Section – The remaining portion of the program excluding the Critical Section.
Race around Condition – The final output of the code depends on the order in which the variables are accessed. This is termed as the race around condition.
A solution for the critical section problem must satisfy the following three conditions:
Mutual Exclusion – If a process Pi is executing in its critical section, then no other process is allowed to enter into the critical section.
Progress – If no process is executing in the critical section, then the decision of a process to enter a critical section cannot be made by any other process that is executing in its remainder section. The selection of the process cannot be postponed indefinitely.
Bounded Waiting – There exists a bound on the number of times other processes can enter into the critical section after a process has made request to access the critical section and before the requested is granted.
Synchronization Tools: A Semaphore is an integer variable that is accessed only through two atomic operations, wait() and signal(). An atomic operation is executed in a single CPU time slice without any pre-emption. Semaphores are of two types:
Counting Semaphore – A counting semaphore is an integer variable whose value can range over an unrestricted domain.
Mutex – A mutex provides mutual exclusion: either the producer or the consumer can have the key (mutex) and proceed with its work. As long as the buffer is being filled by the producer, the consumer needs to wait, and vice versa.
At any point of time, only one thread can work with the entire buffer. The concept can be generalized using semaphore.
Misconception: There is an ambiguity between binary semaphore and mutex. We might have come across the claim that a mutex is a binary semaphore, but it is not! The purposes of mutex and semaphore are different. Perhaps, due to the similarity in their implementation, a mutex is sometimes referred to as a binary semaphore.
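A minimal Python sketch of a counting semaphore (my illustration, not part of the notes): the semaphore is initialised to 2, so at most two of the five worker threads can be in the critical section at once; acquire() and release() play the roles of wait() and signal():

import threading, time

sem = threading.Semaphore(2)       # counting semaphore initialised to 2

def worker(i):
    sem.acquire()                  # wait(): decrement, block if count is 0
    try:
        print(f"worker {i} in critical section")
        time.sleep(0.1)
    finally:
        sem.release()              # signal(): increment, wake a waiting thread

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()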
Deadlock: A situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process. A deadlock can arise if the following four conditions hold simultaneously (necessary conditions):
Mutual Exclusion – One or more than one resource are non-sharable (Only one process can use at a time).
Hold and Wait – A process is holding at least one resource and waiting for resources.
No Preemption – A resource cannot be taken from a process unless the process releases the resource.
Circular Wait – A set of processes are waiting for each other in circular form.
Methods for handling deadlock: There are three ways to handle deadlock
Deadlock prevention or avoidance: The idea is to not let the system enter a deadlock state.
Deadlock detection and recovery : Let deadlock occur, then do preemption to handle it once occurred.
Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot the system. This is the approach that both Windows and UNIX take.
Banker’s Algorithm: This algorithm handles multiple instances of the same resource.
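Below is an illustrative sketch of the safety check at the heart of the Banker's Algorithm (the matrices are made-up example values): the state is safe if some ordering lets every process acquire its remaining need and finish.

def is_safe(available, max_need, allocation):
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, allocation)]
    work = available[:]
    finished = [False] * len(allocation)
    sequence = []
    while len(sequence) < len(allocation):
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # P_i can run to completion and release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None   # no process can proceed: the state is unsafe
    return sequence

# Example with 3 resource types and 5 processes (illustrative values only)
print(is_safe([3, 3, 2],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))
# -> [1, 3, 0, 2, 4]: a safe sequence exists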
Memory Management: These techniques allow the memory to be shared among multiple processes.
Overlays – The memory should contain only those instructions and data that are required at a given time.
Swapping – In multiprogramming, the instructions that have used the time slice are swapped out from the memory.
Memory Management Techniques:
(a) Single Partition Allocation Schemes – The memory is divided into two parts. One part is reserved for use by the OS and the other is available to the users.
(b) Multiple Partition Schemes –
Fixed Partition – The memory is divided into fixed size partitions.
Variable Partition – The memory is divided into variable sized partitions.
Variable partition allocation schemes:
First Fit – The arriving process is allotted the first hole of memory in which it fits completely.
Best Fit – The arriving process is allotted the hole of memory in which it fits the best by leaving the minimum memory empty.
Worst Fit – The arriving process is allotted the hole of memory in which it leaves the maximum gap.
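These three strategies can be compared with a tiny Python sketch (my own, with made-up hole sizes):

def allocate(holes, request, strategy):
    # holes: sizes of free blocks; returns the index of the chosen hole, or None
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]   # earliest hole that fits
    if strategy == "best":
        return min(candidates)[1]                       # smallest hole that fits
    return max(candidates)[1]                           # worst: largest hole

holes = [100, 500, 200, 300, 600]
for s in ("first", "best", "worst"):
    print(s, "fit -> hole index", allocate(holes, 212, s))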
Note:
Best fit does not necessarily give the best results for memory allocation.
The cause of external fragmentation is the condition in Fixed partitioning and Variable partitioning that the entire process should be allocated in a contiguous memory location. Therefore, Paging is used.
Paging – The physical memory is divided into equal-sized frames and the logical memory is divided into fixed-size pages. The size of a physical memory frame is equal to the size of a virtual memory page.
Segmentation – Segmentation is implemented to give users a view of memory. The logical address space is a collection of segments. Segmentation can be implemented with or without the use of paging.
Page Fault: A page fault is a type of interrupt, raised by the hardware when a running program accesses a memory page that is mapped into the virtual address space but not loaded in physical memory.
Page Replacement Algorithms:
First In First Out (FIFO) – This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue; the oldest page is at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
For example, consider the page reference string 1, 3, 0, 3, 5, 6 and 3 page slots. Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page Faults. When 3 comes, it is already in memory so —> 0 Page Faults. Then 5 comes; it is not available in memory, so it replaces the oldest page, i.e., 1 —> 1 Page Fault. Finally, 6 comes; it is also not available in memory, so it replaces the oldest page, i.e., 3 —> 1 Page Fault.
Belady’s anomaly: Belady’s anomaly proves that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, if we consider the reference string 3 2 1 0 3 2 4 3 2 1 0 4 and 3 slots, we get 9 total page faults, but if we increase the slots to 4, we get 10 page faults (see the sketch below).
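Both claims are easy to verify with a short Python sketch (my own illustration): on the reference string from the example it reports 5 page faults, and on Belady's string it shows the fault count rising from 9 with 3 frames to 10 with 4 frames.

from collections import deque

def fifo_faults(reference, frames):
    memory, queue, faults = set(), deque(), 0
    for page in reference:
        if page not in memory:
            faults += 1
            if len(memory) == frames:            # memory full: evict oldest page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6], 3))        # -> 5
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))   # -> 9 10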
Optimal Page replacement – In this algorithm, the page that will not be used for the longest duration of time in the future is replaced.
Let us consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots. Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults. 0 is already there so —> 0 Page fault. When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future —> 1 Page fault. 0 is already there so —> 0 Page fault. 4 takes the place of 1 —> 1 Page Fault. For the rest of the reference string —> 0 Page faults because the pages are already available in memory.
Optimal page replacement is perfect, but not possible in practice as an operating system cannot know future requests. The use of Optimal Page replacement is to set up a benchmark so that other replacement algorithms can be analyzed against it.
Least Recently Used (LRU) – In this algorithm, the page that has been least recently used is replaced.
Let’s say the page reference string is 7 0 1 2 0 3 0 4 2 3 0 3 2 and we have 4 page slots. Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults. 0 is already there so —> 0 Page fault. When 3 comes, it takes the place of 7 because 7 is least recently used —> 1 Page fault. 0 is already in memory so —> 0 Page fault. 4 takes the place of 1 —> 1 Page Fault. For the rest of the reference string —> 0 Page faults because the pages are already available in memory.
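The same walkthrough can be reproduced with a short Python sketch (my own illustration) that keeps pages ordered from least to most recently used:

def lru_faults(reference, frames):
    memory, faults = [], 0        # list order: least -> most recently used
    for page in reference:
        if page in memory:
            memory.remove(page)   # hit: will be re-appended as most recent
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)     # evict the least recently used page
        memory.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(ref, 4))         # -> 6, matching the walkthrough above (4 + 1 + 1)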
File System: A file is a collection of related information that is recorded on secondary storage; in other words, a file is a collection of logically related entities.
File Directories: Collection of files is a file directory. The directory contains information about the files, including attributes, location and ownership. Much of this information, especially that is concerned with storage, is managed by the operating system.
SINGLE-LEVEL DIRECTORY: In this, a single directory is maintained for all the users.
TWO-LEVEL DIRECTORY: Due to two levels there is a path name for every file to locate that file.
TREE-STRUCTURED DIRECTORY: The directory is maintained in the form of a tree. Searching is efficient and there is also a grouping capability.
File Allocation Methods:
Continuous Allocation: A single continuous set of blocks is allocated to a file at the time of file creation.
Linked Allocation(Non-contiguous allocation): Allocation is on an individual block basis. Each block contains a pointer to the next block in the chain.
Indexed Allocation: It addresses many of the problems of contiguous and chained allocation. In this case, the file allocation table contains a separate one-level index for each file.
Disk Scheduling: Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk scheduling is also known as I/O scheduling.
Seek Time: Seek time is the time taken to move the disk arm to the specified track where the data is to be read or written.
Rotational Latency: Rotational Latency is the time taken by the desired sector of disk to rotate into a position so that it can access the read/write heads.
Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and number of bytes to be transferred.
Disk Access Time: Seek Time + Rotational Latency + Transfer Time
Disk Response Time: Response time is the average time a request spends waiting to perform its I/O operation. The average response time is the mean response time of all requests.
Disk Scheduling Algorithms:
FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue.
SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are executed first. So, the seek time of every request is calculated in advance in a queue and then they are scheduled according to their calculated seek time. As a result, the request near the disk arm will get executed first.
SCAN: In the SCAN algorithm the disk arm moves in a particular direction and services the requests coming in its path, and after reaching the end of the disk, it reverses its direction and again services the requests arriving in its path. So, this algorithm works like an elevator and hence is also known as the elevator algorithm.
CSCAN: In the SCAN algorithm, the disk arm rescans the path it has just covered after reversing direction, so too many requests may be waiting at the other end while few or none are pending in the freshly scanned area. In CSCAN, the arm instead jumps back to the other end of the disk without servicing requests on the return trip, servicing requests in one direction only.
LOOK: It is similar to the SCAN disk scheduling algorithm except for the difference that the disk arm in spite of going to the end of the disk goes only to the last request to be serviced in front of the head and then reverses its direction from there only. Thus it prevents the extra delay which occurred due to unnecessary traversal to the end of the disk.
CLOOK: As LOOK is similar to SCAN algorithm, in a similar way, CLOOK is similar to CSCAN disk scheduling algorithm. In CLOOK, the disk arm in spite of going to the end goes only to the last request to be serviced in front of the head and then from there goes to the other end’s last request. Thus, it also prevents the extra delay which occurred due to unnecessary traversal to the end of the disk.
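To see how much the policy matters, here is an illustrative Python sketch (the request queue and head position are made-up values) comparing total head movement under FCFS and SSTF:

def fcfs_seek(requests, head):
    total = 0
    for r in requests:            # service strictly in arrival order
        total += abs(r - head)
        head = r
    return total

def sstf_seek(requests, head):
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))   # shortest seek first
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 41, 122, 14, 124, 65, 67]
print("FCFS total head movement:", fcfs_seek(queue, 53))   # 632
print("SSTF total head movement:", sstf_seek(queue, 53))   # 236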
| [
{
"code": null,
"e": 25391,
"s": 25363,
"text": "\n28 Jun, 2021"
},
{
"code": null,
"e": 25449,
"s": 25391,
"text": "See Last Minute Notes for all subjects all subjects here."
},
{
"code": null,
"e": 25532,
"s": 25449,
"text": "Operating Systems: It is the interface between the user and the computer hardware."
},
{
"code": null,
"e": 25564,
"s": 25532,
"text": "Types of Operating System (OS):"
},
{
"code": null,
"e": 26698,
"s": 25564,
"text": "Batch OS – A set of similar jobs are stored in the main memory for execution. A job gets assigned to the CPU, only when the execution of the previous job completes.Multiprogramming OS – The main memory consists of jobs waiting for CPU time. The OS selects one of the processes and assigns it to the CPU. Whenever the executing process needs to wait for any other operation (like I/O), the OS selects another process from the job queue and assigns it to the CPU. This way, the CPU is never kept idle and the user gets the flavor of getting multiple tasks done at once.Multitasking OS – Multitasking OS combines the benefits of Multiprogramming OS and CPU scheduling to perform quick switches between jobs. The switch is so quick that the user can interact with each program as it runsTime Sharing OS – Time-sharing systems require interaction with the user to instruct the OS to perform various tasks. The OS responds with an output. The instructions are usually given through an input device like the keyboard.Real Time OS – Real-Time OS are usually built for dedicated systems to accomplish a specific set of tasks within deadlines."
},
{
"code": null,
"e": 26863,
"s": 26698,
"text": "Batch OS – A set of similar jobs are stored in the main memory for execution. A job gets assigned to the CPU, only when the execution of the previous job completes."
},
{
"code": null,
"e": 27267,
"s": 26863,
"text": "Multiprogramming OS – The main memory consists of jobs waiting for CPU time. The OS selects one of the processes and assigns it to the CPU. Whenever the executing process needs to wait for any other operation (like I/O), the OS selects another process from the job queue and assigns it to the CPU. This way, the CPU is never kept idle and the user gets the flavor of getting multiple tasks done at once."
},
{
"code": null,
"e": 27484,
"s": 27267,
"text": "Multitasking OS – Multitasking OS combines the benefits of Multiprogramming OS and CPU scheduling to perform quick switches between jobs. The switch is so quick that the user can interact with each program as it runs"
},
{
"code": null,
"e": 27712,
"s": 27484,
"text": "Time Sharing OS – Time-sharing systems require interaction with the user to instruct the OS to perform various tasks. The OS responds with an output. The instructions are usually given through an input device like the keyboard."
},
{
"code": null,
"e": 27836,
"s": 27712,
"text": "Real Time OS – Real-Time OS are usually built for dedicated systems to accomplish a specific set of tasks within deadlines."
},
{
"code": null,
"e": 28012,
"s": 27836,
"text": "Threads:A thread is a lightweight process and forms the basic unit of CPU utilization. A process can perform more than one task at the same time by including multiple threads."
},
{
"code": null,
"e": 28074,
"s": 28012,
"text": "A thread has its own program counter, register set, and stack"
},
{
"code": null,
"e": 28194,
"s": 28074,
"text": "A thread shares resources with other threads of the same process the code section, the data section, files and signals."
},
{
"code": null,
"e": 28400,
"s": 28194,
"text": "A new thread, or a child process of a given process, can be introduced by using the fork() system call. A process with n fork() system calls generates 2n – 1 child processes.There are two types of threads:"
},
{
"code": null,
"e": 28413,
"s": 28400,
"text": "User threads"
},
{
"code": null,
"e": 28428,
"s": 28413,
"text": "Kernel threads"
},
{
"code": null,
"e": 28490,
"s": 28428,
"text": "Example: Java thread, POSIX threads.Example : Window Solaris."
},
{
"code": null,
"e": 28714,
"s": 28492,
"text": " Process:A process is a program under execution. The value of program counter (PC) indicates the address of the next instruction of the process being executed. Each process is represented by a Process Control Block (PCB)."
},
{
"code": null,
"e": 28787,
"s": 28714,
"text": "Process Scheduling: Below are different times with respect to a process."
},
{
"code": null,
"e": 29225,
"s": 28787,
"text": "Arrival Time – Time at which the process arrives in the ready queue.Completion Time – Time at which process completes its execution.Burst Time – Time required by a process for CPU execution.Turn Around Time – Time Difference between completion time and arrival time.Turn Around Time = Completion Time - Arrival Time Waiting Time (WT) – Time Difference between turn around time and burst time.Waiting Time = Turn Around Time - Burst Time "
},
{
"code": null,
"e": 29294,
"s": 29225,
"text": "Arrival Time – Time at which the process arrives in the ready queue."
},
{
"code": null,
"e": 29359,
"s": 29294,
"text": "Completion Time – Time at which process completes its execution."
},
{
"code": null,
"e": 29418,
"s": 29359,
"text": "Burst Time – Time required by a process for CPU execution."
},
{
"code": null,
"e": 29545,
"s": 29418,
"text": "Turn Around Time – Time Difference between completion time and arrival time.Turn Around Time = Completion Time - Arrival Time "
},
{
"code": null,
"e": 29596,
"s": 29545,
"text": "Turn Around Time = Completion Time - Arrival Time "
},
{
"code": null,
"e": 29718,
"s": 29596,
"text": "Waiting Time (WT) – Time Difference between turn around time and burst time.Waiting Time = Turn Around Time - Burst Time "
},
{
"code": null,
"e": 29764,
"s": 29718,
"text": "Waiting Time = Turn Around Time - Burst Time "
},
{
"code": null,
"e": 30090,
"s": 29764,
"text": "Why do we need scheduling?A typical process involves both I/O time and CPU time. In a uniprogramming system like MS-DOS, time spent waiting for I/O is wasted and CPU is free during this time. In multiprogramming systems, one process can use CPU while another is waiting for I/O. This is possible only with process scheduling."
},
{
"code": null,
"e": 30134,
"s": 30090,
"text": "Objectives of Process Scheduling Algorithm:"
},
{
"code": null,
"e": 30185,
"s": 30134,
"text": "Max CPU utilization (Keep CPU as busy as possible)"
},
{
"code": null,
"e": 30209,
"s": 30185,
"text": "Fair allocation of CPU."
},
{
"code": null,
"e": 30290,
"s": 30209,
"text": "Max throughput (Number of processes that complete their execution per time unit)"
},
{
"code": null,
"e": 30356,
"s": 30290,
"text": "Min turnaround time (Time taken by a process to finish execution)"
},
{
"code": null,
"e": 30421,
"s": 30356,
"text": "Min waiting time (Time for which a process waits in ready queue)"
},
{
"code": null,
"e": 30485,
"s": 30421,
"text": "Min response time (Time when a process produces first response)"
},
{
"code": null,
"e": 30518,
"s": 30485,
"text": "Different Scheduling Algorithms:"
},
{
"code": null,
"e": 31951,
"s": 30518,
"text": "First Come First Serve (FCFS) : Simplest scheduling algorithm that schedules according to arrival times of processes.Shortest Job First (SJF): Process which have the shortest burst time are scheduled first.Shortest Remaining Time First (SRTF): It is preemptive mode of SJF algorithm in which jobs are scheduled according to the shortest remaining time.Round Robin (RR) Scheduling: Each process is assigned a fixed time, in cyclic way.Priority Based scheduling (Non Preemptive): In this scheduling, processes are scheduled according to their priorities, i.e., highest priority process is schedule first. If priorities of two processes match, then scheduling is according to the arrival time.Highest Response Ratio Next (HRRN): In this scheduling, processes with highest response ratio is scheduled. This algorithm avoids starvation.Response Ratio = (Waiting Time + Burst time) / Burst timeMultilevel Queue Scheduling (MLQ): According to the priority of process, processes are placed in the different queues. Generally high priority process are placed in the top level queue. Only after completion of processes from top level queue, lower level queued processes are scheduled.Multi level Feedback Queue (MLFQ) Scheduling: It allows the process to move in between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it is moved to a lower-priority queue."
},
{
"code": null,
"e": 32069,
"s": 31951,
"text": "First Come First Serve (FCFS) : Simplest scheduling algorithm that schedules according to arrival times of processes."
},
{
"code": null,
"e": 32159,
"s": 32069,
"text": "Shortest Job First (SJF): Process which have the shortest burst time are scheduled first."
},
{
"code": null,
"e": 32306,
"s": 32159,
"text": "Shortest Remaining Time First (SRTF): It is preemptive mode of SJF algorithm in which jobs are scheduled according to the shortest remaining time."
},
{
"code": null,
"e": 32389,
"s": 32306,
"text": "Round Robin (RR) Scheduling: Each process is assigned a fixed time, in cyclic way."
},
{
"code": null,
"e": 32646,
"s": 32389,
"text": "Priority Based scheduling (Non Preemptive): In this scheduling, processes are scheduled according to their priorities, i.e., highest priority process is schedule first. If priorities of two processes match, then scheduling is according to the arrival time."
},
{
"code": null,
"e": 32845,
"s": 32646,
"text": "Highest Response Ratio Next (HRRN): In this scheduling, processes with highest response ratio is scheduled. This algorithm avoids starvation.Response Ratio = (Waiting Time + Burst time) / Burst time"
},
{
"code": null,
"e": 32903,
"s": 32845,
"text": "Response Ratio = (Waiting Time + Burst time) / Burst time"
},
{
"code": null,
"e": 33190,
"s": 32903,
"text": "Multilevel Queue Scheduling (MLQ): According to the priority of process, processes are placed in the different queues. Generally high priority process are placed in the top level queue. Only after completion of processes from top level queue, lower level queued processes are scheduled."
},
{
"code": null,
"e": 33449,
"s": 33190,
"text": "Multi level Feedback Queue (MLFQ) Scheduling: It allows the process to move in between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it is moved to a lower-priority queue."
},
{
"code": null,
"e": 33496,
"s": 33449,
"text": "Some useful facts about Scheduling Algorithms:"
},
{
"code": null,
"e": 34047,
"s": 33496,
"text": "FCFS can cause long waiting times, especially when the first job takes too much CPU time.Both SJF and Shortest Remaining time first algorithms may cause starvation. Consider a situation when a long process is there in the ready queue and shorter processes keep coming.If time quantum for Round Robin scheduling is very large, then it behaves same as FCFS scheduling.SJF is optimal in terms of average waiting time for a given set of processes. SJF gives minimum average waiting time, but problems with SJF is how to know/predict the time of next job."
},
{
"code": null,
"e": 34137,
"s": 34047,
"text": "FCFS can cause long waiting times, especially when the first job takes too much CPU time."
},
{
"code": null,
"e": 34317,
"s": 34137,
"text": "Both SJF and Shortest Remaining time first algorithms may cause starvation. Consider a situation when a long process is there in the ready queue and shorter processes keep coming."
},
{
"code": null,
"e": 34416,
"s": 34317,
"text": "If time quantum for Round Robin scheduling is very large, then it behaves same as FCFS scheduling."
},
{
"code": null,
"e": 34601,
"s": 34416,
"text": "SJF is optimal in terms of average waiting time for a given set of processes. SJF gives minimum average waiting time, but problems with SJF is how to know/predict the time of next job."
},
{
"code": null,
"e": 34631,
"s": 34601,
"text": "The Critical Section Problem:"
},
{
"code": null,
"e": 34984,
"s": 34631,
"text": "Critical Section – The portion of the code in the program where shared variables are accessed and/or updated.Remainder Section – The remaining portion of the program excluding the Critical Section.Race around Condition – The final output of the code depends on the order in which the variables are accessed. This is termed as the race around condition."
},
{
"code": null,
"e": 35094,
"s": 34984,
"text": "Critical Section – The portion of the code in the program where shared variables are accessed and/or updated."
},
{
"code": null,
"e": 35183,
"s": 35094,
"text": "Remainder Section – The remaining portion of the program excluding the Critical Section."
},
{
"code": null,
"e": 35339,
"s": 35183,
"text": "Race around Condition – The final output of the code depends on the order in which the variables are accessed. This is termed as the race around condition."
},
{
"code": null,
"e": 35428,
"s": 35339,
"text": "A solution for the critical section problem must satisfy the following three conditions:"
},
{
"code": null,
"e": 36048,
"s": 35428,
"text": "Mutual Exclusion – If a process Pi is executing in its critical section, then no other process is allowed to enter into the critical section.Progress – If no process is executing in the critical section, then the decision of a process to enter a critical section cannot be made by any other process that is executing in its remainder section. The selection of the process cannot be postponed indefinitely.Bounded Waiting – There exists a bound on the number of times other processes can enter into the critical section after a process has made request to access the critical section and before the requested is granted."
},
{
"code": null,
"e": 36190,
"s": 36048,
"text": "Mutual Exclusion – If a process Pi is executing in its critical section, then no other process is allowed to enter into the critical section."
},
{
"code": null,
"e": 36455,
"s": 36190,
"text": "Progress – If no process is executing in the critical section, then the decision of a process to enter a critical section cannot be made by any other process that is executing in its remainder section. The selection of the process cannot be postponed indefinitely."
},
{
"code": null,
"e": 36670,
"s": 36455,
"text": "Bounded Waiting – There exists a bound on the number of times other processes can enter into the critical section after a process has made request to access the critical section and before the requested is granted."
},
{
"code": null,
"e": 36917,
"s": 36670,
"text": " Synchronization Tools:A Semaphore is an integer variable that is accessed only through two atomic operations, wait () and signal (). An atomic operation is executed in a single CPU time slice without any pre-emption. Semaphores are of two types:"
},
{
"code": null,
"e": 37660,
"s": 36917,
"text": "Counting Semaphore – A counting semaphore is an integer variable whose value can range over an unrestricted domain.Mutex – A mutex provides mutual exclusion, either producer or consumer can have the key (mutex) and proceed with their work. As long as the buffer is filled by producer, the consumer needs to wait, and vice versa.At any point of time, only one thread can work with the entire buffer. The concept can be generalized using semaphore.Misconception:There is an ambiguity between binary semaphore and mutex. We might have come across that a mutex is binary semaphore. But they are not! The purpose of mutex and semaphore are different. May be, due to similarity in their implementation a mutex would be referred as binary semaphore."
},
{
"code": null,
"e": 37776,
"s": 37660,
"text": "Counting Semaphore – A counting semaphore is an integer variable whose value can range over an unrestricted domain."
},
{
"code": null,
"e": 38404,
"s": 37776,
"text": "Mutex – A mutex provides mutual exclusion, either producer or consumer can have the key (mutex) and proceed with their work. As long as the buffer is filled by producer, the consumer needs to wait, and vice versa.At any point of time, only one thread can work with the entire buffer. The concept can be generalized using semaphore.Misconception:There is an ambiguity between binary semaphore and mutex. We might have come across that a mutex is binary semaphore. But they are not! The purpose of mutex and semaphore are different. May be, due to similarity in their implementation a mutex would be referred as binary semaphore."
},
{
"code": null,
"e": 38523,
"s": 38404,
"text": "At any point of time, only one thread can work with the entire buffer. The concept can be generalized using semaphore."
},
{
"code": null,
"e": 38820,
"s": 38523,
"text": "Misconception:There is an ambiguity between binary semaphore and mutex. We might have come across that a mutex is binary semaphore. But they are not! The purpose of mutex and semaphore are different. May be, due to similarity in their implementation a mutex would be referred as binary semaphore."
},
{
"code": null,
"e": 39078,
"s": 38820,
"text": "Deadlock:A situation where a set of processes are blocked because each process is holding a resource and waiting for another resource acquired by some other process. Deadlock can arise if following four conditions hold simultaneously (Necessary Conditions):"
},
{
"code": null,
"e": 39445,
"s": 39078,
"text": "Mutual Exclusion – One or more than one resource are non-sharable (Only one process can use at a time).Hold and Wait – A process is holding at least one resource and waiting for resources.No Preemption – A resource cannot be taken from a process unless the process releases the resource.Circular Wait – A set of processes are waiting for each other in circular form."
},
{
"code": null,
"e": 39549,
"s": 39445,
"text": "Mutual Exclusion – One or more than one resource are non-sharable (Only one process can use at a time)."
},
{
"code": null,
"e": 39635,
"s": 39549,
"text": "Hold and Wait – A process is holding at least one resource and waiting for resources."
},
{
"code": null,
"e": 39735,
"s": 39635,
"text": "No Preemption – A resource cannot be taken from a process unless the process releases the resource."
},
{
"code": null,
"e": 39815,
"s": 39735,
"text": "Circular Wait – A set of processes are waiting for each other in circular form."
},
{
"code": null,
"e": 39886,
"s": 39815,
"text": "Methods for handling deadlock: There are three ways to handle deadlock"
},
{
"code": null,
"e": 40232,
"s": 39886,
"text": "Deadlock prevention or avoidance: The idea is to not let the system into deadlock state.Deadlock detection and recovery : Let deadlock occur, then do preemption to handle it once occurred.Ignore the problem all together – : If deadlock is very rare, then let it happen and reboot the system. This is the approach that both Windows and UNIX take."
},
{
"code": null,
"e": 40321,
"s": 40232,
"text": "Deadlock prevention or avoidance: The idea is to not let the system into deadlock state."
},
{
"code": null,
"e": 40422,
"s": 40321,
"text": "Deadlock detection and recovery : Let deadlock occur, then do preemption to handle it once occurred."
},
{
"code": null,
"e": 40580,
"s": 40422,
"text": "Ignore the problem all together – : If deadlock is very rare, then let it happen and reboot the system. This is the approach that both Windows and UNIX take."
},
{
"code": null,
"e": 40663,
"s": 40580,
"text": "Banker’s Algorithm:This algorithm handles multiple instances of the same resource."
},
{
"code": null,
"e": 40754,
"s": 40663,
"text": "Memory Management:These techniques allow the memory to be shared among multiple processes."
},
{
"code": null,
"e": 40859,
"s": 40754,
"text": "Overlays – The memory should contain only those instructions and data that are required at a given time."
},
{
"code": null,
"e": 40971,
"s": 40859,
"text": "Swapping – In multiprogramming, the instructions that have used the time slice are swapped out from the memory."
},
{
"code": null,
"e": 41001,
"s": 40971,
"text": "Memory Management Techniques:"
},
{
"code": null,
"e": 41165,
"s": 41001,
"text": "(a) Single Partition Allocation Schemes –The memory is divided into two parts. One part is kept to be used by the OS and the other is kept to be used by the users."
},
{
"code": null,
"e": 41198,
"s": 41165,
"text": "(b) Multiple Partition Schemes –"
},
{
"code": null,
"e": 41340,
"s": 41198,
"text": "Fixed Partition – The memory is divided into fixed size partitions.Variable Partition – The memory is divided into variable sized partitions."
},
{
"code": null,
"e": 41408,
"s": 41340,
"text": "Fixed Partition – The memory is divided into fixed size partitions."
},
{
"code": null,
"e": 41483,
"s": 41408,
"text": "Variable Partition – The memory is divided into variable sized partitions."
},
{
"code": null,
"e": 41522,
"s": 41483,
"text": "Variable partition allocation schemes:"
},
{
"code": null,
"e": 41845,
"s": 41522,
"text": "First Fit – The arriving process is allotted the first hole of memory in which it fits completely.Best Fit – The arriving process is allotted the hole of memory in which it fits the best by leaving the minimum memory empty.Worst Fit – The arriving process is allotted the hole of memory in which it leaves the maximum gap."
},
{
"code": null,
"e": 41944,
"s": 41845,
"text": "First Fit – The arriving process is allotted the first hole of memory in which it fits completely."
},
{
"code": null,
"e": 42070,
"s": 41944,
"text": "Best Fit – The arriving process is allotted the hole of memory in which it fits the best by leaving the minimum memory empty."
},
{
"code": null,
"e": 42170,
"s": 42070,
"text": "Worst Fit – The arriving process is allotted the hole of memory in which it leaves the maximum gap."
},
{
"code": null,
"e": 42176,
"s": 42170,
"text": "Note:"
},
{
"code": null,
"e": 42251,
"s": 42176,
"text": "Best fit does not necessarily give the best results for memory allocation."
},
{
"code": null,
"e": 42457,
"s": 42251,
"text": "The cause of external fragmentation is the condition in Fixed partitioning and Variable partitioning saying that entire process should be allocated in a contiguous memory location.Therefore Paging is used."
},
{
"code": null,
"e": 42848,
"s": 42457,
"text": "Paging –The physical memory is divided into equal sized frames. The main memory is divided into fixed size pages. The size of a physical memory frame is equal to the size of a virtual memory frame.Segmentation –Segmentation is implemented to give users view of memory. The logical address space is a collection of segments. Segmentation can be implemented with or without the use of paging."
},
{
"code": null,
"e": 43046,
"s": 42848,
"text": "Paging –The physical memory is divided into equal sized frames. The main memory is divided into fixed size pages. The size of a physical memory frame is equal to the size of a virtual memory frame."
},
{
"code": null,
"e": 43240,
"s": 43046,
"text": "Segmentation –Segmentation is implemented to give users view of memory. The logical address space is a collection of segments. Segmentation can be implemented with or without the use of paging."
},
{
"code": null,
"e": 43439,
"s": 43240,
"text": "Page Fault:A page fault is a type of interrupt, raised by the hardware when a running program accesses a memory page that is mapped into the virtual address space, but not loaded in physical memory."
},
{
"code": null,
"e": 43468,
"s": 43439,
"text": "Page Replacement Algorithms:"
},
{
"code": null,
"e": 50023,
"s": 43468,
"text": "First In First Out (FIFO) –This is the simplest page replacement algorithm. In this algorithm, operating system keeps track of all pages in the memory in a queue, oldest page is in the front of the queue. When a page needs to be replaced page in the front of the queue is selected for removal.For example, consider page reference string 1, 3, 0, 3, 5, 6 and 3 page slots. Initially, all slots are empty, so when 1, 3, 0 came they are allocated to the empty slots —> 3 Page Faults. When 3 comes, it is already in memory so —> 0 Page Faults. Then 5 comes, it is not available in memory so it replaces the oldest page slot i.e 1. —> 1 Page Fault. Finally, 6 comes, it is also not available in memory so it replaces the oldest page slot i.e 3 —> 1 Page Fault.Belady’s anomaly:Belady’s anomaly proves that it is possible to have more page faults when increasing the number of page frames while using the First in First Out (FIFO) page replacement algorithm. For example, if we consider reference string 3 2 1 0 3 2 4 3 2 1 0 4 and 3 slots, we get 9 total page faults, but if we increase slots to 4, we get 10 page faults.Optimal Page replacement –In this algorithm, pages are replaced which are not used for the longest duration of time in the future.Let us consider page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots. Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults. 0 is already there so —> 0 Page fault. When 3 came it will take the place of 7 because it is not used for the longest duration of time in the future.—> 1 Page fault. 0 is already there so —> 0 Page fault. 4 will takes place of 1 —> 1 Page Fault. Now for the further page reference string —> 0 Page fault because they are already available in the memory.Optimal page replacement is perfect, but not possible in practice as an operating system cannot know future requests. The use of Optimal Page replacement is to set up a benchmark so that other replacement algorithms can be analyzed against it.Least Recently Used (LRU) –In this algorithm, the page will be replaced which is least recently used.Let say the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 . Initially, we have 4-page slots empty. Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults. 0 is already their so —> 0 Page fault. When 3 came it will take the place of 7 because it is least recently used —> 1 Page fault. 0 is already in memory so —> 0 Page fault. 4 will takes place of 1 —> 1 Page Fault. Now for the further page reference string —> 0 Page fault because they are already available in the memory. File System: A file is a collection of related information that is recorded on secondary storage. Or file is a collection of logically related entities.File Directories: Collection of files is a file directory. The directory contains information about the files, including attributes, location and ownership. Much of this information, especially that is concerned with storage, is managed by the operating system.SINGLE-LEVEL DIRECTORY: In this a single directory is maintained for all the usersTWO-LEVEL DIRECTORY: Due to two levels there is a path name for every file to locate that file.TREE-STRUCTURED DIRECTORY : Directory is maintained in the form of a tree. Searching is efficient and also there is grouping capability. 
File Allocation Methods:Continuous Allocation: A single continuous set of blocks is allocated to a file at the time of file creation.Linked Allocation(Non-contiguous allocation): Allocation is on an individual block basis. Each block contains a pointer to the next block in the chain.Indexed Allocation : It addresses many of the problems of contiguous and chained allocation. In this case, the file allocation table contains a separate one-level index for each file Disk Scheduling:Disk scheduling is done by operating systems to schedule I/O requests arriving for disk. Disk scheduling is also known as I/O scheduling.Seek Time: Seek time is the time taken to locate the disk arm to a specified track where the data is to be read or write.Rotational Latency: Rotational Latency is the time taken by the desired sector of disk to rotate into a position so that it can access the read/write heads.Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and number of bytes to be transferred.Disk Access Time: Seek Time + Rotational Latency + Transfer TimeDisk Response Time: Response Time is the average of time spent by a request waiting to perform its I/O operation. Average Response time is the response time of the all requests. Disk Scheduling Algorithms:FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue.SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are executed first. So, the seek time of every request is calculated in advance in a queue and then they are scheduled according to their calculated seek time. As a result, the request near the disk arm will get executed first.SCAN: In SCAN algorithm the disk arm moves into a particular direction and services the requests coming in its path and after reaching the end of the disk, it reverses its direction and again services the request arriving in its path. So, this algorithm works like an elevator and hence also known as elevator algorithm.CSCAN: In SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing its direction. So, it may be possible that too many requests are waiting at the other end or there may be zero or few requests pending at the scanned area.LOOK: It is similar to the SCAN disk scheduling algorithm except for the difference that the disk arm in spite of going to the end of the disk goes only to the last request to be serviced in front of the head and then reverses its direction from there only. Thus it prevents the extra delay which occurred due to unnecessary traversal to the end of the disk.CLOOK: As LOOK is similar to SCAN algorithm, in a similar way, CLOOK is similar to CSCAN disk scheduling algorithm. In CLOOK, the disk arm in spite of going to the end goes only to the last request to be serviced in front of the head and then from there goes to the other end’s last request. Thus, it also prevents the extra delay which occurred due to unnecessary traversal to the end of the disk. My Personal Notes\narrow_drop_upSave"
},
{
"code": null,
"e": 51193,
"s": 50023,
"text": "First In First Out (FIFO) –This is the simplest page replacement algorithm. In this algorithm, operating system keeps track of all pages in the memory in a queue, oldest page is in the front of the queue. When a page needs to be replaced page in the front of the queue is selected for removal.For example, consider page reference string 1, 3, 0, 3, 5, 6 and 3 page slots. Initially, all slots are empty, so when 1, 3, 0 came they are allocated to the empty slots —> 3 Page Faults. When 3 comes, it is already in memory so —> 0 Page Faults. Then 5 comes, it is not available in memory so it replaces the oldest page slot i.e 1. —> 1 Page Fault. Finally, 6 comes, it is also not available in memory so it replaces the oldest page slot i.e 3 —> 1 Page Fault.Belady’s anomaly:Belady’s anomaly proves that it is possible to have more page faults when increasing the number of page frames while using the First in First Out (FIFO) page replacement algorithm. For example, if we consider reference string 3 2 1 0 3 2 4 3 2 1 0 4 and 3 slots, we get 9 total page faults, but if we increase slots to 4, we get 10 page faults."
},
{
"code": null,
"e": 51659,
"s": 51193,
"text": "For example, consider page reference string 1, 3, 0, 3, 5, 6 and 3 page slots. Initially, all slots are empty, so when 1, 3, 0 came they are allocated to the empty slots —> 3 Page Faults. When 3 comes, it is already in memory so —> 0 Page Faults. Then 5 comes, it is not available in memory so it replaces the oldest page slot i.e 1. —> 1 Page Fault. Finally, 6 comes, it is also not available in memory so it replaces the oldest page slot i.e 3 —> 1 Page Fault."
},
{
"code": null,
"e": 52071,
"s": 51659,
"text": "Belady’s anomaly:Belady’s anomaly proves that it is possible to have more page faults when increasing the number of page frames while using the First in First Out (FIFO) page replacement algorithm. For example, if we consider reference string 3 2 1 0 3 2 4 3 2 1 0 4 and 3 slots, we get 9 total page faults, but if we increase slots to 4, we get 10 page faults."
},
{
"code": null,
"e": 52979,
"s": 52071,
"text": "Optimal Page replacement –In this algorithm, pages are replaced which are not used for the longest duration of time in the future.Let us consider page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots. Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults. 0 is already there so —> 0 Page fault. When 3 came it will take the place of 7 because it is not used for the longest duration of time in the future.—> 1 Page fault. 0 is already there so —> 0 Page fault. 4 will takes place of 1 —> 1 Page Fault. Now for the further page reference string —> 0 Page fault because they are already available in the memory.Optimal page replacement is perfect, but not possible in practice as an operating system cannot know future requests. The use of Optimal Page replacement is to set up a benchmark so that other replacement algorithms can be analyzed against it."
},
{
"code": null,
"e": 53514,
"s": 52979,
"text": "Let us consider page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots. Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults. 0 is already there so —> 0 Page fault. When 3 came it will take the place of 7 because it is not used for the longest duration of time in the future.—> 1 Page fault. 0 is already there so —> 0 Page fault. 4 will takes place of 1 —> 1 Page Fault. Now for the further page reference string —> 0 Page fault because they are already available in the memory."
},
{
"code": null,
"e": 53758,
"s": 53514,
"text": "Optimal page replacement is perfect, but not possible in practice as an operating system cannot know future requests. The use of Optimal Page replacement is to set up a benchmark so that other replacement algorithms can be analyzed against it."
},
{
"code": null,
"e": 58237,
"s": 53758,
"text": "Least Recently Used (LRU) –In this algorithm, the page will be replaced which is least recently used.Let say the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 . Initially, we have 4-page slots empty. Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults. 0 is already their so —> 0 Page fault. When 3 came it will take the place of 7 because it is least recently used —> 1 Page fault. 0 is already in memory so —> 0 Page fault. 4 will takes place of 1 —> 1 Page Fault. Now for the further page reference string —> 0 Page fault because they are already available in the memory. File System: A file is a collection of related information that is recorded on secondary storage. Or file is a collection of logically related entities.File Directories: Collection of files is a file directory. The directory contains information about the files, including attributes, location and ownership. Much of this information, especially that is concerned with storage, is managed by the operating system.SINGLE-LEVEL DIRECTORY: In this a single directory is maintained for all the usersTWO-LEVEL DIRECTORY: Due to two levels there is a path name for every file to locate that file.TREE-STRUCTURED DIRECTORY : Directory is maintained in the form of a tree. Searching is efficient and also there is grouping capability. File Allocation Methods:Continuous Allocation: A single continuous set of blocks is allocated to a file at the time of file creation.Linked Allocation(Non-contiguous allocation): Allocation is on an individual block basis. Each block contains a pointer to the next block in the chain.Indexed Allocation : It addresses many of the problems of contiguous and chained allocation. In this case, the file allocation table contains a separate one-level index for each file Disk Scheduling:Disk scheduling is done by operating systems to schedule I/O requests arriving for disk. Disk scheduling is also known as I/O scheduling.Seek Time: Seek time is the time taken to locate the disk arm to a specified track where the data is to be read or write.Rotational Latency: Rotational Latency is the time taken by the desired sector of disk to rotate into a position so that it can access the read/write heads.Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and number of bytes to be transferred.Disk Access Time: Seek Time + Rotational Latency + Transfer TimeDisk Response Time: Response Time is the average of time spent by a request waiting to perform its I/O operation. Average Response time is the response time of the all requests. Disk Scheduling Algorithms:FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue.SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are executed first. So, the seek time of every request is calculated in advance in a queue and then they are scheduled according to their calculated seek time. As a result, the request near the disk arm will get executed first.SCAN: In SCAN algorithm the disk arm moves into a particular direction and services the requests coming in its path and after reaching the end of the disk, it reverses its direction and again services the request arriving in its path. So, this algorithm works like an elevator and hence also known as elevator algorithm.CSCAN: In SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing its direction. 
So, it may be possible that too many requests are waiting at the other end or there may be zero or few requests pending at the scanned area.LOOK: It is similar to the SCAN disk scheduling algorithm except for the difference that the disk arm in spite of going to the end of the disk goes only to the last request to be serviced in front of the head and then reverses its direction from there only. Thus it prevents the extra delay which occurred due to unnecessary traversal to the end of the disk.CLOOK: As LOOK is similar to SCAN algorithm, in a similar way, CLOOK is similar to CSCAN disk scheduling algorithm. In CLOOK, the disk arm in spite of going to the end goes only to the last request to be serviced in front of the head and then from there goes to the other end’s last request. Thus, it also prevents the extra delay which occurred due to unnecessary traversal to the end of the disk. My Personal Notes\narrow_drop_upSave"
},
{
"code": null,
"e": 58759,
"s": 58237,
"text": "Let say the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 . Initially, we have 4-page slots empty. Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 Page faults. 0 is already their so —> 0 Page fault. When 3 came it will take the place of 7 because it is least recently used —> 1 Page fault. 0 is already in memory so —> 0 Page fault. 4 will takes place of 1 —> 1 Page Fault. Now for the further page reference string —> 0 Page fault because they are already available in the memory."
},
{
"code": null,
"e": 58915,
"s": 58762,
"text": "File System: A file is a collection of related information that is recorded on secondary storage. Or file is a collection of logically related entities."
},
{
"code": null,
"e": 59177,
"s": 58915,
"text": "File Directories: Collection of files is a file directory. The directory contains information about the files, including attributes, location and ownership. Much of this information, especially that is concerned with storage, is managed by the operating system."
},
{
"code": null,
"e": 59491,
"s": 59177,
"text": "SINGLE-LEVEL DIRECTORY: In this a single directory is maintained for all the usersTWO-LEVEL DIRECTORY: Due to two levels there is a path name for every file to locate that file.TREE-STRUCTURED DIRECTORY : Directory is maintained in the form of a tree. Searching is efficient and also there is grouping capability."
},
{
"code": null,
"e": 59574,
"s": 59491,
"text": "SINGLE-LEVEL DIRECTORY: In this a single directory is maintained for all the users"
},
{
"code": null,
"e": 59670,
"s": 59574,
"text": "TWO-LEVEL DIRECTORY: Due to two levels there is a path name for every file to locate that file."
},
{
"code": null,
"e": 59807,
"s": 59670,
"text": "TREE-STRUCTURED DIRECTORY : Directory is maintained in the form of a tree. Searching is efficient and also there is grouping capability."
},
{
"code": null,
"e": 59834,
"s": 59809,
"text": "File Allocation Methods:"
},
{
"code": null,
"e": 60277,
"s": 59834,
"text": "Continuous Allocation: A single continuous set of blocks is allocated to a file at the time of file creation.Linked Allocation(Non-contiguous allocation): Allocation is on an individual block basis. Each block contains a pointer to the next block in the chain.Indexed Allocation : It addresses many of the problems of contiguous and chained allocation. In this case, the file allocation table contains a separate one-level index for each file"
},
{
"code": null,
"e": 60387,
"s": 60277,
"text": "Continuous Allocation: A single continuous set of blocks is allocated to a file at the time of file creation."
},
{
"code": null,
"e": 60539,
"s": 60387,
"text": "Linked Allocation(Non-contiguous allocation): Allocation is on an individual block basis. Each block contains a pointer to the next block in the chain."
},
{
"code": null,
"e": 60722,
"s": 60539,
"text": "Indexed Allocation : It addresses many of the problems of contiguous and chained allocation. In this case, the file allocation table contains a separate one-level index for each file"
},
{
"code": null,
"e": 60878,
"s": 60724,
"text": "Disk Scheduling:Disk scheduling is done by operating systems to schedule I/O requests arriving for disk. Disk scheduling is also known as I/O scheduling."
},
{
"code": null,
"e": 61543,
"s": 60878,
"text": "Seek Time: Seek time is the time taken to locate the disk arm to a specified track where the data is to be read or write.Rotational Latency: Rotational Latency is the time taken by the desired sector of disk to rotate into a position so that it can access the read/write heads.Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and number of bytes to be transferred.Disk Access Time: Seek Time + Rotational Latency + Transfer TimeDisk Response Time: Response Time is the average of time spent by a request waiting to perform its I/O operation. Average Response time is the response time of the all requests."
},
{
"code": null,
"e": 61665,
"s": 61543,
"text": "Seek Time: Seek time is the time taken to locate the disk arm to a specified track where the data is to be read or write."
},
{
"code": null,
"e": 61822,
"s": 61665,
"text": "Rotational Latency: Rotational Latency is the time taken by the desired sector of disk to rotate into a position so that it can access the read/write heads."
},
{
"code": null,
"e": 61969,
"s": 61822,
"text": "Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and number of bytes to be transferred."
},
{
"code": null,
"e": 62034,
"s": 61969,
"text": "Disk Access Time: Seek Time + Rotational Latency + Transfer Time"
},
{
"code": null,
"e": 62212,
"s": 62034,
"text": "Disk Response Time: Response Time is the average of time spent by a request waiting to perform its I/O operation. Average Response time is the response time of the all requests."
},
{
"code": null,
"e": 62242,
"s": 62214,
"text": "Disk Scheduling Algorithms:"
},
{
"code": null,
"e": 64022,
"s": 62242,
"text": "FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue.SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are executed first. So, the seek time of every request is calculated in advance in a queue and then they are scheduled according to their calculated seek time. As a result, the request near the disk arm will get executed first.SCAN: In SCAN algorithm the disk arm moves into a particular direction and services the requests coming in its path and after reaching the end of the disk, it reverses its direction and again services the request arriving in its path. So, this algorithm works like an elevator and hence also known as elevator algorithm.CSCAN: In SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing its direction. So, it may be possible that too many requests are waiting at the other end or there may be zero or few requests pending at the scanned area.LOOK: It is similar to the SCAN disk scheduling algorithm except for the difference that the disk arm in spite of going to the end of the disk goes only to the last request to be serviced in front of the head and then reverses its direction from there only. Thus it prevents the extra delay which occurred due to unnecessary traversal to the end of the disk.CLOOK: As LOOK is similar to SCAN algorithm, in a similar way, CLOOK is similar to CSCAN disk scheduling algorithm. In CLOOK, the disk arm in spite of going to the end goes only to the last request to be serviced in front of the head and then from there goes to the other end’s last request. Thus, it also prevents the extra delay which occurred due to unnecessary traversal to the end of the disk."
},
{
"code": null,
"e": 64168,
"s": 64022,
"text": "FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed in the order they arrive in the disk queue."
},
{
"code": null,
"e": 64473,
"s": 64168,
"text": "SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are executed first. So, the seek time of every request is calculated in advance in a queue and then they are scheduled according to their calculated seek time. As a result, the request near the disk arm will get executed first."
},
{
"code": null,
"e": 64794,
"s": 64473,
"text": "SCAN: In SCAN algorithm the disk arm moves into a particular direction and services the requests coming in its path and after reaching the end of the disk, it reverses its direction and again services the request arriving in its path. So, this algorithm works like an elevator and hence also known as elevator algorithm."
},
{
"code": null,
"e": 65049,
"s": 64794,
"text": "CSCAN: In SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing its direction. So, it may be possible that too many requests are waiting at the other end or there may be zero or few requests pending at the scanned area."
},
{
"code": null,
"e": 65408,
"s": 65049,
"text": "LOOK: It is similar to the SCAN disk scheduling algorithm except for the difference that the disk arm in spite of going to the end of the disk goes only to the last request to be serviced in front of the head and then reverses its direction from there only. Thus it prevents the extra delay which occurred due to unnecessary traversal to the end of the disk."
},
{
"code": null,
"e": 65807,
"s": 65408,
"text": "CLOOK: As LOOK is similar to SCAN algorithm, in a similar way, CLOOK is similar to CSCAN disk scheduling algorithm. In CLOOK, the disk arm in spite of going to the end goes only to the last request to be serviced in front of the head and then from there goes to the other end’s last request. Thus, it also prevents the extra delay which occurred due to unnecessary traversal to the end of the disk."
}
]
|
Building a Map from 2 arrays of values and keys in JavaScript | Suppose, we have two arrays −
const keys = [0, 4, 2, 3, 1];
const values = ["first", "second", "third", "fourth", "fifth"];
We are required to write a JavaScript function that takes in the keys and the values array and maps the values to the corresponding keys. The output should be −
const map = {
0 => 'first',
4 => 'second',
2 => 'third',
3 => 'fourth',
1 => 'fifth'
};
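If both arrays are guaranteed to have the same length, the same Map can also be built in a single expression by zipping the two arrays into key/value pairs and passing them straight to the Map constructor. This is only an illustrative alternative (buildMapConcise is not part of the required solution) −
const buildMapConcise = (keys, values) =>
   new Map(keys.map((key, i) => [key, values[i]]));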
Following is the code −
const keys = [0, 4, 2, 3, 1];
const values = ["first", "second", "third", "fourth", "fifth"];
const buildMap = (keys, values) => {
const map = new Map();
for(let i = 0; i < keys.length; i++){
map.set(keys[i], values[i]);
};
return map;
};
console.log(buildMap(keys, values));
This will produce the following output in console −
Map(5) {
0 => 'first',
4 => 'second',
2 => 'third',
3 => 'fourth',
1 => 'fifth'
} | [
{
"code": null,
"e": 1092,
"s": 1062,
"text": "Suppose, we have two arrays −"
},
{
"code": null,
"e": 1186,
"s": 1092,
"text": "const keys = [0, 4, 2, 3, 1];\nconst values = [\"first\", \"second\", \"third\", \"fourth\", \"fifth\"];"
},
{
"code": null,
"e": 1347,
"s": 1186,
"text": "We are required to write a JavaScript function that takes in the keys and the values array and maps the values to the corresponding keys. The output should be −"
},
{
"code": null,
"e": 1450,
"s": 1347,
"text": "const map = {\n 0 => 'first',\n 4 => 'second',\n 2 => 'third',\n 3 => 'fourth',\n 1 => 'fifth'\n};"
},
{
"code": null,
"e": 1474,
"s": 1450,
"text": "Following is the code −"
},
{
"code": null,
"e": 1768,
"s": 1474,
"text": "const keys = [0, 4, 2, 3, 1];\nconst values = [\"first\", \"second\", \"third\", \"fourth\", \"fifth\"];\nconst buildMap = (keys, values) => {\n const map = new Map();\n for(let i = 0; i < keys.length; i++){\n map.set(keys[i], values[i]);\n };\n return map;\n};\nconsole.log(buildMap(keys, values));"
},
{
"code": null,
"e": 1820,
"s": 1768,
"text": "This will produce the following output in console −"
},
{
"code": null,
"e": 1917,
"s": 1820,
"text": "Map(5) {\n 0 => 'first',\n 4 => 'second',\n 2 => 'third',\n 3 => 'fourth',\n 1 => 'fifth'\n}"
}
]
|
Reverse a Doubly Linked List using C++ | In this article we have a doubly linked list, and we will explain different approaches to reverse a doubly linked list in C++. For example −
Input : {1, 2, 3, 4}
Output : {4, 3, 2, 1}
There is generally one approach that comes to mind, but we will use two approaches − the normal and the unorthodox approach.
In this approach, we will go through the list, and as we go through it, we reverse it.
#include <bits/stdc++.h>
using namespace std;
class Node {
public:
int data;
Node *next;
Node *prev;
};
void reverse(Node **head_ref) {
auto temp = (*head_ref) -> next;
(*head_ref) -> next = (*head_ref) -> prev;
(*head_ref) -> prev = temp;
if(temp != NULL) {
(*head_ref) = (*head_ref) -> prev;
reverse(head_ref);
}
else
return;
}
void push(Node** head_ref, int new_data) {
Node* new_node = new Node();
new_node->data = new_data;
new_node->prev = NULL;
new_node->next = (*head_ref);
if((*head_ref) != NULL)
(*head_ref) -> prev = new_node ;
(*head_ref) = new_node;
}
int main() {
Node* head = NULL;
push(&head, 6);
push(&head, 4);
push(&head, 8);
push(&head, 9);
auto node = head;
cout << "Before\n" ;
while(node != NULL) {
cout << node->data << " ";
node = node->next;
}
cout << "\n";
reverse(&head);
node = head;
cout << "After\n";
while(node != NULL) {
cout << node->data << " ";
node = node->next;
}
return 0;
}
Before
9 8 4 6
After
6 4 8 9
This approach takes O(N) time complexity which is very good as this complexity can perform in higher constraints.
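For reference, the same pointer-swapping idea can also be written iteratively, which avoids deep recursion on very long lists. The sketch below assumes the same Node class as above, and reverseIterative is just an illustrative name, not part of the original program −
void reverseIterative(Node **head_ref) {
   Node *node = *head_ref;
   Node *last = NULL;
   while(node != NULL) {
      // swap the prev and next pointers of the current node
      Node *temp = node->next;
      node->next = node->prev;
      node->prev = temp;
      // remember the node just processed, then advance via the old next pointer
      last = node;
      node = temp;
   }
   if(last != NULL)
      *head_ref = last; // the old tail becomes the new head
}
This also runs in O(N) time, with O(1) extra space.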
As the name suggests, it's not a very common approach that comes to the user's mind, but we will explore this approach as well. In this approach, we will make a stack and keep pushing data into it, and while popping, we are going to change its values.
#include <bits/stdc++.h>
using namespace std;
class Node {
public:
int data;
Node *next;
Node *prev;
};
void push(Node** head_ref, int new_data) {
Node* new_node = new Node();
new_node->data = new_data;
new_node->prev = NULL;
new_node->next = (*head_ref);
if((*head_ref) != NULL)
(*head_ref) -> prev = new_node ;
(*head_ref) = new_node;
}
int main() {
Node* head = NULL;
push(&head, 6);
push(&head, 4);
push(&head, 8);
push(&head, 9);
auto node = head;
cout >> "Before\n" ;
while(node != NULL) {
cout >> node->data >> " ";
node = node->next;
}
cout >> "\n";
stack<Node*> s;
node = head;
while(node) {
head = node;
s.push(node);
node = node -> next;
}
while(!s.empty()) {
auto x = s.top();
auto temp = x -> prev;
x -> prev = x -> next;
x -> next = temp;
s.pop();
}
node = head;
cout << "After\n";
while(node != NULL) {
cout << node->data << " ";
node = node->next;
}
return 0;
}
Before
9 8 4 6
After
6 4 8 9
In this approach we are using a stack that we are filling while traversing through the list and then we are popping items out of the stack and changing their values such that the list is reversed. O(N) is the time complexity of this program and it is suitable for higher constraints too.
In this article, we solved the problem of reversing a doubly linked list, with or without a stack, in O(N) time complexity where N is the size of our list. We also learned a C++ program for this problem and the complete approach (normal and unorthodox) by which we solved it. We can write the same program in other languages such as C, Java, Python and others. We hope you find this article helpful. | [
{
"code": null,
"e": 1203,
"s": 1062,
"text": "In this article we have a doubly linked list, and we will explain different approaches to reverse a doubly linked list in C++. For example −"
},
{
"code": null,
"e": 1246,
"s": 1203,
"text": "Input : {1, 2, 3, 4}\nOutput : {4, 3, 2, 1}"
},
{
"code": null,
"e": 1367,
"s": 1246,
"text": "There is generally one approach that comes to mind, but we will use two approaches − The normal and unorthodox approach."
},
{
"code": null,
"e": 1454,
"s": 1367,
"text": "In this approach, we will go through the list, and as we go through it, we reverse it."
},
{
"code": null,
"e": 2515,
"s": 1454,
"text": "#include <bits/stdc++.h>\n\nusing namespace std;\n\nclass Node {\n public:\n int data;\n Node *next;\n Node *prev;\n};\n\nvoid reverse(Node **head_ref) {\n auto temp = (*head_ref) -> next;\n (*head_ref) -> next = (*head_ref) -> prev;\n (*head_ref) -> prev = temp;\n if(temp != NULL) {\n (*head_ref) = (*head_ref) -> prev;\n reverse(head_ref);\n }\n else\n return;\n}\nvoid push(Node** head_ref, int new_data) {\n Node* new_node = new Node();\n new_node->data = new_data;\n\n new_node->prev = NULL;\n\n new_node->next = (*head_ref);\n if((*head_ref) != NULL)\n (*head_ref) -> prev = new_node ;\n\n (*head_ref) = new_node;\n}\nint main() {\n Node* head = NULL;\n push(&head, 6);\n push(&head, 4);\n push(&head, 8);\n push(&head, 9);\n auto node = head;\n cout << \"Before\\n\" ;\n while(node != NULL) {\n cout << node->data << \" \";\n node = node->next;\n }\n cout << \"\\n\";\n reverse(&head);\n node = head;\n cout << \"After\\n\";\n while(node != NULL) {\n cout << node->data << \" \";\n node = node->next;\n }\n return 0;\n}"
},
{
"code": null,
"e": 2544,
"s": 2515,
"text": "Before\n9 8 4 6\nAfter\n6 4 8 9"
},
{
"code": null,
"e": 2658,
"s": 2544,
"text": "This approach takes O(N) time complexity which is very good as this complexity can perform in higher constraints."
},
{
"code": null,
"e": 2907,
"s": 2658,
"text": "As the name suggests, it's not a very common approach that comes to the user's mind, but we will explore this approach as well.In this approach, we will make a stack and keep pushing data in it, and while popping, we are going to change its values."
},
{
"code": null,
"e": 3958,
"s": 2907,
"text": "#include <bits/stdc++.h>\n\nusing namespace std;\n\nclass Node {\n public:\n int data;\n Node *next;\n Node *prev;\n};\nvoid push(Node** head_ref, int new_data) {\n Node* new_node = new Node();\n new_node->data = new_data;\n\n new_node->prev = NULL;\n\n new_node->next = (*head_ref);\n if((*head_ref) != NULL)\n (*head_ref) -> prev = new_node ;\n\n (*head_ref) = new_node;\n}\nint main() {\n Node* head = NULL;\n push(&head, 6);\n push(&head, 4);\n push(&head, 8);\n push(&head, 9);\n auto node = head;\n cout >> \"Before\\n\" ;\n while(node != NULL) {\n cout >> node->data >> \" \";\n node = node->next;\n }\n cout >> \"\\n\";\n stack<Node*> s;\n node = head;\n while(node) {\n head = node;\n s.push(node);\n node = node -> next;\n }\n while(!s.empty()) {\n auto x = s.top();\n auto temp = x -> prev;\n x -> prev = x -> next;\n x -> next = temp;\n s.pop();\n }\n node = head;\n cout << \"After\\n\";\n while(node != NULL) {\n cout << node->data << \" \";\n node = node->next;\n }\n return 0;\n}"
},
{
"code": null,
"e": 3987,
"s": 3958,
"text": "Before\n9 8 4 6\nAfter\n6 4 8 9"
},
{
"code": null,
"e": 4275,
"s": 3987,
"text": "In this approach we are using a stack that we are filling while traversing through the list and then we are popping items out of the stack and changing their values such that the list is reversed. O(N) is the time complexity of this program and it is suitable for higher constraints too."
},
{
"code": null,
"e": 4684,
"s": 4275,
"text": "In this article we solve a problem to reverse a doubly linked list with or without stack. In O(N) time complexity where N is the size of our list.We also learned C++ program for this problem and the complete approach ( Normal and Unorthodox ) by which we solved this problem. We can write the same program in other languages such as C, java, python and other languages. We hope you find this article helpful."
}
]
|
Basic Statistics for Time Series Analysis in Python | by Marco Peixeiro | Towards Data Science | A time series is simply a set of data points ordered in time, where time is usually the independent variable.
Now, forecasting the future is not the only purpose of time series analysis. It is also relevant to asses important properties, such as stationarity, seasonality or autocorrelation.
Before moving on to more advanced modelling practices, we must master the basics first.
In this article, we will introduce the building blocks of time series analysis by introducing descriptive and inferential statistics. These concepts will serve later on when we implement complex models on time series, as statistical significance is necessary to build a robust analysis.
All code examples will be in Python and you can grab the notebook to follow along.
Let’s get started!
Use Python and TensorFlow to apply more complex models for time series analysis with the Applied Time Series Analysis in Python course!
Descriptive statistics are a set of values and coefficients that summarize a dataset. They provide information about central tendency and variability.
Values such as the mean, median, standard deviation, minimum and maximum are usually the ones we are looking for.
So, let’s see how we can obtain those values with Python.
First, we will import all of the required libraries. Not all of them are used for descriptive statistics, but we will use them later on.
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import seaborn as sns
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
Now, we will explore the dataset shampoo.csv. This dataset traces the historical volume of sales of shampoo in a certain period of time.
In order to see the entire dataset, we can execute the following Python code:
data = pd.read_csv('shampoo.csv')
data
Be careful, as this will show the entire dataset. In this case, there are only 36 instances, but for larger datasets, this is not very practical.
Instead, we should use the following piece of code:
data.head()
The line of code above will show the first five entries of the dataset. You can decide to display more by specifying how many entries you would like to see.
data.head(10)
The line above will show the first 10 entries of a dataset.
Now, there is a very simple way to obtain the mean, median, standard deviation, and other information about the central tendency of the dataset.
Simply run the line of code below:
data.describe()
And you should see the following information for the shampoo dataset:
As you can see, with this simple method, we have information about the size of the dataset, its mean and standard deviation, minimum and maximum values, as well as information about its quartiles.
Numbers are a good starting point, but being able to visualize a time series can give you quick insights which will help you steer your analysis in the right direction.
Histograms and scatter plots are the most widely used visualizations when it comes to time series.
A simple histogram of our dataset can be displayed with:
data.hist()
However, we can do much better. Let’s plot a better histogram and add labels to the axes.
plt.figure(figsize=[10, 7.5]); # Set dimensions for figure
plt.hist(data['Sales'])
plt.title('Histogram of Shampoo Sales');
plt.xlabel('Shampoo Sales ($M)');
plt.ylabel('Frequency');
The histogram above is much better. There are numerous parameters you can change to customize the visualization to your need. For example, you can change the color and the number of bins in your histogram.
plt.figure(figsize=[10, 7.5]); # Set dimensions for figure
plt.hist(data['Sales'], bins=20, color='#fcba03')
plt.title('Histogram of Shampoo Sales');
plt.xlabel('Shampoo Sales ($M)');
plt.ylabel('Frequency');
You should now be very comfortable with plotting a histogram and customizing it to your needs.
Last but not least is knowing how to display a scatter plot. Very simply, we can visualize our dataset like so:
plt.figure(figsize=[20, 7.5]); # Set dimensions for figure
sns.scatterplot(x=data['Month'], y=data['Sales']);
plt.title('Historical Shampoo Sales');
plt.ylabel('Shampoo Sales ($M)');
plt.xlabel('Month');
If you wish to learn more about plotting with the libraries we used and see how different parameters can change the plots, make sure to consult the documentation of matplotlib or seaborn.
As the name implies, inferential statistics is the use of analysis to infer properties from a dataset.
Usually, we are looking to find a trend in our dataset that will allow us to make predictions. This is also the occasion for us to test different hypotheses.
For introductory purposes, we will use a simple linear regression to illustrate and explain inferential statistics in the context of time series.
For this section, we will use another dataset that retraces the historical concentration of CO2 in the atmosphere. Since the dataset spans 2014 years of history, let’s just consider data from 1950 onward.
Let’s also apply what we learned before and display a scatter plot of our data.
# Dataset from here: https://www.co2.earth/historical-co2-datasets
co2_dataset = pd.read_csv('co2_dataset.csv')
plt.figure(figsize=[20, 7.5]); # Set dimensions for figure
# Let's only consider the data from the year 1950
X = co2_dataset['year'].values[1950:]
y = co2_dataset['data_mean_global'].values[1950:]
sns.scatterplot(x=X, y=y);
plt.title('Historical Global CO2 Concentration in the Atmosphere');
plt.ylabel('CO2 Concentration (ppm)');
plt.xlabel('Year');
As you can see, it seems that the concentration is increasing over time.
Although the trend does not seem to be perfectly linear, a linear fit can probably still explain part of the variability of the data. Therefore, let’s make the following assumption:
The CO2 concentration depends on time in a linear way, with some errors
Mathematically, this is expressed as:
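CO2(t) = β0 + β1 · t + ε(t)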
And you should easily recognize this as a linear equation with a constant term, a slope, and an error term.
It is important to note that when doing a simple linear regression, the following assumptions are made:
the errors are normally distributed, and on average 0
the errors have the same variance (homoscedastic)
the errors are unrelated to each other
However, none of these assumptions are technically used when performing a simple linear regression. We are not generating normal distributions of the error term to estimate the parameters of our linear equation.
Instead, the ordinary least squares (OLS) is used to estimate the parameters. This is simply trying to find the minimum value of the sum of the squared error:
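SSE = Σ (CO2(t) − β0 − β1 · t)²
where the sum runs over all observations, and the estimates of β0 and β1 are the values that minimize SSE.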
Let’s now fit a linear model to our data and see the estimated parameters for our model:
X = co2_dataset['year'].values[1950:].reshape(-1, 1)
y = co2_dataset['data_mean_global'].values[1950:].reshape(-1, 1)
reg = LinearRegression()
reg.fit(X, y)
print(f"The slope is {reg.coef_[0][0]} and the intercept is {reg.intercept_[0]}")
predictions = reg.predict(X.reshape(-1, 1))
plt.figure(figsize=(20, 8))
plt.scatter(X, y, c='black')
plt.plot(X, predictions, c='blue', linewidth=2)
plt.title('Historical Global CO2 Concentration in the Atmosphere');
plt.ylabel('CO2 Concentration (ppm)');
plt.xlabel('Year');
plt.show()
Running the piece of code above, you should see the same plot, and you will find that the slope is 1.3589 and the intercept is -2348.
Do the parameters make sense?
The slope is indeed positive, which is normal since concentration is undoubtedly increasing.
However, the intercept is negative. Does that mean that at time 0, the concentration of CO2 is negative?
No. Our model is definitely not robust enough to go back 1950 years in time. But these parameters are the ones that minimize the sum of squared errors, hence producing the best linear fit.
From the graph, we can visually tell that a straight line is not the best fit to our data, but it is not the worst either.
Recall the assumption of a linear model that the errors are normally distributed. We can check this assumption by plotting a QQ-plot of the residuals.
A QQ-plot is a scatter plot of quantiles from two different distributions. If the distributions are the same, then we should see a straight line.
Therefore, if we plot a QQ-plot of our residuals against a normal distribution, we can see if they fall on a straight line; meaning that our residuals are indeed normally distributed.
X = sm.add_constant(co2_dataset['year'].values[1950:])
model = sm.OLS(co2_dataset['data_mean_global'].values[1950:], X).fit()
residuals = model.resid
qq_plot = sm.qqplot(residuals, line='q')
plt.show()
As you can see, the blue dots represent the residuals, and they do not fall on a straight line. Therefore, they are not normally distributed, and this is an indicator that a linear model is not the best fit to our data.
This can be further supported by plotting a histogram of the residuals:
X = sm.add_constant(co2_dataset['year'].values[1950:])
model = sm.OLS(co2_dataset['data_mean_global'].values[1950:], X).fit()
residuals = model.resid
plt.hist(residuals);
Again, we can clearly see that it is not a normal distribution.
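As a numerical complement to these visual checks, a formal normality test can be run on the residuals computed in the previous snippet. The sketch below uses SciPy's Shapiro-Wilk test, which is not part of this article's original toolkit and is shown only as an optional extra check:
from scipy import stats

# Shapiro-Wilk tests the null hypothesis that the data is normally distributed
stat, p_value = stats.shapiro(residuals)
print(f"Shapiro-Wilk statistic: {stat:.3f}, p-value: {p_value:.4f}")
# A p-value below 0.05 rejects normality of the residuals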
A major component of inferential statistics is hypothesis testing. This is a way to determine if the observed trend is due to randomness, or if there is a real statistical significance.
For hypothesis testing, we must define a hypothesis and a null hypothesis. The hypothesis is usually the trend we are trying to extract from data, while the null hypothesis is its exact opposite.
Let’s define the hypotheses for our case:
hypothesis: there is a linear correlation between time and CO2 concentration
null hypothesis: there is no linear correlation between time and CO2 concentration
Awesome! Now, let’s fit a linear model to our dataset using another library that will automatically run hypothesis tests for us:
X = sm.add_constant(co2_dataset['year'].values[1950:])
model = sm.OLS(co2_dataset['data_mean_global'].values[1950:], X).fit()
print(model.summary())
Now, there is a lot of information here, but let’s consider only a few numbers.
First, we have a very high R² value of 0.971. This means that more than 97% of the variability in CO2 concentration is explained by the time variable.
Then, the F-statistic is very large as well: 2073. This means that there is statistically significant evidence of a linear correlation between time and CO2 concentration.
Finally, looking at the p-value of the slope coefficient, you notice that it is 0. While the number is probably not exactly 0, it is still very small, which is another indicator of statistically significant evidence that a linear correlation exists.
Usually, a threshold of 0.05 is used for the p-value. If less, the null hypothesis is rejected.
Therefore, because of a large F-statistic, in combination with a small p-value, we can reject the null hypothesis.
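If you prefer to extract these numbers programmatically instead of reading the summary table, the fitted results object exposes them directly; a small sketch:

# p-values of the individual coefficients (constant and slope)
print(model.pvalues)

# F-statistic and the p-value of the overall F-test
print(model.fvalue, model.f_pvalue)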
That’s it! You are now in a very good position to kickstart your time series analysis.
With these basic concepts, we will build upon them to make better models to help us forecast time series data.
Learn the latest best practices for time series analysis in Python with:
Applied Time Series Analysis in Python | [
{
"code": null,
"e": 281,
"s": 171,
"text": "A time series is simply a set of data points ordered in time, where time is usually the independent variable."
},
{
"code": null,
"e": 463,
"s": 281,
"text": "Now, forecasting the future is not the only purpose of time series analysis. It is also relevant to asses important properties, such as stationarity, seasonality or autocorrelation."
},
{
"code": null,
"e": 551,
"s": 463,
"text": "Before moving on to more advanced modelling practices, we must master the basics first."
},
{
"code": null,
"e": 838,
"s": 551,
"text": "In this article, we will introduce the building blocks of time series analysis by introducing descriptive and inferential statistics. These concepts will serve later on when we implement complex models on time series, as statistical significance is necessary to build a robust analysis."
},
{
"code": null,
"e": 921,
"s": 838,
"text": "All code examples will be in Python and you can grab the notebook to follow along."
},
{
"code": null,
"e": 940,
"s": 921,
"text": "Let’s get started!"
},
{
"code": null,
"e": 1076,
"s": 940,
"text": "Use Python and TensorFlow to apply more complex models for time series analysis with the Applied Time Series Analysis in Python course!"
},
{
"code": null,
"e": 1226,
"s": 1076,
"text": "Descriptive statistics are a set of values and coefficients that summarize a dataset. It provides information about central tendency and variability."
},
{
"code": null,
"e": 1340,
"s": 1226,
"text": "Values such as the mean, median, standard deviation, minimum and maximum are usually the ones we are looking for."
},
{
"code": null,
"e": 1398,
"s": 1340,
"text": "So, let’s see how we can obtain those values with Python."
},
{
"code": null,
"e": 1535,
"s": 1398,
"text": "First, we will import all of the required libraries. Not all of them are used for descriptive statistics, but we will use them later on."
},
{
"code": null,
"e": 1714,
"s": 1535,
"text": "import pandas as pdimport matplotlib.pyplot as pltimport matplotlib.mlab as mlabimport seaborn as snsfrom sklearn.linear_model import LinearRegressionimport statsmodels.api as sm"
},
{
"code": null,
"e": 1851,
"s": 1714,
"text": "Now, we will explore the dataset shampoo.csv. This dataset traces the historical volume of sales of shampoo in a certain period of time."
},
{
"code": null,
"e": 1929,
"s": 1851,
"text": "In order to see the entire dataset, we can execute the following Python code:"
},
{
"code": null,
"e": 1967,
"s": 1929,
"text": "data = pd.read_csv('shampoo.csv')data"
},
{
"code": null,
"e": 2113,
"s": 1967,
"text": "Be careful, as this will show the entire dataset. In this case, there are only 36 instances, but for larger datasets, this is not very practical."
},
{
"code": null,
"e": 2165,
"s": 2113,
"text": "Instead, we should use the following piece of code:"
},
{
"code": null,
"e": 2177,
"s": 2165,
"text": "data.head()"
},
{
"code": null,
"e": 2334,
"s": 2177,
"text": "The line of code above will show the first five entries of the dataset. You can decide to display more by specifying how many entries you would like to see."
},
{
"code": null,
"e": 2348,
"s": 2334,
"text": "data.head(10)"
},
{
"code": null,
"e": 2408,
"s": 2348,
"text": "The line above will show the first 10 entries of a dataset."
},
{
"code": null,
"e": 2553,
"s": 2408,
"text": "Now, there is a very simple way to obtain the mean, median, standard deviation, and other information about the central tendency of the dataset."
},
{
"code": null,
"e": 2588,
"s": 2553,
"text": "Simply run the line of code below:"
},
{
"code": null,
"e": 2604,
"s": 2588,
"text": "data.describe()"
},
{
"code": null,
"e": 2674,
"s": 2604,
"text": "And you should see the following information for the shampoo dataset:"
},
{
"code": null,
"e": 2871,
"s": 2674,
"text": "As you can see, with this simple method, we have information about the size of the dataset, its mean and standard deviation, minimum and maximum values, as well as information about its quartiles."
},
{
"code": null,
"e": 3039,
"s": 2871,
"text": "Numbers are a good starting point, but being able to visualize a time series can give you quick insights which will help you steer you analysis in the right direction."
},
{
"code": null,
"e": 3138,
"s": 3039,
"text": "Histograms and scatter plots are the most widely used visualizations when it comes to time series."
},
{
"code": null,
"e": 3195,
"s": 3138,
"text": "A simple histogram of our dataset can be displayed with:"
},
{
"code": null,
"e": 3207,
"s": 3195,
"text": "data.hist()"
},
{
"code": null,
"e": 3298,
"s": 3207,
"text": "However, we can do much better. Let’s plot a better histogram and add labels to this axes."
},
{
"code": null,
"e": 3477,
"s": 3298,
"text": "plt.figure(figsize=[10, 7.5]); # Set dimensions for figureplt.hist(data['Sales'])plt.title('Histogram of Shampoo Sales');plt.xlabel('Shampoo Sales ($M)');plt.ylabel('Frequency');"
},
{
"code": null,
"e": 3683,
"s": 3477,
"text": "The histogram above is much better. There are numerous parameters you can change to customize the visualization to your need. For example, you can change the color and the number of bins in your histogram."
},
{
"code": null,
"e": 3888,
"s": 3683,
"text": "plt.figure(figsize=[10, 7.5]); # Set dimensions for figureplt.hist(data['Sales'], bins=20, color='#fcba03')plt.title('Histogram of Shampoo Sales');plt.xlabel('Shampoo Sales ($M)');plt.ylabel('Frequency');"
},
{
"code": null,
"e": 3983,
"s": 3888,
"text": "You should now be very comfortable with plotting a histogram and customizing it to your needs."
},
{
"code": null,
"e": 4095,
"s": 3983,
"text": "Last but not least is knowing how to display a scatter plot. Very simply, we can visualize our dataset like so:"
},
{
"code": null,
"e": 4295,
"s": 4095,
"text": "plt.figure(figsize=[20, 7.5]); # Set dimensions for figuresns.scatterplot(x=data['Month'], y=data['Sales']);plt.title('Historical Shampoo Sales');plt.ylabel('Shampoo Sales ($M)');plt.xlabel('Month');"
},
{
"code": null,
"e": 4483,
"s": 4295,
"text": "If you wish to learn more about plotting with the libraries we used and see how different parameters can change the plots, make sure to consult the documentation of matplotlib or seaborn."
},
{
"code": null,
"e": 4586,
"s": 4483,
"text": "As the name implies, inferential statistics is the use of analysis to infer properties from a dataset."
},
{
"code": null,
"e": 4744,
"s": 4586,
"text": "Usually, we are looking to find a trend in our dataset that will allow us to make predictions. This is also the occasion for us to test different hypotheses."
},
{
"code": null,
"e": 4890,
"s": 4744,
"text": "For introductory purposes, we will use a simple linear regression to illustrate and explain inferential statistics in the context of time series."
},
{
"code": null,
"e": 5095,
"s": 4890,
"text": "For this section, we will another dataset that retraces the historical concentration of CO2 in the atmosphere. Since the dataset spans 2014 years of history, let’s just consider data from 1950 and onward."
},
{
"code": null,
"e": 5175,
"s": 5095,
"text": "Let’s also apply what we learned before and display a scatter plot of our data."
},
{
"code": null,
"e": 5629,
"s": 5175,
"text": "# Dataset from here: https://www.co2.earth/historical-co2-datasetsco2_dataset = pd.read_csv('co2_dataset.csv')plt.figure(figsize=[20, 7.5]); # Set dimensions for figure# Let's only consider the data from the year 1950X = co2_dataset['year'].values[1950:]y = co2_dataset['data_mean_global'].values[1950:]sns.scatterplot(x=X, y=y);plt.title('Historical Global CO2 Concentration in the Atmosphere');plt.ylabel('CO2 Concentration (ppm)');plt.xlabel('Year');"
},
{
"code": null,
"e": 5702,
"s": 5629,
"text": "As you can see, it seems that the concentration is increasing over time."
},
{
"code": null,
"e": 5864,
"s": 5702,
"text": "Although the trend does not seem to be linear, it can probably still explain part of the variability of the data. Therefore, let’s make the following assumption:"
},
{
"code": null,
"e": 5936,
"s": 5864,
"text": "The CO2 concentration depends on time in a linear way, with some errors"
},
{
"code": null,
"e": 5974,
"s": 5936,
"text": "Mathematically, this is expressed as:"
},
{
"code": null,
"e": 6082,
"s": 5974,
"text": "And you should easily recognize this as a linear equation with a constant term, a slope, and an error term."
},
{
"code": null,
"e": 6186,
"s": 6082,
"text": "It is important to note that when doing a simple linear regression, the following assumptions are made:"
},
{
"code": null,
"e": 6240,
"s": 6186,
"text": "the errors are normally distributed, and on average 0"
},
{
"code": null,
"e": 6290,
"s": 6240,
"text": "the errors have the same variance (homoscedastic)"
},
{
"code": null,
"e": 6329,
"s": 6290,
"text": "the errors are unrelated to each other"
},
{
"code": null,
"e": 6541,
"s": 6329,
"text": "However, none of these assumptions are technically used when performing a simple linear regression. We are not generating normal distributions of the error term to estimate the parameters of our linear equation."
},
{
"code": null,
"e": 6700,
"s": 6541,
"text": "Instead, the ordinary least squares (OLS) is used to estimate the parameters. This is simply trying to find the minimum value of the sum of the squared error:"
},
{
"code": null,
"e": 6789,
"s": 6700,
"text": "Let’s now fit a linear model to our data and see the estimated parameters for our model:"
},
{
"code": null,
"e": 7302,
"s": 6789,
"text": "X = co2_dataset['year'].values[1950:].reshape(-1, 1)y = co2_dataset['data_mean_global'].values[1950:].reshape(-1, 1)reg = LinearRegression()reg.fit(X, y)print(f\"The slope is {reg.coef_[0][0]} and the intercept is {reg.intercept_[0]}\")predictions = reg.predict(X.reshape(-1, 1))plt.figure(figsize=(20, 8))plt.scatter(X, y,c='black')plt.plot(X, predictions, c='blue', linewidth=2)plt.title('Historical Global CO2 Concentration in the Atmosphere');plt.ylabel('CO2 Concentration (ppm)');plt.xlabel('Year');plt.show()"
},
{
"code": null,
"e": 7425,
"s": 7302,
"text": "Running the piece of code above, you should the same plot and you see that the slope is 1.3589 and the intercept is -2348."
},
{
"code": null,
"e": 7455,
"s": 7425,
"text": "Do the parameters make sense?"
},
{
"code": null,
"e": 7548,
"s": 7455,
"text": "The slope is indeed positive, which is normal since concentration is undoubtedly increasing."
},
{
"code": null,
"e": 7652,
"s": 7548,
"text": "However, the intercept is negative. Does that mean that a time 0, the concentration of CO2 is negative?"
},
{
"code": null,
"e": 7841,
"s": 7652,
"text": "No. Our model is definitely not robust enough to go back 1950 years in time. But these parameters are the ones that minimize the sum of squared errors, hence producing the best linear fit."
},
{
"code": null,
"e": 7962,
"s": 7841,
"text": "From the graph, we can visually say the a straight line is not the best fit to our data, but it is not the worst either."
},
{
"code": null,
"e": 8113,
"s": 7962,
"text": "Recall the assumption of a linear model that the errors are normally distributed. We can check this assumption by plotting a QQ-plot of the residuals."
},
{
"code": null,
"e": 8259,
"s": 8113,
"text": "A QQ-plot is a scatter plot of quantiles from two different distributions. If the distributions are the same, then we should see a straight line."
},
{
"code": null,
"e": 8443,
"s": 8259,
"text": "Therefore, if we plot a QQ-plot of our residuals against a normal distribution, we can see if they fall on a straight line; meaning that our residuals are indeed normally distributed."
},
{
"code": null,
"e": 8641,
"s": 8443,
"text": "X = sm.add_constant(co2_dataset['year'].values[1950:])model = sm.OLS(co2_dataset['data_mean_global'].values[1950:], X).fit()residuals = model.residqq_plot = sm.qqplot(residuals, line='q')plt.show()"
},
{
"code": null,
"e": 8861,
"s": 8641,
"text": "As you can see, the blue dots represent the residuals, and they do not fall on a straight line. Therefore, they are not normally distributed, and this is an indicator that a linear model is not the best fit to our data."
},
{
"code": null,
"e": 8933,
"s": 8861,
"text": "This can be further supported by plotting a histogram of the residuals:"
},
{
"code": null,
"e": 9101,
"s": 8933,
"text": "X = sm.add_constant(co2_dataset['year'].values[1950:])model = sm.OLS(co2_dataset['data_mean_global'].values[1950:], X).fit()residuals = model.residplt.hist(residuals);"
},
{
"code": null,
"e": 9165,
"s": 9101,
"text": "Again, we can clearly see that it is not a normal distribution."
},
{
"code": null,
"e": 9351,
"s": 9165,
"text": "A major component of inferential statistics is hypothesis testing. This is a way to determine if the observed trend is due to randomness, or if there is a real statistical significance."
},
{
"code": null,
"e": 9547,
"s": 9351,
"text": "For hypothesis testing, we must define a hypothesis and a null hypothesis. The hypothesis is usually the trend we are trying to extract from data, while the null hypothesis is its exact opposite."
},
{
"code": null,
"e": 9589,
"s": 9547,
"text": "Let’s define the hypotheses for our case:"
},
{
"code": null,
"e": 9666,
"s": 9589,
"text": "hypothesis: there is a linear correlation between time and CO2 concentration"
},
{
"code": null,
"e": 9749,
"s": 9666,
"text": "null hypothesis: there is no linear correlation between time and CO2 concentration"
},
{
"code": null,
"e": 9878,
"s": 9749,
"text": "Awesome! Now, let’s fit a linear model to our dataset using another library that will automatically run hypothesis tests for us:"
},
{
"code": null,
"e": 10025,
"s": 9878,
"text": "X = sm.add_constant(co2_dataset['year'].values[1950:])model = sm.OLS(co2_dataset['data_mean_global'].values[1950:], X).fit()print(model.summary())"
},
{
"code": null,
"e": 10105,
"s": 10025,
"text": "Now, there is a lot of information here, but let’s consider only a few numbers."
},
{
"code": null,
"e": 10258,
"s": 10105,
"text": "First, we have a very high R2 value of 0.971. This means that more than 97% of the variability in CO2 concentration is explained with the time variable."
},
{
"code": null,
"e": 10428,
"s": 10258,
"text": "Then, the F-statistic is very large as well: 2073. This means that there is statistical significance that a linear correlation exists between time and CO2 concentration."
},
{
"code": null,
"e": 10655,
"s": 10428,
"text": "Finally, looking at the p-value of the slope coefficient, you notice that it is 0. While the number is probably not 0, but still very small, it is another indicator of statistical significance that a linear correlation exists."
},
{
"code": null,
"e": 10751,
"s": 10655,
"text": "Usually, a threshold of 0.05 is used for the p-value. If less, the null hypothesis is rejected."
},
{
"code": null,
"e": 10866,
"s": 10751,
"text": "Therefore, because of a large F-statistic, in combination with a small p-value, we can reject the null hypothesis."
},
{
"code": null,
"e": 10953,
"s": 10866,
"text": "That’s it! You are now in a very good position to kickstart your time series analysis."
},
{
"code": null,
"e": 11064,
"s": 10953,
"text": "With these basic concepts, we will build upon them to make better models to help us forecast time series data."
},
{
"code": null,
"e": 11137,
"s": 11064,
"text": "Learn the latest best practices for time series analysis in Python with:"
}
]
|
How to load external HTML into a <div> using jQuery? | To load external HTML into a <div> in jQuery, use the load() method. Firstly, create the web page you want to load.
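Before looking at the full example below, note that load() also accepts an optional completion callback, which is useful for detecting a failed request. A minimal sketch (the selector and file name mirror the example that follows):

$('#content').load("new.html", function(response, status, xhr) {
   if (status == "error") {
      $('#content').text("Could not load the page: " + xhr.status);
   }
});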
Here’s the code for new.html −
<html>
<head>
</head>
<body>
<p>This is demo text.<p>
</body>
</html>
The following is the code snippet for the file which adds the above page −
<!DOCTYPE html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
$('#content').load("new.html");
});
</script>
</head>
<body>
<div id="content"></div>
</body>
</html> | [
{
"code": null,
"e": 1240,
"s": 1062,
"text": "To load external HTML into a <div>, wrap your code inside the load() function. To load a page in div in jQuery, use the load() method. Firstly, add the web page you want to add."
},
{
"code": null,
"e": 1271,
"s": 1240,
"text": "Here’s the code for new.html −"
},
{
"code": null,
"e": 1345,
"s": 1271,
"text": "<html>\n<head> \n</head>\n<body>\n<p>This is demo text.<p>\n</body>\n</html>"
},
{
"code": null,
"e": 1420,
"s": 1345,
"text": "The following is the code snippet for the file which adds the above page,"
},
{
"code": null,
"e": 1690,
"s": 1420,
"text": "<!DOCTYPE html>\n<html>\n<head>\n<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js\"></script>\n<script>\n$(document).ready(function(){\n \n $('#content').load(\"new.html\");\n\n});\n</script>\n</head>\n<body>\n\n<div id=\"content\"></div>\n\n</body>\n</html>"
}
]
|
How to Install easily Spark for Python | by Papa Moryba Kouate | Towards Data Science | Introduction
When we work with Big Data, we need more computational power, which we can get with a distributed system of multiple computers. Moreover, to work effectively in the big data ecosystem, we also need a cluster computing framework that permits us to perform processing tasks quickly on large data sets.
The two most famous cluster computing frameworks are Hadoop and Spark, which are available for free as open-source.
If we want to compare Apache Spark with Hadoop, we can say that Spark is 100 times faster in memory and 10 times faster on disk.
For on-prem installations, Hadoop requires more memory on disk and Spark requires more RAM, which means that setting up a cluster can be very expensive.
Today we can solve that problem with the services of cloud computing provided by AWS and Azure. If you are interested in knowing something more about them, in particular a topic like the Cloud Data Warehouse, let me suggest to you my article that you can find here.
Instead, in this article, I will show you how to install the Spark Python API, called Pyspark. Installing Pyspark on Windows 10 requires several different steps, and it is easy to forget some of them. So, with this article, I hope to give you a useful guide to installing Pyspark with no problems.
Part I: Check your Java version and download Apache Spark
I assume that you have at least Python 3.7 on your PC.
So, to run Spark, the first thing we need to install is Java. It is recommended to have Java 8 (also known as Java 1.8).
So, open your Command Prompt and control the version of your Java with the command that you can see below. If you have an old version, you can download it here.
java -version
out:
Now, we have to download Spark, which you can easily find here. The following frame shows the steps that you will see on the site.
Download Apache Spark1. Choose a Spark release: 3.0.0(Jun 18 2020)--selected2. Choose a package type: Pre-built for Apache Hadoop 2.7 --selected3. Download Spark: spark-3.0.0-bin-hadoop2.7.tgz
Above you can observe how I set my version of Spark. I selected the recent Spark release and the Pre-built package for Apache Hadoop 2.7.
Then, in the third step, we can download our Spark version by clicking on the link, which will open another web page where you have to click on the first suggested download link.
When the download is finished, extract the compressed file into a folder called spark. Remember that the path of this folder will be used in the next part.
Part II: Download winutils.exe and Set Up your Environment
The second step is to download winutils.exe, which you can find here. In my case, I chose the version hadoop-3.0.0 and downloaded it. You can create a folder called winutils and put it there.
Now, it is time to set up our environment. The first thing to do is to go to the Windows search bar and type “edit the system environment variables”.
Go and click on the Environment Variables.
When you will be in the group of the System Variables, create three variables called HADOOP_HOME, SPARK_HOME and JAVA_HOME.
In HADOOP_HOME → put the path of the winutils folder created before.
In SPARK_HOME → put the path of the spark folder that you created before.
In JAVA_HOME → put the path of your Java installation.
In the following images you can see how you can set up your variables.
Then, in the Path variable, as you can see below, we can add the following two paths:
%SPARK_HOME%\bin
%JAVA_HOME%\bin
Part III: Run it on your Anaconda and fly to Jupyter
Now you can open the Anaconda Prompt and change the directory into SPARK_HOME directory and type bin\pyspark.
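Concretely, assuming SPARK_HOME was set as described above, the two commands look like this:

cd %SPARK_HOME%
bin\pyspark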
out:
So, you are ready to fly to Jupyter and try Pyspark with the following code. If there are no errors, that means your Pyspark has been installed correctly.
import findspark
findspark.init()
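As an optional extra check (a small sketch, assuming the installation above succeeded), you can also create a SparkSession and print its version:

from pyspark.sql import SparkSession

# Create (or reuse) a local SparkSession and confirm the installed version
spark = SparkSession.builder.appName("install-check").getOrCreate()
print(spark.version)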
Conclusion
Now you have your Pyspark and you can start to learn and practice how it works by doing some data manipulation. If you have any trouble installing it, feel free to contact me.
I send out a periodical newsletter. If you would like to join please sign up via this link.
In addition to my newsletter, we can also get in touch in my telegram group Data Science for Beginners. | [
{
"code": null,
"e": 185,
"s": 172,
"text": "Introduction"
},
{
"code": null,
"e": 486,
"s": 185,
"text": "When we work with Big Data, we need more computational power that we can get with a distributed system of multiple computers. Moreover, to work effectively into the big data ecosystem, we also need a cluster computing framework which permits us to perform processing tasks quickly on large data sets."
},
{
"code": null,
"e": 600,
"s": 486,
"text": "The two most famous cluster computing frameworks are Hadoop and Spark that are available for free as open-source."
},
{
"code": null,
"e": 729,
"s": 600,
"text": "If we want to compare Apache Spark with Hadoop, we can say that Spark is 100 times faster in memory and 10 times faster on disk."
},
{
"code": null,
"e": 891,
"s": 729,
"text": "For what concerns on-perm installations Hadoop requires more memory on disk and Spark requires more RAM, that means setting up a cluster could be very expensive."
},
{
"code": null,
"e": 1157,
"s": 891,
"text": "Today we can solve that problem with the services of cloud computing provided by AWS and Azure. If you are interested in knowing something more about them, in particular a topic like the Cloud Data Warehouse, let me suggest to you my article that you can find here."
},
{
"code": null,
"e": 1460,
"s": 1157,
"text": "Instead, in this article, I will show you how to install the Spark Python API, called Pyspark. Installing Pyspark on Windows 10 requires some different steps to follow and sometimes we can forget these steps. So, with this article, I hope to give you a useful guide to install Pyspark with no problems."
},
{
"code": null,
"e": 1518,
"s": 1460,
"text": "Part I: Check your Java version and download Apache Spark"
},
{
"code": null,
"e": 1583,
"s": 1518,
"text": "I assume that you have on your PC a Python version at least 3.7."
},
{
"code": null,
"e": 1691,
"s": 1583,
"text": "So, to run Spark, the first thing we need to install is Java. It is recommended to have Java 8 or Java 1.8."
},
{
"code": null,
"e": 1852,
"s": 1691,
"text": "So, open your Command Prompt and control the version of your Java with the command that you can see below. If you have an old version, you can download it here."
},
{
"code": null,
"e": 1866,
"s": 1852,
"text": "Java -version"
},
{
"code": null,
"e": 1871,
"s": 1866,
"text": "out:"
},
{
"code": null,
"e": 2016,
"s": 1871,
"text": "Now, we have to download Spark that you can easily find here. The following frame show you the steps that you will see when you are in the site."
},
{
"code": null,
"e": 2209,
"s": 2016,
"text": "Download Apache Spark1. Choose a Spark release: 3.0.0(Jun 18 2020)--selected2. Choose a package type: Pre-built for Apache Hadoop 2.7 --selected3. Download Spark: spark-3.0.0-bin-hadoop2.7.tgz"
},
{
"code": null,
"e": 2347,
"s": 2209,
"text": "Above you can observe how I set my version of Spark. I selected the recent Spark release and the Pre-built package for Apache Hadoop 2.7."
},
{
"code": null,
"e": 2530,
"s": 2347,
"text": "Then in the 3rd step, we can download our Spark version by clicking on the link that will open another web page in which you have to click on the first downloading version suggested."
},
{
"code": null,
"e": 2684,
"s": 2530,
"text": "When the download is finished, extract the compressed file in a folder called spark. Remember that the path of this folder will be used in the next part."
},
{
"code": null,
"e": 2743,
"s": 2684,
"text": "Part II: Download winutils.exe and Set Up your Environment"
},
{
"code": null,
"e": 2936,
"s": 2743,
"text": "The second step is to download winutils.exe that you can find here. In my case, I chose the version hadoop-3.0.0, and I downloaded it. You can create a folder called winutils and put it there."
},
{
"code": null,
"e": 3087,
"s": 2936,
"text": "Now, it is time to set up our environment. The first thing to do is to go in the windows search bar and digit “edit the system environment variables”."
},
{
"code": null,
"e": 3130,
"s": 3087,
"text": "Go and click on the Environment Variables."
},
{
"code": null,
"e": 3254,
"s": 3130,
"text": "When you will be in the group of the System Variables, create three variables called HADOOP_HOME, SPARK_HOME and JAVA_HOME."
},
{
"code": null,
"e": 3331,
"s": 3254,
"text": "In HADOOP_HOME →put the path of the location wintulis folder created before."
},
{
"code": null,
"e": 3416,
"s": 3331,
"text": "In SPARK_HOME →put the path of the location of spark folder that you created before."
},
{
"code": null,
"e": 3481,
"s": 3416,
"text": "In JAVA_HOME →put the path of the location of your JAVA program."
},
{
"code": null,
"e": 3552,
"s": 3481,
"text": "In the following images you can see how you can set up your variables."
},
{
"code": null,
"e": 3636,
"s": 3552,
"text": "Then in the variable Path as you can see below, we can add the following two paths:"
},
{
"code": null,
"e": 3653,
"s": 3636,
"text": "%SPARK_HOME%\\bin"
},
{
"code": null,
"e": 3669,
"s": 3653,
"text": "%JAVA_HOME%\\bin"
},
{
"code": null,
"e": 3722,
"s": 3669,
"text": "Part III: Run it on your Anaconda and fly to Jupyter"
},
{
"code": null,
"e": 3832,
"s": 3722,
"text": "Now you can open the Anaconda Prompt and change the directory into SPARK_HOME directory and type bin\\pyspark."
},
{
"code": null,
"e": 3837,
"s": 3832,
"text": "out:"
},
{
"code": null,
"e": 3989,
"s": 3837,
"text": "So, you are ready to fly to Jupyter and try Pyspark with the following code. If there are no errors that means your Pyspark has been installed rightly."
},
{
"code": null,
"e": 4022,
"s": 3989,
"text": "import findsparkfindspark.init()"
},
{
"code": null,
"e": 4033,
"s": 4022,
"text": "Conclusion"
},
{
"code": null,
"e": 4211,
"s": 4033,
"text": "Now you have your Pyspark and you can start to learn and practice how it works by making some data manipulation. If you have some trouble to install it, feel free to contact me."
},
{
"code": null,
"e": 4303,
"s": 4211,
"text": "I send out a periodical newsletter. If you would like to join please sign up via this link."
}
]
|
CSS - Layouts | Hope you are very comfortable with HTML tables and you are efficient in designing page layouts using HTML Tables. But you know CSS also provides plenty of controls for positioning elements in a document. Since CSS is the wave of the future, why not learn and use CSS instead of tables for page layout purposes?
The following list collects a few pros and cons of both the technologies −
Most browsers support tables, while CSS support is being slowly adopted.
Tables are more forgiving when the browser window size changes - morphing their content and wrapping to accommodate the changes accordingly. CSS positioning tends to be exact and fairly inflexible.
Tables are much easier to learn and manipulate than CSS rules.
But each of these arguments can be reversed −
CSS is pivotal to the future of Web documents and will be supported by most browsers.
CSS is more exact than tables, allowing your document to be viewed as you intended, regardless of the browser window.
Keeping track of nested tables can be a real pain. CSS rules tend to be well organized, easily read, and easily changed.
Finally, we would suggest you use whichever technology makes sense to you: use what you know, or whatever presents your documents in the best way.
CSS also provides the table-layout property to make your tables load much faster. Following is an example −
<table style = "table-layout:fixed;width:600px;">
<tr height = "30">
<td width = "150">CSS table layout cell 1</td>
<td width = "200">CSS table layout cell 2</td>
<td width = "250">CSS table layout cell 3</td>
</tr>
</table>
You will notice the benefits more on large tables. With traditional HTML, the browser had to calculate every cell before finally rendering the table. When you set the table-layout algorithm to fixed, however, it only needs to look at the first row before rendering the whole table. It means your table will need to have fixed column widths and row heights.
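If you prefer to keep the setting out of the markup, the same thing can be expressed as a stylesheet rule; a minimal sketch, assuming the table carries a hypothetical class named fixed-table −

table.fixed-table {
   table-layout:fixed;
   width:600px;
}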
Here are the steps to create a simple Column Layout using CSS −
Set the margin and padding of the complete document as follows −
<style type = "text/css">
   <!--
      body {
         margin:9px 9px 0 9px;
         padding:0;
         background:#FFF;
      }
   -->
</style>
Now, we will define a column with yellow color and later, we will attach this rule to a <div> −
<style type = "text/css">
   <!--
      #level0 {
         background:#FC0;
      }
   -->
</style>
Up to this point, we will have a document with a yellow body, so let us now define another division inside level0 −
<style type = "text/css">
   <!--
      #level1 {
         margin-left:143px;
         padding-left:9px;
         background:#FFF;
      }
   -->
</style>
Now, we will nest one more division inside level1, and we will change just background color −
<style type = "text/css">
   <!--
      #level2 {
         background:#FFF3AC;
      }
   -->
</style>
Finally, we will use the same technique, nest a level3 division inside level2 to get the visual layout for the right column −
<style type = "text/css">
   <!--
      #level3 {
         margin-right:143px;
         padding-right:9px;
         background:#FFF;
      }
      #main {
         background:#CCC;
      }
   -->
</style>
Complete the source code as follows −
<style type = "text/css">
   body {
      margin:9px 9px 0 9px;
      padding:0;
      background:#FFF;
   }
   #level0 {background:#FC0;}
   #level1 {
      margin-left:143px;
      padding-left:9px;
      background:#FFF;
   }
   #level2 {background:#FFF3AC;}
   #level3 {
      margin-right:143px;
      padding-right:9px;
      background:#FFF;
   }
   #main {background:#CCC;}
</style>
<body>
<div id = "level0">
<div id = "level1">
<div id = "level2">
<div id = "level3">
<div id = "main">
Final Content goes here...
</div>
</div>
</div>
</div>
</div>
</body>
Similarly, you can add a top navigation bar or an ad bar at the top of the page.
It will produce the following result −
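As a quick illustration of that last suggestion, here is a minimal sketch (not part of the tutorial's layout) of a top bar placed above the column structure −

<style type = "text/css">
   #topbar {
      padding:9px;
      background:#333;
      color:#FFF;
   }
</style>
<div id = "topbar">
   Top navigation or ad bar goes here...
</div>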
| [
{
"code": null,
"e": 2938,
"s": 2626,
"text": "Hope you are very comfortable with HTML tables and you are efficient in designing page layouts using HTML Tables. But you know CSS also provides plenty of controls for positioning elements in a document. Since CSS is the wave of the future, why not learn and use CSS instead of tables for page layout purposes?"
},
{
"code": null,
"e": 3013,
"s": 2938,
"text": "The following list collects a few pros and cons of both the technologies −"
},
{
"code": null,
"e": 3086,
"s": 3013,
"text": "Most browsers support tables, while CSS support is being slowly adopted."
},
{
"code": null,
"e": 3159,
"s": 3086,
"text": "Most browsers support tables, while CSS support is being slowly adopted."
},
{
"code": null,
"e": 3357,
"s": 3159,
"text": "Tables are more forgiving when the browser window size changes - morphing their content and wrapping to accommodate the changes accordingly. CSS positioning tends to be exact and fairly inflexible."
},
{
"code": null,
"e": 3555,
"s": 3357,
"text": "Tables are more forgiving when the browser window size changes - morphing their content and wrapping to accommodate the changes accordingly. CSS positioning tends to be exact and fairly inflexible."
},
{
"code": null,
"e": 3618,
"s": 3555,
"text": "Tables are much easier to learn and manipulate than CSS rules."
},
{
"code": null,
"e": 3681,
"s": 3618,
"text": "Tables are much easier to learn and manipulate than CSS rules."
},
{
"code": null,
"e": 3727,
"s": 3681,
"text": "But each of these arguments can be reversed −"
},
{
"code": null,
"e": 3813,
"s": 3727,
"text": "CSS is pivotal to the future of Web documents and will be supported by most browsers."
},
{
"code": null,
"e": 3899,
"s": 3813,
"text": "CSS is pivotal to the future of Web documents and will be supported by most browsers."
},
{
"code": null,
"e": 4017,
"s": 3899,
"text": "CSS is more exact than tables, allowing your document to be viewed as you intended, regardless of the browser window."
},
{
"code": null,
"e": 4135,
"s": 4017,
"text": "CSS is more exact than tables, allowing your document to be viewed as you intended, regardless of the browser window."
},
{
"code": null,
"e": 4256,
"s": 4135,
"text": "Keeping track of nested tables can be a real pain. CSS rules tend to be well organized, easily read, and easily changed."
},
{
"code": null,
"e": 4377,
"s": 4256,
"text": "Keeping track of nested tables can be a real pain. CSS rules tend to be well organized, easily read, and easily changed."
},
{
"code": null,
"e": 4525,
"s": 4377,
"text": "Finally, we would suggest you to use whichever technology makes sense to you and use what you know or what presents your documents in the best way."
},
{
"code": null,
"e": 4629,
"s": 4525,
"text": "CSS also provides table-layout property to make your tables load much faster. Following is an example −"
},
{
"code": null,
"e": 4878,
"s": 4629,
"text": "<table style = \"table-layout:fixed;width:600px;\">\n <tr height = \"30\">\n <td width = \"150\">CSS table layout cell 1</td>\n <td width = \"200\">CSS table layout cell 2</td>\n <td width = \"250\">CSS table layout cell 3</td>\n </tr>\n</table>"
},
{
"code": null,
"e": 5235,
"s": 4878,
"text": "You will notice the benefits more on large tables. With traditional HTML, the browser had to calculate every cell before finally rendering the table. When you set the table-layout algorithm to fixed, however, it only needs to look at the first row before rendering the whole table. It means your table will need to have fixed column widths and row heights."
},
{
"code": null,
"e": 5299,
"s": 5235,
"text": "Here are the steps to create a simple Column Layout using CSS −"
},
{
"code": null,
"e": 5364,
"s": 5299,
"text": "Set the margin and padding of the complete document as follows −"
},
{
"code": null,
"e": 5513,
"s": 5364,
"text": "<style style = \"text/css\">\n <!--\n body {\n margin:9px 9px 0 9px;\n padding:0;\n background:#FFF;\n }\n -->\n</style>"
},
{
"code": null,
"e": 5609,
"s": 5513,
"text": "Now, we will define a column with yellow color and later, we will attach this rule to a <div> −"
},
{
"code": null,
"e": 5710,
"s": 5609,
"text": "<style style = \"text/css\">\n <!--\n #level0 {\n background:#FC0;\n }\n -->\n</style>"
},
{
"code": null,
"e": 5823,
"s": 5710,
"text": "Upto this point, we will have a document with yellow body, so let us now define another division inside level0 −"
},
{
"code": null,
"e": 5980,
"s": 5823,
"text": "<style style = \"text/css\">\n <!--\n #level1 {\n margin-left:143px;\n padding-left:9px;\n background:#FFF;\n }\n -->\n</style>\n"
},
{
"code": null,
"e": 6074,
"s": 5980,
"text": "Now, we will nest one more division inside level1, and we will change just background color −"
},
{
"code": null,
"e": 6178,
"s": 6074,
"text": "<style style = \"text/css\">\n <!--\n #level2 {\n background:#FFF3AC;\n }\n -->\n</style>"
},
{
"code": null,
"e": 6304,
"s": 6178,
"text": "Finally, we will use the same technique, nest a level3 division inside level2 to get the visual layout for the right column −"
},
{
"code": null,
"e": 6510,
"s": 6304,
"text": "<style style = \"text/css\">\n <!--\n #level3 {\n margin-right:143px;\n padding-right:9px;\n background:#FFF;\n }\n #main {\n background:#CCC;\n }\n -->\n</style>"
},
{
"code": null,
"e": 6548,
"s": 6510,
"text": "Complete the source code as follows −"
},
{
"code": null,
"e": 7233,
"s": 6548,
"text": "<style style = \"text/css\">\n body {\n margin:9px 9px 0 9px;\n padding:0;\n background:#FFF;\n }\n\t\n #level0 {background:#FC0;}\n\t\n #level1 {\n margin-left:143px;\n padding-left:9px;\n background:#FFF;\n }\n\t\n #level2 {background:#FFF3AC;}\n\t\n #level3 {\n margin-right:143px;\n padding-right:9px;\n background:#FFF;\n }\n\t\n #main {background:#CCC;}\n</style>\n<body>\n <div id = \"level0\">\n <div id = \"level1\">\n <div id = \"level2\">\n <div id = \"level3\">\n <div id = \"main\">\n Final Content goes here...\n </div>\n </div>\n </div>\n </div>\n </div>\n</body>"
},
{
"code": null,
"e": 7314,
"s": 7233,
"text": "Similarly, you can add a top navigation bar or an ad bar at the top of the page."
}
]
|
C library function - isprint() | The C library function int isprint(int c) checks whether the passed character is printable. A printable character is a character that is not a control character.
Following is the declaration for isprint() function.
int isprint(int c);
c − This is the character to be checked.
This function returns a non-zero value (true) if c is a printable character; otherwise, it returns zero (false).
The following example shows the usage of isprint() function.
#include <stdio.h>
#include <ctype.h>
int main () {
int var1 = 'k';
int var2 = '8';
int var3 = '\t';
int var4 = ' ';
if( isprint(var1) ) {
printf("var1 = |%c| can be printed\n", var1 );
} else {
printf("var1 = |%c| can't be printed\n", var1 );
}
if( isprint(var2) ) {
printf("var2 = |%c| can be printed\n", var2 );
} else {
printf("var2 = |%c| can't be printed\n", var2 );
}
if( isprint(var3) ) {
printf("var3 = |%c| can be printed\n", var3 );
} else {
printf("var3 = |%c| can't be printed\n", var3 );
}
if( isprint(var4) ) {
printf("var4 = |%c| can be printed\n", var4 );
} else {
printf("var4 = |%c| can't be printed\n", var4 );
}
return(0);
}
Let us compile and run the above program to produce the following result −
var1 = |k| can be printed
var2 = |8| can be printed
var3 = | | can't be printed
var4 = | | can be printed
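As a further illustration (a minimal sketch, not part of the original example), isprint() is often used to sanitize strings by replacing non-printable characters −

#include <stdio.h>
#include <ctype.h>

int main () {
   char text[] = "Tab\there";

   /* Replace every non-printable character with '?' */
   for (int i = 0; text[i] != '\0'; i++) {
      putchar(isprint((unsigned char)text[i]) ? text[i] : '?');
   }
   putchar('\n');   /* prints: Tab?here */

   return(0);
}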
| [
{
"code": null,
"e": 2169,
"s": 2007,
"text": "The C library function int isprint(int c) checks whether the passed character is printable. A printable character is a character that is not a control character."
},
{
"code": null,
"e": 2222,
"s": 2169,
"text": "Following is the declaration for isprint() function."
},
{
"code": null,
"e": 2242,
"s": 2222,
"text": "int isprint(int c);"
},
{
"code": null,
"e": 2283,
"s": 2242,
"text": "c − This is the character to be checked."
},
{
"code": null,
"e": 2324,
"s": 2283,
"text": "c − This is the character to be checked."
},
{
"code": null,
"e": 2419,
"s": 2324,
"text": "This function returns a non-zero value(true) if c is a printable character else, zero (false)."
},
{
"code": null,
"e": 2480,
"s": 2419,
"text": "The following example shows the usage of isprint() function."
},
{
"code": null,
"e": 3250,
"s": 2480,
"text": "#include <stdio.h>\n#include <ctype.h>\n\nint main () {\n int var1 = 'k';\n int var2 = '8';\n int var3 = '\\t';\n int var4 = ' ';\n \n if( isprint(var1) ) {\n printf(\"var1 = |%c| can be printed\\n\", var1 );\n } else {\n printf(\"var1 = |%c| can't be printed\\n\", var1 );\n }\n \n if( isprint(var2) ) {\n printf(\"var2 = |%c| can be printed\\n\", var2 );\n } else {\n printf(\"var2 = |%c| can't be printed\\n\", var2 );\n }\n \n if( isprint(var3) ) {\n printf(\"var3 = |%c| can be printed\\n\", var3 );\n } else {\n printf(\"var3 = |%c| can't be printed\\n\", var3 );\n }\n \n if( isprint(var4) ) {\n printf(\"var4 = |%c| can be printed\\n\", var4 );\n } else {\n printf(\"var4 = |%c| can't be printed\\n\", var4 );\n }\n \n return(0);\n} "
},
{
"code": null,
"e": 3325,
"s": 3250,
"text": "Let us compile and run the above program to produce the following result −"
},
{
"code": null,
"e": 3583,
"s": 3325,
"text": "var1 = |k| can be printed \nvar2 = |8| can be printed \nvar3 = | | can't be printed \nvar4 = | | can be printed\n"
}
]
|
What are the rules to create a constructor in java? | A constructor is used to initialize an object when it is created. It is syntactically similar to a method. The difference is that constructors have the same name as their class and have no return type.
There is no need to invoke constructors explicitly; they are automatically invoked at the time of instantiation.
While defining the constructors you should keep the following points in mind.
A constructor does not have a return type.
The name of the constructor is the same as the name of the class.
A constructor cannot be abstract, final, static or synchronized.
You can use the access specifiers public, protected & private with constructors.
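As a quick illustration of these rules, here is a small sketch (not part of the tutorial's program) with a valid constructor and, in a comment, a declaration the compiler would reject:

class Point {
   int x, y;
   // Valid: same name as the class, no return type
   public Point(int x, int y) {
      this.x = x;
      this.y = y;
   }
   // final Point() { } // invalid: a constructor cannot be final
}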
The following Java program demonstrates the creation of constructors.
public class Student {
public String name;
public int age;
public Student(){
this.name = "Raju";
this.age = 20;
}
public void display(){
System.out.println("Name of the Student: "+this.name );
System.out.println("Age of the Student: "+this.age );
}
public static void main(String args[]) {
new Student().display();
}
}
Name of the Student: Raju
Age of the Student: 20 | [
{
"code": null,
"e": 1265,
"s": 1062,
"text": "A constructor is used to initialize an object when it is created. It is syntactically similar to a method. The difference is that the constructors have same name as their class and, have no return type."
},
{
"code": null,
"e": 1378,
"s": 1265,
"text": "There is no need to invoke constructors explicitly these are automatically invoked at the time of instantiation."
},
{
"code": null,
"e": 1456,
"s": 1378,
"text": "While defining the constructors you should keep the following points in mind."
},
{
"code": null,
"e": 1497,
"s": 1456,
"text": "A constructor does not have return type."
},
{
"code": null,
"e": 1538,
"s": 1497,
"text": "A constructor does not have return type."
},
{
"code": null,
"e": 1600,
"s": 1538,
"text": "The name of the constructor is same as the name of the class."
},
{
"code": null,
"e": 1662,
"s": 1600,
"text": "The name of the constructor is same as the name of the class."
},
{
"code": null,
"e": 1728,
"s": 1662,
"text": "A constructor cannot be abstract, final, static and Synchronized."
},
{
"code": null,
"e": 1794,
"s": 1728,
"text": "A constructor cannot be abstract, final, static and Synchronized."
},
{
"code": null,
"e": 1875,
"s": 1794,
"text": "You can use the access specifiers public, protected & private with constructors."
},
{
"code": null,
"e": 1956,
"s": 1875,
"text": "You can use the access specifiers public, protected & private with constructors."
},
{
"code": null,
"e": 2026,
"s": 1956,
"text": "The following Java program demonstrates the creation of constructors."
},
{
"code": null,
"e": 2410,
"s": 2037,
"text": "public class Student {\n public String name;\n public int age;\n public Student(){\n this.name = \"Raju\";\n this.age = 20;\n }\n public void display(){\n System.out.println(\"Name of the Student: \"+this.name );\n System.out.println(\"Age of the Student: \"+this.age );\n }\n public static void main(String args[]) {\n new Student().display();\n }\n}"
},
{
"code": null,
"e": 2459,
"s": 2410,
"text": "Name of the Student: Raju\nAge of the Student: 20"
}
]
|
Using AWS Sagemaker and Lambda to Build a Serverless ML Platform | by Andrewngai | Towards Data Science | Most data enthusiasts know how to build and train a model, but deploying your model and making it useful in real life can sometimes be a challenging issue for beginner data scientists. Luckily, there are many different platforms and tools available to help with model deployment. Amazon Sagemaker is one of my favorites, as it largely reduces the effort and hesitation of building, training, and deploying your models. With the help of numerous AWS functionalities and tools such as the Lambda function, S3, and Dynamo DB, the entire process of building a working ML application can be at the click of a mouse.
In this article, I would like to demo how we can leverage the power of AWS to build a serverless ML application that predicts flight delays.
Dataset
Demo Architecture
Create a Notebook Instance
Create a Sagemaker Endpoint
Create a Lambda Function
Create API Endpoints
End to End Testing
Conclusion
The dataset, from the U.S. Department of Transportation, contains 7.21 million flight records in 2018 with 28 columns.
https://www.kaggle.com/yuanyuwendymu/airline-delay-and-cancellation-data-2009-2018
Due to the large amount of data (7.21 million rows), we used Google BigQuery for data cleaning, preprocessing, and simple feature engineering. As the purpose of this project is to demonstrate how AWS helps with model training and deployment, we will not spend too much time on how we pre-process the data with BigQuery (maybe a future topic). The processed data, ready for training, is then stored in an S3 bucket in CSV format. Now we can start building our demo on AWS.
To make this application useful, we will take advantage of AWS API Gateway and the Lambda function to build an API taking an HTTP POST request that contains airline information (a real-life example could be an app or website that takes an airline number as input). The POST request will trigger the Lambda function to parse the value and send test data to a Sagemaker endpoint that has the model deployed. The return value will be parsed by the Lambda function again, and the prediction result will be sent back to the user. The architecture diagram demonstrates the end-to-end pipeline from user input to prediction output.
As we mentioned earlier, the pre-processed result is stored in an S3 bucket for easy usage. However, Sagemaker can consume data from different channels. For example, Dynamo DB, Aurora, or even IoT device data from AWS IoT. Sagemaker also provides various tools for automatic model tuning and selection. Let's get started by first creating a Jupyter notebook instance in Sagemaker. From the AWS main page, choose Services and go into Sagemaker. You will see the following options; choose Notebook instances.
If you have never used Amazon SageMaker before, for the first two months you are offered a monthly free tier of 250 hours of t2.medium or t3.medium notebook usage for building your models, plus 50 hours of m4 instances. Enter the name and instance type for your notebook. If you have never created an IAM role before, create a new role or use the default Sagemaker IAM role. The IAM role controls the resource permissions for your instances.
Amazon SageMaker has built-in Jupyter Notebooks that allow you to write code in Python, Julia, R, or Scala. We will be using Python 3 for this project. It also provides a list of sample notebooks that are loaded when the notebook instance spins up. Each sample notebook covers a different use case and contains detailed comments on each step. I strongly recommend beginners go through those notebooks; you will have a much better idea of how to utilize Sagemaker.
Now let’s create our notebook. The notebook is modified based on the Breast Cancer Prediction.ipynb sample; you can read through the original. Remember to change the S3 bucket name and CSV file to match yours.
The last line of code will delete the endpoint that is created from the notebook, so let's first comment out that line. Then you can execute the notebook by hitting Run All, or use the Shift and Enter keys to run the cells step by step. By running the notebook, the model is trained and deployed, with a Sagemaker endpoint created. You can view this endpoint from the Amazon SageMaker console. You can change the default endpoint name to something more meaningful.
Now we have a SageMaker endpoint. Let's create a Lambda function to call it. The function will parse the HTTP POST request, invoke the Sagemaker endpoint, receive the prediction, parse the result, and send it back to users. The Lambda can use boto3's sagemaker-runtime.invoke_endpoint() to call the endpoint.
AWS Lambda is a useful tool, allowing developers to build serverless functions on a pay-per-use basis. You also benefit from the faster development, easier operational management, and scalability of FaaS. From the Lambda console, select Author from scratch. Enter your function name and choose Python 3.6 for this project. Remember to choose an IAM execution role with a policy that gives your function permission to invoke the Sagemaker endpoint.
Here is the code for our Lambda function. It uses ENDPOINT_NAME as an environment variable that holds the name of the SageMaker endpoint. Remember to assign the variable to your endpoint name under the environment variables section.
import os
import io
import boto3
import json
import csv

# grab environment variables
ENDPOINT_NAME = os.environ['ENDPOINT_NAME']
runtime = boto3.client('runtime.sagemaker')

def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))

    # Pull the CSV payload out of the POST body
    data = json.loads(json.dumps(event))
    payload = data['data']
    print(payload)

    # Call the SageMaker endpoint with the raw CSV row
    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='text/csv',
                                       Body=payload)

    # Parse the JSON response returned by the model
    result = json.loads(response['Body'].read().decode())
    print(result)

    pred = int(result['predictions'][0]['score'])
    #pred = int(result['predictions'][0])
    predicted_label = 'delay' if pred == 1 else 'no delay'

    return predicted_label
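The handler expects the POST body to carry a single CSV row of features under the data key. A hypothetical example payload (the actual feature columns depend on the preprocessing step, so these values are placeholders):

{"data": "5,3,1200,1340,450,2.5"}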
Now we have the Lambda function ready, so let's create the API to receive HTTP requests and integrate everything. You can follow the AWS Documentation to create the API from API Gateway, or just follow these simple steps.
1. Open the AWS API Gateway console and choose Create API. Choose an API name; you can leave the Endpoint Type as Regional. Choose Create API.
2. Next, create a Resource from the Actions drop-down list. When the resource is created, from the same list, choose Create Method to create a POST method.
3. On the screen that appears, enter the following: for Integration type, choose Lambda Function; for Lambda Function, enter the function you created.
4. After the setup is complete, deploy the API to a stage. From the Actions drop-down list, choose Deploy API.
5. On the page that appears, create a new stage and choose Deploy. This step will give you the Invoke URL.
Now we have built a serverless ML pipeline that can take users' data from an HTTP POST request and return ML prediction results from the model we deployed. We can test it in Postman. Postman is an interface testing tool. When doing interface testing, Postman acts like a client: it can simulate all kinds of HTTP requests initiated by users, send the request data to the server, obtain the corresponding response, and verify whether the result data in the response matches the expected value. Now we send a POST request to the Invoke URL we just created in the last step; it triggers the Lambda function and invokes our Sagemaker endpoint to make the prediction. We can see that we get a result back as 'no delay'.
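If you prefer the command line over Postman, an equivalent request can be sent with curl (the URL and payload below are placeholders, not real values):

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"data": "<comma-separated feature values>"}' \
  https://<api-id>.execute-api.<region>.amazonaws.com/<stage>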
Concluding thoughts
AWS Sagemaker has been a great tool for data scientists and ML application developers at all levels to achieve robust and complete end-to-end ML solutions. It can automate many tedious efforts like hyperparameter tuning, model selection, and even data labeling. With the help of the Lambda function, an ML application can be highly flexible and cost-effective.
Thanks for reading and I am looking forward to hearing your questions and thoughts. If you want to learn more about Data Science and Cloud Computing, you can find me on Linkedin. | [
Machine Learning | An Introduction | by Gavin Edwards | Towards Data Science

Introduction
Terminology
Process
Background Theory
Machine Learning Approaches
Machine Learning is undeniably one of the most influential and powerful technologies in today's world. More importantly, we are far from seeing its full potential. There's no doubt it will continue to make headlines for the foreseeable future. This article is designed as an introduction to Machine Learning concepts, covering all the fundamental ideas without being too high level.
Machine learning is a tool for turning information into knowledge. In the past 50 years, there has been an explosion of data. This mass of data is useless unless we analyse it and find the patterns hidden within. Machine learning techniques are used to automatically find the valuable underlying patterns within complex data that we would otherwise struggle to discover. The hidden patterns and knowledge about a problem can be used to predict future events and perform all kinds of complex decision making.
We are drowning in information and starving for knowledge — John Naisbitt
Most of us are unaware that we already interact with Machine Learning every single day. Every time we Google something, listen to a song or even take a photo, Machine Learning is becoming part of the engine behind it, constantly learning and improving from every interaction. It’s also behind world-changing advances like detecting cancer, creating new drugs and self-driving cars.
The reason Machine Learning is so exciting is that it is a step away from all our previous rule-based systems of:
if(x = y): do z
Traditionally, software engineering combined human created rules with data to create answers to a problem. Instead, machine learning uses data and answers to discover the rules behind a problem. (Chollet, 2017)
To learn the rules governing a phenomenon, machines have to go through a learning process, trying different rules and learning from how well they perform. Hence, why it’s known as Machine Learning.
There are multiple forms of Machine Learning: supervised, unsupervised, semi-supervised and reinforcement learning. Each form of Machine Learning has differing approaches, but they all follow the same underlying process and theory. This explanation covers the general Machine Learning concept and then focuses in on each approach.
Dataset: A set of data examples that contain features important to solving the problem.
Features: Important pieces of data that help us understand a problem. These are fed into a Machine Learning algorithm to help it learn.
Model: The representation (internal model) of a phenomenon that a Machine Learning algorithm has learnt. It learns this from the data it is shown during training. The model is the output you get after training an algorithm. For example, a decision tree algorithm would be trained and produce a decision tree model.
Data Collection: Collect the data that the algorithm will learn from.
Data Preparation: Format and engineer the data into the optimal format, extracting important features and performing dimensionality reduction.
Training: Also known as the fitting stage, this is where the Machine Learning algorithm actually learns by showing it the data that has been collected and prepared.
Evaluation: Test the model to see how well it performs.
Tuning: Fine-tune the model to maximise its performance.
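To make these five stages concrete, here is a minimal scikit-learn sketch; the dataset and the choice of algorithm are arbitrary and purely for illustration:

# 1. Data Collection: load an example dataset
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# 2. Data Preparation: split the data and scale the features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 3. Training: fit the algorithm to the prepared data
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4. Evaluation: test how well the model performs on unseen data
print(accuracy_score(y_test, model.predict(X_test)))

# 5. Tuning: search for hyperparameters that maximise performance
search = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]})
print(search.fit(X_train, y_train).best_params_)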
The Analytical Engine weaves algebraic patterns just as the Jacquard weaves flowers and leaves — Ada Lovelace
Ada Lovelace, one of the founders of computing, and perhaps the first computer programmer, realised that anything in the world could be described with math.
More importantly, this meant a mathematical formula can be created to derive the relationship representing any phenomenon. Ada Lovelace realised that machines had the potential to understand the world without the need for human assistance.
Around 200 years later, these fundamental ideas are critical in Machine Learning. No matter what the problem is, its information can be plotted onto a graph as data points. Machine Learning then tries to find the mathematical patterns and relationships hidden within the original information.
Probability is orderly opinion... inference from data is nothing other than the revision of such opinion in the light of relevant new information — Thomas Bayes
Another mathematician, Thomas Bayes, founded ideas that are essential in the probability theory that is manifested into Machine Learning.
We live in a probabilistic world. Everything that happens has uncertainty attached to it. The Bayesian interpretation of probability is what Machine Learning is based upon. Bayesian probability means that we think of probability as quantifying the uncertainty of an event.
Because of this, we have to base our probabilities on the information available about an event, rather than counting the number of repeated trials. For example, when predicting a football match, instead of counting the total amount of times Manchester United have won against Liverpool, a Bayesian approach would use relevant information such as the current form, league placing and starting team.
The benefit of taking this approach is that probabilities can still be assigned to rare events, as the decision making process is based on relevant features and reasoning.
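As a toy illustration of revising an opinion with relevant information (all of the numbers below are invented), Bayes' theorem can be written out in a few lines of Python:

# P(win | good form) = P(good form | win) * P(win) / P(good form)
p_win = 0.5              # prior belief that the team wins
p_form_given_win = 0.8   # how often winning teams showed good recent form
p_form = 0.6             # overall chance of seeing good recent form
posterior = p_form_given_win * p_win / p_form
print(posterior)         # ~0.67: the new information raises our belief from 0.5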
There are many approaches that can be taken when conducting Machine Learning. They are usually grouped into the areas listed below. Supervised and Unsupervised are well established approaches and the most commonly used. Semi-supervised and Reinforcement Learning are newer and more complex but have shown impressive results.
The No Free Lunch theorem is famous in Machine Learning. It states that there is no single algorithm that will work well for all tasks. Each task that you try to solve has its own idiosyncrasies. Therefore, there are lots of algorithms and approaches to suit each problem's individual quirks. Plenty more styles of Machine Learning and AI will keep being introduced that best fit different problems.
Supervised Learning
Unsupervised Learning
Semi-supervised Learning
Reinforcement Learning
In supervised learning, the goal is to learn the mapping (the rules) between a set of inputs and outputs.
For example, the inputs could be the weather forecast, and the outputs would be the visitors to the beach. The goal in supervised learning would be to learn the mapping that describes the relationship between temperature and number of beach visitors.
Example labelled data of past input and output pairs is provided during the learning process to teach the model how it should behave, hence, ‘supervised’ learning. For the beach example, new inputs of forecast temperature can then be fed in, and the Machine Learning algorithm will then output a future prediction for the number of visitors.
Being able to adapt to new inputs and make predictions is the crucial generalisation part of machine learning. In training, we want to maximise generalisation, so the supervised model defines the real ‘general’ underlying relationship. If the model is over-trained, we cause over-fitting to the examples used and the model would be unable to adapt to new, previously unseen inputs.
A side effect to be aware of in supervised learning is that the supervision we provide introduces bias to the learning. The model can only be imitating exactly what it was shown, so it is very important to show it reliable, unbiased examples. Also, supervised learning usually requires a lot of data before it learns. Obtaining enough reliably labelled data is often the hardest and most expensive part of using supervised learning. (Hence why data has been called the new oil!)
The output from a supervised Machine Learning model could be a category from a finite set, e.g. [low, medium, high] for the number of visitors to the beach:
Input [temperature=20] -> Model -> Output = [visitors=high]
When this is the case, the model is deciding how to classify the input, and so this is known as classification.
Alternatively, the output could be a real-world scalar (output a number):
Input [temperature=20] -> Model -> Output = [visitors=300]
When this is the case, it is known as regression.
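The beach example makes the contrast concrete. In the hypothetical scikit-learn sketch below, built on made-up data, the same temperature input feeds a classifier that outputs a category and a regressor that outputs a number:

from sklearn.linear_model import LogisticRegression, LinearRegression

temps = [[15], [18], [22], [26], [30]]   # input: forecast temperature
visitor_class = ["low", "low", "medium", "high", "high"]
visitor_count = [60, 95, 180, 260, 330]

clf = LogisticRegression(max_iter=1000).fit(temps, visitor_class)
reg = LinearRegression().fit(temps, visitor_count)

print(clf.predict([[20]]))   # classification: a category, e.g. ['medium']
print(reg.predict([[20]]))   # regression: a number, e.g. roughly 150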
Classification is used to group the similar data points into different sections in order to classify them. Machine Learning is used to find the rules that explain how to separate the different data points.
But how are the magical rules created? Well, there are multiple ways to discover the rules. They all focus on using data and answers to discover rules that linearly separate data points.
Linear separability is a key concept in machine learning. All that linear separability means is ‘can the different data points be separated by a line?’. So put simply, classification approaches try to find the best way to separate data points with a line.
The lines drawn between classes are known as the decision boundaries. The entire area that is chosen to define a class is known as the decision surface. The decision surface defines that if a data point falls within its boundaries, it will be assigned a certain class.
Regression is another form of supervised learning. The difference between classification and regression is that regression outputs a number rather than a class. Therefore, regression is useful when predicting number based problems like stock market prices, the temperature for a given day, or the probability of an event.
Regression is used in financial trading to find the patterns in stocks and other assets to decide when to buy/sell and make a profit. For classification, it is already being used to classify if an email you receive is spam.
Both the classification and regression supervised learning techniques can be extended to much more complex tasks. For example, tasks involving speech and audio. Image classification, object detection and chat bots are some examples.
A recent example shown below uses a model trained with supervised learning to realistically fake videos of people talking.
You might be wondering how this complex image-based task relates to classification or regression. Well, it comes back to everything in the world, even complex phenomena, being fundamentally described with math and numbers. In this example, a neural network is still only outputting numbers like in regression. But in this example the numbers are the numerical 3D coordinate values of a facial mesh.
In unsupervised learning, only input data is provided in the examples. There are no labelled example outputs to aim for. But it may be surprising to know that it is still possible to find many interesting and complex patterns hidden within data without any labels.
An example of unsupervised learning in real life would be sorting different colour coins into separate piles. Nobody taught you how to separate them, but by just looking at their features such as colour, you can see which colour coins are associated and cluster them into their correct groups.
Unsupervised learning can be harder than supervised learning, as the removal of supervision means the problem has become less defined. The algorithm has a less focused idea of what patterns to look for.
Think of it in your own learning. If you learnt to play the guitar by being supervised by a teacher, you would learn quickly by re-using the supervised knowledge of notes, chords and rhythms. But if you only taught yourself, you’d find it so much harder knowing where to start.
By being unsupervised in a laissez-faire teaching style, you start from a clean slate with less bias and may even find a new, better way to solve a problem. Therefore, this is why unsupervised learning is also known as knowledge discovery. Unsupervised learning is very useful when conducting exploratory data analysis.
To find the interesting structures in unlabeled data, we use density estimation. The most common form of which is clustering. Among others, there is also dimensionality reduction, latent variable models and anomaly detection. More complex unsupervised techniques involve neural networks like Auto-encoders and Deep Belief Networks, but we won’t go into them in this introduction blog.
Unsupervised learning is mostly used for clustering. Clustering is the act of creating groups with differing characteristics. Clustering attempts to find various subgroups within a dataset. As this is unsupervised learning, we are not restricted to any set of labels and are free to choose how many clusters to create. This is both a blessing and a curse. Picking a model that has the correct number of clusters (complexity) has to be conducted via an empirical model selection process.
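A minimal k-means sketch with scikit-learn shows the idea; the data is random filler, and the choice of three clusters is exactly the model-selection decision described above:

import numpy as np
from sklearn.cluster import KMeans

points = np.random.RandomState(0).rand(100, 2)   # unlabelled 2-D data
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(kmeans.labels_[:10])       # the cluster assigned to each point
print(kmeans.cluster_centers_)   # the centre of each discovered group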
In Association Learning you want to uncover the rules that describe your data. For example, if a person watches video A they will likely watch video B. Association rules are perfect for examples such as this where you want to find related items.
The identification of rare or unusual items that differ from the majority of data. For example, your bank will use this to detect fraudulent activity on your card. Your normal spending habits will fall within a normal range of behaviors and values. But when someone tries to steal from you using your card the behavior will be different from your normal pattern. Anomaly detection uses unsupervised learning to separate and detect these strange occurrences.
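A sketch of the card-fraud idea using scikit-learn's Isolation Forest, with invented spending figures: the model learns the normal range and flags points that fall far outside it:

from sklearn.ensemble import IsolationForest

normal_spend = [[25], [30], [28], [35], [22], [31], [27], [33]]  # typical daily spend
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_spend)
print(detector.predict([[29], [400]]))   # 1 = normal, -1 = anomaly; 400 should be flagged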
Dimensionality reduction aims to find the most important features to reduce the original feature set down into a smaller more efficient set that still encodes the important data.
For example, in predicting the number of visitors to the beach we might use the temperature, day of the week, month and number of events scheduled for that day as inputs. But the month might actually not be important for predicting the number of visitors.
Irrelevant features such as this can confuse Machine Learning algorithms and make them less efficient and accurate. By using dimensionality reduction, only the most important features are identified and used. Principal Component Analysis (PCA) is a commonly used technique.
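A short PCA sketch, where four hypothetical beach-prediction features are squeezed down to two components that keep most of the variance (the data is random filler):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.RandomState(0).rand(50, 4)   # e.g. temperature, weekday, month, events
pca = PCA(n_components=2).fit(X)
X_reduced = pca.transform(X)               # 50 rows, now only 2 features each
print(pca.explained_variance_ratio_)       # how much variance each component keeps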
In the real world, clustering has successfully been used to discover a new type of star by investigating what subgroups of stars automatically form based on the stars' characteristics. In marketing, it is regularly used to cluster customers into similar groups based on their behaviors and characteristics.
Association learning is used for recommending or finding related items. A common example is market basket analysis. In market basket analysis, association rules are found to predict other items a customer is likely to buy based on what they have placed in their basket. Amazon use this. If you place a new laptop in your basket, they recommend items like a laptop case via their association rules.
Anomaly detection is well suited in scenarios such as fraud detection and malware detection.
Semi-supervised learning is a mix between supervised and unsupervised approaches. The learning process isn’t closely supervised with example outputs for every single input, but we also don’t let the algorithm do its own thing and provide no form of feedback. Semi-supervised learning takes the middle road.
By being able to mix together a small amount of labelled data with a much larger unlabeled dataset it reduces the burden of having enough labelled data. Therefore, it opens up many more problems to be solved with machine learning.
Generative Adversarial Networks (GANs) have been a recent breakthrough with incredible results. GANs use two neural networks, a generator and discriminator. The generator generates output and the discriminator critiques it. By battling against each other they both become increasingly skilled.
By using one network to generate outputs and another to critique them, there is no need for us to provide explicit labels every single time, and so it can be classed as semi-supervised.
A perfect example is in medical scans, such as breast cancer scans. A trained expert is needed to label these which is time consuming and very expensive. Instead, an expert can label just a small set of breast cancer scans, and the semi-supervised algorithm would be able to leverage this small subset and apply it to a larger set of scans.
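One way to express this pattern in code is with scikit-learn's semi-supervised estimators: mark the unlabelled examples with -1 and let the algorithm spread the few known labels across the larger set. A rough sketch, with handwritten digits standing in for the scans:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

X, y = load_digits(return_X_y=True)
labels = np.copy(y)
rng = np.random.RandomState(0)
labels[rng.rand(len(y)) < 0.9] = -1   # pretend 90% of the labels are unknown

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, labels)
mask = labels == -1
print((model.transduction_[mask] == y[mask]).mean())   # accuracy on the unlabelled part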
For me, GANs are one of the most impressive examples of semi-supervised learning. Below is a video where a Generative Adversarial Network uses unsupervised learning to map aspects from one image to another.
The final type of machine learning is by far my favourite. It is less common and much more complex, but it has generated incredible results. It doesn’t use labels as such, and instead uses rewards to learn.
If you’re familiar with psychology, you’ll have heard of reinforcement learning. If not, you’ll already know the concept from how we learn in everyday life. In this approach, occasional positive and negative feedback is used to reinforce behaviours. Think of it like training a dog, good behaviours are rewarded with a treat and become more common. Bad behaviours are punished and become less common. This reward-motivated behaviour is key in reinforcement learning.
This is very similar to how we as humans also learn. Throughout our lives, we receive positive and negative signals and constantly learn from them. The chemicals in our brain are one of many ways we get these signals. When something good happens, the neurons in our brains provide a hit of positive neurotransmitters such as dopamine which makes us feel good and we become more likely to repeat that specific action. We don’t need constant supervision to learn like in supervised learning. By only giving the occasional reinforcement signals, we still learn very effectively.
One of the most exciting parts of Reinforcement Learning is that it is a first step away from training on static datasets, towards being able to use dynamic, noisy data-rich environments. This brings Machine Learning closer to a learning style used by humans. The world is simply our noisy, complex data-rich environment.
Games are very popular in Reinforcement Learning research. They provide ideal data-rich environments. The scores in games are ideal reward signals to train reward-motivated behaviours. Additionally, time can be sped up in a simulated game environment to reduce overall training time.
A Reinforcement Learning algorithm just aims to maximise its rewards by playing the game over and over again. If you can frame a problem with a frequent ‘score’ as a reward, it is likely to be suited to Reinforcement Learning.
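To show what the 'frequent score as reward' framing looks like in code, here is a tiny tabular Q-learning sketch on a made-up five-cell corridor where the only reward sits at the right-hand end:

import random

n_states = 5
actions = [-1, 1]                       # step left or step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):              # play the game over and over again
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:
            a = random.choice(actions)  # occasionally explore
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0   # the score signal
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# the learned policy should be 'go right' (+1) in every non-terminal state
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])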
Reinforcement learning hasn't been used as much in the real world due to how new and complex it is. But a real-world example is using reinforcement learning to reduce data center running costs by controlling the cooling systems in a more efficient way. The algorithm learns an optimal policy of how to act in order to get the lowest energy costs. The lower the cost, the more reward it receives.
In research it is frequently used in games. Games of perfect information (where you can see the entire state of the environment) and imperfect information (where parts of the state are hidden, e.g. the real world) have both seen incredible successes that outperform humans.
Google DeepMind have used reinforcement learning in research to play Go and Atari games at superhuman levels.
That’s all for the introduction to Machine Learning! Keep your eye out for more blogs coming soon that will go into more depth on specific topics.
If you enjoy my work and want to keep up to date with the latest publications or would like to get in touch, I can be found on twitter at @GavinEdwards_AI or on Medium at Gavin Edwards — Thanks! 🤖🧠
Chollet, F. (2017). Deep Learning with Python. Shelter Island, NY: Manning.
JIRA - Advanced Search

Apart from the types of searches explained in the previous chapter, JIRA also has a few advanced search options, which can be performed in the following three ways.
Using Field Reference
Using Keyword Reference
Using Operators Reference
These above-mentioned three ways have been explained in detail below.
The user should consider the following points while performing any advanced search.
Advanced search uses structured queries to search for JIRA issues.
Search results display in the Issue Navigator.
Search results can be exported to MS Excel and many other available formats.
Save and Subscribe features are available to advanced searches.
An advanced search uses the JIRA Query Language known as JQL.
A simple query in JQL consists of a field and an operator, followed by one or more values or functions. For example, the following simple query will find all issues in the "WFT" project −
Project = "WFT"
JQL supports SQL-like syntax such as ORDER BY, GROUP BY and ISNULL() functions, but JQL is not a database query language.
A field reference is a word that represents a field name in a JIRA issue, including custom fields. The syntax is −
<field name> <operator such as =, >, <> "value" or "function"
The operator compares the value of the field with the value on the right side, such that only true results are retrieved by the query.
Go to Issues → Search for Issues in the navigator bar.
The following screenshot shows how to navigate the Search section.
If there is an existing search criterion, click on the New Filter button to reset the criteria. The following screenshot shows how to start with a new criteria −
Type the query using a field, operator and value, such as issueKey = "WFT-107".
There are other fields as well – Affected Version, Assignee, Attachments, Category, Comment, Component, Created, Creator, Description, Due, Environment, etc. As soon as the user starts typing, the auto-complete functionality helps to write in the defined format.
The following screenshot shows how to add Field Name criteria using advanced features.
Operator selection − The following screenshot shows how to select operators.
The next step is to enter the value and then click on the Search symbol. The following screenshot shows how to add values and search.
The following screenshot shows the search result based on criteria set.
Here, we will understand how to use a keyword reference and what its advantages are.
A keyword in JQL −
joins two or more queries together to form a complex JQL query.
alters the logic of one or more queries.
alters the logic of operators.
has an explicit definition in a JQL query.
performs a specific function that defines the results of a JQL query.
List of Keywords −
AND − ex − status = open AND priority = urgent AND assignee = Ashish.
OR − ex – duedate < now() or duedate is empty.
NOT − ex − not assignee = Ashish.
EMPTY − ex - affectedVersion is empty / affectedVersion = empty.
NULL − ex – assignee is null.
ORDER BY − ex – duedate = empty order by created, priority desc.
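These keywords can be combined into a single compound query, for instance (the field values are only illustrative) −
project = "WFT" AND status = open AND assignee is not EMPTY ORDER BY duedate ASC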
Similar to field reference, as soon as the user starts typing, the auto-complete functionality helps to get the correct syntax. The following screenshot shows how to add keywords.
Click on the Search symbol and it will provide the search results. The following screenshot shows the result based on a criteria set.
Operators are used to compare values of the left side with the right side, such that only true results display as the search result.
Equals: =
Not Equals: !=
Greater Than: >
Less Than: <
Greater Than Equals: >=
Less Than Equals: <=
IN
NOT IN
CONTAINS: ~
Does Not Contain: !~
IS
IS NOT
WAS
WAS IN
WAS NOT IN
WAS NOT
CHANGED
Similar to the Field and the Keyword Reference, these operators can also be used to enhance the search results.
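For example, a few operator-based queries (again with illustrative values) −
priority in (High, Highest)
summary ~ "login"
status was "In Progress"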
| [
{
"code": null,
"e": 2362,
"s": 2195,
"text": "Apart from the type of searches explained in the previous chapter, JIRA also has a few advanced search options, which can be performed using the following three ways."
},
{
"code": null,
"e": 2384,
"s": 2362,
"text": "Using Field Reference"
},
{
"code": null,
"e": 2408,
"s": 2384,
"text": "Using Keyword Reference"
},
{
"code": null,
"e": 2434,
"s": 2408,
"text": "Using Operators Reference"
},
{
"code": null,
"e": 2504,
"s": 2434,
"text": "These above-mentioned three ways have been explained in detail below."
},
{
"code": null,
"e": 2588,
"s": 2504,
"text": "The user should consider the following points while performing any advanced search."
},
{
"code": null,
"e": 2655,
"s": 2588,
"text": "Advanced search uses structured queries to search for JIRA issues."
},
{
"code": null,
"e": 2722,
"s": 2655,
"text": "Advanced search uses structured queries to search for JIRA issues."
},
{
"code": null,
"e": 2770,
"s": 2722,
"text": "Search results displays in the Issue Navigator."
},
{
"code": null,
"e": 2818,
"s": 2770,
"text": "Search results displays in the Issue Navigator."
},
{
"code": null,
"e": 2895,
"s": 2818,
"text": "Search results can be exported to MS Excel and many other available formats."
},
{
"code": null,
"e": 2972,
"s": 2895,
"text": "Search results can be exported to MS Excel and many other available formats."
},
{
"code": null,
"e": 3036,
"s": 2972,
"text": "Save and Subscribe features are available to advanced searches."
},
{
"code": null,
"e": 3100,
"s": 3036,
"text": "Save and Subscribe features are available to advanced searches."
},
{
"code": null,
"e": 3162,
"s": 3100,
"text": "An advanced search uses the JIRA Query Language known as JQL."
},
{
"code": null,
"e": 3224,
"s": 3162,
"text": "An advanced search uses the JIRA Query Language known as JQL."
},
{
"code": null,
"e": 3406,
"s": 3224,
"text": "A simple query in JQL consists of a field, operator, followed by one or more values or functions. For example, the following simple query will find all issues in the \"WFT\" project −"
},
{
"code": null,
"e": 3588,
"s": 3406,
"text": "A simple query in JQL consists of a field, operator, followed by one or more values or functions. For example, the following simple query will find all issues in the \"WFT\" project −"
},
{
"code": null,
"e": 3605,
"s": 3588,
"text": "Project = \"WFT\"\n"
},
{
"code": null,
"e": 3724,
"s": 3605,
"text": "JQL supports SQL like syntax such as ORDER BY, GROUP BY, ISNULL() functions, but JQL is not a Database Query Language."
},
{
"code": null,
"e": 3843,
"s": 3724,
"text": "JQL supports SQL like syntax such as ORDER BY, GROUP BY, ISNULL() functions, but JQL is not a Database Query Language."
},
{
"code": null,
"e": 3968,
"s": 3843,
"text": "A field reference means a word that represents the field name in the JIRA issue including the custom fields. The syntax is −"
},
{
"code": null,
"e": 4031,
"s": 3968,
"text": "<field name> <operators like =,>, <> “values” or “functions” \n"
},
{
"code": null,
"e": 4162,
"s": 4031,
"text": "The operator compares the value of the field with value presents at right side such that only true results are retrieved by query."
},
{
"code": null,
"e": 4217,
"s": 4162,
"text": "Go to Issues → Search for Issues in the navigator bar."
},
{
"code": null,
"e": 4284,
"s": 4217,
"text": "The following screenshot shows how to navigate the Search section."
},
{
"code": null,
"e": 4446,
"s": 4284,
"text": "If there is an existing search criterion, click on the New Filter button to reset the criteria. The following screenshot shows how to start with a new criteria −"
},
{
"code": null,
"e": 4525,
"s": 4446,
"text": "Type the query using the Field, Operator and Values like issueKey = “WFT-107”."
},
{
"code": null,
"e": 4788,
"s": 4525,
"text": "There are other fields as well – Affected Version, Assignee, Attachments, Category, Comment, Component, Created, Creator, Description, Due, Environment, etc. As soon as the user starts typing, the auto-complete functionality helps to write in the defined format."
},
{
"code": null,
"e": 4875,
"s": 4788,
"text": "The following screenshot shows how to add Field Name criteria using advanced features."
},
{
"code": null,
"e": 4952,
"s": 4875,
"text": "Operator selection − The following screenshot shows how to select operators."
},
{
"code": null,
"e": 5086,
"s": 4952,
"text": "The next step is to enter the value and then click on the Search symbol. The following screenshot shows how to add values and search."
},
{
"code": null,
"e": 5158,
"s": 5086,
"text": "The following screenshot shows the search result based on criteria set."
},
{
"code": null,
"e": 5242,
"s": 5158,
"text": "Here, we will understand how to use a keyword reference and what its advantages are"
},
{
"code": null,
"e": 5261,
"s": 5242,
"text": "A keyword in JQL −"
},
{
"code": null,
"e": 5325,
"s": 5261,
"text": "joins two or more queries together to form a complex JQL query."
},
{
"code": null,
"e": 5366,
"s": 5325,
"text": "alters the logic of one or more queries."
},
{
"code": null,
"e": 5397,
"s": 5366,
"text": "alters the logic of operators."
},
{
"code": null,
"e": 5440,
"s": 5397,
"text": "has an explicit definition in a JQL query."
},
{
"code": null,
"e": 5510,
"s": 5440,
"text": "performs a specific function that defines the results of a JQL query."
},
{
"code": null,
"e": 5529,
"s": 5510,
"text": "List of Keywords −"
},
{
"code": null,
"e": 5599,
"s": 5529,
"text": "AND − ex - status = open AND priority = urgent And assignee = Ashish."
},
{
"code": null,
"e": 5646,
"s": 5599,
"text": "OR − ex – duedate < now() or duedate is empty."
},
{
"code": null,
"e": 5681,
"s": 5646,
"text": "NOT − ex – not assignee = Ashish ."
},
{
"code": null,
"e": 5746,
"s": 5681,
"text": "EMPTY − ex - affectedVersion is empty / affectedVersion = empty."
},
{
"code": null,
"e": 5776,
"s": 5746,
"text": "NULL − ex – assignee is null."
},
{
"code": null,
"e": 5841,
"s": 5776,
"text": "ORDER BY − ex – duedate = empty order by created, priority desc."
},
{
"code": null,
"e": 6021,
"s": 5841,
"text": "Similar to field reference, as soon as the user starts typing, the auto-complete functionality helps to get the correct syntax. The following screenshot shows how to add keywords."
},
{
"code": null,
"e": 6155,
"s": 6021,
"text": "Click on the Search symbol and it will provide the search results. The following screenshot shows the result based on a criteria set."
},
{
"code": null,
"e": 6288,
"s": 6155,
"text": "Operators are used to compare values of the left side with the right side, such that only true results display as the search result."
},
{
"code": null,
"e": 6299,
"s": 6288,
"text": "Equals: = "
},
{
"code": null,
"e": 6315,
"s": 6299,
"text": "Not Equals: != "
},
{
"code": null,
"e": 6331,
"s": 6315,
"text": "Greater Than: >"
},
{
"code": null,
"e": 6345,
"s": 6331,
"text": "Less Than: < "
},
{
"code": null,
"e": 6369,
"s": 6345,
"text": "Greater Than Equals: =>"
},
{
"code": null,
"e": 6390,
"s": 6369,
"text": "Less than equals: =<"
},
{
"code": null,
"e": 6393,
"s": 6390,
"text": "IN"
},
{
"code": null,
"e": 6400,
"s": 6393,
"text": "NOT IN"
},
{
"code": null,
"e": 6412,
"s": 6400,
"text": "CONTAINS: ~"
},
{
"code": null,
"e": 6434,
"s": 6412,
"text": "Does Not contain: ! ~"
},
{
"code": null,
"e": 6437,
"s": 6434,
"text": "IS"
},
{
"code": null,
"e": 6444,
"s": 6437,
"text": "IS NOT"
},
{
"code": null,
"e": 6448,
"s": 6444,
"text": "WAS"
},
{
"code": null,
"e": 6455,
"s": 6448,
"text": "WAS IN"
},
{
"code": null,
"e": 6466,
"s": 6455,
"text": "WAS NOT IN"
},
{
"code": null,
"e": 6474,
"s": 6466,
"text": "WAS NOT"
},
{
"code": null,
"e": 6482,
"s": 6474,
"text": "CHANGED"
},
{
"code": null,
"e": 6594,
"s": 6482,
"text": "Similar to the Field and the Keyword Reference, these operators can also be used to enhance the search results."
},
{
"code": null,
"e": 6626,
"s": 6594,
"text": "\n 6 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 6643,
"s": 6626,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 6675,
"s": 6643,
"text": "\n 6 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 6687,
"s": 6675,
"text": " Manu Mitra"
},
{
"code": null,
"e": 6722,
"s": 6687,
"text": "\n 41 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 6736,
"s": 6722,
"text": " Simon Sez IT"
},
{
"code": null,
"e": 6743,
"s": 6736,
"text": " Print"
},
{
"code": null,
"e": 6754,
"s": 6743,
"text": " Add Notes"
}
]
|
Difference between Hard real time and Soft real time system - GeeksforGeeks | 10 May, 2020
A real time system is defined as a system in which every job has a deadline and has to be finished strictly by that deadline. If a result is delayed, a huge loss may happen.
1. Hard Real Time System : A hard real time system is a system whose operation is considered incorrect if a result is not produced according to the time constraint. For example,
1. Air Traffic Control
2. Medical System
2. Soft Real Time System : A soft real time system is a system whose operation degrades if results are not produced according to the specified timing requirement. For example,
1. Multimedia Transmission and Reception
2. Computer Games
Difference between Hard real time and Soft real time system :
| [
{
"code": null,
"e": 24832,
"s": 24804,
"text": "\n10 May, 2020"
},
{
"code": null,
"e": 25002,
"s": 24832,
"text": "Real time system is defined as a system in which job has deadline, job has to finished by the deadline (strictly finished). If a result is delayed, huge loss may happen."
},
{
"code": null,
"e": 25154,
"s": 25002,
"text": "1. Hard Real Time System :Hard real time is a system whose operation is incorrect whose result is not produce according to time constraint.For example,"
},
{
"code": null,
"e": 25196,
"s": 25154,
"text": "1. Air Traffic Control\n2. Medical System "
},
{
"code": null,
"e": 25369,
"s": 25196,
"text": "2. Soft Real Time System :Soft real time system is a system whose operation is degrade if results are not produce according to the specified timing requirement.For example<"
},
{
"code": null,
"e": 25429,
"s": 25369,
"text": "1. Multimedia Transmission and Reception\n2. Computer Games "
},
{
"code": null,
"e": 25491,
"s": 25429,
"text": "Difference between Hard real time and Soft real time system :"
},
{
"code": null,
"e": 25510,
"s": 25491,
"text": "Difference Between"
},
{
"code": null,
"e": 25518,
"s": 25510,
"text": "GATE CS"
},
{
"code": null,
"e": 25536,
"s": 25518,
"text": "Operating Systems"
},
{
"code": null,
"e": 25554,
"s": 25536,
"text": "Operating Systems"
},
{
"code": null,
"e": 25652,
"s": 25554,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25713,
"s": 25652,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 25751,
"s": 25713,
"text": "Difference between Process and Thread"
},
{
"code": null,
"e": 25783,
"s": 25751,
"text": "Stack vs Heap Memory Allocation"
},
{
"code": null,
"e": 25851,
"s": 25783,
"text": "Difference Between Method Overloading and Method Overriding in Java"
},
{
"code": null,
"e": 25888,
"s": 25851,
"text": "Differences between JDK, JRE and JVM"
},
{
"code": null,
"e": 25908,
"s": 25888,
"text": "Layers of OSI Model"
},
{
"code": null,
"e": 25932,
"s": 25908,
"text": "ACID Properties in DBMS"
},
{
"code": null,
"e": 25945,
"s": 25932,
"text": "TCP/IP Model"
},
{
"code": null,
"e": 25994,
"s": 25945,
"text": "Page Replacement Algorithms in Operating Systems"
}
]
|
Largest triangle that can be inscribed in a semicircle - GeeksforGeeks | 16 Mar, 2021
Given a semicircle with radius r, we have to find the largest triangle that can be inscribed in the semicircle, with its base lying on the diameter. Examples:
Input: r = 5
Output: 25
Input: r = 8
Output: 64
Approach: From the figure, we can clearly see that the biggest triangle that can be inscribed in the semicircle has height r: the apex must lie on the arc, and the height is maximised when the apex is at the topmost point of the semicircle. Also, we know the base has length 2r, the full diameter. So the triangle is an isosceles triangle.
So, Area A = (base * height)/2 = (2r * r)/2 = r^2
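For example, with r = 5 this gives an area of 25, and with r = 8 it gives 64, matching the examples above.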
Below is the implementation of the above approach:
C++
Java
Python 3
C#
PHP
Javascript
// C++ Program to find the biggest triangle
// which can be inscribed within the semicircle
#include <bits/stdc++.h>
using namespace std;

// Function to find the area
// of the triangle
float trianglearea(float r)
{
    // the radius cannot be negative
    if (r < 0)
        return -1;

    // area of the triangle
    return r * r;
}

// Driver code
int main()
{
    float r = 5;
    cout << trianglearea(r) << endl;
    return 0;
}
// Java Program to find the biggest triangle
// which can be inscribed within the semicircle
import java.io.*;

class GFG {

    // Function to find the area
    // of the triangle
    static float trianglearea(float r)
    {
        // the radius cannot be negative
        if (r < 0)
            return -1;

        // area of the triangle
        return r * r;
    }

    // Driver code
    public static void main(String[] args)
    {
        float r = 5;
        System.out.println(trianglearea(r));
    }
}
// This code is contributed
// by chandan_jnu.
# Python 3 Program to find the biggest triangle
# which can be inscribed within the semicircle

# Function to find the area
# of the triangle
def trianglearea(r):

    # the radius cannot be negative
    if r < 0:
        return -1

    # area of the triangle
    return r * r

# Driver Code
if __name__ == "__main__":
    r = 5
    print(trianglearea(r))

# This code is contributed by ANKITRAI1
// C# Program to find the biggest
// triangle which can be inscribed
// within the semicircle
using System;

class GFG
{
    // Function to find the area
    // of the triangle
    static float trianglearea(float r)
    {
        // the radius cannot be negative
        if (r < 0)
            return -1;

        // area of the triangle
        return r * r;
    }

    // Driver code
    public static void Main()
    {
        float r = 5;
        Console.Write(trianglearea(r));
    }
}

// This code is contributed
// by ChitraNayal
<?php
// PHP Program to find the biggest
// triangle which can be inscribed
// within the semicircle

// Function to find the area
// of the triangle
function trianglearea($r)
{
    // the radius cannot be negative
    if ($r < 0)
        return -1;

    // area of the triangle
    return $r * $r;
}

// Driver code
$r = 5;
echo trianglearea($r);

// This code is contributed
// by inder_verma
?>
<script>
// javascript Program to find the biggest triangle
// which can be inscribed within the semicircle

// Function to find the area
// of the triangle
function trianglearea(r)
{
    // the radius cannot be negative
    if (r < 0)
        return -1;

    // area of the triangle
    return r * r;
}

// Driver code
var r = 5;
document.write(trianglearea(r));

// This code contributed by Princi Singh
</script>
25
| [
Coin Change | DP-7 | [
{
"code": null,
"e": 24963,
"s": 24935,
"text": "\n16 Mar, 2021"
},
{
"code": null,
"e": 25119,
"s": 24963,
"text": "Given a semicircle with radius r, we have to find the largest triangle that can be inscribed in the semicircle, with base lying on the diameter.Examples: "
},
{
"code": null,
"e": 25168,
"s": 25119,
"text": "Input: r = 5\nOutput: 25\n\nInput: r = 8\nOutput: 64"
},
{
"code": null,
"e": 25381,
"s": 25172,
"text": "Approach: From the figure, we can clearly understand the biggest triangle that can be inscribed in the semicircle has height r. Also, we know the base has length 2r. So the triangle is an isosceles triangle. "
},
{
"code": null,
"e": 25432,
"s": 25381,
"text": "So, Area A: = (base * height)/2 = (2r * r)/2 = r^2"
},
{
"code": null,
"e": 25481,
"s": 25432,
"text": "Below is the implementation of above approach: "
},
{
"code": null,
"e": 25485,
"s": 25481,
"text": "C++"
},
{
"code": null,
"e": 25490,
"s": 25485,
"text": "Java"
},
{
"code": null,
"e": 25499,
"s": 25490,
"text": "Python 3"
},
{
"code": null,
"e": 25502,
"s": 25499,
"text": "C#"
},
{
"code": null,
"e": 25506,
"s": 25502,
"text": "PHP"
},
{
"code": null,
"e": 25517,
"s": 25506,
"text": "Javascript"
},
{
"code": "// C++ Program to find the biggest triangle// which can be inscribed within the semicircle#include <bits/stdc++.h>using namespace std; // Function to find the area// of the trianglefloat trianglearea(float r){ // the radius cannot be negative if (r < 0) return -1; // area of the triangle return r * r;} // Driver codeint main(){ float r = 5; cout << trianglearea(r) << endl; return 0;}",
"e": 25935,
"s": 25517,
"text": null
},
{
"code": "// Java Program to find the biggest triangle// which can be inscribed within the semicircleimport java.io.*; class GFG { // Function to find the area// of the trianglestatic float trianglearea(float r){ // the radius cannot be negative if (r < 0) return -1; // area of the triangle return r * r;} // Driver code public static void main (String[] args) { float r = 5; System.out.println( trianglearea(r)); }}// This code is contributed // by chandan_jnu.",
"e": 26433,
"s": 25935,
"text": null
},
{
"code": "# Python 3 Program to find the biggest triangle# which can be inscribed within the semicircle # Function to find the area# of the triangledef trianglearea(r) : # the radius cannot be negative if r < 0 : return -1 # area of the triangle return r * r # Driver Codeif __name__ == \"__main__\" : r = 5 print(trianglearea(r)) # This code is contributed by ANKITRAI1",
"e": 26823,
"s": 26433,
"text": null
},
{
"code": "// C# Program to find the biggest// triangle which can be inscribed// within the semicircleusing System; class GFG{ // Function to find the area// of the trianglestatic float trianglearea(float r){ // the radius cannot be negative if (r < 0) return -1; // area of the triangle return r * r;} // Driver codepublic static void Main (){ float r = 5; Console.Write(trianglearea(r));}} // This code is contributed// by ChitraNayal",
"e": 27280,
"s": 26823,
"text": null
},
{
"code": "<?php// PHP Program to find the biggest// triangle which can be inscribed// within the semicircle // Function to find the area// of the trianglefunction trianglearea($r){ // the radius cannot be negative if ($r < 0) return -1; // area of the triangle return $r * $r;} // Driver code$r = 5;echo trianglearea($r); // This code is contributed// by inder_verma?>",
"e": 27660,
"s": 27280,
"text": null
},
{
"code": "<script> // javascript Program to find the biggest triangle// which can be inscribed within the semicircle // Function to find the area// of the trianglefunction trianglearea(r){ // the radius cannot be negative if (r < 0) return -1; // area of the triangle return r * r;} // Driver code var r = 5;document.write( trianglearea(r)); // This code contributed by Princi Singh </script>",
"e": 28066,
"s": 27660,
"text": null
},
{
"code": null,
"e": 28069,
"s": 28066,
"text": "25"
},
{
"code": null,
"e": 28079,
"s": 28071,
"text": "ankthon"
},
{
"code": null,
"e": 28093,
"s": 28079,
"text": "Chandan_Kumar"
},
{
"code": null,
"e": 28099,
"s": 28093,
"text": "ukasp"
},
{
"code": null,
"e": 28110,
"s": 28099,
"text": "inderDuMCA"
},
{
"code": null,
"e": 28123,
"s": 28110,
"text": "princi singh"
},
{
"code": null,
"e": 28130,
"s": 28123,
"text": "circle"
},
{
"code": null,
"e": 28139,
"s": 28130,
"text": "triangle"
},
{
"code": null,
"e": 28149,
"s": 28139,
"text": "Geometric"
},
{
"code": null,
"e": 28162,
"s": 28149,
"text": "Mathematical"
},
{
"code": null,
"e": 28175,
"s": 28162,
"text": "Mathematical"
},
{
"code": null,
"e": 28185,
"s": 28175,
"text": "Geometric"
},
{
"code": null,
"e": 28283,
"s": 28185,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28292,
"s": 28283,
"text": "Comments"
},
{
"code": null,
"e": 28305,
"s": 28292,
"text": "Old Comments"
},
{
"code": null,
"e": 28331,
"s": 28305,
"text": "Circle and Lattice Points"
},
{
"code": null,
"e": 28378,
"s": 28331,
"text": "Queries on count of points lie inside a circle"
},
{
"code": null,
"e": 28425,
"s": 28378,
"text": "Convex Hull using Divide and Conquer Algorithm"
},
{
"code": null,
"e": 28486,
"s": 28425,
"text": "Equation of circle when three points on the circle are given"
},
{
"code": null,
"e": 28560,
"s": 28486,
"text": "Maximum number of region in which N non-parallel lines can divide a plane"
},
{
"code": null,
"e": 28590,
"s": 28560,
"text": "Program for Fibonacci numbers"
},
{
"code": null,
"e": 28650,
"s": 28590,
"text": "Write a program to print all permutations of a given string"
},
{
"code": null,
"e": 28665,
"s": 28650,
"text": "C++ Data Types"
},
{
"code": null,
"e": 28708,
"s": 28665,
"text": "Set in C++ Standard Template Library (STL)"
}
]
|
PDFBox - Adding Rectangles | This chapter teaches you how to create color boxes in a page of a PDF document.
You can add rectangular boxes in a PDF page using the addRect() method of the PDPageContentStream class.
Following are the steps to create rectangular shapes in a page of a PDF document.
Load an existing PDF document using the static method load() of the PDDocument class. This method accepts a file object as a parameter. Since this is a static method, you can invoke it using the class name as shown below.
File file = new File("path of the document")
PDDocument document = PDDocument.load(file);
You need to retrieve the PDPage object of the required page where you want to add rectangles using the getPage() method of the PDDocument class. To this method you need to pass the index of the page where you want to add rectangles.
PDPage page = document.getPage(0);
You can insert various kinds of data elements using the object of the class named PDPageContentStream. You need to pass the document object and the page object to the constructor of this class; therefore, instantiate this class by passing these two objects created in the previous steps as shown below.
PDPageContentStream contentStream = new PDPageContentStream(document, page);
You can set the non-stroking color to the rectangle using the setNonStrokingColor() method of the class PDPageContentStream. To this method, you need to pass the required color as a parameter as shown below.
contentStream.setNonStrokingColor(Color.DARK_GRAY);
Draw the rectangle with the required dimensions using the addRect() method. To this method, you need to pass the x and y coordinates of the rectangle's lower-left corner (measured in points from the bottom-left of the page), followed by its width and height, as shown below.
contentStream.addRect(200, 650, 100, 100);
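For instance, a second, wider box lower on the page (the coordinates and size here are only illustrative) could be queued with −
contentStream.addRect(50, 100, 300, 40);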
The fill() method of the PDPageContentStream class fills the path between the specified dimensions with the required color as shown below.
contentStream.fill();
Finally close the document using close() method of the PDDocument class as shown below.
document.close();
Suppose we have a PDF document named BlankPage.pdf in the path C:\PdfBox_Examples\ and this contains a single blank page as shown below.
This example demonstrates how to create/insert rectangles in a PDF document. Here, we will create a box in a Blank PDF. Save this code as AddRectangles.java.
import java.awt.Color;
import java.io.File;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDPageContentStream;
public class AddRectangles {
public static void main(String args[]) throws Exception {
//Loading an existing document
File file = new File("C:/PdfBox_Examples/BlankPage.pdf");
PDDocument document = PDDocument.load(file);
//Retrieving a page of the PDF Document
PDPage page = document.getPage(0);
//Instantiating the PDPageContentStream class
PDPageContentStream contentStream = new PDPageContentStream(document, page);
//Setting the non stroking color
contentStream.setNonStrokingColor(Color.DARK_GRAY);
//Drawing a rectangle
contentStream.addRect(200, 650, 100, 100);
//Filling the rectangle
contentStream.fill();
System.out.println("rectangle added");
//Closing the ContentStream object
contentStream.close();
//Saving the document
File file1 = new File("C:/PdfBox_Examples/colorbox.pdf");
document.save(file1);
//Closing the document
document.close();
}
}
Compile and execute the saved Java file from the command prompt using the following commands.
javac AddRectangles.java
java AddRectangles
Upon execution, the above program creates a rectangle in the PDF document and prints the following message −
rectangle added
If you verify the given path and open the saved document — colorbox.pdf, you can observe that a box is inserted in it as shown below.
| [
{
"code": null,
"e": 2107,
"s": 2027,
"text": "This chapter teaches you how to create color boxes in a page of a PDF document."
},
{
"code": null,
"e": 2213,
"s": 2107,
"text": "You can add rectangular boxes in a PDF page using the addRect() method of the PDPageContentStream class."
},
{
"code": null,
"e": 2295,
"s": 2213,
"text": "Following are the steps to create rectangular shapes in a page of a PDF document."
},
{
"code": null,
"e": 2512,
"s": 2295,
"text": "Load an existing PDF document using the static method load() of the PDDocument class. This method accepts a file object as a parameter, since this is a static method you can invoke it using class name as shown below."
},
{
"code": null,
"e": 2604,
"s": 2512,
"text": "File file = new File(\"path of the document\") \nPDDocument document = PDDocument.load(file);\n"
},
{
"code": null,
"e": 2837,
"s": 2604,
"text": "You need to retrieve the PDPage object of the required page where you want to add rectangles using the getPage() method of the PDDocument class. To this method you need to pass the index of the page where you want to add rectangles."
},
{
"code": null,
"e": 2873,
"s": 2837,
"text": "PDPage page = document.getPage(0);\n"
},
{
"code": null,
"e": 3175,
"s": 2873,
"text": "You can insert various kinds of data elements using the object of the class named PDPageContentStream. You need to pass the document object and the page object to the constructor of this class therefore, instantiate this class by passing these two objects created in the previous steps as shown below."
},
{
"code": null,
"e": 3253,
"s": 3175,
"text": "PDPageContentStream contentStream = new PDPageContentStream(document, page);\n"
},
{
"code": null,
"e": 3461,
"s": 3253,
"text": "You can set the non-stroking color to the rectangle using the setNonStrokingColor() method of the class PDPageContentStream. To this method, you need to pass the required color as a parameter as shown below."
},
{
"code": null,
"e": 3514,
"s": 3461,
"text": "contentStream.setNonStrokingColor(Color.DARK_GRAY);\n"
},
{
"code": null,
"e": 3687,
"s": 3514,
"text": "Draw the rectangle with required dimensions using the addRect() method. To this method, you need to pass the dimensions of the rectangle that is to be added as shown below."
},
{
"code": null,
"e": 3731,
"s": 3687,
"text": "contentStream.addRect(200, 650, 100, 100);\n"
},
{
"code": null,
"e": 3870,
"s": 3731,
"text": "The fill() method of the PDPageContentStream class fills the path between the specified dimensions with the required color as shown below."
},
{
"code": null,
"e": 3893,
"s": 3870,
"text": "contentStream.fill();\n"
},
{
"code": null,
"e": 3981,
"s": 3893,
"text": "Finally close the document using close() method of the PDDocument class as shown below."
},
{
"code": null,
"e": 4000,
"s": 3981,
"text": "document.close();\n"
},
{
"code": null,
"e": 4137,
"s": 4000,
"text": "Suppose we have a PDF document named blankpage.pdf in the path C:\\PdfBox_Examples\\ and this contains a single blank page as shown below."
},
{
"code": null,
"e": 4295,
"s": 4137,
"text": "This example demonstrates how to create/insert rectangles in a PDF document. Here, we will create a box in a Blank PDF. Save this code as AddRectangles.java."
},
{
"code": null,
"e": 5499,
"s": 4295,
"text": "import java.awt.Color;\nimport java.io.File;\n \nimport org.apache.pdfbox.pdmodel.PDDocument;\nimport org.apache.pdfbox.pdmodel.PDPage;\nimport org.apache.pdfbox.pdmodel.PDPageContentStream;\npublic class ShowColorBoxes {\n\n public static void main(String args[]) throws Exception {\n\n //Loading an existing document\n File file = new File(\"C:/PdfBox_Examples/BlankPage.pdf\");\n PDDocument document = PDDocument.load(file);\n \n //Retrieving a page of the PDF Document\n PDPage page = document.getPage(0);\n\n //Instantiating the PDPageContentStream class\n PDPageContentStream contentStream = new PDPageContentStream(document, page);\n \n //Setting the non stroking color\n contentStream.setNonStrokingColor(Color.DARK_GRAY);\n\n //Drawing a rectangle \n contentStream.addRect(200, 650, 100, 100);\n\n //Drawing a rectangle\n contentStream.fill();\n\n System.out.println(\"rectangle added\");\n\n //Closing the ContentStream object\n contentStream.close();\n\n //Saving the document\n File file1 = new File(\"C:/PdfBox_Examples/colorbox.pdf\");\n document.save(file1);\n\n //Closing the document\n document.close();\n }\n}"
},
{
"code": null,
"e": 5593,
"s": 5499,
"text": "Compile and execute the saved Java file from the command prompt using the following commands."
},
{
"code": null,
"e": 5639,
"s": 5593,
"text": "javac AddRectangles.java \njava AddRectangles\n"
},
{
"code": null,
"e": 5743,
"s": 5639,
"text": "Upon execution, the above program creates a rectangle in a PDF document displaying the following image."
},
{
"code": null,
"e": 5762,
"s": 5743,
"text": "Rectangle created\n"
},
{
"code": null,
"e": 5896,
"s": 5762,
"text": "If you verify the given path and open the saved document — colorbox.pdf, you can observe that a box is inserted in it as shown below."
},
{
"code": null,
"e": 5903,
"s": 5896,
"text": " Print"
},
{
"code": null,
"e": 5914,
"s": 5903,
"text": " Add Notes"
}
]
|
Render ASP.NET TextBox as HTML5 Input type “Number” | To render ASP.NET TextBox as HTML5 input type “Number”, set type="number" directly on the textbox.
Let us see an example of ASP.NET TextBox −
<asp:TextBox runat="server" type="number" />
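When the page runs, this should render to an HTML5 number input roughly like the following (the generated name and id are illustrative) −
<input name="TextBox1" type="number" id="TextBox1" />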
You can also set the attribute on a dynamically created control as follows −
TextBox tb = new TextBox();
tb.Attributes.Add("Type", "number"); | [
{
"code": null,
"e": 1161,
"s": 1062,
"text": "To render ASP.NET TextBox as HTML5 input type “Number”, set type=\"number\" directly on the textbox."
},
{
"code": null,
"e": 1204,
"s": 1161,
"text": "Let us see an example of ASP.NET TextBox −"
},
{
"code": null,
"e": 1249,
"s": 1204,
"text": "<asp:TextBox runat=\"server\" type=\"number\" />"
},
{
"code": null,
"e": 1314,
"s": 1249,
"text": "You can also use the following dynamically created the control −"
},
{
"code": null,
"e": 1379,
"s": 1314,
"text": "TextBox tb = new TextBox();\ntb.Attributes.Add(\"Type\", \"number\");"
}
]
|
Python - Files I/O | This chapter covers all the basic I/O functions available in Python. For more functions, please refer to standard Python documentation.
The simplest way to produce output is using the print statement where you can pass zero or more expressions separated by commas. This function converts the expressions you pass into a string and writes the result to standard output as follows −
#!/usr/bin/python
print "Python is really a great language,", "isn't it?"
This produces the following result on your standard screen −
Python is really a great language, isn't it?
Python provides two built-in functions to read a line of text from standard input, which by default comes from the keyboard. These functions are −
raw_input
input
The raw_input([prompt]) function reads one line from standard input and returns it as a string (removing the trailing newline).
#!/usr/bin/python
str = raw_input("Enter your input: ")
print "Received input is : ", str
This prompts you to enter any string and it would display the same string on the screen. When I typed "Hello Python", its output is like this −
Enter your input: Hello Python
Received input is : Hello Python
The input([prompt]) function is equivalent to raw_input, except that it assumes the input is a valid Python expression and returns the evaluated result to you.
#!/usr/bin/python
str = input("Enter your input: ")
print "Received input is : ", str
This would produce the following result against the entered input −
Enter your input: [x*5 for x in range(2,10,2)]
Received input is : [10, 20, 30, 40]
Until now, you have been reading and writing to the standard input and output. Now, we will see how to use actual data files.
Python provides basic functions and methods necessary to manipulate files by default. You can do most of the file manipulation using a file object.
Before you can read or write a file, you have to open it using Python's built-in open() function. This function creates a file object, which would be utilized to call other support methods associated with it.
file object = open(file_name [, access_mode][, buffering])
Here are parameter details −
file_name − The file_name argument is a string value that contains the name of the file that you want to access.
access_mode − The access_mode determines the mode in which the file has to be opened, i.e., read, write, append, etc. A complete list of possible values is given below in the table. This is an optional parameter and the default file access mode is read (r).
buffering − If the buffering value is set to 0, no buffering takes place. If the buffering value is 1, line buffering is performed while accessing a file. If you specify the buffering value as an integer greater than 1, then buffering action is performed with the indicated buffer size. If negative, the buffer size is the system default (default behavior).
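As a small sketch of the buffering parameter, the following opens an illustrative log file for appending (the "a" mode is described in the table below) with line buffering −
#!/usr/bin/python

# Open an illustrative log file with line buffering (1)
fo = open("app.log", "a", 1)
fo.write("application started\n")
fo.close()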
Here is a list of the different modes of opening a file −
r
Opens a file for reading only. The file pointer is placed at the beginning of the file. This is the default mode.
rb
Opens a file for reading only in binary format. The file pointer is placed at the beginning of the file.
r+
Opens a file for both reading and writing. The file pointer is placed at the beginning of the file.
rb+
Opens a file for both reading and writing in binary format. The file pointer is placed at the beginning of the file.
w
Opens a file for writing only. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.
wb
Opens a file for writing only in binary format. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing.
w+
Opens a file for both writing and reading. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing.
wb+
Opens a file for both writing and reading in binary format. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing.
a
Opens a file for appending. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing.
ab
Opens a file for appending in binary format. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing.
a+
Opens a file for both appending and reading. The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing.
ab+
Opens a file for both appending and reading in binary format. The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing.
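As a quick sketch of the difference between the write and append modes (notes.txt is an illustrative file name) −
#!/usr/bin/python

# "w" truncates any existing content
fo = open("notes.txt", "w")
fo.write("first line\n")
fo.close()

# "a" keeps the existing content and writes at the end
fo = open("notes.txt", "a")
fo.write("second line\n")
fo.close()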
Once a file is opened and you have one file object, you can get various information related to that file.
Here is a list of all attributes related to file object −
file.closed
Returns true if file is closed, false otherwise.
file.mode
Returns access mode with which file was opened.
file.name
Returns name of the file.
file.softspace
Returns false if space explicitly required with print, true otherwise.
#!/usr/bin/python
# Open a file
fo = open("foo.txt", "wb")
print "Name of the file: ", fo.name
print "Closed or not : ", fo.closed
print "Opening mode : ", fo.mode
print "Softspace flag : ", fo.softspace
This produces the following result −
Name of the file: foo.txt
Closed or not : False
Opening mode : wb
Softspace flag : 0
The close() method of a file object flushes any unwritten information and closes the file object, after which no more writing can be done.
Python automatically closes a file when the reference object of a file is reassigned to another file. It is a good practice to use the close() method to close a file.
fileObject.close()
#!/usr/bin/python
# Open a file
fo = open("foo.txt", "wb")
print "Name of the file: ", fo.name
# Close opened file
fo.close()
This produces the following result −
Name of the file: foo.txt
The file object provides a set of access methods to make our lives easier. We would see how to use read() and write() methods to read and write files.
The write() method writes any string to an open file. It is important to note that Python strings can have binary data and not just text.
The write() method does not add a newline character ('\n') to the end of the string −
fileObject.write(string)
Here, passed parameter is the content to be written into the opened file.
#!/usr/bin/python
# Open a file
fo = open("foo.txt", "wb")
fo.write( "Python is a great language.\nYeah its great!!\n")
# Close opened file
fo.close()
The above method would create foo.txt file and would write given content in that file and finally it would close that file. If you would open this file, it would have following content.
Python is a great language.
Yeah its great!!
The read() method reads a string from an open file. It is important to note that Python strings can have binary data, apart from text data.
fileObject.read([count])
Here, passed parameter is the number of bytes to be read from the opened file. This method starts reading from the beginning of the file and if count is missing, then it tries to read as much as possible, maybe until the end of file.
Let's take a file foo.txt, which we created above.
#!/usr/bin/python
# Open a file
fo = open("foo.txt", "r+")
str = fo.read(10)
print "Read String is : ", str
# Close opened file
fo.close()
This produces the following result −
Read String is : Python is
The tell() method tells you the current position within the file; in other words, the next read or write will occur at that many bytes from the beginning of the file.
The seek(offset[, from]) method changes the current file position. The offset argument indicates the number of bytes to be moved. The from argument specifies the reference position from where the bytes are to be moved.
If from is set to 0, it means use the beginning of the file as the reference position and 1 means use the current position as the reference position and if it is set to 2 then the end of the file would be taken as the reference position.
Let us take a file foo.txt, which we created above.
#!/usr/bin/python
# Open a file
fo = open("foo.txt", "r+")
str = fo.read(10)
print "Read String is : ", str
# Check current position
position = fo.tell()
print "Current file position : ", position
# Reposition pointer at the beginning once again
fo.seek(0, 0)
str = fo.read(10)
print "Again read String is : ", str
# Close opened file
fo.close()
This produces the following result −
Read String is : Python is
Current file position : 10
Again read String is : Python is
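As a further sketch, assuming foo.txt still holds the two lines written above, setting from to 2 positions the pointer relative to the end of the file −
#!/usr/bin/python

# Open the same file in binary mode and read its last 17 bytes
fo = open("foo.txt", "rb")
fo.seek(-17, 2)
print "Tail of the file is : ", fo.read()

# Close opened file
fo.close()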
Python os module provides methods that help you perform file-processing operations, such as renaming and deleting files.
To use this module you need to import it first and then you can call any related functions.
The rename() method takes two arguments, the current filename and the new filename.
os.rename(current_file_name, new_file_name)
Following is the example to rename an existing file test1.txt −
#!/usr/bin/python
import os
# Rename a file from test1.txt to test2.txt
os.rename( "test1.txt", "test2.txt" )
You can use the remove() method to delete files by supplying the name of the file to be deleted as the argument.
os.remove(file_name)
Following is the example to delete an existing file test2.txt −
#!/usr/bin/python
import os
# Delete file test2.txt
os.remove("text2.txt")
All files are contained within various directories, and Python has no problem handling these too. The os module has several methods that help you create, remove, and change directories.
You can use the mkdir() method of the os module to create directories in the current directory. You need to supply an argument to this method which contains the name of the directory to be created.
os.mkdir("newdir")
Following is the example to create a directory test in the current directory −
#!/usr/bin/python
import os
# Create a directory "test"
os.mkdir("test")
You can use the chdir() method to change the current directory. The chdir() method takes an argument, which is the name of the directory that you want to make the current directory.
os.chdir("newdir")
Following is the example to go into "/home/newdir" directory −
#!/usr/bin/python
import os
# Changing a directory to "/home/newdir"
os.chdir("/home/newdir")
The getcwd() method returns the current working directory.
os.getcwd()
Following is the example to give current directory −
#!/usr/bin/python
import os
# This would give location of the current directory
os.getcwd()
The rmdir() method deletes the directory, which is passed as an argument in the method.
Before removing a directory, all the contents in it should be removed.
os.rmdir('dirname')
Following is the example to remove the "/tmp/test" directory. It is required to give the fully qualified name of the directory, otherwise it would search for that directory in the current directory.
#!/usr/bin/python
import os
# This would remove "/tmp/test" directory.
os.rmdir( "/tmp/test" )
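Since rmdir() fails on a non-empty directory, one possible clean-up sketch (assuming "/tmp/test" contains only plain files) is −
#!/usr/bin/python
import os

# Remove every file inside "/tmp/test", then the directory itself
for name in os.listdir("/tmp/test"):
    os.remove(os.path.join("/tmp/test", name))
os.rmdir("/tmp/test")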
There are two important sources, which provide a wide range of utility methods to handle and manipulate files & directories on Windows and Unix operating systems. They are as follows −
File Object Methods: The file object provides functions to manipulate files.
OS Object Methods: This provides methods to process files as well as directories.
| [
{
"code": null,
"e": 2380,
"s": 2244,
"text": "This chapter covers all the basic I/O functions available in Python. For more functions, please refer to standard Python documentation."
},
{
"code": null,
"e": 2625,
"s": 2380,
"text": "The simplest way to produce output is using the print statement where you can pass zero or more expressions separated by commas. This function converts the expressions you pass into a string and writes the result to standard output as follows −"
},
{
"code": null,
"e": 2700,
"s": 2625,
"text": "#!/usr/bin/python\n\nprint \"Python is really a great language,\", \"isn't it?\""
},
{
"code": null,
"e": 2761,
"s": 2700,
"text": "This produces the following result on your standard screen −"
},
{
"code": null,
"e": 2807,
"s": 2761,
"text": "Python is really a great language, isn't it?\n"
},
{
"code": null,
"e": 2954,
"s": 2807,
"text": "Python provides two built-in functions to read a line of text from standard input, which by default comes from the keyboard. These functions are −"
},
{
"code": null,
"e": 2964,
"s": 2954,
"text": "raw_input"
},
{
"code": null,
"e": 2970,
"s": 2964,
"text": "input"
},
{
"code": null,
"e": 3098,
"s": 2970,
"text": "The raw_input([prompt]) function reads one line from standard input and returns it as a string (removing the trailing newline)."
},
{
"code": null,
"e": 3189,
"s": 3098,
"text": "#!/usr/bin/python\n\nstr = raw_input(\"Enter your input: \")\nprint \"Received input is : \", str"
},
{
"code": null,
"e": 3330,
"s": 3189,
"text": "This prompts you to enter any string and it would display same string on the screen. When I typed \"Hello Python!\", its output is like this −"
},
{
"code": null,
"e": 3396,
"s": 3330,
"text": "Enter your input: Hello Python\nReceived input is : Hello Python\n"
},
{
"code": null,
"e": 3556,
"s": 3396,
"text": "The input([prompt]) function is equivalent to raw_input, except that it assumes the input is a valid Python expression and returns the evaluated result to you."
},
{
"code": null,
"e": 3643,
"s": 3556,
"text": "#!/usr/bin/python\n\nstr = input(\"Enter your input: \")\nprint \"Received input is : \", str"
},
{
"code": null,
"e": 3711,
"s": 3643,
"text": "This would produce the following result against the entered input −"
},
{
"code": null,
"e": 3797,
"s": 3711,
"text": "Enter your input: [x*5 for x in range(2,10,2)]\nRecieved input is : [10, 20, 30, 40]\n"
},
{
"code": null,
"e": 3923,
"s": 3797,
"text": "Until now, you have been reading and writing to the standard input and output. Now, we will see how to use actual data files."
},
{
"code": null,
"e": 4071,
"s": 3923,
"text": "Python provides basic functions and methods necessary to manipulate files by default. You can do most of the file manipulation using a file object."
},
{
"code": null,
"e": 4280,
"s": 4071,
"text": "Before you can read or write a file, you have to open it using Python's built-in open() function. This function creates a file object, which would be utilized to call other support methods associated with it."
},
{
"code": null,
"e": 4340,
"s": 4280,
"text": "file object = open(file_name [, access_mode][, buffering])\n"
},
{
"code": null,
"e": 4369,
"s": 4340,
"text": "Here are parameter details −"
},
{
"code": null,
"e": 4482,
"s": 4369,
"text": "file_name − The file_name argument is a string value that contains the name of the file that you want to access."
},
{
"code": null,
"e": 4595,
"s": 4482,
"text": "file_name − The file_name argument is a string value that contains the name of the file that you want to access."
},
{
"code": null,
"e": 4850,
"s": 4595,
"text": "access_mode − The access_mode determines the mode in which the file has to be opened, i.e., read, write, append, etc. A complete list of possible values is given below in the table. This is optional parameter and the default file access mode is read (r)."
},
{
"code": null,
"e": 5105,
"s": 4850,
"text": "access_mode − The access_mode determines the mode in which the file has to be opened, i.e., read, write, append, etc. A complete list of possible values is given below in the table. This is optional parameter and the default file access mode is read (r)."
},
{
"code": null,
"e": 5463,
"s": 5105,
"text": "buffering − If the buffering value is set to 0, no buffering takes place. If the buffering value is 1, line buffering is performed while accessing a file. If you specify the buffering value as an integer greater than 1, then buffering action is performed with the indicated buffer size. If negative, the buffer size is the system default(default behavior)."
},
{
"code": null,
"e": 5821,
"s": 5463,
"text": "buffering − If the buffering value is set to 0, no buffering takes place. If the buffering value is 1, line buffering is performed while accessing a file. If you specify the buffering value as an integer greater than 1, then buffering action is performed with the indicated buffer size. If negative, the buffer size is the system default(default behavior)."
},
{
"code": null,
"e": 5879,
"s": 5821,
"text": "Here is a list of the different modes of opening a file −"
},
{
"code": null,
"e": 5881,
"s": 5879,
"text": "r"
},
{
"code": null,
"e": 5995,
"s": 5881,
"text": "Opens a file for reading only. The file pointer is placed at the beginning of the file. This is the default mode."
},
{
"code": null,
"e": 5998,
"s": 5995,
"text": "rb"
},
{
"code": null,
"e": 6129,
"s": 5998,
"text": "Opens a file for reading only in binary format. The file pointer is placed at the beginning of the file. This is the default mode."
},
{
"code": null,
"e": 6132,
"s": 6129,
"text": "r+"
},
{
"code": null,
"e": 6229,
"s": 6132,
"text": "Opens a file for both reading and writing. The file pointer placed at the beginning of the file."
},
{
"code": null,
"e": 6233,
"s": 6229,
"text": "rb+"
},
{
"code": null,
"e": 6347,
"s": 6233,
"text": "Opens a file for both reading and writing in binary format. The file pointer placed at the beginning of the file."
},
{
"code": null,
"e": 6349,
"s": 6347,
"text": "w"
},
{
"code": null,
"e": 6480,
"s": 6349,
"text": "Opens a file for writing only. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing."
},
{
"code": null,
"e": 6483,
"s": 6480,
"text": "wb"
},
{
"code": null,
"e": 6631,
"s": 6483,
"text": "Opens a file for writing only in binary format. Overwrites the file if the file exists. If the file does not exist, creates a new file for writing."
},
{
"code": null,
"e": 6634,
"s": 6631,
"text": "w+"
},
{
"code": null,
"e": 6798,
"s": 6634,
"text": "Opens a file for both writing and reading. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing."
},
{
"code": null,
"e": 6802,
"s": 6798,
"text": "wb+"
},
{
"code": null,
"e": 6983,
"s": 6802,
"text": "Opens a file for both writing and reading in binary format. Overwrites the existing file if the file exists. If the file does not exist, creates a new file for reading and writing."
},
{
"code": null,
"e": 6985,
"s": 6983,
"text": "a"
},
{
"code": null,
"e": 7180,
"s": 6985,
"text": "Opens a file for appending. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing."
},
{
"code": null,
"e": 7183,
"s": 7180,
"text": "ab"
},
{
"code": null,
"e": 7395,
"s": 7183,
"text": "Opens a file for appending in binary format. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing."
},
{
"code": null,
"e": 7398,
"s": 7395,
"text": "a+"
},
{
"code": null,
"e": 7616,
"s": 7398,
"text": "Opens a file for both appending and reading. The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing."
},
{
"code": null,
"e": 7620,
"s": 7616,
"text": "ab+"
},
{
"code": null,
"e": 7855,
"s": 7620,
"text": "Opens a file for both appending and reading in binary format. The file pointer is at the end of the file if the file exists. The file opens in the append mode. If the file does not exist, it creates a new file for reading and writing."
},
{
"code": null,
"e": 7961,
"s": 7855,
"text": "Once a file is opened and you have one file object, you can get various information related to that file."
},
{
"code": null,
"e": 8019,
"s": 7961,
"text": "Here is a list of all attributes related to file object −"
},
{
"code": null,
"e": 8031,
"s": 8019,
"text": "file.closed"
},
{
"code": null,
"e": 8080,
"s": 8031,
"text": "Returns true if file is closed, false otherwise."
},
{
"code": null,
"e": 8090,
"s": 8080,
"text": "file.mode"
},
{
"code": null,
"e": 8138,
"s": 8090,
"text": "Returns access mode with which file was opened."
},
{
"code": null,
"e": 8148,
"s": 8138,
"text": "file.name"
},
{
"code": null,
"e": 8174,
"s": 8148,
"text": "Returns name of the file."
},
{
"code": null,
"e": 8189,
"s": 8174,
"text": "file.softspace"
},
{
"code": null,
"e": 8260,
"s": 8189,
"text": "Returns false if space explicitly required with print, true otherwise."
},
{
"code": null,
"e": 8465,
"s": 8260,
"text": "#!/usr/bin/python\n\n# Open a file\nfo = open(\"foo.txt\", \"wb\")\nprint \"Name of the file: \", fo.name\nprint \"Closed or not : \", fo.closed\nprint \"Opening mode : \", fo.mode\nprint \"Softspace flag : \", fo.softspace"
},
{
"code": null,
"e": 8502,
"s": 8465,
"text": "This produces the following result −"
},
{
"code": null,
"e": 8592,
"s": 8502,
"text": "Name of the file: foo.txt\nClosed or not : False\nOpening mode : wb\nSoftspace flag : 0\n"
},
{
"code": null,
"e": 8731,
"s": 8592,
"text": "The close() method of a file object flushes any unwritten information and closes the file object, after which no more writing can be done."
},
{
"code": null,
"e": 8898,
"s": 8731,
"text": "Python automatically closes a file when the reference object of a file is reassigned to another file. It is a good practice to use the close() method to close a file."
},
{
"code": null,
"e": 8918,
"s": 8898,
"text": "fileObject.close()\n"
},
{
"code": null,
"e": 9045,
"s": 8918,
"text": "#!/usr/bin/python\n\n# Open a file\nfo = open(\"foo.txt\", \"wb\")\nprint \"Name of the file: \", fo.name\n\n# Close opend file\nfo.close()"
},
{
"code": null,
"e": 9082,
"s": 9045,
"text": "This produces the following result −"
},
{
"code": null,
"e": 9110,
"s": 9082,
"text": "Name of the file: foo.txt\n"
},
{
"code": null,
"e": 9261,
"s": 9110,
"text": "The file object provides a set of access methods to make our lives easier. We would see how to use read() and write() methods to read and write files."
},
{
"code": null,
"e": 9399,
"s": 9261,
"text": "The write() method writes any string to an open file. It is important to note that Python strings can have binary data and not just text."
},
{
"code": null,
"e": 9485,
"s": 9399,
"text": "The write() method does not add a newline character ('\\n') to the end of the string −"
},
{
"code": null,
"e": 9511,
"s": 9485,
"text": "fileObject.write(string)\n"
},
{
"code": null,
"e": 9585,
"s": 9511,
"text": "Here, passed parameter is the content to be written into the opened file."
},
{
"code": null,
"e": 9737,
"s": 9585,
"text": "#!/usr/bin/python\n\n# Open a file\nfo = open(\"foo.txt\", \"wb\")\nfo.write( \"Python is a great language.\\nYeah its great!!\\n\")\n\n# Close opend file\nfo.close()"
},
{
"code": null,
"e": 9923,
"s": 9737,
"text": "The above method would create foo.txt file and would write given content in that file and finally it would close that file. If you would open this file, it would have following content."
},
{
"code": null,
"e": 9969,
"s": 9923,
"text": "Python is a great language.\nYeah its great!!\n"
},
{
"code": null,
"e": 10109,
"s": 9969,
"text": "The read() method reads a string from an open file. It is important to note that Python strings can have binary data. apart from text data."
},
{
"code": null,
"e": 10135,
"s": 10109,
"text": "fileObject.read([count])\n"
},
{
"code": null,
"e": 10369,
"s": 10135,
"text": "Here, passed parameter is the number of bytes to be read from the opened file. This method starts reading from the beginning of the file and if count is missing, then it tries to read as much as possible, maybe until the end of file."
},
{
"code": null,
"e": 10421,
"s": 10369,
"text": " Let's take a file foo.txt, which we created above."
},
{
"code": null,
"e": 10561,
"s": 10421,
"text": "#!/usr/bin/python\n\n# Open a file\nfo = open(\"foo.txt\", \"r+\")\nstr = fo.read(10);\nprint \"Read String is : \", str\n# Close opend file\nfo.close()"
},
{
"code": null,
"e": 10598,
"s": 10561,
"text": "This produces the following result −"
},
{
"code": null,
"e": 10627,
"s": 10598,
"text": "Read String is : Python is\n"
},
{
"code": null,
"e": 10794,
"s": 10627,
"text": "The tell() method tells you the current position within the file; in other words, the next read or write will occur at that many bytes from the beginning of the file."
},
{
"code": null,
"e": 11013,
"s": 10794,
"text": "The seek(offset[, from]) method changes the current file position. The offset argument indicates the number of bytes to be moved. The from argument specifies the reference position from where the bytes are to be moved."
},
{
"code": null,
"e": 11251,
"s": 11013,
"text": "If from is set to 0, it means use the beginning of the file as the reference position and 1 means use the current position as the reference position and if it is set to 2 then the end of the file would be taken as the reference position."
},
{
"code": null,
"e": 11304,
"s": 11251,
"text": " Let us take a file foo.txt, which we created above."
},
{
"code": null,
"e": 11664,
"s": 11304,
"text": "#!/usr/bin/python\n\n# Open a file\nfo = open(\"foo.txt\", \"r+\")\nstr = fo.read(10)\nprint \"Read String is : \", str\n\n# Check current position\nposition = fo.tell()\nprint \"Current file position : \", position\n\n# Reposition pointer at the beginning once again\nposition = fo.seek(0, 0);\nstr = fo.read(10)\nprint \"Again read String is : \", str\n# Close opend file\nfo.close()"
},
{
"code": null,
"e": 11701,
"s": 11664,
"text": "This produces the following result −"
},
{
"code": null,
"e": 11792,
"s": 11701,
"text": "Read String is : Python is\nCurrent file position : 10\nAgain read String is : Python is\n"
},
{
"code": null,
"e": 11913,
"s": 11792,
"text": "Python os module provides methods that help you perform file-processing operations, such as renaming and deleting files."
},
{
"code": null,
"e": 12005,
"s": 11913,
"text": "To use this module you need to import it first and then you can call any related functions."
},
{
"code": null,
"e": 12089,
"s": 12005,
"text": "The rename() method takes two arguments, the current filename and the new filename."
},
{
"code": null,
"e": 12134,
"s": 12089,
"text": "os.rename(current_file_name, new_file_name)\n"
},
{
"code": null,
"e": 12198,
"s": 12134,
"text": "Following is the example to rename an existing file test1.txt −"
},
{
"code": null,
"e": 12309,
"s": 12198,
"text": "#!/usr/bin/python\nimport os\n\n# Rename a file from test1.txt to test2.txt\nos.rename( \"test1.txt\", \"test2.txt\" )"
},
{
"code": null,
"e": 12422,
"s": 12309,
"text": "You can use the remove() method to delete files by supplying the name of the file to be deleted as the argument."
},
{
"code": null,
"e": 12444,
"s": 12422,
"text": "os.remove(file_name)\n"
},
{
"code": null,
"e": 12508,
"s": 12444,
"text": "Following is the example to delete an existing file test2.txt −"
},
{
"code": null,
"e": 12584,
"s": 12508,
"text": "#!/usr/bin/python\nimport os\n\n# Delete file test2.txt\nos.remove(\"text2.txt\")"
},
{
"code": null,
"e": 12770,
"s": 12584,
"text": "All files are contained within various directories, and Python has no problem handling these too. The os module has several methods that help you create, remove, and change directories."
},
{
"code": null,
"e": 12968,
"s": 12770,
"text": "You can use the mkdir() method of the os module to create directories in the current directory. You need to supply an argument to this method which contains the name of the directory to be created."
},
{
"code": null,
"e": 12988,
"s": 12968,
"text": "os.mkdir(\"newdir\")\n"
},
{
"code": null,
"e": 13067,
"s": 12988,
"text": "Following is the example to create a directory test in the current directory −"
},
{
"code": null,
"e": 13141,
"s": 13067,
"text": "#!/usr/bin/python\nimport os\n\n# Create a directory \"test\"\nos.mkdir(\"test\")"
},
{
"code": null,
"e": 13323,
"s": 13141,
"text": "You can use the chdir() method to change the current directory. The chdir() method takes an argument, which is the name of the directory that you want to make the current directory."
},
{
"code": null,
"e": 13343,
"s": 13323,
"text": "os.chdir(\"newdir\")\n"
},
{
"code": null,
"e": 13406,
"s": 13343,
"text": "Following is the example to go into \"/home/newdir\" directory −"
},
{
"code": null,
"e": 13501,
"s": 13406,
"text": "#!/usr/bin/python\nimport os\n\n# Changing a directory to \"/home/newdir\"\nos.chdir(\"/home/newdir\")"
},
{
"code": null,
"e": 13561,
"s": 13501,
"text": "The getcwd() method displays the current working directory."
},
{
"code": null,
"e": 13574,
"s": 13561,
"text": "os.getcwd()\n"
},
{
"code": null,
"e": 13627,
"s": 13574,
"text": "Following is the example to give current directory −"
},
{
"code": null,
"e": 13720,
"s": 13627,
"text": "#!/usr/bin/python\nimport os\n\n# This would give location of the current directory\nos.getcwd()"
},
{
"code": null,
"e": 13808,
"s": 13720,
"text": "The rmdir() method deletes the directory, which is passed as an argument in the method."
},
{
"code": null,
"e": 13879,
"s": 13808,
"text": "Before removing a directory, all the contents in it should be removed."
},
{
"code": null,
"e": 13900,
"s": 13879,
"text": "os.rmdir('dirname')\n"
},
{
"code": null,
"e": 14091,
"s": 13900,
"text": "Following is the example to remove \"/tmp/test\" directory. It is required to give fully qualified name of the directory, otherwise it would search for that directory in the current directory."
},
{
"code": null,
"e": 14190,
"s": 14091,
"text": "#!/usr/bin/python\nimport os\n\n# This would remove \"/tmp/test\" directory.\nos.rmdir( \"/tmp/test\" )"
},
{
"code": null,
"e": 14377,
"s": 14190,
"text": "There are three important sources, which provide a wide range of utility methods to handle and manipulate files & directories on Windows and Unix operating systems. They are as follows −"
},
{
"code": null,
"e": 14454,
"s": 14377,
"text": "File Object Methods: The file object provides functions to manipulate files."
},
{
"code": null,
"e": 14614,
"s": 14531,
"text": "OS Object Methods: This provides methods to process files as well as directories. "
}
]
|
GATE | GATE-CS-2017 (Set 1) | Question 64 - GeeksforGeeks | 31 Aug, 2021
The output of executing the following C program is ________.
#include <stdio.h>
int total(int v)
{
static int count = 0;
while (v) {
count += v & 1;
v >>= 1;
}
return count;
}
void main()
{
static int x = 0;
int i = 5;
for (; i> 0; i--) {
x = x + total(i);
}
printf ("%d\n", x);
}
(A) 23 (B) 24 (C) 26 (D) 27
Answer: (A)
Explanation: Binary digits: 5-0101, 4-0100, 3-0011, 2-0010, 1-0001
Count of 1s returned by total() (cumulative, because count is static): 2, 3, 5, 6, 7
Total: 2 + 3 + 5 + 6 + 7 = 23
So x = 23 after the loop.
Therefore, option A is correct
Check out the running code with comments:
#include <stdio.h>

int total(int v) {
    static int count = 0;
    while (v) {
        count += v & 1;
        v >>= 1;
    }
    // This count can be used to see number of 1s returned
    // for every number i
    // printf("%d", count);
    return count;
}

void main() {
    static int x = 0;
    int i = 5;
    for (; i > 0; i--) {
        // total gets added every time with the number
        // of set bits in the given number i
        x = x + total(i);
    }
    printf("%d\n", x);
}
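As a quick cross-check (not part of the original question), the same accumulation can be replayed in a few lines of Python, where the variable count mimics the C static variable:

count = 0   # plays the role of the C static variable
x = 0
for i in range(5, 0, -1):
    count += bin(i).count("1")   # set bits of i, accumulated across calls
    x += count
print(x)   # 23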
Alternate Solution :
This solution is contributed by parul sharma.
 | [
{
"code": null,
"e": 24460,
"s": 24432,
"text": "\n31 Aug, 2021"
},
{
"code": null,
"e": 24521,
"s": 24460,
"text": "The output of executing the following C program is ________."
},
{
"code": null,
"e": 24799,
"s": 24521,
"text": "# include \n\nint total(int v) \n{ \n static int count = 0; \n while (v) { \n count += v & 1; \n v >>= 1; \n } \n return count; \n} \n\nvoid main() \n{ \n static int x = 0; \n int i = 5; \n for (; i> 0; i--) { \n x = x + total(i); \n } \n printf (“%d\\n”, x) ; \n} \n"
},
{
"code": null,
"e": 24921,
"s": 24799,
"text": "(A) 23(B) 24(C) 26(D) 27Answer: (A)Explanation: Digits : 5-0101, 4-0100, 3-0011, 2-0010, 1-0001Countof 1s : 2, 3, 5, 6, 7"
},
{
"code": null,
"e": 24943,
"s": 24921,
"text": "Total: 2+3+5+6+7 = 23"
},
{
"code": null,
"e": 24955,
"s": 24943,
"text": "total(i)=23"
},
{
"code": null,
"e": 24986,
"s": 24955,
"text": "Therefore, option A is correct"
},
{
"code": null,
"e": 25028,
"s": 24986,
"text": "Check out the running code with comments:"
},
{
"code": "#include<stdio.h> int total(int v) { static int count = 0; while(v) { count += v&1; v >>= 1; } //This count can be used to see number of 1s returned //for every number i //printf(\"%d\", count); return count;} void main() { static int x=0; int i=5; for(; i>0; i--) { //total gets added everytime with total number //of digits in the given number i x = x + total(i); } printf(\"%d\\n\", x);}",
"e": 25501,
"s": 25028,
"text": null
},
{
"code": null,
"e": 25522,
"s": 25501,
"text": "Alternate Solution :"
},
{
"code": null,
"e": 25588,
"s": 25522,
"text": "This solution is contributed by parul sharmaQuiz of this Question"
}
]
|
Sum of Two Integers in Python | Suppose we have two integers a and b. Our task is to find the sum of these two integers. One constraint is that we cannot use any operator like + or -. So if a = 5 and b = 7, the result will be 12.
To solve this, we will follow these steps −
For solving we will use the bitwise logical operators
If b = 0, then return a
otherwise, recursively use the sum function, passing a XOR b and (a AND b) left-shifted by one
Let us see the following implementation to get a better understanding −
#include <iostream>
using namespace std;
class Solution {
public:
int getSum(int a, int b) {
return b == 0?a:getSum(a^b, (unsigned int)(a&b)<<1);
}
};
main(){
Solution ob;
cout<<ob.getSum(5,7)<<endl;
}
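Since the title of this problem says Python, here is a minimal Python sketch of the same XOR/AND-carry idea. Note that Python integers are unbounded, so a 32-bit mask is used to emulate the (unsigned int) cast from the C++ version; the function name get_sum is illustrative, not from the original.

def get_sum(a, b):
    # Work within 32-bit two's complement, since Python ints are unbounded
    MASK = 0xFFFFFFFF
    while b != 0:
        carry = (a & b) << 1      # carry bits
        a = (a ^ b) & MASK        # sum without carry
        b = carry & MASK
    # Convert back to a signed 32-bit value if needed
    return a if a <= 0x7FFFFFFF else ~(a ^ MASK)

print(get_sum(5, 7))   # 12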
a = 5
b = 7
12 | [
{
"code": null,
"e": 1261,
"s": 1062,
"text": "Suppose we have two integers a and b. Our task is to find the sum of these two integers. One constraint is that, we cannot use any operator like + or -. So if a = 5 and b = 7, the result will be 12."
},
{
"code": null,
"e": 1305,
"s": 1261,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1359,
"s": 1305,
"text": "For solving we will use the bitwise logical operators"
},
{
"code": null,
"e": 1383,
"s": 1359,
"text": "If b = 0, then return a"
},
{
"code": null,
"e": 1502,
"s": 1383,
"text": "otherwise, recursively use the sum function by providing an XOR b, and a AND b after left shifting the result one time"
},
{
"code": null,
"e": 1574,
"s": 1502,
"text": "Let us see the following implementation to get a better understanding −"
},
{
"code": null,
"e": 1808,
"s": 1585,
"text": "#include <iostream>\nusing namespace std;\nclass Solution {\n public:\n int getSum(int a, int b) {\n return b == 0?a:getSum(a^b, (unsigned int)(a&b)<<1);\n }\n};\nmain(){\n Solution ob;\n cout<<ob.getSum(5,7)<<endl;\n}"
},
{
"code": null,
"e": 1820,
"s": 1808,
"text": "a = 5\nb = 7"
},
{
"code": null,
"e": 1823,
"s": 1820,
"text": "12"
}
]
|
Create a TreeMap in Java and add key-value pairs | A TreeMap cannot contain duplicate keys. TreeMap cannot contain the null key. However, it can have null values.
Let us first see how to create a TreeMap −
TreeMap<Integer,String> m = new TreeMap<Integer,String>();
Add some elements in the form of key-value pairs −
m.put(1,"India");
m.put(2,"US");
m.put(3,"Australia");
m.put(4,"Netherlands");
m.put(5,"Canada");
The following is an example to create a TreeMap and add key-value pairs −
import java.util.*;
public class Demo {
public static void main(String args[]) {
TreeMap<Integer,String> m = new TreeMap<Integer,String>();
m.put(1,"India");
m.put(2,"US");
m.put(3,"Australia");
m.put(4,"Netherlands");
m.put(5,"Canada");
for(Map.Entry e:m.entrySet()) {
System.out.println(e.getKey()+" "+e.getValue());
}
}
}
1 India
2 US
3 Australia
4 Netherlands
5 Canada | [
{
"code": null,
"e": 1174,
"s": 1062,
"text": "A TreeMap cannot contain duplicate keys. TreeMap cannot contain the null key. However, It can have null values."
},
{
"code": null,
"e": 1217,
"s": 1174,
"text": "Let us first see how to create a TreeMap −"
},
{
"code": null,
"e": 1276,
"s": 1217,
"text": "TreeMap<Integer,String> m = new TreeMap<Integer,String>();"
},
{
"code": null,
"e": 1327,
"s": 1276,
"text": "Add some elements in the form of key-value pairs −"
},
{
"code": null,
"e": 1425,
"s": 1327,
"text": "m.put(1,\"India\");\nm.put(2,\"US\");\nm.put(3,\"Australia\");\nm.put(4,\"Netherlands\");\nm.put(5,\"Canada\");"
},
{
"code": null,
"e": 1499,
"s": 1425,
"text": "The following is an example to create a TreeMap and add key-value pairs −"
},
{
"code": null,
"e": 1898,
"s": 1510,
"text": "import java.util.*;\npublic class Demo {\n public static void main(String args[]) {\n TreeMap<Integer,String> m = new TreeMap<Integer,String>();\n m.put(1,\"India\");\n m.put(2,\"US\");\n m.put(3,\"Australia\");\n m.put(4,\"Netherlands\");\n m.put(5,\"Canada\");\n for(Map.Entry e:m.entrySet()) {\n System.out.println(e.getKey()+\" \"+e.getValue());\n }\n }\n}"
},
{
"code": null,
"e": 1946,
"s": 1898,
"text": "1 India\n2 US\n3 Australia\n4 Netherlands\n5 Canada"
}
]
|
Android | Creating a Calendar View app - GeeksforGeeks | 26 Jul, 2021
This article shows how to create an Android application for displaying the calendar using CalendarView. It also provides selection of the current date and displays the selected date. The OnDateChangeListener interface is used, which provides the onSelectedDayChange method.
onSelectedDayChange: In this method, we get the values of days, months, and years that are selected by the user.
Below are the steps for creating the Android Application of the Calendar.
Step 1: Create a new project and you will have a layout XML file and java file. Your screen will look like the image below.
Step 2: Open your xml file and add CalendarView and TextView. And assign id to TextView and CalendarView. After completing this process, the xml file screen looks like given below.
Step 3: Now, open up the activity java file and define the CalendarView and TextView type variable, and also use findViewById() to get the Calendarview and textview.
Step 4: Now, attach an OnDateChangeListener to the CalendarView object via setOnDateChangeListener. In its onSelectedDayChange method, we get the selected date (day, month, year) and set it in the TextView for display.
Step 5: Now run the app and set the current date which will be shown on the top of the screen.Complete code of MainActivity.java or activity_main.xml of Calendar is given below.
activity_main.xml
<?xml version="1.0" encoding="utf-8"?><RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <!-- Add TextView to display the date --> <TextView android:id="@+id/date_view" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginLeft="150dp" android:layout_marginTop="20dp" android:text="Set the Date" android:textColor="@android:color/background_dark" android:textStyle="bold" /> <!-- Add CalendarView to display the Calendar --> <CalendarView android:id="@+id/calendar" android:layout_marginTop="80dp" android:layout_marginLeft="19dp" android:layout_width="wrap_content" android:layout_height="wrap_content"> </CalendarView> </RelativeLayout>
MainActivity.java
package org.geeksforgeeks.navedmalik.calendar;

import android.support.annotation.NonNull;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.widget.CalendarView;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {

    // Define the variables of CalendarView type
    // and TextView type
    CalendarView calendar;
    TextView date_view;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // By ID we can use each component
        // whose id is assigned in the xml file;
        // use findViewById() to get the
        // CalendarView and TextView
        calendar = (CalendarView) findViewById(R.id.calendar);
        date_view = (TextView) findViewById(R.id.date_view);

        // Add Listener in calendar
        calendar.setOnDateChangeListener(
            new CalendarView.OnDateChangeListener() {

                // In this Listener we get the
                // values of DAY, MONTH and YEAR
                @Override
                public void onSelectedDayChange(
                        @NonNull CalendarView view,
                        int year, int month, int dayOfMonth) {

                    // Store the value of the date with
                    // format in a String type variable;
                    // add 1 to month because the month
                    // index starts with 0
                    String Date = dayOfMonth + "-" + (month + 1) + "-" + year;

                    // set this date in the TextView for display
                    date_view.setText(Date);
                }
            });
    }
}
Output:
 | [
{
"code": null,
"e": 24231,
"s": 24203,
"text": "\n26 Jul, 2021"
},
{
"code": null,
"e": 24500,
"s": 24231,
"text": "This article shows how to create an android application for displaying the Calendar using CalendarView. It also provides the selection of the current date and displaying the date. The setOnDateChangeListener Interface is used which provide onSelectedDayChange method. "
},
{
"code": null,
"e": 24726,
"s": 24613,
"text": "onSelectedDayChange: In this method, we get the values of days, months, and years that are selected by the user."
},
{
"code": null,
"e": 24801,
"s": 24726,
"text": "Below are the steps for creating the Android Application of the Calendar. "
},
{
"code": null,
"e": 24926,
"s": 24801,
"text": "Step 1: Create a new project and you will have a layout XML file and java file. Your screen will look like the image below. "
},
{
"code": null,
"e": 25107,
"s": 24926,
"text": "Step 2: Open your xml file and add CalendarView and TextView. And assign id to TextView and CalendarView. After completing this process, the xml file screen looks like given below."
},
{
"code": null,
"e": 25274,
"s": 25107,
"text": "Step 3: Now, open up the activity java file and define the CalendarView and TextView type variable, and also use findViewById() to get the Calendarview and textview. "
},
{
"code": null,
"e": 25497,
"s": 25274,
"text": "Step 4: Now, add setOnDateChangeListener interface in object of CalendarView which provides setOnDateChangeListener method. In this method, we get the Dates(days, months, years) and set the dates in TextView for Display. "
},
{
"code": null,
"e": 25676,
"s": 25497,
"text": "Step 5: Now run the app and set the current date which will be shown on the top of the screen.Complete code of MainActivity.java or activity_main.xml of Calendar is given below. "
},
{
"code": null,
"e": 25694,
"s": 25676,
"text": "activity_main.xml"
},
{
"code": "<?xml version=\"1.0\" encoding=\"utf-8\"?><RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:app=\"http://schemas.android.com/apk/res-auto\" xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\" android:layout_height=\"match_parent\" tools:context=\".MainActivity\"> <!-- Add TextView to display the date --> <TextView android:id=\"@+id/date_view\" android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:layout_marginLeft=\"150dp\" android:layout_marginTop=\"20dp\" android:text=\"Set the Date\" android:textColor=\"@android:color/background_dark\" android:textStyle=\"bold\" /> <!-- Add CalendarView to display the Calendar --> <CalendarView android:id=\"@+id/calendar\" android:layout_marginTop=\"80dp\" android:layout_marginLeft=\"19dp\" android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\"> </CalendarView> </RelativeLayout>",
"e": 26730,
"s": 25694,
"text": null
},
{
"code": null,
"e": 26748,
"s": 26730,
"text": "MainActivity.java"
},
{
"code": "package org.geeksforgeeks.navedmalik.calendar; import android.support.annotation.NonNull;import android.support.v7.app.AppCompatActivity;import android.os.Bundle;import android.widget.Button;import android.widget.CalendarView;import android.widget.TextView; public class MainActivity extends AppCompatActivity { // Define the variable of CalendarView type // and TextView type; CalendarView calendar; TextView date_view; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // By ID we can use each component // which id is assign in xml file // use findViewById() to get the // CalendarView and TextView calendar = (CalendarView) findViewById(R.id.calendar); date_view = (TextView) findViewById(R.id.date_view); // Add Listener in calendar calendar .setOnDateChangeListener( new CalendarView .OnDateChangeListener() { @Override // In this Listener have one method // and in this method we will // get the value of DAYS, MONTH, YEARS public void onSelectedDayChange( @NonNull CalendarView view, int year, int month, int dayOfMonth) { // Store the value of date with // format in String type Variable // Add 1 in month because month // index is start with 0 String Date = dayOfMonth + \"-\" + (month + 1) + \"-\" + year; // set this date in TextView for Display date_view.setText(Date); } }); }}",
"e": 28841,
"s": 26748,
"text": null
},
{
"code": null,
"e": 28851,
"s": 28841,
"text": "Output: "
}
]
|
Bayesian Thinking for Linear Regression @ Kaggle Days Meetup | by Souradip Chakraborty | Towards Data Science | One of the major motivations of this research is the fact that there has been an increasing focus on Deep model interpretability with the advent of more and more complex models. More is the complexity of the model, difficult it gets to have interpretability with respect to the outputs and a lot of research is going in the field of Bayesian thinking and learning.
But before understanding and being able to appreciate Bayesian in deep neural models, we should be well versed and adept with Bayesian thinking in linear models for example- Bayesian Linear regression. But there are very few good materials available online in a combined fashion which can give a clear motivation and understanding of the Bayesian Linear regression.
This was one of the major motivations for this blog and here I will try to give an understanding of how to approach the Linear regression from a Bayesian analysis standpoint.
Just before starting, let's make sure we are very clear on the Maximum Likelihood estimates of linear regression models. I have discussed this in one of my previous talks; please refer to the link.
Just to introduce very briefly the concept of the Maximum Likelihood Estimate in a Linear regression scenario.
The most important point to understand from this is that MLE gives you a point estimate of the parameter by maximizing the Likelihood P(D|θ).
Even MAP, which is Maximum a posteriori estimation, maximizes the posterior probability P(θ|D), and it also gives a point estimate.
So, these methods don’t give you enough information about the distribution of the coefficients, whereas in Bayesian we estimate the posterior distribution of the parameters. Hence, in this case, the output is not a single value but a probability density/mass function.
Just to understand the intuition behind it.
Just one more step to go !!!
Before delving deep into Bayesian Regression, we need to understand one more thing which is Markov Chain Monte Carlo Simulations and why it is needed?
MCMC methods are used to approximate the posterior distribution of a parameter of interest by random sampling in a probabilistic space. But why approximating the distribution and not calculating the exact distribution might be one question that you must be intrigued by.
So, the posterior is almost impossible to compute exactly; it is intractable due to the denominator (the evidence term), and hence we go for approximating the posterior distribution using MCMC.
Monte Carlo methods help us in generating random variables following a given distribution. For example: -θ ~ N(mean, sigma**2), will help in generating multiple θ from the Normal distribution and there are numerous methods to do so.
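For example, with numpy (a minimal sketch; the mean and sigma values here are arbitrary):

import numpy as np

# 10,000 draws of theta ~ N(mean, sigma**2)
theta_samples = np.random.normal(loc=0.0, scale=1.0, size=10_000)
print(theta_samples.mean(), theta_samples.std())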
Markov chain is a sequence of numbers where each number is dependent on the previous number in the sequence. Markov Chain Monte Carlo helps in generating random variables from a distribution where each value of θ is drawn from a known distribution with a mean equal to the previous value.
As you can understand, this is basically a random walk and there can be a lot of values generated which might not be necessary and relevant. Hence, we need to do an acceptance/rejection sampling of the values to generate the distribution of interest.
In this case, the Metropolis-Hastings algorithm is used to refine the generated values through rejection and acceptance sampling.
We can also use Gibb’s sampling, where the goal is to find the posterior distribution P(θ1,θ2|y,x) which is done by obtaining the posterior conditional distributions P(θ1|θ2, y,x) and P(θ2|θ1, y,x). So, we generate
θ1 ~P(θ1|θ2, y,x), replace the value of generated θ1 in the second equation and generate θ2 ~ P(θ2|θ1, y,x) and we continue the process for several iterations to get the posterior.
Now, let’s illustrate the same with an example.
In the below example, I will be illustrating the Bayesian Linear Regression methodology firstly with Gibbs sampling. Here, I have assumed certain distributions for the parameters.
In this section, I will show you an illustration of using Gibbs sampling for Bayesian Linear regression. This section has been taken from the brilliantly illustrated Kieran R Campbell’s blog on Bayesian Linear regression.
So, to begin with, let's understand how the data is for our case. Let D be the dataset for the following experiment and D is defined as,
D = ((x1,y1), (x2,y2), (x3,y3), ...... (xn,yn)) is the data, where x1, x2, .... belong to the univariate feature space.
Y ~ N( b*x + c, 1/t), where Y is a random variable normally distributed with mean b*x + c and variance of 1/t, where t represents the precision.
In this case, we will be considering a univariate feature space (X) with two parameters, the slope (b) and the intercept (c). So, in this experiment, we will learn the posterior distributions of the above parameters b & c and the precision t.
Let's write down the assumed prior distributions of the above parameters
So, let's first write down the density function of a normal distribution and its log-pdf in a generalizable form, which will be of use multiple times later.
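A minimal sketch of such a helper, assuming numpy and parameterizing by the precision tau (the function name is illustrative):

import numpy as np

def norm_log_pdf(x, mu, tau):
    # log N(x | mu, 1/tau); tau is the precision (inverse variance)
    return 0.5 * np.log(tau) - 0.5 * np.log(2 * np.pi) - (tau / 2) * (x - mu) ** 2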
Now it will be very easy for us to pen down the likelihood for our case which also follows Normal with mean bx+c and precision t.
Taking the logarithm of the same gives the below expression
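In symbols, with t the noise precision (a sketch derived from the model stated above):

\log p(y \mid x, b, c, t) = \frac{n}{2}\log t - \frac{n}{2}\log(2\pi) - \frac{t}{2}\sum_{i=1}^{n} (y_i - b x_i - c)^2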
Now comes the slightly more complex part, which is to derive the conditional posterior distributions for all three parameters b, c, t.
Conditional Posterior Distribution for the intercept_c :
Completing the square in the log-posterior is the process by which we find the conditional posterior distribution for the intercept c.
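Under the N(c0, 1/tc) prior on the intercept, standard normal-normal conjugacy gives (a sketch, consistent with the code snippet below):

c \mid b, t, y, x \sim \mathcal{N}\!\left( \frac{t_c c_0 + t \sum_i (y_i - b x_i)}{t_c + n t},\; \frac{1}{t_c + n t} \right)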
The code snippet of the above equation
def get_intercept_c(y, x, b, t, c0, tc):
    n = len(y)
    assert len(x) == n
    precision = tc + t * n   # prior precision plus n times the noise precision
    mean = tc * c0 + t * np.sum(y - b * x)
    mean = mean / precision
    return np.random.normal(mean, 1 / np.sqrt(precision))
Similarly, we can find the same for the slope b.
Conditional Posterior Distribution for the slope_b :
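By the same conjugate-normal algebra, under the N(b0, 1/tb) prior on the slope (a sketch, consistent with the code below):

b \mid c, t, y, x \sim \mathcal{N}\!\left( \frac{t_b b_0 + t \sum_i x_i (y_i - c)}{t_b + t \sum_i x_i^2},\; \frac{1}{t_b + t \sum_i x_i^2} \right)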
The code-snippet of the update for the slope is
def get_slope_b(y, x, c, t, b0, tb):
    n = len(y)
    assert len(x) == n
    precision = tb + t * np.sum(x * x)
    mean = tb * b0 + t * np.sum((y - c) * x)
    mean = mean / precision
    return np.random.normal(mean, 1 / np.sqrt(precision))
Conditional Posterior Distribution for the precision_t :
The conditional posterior distribution of the precision, given b and c, will not be normal like the above two, as its prior follows a Gamma distribution.
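With the Gamma(alpha, beta) prior, Gamma-normal conjugacy gives (rate parameterization; a sketch consistent with the code below, where numpy's gamma takes a scale of 1/beta_new):

t \mid b, c, y, x \sim \text{Gamma}\!\left( \alpha + \frac{n}{2},\; \beta + \frac{1}{2}\sum_i (y_i - c - b x_i)^2 \right)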
def get_precision_t(y, x, b, c, alpha, beta):
    n = len(y)
    alpha_new = alpha + n / 2
    resid = y - c - b * x
    beta_new = beta + np.sum(resid * resid) / 2
    return np.random.gamma(alpha_new, 1 / beta_new)
Now, since we could get the closed distributional form for the parameters, now we can generate from the posterior distribution using MCMC simulations.
So, first, to run the experiment, I generated sample data with known slope and intercept coefficients, which we can use to validate our Bayesian linear regression. For the experiment we have assumed a slope of 6 (_a) and an intercept of 2 (_b).
# observed data
n = 1000
_a = 6
_b = 2
x = np.linspace(0, 1, n)
y = _a * x + _b + np.random.randn(n)
synth_plot = plt.plot(x, y, "o")
plt.xlabel("x")
plt.ylabel("y")
Now, after updating the parameters based on the equations and snippets shown above, we run the MCMC simulations for getting the true posterior distribution of the parameters.
def gibbs(y, x, iters, init, hypers):
    assert len(y) == len(x)
    c = init["c"]
    b = init["b"]
    t = init["t"]
    trace = np.zeros((iters, 3))  # trace to store values of c, b, t
    for it in tqdm(range(iters)):
        c = get_intercept_c(y, x, b, t, hypers["c0"], hypers["tc"])
        b = get_slope_b(y, x, c, t, hypers["b0"], hypers["tb"])
        t = get_precision_t(y, x, b, c, hypers["alpha"], hypers["beta"])
        trace[it, :] = np.array((c, b, t))
    trace = pd.DataFrame(trace)
    trace.columns = ['intercept', 'slope', 'precision']
    return trace

iters = 15000
trace = gibbs(y, x, iters, init, hypers)
On running the experiment for 15000 iterations, we inspect the trace plot to validate our hypothesis. There is a concept of a burn-in period in MCMC simulations, where we discard the first few iterations because they do not yet represent samples from the true posterior.
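For instance, reusing the trace DataFrame returned by gibbs() above, discarding a burn-in could look like this (the burn-in length of 1000 is an arbitrary illustrative choice):

burn_in = 1000
trace_burnt = trace.iloc[burn_in:]    # drop the early, non-stationary draws
print(trace_burnt.median())           # posterior medians after burn-in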
So as you can see, the sample posterior distribution replicated our assumptions and we have much more information about the coefficients and the parameters.
But it is not always possible to have a closed distributional form of the conditional posterior and hence we have to opt for a proposal distribution with acceptance & rejection sampling using the Metropolis-Hastings algorithm discussed above briefly.
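A minimal random-walk Metropolis-Hastings sketch, assuming numpy; log_target, theta0, and the step size are illustrative names and choices, not from the original post:

import numpy as np

def metropolis_hastings(log_target, theta0, n_iters=5000, step=0.5):
    # Propose theta' ~ N(theta, step^2) and accept with
    # probability min(1, p(theta') / p(theta))
    theta = theta0
    samples = np.empty(n_iters)
    for i in range(n_iters):
        proposal = theta + step * np.random.randn()
        if np.log(np.random.rand()) < log_target(proposal) - log_target(theta):
            theta = proposal          # accept the move
        samples[i] = theta            # on rejection, repeat the current value
    return samples

# Smoke test: sampling a standard normal
draws = metropolis_hastings(lambda th: -0.5 * th * th, theta0=0.0)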
Note: There are a lot of advantages of Bayesian thinking and reasoning and I will be coming up with subsequent materials on Variational Inference and Bayesian thinking in my upcoming blogs and materials. I will be delving into Bayesian Deep Learning and its advantages. So, keep reading and sharing knowledge.
P.S — This talk was part of the Kaggle Days Meetup Delhi-NCR session, please follow them for such materials and sessions. You will also get my video explaining the topic briefly. | [
{
"code": null,
"e": 537,
"s": 172,
"text": "One of the major motivations of this research is the fact that there has been an increasing focus on Deep model interpretability with the advent of more and more complex models. More is the complexity of the model, difficult it gets to have interpretability with respect to the outputs and a lot of research is going in the field of Bayesian thinking and learning."
},
{
"code": null,
"e": 903,
"s": 537,
"text": "But before understanding and being able to appreciate Bayesian in deep neural models, we should be well versed and adept with Bayesian thinking in linear models for example- Bayesian Linear regression. But there are very few good materials available online in a combined fashion which can give a clear motivation and understanding of the Bayesian Linear regression."
},
{
"code": null,
"e": 1078,
"s": 903,
"text": "This was one of the major motivations for this blog and here I will try to give an understanding of how to approach the Linear regression from a Bayesian analysis standpoint."
},
{
"code": null,
"e": 1280,
"s": 1078,
"text": "Just before starting let's understand that we are very cleary with Maximum Likelihood estimates of Linear regression models. I have discussed this in one of my previous talks, please refer to the link."
},
{
"code": null,
"e": 1391,
"s": 1280,
"text": "Just to introduce very briefly the concept of the Maximum Likelihood Estimate in a Linear regression scenario."
},
{
"code": null,
"e": 1533,
"s": 1391,
"text": "The most important point to understand from this is that MLE gives you a point estimate of the parameter by maximizing the Likelihood P(D|θ)."
},
{
"code": null,
"e": 1663,
"s": 1533,
"text": "Even, MAP which is Maximum a posteriori estimation maximizes the posterior probability P(θ|D), which also gives point estimation."
},
{
"code": null,
"e": 1932,
"s": 1663,
"text": "So, these methods don’t give you enough information about the distribution of the coefficients, whereas in Bayesian we estimate the posterior distribution of the parameters. Hence, in this case, the output is not a single value but a probability density/mass function."
},
{
"code": null,
"e": 1976,
"s": 1932,
"text": "Just to understand the intuition behind it."
},
{
"code": null,
"e": 2005,
"s": 1976,
"text": "Just one more step to go !!!"
},
{
"code": null,
"e": 2156,
"s": 2005,
"text": "Before delving deep into Bayesian Regression, we need to understand one more thing which is Markov Chain Monte Carlo Simulations and why it is needed?"
},
{
"code": null,
"e": 2427,
"s": 2156,
"text": "MCMC methods are used to approximate the posterior distribution of a parameter of interest by random sampling in a probabilistic space. But why approximating the distribution and not calculating the exact distribution might be one question that you must be intrigued by."
},
{
"code": null,
"e": 2577,
"s": 2427,
"text": "So, it's almost impossible to compute and intractable due to the denominator and hence we go for approximating the posterior distribution using MCMC."
},
{
"code": null,
"e": 2810,
"s": 2577,
"text": "Monte Carlo methods help us in generating random variables following a given distribution. For example: -θ ~ N(mean, sigma**2), will help in generating multiple θ from the Normal distribution and there are numerous methods to do so."
},
{
"code": null,
"e": 3099,
"s": 2810,
"text": "Markov chain is a sequence of numbers where each number is dependent on the previous number in the sequence. Markov Chain Monte Carlo helps in generating random variables from a distribution where each value of θ is drawn from a known distribution with a mean equal to the previous value."
},
{
"code": null,
"e": 3350,
"s": 3099,
"text": "As you can understand, this is basically a random walk and there can be a lot of values generated which might not be necessary and relevant. Hence, we need to do an acceptance/rejection sampling of the values to generate the distribution of interest."
},
{
"code": null,
"e": 3464,
"s": 3350,
"text": "In this case the Metropolis-Hastings algorithm is used to refine the values by rejection and acceptance sampling."
},
{
"code": null,
"e": 3679,
"s": 3464,
"text": "We can also use Gibb’s sampling, where the goal is to find the posterior distribution P(θ1,θ2|y,x) which is done by obtaining the posterior conditional distributions P(θ1|θ2, y,x) and P(θ2|θ1, y,x). So, we generate"
},
{
"code": null,
"e": 3860,
"s": 3679,
"text": "θ1 ~P(θ1|θ2, y,x), replace the value of generated θ1 in the second equation and generate θ2 ~ P(θ2|θ1, y,x) and we continue the process for several iterations to get the posterior."
},
{
"code": null,
"e": 3908,
"s": 3860,
"text": "Now, let’s illustrate the same with an example."
},
{
"code": null,
"e": 4088,
"s": 3908,
"text": "In the below example, I will be illustrating the Bayesian Linear Regression methodology firstly with Gibbs sampling. Here, I have assumed certain distributions for the parameters."
},
{
"code": null,
"e": 4310,
"s": 4088,
"text": "In this section, I will show you an illustration of using Gibbs sampling for Bayesian Linear regression. This section has been taken from the brilliantly illustrated Kieran R Campbell’s blog on Bayesian Linear regression."
},
{
"code": null,
"e": 4447,
"s": 4310,
"text": "So, to begin with, let's understand how the data is for our case. Let D be the dataset for the following experiment and D is defined as,"
},
{
"code": null,
"e": 4563,
"s": 4447,
"text": "D = ((x1,y1) ,(x2,y2),(x3,y3) ...... (xn,yn)) is the data, where x1,x2.... belongs to the univariate feature space."
},
{
"code": null,
"e": 4708,
"s": 4563,
"text": "Y ~ N( b*x + c, 1/t), where Y is a random variable normally distributed with mean b*x + c and variance of 1/t, where t represents the precision."
},
{
"code": null,
"e": 4942,
"s": 4708,
"text": "In this case, we will be considering a univariate feature space (X) with two parameters, slope(c) and intercept (b). So, in this experiment, we will learn the posterior distributions of the above parameters c & b and the precision t."
},
{
"code": null,
"e": 5015,
"s": 4942,
"text": "Let's write down the assumed prior distributions of the above parameters"
},
{
"code": null,
"e": 5176,
"s": 5015,
"text": "So, lets first write down the density function of a normal distribution and the log_pdf to have a generalizable form, which will be of use multiple times later."
},
{
"code": null,
"e": 5306,
"s": 5176,
"text": "Now it will be very easy for us to pen down the likelihood for our case which also follows Normal with mean bx+c and precision t."
},
{
"code": null,
"e": 5366,
"s": 5306,
"text": "Taking the logarithm of the same gives the below expression"
},
{
"code": null,
"e": 5494,
"s": 5366,
"text": "Now comes the little complex part which is to derive the conditional posterior distribution for all the three parameters b,c,t."
},
{
"code": null,
"e": 5551,
"s": 5494,
"text": "Conditional Posterior Distribution for the intercept_c :"
},
{
"code": null,
"e": 5666,
"s": 5551,
"text": "So, as we can see that this is the process to find out the conditional posterior distribution for the intercept c."
},
{
"code": null,
"e": 5705,
"s": 5666,
"text": "The code snippet of the above equation"
},
{
"code": null,
"e": 5931,
"s": 5705,
"text": "def get_intercept_c(y, x, b, t, c0, tc): n = len(y) assert len(x) == n precision = c0 + t * n mean = tc * c0 + t * np.sum(y - b * x) mean = mean/precision return np.random.normal(mean, 1 / np.sqrt(precision)"
},
{
"code": null,
"e": 5980,
"s": 5931,
"text": "Similarly, we can find the same for the slope b."
},
{
"code": null,
"e": 6033,
"s": 5980,
"text": "Conditional Posterior Distribution for the slope_b :"
},
{
"code": null,
"e": 6081,
"s": 6033,
"text": "The code-snippet of the update for the slope is"
},
{
"code": null,
"e": 6319,
"s": 6081,
"text": "def get_slope_b(y, x, c, t, b0, tb): n = len(y) assert len(x) == n precision = tb + t * np.sum(x * x) mean = tb * b0 + t * np.sum( (y - c) * x) mean = mean/precision return np.random.normal(mean, 1 / np.sqrt(precision))"
},
{
"code": null,
"e": 6376,
"s": 6319,
"text": "Conditional Posterior Distribution for the precision_t :"
},
{
"code": null,
"e": 6497,
"s": 6376,
"text": "The conditional posterior distribution gives b,c will not be like the above two as the prior follows Gamma distribution."
},
{
"code": null,
"e": 6709,
"s": 6497,
"text": "def get_precision_t(y, x, b, c, alpha, beta): n = len(y) alpha_new = alpha + n / 2 resid = y - c - b * x beta_new = beta + np.sum(resid * resid) / 2 return np.random.gamma(alpha_new, 1 / beta_new)"
},
{
"code": null,
"e": 6860,
"s": 6709,
"text": "Now, since we could get the closed distributional form for the parameters, now we can generate from the posterior distribution using MCMC simulations."
},
{
"code": null,
"e": 7069,
"s": 6860,
"text": "So, first, to run the experiment, I generated a sample data with known slope and intercept coefficients which we can validate from our Bayesian Linear regression. We have assumed for the experiment a=6, b = 2"
},
{
"code": null,
"e": 7225,
"s": 7069,
"text": "# observed datan = 1000_a = 6_b = 2x = np.linspace(0, 1, n)y = _a*x + _b + np.random.randn(n)synth_plot = plt.plot(x, y, \"o\")plt.xlabel(\"x\")plt.ylabel(\"y\")"
},
{
"code": null,
"e": 7400,
"s": 7225,
"text": "Now, after updating the parameters based on the equations and snippets shown above, we run the MCMC simulations for getting the true posterior distribution of the parameters."
},
{
"code": null,
"e": 8039,
"s": 7400,
"text": "def gibbs(y, x, iters, init, hypers): assert len(y) == len(x) c = init[\"c\"] b = init[\"b\"] t = init[\"t\"] trace = np.zeros((iters, 3)) ## trace to store values of b, c, t for it in tqdm(range(iters)): c = get_intercept_c(y, x, b, t, hypers[\"c0\"], hypers[\"tc\"]) b = get_slope_b(y, x, c, t, hypers[\"b0\"], hypers[\"tb\"]) t = get_precision_t(y, x, b, c, hypers[\"alpha\"], hypers[\"beta\"]) trace[it,:] = np.array((c, b, t)) trace = pd.DataFrame(trace) trace.columns = ['intercept', 'slope', 'precision'] return traceiters = 15000trace = gibbs(y, x, iters, init, hypers)"
},
{
"code": null,
"e": 8316,
"s": 8039,
"text": "On running the experiment for 15000 iterations, we see the trace plot to validate our hypothesis. There is a concept of a Burn-in period in MCMC simulations where we ignore the first few iterations as it doesn’t replicate samples from the true posterior rather random samples."
},
{
"code": null,
"e": 8473,
"s": 8316,
"text": "So as you can see, the sample posterior distribution replicated our assumptions and we have much more information about the coefficients and the parameters."
},
{
"code": null,
"e": 8724,
"s": 8473,
"text": "But it is not always possible to have a closed distributional form of the conditional posterior and hence we have to opt for a proposal distribution with acceptance & rejection sampling using the Metropolis-Hastings algorithm discussed above briefly."
},
{
"code": null,
"e": 9034,
"s": 8724,
"text": "Note: There are a lot of advantages of Bayesian thinking and reasoning and I will be coming up with subsequent materials on Variational Inference and Bayesian thinking in my upcoming blogs and materials. I will be delving into Bayesian Deep Learning and its advantages. So, keep reading and sharing knowledge."
}
]
|
How do you get the font metrics in Java Swing? | To get the font metrics, use the FontMetrics class:
Graphics2D graphics = (Graphics2D) gp.create();
String str = getWidth() + "(Width) x (Height)" + getHeight();
FontMetrics m = graphics.getFontMetrics();
Now to display it:
int xValue = (getWidth() - m.stringWidth(str)) / 2;
int yValue = ((getHeight() - m.getHeight()) / 2) + m.getAscent();
graphics.drawString(str, xValue, yValue);
The following is an example to get the font metrics in Java Swing:
import java.awt.BorderLayout;
import java.awt.Dimension;
import java.awt.FontMetrics;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Color;
import javax.swing.JFrame;
import javax.swing.JPanel;
public class SwingDemo {
public static void main(String[] args) {
JFrame frame = new JFrame("Font Metrics");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setLayout(new BorderLayout());
frame.add(new Demo());
frame.pack();
frame.setVisible(true);
System.out.println(frame.getSize());
}
}
class Demo extends JPanel {
@Override
public Dimension getPreferredSize() {
return new Dimension(500, 500);
}
@Override
protected void paintComponent(Graphics gp) {
super.paintComponent(gp);
Graphics2D graphics = (Graphics2D) gp.create();
String str = getWidth() + "(Width) x (Height)" + getHeight();
FontMetrics m = graphics.getFontMetrics();
int xValue = (getWidth() - m.stringWidth(str)) / 2;
int yValue = ((getHeight() - m.getHeight()) / 2) + m.getAscent();
graphics.drawString(str, xValue, yValue);
graphics.dispose();
}
} | [
{
"code": null,
"e": 1114,
"s": 1062,
"text": "To get the font metrics, use the FontMetrics class:"
},
{
"code": null,
"e": 1267,
"s": 1114,
"text": "Graphics2D graphics = (Graphics2D) gp.create();\nString str = getWidth() + \"(Width) x (Height)\" + getHeight();\nFontMetrics m = graphics.getFontMetrics();"
},
{
"code": null,
"e": 1286,
"s": 1267,
"text": "Now to display it:"
},
{
"code": null,
"e": 1446,
"s": 1286,
"text": "int xValue = (getWidth() - m.stringWidth(str)) / 2;\nint yValue = ((getHeight() - m.getHeight()) / 2) + m.getAscent();\ngraphics.drawString(str, xValue, yValue);"
},
{
"code": null,
"e": 1513,
"s": 1446,
"text": "The following is an example to get the font metrics in Java Swing:"
},
{
"code": null,
"e": 2680,
"s": 1513,
"text": "import java.awt.BorderLayout;\nimport java.awt.Dimension;\nimport java.awt.FontMetrics;\nimport java.awt.Graphics;\nimport java.awt.Graphics2D;\nimport java.awt.Color;\nimport javax.swing.JFrame;\nimport javax.swing.JPanel;\npublic class SwingDemo {\n public static void main(String[] args) {\n JFrame frame = new JFrame(\"Font Metrics\");\n frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n frame.setLayout(new BorderLayout());\n frame.add(new Demo());\n frame.pack();\n frame.setVisible(true);\n System.out.println(frame.getSize());\n }\n}\nclass Demo extends JPanel {\n @Override\n public Dimension getPreferredSize() {\n return new Dimension(500, 500);\n }\n @Override\n protected void paintComponent(Graphics gp) {\n super.paintComponent(gp);\n Graphics2D graphics = (Graphics2D) gp.create();\n String str = getWidth() + \"(Width) x (Height)\" + getHeight();\n FontMetrics m = graphics.getFontMetrics();\n int xValue = (getWidth() - m.stringWidth(str)) / 2;\n int yValue = ((getHeight() - m.getHeight()) / 2) + m.getAscent();\n graphics.drawString(str, xValue, yValue);\n graphics.dispose();\n }\n}"
}
]
|
Java Examples - Infix to Postfix | How to convert an infix expression to postfix expression ?
Following example demonstrates how to convert an infix to postfix expression by using the concept of stack.
import java.io.IOException;
public class InToPost {
private Stack theStack;
private String input;
private String output = "";
public InToPost(String in) {
input = in;
int stackSize = input.length();
theStack = new Stack(stackSize);
}
public String doTrans() {
for (int j = 0; j < input.length(); j++) {
char ch = input.charAt(j);
switch (ch) {
case '+':
case '-':
gotOper(ch, 1);
break;
case '*':
case '/':
gotOper(ch, 2);
break;
case '(':
theStack.push(ch);
break;
case ')':
gotParen(ch);
break;
default:
output = output + ch;
break;
}
}
while (!theStack.isEmpty()) {
output = output + theStack.pop();
}
System.out.println(output);
return output;
}
public void gotOper(char opThis, int prec1) {
while (!theStack.isEmpty()) {
char opTop = theStack.pop();
if (opTop == '(') {
theStack.push(opTop);
break;
} else {
int prec2;
if (opTop == '+' || opTop == '-')
prec2 = 1;
else
prec2 = 2;
if (prec2 < prec1) {
theStack.push(opTop);
break;
}
else output = output + opTop;
}
}
theStack.push(opThis);
}
public void gotParen(char ch) {
while (!theStack.isEmpty()) {
char chx = theStack.pop();
if (chx == '(')
break;
else output = output + chx;
}
}
public static void main(String[] args) throws IOException {
String input = "1+2*4/5-7+3/6";
String output;
InToPost theTrans = new InToPost(input);
output = theTrans.doTrans();
System.out.println("Postfix is " + output + '\n');
}
class Stack {
private int maxSize;
private char[] stackArray;
private int top;
public Stack(int max) {
maxSize = max;
stackArray = new char[maxSize];
top = -1;
}
public void push(char j) {
stackArray[++top] = j;
}
public char pop() {
return stackArray[top--];
}
public char peek() {
return stackArray[top];
}
public boolean isEmpty() {
return (top == -1);
}
}
}
The above code sample will produce the following result.
124*5/+7-36/+
Postfix is 124*5/+7-36/+
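For comparison, the same shunting-yard idea fits in a short Python sketch (to_postfix is an illustrative name, not part of the Java example; it assumes single-character operands):

def to_postfix(expr):
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, stack = [], []
    for ch in expr:
        if ch.isalnum():
            out.append(ch)
        elif ch == '(':
            stack.append(ch)
        elif ch == ')':
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()  # discard '('
        else:  # operator
            while stack and stack[-1] != '(' and prec[stack[-1]] >= prec[ch]:
                out.append(stack.pop())
            stack.append(ch)
    while stack:
        out.append(stack.pop())
    return ''.join(out)

print(to_postfix("1+2*4/5-7+3/6"))  # 124*5/+7-36/+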
The following is another sample example to convert an infix expression to postfix expression.
import java.io.BufferedReader;
import java.io.InputStreamReader;
class stack {
char stack1[] = new char[20];
int t;
void push(char ch) {
t++;
stack1[t] = ch;
}
char pop() {
char ch;
ch = stack1[t];
t--;
return ch;
}
int pre(char ch) {
switch(ch) {
case '-':return 1;
case '+':return 1;
case '*':return 2;
case '/':return 2;
}
return 0;
}
boolean operator(char ch) {
if(ch == '/' || ch == '*' || ch == '+' || ch == '-') return true;
else return false;
}
boolean isAlpha(char ch) {
   if(ch >= 'a' && ch <= 'z' || ch >= '0' && ch <= '9') return true;
   else return false;
}
void postfix(String s1) {
   char output[] = new char[s1.length()];
   char ch;
   int p = 0, i;
   for(i = 0; i < s1.length(); i++) {
      ch = s1.charAt(i);
      if(ch == '(') {
         push(ch);
      }
      else if(isAlpha(ch)) {
         output[p++] = ch;
      }
      else if(operator(ch)) {
         // pop operators of equal or higher precedence first
         while(t != 0 && stack1[t] != '(' && pre(stack1[t]) >= pre(ch)) {
            output[p++] = pop();
         }
         push(ch);
      }
      else if(ch == ')') {
         while((ch = pop()) != '(') {
            output[p++] = ch;
         }
      }
   }
   while(t != 0) {
      output[p++] = pop();
   }
   for(int j = 0; j < p; j++) {
      System.out.print(output[j]);
   }
}
}
public class Demo {
public static void main(String[] args)throws Exception {
String s;
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
stack b = new stack();
System.out.println("Please Enter input s1ing");
s = br.readLine();
System.out.println("Input String is "+s);
System.out.println("Output String is");
b.postfix(s);
}
}
The above code sample will produce the following result.
Please enter input string
1+2*4/5-7+3/6
Input String is 1+2*4/5-7+3/6
Output String is
124*5/+7-36/+
 | [
{
"code": null,
"e": 2127,
"s": 2068,
"text": "How to convert an infix expression to postfix expression ?"
},
{
"code": null,
"e": 2235,
"s": 2127,
"text": "Following example demonstrates how to convert an infix to postfix expression by using the concept of stack."
},
{
"code": null,
"e": 4786,
"s": 2235,
"text": "import java.io.IOException;\n\npublic class InToPost {\n private Stack theStack;\n private String input;\n private String output = \"\";\n public InToPost(String in) {\n input = in;\n int stackSize = input.length();\n theStack = new Stack(stackSize);\n }\n public String doTrans() {\n for (int j = 0; j < input.length(); j++) {\n char ch = input.charAt(j);\n switch (ch) {\n case '+': \n case '-':\n gotOper(ch, 1); \n break; \n case '*': \n case '/':\n gotOper(ch, 2); \n break; \n case '(': \n theStack.push(ch);\n break;\n case ')': \n gotParen(ch); \n break;\n default: \n output = output + ch; \n break;\n }\n }\n while (!theStack.isEmpty()) {\n output = output + theStack.pop();\n }\n System.out.println(output);\n return output; \n }\n public void gotOper(char opThis, int prec1) {\n while (!theStack.isEmpty()) {\n char opTop = theStack.pop();\n if (opTop == '(') {\n theStack.push(opTop);\n break;\n } else {\n int prec2;\n if (opTop == '+' || opTop == '-')\n prec2 = 1;\n else\n prec2 = 2;\n if (prec2 < prec1) { \n theStack.push(opTop);\n break;\n } \n else output = output + opTop;\n }\n }\n theStack.push(opThis);\n }\n public void gotParen(char ch) { \n while (!theStack.isEmpty()) {\n char chx = theStack.pop();\n if (chx == '(') \n break; \n else output = output + chx; \n }\n }\n public static void main(String[] args) throws IOException {\n String input = \"1+2*4/5-7+3/6\";\n String output;\n InToPost theTrans = new InToPost(input);\n output = theTrans.doTrans(); \n System.out.println(\"Postfix is \" + output + '\\n');\n }\n class Stack {\n private int maxSize;\n private char[] stackArray;\n private int top;\n \n public Stack(int max) {\n maxSize = max;\n stackArray = new char[maxSize];\n top = -1;\n }\n public void push(char j) {\n stackArray[++top] = j;\n }\n public char pop() {\n return stackArray[top--];\n }\n public char peek() {\n return stackArray[top];\n }\n public boolean isEmpty() {\n return (top == -1);\n }\n }\n}"
},
{
"code": null,
"e": 4843,
"s": 4786,
"text": "The above code sample will produce the following result."
},
{
"code": null,
"e": 4883,
"s": 4843,
"text": "124*5/+7-36/+\nPostfix is 124*5/+7-36/+\n"
},
{
"code": null,
"e": 4980,
"s": 4883,
"text": "The following is an another sample example to convert an infix expression to postfix expression."
},
{
"code": null,
"e": 7021,
"s": 4980,
"text": "import java.io.BufferedReader;\nimport java.io.InputStreamReader;\n\nclass stack { \n char stack1[] = new char[20]; \n int t;\n void push(char ch) { \n t++;\n stack1[t] = ch;\n } \n char pop() { \n char ch;\n ch = stack1[t]; \n t--;\n return ch;\n } \n int pre(char ch) { \n switch(ch) { \n case '-':return 1;\n case '+':return 1;\n case '*':return 2;\n case '/':return 2;\n } \n return 0;\n } \n boolean operator(char ch) { \n if(ch == '/' || ch == '*' || ch == '+' || ch == '-') return true; \n else return false; \n } \n boolean isAlpha(char ch) { \n if(ch >= 'a' && ch <= 'z' || ch >= '0' && ch == '9') return true; \n else return false; \n } \n void postfix(String s1) { \n char output[] = new char[s1.length()];\n char ch;\n int p = 0,i; \n for(i = 0;i<s1.length();i++) { \n ch = s1.charAt(i); \n if(ch == '(' ) { \n push(ch);\n } \n else if(isAlpha(ch)) { \n output[p++] = ch; \n } \n else if(operator(ch)) { \n if(stack1[t] == 0||(pre(ch) > pre(stack1[t])) || stack1[t] == '(') { \n push(ch); \n } \n } \n else if(pre(ch) >= pre(stack1[t])) { \n output[p++] = pop();\n push(ch);\n } \n else if(ch == '(') { \n while((ch = pop())!='(') { \n output[p++] = ch;\n } \n } \n } \n while(t != 0) { \n output[p++] = pop();\n } \n for(int j = 0;j>s1.length();j++) {\n System.out.print(output[j]); \n }\n }\n}\npublic class Demo { \n public static void main(String[] args)throws Exception { \n String s;\n BufferedReader br = new BufferedReader(new InputStreamReader(System.in));\n stack b = new stack();\n System.out.println(\"Please Enter input s1ing\");\n s = br.readLine();\n System.out.println(\"Input String is \"+s);\n System.out.println(\"Output String is\");\n b.postfix(s);\n }\n}"
},
{
"code": null,
"e": 7078,
"s": 7021,
"text": "The above code sample will produce the following result."
},
{
"code": null,
"e": 7165,
"s": 7078,
"text": "Enter input string\n124*5/+7-36/+\nInput String:124*5/+7-36/+\nOutput String:\n12*/-3/675\n"
}
]
|
Check if a string is suffix of another - GeeksforGeeks | 03 May, 2021
Given two strings s1 and s2, check if s1 is a suffix of s2. Or in simple words, we need to find whether string s2 ends with string s1.
Examples :
Input : s1 = "geeks" and s2 = "geeksforgeeks"
Output : Yes
Input : s1 = "world", s2 = "my first code is hello world"
Output : Yes
Input : s1 = "geeks" and s2 = "geeksforGeek"
Output : No
Method 1 (Writing our own code)
C++
Java
Python3
C#
PHP
Javascript
// CPP program to find if a string is
// suffix of another
#include <iostream>
#include <string>
using namespace std;

bool isSuffix(string s1, string s2)
{
    int n1 = s1.length(), n2 = s2.length();
    if (n1 > n2)
        return false;
    for (int i = 0; i < n1; i++)
        if (s1[n1 - i - 1] != s2[n2 - i - 1])
            return false;
    return true;
}

int main()
{
    string s1 = "geeks", s2 = "geeksforgeeks";

    // Test case-sensitive implementation
    // of endsWith function
    bool result = isSuffix(s1, s2);
    if (result)
        cout << "Yes";
    else
        cout << "No";
    return 0;
}
// Java program to find if a string is
// suffix of another
class GFG {
    static boolean isSuffix(String s1, String s2)
    {
        int n1 = s1.length(), n2 = s2.length();
        if (n1 > n2)
            return false;
        for (int i = 0; i < n1; i++)
            if (s1.charAt(n1 - i - 1) != s2.charAt(n2 - i - 1))
                return false;
        return true;
    }

    public static void main(String[] args)
    {
        String s1 = "geeks", s2 = "geeksforgeeks";

        // Test case-sensitive implementation
        // of endsWith function
        boolean result = isSuffix(s1, s2);
        if (result)
            System.out.println("Yes");
        else
            System.out.println("No");
    }
}

// This code is contributed by iAyushRaj
# Python 3 program to find if a
# string is suffix of another
def isSuffix(s1, s2):
    n1 = len(s1)
    n2 = len(s2)
    if (n1 > n2):
        return False
    for i in range(n1):
        if(s1[n1 - i - 1] != s2[n2 - i - 1]):
            return False
    return True

# Driver Code
if __name__ == "__main__":
    s1 = "geeks"
    s2 = "geeksforgeeks"

    # Test case-sensitive implementation
    # of endsWith function
    result = isSuffix(s1, s2)
    if (result):
        print("Yes")
    else:
        print("No")

# This code is contributed
# by ChitraNayal
// C# program to find if a string is
// suffix of another
using System;

class GFG {
    static bool isSuffix(string s1, string s2)
    {
        int n1 = s1.Length, n2 = s2.Length;
        if (n1 > n2)
            return false;
        for (int i = 0; i < n1; i++)
            if (s1[n1 - i - 1] != s2[n2 - i - 1])
                return false;
        return true;
    }

    public static void Main()
    {
        string s1 = "geeks", s2 = "geeksforgeeks";

        // Test case-sensitive implementation
        // of endsWith function
        bool result = isSuffix(s1, s2);
        if (result)
            Console.WriteLine("Yes");
        else
            Console.WriteLine("No");
    }
}

// This code is contributed by iAyushRaj
<?php
// PHP program to find if a
// string is suffix of another
function isSuffix($s1, $s2)
{
    $n1 = strlen($s1);
    $n2 = strlen($s2);
    if ($n1 > $n2)
        return false;
    for ($i = 0; $i < $n1; $i++)
        if ($s1[$n1 - $i - 1] != $s2[$n2 - $i - 1])
            return false;
    return true;
}

// Driver Code
$s1 = "geeks";
$s2 = "geeksforgeeks";

// Test case-sensitive implementation
// of endsWith function
$result = isSuffix($s1, $s2);

if ($result)
    echo "Yes";
else
    echo "No";

// This code is contributed by m_kit
?>
<script>
// Javascript program to find if
// a string is suffix of another
function isSuffix(s1, s2)
{
    let n1 = s1.length, n2 = s2.length;
    if (n1 > n2)
        return false;
    for(let i = 0; i < n1; i++)
        if (s1[n1 - i - 1] != s2[n2 - i - 1])
            return false;
    return true;
}

// Driver code
let s1 = "geeks", s2 = "geeksforgeeks";

// Test case-sensitive implementation
// of endsWith function
let result = isSuffix(s1, s2);

if (result)
    document.write("Yes");
else
    document.write("No");

// This code is contributed by decode2207
</script>
Yes
Method 2 (Using boost library in C++) Since the std::string class does not provide an ends_with() member function to check whether a string ends with another string, we will use the Boost library. Make sure to include <boost/algorithm/string.hpp> and <string> for the code to compile.
C++
Java
Python3
C#
Javascript
// CPP program to find if a string is
// suffix of another
#include <boost/algorithm/string.hpp>
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string s1 = "geeks", s2 = "geeksforgeeks";

    // Test case-sensitive implementation
    // of endsWith function
    bool result = boost::algorithm::ends_with(s2, s1);
    if (result)
        cout << "Yes";
    else
        cout << "No";
    return 0;
}
// Java program to find if a string is
// suffix of another
class GFG {
    public static void main(String[] args)
    {
        String s1 = "geeks", s2 = "geeksforgeeks";

        // Test case-sensitive implementation
        // of endsWith function
        boolean result = s2.endsWith(s1);
        if (result)
            System.out.println("Yes");
        else
            System.out.println("No");
    }
}

// This code is contributed by 29AjayKumar
# Python3 program to find if a string is
# suffix of another

if __name__ == '__main__':
    s1 = "geeks";
    s2 = "geeksforgeeks";

    # Test case-sensitive implementation
    # of endsWith function
    result = s2.endswith(s1);
    if (result):
        print("Yes");
    else:
        print("No");

# This code is contributed by Rajput-Ji
// C# program to find if a string is
// suffix of another
using System;

class GFG {
    // Driver code
    public static void Main(String[] args)
    {
        String s1 = "geeks", s2 = "geeksforgeeks";

        // Test case-sensitive implementation
        // of endsWith function
        bool result = s2.EndsWith(s1);
        if (result)
            Console.WriteLine("Yes");
        else
            Console.WriteLine("No");
    }
}

// This code contributed by Rajput-Ji
<script>
// Javascript program to find if a string is
// suffix of another
let s1 = "geeks", s2 = "geeksforgeeks";

// Test case-sensitive implementation
// of endsWith function
let result = s2.endsWith(s1);

if (result)
    document.write("Yes");
else
    document.write("No");

// This code is contributed by avanitrachhadiya2155
</script>
Yes
C++ Data Types | [
{
"code": null,
"e": 24615,
"s": 24587,
"text": "\n03 May, 2021"
},
{
"code": null,
"e": 24751,
"s": 24615,
"text": "Given two strings s1 and s2, check if s1 is a suffix of s2. Or in simple words, we need to find whether string s2 ends with string s1. "
},
{
"code": null,
"e": 24763,
"s": 24751,
"text": "Examples : "
},
{
"code": null,
"e": 24952,
"s": 24763,
"text": "Input : s1 = \"geeks\" and s2 = \"geeksforgeeks\"\nOutput : Yes\n\nInput : s1 = \"world\", s2 = \"my first code is hello world\"\nOutput : Yes\n\nInput : s1 = \"geeks\" and s2 = \"geeksforGeek\"\nOutput : No"
},
{
"code": null,
"e": 24984,
"s": 24952,
"text": "Method 1 (Writing our own code)"
},
{
"code": null,
"e": 24988,
"s": 24984,
"text": "C++"
},
{
"code": null,
"e": 24993,
"s": 24988,
"text": "Java"
},
{
"code": null,
"e": 25001,
"s": 24993,
"text": "Python3"
},
{
"code": null,
"e": 25004,
"s": 25001,
"text": "C#"
},
{
"code": null,
"e": 25008,
"s": 25004,
"text": "PHP"
},
{
"code": null,
"e": 25019,
"s": 25008,
"text": "Javascript"
},
{
"code": "// CPP program to find if a string is// suffix of another#include <iostream>#include <string>using namespace std; bool isSuffix(string s1, string s2){ int n1 = s1.length(), n2 = s2.length(); if (n1 > n2) return false; for (int i=0; i<n1; i++) if (s1[n1 - i - 1] != s2[n2 - i - 1]) return false; return true;} int main(){ string s1 = \"geeks\", s2 = \"geeksforgeeks\"; // Test case-sensitive implementation // of endsWith function bool result = isSuffix(s1, s2); if (result) cout << \"Yes\"; else cout << \"No\"; return 0;}",
"e": 25605,
"s": 25019,
"text": null
},
{
"code": "// Java program to find if a string is// suffix of another class GFG{ static boolean isSuffix(String s1, String s2) { int n1 = s1.length(), n2 = s2.length(); if (n1 > n2) return false; for (int i=0; i<n1; i++) if (s1.charAt(n1 - i - 1) != s2.charAt(n2 - i - 1)) return false; return true; } public static void main(String []args) { String s1 = \"geeks\", s2 = \"geeksforgeeks\"; // Test case-sensitive implementation // of endsWith function boolean result = isSuffix(s1, s2); if (result) System.out.println( \"Yes\"); else System.out.println(\"No\"); } } // This code is contributed by iAyushRaj",
"e": 26357,
"s": 25605,
"text": null
},
{
"code": "# Python 3 program to find if a# string is suffix of anotherdef isSuffix(s1, s2): n1 = len(s1) n2 = len(s2) if (n1 > n2): return False for i in range(n1): if(s1[n1 - i - 1] != s2[n2 - i - 1]): return False return True # Driver Codeif __name__ == \"__main__\": s1 = \"geeks\" s2 = \"geeksforgeeks\" # Test case-sensitive implementation # of endsWith function result = isSuffix(s1, s2) if (result): print(\"Yes\") else: print( \"No\") # This code is contributed# by ChitraNayal",
"e": 26910,
"s": 26357,
"text": null
},
{
"code": "// C# program to find if a string is// suffix of another using System;class GFG{ static bool isSuffix(string s1, string s2) { int n1 = s1.Length, n2 = s2.Length; if (n1 > n2) return false; for (int i=0; i<n1; i++) if (s1[n1 - i - 1] != s2[n2 - i - 1]) return false; return true; } public static void Main() { string s1 = \"geeks\", s2 = \"geeksforgeeks\"; // Test case-sensitive implementation // of endsWith function bool result = isSuffix(s1, s2); if (result) Console.WriteLine( \"Yes\"); else Console.WriteLine(\"No\"); } } // This code is contributed by iAyushRaj",
"e": 27634,
"s": 26910,
"text": null
},
{
"code": "<?php// PHP program to find if a// string is suffix of anotherfunction isSuffix($s1, $s2){ $n1 = ($s1); $n2 = strlen($s2); if ($n1 > $n2) return false; for ($i = 0; $i < $n1; $i++) if ($s1[$n1 - $i - 1] != $s2[$n2 - $i - 1]) return false; return true;}// Driver Code$s1 = \"geeks\";$s2 = \"geeksforgeeks\"; // Test case-sensitive implementation// of endsWith function$result = isSuffix($s1, $s2); if ($result) echo \"Yes\";else echo \"No\"; // This code is contributed by m_kit?>",
"e": 28140,
"s": 27634,
"text": null
},
{
"code": "<script> // Javascript program to find if// a string is suffix of anotherfunction isSuffix(s1, s2){ let n1 = s1.length, n2 = s2.length; if (n1 > n2) return false; for(let i = 0; i < n1; i++) if (s1[n1 - i - 1] != s2[n2 - i - 1]) return false; return true;} // Driver codelet s1 = \"geeks\", s2 = \"geeksforgeeks\"; // Test case-sensitive implementation// of endsWith functionlet result = isSuffix(s1, s2); if (result) document.write( \"Yes\");else document.write(\"No\"); // This code is contributed by decode2207 </script>",
"e": 28723,
"s": 28140,
"text": null
},
{
"code": null,
"e": 28727,
"s": 28723,
"text": "Yes"
},
{
"code": null,
"e": 29008,
"s": 28729,
"text": "Method 2 (Using boost library in C++) Since std::string class does not provide any endWith() function in which a string ends with another string so we will be using Boost Library. Make sure to include #include boost/algorithm/string.hpp and #include string to run the code fine."
},
{
"code": null,
"e": 29012,
"s": 29008,
"text": "C++"
},
{
"code": null,
"e": 29017,
"s": 29012,
"text": "Java"
},
{
"code": null,
"e": 29025,
"s": 29017,
"text": "Python3"
},
{
"code": null,
"e": 29028,
"s": 29025,
"text": "C#"
},
{
"code": null,
"e": 29039,
"s": 29028,
"text": "Javascript"
},
{
"code": "// CPP program to find if a string is// suffix of another#include <boost/algorithm/string.hpp>#include <iostream>#include <string>using namespace std; int main(){ string s1 = \"geeks\", s2 = \"geeksforgeeks\"; // Test case-sensitive implementation // of endsWith function bool result = boost::algorithm::ends_with(s2, s1); if (result) cout << \"Yes\"; else cout << \"No\"; return 0;}",
"e": 29453,
"s": 29039,
"text": null
},
{
"code": "// Java program to find if a string is// suffix of anotherclass GFG{ public static void main(String[] args) { String s1 = \"geeks\", s2 = \"geeksforgeeks\"; // Test case-sensitive implementation // of endsWith function boolean result = s2.endsWith(s1); if (result) System.out.println(\"Yes\"); else System.out.println(\"No\"); }} // This code is contributed by 29AjayKumar",
"e": 29898,
"s": 29453,
"text": null
},
{
"code": "# Python3 program to find if a string is# suffix of another if __name__ == '__main__': s1 = \"geeks\"; s2 = \"geeksforgeeks\"; # Test case-sensitive implementation # of endsWith function result = s2.endswith(s1); if (result): print(\"Yes\"); else: print(\"No\"); # This code is contributed by Rajput-Ji",
"e": 30242,
"s": 29898,
"text": null
},
{
"code": "// C# program to find if a string is// suffix of anotherusing System; class GFG{ // Driver code public static void Main(String[] args) { String s1 = \"geeks\", s2 = \"geeksforgeeks\"; // Test case-sensitive implementation // of endsWith function bool result = s2.EndsWith(s1); if (result) Console.WriteLine(\"Yes\"); else Console.WriteLine(\"No\"); }} // This code contributed by Rajput-Ji",
"e": 30706,
"s": 30242,
"text": null
},
{
"code": "<script>// Javascript program to find if a string is// suffix of another let s1 = \"geeks\", s2 = \"geeksforgeeks\"; // Test case-sensitive implementation // of endsWith function let result = s2.endsWith(s1); if (result) document.write(\"Yes\"); else document.write(\"No\"); // This code is contributed by avanitrachhadiya2155</script>",
"e": 31092,
"s": 30706,
"text": null
},
{
"code": null,
"e": 31096,
"s": 31092,
"text": "Yes"
}
]
|
Python Program For QuickSort On Doubly Linked List - GeeksforGeeks | 23 Dec, 2021
Following is a typical recursive implementation of QuickSort for arrays. The implementation uses the last element as the pivot.
Python3
"""A typical recursive implementation of Quicksort for array """ """ This function takes last element as pivot, places the pivot element at its correct position in sorted array, and places all smaller (smaller than pivot) to left of pivot and all greater elements to right of pivot """ """ i --> is the first index in the array x --> is the last index in the array tmp --> is a temporary variable for swapping values (integer)"""# array arr, integer l, integer hdef partition (arr, l, h): x = arr[h] i = (l - 1) for j in range(l, h): if (arr[j] <= x): i +=1 tmp = arr[i] arr[i] = arr[j] arr[j] = tmp tmp = arr[i + 1] arr[i + 1] = arr[h] arr[h] = tmp return(i + 1) """A --> Array to be sorted,l --> Starting index, h --> Ending index""" # array A, integer l, integer hdef quickSort(A, l, h): if (l < h): p = partition(A, l, h) # pivot index quickSort(A, l, p - 1) # left quickSort(A, p + 1, h) # right # This code is contributed by humphreykibet.
Can we use the same algorithm for a linked list? Following is a Python implementation for the doubly linked list. The idea is simple: we first find a pointer to the last node. Once we have a pointer to the last node, we can recursively sort the linked list using pointers to the first and last nodes of the list, similar to the above recursive function where we pass indexes of the first and last array elements. The partition function for a linked list is also similar to partition for arrays. Instead of returning the index of the pivot element, it returns a pointer to the pivot element. In the following implementation, quickSort() is just a wrapper function; the main recursive function is _quickSort(), which is similar to quickSort() for the array implementation.
Python3
# A Python program to sort a linked list using Quicksort
head = None

# a node of the doubly linked list
class Node:
    def __init__(self, d):
        self.data = d
        self.next = None
        self.prev = None

# A utility function to find last node of linked list
def lastNode(node):
    while(node.next != None):
        node = node.next;
    return node;

# Considers last element as pivot, places the pivot element at its
# correct position in sorted array, and places all smaller (smaller than
# pivot) to left of pivot and all greater elements to right of pivot
def partition(l, h):
    # set pivot as h element
    x = h.data;

    # similar to i = l-1 for array implementation
    i = l.prev;
    j = l

    # Similar to "for (int j = l; j <= h- 1; j++)"
    while(j != h):
        if(j.data <= x):
            # Similar to i++ for array
            i = l if(i == None) else i.next;
            temp = i.data;
            i.data = j.data;
            j.data = temp;
        j = j.next
    i = l if (i == None) else i.next;  # Similar to i++
    temp = i.data;
    i.data = h.data;
    h.data = temp;
    return i;

# A recursive implementation of quicksort for linked list
def _quickSort(l, h):
    if(h != None and l != h and l != h.next):
        temp = partition(l, h);
        _quickSort(l, temp.prev);
        _quickSort(temp.next, h);

# The main function to sort a linked list. It mainly calls _quickSort()
def quickSort(node):
    # Find last node
    head = lastNode(node);

    # Call the recursive QuickSort
    _quickSort(node, head);

# A utility function to print contents of arr
def printList(head):
    while(head != None):
        print(head.data, end=" ");
        head = head.next;

# Function to insert a node at the beginning of the Doubly Linked List
def push(new_Data):
    global head;
    new_Node = Node(new_Data);  # allocate node

    # if head is null, head = new_Node
    if(head == None):
        head = new_Node;
        return;

    # link the old list off the new node
    new_Node.next = head;

    # change prev of head node to new node
    head.prev = new_Node;

    # since we are adding at the beginning, prev is always NULL
    new_Node.prev = None;

    # move the head to point to the new node
    head = new_Node;

# Driver program to test above function
push(5);
push(20);
push(4);
push(3);
push(30);

print("Linked List before sorting ");
printList(head);
print("Linked List after sorting");
quickSort(head);
printList(head);

# This code is contributed by _saurabh_jaiswal
Output :
Linked List before sorting
30 3 4 20 5
Linked List after sorting
3 4 5 20 30
Time Complexity: The time complexity of the above implementation is the same as the time complexity of QuickSort() for arrays. It takes O(n^2) time in the worst case and O(n Log n) in average and best cases. The worst case occurs when the linked list is already sorted.

Can we implement random quicksort for a linked list? Quicksort can be implemented for a linked list only when we can pick a fixed point as the pivot (like the last element in the above implementation). Random QuickSort cannot be efficiently implemented for linked lists, because a linked list offers no constant-time random access; reaching a randomly chosen pivot node requires walking from the head, which takes linear time.
Please refer complete article on QuickSort on Doubly Linked List for more details!
Python | Convert a list to dictionary | [
{
"code": null,
"e": 24854,
"s": 24826,
"text": "\n23 Dec, 2021"
},
{
"code": null,
"e": 24975,
"s": 24854,
"text": "Following is a typical recursive implementation of QuickSort for arrays. The implementation uses last element as pivot. "
},
{
"code": null,
"e": 24983,
"s": 24975,
"text": "Python3"
},
{
"code": "\"\"\"A typical recursive implementation of Quicksort for array \"\"\" \"\"\" This function takes last element as pivot, places the pivot element at its correct position in sorted array, and places all smaller (smaller than pivot) to left of pivot and all greater elements to right of pivot \"\"\" \"\"\" i --> is the first index in the array x --> is the last index in the array tmp --> is a temporary variable for swapping values (integer)\"\"\"# array arr, integer l, integer hdef partition (arr, l, h): x = arr[h] i = (l - 1) for j in range(l, h): if (arr[j] <= x): i +=1 tmp = arr[i] arr[i] = arr[j] arr[j] = tmp tmp = arr[i + 1] arr[i + 1] = arr[h] arr[h] = tmp return(i + 1) \"\"\"A --> Array to be sorted,l --> Starting index, h --> Ending index\"\"\" # array A, integer l, integer hdef quickSort(A, l, h): if (l < h): p = partition(A, l, h) # pivot index quickSort(A, l, p - 1) # left quickSort(A, p + 1, h) # right # This code is contributed by humphreykibet.",
"e": 26048,
"s": 24983,
"text": null
},
{
"code": null,
"e": 26803,
"s": 26048,
"text": "Can we use the same algorithm for Linked List? Following is C++ implementation for the doubly linked list. The idea is simple, we first find out pointer to the last node. Once we have a pointer to the last node, we can recursively sort the linked list using pointers to first and last nodes of a linked list, similar to the above recursive function where we pass indexes of first and last array elements. The partition function for a linked list is also similar to partition for arrays. Instead of returning index of the pivot element, it returns a pointer to the pivot element. In the following implementation, quickSort() is just a wrapper function, the main recursive function is _quickSort() which is similar to quickSort() for array implementation. "
},
{
"code": null,
"e": 26811,
"s": 26803,
"text": "Python3"
},
{
"code": "# A Python program to sort a linked list using Quicksorthead = None # a node of the doubly linked listclass Node: def __init__(self, d): self.data = d self.next = None self.prev = None # A utility function to find last node of linked listdef lastNode(node): while(node.next != None): node = node.next; return node; # Considers last element as pivot, places the pivot element at its# correct position in sorted array, and places all smaller (smaller than# pivot) to left of pivot and all greater elements to right of pivot def partition(l, h): # set pivot as h element x = h.data; # similar to i = l-1 for array implementation i = l.prev; j = l # Similar to \"for (int j = l; j <= h- 1; j++)\" while(j != h): if(j.data <= x): # Similar to i++ for array i = l if(i == None) else i.next; temp = i.data; i.data = j.data; j.data = temp; j = j.next i = l if (i == None) else i.next; # Similar to i++ temp = i.data; i.data = h.data; h.data = temp; return i; # A recursive implementation of quicksort for linked list def _quickSort(l,h): if(h != None and l != h and l != h.next): temp = partition(l, h); _quickSort(l,temp.prev); _quickSort(temp.next, h); # The main function to sort a linked list. It mainly calls _quickSort()def quickSort(node): # Find last node head = lastNode(node); # Call the recursive QuickSort _quickSort(node,head); # A utility function to print contents of arrdef printList(head): while(head != None): print(head.data, end=\" \"); head = head.next; # Function to insert a node at the beginning of the Doubly Linked List def push(new_Data): global head; new_Node = Node(new_Data); # allocate node # if head is null, head = new_Node if(head == None): head = new_Node; return; # link the old list off the new node new_Node.next = head; # change prev of head node to new node head.prev = new_Node; # since we are adding at the beginning, prev is always NULL new_Node.prev = None; # move the head to point to the new node head = new_Node; # Driver program to test above function push(5);push(20);push(4);push(3);push(30); print(\"Linked List before sorting \");printList(head);print(\"Linked List after sorting\");quickSort(head);printList(head); # This code is contributed by _saurabh_jaiswal",
"e": 29544,
"s": 26811,
"text": null
},
{
"code": null,
"e": 29553,
"s": 29544,
"text": "Output :"
},
{
"code": null,
"e": 29638,
"s": 29553,
"text": "Linked List before sorting\n30 3 4 20 5\nLinked List after sorting\n3 4 5 20 30"
},
{
"code": null,
"e": 30186,
"s": 29638,
"text": "Time Complexity: Time complexity of the above implementation is same as time complexity of QuickSort() for arrays. It takes O(n^2) time in the worst case and O(nLogn) in average and best cases. The worst case occurs when the linked list is already sorted.Can we implement random quicksort for a linked list? Quicksort can be implemented for Linked List only when we can pick a fixed point as the pivot (like the last element in the above implementation). Random QuickSort cannot be efficiently implemented for Linked Lists by picking random pivot."
},
{
"code": null,
"e": 30269,
"s": 30186,
"text": "Please refer complete article on QuickSort on Doubly Linked List for more details!"
}
]
|
Style input type button with CSS | The input type button can be a submit button or reset button. With CSS, we can style any button on a web page.
You can try to run the following code to style input type button:
Live Demo
<!DOCTYPE html>
<html>
<head>
<style>
input[type=button] {
background-color: orange;
border: none;
text-decoration: none;
color: white;
padding: 20px 20px;
margin: 20px 20px;
cursor: pointer;
}
</style>
</head>
<body>
<p>Fill the below form,</p>
<form>
<label for = "subject">Subject</label>
<input type = "text" id = "subject" name = "sub"><br><br>
<label for = "student">Student</label>
<input type = "text" id = "student" name = "stu"><br>
<input type = "button" value = "Button">
</form>
</body>
</html> | [
{
"code": null,
"e": 1173,
"s": 1062,
"text": "The input type button can be a submit button or reset button. With CSS, we can style any button on a web page."
},
{
"code": null,
"e": 1239,
"s": 1173,
"text": "You can try to run the following code to style input type button:"
},
{
"code": null,
"e": 1249,
"s": 1239,
"text": "Live Demo"
},
{
"code": null,
"e": 1946,
"s": 1249,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <style>\n input[type=button] {\n background-color: orange;\n border: none;\n text-decoration: none;\n color: white;\n padding: 20px 20px;\n margin: 20px 20px;\n cursor: pointer;\n }\n </style>\n </head>\n <body>\n <p>Fill the below form,</p>\n <form>\n <label for = \"subject\">Subject</label>\n <input type = \"text\" id = \"subject\" name = \"sub\"><br><br>\n <label for = \"student\">Student</label>\n <input type = \"text\" id = \"student\" name = \"stu\"><br>\n <input type = \"button\" value = \"Button\">\n </form>\n </body>\n</html>"
}
]
|
How to run TestNG from command line? | TestNG allows to run the test suites from the command line (cmd). Here's a set of prerequisites that must be fulfilled in order to run a test suite from the command line −
testng.xml file should be created to define the test suites and the testing classes to execute.
All dependent jars should be available inside a project folder. It includes testng.jar, jcommander.jar and any other jars used in the test cases.
Path of bin or out folder where the .class files are stored after the compilation.
Step 1 − Create different testing classes having different @Test methods.
Step 2 − Compile the class; it will create an out folder in IntelliJ and bin folder in Eclipse.
Step 3 − Place all the jar files in the lib folder.
Step 4 − Now create the testng.xml as given below.
Step 5 − Open the cmd.
Step 6 − Navigate to the project path using cd <project_path>
Step 7 − Run the following command −
java -cp <path of lib>; <path of out or bin folder>
org.testng.TestNG <path of testng>/testng.xml
The following code demonstrates how to run TestNG from the command line −
import org.testng.annotations.*;
import org.testng.annotations.Test;
public class OrderofTestExecutionInTestNG {
// test case 1
@Test
public void testCase1() {
System.out.println("in test case 1");
}
// test case 2
@Test
public void testCase2() {
System.out.println("in test case 2");
}
@BeforeMethod
public void beforeMethod() {
System.out.println("in beforeMethod");
}
@AfterMethod
public void afterMethod() {
System.out.println("in afterMethod");
}
@BeforeClass
public void beforeClass() {
System.out.println("in beforeClass");
}
@AfterClass
public void afterClass() {
System.out.println("in afterClass");
}
@BeforeTest
public void beforeTest() {
System.out.println("in beforeTest");
}
@AfterTest
public void afterTest() {
System.out.println("in afterTest");
}
@BeforeSuite
public void beforeSuite() {
System.out.println("in beforeSuite");
}
@AfterSuite
public void afterSuite() {
System.out.println("in afterSuite");
}
}
This is a configuration file that is used to organize and run the TestNG test cases. It is very handy when only a limited set of tests needs to be executed rather than the full suite.
<?xml version = "1.0" encoding = "UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd" >
<suite name = "Suite1">
<test name = "test1">
<classes>
<class name = "OrderofTestExecutionInTestNG"/>
</classes>
</test>
</suite>
java -cp
C:\Users\********\IdeaProjects\TestNGProject\lib\*;C:\Users\***
*****\IdeaProjects\TestNGProjectct\out\production\TestNGProject
org.testng.TestNG src/testng.xml
If the user didn't navigate to the testing project path using cd <project path>, then the complete path can be provided in the command as shown above. But, if the user is already in the testing project path, then the command can be modified as follows −
java -cp .\lib\*;.\out\production\TestNGProject
org.testng.TestNG src\testng.xml
in beforeSuite
in beforeTest
in beforeClass
in beforeMethod
in test case 1
in afterMethod
in beforeMethod
in test case 2
in afterMethod
in afterClass
in afterTest
in afterSuite
===============================================
Suite1
Total tests run: 2, Passes: 2, Failures: 0, Skips: 0
=============================================== | [
{
"code": null,
"e": 1234,
"s": 1062,
"text": "TestNG allows to run the test suites from the command line (cmd). Here's a set of prerequisites that must be fulfilled in order to run a test suite from the command line −"
},
{
"code": null,
"e": 1330,
"s": 1234,
"text": "testng.xml file should be created to define the test suites and the testing classes to execute."
},
{
"code": null,
"e": 1573,
"s": 1426,
"text": "All dependent jars should be available inside a project folder. It includes testing.jar, jcommander.jar and any other jars used in the test cases."
},
{
"code": null,
"e": 1803,
"s": 1720,
"text": "Path of bin or out folder where the .class files are stored after the compilation."
},
{
"code": null,
"e": 1959,
"s": 1886,
"text": "Step 1 − Create different testing classes having different @Test methods"
},
{
"code": null,
"e": 2128,
"s": 2032,
"text": "Step 2 − Compile the class; it will create an out folder in IntelliJ and bin folder in Eclipse."
},
{
"code": null,
"e": 2276,
"s": 2224,
"text": "Step 3 − Place all the jar files in the lib folder."
},
{
"code": null,
"e": 2379,
"s": 2328,
"text": "Step 4 − Now create the testng.xml as given below."
},
{
"code": null,
"e": 2453,
"s": 2430,
"text": "Step 5 − Open the cmd."
},
{
"code": null,
"e": 2538,
"s": 2476,
"text": "Step 6 − Navigate to the project path using cd <project_path>"
},
{
"code": null,
"e": 2636,
"s": 2600,
"text": "Step 7 − Run the following command−"
},
{
"code": null,
"e": 2770,
"s": 2672,
"text": "java -cp <path of lib>; <path of out or bin folder>\norg.testng.TestNG <path of testng>/testng.xml"
},
{
"code": null,
"e": 2844,
"s": 2770,
"text": "The following code demonstrates how to run TestNG from the command line −"
},
{
"code": null,
"e": 3925,
"s": 2844,
"text": "import org.testng.annotations.*;\nimport org.testng.annotations.Test;\npublic class OrderofTestExecutionInTestNG {\n // test case 1\n @Test\n public void testCase1() {\n System.out.println(\"in test case 1\");\n }\n // test case 2\n @Test\n public void testCase2() {\n System.out.println(\"in test case 2\");\n }\n @BeforeMethod\n public void beforeMethod() {\n System.out.println(\"in beforeMethod\");\n }\n @AfterMethod\n public void afterMethod() {\n System.out.println(\"in afterMethod\");\n }\n @BeforeClass\n public void beforeClass() {\n System.out.println(\"in beforeClass\");\n }\n @AfterClass\n public void afterClass() {\n System.out.println(\"in afterClass\");\n }\n @BeforeTest\n public void beforeTest() {\n System.out.println(\"in beforeTest\");\n }\n @AfterTest\n public void afterTest() {\n System.out.println(\"in afterTest\");\n }\n @BeforeSuite\n public void beforeSuite() {\n System.out.println(\"in beforeSuite\");\n }\n @AfterSuite\n public void afterSuite() {\n System.out.println(\"in afterSuite\");\n }\n}"
},
{
"code": null,
"e": 4096,
"s": 3925,
"text": "This is a configuration file that is used to organize and run the TestNG test cases. It is very handy when limited tests are needed to execute rather than the full suite."
},
{
"code": null,
"e": 4358,
"s": 4096,
"text": "<?xml version = \"1.0\" encoding = \"UTF-8\"?>\n<!DOCTYPE suite SYSTEM \"http://testng.org/testng-1.0.dtd\" >\n\n<suite name = \"Suite1\">\n <test name = \"test1\">\n <classes>\n <class name = \"OrderofTestExecutionInTestNG\"/>\n </classes>\n </test>\n</suite>"
},
{
"code": null,
"e": 4528,
"s": 4358,
"text": "java -cp\nC:\\Users\\********\\IdeaProjects\\TestNGProject\\lib\\*;C:\\Users\\***\n*****\\IdeaProjects\\TestNGProjectct\\out\\production\\TestNGProject\norg.testng.TestNG src/testng.xml"
},
{
"code": null,
"e": 4782,
"s": 4528,
"text": "If the user didn't navigate to the testing project path using cd <project path>, then the complete path can be provided in the command as shown above. But, if the user is already in the testing project path, then the command can be modified as follows −"
},
{
"code": null,
"e": 4863,
"s": 4782,
"text": "java -cp .\\lib\\*;.\\out\\production\\TestNGProject\norg.testng.TestNG src\\testng.xml"
},
{
"code": null,
"e": 5196,
"s": 4863,
"text": "in beforeSuite\nin beforeTest\nin beforeClass\nin beforeMethod\nin test case 1\nin afterMethod\nin beforeMethod\nin test case 2\nin afterMethod\nin afterClass\nin afterTest\nin afterSuite\n===============================================\nSuite1\nTotal tests run: 2, Passes: 2, Failures: 0, Skips: 0\n==============================================="
}
]
|
Peewee - Update Existing Records | Existing data can be modified by calling save() method on model instance as well as with update() class method.
Following example fetches a row from User table with the help of get() method and updates it by changing the value of age field.
row=User.get(User.name=="Amar")
print ("name: {} age: {}".format(row.name, row.age))
row.age=25
row.save()
The update() method of the Model class generates an UPDATE query. The query object’s execute() method is then invoked.
Following example uses update() method to change the age column of rows in which it is >20.
qry=User.update({User.age:25}).where(User.age>20)
print (qry.sql())
qry.execute()
The SQL query rendered by update() method is as follows −
('UPDATE "User" SET "age" = ? WHERE ("User"."age" > ?)', [25, 20])
Peewee also has a bulk_update() method to help update multiple model instances in a single query operation. The method requires the model objects to be updated and a list of fields to be updated.
Following example updates the age field of specified rows by new value.
rows=User.select()
rows[0].age=25
rows[2].age=23
User.bulk_update([rows[0], rows[2]], fields=[User.age])
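For large numbers of rows, bulk_update() also accepts an optional batch_size argument, which splits the work across several smaller queries. A minimal sketch (the rows list is assumed to hold the already-modified User instances from above):

User.bulk_update(rows, fields=[User.age], batch_size=50)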
| [
{
"code": null,
"e": 2496,
"s": 2384,
"text": "Existing data can be modified by calling save() method on model instance as well as with update() class method."
},
{
"code": null,
"e": 2625,
"s": 2496,
"text": "Following example fetches a row from User table with the help of get() method and updates it by changing the value of age field."
},
{
"code": null,
"e": 2732,
"s": 2625,
"text": "row=User.get(User.name==\"Amar\")\nprint (\"name: {} age: {}\".format(row.name, row.age))\nrow.age=25\nrow.save()"
},
{
"code": null,
"e": 2845,
"s": 2732,
"text": "The update() method of Method class generates UPDATE query. The query object’s execute() method is then invoked."
},
{
"code": null,
"e": 2937,
"s": 2845,
"text": "Following example uses update() method to change the age column of rows in which it is >20."
},
{
"code": null,
"e": 3019,
"s": 2937,
"text": "qry=User.update({User.age:25}).where(User.age>20)\nprint (qry.sql())\nqry.execute()"
},
{
"code": null,
"e": 3077,
"s": 3019,
"text": "The SQL query rendered by update() method is as follows −"
},
{
"code": null,
"e": 3144,
"s": 3077,
"text": "('UPDATE \"User\" SET \"age\" = ? WHERE (\"User\".\"age\" > ?)', [25, 20])"
},
{
"code": null,
"e": 3333,
"s": 3144,
"text": "Peewee also has a bulk_update() method to help update multiple model instance in a single query operation. The method requires model objects to be updated and list of fields to be updated."
},
{
"code": null,
"e": 3405,
"s": 3333,
"text": "Following example updates the age field of specified rows by new value."
},
{
"code": null,
"e": 3510,
"s": 3405,
"text": "rows=User.select()\nrows[0].age=25\nrows[2].age=23\nUser.bulk_update([rows[0], rows[2]], fields=[User.age])"
}
]
|
Market Segmentation with R (PCA & K-means Clustering) — Part 1 | by Rebecca Yiu | Towards Data Science | For those who are new to the marketing field, here’s a convenient Wikipedia-style explanation: market segmentation is a process used in marketing to divide customers into different groups (also called segments) according to their characteristics (demographics, shopping behavior, preference, etc.) Customers in the same market segment tend to respond to marketing strategy similarly. Therefore, the segmentation process can help companies understand their customer groups, target the right groups, and tailor effective marketing strategies for different target groups.
This article will demonstrate the process of a data science approach to market segmentation, with a sample survey dataset using R. In this example, ABC company, a portable phone charger maker, wants to understand its market segments, so it collects data from portable charger users through a survey study. The survey questions consist of four types: 1) Attitudinal 2) Demographic 3) Purchase process & Usage behavior 4) Brand Awareness. In this case, we will only work with the attitudinal data for segmenting. In reality, decision-makers choose different types of input variables (demographic, geographic, behavioral, etc.) for segmentation based on their individual cases. Nonetheless, the idea is the same regardless of which inputs you choose!
(Note: Thomas W. Miller raised a good point about using sales transaction data as inputs for segmentation in his book Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python. In short, he warns against segmenting with sales transaction data because information about sales is only available for current customers. When you have a new customer, it’s hard to utilize the insights you obtained without any of his/her sales data.)
Before we dive into the methods and models, remember that as a responsible data analyst, always understand your data first!
# Importing and checking Data
raw <- read.csv("Chargers.csv")
str(raw)
head(raw)
Each row in our data represents a respondent and each column represents his/her answer for the corresponding survey question. There are 2,500 respondents and 24 attitudinal questions. All are rating questions that ask the respondents about their opinions towards a given statement. Answers are in the 1–5 range. Here’s an example:
Please indicate how much you agree or disagree with the following statements (1 = strongly disagree, 5 = strongly agree).
I value style the most when it comes to purchasing a portable phone charger.
...
Understanding the nature of the questions, we can next move on to verify the data in our dataset. Writing a simple function sometimes does the trick:
# Verifying Data
library(psych)                  # describe() comes from the psych package
describe(raw)
colSums(is.na(raw))             # Checking NAs
table(unlist(raw[,]) %in% 1:5)  # Simple Test: all answers should fall in 1-5
The validate package in R is also a handy tool for verifying data. It allows you to test your data against a set of rules you create. However, I find it not the most convenient when it comes to dealing with large datasets. I am still looking into alternative methods (preferably systems) that effectively verify data quality. I will greatly appreciate any suggestions.
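For completeness, here is a minimal sketch of what a validate rule set could look like. Note that the column names Q1 and Q24 are assumptions about how the survey questions are labeled in this dataset:

library(validate)

# Hypothetical rules: every rating answer must fall in the 1-5 range
rules <- validator(Q1 >= 1, Q1 <= 5, Q24 >= 1, Q24 <= 5)
summary(confront(raw, rules))   # per-rule pass/fail counts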
Now that we have validated our data and we are confident about’em, let’s move on to the more fun stuff!
The term “dimension reduction” used to freak me out. However, it is not as complicated as it sounds: it’s simply the process of extracting the essence from a myriad of data, so the new, smaller dataset can represent the unique features of the original data without losing too much useful information. Think of it as Picasso’s Cubism paintings where he elegantly captures the essence of an object with a few lines and shapes, forgoing many details. For me, I always like to think of his Guitar. If you have another artwork in mind, please COMMENT!!
PCA is a form of dimension reduction. This video by StatQuest (shout out to my favorite Statistics/Data Science video channel) explains the concept quite intuitively. I strongly recommend you watch this video if this is the first time you hear of PCA. In short, PCA allows you to take a dataset with a high number of dimensions and compresses it to a dataset with fewer dimensions, which still captures most variance within the original data.
Why is PCA helpful to divide customers into different groups, you ask? Imagine that you need to separate customers based on their answers to these survey questions. The first problem you encounter is how to differentiate them based on their inputs on 24 variables. Sure, you can try to come up with a few main themes to summarize these questions, and assign each respondent a “score” for each theme, then group them based on the scores. But how can you be SURE that the themes you propose are truly effective in dividing people? How do you decide what weight to give each question? Furthermore, what will you do if you have 5000 variables instead of 24? A human brain simply can’t operate with that much information in a short period of time. At least my brain can’t for sure.
This is where PCA can step in and do the task for you. Performing PCA on our data, R can transform the correlated 24 variables into a smaller number of uncorrelated variables called the principal components. With the smaller, compressed set of variables, we can perform further computation with ease, and we can investigate some hidden patterns within the data that was hard to discover at first.
When there are abundant literature/videos/articles out there that provide thorough explanations of PCA, I hope to present a few high-level points about PCA for people who find materials out there too technical:
Variability makes data useful. Imagine a dataset with 10,000 uniform values. It does not tell you much, and it’s boring. 😑
Again, PCA’s function is to create a smaller subset of variables (principal components) that capture the variability within the original, much larger dataset.
Each principal component is a linear combination of the initial variables.
Each principal component is orthogonal to the others. That means they are uncorrelated.
The first principal component (PC1) captures most variability within the data. The second principal component (PC2) captures the second most. The third principal components (PC3) captures the third most...and so on
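These properties are easy to verify empirically. A quick sanity check on R's built-in mtcars dataset (a stand-in, not our survey data) shows that the principal component scores indeed come out uncorrelated:

# PC scores of a toy dataset are pairwise uncorrelated
p <- prcomp(mtcars, center = TRUE, scale. = TRUE)
round(cor(p$x), 3)   # off-diagonal entries are (near) zero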
In addition, here are a couple of terms you should know if you are planning to run PCA for your project:
Loading describes the relationship between the original variables and the new principal component. Specifically, it describes the weight given to an original variable when calculating a new principal component.
Score describes the relationship between the original data and the newly generated axis. In other words, score is the new value for a data row in the principal component space.
Proportion of Variance indicates the share of the total data variability each principal component accounts for. It is often used with Cumulative Proportion to evaluate the usefulness of a principal component.
Cumulative Proportion represents the cumulative proportion of variance explained by consecutive principal components. The cumulative proportion explained by all principal components equals 1 (100% of data variability are explained).
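In R, each of these quantities can be read straight off a prcomp fit. A small sketch, again using mtcars as a stand-in:

p <- prcomp(mtcars, center = TRUE, scale. = TRUE)
p$rotation              # loadings: weight of each original variable in each PC
head(p$x)               # scores: each row's coordinates in PC space
summary(p)$importance   # Proportion of Variance and Cumulative Proportion per PC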
Before you run a PCA, you should take a look at your data correlation. If your data is not highly correlated, you might not need a PCA at all!
# Creating a correlation plot
library(ggcorrplot)
cormat <- round(cor(raw), 2)
ggcorrplot(cormat, hc.order = TRUE, type = "lower", outline.color = "white")
As the graph shows, our variables are quite correlated. We can proceed to PCA happily ✌.️
# PCA
pr_out <- prcomp(raw, center = TRUE, scale. = TRUE)  # Scaling data before PCA is usually advisable!
summary(pr_out)
There are 24 new principal components because we had 24 variables in the first place. The first principal component accounts for 28% of the data variance. The second principal component accounts for 8.8%. The third accounts for 7.6%...We can use a scree plot to visualize this:
# Screeplot
pr_var <- pr_out$sdev ^ 2
pve <- pr_var / sum(pr_var)
plot(pve, xlab = "Principal Component",
     ylab = "Proportion of Variance Explained",
     ylim = c(0, 1), type = 'b')
X-axis describes the number of principal component(s), and y-axis describes the proportion of variance explained (PVE) by each. The variance explained drastically decreases after PC2. This spot is often called an elbow point, indicating the number of PCs that should be used for the analysis.
# Cumulative PVE plot
plot(cumsum(pve), xlab = "Principal Component",
     ylab = "Cumulative Proportion of Variance Explained",
     ylim = c(0, 1), type = 'b')
If we choose only 2 principal components, they will yield less than 40% of the total variance in data. This number is perhaps not enough.
Another rule of choosing the number of PCs is to choose PCs with eigenvalues higher than 1. This is called the Kaiser rule, and it is controversial. You can find many debates on this topic online.
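Both checks are one-liners in R, reusing the pr_out fit and the pve vector computed above:

# Kaiser rule: eigenvalues are the squared standard deviations of the PCs
which(pr_out$sdev ^ 2 > 1)   # PCs with eigenvalue > 1

# Cumulative variance explained by the first 2 vs. the first 5 PCs
cumsum(pve)[c(2, 5)]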
Basically, there isn’t a single best way to decide the best number of PCs. People use PCA for different purposes, and it is always important to think about what you want to get out of your PCA analysis before making the decision. In our case, since we are using PCA to determine meaningful and actionable market segmentation, one criterion we should definitely consider is whether the PCs we decide on make sense in the real-world and business settings.
Let’s pick the first 5 PCs for now, since 5 components are not too hard to work with, and it follows the Kaiser rule.
Next, we want to make meaning out of these PCs. Remember that loadings describe the weights given to each raw variable in calculating the new principal component? They are key to helping us interpret the PCA results. Since working directly with the PCA loadings can be tricky and confusing, we can rotate these loadings to make interpretation easier.
There are multiple rotation methods out there, and we will use a method called “varimax”. (Note, this step of rotation is NOT a part of the PCA. It simply helps to interpret our results. Here is a good thread on the topic.)
# Rotate loadings
rot_loading <- varimax(pr_out$rotation[, 1:5])
rot_loading
Here’s an incomplete portion of the varimax-rotated loadings up to Q12. The numbers in the table correspond to the relationships between our questions (raw variables) and the selected components. If the number is positive, the variable contributes positively to the component. If it’s negative, they are negatively related. The larger the number, the stronger the relationship.
With these loadings, we can refer back to our questionnaire to get some ideas about what each PC is about. Let’s look at PC1, for example. I noticed that Q10, Q3 & Q7 negatively contribute to PC1. On the other hand, I see that Q8 & Q11 positively contribute to PC1. Checking the questionnaire, I realized that Q10, Q3 & Q7 are questions related to the style of the charger, while Q8 & Q11 focus on the functionality of the product. Therefore, we can draw a tentative conclusion that PC1 describes people’s preference for the product’s functionality. It makes sense that people who value functionality more might not care too much about style.
Then, you can move on to PC2 and follow the same procedure to interpret each PC. I will not go through the complete process here, and I hope you got the idea. Once you go through all PCs and feel like each describes unique, logically-coherent traits, and you believe they make business sense, you’re good for the next step. However, if you feel like some information is missing or is repetitive within the PCs, you can consider going back and including more PCs, or you can eliminate some. You might have to go through several iterations until you get a satisfying result.
Just kidding. But you are halfway there. You’ve walked through the process of compressing a large dataset to a smaller one with a few variables that can help you identify different customer groups out there using PCA. In the next post, I will introduce how to segment our customers based on the PCs we obtained using a clustering method.
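If you want to follow along into that next step, the inputs will be the component scores rather than the loadings. Here is a minimal sketch, assuming you keep the first 5 PCs as above, that pulls them out of the prcomp object:

# Extracting the scores: each respondent's coordinates
# on the first 5 principal components
scores <- pr_out$x[, 1:5]
head(scores)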
Lastly, #HappyInternationalWomensDay to all the amazing superwomen out there 👯👧 💁 👭!
Thanks for reading! 💚 Feel free to connect with me on LinkedIn! | [
{
"code": null,
"e": 740,
"s": 171,
"text": "For those who are new to the marketing field, here’s a convenient Wikipedia-style explanation: market segmentation is a process used in marketing to divide customers into different groups (also called segments) according to their characteristics (demographics, shopping behavior, preference, etc.) Customers in the same market segment tend to respond to marketing strategy similarly. Therefore, the segmentation process can help companies understand their customer groups, target the right groups, and tailor effective marketing strategies for different target groups."
},
{
"code": null,
"e": 1488,
"s": 740,
"text": "This article will demonstrate the process of a data science approach to market segmentation, with a sample survey dataset using R. In this example, ABC company, a portable phone charger maker, wants to understand its market segments, so it collects data from portable charger users through a survey study. The survey questions consist of four types: 1) Attitudinal 2) Demographic 3) Purchase process & Usage behavior 4) Brand Awareness. In this case, we will only work with the attitudinal data for segmenting. In reality, decision-makers choose different types of input variables (demographic, geographic, behavioral, etc.) for segmentation based on their individual cases. Nonetheless, the idea is the same regardless of which inputs you choose!"
},
{
"code": null,
"e": 1945,
"s": 1488,
"text": "(Note: Thomas W. Miller raised a good point about using sales transaction data as inputs for segmentation in his book Marketing Data Science: Modeling Techniques in Predictive Analytics with R and Python. In short, he warns against segmenting with sales transaction data because information about sales is only available for current customers. When you have a new customer, it’s hard to utilize the insights you obtained without any of his/her sales data.)"
},
{
"code": null,
"e": 2069,
"s": 1945,
"text": "Before we dive into the methods and models, remember that as a responsible data analyst, always understand your data first!"
},
{
"code": null,
"e": 2147,
"s": 2069,
"text": "# Importing and checking Dataraw <- read.csv(“Chargers.csv”)str(raw)head(raw)"
},
{
"code": null,
"e": 2478,
"s": 2147,
"text": "Each row in our data represents a respondent and each column represents his/her answer for the corresponding survey question. There are 2,500 respondents and 24 attitudinal questions. All are rating questions that ask the respondents about their opinions towards a given statement. Answers are in the 1–5 range. Here’s an example:"
},
{
"code": null,
"e": 2599,
"s": 2478,
"text": "Please indicate how much you agree or disagre with the following statements (1 = strongly disagree, 5 = strongly agree)."
},
{
"code": null,
"e": 2676,
"s": 2599,
"text": "I value style the most when it comes to purchasing a portable phone charger."
},
{
"code": null,
"e": 2680,
"s": 2676,
"text": "..."
},
{
"code": null,
"e": 2830,
"s": 2680,
"text": "Understanding the nature of the questions, we can next move on to verify the data in our dataset. Writing a simple function sometimes does the trick:"
},
{
"code": null,
"e": 2938,
"s": 2830,
"text": "# Verifying Data describe(raw)colSums(is.na(raw)) #Checking NAs table(unlist(raw[,]) %in% 1:5) #Simple Test"
},
{
"code": null,
"e": 3307,
"s": 2938,
"text": "The validate package in R is also a handy tool for verifying data. It allows you to test your data against a set of rules you create. However, I find it not the most convenient when it comes to dealing with large datasets. I am still looking into alternative methods (preferably systems) that effectively verify data quality. I will greatly appreciate any suggestions."
},
{
"code": null,
"e": 3411,
"s": 3307,
"text": "Now that we have validated our data and we are confident about’em, let’s move on to the more fun stuff!"
},
{
"code": null,
"e": 3959,
"s": 3411,
"text": "The term “dimension reduction” used to freak me out. However, it is not as complicated as it sounds: it’s simply the process of extracting the essence from a myriad of data, so the new, smaller dataset can represent the unique features of the original data without losing too much useful information. Think of it as Picasso’s Cubism paintings where he elegantly captures the essence of an object with a few lines and shapes, forgoing many details. For me, I always like to think of his Guitar. If you have another artwork in mind, please COMMENT!!"
},
{
"code": null,
"e": 4402,
"s": 3959,
"text": "PCA is a form of dimension reduction. This video by StatQuest (shout out to my favorite Statistics/Data Science video channel) explains the concept quite intuitively. I strongly recommend you watch this video if this is the first time you hear of PCA. In short, PCA allows you to take a dataset with a high number of dimensions and compresses it to a dataset with fewer dimensions, which still captures most variance within the original data."
},
{
"code": null,
"e": 5179,
"s": 4402,
"text": "Why is PCA helpful to divide customers into different groups, you ask? Imagine that you need to separate customers based on their answers to these survey questions. The first problem you encounter is how to differentiate them based on their inputs on 24 variables. Sure, you can try to come up with a few main themes to summarize these questions, and assign each respondent a “score” for each theme, then group them based on the scores. But how can you be SURE that the themes you propose are truly effective in dividing people? How do you decide what weight to give each question? Furthermore, what will you do if you have 5000 variables instead of 24? A human brain simply can’t operate with that much information in a short period of time. At least my brain can’t for sure."
},
{
"code": null,
"e": 5576,
"s": 5179,
"text": "This is where PCA can step in and do the task for you. Performing PCA on our data, R can transform the correlated 24 variables into a smaller number of uncorrelated variables called the principal components. With the smaller, compressed set of variables, we can perform further computation with ease, and we can investigate some hidden patterns within the data that was hard to discover at first."
},
{
"code": null,
"e": 5787,
"s": 5576,
"text": "When there are abundant literature/videos/articles out there that provide thorough explanations of PCA, I hope to present a few high-level points about PCA for people who find materials out there too technical:"
},
{
"code": null,
"e": 5910,
"s": 5787,
"text": "Variability makes data useful. Imagine a dataset with 10,000 uniform values. It does not tell you much, and it’s boring. 😑"
},
{
"code": null,
"e": 6069,
"s": 5910,
"text": "Again, PCA’s function is to create a smaller subset of variables (principal components) that capture the variability within the original, much larger dataset."
},
{
"code": null,
"e": 6144,
"s": 6069,
"text": "Each principal component is a linear combination of the initial variables."
},
{
"code": null,
"e": 6251,
"s": 6144,
"text": "Each principal component has an orthogonal relationship with each other. That means they are uncorrelated."
},
{
"code": null,
"e": 6466,
"s": 6251,
"text": "The first principal component (PC1) captures most variability within the data. The second principal component (PC2) captures the second most. The third principal components (PC3) captures the third most...and so on"
},
{
"code": null,
"e": 6571,
"s": 6466,
"text": "In addition, here are a couple of terms you should know if you are planning to run PCA for your project:"
},
{
"code": null,
"e": 6782,
"s": 6571,
"text": "Loading describes the relationship between the original variables and the new principal component. Specifically, it describes the weight given to an original variable when calculating a new principal component."
},
{
"code": null,
"e": 6959,
"s": 6782,
"text": "Score describes the relationship between the original data and the newly generated axis. In other words, score is the new value for a data row in the principal component space."
},
{
"code": null,
"e": 7168,
"s": 6959,
"text": "Proportion of Variance indicates the share of the total data variability each principal component accounts for. It is often used with Cumulative Proportion to evaluate the usefulness of a principal component."
},
{
"code": null,
"e": 7401,
"s": 7168,
"text": "Cumulative Proportion represents the cumulative proportion of variance explained by consecutive principal components. The cumulative proportion explained by all principal components equals 1 (100% of data variability are explained)."
},
{
"code": null,
"e": 7544,
"s": 7401,
"text": "Before you run a PCA, you should take a look at your data correlation. If your data is not highly correlated, you might not need a PCA at all!"
},
{
"code": null,
"e": 7699,
"s": 7544,
"text": "# Creating a correlation plot library(ggpcorrplot)cormat <- round(cor(raw), 2)ggcorrplot(cormat, hc.order = TRUE, type = “lower”, outline.color = “white”)"
},
{
"code": null,
"e": 7789,
"s": 7699,
"text": "As the graph shows, our variables are quite correlated. We can proceed to PCA happily ✌.️"
},
{
"code": null,
"e": 7907,
"s": 7789,
"text": "# PCApr_out <-prcomp(raw, center = TRUE, scale = TRUE) #Scaling data before PCA is usually advisable! summary(pr_out)"
},
{
"code": null,
"e": 8185,
"s": 7907,
"text": "There are 24 new principal components because we had 24 variables in the first place. The first principal component accounts for 28% of the data variance. The second principal component accounts for 8.8%. The third accounts for 7.6%...We can use a scree plot to visualize this:"
},
{
"code": null,
"e": 8359,
"s": 8185,
"text": "# Screeplotpr_var <- pr_out$sdev ^ 2pve <- pr_var / sum(pr_var)plot(pve, xlab = \"Principal Component\", ylab = \"Proportion of Variance Explained\", ylim = c(0,1), type = 'b')"
},
{
"code": null,
"e": 8652,
"s": 8359,
"text": "X-axis describes the number of principal component(s), and y-axis describes the proportion of variance explained (PVE) by each. The variance explained drastically decreases after PC2. This spot is often called an elbow point, indicating the number of PCs that should be used for the analysis."
},
{
"code": null,
"e": 8801,
"s": 8652,
"text": "# Cumulative PVE plotplot(cumsum(pve), xlab = \"Principal Component\", ylab = \"Cumulative Proportion of Variance Explained\", ylim =c(0,1), type = 'b')"
},
{
"code": null,
"e": 8939,
"s": 8801,
"text": "If we choose only 2 principal components, they will yield less than 40% of the total variance in data. This number is perhaps not enough."
},
{
"code": null,
"e": 9136,
"s": 8939,
"text": "Another rule of choosing the number of PCs is to choose PCs with eigenvalues higher than 1. This is called the Kaiser rule, and it is controversial. You can find many debates on this topic online."
},
{
"code": null,
"e": 9590,
"s": 9136,
"text": "Basically, there isn’t a single best way to decide the best number of PCs. People use PCA for different purposes, and it is always important to think about what you want to get out of your PCA analysis before making the decision. In our case, since we are using PCA to determine meaningful and actionable market segmentation, one criterion we should definitely consider is whether the PCs we decide on make sense in the real-world and business settings."
},
{
"code": null,
"e": 9708,
"s": 9590,
"text": "Let’s pick the first 5 PCs for now, since 5 components are not too hard to work with, and it follows the Kaiser rule."
},
{
"code": null,
"e": 10056,
"s": 9708,
"text": "Next, we want to make meanings out of these PCs. Remember that loadings describe the weights given to each raw variable in calculating the new principal component? They are key to help us interpret the PCA results. When directly working with the PCA loadings can be tricky and confusing, we can rotate these loadings to make interpretation easier."
},
{
"code": null,
"e": 10280,
"s": 10056,
"text": "There are multiple rotation methods out there, and we will use a method called “varimax”. (Note, this step of rotation is NOT a part of the PCA. It simply helps to interpret our results. Here is a good thread on the topic.)"
},
{
"code": null,
"e": 10355,
"s": 10280,
"text": "# Rotate loadingsrot_loading <- varimax(pr_out$rotation[, 1:5])rot_loading"
},
{
"code": null,
"e": 10730,
"s": 10355,
"text": "Here’s an incomplete portion of the varimax-rotated loadings up to Q12. The numbers in the table correspond to the relationships between our questions (raw variables) and the selected components. If the number is positive, the variable positively contributes to the component. If it’s negative, then they are negatively related. Larger the number, stronger the relationship."
},
{
"code": null,
"e": 11372,
"s": 10730,
"text": "With these loadings, we can refer back to our questionnaire to get some ideas about what each PC is about. Let’s look at PC1, for example. I noticed that Q10, Q3 & Q7 negatively contribute to PC1. On the other hand, I see that Q8 & Q11 positively contribute to PC1. Checking the questionnaire, I realized that Q10, Q3 & Q7 are questions related to the style of the charger, when Q8 & Q11 focus on the functionality of the product. Therefore, we can make a temporary conclusion that PC1 describes people’s preference for the product’s functionality. It makes sense that people who value functionality more might not care too much about style."
},
{
"code": null,
"e": 11945,
"s": 11372,
"text": "Then, you can move on to PC2 and follow the same procedure to interpret each PC. I will not go through the complete process here, and I hope you got the idea. Once you go through all PCs and feel like each describes unique, logically-coherent traits, and you believe they make business sense, you’re good for the next step. However, if you feel like some information is missing or is repetitive within the PCs, you can consider going back and including more PCs, or you can eliminate some. You might have to go through several iterations until you get a satisfying result."
},
{
"code": null,
"e": 12283,
"s": 11945,
"text": "Just kidding. But you are halfway there. You’ve walked through the process of compressing a large dataset to a smaller one with a few variables that can help you identify different customer groups out there using PCA. In the next post, I will introduce how to segment our customers based on the PCs we obtained using a clustering method."
},
{
"code": null,
"e": 12368,
"s": 12283,
"text": "Lastly, #HappyInternationalWomensDay to all the amazing superwomen out there 👯👧 💁 👭!"
}
]
|
logical Operators - Java | Practice | GeeksforGeeks | Logical operators are used when we want to check the truth value of certain statements. Logical operators help us check multiple statements together for their truth values.
Here we will learn the logical operators AND (&&), OR (||), and NOT (!). These operators produce either true or false as output.
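As a quick, self-contained illustration, here is a minimal sketch of the three operators (for intuition only, not the driver code used by the judge):

public class LogicDemo {
    public static void main(String[] args) {
        boolean a = true, b = false;
        System.out.println(a && b); // false: AND is true only if both operands are true
        System.out.println(a || b); // true: OR is true if at least one operand is true
        System.out.println(!a);     // false: NOT inverts the value
    }
}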
Example 1:
Input:
true false
Output:
false true false
Explanation:
true&&false=>false
true||false=>true
!(true) && !(false)=>false
Example 2:
Input:
true true
Output:
true true false
Your Task:
Your task is to complete the function logicOp() which takes a and b as parameters and prints (a AND b), (a OR b), and ((NOT a) AND (NOT b)), separated by spaces.
Constraints:
a, b = {true, false}
0
knhashasPremium2 weeks ago
System.out.print((a&&b)+" "+(a || b)+" "+((!a)&&(!b)));
0
arpandutta5033 weeks ago
static void logicOp(boolean a, boolean b) {
    System.out.print(a && b);
    System.out.print(" ");
    System.out.print(a || b);
    System.out.print(" ");
    System.out.print((!a) && (!b));
}
0
sharsimran20024 weeks ago
class Geeks {
    static void logicOp(boolean a, boolean b) {
        System.out.print(a && b);
        System.out.print(" ");
        System.out.print(a || b);
        System.out.print(" ");
        System.out.print((!a) && (!b));
    }
}
0
shrutidhongade161 month ago
class Geeks{
static void logicOp(boolean a, boolean b){
/*output (a&&b), (a||b), and ((!a)&&(!b))separated by spaces*/
System.out.printf("%s %s %s", a&&b, a||b, (!a && !b));
}
}
0
badgujarsachin831 month ago
static void logicOp(boolean a, boolean b){
/*output (a&&b), (a||b), and ((!a)&&(!b))separated by spaces*/
System.out.print((a&& b)+" ");
System.out.print((a||b)+" ");
System.out.print(!a && !b);
}
-1
ravi119033852 months ago
System.out.print((a && b) + " ");
System.out.print((a || b) + " ");
System.out.print(!a && !b);
-3
nagaajayk2 months ago
System.out.print((a&&b)+" "+(a||b)+" "+(!a && !b));
-2
katwakrishna9502 months ago
class Geeks{
static void logicOp(boolean a, boolean b){
/*output (a&&b), (a||b), and ((!a)&&(!b))separated by spaces*/
System.out.print( (a&&b)+" " );
System.out.print( (a||b)+" " );
System.out.print( ((!a)&&(!b)) );
}
}
-1
hitentandon3 months ago
System.out.print((a&&b)+" "+(a||b)+" "+((!(a||b))));
-1
ritukapadiya20023 months ago
static void logicOp(boolean a, boolean b) {
    /* output (a&&b), (a||b), and ((!a)&&(!b)) separated by spaces */
    System.out.print(a && b);
    System.out.print(" ");
    System.out.print(a || b);
    System.out.print(" ");
    System.out.print((!a) && (!b));
}
| [
{
"code": null,
"e": 454,
"s": 278,
"text": "Logical operators are used when we want to check the truth value of certain statements. Logical operators help us in checking multiple statements together for their truthness."
},
{
"code": null,
"e": 584,
"s": 454,
"text": "Here we will learn logical operators like AND(&&), OR(||), NOT(!). These operators produce either a true or a false as an output."
},
{
"code": null,
"e": 595,
"s": 584,
"text": "Example 1:"
},
{
"code": null,
"e": 721,
"s": 595,
"text": "Input:\ntrue false\n\nOutput:\nfalse true false\n\n\nExplanation:\n\ntrue&&false=>false\n\ntrue||false=>true\n\n!(true) && !(false)=>false"
},
{
"code": null,
"e": 732,
"s": 721,
"text": "Example 2:"
},
{
"code": null,
"e": 775,
"s": 732,
"text": "Input:\ntrue true\n\nOutput:\ntrue true false\n"
},
{
"code": null,
"e": 969,
"s": 775,
"text": "Your Task:\nYour task is to complete the function logicOp() which takes a and b as a parameter and prints (a AND b), (a OR b), (a NOT b) in separated by space.\n\nConstraints:\na, b = {true, false}"
},
{
"code": null,
"e": 971,
"s": 969,
"text": "0"
},
{
"code": null,
"e": 998,
"s": 971,
"text": "knhashasPremium2 weeks ago"
},
{
"code": null,
"e": 1054,
"s": 998,
"text": "System.out.print((a&&b)+\" \"+(a || b)+\" \"+((!a)&&(!b)));"
},
{
"code": null,
"e": 1056,
"s": 1054,
"text": "0"
},
{
"code": null,
"e": 1081,
"s": 1056,
"text": "arpandutta5033 weeks ago"
},
{
"code": null,
"e": 1277,
"s": 1081,
"text": "static void logicOp(boolean a,boolean b){ System.out.print(a&&b); System.out.print(\" \"); System.out.print(a || b); System.out.print(\" \"); System.out.print((!a)&&(!b)); }}"
},
{
"code": null,
"e": 1279,
"s": 1277,
"text": "0"
},
{
"code": null,
"e": 1305,
"s": 1279,
"text": "sharsimran20024 weeks ago"
},
{
"code": null,
"e": 1519,
"s": 1305,
"text": "class Geeks{ static void logicOp(boolean a,boolean b){ System.out.print(a&&b); System.out.print(\" \"); System.out.print(a || b); System.out.print(\" \"); System.out.print((!a)&&(!b)); }}"
},
{
"code": null,
"e": 1521,
"s": 1519,
"text": "0"
},
{
"code": null,
"e": 1549,
"s": 1521,
"text": "shrutidhongade161 month ago"
},
{
"code": null,
"e": 1765,
"s": 1549,
"text": "class Geeks{\n \n static void logicOp(boolean a, boolean b){\n /*output (a&&b), (a||b), and ((!a)&&(!b))separated by spaces*/\n \n System.out.printf(\"%s %s %s\", a&&b, a||b, (!a && !b));\n }\n}"
},
{
"code": null,
"e": 1767,
"s": 1765,
"text": "0"
},
{
"code": null,
"e": 1795,
"s": 1767,
"text": "badgujarsachin831 month ago"
},
{
"code": null,
"e": 2029,
"s": 1795,
"text": " static void logicOp(boolean a, boolean b){\n /*output (a&&b), (a||b), and ((!a)&&(!b))separated by spaces*/\n System.out.print((a&& b)+\" \");\n System.out.print((a||b)+\" \");\n System.out.print(!a && !b);\n }"
},
{
"code": null,
"e": 2032,
"s": 2029,
"text": "-1"
},
{
"code": null,
"e": 2057,
"s": 2032,
"text": "ravi119033852 months ago"
},
{
"code": null,
"e": 2157,
"s": 2057,
"text": "System.out.print((a&&b)+\" \"); System.out.print((a||b)+\" \"); System.out.print(!a && !b);"
},
{
"code": null,
"e": 2160,
"s": 2157,
"text": "-3"
},
{
"code": null,
"e": 2182,
"s": 2160,
"text": "nagaajayk2 months ago"
},
{
"code": null,
"e": 2243,
"s": 2182,
"text": " System.out.print((a&&b)+\" \"+(a||b)+\" \"+(!a && !b));\n"
},
{
"code": null,
"e": 2246,
"s": 2243,
"text": "-2"
},
{
"code": null,
"e": 2274,
"s": 2246,
"text": "katwakrishna9502 months ago"
},
{
"code": null,
"e": 2558,
"s": 2274,
"text": "class Geeks{\n \n static void logicOp(boolean a, boolean b){\n /*output (a&&b), (a||b), and ((!a)&&(!b))separated by spaces*/\n \n System.out.print( (a&&b)+\" \" );\n System.out.print( (a||b)+\" \" );\n System.out.print( ((!a)&&(!b)) );\n \n }\n}"
},
{
"code": null,
"e": 2561,
"s": 2558,
"text": "-1"
},
{
"code": null,
"e": 2585,
"s": 2561,
"text": "hitentandon3 months ago"
},
{
"code": null,
"e": 2638,
"s": 2585,
"text": "System.out.print((a&&b)+\" \"+(a||b)+\" \"+((!(a||b))));"
},
{
"code": null,
"e": 2641,
"s": 2638,
"text": "-1"
},
{
"code": null,
"e": 2670,
"s": 2641,
"text": "ritukapadiya20023 months ago"
},
{
"code": null,
"e": 2947,
"s": 2670,
"text": "static void logicOp(boolean a, boolean b){ /*output (a&&b), (a||b), and ((!a)&&(!b))separated by spaces*/ System.out.print(a&&b); System.out.print(\" \"); System.out.print(a||b); System.out.print(\" \"); System.out.print((!a)&&(!b)); }"
},
{
"code": null,
"e": 3093,
"s": 2947,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 3129,
"s": 3093,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 3139,
"s": 3129,
"text": "\nProblem\n"
},
{
"code": null,
"e": 3149,
"s": 3139,
"text": "\nContest\n"
},
{
"code": null,
"e": 3212,
"s": 3149,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 3360,
"s": 3212,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 3568,
"s": 3360,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 3674,
"s": 3568,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
]
|
\ldots - Tex Command | \ldots - Used to draw the lower dots symbol.
{ \ldots }
The \ldots command is used to draw the lower dots (ellipsis) symbol.
x_1,\ldots,x_n
x1,...,xn
| [
{
"code": null,
"e": 8027,
"s": 7986,
"text": "\\ldots - Used to draw lower dots symbol."
},
{
"code": null,
"e": 8038,
"s": 8027,
"text": "{ \\ldots }"
},
{
"code": null,
"e": 8088,
"s": 8038,
"text": "\\ldots command is used to draw lower dots symbol."
},
{
"code": null,
"e": 8118,
"s": 8088,
"text": "\nx_1,\\ldots,x_n\n\nx1,...,xn\n\n\n"
},
{
"code": null,
"e": 8146,
"s": 8118,
"text": "x_1,\\ldots,x_n\n\nx1,...,xn\n\n"
},
{
"code": null,
"e": 8161,
"s": 8146,
"text": "x_1,\\ldots,x_n"
},
{
"code": null,
"e": 8193,
"s": 8161,
"text": "\n 14 Lectures \n 52 mins\n"
},
{
"code": null,
"e": 8206,
"s": 8193,
"text": " Ashraf Said"
},
{
"code": null,
"e": 8239,
"s": 8206,
"text": "\n 11 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 8252,
"s": 8239,
"text": " Ashraf Said"
},
{
"code": null,
"e": 8284,
"s": 8252,
"text": "\n 9 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 8320,
"s": 8284,
"text": " Emenwa Global, Ejike IfeanyiChukwu"
},
{
"code": null,
"e": 8355,
"s": 8320,
"text": "\n 29 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 8372,
"s": 8355,
"text": " Mohammad Nauman"
},
{
"code": null,
"e": 8405,
"s": 8372,
"text": "\n 14 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 8419,
"s": 8405,
"text": " Daniel Stern"
},
{
"code": null,
"e": 8451,
"s": 8419,
"text": "\n 15 Lectures \n 47 mins\n"
},
{
"code": null,
"e": 8466,
"s": 8451,
"text": " Nishant Kumar"
},
{
"code": null,
"e": 8473,
"s": 8466,
"text": " Print"
},
{
"code": null,
"e": 8484,
"s": 8473,
"text": " Add Notes"
}
]
|
Create a Pandas DataFrame from a Numpy array and specify the index column and column headers - GeeksforGeeks | 21 Aug, 2020
This article demonstrates multiple examples to convert the Numpy arrays into Pandas Dataframe and to specify the index column and column headers for the data frame.
Example 1: In this example, the Pandas dataframe will be generated and proper names of index column and column headers are mentioned in the function. This approach can be used when there is no pattern in naming the index column or column headers.
Below is the implementation:
Python3
# Python program to create a Pandas DataFrame from a
# Numpy array and specify the index column and column headers

# import required libraries
import numpy as np
import pandas as pd

# creating a numpy array
numpyArray = np.array([[15, 22, 43],
                       [33, 24, 56]])

# generating the Pandas dataframe from the Numpy array
# and specifying the names of index and columns
panda_df = pd.DataFrame(data=numpyArray,
                        index=["Row_1", "Row_2"],
                        columns=["Column_1", "Column_2", "Column_3"])

# printing the dataframe
print(panda_df)
Output:
Example 2: In this example, the index column and column headers are generated through iteration. The range of iterations for rows and columns is defined by the shape of the Numpy array. With every iteration, a digit is added to a predefined string, and the new index column or column header is generated. Thus, if there is some pattern in naming the labels of the dataframe, this approach is suitable.
Below is the implementation:
Python3
# Python program to create a Pandas DataFrame from a
# Numpy array and specify the index column and column headers

# import required libraries
import pandas as pd
import numpy as np

# creating a numpy array
numpyArray = np.array([[15, 22, 43],
                       [33, 24, 56]])

# generating the Pandas dataframe from the Numpy array
# and specifying the names of index and columns
panda_df = pd.DataFrame(
    data=numpyArray[0:, 0:],
    index=['Row_' + str(i + 1) for i in range(numpyArray.shape[0])],
    columns=['Column_' + str(i + 1) for i in range(numpyArray.shape[1])])

# printing the dataframe
print(panda_df)
Output:
Example 3: In this example, the index column and column headers are defined before converting the Numpy array into a Pandas dataframe. The label names are again generated through iteration, but the method is a little different. Here, the number of iterations is defined by the length of the sub-array inside the Numpy array. This method can be used if the index column and column header names follow some pattern.
Below is the implementation:
Python3
# Python program to create a Pandas DataFrame from a
# Numpy array and specify the index column and column headers

# import required libraries
import pandas as pd
import numpy as np

# creating a numpy array
numpyArray = np.array([[15, 22, 43],
                       [33, 24, 56]])

# defining index for the Pandas dataframe
index = ['Row_' + str(i) for i in range(1, len(numpyArray) + 1)]

# defining column headers for the Pandas dataframe
columns = ['Column_' + str(i) for i in range(1, len(numpyArray[0]) + 1)]

# generating the Pandas dataframe from the Numpy array and
# specifying details of index and column headers
panda_df = pd.DataFrame(numpyArray, index=index, columns=columns)

# printing the dataframe
print(panda_df)
Output:
Example #4: In this approach, the index column and the column headers for the Pandas dataframe are present in the Numpy array itself. During the conversion of the Numpy array into a Pandas data frame, proper indexing of the sub-arrays of the Numpy array has to be done in order to get the correct sequence of dataframe labels.
Below is the implementation:
Python3
# Python program to create a Pandas DataFrame from a
# Numpy array and specify the index column and column headers

# import required libraries
import pandas as pd
import numpy as np

# creating a numpy array and specifying the index and
# column headers along with data stored in the array
numpyArray = np.array([['', 'Column_1', 'Column_2', 'Column_3'],
                       ['Row_1', 15, 22, 43],
                       ['Row_2', 33, 24, 56]])

# generating the Pandas dataframe from the Numpy array and
# specifying details of index and column headers
panda_df = pd.DataFrame(data=numpyArray[1:, 1:],
                        index=numpyArray[1:, 0],
                        columns=numpyArray[0, 1:])

# printing the dataframe
print(panda_df)
Output:
Python pandas-dataFrame
Python Pandas-exercise
Python-numpy
Python-pandas
Python
| [
{
"code": null,
"e": 24701,
"s": 24673,
"text": "\n21 Aug, 2020"
},
{
"code": null,
"e": 24866,
"s": 24701,
"text": "This article demonstrates multiple examples to convert the Numpy arrays into Pandas Dataframe and to specify the index column and column headers for the data frame."
},
{
"code": null,
"e": 25114,
"s": 24866,
"text": "Example 1: In this example, the Pandas dataframe will be generated and proper names of index column and column headers are mentioned in the function. This approach can be used when there is no pattern in naming the index column or column headers. "
},
{
"code": null,
"e": 25143,
"s": 25114,
"text": "Below is the implementation:"
},
{
"code": null,
"e": 25151,
"s": 25143,
"text": "Python3"
},
{
"code": "# Python program to Create a # Pandas DataFrame from a Numpy # array and specify the index # column and column headers # import required librariesimport numpy as npimport pandas as pd # creating a numpy arraynumpyArray = np.array([[15, 22, 43], [33, 24, 56]]) # generating the Pandas dataframe# from the Numpy array and specifying# name of index and columnspanda_df = pd.DataFrame(data = numpyArray, index = [\"Row_1\", \"Row_2\"], columns = [\"Column_1\", \"Column_2\", \"Column_3\"]) # printing the dataframeprint(panda_df)",
"e": 25777,
"s": 25151,
"text": null
},
{
"code": null,
"e": 25786,
"s": 25777,
"text": "Output: "
},
{
"code": null,
"e": 26196,
"s": 25786,
"text": "Example 2: In this example, the index column and column headers are generated through iteration. The range of iterations for rows and columns are defined by the shape of the Numpy array. With every iteration, a digit will be added to the predefined string and the new index column or column header will generate. Thus, if there is some pattern in naming the labels of the dataframe this approach is suitable. "
},
{
"code": null,
"e": 26225,
"s": 26196,
"text": "Below is the implementation:"
},
{
"code": null,
"e": 26233,
"s": 26225,
"text": "Python3"
},
{
"code": "# Python program to Create a # Pandas DataFrame from a Numpy # array and specify the index column # and column headers # import required librariesimport pandas as pdimport numpy as np # creating a numpy arraynumpyArray = np.array([[15, 22, 43], [33, 24, 56]]) # generating the Pandas dataframe# from the Numpy array and specifying# name of index and columnspanda_df = pd.DataFrame(data = numpyArray[0:, 0:], index = ['Row_' + str(i + 1) for i in range(numpyArray.shape[0])], columns = ['Column_' + str(i + 1) for i in range(numpyArray.shape[1])]) # printing the dataframeprint(panda_df)",
"e": 26942,
"s": 26233,
"text": null
},
{
"code": null,
"e": 26951,
"s": 26942,
"text": "Output: "
},
{
"code": null,
"e": 27362,
"s": 26951,
"text": "Example 3: In this example, the index column and column headers are defined before converting the Numpy array into Pandas dataframe. The label names are again generated through iterations but the method is little different. Here, the number of iterations is defined by the length of the sub-array inside the Numpy array. This method can be used if the index column and column header names follow some pattern. "
},
{
"code": null,
"e": 27391,
"s": 27362,
"text": "Below is the implementation:"
},
{
"code": null,
"e": 27399,
"s": 27391,
"text": "Python3"
},
{
"code": "# Python program to Create a # Pandas DataFrame from a Numpy # array and specify the index column # and column headers # import required librariesimport pandas as pdimport numpy as np # creating a numpy arraynumpyArray = np.array([[15, 22, 43], [33, 24, 56]]) # defining index for the # Pandas dataframeindex = ['Row_' + str(i) for i in range(1, len(numpyArray) + 1)] # defining column headers for the # Pandas dataframecolumns = ['Column_' + str(i) for i in range(1, len(numpyArray[0]) + 1)] # generating the Pandas dataframe# from the Numpy array and specifying# details of index and column headerspanda_df = pd.DataFrame(numpyArray , index = index, columns = columns) # printing the dataframeprint(panda_df)",
"e": 28205,
"s": 27399,
"text": null
},
{
"code": null,
"e": 28214,
"s": 28205,
"text": "Output: "
},
{
"code": null,
"e": 28542,
"s": 28214,
"text": "Example #4: In this approach, the index column and the column headers for the Pandas dataframe will present itself in the Numpy array. During the conversion of the Numpy array into Pandas data frame, proper indexing for the sub-arrays of the Numpy array has to be done in order to get correct sequence of the dataframe labels. "
},
{
"code": null,
"e": 28571,
"s": 28542,
"text": "Below is the implementation:"
},
{
"code": null,
"e": 28579,
"s": 28571,
"text": "Python3"
},
{
"code": "# Python program to Create a # Pandas DataFrame from a Numpy # array and specify the index column # and column headers # import required librariesimport pandas as pdimport numpy as np # creating a numpy array and# specifying the index and # column headers along with # data stored in the arraynumpyArray = np.array([['', 'Column_1', 'Column_2', 'Column_3'], ['Row_1', 15, 22, 43], ['Row_2', 33, 24, 56]]) # generating the Pandas dataframe# from the Numpy array and specifying# details of index and column headerspanda_df = pd.DataFrame(data = numpyArray[1:, 1:], index = numpyArray[1:, 0], columns = numpyArray[0, 1:]) # printing the dataframeprint(panda_df)",
"e": 29356,
"s": 28579,
"text": null
},
{
"code": null,
"e": 29365,
"s": 29356,
"text": "Output: "
},
{
"code": null,
"e": 29391,
"s": 29367,
"text": "Python pandas-dataFrame"
},
{
"code": null,
"e": 29414,
"s": 29391,
"text": "Python Pandas-exercise"
},
{
"code": null,
"e": 29427,
"s": 29414,
"text": "Python-numpy"
},
{
"code": null,
"e": 29441,
"s": 29427,
"text": "Python-pandas"
},
{
"code": null,
"e": 29448,
"s": 29441,
"text": "Python"
},
{
"code": null,
"e": 29546,
"s": 29448,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29555,
"s": 29546,
"text": "Comments"
},
{
"code": null,
"e": 29568,
"s": 29555,
"text": "Old Comments"
},
{
"code": null,
"e": 29605,
"s": 29568,
"text": "Create a Pandas DataFrame from Lists"
},
{
"code": null,
"e": 29641,
"s": 29605,
"text": "Box Plot in Python using Matplotlib"
},
{
"code": null,
"e": 29659,
"s": 29641,
"text": "Python Dictionary"
},
{
"code": null,
"e": 29682,
"s": 29659,
"text": "Bar Plot in Matplotlib"
},
{
"code": null,
"e": 29704,
"s": 29682,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 29743,
"s": 29704,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 29776,
"s": 29743,
"text": "Python | Convert set into a list"
},
{
"code": null,
"e": 29825,
"s": 29776,
"text": "Ways to filter Pandas DataFrame by column values"
},
{
"code": null,
"e": 29858,
"s": 29825,
"text": "Graph Plotting in Python | Set 1"
}
]
|
How to get the absolute coordinates of a view in Android? | Before getting into the example, we should know what absolute coordinates are: the absolute position (x, y) of a view in the window manager. This example demonstrates how to get the absolute coordinates of a view.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version = "1.0" encoding = "utf-8"?>
<RelativeLayout
xmlns:android = "http://schemas.android.com/apk/res/android"
xmlns:tools = "http://schemas.android.com/tools"
android:layout_width = "match_parent"
android:layout_height = "match_parent"
android:padding = "16dp"
tools:context = ".MainActivity"
android:background = "#dde4dd">
<TextView
android:id = "@+id/text"
android:layout_marginLeft = "100dp"
android:layout_width = "wrap_content"
android:layout_height = "wrap_content"
android:text = "Hello World!" />
</RelativeLayout>
In the above xml, we have given one TextView. When the user clicks on the TextView, it will show the position of the view in a Toast.
Step 3 − Add the following code to src/MainActivity.java
package com.example.andy.myapplication;
import android.graphics.Point;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.TextView;
import android.widget.Toast;
public class MainActivity extends AppCompatActivity {
TextView textView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
textView = findViewById(R.id.text);
textView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
int[] location = new int[2];
textView.getLocationOnScreen(location);
Toast.makeText(MainActivity.this,"X axis is "+location[0] +"and Y axis is "+location[1],Toast.LENGTH_LONG).show();
}
});
}
public static Point getLocationOnScreen(View view) {
int[] location = new int[2];
view.getLocationOnScreen(location);
return new Point(location[0], location[1]);
}
}
In the above code, when the user clicks on the TextView, it will show the absolute coordinates on the screen. Let's try to run your application. I assume you have connected your actual Android mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen −
The above result is the initial screen; when the user clicks on the TextView, it will show the result as shown below −
Click here to download the project code | [
{
"code": null,
"e": 1275,
"s": 1062,
"text": "Before getting into example, we should know what is absolute coordinates. It means absolute position (x,y)of a view on window manager. This example demonstrate about how to get the absolute coordinates of a view."
},
{
"code": null,
"e": 1404,
"s": 1275,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1469,
"s": 1404,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 2057,
"s": 1469,
"text": "<?xml version = \"1.0\" encoding = \"utf-8\"?>\n<RelativeLayout\n xmlns:android = \"http://schemas.android.com/apk/res/android\"\n xmlns:tools = \"http://schemas.android.com/tools\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"match_parent\"\n android:padding = \"16dp\"\n tools:context = \".MainActivity\"\n android:background = \"#dde4dd\">\n <TextView\n android:id = \"@+id/text\"\n android:layout_marginLeft = \"100dp\"\n android:layout_width = \"wrap_content\"\n android:layout_height = \"wrap_content\"\n android:text = \"Hello World!\" />\n</RelativeLayout>"
},
{
"code": null,
"e": 2180,
"s": 2057,
"text": "In the above xml, we have given one Textview. when user click on textview, it will show the position of the view on Toast."
},
{
"code": null,
"e": 2237,
"s": 2180,
"text": "Step 3 − Add the following code to src/MainActivity.java"
},
{
"code": null,
"e": 3337,
"s": 2237,
"text": "package com.example.andy.myapplication;\n\nimport android.graphics.Point;\nimport android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.view.TextureView;\nimport android.view.View;\nimport android.widget.TextView;\nimport android.widget.Toast;\n\npublic class MainActivity extends AppCompatActivity {\n TextView textView;\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n textView = findViewById(R.id.text);\n textView.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n int[] location = new int[2];\n textView.getLocationOnScreen(location);\n Toast.makeText(MainActivity.this,\"X axis is \"+location[0] +\"and Y axis is \"+location[1],Toast.LENGTH_LONG).show();\n }\n });\n }\n public static Point getLocationOnScreen(View view) {\n int[] location = new int[2];\n view.getLocationOnScreen(location);\n return new Point(location[0], location[1]);\n }\n}"
},
{
"code": null,
"e": 3782,
"s": 3337,
"text": "In the above code, when user click on textview, it will show absolute coordinates on the screen . Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −"
},
{
"code": null,
"e": 3891,
"s": 3782,
"text": "In the above result is initial screen, when user click on textview, it will show the result as shown below -"
},
{
"code": null,
"e": 3931,
"s": 3891,
"text": "Click here to download the project code"
}
]
|
Jackson Annotations - @JsonBackReference | @JsonManagedReference and @JsonBackReference are used to handle objects with a parent–child relationship. @JsonManagedReference is used to refer to the parent object, and @JsonBackReference is used to mark child objects.
import java.io.IOException;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.List;
import com.fasterxml.jackson.annotation.JsonBackReference;
import com.fasterxml.jackson.annotation.JsonManagedReference;
import com.fasterxml.jackson.databind.ObjectMapper;
public class JacksonTester {
public static void main(String args[]) throws IOException, ParseException {
ObjectMapper mapper = new ObjectMapper();
Student student = new Student(1, "Mark");
Book book1 = new Book(1,"Learn HTML", student);
Book book2 = new Book(1,"Learn JAVA", student);
student.addBook(book1);
student.addBook(book2);
String jsonString = mapper
.writerWithDefaultPrettyPrinter()
.writeValueAsString(book1);
System.out.println(jsonString);
}
}
class Student {
public int rollNo;
public String name;
@JsonBackReference
public List<Book> books;
Student(int rollNo, String name){
this.rollNo = rollNo;
this.name = name;
this.books = new ArrayList<Book>();
}
public void addBook(Book book){
books.add(book);
}
}
class Book {
public int id;
public String name;
Book(int id, String name, Student owner) {
this.id = id;
this.name = name;
this.owner = owner;
}
@JsonManagedReference
public Student owner;
}
{
"id" : 1,
"name" : "Learn HTML",
"owner" : {
"rollNo" : 1,
"name" : "Mark"
}
}
| [
{
"code": null,
"e": 2693,
"s": 2475,
"text": "@JsonManagedReferences and JsonBackReferences are used to display objects with parent child relationship. @JsonManagedReferences is used to refer to parent object and @JsonBackReferences is used to mark child objects."
},
{
"code": null,
"e": 4061,
"s": 2693,
"text": "import java.io.IOException;\nimport java.text.ParseException;\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport com.fasterxml.jackson.annotation.JsonBackReference;\nimport com.fasterxml.jackson.annotation.JsonManagedReference;\nimport com.fasterxml.jackson.databind.ObjectMapper;\n\npublic class JacksonTester {\n public static void main(String args[]) throws IOException, ParseException {\n ObjectMapper mapper = new ObjectMapper(); \n Student student = new Student(1, \"Mark\");\n Book book1 = new Book(1,\"Learn HTML\", student);\n Book book2 = new Book(1,\"Learn JAVA\", student);\n\n student.addBook(book1);\n student.addBook(book2);\n\n String jsonString = mapper\n .writerWithDefaultPrettyPrinter()\n .writeValueAsString(book1);\n System.out.println(jsonString);\n }\n}\nclass Student { \n public int rollNo;\n public String name;\n\n @JsonBackReference\n public List<Book> books;\n\n Student(int rollNo, String name){\n this.rollNo = rollNo;\n this.name = name;\n this.books = new ArrayList<Book>();\n }\n public void addBook(Book book){\n books.add(book);\n }\n}\nclass Book {\n public int id;\n public String name;\n\n Book(int id, String name, Student owner) {\n this.id = id;\n this.name = name;\n this.owner = owner;\n }\n\n @JsonManagedReference\n public Student owner;\n}"
},
{
"code": null,
"e": 4167,
"s": 4061,
"text": "{\n \"id\" : 1,\n \"name\" : \"Learn HTML\",\n \"owner\" : {\n \"rollNo\" : 1,\n \"name\" : \"Mark\"\n }\n}\n"
},
{
"code": null,
"e": 4174,
"s": 4167,
"text": " Print"
},
{
"code": null,
"e": 4185,
"s": 4174,
"text": " Add Notes"
}
]
|
How to perform case-insensitive search in Oracle? | Problem:
You want to perform case-insensitive search in Oracle.
Solution
One way to deal with case issues is to use the built-in UPPER and LOWER functions. These functions let you force case conversion on a string for a single operation.
DECLARE
full_name1 VARCHAR2(30) := 'roger federer';
full_name2 VARCHAR2(30) := 'ROGER FEDERER';
BEGIN
IF LOWER(full_name1) = LOWER(full_name2) THEN
DBMS_OUTPUT.PUT_LINE( full_name1 || ' and ' || full_name2 || ' are the same.');
END IF;
END;
In the above example, full_name1 and full_name2 are first converted into lowercase and then compared with each other, resulting in the output:
roger federer and ROGER FEDERER are the same.
There is one disadvantage with UPPER and LOWER functions, which is performance. Any function applied on a field will degrade performance.
Starting with Oracle Database 10g Release 2 you can use the initialization parameters NLS_COMP and NLS_SORT to render all string comparisons case insensitive.
We need to set the NLS_COMP parameter to LINGUISTIC, which will tell the database to use NLS_SORT for string comparisons. Then we will set NLS_SORT to a case insensitive setting, like BINARY_CI.
By default, NLS_COMP is set to BINARY. We can use the LEAST function to see whether uppercase characters sort lower than lowercase characters or the other way around.
SELECT LEAST ('ROGER FEDERER','roger federer') FROM dual;
The above SQL returns ‘ROGER FEDERER’ telling us that uppercase characters sort lower than the lowercase characters.
Now, we are going to set a couple of parameters.
Set NLS_COMP to specify a linguistic sort.
Set NLS_SORT to specify the sorting rules that we want.
ALTER SESSION SET NLS_COMP=LINGUISTIC
ALTER SESSION SET NLS_SORT=BINARY_CI
After we apply the above settings to the session, we will call LEAST one more time to see what it returns.
SELECT LEAST ('ROGER FEDERER','roger federer') FROM dual;
roger federer
Finally, we will call the PL/SQL block above without applying the UPPER and LOWER functions to compare the strings.
DECLARE
full_name1 VARCHAR2(30) := 'roger federer';
full_name2 VARCHAR2(30) := 'ROGER FEDERER';
BEGIN
IF full_name1 = full_name2 THEN
DBMS_OUTPUT.PUT_LINE( full_name1 || ' and ' || full_name2 || ' are the same.');
END IF;
END;
roger federer and ROGER FEDERER are the same.
The settings will remain until you close the session. | [
{
"code": null,
"e": 1071,
"s": 1062,
"text": "Problem:"
},
{
"code": null,
"e": 1126,
"s": 1071,
"text": "You want to perform case-insensitive search in Oracle."
},
{
"code": null,
"e": 1135,
"s": 1126,
"text": "Solution"
},
{
"code": null,
"e": 1299,
"s": 1135,
"text": "One way to deal with case issues is to use the built in UPPER and LOWER functions. These functions let you force case conversion on a string for a single operation"
},
{
"code": null,
"e": 1562,
"s": 1299,
"text": "DECLARE\n full_name1 VARCHAR2(30) := 'roger federer';\n full_name2 VARCHAR2(30) := 'ROGER FEDERER';\nBEGIN\n IF LOWER(full_name1) = LOWER(full_name2) THEN\n DBMS_OUTPUT.PUT_LINE( full_name1 || ' and ' || full_name2 || ' are the same.');\n END IF;\nEND;"
},
{
"code": null,
"e": 1825,
"s": 1562,
"text": "DECLARE\n full_name1 VARCHAR2(30) := 'roger federer';\n full_name2 VARCHAR2(30) := 'ROGER FEDERER';\nBEGIN\n IF LOWER(full_name1) = LOWER(full_name2) THEN\n DBMS_OUTPUT.PUT_LINE( full_name1 || ' and ' || full_name2 || ' are the same.');\n END IF;\nEND;"
},
{
"code": null,
"e": 1963,
"s": 1825,
"text": "In the above example the full_name1 and full_name2 are first converted into LOWER CASE then compared with each other resulting the output"
},
{
"code": null,
"e": 2009,
"s": 1963,
"text": "roger federer and ROGER FEDERER are the same."
},
{
"code": null,
"e": 2147,
"s": 2009,
"text": "There is one disadvantage with UPPER and LOWER functions, which is performance. Any function applied on a field will degrade performance."
},
{
"code": null,
"e": 2306,
"s": 2147,
"text": "Starting with Oracle Database 10g Release 2 you can use the initialization parameters NLS_COMP and NLS_SORT to render all string comparisons case insensitive."
},
{
"code": null,
"e": 2501,
"s": 2306,
"text": "We need to set the NLS_COMP parameter to LINGUISTIC, which will tell the database to use NLS_SORT for string comparisons. Then we will set NLS_SORT to a case insensitive setting, like BINARY_CI."
},
{
"code": null,
"e": 2664,
"s": 2501,
"text": "By default, NLS_COMP is set to BINARY. we can use LEAST function to see if the the uppercase characters sort lower than the lowercase characters or the other way."
},
{
"code": null,
"e": 2722,
"s": 2664,
"text": "SELECT LEAST ('ROGER FEDERER','roger federer') FROM dual;"
},
{
"code": null,
"e": 2780,
"s": 2722,
"text": "SELECT LEAST ('ROGER FEDERER','roger federer') FROM dual;"
},
{
"code": null,
"e": 2897,
"s": 2780,
"text": "The above SQL returns ‘ROGER FEDERER’ telling us that uppercase characters sort lower than the lowercase characters."
},
{
"code": null,
"e": 2943,
"s": 2897,
"text": "Now, we are going to set couple of paramters."
},
{
"code": null,
"e": 3046,
"s": 2943,
"text": "Set NLS_COMP to specify that a linguistic sort.Set NLS_SORT to specify the sorting rules that we want."
},
{
"code": null,
"e": 3094,
"s": 3046,
"text": "Set NLS_COMP to specify that a linguistic sort."
},
{
"code": null,
"e": 3150,
"s": 3094,
"text": "Set NLS_SORT to specify the sorting rules that we want."
},
{
"code": null,
"e": 3225,
"s": 3150,
"text": "ALTER SESSION SET NLS_COMP=LINGUISTIC\nALTER SESSION SET NLS_SORT=BINARY_CI"
},
{
"code": null,
"e": 3300,
"s": 3225,
"text": "ALTER SESSION SET NLS_COMP=LINGUISTIC\nALTER SESSION SET NLS_SORT=BINARY_CI"
},
{
"code": null,
"e": 3401,
"s": 3300,
"text": "After we set above settings to the session, we will call LEAST one more time to see what it returns."
},
{
"code": null,
"e": 3460,
"s": 3401,
"text": " SELECT LEAST ('ROGER FEDERER','roger federer') FROM dual;"
},
{
"code": null,
"e": 3519,
"s": 3460,
"text": " SELECT LEAST ('ROGER FEDERER','roger federer') FROM dual;"
},
{
"code": null,
"e": 3533,
"s": 3519,
"text": "roger federer"
},
{
"code": null,
"e": 3547,
"s": 3533,
"text": "roger federer"
},
{
"code": null,
"e": 3664,
"s": 3547,
"text": "Finally, we will call the pl/sql block above with out applying the UPPER and LOWER functions to compare the strings."
},
{
"code": null,
"e": 3913,
"s": 3664,
"text": "DECLARE\n full_name1 VARCHAR2(30) := 'roger federer';\n full_name2 VARCHAR2(30) := 'ROGER FEDERER';\nBEGIN\n IF full_name1 = full_name2 THEN\n DBMS_OUTPUT.PUT_LINE( full_name1 || ' and ' || full_name2 || ' are the same.');\n END IF;\nEND;"
},
{
"code": null,
"e": 4162,
"s": 3913,
"text": "DECLARE\n full_name1 VARCHAR2(30) := 'roger federer';\n full_name2 VARCHAR2(30) := 'ROGER FEDERER';\nBEGIN\n IF full_name1 = full_name2 THEN\n DBMS_OUTPUT.PUT_LINE( full_name1 || ' and ' || full_name2 || ' are the same.');\n END IF;\nEND;"
},
{
"code": null,
"e": 4208,
"s": 4162,
"text": "roger federer and ROGER FEDERER are the same."
},
{
"code": null,
"e": 4263,
"s": 4208,
"text": "The settings will remain untile you close the session."
}
]
|
Practice | GeeksforGeeks | A computer science portal for geeks |
Selection of a victim. Given a set of deadlocked transactions, we must determine which transaction (or transactions) to roll back to break the deadlock. We should roll back those transactions that will incur the minimum cost. Unfortunately, the term minimum cost is not a precise one.
Factors that determine the cost of a rollback include:
How long the transaction has computed, and how much longer the transaction will compute before it completes its designated task.
How many data items the transaction has used.
How many more data items the transaction needs for it to complete.
How many transactions will be involved in the rollback.
Cursor stability is a form of degree-two consistency designed for programs that iterate over tuples of a relation by using cursors.
Instead of locking the entire relation, cursor stability ensures that:
The tuple that is currently being processed by the iteration is locked in shared mode.
Any modified tuples are locked in exclusive mode until the transaction commits.
Cursor stability is used in practice on heavily accessed relations as a means of increasing concurrency and improving system performance. Applications that use cursor stability must be coded in a way that ensures database consistency despite the possibility of non-serializable schedules.
The timestamp ordering protocol operates as follows:
Suppose that transaction Ti issues read(Q).
If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was already overwritten. Hence, the read operation is rejected, and Ti is rolled back.
If TS(Ti) ≥ W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is set to the maximum of R-timestamp(Q) and TS(Ti).
Suppose that transaction Ti issues write(Q).
If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing was needed previously, and the system assumed that that value would never be produced. Hence, the system rejects the write operation and rolls Ti back.
If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q. Hence, the system rejects this write operation and rolls Ti back.
Otherwise, the system executes the write operation and sets W-timestamp(Q) to TS(Ti).
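A minimal Python sketch of these rules, assuming integer timestamps and hypothetical r_ts/w_ts dictionaries standing in for R-timestamp(Q) and W-timestamp(Q):

# Sketch of the timestamp-ordering checks; r_ts/w_ts are assumptions.
r_ts, w_ts = {}, {}

class RollbackError(Exception):
    """Raised when a transaction must be rolled back and restarted."""

def read(ti_ts, q):
    # Ti would read a value already overwritten by a younger writer: reject.
    if ti_ts < w_ts.get(q, 0):
        raise RollbackError(f"T{ti_ts} rolled back on read({q})")
    # Otherwise execute the read and advance R-timestamp(Q).
    r_ts[q] = max(r_ts.get(q, 0), ti_ts)

def write(ti_ts, q):
    # The value Ti produces was already needed, or is obsolete: reject.
    if ti_ts < r_ts.get(q, 0) or ti_ts < w_ts.get(q, 0):
        raise RollbackError(f"T{ti_ts} rolled back on write({q})")
    # Otherwise execute the write and set W-timestamp(Q).
    w_ts[q] = ti_ts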
There are two deadlock-prevention schemes that use timestamps:
The wait–die scheme is a non-preemptive technique. When transaction Ti requests a data item currently held by Tj, Ti is allowed to wait only if it has a timestamp smaller than that of Tj (that is, Ti is older than Tj). Otherwise, Ti is rolled back (dies).
The wound–wait scheme is a preemptive technique. It is a counterpart to the wait–die scheme. When transaction Ti requests a data item currently held by Tj, Ti is allowed to wait only if it has a timestamp larger than that of Tj (that is, Ti is younger than Tj). Otherwise, Tj is rolled back (Tj is wounded by Ti).
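A small Python sketch of the two decisions, assuming smaller timestamps mean older transactions; the function names and return strings are illustrative only:

def wait_die(requester_ts, holder_ts):
    # Older requester may wait; younger requester dies (is rolled back).
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    # Older requester wounds (rolls back) the holder; younger one waits.
    return "wound holder" if requester_ts < holder_ts else "wait"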
A phantom read occurs when, in the course of a transaction, two identical queries are executed, and the collection of rows returned by the second query is different from the first.
This can occur when range locks are not acquired on performing a SELECT.... WHERE operation.
In other words, a phantom read occurs when data visible to the current transaction is changed by other transactions while it runs.
Let us consider a schedule S in which there are two consecutive instructions, I and J, of transactions Ti and Tj, respectively (i != j). If I and J refer to different data items, then we can swap I and J without affecting the results of any instruction in the schedule. However, if I and J refer to the same data item Q, then the order of the two steps may matter.
There are four cases we need to consider (since we are dealing with read and write operations only):
I = read(Q), J = read(Q). The order of I and J does not matter, since the same value of Q is read by Ti and Tj, regardless of the order.
I = read(Q), J = write(Q). If I comes before J, then Ti does not read the value of Q that is written by Tj in instruction J. If J comes before I, then Ti reads the value of Q that is written by Tj. Thus, the order of I and J matters.
I = write(Q), J = read(Q). The order of I and J matters for reasons similar to those of the previous case.
I = write(Q), J = write(Q). Since both instructions are write operations, the order of these instructions does not affect either Ti or Tj. However, the value obtained by the next read(Q) instruction of S is affected, since the result of only the latter of the two write instructions is preserved in the database. If there is no other write(Q) instruction after I and J in S, then the order of I and J directly affects the final value of Q in the database state that results from schedule S.
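A small Python sketch of the conflict test implied by these four cases, where an operation is a hypothetical (kind, item) pair: two operations can be swapped freely unless they touch the same item and at least one writes it.

def conflicts(op1, op2):
    # Operations conflict iff same data item and at least one is a write.
    (kind1, item1), (kind2, item2) = op1, op2
    return item1 == item2 and ("write" in (kind1, kind2))

# I = read(Q), J = read(Q): order does not matter.
assert not conflicts(("read", "Q"), ("read", "Q"))
# Any pair involving a write on the same item cannot be swapped freely.
assert conflicts(("read", "Q"), ("write", "Q"))
assert conflicts(("write", "Q"), ("write", "Q"))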
Concurrency gives the following two advantages
Improved throughput and resource utilization. A transaction consists of many steps. Some involve I/O activity; others involve CPU activity. The CPU and the disks in a computer system can operate in parallel. Therefore, I/O activity can be done in parallel with processing at the CPU. The parallelism of the CPU and the I/O system can, therefore, be exploited to run multiple transactions in parallel. While a read or write on behalf of one transaction is in progress on one disk, another transaction can be running in the CPU, while another disk may be executing a read or write on behalf of a third transaction. All of this increases the throughput of the system—that is, the number of transactions executed in a given amount of time. Correspondingly, the processor and disk utilization also increase; in other words, the processor and disk spend less time idle, not performing any useful work.
Reduced waiting time. There may be a mix of transactions running on a system, some short and some long. If transactions run serially, a short transaction may have to wait for a preceding long transaction to complete, which can lead to unpredictable delays in running a transaction. If the transactions are operating on different parts of the database, it is better to let them run concurrently, sharing the CPU cycles and disk accesses among them. Concurrent execution reduces the unpredictable delays in running transactions. Moreover, it also reduces the average response time: the average time for a transaction to be completed after it has been submitted.
Data in a database can be stored in:
Volatile storage. Information residing in volatile storage does not usually survive system crashes. Examples of such storage are main memory and cache memory. Access to volatile storage is extremely fast, both because of the speed of the memory access itself, and because it is possible to access any data item in volatile storage directly.
Nonvolatile storage. Information residing in nonvolatile storage survives system crashes. Examples of nonvolatile storage include secondary storage devices such as magnetic disk and flash storage, used for online storage, and tertiary storage devices such as optical media, and magnetic tapes, used for archival storage
Stable storage. Information residing in stable storage is never lost (theoretically never cannot be guaranteed—for example, it is possible, although extremely unlikely, that a black hole may envelop the earth and permanently destroy all data!). Although stable storage is theoretically impossible to obtain, it can be closely approximated by techniques that make data loss extremely unlikely. To implement stable storage, we replicate the information in several nonvolatile storage media (usually disk) with independent failure modes. Updates must be done with care to ensure that a failure during an update to stable storage does not cause a loss of information.
Operations performed on a transaction are
read(X), which transfers the data item X from the database to a variable, also called X, in a buffer in main memory belonging to the transaction that executed the read operation.
write(X), which transfers the value in the variable X in the main-memory buffer of the transaction that executed the write to the data item X in the database.
Queries involving a natural join may be processed in several ways, depending on the availability of indices and the form of physical storage for the relations.
If the join result is almost as large as the Cartesian product of the two relations, a block nested-loop join strategy may be advantageous.
If indices are available, the indexed nested-loop join can be used.
If the relations are sorted, a merge join may be desirable. It may be advantageous to sort a relation prior to join computation (so as to allow the use of the merge-join strategy).
The hash-join algorithm partitions the relations into several pieces, such that each piece of one of the relations fits in memory. The partitioning is carried out with a hash function on the join attributes so that corresponding pairs of partitions can be joined independently.
The first action that the system must perform on a query is to translate the query into its internal form, which (for relational database systems) is usually based on the relational algebra. In the process of generating the internal form of the query, the parser checks the syntax of the user’s query, verifies that the relation names appearing in the query are names of relations in the database, and so on. If the query was expressed in terms of a view, the parser replaces all references to the view name with the relational-algebra expression to compute the view.
Pipelines can be executed in the following two ways
In a demand-driven pipeline, the system makes repeated requests for tuples from the operation at the top of the pipeline. Each time that an operation receives a request for tuples, it computes the next tuple (or tuples) to be returned and then returns that tuple. If the inputs of the operation are not pipelined, the next tuple(s) to be returned can be computed from the input relations, while the system keeps track of what has been returned so far. If it has some pipelined inputs, the operation also makes requests for tuples from its pipelined inputs. Using the tuples received from its pipelined inputs, the operation computes tuples for its output and passes them up to its parent.
In a producer-driven pipeline, operations do not wait for requests to produce tuples but instead generate the tuples eagerly. Each operation in a producer-driven pipeline is modeled as a separate process or thread within the system that takes a stream of tuples from its pipelined inputs and generates a stream of tuples for its output.
The first step in each case is to partition the two relations by the same hash function, and thereby create the partitions r0, r1, ..., rnh and s0, s1, ..., snh. Depending on the operation, the system then takes these steps on each partition i = 0, 1, ..., nh.
Different set operations
r ∪ s
Build an in-memory hash index on ri.
Add the tuples in si to the hash index only if they are not already present.
Add the tuples in the hash index to the result.
r ∩ s
Build an in-memory hash index on ri.
For each tuple in si, probe the hash index and output the tuple to the result only if it is already present in the hash index.
r − s
Build an in-memory hash index on ri.
For each tuple in si, probe the hash index, and if the tuple is present in the hash index, delete it from the hash index.
Add the tuples remaining in the hash index to the result.
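A Python sketch of the per-partition logic above, assuming both inputs have already been partitioned by the same hash function so matching tuples land in the same partition; the function names are illustrative:

def hash_union(ri, si):
    index = {t: True for t in ri}           # build hash index on ri
    for t in si:                            # add si tuples not already present
        index.setdefault(t, True)
    return list(index)

def hash_intersection(ri, si):
    index = set(ri)                         # build hash index on ri
    return [t for t in si if t in index]    # output only probed matches

def hash_difference(ri, si):                # r - s
    index = set(ri)                         # build hash index on ri
    for t in si:
        index.discard(t)                    # delete matching tuples
    return list(index)                      # remaining tuples form the result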
A selection operation on a relation whose tuples are stored together in one file can be implemented in the following ways:
A1 (linear search). In a linear search, the system scans each file block and tests all records to see whether they satisfy the selection condition. An initial seek is required to access the first block of the file.
A2 (primary index, equality on key). For an equality comparison on a key attribute with a primary index, we can use the index to retrieve a single record that satisfies the corresponding equality condition
A3 (primary index, equality on non-key). We can retrieve multiple records by using a primary index when the selection condition specifies an equality comparison on a non-key attribute, A.
A4 (secondary index, equality). Selections specifying an equality condition can use a secondary index. This strategy can retrieve a single record if the equality condition is on a key; multiple records may be retrieved if the indexing field is not a key
Bucket overflow can occur for the following reasons:
Insufficient buckets. The number of buckets, which we denote nB, must be chosen such that nB > nr/fr, where nr denotes the total number of records that will be stored and fr denotes the number of records that will fit in a bucket. This choice assumes that the total number of records is known when the hash function is chosen.
Skew. Some buckets are assigned more records than are others, so a bucket may overflow even when other buckets still have space. This situation is called bucket skew.
Skew can occur for two reasons:
Multiple records may have the same search key.
The chosen hash function may result in nonuniform distribution of search keys.
Dense index: In a dense index, an index entry appears for every search-key value in the file. In a dense clustering index, the index record contains the search-key value and a pointer to the first data record with that search-key value. The rest of the records with the same search-key value would be stored sequentially after the first record, since, because the index is a clustering one, records are sorted on the same search key. In a dense nonclustering index, the index must store a list of pointers to all records with the same search-key value.
Sparse index: In a sparse index, an index entry appears for only some of the search-key values. Sparse indices can be used only if the relation is stored in sorted order of the search key, that is if the index is a clustering index. As is true in dense indices, each index entry contains a search-key value and a pointer to the first data record with that search-key value. To locate a record, we find the index entry with the largest search-key value that is less than or equal to the search-key value for which we are looking. We start at the record pointed to by that index entry, and follow the pointers in the file until we find the desired record.
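A small Python sketch of a sparse-index lookup, assuming a file sorted on the search key and hypothetical (search_key, block_number) index entries:

import bisect

# One index entry per block of the sorted file; values are illustrative.
sparse_index = [(10, 0), (40, 1), (70, 2)]

def locate_block(key):
    keys = [k for k, _ in sparse_index]
    # Largest index entry whose key is <= the key we are looking for.
    pos = bisect.bisect_right(keys, key) - 1
    return sparse_index[max(pos, 0)][1]    # scan the file from this block

print(locate_block(55))   # -> 1: start scanning at block 1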
Indexing techniques are evaluated on the basis of
Access types: The types of access that are supported efficiently. Access types can include finding records with a specified attribute value and finding records whose attribute values fall in a specified range.
Access time: The time it takes to find a particular data item, or set of items, using the technique in question.
Insertion time: The time it takes to insert a new data item. This value includes the time it takes to find the correct place to insert the new data item, as well as the time it takes to update the index structure.
Deletion time: The time it takes to delete a data item. This value includes the time it takes to find the item to be deleted, as well as the time it takes to update the index structure.
Space overhead: The additional space occupied by an index structure. Provided that the amount of additional space is moderate, it is usually worthwhile to sacrifice the space to achieve improved performance.
The buffer manager uses the following techniques:
Buffer replacement strategy. When there is no room left in the buffer, a block must be removed from the buffer before a new one can be read in. Most operating systems use a least recently used (LRU) scheme, in which the block that was referenced least recently is written back to disk and is removed from the buffer. This simple approach can be improved on for database application.
Pinned blocks. For the database system to be able to recover from crashes, it is necessary to restrict those times when a block may be written back to disk. For instance, most recovery systems require that a block should not be written to disk while an update on the block is in progress. A block that is not allowed to be written back to disk is said to be pinned. Although many operating systems do not support pinned blocks, such a feature is essential for a database system that is resilient to crashes.
Forced output of blocks. There are situations in which it is necessary to write back the block to disk, even though the buffer space that it occupies is not needed. This write is called the forced output of a block.
Records can be organized in the following ways:
Heap file organization. Any record can be placed anywhere in the file where there is space for the record. There is no ordering of records. Typically, there is a single file for each relation.
Sequential file organization. Records are stored in sequential order, according to the value of a “search key” of each record.
Hashing file organization. A hash function is computed on some attribute of each record. The result of the hash function specifies in which block of the file the record should be placed.
Mapping cardinalities, or cardinality ratios, express the number of entities to which another entity can be associated via a relationship set.
Mapping cardinalities are most useful in describing binary relationship sets, although they can contribute to the description of relationship sets that involve more than two entity sets.
For a binary relationship set R between entity sets A and B, the mapping cardinality must be one of the following:
One-to-one. An entity in A is associated with at most one entity in B, and an entity in B is associated with at most one entity in A
One-to-many. An entity in A is associated with any number (zero or more) of entities in B. An entity in B, however, can be associated with at most one entity in A
Many-to-one. An entity in A is associated with at most one entity in B. An entity in B, however, can be associated with any number (zero or more) of entities in A.
Many-to-many. An entity in A is associated with any number (zero or more) of entities in B, and an entity in B is associated with any number (zero or more) of entities in A.
The basic data types supported by SQL are:
char(n): A fixed-length character string with user-specified length n. The full form, character, can be used instead.
varchar(n): A variable-length character string with user-specified maximum length n. The full form, character varying, is equivalent.
int: An integer (a finite subset of the integers that is machine dependent). The full form, integer, is equivalent.
smallint: A small integer (a machine-dependent subset of the integer type).
numeric(p,d): A fixed-point number with user-specified precision. The number consists of p digits (plus a sign), and d of the p digits are to the right of the decimal point. Thus, numeric(3,1) allows 44.5 to be stored exactly, but neither 444.5 nor 0.32 can be stored exactly in a field of this type.
real, double precision: Floating-point and double-precision floating-point numbers with machine-dependent precision.
float(n): A floating-point number, with precision of at least n digits.
Functions of a DBA include
Schema definition. The DBA creates the original database schema by executing a set of data definition statements in the DDL.
Storage structure and access-method definition.
Schema and physical-organization modification. The DBA carries out changes to the schema and physical organization to reflect the changing needs of the organization, or to alter the physical organization to improve performance.
Granting of authorization for data access. By granting different types of authorization, the database administrator can regulate which parts of the database various users can access. The authorization information is kept in a special system structure that the database system consults whenever someone attempts to access the data in the system.
Routine maintenance. Examples of the database administrator’s routine maintenance activities are:
Periodically backing up the database, either onto tapes or onto remote servers, to prevent loss of data in case of disasters such as flooding.
Ensuring that enough free disk space is available for normal operations, and upgrading disk space as required.
Monitoring jobs running on the database and ensuring that performance is not degraded by very expensive tasks submitted by some users.
The immediate database modification technique allows database modifications to be output to the database while the transaction is still in the active state. The data modifications written by active transactions are called “uncommitted modifications”.
If the system crashes or a transaction aborts, then the old-value field of the log records is used to restore the modified data items to the values they had prior to the start of the transaction. This restoration is accomplished through the undo operation. In order to understand undo operations, let us consider the format of a log record.
<Ti, Xj, V_old, V_new>
Here, Ti is the transaction identifier, Xj is the data item, V_old is the old value of the data item, and V_new is the modified or new value of the data item Xj.
Undo (Ti):
It restores the values of all data items updated by transaction Ti to the old values.
Before a transaction Ti starts its execution, the record <Ti, start> is written to the log. During its execution, any write(X) operation by Ti is performed by writing the appropriate new update record to the log. When Ti partially commits, the record <Ti, commit> is written to the log.
It ensures transaction atomicity by recording all database modifications in the log but deferring the execution of all write operations of a transaction until the transaction partially commits.
A transaction is said to be partially committed once the final action of the transaction has been executed. When a transaction has performed all the actions, then the information in the log associated with the transaction is used in executing the deferred writes. In other words, at partial commits, time logged updates are “replayed” into database item.
The recovery procedure of deferred database modification is based on the redo operation.
Redo(Ti)
It sets the value of all data items updated by transaction Ti to the new values from the log of records.
After a failure has occurred, the recovery subsystem consults the log to determine which transactions need to be redone. Transaction Ti needs to be redone if and only if the log contains both the record <Ti, start> and the record <Ti, commit>. Thus, if the system crashes after the transaction completes its execution, then the information in the log is used in restoring the system to a previous consistent state.
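Tying the undo and redo operations together, here is a Python sketch of recovery over a log, assuming records shaped as described here: ("start", T), ("update", T, X, V_old, V_new), ("commit", T). Under immediate modification both passes apply; under deferred modification only the redo pass is needed.

def recover(log, db):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    # Redo committed transactions forward, using the new values.
    for rec in log:
        if rec[0] == "update" and rec[1] in committed:
            _, t, x, old, new = rec
            db[x] = new
    # Undo uncommitted transactions backward, restoring the old values.
    for rec in reversed(log):
        if rec[0] == "update" and rec[1] not in committed:
            _, t, x, old, new = rec
            db[x] = old
    return db

log = [("start", "T1"), ("update", "T1", "A", 100, 50), ("commit", "T1"),
       ("start", "T2"), ("update", "T2", "B", 20, 99)]
print(recover(log, {"A": 100, "B": 20}))   # -> {'A': 50, 'B': 20}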
Log-based recovery is used for recording database modifications. In log-based recovery, a log file is maintained for recovery purposes.
The log file is a sequence of log records. A log record maintains a record of all the update operations on the database.
Types of log records:
<Start> Log Record:
Contains information about the start of each transaction. It holds the transaction identifier, the unique identification of the transaction that is starting.
Representation:
<Ti, start>
<Update> Log Record:
It describes a single database write and has the following fields:
< Ti, Xj, V1,V2 >
Here, Ti is the transaction identifier, Xj is the data item, V1 is the old value of the data item, and V2 is the modified or new value of the data item Xj.
<Commit> Log Record
When a transaction Ti is successfully committed or completed a <Ti, commit> log record is stored in the log file.
<Abort> Log Record
When a transaction Ti is aborted due to any reason, a <Ti, abort> log record is stored in the log file.
Stable storage is storage in which information is never lost. Stable storage devices are theoretically impossible to obtain, so we must use techniques to design a storage system in which the chances of data loss are extremely low.
Causes of Failures:
System Crashes
User Error
Carelessness
Sabotage (intentional corruption of data)
Statement Failure
Application software errors
Network Failure
Media Failure
Natural Physical Disasters
The most important information needed for whole recovery process must be stored in stable storage.
Data Model can be defined as an integrated collection of concepts for describing and manipulating data, relationships between data, and constraints on the data in an organization.
Different types of data models:
Object Based Data Models - Object based data models use concepts such as entities, attributes, and relationships.
Physical Data Models - Physical data models describe how data is stored in the computer.
Record Based Data Models - Record based logical models are used in describing data at the logical and view levels.
The object based and record based data models are used to describe data at the conceptual and external levels, while the physical data model is used to describe data at the internal level.
In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. The most common requirement is to minimize the time taken to execute a program, a less common one is to minimize the amount of memory occupied. The growth of portable computers has created a market for minimizing the power consumed by a program. Compiler optimization is generally implemented using a sequence of optimizing transformations, algorithms which take a program and transform it to produce a semantically equivalent output program that uses fewer resources.
The synthesis phase, also known as the back end of the compiler, generates the target program with the help of the intermediate source code representation and the symbol table.
A compiler can have many phases and passes.
Pass: A pass refers to the traversal of a compiler through the entire program.
Phase: A phase of a compiler is a distinguishable stage, which takes input from the previous stage, processes and yields output that can be used as input for the next stage. A pass can have more than one phase.
LOOSELY COUPLED SYSTEM
Each processor has its own memory module.
Efficient when tasks running on different processors, has minimal interaction.
It generally, do not encounter memory conflict.
Message transfer system (MTS).
Data rate is low.
Less expensive
TIGHTLY COUPLED SYSTEM
Processors have shared memory modules.
Efficient for high-speed or real-time processing.
It experiences more memory conflicts.
Interconnection networks PMIN, IOPIN, ISIN.
Data rate is high.
More expensive.
An annotated parse tree is one in which various facts about the program have been attached to parse tree nodes. For example, one might compute the set of identifiers that each subtree mentions, and attach that set to the subtree. Compilers have to store information they have collected about the program somewhere; this is a convenient place to store information which is derivable from the tree.
NFA
NFA or Non-Deterministic Finite Automaton is one in which there can exist many paths for a specific input from the current state to the next state.
NFA can use Empty String transition.
NFA can be understood as multiple little machines computing at the same time.
If all of the branches of the NFA die or reject the string, we can say that the NFA rejects the string.
We do not need to specify how the NFA reacts according to some symbol.
DFA
A Deterministic Finite Automaton is an FA in which there is only one path for a specific input from the current state to the next state. There is a unique transition on each input symbol.
DFA cannot use Empty String transition
DFA can be understood as one machine
A DFA will reject the string if it ends in a state other than an accepting state.
For Every symbol of the alphabet, there is only one state transition in DFA.
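A Python sketch of the "multiple little machines" view, simulating an NFA by tracking the set of active states; the transition-table shape (state, symbol) -> states and the example automaton are illustrative:

def nfa_accepts(delta, start, accepting, word):
    def closure(states):
        stack, seen = list(states), set(states)
        while stack:                         # follow empty-string transitions
            s = stack.pop()
            for t in delta.get((s, ""), ()):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    current = closure({start})
    for ch in word:
        nxt = set()
        for s in current:                    # every branch advances at once
            nxt |= set(delta.get((s, ch), ()))
        current = closure(nxt)
        if not current:                      # all branches died: reject
            return False
    return bool(current & accepting)

# NFA for strings over {a, b} ending in "ab".
delta = {("q0", "a"): ["q0", "q1"], ("q0", "b"): ["q0"], ("q1", "b"): ["q2"]}
print(nfa_accepts(delta, "q0", {"q2"}, "aab"))   # -> True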
The intermediate code generator receives input from its predecessor phase, the semantic analyzer, in the form of an annotated syntax tree. That syntax tree can then be converted into a linear representation, e.g., postfix notation. Intermediate code tends to be machine-independent code. Therefore, the code generator assumes an unlimited number of memory locations (registers) is available when generating code.
A three-address code has at most three address locations to calculate the expression. A three-address code can be represented in two forms:
quadruples
triples
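For illustration, a Python sketch of the two forms for the classic expression a = b * -c + b * -c; the temporary names t1..t5 are assumptions:

quadruples = [
    ("uminus", "c",  None, "t1"),   # op, arg1, arg2, result
    ("*",      "b",  "t1", "t2"),
    ("uminus", "c",  None, "t3"),
    ("*",      "b",  "t3", "t4"),
    ("+",      "t2", "t4", "t5"),
    ("=",      "t5", None, "a"),
]

# Triples drop the explicit result field; results are referred to
# by the position (index) of the instruction that computes them.
triples = [
    ("uminus", "c", None),    # (0)
    ("*",      "b", 0),       # (1)
    ("uminus", "c", None),    # (2)
    ("*",      "b", 2),       # (3)
    ("+",      1,   3),       # (4)
    ("=",      "a", 4),       # (5)
]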
Intermediate code can be represented as
High Level IR - High-level intermediate code representation is very close to the source language itself. They can be easily generated from the source code and we can easily apply code modifications to enhance performance. But for target machine optimization, it is less preferred.
Low Level IR - This one is close to the target machine, which makes it suitable for register and memory allocation, instruction set selection, etc. It is good for machine-dependent optimizations.
Intermediate code can be either language specific (e.g., Byte Code for Java) or language independent (three-address code).
We need to translate the source code into intermediate code which is then translated to its target code because:
If a compiler translates the source language to its target machine language without having the option for generating an intermediate code, then for each new machine, a full native compiler is required
Intermediate code eliminates the need for a new full compiler for every unique machine by keeping the analysis portion same for all the compilers.
The second part of compiler, synthesis, is changed according to the target machine.
It becomes easier to apply the source code modifications to improve code performance by applying code optimization techniques on the intermediate code.
The problem in generating three-address code in a single pass is that we may not know the labels that control must go to at the time jump statements are generated. To get around this problem, a series of branching statements is generated with the targets of the jumps temporarily left unspecified.
Back Patching is putting the address instead of labels when the proper label is determined.
Back patching Algorithms perform three types of operations
1) makelist (i) – creates a new list containing only i, an index into the array of quadruples and returns a pointer to the list it has made.
2) Merge (i, j) – concatenates the lists pointed to by i and j, and returns a pointer to the concatenated list.
3) Backpatch (p, i) – inserts i as the target label for each of the statements on the list pointed to by p.
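A Python sketch of the three operations over an array of quadruples whose jump targets start out unspecified; the quadruple shape and the label value are illustrative:

quads = []

def makelist(i):
    return [i]                       # new list containing only index i

def merge(list1, list2):
    return list1 + list2             # concatenation of the two lists

def backpatch(lst, label):
    for i in lst:                    # fill in the target for each quad
        op, arg, _ = quads[i]
        quads[i] = (op, arg, label)

quads.append(("goto", None, None))           # target unknown yet
pending = makelist(0)
quads.append(("if_false_goto", "t1", None))
pending = merge(pending, makelist(1))
backpatch(pending, 100)                      # now the label is known
print(quads)   # both jumps now target label 100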
A symbol table, either linear or hash, provides the following operations
insert()
This operation is used more frequently by the analysis phase, i.e., the first half of the compiler, where tokens are identified and names are stored in the table. It is used to add information about unique names occurring in the source code to the symbol table.
The insert() function takes the symbol and its attributes as arguments and stores the information in the symbol table.
EXAMPLE:
int a;
insert(a, int);
lookup()
The lookup() operation is used to search a name in the symbol table to determine:
if the symbol exists in the table.
if it is declared before it is being used.
if the name is used in the scope.
if the symbol is initialized.
if the symbol is declared multiple times.
The format of lookup() function varies according to the programming language.
Basic format:
lookup(symbol)
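A Python sketch of a symbol table with nested scopes, mirroring the insert() and lookup() operations described above; the attribute format is an assumption:

class SymbolTable:
    def __init__(self):
        self.scopes = [{}]                   # global scope at the bottom

    def enter_scope(self):
        self.scopes.append({})

    def exit_scope(self):
        self.scopes.pop()

    def insert(self, name, attrs):
        self.scopes[-1][name] = attrs        # add name to current scope

    def lookup(self, name):
        for scope in reversed(self.scopes):  # innermost scope wins
            if name in scope:
                return scope[name]
        return None                          # undeclared name

table = SymbolTable()
table.insert("a", {"type": "int"})           # int a;
print(table.lookup("a"))                     # {'type': 'int'}
print(table.lookup("b"))                     # None: used before declaration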
Symbol table is an important data structure created and maintained by compilers in order to store information about the occurrence of various entities such as variable names, function names, objects, classes, interfaces, etc.
A Symbol table is used for
To store the names of all entities in a structured form at one place.
To verify if a variable has been declared.
To implement type checking, by verifying assignments and expressions in the source code are semantically correct.
To determine the scope of a name (scope resolution).
Lexical Analysis
First phase of a compiler.
It is also called scanner.
Main task:
read the input characters and produce as output a sequence of tokens.
Process: the input is the program as a single string of characters.
Collects characters into logical groupings and assigns internal codes to the groupings according to their structure.
Groupings: lexemes
Internal codes: tokens
Secondary tasks:
Stripping out from the source program comments and white spaces in the form of blank, tab, and new line characters.
Correlating error messages from the compiler with the source program.
Inserting lexemes for user-defined names into the symbol table.
Syntax Analysis
The syntax analyzer or parser must determine the structure of the sequence of tokens provided to it by the scanner.
Check the input program to determine whether it is syntactically correct.
Produce either a complete parse tree or at least trace the structure of the complete parse tree.
On error: produce a diagnostic message and recover (get back to a normal state and continue the analysis of the input program, so as to find as many errors as possible in one pass).
Different scheduling criteria are:
CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput.
Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
Waiting time. The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
Response time. Time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response.
A thread that is to be canceled is often referred to as the target thread.
Cancellation of a target thread may occur in two different ways:
Asynchronous cancellation. One thread immediately terminates the target thread.
Deferred cancellation. The target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
Benefits of multithreaded programming can be broken down into four major categories:
Responsiveness. Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
Resource sharing. Processes may only share resources through techniques such as shared memory or message passing. Such techniques must be explicitly arranged by the programmer. However, threads share the memory and the resources of the process to which they belong by default.
Economy. Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. Empirically gauging the difference in overhead can be difficult, but in general, it is much more time consuming to create and manage processes than threads.
Scalability. The benefits of multithreading can be greatly increased in a multiprocessor architecture, where threads may be running in parallel on different processors. A single-threaded process can only run on one processor, regardless of how many are available. Multithreading on a multi-CPU machine increases parallelism.
Process cooperation is necessary because it provides the following benefits:
Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.
Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads, as we discussed in Chapter 2.
Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel.
For each monitor, a semaphore mutex (initialized to 1) is provided. A process must execute wait (mutex) before entering the monitor and must execute signal (mutex) after leaving the monitor.
Since a signaling process must wait until the resumed process either leaves or waits, an additional semaphore, next, is introduced, initialized to 0. The signaling processes can use next to suspend themselves. An integer variable next_count is also provided to count the number of processes suspended on next.
Thus, each external procedure F is replaced by:
wait(mutex);
  ... body of F ...
if (next_count > 0)
  signal(next);
else
  signal(mutex);
Mutual exclusion within a monitor is ensured.
Condition variables implementation.
For each condition x, we introduce a semaphore x_sem and an integer variable x_count, both initialized to 0.
For x.wait():
x_count++;
if (next_count > 0)
signal(next);
else signal(mutex);
wait (x_sem) ;
x_count--;
For x.signal():
if (x_count > 0) {
next_count++;
signal(x_sem);
wait(next);
next_count--;
}
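A Python sketch of this construction using threading.Semaphore, with mutex, next_sem, next_count, x_sem, and x_count mirroring the variables above; the entry/exit helper names are illustrative:

import threading

mutex = threading.Semaphore(1)       # monitor lock, initialized to 1
next_sem = threading.Semaphore(0)    # where signalers suspend themselves
next_count = 0                       # processes suspended on next
x_sem, x_count = threading.Semaphore(0), 0

def monitor_entry():
    mutex.acquire()                  # wait(mutex) before entering

def monitor_exit():
    if next_count > 0:
        next_sem.release()           # resume a suspended signaler first
    else:
        mutex.release()              # otherwise open the monitor

def x_wait():
    global x_count
    x_count += 1
    monitor_exit()                   # give up the monitor while waiting
    x_sem.acquire()                  # wait(x_sem)
    x_count -= 1

def x_signal():
    global next_count
    if x_count > 0:
        next_count += 1
        x_sem.release()              # wake one waiter on x
        next_sem.acquire()           # suspend ourselves on next
        next_count -= 1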
There are a few solutions to the priority-inversion problem in real-time systems. One is to turn off all system interrupts, effectively halting thread preemption in the system, while critical tasks execute. However, to make this work, you cannot implement more than two thread priorities, and critical sections where resources are locked need to be very brief and tightly controlled.
However, a more practical and less-invasive solution is to implement the priority inheritance protocol.
With priority inheritance, the system code that implements resource locking checks to see if a lower priority thread already owns a lock on the associated resource when a thread attempts to lock it. If one does, that owning thread's priority is temporarily increased to match that of the higher priority thread attempting to acquire the lock. As a result, the lock owner (once blocked at a lower priority) will execute, release the lock, and then be restored to its original priority level.
The priority-based model of execution states that a task can only be preempted by another task of higher priority. However, scenarios can arise where a lower priority task may indirectly preempt a higher priority task, in a sense inverting the priorities of the associated tasks, and violating the priority-based ordering of execution. This is called "priority inversion", and usually occurs when resource sharing is involved.
I/O-bound programs have the property of performing only a small amount of computation before performing I/O. Such programs typically do not use up their entire CPU quantum.
CPU-bound programs, on the other hand, use their entire quantum without performing any blocking I/O operations. Consequently, one could make better use of the computer’s resources by giving higher priority to I/O-bound programs and allow them to execute ahead of the CPU-bound programs.
Processor affinity means you can specify which processor(s) a given process or thread should run on.
AFFINITY LEVELS
There are three levels of affinity in the RTSS subsystem:
Subsystem affinity - Subsystem affinity refers to the set of processors you have dedicated to RTSS.
Process affinity - Process affinity refers to the processors that the threads of a given process may run on. If you don't specify a processor for a process to run on, its main thread will run on the lowest-numbered RTSS processor available in the system. The set of processors that a process's threads can run on must be a subset of the set of processors available to the RTSS subsystem.
Thread affinity - Thread affinity determines the processors that an individual thread can run on. By default, a thread will run on the lowest-numbered RTSS processor available for the process to run on. The set of processors a thread can run on must be a subset of the set of processors its process can run on.
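The RTSS calls above are platform specific, but as a rough Linux analogue, here is a Python sketch of pinning the current process to a CPU with os.sched_setaffinity:

import os

pid = 0                                  # 0 means the calling process
print(os.sched_getaffinity(pid))         # e.g. {0, 1, 2, 3}
os.sched_setaffinity(pid, {1})           # pin this process to CPU 1
print(os.sched_getaffinity(pid))         # {1}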
Spinlocks are not appropriate for single-processor systems because the condition that would break a process out of the spinlock can be obtained only by executing a different process. If the process is not relinquishing the processor, other processes do not get the opportunity to set the program condition required for the first process to make progress. In a multiprocessor system, other processes execute on other processors and thereby modify the program state in order to release the first process from the spinlock.
Long-Term Scheduler
A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution; the process is loaded into memory for CPU scheduling. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming. It is also called the job scheduler.
Short-Term Scheduler
Its main objective is to increase system performance in accordance with the chosen set of criteria. It handles the change of a process from the ready state to the running state. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers, decide which process to execute next. Short-term schedulers are faster than long-term schedulers. The short-term scheduler is also called the CPU scheduler.
Medium-Term Scheduler
It removes processes from memory. It reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes. Medium-term scheduling is a part of swapping.
A trap is an exception in a user process. It's caused by division by zero or invalid memory access. It's also the usual way to invoke a kernel routine (a system call) because those run with a higher priority than user code. Handling is synchronous (so the user code is suspended and continues afterwards). In a sense they are "active" - most of the time, the code expects the trap to happen and relies on this fact.
An interrupt is something generated by the hardware (devices like the hard disk, graphics card, I/O ports, etc). These are asynchronous (i.e. they don't happen at predictable places in the user code) or "passive" since the interrupt handler has to wait for them to happen eventually.
Single Inheritance
The derived class inherits a single base class.
class derived_class : access_specifier base_class
The derived class accesses the features of a single base class.
Public, Private, Protected
Requires a small amount of run-time overhead.
Multiple Inheritance
The derived class inherits two or more base classes.
class derived_class : access_specifier base_class1, access_specifier base_class2, ...
The derived class accesses the combined features of the inherited base classes.
Public, Private, Protected
Requires additional run-time overhead as compared to single inheritance.
If limited to single inheritance, the result is a specialization hierarchy and has a tree topology. Otherwise, in general, it forms a specialization lattice with DAG topology. An entity type with more than one superclass is called a shared subclass. A shared subclass inherits attributes from its superclasses only once, just like in most OO languages.
A category T is a class that is a subset of the union of n defining superclasses D1, D2, ..., Dn,n > 1 and is formally specified as follows:
T ⊆ (D1 ∪ D2 ...∪ Dn)
Specialization Hierarchy – has the constraint that every subclass participates as a subclass in only one class/subclass relationship, i.e. that each subclass has only one parent. This results in a tree structure.
Specialization Lattice – has the constraint that a subclass can be a subclass of more than one class/subclass relationship.
In a lattice or hierarchy, the subclass inherits the attributes not only of the direct superclass, but also all of the predecessor super classes all the way to the root.
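A Python sketch of a shared subclass: the class names are illustrative, and the diamond-shaped lattice shows the root's attributes being inherited only once:

class Vehicle:
    def __init__(self):
        self.wheels = 4

class LandVehicle(Vehicle): pass
class WaterVehicle(Vehicle): pass

class Amphibious(LandVehicle, WaterVehicle):   # shared subclass
    pass

a = Amphibious()
print(a.wheels)                    # 4: Vehicle's attribute, inherited once
print(Amphibious.__mro__)          # linearized DAG; each class appears once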
The subclass, also called the child class, is a class that extends another class so that it inherits both protected and public members from that class.
A subclass is needed in data modeling because it is an easy way to define an inheritance relationship between two classes. The relationship between two classes with respect to data helps in designing the structure of the data.
A tuple relational calculus expression may generate an infinite relation.
We need to restrict the relational calculus a bit.
The domain of a formula P, denoted dom(P), is the set of all values referenced in P.
These include values mentioned in P as well as values that appear in a tuple of a relation mentioned in P.
So, the domain of P is the set of all values explicitly appearing in P or that appear in relations mentioned is P.
We may say an expression { t | P } is safe if all values that appear in the result are values from dom(P).
A safe expression yields a finite number of tuples as its result. Otherwise, it is called unsafe.
Relational calculus is a non-procedural query language. It uses mathematical predicate calculus instead of algebra. It describes the desired result, whereas relational algebra gives the method to obtain the result. It informs the system what to do with the relation, but does not inform how to perform it.
Tuple relational calculus is a non-procedural query language for selecting tuples from a relation. It can select tuples with a range of values, or tuples with certain attribute values, and so on. The resulting relation can have one or more tuples.
{t | P (t)} or {t | condition (t)}
Domain relational calculus uses a list of attributes to be selected from the relation, based on the condition. It is the same as TRC but differs by selecting attributes rather than whole tuples.
{<EMP_ID, EMP_NAME> | <EMP_ID, EMP_NAME> ∈ EMPLOYEE Λ DEPT_ID = 10}
The two-tier architecture is like a client-server application. The direct communication takes place between client and server. There is no intermediate layer between client and server. Because of tight coupling, a two-tiered application will run faster.
Advantages:
Easy to maintain, and modification is relatively easy
Communication is faster
Disadvantages:
In a two-tier architecture, application performance degrades as the number of users increases.
Cost-ineffective
Three-tier architecture typically comprises:
1) Client layer
2) Business Layer
3) Data layer
Advantages
High performance, lightweight persistent objects
Scalability – Each tier can scale horizontally
Performance – Because the Presentation tier can cache requests, network utilization is minimized, and the load is reduced on the Application and Data tiers.
High degree of flexibility in deployment platform and configuration
Better Re-use
Improve Data Integrity
Improved Security – The client does not access the database directly.
Easy to maintain and modify; changes won’t affect other modules.
In a three-tier architecture, application performance is good.
Disadvantages
Increase Complexity/Effort
The schema is sometimes called the intension, and a database state an extension of the schema.
When we define a new database, we specify its database schema only to the DBMS. At this point, the corresponding database state is the empty state with no data. We get the initial state of the database when the database is first populated or loaded with the initial data. From then on, every time an update operation is applied to the database, we get another database state. At any point in time, the database has a current state. The DBMS is partly responsible for ensuring that every state of the database is a valid state, that is, a state that satisfies the structure and constraints specified in the schema. The DBMS stores the descriptions of the schema constructs and constraints, also called the meta-data, in the DBMS catalog so that DBMS software can refer to the schema whenever it needs to.
Redundancy is the state of being not or no longer needed or useful. In the traditional approach, uncontrolled redundancy in storing the same data/information many times in the database leads to several problems: duplication of effort, wastage of storage space, and inconsistent data.
Controlled redundancy is a necessary technique that uses redundant fields in a database. This speeds up database access and also improves the performance of queries. Usually, the DBMS ensures the allocation of the data in the records. It should have the capability to control this redundancy in order to prevent inconsistencies among the files.
Updating statistics ensures that queries compile with up-to-date statistics. However, updating statistics causes queries to recompile. We recommend not updating statistics too often because there is a performance tradeoff between improving query plans and the time it takes to recompile queries. The specific tradeoffs depend on your application. UPDATE STATISTICS can use tempdb to sort the sample of rows for building statistics.
A test case may be defined as a set of instructions designed to reveal an error in the system by causing a failure. Designing test cases is inexpensive compared with the overall cost of software testing. Many aspects must be kept in mind when test cases are selected.
The aim of the test case should be getting a program which has no errors if any error is found in the program, it is solved it quickly.
The selected test case should contain all inputs to the program.
A specified area should be present for the valuation of a test case.
A test case should be plan quickly as possible in the development process.
Good testing should have the following qualities:
Correctness
Reliability
Usability
Efficiency
Integrity
Flexibility
Structure
Alpha testing
Alpha testing may be defined as system testing that is done by the customer at the site where the developer has developed the system.
Alpha testing takes place once development is complete.
Alpha testing continues until customer agrees that system implementation is as per his/her expectation.
Alpha testing results in minor design changes.
Alpha testing is done in a controlled manner because the software is tested in developer's area.
Beta testing
Beta testing may be defined as system testing that is done by the customer at the customer's own site.
The application is tested in Beta Testing after development and testing is completed.
The problems faced by the customer are reported and software is re-released after beta testing for next beta test cycle.
To get problems and defects before the final release of the product, beta testing is very helpful.
Beta testing is done in normal environment and developers are not present during beta testing.
It is a measure to assess how practical and beneficial the software project development will be for an organization. The software analyzer conducts a thorough study to understand economic, technical and operational feasibility of the project.
Economic - Resource transportation, cost for training, cost of additional utilities and tools and overall estimation of costs and benefits of the project.
Technical - Is it possible to develop this system? Assessing suitability of machine(s) and operating system(s) on which software will execute, existing developers’ knowledge and skills, training, utilities or tools for project.
Operational - Can the organization adjust smoothly to the changes done as per the demand of project? Is the problem worth solving?
Operational - Can the organization adjust smoothly to the changes done as per the demand of project? Is the problem worth solving?
Software scope is a well-defined boundary, which encompasses all the activities that are done to develop and deliver the software product.
The software scope clearly defines all functionalities and artifacts to be delivered as a part of the software. The scope identifies what the product will do and what it will not do, what the end product will contain and what it will not contain.
SDLC models are adopted as per the requirements of the development process. The suitable model may vary from software to software, so care must be taken in choosing one. We can select the best SDLC model if the following questions are answered satisfactorily -
Is the SDLC suitable for the selected technology used to implement the software?
Is the SDLC appropriate for the client’s requirements and priorities?
Is the SDLC model suitable for the size and complexity of the software?
Is the SDLC model suitable for the type of projects and engineering we do?
Is the SDLC appropriate for the geographically co-located or dispersed developers?
Nested loop (loop over loop)
An outer loop is formed over the input with fewer entries, and then, for each individual entry of the outer loop, the inner loop over the second input is processed.
E.g.
Select col1.*, col2.* from col1, col2 where col1.col1 = col2.col2;
Its processing takes place in this way:
For i in (select * from col1) loop
    For j in (select * from col2 where col2 = i.col1) loop
        Results are displayed;
    End loop;
End loop;
The Steps of nested loop are:
Identify outer (driving) table
Assign inner (driven) table to outer table.
For every row of outer table, access the rows of inner table.
Nested Loops are executed with the inner loop nested within the outer loop:
outer_loop
    inner_loop
Hash join
While joining large tables, the use of Hash Join is preferred.
The Hash Join algorithm is divided into two phases:
Build: an in-memory hash table is built on the join key of the smaller table.
Probe: each row of the second (larger) table is hashed on the same key, and the hash table is probed for matching rows.
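The build/probe idea can be sketched in a few lines of Java (the table data and key names here are hypothetical; this illustrates the algorithm only, not an actual database engine):
import java.util.HashMap;
import java.util.Map;

public class HashJoinSketch {
    public static void main(String[] args) {
        // Build phase: hash the smaller table on its join key (dept id -> dept name).
        Map<Integer, String> hashTable = new HashMap<>();
        hashTable.put(10, "SALES");
        hashTable.put(20, "HR");
        // Probe phase: for each row of the larger table, probe the hash table.
        int[] empDeptIds = {10, 10, 20, 30};
        for (int deptId : empDeptIds) {
            String dept = hashTable.get(deptId);
            if (dept != null) {
                System.out.println(deptId + " joins with " + dept);
            }
        }
    }
}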
Sort merge join
Two independent sources of data are joined in a sort merge join. Its performance is better than a nested loop when the data volume is big enough, but generally it is not as good as a hash join.
The full operation can be divided into parts of two:
Sort join operation:
Get first row R1 from input 1
Get first row R2 from input 2
Merge join operation:
while not at the end of either input loop
    if R1 joins with R2
        return (R1, R2)
        get next row R2 from input 2
    else if R1 < R2
        get next row R1 from input 1
    else
        get next row R2 from input 2
end loop
Common reasons for a poorly performing query are:
No indexes.
Excessive stored procedure recompilations.
Triggers and procedures without SET NOCOUNT ON.
Complicated joins in an inadequately written query.
Excessive use of cursors and temporary tables.
Storage and access of data from a central location in order to take strategic decisions is called Data Warehousing. Enterprise management uses this framework, known as the data warehouse, for managing its information.
Restrictions that are applied are:
A view can be created only in the current database.
You cannot change a computed value through a view.
Integrity constraints decide the functionality of INSERT and DELETE statements on a view.
Full-text index definitions cannot be applied.
Temporary views cannot be created.
Views cannot be created on temporary tables.
Views cannot have any association with DEFAULT definitions.
Only INSTEAD OF triggers can be associated with views.
The COALESCE function is used to return the first value in the list that is not null. If all values in the list are null, then the COALESCE function will return NULL.
Coalesce(value1, value2,value3,...)
The RAW datatype is used to store values in binary data format. The maximum size for a RAW column in a table is 32767 bytes.
Varchar can store up to 2000 bytes and varchar2 can store up to 4000 bytes. Varchar will occupy space for NULL values, whereas varchar2 will not. The two differ with respect to space usage.
SQL Server Agent plays an important role in the day-to-day tasks of a database administrator (DBA). Its purpose is to ease the implementation of tasks for the DBA with its full-function scheduling engine, which allows you to schedule your own jobs and scripts.
Subquery – The inner query is executed only once. The inner query will get executed first and the output of the inner query used by the outer query. The inner query is not dependent on outer query.
Correlated subquery: The outer query will get executed first, and for every row of the outer query, the inner query will get executed. So the inner query will get executed as many times as there are rows in the result of the outer query. The outer query output can use the inner query output for comparison. This means the inner query and outer query are dependent on each other.
A CTE can be used:
• For recursion
• Substitute for a view when the general use of a view is not required; that is, you do not have to store the definition in metadata.
• Reference the resulting non-large table multiple times in the same statement.
No, we don’t have UPDATED magic table.
The ‘magic tables’ are the INSERTED and DELETED tables, as well as the update() and columns_updated() functions, and are used to determine the changes resulting from DML statements.
• For an INSERT statement, the INSERTED table will contain the inserted rows.
• For an UPDATE statement, the INSERTED table will contain the rows after an update, and the DELETED table will contain the rows before an update.
• For a DELETE statement, the DELETED table will contain the rows to be deleted.
Both CTEs and Sub Queries have pretty much the same performance and function.
CTE’s have an advantage over using a subquery in that you can use recursion in a CTE.
The biggest advantage of using CTE is readability. CTEs can be referenced multiple times in the same statement where as sub query cannot.
select * into <new table> from <existing table> where 1=2
select top 0 * into <new table> from <existing table>
SELECT column FROM table ORDER BY RAND() LIMIT 1;
select distinct hiredate from emp a where &n = (select count(distinct sal) from emp b where a.sal >= b.sal);
select * from emp minus select * from emp where rownum <= (select count(*) - &n from emp);
select * from emp where rownum <= &n;
1. Using a Filtered Index. A filtered index is used to index a portion of the rows in a table. While creating an index, we can specify conditional statements. The SQL query below will create a unique index on the rows having non-null values:
CREATE UNIQUE INDEX IX_ClientMaster_ClientCode ON ClientMaster(ClientCode)
WHERE ClientCode IS NOT NULL
2. Create a view having the unique fields and create a unique clustered index on it:
Create View vClientMaster_forIndex
With SchemaBinding
As
Select ClientCode From dbo.ClientMaster Where ClientCode IS NOT NULL;
Go
CREATE Unique Clustered Index UK_vClientMaster_ForIndex
on vClientMaster_forIndex(ClientCode)
INSERT INTO table DEFAULT VALUES;
The only difference between the RANK() and DENSE_RANK() functions is in cases where there is a “tie”; i.e., in cases where multiple values in a set have the same ranking. In such cases, RANK() will assign non-consecutive “ranks” to the values in the set (resulting in gaps between the integer ranking values when there is a tie), whereas DENSE_RANK() will assign consecutive ranks to the values in the set (so there will be no gaps between the integer ranking values in the case of a tie).
For example, consider the set {25, 25, 50, 75, 75, 100}. For such a set, RANK() will return {1, 1, 3, 4, 4, 6} (note that the values 2 and 5 are skipped), whereas DENSE_RANK() will return {1, 1, 2, 3, 3, 4}.
Both the NVL(exp1, exp2) and NVL2(exp1, exp2, exp3) functions check the value exp1 to see if it is null.
With the NVL(exp1, exp2) function, if exp1 is not null, then the value of exp1 is returned; otherwise, the value of exp2 is returned, but cast to the same data type as that of exp1.
With the NVL2(exp1, exp2, exp3) function, if exp1 is not null, then exp2 is returned; otherwise, the value of exp3 is returned.
To select all the even number records from a table:
Select * from table where id % 2 = 0
To select all the odd number records from a table:
Select * from table where id % 2 != 0
An execution plan is basically a road map that graphically or textually shows the data retrieval methods chosen by the SQL server’s query optimizer for a stored procedure or ad hoc query. Execution plans are very useful for helping a developer understand and analyze the performance characteristics of a query or stored procedure since the plan is used to execute the query or stored procedure.
SELECT * FROM mytable WHERE a=X UNION ALL SELECT * FROM mytable WHERE b=Y AND a!=X
Servletrunner is a small utility that runs servlets. It is included in the JSDK 2.0, while the JSDK 2.1 includes an HTTP server for this purpose.
The servletrunner is a small, multithreaded process that handles requests for servlets. Because servletrunner is multithreaded, it can be used to run multiple servlets simultaneously or to test one servlet that calls other servlets to satisfy client requests.
The rmiregistry command creates and starts a remote object registry on the specified port on the current host. If port is omitted, the registry is started on port 1099. The rmiregistry command produces no output and is typically run in the background.
EXAMPLE:
rmiregistry &
A remote object registry is a bootstrap naming service that is used by RMI servers on the same host to bind remote objects to names. Clients on local and remote hosts can then look up remote objects and make remote method invocations.
The registry is typically used to locate the first remote object on which an application needs to invoke methods. That object, in turn, will provide application-specific support for finding other objects.
A stub for a remote object acts as a client's local representative or proxy for the remote object. The caller invokes a method on the local stub which is responsible for carrying out the method call on the remote object. In RMI, a stub for a remote object implements the same set of remote interfaces that a remote object implements.
When a stub's method is invoked, it does the following:
initiates a connection with the remote JVM containing the remote object,
marshals (writes and transmits) the parameters to the remote JVM,
waits for the result of the method invocation,
unmarshals (reads) the return value or exception returned, and
returns the value to the caller.
The skeleton is responsible for dispatching the call to the actual remote object implementation
When a skeleton receives an incoming method invocation it does the following:
unmarshals (reads) the parameters for the remote method,
invokes the method on the actual remote object implementation, and
marshals (writes and transmits) the result (return value or exception) to the caller.
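As a minimal sketch of where the stub fits, both the stub and the remote object implement the same remote interface (the Greeting interface and the registry name below are hypothetical):
import java.rmi.Remote;
import java.rmi.RemoteException;

// The remote interface: implemented by the remote object on the server
// and by the stub that the client receives from the registry.
interface Greeting extends Remote {
    String sayHello(String name) throws RemoteException;
}

// On the client side, the registry lookup returns the stub, and every call
// on it is marshalled to the remote JVM and dispatched there by the skeleton:
//   Greeting g = (Greeting) Naming.lookup("rmi://localhost/GreetingService");
//   System.out.println(g.sayHello("world"));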
CHECKBOX
The Checkbox class is used to create a checkbox. It is used to turn an option on (true) or off (false). Clicking on a Checkbox changes its state from "on" to "off" or from "off" to "on".
EXAMPLE:
import java.awt.*;
public class CheckboxExample
{
CheckboxExample(){
Frame f= new Frame("Checkbox Example");
Checkbox checkbox1 = new Checkbox("C++");
checkbox1.setBounds(100,100, 50,50);
Checkbox checkbox2 = new Checkbox("Java", true);
checkbox2.setBounds(100,150, 50,50);
f.add(checkbox1);
f.add(checkbox2);
f.setSize(400,400);
f.setLayout(null);
f.setVisible(true);
}
public static void main(String args[])
{
new CheckboxExample();
}
}
CHECKBOX GROUP
The object of CheckboxGroup class is used to group together a set of Checkbox. At a time only one check box button is allowed to be in "on" state and remaining check box button in "off" state. It inherits the object class.
EXAMPLE:
import java.awt.*;
public class CheckboxGroupExample
{
CheckboxGroupExample(){
Frame f= new Frame("CheckboxGroup Example");
CheckboxGroup cbg = new CheckboxGroup();
Checkbox checkBox1 = new Checkbox("C++", cbg, false);
checkBox1.setBounds(100,100, 50,50);
Checkbox checkBox2 = new Checkbox("Java", cbg, true);
checkBox2.setBounds(100,150, 50,50);
f.add(checkBox1);
f.add(checkBox2);
f.setSize(400,400);
f.setLayout(null);
f.setVisible(true);
}
public static void main(String args[])
{
new CheckboxGroupExample();
}
}
FileInputStream is used for reading streams of raw bytes of data, like raw images. FileReaders, on the other hand, are used for reading streams of characters.
The difference between FileInputStream and FileReader is, FileInputStream reads the file byte by byte and FileReader reads the file character by character.
So when you try to read a file which contains the character "Č", FileInputStream will give the result 196 140, the two bytes of the UTF-8 encoding of Č, whereas FileReader will give the result 268, which is the Unicode code point of the character Č.
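A short sketch of the two read loops (assuming a file test.txt that contains the single character Č and a platform default charset of UTF-8):
import java.io.FileInputStream;
import java.io.FileReader;

public class StreamVsReader {
    public static void main(String[] args) throws Exception {
        // Byte by byte: prints 196 and 140, the UTF-8 bytes of Č.
        try (FileInputStream fis = new FileInputStream("test.txt")) {
            int b;
            while ((b = fis.read()) != -1) {
                System.out.println(b);
            }
        }
        // Character by character: prints 268, the code point of Č.
        try (FileReader fr = new FileReader("test.txt")) {
            int c;
            while ((c = fr.read()) != -1) {
                System.out.println(c);
            }
        }
    }
}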
Thread.sleep causes the current thread to suspend execution for a specified period. This is an efficient means of making processor time available to the other threads of an application or other applications that might be running on a computer system. The sleep method can also be used for pacing and waiting for another thread with duties that are understood to have time requirements
EXAMPLE:
try
{
Thread.sleep(1000);
}
catch(InterruptedException ex)
{
Thread.currentThread().interrupt();
}
Here, the program will be paused for 1000 milliseconds.
Many methods of the Date class are deprecated because they do not handle internationalization of dates and times. The Calendar class should be used instead of the Date class; it allows date objects to be accessed in a system-independent manner.
Calendar cal = Calendar.getInstance();
cal.set(Calendar.YEAR, 1988);
cal.set(Calendar.MONTH, Calendar.JANUARY);
cal.set(Calendar.DAY_OF_MONTH, 1);
Date dateRepresentation = cal.getTime();
The current length of a StringBuffer can be found via the length() method. The total allocated capacity can be found through the capacity() method.
int capacity() Returns the current capacity.
int length() Returns the length (character count).
EXAMPLE:
public class Main {
public static void main(String[] argv) {
StringBuffer sb = new StringBuffer();
sb.append("abcdef.com");
System.out.println(sb.length());
System.out.println(sb.capacity());
}
}
OUTPUT:
10
16
setCharAt()
The java.lang.StringBuffer.setCharAt() method sets the character at the specified index to ch. This sequence is altered to represent a new character sequence that is identical to the old character sequence, except that it contains the character ch at position index.
insert()
This method inserts the data into a substring of this StringBuffer. We should specify the offset value (integer type) of the buffer, at which we need to insert the data. Using this method, data of various types like integer, character, string etc. can be inserted.
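A small sketch showing both methods in action:
public class StringBufferMethodsDemo {
    public static void main(String[] args) {
        StringBuffer sb = new StringBuffer("Jeva");
        sb.setCharAt(1, 'a');    // replaces the char at index 1 -> "Java"
        sb.insert(4, " rocks");  // inserts a string at offset 4 -> "Java rocks"
        System.out.println(sb);  // prints: Java rocks
    }
}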
A concrete class is used to define a useful object that can be instantiated as an automatic variable on the program stack. The implementation of a concrete class is fully defined. A concrete class is not intended to be a base class, and no attempt is made to minimize dependency on other classes in the implementation or behavior of the class.
The println("...") method prints the string "..." and moves the cursor to a new line. The print("...") method instead prints just the string "...", but does not move the cursor to a new line. Hence, subsequent printing instructions will print on the same line.
Example
print
for(int i = 0; i < 5; i++)
System.out.print(" " + i);
OUTPUT:
0 1 2 3 4
println
for(int i = 0; i < 5; i++)
System.out.println(" " + i);
OUTPUT:
0
1
2
3
4
A derived data type is a complex classification that identifies one or various data types and is made up of simpler data types called primitive data types. Derived data types have advanced properties and use far beyond those of the basic primitive data types that operate as their essential building blocks.
In a Java program, all characters are grouped into symbols called tokens.
A token is the smallest element of a program that is meaningful to the compiler.
EXAMPLE:
public class Hello
{
public static void main(String args[])
{
System.out.println("welcome in Java"); //print welcome in java
}
}
In the above example, the source code contains tokens such as public, class, Hello, {, public, static, void, main, (, String, [], args, {, System, out, println, (, "welcome in Java", }, }.
The resulting tokens are compiled into Java bytecodes that are capable of being run from within an interpreted Java environment. Tokens are useful for the compiler to detect errors. When tokens are not arranged in a particular sequence, the compiler generates an error message.
Bytecode is computer object code that is processed by a program, usually referred to as a virtual machine, rather than by the "real" computer machine, the hardware processor.
Rather than being interpreted one instruction at a time, Java bytecode can be recompiled at each particular system platform by a just-in-time compiler. Usually, this will enable the Java program to run faster. In Java, bytecode is contained in a binary file with a .CLASS suffix.
Class.forName() is used to create an instance of the driver and register it with the DriverManager. Once you have loaded a driver, it is available for making a connection with a DBMS.
A Statement object is used to represent SQL statement such DML statement or DDL statement. You simply create a Statement object and then execute it, supplying the appropriate execute() method with SQL statement you want to send.
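A minimal sketch tying the two together (the driver class name, connection URL, credentials and table are hypothetical):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // Loading the driver class registers it with the DriverManager.
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection con = DriverManager.getConnection(
                 "jdbc:mysql://localhost/testdb", "user", "password");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, name FROM customers")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        }
    }
}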
Type 4 is the fastest JDBC driver. Type 1 and Type 3 drivers will be slower than Type 2 drivers (their database calls go through at least three translations, in contrast to two), while Type 4 drivers require only one translation.
The Java URLConnection class represents a communication link between the URL and the application. This class can be used to read and write data to the specified resource referred by the URL.
The URLConnection class provides many methods, we can display all the data of a webpage by using the getInputStream() method. The getInputStream() method returns all the data of the specified URL in the stream that can be read and displayed.
EXAMPLE:
import java.io.*;
import java.net.*;
public class URLConnectionExample {
public static void main(String[] args){
try{
URL url=new URL("http://www.geeksforgeeks.org");
URLConnection urlcon=url.openConnection();
InputStream stream=urlcon.getInputStream();
int i;
while((i=stream.read())!=-1){
System.out.print((char)i);
}
}catch(Exception e){System.out.println(e);}
}
}
JTree is a Swing component with which we can display hierarchical data. JTree is quite a complex component. A JTree has a 'root node' which is the top-most parent for all nodes in the tree. A node is an item in a tree. A node can have many children nodes. These children nodes themselves can have further children nodes. If a node doesn't have any children node, it is called a leaf node.
The leaf node is displayed with a different visual indicator. The nodes with children are displayed with a different visual indicator along with a visual 'handle' which can be used to expand or collapse that node. Expanding a node displays the children and collapsing hides them.
package net.codejava.swing;
import javax.swing.JFrame;
import javax.swing.JTree;
import javax.swing.SwingUtilities;
import javax.swing.tree.DefaultMutableTreeNode;
public class TreeExample extends JFrame
{
private JTree tree;
public TreeExample()
{
//create the root node
DefaultMutableTreeNode root = new DefaultMutableTreeNode("Root");
//create the child nodes
DefaultMutableTreeNode vegetableNode = new DefaultMutableTreeNode("Vegetables");
DefaultMutableTreeNode fruitNode = new DefaultMutableTreeNode("Fruits");
//add the child nodes to the root node
root.add(vegetableNode);
root.add(fruitNode);
//create the tree by passing in the root node
tree = new JTree(root);
add(tree);
this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
this.setTitle("JTree Example");
this.pack();
this.setVisible(true);
}
public static void main(String[] args)
{
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
new TreeExample();
}
});
}
}
The output is a window showing the tree with root "Root" and its two children, "Vegetables" and "Fruits".
A JSplitPane has a splitter to split two components. The splitter bar can be displayed horizontally or vertically.
The JSplitPane class provides many constructors. We can create it using its default constructor and add two components using its setTopComponent(Component c), setBottomComponent(Component c), setLeftComponent(Component c) and setRightComponent(Component c) methods.
JSplitPane can redraw components in a continuous or non-continuous way when we change the position of the splitter bar.
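A minimal sketch (the component choices here are arbitrary):
import javax.swing.*;

public class SplitPaneExample {
    public static void main(String[] args) {
        JFrame f = new JFrame("JSplitPane Example");
        JSplitPane sp = new JSplitPane(JSplitPane.HORIZONTAL_SPLIT);
        sp.setLeftComponent(new JScrollPane(new JTree()));
        sp.setRightComponent(new JScrollPane(new JTextArea()));
        sp.setContinuousLayout(true); // redraw continuously while the divider moves
        f.add(sp);
        f.setSize(400, 300);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
    }
}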
The JProgressBar class is used to display the progress of the task. It inherits JComponent class.
EXAMPLE:
import javax.swing.*;
public class ProgressBarExample extends JFrame
{
JProgressBar jb;
int i=0,num=0;
ProgressBarExample()
{
jb=new JProgressBar(0,2000);
jb.setBounds(40,40,160,30);
jb.setValue(0);
jb.setStringPainted(true);
add(jb);
setSize(250,150);
setLayout(null);
}
public void iterate()
{
while(i<=2000)
{
jb.setValue(i);
i=i+20;
try{Thread.sleep(150);}catch(Exception e){}
}
}
public static void main(String[] args)
{
ProgressBarExample m=new ProgressBarExample();
m.setVisible(true);
m.iterate();
}
}
The JTabbedPane class is used to switch between a group of components by clicking on a tab with a given title or icon. It inherits JComponent class.
EXAMPLE:
import javax.swing.*;
public class TabbedPaneExample {
JFrame f;
TabbedPaneExample(){
f=new JFrame();
JTextArea ta=new JTextArea(200,200);
JPanel p1=new JPanel();
p1.add(ta);
JPanel p2=new JPanel();
JPanel p3=new JPanel();
JTabbedPane tp=new JTabbedPane();
tp.setBounds(50,50,200,200);
tp.add("main",p1);
tp.add("visit",p2);
tp.add("help",p3);
f.add(tp);
f.setSize(400,400);
f.setLayout(null);
f.setVisible(true);
}
public static void main(String[] args) {
new TabbedPaneExample();
}}
javax.swing.filechooser.FileFilter is used to restrict the files that are shown in a JFileChooser. By default, a file chooser shows all user files and directories in a file chooser dialog, with the exception of "hidden" files in Unix (those starting with a '.'). You may restrict the list that is shown by setting the file filter for a file chooser dialog.
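For example, using the FileNameExtensionFilter subclass (a minimal sketch):
import javax.swing.JFileChooser;
import javax.swing.filechooser.FileNameExtensionFilter;

public class FileFilterDemo {
    public static void main(String[] args) {
        JFileChooser chooser = new JFileChooser();
        // Show only JPEG and GIF images in the dialog.
        chooser.setFileFilter(new FileNameExtensionFilter("Images", "jpg", "gif"));
        if (chooser.showOpenDialog(null) == JFileChooser.APPROVE_OPTION) {
            System.out.println(chooser.getSelectedFile().getName());
        }
    }
}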
The Java JSlider class is used to create the slider. By using JSlider, a user can select a value from a specific range.
import javax.swing.*;
public class SliderExample1 extends JFrame
{
public SliderExample1()
{
JSlider slider = new JSlider(JSlider.HORIZONTAL, 0, 50, 25);
JPanel panel=new JPanel();
panel.add(slider);
add(panel);
}
public static void main(String s[])
{
SliderExample1 frame=new SliderExample1();
frame.pack();
frame.setVisible(true);
}
}
A spinner consists of a text field on the left side and two buttons with up and down arrows on the right side. If you press the up or down button, the item that displays in the input text will change in a given ordered sequence.
Example:
package jspinnerdemo;
import java.awt.*;
import java.util.*;
import javax.swing.*;
public class Main {
public static void main(String[] args) {
JFrame frame = new JFrame("JSpinner Demo");
// Spinner with number
SpinnerNumberModel snm = new SpinnerNumberModel(
new Integer(0),
new Integer(0),
new Integer(100),
new Integer(5)
);
JSpinner spnNumber = new JSpinner(snm);
// Spinner with Dates
SpinnerModel snd = new SpinnerDateModel(
new Date(),
null,
null,
Calendar.DAY_OF_MONTH
);
JSpinner spnDate = new JSpinner(snd);
// Spinner with List
String[] colors = {"Red","Green","Blue"};
SpinnerModel snl = new SpinnerListModel(colors);
JSpinner spnList = new JSpinner(snl);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setSize(600, 100);
Container cont = frame.getContentPane();
cont.setLayout(new FlowLayout());
cont.add(new JLabel("Select Number:"));
cont.add(spnNumber);
cont.add(new JLabel("Select Date:"));
cont.add(spnDate);
cont.add(new JLabel("Select Color:"));
cont.add(spnList);
frame.setVisible(true);
}
}
The java.net.Socket class represents the socket that both the client and the server use to communicate with each other.
Sockets provide the communication mechanism between two computers using TCP. A client program creates a socket on its end of the communication and attempts to connect that socket to a server.
The following steps occur when establishing a TCP connection between two computers using sockets −
The server instantiates a ServerSocket object, denoting which port number communication is to occur on.
The server invokes the accept() method of the ServerSocket class. This method waits until a client connects to the server on the given port.
After the server is waiting, a client instantiates a Socket object, specifying the server name and the port number to connect to.
The constructor of the Socket class attempts to connect the client to the specified server and the port number. If communication is established, the client now has a Socket object capable of communicating with the server.
On the server side, the accept() method returns a reference to a new socket on the server that is connected to the client's socket.
Example:
// File Name GreetingServer.java
import java.net.*;
import java.io.*;
public class GreetingServer extends Thread {
private ServerSocket serverSocket;
public GreetingServer(int port) throws IOException {
serverSocket = new ServerSocket(port);
serverSocket.setSoTimeout(10000);
}
public void run() {
while(true) {
try {
System.out.println("Waiting for client on port " +
serverSocket.getLocalPort() + "...");
Socket server = serverSocket.accept();
System.out.println("Just connected to " + server.getRemoteSocketAddress());
DataInputStream in = new DataInputStream(server.getInputStream());
System.out.println(in.readUTF());
DataOutputStream out = new DataOutputStream(server.getOutputStream());
out.writeUTF("Thank you for connecting to " + server.getLocalSocketAddress()
+ "\nGoodbye!");
server.close();
}catch(SocketTimeoutException s) {
System.out.println("Socket timed out!");
break;
}catch(IOException e) {
e.printStackTrace();
break;
}
}
}
public static void main(String [] args) {
int port = Integer.parseInt(args[0]);
try {
Thread t = new GreetingServer(port);
t.start();
}catch(IOException e) {
e.printStackTrace();
}
}
}
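The original client listing is not included here; a minimal GreetingClient consistent with the server above and the output shown below might look like this:
// File Name GreetingClient.java
import java.net.*;
import java.io.*;

public class GreetingClient {
    public static void main(String[] args) {
        String serverName = args[0];
        int port = Integer.parseInt(args[1]);
        try {
            System.out.println("Connecting to " + serverName + " on port " + port);
            Socket client = new Socket(serverName, port);
            System.out.println("Just connected to " + client.getRemoteSocketAddress());
            DataOutputStream out = new DataOutputStream(client.getOutputStream());
            out.writeUTF("Hello from " + client.getLocalSocketAddress());
            DataInputStream in = new DataInputStream(client.getInputStream());
            System.out.println("Server says " + in.readUTF());
            client.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}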
Compile the client and the server and then start the server as
$ java GreetingServer 6066
Waiting for client on port 6066...
Output:
$ java GreetingClient localhost 6066
Connecting to localhost on port 6066
Just connected to localhost/127.0.0.1:6066
Server says Thank you for connecting to /127.0.0.1:6066
Goodbye!
There are four types of sockets.
Stream Sockets − Delivery in a networked environment is guaranteed. If you send through the stream socket three items "A, B, C", they will arrive in the same order − "A, B, C". These sockets use TCP (Transmission Control Protocol) for data transmission. If delivery is impossible, the sender receives an error indicator. Data records do not have any boundaries.
Datagram Sockets − Delivery in a networked environment is not guaranteed. They're connectionless because you don't need to have an open connection as in Stream Sockets − you build a packet with the destination information and send it out. They use UDP (User Datagram Protocol).
Raw Sockets − These provide users access to the underlying communication protocols, which support socket abstractions. These sockets are normally datagram oriented, though their exact characteristics are dependent on the interface provided by the protocol. Raw sockets are not intended for the general user; they have been provided mainly for those interested in developing new communication protocols, or for gaining access to some of the more cryptic facilities of an existing protocol.
Sequenced Packet Sockets − They are similar to a stream socket, with the exception that record boundaries are preserved. This interface is provided only as a part of the Network Systems (NS) socket abstraction and is very important in most serious NS applications. Sequenced-packet sockets allow the user to manipulate the Sequence Packet Protocol (SPP) or Internet Datagram Protocol (IDP) headers on a packet or a group of packets, either by writing a prototype header along with whatever data is to be sent, or by specifying a default header to be used with all outgoing data, and allows the user to receive the headers on incoming packets.
A combination of an IP address and a port number is called a socket.
Sockets allow communication between two different processes on the same or different machines.
A Unix Socket is used in a client-server application framework. A server is a process that performs some functions on request from a client. Most of the application-level protocols like FTP, SMTP, and POP3 make use of sockets to establish a connection between client and server and then for exchanging data.
Types
Stream Sockets
Datagram Sockets
Raw Sockets
Sequenced Packet Sockets
JInternalFrame differs from JFrame in that it is a lightweight component and so must be contained inside another container, such as a JDesktopPane, JFrame or JApplet.
JInternalFrame Example:
import javax.swing.JInternalFrame;
import javax.swing.JDesktopPane;
import javax.swing.JMenu;
import javax.swing.JMenuItem;
import javax.swing.JMenuBar;
import javax.swing.JFrame;
import java.awt.event.*;
import java.awt.*;
public class JInternalFrameDemo extends JFrame {
JDesktopPane jdpDesktop;
static int openFrameCount = 0;
public JInternalFrameDemo() {
super("JInternalFrame Usage Demo");
// Make the main window positioned as 50 pixels from each edge of the
// screen.
int inset = 50;
Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
setBounds(inset, inset, screenSize.width - inset * 2,
screenSize.height - inset * 2);
// Add a Window Exit Listener
addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
// Create and Set up the GUI.
jdpDesktop = new JDesktopPane();
// A specialized layered pane to be used with JInternalFrames
createFrame(); // Create first window
setContentPane(jdpDesktop);
setJMenuBar(createMenuBar());
// Make dragging faster by setting drag mode to Outline
jdpDesktop.putClientProperty("JDesktopPane.dragMode", "outline");
}
protected JMenuBar createMenuBar() {
JMenuBar menuBar = new JMenuBar();
JMenu menu = new JMenu("Frame");
menu.setMnemonic(KeyEvent.VK_N);
JMenuItem menuItem = new JMenuItem("New IFrame");
menuItem.setMnemonic(KeyEvent.VK_N);
menuItem.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
createFrame();
}
});
menu.add(menuItem);
menuBar.add(menu);
return menuBar;
}
protected void createFrame() {
MyInternalFrame frame = new MyInternalFrame();
frame.setVisible(true);
// Every JInternalFrame must be added to content pane using JDesktopPane
jdpDesktop.add(frame);
try {
frame.setSelected(true);
} catch (java.beans.PropertyVetoException e) {
}
}
public static void main(String[] args) {
JInternalFrameDemo frame = new JInternalFrameDemo();
frame.setVisible(true);
}
class MyInternalFrame extends JInternalFrame {
static final int xPosition = 30, yPosition = 30;
public MyInternalFrame() {
super("IFrame #" + (++openFrameCount), true, // resizable
true, // closable
true, // maximizable
true);// iconifiable
setSize(300, 300);
// Set the window's location.
setLocation(xPosition * openFrameCount, yPosition
* openFrameCount);
}
}
}
JFrame Example
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
public class JFrameDemo {
public static void main(String s[]) {
JFrame frame = new JFrame("JFrame Source Demo");
// Add a window listner for close button
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
// This is an empty content area in the frame
JLabel jlbempty = new JLabel("");
jlbempty.setPreferredSize(new Dimension(175, 100));
frame.getContentPane().add(jlbempty, BorderLayout.CENTER);
frame.pack();
frame.setVisible(true);
}
}
PLAF stands for Pluggable Look And Feel; it allows a Swing application to change its entire appearance with one or two lines of code. The most common use of this feature is to give applications a choice between the native platform look-and-feel and a new platform-independent Java look-and-feel (also known as the Metal look-and-feel).
Modal dialog boxes force the user to acknowledge the dialog before moving on to the application. Modeless dialog boxes enable the user to interact with the dialog and the application interchangeably.
A modal dialog box doesn’t allow the user to access the parent window while the dialog is open – it must be dealt with and closed before continuing. A modeless dialog can be open in the background.
An example of a modal dialog is the Save / Save As dialog in MS Word; while it is open you can't do anything in the application until you close that window. An example of a modeless dialog is the Find/Replace dialog; you can use the Find dialog and at the same time work in the Word application.
Using the following statements we can determine the dimensions of our applet.
Dimension dim = getSize();
int appletHeight = dim.height;
int appletWidth = dim.width;
The first statement uses the getSize() method to return the size of the applet as a Dimension object. The Applet class inherits it from the Component class in the java.awt package. The next two statements extract the separate width and height fields.
The paint() method supports painting via a Graphics object. This method holds the instructions to paint the component. In Swing, you should actually override paintComponent() instead of paint(), as paint() calls paintBorder(), paintComponent() and paintChildren(). You shouldn't call this method directly; you should call repaint() instead.
The repaint() method is used to cause paint() to be invoked by the AWT painting thread. This method can't be overridden. It controls the update() -> paint() cycle. You should call this method to get a component to repaint itself. If you have done anything to change the look of the component, but not its size (like changing color, animating, etc.), then call this method.
The immediate superclass of Applet is Panel.
Panel provides the following things:
1) Panels allow us to format the screen. Panels must have a specific layout. If a layout is not specified, the default will be a FlowLayout.
2) FlowLayout adds components to the screen one after another from top to bottom and from left to right. Components are rearranged when the user resizes the window. FlowLayout may take no arguments.
Constructor: FlowLayout fl = new FlowLayout( );
3) BorderLayout divides the screen into five sections based on geographic orientation: "North", "South", "East", "West" and "Center". BorderLayout takes no arguments.
Constructor: BorderLayout bl = new BorderLayout( );
4) GridLayout divides the screen in the number of sections specified by the programmer. GridLayout takes two arguments (#rows, #cols).
Constructor: GridLayout gl = new GridLayout(rows, cols ); //rows and cols are int numbers
We use codebase in applet whenever the applet class file is not in the same directory.
codebase = codebaseURL
This optional attribute specifies the base URL of the applet: the directory that contains the applet's code. If this attribute is not specified, then the document's URL is used.
HTML code:
<object type="application/x-java-applet" code="HelloWorld.class"
codebase="/external/examples/common/java/" width="200px" height="50px">
</object>
Whenever a screen needs redrawing, the update() method is called. By default, the update() method clears the screen and then calls the paint() method, which normally contains all the drawing code.
Example
import java.awt.*;
import java.applet.Applet;
import java.awt.event.*;
/*<applet code="UpdateExample.class" width="350" height="150"> </applet>*/
public class UpdateExample extends Applet implements MouseListener
{
private int mouseX, mouseY;
private boolean mouseclicked = false;
public void init()
{
setBackground(Color.black);
addMouseListener(this);
}
public void mouseClicked(MouseEvent e)
{
mouseX=e.getX();
mouseY=e.getY();
mouseclicked = true;
repaint();
}
public void mouseEntered(MouseEvent e){};
public void mousePressed(MouseEvent e){};
public void mouseReleased(MouseEvent e){};
public void mouseExited(MouseEvent e){};
public void update(Graphics g)
{
paint(g);
}
public void paint( Graphics g)
{
String str;
g.setColor(Color.white);
if (mouseclicked)
{
str = "X="+ mouseX + "," + "Y=" + mouseY;
g.drawString(str,mouseX,mouseY);
mouseclicked = false;
}
}
}
Yes, using the <param> tag as follows,
<param name = "param1" value = "value1">
<param name = "param2" value = "value2">
One can access these parameters inside the applet by calling getParameter() method inside the applet.
HTML File
<HTML>
<HEAD>
<TITLE>Java applet example - Passing applet parameters to Java applets</TITLE>
</HEAD>
<BODY>
<APPLET CODE="AppletParameterTest.class" WIDTH="400" HEIGHT="50">
<PARAM NAME="font" VALUE="Dialog">
<PARAM NAME="size" VALUE="24">
<PARAM NAME="string" VALUE="Hello, world ... it's me. :)">
</APPLET>
</BODY>
</HTML>
Applet
import java.applet.*;
import java.awt.*;
/**
* A Java applet parameter test class.
* Demonstrates how to read applet parameters.
*/
public class AppletParameterTest extends Applet {
public void paint(Graphics g) {
String myFont = getParameter("font");
String myString = getParameter("string");
int mySize = Integer.parseInt(getParameter("size"));
Font f = new Font(myFont, Font.BOLD, mySize);
g.setFont(f);
g.setColor(Color.red);
g.drawString(myString, 20, 20);
}
}
When an applet begin,
init() -> start() ->paint()
The init() and start() methods are invoked first.
That, in turn, creates a thread and starts that thread, which causes this class's run() method to be invoked.
The paint() method is invoked by Swing independently in the GUI event handling thread if Swing detects that the applet needs to be redrawn.
When an applet is terminated,
stop() -> destroy()
We don't have the concept of constructor in applets. Applets can be invoked either through browser or through Appletviewer utility provided by JDK.
Applets are not given explicit constructors; you can define one, but there is actually no need to, because an applet is initialized using init().
FlowLayout - Top to bottom, left to right.
BorderLayout - At the borders (North, South, East, West) and at the center of a container.
CardLayout - Elements are stacked on top of each other.
GridLayout - Elements are of equal size and are laid out in the squares of a grid.
GridBagLayout - Elements organized according to grid. The elements are of different sizes and may occupy more than one row or column of grid.
Double buffering is the process of use of two buffers rather than one to temporarily hold data being moved to and from I/O device.
The resulting image is smoother, less flicker and quicker than drawing on the screen. It also helps prevent bottlenecks.
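A sketch of manual double buffering in an AWT component (assuming the component is displayed, so its width and height are non-zero):
import java.awt.*;
import java.awt.image.BufferedImage;

class DoubleBufferedCanvas extends Canvas {
    @Override
    public void paint(Graphics g) {
        // Draw everything into an off-screen buffer first...
        BufferedImage buffer =
            new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D off = buffer.createGraphics();
        off.setColor(Color.white);
        off.fillRect(0, 0, getWidth(), getHeight());
        off.setColor(Color.blue);
        off.fillOval(20, 20, 60, 60);
        off.dispose();
        // ...then copy it to the screen in a single step to avoid flicker.
        g.drawImage(buffer, 0, 0, null);
    }
}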
AWT components depend upon native code counterparts(called peers) to handle their functionality(drawing and rendering). This extra 'baggage' makes them heavy weight components.
The Font class is used to render glyphs, the characters you see on the screen.
FontMetrics class encapsulates information about a specific font on a specific graphics object.
1. The File class encapsulates the files and directories of the local file system. The RandomAccessFile class provides the methods needed to directly access data contained in any part of a file.
2. The java.io.RandomAccessFile class implements a random access file.
3. A random access file offers a seek feature that can go directly to a particular position.
4. Unlike the input and output stream classes in java.io, RandomAccessFile is used for both reading and writing files.
5. RandomAccessFile does not inherit from InputStream or OutputStream. It implements the DataInput and DataOutput interfaces.
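A small sketch of the seek feature (the file name is arbitrary):
import java.io.RandomAccessFile;

public class RandomAccessDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("data.bin", "rw")) {
            raf.writeInt(100);                 // occupies bytes 0-3
            raf.writeInt(200);                 // occupies bytes 4-7
            raf.seek(4);                       // jump directly to the second int
            System.out.println(raf.readInt()); // prints 200
        }
    }
}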
When a task invokes yield() method, it returns to the ready state either from running, waiting or after its creation. When a task invokes sleep() method it returns to the waiting state from a running state.
Yielding
1. Yield will cause the thread to rejoin the ready queue.
2. When a task invokes yield(), it returns to the ready state.
3. It is used to move the running thread out of the runnable state while keeping the same priority.
Sleeping
1. Sleep holds the thread's execution for the specified time.
2. When a task invokes sleep(), it returns to the waiting state.
3. It is used to delay the execution for a period of time.
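A tiny sketch contrasting the two (note that yield() is only a hint to the scheduler, so the exact interleaving is not guaranteed):
public class YieldVsSleep {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                System.out.println("worker: " + i);
                Thread.yield(); // returns to the ready state; other threads may run
            }
        });
        worker.start();
        Thread.sleep(100);      // main enters the waiting state for 100 ms
        System.out.println("main done");
    }
}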
No.
There are two ways to create a thread:
extends Thread class
implement Runnable interface
Even when implementing Runnable to create a thread, we have to create an instance of the Thread class and pass the instance of the class implementing Runnable as the argument to the Thread constructor.
Using extends:
public class MyThread extends Thread{
public void run()
{
System.out.println("Thread started running..");
}
public static void main( String args[] )
{
MyThread mt = new MyThread();
mt.start();
}
}
OUTPUT:
Thread started running..
Using Runnable:
public class MyThread implements Runnable {
public void run() {
System.out.println("Thread started running..");
}
public static void main(String args[]) {
MyThread mt = new MyThread();
Thread t = new Thread(mt);
t.start();
}
}
OUTPUT:
Thread started running..
Threaded programming is normally used when a program is required to do more than one task at the same time. Threading is generally used in applications with graphical user interfaces where a new thread may be created to do some work relating to processing while the main thread keeps the interface responsive to human interaction.
Both start() and run() provide ways to create threaded programs. The start() method starts the execution of the new thread and calls the run() method. The start() method returns immediately, while the new thread normally continues until the run() method returns.
Here is a simple code example which prints the name of the Thread that executes the run() method of a Runnable task. It is clear that if you call the start() method, a new Thread executes the Runnable task, while if you call the run() method directly, the current thread, which is main in this case, will execute the task.
public class StartVsRunCall{
public static void main(String args[]) {
//creating two threads for start and run method call
Thread startThread = new Thread(new Task("start"));
Thread runThread = new Thread(new Task("run"));
startThread.start(); //calling start method of Thread - will execute in new Thread
runThread.run(); //calling run method of Thread - will execute in current Thread
}
/*
* Simple Runnable implementation
*/
private static class Task implements Runnable{
private String caller;
public Task(String caller){
this.caller = caller;
}
@Override
public void run() {
System.out.println("Caller: "+ caller + " and code on this Thread is executed by : " + Thread.currentThread().getName());
}
}
}
Output:
Caller: start and code on this Thread is executed by: Thread-0
Caller: run and code on this Thread is executed by: main
In summary, the only difference between the start() and run() methods in Thread is that start() creates a new thread, while run() doesn't create any thread and simply executes in the current thread like a normal method call.
java.lang.Throwable
In Java, exceptions are objects. When you throw an exception, you throw an object. You can't throw just any object as an exception, however; only objects whose classes descend from Throwable. Throwable serves as the base class for an entire family of classes, declared in java.lang, that your program can instantiate and throw.
In overloading, the compiler picks an overloaded method when translating the program, before the program ever runs. This method selection is known as static or early binding. In polymorphism, however, the compiler does not make any decision when translating the method call. The program has to run before anyone can know what is stored in the object reference variable. Therefore the JVM, and not the compiler, selects the appropriate method. This method selection is known as late binding.
Assigning an object to another object does not create a duplicate object. It simply assigns a reference of already existing object to a new object.
The clone() method when used creates a new object with separate memory space.
For example: aObj = bObj.clone();
This statement copies on object bObj to new memory location and assign the reference of new object to aObj.
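A short sketch contrasting assignment with clone() (the Point class here is hypothetical):
class Point implements Cloneable {
    int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public Object clone() throws CloneNotSupportedException {
        return super.clone(); // field-by-field copy into a new memory location
    }
}

public class CloneDemo {
    public static void main(String[] args) throws CloneNotSupportedException {
        Point bObj = new Point(1, 2);
        Point assigned = bObj;             // same object, two references
        Point aObj = (Point) bObj.clone(); // a distinct copy
        bObj.x = 99;
        System.out.println(assigned.x);    // 99 - follows the original
        System.out.println(aObj.x);        // 1  - the copy is unaffected
    }
}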
Within an inner class, the keyword this holds a reference to the current object, but if the inner class needs to access the current outer class object, then precede the keyword this with the outer class name.
Using this in inner class:
class Outer
{
private int a = 10;
class Inner
{
private int a = 20;
public void myMethod()
{
System.out.println(this.a);
}
}
}
OUTPUT:
20
Using this in outerclass:
class Outer
{
private int a = 10;
class Inner
{
private int a = 20;
public void myMethod()
{
System.out.println(Outer.this.a);
}
}
}
OUTPUT:
10
The instanceof keyword is a two-argument operator that tests whether the runtime type of its first argument is compatible with its second argument. The compatibility of the two arguments is checked at compile time, and the test itself is performed at runtime.
The instanceof in java is also known as type comparison operator because it compares the instance with type. It returns either true or false. If we apply the instanceof operator with any variable that has null value, it returns false.
Example:
class Simple1
{
public static void main(String args[])
{
Simple1 s = new Simple1();
System.out.println(s instanceof Simple1);
}
}
OUTPUT:
true
The class is instantiated and declared in the same place. The declaration and instantiation take the form
new Xxx()
{ //body }
Here, Xxx is an interface name. An anonymous class cannot have a constructor, because you do not specify a name for the class and so cannot use that name to specify a constructor.
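For example, an anonymous class implementing the Runnable interface:
public class AnonymousDemo {
    public static void main(String[] args) {
        // Declared and instantiated in one place; the class itself has no name.
        Runnable task = new Runnable() {
            @Override
            public void run() {
                System.out.println("running from an anonymous class");
            }
        };
        task.run();
    }
}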
Constructing an instance of a class invokes the constructors of all the superclasses along the inheritance chain. A superclass's constructor is called before the subclass's constructor. This is called constructor chaining.
package com.myjava.constructors;
public class MyChaining {
public MyChaining(){
System.out.println("In default constructor...");
}
public MyChaining(int i){
this();
System.out.println("In single parameter constructor...");
}
public MyChaining(int i, int j){
this(j);
System.out.println("In double parameter constructor...");
}
public static void main(String a[]){
MyChaining ch = new MyChaining(10, 20);
}
}
OUTPUT:
In default constructor...
In single parameter constructor...
In double parameter constructor...
this() can be used to invoke a constructor of the same class, whereas super() can be used to invoke a superclass constructor.
OR
super is used to access methods of the base class while this is used to access methods of the current class.
There are some classes that cannot be extended (i.e., subclassed):
A non-public class can only be subclassed by classes in the same package as the class, but not by classes in a different package.
A final class cannot be subclassed.
A class that has only private constructors cannot be subclassed.
If the class has private members, then a regular inner class can access them.
Yes, it is possible by using super keyword. For example, consider the following code statement.
public void play()
{
super.play();
//my own play() method code
}
Thus, the first statement in the body calls inherited version of play() and then it comes back to the subclass's specific code.
The string value is represented using a private array variable. The array cannot be accessed outside the String class. The String class provides many public methods (such as length(), charAt()) to retrieve array information. If the array were not private, the user would be able to change the string content by modifying the array. This would violate the rule that the String class is immutable.
Float One = new Float(3.7);
Float Two = new Float(5.2);
Float Sum = new Float(One.floatValue() +Two.floatValue());
Here, floatValue() method is used. The Float wrapper class does not support floating point arithmetic. So it is necessary to convert it to float primitive type before performing arithmetic operations.
Locale class is used to tailor a program output to the conventions of a particular geographic, political or cultural region.
An operation that requires a Locale to perform its task is called locale-sensitive and uses the Locale to tailor information for the user.
Locale is a mechanism for identifying objects, not a container for the objects themselves.
A locale consists of a language and a country. Class Locale, in package java.util contains information about 140 locales.
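A small sketch of a locale-sensitive operation, formatting the same amount for two different locales:
import java.text.NumberFormat;
import java.util.Locale;

public class LocaleDemo {
    public static void main(String[] args) {
        double amount = 1234.56;
        // Currency formatting is locale-sensitive.
        System.out.println(NumberFormat.getCurrencyInstance(Locale.US).format(amount));
        System.out.println(NumberFormat.getCurrencyInstance(Locale.GERMANY).format(amount));
    }
}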
The length() method is used to get the number of characters in a String or StringBuffer, whereas the length field is used only with arrays to get their length.
Here is an example for better understanding.
public class length
{
public static void main(String args[])
{
String x = "test";
int a[] = {1, 2, 3, 4};
System.out.println(x.length());
System.out.println(a.length);
}
}
OUTPUT
4
4
Not directly, although Java provides wrapper classes that wrap the primitive types in objects. These are Integer, Double, Byte, Short, Float, Long, Boolean and Character. In addition to allowing a primitive type to be passed by reference, the wrapper classes define several methods that enable you to manipulate their values.
Although the finalize() method approximates the function of a destructor, it is not the same.
A C++ destructor is always called just before an object goes out of scope, but you can't know when finalize() will be called for a specific object.
In C++,
Every object is destroyed when it goes out of scope. Thus, if you declare a local object inside a function, when that function returns, that local object is automatically destroyed. The same goes for function parameters and for objects returned by functions.
Just before destruction, the object's destructor is called. This happens immediately, and before any other program statements will execute. Thus, a C++ destructor will always execute in a deterministic fashion. You can always know when and where a destructor will be executed.
In Java, objects are not explicitly destroyed when they go out of scope. Rather, an object is marked as unused when there are no longer any references pointing to it. Even then, the finalize() method will not be called until the garbage collector runs. Thus, you cannot know precisely when or where a call to finalize( ) will occur. Even if you execute a call to gc( ) (the garbage collector), there is no guarantee that finalize( ) will immediately be executed.
No, arithmetic operations cannot be performed on a reference variable because the reference variable is an alias of another variable.
We use an if/else-if ladder when the conditions controlling the selection process involve multiple variables.
For example,
if (p<0) //........
else if (q>10.7) //.........
else if (!finish) //........
This sequence cannot be re-coded with switch statement because all conditions involve different variables and different types.
Java is a strongly typed language. This implies that all operations are type-checked by the compiler for type compatibility. Illegal operations will not be compiled. Therefore, strong type checking prevents errors and enhances reliability.
Primitive types are the data types that are defined by the language itself. In contrast, reference types are types that are defined by classes in the Java API rather than by language itself.
Moreover, memory location associated with primitive type contains the actual value of the variable. In contrast, memory location associated with reference variable contains an address that indicates the memory location of the actual object.
Java does not have unsigned ints, but one can convert an int to an unsigned representation by using the following convention.
((long) i) & 0x00000000FFFFFFFFL;
Here, i is a variable of int type that you want to convert to unsigned int.
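A quick sketch of the conversion:
public class UnsignedDemo {
    public static void main(String[] args) {
        int i = -1;
        long unsigned = ((long) i) & 0x00000000FFFFFFFFL;
        System.out.println(unsigned); // prints 4294967295
    }
}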
Yes. In Java, identifiers can be at maximum 65535 character length. Although there is no restriction placed in principle but Java source code is compiled into Java class files and the specification for class files does in effect, place an upper bound on the size of identifiers.
The precedence of operators refers to the order in which operators are evaluated within an expression whereas associativity refers to the order in which the consecutive operators within the same group are carried out.
Precedence rules specify the priority of operators (which operators will be evaluated first, e.g. multiplication has higher precedence than addition, PEMDAS). The associativity rules tell how the operators of the same precedence are grouped
Yes, an application can have multiple classes containing a main method, but while starting the application you must mention the name of the class which is to be executed. The JVM will look for the main() method only in the class you have mentioned.
String[] args is the only parameter in the main method. It declares a parameter named args which contains an array of objects of the class type String. In other words, if you run your program as java MyProgram one two, then args will contain ["one", "two"].
To find the second highest salary:
select max(salary) from employees where salary < (select max(salary) from employees)
Weekly
Monthly
Overall
Error
Content Related Issue
Sofware Related Issue | [
{
"code": null,
"e": 1075,
"s": 789,
"text": " Selection of a victim. Given a set of deadlocked transactions, we must determine which transaction (or transactions) to roll back to break the deadlock. We should roll back those transactions that will incur the minimum cost. Unfortunately, the term minimum cost is not a precise one."
},
{
"code": null,
"e": 1122,
"s": 1075,
"text": "Factors which determine the cost of a rollback"
},
{
"code": null,
"e": 1251,
"s": 1122,
"text": "How long the transaction has computed, and how much longer the transaction will compute before it completes its designated task."
},
{
"code": null,
"e": 1297,
"s": 1251,
"text": "How many data items the transaction has used."
},
{
"code": null,
"e": 1364,
"s": 1297,
"text": "How many more data items the transaction needs for it to complete."
},
{
"code": null,
"e": 1421,
"s": 1364,
"text": "How many transactions will be involved in the rollback. "
},
{
"code": null,
"e": 1553,
"s": 1421,
"text": "Cursor stability is a form of degree-two consistency designed for programs that iterate over tuples of a relation by using cursors."
},
{
"code": null,
"e": 1624,
"s": 1553,
"text": "Instead of locking the entire relation, cursor stability ensures that:"
},
{
"code": null,
"e": 1711,
"s": 1624,
"text": "The tuple that is currently being processed by the iteration is locked in shared mode."
},
{
"code": null,
"e": 1791,
"s": 1711,
"text": "Any modified tuples are locked in exclusive mode until the transaction commits."
},
{
"code": null,
"e": 2081,
"s": 1791,
"text": "Cursor stability is used in practice on heavily accessed relations as a means of increasing concurrency and improving system performance. Applications that use cursor stability must be coded in a way that ensures database consistency despite the possibility of non-serializable schedules. "
},
{
"code": null,
"e": 2134,
"s": 2081,
"text": "The timestamp ordering protocol operates as follows:"
},
{
"code": null,
"e": 2179,
"s": 2134,
"text": " Suppose that transaction Ti issues read(Q)."
},
{
"code": null,
"e": 2472,
"s": 2179,
"text": "\nIf TS(T i) < W-timestamp(Q), then Ti needs to read a value of Q that was already overwritten. Hence, the read operation is rejected, and Ti is rolled back.\nIf TS(T i)≥W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is set to the maximum of R-timestamp(Q) and TS(Ti).\n"
},
{
"code": null,
"e": 2628,
"s": 2472,
"text": "If TS(T i) < W-timestamp(Q), then Ti needs to read a value of Q that was already overwritten. Hence, the read operation is rejected, and Ti is rolled back."
},
{
"code": null,
"e": 2763,
"s": 2628,
"text": "If TS(T i)≥W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is set to the maximum of R-timestamp(Q) and TS(Ti)."
},
{
"code": null,
"e": 2809,
"s": 2763,
"text": " Suppose that transaction Ti issues write(Q)."
},
{
"code": null,
"e": 3271,
"s": 2809,
"text": "\nIf TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing was needed previously, and the system assumed that that value would never be produced. Hence, the system rejects the write operation and rolls Ti back.\nIf TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q. Hence, the system rejects this write operation and rolls Ti back.\nOtherwise, the system executes the write operation and sets W-timestamp(Q) to TS(Ti).\n\t \n"
},
{
"code": null,
"e": 3492,
"s": 3271,
"text": "If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing was needed previously, and the system assumed that that value would never be produced. Hence, the system rejects the write operation and rolls Ti back."
},
{
"code": null,
"e": 3641,
"s": 3492,
"text": "If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q. Hence, the system rejects this write operation and rolls Ti back."
},
{
"code": null,
"e": 3731,
"s": 3641,
"text": "Otherwise, the system executes the write operation and sets W-timestamp(Q) to TS(Ti).\n\t "
},
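The read and write rules above can be condensed into a short sketch (a minimal illustration; the class and field names are mine, not from any particular system):

import java.util.HashMap;
import java.util.Map;

// Sketch of the timestamp-ordering checks described above.
class TimestampOrdering {
    static class Item { long rTs, wTs; }           // R- and W-timestamps of a data item
    private final Map<String, Item> items = new HashMap<>();

    private Item item(String q) { return items.computeIfAbsent(q, k -> new Item()); }

    // Returns false when the transaction must be rolled back.
    boolean read(long tsTi, String q) {
        Item x = item(q);
        if (tsTi < x.wTs) return false;            // value already overwritten: reject
        x.rTs = Math.max(x.rTs, tsTi);             // execute read, update R-timestamp
        return true;
    }

    boolean write(long tsTi, String q) {
        Item x = item(q);
        if (tsTi < x.rTs) return false;            // a later reader needed this value: reject
        if (tsTi < x.wTs) return false;            // obsolete write: reject
        x.wTs = tsTi;                              // execute write, update W-timestamp
        return true;
    }
}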
{
"code": null,
"e": 3789,
"s": 3731,
"text": "There are two deadlock prevention schemes using timestamp"
},
{
"code": null,
"e": 4045,
"s": 3789,
"text": " The wait–die scheme is a non-preemptive technique. When transaction Ti requests a data item currently held by Tj, Ti is allowed to wait only if it has a timestamp smaller than that of Tj (that is, Ti is older than Tj). Otherwise, Ti is rolled back (dies)"
},
{
"code": null,
"e": 4359,
"s": 4045,
"text": " The wound–wait scheme is a preemptive technique. It is a counterpart to the wait–die scheme. When transaction Ti requests a data item currently held by Tj, Ti is allowed to wait only if it has a timestamp larger than that of Tj (that is, Ti is younger than Tj).Otherwise, Tj is rolled back(Tj is wounded by Ti). "
},
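Both schemes reduce to a one-line comparison of timestamps, where a smaller timestamp means an older transaction. A hedged sketch (the names are illustrative):

// Decide what happens when Ti requests an item currently held by Tj.
enum Action { WAIT, ROLLBACK_REQUESTER, ROLLBACK_HOLDER }

class DeadlockPrevention {
    // Wait-die (non-preemptive): older requesters wait, younger ones die.
    static Action waitDie(long tsTi, long tsTj) {
        return tsTi < tsTj ? Action.WAIT : Action.ROLLBACK_REQUESTER;
    }

    // Wound-wait (preemptive): older requesters wound the holder, younger ones wait.
    static Action woundWait(long tsTi, long tsTj) {
        return tsTi < tsTj ? Action.ROLLBACK_HOLDER : Action.WAIT;
    }
}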
{
"code": null,
"e": 4540,
"s": 4359,
"text": "A phantom read occurs when, in the course of a transaction, two identical queries are executed, and the collection of rows returned by the second query is different from the first."
},
{
"code": null,
"e": 4633,
"s": 4540,
"text": "This can occur when range locks are not acquired on performing a SELECT.... WHERE operation."
},
{
"code": null,
"e": 4740,
"s": 4633,
"text": "In other words, data getting changed in current transaction by other transactions is called Phantom Reads."
},
{
"code": null,
"e": 5104,
"s": 4740,
"text": "Let us consider a schedule S in which there are two consecutive instructions, I and J, of transactions Ti and Tj, respectively (i != j).If I and J refer to different data items, then we can swap I and J without affecting the results of any instruction in the schedule. However, if I and J refer to the same data item Q, then the order of the two steps may matter."
},
{
"code": null,
"e": 5193,
"s": 5104,
"text": "There are four cases we need to consider (since we are dealing with read and write only)"
},
{
"code": null,
"e": 5331,
"s": 5193,
"text": " I = read(Q), J = read(Q). The order of I and J does not matter, since the same value of Q is read by Ti and Tj, regardless of the order."
},
{
"code": null,
"e": 5564,
"s": 5331,
"text": " I =read(Q), J =write(Q). If I comes before J, then Ti does not read the value of Q that is written by Tj in instruction J. If J comes before I, then Ti reads the value of Q that is written by Tj. Thus, the order of I and J matters."
},
{
"code": null,
"e": 5672,
"s": 5564,
"text": " I = write(Q), J = read(Q). The order of I and J matters for reasons similar to those of the previous case."
},
{
"code": null,
"e": 6169,
"s": 5672,
"text": " I = write(Q), J = write(Q). Since both instructions are write operations, the order of these instructions does not affect either Ti or Tj.However, the value obtained by the next read(Q) instruction of S is affected, since the result of only the latter of the two write instructions is preserved in the database. If there is no other write(Q) instruction after I and J in S, then the order of I and J directly affects the final value of Q in the database state that results from schedule S.\n\t \n\t "
},
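The four cases reduce to a single test: two operations conflict exactly when they refer to the same data item and at least one of them is a write. A minimal sketch of that test (the record shape is my own):

// Conflict test for two schedule operations.
record Op(String tx, String item, boolean isWrite) {}

class ConflictCheck {
    // I and J may be swapped without changing the outcome
    // unless they conflict: same data item and at least one write.
    static boolean conflict(Op i, Op j) {
        return i.item().equals(j.item()) && (i.isWrite() || j.isWrite());
    }
}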
{
"code": null,
"e": 6216,
"s": 6169,
"text": "Concurrency gives the following two advantages"
},
{
"code": null,
"e": 7116,
"s": 6216,
"text": " Improved throughput and resource utilization. A transaction consists of many steps. Some involve I/O activity; others involve CPU activity. The CPU and the disks in a computer system can operate in parallel. Therefore, I/O activity can be done in parallel with processing at the CPU. The parallelism of the CPU and the I/O system can, therefore, be exploited to run multiple transactions in parallel. While a read or write on behalf of one transaction is in progress on one disk, another transaction can be running in the CPU, while another disk may be executing a read or write on behalf of a third transaction. All of this increases the throughput of the system—that is, the number of transactions executed in a given amount of time. Correspondingly, the processor and disk utilization also increase, in other words, the processor and disk spend less time idle, or not performing any useful work."
},
{
"code": null,
"e": 7779,
"s": 7116,
"text": "Reduced waiting time. There may be a mix of transactions running on a system, some short and some long. If transactions run serially, a short transaction may have to wait for a preceding long transaction to complete, which can lead to unpredictable delays in running a transaction. If the transactions are operating on different parts of the database, it is better to let them run concurrently, sharing the CPU cycles and disk accesses among them. Concurrent execution reduces the unpredictable delays in running transactions. Moreover, it also reduces the average response time: the average time for a transaction to be completed after it has been submitted.\n\t "
},
{
"code": null,
"e": 7816,
"s": 7779,
"text": "Data in a database can be stored in:"
},
{
"code": null,
"e": 8157,
"s": 7816,
"text": "Volatile storage. Information residing in volatile storage does not usually survive system crashes. Examples of such storage are main memory and cache memory. Access to volatile storage is extremely fast, both because of the speed of the memory access itself, and because it is possible to access any data item in volatile storage directly."
},
{
"code": null,
"e": 8478,
"s": 8157,
"text": "Nonvolatile storage. Information residing in nonvolatile storage survives system crashes. Examples of nonvolatile storage include secondary storage devices such as magnetic disk and flash storage, used for online storage, and tertiary storage devices such as optical media, and magnetic tapes, used for archival storage "
},
{
"code": null,
"e": 9143,
"s": 8478,
"text": "Stable storage. Information residing in stable storage is never lost (theoretically never cannot be guaranteed—for example, it is possible, although extremely unlikely, that a black hole may envelop the earth and permanently destroy all data!). Although stable storage is theoretically impossible to obtain, it can be closely approximated by techniques that make data loss extremely unlikely. To implement stable storage, we replicate the information in several nonvolatile storage media (usually disk) with independent failure modes. Updates must be done with care to ensure that a failure during an update to stable storage does not cause a loss of information. "
},
{
"code": null,
"e": 9185,
"s": 9143,
"text": "Operations performed on a transaction are"
},
{
"code": null,
"e": 9364,
"s": 9185,
"text": "read(X), which transfers the data item X from the database to a variable, also called X, in a buffer in main memory belonging to the transaction that executed the read operation."
},
{
"code": null,
"e": 9524,
"s": 9364,
"text": "write(X), which transfers the value in the variable X in the main-memory buffer of the transaction that executed the write to the data item X in the database. "
},
{
"code": null,
"e": 9685,
"s": 9524,
"text": "Queries involving a natural join may be processed in several ways, depending on the availability of indices and the form of physical storage for the relations. "
},
{
"code": null,
"e": 9825,
"s": 9685,
"text": "If the join result is almost as large as the Cartesian product of the two relations, a block nested-loop join strategy may be advantageous."
},
{
"code": null,
"e": 9893,
"s": 9825,
"text": "If indices are available, the indexed nested-loop join can be used."
},
{
"code": null,
"e": 10074,
"s": 9893,
"text": " If the relations are sorted, a merge join may be desirable.It may be advantageous to sort a relation prior to join computation (so as to allow the use of the merge-join strategy)."
},
{
"code": null,
"e": 10354,
"s": 10074,
"text": " The hash-join algorithm partitions the relations into several pieces, such that each piece of one of the relations fits in memory. The partitioning is carried out with a hash function on the join attributes so that corresponding pairs of partitions can be joined independently. "
},
{
"code": null,
"e": 10924,
"s": 10354,
"text": " The first action that the system must perform on a query is to translate the query into its internal form, which (for relational database systems) is usually based on the relational algebra. In the process of generating the internal form of the query, the parser checks the syntax of the user’s query, verifies that the relation names appearing in the query are names of relations in the database, and so on. If the query was expressed in terms of a view, the parser replaces all references to the view name with the relational-algebra expression to compute the view. "
},
{
"code": null,
"e": 10976,
"s": 10924,
"text": "Pipelines can be executed in the following two ways"
},
{
"code": null,
"e": 11667,
"s": 10976,
"text": " In a demand-driven pipeline, the system makes repeated requests for tuples from the operation at the top of the pipeline. Each time that an operation receives a request for tuples, it computes the next tuple (or tuples) to be returned and then returns that tuple. If the inputs of the operation are not pipelined, the next tuple(s) to be returned can be computed from the input relations, while the system keeps track of what has been returned so far. If it has some pipelined inputs, the operation also makes requests for tuples from its pipelined inputs. Using the tuples received from its pipelined inputs, the operation computes tuples for its output and passes them up to its parent. "
},
{
"code": null,
"e": 12005,
"s": 11667,
"text": " In a producer-driven pipeline, operations do not wait for requests to produce tuples but instead generate the tuples eagerly. Each operation in a producer-driven pipeline is modeled as a separate process or thread within the system that takes a stream of tuples from its pipelined inputs and generates a stream of tuples for its output."
},
{
"code": null,
"e": 12256,
"s": 12005,
"text": " The first step in each case is to partition the two relations by the same hash function, and thereby create the partitions r0,r1,...,rnh and s0,s1,...,snh. Depending on the operation, the system then takes these steps on each partition i =0,1,...,nh"
},
{
"code": null,
"e": 12281,
"s": 12256,
"text": "Different set operations"
},
{
"code": null,
"e": 12288,
"s": 12281,
"text": " r ∪ s"
},
{
"code": null,
"e": 12453,
"s": 12288,
"text": "\n Build an in-memory hash index on ri\n. Add the tuples in si to the hash index only if they are not already present.\nAdd the tuples in the hash index to the result\n"
},
{
"code": null,
"e": 12490,
"s": 12453,
"text": " Build an in-memory hash index on ri"
},
{
"code": null,
"e": 12569,
"s": 12490,
"text": ". Add the tuples in si to the hash index only if they are not already present."
},
{
"code": null,
"e": 12616,
"s": 12569,
"text": "Add the tuples in the hash index to the result"
},
{
"code": null,
"e": 12623,
"s": 12616,
"text": " r ∩ s"
},
{
"code": null,
"e": 12789,
"s": 12623,
"text": "\nBuild an in-memory hash index on ri.\nFor each tuple in si, probe the hash index and output the tuple to the result only if it is already present in the hash index.\n"
},
{
"code": null,
"e": 12826,
"s": 12789,
"text": "Build an in-memory hash index on ri."
},
{
"code": null,
"e": 12953,
"s": 12826,
"text": "For each tuple in si, probe the hash index and output the tuple to the result only if it is already present in the hash index."
},
{
"code": null,
"e": 12960,
"s": 12953,
"text": " r − s"
},
{
"code": null,
"e": 13184,
"s": 12960,
"text": "\nBuild an in-memory hash index on ri.\nFor each tuple in si, probe the hash index, and if the tuple is present in the hash index, delete it from the hash index.\nAdd the tuples remaining in the hash index to the result.\n\t \n"
},
{
"code": null,
"e": 13221,
"s": 13184,
"text": "Build an in-memory hash index on ri."
},
{
"code": null,
"e": 13343,
"s": 13221,
"text": "For each tuple in si, probe the hash index, and if the tuple is present in the hash index, delete it from the hash index."
},
{
"code": null,
"e": 13406,
"s": 13343,
"text": "Add the tuples remaining in the hash index to the result.\n\t "
},
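As an illustration, here is a minimal sketch of the r − s step for one partition pair (ri, si), using an in-memory hash set in place of the hash index:

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Set difference r - s for one partition pair (ri, si).
class HashSetDifference {
    static <T> Set<T> difference(List<T> ri, List<T> si) {
        Set<T> index = new HashSet<>(ri);  // build in-memory hash index on ri
        for (T tuple : si) {
            index.remove(tuple);           // probe; delete matching tuples
        }
        return index;                      // remaining tuples form the result
    }
}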
{
"code": null,
"e": 13515,
"s": 13406,
"text": "Various ways in which a selection operation on a relation whose tuples are stored together in one file are:"
},
{
"code": null,
"e": 13732,
"s": 13515,
"text": " A1 (linear search). In a linear search, the system scans each file block and tests all records to see whether they satisfy the selection condition. An initial seek is required to access the first block of the file. "
},
{
"code": null,
"e": 13938,
"s": 13732,
"text": "A2 (primary index, equality on key). For an equality comparison on a key attribute with a primary index, we can use the index to retrieve a single record that satisfies the corresponding equality condition"
},
{
"code": null,
"e": 14127,
"s": 13938,
"text": " A3 (primary index, equality on non-key). We can retrieve multiple records by using a primary index when the selection condition specifies an equality comparison on a non-key attribute, A."
},
{
"code": null,
"e": 14382,
"s": 14127,
"text": "A4 (secondary index, equality). Selections specifying an equality condition can use a secondary index. This strategy can retrieve a single record if the equality condition is on a key; multiple records may be retrieved if the indexing field is not a key "
},
{
"code": null,
"e": 14413,
"s": 14382,
"text": "Bucket overflow occurs because"
},
{
"code": null,
"e": 14747,
"s": 14413,
"text": " Insufficient buckets. The number of buckets, which we denote nB, must be chosen such that nB > nr/fr, where nr denotes the total number of records that will be stored and fr denotes the number of records that will fit in a bucket.This designation, assumes that the total number of records is known when the hash function is chosen. "
},
{
"code": null,
"e": 14915,
"s": 14747,
"text": " Skew. Some buckets are assigned more records than are others, so a bucket may overflow even when other buckets still have space. This situation is called bucket skew."
},
{
"code": null,
"e": 14947,
"s": 14915,
"text": "Skew can occur for two reasons:"
},
{
"code": null,
"e": 15076,
"s": 14947,
"text": "\nMultiple records may have the same search key.\n The chosen hash function may result in nonuniform distribution of search keys.\n"
},
{
"code": null,
"e": 15123,
"s": 15076,
"text": "Multiple records may have the same search key."
},
{
"code": null,
"e": 15203,
"s": 15123,
"text": " The chosen hash function may result in nonuniform distribution of search keys."
},
{
"code": null,
"e": 15758,
"s": 15203,
"text": " Dense index: In a dense index, an index entry appears for every search-key value in the file. In a dense clustering index, the index record contains the search-key value and a pointer to the first data record with that search-key value. The rest of the records with the same search-key value would be stored sequentially after the first record, since, because the index is a clustering one, records are sorted on the same search key. In a dense nonclustering index, the index must store a list of pointers to all records with the same search-key value. "
},
{
"code": null,
"e": 16415,
"s": 15758,
"text": " Sparse index: In a sparse index, an index entry appears for only some of the search-key values. Sparse indices can be used only if the relation is stored in sorted order of the search key, that is if the index is a clustering index. As is true in dense indices, each index entry contains a search-key value and a pointer to the first data record with that search-key value. To locate a record, we find the index entry with the largest search-key value that is less than or equal to the search-key value for which we are looking. We start at the record pointed to by that index entry, and follow the pointers in the file until we find the desired record.\n "
},
{
"code": null,
"e": 16465,
"s": 16415,
"text": "Indexing techniques are evaluated on the basis of"
},
{
"code": null,
"e": 16674,
"s": 16465,
"text": "Access types: The types of access that are supported efficiently.Access types can include finding records with a specified attribute value and finding records whose attribute values fall in a specified range."
},
{
"code": null,
"e": 16789,
"s": 16674,
"text": "Access time: The time it takes to find a particular data item, or set of items, using the technique in question. "
},
{
"code": null,
"e": 17005,
"s": 16789,
"text": " Insertion time: The time it takes to insert a new data item. This value includes the time it takes to find the correct place to insert the new data item, as well as the time it takes to update the index structure. "
},
{
"code": null,
"e": 17192,
"s": 17005,
"text": " Deletion time: The time it takes to delete a data item. This value includes the time it takes to find the item to be deleted, as well as the time it takes to update the index structure."
},
{
"code": null,
"e": 17403,
"s": 17192,
"text": "Space overhead: The additional space occupied by an index structure. Provided that the amount of additional space is moderate, it is usually worthwhile to sacrifice the space to achieve improved performance.\n\t "
},
{
"code": null,
"e": 17451,
"s": 17403,
"text": "Different techniques used by BUFFER MANAGER are"
},
{
"code": null,
"e": 17835,
"s": 17451,
"text": " Buffer replacement strategy. When there is no room left in the buffer, a block must be removed from the buffer before a new one can be read in. Most operating systems use a least recently used (LRU) scheme, in which the block that was referenced least recently is written back to disk and is removed from the buffer. This simple approach can be improved on for database application."
},
{
"code": null,
"e": 18342,
"s": 17835,
"text": "Pinned blocks. For the database system to be able to recover from crashes it is necessary to restrict those times when a block may be written back to disk.For instance, most recovery systems require that a block should not be written to disk while an update on the block is in progress. A block that is not allowed to be written back to disk is said to be pinned. Although many operating systems do not support pinned blocks, such a feature is essential for a database system that is resilient to crashes. "
},
{
"code": null,
"e": 18557,
"s": 18342,
"text": "Forced output of blocks.There are situations in which it is necessary to write back the block to disk, even though the buffer space that it occupies is not needed. This write is called the forced output of a block."
},
{
"code": null,
"e": 18605,
"s": 18557,
"text": "Records can be organized in the following ways:"
},
{
"code": null,
"e": 18799,
"s": 18605,
"text": " Heap file organization. Any record can be placed anywhere in the file where there is space for the record. There is no ordering of records. Typically, there is a single file for each relation."
},
{
"code": null,
"e": 18927,
"s": 18799,
"text": " Sequential file organization. Records are stored in sequential order, according to the value of a “search key” of each record."
},
{
"code": null,
"e": 19110,
"s": 18927,
"text": "Hashing file organization. A hash function is computed on some attribute of each record.The result of the hash function specifiesinwhich block of the file the record should be placed"
},
{
"code": null,
"e": 19253,
"s": 19110,
"text": "Mapping cardinalities, or cardinality ratios, express the number of entities to which another entity can be associated via a relationship set."
},
{
"code": null,
"e": 19440,
"s": 19253,
"text": "Mapping cardinalities are most useful in describing binary relationship sets, although they can contribute to the description of relationship sets that involve more than two entity sets."
},
{
"code": null,
"e": 19534,
"s": 19440,
"text": "For a binary relationship set R between entity sets A and B, the mapping cardinality must be "
},
{
"code": null,
"e": 19668,
"s": 19534,
"text": " One-to-one. An entity in A is associated with at most one entity in B, and an entity in B is associated with at most one entity in A"
},
{
"code": null,
"e": 19831,
"s": 19668,
"text": "One-to-many. An entity in A is associated with any number (zero or more) of entities in B. An entity in B, however, can be associated with at most one entity in A"
},
{
"code": null,
"e": 19996,
"s": 19831,
"text": " Many-to-one. An entity in A is associated with at most one entity in B. An entity in B, however, can be associated with any number (zero or more) of entities in A."
},
{
"code": null,
"e": 20171,
"s": 19996,
"text": " Many-to-many.An entity in A is associated with any number (zero or more) of entities in B, and an entity in B is associated with any number (zero or more) of entities in A. "
},
{
"code": null,
"e": 20214,
"s": 20171,
"text": "The basic data types supported by SQL are:"
},
{
"code": null,
"e": 20333,
"s": 20214,
"text": " char(n): A fixed-length character string with user-specified length n.The full form, character, can be used instead. "
},
{
"code": null,
"e": 20468,
"s": 20333,
"text": " varchar(n): A variable-length character string with user-specified maximum length n. The full form, character varying, is equivalent."
},
{
"code": null,
"e": 20585,
"s": 20468,
"text": " int: An integer (a finite subset of the integers that is machine dependent).The full form, integer, is equivalent."
},
{
"code": null,
"e": 20661,
"s": 20585,
"text": " smallint: A small integer(a machine-dependent subset of the integer type)."
},
{
"code": null,
"e": 20965,
"s": 20661,
"text": " numeric(p,d): A fixed-point number with user-specified precision.The number consists of p digits (plus a sign), and d of the p digits are to the right of the decimal point. Thus, numeric(3,1) allows 44.5 to be stored exactly, but neither 444.5 or 0 .32 can be stored exactly in a field of this type. "
},
{
"code": null,
"e": 21083,
"s": 20965,
"text": "real, double precision: Floating-point and double-precision floating-point numbers with machine-dependent precision. "
},
{
"code": null,
"e": 21159,
"s": 21083,
"text": " float(n): A floating-point number, with precision of at least n digits.\n\t "
},
{
"code": null,
"e": 21186,
"s": 21159,
"text": "Functions of a DBA include"
},
{
"code": null,
"e": 21311,
"s": 21186,
"text": "Schema definition.The DBA creates the original database schema by executing a set of data definition statements in the DDL. "
},
{
"code": null,
"e": 21360,
"s": 21311,
"text": " Storage structure and access-method definition."
},
{
"code": null,
"e": 21587,
"s": 21360,
"text": "Schema and physical-organization modification.The DBA carries out changes to the schema and physical organization to reflect the changing needs of the organization, or to alter the physical organization to improve performance."
},
{
"code": null,
"e": 21934,
"s": 21587,
"text": " Granting of authorization for data access. By granting different types of authorization, the database administrator can regulate which parts of the database various users can access. The authorization information is kept in a special system structure that the database system consults when ever someone attempts to access the data in the system."
},
{
"code": null,
"e": 22032,
"s": 21934,
"text": "Routine maintenance. Examples of the database administrator’s routine maintenance activities are:"
},
{
"code": null,
"e": 22426,
"s": 22032,
"text": "\n Periodically backing up the database, either onto tapes or onto remote servers, to prevent loss of data in case of disasters such as flooding.\n Ensuring that enough free disk space is available for normal operations, and upgrading disk space as required.\n Monitoring jobs running on the database and ensuring that performance is not degraded by very expensive tasks submitted by some users.\n"
},
{
"code": null,
"e": 22570,
"s": 22426,
"text": " Periodically backing up the database, either onto tapes or onto remote servers, to prevent loss of data in case of disasters such as flooding."
},
{
"code": null,
"e": 22682,
"s": 22570,
"text": " Ensuring that enough free disk space is available for normal operations, and upgrading disk space as required."
},
{
"code": null,
"e": 22818,
"s": 22682,
"text": " Monitoring jobs running on the database and ensuring that performance is not degraded by very expensive tasks submitted by some users."
},
{
"code": null,
"e": 23402,
"s": 22818,
"text": "The immediate database modification technique allows database modification to be output to the database while the transaction is still in the active state. The data modification written by active transactions are called “uncommitted modification”.\nIf the system crash or transaction aborts, then the old value field of the log records is used to restore the modified data items to the value they had prior to the start of the transaction. This restoration is accomplished through the undo operation. In order to understand undo operations, let us consider the format of log record.\n "
},
{
"code": null,
"e": 23425,
"s": 23402,
"text": "<Ti, Xj, V_old, V_new>"
},
{
"code": null,
"e": 23578,
"s": 23425,
"text": "Here, Ti is transaction identifier, Xj is the data item, V_old is the old value of data item and V_new is the modified or new value of the data item Xj."
},
{
"code": null,
"e": 23964,
"s": 23578,
"text": "Undo (Ti):\nIt restores the value of all data items updated by transaction T1 to the old values.\nBefore a transaction, T1 starts its execution the record <T1, start> is written to the log. During its execution, any write (x) operation by T1 is performed by writing of the appropriate new update record to the log. When T1 partially commits the record <T1, commit> is written to the log."
},
{
"code": null,
"e": 24513,
"s": 23964,
"text": "It ensures transaction atomicity by recording all database modifications in the log but deferring the execution of all write operations of a transaction until the transaction partially commits.\nA transaction is said to be partially committed once the final action of the transaction has been executed. When a transaction has performed all the actions, then the information in the log associated with the transaction is used in executing the deferred writes. In other words, at partial commits, time logged updates are “replayed” into database item."
},
{
"code": null,
"e": 24598,
"s": 24513,
"text": "The recovery procedure of deferred database modification is based on Redo operation "
},
{
"code": null,
"e": 25126,
"s": 24598,
"text": "Redo(Ti)\nIt sets the value of all data items updated by transaction Ti to the new values from the log of records.\nAfter a failure has occurred the recovery subsystem consults the log to determine which transaction need to be redone. Transaction Ti needs to be redone if an only if the log contains both the record <Ti, start> and the record <Ti, commit>. Thus, if the system crashes after the transaction completes its execution, then the information in the log is used in restoring the system to a previous consistence state."
},
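A hedged sketch of undo and redo over log records of the form <Ti, Xj, V_old, V_new> described above (the types are illustrative, and the "database" is a simple map):

import java.util.List;
import java.util.Map;

// Undo/redo over log records <Ti, Xj, V_old, V_new>.
class LogRecovery {
    record LogRecord(String tx, String item, int oldValue, int newValue) {}

    // undo(Ti): scan the log backwards, restoring old values written by Ti.
    static void undo(String ti, List<LogRecord> log, Map<String, Integer> db) {
        for (int k = log.size() - 1; k >= 0; k--) {
            LogRecord r = log.get(k);
            if (r.tx().equals(ti)) db.put(r.item(), r.oldValue());
        }
    }

    // redo(Ti): scan the log forwards, reapplying new values written by Ti.
    static void redo(String ti, List<LogRecord> log, Map<String, Integer> db) {
        for (LogRecord r : log) {
            if (r.tx().equals(ti)) db.put(r.item(), r.newValue());
        }
    }
}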
{
"code": null,
"e": 25381,
"s": 25126,
"text": "Log Based Recovery is used for recording database modification. In log based recovery a log file is maintained for recovery purpose.\nThe log file is a sequence of log records. Log Record maintains a record of all the operations (update) of the database. "
},
{
"code": null,
"e": 25403,
"s": 25381,
"text": "Types of log records:"
},
{
"code": null,
"e": 25423,
"s": 25403,
"text": "<Start> Log Record:"
},
{
"code": null,
"e": 25598,
"s": 25423,
"text": "Contain information about the start of each transaction. It has transaction identification. Transaction identifier is the unique identification of the transaction that starts"
},
{
"code": null,
"e": 25614,
"s": 25598,
"text": "Representation:"
},
{
"code": null,
"e": 25627,
"s": 25614,
"text": "<Ti , start>"
},
{
"code": null,
"e": 25648,
"s": 25627,
"text": "<Update> Log Record:"
},
{
"code": null,
"e": 25880,
"s": 25648,
"text": "It describes a single database write and has the following fields:\n< Ti, Xj, V1,V2 >\nHere, Ti is transaction identifier, Xj is the data item, V1 is the old value of data item and V2 is the modified or new value of the data item Xj."
},
{
"code": null,
"e": 25900,
"s": 25880,
"text": "<Commit> Log Record"
},
{
"code": null,
"e": 26014,
"s": 25900,
"text": "When a transaction Ti is successfully committed or completed a <Ti, commit> log record is stored in the log file."
},
{
"code": null,
"e": 26033,
"s": 26014,
"text": "<Abort> Log Record"
},
{
"code": null,
"e": 26137,
"s": 26033,
"text": "When a transaction Ti is aborted due to any reason, a <Ti, abort> log record is stored in the log file."
},
{
"code": null,
"e": 26385,
"s": 26139,
"text": " A Stable Storage is a storage in which information is never lost. Stable storage devices are the theoretically impossible to obtain. But, we must use some technique to design a storage system in which the chances of data loss are extremely low."
},
{
"code": null,
"e": 26405,
"s": 26385,
"text": "Causes of Failures:"
},
{
"code": null,
"e": 26420,
"s": 26405,
"text": "System Crashes"
},
{
"code": null,
"e": 26431,
"s": 26420,
"text": "User Error"
},
{
"code": null,
"e": 26444,
"s": 26431,
"text": "Carelessness"
},
{
"code": null,
"e": 26486,
"s": 26444,
"text": "Sabotage (intentional corruption of data)"
},
{
"code": null,
"e": 26504,
"s": 26486,
"text": "Statement Failure"
},
{
"code": null,
"e": 26532,
"s": 26504,
"text": "Application software errors"
},
{
"code": null,
"e": 26548,
"s": 26532,
"text": "Network Failure"
},
{
"code": null,
"e": 26562,
"s": 26548,
"text": "Media Failure"
},
{
"code": null,
"e": 26590,
"s": 26562,
"text": "Natural Physical Disasters "
},
{
"code": null,
"e": 26689,
"s": 26590,
"text": "The most important information needed for whole recovery process must be stored in stable storage."
},
{
"code": null,
"e": 26869,
"s": 26689,
"text": " Data Model can be defined as an integrated collection of concepts for describing and manipulating data, relationships between data,and constraints on the data in an organization."
},
{
"code": null,
"e": 26901,
"s": 26869,
"text": "Different types of data models:"
},
{
"code": null,
"e": 27014,
"s": 26901,
"text": "Object Based Data Models -Object based data models use concepts such as entities, attributes, and relationships."
},
{
"code": null,
"e": 27103,
"s": 27014,
"text": "Physical Data Models - Physical data models describe how data is stored in the computer."
},
{
"code": null,
"e": 27218,
"s": 27103,
"text": "Record Based Data Models - Record based logical models are used in describing data at the logical and view levels."
},
{
"code": null,
"e": 27402,
"s": 27218,
"text": "The object based and record based data models are used to describe data at the conceptual and external levels, the physical data model issued to describe data at the internal level.\n "
},
{
"code": null,
"e": 28008,
"s": 27402,
"text": "In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. The most common requirement is to minimize the time taken to execute a program, a less common one is to minimize the amount of memory occupied. The growth of portable computers has created a market for minimizing the power consumed by a program. Compiler optimization is generally implemented using a sequence of optimizing transformations, algorithms which take a program and transform it to produce a semantically equivalent output program that uses fewer resources."
},
{
"code": null,
"e": 28195,
"s": 28010,
"text": "Synthesis Phase, also known as the back-end of the compiler, the synthesis phase generates the target program with the help of intermediate source code representation and symbol table."
},
{
"code": null,
"e": 28239,
"s": 28195,
"text": "A compiler can have many phases and passes."
},
{
"code": null,
"e": 28320,
"s": 28239,
"text": "\nPass: A pass refers to the traversal of a compiler through the entire program.\n"
},
{
"code": null,
"e": 28399,
"s": 28320,
"text": "Pass: A pass refers to the traversal of a compiler through the entire program."
},
{
"code": null,
"e": 28612,
"s": 28399,
"text": "\nPhase: A phase of a compiler is a distinguishable stage, which takes input from the previous stage, processes and yields output that can be used as input for the next stage. A pass can have more than one phase.\n"
},
{
"code": null,
"e": 28823,
"s": 28612,
"text": "Phase: A phase of a compiler is a distinguishable stage, which takes input from the previous stage, processes and yields output that can be used as input for the next stage. A pass can have more than one phase."
},
{
"code": null,
"e": 28846,
"s": 28823,
"text": "LOOSELY COUPLED SYSTEM"
},
{
"code": null,
"e": 28888,
"s": 28846,
"text": "Each processor has its own memory module."
},
{
"code": null,
"e": 28967,
"s": 28888,
"text": "Efficient when tasks running on different processors, has minimal interaction."
},
{
"code": null,
"e": 29015,
"s": 28967,
"text": "It generally, do not encounter memory conflict."
},
{
"code": null,
"e": 29046,
"s": 29015,
"text": "Message transfer system (MTS)."
},
{
"code": null,
"e": 29064,
"s": 29046,
"text": "Data rate is low."
},
{
"code": null,
"e": 29079,
"s": 29064,
"text": "Less expensive"
},
{
"code": null,
"e": 29102,
"s": 29079,
"text": "TIGHTLY COUPLED SYSTEM"
},
{
"code": null,
"e": 29141,
"s": 29102,
"text": "Processors have shared memory modules."
},
{
"code": null,
"e": 29191,
"s": 29141,
"text": "Efficient for high-speed or real-time processing."
},
{
"code": null,
"e": 29229,
"s": 29191,
"text": "It experiences more memory conflicts."
},
{
"code": null,
"e": 29273,
"s": 29229,
"text": "Interconnection networks PMIN, IOPIN, ISIN."
},
{
"code": null,
"e": 29292,
"s": 29273,
"text": "Data rate is high."
},
{
"code": null,
"e": 29308,
"s": 29292,
"text": "More expensive."
},
{
"code": null,
"e": 29705,
"s": 29308,
"text": "An annotated parse tree is one in which various facts about the program have been attached to parse tree nodes. For example, one might compute the set of identifiers that each subtree mentions, and attach that set to the subtree. Compilers have to store information they have collected about the program somewhere; this is a convenient place to store information which is derivable from the tree."
},
{
"code": null,
"e": 29709,
"s": 29705,
"text": "NFA"
},
{
"code": null,
"e": 29851,
"s": 29709,
"text": "NFA or Non-Deterministic Finite Automaton is the one in which there exist many paths for a specific input from a current state to next state."
},
{
"code": null,
"e": 29888,
"s": 29851,
"text": "NFA can use Empty String transition."
},
{
"code": null,
"e": 29966,
"s": 29888,
"text": "NFA can be understood as multiple little machines computing at the same time."
},
{
"code": null,
"e": 30064,
"s": 29966,
"text": "If all of the branches of NFA dies or rejects the string, we can say that NFA rejects the string."
},
{
"code": null,
"e": 30135,
"s": 30064,
"text": "We do not need to specify how the NFA reacts according to some symbol."
},
{
"code": null,
"e": 30139,
"s": 30135,
"text": "DFA"
},
{
"code": null,
"e": 30321,
"s": 30139,
"text": "Deterministic Finite Automaton is an FA in which there is only one path for a specific input from the current state to next state. There is a unique transition on each input symbol."
},
{
"code": null,
"e": 30360,
"s": 30321,
"text": "DFA cannot use Empty String transition"
},
{
"code": null,
"e": 30397,
"s": 30360,
"text": "DFA can be understood as one machine"
},
{
"code": null,
"e": 30465,
"s": 30397,
"text": "DFA will reject the string if it ends at other than accepting state"
},
{
"code": null,
"e": 30542,
"s": 30465,
"text": "For Every symbol of the alphabet, there is only one state transition in DFA."
},
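A minimal sketch of the DFA property that each symbol triggers exactly one transition (the automaton, which accepts binary strings ending in 1, is my own example):

// A DFA accepting binary strings that end in '1'.
class DfaDemo {
    public static void main(String[] args) {
        int[][] transition = { {0, 1}, {0, 1} }; // transition[state][symbol]
        int accepting = 1;
        String input = "10101";
        int state = 0;
        for (char c : input.toCharArray()) {
            state = transition[state][c - '0']; // exactly one move per symbol
        }
        System.out.println(state == accepting ? "accepted" : "rejected");
    }
}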
{
"code": null,
"e": 30931,
"s": 30542,
"text": "Intermediate code generator receives input from its predecessor phase, semantic analyzer, in the form of an annotated syntax tree. That syntax tree then can be converted into a linear representation, e.g., postfix notation. Intermediate code tends to be machine independent code. Therefore, code generator assumes to have an unlimited number of memory storage (register) to generate code."
},
{
"code": null,
"e": 31071,
"s": 30931,
"text": "A three-address code has at most three address locations to calculate the expression. A three-address code can be represented in two forms:"
},
{
"code": null,
"e": 31083,
"s": 31071,
"text": "quadruples "
},
{
"code": null,
"e": 31091,
"s": 31083,
"text": "triples"
},
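As an illustration (the expression is my own), the statement a = b + c * d yields the following quadruples, each carrying an operator, two arguments, and a result:

// a = b + c * d as a list of quadruples (op, arg1, arg2, result).
record Quad(String op, String arg1, String arg2, String result) {}

class ThreeAddressDemo {
    public static void main(String[] args) {
        Quad[] code = {
            new Quad("*", "c", "d", "t1"),   // t1 = c * d
            new Quad("+", "b", "t1", "t2"),  // t2 = b + t1
            new Quad("=", "t2", "", "a"),    // a  = t2
        };
        for (Quad q : code) System.out.println(q);
    }
}

In triple form, the result column is dropped and later instructions refer to earlier ones by their position in the sequence.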
{
"code": null,
"e": 31131,
"s": 31091,
"text": "Intermediate code can be represented as"
},
{
"code": null,
"e": 31414,
"s": 31131,
"text": "\nHigh Level IR - High-level intermediate code representation is very close to the source language itself. They can be easily generated from the source code and we can easily apply code modifications to enhance performance. But for target machine optimization, it is less preferred.\n"
},
{
"code": null,
"e": 31695,
"s": 31414,
"text": "High Level IR - High-level intermediate code representation is very close to the source language itself. They can be easily generated from the source code and we can easily apply code modifications to enhance performance. But for target machine optimization, it is less preferred."
},
{
"code": null,
"e": 31893,
"s": 31695,
"text": "\nLow Level IR - This one is close to the target machine, which makes it suitable for register and memory allocation, instruction set selection, etc. It is good for machine-dependent optimizations.\n"
},
{
"code": null,
"e": 32089,
"s": 31893,
"text": "Low Level IR - This one is close to the target machine, which makes it suitable for register and memory allocation, instruction set selection, etc. It is good for machine-dependent optimizations."
},
{
"code": null,
"e": 32212,
"s": 32089,
"text": "Intermediate code can be either language specific (e.g., Byte Code for Java) or language independent (three-address code)."
},
{
"code": null,
"e": 32325,
"s": 32212,
"text": "We need to translate the source code into intermediate code which is then translated to its target code because:"
},
{
"code": null,
"e": 32526,
"s": 32325,
"text": "If a compiler translates the source language to its target machine language without having the option for generating an intermediate code, then for each new machine, a full native compiler is required"
},
{
"code": null,
"e": 32673,
"s": 32526,
"text": "Intermediate code eliminates the need for a new full compiler for every unique machine by keeping the analysis portion same for all the compilers."
},
{
"code": null,
"e": 32757,
"s": 32673,
"text": "The second part of compiler, synthesis, is changed according to the target machine."
},
{
"code": null,
"e": 32909,
"s": 32757,
"text": "It becomes easier to apply the source code modifications to improve code performance by applying code optimization techniques on the intermediate code."
},
{
"code": null,
"e": 33209,
"s": 32909,
"text": "The problem in generating three address codes in a single pass is that we may not know the labels that control must go to at the time jump statements are generated.So to get around this problem a series of branching statements with the targets of the jumps temporarily left unspecified is generated."
},
{
"code": null,
"e": 33301,
"s": 33209,
"text": "Back Patching is putting the address instead of labels when the proper label is determined."
},
{
"code": null,
"e": 33360,
"s": 33301,
"text": "Back patching Algorithms perform three types of operations"
},
{
"code": null,
"e": 33501,
"s": 33360,
"text": "1) makelist (i) – creates a new list containing only i, an index into the array of quadruples and returns a pointer to the list it has made."
},
{
"code": null,
"e": 33613,
"s": 33501,
"text": "2) Merge (i, j) – concatenates the lists pointed to by i and j, and returns a pointer to the concatenated list."
},
{
"code": null,
"e": 33721,
"s": 33613,
"text": "3) Backpatch (p, i) – inserts i as the target label for each of the statements on the list pointed to by p."
},
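A hedged sketch of the three operations over an array of jump instructions (the types and names are illustrative):

import java.util.ArrayList;
import java.util.List;

// Backpatching lists over an array of jump instructions.
class Backpatch {
    static class Instr { String op = "goto"; int target = -1; } // -1 = unfilled
    final List<Instr> quads = new ArrayList<>();

    // makelist(i): a new list containing only quad index i.
    List<Integer> makelist(int i) {
        List<Integer> l = new ArrayList<>();
        l.add(i);
        return l;
    }

    // merge(i, j): the concatenation of the two lists.
    List<Integer> merge(List<Integer> i, List<Integer> j) {
        List<Integer> l = new ArrayList<>(i);
        l.addAll(j);
        return l;
    }

    // backpatch(p, i): fill label i into every instruction on list p.
    void backpatch(List<Integer> p, int i) {
        for (int idx : p) quads.get(idx).target = i;
    }
}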
{
"code": null,
"e": 33794,
"s": 33721,
"text": "A symbol table, either linear or hash, provides the following operations"
},
{
"code": null,
"e": 33803,
"s": 33794,
"text": "insert()"
},
{
"code": null,
"e": 34072,
"s": 33803,
"text": "This operation is more frequently used by analysis phase, i.e., the first half of the compiler where tokens are identified and names are stored in the table. This operation is used to add information in the symbol table about unique names occurring in the source code."
},
{
"code": null,
"e": 34191,
"s": 34072,
"text": "The insert() function takes the symbol and its attributes as arguments and stores the information in the symbol table."
},
{
"code": null,
"e": 34207,
"s": 34191,
"text": "EXAMPLE:\nint a;"
},
{
"code": null,
"e": 34223,
"s": 34207,
"text": "insert(a, int);"
},
{
"code": null,
"e": 34234,
"s": 34225,
"text": "lookup()"
},
{
"code": null,
"e": 34312,
"s": 34234,
"text": "lookup() operation is used to search a name in the symbol table to determine:"
},
{
"code": null,
"e": 34347,
"s": 34312,
"text": "if the symbol exists in the table."
},
{
"code": null,
"e": 34390,
"s": 34347,
"text": "if it is declared before it is being used."
},
{
"code": null,
"e": 34424,
"s": 34390,
"text": "if the name is used in the scope."
},
{
"code": null,
"e": 34454,
"s": 34424,
"text": "if the symbol is initialized."
},
{
"code": null,
"e": 34493,
"s": 34454,
"text": "if the symbol declared multiple times."
},
{
"code": null,
"e": 34571,
"s": 34493,
"text": "The format of lookup() function varies according to the programming language."
},
{
"code": null,
"e": 34585,
"s": 34571,
"text": "Basic format:"
},
{
"code": null,
"e": 34600,
"s": 34585,
"text": "lookup(symbol)"
},
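A minimal sketch of a hash-based symbol table offering these two operations (the Symbol attributes are illustrative):

import java.util.HashMap;
import java.util.Map;

// A hash-based symbol table with insert() and lookup().
class SymbolTable {
    record Symbol(String name, String type) {}
    private final Map<String, Symbol> table = new HashMap<>();

    // insert(symbol, attributes): store information about a name.
    void insert(String name, String type) {
        table.put(name, new Symbol(name, type));
    }

    // lookup(symbol): null means the name was never declared.
    Symbol lookup(String name) {
        return table.get(name);
    }
}
// Usage mirroring the example above:  st.insert("a", "int");  st.lookup("a");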
{
"code": null,
"e": 34829,
"s": 34602,
"text": "Symbol table is an important data structure created and maintained by compilers in order to store information about the occurrence of various entities such as variable names, function names, objects, classes, interfaces, etc. "
},
{
"code": null,
"e": 34856,
"s": 34829,
"text": "A Symbol table is used for"
},
{
"code": null,
"e": 34928,
"s": 34856,
"text": "\nTo store the names of all entities in a structured form at one place.\n"
},
{
"code": null,
"e": 34998,
"s": 34928,
"text": "To store the names of all entities in a structured form at one place."
},
{
"code": null,
"e": 35043,
"s": 34998,
"text": "\nTo verify if a variable has been declared.\n"
},
{
"code": null,
"e": 35086,
"s": 35043,
"text": "To verify if a variable has been declared."
},
{
"code": null,
"e": 35202,
"s": 35086,
"text": "\nTo implement type checking, by verifying assignments and expressions in the source code are semantically correct.\n"
},
{
"code": null,
"e": 35316,
"s": 35202,
"text": "To implement type checking, by verifying assignments and expressions in the source code are semantically correct."
},
{
"code": null,
"e": 35371,
"s": 35316,
"text": "\nTo determine the scope of a name (scope resolution).\n"
},
{
"code": null,
"e": 35424,
"s": 35371,
"text": "To determine the scope of a name (scope resolution)."
},
{
"code": null,
"e": 35443,
"s": 35426,
"text": "Lexical Analysis"
},
{
"code": null,
"e": 35470,
"s": 35443,
"text": "First phase of a compiler."
},
{
"code": null,
"e": 35497,
"s": 35470,
"text": "It is also called scanner."
},
{
"code": null,
"e": 35508,
"s": 35497,
"text": "Main task:"
},
{
"code": null,
"e": 35578,
"s": 35508,
"text": "read the input characters and produce as output a sequence of tokens."
},
{
"code": null,
"e": 35636,
"s": 35578,
"text": "Process: Input: program as a single string of characters."
},
{
"code": null,
"e": 35753,
"s": 35636,
"text": "Collects characters into logical groupings and assigns internal codes to the groupings according to their structure."
},
{
"code": null,
"e": 35772,
"s": 35753,
"text": "Groupings: lexemes"
},
{
"code": null,
"e": 35795,
"s": 35772,
"text": "Internal codes: tokens"
},
{
"code": null,
"e": 35812,
"s": 35795,
"text": "Secondary tasks:"
},
{
"code": null,
"e": 35930,
"s": 35812,
"text": " Stripping out from the source program comments and white spaces in the form of blank, tab, and new line characters. "
},
{
"code": null,
"e": 36001,
"s": 35930,
"text": "Correlating error messages from the compiler with the source program. "
},
{
"code": null,
"e": 36065,
"s": 36001,
"text": "Inserting lexemes for user-defined names into the symbol table."
},
{
"code": null,
"e": 36081,
"s": 36065,
"text": "Syntax Analysis"
},
{
"code": null,
"e": 36197,
"s": 36081,
"text": "The syntax analyzer or parser must determine the structure of the sequence of tokens provided to it by the scanner."
},
{
"code": null,
"e": 36267,
"s": 36197,
"text": "Check the input program to determine whether is syntactically correct"
},
{
"code": null,
"e": 36365,
"s": 36267,
"text": " Produce either a complete parse tree of at least trace the structure of the complete parse tree."
},
{
"code": null,
"e": 36536,
"s": 36365,
"text": "Error: produce a diagnostic message and recover (gets back to a normal state and continue the analysis of the input program: find as many errors as possible in one pass)."
},
{
"code": null,
"e": 36571,
"s": 36536,
"text": "Different scheduling criteria are:"
},
{
"code": null,
"e": 36820,
"s": 36571,
"text": "CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system). "
},
{
"code": null,
"e": 36997,
"s": 36820,
"text": "Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput."
},
{
"code": null,
"e": 37377,
"s": 36997,
"text": "Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/0."
},
{
"code": null,
"e": 37659,
"s": 37377,
"text": "Waiting time. The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/0, it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue. "
},
{
"code": null,
"e": 37876,
"s": 37659,
"text": "Response time. Time from the submission of a request until the first response is produced. This measure, called response time, is the since it takes to start responding, not the time it takes to output the response. "
},
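As a small worked example of the waiting-time and turnaround-time definitions, the sketch below evaluates a non-preemptive FCFS run (the burst times are illustrative, and all processes are assumed to arrive at time 0):

// Waiting and turnaround time under FCFS, all arrivals at t = 0.
class FcfsMetrics {
    public static void main(String[] args) {
        int[] burst = {24, 3, 3};        // CPU bursts of P1, P2, P3
        int clock = 0;
        for (int i = 0; i < burst.length; i++) {
            int waiting = clock;                  // time spent in the ready queue
            int turnaround = clock + burst[i];    // submission to completion
            System.out.printf("P%d waiting=%d turnaround=%d%n",
                    i + 1, waiting, turnaround);
            clock += burst[i];
        }
    }
}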
{
"code": null,
"e": 37953,
"s": 37878,
"text": "A thread that is to be canceled is often referred to as the target thread."
},
{
"code": null,
"e": 38018,
"s": 37953,
"text": "Cancellation of a target thread may occur in two different ways:"
},
{
"code": null,
"e": 38099,
"s": 38018,
"text": "Asynchronous cancellation. One thread immediately terminates the target thread. "
},
{
"code": null,
"e": 38260,
"s": 38099,
"text": "Deferred cancellation. The target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion. "
},
{
"code": null,
"e": 38346,
"s": 38260,
"text": "Benefits of multithreaded programming can be broken down into four major categories: "
},
{
"code": null,
"e": 38560,
"s": 38346,
"text": "Responsiveness. Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. "
},
{
"code": null,
"e": 38838,
"s": 38560,
"text": "Resource sharing. Processes may only share resources through techniques such as shared memory or message passing. Such techniques must be explicitly arranged by the programmer. However, threads share the memory and the resources of the process to which they belong by default. "
},
{
"code": null,
"e": 39201,
"s": 38838,
"text": "Economy. Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. Empirically gauging the difference in overhead can be difficult, but in general, it is much more time consuming to create and manage processes than threads."
},
{
"code": null,
"e": 39521,
"s": 39201,
"text": "Scalability. The benefits of multithreading can be greatly increased in a multiprocessor architecture, where threads may be running in parallel on different processors. A single-threaded process can only run on one processor, regardless how many are available. Multithreading on a multiCPU machine increases parallelism"
},
{
"code": null,
"e": 39575,
"s": 39521,
"text": "Process Cooperation is necessary because it provides"
},
{
"code": null,
"e": 39778,
"s": 39575,
"text": "Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information. "
},
{
"code": null,
"e": 40066,
"s": 39778,
"text": "Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels). "
},
{
"code": null,
"e": 40233,
"s": 40066,
"text": "Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads, as we discussed in Chapter 2. "
},
{
"code": null,
"e": 40386,
"s": 40233,
"text": "Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel. "
},
{
"code": null,
"e": 40578,
"s": 40386,
"text": "For each monitor, a semaphore mutex (initialized to 1) is provided. A process must execute wait (mutex) before entering the monitor and must execute signal (mutex) after leaving the monitor. "
},
{
"code": null,
"e": 40889,
"s": 40578,
"text": "Since a signaling process must wait until the resumed process either leaves or waits, an additional semaphore, next, is introduced, initialized to 0. The signaling processes can use next to suspend themselves. An integer variable next_count is also provided to count the number of processes suspended on next. "
},
{
"code": null,
"e": 40938,
"s": 40889,
"text": "Thus, each external procedure F is replaced by :"
},
{
"code": null,
"e": 41018,
"s": 40938,
"text": "wait(mutex);\nbody of F\nif (next_count > 0)\n signal(next);\nelse\n signal(mutex);"
},
{
"code": null,
"e": 41064,
"s": 41018,
"text": "Mutual exclusion within a monitor is ensured."
},
{
"code": null,
"e": 41101,
"s": 41064,
"text": " Condition variables implementation."
},
{
"code": null,
"e": 41211,
"s": 41101,
"text": "For each condition x, we introduce a semaphore x_sem and an integer variable x_count, both initialized to 0. "
},
{
"code": null,
"e": 41225,
"s": 41211,
"text": "For x.wait():"
},
{
"code": null,
"e": 41320,
"s": 41225,
"text": "x_count++;\nif (next_count > 0)\n signal(next);\nelse\n signal(mutex);\nwait(x_sem);\nx_count--;"
},
{
"code": null,
"e": 41336,
"s": 41320,
"text": "For x.signal():"
},
{
"code": null,
"e": 41412,
"s": 41336,
"text": "if (x_count > 0) {\n next_count++;\n signal(x_sem);\n wait(next);\n next_count--;\n}"
},
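{
"code": null,
"e": null,
"s": null,
"text": "As a hedged illustration (not part of the original algorithm text), the same scheme can be sketched in Java with java.util.concurrent.Semaphore for a single condition x:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.util.concurrent.Semaphore; \n\npublic class MonitorSketch \n{ \n private final Semaphore mutex = new Semaphore(1); // monitor entry \n private final Semaphore next = new Semaphore(0); // signalers suspend here \n private int next_count = 0; \n private final Semaphore x_sem = new Semaphore(0); // condition x \n private int x_count = 0; \n\n public void enter() throws InterruptedException { mutex.acquire(); } \n\n public void leave() \n { \n if (next_count > 0) next.release(); else mutex.release(); \n } \n\n public void xWait() throws InterruptedException \n { \n x_count++; \n if (next_count > 0) next.release(); else mutex.release(); \n x_sem.acquire(); \n x_count--; \n } \n\n public void xSignal() throws InterruptedException \n { \n if (x_count > 0) { \n next_count++; \n x_sem.release(); \n next.acquire(); \n next_count--; \n } \n } \n}"
},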
{
"code": null,
"e": 41796,
"s": 41412,
"text": "There are a few solutions to the priority-inversion problem in real-time systems. One is to turn off all system interrupts, effectively halting thread preemption in the system, while critical tasks execute. However, to make this work, you cannot implement more than two thread priorities, and critical sections where resources are locked need to be very brief and tightly controlled."
},
{
"code": null,
"e": 41900,
"s": 41796,
"text": "However, a more practical and less-invasive solution is to implement the priority inheritance protocol."
},
{
"code": null,
"e": 42391,
"s": 41900,
"text": "With priority inheritance, the system code that implements resource locking checks to see if a lower priority thread already owns a lock on the associated resource when a thread attempts to lock it. If one does, that owning thread's priority is temporarily increased to match that of the higher priority thread attempting to acquire the lock. As a result, the lock owner (once blocked at a lower priority) will execute, release the lock, and then be restored to its original priority level."
},
{
"code": null,
"e": 42818,
"s": 42391,
"text": "The priority-based model of execution states that a task can only be preempted by another task of higher priority. However, scenarios can arise where a lower priority task may indirectly preempt a higher priority task, in a sense inverting the priorities of the associated tasks, and violating the priority-based ordering of execution. This is called \"priority inversion\", and usually occurs when resource sharing is involved."
},
{
"code": null,
"e": 42991,
"s": 42818,
"text": "I/O-bound programs have the property of performing only a small amount of computation before performing I/O. Such programs typically do not use up their entire CPU quantum."
},
{
"code": null,
"e": 43278,
"s": 42991,
"text": "CPU-bound programs, on the other hand, use their entire quantum without performing any blocking I/O operations. Consequently, one could make better use of the computer’s resources by giving higher priority to I/O-bound programs and allow them to execute ahead of the CPU-bound programs."
},
{
"code": null,
"e": 43379,
"s": 43278,
"text": "Processor affinity means you can specify which processor(s) a given process or thread should run on."
},
{
"code": null,
"e": 43395,
"s": 43379,
"text": "AFFINITY LEVELS"
},
{
"code": null,
"e": 43453,
"s": 43395,
"text": "There are three levels of affinity in the RTSS subsystem:"
},
{
"code": null,
"e": 43553,
"s": 43453,
"text": "Subsystem affinity - Subsystem affinity refers to the set of processors you have dedicated to RTSS."
},
{
"code": null,
"e": 43944,
"s": 43553,
"text": "Process affinity - Process affinity refers to the processors that the threads of a given process may run on. If you don't specify a processor for a process to run on, its main thread will run it on the lowest-numbered RTSS processor available in the system. The set of processors that a process's threads can run on must be a subset of the set of processors available to the RTSS subsystem."
},
{
"code": null,
"e": 44256,
"s": 43944,
"text": "Thread affinity - Thread affinity determines the processors that an individual thread can run on. By default, a thread will run on the lowest-numbered RTSS processor available for the process to run on. The set of processors a thread can run on must be a subset of the set of processors its process can run on."
},
{
"code": null,
"e": 44777,
"s": 44256,
"text": "Spinlocks are not appropriate for single-processor systems because the condition that would break a process out of the spinlock can be obtained only by executing a different process. If the process is not relinquishing the processor, other processes do not get the opportunity to set the program condition required for the first process to make progress. In a multiprocessor system, other processes execute on other processors and thereby modify the program state in order to release the first process from the spinlock."
},
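{
"code": null,
"e": null,
"s": null,
"text": "A minimal Java sketch of a spinlock using an atomic flag (an illustrative assumption, not from the original text); the busy-wait below only makes sense when the lock holder runs on another processor:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.util.concurrent.atomic.AtomicBoolean; \n\npublic class SpinLock \n{ \n private final AtomicBoolean locked = new AtomicBoolean(false); \n\n public void lock() \n { \n // spin until another processor releases the flag \n while (!locked.compareAndSet(false, true)) { } \n } \n\n public void unlock() \n { \n locked.set(false); \n } \n}"
},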
{
"code": null,
"e": 44797,
"s": 44777,
"text": "Long-Term Scheduler"
},
{
"code": null,
"e": 45217,
"s": 44797,
"text": "A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution. Process loads into the memory for CPU scheduling.The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and processor bound. It also controls the degree of multiprogramming.It is also called a job scheduler"
},
{
"code": null,
"e": 45238,
"s": 45217,
"text": "Short-Term Scheduler"
},
{
"code": null,
"e": 45711,
"s": 45238,
"text": "Its main objective is to increase system performance in accordance with the chosen set of criteria. It is the change of ready state to running state of the process. CPU scheduler selects a process among the processes that are ready to execute and allocates CPU to one of them.Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.It is also called as CPU scheduler."
},
{
"code": null,
"e": 45733,
"s": 45711,
"text": "Medium-Term Scheduler"
},
{
"code": null,
"e": 45940,
"s": 45733,
"text": "It removes the processes from the memory. It reduces the degree of multiprogramming. The medium-term scheduler is in-charge of handling the swapped out-processes.Medium-term scheduling is a part of swapping"
},
{
"code": null,
"e": 46356,
"s": 45940,
"text": "A trap is an exception in a user process. It's caused by division by zero or invalid memory access. It's also the usual way to invoke a kernel routine (a system call) because those run with a higher priority than user code. Handling is synchronous (so the user code is suspended and continues afterwards). In a sense they are \"active\" - most of the time, the code expects the trap to happen and relies on this fact."
},
{
"code": null,
"e": 46640,
"s": 46356,
"text": "An interrupt is something generated by the hardware (devices like the hard disk, graphics card, I/O ports, etc). These are asynchronous (i.e. they don't happen at predictable places in the user code) or \"passive\" since the interrupt handler has to wait for them to happen eventually."
},
{
"code": null,
"e": 46659,
"s": 46640,
"text": "Single Inheritance"
},
{
"code": null,
"e": 46703,
"s": 46659,
"text": "Derived class inherits a single base class."
},
{
"code": null,
"e": 46753,
"s": 46703,
"text": "Class derived_class : access_specifier base class"
},
{
"code": null,
"e": 46808,
"s": 46753,
"text": "Derived class access the features of single base class"
},
{
"code": null,
"e": 46835,
"s": 46808,
"text": "Public, Private, Protected"
},
{
"code": null,
"e": 46878,
"s": 46835,
"text": "Require small amount of run time over head"
},
{
"code": null,
"e": 46899,
"s": 46878,
"text": "Multiple Inheritance"
},
{
"code": null,
"e": 46955,
"s": 46899,
"text": "Derived class inherits two or more than two base class."
},
{
"code": null,
"e": 47042,
"s": 46955,
"text": "Class derived _class: access_specifier base_class1, access_specifier base_class2, ...."
},
{
"code": null,
"e": 47111,
"s": 47042,
"text": "Derived class access the combined features of inherited base classes"
},
{
"code": null,
"e": 47138,
"s": 47111,
"text": "Public, Private, Protected"
},
{
"code": null,
"e": 47208,
"s": 47138,
"text": "Require additional runtime overhead as compared to single inheritance"
},
{
"code": null,
"e": 47561,
"s": 47208,
"text": "If limited to single inheritance, the result is a specialization hierarchy and has a tree topology. Otherwise, in general, it forms a specialization lattice with DAG topology. An entity type with more than one superclass is called a shared subclass. A shared subclass inherits attributes from its superclasses only once, just like in most OO languages."
},
{
"code": null,
"e": 47702,
"s": 47561,
"text": "A category T is a class that is a subset of the union of n defining superclasses D1, D2, ..., Dn,n > 1 and is formally specified as follows:"
},
{
"code": null,
"e": 47725,
"s": 47702,
"text": "T ⊆ (D1 ∪ D2 ...∪ Dn) "
},
{
"code": null,
"e": 47938,
"s": 47725,
"text": "Specialization Hierarchy – has the constraint that every subclass participates as a subclass in only one class/subclass relationship, i.e. that each subclass has only one parent. This results in a tree structure."
},
{
"code": null,
"e": 48064,
"s": 47938,
"text": "Specialization Lattice – has the constraint that a subclass can be a subclass of more than one class/subclass relationship. "
},
{
"code": null,
"e": 48234,
"s": 48064,
"text": "In a lattice or hierarchy, the subclass inherits the attributes not only of the direct superclass, but also all of the predecessor super classes all the way to the root."
},
{
"code": null,
"e": 48386,
"s": 48234,
"text": "The subclass namely called child class is that class which extends another class so that it inherits both protected and public members from that class."
},
{
"code": null,
"e": 48593,
"s": 48386,
"text": "A subclass is needed in data modeling because it is an easy way to define inheritance relationship between two classes. The relationship between two classes w.r.t data helps to design the structure of data."
},
{
"code": null,
"e": 48669,
"s": 48593,
"text": "A tuple relational calculus expression may generate an infinite expression."
},
{
"code": null,
"e": 48720,
"s": 48669,
"text": "We need to restrict the relational calculus a bit."
},
{
"code": null,
"e": 48805,
"s": 48720,
"text": "The domain of a formula P, denoted dom(P), is the set of all values referenced in P."
},
{
"code": null,
"e": 48912,
"s": 48805,
"text": "These include values mentioned in P as well as values that appear in a tuple of a relation mentioned in P."
},
{
"code": null,
"e": 49029,
"s": 48912,
"text": "\nSo, the domain of P is the set of all values explicitly appearing in P or that appear in relations mentioned is P.\n"
},
{
"code": null,
"e": 49144,
"s": 49029,
"text": "So, the domain of P is the set of all values explicitly appearing in P or that appear in relations mentioned is P."
},
{
"code": null,
"e": 49249,
"s": 49144,
"text": "We may say an expression { t | P} is safe if all values that appear in the result are values from dom()."
},
{
"code": null,
"e": 49346,
"s": 49249,
"text": "A safe expression yields a finite number of tuples as its result. Otherwise, it is called unsafe"
},
{
"code": null,
"e": 49679,
"s": 49346,
"text": "Relational calculus is a non procedural query language. It uses mathematical predicate calculus instead of algebra. It provides the description about the query to get the result where as relational algebra gives the method to get the result. It informs the system what to do with the relation, but does not inform how to perform it."
},
{
"code": null,
"e": 49936,
"s": 49679,
"text": "A tuple relational calculus is a non procedural query language which specifies to select the tuples in a relation. It can select the tuples with range of values or tuples for certain attribute values etc. The resulting relation can have one or more tuples."
},
{
"code": null,
"e": 49973,
"s": 49936,
"text": "{t | P (t)} or {t | condition (t)}"
},
{
"code": null,
"e": 50174,
"s": 49973,
"text": "Domain relational calculus uses list of attribute to be selected from the relation based on the condition. It is same as TRC but differs by selecting the attributes rather than selecting whole tuples."
},
{
"code": null,
"e": 50244,
"s": 50174,
"text": "{<EMP_ID, EMP_NAME> | <EMP_ID, EMP_NAME> ? EMPLOYEE Λ DEPT_ID = 10} "
},
{
"code": null,
"e": 50489,
"s": 50244,
"text": " The two-tier architecture is like client server application. The direct communication takes place between client and server. There is no intermediate between client and server. Because of tight coupling, a 2 tiered application will run faster."
},
{
"code": null,
"e": 50501,
"s": 50489,
"text": "Advantages:"
},
{
"code": null,
"e": 50573,
"s": 50501,
"text": "\nEasy to maintain and modification is bit easy\nCommunication is faster\n"
},
{
"code": null,
"e": 50619,
"s": 50573,
"text": "Easy to maintain and modification is bit easy"
},
{
"code": null,
"e": 50643,
"s": 50619,
"text": "Communication is faster"
},
{
"code": null,
"e": 50658,
"s": 50643,
"text": "Disadvantages:"
},
{
"code": null,
"e": 50766,
"s": 50658,
"text": "\nIn two tier architecture application performance will degrade upon increasing the users.\nCost-ineffective\n"
},
{
"code": null,
"e": 50855,
"s": 50766,
"text": "In two tier architecture application performance will degrade upon increasing the users."
},
{
"code": null,
"e": 50872,
"s": 50855,
"text": "Cost-ineffective"
},
{
"code": null,
"e": 50919,
"s": 50872,
"text": "Three-tier architecture typically comprises of"
},
{
"code": null,
"e": 50967,
"s": 50919,
"text": "1) Client layer\n2) Business Layer\n3) Data layer"
},
{
"code": null,
"e": 50978,
"s": 50967,
"text": "Advantages"
},
{
"code": null,
"e": 51533,
"s": 50978,
"text": "\nHigh performance, lightweight persistent objects\nScalability – Each tier can scale horizontally\nPerformance – Because the Presentation tier can cache requests, network utilization is minimized, and the load is reduced on the Application and Data tiers.\nHigh degree of flexibility in deployment platform and configuration\nBetter Re-use\nImprove Data Integrity\nImproved Security – Client is not direct access to database.\nEasy to maintain and modification is bit easy, won’t affect other modules\nIn three tier architecture application performance is good.\n"
},
{
"code": null,
"e": 51582,
"s": 51533,
"text": "High performance, lightweight persistent objects"
},
{
"code": null,
"e": 51629,
"s": 51582,
"text": "Scalability – Each tier can scale horizontally"
},
{
"code": null,
"e": 51786,
"s": 51629,
"text": "Performance – Because the Presentation tier can cache requests, network utilization is minimized, and the load is reduced on the Application and Data tiers."
},
{
"code": null,
"e": 51854,
"s": 51786,
"text": "High degree of flexibility in deployment platform and configuration"
},
{
"code": null,
"e": 51868,
"s": 51854,
"text": "Better Re-use"
},
{
"code": null,
"e": 51891,
"s": 51868,
"text": "Improve Data Integrity"
},
{
"code": null,
"e": 51952,
"s": 51891,
"text": "Improved Security – Client is not direct access to database."
},
{
"code": null,
"e": 52026,
"s": 51952,
"text": "Easy to maintain and modification is bit easy, won’t affect other modules"
},
{
"code": null,
"e": 52086,
"s": 52026,
"text": "In three tier architecture application performance is good."
},
{
"code": null,
"e": 52100,
"s": 52086,
"text": "Disadvantages"
},
{
"code": null,
"e": 52129,
"s": 52100,
"text": "\nIncrease Complexity/Effort\n"
},
{
"code": null,
"e": 52156,
"s": 52129,
"text": "Increase Complexity/Effort"
},
{
"code": null,
"e": 52251,
"s": 52156,
"text": "The schema is sometimes called the intention, and a database state an extension of the schema."
},
{
"code": null,
"e": 53052,
"s": 52251,
"text": "When we define a new database, we specify its database schema only to the DBMS. At this point, the corresponding database state is the empty state with no data. We get the initial state of the database when the database is first populated or loaded with the initial data. From then on, every time an update operation is applied to the database, we get another database state. At any point in time, the database has a current state. The DBMS is partly responsible for ensuring that every state of the database is a valid state-that is, a state that satisfies the structure and constraints specified in the schema. The DBMS stores the descriptions of the schema constructs and constraints-also called the meta-data-in the DBMS catalog so that DBMS software can refer to the schema whenever it needs to."
},
{
"code": null,
"e": 53348,
"s": 53052,
"text": "Redundancy is the state of being not or no longer needed or useful.In the traditional approach, uncontrolled redundancy in storing the same data/information many times in the database leads to several problems. This leads to Duplication of effort, Wastage of storage space and inconsistent data."
},
{
"code": null,
"e": 53697,
"s": 53348,
"text": "A controlled redundancy is a necessary technique to use redundant fields in a database. This speed ups the database access and also improves the performance of queries. Usually, the DBMS ensures the allocation of the data in the records. It should have the capability to control this redundancy in order to prohibit inconsistencies among the files."
},
{
"code": null,
"e": 54131,
"s": 53699,
"text": "Updating statistics ensures that queries compile with up-to-date statistics. However, updating statistics causes queries to recompile. We recommend not updating statistics too often because there is a performance tradeoff between improving query plans and the time it takes to recompile queries. The specific tradeoffs depend on your application. UPDATE STATISTICS can use tempdb to sort the sample of rows for building statistics."
},
{
"code": null,
"e": 54398,
"s": 54133,
"text": "A test case may be defined as a set of instructions for getting an error in the system by causing a failure. Testing software is not so much expensive in the comparison of software testing. Many kinds of aspects are to be kept in mind when test cases are selected."
},
{
"code": null,
"e": 54536,
"s": 54398,
"text": "\nThe aim of the test case should be getting a program which has no errors if any error is found in the program, it is solved it quickly.\n"
},
{
"code": null,
"e": 54672,
"s": 54536,
"text": "The aim of the test case should be getting a program which has no errors if any error is found in the program, it is solved it quickly."
},
{
"code": null,
"e": 54739,
"s": 54672,
"text": "\nThe selected test case should contain all inputs to the program.\n"
},
{
"code": null,
"e": 54804,
"s": 54739,
"text": "The selected test case should contain all inputs to the program."
},
{
"code": null,
"e": 54875,
"s": 54804,
"text": "\nA specified area should be present for the valuation of a test case.\n"
},
{
"code": null,
"e": 54944,
"s": 54875,
"text": "A specified area should be present for the valuation of a test case."
},
{
"code": null,
"e": 55021,
"s": 54944,
"text": "\nA test case should be plan quickly as possible in the development process.\n"
},
{
"code": null,
"e": 55096,
"s": 55021,
"text": "A test case should be plan quickly as possible in the development process."
},
{
"code": null,
"e": 55239,
"s": 55096,
"text": "\nA good testing should have following qualities:\n\n\ncorrectness\n\n\nReliability\n\n\nUsability\n\n\nEfficiency\n\n\nIntegrity\n\n\nFlexibility\n\n\nStructure\n\n\n"
},
{
"code": null,
"e": 55287,
"s": 55239,
"text": "A good testing should have following qualities:"
},
{
"code": null,
"e": 55380,
"s": 55287,
"text": "\n\ncorrectness\n\n\nReliability\n\n\nUsability\n\n\nEfficiency\n\n\nIntegrity\n\n\nFlexibility\n\n\nStructure\n\n"
},
{
"code": null,
"e": 55394,
"s": 55380,
"text": "\ncorrectness\n"
},
{
"code": null,
"e": 55406,
"s": 55394,
"text": "correctness"
},
{
"code": null,
"e": 55420,
"s": 55406,
"text": "\nReliability\n"
},
{
"code": null,
"e": 55432,
"s": 55420,
"text": "Reliability"
},
{
"code": null,
"e": 55444,
"s": 55432,
"text": "\nUsability\n"
},
{
"code": null,
"e": 55454,
"s": 55444,
"text": "Usability"
},
{
"code": null,
"e": 55467,
"s": 55454,
"text": "\nEfficiency\n"
},
{
"code": null,
"e": 55478,
"s": 55467,
"text": "Efficiency"
},
{
"code": null,
"e": 55490,
"s": 55478,
"text": "\nIntegrity\n"
},
{
"code": null,
"e": 55500,
"s": 55490,
"text": "Integrity"
},
{
"code": null,
"e": 55514,
"s": 55500,
"text": "\nFlexibility\n"
},
{
"code": null,
"e": 55526,
"s": 55514,
"text": "Flexibility"
},
{
"code": null,
"e": 55538,
"s": 55526,
"text": "\nStructure\n"
},
{
"code": null,
"e": 55548,
"s": 55538,
"text": "Structure"
},
{
"code": null,
"e": 55562,
"s": 55548,
"text": "Alpha testing"
},
{
"code": null,
"e": 55700,
"s": 55562,
"text": "Alpha testing may be defined as a system testing which is done by the customer at the place where the developer has developed the system."
},
{
"code": null,
"e": 55756,
"s": 55700,
"text": "Alpha testing takes place once development is complete."
},
{
"code": null,
"e": 55860,
"s": 55756,
"text": "Alpha testing continues until customer agrees that system implementation is as per his/her expectation."
},
{
"code": null,
"e": 55907,
"s": 55860,
"text": "Alpha testing results in minor design changes."
},
{
"code": null,
"e": 56004,
"s": 55907,
"text": "Alpha testing is done in a controlled manner because the software is tested in developer's area."
},
{
"code": null,
"e": 56017,
"s": 56004,
"text": "Beta testing"
},
{
"code": null,
"e": 56118,
"s": 56017,
"text": "Beta testing may be defined as system testing which is done by the customer on customer's own sites."
},
{
"code": null,
"e": 56204,
"s": 56118,
"text": "The application is tested in Beta Testing after development and testing is completed."
},
{
"code": null,
"e": 56325,
"s": 56204,
"text": "The problems faced by the customer are reported and software is re-released after beta testing for next beta test cycle."
},
{
"code": null,
"e": 56424,
"s": 56325,
"text": "To get problems and defects before the final release of the product, beta testing is very helpful."
},
{
"code": null,
"e": 56519,
"s": 56424,
"text": "Beta testing is done in normal environment and developers are not present during beta testing."
},
{
"code": null,
"e": 56764,
"s": 56521,
"text": "It is a measure to assess how practical and beneficial the software project development will be for an organization. The software analyzer conducts a thorough study to understand economic, technical and operational feasibility of the project."
},
{
"code": null,
"e": 56921,
"s": 56764,
"text": "\nEconomic - Resource transportation, cost for training, cost of additional utilities and tools and overall estimation of costs and benefits of the project.\n"
},
{
"code": null,
"e": 57076,
"s": 56921,
"text": "Economic - Resource transportation, cost for training, cost of additional utilities and tools and overall estimation of costs and benefits of the project."
},
{
"code": null,
"e": 57306,
"s": 57076,
"text": "\nTechnical - Is it possible to develop this system? Assessing suitability of machine(s) and operating system(s) on which software will execute, existing developers’ knowledge and skills, training, utilities or tools for project.\n"
},
{
"code": null,
"e": 57534,
"s": 57306,
"text": "Technical - Is it possible to develop this system? Assessing suitability of machine(s) and operating system(s) on which software will execute, existing developers’ knowledge and skills, training, utilities or tools for project."
},
{
"code": null,
"e": 57667,
"s": 57534,
"text": "\nOperational - Can the organization adjust smoothly to the changes done as per the demand of project? Is the problem worth solving?\n"
},
{
"code": null,
"e": 57798,
"s": 57667,
"text": "Operational - Can the organization adjust smoothly to the changes done as per the demand of project? Is the problem worth solving?"
},
{
"code": null,
"e": 57937,
"s": 57798,
"text": "Software scope is a well-defined boundary, which encompasses all the activities that are done to develop and deliver the software product."
},
{
"code": null,
"e": 58184,
"s": 57937,
"text": "The software scope clearly defines all functionalities and artifacts to be delivered as a part of the software. The scope identifies what the product will do and what it will not do, what the end product will contain and what it will not contain."
},
{
"code": null,
"e": 58322,
"s": 58184,
"text": "SDLC Models are adopted as per requirements of development process. It may very software-to-software to ensuring which model is suitable."
},
{
"code": null,
"e": 58393,
"s": 58322,
"text": "We can select the best SDLC model if following answers are satisfied -"
},
{
"code": null,
"e": 58462,
"s": 58393,
"text": "Is SDLC suitable for selected technology to implement the software ?"
},
{
"code": null,
"e": 58525,
"s": 58462,
"text": "Is SDLC appropriate for client’s requirements and priorities ?"
},
{
"code": null,
"e": 58590,
"s": 58525,
"text": "Is SDLC model suitable for size and complexity of the software ?"
},
{
"code": null,
"e": 58665,
"s": 58590,
"text": "Is the SDLC model suitable for the type of projects and engineering we do?"
},
{
"code": null,
"e": 58748,
"s": 58665,
"text": "Is the SDLC appropriate for the geographically co-located or dispersed developers?"
},
{
"code": null,
"e": 58777,
"s": 58748,
"text": "Nested loop (loop over loop)"
},
{
"code": null,
"e": 58919,
"s": 58777,
"text": "An outer loop within an inner loop is formed consisting of fewer entries and then for individual entry, inner loop is individually processed."
},
{
"code": null,
"e": 58924,
"s": 58919,
"text": "E.g."
},
{
"code": null,
"e": 58989,
"s": 58924,
"text": "Select col1.*, col2.* from coll, col2 where coll.col1=col2.col2;"
},
{
"code": null,
"e": 59030,
"s": 58989,
"text": "It’s processing takes place in this way:"
},
{
"code": null,
"e": 59175,
"s": 59030,
"text": "For i in (select * from col1) loop\nFor j in (select * from col2 where col2=i.col1) loop\nResults are displayed;\nEnd of the loop;\nEnd of the loop;"
},
{
"code": null,
"e": 59205,
"s": 59175,
"text": "The Steps of nested loop are:"
},
{
"code": null,
"e": 59236,
"s": 59205,
"text": "Identify outer (driving) table"
},
{
"code": null,
"e": 59280,
"s": 59236,
"text": "Assign inner (driven) table to outer table."
},
{
"code": null,
"e": 59342,
"s": 59280,
"text": "For every row of outer table, access the rows of inner table."
},
{
"code": null,
"e": 59399,
"s": 59342,
"text": "Nested Loops is executed from the inner to the outer as:"
},
{
"code": null,
"e": 59410,
"s": 59399,
"text": "outer_loop"
},
{
"code": null,
"e": 59421,
"s": 59410,
"text": "inner_loop"
},
{
"code": null,
"e": 59431,
"s": 59421,
"text": "Hash join"
},
{
"code": null,
"e": 59494,
"s": 59431,
"text": "While joining large tables, the use of Hash Join is preferred."
},
{
"code": null,
"e": 59534,
"s": 59494,
"text": "Algorithm of Hash Join is divided into:"
},
{
"code": null,
"e": 59616,
"s": 59534,
"text": "Build: It is a hash table having in-memory which is present on the smaller table."
},
{
"code": null,
"e": 59700,
"s": 59616,
"text": "Probe: this hash value of the hash table is applicable for each second row element."
},
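{
"code": null,
"e": null,
"s": null,
"text": "A minimal Java sketch of the two phases with made-up sample rows (names and values here are assumptions for illustration):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.util.HashMap; \nimport java.util.Map; \n\npublic class HashJoinSketch \n{ \n public static void main(String[] args) \n { \n // Build phase: in-memory hash table on the smaller input (id -> name) \n Map<Integer, String> build = new HashMap<>(); \n build.put(1, \"A\"); \n build.put(2, \"B\"); \n\n // Probe phase: look up each row of the larger input by its join key \n int[][] larger = {{2, 1560}, {3, 3000}, {2, 1500}}; \n for (int[] row : larger) { \n String name = build.get(row[0]); \n if (name != null) \n System.out.println(name + \" -> \" + row[1]); \n } \n } \n}"
},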
{
"code": null,
"e": 59716,
"s": 59700,
"text": "Sort merge join"
},
{
"code": null,
"e": 59965,
"s": 59716,
"text": "Two independent sources of data are joined in sort merge join. They performance is better as compared to nested loop when the data volume is big enough but it is not good as hash joins generally.\nThe full operation can be divided into parts of two:"
},
{
"code": null,
"e": 59987,
"s": 59965,
"text": "Sort join operation :"
},
{
"code": null,
"e": 60016,
"s": 59987,
"text": "Get first row R1 from input1"
},
{
"code": null,
"e": 60046,
"s": 60016,
"text": "Get first row R2 from input2."
},
{
"code": null,
"e": 60068,
"s": 60046,
"text": "Merge join operation:"
},
{
"code": null,
"e": 60303,
"s": 60068,
"text": "‘while’ is not present at either loop’s end.\nif R1 joins with R2\nnext row is got R2 from the input 2\nreturn (R1, R2)\nelse if R1 < style=””> next row is got from R1 from input 1\nelse\nnext row is got from R2 from input 2\nend of the loop"
},
{
"code": null,
"e": 60335,
"s": 60303,
"text": "The disadvantages of query are:"
},
{
"code": null,
"e": 60346,
"s": 60335,
"text": "No indexes"
},
{
"code": null,
"e": 60390,
"s": 60346,
"text": "Stored procedures are excessively compiled."
},
{
"code": null,
"e": 60442,
"s": 60390,
"text": "Triggers and procedures are without SET NOCOUNT ON."
},
{
"code": null,
"e": 60498,
"s": 60442,
"text": "Complicated joins making up inadequately written query."
},
{
"code": null,
"e": 60556,
"s": 60498,
"text": "Cursors and temporary tables showcase a bad presentation."
},
{
"code": null,
"e": 60783,
"s": 60556,
"text": "Storage and access of data from the central location in order to take some strategic decision is called Data Warehousing. Enterprise management is used for managing the information whose framework is known as Data Warehousing."
},
{
"code": null,
"e": 60814,
"s": 60783,
"text": "An overview of data warehouse:"
},
{
"code": null,
"e": 60849,
"s": 60814,
"text": "Restrictions that are applied are:"
},
{
"code": null,
"e": 60891,
"s": 60849,
"text": "Only the current database can have views."
},
{
"code": null,
"e": 60963,
"s": 60891,
"text": "You are not liable to change any computed value in any particular view."
},
{
"code": null,
"e": 61030,
"s": 60963,
"text": "Integrity constants decide the functionality of INSERT and DELETE."
},
{
"code": null,
"e": 61077,
"s": 61030,
"text": "Full-text index definitions cannot be applied."
},
{
"code": null,
"e": 61112,
"s": 61077,
"text": "Temporary views cannot be created."
},
{
"code": null,
"e": 61151,
"s": 61112,
"text": "Temporary tables cannot contain views."
},
{
"code": null,
"e": 61192,
"s": 61151,
"text": "No association with DEFAULT definitions."
},
{
"code": null,
"e": 61246,
"s": 61192,
"text": "Triggers such as INSTEAD OF is associated with views."
},
{
"code": null,
"e": 61414,
"s": 61246,
"text": "COALESCE function is used to return the value which is set to be not null in the list. If all values in the list are null, then the coalesce function will return NULL."
},
{
"code": null,
"e": 61450,
"s": 61414,
"text": "Coalesce(value1, value2,value3,...)"
},
{
"code": null,
"e": 61564,
"s": 61450,
"text": "RAW datatype is used to store values in binary data format. The maximum size for a raw in a table in 32767 bytes."
},
{
"code": null,
"e": 61761,
"s": 61564,
"text": "Varchar can store upto 2000 bytes and varchar2 can store upto 4000 bytes. Varchar will occupy space for NULL values and Varchar2 will not occupy any space. Both are differed with respect to space."
},
{
"code": null,
"e": 62024,
"s": 61761,
"text": "SQL Server agent plays an important role in the day-to-day tasks of a database administrator (DBA). Its purpose is to ease the implementation of tasks for the DBA, with its full- function scheduling engine, which allows you to schedule your own jobs and scripts."
},
{
"code": null,
"e": 62222,
"s": 62024,
"text": "Subquery – The inner query is executed only once. The inner query will get executed first and the output of the inner query used by the outer query. The inner query is not dependent on outer query."
},
{
"code": null,
"e": 62586,
"s": 62222,
"text": "Correlated subquery: – The outer query will get executed first and for every row of outer query, inner query will get executed. So the inner query will get executed as many times as number of rows in the result of the outer query. The outer query output can use the inner query output for comparison. This means inner query and outer query dependent on each other"
},
{
"code": null,
"e": 62835,
"s": 62586,
"text": "A CTE can be used:\n• For recursion\n• Substitute for a view when the general use of a view is not required; that is, you do not have to store the definition in metadata.\n• Reference the resulting non-large table multiple times in the same statement."
},
{
"code": null,
"e": 63362,
"s": 62835,
"text": "No, we don’t have UPDATED magic table.\nThe ‘magic tables’ are the INSERTED and DELETED tables, as well as the update() and columns_updated() functions, and are used to determine the changes resulting from DML statements.\n• For an INSERT statement, the INSERTED table will contain the inserted rows.\n• For an UPDATE statement, the INSERTED table will contain the rows after an update, and the DELETED table will contain the rows before an update.\n• For a DELETE statement, the DELETED table will contain the rows to be deleted."
},
{
"code": null,
"e": 63440,
"s": 63362,
"text": "Both CTEs and Sub Queries have pretty much the same performance and function."
},
{
"code": null,
"e": 63526,
"s": 63440,
"text": "CTE’s have an advantage over using a subquery in that you can use recursion in a CTE."
},
{
"code": null,
"e": 63664,
"s": 63526,
"text": "The biggest advantage of using CTE is readability. CTEs can be referenced multiple times in the same statement where as sub query cannot."
},
{
"code": null,
"e": 63723,
"s": 63664,
"text": "select * into <new table> from <existing table> where 1=2 "
},
{
"code": null,
"e": 63777,
"s": 63723,
"text": "select top 0 * into <new table> from <existing table>"
},
{
"code": null,
"e": 63827,
"s": 63777,
"text": "SELECT column FROM table ORDER BY RAND() LIMIT 1;"
},
{
"code": null,
"e": 63937,
"s": 63827,
"text": "select distinct hiredate from emp a where &n = (select count(distinct sal) from emp b where a.sal >= b.sal);"
},
{
"code": null,
"e": 64028,
"s": 63937,
"text": "select * from emp minus select * from emp where rownum <= (select count(*) - &n from emp);"
},
{
"code": null,
"e": 64066,
"s": 64028,
"text": "select * from emp where rownum <= &n;"
},
{
"code": null,
"e": 64297,
"s": 64066,
"text": "1.Using Filter Index. Filtered index is used to Index a portion of rows in a table. While creating an index, we can specify conditional statements. The below SQL Query will create a Unique Index on the rows having non null values:"
},
{
"code": null,
"e": 64400,
"s": 64297,
"text": "CREATE UNIQUE INDEX IX_ClientMaster_ClientCode ON ClientMaster(ClienCode)\nWHERE ClientCode IS NOT NULL"
},
{
"code": null,
"e": 64484,
"s": 64400,
"text": "2.Create a view having the unique fields and create a Unique Clustered Index on it:"
},
{
"code": null,
"e": 64613,
"s": 64484,
"text": "Create View vClientMaster_forIndex\nWith SchemaBinding\nAs\nSelect ClientCode Fromdbo.ClientMaster Where ClientCode IS NOT NULL;\nGo"
},
{
"code": null,
"e": 64707,
"s": 64613,
"text": "CREATE Unique Clustered Index UK_vClientMaster_ForIndex\non vClientMaster_forIndex(ClientCode)"
},
{
"code": null,
"e": 64742,
"s": 64707,
"text": "INSERT INTO table DEFAULT VALUES;\n"
},
{
"code": null,
"e": 65232,
"s": 64742,
"text": "The only difference between the RANK() and DENSE_RANK() functions is in cases where there is a “tie”; i.e., in cases where multiple values in a set have the same ranking. In such cases, RANK() will assign non-consecutive “ranks” to the values in the set (resulting in gaps between the integer ranking values when there is a tie), whereas DENSE_RANK() will assign consecutive ranks to the values in the set (so there will be no gaps between the integer ranking values in the case of a tie)."
},
{
"code": null,
"e": 65446,
"s": 65232,
"text": "For example, consider the set {25, 25, 50, 75, 75, 100. For such a set, RANK() will return {1, 1, 3, 4, 4, 6} (note that the values 2 and 5 are skipped), whereas DENSE_RANK() will return {1, 1, 2, 3, 3, 4}."
},
{
"code": null,
"e": 65552,
"s": 65446,
"text": "Both the NVL(exp1, exp2) and NVL2(exp1, exp2, exp3) functions check the value exp1 to see if it is null."
},
{
"code": null,
"e": 65734,
"s": 65552,
"text": "With the NVL(exp1, exp2) function, if exp1 is not null, then the value of exp1 is returned; otherwise, the value of exp2 is returned, but case to the same data type as that of exp1."
},
{
"code": null,
"e": 65863,
"s": 65734,
"text": "With the NVL2(exp1, exp2, exp3) function, if exp1 is not null, then exp2 is returned; otherwise, the value of exp3 is returned."
},
{
"code": null,
"e": 65915,
"s": 65863,
"text": "To select all the even number records from a table:"
},
{
"code": null,
"e": 65954,
"s": 65915,
"text": "Select * from table where id % 2 = 0 \n"
},
{
"code": null,
"e": 66005,
"s": 65954,
"text": "To select all the odd number records from a table:"
},
{
"code": null,
"e": 66043,
"s": 66005,
"text": "Select * from table where id % 2 != 0"
},
{
"code": null,
"e": 66438,
"s": 66043,
"text": "An execution plan is basically a road map that graphically or textually shows the data retrieval methods chosen by the SQL server’s query optimizer for a stored procedure or ad hoc query. Execution plans are very useful for helping a developer understand and analyze the performance characteristics of a query or stored procedure since the plan is used to execute the query or stored procedure."
},
{
"code": null,
"e": 66522,
"s": 66438,
"text": "SELECT * FROM mytable WHERE a=X UNION ALL SELECT * FROM mytable WHERE b=Y AND a!=X\n"
},
{
"code": null,
"e": 66669,
"s": 66522,
"text": "Servletrunner is a small utility that runs servlets. It is included in the JSDK 2.0, while the JSDK 2.1 includes an HTTP server for this purpose. "
},
{
"code": null,
"e": 66929,
"s": 66669,
"text": "The servletrunner is a small, multithreaded process that handles requests for servlets. Because servletrunner is multithreaded, it can be used to run multiple servlets simultaneously or to test one servlet that calls other servlets to satisfy client requests."
},
{
"code": null,
"e": 67182,
"s": 66929,
"text": "The rmiregistry command creates and starts a remote object registry on the specified port on the current host. If port is omitted, the registry is started on port 1099. The rmiregistry command produces no output and is typically run in the background. "
},
{
"code": null,
"e": 67193,
"s": 67182,
"text": "EXAMPLE:\n "
},
{
"code": null,
"e": 67207,
"s": 67193,
"text": "rmiregistry &"
},
{
"code": null,
"e": 67442,
"s": 67207,
"text": "A remote object registry is a bootstrap naming service that is used by RMI servers on the same host to bind remote objects to names. Clients on local and remote hosts can then look up remote objects and make remote method invocations."
},
{
"code": null,
"e": 67647,
"s": 67442,
"text": "The registry is typically used to locate the first remote object on which an application needs to invoke methods. That object, in turn, will provide application-specific support for finding other objects."
},
{
"code": null,
"e": 67982,
"s": 67647,
"text": " A stub for a remote object acts as a client's local representative or proxy for the remote object. The caller invokes a method on the local stub which is responsible for carrying out the method call on the remote object. In RMI, a stub for a remote object implements the same set of remote interfaces that a remote object implements."
},
{
"code": null,
"e": 68038,
"s": 67982,
"text": "When a stub's method is invoked, it does the following:"
},
{
"code": null,
"e": 68111,
"s": 68038,
"text": "initiates a connection with the remote JVM containing the remote object,"
},
{
"code": null,
"e": 68177,
"s": 68111,
"text": "marshals (writes and transmits) the parameters to the remote JVM,"
},
{
"code": null,
"e": 68224,
"s": 68177,
"text": "waits for the result of the method invocation,"
},
{
"code": null,
"e": 68287,
"s": 68224,
"text": "unmarshals (reads) the return value or exception returned, and"
},
{
"code": null,
"e": 68320,
"s": 68287,
"text": "returns the value to the caller."
},
{
"code": null,
"e": 68416,
"s": 68320,
"text": "The skeleton is responsible for dispatching the call to the actual remote object implementation"
},
{
"code": null,
"e": 68494,
"s": 68416,
"text": "When a skeleton receives an incoming method invocation it does the following:"
},
{
"code": null,
"e": 68551,
"s": 68494,
"text": "unmarshals (reads) the parameters for the remote method,"
},
{
"code": null,
"e": 68618,
"s": 68551,
"text": "invokes the method on the actual remote object implementation, and"
},
{
"code": null,
"e": 68704,
"s": 68618,
"text": "marshals (writes and transmits) the result (return value or exception) to the caller."
},
{
"code": null,
"e": 68713,
"s": 68704,
"text": "CHECKBOX"
},
{
"code": null,
"e": 68900,
"s": 68713,
"text": "The Checkbox class is used to create a checkbox. It is used to turn an option on (true) or off (false). Clicking on a Checkbox changes its state from \"on\" to \"off\" or from \"off\" to \"on\"."
},
{
"code": null,
"e": 68909,
"s": 68900,
"text": "EXAMPLE:"
},
{
"code": null,
"e": 69481,
"s": 68909,
"text": "import java.awt.*; \npublic class CheckboxExample \n{ \n CheckboxExample(){ \n Frame f= new Frame(\"Checkbox Example\"); \n Checkbox checkbox1 = new Checkbox(\"C++\"); \n checkbox1.setBounds(100,100, 50,50); \n Checkbox checkbox2 = new Checkbox(\"Java\", true); \n checkbox2.setBounds(100,150, 50,50); \n f.add(checkbox1); \n f.add(checkbox2); \n f.setSize(400,400); \n f.setLayout(null); \n f.setVisible(true); \n } \npublic static void main(String args[]) \n{ \n new CheckboxExample(); \n} \n} "
},
{
"code": null,
"e": 69490,
"s": 69481,
"text": "OUTPUT:\n"
},
{
"code": null,
"e": 69505,
"s": 69490,
"text": "CHECKBOX GROUP"
},
{
"code": null,
"e": 69728,
"s": 69505,
"text": "The object of CheckboxGroup class is used to group together a set of Checkbox. At a time only one check box button is allowed to be in \"on\" state and remaining check box button in \"off\" state. It inherits the object class."
},
{
"code": null,
"e": 69737,
"s": 69728,
"text": "EXAMPLE:"
},
{
"code": null,
"e": 70437,
"s": 69737,
"text": "import java.awt.*; \npublic class CheckboxGroupExample \n{ \n CheckboxGroupExample(){ \n Frame f= new Frame(\"CheckboxGroup Example\"); \n CheckboxGroup cbg = new CheckboxGroup(); \n Checkbox checkBox1 = new Checkbox(\"C++\", cbg, false); \n checkBox1.setBounds(100,100, 50,50); \n Checkbox checkBox2 = new Checkbox(\"Java\", cbg, true); \n checkBox2.setBounds(100,150, 50,50); \n f.add(checkBox1); \n f.add(checkBox2); \n f.setSize(400,400); \n f.setLayout(null); \n f.setVisible(true); \n } \npublic static void main(String args[]) \n{ \n new CheckboxGroupExample(); \n} \n} "
},
{
"code": null,
"e": 70445,
"s": 70437,
"text": "OUTPUT:"
},
{
"code": null,
"e": 70603,
"s": 70445,
"text": "FileInputStream is used for reading streams of raw bytes of data, like raw images. FileReaders, on the other hand, are used for reading streams of characters"
},
{
"code": null,
"e": 70759,
"s": 70603,
"text": "The difference between FileInputStream and FileReader is, FileInputStream reads the file byte by byte and FileReader reads the file character by character."
},
{
"code": null,
"e": 70923,
"s": 70759,
"text": "So when you are trying to read the file which contains the character \"Č\", in FileInputStream will give the result as,196 140 because the ASCII value of Č is 268."
},
{
"code": null,
"e": 71006,
"s": 70923,
"text": "In FileReader will give the result as 268 which is the ASCII value of the char Č."
},
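{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of the difference, assuming a file named sample.txt exists; the stream prints raw byte values while the reader prints decoded character values:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.io.*; \n\npublic class ReadDemo \n{ \n public static void main(String[] args) throws IOException \n { \n try (FileInputStream in = new FileInputStream(\"sample.txt\")) { \n int b; \n while ((b = in.read()) != -1) \n System.out.print(b + \" \"); // raw bytes, e.g. 196 140 for the two bytes of Č \n } \n System.out.println(); \n try (FileReader reader = new FileReader(\"sample.txt\")) { \n int c; \n while ((c = reader.read()) != -1) \n System.out.print(c + \" \"); // decoded characters, e.g. 268 for Č \n } \n } \n}"
},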
{
"code": null,
"e": 71391,
"s": 71006,
"text": "Thread.sleep causes the current thread to suspend execution for a specified period. This is an efficient means of making processor time available to the other threads of an application or other applications that might be running on a computer system. The sleep method can also be used for pacing and waiting for another thread with duties that are understood to have time requirements"
},
{
"code": null,
"e": 71400,
"s": 71391,
"text": "EXAMPLE:"
},
{
"code": null,
"e": 71517,
"s": 71400,
"text": "try \n{\n Thread.sleep(1000);\n} \ncatch(InterruptedException ex) \n{\n Thread.currentThread().interrupt();\n}"
},
{
"code": null,
"e": 71572,
"s": 71517,
"text": "Here, the program will be paused for 1000 miliseconds."
},
{
"code": null,
"e": 71726,
"s": 71572,
"text": "The date class is deprecated because of handling internationalization date and time. It allows date object to be accessed in a system independent manner."
},
{
"code": null,
"e": 71787,
"s": 71726,
"text": "The calendar class should be used instead of the date class."
},
{
"code": null,
"e": 71975,
"s": 71787,
"text": "Calendar cal = Calendar.getInstance();\ncal.set(Calendar.YEAR, 1988);\ncal.set(Calendar.MONTH, Calendar.JANUARY);\ncal.set(Calendar.DAY_OF_MONTH, 1);\nDate dateRepresentation = cal.getTime();"
},
{
"code": null,
"e": 72116,
"s": 71977,
"text": "The current length of a StringBuffer can be found via the method. The total allocated capacity can be found through the capacity() method."
},
{
"code": null,
"e": 72161,
"s": 72116,
"text": "int capacity() Returns the current capacity."
},
{
"code": null,
"e": 72212,
"s": 72161,
"text": "int length() Returns the length (character count)."
},
{
"code": null,
"e": 72221,
"s": 72212,
"text": "EXAMPLE:"
},
{
"code": null,
"e": 72438,
"s": 72221,
"text": "public class Main {\n public static void main(String[] argv) {\n StringBuffer sb = new StringBuffer();\n sb.append(\"abcdef.com\");\n System.out.println(sb.length());\n\n System.out.println(sb.capacity());\n }\n}"
},
{
"code": null,
"e": 72447,
"s": 72438,
"text": " OUTPUT:"
},
{
"code": null,
"e": 72450,
"s": 72447,
"text": "10"
},
{
"code": null,
"e": 72453,
"s": 72450,
"text": "16"
},
{
"code": null,
"e": 72466,
"s": 72453,
"text": "setChartAt()"
},
{
"code": null,
"e": 72733,
"s": 72466,
"text": "The java.lang.StringBuffer.setCharAt() method sets the character at the specified index to ch. This sequence is altered to represent a new character sequence that is identical to the old character sequence, except that it contains the character ch at position index."
},
{
"code": null,
"e": 72742,
"s": 72733,
"text": "insert()"
},
{
"code": null,
"e": 73007,
"s": 72742,
"text": "This method inserts the data into a substring of this StringBuffer. We should specify the offset value (integer type) of the buffer, at which we need to insert the data. Using this method, data of various types like integer, character, string etc. can be inserted."
},
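{
"code": null,
"e": null,
"s": null,
"text": "A small illustration of both methods (the buffer contents are an arbitrary example):"
},
{
"code": null,
"e": null,
"s": null,
"text": "public class BufferDemo \n{ \n public static void main(String[] args) \n { \n StringBuffer sb = new StringBuffer(\"Hello Java\"); \n sb.setCharAt(0, 'J'); // buffer becomes \"Jello Java\" \n sb.insert(5, \" my\"); // buffer becomes \"Jello my Java\" \n System.out.println(sb); \n } \n}"
},
{
"code": null,
"e": null,
"s": null,
"text": "OUTPUT:"
},
{
"code": null,
"e": null,
"s": null,
"text": "Jello my Java"
},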
{
"code": null,
"e": 73338,
"s": 73007,
"text": "A concrete class is used to define a useful object that can be instantiated as an automatic variable on the program stack. The implementation of a concrete class is defined. The concrete class is not intended to be a base class and no attempt to minimize dependency on other classes in the implementation or behavior of the class."
},
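{
"code": null,
"e": null,
"s": null,
"text": "A short illustrative sketch (the class name is an arbitrary assumption); a concrete class is fully implemented and can be instantiated directly:"
},
{
"code": null,
"e": null,
"s": null,
"text": "public class Point \n{ \n private final int x, y; \n public Point(int x, int y) { this.x = x; this.y = y; } \n public int getX() { return x; } \n public int getY() { return y; } \n\n public static void main(String[] args) \n { \n Point p = new Point(3, 4); // instantiated directly, unlike an abstract class \n System.out.println(p.getX() + \",\" + p.getY()); \n } \n}"
},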
{
"code": null,
"e": 73601,
"s": 73340,
"text": "The println(\"...\") method prints the string \"...\" and moves the cursor to a new line. The print(\"...\") method instead prints just the string \"...\", but does not move the cursor to a new line. Hence, subsequent printing instructions will print on the same line."
},
{
"code": null,
"e": 73609,
"s": 73601,
"text": "Example"
},
{
"code": null,
"e": 73615,
"s": 73609,
"text": "print"
},
{
"code": null,
"e": 73669,
"s": 73615,
"text": "for(int i = 0; i < 5; i++)\nSystem.out.print(\" \" + i);"
},
{
"code": null,
"e": 73677,
"s": 73669,
"text": "OUTPUT:"
},
{
"code": null,
"e": 73687,
"s": 73677,
"text": "0 1 2 3 4"
},
{
"code": null,
"e": 73695,
"s": 73687,
"text": "println"
},
{
"code": null,
"e": 73751,
"s": 73695,
"text": "for(int i = 0; i < 5; i++)\nSystem.out.println(\" \" + i);"
},
{
"code": null,
"e": 73759,
"s": 73751,
"text": "OUTPUT:"
},
{
"code": null,
"e": 73769,
"s": 73759,
"text": "0\n1\n2\n3\n4"
},
{
"code": null,
"e": 74079,
"s": 73771,
"text": "A derived data type is a complex classification that identifies one or various data types and is made up of simpler data types called primitive data types. Derived data types have advanced properties and use far beyond those of the basic primitive data types that operate as their essential building blocks."
},
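{
"code": null,
"e": null,
"s": null,
"text": "For example, in Java an array or a class type is derived from simpler types (a small illustrative sketch):"
},
{
"code": null,
"e": null,
"s": null,
"text": "public class DerivedTypes \n{ \n public static void main(String[] args) \n { \n int[] scores = {90, 75, 82}; // array: a derived type built from the primitive int \n String name = \"abc\"; // class type: composed of character data \n System.out.println(name + \" has \" + scores.length + \" scores\"); \n } \n}"
},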
{
"code": null,
"e": 74153,
"s": 74079,
"text": "In a Java program, all characters are grouped into symbols called tokens."
},
{
"code": null,
"e": 74234,
"s": 74153,
"text": "A token is the smallest element of a program that is meaningful to the compiler."
},
{
"code": null,
"e": 74243,
"s": 74234,
"text": "EXAMPLE:"
},
{
"code": null,
"e": 74380,
"s": 74243,
"text": "Public class Hello\n\n{\n\nPublic static void main(String args[])\n\n{\n\nSystem.out.println(“welcome in Java”); //print welcome in java\n\n}\n\n}"
},
{
"code": null,
"e": 74565,
"s": 74380,
"text": "In above Example, the source code contains tokens such as public, class, Hello, {, public, static, void, main, (, String, [], args, {, System, out, println, (, “welcome in Java”, }, }."
},
{
"code": null,
"e": 74843,
"s": 74565,
"text": "The resulting tokens are compiled into Java bytecodes that are capable of being run from within an interpreted Java environment. Tokens are useful for the compiler to detect errors. When tokens are not arranged in a particular sequence, the compiler generates an error message."
},
{
"code": null,
"e": 75018,
"s": 74843,
"text": "Bytecode is computer object code that is processed by a program, usually referred to as a virtual machine, rather than by the \"real\" computer machine, the hardware processor."
},
{
"code": null,
"e": 75298,
"s": 75018,
"text": "Rather than being interpreted one instruction at a time, Java bytecode can be recompiled at each particular system platform by a just-in-time compiler. Usually, this will enable the Java program to run faster. In Java, bytecode is contained in a binary file with a .CLASS suffix."
},
{
"code": null,
"e": 75464,
"s": 75298,
"text": "It is used to create an instance of driver and register it with the DriverManager. Once you have loaded a driver, it is available for making a connection with DBMS. "
},
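{
"code": null,
"e": null,
"s": null,
"text": "Assuming this refers to loading a driver via Class.forName(), a typical usage sketch follows; the driver class name, URL, and credentials are placeholders, not values from the original text:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.sql.*; \n\npublic class LoadDriver \n{ \n public static void main(String[] args) throws Exception \n { \n // loading the driver class registers it with the DriverManager \n Class.forName(\"com.mysql.jdbc.Driver\"); // placeholder driver class \n Connection con = DriverManager.getConnection( \n \"jdbc:mysql://localhost:3306/test\", \"user\", \"password\"); \n con.close(); \n } \n}"
},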
{
"code": null,
"e": 75694,
"s": 75464,
"text": "A Statement object is used to represent SQL statement such DML statement or DDL statement. You simply create a Statement object and then execute it, supplying the appropriate execute() method with SQL statement you want to send."
},
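{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch, assuming a table named emp with id and name columns (placeholders for illustration):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.sql.*; \n\npublic class StatementDemo \n{ \n public static void main(String[] args) throws SQLException \n { \n Connection con = DriverManager.getConnection( \n \"jdbc:mysql://localhost:3306/test\", \"user\", \"password\"); \n Statement stmt = con.createStatement(); \n // executeQuery() for SELECT; executeUpdate() for DML/DDL \n ResultSet rs = stmt.executeQuery(\"SELECT id, name FROM emp\"); \n while (rs.next()) \n System.out.println(rs.getInt(\"id\") + \" \" + rs.getString(\"name\")); \n con.close(); \n } \n}"
},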
{
"code": null,
"e": 75903,
"s": 75694,
"text": "Type 4 is the fastest JDBC driver. Type 1 and 3 drivers will be slower than Type 2 drivers(the database calls are made at least three translations in contrast to two). Type 4 drivers requires one translation."
},
{
"code": null,
"e": 76094,
"s": 75903,
"text": "The Java URLConnection class represents a communication link between the URL and the application. This class can be used to read and write data to the specified resource referred by the URL."
},
{
"code": null,
"e": 76336,
"s": 76094,
"text": "The URLConnection class provides many methods, we can display all the data of a webpage by using the getInputStream() method. The getInputStream() method returns all the data of the specified URL in the stream that can be read and displayed."
},
{
"code": null,
"e": 76345,
"s": 76336,
"text": "EXAMPLE:"
},
{
"code": null,
"e": 76743,
"s": 76345,
"text": "import java.io.*; \nimport java.net.*; \npublic class URLConnectionExample { \npublic static void main(String[] args){ \ntry{ \nURL url=new URL(\"http://www.geeksforgeeks.org\"); \nURLConnection urlcon=url.openConnection(); \nInputStream stream=urlcon.getInputStream(); \nint i; \nwhile((i=stream.read())!=-1){ \nSystem.out.print((char)i); \n} \n}catch(Exception e){System.out.println(e);} \n} \n} "
},
{
"code": null,
"e": 77134,
"s": 76745,
"text": "JTree is a Swing component with which we can display hierarchical data. JTree is quite a complex component. A JTree has a 'root node' which is the top-most parent for all nodes in the tree. A node is an item in a tree. A node can have many children nodes. These children nodes themselves can have further children nodes. If a node doesn't have any children node, it is called a leaf node."
},
{
"code": null,
"e": 77415,
"s": 77134,
"text": "The leaf node is displayed with a different visual indicator. The nodes with children are displayed with a different visual indicator along with a visual 'handle' which can be used to expand or collapse that node. Expanding a node displays the children and collapsing hides them. "
},
{
"code": null,
"e": 78621,
"s": 77415,
"text": "package net.codejava.swing;\nimport javax.swing.JFrame;\nimport javax.swing.JTree;\nimport javax.swing.SwingUtilities;\nimport javax.swing.tree.DefaultMutableTreeNode;\npublic class TreeExample extends JFrame\n{\n private JTree tree;\n public TreeExample()\n {\n //create the root node\n DefaultMutableTreeNode root = new DefaultMutableTreeNode(\"Root\");\n //create the child nodes\n DefaultMutableTreeNode vegetableNode = new DefaultMutableTreeNode(\"Vegetables\");\n DefaultMutableTreeNode fruitNode = new DefaultMutableTreeNode(\"Fruits\");\n //add the child nodes to the root node\n root.add(vegetableNode);\n root.add(fruitNode);\n \n //create the tree by passing in the root node\n tree = new JTree(root);\n add(tree);\n \n this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n this.setTitle(\"JTree Example\"); \n this.pack();\n this.setVisible(true);\n }\n \n public static void main(String[] args)\n {\n SwingUtilities.invokeLater(new Runnable() {\n @Override\n public void run() {\n new TreeExample();\n }\n });\n } \n}"
},
{
"code": null,
"e": 78647,
"s": 78621,
"text": "OUTPUT WITH TWO CHILDREN:"
},
{
"code": null,
"e": 78762,
"s": 78647,
"text": "A JSplitPane has a splitter to split two components. The splitter bar can be displayed horizontally or vertically."
},
{
"code": null,
"e": 79016,
"s": 78762,
"text": "The JSplitPane class provides many constructors. we can create it using its default constructor and add two components using its setTopComponent(Component c), setBottomComponent(Component c),setLeftComponent(Component c), setRightComponent(Component c)."
},
{
"code": null,
"e": 79136,
"s": 79016,
"text": "JSplitPane can redraw components in a continuous or non-continuous way when we change the position of the splitter bar."
},
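{
"code": null,
"e": null,
"s": null,
"text": "A minimal example in the style of the other Swing snippets here (the component choices are arbitrary):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import javax.swing.*; \n\npublic class SplitPaneExample \n{ \n public static void main(String[] args) \n { \n JFrame f = new JFrame(\"JSplitPane Example\"); \n JSplitPane sp = new JSplitPane(JSplitPane.HORIZONTAL_SPLIT); \n sp.setLeftComponent(new JButton(\"Left\")); \n sp.setRightComponent(new JButton(\"Right\")); \n sp.setDividerLocation(150); // initial position of the splitter bar \n f.add(sp); \n f.setSize(400, 200); \n f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); \n f.setVisible(true); \n } \n}"
},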
{
"code": null,
"e": 79234,
"s": 79136,
"text": "The JProgressBar class is used to display the progress of the task. It inherits JComponent class."
},
{
"code": null,
"e": 79243,
"s": 79234,
"text": "EXAMPLE:"
},
{
"code": null,
"e": 80048,
"s": 79243,
"text": "import javax.swing.*; \npublic class ProgressBarExample extends JFrame\n{ \n JProgressBar jb; \n int i=0,num=0; \n ProgressBarExample()\n { \n jb=new JProgressBar(0,2000); \n jb.setBounds(40,40,160,30); \n jb.setValue(0); \n jb.setStringPainted(true); \n add(jb); \n setSize(250,150); \n setLayout(null); \n } \n public void iterate()\n { \n while(i<=2000)\n { \n jb.setValue(i); \n i=i+20; \n try{Thread.sleep(150);}catch(Exception e){} \n } \n } \n public static void main(String[] args) \n { \n ProgressBarExample m=new ProgressBarExample(); \n m.setVisible(true); \n m.iterate(); \n } \n} "
},
{
"code": null,
"e": 80056,
"s": 80048,
"text": "OUTPUT:"
},
{
"code": null,
"e": 80205,
"s": 80056,
"text": "The JTabbedPane class is used to switch between a group of components by clicking on a tab with a given title or icon. It inherits JComponent class."
},
{
"code": null,
"e": 80214,
"s": 80205,
"text": "EXAMPLE:"
},
{
"code": null,
"e": 80811,
"s": 80214,
"text": "import javax.swing.*; \npublic class TabbedPaneExample { \nJFrame f; \nTabbedPaneExample(){ \n f=new JFrame(); \n JTextArea ta=new JTextArea(200,200); \n JPanel p1=new JPanel(); \n p1.add(ta); \n JPanel p2=new JPanel(); \n JPanel p3=new JPanel(); \n JTabbedPane tp=new JTabbedPane(); \n tp.setBounds(50,50,200,200); \n tp.add(\"main\",p1); \n tp.add(\"visit\",p2); \n tp.add(\"help\",p3); \n f.add(tp); \n f.setSize(400,400); \n f.setLayout(null); \n f.setVisible(true); \n} \npublic static void main(String[] args) { \n new TabbedPaneExample(); \n}} "
},
{
"code": null,
"e": 80819,
"s": 80811,
"text": "OUTPUT:"
},
{
"code": null,
"e": 81175,
"s": 80819,
"text": "javax.swing.filechooser.FileFilter is used to restrict the files that are shown in a JFileChooser By default, a file chooser shows all user files and directories in a file chooser dialog, with the exception of \"hidden\" files in Unix (those starting with a '.'). You may restrict the list that is shown by setting the file filter for a file chooser dialog."
},
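{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch (not part of the original answer) using the standard javax.swing.JFileChooser and javax.swing.filechooser.FileNameExtensionFilter APIs:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import javax.swing.JFileChooser;\nimport javax.swing.filechooser.FileNameExtensionFilter;\n\npublic class FileFilterExample {\n public static void main(String[] args) {\n JFileChooser chooser = new JFileChooser();\n // show only .jpg and .png files in the dialog\n chooser.setFileFilter(new FileNameExtensionFilter(\"Images\", \"jpg\", \"png\"));\n chooser.showOpenDialog(null);\n }\n}"
},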
{
"code": null,
"e": 81295,
"s": 81175,
"text": "The Java JSlider class is used to create the slider. By using JSlider, a user can select a value from a specific range."
},
{
"code": null,
"e": 81745,
"s": 81295,
"text": "import javax.swing.*; \npublic class SliderExample1 extends JFrame\n{ \n public SliderExample1() \n { \n JSlider slider = new JSlider(JSlider.HORIZONTAL, 0, 50, 25); \n JPanel panel=new JPanel(); \n panel.add(slider); \n add(panel); \n } \n \n public static void main(String s[]) \n { \n SliderExample1 frame=new SliderExample1(); \n frame.pack(); \n frame.setVisible(true); \n } \n} "
},
{
"code": null,
"e": 81753,
"s": 81745,
"text": "OUTPUT:"
},
{
"code": null,
"e": 81982,
"s": 81753,
"text": "A spinner consists of a text field on the left side and two buttons with up and down arrows on the right side. If you press the up or down button, the item that displays in the input text will change in a given ordered sequence."
},
{
"code": null,
"e": 81991,
"s": 81982,
"text": "Example:"
},
{
"code": null,
"e": 83367,
"s": 81991,
"text": "package jspinnerdemo;\n \nimport java.awt.*;\nimport java.util.*;\nimport javax.swing.*;\n \npublic class Main {\n public static void main(String[] args) {\n JFrame frame = new JFrame(\"JSpinner Demo\");\n \n // Spinner with number\n SpinnerNumberModel snm = new SpinnerNumberModel(\n new Integer(0),\n new Integer(0),\n new Integer(100),\n new Integer(5)\n );\n JSpinner spnNumber = new JSpinner(snm);\n \n // Spinner with Dates\n SpinnerModel snd = new SpinnerDateModel(\n new Date(),\n null,\n null,\n Calendar.DAY_OF_MONTH\n );\n JSpinner spnDate = new JSpinner(snd);\n \n // Spinner with List\n String[] colors = {\"Red\",\"Green\",\"Blue\"};\n SpinnerModel snl = new SpinnerListModel(colors);\n JSpinner spnList = new JSpinner(snl);\n \n frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n frame.setSize(600, 100);\n \n Container cont = frame.getContentPane();\n \n cont.setLayout(new FlowLayout());\n cont.add(new JLabel(\"Select Number:\"));\n cont.add(spnNumber);\n \n cont.add(new JLabel(\"Select Date:\"));\n cont.add(spnDate);\n \n cont.add(new JLabel(\"Select Color:\"));\n cont.add(spnList);\n \n frame.setVisible(true);\n }\n}"
},
{
"code": null,
"e": 83375,
"s": 83367,
"text": "OUTPUT:"
},
{
"code": null,
"e": 83495,
"s": 83375,
"text": "The java.net.Socket class represents the socket that both the client and the server use to communicate with each other."
},
{
"code": null,
"e": 83687,
"s": 83495,
"text": "Sockets provide the communication mechanism between two computers using TCP. A client program creates a socket on its end of the communication and attempts to connect that socket to a server."
},
{
"code": null,
"e": 83786,
"s": 83687,
"text": "The following steps occur when establishing a TCP connection between two computers using sockets −"
},
{
"code": null,
"e": 83890,
"s": 83786,
"text": "The server instantiates a ServerSocket object, denoting which port number communication is to occur on."
},
{
"code": null,
"e": 84031,
"s": 83890,
"text": "The server invokes the accept() method of the ServerSocket class. This method waits until a client connects to the server on the given port."
},
{
"code": null,
"e": 84161,
"s": 84031,
"text": "After the server is waiting, a client instantiates a Socket object, specifying the server name and the port number to connect to."
},
{
"code": null,
"e": 84383,
"s": 84161,
"text": "The constructor of the Socket class attempts to connect the client to the specified server and the port number. If communication is established, the client now has a Socket object capable of communicating with the server."
},
{
"code": null,
"e": 84515,
"s": 84383,
"text": "On the server side, the accept() method returns a reference to a new socket on the server that is connected to the client's socket."
},
{
"code": null,
"e": 84524,
"s": 84515,
"text": "Example:"
},
{
"code": null,
"e": 86010,
"s": 84524,
"text": "// File Name GreetingServer.java\nimport java.net.*;\nimport java.io.*;\n\npublic class GreetingServer extends Thread {\n private ServerSocket serverSocket;\n \n public GreetingServer(int port) throws IOException {\n serverSocket = new ServerSocket(port);\n serverSocket.setSoTimeout(10000);\n }\n\n public void run() {\n while(true) {\n try {\n System.out.println(\"Waiting for client on port \" + \n serverSocket.getLocalPort() + \"...\");\n Socket server = serverSocket.accept();\n \n System.out.println(\"Just connected to \" + server.getRemoteSocketAddress());\n DataInputStream in = new DataInputStream(server.getInputStream());\n \n System.out.println(in.readUTF());\n DataOutputStream out = new DataOutputStream(server.getOutputStream());\n out.writeUTF(\"Thank you for connecting to \" + server.getLocalSocketAddress()\n + \"\\nGoodbye!\");\n server.close();\n \n }catch(SocketTimeoutException s) {\n System.out.println(\"Socket timed out!\");\n break;\n }catch(IOException e) {\n e.printStackTrace();\n break;\n }\n }\n }\n \n public static void main(String [] args) {\n int port = Integer.parseInt(args[0]);\n try {\n Thread t = new GreetingServer(port);\n t.start();\n }catch(IOException e) {\n e.printStackTrace();\n }\n }\n}"
},
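{
"code": null,
"e": null,
"s": null,
"text": "The output below also refers to a GreetingClient, whose code is not included in the original answer. The following is a minimal client sketch that matches the server above (the message text is illustrative):"
},
{
"code": null,
"e": null,
"s": null,
"text": "// File Name GreetingClient.java\nimport java.net.*;\nimport java.io.*;\n\npublic class GreetingClient {\n public static void main(String[] args) {\n String serverName = args[0];\n int port = Integer.parseInt(args[1]);\n try {\n System.out.println(\"Connecting to \" + serverName + \" on port \" + port);\n Socket client = new Socket(serverName, port);\n\n DataOutputStream out = new DataOutputStream(client.getOutputStream());\n out.writeUTF(\"Hello from \" + client.getLocalSocketAddress());\n\n DataInputStream in = new DataInputStream(client.getInputStream());\n System.out.println(\"Server says \" + in.readUTF());\n client.close();\n } catch (IOException e) {\n e.printStackTrace();\n }\n }\n}"
},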
{
"code": null,
"e": 86073,
"s": 86010,
"text": "Compile the client and the server and then start the server as"
},
{
"code": null,
"e": 86135,
"s": 86073,
"text": "$ java GreetingServer 6066\nWaiting for client on port 6066..."
},
{
"code": null,
"e": 86143,
"s": 86135,
"text": "Output:"
},
{
"code": null,
"e": 86181,
"s": 86143,
"text": "$ java GreetingClient localhost 6066 "
},
{
"code": null,
"e": 86219,
"s": 86181,
"text": "Connecting to localhost on port 6066 "
},
{
"code": null,
"e": 86262,
"s": 86219,
"text": "Just connected to localhost/127.0.0.1:6066"
},
{
"code": null,
"e": 86319,
"s": 86262,
"text": "Server says Thank you for connecting to /127.0.0.1:6066 "
},
{
"code": null,
"e": 86328,
"s": 86319,
"text": "Goodbye!"
},
{
"code": null,
"e": 86361,
"s": 86328,
"text": "There are four types of sockets."
},
{
"code": null,
"e": 88135,
"s": 86361,
"text": "\nStream Sockets − Delivery in a networked environment is guaranteed. If you send through the stream socket three items \"A, B, C\", they will arrive in the same order − \"A, B, C\". These sockets use TCP (Transmission Control Protocol) for data transmission. If delivery is impossible, the sender receives an error indicator. Data records do not have any boundaries.\nDatagram Sockets − Delivery in a networked environment is not guaranteed. They're connectionless because you don't need to have an open connection as in Stream Sockets − you build a packet with the destination information and send it out. They use UDP (User Datagram Protocol).\nRaw Sockets − These provide users access to the underlying communication protocols, which support socket abstractions. These sockets are normally datagram oriented, though their exact characteristics are dependent on the interface provided by the protocol. Raw sockets are not intended for the general user; they have been provided mainly for those interested in developing new communication protocols, or for gaining access to some of the more cryptic facilities of an existing protocol.\nSequenced Packet Sockets − They are similar to a stream socket, with the exception that record boundaries are preserved. This interface is provided only as a part of the Network Systems (NS) socket abstraction and is very important in most serious NS applications. Sequenced-packet sockets allow the user to manipulate the Sequence Packet Protocol (SPP) or Internet Datagram Protocol (IDP) headers on a packet or a group of packets, either by writing a prototype header along with whatever data is to be sent, or by specifying a default header to be used with all outgoing data, and allows the user to receive the headers on incoming packets.\n"
},
{
"code": null,
"e": 88497,
"s": 88135,
"text": "Stream Sockets − Delivery in a networked environment is guaranteed. If you send through the stream socket three items \"A, B, C\", they will arrive in the same order − \"A, B, C\". These sockets use TCP (Transmission Control Protocol) for data transmission. If delivery is impossible, the sender receives an error indicator. Data records do not have any boundaries."
},
{
"code": null,
"e": 88775,
"s": 88497,
"text": "Datagram Sockets − Delivery in a networked environment is not guaranteed. They're connectionless because you don't need to have an open connection as in Stream Sockets − you build a packet with the destination information and send it out. They use UDP (User Datagram Protocol)."
},
{
"code": null,
"e": 89264,
"s": 88775,
"text": "Raw Sockets − These provide users access to the underlying communication protocols, which support socket abstractions. These sockets are normally datagram oriented, though their exact characteristics are dependent on the interface provided by the protocol. Raw sockets are not intended for the general user; they have been provided mainly for those interested in developing new communication protocols, or for gaining access to some of the more cryptic facilities of an existing protocol."
},
{
"code": null,
"e": 89907,
"s": 89264,
"text": "Sequenced Packet Sockets − They are similar to a stream socket, with the exception that record boundaries are preserved. This interface is provided only as a part of the Network Systems (NS) socket abstraction and is very important in most serious NS applications. Sequenced-packet sockets allow the user to manipulate the Sequence Packet Protocol (SPP) or Internet Datagram Protocol (IDP) headers on a packet or a group of packets, either by writing a prototype header along with whatever data is to be sent, or by specifying a default header to be used with all outgoing data, and allows the user to receive the headers on incoming packets."
},
{
"code": null,
"e": 89976,
"s": 89907,
"text": "A combination of an IP address and a port number is called a socket."
},
{
"code": null,
"e": 90072,
"s": 89976,
"text": "Sockets allow communication between two different processes on the same or different machines. "
},
{
"code": null,
"e": 90380,
"s": 90072,
"text": "A Unix Socket is used in a client-server application framework. A server is a process that performs some functions on request from a client. Most of the application-level protocols like FTP, SMTP, and POP3 make use of sockets to establish a connection between client and server and then for exchanging data."
},
{
"code": null,
"e": 90386,
"s": 90380,
"text": "Types"
},
{
"code": null,
"e": 90401,
"s": 90386,
"text": "Stream Sockets"
},
{
"code": null,
"e": 90418,
"s": 90401,
"text": "Datagram Sockets"
},
{
"code": null,
"e": 90430,
"s": 90418,
"text": "Raw Sockets"
},
{
"code": null,
"e": 90455,
"s": 90430,
"text": "Sequenced Packet Sockets"
},
{
"code": null,
"e": 90619,
"s": 90455,
"text": "JInternalFrame differs from JFrame in that it is a lightweight component and so must be contained inside another container like JDesktopPane of JFrame of JApplet. "
},
{
"code": null,
"e": 90643,
"s": 90619,
"text": "JInternalFrame Example:"
},
{
"code": null,
"e": 93086,
"s": 90643,
"text": "import javax.swing.JInternalFrame;\nimport javax.swing.JDesktopPane;\nimport javax.swing.JMenu;\nimport javax.swing.JMenuItem;\nimport javax.swing.JMenuBar;\nimport javax.swing.JFrame;\nimport java.awt.event.*;\nimport java.awt.*;\n\npublic class JInternalFrameDemo extends JFrame {\n\n\tJDesktopPane jdpDesktop;\n\tstatic int openFrameCount = 0;\n\tpublic JInternalFrameDemo() {\n\t\tsuper(\"JInternalFrame Usage Demo\");\n\t\t// Make the main window positioned as 50 pixels from each edge of the\n\t\t// screen.\n\t\tint inset = 50;\n\t\tDimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();\n\t\tsetBounds(inset, inset, screenSize.width - inset * 2,\n\t\t\t\tscreenSize.height - inset * 2);\n\t\t// Add a Window Exit Listener\n\t\taddWindowListener(new WindowAdapter() {\n\n\t\t\tpublic void windowClosing(WindowEvent e) {\n\t\t\t\tSystem.exit(0);\n\t\t\t}\n\t\t});\n\t\t// Create and Set up the GUI.\n\t\tjdpDesktop = new JDesktopPane();\n\t\t// A specialized layered pane to be used with JInternalFrames\n\t\tcreateFrame(); // Create first window\n\t\tsetContentPane(jdpDesktop);\n\t\tsetJMenuBar(createMenuBar());\n\t\t// Make dragging faster by setting drag mode to Outline\n\t\tjdpDesktop.putClientProperty(\"JDesktopPane.dragMode\", \"outline\");\n\t}\n\tprotected JMenuBar createMenuBar() {\n\t\tJMenuBar menuBar = new JMenuBar();\n\t\tJMenu menu = new JMenu(\"Frame\");\n\t\tmenu.setMnemonic(KeyEvent.VK_N);\n\t\tJMenuItem menuItem = new JMenuItem(\"New IFrame\");\n\t\tmenuItem.setMnemonic(KeyEvent.VK_N);\n\t\tmenuItem.addActionListener(new ActionListener() {\n\n\t\t\tpublic void actionPerformed(ActionEvent e) {\n\t\t\t\tcreateFrame();\n\t\t\t}\n\t\t});\n\t\tmenu.add(menuItem);\n\t\tmenuBar.add(menu);\n\t\treturn menuBar;\n\t}\n\tprotected void createFrame() {\n\t\tMyInternalFrame frame = new MyInternalFrame();\n\t\tframe.setVisible(true);\n\t\t// Every JInternalFrame must be added to content pane using JDesktopPane\n\t\tjdpDesktop.add(frame);\n\t\ttry {\n\t\t\tframe.setSelected(true);\n\t\t} catch (java.beans.PropertyVetoException e) {\n\t\t}\n\t}\n\tpublic static void main(String[] args) {\n\t\tJInternalFrameDemo frame = new JInternalFrameDemo();\n\t\tframe.setVisible(true);\n\t}\n\tclass MyInternalFrame extends JInternalFrame {\n\n\t\tstatic final int xPosition = 30, yPosition = 30;\n\t\tpublic MyInternalFrame() {\n\t\t\tsuper(\"IFrame #\" + (++openFrameCount), true, // resizable\n\t\t\t\t\ttrue, // closable\n\t\t\t\t\ttrue, // maximizable\n\t\t\t\t\ttrue);// iconifiable\n\t\t\tsetSize(300, 300);\n\t\t\t// Set the window's location.\n\t\t\tsetLocation(xPosition * openFrameCount, yPosition\n\t\t\t\t\t* openFrameCount);\n\t\t}\n\t}\n}"
},
{
"code": null,
"e": 93095,
"s": 93086,
"text": "OUTPUT: "
},
{
"code": null,
"e": 93110,
"s": 93095,
"text": "JFrame Example"
},
{
"code": null,
"e": 93709,
"s": 93110,
"text": "import java.awt.*;\nimport java.awt.event.*;\nimport javax.swing.*;\n\npublic class JFrameDemo {\n\n\tpublic static void main(String s[]) {\n\t\tJFrame frame = new JFrame(\"JFrame Source Demo\");\n\t\t// Add a window listner for close button\n\t\tframe.addWindowListener(new WindowAdapter() {\n\n\t\t\tpublic void windowClosing(WindowEvent e) {\n\t\t\t\tSystem.exit(0);\n\t\t\t}\n\t\t});\n\t\t// This is an empty content area in the frame\n\t\tJLabel jlbempty = new JLabel(\"\");\n\t\tjlbempty.setPreferredSize(new Dimension(175, 100));\n\t\tframe.getContentPane().add(jlbempty, BorderLayout.CENTER);\n\t\tframe.pack();\n\t\tframe.setVisible(true);\n\t}\n}"
},
{
"code": null,
"e": 93717,
"s": 93709,
"text": "OUTPUT:"
},
{
"code": null,
"e": 94052,
"s": 93717,
"text": "PLAF stands for Pluggable Look And Feel, allows a Swing application to change its entire appearance with one or two lines of code. The most common use of this feature is to give applications a choice between the native platform look-and-feel and a new platform-independent Java look-and-feel (also known as the Metal look-and-feel). "
},
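{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch (assuming only the standard UIManager API; the Metal class name is looked up rather than hard-coded):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import javax.swing.JFrame;\nimport javax.swing.UIManager;\n\npublic class PlafExample {\n public static void main(String[] args) throws Exception {\n // switch to the cross-platform (Metal) look and feel before creating components\n UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());\n JFrame f = new JFrame(\"PLAF demo\");\n f.setSize(200, 100);\n f.setVisible(true);\n }\n}"
},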
{
"code": null,
"e": 94266,
"s": 94052,
"text": "Modal dialog boxes forces the user to acknowledge the dialog before moving before moving onto the application. Modeless dialog boxes enable the user to interact with the dialog and the application interchangeably."
},
{
"code": null,
"e": 94464,
"s": 94266,
"text": "A modal dialog box doesn’t allow the user to access the parent window while the dialog is open – it must be dealt with and closed before continuing. A modeless dialog can be open in the background."
},
{
"code": null,
"e": 94751,
"s": 94464,
"text": "Example for Model Dialog is Save, Save As Dialog in MS – Word. while it is opening you can’t do any thing in the application until you close that window. Example for Modeless Dialog is Find, Replace dialogs. You can use Find Dialog, same time you can also work in that word application."
},
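{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of the difference using the standard JDialog constructor, whose third argument selects modality:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import javax.swing.JDialog;\nimport javax.swing.JFrame;\n\npublic class DialogExample {\n public static void main(String[] args) {\n JFrame frame = new JFrame(\"Owner\");\n frame.setSize(300, 200);\n frame.setVisible(true);\n // true = modal: blocks input to 'frame' until the dialog is closed;\n // false would create a modeless dialog instead\n JDialog dialog = new JDialog(frame, \"Modal dialog\", true);\n dialog.setSize(200, 100);\n dialog.setVisible(true); // this call blocks for a modal dialog\n }\n}"
},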
{
"code": null,
"e": 94829,
"s": 94751,
"text": "Using the following statements we can determine the dimensions of our applet."
},
{
"code": null,
"e": 94856,
"s": 94829,
"text": "Dimension dim = getSize();"
},
{
"code": null,
"e": 94889,
"s": 94856,
"text": "int appletHeight = dim.height();"
},
{
"code": null,
"e": 94920,
"s": 94889,
"text": "int appletWidth = dim.width();"
},
{
"code": null,
"e": 95167,
"s": 94920,
"text": "The first statement uses the getsize() method to return the size of the applet as a Dimension object. The Applet class inherits it from the Component class in the java.awt package. The next two statements extract separate width and height fields."
},
{
"code": null,
"e": 95503,
"s": 95167,
"text": "The paint () method supports painting via a Graphics object. This method holds instructions to paint this component. Actually, in Swing, you should change paintComponent() instead of paint(), as paint calls paintBorder(), paintComponent() and paintChildren(). You shouldn't call this method directly, you should call repaint() instead."
},
{
"code": null,
"e": 95880,
"s": 95503,
"text": "The repaint () method is used to cause paint () to be invoked by the AWT painting method. This method can't be overridden. It controls the update() -> paint() cycle. You should call this method to get a component to repaint itself. If you have done anything to change the look of the component, but not it's size ( like changing color, animating, etc. ) then call this method."
},
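{
"code": null,
"e": null,
"s": null,
"text": "A minimal Swing sketch of the recommended pattern, overriding paintComponent() for drawing and calling repaint() to request a redraw (the class and field names are illustrative):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.awt.Graphics;\nimport javax.swing.JPanel;\n\npublic class MyPanel extends JPanel {\n private int x = 10;\n\n @Override\n protected void paintComponent(Graphics g) {\n super.paintComponent(g); // let Swing clear the background first\n g.drawString(\"Hello\", x, 20);\n }\n\n public void moveRight() {\n x += 5;\n repaint(); // schedule a repaint; never call paintComponent() directly\n }\n}"
},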
{
"code": null,
"e": 95926,
"s": 95880,
"text": "The immediate superclass of Applet is Panel. "
},
{
"code": null,
"e": 96305,
"s": 95926,
"text": "Panel provides the following things:\n\n1) Panels allow us to format the screen. Panels must have a specific layout. If a layout is not specified, the default will be a FlowLayout. \n2) FlowLayout adds components to the screen one after another from top to bottom and from left to right. Components are rearranged when the user resizes the window. FlowLayout may take no arguments."
},
{
"code": null,
"e": 96353,
"s": 96305,
"text": "Constructor: FlowLayout fl = new FlowLayout( );"
},
{
"code": null,
"e": 96511,
"s": 96353,
"text": "3) BorderLayout divides the screen in nine sections based in geographic orientation such as \"North\", \"South\", \"East\"....etc. BorderLayout takes no arguments."
},
{
"code": null,
"e": 96563,
"s": 96511,
"text": "Constructor: BorderLayout bl = new BorderLayout( );"
},
{
"code": null,
"e": 96698,
"s": 96563,
"text": "4) GridLayout divides the screen in the number of sections specified by the programmer. GridLayout takes two arguments (#rows, #cols)."
},
{
"code": null,
"e": 96789,
"s": 96698,
"text": "Constructor: GridLayout gl = new GridLayout(rows, cols ); //rows and cols are int numbers "
},
{
"code": null,
"e": 96876,
"s": 96789,
"text": "We use codebase in applet whenever the applet class file is not in the same directory."
},
{
"code": null,
"e": 97079,
"s": 96876,
"text": "codebase = codebaseURL \nThis optional attribute specifies the base URL of the applet: the directory that contains the applet's code. If this attribute is not specified, then the document's URL is used. "
},
{
"code": null,
"e": 97090,
"s": 97079,
"text": "HTML code:"
},
{
"code": null,
"e": 97246,
"s": 97090,
"text": "<object type=\"application/x-java-applet\" code=\"HelloWorld.class\" \n codebase=\"/external/examples/common/java/\" width=\"200px\" height=\"50px\">\n</object>"
},
{
"code": null,
"e": 97446,
"s": 97248,
"text": "Whenever a screen needs redrawing, the update() method is called. By default, the update() method clears the screen and then calls the paint() method, which normally contains all the drawing code."
},
{
"code": null,
"e": 97454,
"s": 97446,
"text": "Example"
},
{
"code": null,
"e": 98678,
"s": 97454,
"text": "import java.awt.*; \nimport java.applet.Applet; \nimport java.awt.event.*; \n/*<applet code=\"UpdateExample.class\" width=\"350\" height=\"150\"> </applet>*/ \npublic class UpdateExample extends Applet implements MouseListener \n{ \n private int mouseX, mouseY; \n private boolean mouseclicked = false; \n public void init() \n { \n setBackground(Color.black); \n addMouseListener(this); \n } \n public void mouseClicked(MouseEvent e) \n { \n mouseX=e.getX(); \n mouseY=e.getY(); \n mouseclicked = true; \n repaint(); \n } \n public void mouseEntered(MouseEvent e){}; \n public void mousePressed(MouseEvent e){}; \n public void mouseReleased(MouseEvent e){}; \n public void mouseExited(MouseEvent e){}; \n public void update(Graphics g) \n { \n paint(g); \n } \n public void paint( Graphics g) \n { \n String str; \n g.setColor(Color.white); \n if (mouseclicked) \n { \n str = \"X=\"+ mouseX + \",\" + \"Y=\" + mouseY; \n g.drawString(str,mouseX,mouseY); \n mouseclicked = false; \n } \n } \n} "
},
{
"code": null,
"e": 98715,
"s": 98680,
"text": "Yes, using <paran> tag as follows,"
},
{
"code": null,
"e": 98756,
"s": 98715,
"text": "<paran name = \"param1\" value = \"value1\">"
},
{
"code": null,
"e": 98796,
"s": 98756,
"text": "<param name = \"param2\" value = \"value2>"
},
{
"code": null,
"e": 98898,
"s": 98796,
"text": "One can access these parameters inside the applet by calling getParameter() method inside the applet."
},
{
"code": null,
"e": 98908,
"s": 98898,
"text": "HTML File"
},
{
"code": null,
"e": 99261,
"s": 98908,
"text": "<HTML> \n<HEAD> \n<TITLE>Java applet example - Passing applet parameters to Java applets</TITLE> \n</HEAD> \n<BODY> \n<APPLET CODE=\"AppletParameterTest.class\" WIDTH=\"400\" HEIGHT=\"50\">\n <PARAM NAME=\"font\" VALUE=\"Dialog\">\n <PARAM NAME=\"size\" VALUE=\"24\">\n <PARAM NAME=\"string\" VALUE=\"Hello, world ... it's me. :)\">\n</APPLET> \n</BODY> \n</HTML>"
},
{
"code": null,
"e": 99269,
"s": 99261,
"text": "Applet "
},
{
"code": null,
"e": 99798,
"s": 99269,
"text": "import java.applet.*;\nimport java.awt.*;\n\n/**\n * A Java applet parameter test class.\n * Demonstrates how to read applet parameters.\n */\npublic class AppletParameterTest extends Applet {\n\n public void paint(Graphics g) {\n\n String myFont = getParameter(\"font\");\n String myString = getParameter(\"string\");\n int mySize = Integer.parseInt(getParameter(\"size\"));\n\n Font f = new Font(myFont, Font.BOLD, mySize);\n g.setFont(f);\n g.setColor(Color.red);\n g.drawString(myString, 20, 20);\n\n }\n}"
},
{
"code": null,
"e": 99822,
"s": 99800,
"text": "When an applet begin,"
},
{
"code": null,
"e": 99850,
"s": 99822,
"text": "init() -> start() ->paint()"
},
{
"code": null,
"e": 100153,
"s": 99850,
"text": "\nThe init() and start() methods are invoked first.\nThat, in turn, creates a thread and starts that thread, which causes this class's run() method to be invoked.\nThe paint() method is invoked by Swing independently in the GUI event handling thread if Swing detects that the applet needs to be redrawn.\n"
},
{
"code": null,
"e": 100203,
"s": 100153,
"text": "The init() and start() methods are invoked first."
},
{
"code": null,
"e": 100313,
"s": 100203,
"text": "That, in turn, creates a thread and starts that thread, which causes this class's run() method to be invoked."
},
{
"code": null,
"e": 100454,
"s": 100313,
"text": "The paint() method is invoked by Swing independently in the GUI event handling thread if Swing detects that the applet needs to be redrawn."
},
{
"code": null,
"e": 100484,
"s": 100454,
"text": "When an applet is terminated,"
},
{
"code": null,
"e": 100504,
"s": 100484,
"text": "stop() -> destroy()"
},
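{
"code": null,
"e": null,
"s": null,
"text": "A minimal skeleton (illustrative only) showing where each life cycle method fits:"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.applet.Applet;\nimport java.awt.Graphics;\n\npublic class LifeCycleApplet extends Applet {\n public void init() { System.out.println(\"init()\"); } // called once, on load\n public void start() { System.out.println(\"start()\"); } // page becomes visible\n public void paint(Graphics g) { g.drawString(\"paint()\", 20, 20); }\n public void stop() { System.out.println(\"stop()\"); } // page is left\n public void destroy() { System.out.println(\"destroy()\"); } // called once, on unload\n}"
},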
{
"code": null,
"e": 100652,
"s": 100504,
"text": "We don't have the concept of constructor in applets. Applets can be invoked either through browser or through Appletviewer utility provided by JDK."
},
{
"code": null,
"e": 100802,
"s": 100652,
"text": "Applets don't have implicit constructors but you can define explicitly, but actually no need to mention it because it is initialized using an init()."
},
{
"code": null,
"e": 100845,
"s": 100802,
"text": "FlowLayout - Top to bottom, left to right."
},
{
"code": null,
"e": 100931,
"s": 100845,
"text": "BoderLayout - At borders(North, South, East, West) and at the center of a container."
},
{
"code": null,
"e": 100987,
"s": 100931,
"text": "CardLayout - Elements are stacked on top of each other."
},
{
"code": null,
"e": 101070,
"s": 100987,
"text": "GridLayout - Elements are of equal size and are laid out using the square of grid."
},
{
"code": null,
"e": 101212,
"s": 101070,
"text": "GridBagLayout - Elements organized according to grid. The elements are of different sizes and may occupy more than one row or column of grid."
},
{
"code": null,
"e": 101343,
"s": 101212,
"text": "Double buffering is the process of use of two buffers rather than one to temporarily hold data being moved to and from I/O device."
},
{
"code": null,
"e": 101464,
"s": 101343,
"text": "The resulting image is smoother, less flicker and quicker than drawing on the screen. It also helps prevent bottlenecks."
},
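{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of the classic AWT technique, drawing to an offscreen image first and then copying it to the screen in one step (illustrative, using only standard Component/Graphics calls):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.applet.Applet;\nimport java.awt.Graphics;\nimport java.awt.Image;\n\npublic class BufferedApplet extends Applet {\n private Image offscreen;\n\n public void update(Graphics g) {\n if (offscreen == null) {\n offscreen = createImage(getWidth(), getHeight());\n }\n Graphics og = offscreen.getGraphics();\n og.clearRect(0, 0, getWidth(), getHeight());\n paint(og); // draw everything into the offscreen buffer\n og.dispose();\n g.drawImage(offscreen, 0, 0, this); // copy the finished image to the screen\n }\n}"
},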
{
"code": null,
"e": 101641,
"s": 101464,
"text": "AWT components depend upon native code counterparts(called peers) to handle their functionality(drawing and rendering). This extra 'baggage' makes them heavy weight components."
},
{
"code": null,
"e": 101719,
"s": 101641,
"text": "The Font class is used to render glyphs you characters you see on the screen."
},
{
"code": null,
"e": 101815,
"s": 101719,
"text": "FontMetrics class encapsulates information about a specific font on a specific graphics object."
},
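{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of a typical FontMetrics use, centering a string horizontally (an illustrative helper, not from the original answer):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.awt.Font;\nimport java.awt.FontMetrics;\nimport java.awt.Graphics;\n\npublic class CenteredText {\n // draws a string horizontally centered within the given width\n public static void drawCentered(Graphics g, String s, int width, int y) {\n g.setFont(new Font(\"Serif\", Font.PLAIN, 14));\n FontMetrics fm = g.getFontMetrics();\n int x = (width - fm.stringWidth(s)) / 2;\n g.drawString(s, x, y);\n }\n}"
},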
{
"code": null,
"e": 102009,
"s": 101815,
"text": "1.The File class encapsulates the files and directories of the local file system. The RandomAccessFile class provides the methods needed to directly access data contained in any part of a file."
},
{
"code": null,
"e": 102416,
"s": 102009,
"text": "2.The java.io.RandomAccessFile class implements a random access file.\n\n3.Random access file offers a seek feature that can go directly to a particular position.\n\n4.Unlike the input and output stream classes in java.io. RandomAccessFile is used for both reading and writing files.\n\n5. RandomAccessFile does not inherit from InputStream or OutputStream. It implements the DataInput and DataOutput interfaces."
},
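{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of the seek feature (the file name data.bin is illustrative):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.io.IOException;\nimport java.io.RandomAccessFile;\n\npublic class RandomAccessExample {\n public static void main(String[] args) throws IOException {\n RandomAccessFile raf = new RandomAccessFile(\"data.bin\", \"rw\");\n raf.writeInt(100); // bytes 0-3\n raf.writeInt(200); // bytes 4-7\n raf.seek(4); // jump directly to the second int\n System.out.println(raf.readInt()); // prints 200\n raf.close();\n }\n}"
},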
{
"code": null,
"e": 102623,
"s": 102416,
"text": "When a task invokes yield() method, it returns to the ready state either from running, waiting or after its creation. When a task invokes sleep() method it returns to the waiting state from a running state."
},
{
"code": null,
"e": 102632,
"s": 102623,
"text": "Yielding"
},
{
"code": null,
"e": 102683,
"s": 102632,
"text": "1.Yield will cause the thread to rejoin the queue."
},
{
"code": null,
"e": 102752,
"s": 102683,
"text": "2.When a task is invoked in yielding, it returns to the ready state."
},
{
"code": null,
"e": 102842,
"s": 102752,
"text": "3.It is used to get the running thread into out of runnable state with the same priority."
},
{
"code": null,
"e": 102851,
"s": 102842,
"text": "Sleeping"
},
{
"code": null,
"e": 102912,
"s": 102851,
"text": "1.Sleep holds the thread's execution for the specified time."
},
{
"code": null,
"e": 102983,
"s": 102912,
"text": "2.When a task is invoked in sleeping, it returns to the waiting state."
},
{
"code": null,
"e": 103041,
"s": 102983,
"text": "3.It is used to delay the execution for a period of time."
},
{
"code": null,
"e": 103045,
"s": 103041,
"text": "No."
},
{
"code": null,
"e": 103084,
"s": 103045,
"text": "There are two ways to create a thread:"
},
{
"code": null,
"e": 103105,
"s": 103084,
"text": "extends Thread class"
},
{
"code": null,
"e": 103134,
"s": 103105,
"text": "implement Runnable interface"
},
{
"code": null,
"e": 103337,
"s": 103134,
"text": "Even when implemented Runnable to create a thread, we have to create an instance of the Thread class, pass the instance of the class implementing Runnable as the argument in Thread class's constructor."
},
{
"code": null,
"e": 103351,
"s": 103337,
"text": "Using extend:"
},
{
"code": null,
"e": 103601,
"s": 103351,
"text": "public class MyThread extends Thread{\n public void run()\n {\n System.out.println(\"Thread started running..\");\n }\n public static void main( String args[] )\n {\n MyThread mt = new MyThread();\n mt.start();\n }\n}"
},
{
"code": null,
"e": 103609,
"s": 103601,
"text": "OUTPUT:"
},
{
"code": null,
"e": 103634,
"s": 103609,
"text": "Thread started running.."
},
{
"code": null,
"e": 103650,
"s": 103634,
"text": "Using Runnable:"
},
{
"code": null,
"e": 103879,
"s": 103650,
"text": "public void run() {\n System.out.println(\"Thread started running..\");\n }\n \n public static void main(String args[]) {\n MyThread mt = new MyThread();\n Thread t = new Thread(mt);\n t.start();\n }\n}"
},
{
"code": null,
"e": 103887,
"s": 103879,
"text": "OUTPUT:"
},
{
"code": null,
"e": 103912,
"s": 103887,
"text": "Thread started running.."
},
{
"code": null,
"e": 104245,
"s": 103914,
"text": "Threaded programming is normally used when a program is required to do more than one task at the same time. Threading is generally used in applications with graphical user interfaces where a new thread may be created to do some work relating to processing while the main thread keeps the interface responsive to human interaction."
},
{
"code": null,
"e": 104500,
"s": 104245,
"text": "Both start() and run() provide ways to create threaded programs. The start() method starts the execution of the new thread and calls run() method. the start() method returns immediately as the new thread normally continues until the run() method returns."
},
{
"code": null,
"e": 104793,
"s": 104500,
"text": "Here is a simple code example which prints name of Thread which executes run() method of Runnable task. Its clear that if you call start() method a new Thread executes Runnable task while if you directly call run() method task, current thread which is main in this case will execute the task."
},
{
"code": null,
"e": 105709,
"s": 104793,
"text": "public class StartVsRunCall{\n\n public static void main(String args[]) {\n \n //creating two threads for start and run method call\n Thread startThread = new Thread(new Task(\"start\"));\n Thread runThread = new Thread(new Task(\"run\"));\n \n startThread.start(); //calling start method of Thread - will execute in new Thread\n runThread.run(); //calling run method of Thread - will execute in current Thread\n\n }\n\n /*\n * Simple Runnable implementation\n */\n private static class Task implements Runnable{\n private String caller;\n \n public Task(String caller){\n this.caller = caller;\n }\n \n @Override\n public void run() {\n System.out.println(\"Caller: \"+ caller + \" and code on this Thread is executed by : \" + Thread.currentThread().getName());\n \n } \n } \n}\n\n\n"
},
{
"code": null,
"e": 105717,
"s": 105709,
"text": "Output:"
},
{
"code": null,
"e": 105839,
"s": 105717,
"text": "Caller: start and code on this Thread is executed by: Thread-0\nCaller: run and code on this Thread is executed by: main"
},
{
"code": null,
"e": 106046,
"s": 105839,
"text": "In Summary only difference between start() and run() method in Thread is that start creates new thread while run doesn't create any thread and simply execute in current thread like a normal method call.\n\n\n "
},
{
"code": null,
"e": 106066,
"s": 106046,
"text": "java.lang.Throwable"
},
{
"code": null,
"e": 106399,
"s": 106066,
"text": "In Java, exceptions are objects. When you throw an exception, you throw an object. You can't throw just any object as an exception, however only those objects whose classes descend from Throwable. Throwable serves as the base class for an entire family of classes, declared in java.lang, that your program can instantiate and throw."
},
{
"code": null,
"e": 106887,
"s": 106399,
"text": "In overloading, the compiler picks an overloaded method. When translating the program, before the program ever runs. This method selection is known as static or early binding. However, in polymorphism, the compiler does not makes any decision when translating the method. The program has to run before any one can know what is stored in the object reference variable. Therefore, the JVM and not the compiler selects the appropriate method, This method selection is known as late binding."
},
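{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch contrasting the two (the class names are illustrative):"
},
{
"code": null,
"e": null,
"s": null,
"text": "class Animal {\n void speak() { System.out.println(\"Animal\"); } // overridden below\n static void greet(int n) { System.out.println(\"int\"); } // overloaded\n static void greet(double d) { System.out.println(\"double\"); } // overloaded\n}\n\nclass Dog extends Animal {\n void speak() { System.out.println(\"Dog\"); }\n}\n\npublic class BindingDemo {\n public static void main(String[] args) {\n Animal.greet(5); // early binding: the compiler picks greet(int)\n Animal a = new Dog();\n a.speak(); // late binding: the JVM picks Dog.speak() at runtime\n }\n}"
},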
{
"code": null,
"e": 107035,
"s": 106887,
"text": "Assigning an object to another object does not create a duplicate object. It simply assigns a reference of already existing object to a new object."
},
{
"code": null,
"e": 107113,
"s": 107035,
"text": "The clone() method when used creates a new object with separate memory space."
},
{
"code": null,
"e": 107148,
"s": 107113,
"text": "For example: aObj = bObj.clone();"
},
{
"code": null,
"e": 107256,
"s": 107148,
"text": "This statement copies on object bObj to new memory location and assign the reference of new object to aObj."
},
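{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of the difference (the Point class is illustrative):"
},
{
"code": null,
"e": null,
"s": null,
"text": "public class Point implements Cloneable {\n int x, y;\n\n public Point(int x, int y) { this.x = x; this.y = y; }\n\n public Point clone() throws CloneNotSupportedException {\n return (Point) super.clone(); // field-by-field copy into new memory\n }\n\n public static void main(String[] args) throws CloneNotSupportedException {\n Point a = new Point(1, 2);\n Point b = a; // same object, two references\n Point c = a.clone(); // a separate copy\n c.x = 99;\n System.out.println(a.x + \" \" + c.x); // prints 1 99\n }\n}"
},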
{
"code": null,
"e": 107459,
"s": 107256,
"text": "Within the inner class, the keyword this holds a reference to the current object but if inner class needs to access the current outer class object then precede the keyword this with the outerclass name."
},
{
"code": null,
"e": 107486,
"s": 107459,
"text": "Using this in inner class:"
},
{
"code": null,
"e": 107676,
"s": 107486,
"text": "class Outer \n{\n private int a = 10;\n class Inner \n {\n private int a = 20;\n public void myMethod() \n {\n System.out.println(this.a);\n }\n }\n}"
},
{
"code": null,
"e": 107684,
"s": 107676,
"text": "OUTPUT:"
},
{
"code": null,
"e": 107687,
"s": 107684,
"text": "20"
},
{
"code": null,
"e": 107713,
"s": 107687,
"text": "Using this in outerclass:"
},
{
"code": null,
"e": 107907,
"s": 107713,
"text": "class Outer\n{\n private int a = 10;\n class Inner \n {\n private int a = 20;\n public void myMethod()\n {\n System.out.println(Outer.this.a);\n }\n }\n}"
},
{
"code": null,
"e": 107915,
"s": 107907,
"text": "OUTPUT:"
},
{
"code": null,
"e": 107918,
"s": 107915,
"text": "10"
},
{
"code": null,
"e": 108138,
"s": 107918,
"text": "The instanceof() keyword is a two argument that tests whether the runtime type of its first argument compatible with its second argument compatible with its second argument. It performs test at compile time and runtime."
},
{
"code": null,
"e": 108373,
"s": 108138,
"text": "The instanceof in java is also known as type comparison operator because it compares the instance with type. It returns either true or false. If we apply the instanceof operator with any variable that has null value, it returns false."
},
{
"code": null,
"e": 108382,
"s": 108373,
"text": "Example:"
},
{
"code": null,
"e": 108551,
"s": 108382,
"text": "class Simple1\n{ \n public static void main(String args[])\n { \n Simple1 s = new Simple1(); \n System.out.println(s instanceof Simple1); \n } \n} "
},
{
"code": null,
"e": 108559,
"s": 108551,
"text": "OUTPUT:"
},
{
"code": null,
"e": 108564,
"s": 108559,
"text": "true"
},
{
"code": null,
"e": 108671,
"s": 108564,
"text": "The class is instantiated and declared in the same place. The declaration and instantiation takes the form"
},
{
"code": null,
"e": 108681,
"s": 108671,
"text": "new Xxx()"
},
{
"code": null,
"e": 108692,
"s": 108681,
"text": "{ //body }"
},
{
"code": null,
"e": 108877,
"s": 108692,
"text": "Here, Xxx is an interface name. An anonymous class cannot have a constructor. This is because you do not specify a name of the class, you cannot use that name to specify a constructor."
},
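{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch using the Runnable interface (illustrative):"
},
{
"code": null,
"e": null,
"s": null,
"text": "public class AnonymousDemo {\n public static void main(String[] args) {\n // declare and instantiate an unnamed class implementing Runnable in one expression\n Runnable r = new Runnable() {\n public void run() {\n System.out.println(\"Running from an anonymous class\");\n }\n };\n r.run();\n }\n}"
},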
{
"code": null,
"e": 109096,
"s": 108877,
"text": "Constructing an instance of a class invokes the constructor of all the superclass along the inheritance chain. A superclasss constructor is called before the subclass's constructor. This is called constructor chaining."
},
{
"code": null,
"e": 109598,
"s": 109096,
"text": "package com.myjava.constructors;\n \npublic class MyChaining {\n \n public MyChaining(){\n System.out.println(\"In default constructor...\");\n }\n public MyChaining(int i){\n this();\n System.out.println(\"In single parameter constructor...\");\n }\n public MyChaining(int i, int j){\n this(j);\n System.out.println(\"In double parameter constructor...\");\n }\n \n public static void main(String a[]){\n MyChaining ch = new MyChaining(10, 20);\n }\n}"
},
{
"code": null,
"e": 109606,
"s": 109598,
"text": "OUTPUT:"
},
{
"code": null,
"e": 109702,
"s": 109606,
"text": "In default constructor...\nIn single parameter constructor...\nIn double parameter constructor..."
},
{
"code": null,
"e": 109826,
"s": 109702,
"text": "this() can be used to invoke a constructor of the same class whereas super() can be used to invoke a superclass instructor."
},
{
"code": null,
"e": 109829,
"s": 109826,
"text": "OR"
},
{
"code": null,
"e": 109938,
"s": 109829,
"text": "super is used to access methods of the base class while this is used to access methods of the current class."
},
{
"code": null,
"e": 110002,
"s": 109938,
"text": "There are some classes that cannot be extended(i.e subclasses)."
},
{
"code": null,
"e": 110307,
"s": 110002,
"text": "\nA non-public class can only be subclassed by classes in the same package as the class but not from classes in a different package.\nA final class cannot be classed.\nA class that has only private construction cannot b subclassed.\nIf the class has private members then regular inner class can access them.\n"
},
{
"code": null,
"e": 110438,
"s": 110307,
"text": "A non-public class can only be subclassed by classes in the same package as the class but not from classes in a different package."
},
{
"code": null,
"e": 110471,
"s": 110438,
"text": "A final class cannot be classed."
},
{
"code": null,
"e": 110535,
"s": 110471,
"text": "A class that has only private construction cannot b subclassed."
},
{
"code": null,
"e": 110610,
"s": 110535,
"text": "If the class has private members then regular inner class can access them."
},
{
"code": null,
"e": 110707,
"s": 110610,
"text": "Yes, it is possible by using super keyword. For example, consider the following code statement."
},
{
"code": null,
"e": 110780,
"s": 110707,
"text": "public void play()\n{\n super.play();\n //my own play() method code\n}"
},
{
"code": null,
"e": 110908,
"s": 110780,
"text": "Thus, the first statement in the body calls inherited version of play() and then it comes back to the subclass's specific code."
},
{
"code": null,
"e": 111287,
"s": 110908,
"text": "The string value is represented using private array variable. The array cannot be accessed outside the String class. The String class provides many public methods( such as length(), charAt() ) to retrieve array information. If array were not private, the user would be able to change the string content by modifying the array. This would violate that String class is immutable. "
},
{
"code": null,
"e": 111402,
"s": 111287,
"text": "Float One = new Float(3.7);\nFloat Two = new Float(5.2);\nFloat Sum = new Float(One.floatValue() +Two.floatValue());"
},
{
"code": null,
"e": 111609,
"s": 111402,
"text": "Here, floatValue() method is used. The Float wrapper class does not support floating point arithmetic. So it is necessary to convert it to float primitive type before performing arithmetic operations."
},
{
"code": null,
"e": 111734,
"s": 111609,
"text": "Locale class is used to tailor a program output to the conventions of a particular geographic, political or cultural region."
},
{
"code": null,
"e": 111873,
"s": 111734,
"text": "\nAn operation that requires a Locale to perform its task is called locale-sensitive and uses the Locale to form information for the user.\n"
},
{
"code": null,
"e": 112010,
"s": 111873,
"text": "An operation that requires a Locale to perform its task is called locale-sensitive and uses the Locale to form information for the user."
},
{
"code": null,
"e": 112103,
"s": 112010,
"text": "\nLocale is a mechanism for identifying objects, not a container for the objects themselves.\n"
},
{
"code": null,
"e": 112194,
"s": 112103,
"text": "Locale is a mechanism for identifying objects, not a container for the objects themselves."
},
{
"code": null,
"e": 112316,
"s": 112194,
"text": "A locale consists of a language and a country. Class Locale, in package java.util contains information about 140 locales."
},
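{
"code": null,
"e": null,
"s": null,
"text": "A minimal locale-sensitive sketch using the standard NumberFormat class (the printed forms in the comments are typical, not guaranteed, output):"
},
{
"code": null,
"e": null,
"s": null,
"text": "import java.text.NumberFormat;\nimport java.util.Locale;\n\npublic class LocaleDemo {\n public static void main(String[] args) {\n double amount = 1234.56;\n // the same value formatted according to two different locales\n System.out.println(NumberFormat.getCurrencyInstance(Locale.US).format(amount)); // e.g. $1,234.56\n System.out.println(NumberFormat.getCurrencyInstance(Locale.GERMANY).format(amount)); // e.g. 1.234,56 €\n }\n}"
},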
{
"code": null,
"e": 112456,
"s": 112316,
"text": "length() method is used to get the number of elements in string buffer whereas length parameter is used only with arrays to get its length."
},
{
"code": null,
"e": 112501,
"s": 112456,
"text": "Here is an example for better understanding."
},
{
"code": null,
"e": 112717,
"s": 112501,
"text": "public class length\n{\n public static void main(String args[])\n {\n String x = \"test\";\n int a[] = {1, 2, 3, 4};\n System.out.println(x.length());\n System.out.println(a.length);\n }\n}"
},
{
"code": null,
"e": 112724,
"s": 112717,
"text": "OUTPUT"
},
{
"code": null,
"e": 112726,
"s": 112724,
"text": "4"
},
{
"code": null,
"e": 112728,
"s": 112726,
"text": "4"
},
{
"code": null,
"e": 113041,
"s": 112728,
"text": "Not directly. Although Java provides wrapper classes that wrap the primitive types in objects. These are Integers, Double, Byte, Float, Long and Character. In addition to allowing a primitive type to be passed by reference, the wrapper classes define several methods that enable you to manipulate their values. "
},
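{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of the Integer wrapper in use:"
},
{
"code": null,
"e": null,
"s": null,
"text": "public class WrapperDemo {\n public static void main(String[] args) {\n Integer boxed = Integer.valueOf(42); // wrap a primitive in an object\n int primitive = boxed.intValue(); // unwrap it again\n int parsed = Integer.parseInt(\"123\"); // utility method defined by the wrapper\n System.out.println(primitive + parsed); // prints 165\n }\n}"
},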
{
"code": null,
"e": 113135,
"s": 113041,
"text": "Although the finalize() method approximates the function of a destructor, it is not the same."
},
{
"code": null,
"e": 113282,
"s": 113135,
"text": "A C++ destructor is always called just before an object goes out of scope, but you can't know when would finalize() be called for specific object."
},
{
"code": null,
"e": 113290,
"s": 113282,
"text": "In C++,"
},
{
"code": null,
"e": 113828,
"s": 113290,
"text": "\nEvery object is destroyed when it goes out of scope. Thus, if you declare a local object inside a function, when that function returns, that local object is automatically destroyed. The same goes for function parameters and for objects returned by functions.\nJust before destruction, the object's destructor is called. This happens immediately, and before any other program statements will execute. Thus, a C++ destructor will always execute in a deterministic fashion. You can always know when and where a destructor will be executed.\n"
},
{
"code": null,
"e": 114087,
"s": 113828,
"text": "Every object is destroyed when it goes out of scope. Thus, if you declare a local object inside a function, when that function returns, that local object is automatically destroyed. The same goes for function parameters and for objects returned by functions."
},
{
"code": null,
"e": 114364,
"s": 114087,
"text": "Just before destruction, the object's destructor is called. This happens immediately, and before any other program statements will execute. Thus, a C++ destructor will always execute in a deterministic fashion. You can always know when and where a destructor will be executed."
},
{
"code": null,
"e": 114828,
"s": 114364,
"text": " In Java, objects are not explicitly destroyed when they go out of scope. Rather, an object is marked as unused when there are no longer any references pointing to it. Even then, the finalize() method will not be called until the garbage collector runs. Thus, you cannot know precisely when or where a call to finalize( ) will occur. Even if you execute a call to gc( ) (the garbage collector), there is no guarantee that finalize( ) will immediately be executed."
},
{
"code": null,
"e": 114963,
"s": 114828,
"text": "No, arithmetic operations cannot be performed on a reference variable because the reference variable is an alias of another variable. "
},
{
"code": null,
"e": 115067,
"s": 114963,
"text": "We use if else-if ladder when conditions controlling the selection process involves multiple variables."
},
{
"code": null,
"e": 115080,
"s": 115067,
"text": "For example,"
},
{
"code": null,
"e": 115158,
"s": 115080,
"text": "if (p<0) //........\nelse if (q>10.7) //.........\nelse if (!finish) //........"
},
{
"code": null,
"e": 115285,
"s": 115158,
"text": "This sequence cannot be re-coded with switch statement because all conditions involve different variables and different types."
},
{
"code": null,
"e": 115521,
"s": 115285,
"text": "Java is strongly typed language. This implies that all operations are type checked by the compiler for type compatibility. Illegal operations will not be compiled. Therefore, strong type checking present errors and enhance reliability."
},
{
"code": null,
"e": 115712,
"s": 115521,
"text": "Primitive types are the data types that are defined by the language itself. In contrast, reference types are types that are defined by classes in the Java API rather than by language itself."
},
{
"code": null,
"e": 115953,
"s": 115712,
"text": "Moreover, memory location associated with primitive type contains the actual value of the variable. In contrast, memory location associated with reference variable contains an address that indicates the memory location of the actual object."
},
{
"code": null,
"e": 116085,
"s": 115953,
"text": "Although Java does not have unsigned int's but one can convert an int to unsigned representation by using the following convention."
},
{
"code": null,
"e": 116119,
"s": 116085,
"text": "((long) i) & 0x00000000FFFFFFFFL;"
},
{
"code": null,
"e": 116195,
"s": 116119,
"text": "Here, i is a variable of int type that you want to convert to unsigned int."
},
{
"code": null,
"e": 116474,
"s": 116195,
"text": "Yes. In Java, identifiers can be at maximum 65535 character length. Although there is no restriction placed in principle but Java source code is compiled into Java class files and the specification for class files does in effect, place an upper bound on the size of identifiers."
},
{
"code": null,
"e": 116692,
"s": 116474,
"text": "The precedence of operators refers to the order in which operators are evaluated within an expression whereas associativity refers to the order in which the consecutive operators within the same group are carried out."
},
{
"code": null,
"e": 116933,
"s": 116692,
"text": "Precedence rules specify the priority of operators (which operators will be evaluated first, e.g. multiplication has higher precedence than addition, PEMDAS). The associativity rules tell how the operators of the same precedence are grouped"
},
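{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of both rules in action:"
},
{
"code": null,
"e": null,
"s": null,
"text": "public class PrecedenceDemo {\n public static void main(String[] args) {\n System.out.println(2 + 3 * 4); // 14: * has higher precedence than +\n System.out.println(10 - 4 - 3); // 3: - is left-associative, (10 - 4) - 3\n int a, b;\n a = b = 5; // = is right-associative, a = (b = 5)\n System.out.println(a); // prints 5\n }\n}"
},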
{
"code": null,
"e": 117204,
"s": 116933,
"text": "Yes, an application can have multiple classes having main method, but while starting the application, mention the classname which is to be run i.e the classname which is to be executed. The JVM will only look for the main() method in that class which you have mentioned."
},
{
"code": null,
"e": 117461,
"s": 117204,
"text": "String[] args is the only parameter in the main method. It declares a parameter named args which contains an array of objects of the class type String.In other words, if you run your program as java MyProgram one two then args will contain [\"one\", \"two\"]."
},
{
"code": null,
"e": 117543,
"s": 117461,
"text": "select max(salary) from employees where pin < (select max(salary) from employees)"
}
]
|
How to find by id in MongoDB? | To find by id in MongoDB, use the find() method as in the below syntax −
db.findByIdDemo.find({"_id" :yourObjectId});
To understand the above syntax, let us create a collection with documents −
> db.findByIdDemo.insertOne({"Value":10});
{
"acknowledged" : true,
"insertedId" : ObjectId("5e07158925ddae1f53b621fc")
}
> db.findByIdDemo.insertOne({"Value":500});
{
"acknowledged" : true,
"insertedId" : ObjectId("5e07158c25ddae1f53b621fd")
}
> db.findByIdDemo.insertOne({"Value":1000});
{
"acknowledged" : true,
"insertedId" : ObjectId("5e07159125ddae1f53b621fe")
}
Following is the query to display all documents from a collection with the help of find() method −
> db.findByIdDemo.find();
This will produce the following output −
"_id" : ObjectId("5e07158925ddae1f53b621fc"), "Value" : 10 }
{ "_id" : ObjectId("5e07158c25ddae1f53b621fd"), "Value" : 500 }
{ "_id" : ObjectId("5e07159125ddae1f53b621fe"), "Value" : 1000 }
Following is the query to find by id in MongoDB −
> db.findByIdDemo.find({"_id" :ObjectId("5e07158c25ddae1f53b621fd")});
This will produce the following output −
{ "_id" : ObjectId("5e07158c25ddae1f53b621fd"), "Value" : 500 } | [
{
"code": null,
"e": 1135,
"s": 1062,
"text": "To find by id in MongoDB, use the find() method as in the below syntax −"
},
{
"code": null,
"e": 1180,
"s": 1135,
"text": "db.findByIdDemo.find({\"_id\" :yourObjectId});"
},
{
"code": null,
"e": 1256,
"s": 1180,
"text": "To understand the above syntax, let us create a collection with documents −"
},
{
"code": null,
"e": 1643,
"s": 1256,
"text": "> db.findByIdDemo.insertOne({\"Value\":10});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e07158925ddae1f53b621fc\")\n}\n> db.findByIdDemo.insertOne({\"Value\":500});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e07158c25ddae1f53b621fd\")\n}\n> db.findByIdDemo.insertOne({\"Value\":1000});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e07159125ddae1f53b621fe\")\n}"
},
{
"code": null,
"e": 1742,
"s": 1643,
"text": "Following is the query to display all documents from a collection with the help of find() method −"
},
{
"code": null,
"e": 1768,
"s": 1742,
"text": "> db.findByIdDemo.find();"
},
{
"code": null,
"e": 1809,
"s": 1768,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1999,
"s": 1809,
"text": "\"_id\" : ObjectId(\"5e07158925ddae1f53b621fc\"), \"Value\" : 10 }\n{ \"_id\" : ObjectId(\"5e07158c25ddae1f53b621fd\"), \"Value\" : 500 }\n{ \"_id\" : ObjectId(\"5e07159125ddae1f53b621fe\"), \"Value\" : 1000 }"
},
{
"code": null,
"e": 2052,
"s": 1999,
"text": "Following is the query to find by id in MongoDB −/p>"
},
{
"code": null,
"e": 2123,
"s": 2052,
"text": "> db.findByIdDemo.find({\"_id\" :ObjectId(\"5e07158c25ddae1f53b621fd\")});"
},
{
"code": null,
"e": 2164,
"s": 2123,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2228,
"s": 2164,
"text": "{ \"_id\" : ObjectId(\"5e07158c25ddae1f53b621fd\"), \"Value\" : 500 }"
}
]
|
Bubble Charts. Plotly Express vs. Plotly.graph_objects | by Darío Weitz | Towards Data Science | Data visualization to tell a story is the most primitive form of human communication. There are cave drawings dating back to 44,000 BC, long before written communication dating back to about 3,000 BC.
According to the theory of evolution, we are a visual species: we evolved in such a way that an enormous amount of mental resources are used for visual perception and knowledge acquisition. We pay much more attention to visual elements than to words.
Our brain is a parallel image processor with a very wide bandwidth. Some data confirming the above: 90% of the information transmitted to our brain is visual; visual information is transmitted to the brain 60,000 times faster than the same information in text form; 65% of the student population is composed of visual learners.
Data visualization is a tool for communication. It is the most powerful way to tell the story present in our data and to communicate it to an appropriate audience. But in a universe full of screens and visual elements, we have to prevent our audience from reaching the symptom of visual fatigue.
So our charts and figures should: be visually interesting; show a clear story; have information to draw from; have an informative title, clearly labeled axes, appropriate legends, and preferably be a two-dimensional chart with no unnecessary elements.
When the nature of the message is to show a connection or correlation between three or more numerical variables, a good chart should help us find patterns, establish the presence or absence of correlations, identify outliers, and show the existence of clusters and gaps. Bubble charts are well-suited for such purposes.
Bubble charts are used to determine if at least three numerical variables are related or share some kind of pattern. A bubble or disk is drawn for each observation of a pair of numerical variables (A, B) positioning, in a Cartesian coordinate system, the disk horizontally according to the value of variable A and vertically according to variable B. A third numerical variable(C) is represented by means of the area of the bubble. You can even incorporate a fourth variable (D: numerical or categorical) using different colors in different bubbles.
The storytelling (showing relationships between three or four variables, but not their exact values) is narrated from the shape that these data points generate as well as from the differences in the relative sizes of the bubbles or discs.
Under special circumstances, they could be used to show trends over time or to compare categorical variables. They are considered a natural extension of the scatter plot where the dots are replaced with bubbles or disks.
Plotly Express (PE) is a high-level wrapper for Plotly.py fully compatible with the rest of the Plotly ecosystem, simple, powerful, and somewhat similar to Seaborn. It’s free and can be used in commercial applications and products. The library includes functions to plot trendlines and maps, as well as to perform faceting and animations. With PE you can make interactive graphics online but you can also save them offline.
We worked with a dataset downloaded from Kaggle [1]. It belongs to nutrition data on 80 cereal products. The data were collected by students from Cornell University at a local Wegmans supermarket in the early 1990s [2]. We would like to know if there is any correlation between Consumer Reports ratings and some specific nutrition data (sugar, sodium, vitamins).
First, we imported Plotly Express as px, the Pandas library as pd and converted our csv file into a dataframe:
import pandas as pd
import plotly.express as px

df = pd.read_csv(path + 'cereals.csv', index_col = False, header = 0, sep = ';', engine='python')
The screenshot below shows the first ten records of the dataset:
A practical Data Exploration indicates that we only have to eliminate some rows with N/A values using df.dropna(inplace = True) and the dataset is ready for drawing a figure.
For the bubble chart in this article, the Plotly Express function is px.scatter and the corresponding parameters are: data_frame; x= a name of a column in data_frame representing one of the numerical variables; y= a name of a column in data_frame representing another numerical variable; size= a name of a column in data_frame representing the third numerical variable by means of the area of the bubbles.
df.dropna(inplace = True)
fig0 = px.scatter(df, x = 'sugars', y = 'rating', size = 'vitamins')
fig0.write_image(path + "figbubble0.png")
fig0.show()
Undoubtedly, it is not an appropriate chart for useful storytelling.
We considered the bubble at 0 sugar and 93 rating to be an outlier. Instead of deleting the point, we established the y-axe in the range [0–80]. Also, we incorporated a fourth numerical variable using the color parameter in px.scatter.
We updated the chart with update_layout: set the title, the size of the font, the template, and the figure dimensions with width and height. Then we updated the x-axis and the y-axis (text, font, tickfont). We saved the chart as a static png file and, finally, we drew the chart.
fig1 = px.scatter(df, x = 'sugars', y = 'rating', size = 'vitamins', color = 'sodium')

fig1.update_layout(
    title = "Cereals Consumer Reports Ratings",
    title_font_size = 40,
    template = 'seaborn',
    width = 1600, height = 1400)

fig1.update_xaxes(
    title_text = 'Sugar',
    title_font = dict(size=30, family='Verdana', color='purple'),
    tickfont = dict(family='Calibri', color='black', size=25))

fig1.update_yaxes(
    title_text = "Rating", range = (0, 80),
    title_font = dict(size=30, family='Verdana', color='orange'),
    tickfont = dict(family='Calibri', color='black', size=25))

fig1.write_image(path + "figbubble1.png")
fig1.show()
Figure 1 clearly shows a negative correlation between consumer ratings and the level of sugar in the cereals. The colored vertical scale at the right side indicates the amount of sodium (color = ‘sodium’). Since there is an even distribution of colored bubbles across the chart, we can conclude that the amount of sodium does not significantly influence consumer appreciation. Finally, the size of the bubbles is related to the number of vitamins present in the cereals. The figure shows a null relationship between ratings and vitamins.
The plotly.graph_objects module contains a hierarchy of Python classes. Figure is a primary class. Figure has a data attribute and a layout attribute. The data attribute holds more than 40 objects, each of which refers to a specific type of chart (trace) with its corresponding parameters. The layout attribute specifies the properties of the figure as a whole (axes, title, shapes, legends, etc.).
The conceptual idea with plotly.graph_objects is to use .add_trace(go.Scatter()) to create the figure and then add methods such as .update_layout(), .update_xaxes(), and .update_yaxes() to manipulate the figure. Finally, we export the figure with .write_image() and render it with .show().
Note that we typed mode = ‘markers’ and a marker dict with color, colorscale, opacity, size, and other sizing parameters. In particular, the sizeref attribute allows scaling the size of the bubbles (per Plotly’s sizing guideline, it is computed from the maximum of the size column, the ‘vitamins’ column here), while colorscale allows displaying a particular color palette. The text of the legend is indicated by means of the name attribute.
import plotly.graph_objects as go

fig2 = go.Figure()

# sizeref follows Plotly's recommended scaling formula; it is based on the
# 'vitamins' column, which supplies the bubble sizes.
sizeref = 2. * max(df['vitamins']) / (150 ** 2)

fig2.add_trace(go.Scatter(
    x = df['sugars'], y = df['rating'],
    mode = 'markers',
    name = 'Size = vitamins * Color = sodium',
    marker = dict(color = df['sodium'], colorscale = 'portland',
                  opacity = 0.8, size = df['vitamins'],
                  sizemode = 'area', sizeref = sizeref,
                  sizemin = 4, showscale = True)))
fig2.update_layout(title = "Cereals Consumer Reports Ratings",
                   title_font_size = 40, template = 'seaborn',
                   width = 1600, height = 1400)
fig2.update_layout(legend = dict(yanchor = "top", y = 0.99,
                                 xanchor = "left", x = 0.01),
                   legend_font_size = 20, showlegend = True)
fig2.update_xaxes(title_text = 'Sugar',
                  title_font = dict(size = 30, family = 'Verdana', color = 'purple'),
                  tickfont = dict(family = 'Calibri', color = 'black', size = 25))
fig2.update_yaxes(title_text = "Rating", range = (0, 80),
                  title_font = dict(size = 30, family = 'Verdana', color = 'orange'),
                  tickfont = dict(family = 'Calibri', color = 'black', size = 25))
fig2.write_image(path + "figbubble2.png")
fig2.show()
To sum up:
Bubble charts are appropriate when we want to show relationships between three or four variables but not their exact values. Plotly Express and plotly.graph_objects allow you to create high-quality static images with a few consistent lines of code.
But you must be aware of the following warnings:
Keep in mind that the quantity encoded by a bubble is its area, which grows with the square of the radius, not with the radius itself (see the short numerical sketch after this list);
Unlike scatter plots, bubble charts do not improve with an increasing number of data points, as they quickly become cluttered;
They should not be used for the representation of zero or negative values since there are no negative or zero areas;
To show trends over time with bubble charts you always have to put the time variable on the horizontal axis.
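The first warning deserves a one-line numerical illustration (a minimal sketch, independent of any dataset):

import math

# If a bubble's AREA encodes the value v, the radius only grows like sqrt(v):
# area = pi * r**2  =>  r = sqrt(v / pi)
for v in (1, 2, 4):
    print(f"value = {v}, radius = {math.sqrt(v / math.pi):.3f}")

# Doubling the value multiplies the radius by sqrt(2) = 1.414..., not by 2,
# which is why area-based sizing (sizemode = 'area') should be preferred.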
If you find this article of interest, please read my previous articles (https://medium.com/@dar.wtz):
Area Charts with Plotly Express, Traces & Layout
Histograms with Plotly Express, Themes & Templates
Can you assign an Array of 100 elements to an array of 10 elements in Java? | In general, arrays are containers that store multiple values of the same datatype. They are of fixed size, and the size is determined at the time of creation. Each element in an array is identified by an index starting from 0.
You can access the elements of an array using its name and the position of the element as −
System.out.println(myArray[3]);
//Which is 1457
In Java, arrays are treated as reference types: you can create an array using the new keyword, similar to objects, and populate it using the indices as −
int myArray[] = new int[7];
While creating an array in this way, you must specify its size.
You can also directly assign values within curly braces, separating them with commas (,) as −
int myArray[] = {1254, 1458, 5687, 1457, 4554, 5445, 7524};
Yes, you can assign an array of 100 elements to an array variable that was declared or initialized with 10 elements, provided both arrays are of the same type.
While assigning, the compiler does not check the sizes; it just verifies that the types of both arrays match and proceeds further. This works because array variables in Java are only references to array objects, and sizes matter only at runtime (an out-of-range index throws an ArrayIndexOutOfBoundsException).
import java.util.Arrays;
public class Test {
   public static void main(String[] args) {
      // An array variable initialized with 10 elements...
      int[] tenArray = new int[10];
      // ...and an array of 100 elements
      int[] intArray = new int[100];
      for(int i = 0; i < 100; i++) {
         intArray[i] = i;
      }
      // Legal: both variables are of type int[]; tenArray now
      // simply refers to the 100-element array object.
      tenArray = intArray;
      System.out.println(Arrays.toString(tenArray));
   }
}
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49,
50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95,
96, 97, 98, 99]
Check if a graph has a cycle of odd length - GeeksforGeeks | 07 Nov, 2021
Given a graph, the task is to find if it has a cycle of odd length or not.
The idea is based on an important fact: a graph does not contain a cycle of odd length if and only if it is Bipartite, i.e., it can be colored with two colors.
It is obvious that if a graph has an odd length cycle, then it cannot be Bipartite. In a Bipartite graph there are two sets of vertices such that no vertex in a set is connected with any other vertex of the same set. For a cycle of odd length, two vertices of the same set must be connected, which contradicts the Bipartite definition.
Let us understand the converse: if a graph has no odd cycle, then it must be Bipartite. Below is an induction-based proof of this taken from http://infohost.nmt.edu/~math/faculty/barefoot/Math321Spring98/BipartiteGraphsAndEvenCycles.html
Assume that (X, Y) is a bipartition of G and let C = u1, u2, . . . , uk be a cycle of G, where u1 is in the vertex set X (abbreviated u1 ∈ X). If u1 ∈ X then u2 ∈ Y, . . . and, in general, u2j+1 ∈ X and u2j ∈ Y. Since C is a cycle, uk ∈ Y, so that k = 2s for some positive integer s. Therefore cycle C is even.
Assume that graph G has no odd cycles. It will be shown that such a graph is bipartite. The proof is by induction on the number of edges. The assertion is clearly true for a graph with at most one edge. Assume that every graph with no odd cycles and at most q edges is bipartite, and let G be a graph with q + 1 edges and with no odd cycles. Let e = uv be an edge of G and consider the graph H = G – uv. By induction, H has a bipartition (X, Y). If e has one end in X and the other end in Y, then (X, Y) is a bipartition of G. Hence, assume that u and v are in X. If there were a path, P, between u and v in H, then the length of P would be even. Thus, P + uv would be an odd cycle of G. Therefore, u and v must lie in different “pieces” or components of H. Thus, we have:
where X = X1 ∪ X2 and Y = Y1 ∪ Y2, with (X1, Y1) and (X2, Y2) being the bipartitions of the two components containing u and v, respectively. In this case it is clear that (X1 ∪ Y2, X2 ∪ Y1) is a bipartition of G.
Therefore we conclude that every graph with no odd cycles is bipartite. One can construct a bipartition as follows:
(1) Choose an arbitrary vertex x0 and set X0 = {x0}.
(2) Let Y0 be the set of all vertices adjacent to x0 and iterate steps 3-4.
(3) Let Xk be the set of vertices not chosen that are adjacent to a vertex of Yk-1.
(4) Let Yk be the set of vertices not chosen that are adjacent to a vertex of Xk-1.
(5) If all vertices of G have been chosen, then X = X0 ∪ X1 ∪ X2 ∪ . . . and Y = Y0 ∪ Y1 ∪ Y2 ∪ . . .
Below is code to check if a graph has an odd cycle or not. The code basically checks whether the graph is Bipartite.
C++
Java
Python3
C#
Javascript
// C++ program to find out whether a given graph is
// Bipartite or not
#include <bits/stdc++.h>
#define V 4
using namespace std;

// This function returns true if graph G[V][V] contains
// odd cycle, else false
bool containsOdd(int G[][V], int src)
{
    // Create a color array to store colors assigned
    // to all vertices. Vertex number is used as index
    // in this array. The value '-1' of colorArr[i]
    // is used to indicate that no color is assigned to
    // vertex 'i'. The value 1 is used to indicate first
    // color is assigned and value 0 indicates second
    // color is assigned.
    int colorArr[V];
    for (int i = 0; i < V; ++i)
        colorArr[i] = -1;

    // Assign first color to source
    colorArr[src] = 1;

    // Create a queue (FIFO) of vertex numbers and
    // enqueue source vertex for BFS traversal
    queue<int> q;
    q.push(src);

    // Run while there are vertices in queue (Similar to BFS)
    while (!q.empty())
    {
        // Dequeue a vertex from queue
        int u = q.front();
        q.pop();

        // Return true if there is a self-loop
        if (G[u][u] == 1)
            return true;

        // Find all non-colored adjacent vertices
        for (int v = 0; v < V; ++v)
        {
            // An edge from u to v exists and destination
            // v is not colored
            if (G[u][v] && colorArr[v] == -1)
            {
                // Assign alternate color to this adjacent
                // v of u
                colorArr[v] = 1 - colorArr[u];
                q.push(v);
            }

            // An edge from u to v exists and destination
            // v is colored with same color as u
            else if (G[u][v] && colorArr[v] == colorArr[u])
                return true;
        }
    }

    // If we reach here, then all adjacent
    // vertices can be colored with alternate
    // color
    return false;
}

// Driver program to test above function
int main()
{
    int G[][V] = {{0, 1, 0, 1},
                  {1, 0, 1, 0},
                  {0, 1, 0, 1},
                  {1, 0, 1, 0}};

    containsOdd(G, 0) ? cout << "Yes" : cout << "No";
    return 0;
}
// JAVA Code For Check if a graph has a cycle
// of odd length
import java.util.*;

class GFG {
    public static int V = 4;

    // This function returns true if graph G[V][V]
    // contains odd cycle, else false
    public static boolean containsOdd(int G[][], int src)
    {
        // Create a color array to store colors assigned
        // to all vertices. Vertex number is used as
        // index in this array. The value '-1' of
        // colorArr[i] is used to indicate that no color
        // is assigned to vertex 'i'. The value 1 is
        // used to indicate first color is assigned and
        // value 0 indicates second color is assigned.
        int colorArr[] = new int[V];
        for (int i = 0; i < V; ++i)
            colorArr[i] = -1;

        // Assign first color to source
        colorArr[src] = 1;

        // Create a queue (FIFO) of vertex numbers and
        // enqueue source vertex for BFS traversal
        LinkedList<Integer> q = new LinkedList<Integer>();
        q.add(src);

        // Run while there are vertices in queue
        // (Similar to BFS)
        while (!q.isEmpty())
        {
            // Dequeue a vertex from queue
            int u = q.peek();
            q.pop();

            // Return true if there is a self-loop
            if (G[u][u] == 1)
                return true;

            // Find all non-colored adjacent vertices
            for (int v = 0; v < V; ++v)
            {
                // An edge from u to v exists and
                // destination v is not colored
                if (G[u][v] == 1 && colorArr[v] == -1)
                {
                    // Assign alternate color to this
                    // adjacent v of u
                    colorArr[v] = 1 - colorArr[u];
                    q.push(v);
                }

                // An edge from u to v exists and
                // destination v is colored with same
                // color as u
                else if (G[u][v] == 1 && colorArr[v] == colorArr[u])
                    return true;
            }
        }

        // If we reach here, then all adjacent
        // vertices can be colored with alternate
        // color
        return false;
    }

    /* Driver program to test above function */
    public static void main(String[] args)
    {
        int G[][] = {{0, 1, 0, 1},
                     {1, 0, 1, 0},
                     {0, 1, 0, 1},
                     {1, 0, 1, 0}};

        if (containsOdd(G, 0))
            System.out.println("Yes");
        else
            System.out.println("No");
    }
}

// This code is contributed by Arnav Kr. Mandal.
# Python3 program to find out whether
# a given graph is Bipartite or not
import queue

# This function returns true if graph
# G[V][V] contains odd cycle, else false
def containsOdd(G, src):
    global V

    # Create a color array to store
    # colors assigned to all vertices.
    # Vertex number is used as index
    # in this array. The value '-1' of
    # colorArr[i] is used to indicate
    # that no color is assigned to vertex
    # 'i'. The value 1 is used to indicate
    # first color is assigned and value 0
    # indicates second color is assigned.
    colorArr = [-1] * V

    # Assign first color to source
    colorArr[src] = 1

    # Create a queue (FIFO) of vertex
    # numbers and enqueue source vertex
    # for BFS traversal
    q = queue.Queue()
    q.put(src)

    # Run while there are vertices in
    # queue (Similar to BFS)
    while (not q.empty()):

        # Dequeue a vertex from queue
        u = q.get()

        # Return true if there is a self-loop
        if (G[u][u] == 1):
            return True

        # Find all non-colored adjacent vertices
        for v in range(V):

            # An edge from u to v exists and
            # destination v is not colored
            if (G[u][v] and colorArr[v] == -1):

                # Assign alternate color to this
                # adjacent v of u
                colorArr[v] = 1 - colorArr[u]
                q.put(v)

            # An edge from u to v exists and
            # destination v is colored with
            # same color as u
            elif (G[u][v] and colorArr[v] == colorArr[u]):
                return True

    # If we reach here, then all
    # adjacent vertices can be
    # colored with alternate color
    return False

# Driver Code
V = 4
G = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]

if containsOdd(G, 0):
    print("Yes")
else:
    print("No")

# This code is contributed by PranchalK
// C# Code For Check if a graph has a cycle
// of odd length
using System;
using System.Collections.Generic;

class GFG {
    public static int V = 4;

    // This function returns true if graph G[V,V]
    // contains odd cycle, else false
    public static bool containsOdd(int[,] G, int src)
    {
        // Create a color array to store colors assigned
        // to all vertices. Vertex number is used as
        // index in this array. The value '-1' of
        // colorArr[i] is used to indicate that no color
        // is assigned to vertex 'i'. The value 1 is
        // used to indicate first color is assigned and
        // value 0 indicates second color is assigned.
        int[] colorArr = new int[V];
        for (int i = 0; i < V; ++i)
            colorArr[i] = -1;

        // Assign first color to source
        colorArr[src] = 1;

        // Create a queue (FIFO) of vertex numbers and
        // enqueue source vertex for BFS traversal
        Queue<int> q = new Queue<int>();
        q.Enqueue(src);

        // Run while there are vertices in queue
        // (Similar to BFS)
        while (q.Count != 0)
        {
            // Dequeue a vertex from queue
            int u = q.Peek();
            q.Dequeue();

            // Return true if there is a self-loop
            if (G[u, u] == 1)
                return true;

            // Find all non-colored adjacent vertices
            for (int v = 0; v < V; ++v)
            {
                // An edge from u to v exists and
                // destination v is not colored
                if (G[u, v] == 1 && colorArr[v] == -1)
                {
                    // Assign alternate color to this
                    // adjacent v of u
                    colorArr[v] = 1 - colorArr[u];
                    q.Enqueue(v);
                }

                // An edge from u to v exists and
                // destination v is colored with same
                // color as u
                else if (G[u, v] == 1 && colorArr[v] == colorArr[u])
                    return true;
            }
        }

        // If we reach here, then all adjacent
        // vertices can be colored with alternate
        // color
        return false;
    }

    /* Driver code */
    public static void Main()
    {
        int[,] G = {{0, 1, 0, 1},
                    {1, 0, 1, 0},
                    {0, 1, 0, 1},
                    {1, 0, 1, 0}};

        if (containsOdd(G, 0))
            Console.WriteLine("Yes");
        else
            Console.WriteLine("No");
    }
}

// This code has been contributed by 29AjayKumar
<script>
// JavaScript Code For Check if a graph has a cycle
// of odd length
var V = 4;

// This function returns true if graph G[V][V]
// contains odd cycle, else false
function containsOdd(G, src)
{
    // Create a color array to store colors assigned
    // to all vertices. Vertex number is used as
    // index in this array. The value '-1' of
    // colorArr[i] is used to indicate that no color
    // is assigned to vertex 'i'. The value 1 is
    // used to indicate first color is assigned and
    // value 0 indicates second color is assigned.
    var colorArr = Array(V).fill(-1);

    // Assign first color to source
    colorArr[src] = 1;

    // Create a queue (FIFO) of vertex numbers and
    // enqueue source vertex for BFS traversal
    var q = [];
    q.push(src);

    // Run while there are vertices in queue
    // (Similar to BFS)
    while (q.length != 0)
    {
        // Dequeue a vertex from queue
        var u = q[0];
        q.shift();

        // Return true if there is a self-loop
        if (G[u][u] == 1)
            return true;

        // Find all non-colored adjacent vertices
        for (var v = 0; v < V; ++v)
        {
            // An edge from u to v exists and
            // destination v is not colored
            if (G[u][v] == 1 && colorArr[v] == -1)
            {
                // Assign alternate color to this
                // adjacent v of u
                colorArr[v] = 1 - colorArr[u];
                q.push(v);
            }

            // An edge from u to v exists and
            // destination v is colored with same
            // color as u
            else if (G[u][v] == 1 && colorArr[v] == colorArr[u])
                return true;
        }
    }

    // If we reach here, then all adjacent
    // vertices can be colored with alternate
    // color
    return false;
}

/* Driver code */
var G = [[0, 1, 0, 1],
         [1, 0, 1, 0],
         [0, 1, 0, 1],
         [1, 0, 1, 0]];

if (containsOdd(G, 0))
    document.write("Yes");
else
    document.write("No");
</script>
Output:
No
The above algorithm works only if the graph is connected, i.e., all vertices are reachable from the source. We can extend it for the cases when the graph is not connected (please refer to this for details): in the above code, we always start with source 0 and assume that all vertices can be visited from it. One important observation is that a graph with no edges is also Bipartite. Note that the Bipartite condition says all edges should go from one set to the other.
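A minimal sketch of that extension (in Python, with a hypothetical helper name, not part of the original code): run the same BFS coloring from every vertex that is still uncolored, so every connected component is examined.

from collections import deque

def contains_odd_cycle(G):
    # Returns True if the undirected graph G (adjacency matrix) has a
    # cycle of odd length. The BFS 2-coloring is restarted from every
    # still-uncolored vertex, so disconnected graphs are handled as well.
    n = len(G)
    color = [-1] * n
    for src in range(n):
        if color[src] != -1:
            continue
        color[src] = 1
        q = deque([src])
        while q:
            u = q.popleft()
            if G[u][u] == 1:   # a self-loop is an odd cycle
                return True
            for v in range(n):
                if G[u][v] and color[v] == -1:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif G[u][v] and color[v] == color[u]:
                    return True
    return False

# Two disjoint triangles: the graph is disconnected, each part has an odd cycle.
G = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
print("Yes" if contains_odd_cycle(G) else "No")   # prints Yes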
Coin Change | DP-7 - GeeksforGeeks | 29 Mar, 2022
Given a value N, if we want to make change for N cents, and we have an infinite supply of each of S = { S1, S2, .. , Sm } valued coins, how many ways can we make the change? The order of coins doesn’t matter. For example, for N = 4 and S = {1,2,3}, there are four solutions: {1,1,1,1}, {1,1,2}, {2,2}, {1,3}, so the output should be 4. For N = 10 and S = {2, 5, 3, 6}, there are five solutions: {2,2,2,2,2}, {2,2,3,3}, {2,2,6}, {2,3,5} and {5,5}, so the output should be 5.
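For such small instances, the counts are easy to verify by brute force (a throwaway sketch in Python, not part of the algorithm below; it assumes all coin values are at least 1, so no solution uses more than N coins):

from itertools import combinations_with_replacement

def count_by_enumeration(S, n):
    # Enumerate every multiset of 1..n coins and count those summing to n.
    ways = 0
    for k in range(1, n + 1):
        for combo in combinations_with_replacement(S, k):
            if sum(combo) == n:
                ways += 1
    return ways

print(count_by_enumeration([1, 2, 3], 4))       # 4
print(count_by_enumeration([2, 5, 3, 6], 10))   # 5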
1) Optimal Substructure
To count the total number of solutions, we can divide the set of all solutions into two sets: 1) solutions that do not contain the mth coin (or Sm), and 2) solutions that contain at least one Sm. Let count(S[], m, n) be the function to count the number of solutions; then it can be written as the sum of count(S[], m-1, n) and count(S[], m, n-Sm). Therefore, the problem has the optimal substructure property, as it can be solved using solutions to subproblems.
2) Overlapping Subproblems
Following is a simple recursive implementation of the Coin Change problem. The implementation simply follows the recursive structure mentioned above.
3) Approach (Algorithm)
See, here each coin of a given denomination can appear an infinite number of times (repetition allowed); this is what we call UNBOUNDED KNAPSACK. We have 2 choices for a coin of a particular denomination: either i) include it, or ii) exclude it. But here, the inclusion is not limited to once; we can include any denomination any number of times, as long as the remaining amount stays non-negative.
Basically, if we are at S[m-1], we can take as many instances of that coin as we like (unbounded inclusion), i.e., count(S, m, n – S[m-1]); or we move on to S[m-2], after which we can’t come back and make choices for S[m-1], i.e., count(S, m-1, n).
Finally, as we have to find the total number of ways, we add these 2 possible choices, i.e., count(S, m, n – S[m-1]) + count(S, m-1, n), which is our required answer.
C++
C
Java
Python3
C#
PHP
Javascript
// Recursive C++ program for
// coin change problem.
#include <bits/stdc++.h>
using namespace std;

// Returns the count of ways we can
// sum S[0...m-1] coins to get sum n
int count(int S[], int m, int n)
{
    // If n is 0 then there is 1 solution
    // (do not include any coin)
    if (n == 0)
        return 1;

    // If n is less than 0 then no
    // solution exists
    if (n < 0)
        return 0;

    // If there are no coins and n
    // is greater than 0, then no
    // solution exist
    if (m <= 0 && n >= 1)
        return 0;

    // count is sum of solutions (i)
    // including S[m-1] (ii) excluding S[m-1]
    return count(S, m - 1, n) + count(S, m, n - S[m - 1]);
}

// Driver code
int main()
{
    int i, j;
    int arr[] = { 1, 2, 3 };
    int m = sizeof(arr) / sizeof(arr[0]);
    cout << " " << count(arr, m, 4);
    return 0;
}

// This code is contributed by shivanisinghss2110
// Recursive C program for
// coin change problem.
#include <stdio.h>

// Returns the count of ways we can
// sum S[0...m-1] coins to get sum n
int count(int S[], int m, int n)
{
    // If n is 0 then there is 1 solution
    // (do not include any coin)
    if (n == 0)
        return 1;

    // If n is less than 0 then no
    // solution exists
    if (n < 0)
        return 0;

    // If there are no coins and n
    // is greater than 0, then no
    // solution exist
    if (m <= 0 && n >= 1)
        return 0;

    // count is sum of solutions (i)
    // including S[m-1] (ii) excluding S[m-1]
    return count(S, m - 1, n) + count(S, m, n - S[m - 1]);
}

// Driver program to test above function
int main()
{
    int i, j;
    int arr[] = {1, 2, 3};
    int m = sizeof(arr) / sizeof(arr[0]);
    printf("%d ", count(arr, m, 4));
    getchar();
    return 0;
}
// Recursive JAVA program for
// coin change problem.
import java.util.*;

class GFG {

    // Returns the count of ways we can
    // sum S[0...m-1] coins to get sum n
    static int count(int S[], int m, int n)
    {
        // If n is 0 then there is 1 solution
        // (do not include any coin)
        if (n == 0)
            return 1;

        // If n is less than 0 then no
        // solution exists
        if (n < 0)
            return 0;

        // If there are no coins and n
        // is greater than 0, then no
        // solution exist
        if (m <= 0 && n >= 1)
            return 0;

        // count is sum of solutions (i)
        // including S[m-1] (ii) excluding S[m-1]
        return count(S, m - 1, n) + count(S, m, n - S[m - 1]);
    }

    // Driver code
    public static void main(String args[])
    {
        int arr[] = { 1, 2, 3 };
        int m = arr.length;
        System.out.println(count(arr, m, 4));
    }
}

// This code is contributed by jyoti369
# Recursive Python3 program for
# coin change problem.

# Returns the count of ways we can sum
# S[0...m-1] coins to get sum n
def count(S, m, n):

    # If n is 0 then there is 1
    # solution (do not include any coin)
    if (n == 0):
        return 1

    # If n is less than 0 then no
    # solution exists
    if (n < 0):
        return 0

    # If there are no coins and n
    # is greater than 0, then no
    # solution exist
    if (m <= 0 and n >= 1):
        return 0

    # count is sum of solutions (i)
    # including S[m-1] (ii) excluding S[m-1]
    return count(S, m - 1, n) + count(S, m, n - S[m - 1])

# Driver program to test above function
arr = [1, 2, 3]
m = len(arr)
print(count(arr, m, 4))

# This code is contributed by Smitha Dinesh Semwal
// Recursive C# program for
// coin change problem.
using System;

class GFG
{
    // Returns the count of ways we can
    // sum S[0...m-1] coins to get sum n
    static int count(int[] S, int m, int n)
    {
        // If n is 0 then there is 1 solution
        // (do not include any coin)
        if (n == 0)
            return 1;

        // If n is less than 0 then no
        // solution exists
        if (n < 0)
            return 0;

        // If there are no coins and n
        // is greater than 0, then no
        // solution exist
        if (m <= 0 && n >= 1)
            return 0;

        // count is sum of solutions (i)
        // including S[m-1] (ii) excluding S[m-1]
        return count(S, m - 1, n) + count(S, m, n - S[m - 1]);
    }

    // Driver program
    public static void Main()
    {
        int[] arr = {1, 2, 3};
        int m = arr.Length;
        Console.Write(count(arr, m, 4));
    }
}
// This code is contributed by Sam007
<?php
// Recursive PHP program for
// coin change problem.

// Returns the count of ways we can
// sum S[0...m-1] coins to get sum n
function coun($S, $m, $n)
{
    // If n is 0 then there is
    // 1 solution (do not include
    // any coin)
    if ($n == 0)
        return 1;

    // If n is less than 0 then no
    // solution exists
    if ($n < 0)
        return 0;

    // If there are no coins and n
    // is greater than 0, then no
    // solution exist
    if ($m <= 0 && $n >= 1)
        return 0;

    // count is sum of solutions (i)
    // including S[m-1] (ii) excluding S[m-1]
    return coun($S, $m - 1, $n) +
           coun($S, $m, $n - $S[$m - 1]);
}

// Driver Code
$arr = array(1, 2, 3);
$m = count($arr);
echo coun($arr, $m, 4);

// This code is contributed by Sam007
?>
<script>
// Recursive javascript program for
// coin change problem.

// Returns the count of ways we can
// sum S[0...m-1] coins to get sum n
function count(S, m, n)
{
    // If n is 0 then there is 1 solution
    // (do not include any coin)
    if (n == 0)
        return 1;

    // If n is less than 0 then no
    // solution exists
    if (n < 0)
        return 0;

    // If there are no coins and n
    // is greater than 0, then no
    // solution exist
    if (m <= 0 && n >= 1)
        return 0;

    // count is sum of solutions (i)
    // including S[m-1] (ii) excluding S[m-1]
    return count(S, m - 1, n) + count(S, m, n - S[m - 1]);
}

// Driver program to test above function
var arr = [1, 2, 3];
var m = arr.length;
document.write(count(arr, m, 4));

// This code is contributed by Amit Katiyar
</script>
4
It should be noted that the above function computes the same subproblems again and again. See the following recursion tree for S = {1, 2, 3} and n = 5.
The function C({1}, 3) is called two times. If we draw the complete tree, then we can see that there are many subproblems being called more than once.
C() --> count()
C({1,2,3}, 5)
/ \
/ \
C({1,2,3}, 2) C({1,2}, 5)
/ \ / \
/ \ / \
C({1,2,3}, -1) C({1,2}, 2) C({1,2}, 3) C({1}, 5)
/ \ / \ / \
/ \ / \ / \
C({1,2},0) C({1},2) C({1,2},1) C({1},3) C({1}, 4) C({}, 5)
/ \ / \ /\ / \
/ \ / \ / \ / \
. . . . . . C({1}, 3) C({}, 4)
/ \
/ \
. .
Since the same subproblems are called again, this problem has the Overlapping Subproblems property. So the Coin Change problem has both properties (see this and this) of a dynamic programming problem. Like other typical Dynamic Programming (DP) problems, recomputations of the same subproblems can be avoided by constructing a temporary array table[][] in a bottom-up manner.
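Before the bottom-up table below, note that the same recurrence can also be memoized top-down (a hedged alternative sketch in Python, not from the original article):

from functools import lru_cache

def count_memo(S, n):
    m = len(S)

    @lru_cache(maxsize=None)
    def rec(m, n):
        if n == 0:
            return 1      # the empty selection
        if n < 0 or m <= 0:
            return 0      # overshot the sum, or no coin kinds left
        # exclude coin S[m-1], or use one more coin S[m-1]
        return rec(m - 1, n) + rec(m, n - S[m - 1])

    return rec(m, n)

print(count_memo([1, 2, 3], 4))   # 4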
Dynamic Programming Solution
C++
C
Java
Python
C#
PHP
Javascript
// C++ program for coin change problem.
#include <bits/stdc++.h>
using namespace std;

int count(int S[], int m, int n)
{
    int i, j, x, y;

    // We need n+1 rows as the table
    // is constructed in bottom up
    // manner using the base case 0
    // value case (n = 0)
    int table[n + 1][m];

    // Fill the entries for 0
    // value case (n = 0)
    for (i = 0; i < m; i++)
        table[0][i] = 1;

    // Fill rest of the table entries
    // in bottom up manner
    for (i = 1; i < n + 1; i++)
    {
        for (j = 0; j < m; j++)
        {
            // Count of solutions including S[j]
            x = (i - S[j] >= 0) ? table[i - S[j]][j] : 0;

            // Count of solutions excluding S[j]
            y = (j >= 1) ? table[i][j - 1] : 0;

            // total count
            table[i][j] = x + y;
        }
    }
    return table[n][m - 1];
}

// Driver Code
int main()
{
    int arr[] = {1, 2, 3};
    int m = sizeof(arr) / sizeof(arr[0]);
    int n = 4;
    cout << count(arr, m, n);
    return 0;
}

// This code is contributed
// by Akanksha Rai(Abby_akku)
// C program for coin change problem.
#include <stdio.h>

int count(int S[], int m, int n)
{
    int i, j, x, y;

    // We need n+1 rows as the table is constructed
    // in bottom up manner using the base case 0
    // value case (n = 0)
    int table[n + 1][m];

    // Fill the entries for 0 value case (n = 0)
    for (i = 0; i < m; i++)
        table[0][i] = 1;

    // Fill rest of the table entries in bottom
    // up manner
    for (i = 1; i < n + 1; i++)
    {
        for (j = 0; j < m; j++)
        {
            // Count of solutions including S[j]
            x = (i - S[j] >= 0) ? table[i - S[j]][j] : 0;

            // Count of solutions excluding S[j]
            y = (j >= 1) ? table[i][j - 1] : 0;

            // total count
            table[i][j] = x + y;
        }
    }
    return table[n][m - 1];
}

// Driver program to test above function
int main()
{
    int arr[] = {1, 2, 3};
    int m = sizeof(arr) / sizeof(arr[0]);
    int n = 4;
    printf(" %d ", count(arr, m, n));
    return 0;
}
/* Dynamic Programming Java implementation of Coin
   Change problem */
import java.util.Arrays;

class CoinChange
{
    static long countWays(int S[], int m, int n)
    {
        // Time complexity of this function: O(mn)
        // Space Complexity of this function: O(n)

        // table[i] will be storing the number of solutions
        // for value i. We need n+1 rows as the table is
        // constructed in bottom up manner using the base
        // case (n = 0)
        long[] table = new long[n + 1];

        // Initialize all table values as 0
        Arrays.fill(table, 0);   // O(n)

        // Base case (If given value is 0)
        table[0] = 1;

        // Pick all coins one by one and update the table[]
        // values after the index greater than or equal to
        // the value of the picked coin
        for (int i = 0; i < m; i++)
            for (int j = S[i]; j <= n; j++)
                table[j] += table[j - S[i]];

        return table[n];
    }

    // Driver Function to test above function
    public static void main(String args[])
    {
        int arr[] = {1, 2, 3};
        int m = arr.length;
        int n = 4;
        System.out.println(countWays(arr, m, n));
    }
}
// This code is contributed by Pankaj Kumar
# Dynamic Programming Python implementation of Coin
# Change problem
def count(S, m, n):

    # We need n+1 rows as the table is constructed
    # in bottom up manner using the base case 0 value
    # case (n = 0)
    table = [[0 for x in range(m)] for x in range(n + 1)]

    # Fill the entries for 0 value case (n = 0)
    for i in range(m):
        table[0][i] = 1

    # Fill rest of the table entries in bottom up manner
    for i in range(1, n + 1):
        for j in range(m):

            # Count of solutions including S[j]
            x = table[i - S[j]][j] if i - S[j] >= 0 else 0

            # Count of solutions excluding S[j]
            y = table[i][j - 1] if j >= 1 else 0

            # total count
            table[i][j] = x + y

    return table[n][m - 1]

# Driver program to test above function
arr = [1, 2, 3]
m = len(arr)
n = 4
print(count(arr, m, n))

# This code is contributed by Bhavya Jain
/* Dynamic Programming C# implementation of Coin
   Change problem */
using System;

class GFG
{
    static long countWays(int []S, int m, int n)
    {
        // Time complexity of this function: O(mn)
        // Space complexity of this function: O(n)

        // table[i] will be storing the number of solutions
        // for value i. We need n+1 rows as the table is
        // constructed in bottom up manner using the base
        // case (n = 0)
        int[] table = new int[n + 1];

        // Initialize all table values as 0
        for (int i = 0; i < table.Length; i++)
        {
            table[i] = 0;
        }

        // Base case (If given value is 0)
        table[0] = 1;

        // Pick all coins one by one and update the table[]
        // values after the index greater than or equal to
        // the value of the picked coin
        for (int i = 0; i < m; i++)
            for (int j = S[i]; j <= n; j++)
                table[j] += table[j - S[i]];

        return table[n];
    }

    // Driver Function
    public static void Main()
    {
        int []arr = {1, 2, 3};
        int m = arr.Length;
        int n = 4;
        Console.Write(countWays(arr, m, n));
    }
}
// This code is contributed by Sam007
<?php
// PHP program for
// coin change problem.

function count1($S, $m, $n)
{
    // We need n+1 rows as
    // the table is constructed
    // in bottom up manner
    // using the base case 0
    // value case (n = 0)
    $table = array();
    for ($i = 0; $i < $n + 1; $i++)
        for ($j = 0; $j < $m; $j++)
            $table[$i][$j] = 0;

    // Fill the entries for
    // 0 value case (n = 0)
    for ($i = 0; $i < $m; $i++)
        $table[0][$i] = 1;

    // Fill rest of the table
    // entries in bottom up manner
    for ($i = 1; $i < $n + 1; $i++)
    {
        for ($j = 0; $j < $m; $j++)
        {
            // Count of solutions
            // including S[j]
            $x = ($i - $S[$j] >= 0) ?
                  $table[$i - $S[$j]][$j] : 0;

            // Count of solutions
            // excluding S[j]
            $y = ($j >= 1) ?
                  $table[$i][$j - 1] : 0;

            // total count
            $table[$i][$j] = $x + $y;
        }
    }
    return $table[$n][$m - 1];
}

// Driver Code
$arr = array(1, 2, 3);
$m = count($arr);
$n = 4;
echo count1($arr, $m, $n);

// This code is contributed by mits
?>
<script>

/* Dynamic Programming javascript implementation of
   Coin Change problem */
function countWays(S, m, n)
{
    // Time complexity of this function: O(mn)
    // Space complexity of this function: O(n)

    // table[i] will be storing the number of solutions
    // for value i. We need n+1 rows as the table is
    // constructed in bottom up manner using the base
    // case (n = 0)

    // Initialize all table values as 0 // O(n)
    var table = Array(n + 1).fill(0);

    // Base case (If given value is 0)
    table[0] = 1;

    // Pick all coins one by one and update the table
    // values after the index greater than or equal to
    // the value of the picked coin
    for (var i = 0; i < m; i++)
        for (var j = S[i]; j <= n; j++)
            table[j] += table[j - S[i]];

    return table[n];
}

// Driver Function to test above function
var arr = [1, 2, 3];
var m = arr.length;
var n = 4;
document.write(countWays(arr, m, n));

// This code is contributed by 29AjayKumar

</script>
4
Time Complexity: O(mn)

Following is a simplified version of the above method; the auxiliary space required here is only O(n).
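As a quick illustration, worked out by hand for the same driver inputs used in the programs below (S = {1, 2, 3}, n = 4), the 1-D table evolves like this:

start (base case) : [1, 0, 0, 0, 0]
after coin 1      : [1, 1, 1, 1, 1]
after coin 2      : [1, 1, 2, 2, 3]
after coin 3      : [1, 1, 2, 3, 4]   --> table[4] = 4 ways

which matches the output shown after the code.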
C++
Java
Python
C#
PHP
Javascript
// The include below is added so this C++ snippet compiles on
// its own (memset lives in <cstring>).
#include <cstring>

int count(int S[], int m, int n)
{
    // table[i] will be storing the number of solutions for
    // value i. We need n+1 rows as the table is constructed
    // in bottom up manner using the base case (n = 0)
    int table[n + 1];

    // Initialize all table values as 0
    memset(table, 0, sizeof(table));

    // Base case (If given value is 0)
    table[0] = 1;

    // Pick all coins one by one and update the table[] values
    // after the index greater than or equal to the value of the
    // picked coin
    for (int i = 0; i < m; i++)
        for (int j = S[i]; j <= n; j++)
            table[j] += table[j - S[i]];

    return table[n];
}
public static int count(int S[], int m, int n)
{
    // table[i] will be storing the number of solutions for
    // value i. We need n+1 rows as the table is constructed
    // in bottom up manner using the base case (n = 0)
    int table[] = new int[n + 1];

    // Base case (If given value is 0)
    table[0] = 1;

    // Pick all coins one by one and update the table[] values
    // after the index greater than or equal to the value of the
    // picked coin
    for (int i = 0; i < m; i++)
        for (int j = S[i]; j <= n; j++)
            table[j] += table[j - S[i]];

    return table[n];
}
# Dynamic Programming Python implementation of Coin
# Change problem
def count(S, m, n):
    # table[i] will be storing the number of solutions for
    # value i. We need n+1 rows as the table is constructed
    # in bottom up manner using the base case (n = 0)
    # Initialize all table values as 0
    table = [0 for k in range(n + 1)]

    # Base case (If given value is 0)
    table[0] = 1

    # Pick all coins one by one and update the table[] values
    # after the index greater than or equal to the value of the
    # picked coin
    for i in range(0, m):
        for j in range(S[i], n + 1):
            table[j] += table[j - S[i]]

    return table[n]

# Driver program to test above function
arr = [1, 2, 3]
m = len(arr)
n = 4
x = count(arr, m, n)
print(x)

# This code is contributed by Afzal Ansari
// Dynamic Programming C# implementation
// of Coin Change problem
using System;

class GFG
{
    static int count(int []S, int m, int n)
    {
        // table[i] will be storing the
        // number of solutions for value i.
        // We need n+1 rows as the table
        // is constructed in bottom up manner
        // using the base case (n = 0)
        int [] table = new int[n + 1];

        // Base case (If given value is 0)
        table[0] = 1;

        // Pick all coins one by one and
        // update the table[] values after
        // the index greater than or equal
        // to the value of the picked coin
        for (int i = 0; i < m; i++)
            for (int j = S[i]; j <= n; j++)
                table[j] += table[j - S[i]];

        return table[n];
    }

    // Driver Code
    public static void Main()
    {
        int []arr = {1, 2, 3};
        int m = arr.Length;
        int n = 4;
        Console.Write(count(arr, m, n));
    }
}

// This code is contributed by Raj
<?php
function count_1( &$S, $m, $n )
{
    // table[i] will be storing the number
    // of solutions for value i. We need n+1
    // rows as the table is constructed in
    // bottom up manner using the base case (n = 0)
    $table = array_fill(0, $n + 1, 0);

    // Base case (If given value is 0)
    $table[0] = 1;

    // Pick all coins one by one and update
    // the table[] values after the index
    // greater than or equal to the value
    // of the picked coin
    for ($i = 0; $i < $m; $i++)
        for ($j = $S[$i]; $j <= $n; $j++)
            $table[$j] += $table[$j - $S[$i]];

    return $table[$n];
}

// Driver Code
$arr = array(1, 2, 3);
$m = sizeof($arr);
$n = 4;
$x = count_1($arr, $m, $n);
echo $x;

// This code is contributed
// by ChitraNayal
?>
<script>

// Dynamic Programming Javascript implementation
// of Coin Change problem
function count(S, m, n)
{
    // table[i] will be storing the
    // number of solutions for value i.
    // We need n+1 rows as the table
    // is constructed in bottom up manner
    // using the base case (n = 0)
    let table = new Array(n + 1);
    table.fill(0);

    // Base case (If given value is 0)
    table[0] = 1;

    // Pick all coins one by one and
    // update the table[] values after
    // the index greater than or equal
    // to the value of the picked coin
    for (let i = 0; i < m; i++)
        for (let j = S[i]; j <= n; j++)
            table[j] += table[j - S[i]];

    return table[n];
}

let arr = [1, 2, 3];
let m = arr.length;
let n = 4;
document.write(count(arr, m, n));

</script>
Output:
4
Thanks to Rohan Laishram for suggesting this space-optimized version. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
References: http://www.algorithmist.com/index.php/Coin_Change
Following is another Top Down DP Approach using memoization:
C++
Java
Python3
C#
Javascript
#include <bits/stdc++.h>
using namespace std;

int coinchange(vector<int>& a, int v, int n,
               vector<vector<int> >& dp)
{
    if (v == 0)
        return dp[n][v] = 1;
    if (n == 0)
        return 0;
    if (dp[n][v] != -1)
        return dp[n][v];
    if (a[n - 1] <= v) {
        // Either Pick this coin or not
        return dp[n][v] = coinchange(a, v - a[n - 1], n, dp)
                          + coinchange(a, v, n - 1, dp);
    }
    else // We have no option but to leave this coin
        return dp[n][v] = coinchange(a, v, n - 1, dp);
}

int32_t main()
{
    int tc = 1;
    // cin >> tc;
    while (tc--) {
        int n, v;
        n = 3, v = 4;
        vector<int> a = { 1, 2, 3 };
        vector<vector<int> > dp(n + 1, vector<int>(v + 1, -1));
        int res = coinchange(a, v, n, dp);
        cout << res << endl;
    }
}
// This Code is Contributed by
// Harshit Agrawal NITT
// Java program for the above approach
import java.util.*;

class GFG {
    static int coinchange(int[] a, int v, int n, int[][] dp)
    {
        if (v == 0)
            return dp[n][v] = 1;
        if (n == 0)
            return 0;
        if (dp[n][v] != -1)
            return dp[n][v];
        if (a[n - 1] <= v) {
            // Either Pick this coin or not
            return dp[n][v] = coinchange(a, v - a[n - 1], n, dp)
                              + coinchange(a, v, n - 1, dp);
        }
        else // We have no option but to leave this coin
            return dp[n][v] = coinchange(a, v, n - 1, dp);
    }

    // Driver code
    public static void main(String[] args)
    {
        int tc = 1;
        while (tc != 0) {
            int n, v;
            n = 3;
            v = 4;
            int[] a = { 1, 2, 3 };
            int[][] dp = new int[n + 1][v + 1];
            for (int[] row : dp)
                Arrays.fill(row, -1);
            int res = coinchange(a, v, n, dp);
            System.out.println(res);
            tc--;
        }
    }
}

// This code is contributed by rajsanghavi9.
# Python program for the above approach
def coinchange(a, v, n, dp):
    if (v == 0):
        dp[n][v] = 1
        return dp[n][v]
    if (n == 0):
        return 0
    if (dp[n][v] != -1):
        return dp[n][v]
    if (a[n - 1] <= v):
        # Either Pick this coin or not
        dp[n][v] = coinchange(a, v - a[n - 1], n, dp) + coinchange(a, v, n - 1, dp)
        return dp[n][v]
    else:
        # We have no option but to leave this coin
        dp[n][v] = coinchange(a, v, n - 1, dp)
        return dp[n][v]

# Driver code
if __name__ == '__main__':
    tc = 1
    while (tc != 0):
        n = 3
        v = 4
        a = [1, 2, 3]
        dp = [[-1 for i in range(v + 1)] for j in range(n + 1)]
        res = coinchange(a, v, n, dp)
        print(res)
        tc -= 1

# This code is contributed by Rajput-Ji
// C# program for the above approach
using System;

public class GFG {
    static int coinchange(int[] a, int v, int n, int[, ] dp)
    {
        if (v == 0)
            return dp[n, v] = 1;
        if (n == 0)
            return 0;
        if (dp[n, v] != -1)
            return dp[n, v];
        if (a[n - 1] <= v) {
            // Either Pick this coin or not
            return dp[n, v] = coinchange(a, v - a[n - 1], n, dp)
                              + coinchange(a, v, n - 1, dp);
        }
        else // We have no option but to leave this coin
            return dp[n, v] = coinchange(a, v, n - 1, dp);
    }

    // Driver code
    public static void Main(String[] args)
    {
        int tc = 1;
        while (tc != 0) {
            int n, v;
            n = 3;
            v = 4;
            int[] a = { 1, 2, 3 };
            int[, ] dp = new int[n + 1, v + 1];
            for (int j = 0; j < n + 1; j++) {
                for (int l = 0; l < v + 1; l++)
                    dp[j, l] = -1;
            }
            int res = coinchange(a, v, n, dp);
            Console.WriteLine(res);
            tc--;
        }
    }
}

// This code is contributed by umadevi9616
<script>

// javascript program for the above approach
function coinchange(a, v, n, dp)
{
    if (v == 0)
        return dp[n][v] = 1;
    if (n == 0)
        return 0;
    if (dp[n][v] != -1)
        return dp[n][v];
    if (a[n - 1] <= v) {
        // Either Pick this coin or not
        return dp[n][v] = coinchange(a, v - a[n - 1], n, dp)
                          + coinchange(a, v, n - 1, dp);
    }
    else // We have no option but to leave this coin
        return dp[n][v] = coinchange(a, v, n - 1, dp);
}

// Driver code
var tc = 1;
while (tc != 0) {
    var n, v;
    n = 3;
    v = 4;
    var a = [ 1, 2, 3 ];
    var dp = Array(n + 1).fill().map(() => Array(v + 1).fill(-1));
    var res = coinchange(a, v, n, dp);
    document.write(res);
    tc--;
}

// This code contributed by umadevi9616

</script>
4
Time Complexity: O(M*N)
Auxiliary Space: O(M*N)
Contributed by: Mayukh Sinha
Automating File Movement on your system | 20 Jan, 2022
Imagine a situation like this: You have a folder containing files of multiple types like txt, mp3, etc. You decide to clean up this mess and organize them in a way that images are in one folder and songs in another. Would you move each file by hand, i.e. number of moves = number of files? No, I'll not be doing this :).
This task can be automated by writing the Python script below, which creates a separate directory for each file type and moves the files to their respective destinations.
The script makes use of the module named os, which allows us to use OS-specific functionality. Please note that you need to set up Python 2 on your system before any of this can be done. For that, follow these steps:
1) Download Python from https://www.python.org/downloads/ (I'll prefer 2.7 as it's a stable build).
2) Install it, then go to C: (where Windows resides) and get the path to your Python folder (it'll be something like C:\Python27).
3) Go to My Computer (or This PC), go to Advanced system settings, search for the variable Path there and click on Edit.
4) A box will appear with the path; scroll the cursor to the end of the already existing path and add ;C:\Python27 (i.e. ; and then the path to your Python folder in the C drive).
5) Click Save or OK.

Create a folder named Pyprog (or anything else) in your C:\; here we'll be storing all our Python programs. Open cmd, type cd C:\Pyprog, and then, for running a file called first.py (save every Python program with the extension .py), run python first.py.
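For example, once the Path variable is saved, a quick sanity check from a fresh Command Prompt could look like the following sketch (the folder C:\Pyprog and the file first.py come from the steps above; the exact version string printed will depend on your install):

C:\> python --version
Python 2.7.x
C:\> cd C:\Pyprog
C:\Pyprog> python first.py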
Python code (direct link – https://ide.geeksforgeeks.org/9bY2Mm):
Python3
# Python program to organize files of a directory
import os
import sys
import shutil

# This function organizes contents of sourcePath into multiple
# directories using the file types provided in extensionToDir
def OrganizeDirectory(sourcePath, extensionToDir):
    if not os.path.exists(sourcePath):
        print ("The source folder '" + sourcePath + "' does not exist!!\n")
    else:
        for file in os.listdir(sourcePath):
            file = os.path.join(sourcePath, file)

            # Ignore if its a directory
            if os.path.isdir(file):
                continue

            filename, fileExtension = os.path.splitext(file)
            fileExtension = fileExtension[1:]

            # If the file extension is present in the mapping
            if fileExtension in extensionToDir:

                # Store the corresponding directory name
                destinationName = extensionToDir[fileExtension]
                destinationPath = os.path.join(sourcePath, destinationName)

                # If the directory does not exist
                if not os.path.exists(destinationPath):
                    print ("Creating new directory for `" + fileExtension +
                           "` files, named - `" + destinationName + "'!!")

                    # Create a new directory
                    os.makedirs(destinationPath)

                # Move the file
                shutil.move(file, destinationPath)

def main():
    if len(sys.argv) != 2:
        print ("Usage: <program> <source path directory>")
        return

    sourcePath = sys.argv[1]
    extensionToDir = {}
    extensionToDir["mp3"] = "Songs"
    extensionToDir["jpg"] = "Images"

    print("")
    OrganizeDirectory(sourcePath, extensionToDir)

if __name__ == "__main__":
    main()
On executing the script, two new folders named Images and Songs are created, and the files with mp3 and jpg extensions are moved inside their respective folders.
Please note that the paths mentioned in the above post are for a general system; you should change the paths according to your requirements when doing the classification. The important point is changing the current working directory when needed, which is done using os.chdir(). Also, more entries can be added to the mapping for different file types, as sketched below.
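A minimal sketch of such an extension follows; the extra extensions and folder names here are illustrative assumptions, not part of the original script:

# Hypothetical additions to the extensionToDir mapping built in main():
# each new entry routes one more file type into its own folder.
extensionToDir["png"] = "Images"     # assumption: group png with the other images
extensionToDir["pdf"] = "Documents"  # assumption: folder name chosen for illustration
extensionToDir["txt"] = "Documents"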
About the author: Ekta is a very active contributor on GeeksforGeeks, currently studying at Delhi Technological University. She has also made a Chrome extension for www.geeksquiz.com to practice MCQs randomly. She can be reached at github.com/Ekta1994.
If you also wish to showcase your blog here, please see GBlog for guest blog writing on GeeksforGeeks.
Recursive product of summed digits JavaScript

We have to create a function that takes in any number of arguments (number literals), adds them together, and then repeatedly multiplies the digits of the sum until the answer is only 1 digit long, returning that digit.
For example −
If the arguments are −
16, 34, 42
We have to first add them together −
16+34+42 = 92
And then keep multiplying the digits together until we get a 1-digit number like this −
9*2 = 18
1*8 = 8
When we get the one-digit number, we have to return it from our function.
We will break this into two functions −

One function accepts a number and returns the product of its digits; we will use recursion to do so. Let's call this first function produce().

The second function recursively calls this produce() function and checks whether the product happens to be 1 digit; if so, it returns the product, otherwise it keeps on iterating.
The code for this whole functionality will be −
const recursiveMuliSum = (...numbers) => {
const add = (a) => a.length === 1 ? a[0] : a.reduce((acc, val) => acc+val);
const produce = (n, p = 1) => {
if(n){
return produce(Math.floor(n/10), p*(n%10));
};
return p;
};
const res = produce(add(numbers));
if(res > 9){
return recursiveMuliSum(res);
}
return res;
};
console.log(recursiveMuliSum(16, 28));
console.log(recursiveMuliSum(16, 28, 44, 76, 11));
console.log(recursiveMuliSum(1, 2, 4, 6, 8));
The output in the console will be as follows (as a hand check of the first call: 16 + 28 = 44, then 4 * 4 = 16, then 1 * 6 = 6, which gives the first line) −
6
5
2
How to show all the options from a dropdown list with JavaScript? | To show all the options from a dropdown list, use its options property; combined with the length property, it lets you loop over and read every option in the list.
You can try to run the following code to get all the options from a drop-down list.
Live Demo
<!DOCTYPE html>
<html>
<body>
<form id="myForm">
<select id="selectNow">
<option>One</option>
<option>Two</option>
<option>Three</option>
</select>
<input type="button" onclick="display()" value="Click">
</form>
<p>Click the button to get all the options</p>
<script>
         function display() {
            var a, i, options;
            // grab the select element and collect each option's text
            a = document.getElementById("selectNow");
            options = "";
            for (i = 0; i < a.length; i++) {
               options = options + "<br> " + a.options[i].text;
            }
            document.write("DropDown Options: " + options);
         }
</script>
</body>
</html>
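As a side note, on browsers with ES6 support the same list of option texts can be collected more concisely with Array.from(). The sketch below is an alternative version of the display() function for the markup above; it is not part of the original example.
function displayModern() {
   // turn the array-like options collection into a real array
   var select = document.getElementById("selectNow");
   var texts = Array.from(select.options).map(function(opt) {
      return opt.text;
   });
   document.write("DropDown Options: <br> " + texts.join("<br> "));
}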
Predict Customer Churn with R | by Susan Li | Towards Data Science

For any service company that bills on a recurring basis, a key variable is the rate of churn. Harvard Business Review, March 2016
For just about any growing company in this “as-a-service” world, two of the most important metrics are customer churn and lifetime value. Entrepreneur, February 2016
Customer churn occurs when customers or subscribers stop doing business with a company or service; it is also known as customer attrition, or referred to as the loss of clients or customers. One industry in which churn rates are particularly useful is the telecommunications industry, because most customers have multiple options from which to choose within a geographic location.
Similar in concept to predicting employee turnover, we are going to predict customer churn using a telecom dataset. We will introduce Logistic Regression, Decision Tree, and Random Forest. But this time, we will do all of the above in R. Let’s get started!
The data was downloaded from IBM Sample Data Sets. Each row represents a customer, each column contains that customer’s attributes:
library(plyr)
library(corrplot)
library(ggplot2)
library(gridExtra)
library(ggthemes)
library(caret)
library(MASS)
library(randomForest)
library(party)

churn <- read.csv('Telco-Customer-Churn.csv')
str(churn)
customerID
gender (female, male)
SeniorCitizen (Whether the customer is a senior citizen or not (1, 0))
Partner (Whether the customer has a partner or not (Yes, No))
Dependents (Whether the customer has dependents or not (Yes, No))
tenure (Number of months the customer has stayed with the company)
PhoneService (Whether the customer has a phone service or not (Yes, No))
MultipleLines (Whether the customer has multiple lines or not (Yes, No, No phone service))
InternetService (Customer’s internet service provider (DSL, Fiber optic, No))
OnlineSecurity (Whether the customer has online security or not (Yes, No, No internet service))
OnlineBackup (Whether the customer has online backup or not (Yes, No, No internet service))
DeviceProtection (Whether the customer has device protection or not (Yes, No, No internet service))
TechSupport (Whether the customer has tech support or not (Yes, No, No internet service))
StreamingTV (Whether the customer has streaming TV or not (Yes, No, No internet service))
StreamingMovies (Whether the customer has streaming movies or not (Yes, No, No internet service))
Contract (The contract term of the customer (Month-to-month, One year, Two year))
PaperlessBilling (Whether the customer has paperless billing or not (Yes, No))
PaymentMethod (The customer’s payment method (Electronic check, Mailed check, Bank transfer (automatic), Credit card (automatic)))
MonthlyCharges (The amount charged to the customer monthly — numeric)
TotalCharges (The total amount charged to the customer — numeric)
Churn (Whether the customer churned or not (Yes or No))
The raw data contains 7043 rows (customers) and 21 columns (features). The “Churn” column is our target.
We use sapply to check the number of missing values in each column. We find that there are 11 missing values in the “TotalCharges” column. So, let’s remove all rows with missing values.
sapply(churn, function(x) sum(is.na(x)))
churn <- churn[complete.cases(churn), ]
Looking at the variables, we can see that we have some wrangling to do.
1. We will change “No internet service” to “No” for six columns: “OnlineSecurity”, “OnlineBackup”, “DeviceProtection”, “TechSupport”, “StreamingTV”, “StreamingMovies”.
cols_recode1 <- c(10:15)
for(i in 1:ncol(churn[,cols_recode1])) {
    churn[,cols_recode1][,i] <- as.factor(mapvalues(churn[,cols_recode1][,i], from = c("No internet service"), to = c("No")))
}
2. We will change “No phone service” to “No” for column “MultipleLines”
churn$MultipleLines <- as.factor(mapvalues(churn$MultipleLines, from=c("No phone service"), to=c("No")))
3. Since the minimum tenure is 1 month and the maximum tenure is 72 months, we can group them into five tenure groups: “0–12 Month”, “12–24 Month”, “24–48 Month”, “48–60 Month”, “> 60 Month”
min(churn$tenure); max(churn$tenure)
[1] 1
[1] 72
group_tenure <- function(tenure){
    if (tenure >= 0 & tenure <= 12){
        return('0-12 Month')
    }else if(tenure > 12 & tenure <= 24){
        return('12-24 Month')
    }else if (tenure > 24 & tenure <= 48){
        return('24-48 Month')
    }else if (tenure > 48 & tenure <= 60){
        return('48-60 Month')
    }else if (tenure > 60){
        return('> 60 Month')
    }
}
churn$tenure_group <- sapply(churn$tenure, group_tenure)
churn$tenure_group <- as.factor(churn$tenure_group)
4. Change the values in column “SeniorCitizen” from 0 or 1 to “No” or “Yes”.
churn$SeniorCitizen <- as.factor(mapvalues(churn$SeniorCitizen, from=c("0","1"), to=c("No", "Yes")))
5. Remove the columns we do not need for the analysis.
churn$customerID <- NULL
churn$tenure <- NULL
Correlation between numeric variables
numeric.var <- sapply(churn, is.numeric)
corr.matrix <- cor(churn[,numeric.var])
corrplot(corr.matrix, main="\n\nCorrelation Plot for Numerical Variables", method="number")
The Monthly Charges and Total Charges are correlated. So one of them will be removed from the model. We remove Total Charges.
churn$TotalCharges <- NULL
Bar plots of categorical variables
p1 <- ggplot(churn, aes(x=gender)) + ggtitle("Gender") + xlab("Gender") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p2 <- ggplot(churn, aes(x=SeniorCitizen)) + ggtitle("Senior Citizen") + xlab("Senior Citizen") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p3 <- ggplot(churn, aes(x=Partner)) + ggtitle("Partner") + xlab("Partner") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p4 <- ggplot(churn, aes(x=Dependents)) + ggtitle("Dependents") + xlab("Dependents") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
grid.arrange(p1, p2, p3, p4, ncol=2)

p5 <- ggplot(churn, aes(x=PhoneService)) + ggtitle("Phone Service") + xlab("Phone Service") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p6 <- ggplot(churn, aes(x=MultipleLines)) + ggtitle("Multiple Lines") + xlab("Multiple Lines") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p7 <- ggplot(churn, aes(x=InternetService)) + ggtitle("Internet Service") + xlab("Internet Service") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p8 <- ggplot(churn, aes(x=OnlineSecurity)) + ggtitle("Online Security") + xlab("Online Security") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
grid.arrange(p5, p6, p7, p8, ncol=2)

p9 <- ggplot(churn, aes(x=OnlineBackup)) + ggtitle("Online Backup") + xlab("Online Backup") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p10 <- ggplot(churn, aes(x=DeviceProtection)) + ggtitle("Device Protection") + xlab("Device Protection") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p11 <- ggplot(churn, aes(x=TechSupport)) + ggtitle("Tech Support") + xlab("Tech Support") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p12 <- ggplot(churn, aes(x=StreamingTV)) + ggtitle("Streaming TV") + xlab("Streaming TV") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
grid.arrange(p9, p10, p11, p12, ncol=2)

p13 <- ggplot(churn, aes(x=StreamingMovies)) + ggtitle("Streaming Movies") + xlab("Streaming Movies") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p14 <- ggplot(churn, aes(x=Contract)) + ggtitle("Contract") + xlab("Contract") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p15 <- ggplot(churn, aes(x=PaperlessBilling)) + ggtitle("Paperless Billing") + xlab("Paperless Billing") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p16 <- ggplot(churn, aes(x=PaymentMethod)) + ggtitle("Payment Method") + xlab("Payment Method") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
p17 <- ggplot(churn, aes(x=tenure_group)) + ggtitle("Tenure Group") + xlab("Tenure Group") + geom_bar(aes(y = 100*(..count..)/sum(..count..)), width = 0.5) + ylab("Percentage") + coord_flip() + theme_minimal()
grid.arrange(p13, p14, p15, p16, p17, ncol=2)
All of the categorical variables seem to have a reasonably broad distribution; therefore, all of them will be kept for further analysis.
First, we split the data into training and testing sets:
intrain <- createDataPartition(churn$Churn, p = 0.7, list = FALSE)
set.seed(2017)
training <- churn[intrain,]
testing <- churn[-intrain,]
Confirm the splitting is correct:
dim(training); dim(testing)
[1] 4924 19
[1] 2108 19
Fitting the Logistic Regression Model:
LogModel <- glm(Churn ~ ., family = binomial(link = "logit"), data = training)
print(summary(LogModel))
Feature Analysis:
The top three most-relevant features include Contract, tenure_group and PaperlessBilling.
anova(LogModel, test="Chisq")
Analyzing the deviance table we can see the drop in deviance when adding each variable one at a time. Adding InternetService, Contract and tenure_group significantly reduces the residual deviance. The other variables such as PaymentMethod and Dependents seem to improve the model less even though they all have low p-values.
Assessing the predictive ability of the Logistic Regression model
testing$Churn <- as.character(testing$Churn)
testing$Churn[testing$Churn == "No"] <- "0"
testing$Churn[testing$Churn == "Yes"] <- "1"
fitted.results <- predict(LogModel, newdata = testing, type = 'response')
fitted.results <- ifelse(fitted.results > 0.5, 1, 0)
misClassificationError <- mean(fitted.results != testing$Churn)
print(paste('Logistic Regression Accuracy', 1 - misClassificationError))
[1] Logistic Regression Accuracy 0.789373814041746
Logistic Regression Confusion Matrix
print("Confusion Matrix for Logistic Regression"); table(testing$Churn, fitted.results > 0.5)
Odds Ratio
One of the interesting performance measurements in logistic regression is the odds ratio. Basically, an odds ratio expresses how the odds of the event (here, churn) change with each predictor.
library(MASS)
exp(cbind(OR = coef(LogModel), confint(LogModel)))
Decision Tree visualization
For illustration purposes, we are going to use only three variables to plot the Decision Tree; they are “Contract”, “tenure_group” and “PaperlessBilling”.
tree <- ctree(Churn ~ Contract + tenure_group + PaperlessBilling, training)
plot(tree)
Out of three variables we use, Contract is the most important variable to predict customer churn or not churn.
If a customer is in a one-year or two-year contract, then whether or not they have PaperlessBilling, they are less likely to churn.
On the other hand, if a customer is in a month-to-month contract, is in the 0–12 month tenure group, and uses PaperlessBilling, then this customer is more likely to churn.
Decision Tree Confusion Matrix
We are using all the variables to produce the confusion matrix table and make predictions.
pred_tree <- predict(tree, testing)
print("Confusion Matrix for Decision Tree"); table(Predicted = pred_tree, Actual = testing$Churn)
Decision Tree Accuracy
p1 <- predict(tree, training)
tab1 <- table(Predicted = p1, Actual = training$Churn)
tab2 <- table(Predicted = pred_tree, Actual = testing$Churn)
print(paste('Decision Tree Accuracy', sum(diag(tab2)) / sum(tab2)))
[1] Decision Tree Accuracy 0.780834914611006
The accuracy for the Decision Tree has hardly changed (0.781, slightly below Logistic Regression’s 0.789). Let’s see if we can do better using Random Forest.
Random Forest Initial Model
rfModel <- randomForest(Churn ~ ., data = training)
print(rfModel)
The error rate is relatively low when predicting “No”, and the error rate is much higher when predicting “Yes”.
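To read those per-class error rates directly instead of scanning the printed summary, the confusion matrix stored on the fitted object can be inspected. This is a small sketch; it assumes the classification randomForest object exposes it as rfModel$confusion with a class.error column, which is its documented behavior.
# OOB confusion matrix with per-class error rates
print(rfModel$confusion)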
Random Forest Prediction and Confusion Matrix
pred_rf <- predict(rfModel, testing)
caret::confusionMatrix(pred_rf, testing$Churn)
Random Forest Error Rate
plot(rfModel)
We use this plot to help us determine the number of trees. As the number of trees increases, the OOB error rate decreases, and then becomes almost constant. We are not able to decrease the OOB error rate after about 100 to 200 trees.
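If a numeric check is preferred to eyeballing the plot, the per-tree OOB error is kept on the fitted object. The sketch below assumes it is stored in the err.rate matrix with an "OOB" column, as documented for classification forests.
# OOB error after 100 trees, 200 trees, and the final tree
print(rfModel$err.rate[c(100, 200, nrow(rfModel$err.rate)), "OOB"])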
Tune Random Forest Model
t <- tuneRF(training[, -18], training[, 18], stepFactor = 0.5, plot = TRUE, ntreeTry = 200, trace = TRUE, improve = 0.05)
We use this plot to give us some ideas on the number of mtry to choose. OOB error rate is at the lowest when mtry is 2. Therefore, we choose mtry=2.
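The same choice can also be made programmatically rather than from the plot. This sketch assumes tuneRF returns a matrix t with columns mtry and OOBError, which is its documented return value.
# pick the mtry value with the lowest OOB error
best_mtry <- t[which.min(t[, "OOBError"]), "mtry"]
print(best_mtry)  # expected to be 2 for this run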
Fit the Random Forest Model After Tuning
rfModel_new <- randomForest(Churn ~ ., data = training, ntree = 200, mtry = 2, importance = TRUE, proximity = TRUE)
print(rfModel_new)
The OOB error rate decreased to 20.41% from the 20.61% shown in Figure 14.
Random Forest Predictions and Confusion Matrix After Tuning
pred_rf_new <- predict(rfModel_new, testing)
caret::confusionMatrix(pred_rf_new, testing$Churn)
Both accuracy and sensitivity are improved, compared with Figure 15.
Random Forest Feature Importance
varImpPlot(rfModel_new, sort=T, n.var = 10, main = 'Top 10 Feature Importance')
From the above example, we can see that Logistic Regression, Decision Tree and Random Forest can all be used for customer churn analysis on this particular dataset equally well.
Throughout the analysis, I have learned several important things:
Features such as tenure_group, Contract, PaperlessBilling, MonthlyCharges and InternetService appear to play a role in customer churn.
There does not seem to be a relationship between gender and churn.
Customers in a month-to-month contract, with PaperlessBilling, and within 12 months of tenure are more likely to churn; on the other hand, customers with a one- or two-year contract, with more than 12 months of tenure, who are not using PaperlessBilling, are less likely to churn.
Source code that created this post can be found here. I would be pleased to receive feedback or questions on any of the above.
Boehm’s Software Maintenance Model - GeeksforGeeks | 16 Jun, 2020
In 1983, Boehm proposed a model for the maintenance process based upon economic models and principles. Economic models are nothing new; economic decisions are a major building block of many processes, and Boehm’s thesis was that economic models and principles could not only improve productivity in maintenance but also help in understanding the process very well.
The Boehm maintenance process model is represented as a closed-loop cycle, as shown in the diagram below.
He theorizes that it is the stage where management decisions are made that drives the process. In this stage, a set of required changes is determined by applying particular strategies and cost-benefit evaluations to a set of proposed changes. Those approved changes are accompanied by company budgets, which largely determine the extent and type of resources expended.
Boehm understood that the maintenance manager’s task is one of balancing the pursuit of the objectives of maintenance against the constraints imposed by the environment in which maintenance work is carried out. That is why the maintenance process should be driven by the maintenance manager’s decisions, which are typically based on balancing objectives against constraints. Boehm proposed a formula for calculating the maintenance cost as part of the COCOMO model; from data collected across various projects, the formula was expressed in terms of effort.
Boehm used a quantity called Annual Change Traffic (ACT), which is defined as:
The fraction of a software product’s source instructions that changes during a year, either through addition, deletion or modification. The ACT is related to the number of change requests:
ACT = (KLOCadded + KLOCdeleted) / KLOCtotal
The annual maintenance effort (AME) in person-months is measured as:
AME = ACT * SDE
Where,
ACT = Annual Change Traffic,
SDE = Software Development Effort in person-months.
Example – The annual change traffic (ACT) for a software system is 20% per year. The development effort is 700 PMs. Compute an estimate of the annual maintenance effort (AME). If the lifetime of the project is 15 years, what is the total effort of the project?
Explanation:
Given,
The development effort = 700 PM
Annual Change Traffic (ACT) = 20%
Total duration for which effort is to be calculated = 15 years.
The maintenance effort is a fraction of development effort and that is assumed to be constant.
AME
= ACT * SDE
= 0.20 * 700
= 140PM
Maintenance effort for 15 years,
= 15 * 140
= 2100PM
So, Total effort,
= 700 + 2100
= 2800PM
Convert comma separated string to array using JavaScript - GeeksforGeeks | 23 May, 2019
A comma-separated string can be converted to an array by 2 approaches:
Method 1: Using the split() method
The split() method is used to split a string on the basis of a separator. This separator can be defined as a comma so that the string is split whenever a comma is encountered. The method returns an array of the separated strings.
Syntax:
string.split(', ')
Example:
<!DOCTYPE html>
<html>
<head>
    <title>
        Convert comma separated string to array using JavaScript
    </title>
</head>
<body>
    <h1 style="color: green">
        GeeksforGeeks
    </h1>
    <b>Convert comma separated string to array using JavaScript</b>
    <p>Original string is "One, Two, Three, Four, Five"</p>
    <p>
        Separated Array is:
        <span class="output"></span>
    </p>
    <button onclick="separateString()">
        Remove Text
    </button>
    <script type="text/javascript">
        function separateString() {
            originalString = "One, Two, Three, Four, Five";

            // split on every comma followed by a space
            separatedArray = originalString.split(', ');

            console.log(separatedArray);
            document.querySelector('.output').textContent = separatedArray;
        }
    </script>
</body>
</html>
Output:
After clicking the button:
Console Output:
Method 2: Iterating through the string, keeping track of every comma encountered, and creating a new array from the separated strings.
This approach involves iterating through each character in the string and checking for a comma. A variable previousIndex keeps track of the first character of the next string. The slice method is then used to extract the portion of the string between the previous index and the current location of the comma. This substring is then pushed onto a new array. The process is repeated for the whole length of the string. The final array contains all the separated strings.
Syntax:
originalString = "One, Two, Three, Four, Five";
separatedArray = [];

// index of end of the last string
let previousIndex = 0;

for(i = 0; i < originalString.length; i++) {

    // check the character for a comma
    if (originalString[i] == ',') {

        // split the string from the last index
        // to the comma
        separated = originalString.slice(previousIndex, i);
        separatedArray.push(separated);

        // update the index of the last string,
        // skipping the comma and the space after it
        previousIndex = i + 2;
    }
}

// push the last string into the array
separatedArray.push(originalString.slice(previousIndex, i));
Example:
<!DOCTYPE html>
<html>
<head>
    <title>
        Convert comma separated string to array using JavaScript
    </title>
</head>
<body>
    <h1 style="color: green">
        GeeksforGeeks
    </h1>
    <b>Convert comma separated string to array using JavaScript</b>
    <p>Original string is "One, Two, Three, Four, Five"</p>
    <p>
        Separated Array is:
        <span class="output"></span>
    </p>
    <button onclick="separateString()">
        Remove Text
    </button>
    <script type="text/javascript">
        function separateString() {
            originalString = "One, Two, Three, Four, Five";
            separatedArray = [];

            // index of end of the last string
            let previousIndex = 0;

            for (i = 0; i < originalString.length; i++) {

                // check the character for a comma
                if (originalString[i] == ',') {

                    // split the string from the last index
                    // to the comma
                    separated = originalString.slice(previousIndex, i);
                    separatedArray.push(separated);

                    // update the index of the last string,
                    // skipping the comma and the space after it
                    previousIndex = i + 2;
                }
            }

            // push the last string into the array
            separatedArray.push(originalString.slice(previousIndex, i));

            console.log(separatedArray);
            document.querySelector('.output').textContent = separatedArray;
        }
    </script>
</body>
</html>
Output:
After clicking the button:
Console Output:
C# Program for Converting Hexadecimal String to Integer - GeeksforGeeks | 02 Jul, 2020
Given a hexadecimal number as input, we need to write a program to convert the given hexadecimal number into an equivalent integer. To convert a hexadecimal string to an integer, we use the Convert.ToInt32() method.
Syntax:
Convert.ToInt32(input_string, Input_base);
Here,
input_string is the input containing hexadecimal number in string format.
input_base is the base of the input value – for a hexadecimal value it will be 16.
Examples:
Input : 56304
Output : 353028
Input : 598f
Output : 22927
If we input a wrong value, e.g. 672g, it throws an error: Enter a hexadecimal number: System.FormatException: Additional non-parsable characters are at the end of the string.
If we input a number longer than 8 hexadecimal digits, e.g. 746465789, it throws an error: Enter a hexadecimal number: System.OverflowException: Arithmetic operation resulted in an overflow.
Program 1:
C#
// C# program to convert a
// hexadecimal string to an integer
using System;
using System.Text;

class Program {

    static void Main(string[] args)
    {
        // hexadecimal number as string
        string input = "56304";
        int output = 0;

        // converting to integer
        output = Convert.ToInt32(input, 16);

        // to print the value
        Console.WriteLine("Integer number: " + output);
    }
}
Output:
Integer number: 353028
Program 2:
C#
// C# program to convert a
// hexadecimal string to an integer,
// with error handling
using System;
using System.Text;

namespace geeks {

    class GFG {

        static void Main(string[] args)
        {
            string input = "";
            int output = 0;

            try {
                // read the input string
                Console.Write("Enter a hexadecimal number: ");
                input = Console.ReadLine();

                // converting to integer
                output = Convert.ToInt32(input, 16);
                Console.WriteLine("Integer number: " + output);
            }
            catch (Exception ex) {
                Console.WriteLine(ex.ToString());
            }

            // hit ENTER to exit
            Console.ReadLine();
        }
    }
}
Input:
598f
Output:
Enter a hexadecimal number:
Integer number: 22927
| [
{
"code": null,
"e": 25038,
"s": 25010,
"text": "\n02 Jul, 2020"
},
{
"code": null,
"e": 25278,
"s": 25038,
"text": "Given an hexadecimal number as input, we need to write a program to convert the given hexadecimal number into equivalent integer. To convert an hexadecimal string to integer, we have to use Convert.ToInt32() function to convert the values."
},
{
"code": null,
"e": 25287,
"s": 25278,
"text": "Syntax: "
},
{
"code": null,
"e": 25331,
"s": 25287,
"text": "Convert.ToInt32(input_string, Input_base);\n"
},
{
"code": null,
"e": 25337,
"s": 25331,
"text": "Here,"
},
{
"code": null,
"e": 25411,
"s": 25337,
"text": "input_string is the input containing hexadecimal number in string format."
},
{
"code": null,
"e": 25494,
"s": 25411,
"text": "input_base is the base of the input value – for a hexadecimal value it will be 16."
},
{
"code": null,
"e": 25504,
"s": 25494,
"text": "Examples:"
},
{
"code": null,
"e": 25564,
"s": 25504,
"text": "Input : 56304\nOutput : 353028\n\nInput : 598f\nOutput : 22927\n"
},
{
"code": null,
"e": 25733,
"s": 25564,
"text": "If we input wrong value for eg. 672g, it shows error: Enter a hexadecimal number: System.FormatException: Additional unparsable characters are at the end of the string."
},
{
"code": null,
"e": 25905,
"s": 25733,
"text": "If we input number greater than 8 digit e.g. 746465789, it shows error: Enter a hexadecimal number: System.OverflowException: Arithmetic operation resulted in an overflow."
},
{
"code": null,
"e": 25918,
"s": 25905,
"text": "Program 1: "
},
{
"code": null,
"e": 25921,
"s": 25918,
"text": "C#"
},
{
"code": "// C# program to convert array // of hexadecimal strings to integersusing System;using System.Text; class Program { static void Main(string[] args) { // hexadecimal number as string string input = \"56304\"; int output = 0; // converting to integer output = Convert.ToInt32(input, 16); // to print the value Console.WriteLine(\"Integer number: \" + output); }}",
"e": 26358,
"s": 25921,
"text": null
},
{
"code": null,
"e": 26367,
"s": 26358,
"text": "Output: "
},
{
"code": null,
"e": 26391,
"s": 26367,
"text": "Integer number: 353028\n"
},
{
"code": null,
"e": 26404,
"s": 26391,
"text": "Program 2: "
},
{
"code": null,
"e": 26407,
"s": 26404,
"text": "C#"
},
{
"code": "// C# program to convert array // of hexadecimal strings// to integersusing System;using System.Text; namespace geeks { class GFG { static void Main(string[] args) { string input = \"\"; int output = 0; try { // input string Console.Write(\"Enter a hexadecimal number: \"); input = Console.ReadLine(); // converting to integer output = Convert.ToInt32(input, 16); Console.WriteLine(\"Integer number: \" + output); } catch (Exception ex) { Console.WriteLine(ex.ToString()); } // hit ENTER to exit Console.ReadLine(); }}}",
"e": 27089,
"s": 26407,
"text": null
},
{
"code": null,
"e": 27096,
"s": 27089,
"text": "Input:"
},
{
"code": null,
"e": 27101,
"s": 27096,
"text": "598f"
},
{
"code": null,
"e": 27109,
"s": 27101,
"text": "Output:"
},
{
"code": null,
"e": 27161,
"s": 27109,
"text": "Enter a hexadecimal number: \nInteger number: 22927\n"
},
{
"code": null,
"e": 27164,
"s": 27161,
"text": "C#"
},
{
"code": null,
"e": 27176,
"s": 27164,
"text": "C# Programs"
},
{
"code": null,
"e": 27274,
"s": 27176,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27302,
"s": 27274,
"text": "C# Dictionary with examples"
},
{
"code": null,
"e": 27317,
"s": 27302,
"text": "C# | Delegates"
},
{
"code": null,
"e": 27340,
"s": 27317,
"text": "C# | Method Overriding"
},
{
"code": null,
"e": 27362,
"s": 27340,
"text": "C# | Abstract Classes"
},
{
"code": null,
"e": 27385,
"s": 27362,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 27425,
"s": 27385,
"text": "Convert String to Character Array in C#"
},
{
"code": null,
"e": 27459,
"s": 27425,
"text": "Program to Print a New Line in C#"
},
{
"code": null,
"e": 27505,
"s": 27459,
"text": "Getting a Month Name Using Month Number in C#"
},
{
"code": null,
"e": 27530,
"s": 27505,
"text": "Socket Programming in C#"
}
]
|
Python - Word Tokenization | Word tokenization is the process of splitting a large sample of text into words. It is a requirement in natural language processing tasks where each word needs to be captured and subjected
to further analysis, such as being classified or counted for sentiment analysis. The Natural Language Toolkit (NLTK) is a library used to achieve this. Install NLTK before proceeding with the Python
program for word tokenization.
conda install -c anaconda nltk
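The tokenizers used below also rely on NLTK's pre-trained punkt models, which are downloaded separately from the library itself. A one-time setup sketch (pip shown as an alternative installer; assumes network access):

# alternative to the conda command above:
#   pip install nltk
import nltk

# word_tokenize and sent_tokenize rely on the "punkt" models;
# download them once before first use
nltk.download('punkt')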
Next, we use the word_tokenize method to split the paragraph into individual words.
import nltk
word_data = "It originated from the idea that there are readers who prefer learning new skills from the comforts of their drawing rooms"
nltk_tokens = nltk.word_tokenize(word_data)
print (nltk_tokens)
When we execute the above code, it produces the following result.
['It', 'originated', 'from', 'the', 'idea', 'that', 'there', 'are', 'readers',
'who', 'prefer', 'learning', 'new', 'skills', 'from', 'the',
'comforts', 'of', 'their', 'drawing', 'rooms']
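Note that word_tokenize also splits punctuation and contractions into separate tokens, which is usually what downstream counting and classification steps expect. A minimal sketch with our own sample text:

import nltk

# punctuation marks and clitics become stand-alone tokens
print(nltk.word_tokenize("Hello, world! It's a test."))
# ['Hello', ',', 'world', '!', 'It', "'s", 'a', 'test', '.']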
We can also tokenize the sentences in a paragraph, just as we tokenized the words, using the sent_tokenize method. Below is an example.
import nltk
sentence_data = "Sun rises in the east. Sun sets in the west."
nltk_tokens = nltk.sent_tokenize(sentence_data)
print (nltk_tokens)
When we execute the above code, it produces the following result.
['Sun rises in the east.', 'Sun sets in the west.']
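The two tokenizers compose naturally: first split the text into sentences, then split each sentence into words. A short sketch reusing the sample text above:

import nltk

sentence_data = "Sun rises in the east. Sun sets in the west."

# tokenize each sentence into its words
words_per_sentence = [nltk.word_tokenize(sentence)
                      for sentence in nltk.sent_tokenize(sentence_data)]
print(words_per_sentence)
# [['Sun', 'rises', 'in', 'the', 'east', '.'],
#  ['Sun', 'sets', 'in', 'the', 'west', '.']]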
| [
{
"code": null,
"e": 2958,
"s": 2529,
"text": "Word tokenization is the process of splitting a large sample of text into words. This is a requirement in natural language processing tasks where each word needs to be captured and subjected\nto further analysis like classifying and counting them for a particular sentiment etc. The Natural Language Tool kit(NLTK) is a library used to achieve this. Install NLTK before proceeding with the python \nprogram for word tokenization. "
},
{
"code": null,
"e": 2989,
"s": 2958,
"text": "conda install -c anaconda nltk"
},
{
"code": null,
"e": 3072,
"s": 2989,
"text": "Next we use the word_tokenize method to split the paragraph into individual words."
},
{
"code": null,
"e": 3286,
"s": 3072,
"text": "import nltk\n\nword_data = \"It originated from the idea that there are readers who prefer learning new skills from the comforts of their drawing rooms\"\nnltk_tokens = nltk.word_tokenize(word_data)\nprint (nltk_tokens)"
},
{
"code": null,
"e": 3352,
"s": 3286,
"text": "When we execute the above code, it produces the following result."
},
{
"code": null,
"e": 3540,
"s": 3352,
"text": "['It', 'originated', 'from', 'the', 'idea', 'that', 'there', 'are', 'readers', \n'who', 'prefer', 'learning', 'new', 'skills', 'from', 'the',\n'comforts', 'of', 'their', 'drawing', 'rooms']"
},
{
"code": null,
"e": 3689,
"s": 3540,
"text": "We can also tokenize the sentences in a paragraph like we tokenized the words. We use the method sent_tokenize to achieve this. Below is an example."
},
{
"code": null,
"e": 3832,
"s": 3689,
"text": "import nltk\nsentence_data = \"Sun rises in the east. Sun sets in the west.\"\nnltk_tokens = nltk.sent_tokenize(sentence_data)\nprint (nltk_tokens)"
},
{
"code": null,
"e": 3898,
"s": 3832,
"text": "When we execute the above code, it produces the following result."
},
{
"code": null,
"e": 3950,
"s": 3898,
"text": "['Sun rises in the east.', 'Sun sets in the west.']"
},
{
"code": null,
"e": 3987,
"s": 3950,
"text": "\n 187 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 4003,
"s": 3987,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 4036,
"s": 4003,
"text": "\n 55 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 4055,
"s": 4036,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 4090,
"s": 4055,
"text": "\n 136 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 4112,
"s": 4090,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 4146,
"s": 4112,
"text": "\n 75 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 4174,
"s": 4146,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 4209,
"s": 4174,
"text": "\n 70 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 4223,
"s": 4209,
"text": " Lets Kode It"
},
{
"code": null,
"e": 4256,
"s": 4223,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 4273,
"s": 4256,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 4280,
"s": 4273,
"text": " Print"
},
{
"code": null,
"e": 4291,
"s": 4280,
"text": " Add Notes"
}
]
|