Bootstrap 5 | Buttons - GeeksforGeeks
29 Jul, 2020
Bootstrap 5 is the latest major release of Bootstrap, featuring a revamped UI and numerous other changes. Its button component is used to create various kinds of buttons: Bootstrap 5 includes several predefined button styles, each serving its own semantic purpose.
Syntax:
<button class="btn btn-*">Button Text</button>
Types: Following are the nine types of buttons available in Bootstrap 5:
btn-primary
btn-secondary
btn-success
btn-danger
btn-warning
btn-info
btn-light
btn-dark
btn-link
Example 1: This example shows the first five button types in Bootstrap 5.
<!DOCTYPE html>
<html>
<head>
    <title>Bootstrap 5 | Buttons</title>
    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>
<body>
    <div style="text-align: center; width: 600px;">
        <h1 style="color: green;">GeeksforGeeks</h1>
    </div>
    <div style="width: 600px; height: 200px; margin: 20px; text-align: center;">
        <button type="button" class="btn btn-primary">Primary</button>
        <button type="button" class="btn btn-secondary">Secondary</button>
        <button type="button" class="btn btn-success">Success</button>
        <button type="button" class="btn btn-danger">Danger</button>
        <button type="button" class="btn btn-warning">Warning</button>
    </div>
</body>
</html>
Output:
Example 2: This example shows the remaining four button types in Bootstrap 5.
<!DOCTYPE html>
<html>
<head>
    <title>Bootstrap 5 | Buttons</title>
    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>
<body>
    <div style="text-align: center; width: 600px;">
        <h1 style="color: green;">GeeksforGeeks</h1>
    </div>
    <div style="width: 600px; height: 200px; margin: 20px; text-align: center;">
        <button type="button" class="btn btn-info">Info</button>
        <button type="button" class="btn btn-light">Light</button>
        <button type="button" class="btn btn-dark">Dark</button>
        <button type="button" class="btn btn-link">Link</button>
    </div>
</body>
</html>
Output:
Example 3: This example shows how different elements can be used as buttons in Bootstrap 5.
<!DOCTYPE html>
<html>
<head>
    <title>Bootstrap 5 | Buttons</title>
    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>
<body style="text-align: center;">
    <div class="container mt-3">
        <h1 style="color: green;">GeeksforGeeks</h1>
        <h2>Button Elements</h2>
        <input class="btn btn-success" type="button" value="Input Button">
        <input class="btn btn-success" type="submit" value="Submit Button">
        <input class="btn btn-success" type="reset" value="Reset Button">
        <button class="btn btn-success" type="button">Button</button>
        <a href="#" class="btn btn-success" role="button">Link Button</a>
    </div>
</body>
</html>
Output:
Button Outlines: Bootstrap 5 provides a series of classes for outline-styled buttons, i.e., buttons without a background color. The outline button classes remove any background color or background-image style applied to the button. All the button types support outlines, as shown in the example below.
Example: This example shows the different outline buttons in Bootstrap 5.
<!DOCTYPE html>
<html>
<head>
    <title>Bootstrap 5 | Buttons</title>
    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>
<body style="text-align: center;">
    <div class="container mt-3">
        <h1 style="color: green;">GeeksforGeeks</h1>
        <h2>Button Outline</h2>
        <button type="button" class="btn btn-outline-primary">Primary</button>
        <button type="button" class="btn btn-outline-secondary">Secondary</button>
        <button type="button" class="btn btn-outline-success">Success</button>
        <button type="button" class="btn btn-outline-danger">Danger</button>
        <button type="button" class="btn btn-outline-warning">Warning</button>
        <button type="button" class="btn btn-outline-info">Info</button>
        <button type="button" class="btn btn-outline-light">Light</button>
        <button type="button" class="btn btn-outline-dark">Dark</button>
    </div>
</body>
</html>
Output:
Button Sizes: Bootstrap 5 provides classes that allow changing the size of a button: .btn-lg for large buttons and .btn-sm for small ones.
Example: This example shows the different button sizes in Bootstrap 5.
<!DOCTYPE html>
<html>
<head>
    <title>Bootstrap 5 | Buttons</title>
    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>
<body style="text-align: center;">
    <div class="container mt-3">
        <h1 style="color: green;">GeeksforGeeks</h1>
        <h2>Button Sizes</h2>
        <button type="button" class="btn btn-success btn-sm">Small Button</button>
        <button type="button" class="btn btn-success">Default Button</button>
        <button type="button" class="btn btn-success btn-lg">Large Button</button>
    </div>
</body>
</html>
Output:
Active State Buttons: The .active class is used to set a button or link to an active (pressed) state.
Example: This example shows a button’s active state in Bootstrap 5.
<!DOCTYPE html>
<html>
<head>
    <title>Bootstrap 5 | Buttons</title>
    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>
<body style="text-align: center;">
    <div class="container mt-3">
        <h1 style="color: green;">GeeksforGeeks</h1>
        <h2>Button Active State</h2>
        <button type="button" class="btn btn-success">Default Button</button>
        <button type="button" class="btn btn-success active">Active Button</button>
    </div>
</body>
</html>
Output:
Disabled State Buttons: The disabled attribute is used on the button element to put the button into a disabled state.
Example: This example shows a button’s disabled state in Bootstrap 5.
<!DOCTYPE html>
<html>
<head>
    <title>Bootstrap 5 | Buttons</title>
    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>
<body style="text-align: center;">
    <div class="container mt-3">
        <h1 style="color: green;">GeeksforGeeks</h1>
        <h2>Button Disabled State</h2>
        <button type="button" class="btn btn-primary" disabled>Disabled Button</button>
        <button type="button" class="btn btn-success" disabled>Disabled Button</button>
    </div>
</body>
</html>
Output:
Block Level Buttons: The .btn-block class creates a block-level button that spans the full width of its parent element. (Note: .btn-block exists in the alpha version used here, but the final Bootstrap 5 release dropped it in favor of the d-grid utilities.)
Example: This example shows the working of a block level button in Bootstrap 5.
<!DOCTYPE html>
<html>
<head>
    <title>Bootstrap 5 | Buttons</title>
    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>
<body style="text-align: center; width: 700px;">
    <div class="container mt-3">
        <h1 style="color: green;">GeeksforGeeks</h1>
        <h2>Block Level Buttons</h2>
        <button type="button" class="btn btn-block btn-primary">Block Level Button</button>
        <button type="button" class="btn btn-block btn-success">Block Level Button</button>
    </div>
</body>
</html>
Output:
Spinner Buttons: The spinner-border and spinner-grow classes (with their -sm small variants) are used to add a spinner inside a button.
Example: This example shows the working of a spinner button in Bootstrap 5.
<!DOCTYPE html>
<html>
<head>
    <title>Bootstrap 5 | Buttons</title>
    <!-- Load Bootstrap -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/5.0.0-alpha1/css/bootstrap.min.css"
          integrity="sha384-r4NyP46KrjDleawBgD5tp8Y7UzmLA05oM1iAEQ17CSuDqnUK2+k9luXQOfXJCJ4I"
          crossorigin="anonymous">
</head>
<body style="text-align: center;">
    <div class="container mt-3">
        <h1 style="color: green;">GeeksforGeeks</h1>
        <h2>Spinner Buttons</h2>
        <button type="button" class="btn btn-primary">
            <span class="spinner-border spinner-border-sm"></span>
            Spinner Button
        </button>
        <button type="button" class="btn btn-success">
            <span class="spinner-grow spinner-grow-sm"></span>
            Spinner Button
        </button>
    </div>
</body>
</html>
Output:
When to use Pandas transform() function | by B. Chen | Towards Data Science
Pandas is an amazing library that contains extensive built-in functions for manipulating data. Among them, transform() is super useful when you are looking to manipulate rows or columns.
In this article, we will cover the following most frequently used Pandas transform() features:
Transforming values
Combining groupby() results
Filtering data
Handling missing value at the group level
Please check out my Github repo for the source code
Let’s take a look at the signature: DataFrame.transform(func, axis=0)
The first argument, func, specifies the function used to manipulate the data. It can be a function, a string function name, a list of functions, or a dictionary of axis labels -> functions.
The second argument, axis, specifies which axis func is applied to: 0 applies func to each column and 1 applies it to each row.
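As a quick sketch of the axis behavior (a toy DataFrame of my own, not from the article — the lambda subtracts the first element of whatever Series it receives):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [10, 20, 30]})

# axis=0 (default): func receives one column (a Series) at a time
by_col = df.transform(lambda x: x - x.iloc[0])
# by_col: A -> [0, 1, 2], B -> [0, 10, 20]

# axis=1: func receives one row (a Series) at a time
by_row = df.transform(lambda x: x - x.iloc[0], axis=1)
# by_row: A -> [0, 0, 0], B -> [9, 18, 27]
```

The same function gives different results depending only on which axis it walks.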
Let’s see how transform() works with the help of some examples.
We can pass a function to func. For example:
df = pd.DataFrame({'A': [1, 2, 3], 'B': [10, 20, 30]})

def plus_10(x):
    return x + 10

df.transform(plus_10)
You can also use a lambda expression. Below is the lambda equivalent of plus_10():
df.transform(lambda x: x+10)
We can pass any valid Pandas string function to func, for example 'sqrt':
df.transform('sqrt')
func can be a list of functions, for example sqrt and exp from NumPy:
df.transform([np.sqrt, np.exp])
func can be a dict of axis labels -> function. For example
df.transform({
    'A': np.sqrt,
    'B': np.exp,
})
One of the most compelling usages of Pandas transform() is combining groupby() results.
Let’s see how this works with the help of an example. Suppose we have a dataset about a restaurant chain:
df = pd.DataFrame({
    'restaurant_id': [101, 102, 103, 104, 105, 106, 107],
    'address': ['A', 'B', 'C', 'D', 'E', 'F', 'G'],
    'city': ['London', 'London', 'London', 'Oxford', 'Oxford', 'Durham', 'Durham'],
    'sales': [10, 500, 48, 12, 21, 22, 14]
})
We can see that each city has multiple restaurants with sales. We would like to know “What is the percentage of sales each restaurant represents in the city”. The expected output is:
The tricky part in this calculation is that we need to get a city_total_sales and combine it back into the data in order to get the percentage.
There are 2 solutions:
groupby(), apply(), and merge()
groupby() and transform()
The first solution splits the data with groupby(), uses apply() to aggregate each group, and then merges the results back into the original DataFrame with merge().
Step 1: Use groupby() and apply() to calculate the city_total_sales
city_sales = (df.groupby('city')['sales']
                .apply(sum)
                .rename('city_total_sales')
                .reset_index())
groupby('city') splits the data by grouping on the city column. For each of these groups, the function sum is applied to the sales column to calculate the per-group sum. Finally, the new column is renamed to city_total_sales and the index is reset (note: reset_index() is required to clear the index generated by groupby('city')).
In addition, Pandas has a built-in sum() function and the following is the Pandas sum() equivalent:
city_sales = (df.groupby('city')['sales']
                .sum()
                .rename('city_total_sales')
                .reset_index())
Step 2: Use merge() function to combine the results
df_new = pd.merge(df, city_sales, how='left')
The group results are merged back into the original DataFrame using merge() with how='left' for a left outer join.
Step 3: Calculate the percentage
Finally, the percentage can be calculated and formatted.
df_new['pct'] = df_new['sales'] / df_new['city_total_sales']
df_new['pct'] = df_new['pct'].apply(lambda x: format(x, '.2%'))
This certainly gets the job done, but it is a multi-step process that requires extra code to get the data into the required form.
We can solve this more elegantly using the transform() function.
This solution is a game-changer: a single line of code replaces both the apply and merge steps.
Step 1: Use groupby() and transform() to calculate the city_total_sales
The transform() function retains the same number of rows as the original dataset after performing the transformation. Therefore, a one-liner using groupby() followed by transform('sum') produces the combined column directly.
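A minimal sketch of this shape difference, using a reduced copy of the sample data (three rows instead of seven): aggregation returns one row per group, while transform broadcasts the group result back onto every original row.

```python
import pandas as pd

df = pd.DataFrame({'city': ['London', 'London', 'Oxford'],
                   'sales': [10, 500, 12]})

# Aggregation collapses each group to a single value: 2 rows (one per city)
agg = df.groupby('city')['sales'].sum()

# transform broadcasts the group sum back to every row: 3 rows, index-aligned
broadcast = df.groupby('city')['sales'].transform('sum')
# broadcast -> [510, 510, 12]
```

Because broadcast is aligned with df's index, it can be assigned straight into a new column with no merge.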
df['city_total_sales'] = df.groupby('city')['sales'].transform('sum')
Step 2: Calculate the percentage
Finally, calculating the percentage is the same as in the first solution.
df['pct'] = df['sales'] / df['city_total_sales']
df['pct'] = df['pct'].apply(lambda x: format(x, '.2%'))
transform() can also be used to filter data. Here we keep only the records whose city's total sales are greater than 40:
df[df.groupby('city')['sales'].transform('sum') > 40]
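A self-contained sketch of this filter on a reduced copy of the sample data (four rows; the threshold 40 matches the text). The boolean mask is row-aligned with the DataFrame because transform preserves the original shape.

```python
import pandas as pd

df = pd.DataFrame({'city': ['London', 'London', 'Oxford', 'Durham'],
                   'sales': [10, 500, 12, 22]})

# City totals broadcast to each row: London=510, Oxford=12, Durham=22
mask = df.groupby('city')['sales'].transform('sum') > 40
filtered = df[mask]  # only the London rows pass the threshold
```

Note that the London row with sales of only 10 survives: the condition is on the city's total, not on the individual row.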
Another usage of Pandas transform() is to handle missing values at the group level. Let’s see how this works with an example.
Here is a DataFrame for demonstration
df = pd.DataFrame({
    'name': ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'C'],
    'value': [1, np.nan, np.nan, 2, 8, 2, np.nan, 3]
})
In the example above, the data can be split into three groups by name, and each group has missing values. A common solution to replace missing values is to replace NaN with mean.
Let’s take a look at the average value in each group.
df.groupby('name')['value'].mean()

name
A    1.0
B    5.0
C    2.5
Name: value, dtype: float64
Here we can use transform() to replace missing values with the group average value.
df['value'] = (df.groupby('name')['value']
                 .transform(lambda x: x.fillna(x.mean())))
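An equivalent formulation (my own variation, not from the article) computes the broadcast group means with transform('mean') and passes them to fillna(), which avoids calling a Python lambda per group:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'name': ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'C'],
    'value': [1, np.nan, np.nan, 2, 8, 2, np.nan, 3]
})

# Group means (computed on non-missing values) broadcast to every row:
# A -> 1.0, B -> 5.0, C -> 2.5
group_mean = df.groupby('name')['value'].transform('mean')
df['value'] = df['value'].fillna(group_mean)
# value is now [1, 1, 5, 2, 8, 2, 2.5, 3]
```

Because group_mean is index-aligned with df, fillna() only touches the rows that were NaN.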
Thanks for reading.
Please check out the notebook on my Github for the source code.
Stay tuned if you are interested in the practical aspect of machine learning.
},
{
"code": null,
"e": 3512,
"s": 3411,
"text": "city_sales = df.groupby('city')['sales'] .sum().rename('city_total_sales').reset_index()"
},
{
"code": null,
"e": 3564,
"s": 3512,
"text": "Step 2: Use merge() function to combine the results"
},
{
"code": null,
"e": 3610,
"s": 3564,
"text": "df_new = pd.merge(df, city_sales, how='left')"
},
{
"code": null,
"e": 3723,
"s": 3610,
"text": "The group results get merged back into the original DataFrame using merge() with how='left' for left outer join."
},
{
"code": null,
"e": 3756,
"s": 3723,
"text": "Step 3: Calculate the percentage"
},
{
"code": null,
"e": 3813,
"s": 3756,
"text": "Finally, the percentage can be calculated and formatted."
},
{
"code": null,
"e": 3937,
"s": 3813,
"text": "df_new['pct'] = df_new['sales'] / df_new['city_total_sales']df_new['pct'] = df_new['pct'].apply(lambda x: format(x, '.2%'))"
},
{
"code": null,
"e": 4061,
"s": 3937,
"text": "This certainly does our work. But it is a multistep process and requires extra code to get the data in the form we require."
},
{
"code": null,
"e": 4122,
"s": 4061,
"text": "We can solve this effectively using the transform() function"
},
{
"code": null,
"e": 4208,
"s": 4122,
"text": "This solution is a game-changer. A single line of code can solve the apply and merge."
},
{
"code": null,
"e": 4280,
"s": 4208,
"text": "Step 1: Use groupby() and transform() to calculate the city_total_sales"
},
{
"code": null,
"e": 4492,
"s": 4280,
"text": "The transform function retains the same number of items as the original dataset after performing the transformation. Therefore, a one-line step using groupby followed by a transform(sum) returns the same output."
},
{
"code": null,
"e": 4589,
"s": 4492,
"text": "df['city_total_sales'] = df.groupby('city')['sales'] .transform('sum')"
},
{
"code": null,
"e": 4622,
"s": 4589,
"text": "Step 2: Calculate the percentage"
},
{
"code": null,
"e": 4691,
"s": 4622,
"text": "Finally, this is the same as the solution one to get the percentage."
},
{
"code": null,
"e": 4795,
"s": 4691,
"text": "df['pct'] = df['sales'] / df['city_total_sales']df['pct'] = df['pct'].apply(lambda x: format(x, '.2%'))"
},
{
"code": null,
"e": 4922,
"s": 4795,
"text": "transform() can also be used to filter data. Here we are trying to get records where the city’s total sales is greater than 40"
},
{
"code": null,
"e": 4976,
"s": 4922,
"text": "df[df.groupby('city')['sales'].transform('sum') > 40]"
},
{
"code": null,
"e": 5102,
"s": 4976,
"text": "Another usage of Pandas transform() is to handle missing values at the group level. Let’s see how this works with an example."
},
{
"code": null,
"e": 5140,
"s": 5102,
"text": "Here is a DataFrame for demonstration"
},
{
"code": null,
"e": 5267,
"s": 5140,
"text": "df = pd.DataFrame({ 'name': ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'C'], 'value': [1, np.nan, np.nan, 2, 8, 2, np.nan, 3]})"
},
{
"code": null,
"e": 5446,
"s": 5267,
"text": "In the example above, the data can be split into three groups by name, and each group has missing values. A common solution to replace missing values is to replace NaN with mean."
},
{
"code": null,
"e": 5500,
"s": 5446,
"text": "Let’s take a look at the average value in each group."
},
{
"code": null,
"e": 5590,
"s": 5500,
"text": "df.groupby('name')['value'].mean()nameA 1.0B 5.0C 2.5Name: value, dtype: float64"
},
{
"code": null,
"e": 5674,
"s": 5590,
"text": "Here we can use transform() to replace missing values with the group average value."
},
{
"code": null,
"e": 5763,
"s": 5674,
"text": "df['value'] = df.groupby('name') .transform(lambda x: x.fillna(x.mean()))"
},
{
"code": null,
"e": 5783,
"s": 5763,
"text": "Thanks for reading."
},
{
"code": null,
"e": 5846,
"s": 5783,
"text": "Please checkout the notebook on my Github for the source code."
},
{
"code": null,
"e": 5924,
"s": 5846,
"text": "Stay tuned if you are interested in the practical aspect of machine learning."
},
{
"code": null,
"e": 5977,
"s": 5924,
"text": "Difference between apply() and transform() in Pandas"
},
{
"code": null,
"e": 6034,
"s": 5977,
"text": "Using Pandas method chaining to improve code readability"
},
{
"code": null,
"e": 6076,
"s": 6034,
"text": "Working with datetime in Pandas DataFrame"
},
{
"code": null,
"e": 6117,
"s": 6076,
"text": "Pandas read_csv() tricks you should know"
},
{
"code": null,
"e": 6187,
"s": 6117,
"text": "4 tricks you should know to parse date columns with Pandas read_csv()"
}
] |
Inventory Management for Retail — Stochastic Demand | by Samir Saci | Towards Data Science
|
For most retailers, inventory management systems take a fixed, rule-based approach to forecast and replenishment orders management.
Considering the distribution of the demand, the objective is to build a replenishment policy that will minimize your ordering, holding and shortage costs.
In a previous article, we have built a simulation model assuming a deterministic constant demand (Units/Day).
In this article, we will improve this model and introduce a simple methodology using a discrete simulation model built with Python to test several inventory management rules assuming a normal distribution of the customer demand.
SUMMARY
I. Scenario
1. Problem Statement
As an Inventory Manager of a mid-size retail chain, you are in charge of setting the replenishment quantity in the ERP.
2. Limits of the deterministic model
What could be the results with a normally distributed demand?
II. Continuous Review Policy: Order Point, Order Quantity (s, Q)
1. Introduction of the Inventory Policy
2. Definition of the Safety Stock
3. How do you define k?
III. Example of replenishment policies
1. Target of CSL = 95%
2. Target of IFR = 99%
IV. Conclusion & Next Steps
As an Inventory Manager of a mid-size retail chain, you are in charge of setting the replenishment quantity in the ERP.
Based on the feedback of the store manager, you start to doubt that the replenishment rules of the ERP are the most optimal especially for the fast runners because your stores are facing lost sales due to stock-outs.
For each SKU, you would like to build a simple simulation model to test several inventory rules and estimate the impact on:
Performance Metrics
Cycle Service Level (CSL): probability to have a stock-out for each cycle (%)
Item Fill Rate (IFR): percentage of customer demand met without stock-out (%)
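Both metrics can be computed directly from simulation records; a minimal pure-Python sketch (the per-cycle demand and shortage numbers below are made up for illustration):

```python
# Hypothetical per-cycle simulation records: demand and units short
demand = [80, 95, 70, 110, 90]
shortage = [0, 5, 0, 12, 0]

# CSL: fraction of replenishment cycles with no stock-out
csl = sum(s == 0 for s in shortage) / len(shortage)

# IFR: fraction of demanded units served from stock
ifr = 1 - sum(shortage) / sum(demand)

print(round(csl, 2), round(ifr, 3))  # 0.6 0.962
```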
In this article, we will build this model for,
# Total Demand (units/year)
D = 2000
# Number of days of sales per year (days)
T_total = 365
# Customer demand per day (unit/day)
D_day = D/T_total
# Purchase cost of the product (Euros/unit)
c = 50
# Cost of placing an order (/order)
c_t = 500
# Holding Cost (% unit cost per year)
h = .25
c_e = h * c
# Selling Price (Euros/unit)
p = 75
# Lead Time between ordering and receiving
LD
# Cost of shortage (Euros/unit)
c_s = 12
# Order Quantity (units/order)
Q = 82
To simplify the comprehension, let’s introduce some notations
In the previous article, we assumed a constant, deterministic demand; we’ll now introduce randomness to get closer to real demand.
μ_D = 2000 (items/year)
σ_D = 50 (items/year)
You need to improve your replenishment policy to compensate for the volatility of your demand.
You can find the full code in my Github repository: Link (Follow me :D)My portfolio with other projects: Samir Saci
To solve this issue of demand volatility, we’ll introduce a continuous review policy (s, Q)
Continuous Review = your inventory level will be checked every day
(s, Q) = if your inventory level ≤ s your ERP will order Q
To simplify the comprehension, let’s introduce some notations:
The reorder point can be defined as the minimum inventory level you need to meet your customers’ demand during the lead time between your ordering and receiving.
The safety stock is a buffer to compensate for the volatility of the demand.
Your performance metrics will be directly impacted by the safety stock level; the higher k is, the better your performance will be:
You fix your target for any of the two metrics (e.g: I want my CSL to be 95%)
You calculate k to reach this target
You fix your reorder point
Based on the definition of the CSL, we have:
k = 1.64
Reorder point with CSL: 36 units
Comments
In this example, we can see that we do not face any stock-out and the minimum stock level is very close to zero.
Code
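A minimal sketch of this calculation using only the Python standard library; the lead-time demand parameters mu_ld and sigma_ld are assumptions back-derived from the article's results, not values from the article itself:

```python
from statistics import NormalDist

# Assumed lead-time demand statistics (illustrative only)
mu_ld = 26.4     # mean demand during the lead time (units)
sigma_ld = 5.86  # std dev of demand during the lead time (units)

csl_target = 0.95
k = NormalDist().inv_cdf(csl_target)   # safety factor for a 95% CSL
reorder_point = mu_ld + k * sigma_ld   # s = mu_LD + k * sigma_LD

print(round(k, 2))           # 1.64
print(round(reorder_point))  # 36
```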
In this previous example, our target was to have 95% of the replenishment cycles without stock-out.
In this example, we’ll focus more on our capacity to deliver products in full with a target of IFR. This formula is using the Unit Normal Loss Function (you can find more information about this function here: Link).
# G(k) = Q/sigma_ld * (1 - IFR)
IFR = 0.99
G_k = (Q/sigma_ld) * (1 - IFR) = 0.14
# Final value of k
k = 0.71
Reorder point with IFR: 31 units
Comments
To reach 99% of demand units fulfilled without stock-out you need a lower safety stock. (31 units vs. 32 units)
Code
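A minimal sketch of inverting the Unit Normal Loss Function by bisection, using only the standard library; mu_ld and sigma_ld are illustrative assumptions, not values from the article:

```python
from statistics import NormalDist

nd = NormalDist()

def G(k):
    # Unit normal loss function: G(k) = phi(k) - k * (1 - Phi(k))
    return nd.pdf(k) - k * (1 - nd.cdf(k))

def k_from_G(target, lo=0.0, hi=5.0, tol=1e-8):
    # G is strictly decreasing on [0, inf), so bisection applies
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if G(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

G_k = 0.14          # = (Q / sigma_ld) * (1 - IFR)
k = k_from_G(G_k)
print(round(k, 2))  # 0.71

# Assumed lead-time demand stats (illustrative only)
mu_ld, sigma_ld = 26.4, 5.86
print(round(mu_ld + k * sigma_ld))  # 31
```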
This improved model is bringing better results as it considers the variability of the demand in the safety stock sizing.
The process is simple: you start by fixing your performance metric targets (IFR, CSL) and then you calculate your safety stock level using the k value.
The main issue with the continuous review policy is the high number of replenishments if you have many SKUs in your portfolio.
As a store manager (or Warehouse Manager), you would prefer to fix the replenishment time (e.g: 2 times per week). Therefore, we will introduce the periodic review policy in the next article.
Please feel free to contact me, I am willing to share and exchange on topics related to Data Science and Supply Chain.
My Portfolio: https://samirsaci.com
[1] Supply Chain Science, Wallace J. Hopp
[2] Inventory Management for Retail — Deterministic Demand, Samir SACI, Link
|
[
{
"code": null,
"e": 304,
"s": 172,
"text": "For most retailers, inventory management systems take a fixed, rule-based approach to forecast and replenishment orders management."
},
{
"code": null,
"e": 459,
"s": 304,
"text": "Considering the distribution of the demand, the objective is to build a replenishment policy that will minimize your ordering, holding and shortage costs."
},
{
"code": null,
"e": 569,
"s": 459,
"text": "In a previous article, we have built a simulation model assuming a deterministic constant demand (Units/Day)."
},
{
"code": null,
"e": 592,
"s": 569,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 821,
"s": 592,
"text": "In this article, we will improve this model and introduce a simple methodology using a discrete simulation model built with Python to test several inventory management rules assuming a normal distribution of the customer demand."
},
{
"code": null,
"e": 1345,
"s": 821,
"text": "SUMMARYI. Scenario1. Problem StatementAs an Inventory Manager of a mid-size retail chain, you are in charge of setting the replenishment quantity in the ERP.2. Limits of the deterministic modelWhat could be the results with a normally distributed demand?II. Continuous Review Policy: Order Point, Order Quantity (s, Q)1. Introduction of the Inventory Policy2. Definition of the Safety Stock3. How do you define k?III. Example of replenishment policies1. Target of CSL = 95%2. Target of IFR = 99%III. Conclusion & Next Steps"
},
{
"code": null,
"e": 1465,
"s": 1345,
"text": "As an Inventory Manager of a mid-size retail chain, you are in charge of setting the replenishment quantity in the ERP."
},
{
"code": null,
"e": 1682,
"s": 1465,
"text": "Based on the feedback of the store manager, you start to doubt that the replenishment rules of the ERP are the most optimal especially for the fast runners because your stores are facing lost sales due to stock-outs."
},
{
"code": null,
"e": 1806,
"s": 1682,
"text": "For each SKU, you would like to build a simple simulation model to test several inventory rules and estimate the impact on:"
},
{
"code": null,
"e": 1826,
"s": 1806,
"text": "Performance Metrics"
},
{
"code": null,
"e": 1904,
"s": 1826,
"text": "Cycle Service Level (CSL): probability to have a stock-out for each cycle (%)"
},
{
"code": null,
"e": 1982,
"s": 1904,
"text": "Item Fill Rate (IFR): percentage of customer demand met without stock-out (%)"
},
{
"code": null,
"e": 2029,
"s": 1982,
"text": "In this article, we will build this model for,"
},
{
"code": null,
"e": 2474,
"s": 2029,
"text": "# Total Demand (units/year)D = 2000# Number of days of sales per year (days)T_total = 365# Customer demand per day (unit/day)D_day = D/T_total# Purchase cost of the product (Euros/unit)c = 50# Cost of placing an order (/order)c_t = 500# Holding Cost (% unit cost per year)h = .25c_e = h * c# Selling Price (Euros/unit)p = 75# Lead Time between ordering and receivingLD# Cost of shortage (Euros/unit)c_s = 12# Order Quantity Q = 82 (units/order)"
},
{
"code": null,
"e": 2536,
"s": 2474,
"text": "To simplify the comprehension, let’s introduce some notations"
},
{
"code": null,
"e": 2673,
"s": 2536,
"text": "In the previous article, we assumed a constant deterministic of the demand; we’ll now introduce randomness to get closer to real demand."
},
{
"code": null,
"e": 2717,
"s": 2673,
"text": "μ_D = 2000 (items/year)σ_D = 50(items/year)"
},
{
"code": null,
"e": 2812,
"s": 2717,
"text": "You need to improve your replenishment policy to compensate for the volatility of your demand."
},
{
"code": null,
"e": 2928,
"s": 2812,
"text": "You can find the full code in my Github repository: Link (Follow me :D)My portfolio with other projects: Samir Saci"
},
{
"code": null,
"e": 3020,
"s": 2928,
"text": "To solve this issue of demand volatility, we’ll introduce a continuous review policy (s, Q)"
},
{
"code": null,
"e": 3087,
"s": 3020,
"text": "Continuous Review = your inventory level will be checked every day"
},
{
"code": null,
"e": 3146,
"s": 3087,
"text": "(s, Q) = if your inventory level ≤ s your ERP will order Q"
},
{
"code": null,
"e": 3209,
"s": 3146,
"text": "To simplify the comprehension, let’s introduce some notations:"
},
{
"code": null,
"e": 3371,
"s": 3209,
"text": "The reorder point can be defined as the minimum inventory level you need to meet your customers’ demand during the lead time between your ordering and receiving."
},
{
"code": null,
"e": 3448,
"s": 3371,
"text": "The safety stock is a buffer to compensate for the volatility of the demand."
},
{
"code": null,
"e": 3578,
"s": 3448,
"text": "Your performance metrics will be directly impacted by the safety stock level; the highest k is the best your performance will be:"
},
{
"code": null,
"e": 3718,
"s": 3578,
"text": "You fix your target for any of the two metrics (e.g: I want my CSL to be 95%)You calculate k to reach this targetYou fix your reorder point"
},
{
"code": null,
"e": 3796,
"s": 3718,
"text": "You fix your target for any of the two metrics (e.g: I want my CSL to be 95%)"
},
{
"code": null,
"e": 3833,
"s": 3796,
"text": "You calculate k to reach this target"
},
{
"code": null,
"e": 3860,
"s": 3833,
"text": "You fix your reorder point"
},
{
"code": null,
"e": 3874,
"s": 3860,
"text": "samirsaci.com"
},
{
"code": null,
"e": 3919,
"s": 3874,
"text": "Based on the definition of the CSL, we have:"
},
{
"code": null,
"e": 3959,
"s": 3919,
"text": "k = 1.64Reoder point with CSL: 36 units"
},
{
"code": null,
"e": 3968,
"s": 3959,
"text": "Comments"
},
{
"code": null,
"e": 4077,
"s": 3968,
"text": "In this example, we can see that we do not face any stock and the minimum stock level is very close to zero."
},
{
"code": null,
"e": 4082,
"s": 4077,
"text": "Code"
},
{
"code": null,
"e": 4182,
"s": 4082,
"text": "In this previous example, our target was to have 95% of the replenishment cycles without stock-out."
},
{
"code": null,
"e": 4398,
"s": 4182,
"text": "In this example, we’ll focus more on our capacity to deliver products in full with a target of IFR. This formula is using the Unit Normal Loss Function (you can find more information about this function here: Link)."
},
{
"code": null,
"e": 4534,
"s": 4398,
"text": "# G(k) = Q/sigma_ld * (1 - IFR)IFR = 0.99G_k = (Q/sigma_ld) * (1 - IFR) = 0.14# Final value of kk = 0.71Reoder point with CSL: 31 units"
},
{
"code": null,
"e": 4543,
"s": 4534,
"text": "Comments"
},
{
"code": null,
"e": 4655,
"s": 4543,
"text": "To reach 99% of demand units fulfilled without stock-out you need a lower safety stock. (31 units vs. 32 units)"
},
{
"code": null,
"e": 4660,
"s": 4655,
"text": "Code"
},
{
"code": null,
"e": 4781,
"s": 4660,
"text": "This improved model is bringing better results as it considers the variability of the demand in the safety stock sizing."
},
{
"code": null,
"e": 4934,
"s": 4781,
"text": "The process is simple, you start by fixing your performance metrics targets (IRF, CSL) and then you calculate your safety stock level using the k value."
},
{
"code": null,
"e": 5060,
"s": 4934,
"text": "The main issue with the continuous review policy is the high number of replenishment if you have many SKUs in your portfolio."
},
{
"code": null,
"e": 5252,
"s": 5060,
"text": "As a store manager (or Warehouse Manager), you would prefer to fix the replenishment time (e.g: 2 times per week). Therefore, we will introduce the periodic review policy in the next article."
},
{
"code": null,
"e": 5406,
"s": 5252,
"text": "Please feel free to contact me, I am willing to share and exchange on topics related to Data Science and Supply Chain.My Portfolio: https://samirsaci.com"
},
{
"code": null,
"e": 5448,
"s": 5406,
"text": "[1] Supply Chain Science, Wallace J. Hopp"
}
] |
Node.js http.IncomingMessage.headers Method - GeeksforGeeks
|
19 Jan, 2022
The http.IncomingMessage.headers is an inbuilt application programming interface of class IncomingMessage within HTTP module which is used to get all the request/response headers object.
Syntax:
const message.headers
Parameters: This method does not accept any argument as a parameter.
Return Value: This method returns all the request/response headers object.
Example 1:
Filename: index.js
Javascript
// Node.js program to demonstrate the
// request.headers method

// Importing http module
var http = require('http');

// Setting up PORT
const PORT = process.env.PORT || 3000;

// Creating http Server
var httpServer = http.createServer(function (request, response) {

    // Getting request/response headers
    // by using the request.headers property
    const value = request.headers;

    // Display header
    console.log(value.connection)

    // Display result
    response.end("hello world", 'utf8', () => {
        console.log("displaying the result...");

        httpServer.close(() => {
            console.log("server is closed")
        })
    });
});

// Listening to http Server
httpServer.listen(PORT, () => {
    console.log("Server is running at port 3000...");
});
Run the index.js file using the following command.
node index.js
Output:
In the console:
Server is running at port 3000...
keep-alive
displaying the result...
Now go to http://localhost:3000/ in the browser, you will see the following output:
hello world
Example 2:
Filename: index.js
Javascript
// Node.js program to demonstrate the
// request.headers Method

// Importing http module
var http = require('http');

// Request and response handler
const http2Handlers = (request, response) => {

    // Getting request/response headers
    // by using the request.headers property
    const value = request.headers;

    // Display header
    console.log(value.host)

    // Display result
    response.end("hello world!!", 'utf8', () => {
        console.log("displaying the result...");

        httpServer.close(() => {
            console.log("server is closed")
        })
    });
};

// Creating http Server
var httpServer = http.createServer(http2Handlers).listen(3000, () => {
    console.log("Server is running at port 3000...");
});
Run the index.js file using the following command.
node index.js
Output:
Server is running at port 3000...
localhost:3000
displaying the result...
Now go to http://localhost:3000/ in the browser, you will see the following output:
hello world!!
Reference: https://nodejs.org/dist/latest-v12.x/docs/api/http.html#http_message_headers
clintra
Node.js-Methods
Node.js
Web Technologies
|
[
{
"code": null,
"e": 25002,
"s": 24974,
"text": "\n19 Jan, 2022"
},
{
"code": null,
"e": 25189,
"s": 25002,
"text": "The http.IncomingMessage.headers is an inbuilt application programming interface of class IncomingMessage within HTTP module which is used to get all the request/response headers object."
},
{
"code": null,
"e": 25197,
"s": 25189,
"text": "Syntax:"
},
{
"code": null,
"e": 25219,
"s": 25197,
"text": "const message.headers"
},
{
"code": null,
"e": 25288,
"s": 25219,
"text": "Parameters: This method does not accept any argument as a parameter."
},
{
"code": null,
"e": 25363,
"s": 25288,
"text": "Return Value: This method returns all the request/response headers object."
},
{
"code": null,
"e": 25374,
"s": 25363,
"text": "Example 1:"
},
{
"code": null,
"e": 25393,
"s": 25374,
"text": "Filename: index.js"
},
{
"code": null,
"e": 25404,
"s": 25393,
"text": "Javascript"
},
{
"code": "// Node.js program to demonstrate the // request.headers method // Importing http modulevar http = require('http'); // Setting up PORTconst PORT = process.env.PORT || 3000; // Creating http Servervar httpServer = http.createServer( function (request, response) { // Getting request/response header // by using request.complete method const value = request.headers; // Display header console.log(value.connection) // Display result response.end(\"hello world\", 'utf8', () => { console.log(\"displaying the result...\"); httpServer.close(() => { console.log(\"server is closed\") }) });}); // Listening to http ServerhttpServer.listen(PORT, () => { console.log(\"Server is running at port 3000...\");});",
"e": 26129,
"s": 25404,
"text": null
},
{
"code": null,
"e": 26180,
"s": 26129,
"text": "Run the index.js file using the following command."
},
{
"code": null,
"e": 26194,
"s": 26180,
"text": "node index.js"
},
{
"code": null,
"e": 26202,
"s": 26194,
"text": "Output:"
},
{
"code": null,
"e": 26291,
"s": 26202,
"text": "Output: In-Console\nServer is running at port 3000...\nkeep-alive\ndisplaying the result..."
},
{
"code": null,
"e": 26375,
"s": 26291,
"text": "Now go to http://localhost:3000/ in the browser, you will see the following output:"
},
{
"code": null,
"e": 26387,
"s": 26375,
"text": "hello world"
},
{
"code": null,
"e": 26398,
"s": 26387,
"text": "Example 2:"
},
{
"code": null,
"e": 26417,
"s": 26398,
"text": "Filename: index.js"
},
{
"code": null,
"e": 26428,
"s": 26417,
"text": "Javascript"
},
{
"code": "// Node.js program to demonstrate the // request.headers Method // Importing http modulevar http = require('http'); // Request and response handlerconst http2Handlers = (request, response) => { // Getting request/response header // by using request.complete method const value = request.headers; // Display header console.log(value.host) // Display result response.end(\"hello world!!\", 'utf8', () => { console.log(\"displaying the result...\"); httpServer.close(() => { console.log(\"server is closed\") }) });}; // Creating http Servervar httpServer = http.createServer( http2Handlers).listen(3000, () => { console.log(\"Server is running at port 3000...\");});",
"e": 27113,
"s": 26428,
"text": null
},
{
"code": null,
"e": 27164,
"s": 27113,
"text": "Run the index.js file using the following command."
},
{
"code": null,
"e": 27178,
"s": 27164,
"text": "node index.js"
},
{
"code": null,
"e": 27186,
"s": 27178,
"text": "Output:"
},
{
"code": null,
"e": 27260,
"s": 27186,
"text": "Server is running at port 3000...\nlocalhost:3000\ndisplaying the result..."
},
{
"code": null,
"e": 27344,
"s": 27260,
"text": "Now go to http://localhost:3000/ in the browser, you will see the following output:"
},
{
"code": null,
"e": 27358,
"s": 27344,
"text": "hello world!!"
},
{
"code": null,
"e": 27446,
"s": 27358,
"text": "Reference: https://nodejs.org/dist/latest-v12.x/docs/api/http.html#http_message_headers"
},
{
"code": null,
"e": 27454,
"s": 27446,
"text": "clintra"
},
{
"code": null,
"e": 27470,
"s": 27454,
"text": "Node.js-Methods"
},
{
"code": null,
"e": 27478,
"s": 27470,
"text": "Node.js"
},
{
"code": null,
"e": 27495,
"s": 27478,
"text": "Web Technologies"
},
{
"code": null,
"e": 27593,
"s": 27495,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27630,
"s": 27593,
"text": "Express.js express.Router() Function"
},
{
"code": null,
"e": 27662,
"s": 27630,
"text": "JWT Authentication with Node.js"
},
{
"code": null,
"e": 27693,
"s": 27662,
"text": "Express.js req.params Property"
},
{
"code": null,
"e": 27720,
"s": 27693,
"text": "Mongoose Populate() Method"
},
{
"code": null,
"e": 27767,
"s": 27720,
"text": "Difference between npm i and npm ci in Node.js"
},
{
"code": null,
"e": 27809,
"s": 27767,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 27852,
"s": 27809,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 27914,
"s": 27852,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 27964,
"s": 27914,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
C program to print multiplication table by using for Loop
|
A for loop is a repetition control structure that allows you to efficiently write a loop that needs to execute a specific number of times.
Given below is an algorithm to print multiplication table by using for loop in C language −
Step 1: Enter a number to print table at runtime.
Step 2: Read that number from keyboard.
Step 3: Using a for loop, print num*i 10 times.
// for(i=1; i<=10; i++)
Step 4: Print num*i 10 times, where i = 1 to 10.
Following is the C program for printing a multiplication table for a given number −
#include <stdio.h>
int main(){
int i, num;
/* Input a number to print table */
printf("Enter number to print table: ");
scanf("%d", &num);
for(i=1; i<=10; i++){
printf("%d * %d = %d\n", num, i, (num*i));
}
return 0;
}
When the above program is executed, it produces the following result −
Enter number to print table: 7
7 * 1 = 7
7 * 2 = 14
7 * 3 = 21
7 * 4 = 28
7 * 5 = 35
7 * 6 = 42
7 * 7 = 49
7 * 8 = 56
7 * 9 = 63
7 * 10 = 70
|
[
{
"code": null,
"e": 1201,
"s": 1062,
"text": "A for loop is a repetition control structure that allows you to efficiently write a loop that needs to execute a specific number of times."
},
{
"code": null,
"e": 1293,
"s": 1201,
"text": "Given below is an algorithm to print multiplication table by using for loop in C language −"
},
{
"code": null,
"e": 1507,
"s": 1293,
"text": "Step 1: Enter a number to print table at runtime.\nStep 2: Read that number from keyboard.\nStep 3: Using for loop print number*I 10 times.\n // for(i=1; i<=10; i++)\nStep 4: Print num*I 10 times where i=0 to 10."
},
{
"code": null,
"e": 1591,
"s": 1507,
"text": "Following is the C program for printing a multiplication table for a given number −"
},
{
"code": null,
"e": 1602,
"s": 1591,
"text": " Live Demo"
},
{
"code": null,
"e": 1847,
"s": 1602,
"text": "#include <stdio.h>\nint main(){\n int i, num;\n /* Input a number to print table */\n printf(\"Enter number to print table: \");\n scanf(\"%d\", &num);\n for(i=1; i<=10; i++){\n printf(\"%d * %d = %d\\n\", num, i, (num*i));\n }\n return 0;\n}"
},
{
"code": null,
"e": 1918,
"s": 1847,
"text": "When the above program is executed, it produces the following result −"
},
{
"code": null,
"e": 2059,
"s": 1918,
"text": "Enter number to print table: 7\n7 * 1 = 7\n7 * 2 = 14\n7 * 3 = 21\n7 * 4 = 28\n7 * 5 = 35\n7 * 6 = 42\n7 * 7 = 49\n7 * 8 = 56\n7 * 9 = 63\n7 * 10 = 70"
}
] |
Implementing Linear Operators in Python with Google JAX | by Shailesh Kumar | Towards Data Science
|
A linear operator or a linear map is a mapping from a vector space to another vector space that preserves vector addition and scalar multiplication operations. In other words, if T is a linear operator then T(x+y) = T(x) + T(y) and T (a x) = a T(x) where x and y are vectors and a is a scalar.
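Both properties are easy to check numerically for a matrix-represented map; a small NumPy sketch with an arbitrary illustrative matrix and vectors:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
T = lambda v: A @ v   # a linear map represented by matrix A

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
a = 4.0

print(np.allclose(T(x + y), T(x) + T(y)))  # additivity holds: True
print(np.allclose(T(a * x), a * T(x)))     # homogeneity holds: True
```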
Linear operators have wide applications in signal processing, image processing, data sciences, and machine learning.
In signal processing, signals are often represented as linear combinations of sinusoids. The Discrete Fourier Transform is a linear operator which decomposes a signal in its individual component frequencies. Wavelet Transform is often used to decompose signals into individual location-scale specific wavelets so that interesting events or patterns inside a signal can be identified as well as localized easily.
In statistics, linear models are used to describe the observations or target variables as linear combinations of features.
We can think of a linear operator as a mapping from model space to data space. Every linear operator has a matrix representation. If a linear operator T is represented by a matrix A, then the application of a linear operator y = T(x) can be written as:
y = A x
where x is the model and y is the data. In Fourier Transform, A's columns are the individual sinusoids, and model x describes the contribution of each sinusoid to the observed signal y. Usually, we are given the data/signal y and our task is to estimate the model vector x. This is known as an inverse problem. The inverse problem is easy for orthonormal bases. The simple solution is x = A^H y. However, this doesn’t work if the model size is less than the data size or more. When the model size is less, we have an overfitting problem. A basic approach is to solve the least-squares problem:
minimize \| A x - y \|_2^2
This leads to a system of normal equations:
A^T A x = A^T y
Libraries like NumPy and JAX provide extensive support for matrix algebra. However, direct methods from matrix algebra are prohibitive from both time and space complexity perspectives for large systems. Storing A itself may be very expensive for very large matrices. Computing A^T A is an O(n^3) operation. Computing its inverse for solving the normal equation can become infeasible as n increases.
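To make the normal-equations route concrete, here is a minimal NumPy sketch (the 6×3 system below is invented for illustration): solving A^T A x = A^T y directly gives the same answer as a library least-squares solver.

```python
import numpy as np

# A small overdetermined system: 6 observations, 3 model parameters
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true

# Solve the normal equations A^T A x = A^T y directly
x_normal = np.linalg.solve(A.T @ A, A.T @ y)

# Compare with the library least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.allclose(x_normal, x_lstsq), np.allclose(x_normal, x_true))
```

For a small, well-conditioned A both routes agree; the point made above is that for large n even forming A^T A is too expensive, which motivates matrix-free operators.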
Functional representation of linear operators
Fortunately, many linear operators which are useful in the scientific literature can be implemented in terms of simple functions. For example, consider a forward difference operator for finite-size vectors x in R^8 (8-dimensional real vectors). A matrix representation is:
A = jnp.array([[-1.,  1.,  0.,  0.,  0.,  0.,  0.,  0.],
               [ 0., -1.,  1.,  0.,  0.,  0.,  0.,  0.],
               [ 0.,  0., -1.,  1.,  0.,  0.,  0.,  0.],
               [ 0.,  0.,  0., -1.,  1.,  0.,  0.,  0.],
               [ 0.,  0.,  0.,  0., -1.,  1.,  0.,  0.],
               [ 0.,  0.,  0.,  0.,  0., -1.,  1.,  0.],
               [ 0.,  0.,  0.,  0.,  0.,  0., -1.,  1.],
               [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
However the matrix-vector multiply computation A @ x can be written much more efficiently as:
import jax.numpy as jnp

def forward_diff(x):
    append = jnp.array([x[-1]])
    return jnp.diff(x, append=append)
This brings down the computation from O(n^2) to O(n).
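As a self-contained check (using NumPy in place of jax.numpy, so the snippet runs without JAX), the O(n) function gives exactly the same result as multiplying by the dense matrix shown above:

```python
import numpy as np

# NumPy mirror of the jnp-based forward_diff above
def forward_diff(x):
    return np.diff(x, append=x[-1])

n = 8
# Dense forward-difference matrix: -1 on the diagonal, +1 on the
# superdiagonal, and an all-zero last row
A = -np.eye(n) + np.eye(n, k=1)
A[-1, -1] = 0.0

x = np.arange(1.0, n + 1)
print(forward_diff(x))                      # [1. 1. 1. 1. 1. 1. 1. 0.]
print(np.allclose(forward_diff(x), A @ x))  # True
```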
In general, we need to implement two operations for a linear operator. The forward operator from the model space to the data space:
y = A x
And the adjoint operator from the data space to the model space:
x = A^T y
For complex vector spaces, the adjoint operator will be the Hermitian transpose:
x = A^H y
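A standard way to verify that a times/trans pair really forms an operator and its adjoint is the dot test: for any x and y, the inner products <A x, y> and <x, A^H y> must match. Here is a NumPy sketch, with a small made-up complex matrix standing in for the operator:

```python
import numpy as np

rng = np.random.default_rng(7)
# A made-up 4x3 complex matrix standing in for the operator
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

times = lambda x: A @ x                  # forward: model space -> data space
trans = lambda y: np.conjugate(A.T) @ y  # adjoint: Hermitian transpose

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# complex inner product <u, v> = sum_i u_i * conj(v_i)
inner = lambda u, v: np.sum(u * np.conjugate(v))
print(np.isclose(inner(times(x), y), inner(x, trans(y))))  # True
```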
SciPy provides a very good interface for implementing linear operators in scipy.sparse.linalg.LinearOperator. PyLops builds on top of it to provide an extensive collection of linear operators.
JAX is a new library for high-performance numerical computing based on the functional programming paradigm. It enables us to write efficient numerical programs in Pure Python which can be compiled using XLA for CPU/GPU/TPU hardware for state-of-the-art performance.
CR-Sparse is a new open-source library being developed on top of JAX that aims to provide XLA accelerated functional models and algorithms for sparse representations-based signal processing. It now includes a good collection of linear operators built on top of JAX. Docs here. We represent a linear operator by a pair of functions times and trans. The times function implements the forward operation while the trans function implements the adjoint operation.
You can install CR-Sparse from PyPI:
pip install cr-sparse
For the latest code, install directly from GitHub
python -m pip install git+https://github.com/carnotresearch/cr-sparse.git
In the interactive code samples below, the lines starting with > have the code and lines without > have the output.
To create a first derivative operator (using forward differences):
> from cr.sparse import lop
> n = 8
> T = lop.first_derivative(n, kind='forward')
It is possible to see the matrix representation of a linear operator:
> print(lop.to_matrix(T))
[[-1.  1.  0.  0.  0.  0.  0.  0.]
 [ 0. -1.  1.  0.  0.  0.  0.  0.]
 [ 0.  0. -1.  1.  0.  0.  0.  0.]
 [ 0.  0.  0. -1.  1.  0.  0.  0.]
 [ 0.  0.  0.  0. -1.  1.  0.  0.]
 [ 0.  0.  0.  0.  0. -1.  1.  0.]
 [ 0.  0.  0.  0.  0.  0. -1.  1.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.]]
Computing the forward operation T x
> x = jnp.array([1,2,3,4,5,6,7,8])
> y = T.times(x)
> print(y)
[1. 1. 1. 1. 1. 1. 1. 0.]
Computing the adjoint operation T^H x
> y = T.trans(x)
> print(y)
[-1. -1. -1. -1. -1. -1. -1.  7.]
Diagonal matrices are extremely sparse and linear operator-based implementation is ideal for them. Let’s build one:
> d = jnp.array([1., 2., 3., 4., 4, 3, 2, 1])
> T = lop.diagonal(d)
> print(lop.to_matrix(T))
[[1. 0. 0. 0. 0. 0. 0. 0.]
 [0. 2. 0. 0. 0. 0. 0. 0.]
 [0. 0. 3. 0. 0. 0. 0. 0.]
 [0. 0. 0. 4. 0. 0. 0. 0.]
 [0. 0. 0. 0. 4. 0. 0. 0.]
 [0. 0. 0. 0. 0. 3. 0. 0.]
 [0. 0. 0. 0. 0. 0. 2. 0.]
 [0. 0. 0. 0. 0. 0. 0. 1.]]
Applying it:
> print(T.times(x))
[ 1.  4.  9. 16. 20. 18. 14.  8.]
> print(T.trans(x))
[ 1.  4.  9. 16. 20. 18. 14.  8.]
All linear operators are built as a named tuple Operator. See its documentation here. Below is a basic outline.
class Operator(NamedTuple):
    times : Callable[[jnp.ndarray], jnp.ndarray]
    """A linear function mapping from A to B"""
    trans : Callable[[jnp.ndarray], jnp.ndarray]
    """Corresponding adjoint linear function mapping from B to A"""
    shape : Tuple[int, int]
    """Dimension of the linear operator (m, n)"""
    linear : bool = True
    """Indicates if the operator is linear or not"""
    real: bool = True
    """Indicates if a linear operator is real i.e. has a matrix representation of real numbers"""
Implementation of the diagonal linear operator (discussed above) is actually quite simple:
def diagonal(d):
    assert d.ndim == 1
    n = d.shape[0]
    times = lambda x: d * x
    trans = lambda x: _hermitian(d) * x
    return Operator(times=times, trans=trans, shape=(n,n))
where the function _hermitian is as follows:
def _hermitian(a):
    """Computes the Hermitian transpose of a vector or a matrix"""
    return jnp.conjugate(a.T)
The great feature of JAX is that when it just-in-time compiles Python code, it can remove unnecessary operations. E.g., if d is a real vector, then _hermitian is a NOOP and can be optimized out during compilation. All operators in cr.sparse.lop have been carefully designed so that they can be easily JIT-compiled. We provide a utility function lop.jit to quickly wrap the times and trans functions of a linear operator with jax.jit.
T = lop.jit(T)
After this, T.times and T.trans operations will run much faster (by one or two orders of magnitude).
Something like A^H A for the normal equation above can be modeled as a function:
gram = lambda x : T.trans(T.times(x))
where it is assumed that T is already created and available in the closure.
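As a quick illustrative check (with a small random NumPy matrix standing in for T), the composed gram function agrees with the explicit product A^T A, without ever forming that matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))   # made-up operator matrix

times = lambda x: A @ x
trans = lambda y: A.T @ y
gram = lambda x: trans(times(x))  # applies A^T A as two matrix-free steps

x = rng.standard_normal(3)
print(np.allclose(gram(x), (A.T @ A) @ x))  # True
```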
Once we have a framework of linear operators, we can use it to write algorithms like the preconditioned conjugate gradient method in a JAX-compatible manner (i.e., they can be JIT-compiled). This version is included in cr.sparse.opt.pcg.
CR-Sparse contains a good collection of algorithms for solving inverse problems using linear operators.
Greedy Sparse Recovery/Approximation Algorithms
Convex Optimization-based Sparse Recovery/Approximation Algorithms
We consider a compressive sensing example which consists of Partial Walsh Hadamard Measurements, a Cosine Sparsifying Basis, and ADMM-based signal recovery. In compressive sensing, the data size is much smaller than the model size. Thus the equation A x = b is underdetermined. Finding a solution requires additional assumptions. One useful assumption is to look for an x which is sparse (i.e. most of its entries are zero).
Here is our signal of interest x of n=8192 samples.
We will use a Type-II Discrete Cosine Orthonormal Basis for modeling this signal. Please note that normal DCT is not orthonormal.
Psi = lop.jit(lop.cosine_basis(n))
Let’s see if the signal is sparse on this basis:
alpha = Psi.trans(x)
It is clear that most of the coefficients in the discrete cosine basis are extremely small and can be safely ignored.
We next introduce a structured compressive sensing operator which takes the measurements of x in Walsh Hadamard Transform space, but only a small number (m=1024) of randomly selected measurements is kept. The input x may also be randomly permuted during measurement.
from jax import random

key = random.PRNGKey(0)
keys = random.split(key, 10)
# indices of the measurements to be picked
p = random.permutation(keys[1], n)
picks = jnp.sort(p[:m])
# Make sure that DC component is always picked up
picks = picks.at[0].set(0)
# a random permutation of input
perm = random.permutation(keys[2], n)
# Walsh Hadamard Basis operator
Twh = lop.walsh_hadamard_basis(n)
# Wrap it with picks and perm
Tpwh = lop.jit(lop.partial_op(Twh, picks, perm))
We can now perform the measurements on x with the operator Tpwh. The measurement process may also add some Gaussian noise.
# Perform exact measurement
b = Tpwh.times(x)
# Add some noise
sigma = 0.2
noise = sigma * random.normal(keys[3], (m,))
b = b + noise
We can now use the yall1 solver included in CR-Sparse for recovering the original signal x from the measurements b.
# tolerance for solution convergence
tol = 5e-4
# BPDN parameter
rho = 5e-4
# Run the solver
sol = yall1.solve(Tpwh, b, rho=rho, tolerance=tol, W=Psi)
# Number of iterations
iterations = int(sol.iterations)
print(f'{iterations=}')
# Relative error
rel_error = norm(sol.x - xs) / norm(xs)
print(f'{rel_error=:.4e}')
The solver converged in 150 iterations and the relative error was about 3.4e-2.
Let’s see how good the recovery is.
Please see here for the full example code.
In this article, we reviewed the concept of linear operators and the computational benefits associated with them. We presented a functional programming-based implementation of linear operators using JAX. We then looked at the application of these operators in compressive sensing problems. We could see that sophisticated signal recovery algorithms can be implemented using this approach which is fully compliant with the JAX requirements for JIT compilation. We aim to provide an extensive collection of operators and algorithms for inverse problems in CR-Sparse.
Linear Map
JAX Documentation
PyLops Documentation
Painless Conjugate Gradient
YALL1: Your Algorithms for L1 (Original MATLAB Package)
Wikipedia: Compressive Sensing
A Review of Sparse Recovery Algorithms
Sparse Signal Models Notes
Compressive Sensing Notes
|
[
{
"code": null,
"e": 466,
"s": 172,
"text": "A linear operator or a linear map is a mapping from a vector space to another vector space that preserves vector addition and scalar multiplication operations. In other words, if T is a linear operator then T(x+y) = T(x) + T(y) and T (a x) = a T(x) where x and y are vectors and a is a scalar."
},
{
"code": null,
"e": 583,
"s": 466,
"text": "Linear operators have wide applications in signal processing, image processing, data sciences, and machine learning."
},
{
"code": null,
"e": 995,
"s": 583,
"text": "In signal processing, signals are often represented as linear combinations of sinusoids. The Discrete Fourier Transform is a linear operator which decomposes a signal in its individual component frequencies. Wavelet Transform is often used to decompose signals into individual location-scale specific wavelets so that interesting events or patterns inside a signal can be identified as well as localized easily."
},
{
"code": null,
"e": 1118,
"s": 995,
"text": "In statistics, linear models are used to describe the observations or target variables as linear combinations of features."
},
{
"code": null,
"e": 1371,
"s": 1118,
"text": "We can think of a linear operator as a mapping from model space to data space. Every linear operator has a matrix representation. If a linear operator T is represented by a matrix A, then the application of a linear operator y = T(x) can be written as:"
},
{
"code": null,
"e": 1379,
"s": 1371,
"text": "y = A x"
},
{
"code": null,
"e": 1973,
"s": 1379,
"text": "where x is the model and y is the data. In Fourier Transform, A's columns are the individual sinusoids, and model x describes the contribution of each sinusoid to the observed signal y. Usually, we are given the data/signal y and our task is to estimate the model vector x. This is known as an inverse problem. The inverse problem is easy for orthonormal bases. The simple solution is x = A^H y. However, this doesn’t work if the model size is less than the data size or more. When the model size is less, we have an overfitting problem. A basic approach is to solve the least-squares problem:"
},
{
"code": null,
"e": 2000,
"s": 1973,
"text": "minimize \\| A x - y \\|_2^2"
},
{
"code": null,
"e": 2044,
"s": 2000,
"text": "This leads to a system of normal equations:"
},
{
"code": null,
"e": 2060,
"s": 2044,
"text": "A^T A x = A^T y"
},
{
"code": null,
"e": 2458,
"s": 2060,
"text": "Libraries like NumPy and JAX provide extensive support for matrix algebra. However, direct methods from matrix algebra are prohibitive from both time and space complexity perspectives for large systems. Storing A itself may be very expensive for very large matrices. Computing A^T A is an O(n3) operation. Computing its inverse for solving the normal equation can become infeasible as n increases."
},
{
"code": null,
"e": 2504,
"s": 2458,
"text": "Functional representation of linear operators"
},
{
"code": null,
"e": 2776,
"s": 2504,
"text": "Fortunately, many linear operators which are useful in the scientific literature can be implemented in terms of simple functions. For example, consider a forward difference operator for finite-size vectors x in R8 (8 dimensional real vectors). A matrix representation is:"
},
{
"code": null,
"e": 3065,
"s": 2776,
"text": "A = jnp.array([[-1. 1. 0. 0. 0. 0. 0. 0.] [ 0. -1. 1. 0. 0. 0. 0. 0.] [ 0. 0. -1. 1. 0. 0. 0. 0.] [ 0. 0. 0. -1. 1. 0. 0. 0.] [ 0. 0. 0. 0. -1. 1. 0. 0.] [ 0. 0. 0. 0. 0. -1. 1. 0.] [ 0. 0. 0. 0. 0. 0. -1. 1.] [ 0. 0. 0. 0. 0. 0. 0. 0.]])"
},
{
"code": null,
"e": 3159,
"s": 3065,
"text": "However the matrix-vector multiply computation A @ x can be written much more efficiently as:"
},
{
"code": null,
"e": 3271,
"s": 3159,
"text": "import jax.numpy as jnpdef forward_diff(x): append = jnp.array([x[-1]]) return jnp.diff(x, append=append)"
},
{
"code": null,
"e": 3324,
"s": 3271,
"text": "This brings down the computation from O(n2) to O(n)."
},
{
"code": null,
"e": 3456,
"s": 3324,
"text": "In general, we need to implement two operations for a linear operator. The forward operator from the model space to the data space:"
},
{
"code": null,
"e": 3464,
"s": 3456,
"text": "y = A x"
},
{
"code": null,
"e": 3529,
"s": 3464,
"text": "And the adjoint operator from the data space to the model space:"
},
{
"code": null,
"e": 3539,
"s": 3529,
"text": "x = A^T y"
},
{
"code": null,
"e": 3620,
"s": 3539,
"text": "For complex vector spaces, the adjoint operator will be the Hermitian transpose:"
},
{
"code": null,
"e": 3630,
"s": 3620,
"text": "x = A^H y"
},
{
"code": null,
"e": 3823,
"s": 3630,
"text": "SciPy provides a very good interface for implementing linear operators in scipy.sparse.linalg.LinearOperator. PyLops builds on top of it to provide an extensive collection of linear operators."
},
{
"code": null,
"e": 4089,
"s": 3823,
"text": "JAX is a new library for high-performance numerical computing based on the functional programming paradigm. It enables us to write efficient numerical programs in Pure Python which can be compiled using XLA for CPU/GPU/TPU hardware for state-of-the-art performance."
},
{
"code": null,
"e": 4548,
"s": 4089,
"text": "CR-Sparse is a new open-source library being developed on top of JAX that aims to provide XLA accelerated functional models and algorithms for sparse representations-based signal processing. It now includes a good collection of linear operators built on top of JAX. Docs here. We represent a linear operator by a pair of functions times and trans. The times function implements the forward operation while the trans function implements the adjoint operation."
},
{
"code": null,
"e": 4585,
"s": 4548,
"text": "You can install CR-Sparse from PyPI:"
},
{
"code": null,
"e": 4607,
"s": 4585,
"text": "pip install cr-sparse"
},
{
"code": null,
"e": 4657,
"s": 4607,
"text": "For the latest code, install directly from GitHub"
},
{
"code": null,
"e": 4731,
"s": 4657,
"text": "python -m pip install git+https://github.com/carnotresearch/cr-sparse.git"
},
{
"code": null,
"e": 4847,
"s": 4731,
"text": "In the interactive code samples below, the lines starting with > have the code and lines without > have the output."
},
{
"code": null,
"e": 4914,
"s": 4847,
"text": "To create a first derivative operator (using forward differences):"
},
{
"code": null,
"e": 4994,
"s": 4914,
"text": "> from cr.sparse import lop> n = 8> T = lop.first_derivative(n, kind='forward')"
},
{
"code": null,
"e": 5064,
"s": 4994,
"text": "It is possible to see the matrix representation of a linear operator:"
},
{
"code": null,
"e": 5363,
"s": 5064,
"text": "> print(lop.to_matrix(T))[[-1. 1. 0. 0. 0. 0. 0. 0.] [ 0. -1. 1. 0. 0. 0. 0. 0.] [ 0. 0. -1. 1. 0. 0. 0. 0.] [ 0. 0. 0. -1. 1. 0. 0. 0.] [ 0. 0. 0. 0. -1. 1. 0. 0.] [ 0. 0. 0. 0. 0. -1. 1. 0.] [ 0. 0. 0. 0. 0. 0. -1. 1.] [ 0. 0. 0. 0. 0. 0. 0. 0.]]"
},
{
"code": null,
"e": 5399,
"s": 5363,
"text": "Computing the forward operation T x"
},
{
"code": null,
"e": 5485,
"s": 5399,
"text": "> x = jnp.array([1,2,3,4,5,6,7,8])> y = T.times(x)> print(y)[1. 1. 1. 1. 1. 1. 1. 0.]"
},
{
"code": null,
"e": 5523,
"s": 5485,
"text": "Computing the adjoint operation T^H x"
},
{
"code": null,
"e": 5583,
"s": 5523,
"text": "> y = T.trans(x)> print(y)[-1. -1. -1. -1. -1. -1. -1. 7.]"
},
{
"code": null,
"e": 5699,
"s": 5583,
"text": "Diagonal matrices are extremely sparse and linear operator-based implementation is ideal for them. Let’s build one:"
},
{
"code": null,
"e": 6000,
"s": 5699,
"text": "> d = jnp.array([1., 2., 3., 4., 4, 3, 2, 1])> T = lop.diagonal(d)> print(lop.to_matrix(T))[[1. 0. 0. 0. 0. 0. 0. 0.] [0. 2. 0. 0. 0. 0. 0. 0.] [0. 0. 3. 0. 0. 0. 0. 0.] [0. 0. 0. 4. 0. 0. 0. 0.] [0. 0. 0. 0. 4. 0. 0. 0.] [0. 0. 0. 0. 0. 3. 0. 0.] [0. 0. 0. 0. 0. 0. 2. 0.] [0. 0. 0. 0. 0. 0. 0. 1.]]"
},
{
"code": null,
"e": 6013,
"s": 6000,
"text": "Applying it:"
},
{
"code": null,
"e": 6118,
"s": 6013,
"text": "> print(T.times(x))[ 1. 4. 9. 16. 20. 18. 14. 8.]> print(T.trans(x))[ 1. 4. 9. 16. 20. 18. 14. 8.]"
},
{
"code": null,
"e": 6230,
"s": 6118,
"text": "All linear operators are built as a named tuple Operator. See its documentation here. Below is a basic outline."
},
{
"code": null,
"e": 6739,
"s": 6230,
"text": "class Operator(NamedTuple): times : Callable[[jnp.ndarray], jnp.ndarray] \"\"\"A linear function mapping from A to B \"\"\" trans : Callable[[jnp.ndarray], jnp.ndarray] \"\"\"Corresponding adjoint linear function mapping from B to A\"\"\" shape : Tuple[int, int] \"\"\"Dimension of the linear operator (m, n)\"\"\" linear : bool = True \"\"\"Indicates if the operator is linear or not\"\"\" real: bool = True \"\"\"Indicates if a linear operator is real i.e. has a matrix representation of real numbers\"\"\""
},
{
"code": null,
"e": 6830,
"s": 6739,
"text": "Implementation of the diagonal linear operator (discussed above) is actually quite simple:"
},
{
"code": null,
"e": 7011,
"s": 6830,
"text": "def diagonal(d): assert d.ndim == 1 n = d.shape[0] times = lambda x: d * x trans = lambda x: _hermitian(d) * x return Operator(times=times, trans=trans, shape=(n,n))"
},
{
"code": null,
"e": 7056,
"s": 7011,
"text": "where the function _hermitian is as follows:"
},
{
"code": null,
"e": 7174,
"s": 7056,
"text": "def _hermitian(a): \"\"\"Computes the Hermitian transpose of a vector or a matrix \"\"\" return jnp.conjugate(a.T)"
},
{
"code": null,
"e": 7608,
"s": 7174,
"text": "The great feature of JAX is that when it just-in-time compiles Python code, it can remove unnecessary operations. E.g., if d is a real vector, then _hermitian is a NOOP and can be optimized out during compilation. All operators in cr.sparse.lop have been carefully designed so that they can be easily JIT-compiled. We provide a utility function lop.jit to quickly wrap the times and trans functions of a linear operator with jax.jit."
},
{
"code": null,
"e": 7624,
"s": 7608,
"text": "T = lop.jit(T) "
},
{
"code": null,
"e": 7725,
"s": 7624,
"text": "After this, T.times and T.trans operations will run much faster (by one or two orders of magnitude)."
},
{
"code": null,
"e": 7806,
"s": 7725,
"text": "Something like A^H A for the normal equation above can be modeled as a function:"
},
{
"code": null,
"e": 7844,
"s": 7806,
"text": "gram = lambda x : T.trans(T.times(x))"
},
{
"code": null,
"e": 7920,
"s": 7844,
"text": "where it is assumed that T is already created and available in the closure."
},
{
"code": null,
"e": 8160,
"s": 7920,
"text": "Once, we have a framework of linear operators handy with us, it can be used to write algorithms like preconditioned conjugate gradient in JAX compatible manner (i.e. they can be JIT-compiled). This version is included in cr.sparse.opt.pcg."
},
{
"code": null,
"e": 8264,
"s": 8160,
"text": "CR-Sparse contains a good collection of algorithms for solving inverse problems using linear operators."
},
{
"code": null,
"e": 8312,
"s": 8264,
"text": "Greedy Sparse Recovery/Approximation Algorithms"
},
{
"code": null,
"e": 8379,
"s": 8312,
"text": "Convex Optimization-based Sparse Recovery/Approximation Algorithms"
},
{
"code": null,
"e": 8789,
"s": 8379,
"text": "We consider a compressive sensing example which consists of Partial Walsh Hadamard Measurements, Cosine Sparsifying Basis, and ADMM based signal recovery. In compressive sensing, the data size is much lesser than the model size. Thus the equation A x = b underfits. Finding a solution requires additional assumptions. One useful assumption is to look for x which is sparse (i.e. most of its entries are zero)."
},
{
"code": null,
"e": 8841,
"s": 8789,
"text": "Here is our signal of interest x of n=8192 samples."
},
{
"code": null,
"e": 8971,
"s": 8841,
"text": "We will use a Type-II Discrete Cosine Orthonormal Basis for modeling this signal. Please note that normal DCT is not orthonormal."
},
{
"code": null,
"e": 9007,
"s": 8971,
"text": "Psi = lop.jit(lop.cosine_basis(n))"
},
{
"code": null,
"e": 9056,
"s": 9007,
"text": "Let’s see if the signal is sparse on this basis:"
},
{
"code": null,
"e": 9077,
"s": 9056,
"text": "alpha = Psi.trans(x)"
},
{
"code": null,
"e": 9195,
"s": 9077,
"text": "It is clear that most of the coefficients in the discrete cosine basis are extremely small and can be safely ignored."
},
{
"code": null,
"e": 9460,
"s": 9195,
"text": "We next introduce a structured compressive sensing operator which takes the measurements of x in Walsh Hadamard Transform space but only a small m=1024 number of randomly selected measurements are kept. The input x may also be randomly permuted during measurement."
},
{
"code": null,
"e": 9917,
"s": 9460,
"text": "from jax import randomkey = random.PRNGKey(0)keys = random.split(key, 10)# indices of the measurements to be pickedp = random.permutation(keys[1], n)picks = jnp.sort(p[:m])# Make sure that DC component is always picked uppicks = picks.at[0].set(0)# a random permutation of inputperm = random.permutation(keys[2], n)# Walsh Hadamard Basis operatorTwh = lop.walsh_hadamard_basis(n)# Wrap it with picks and permTpwh = lop.jit(lop.partial_op(Twh, picks, perm))"
},
{
"code": null,
"e": 10039,
"s": 9917,
"text": "We can now perform the measurements on xwith the operator Tpwh. The measurement process may also add some Gaussian noise."
},
{
"code": null,
"e": 10168,
"s": 10039,
"text": "# Perform exact measurementb = Tpwh.times(x)# Add some noisesigma = 0.2noise = sigma * random.normal(keys[3], (m,))b = b + noise"
},
{
"code": null,
"e": 10284,
"s": 10168,
"text": "We can now use the yall1 solver included in CR-Sparse for recovering the original signal x from the measurements b."
},
{
"code": null,
"e": 10583,
"s": 10284,
"text": "# tolerance for solution convergencetol = 5e-4# BPDN parameterrho = 5e-4# Run the solversol = yall1.solve(Tpwh, b, rho=rho, tolerance=tol, W=Psi)iterations = int(sol.iterations)#Number of iterationsprint(f'{iterations=}')# Relative errorrel_error = norm(sol.x-xs)/norm(xs)print(f'{rel_error=:.4e}')"
},
{
"code": null,
"e": 10663,
"s": 10583,
"text": "The solver converged in 150 iterations and the relative error was about 3.4e-2."
},
{
"code": null,
"e": 10699,
"s": 10663,
"text": "Let’s see how good is the recovery."
},
{
"code": null,
"e": 10742,
"s": 10699,
"text": "Please see here for the full example code."
},
{
"code": null,
"e": 11307,
"s": 10742,
"text": "In this article, we reviewed the concept of linear operators and the computational benefits associated with them. We presented a functional programming-based implementation of linear operators using JAX. We then looked at the application of these operators in compressive sensing problems. We could see that sophisticated signal recovery algorithms can be implemented using this approach which is fully compliant with the JAX requirements for JIT compilation. We aim to provide an extensive collection of operators and algorithms for inverse problems in CR-Sparse."
},
{
"code": null,
"e": 11318,
"s": 11307,
"text": "Linear Map"
},
{
"code": null,
"e": 11336,
"s": 11318,
"text": "JAX Documentation"
},
{
"code": null,
"e": 11357,
"s": 11336,
"text": "PyLops Documentation"
},
{
"code": null,
"e": 11385,
"s": 11357,
"text": "Painless Conjugate Gradient"
},
{
"code": null,
"e": 11441,
"s": 11385,
"text": "YALL1: Your Algorithms for L1 (Original MATLAB Package)"
},
{
"code": null,
"e": 11472,
"s": 11441,
"text": "Wikipedia: Compressive Sensing"
},
{
"code": null,
"e": 11511,
"s": 11472,
"text": "A Review of Sparse Recovery Algorithms"
},
{
"code": null,
"e": 11538,
"s": 11511,
"text": "Sparse Signal Models Notes"
}
] |
Accuracy, Recall, Precision, F-Score & Specificity, which to optimize on? | by Salma Ghoneim | Towards Data Science
|
I will use a basic example to explain each performance metric, so that you really understand the difference between each one of them, and so that in your next ML project you can choose which performance metric to improve on that best suits your project.
A school is running a machine learning primary diabetes scan on all of its students. The output is either diabetic (+ve) or healthy (-ve).
There are only 4 cases any student X could end up with. We’ll be using the following as a reference later, so don’t hesitate to re-read it if you get confused.
True positive (TP): Prediction is +ve and X is diabetic, we want that
True negative (TN): Prediction is -ve and X is healthy, we want that too
False positive (FP): Prediction is +ve and X is healthy, false alarm, bad
False negative (FN): Prediction is -ve and X is diabetic, the worst
- If it starts with True then the prediction was correct, whether diabetic or not. So a true positive is a diabetic person correctly predicted, and a true negative is a healthy person correctly predicted. Oppositely, if it starts with False then the prediction was incorrect: a false positive is a healthy person incorrectly predicted as diabetic (+), and a false negative is a diabetic person incorrectly predicted as healthy (-).
- Positive or negative indicates the output of our program, while true or false judges whether this output is correct or incorrect.
Before I continue: true positives & true negatives are always good; we love the news the word true brings. Which leaves false positives and false negatives. In our example, false positives are just a false alarm; a 2nd, more detailed scan will correct them. But a false negative label means that students think they’re healthy when they’re not, which is, in our problem, the worst case of the 4. Whether FP & FN are equally bad, or one of them is worse than the other, depends on your problem. This piece of information has a great impact on your choice of performance metric, so give it a thought before you continue.
It’s the ratio of the correctly labeled subjects to the whole pool of subjects. Accuracy is the most intuitive one. Accuracy answers the following question: How many students did we correctly label out of all the students?

Accuracy = (TP+TN)/(TP+FP+FN+TN)
numerator: all correctly labeled subjects (all trues)
denominator: all subjects
Precision is the ratio of the correctly +ve labeled by our program to all +ve labeled. Precision answers the following: How many of those who we labeled as diabetic are actually diabetic?

Precision = TP/(TP+FP)
numerator: +ve labeled diabetic people.
denominator: all +ve labeled by our program (whether they’re diabetic or not in reality).
Recall is the ratio of the correctly +ve labeled by our program to all who are diabetic in reality. Recall answers the following question: Of all the people who are diabetic, how many of those did we correctly predict?

Recall = TP/(TP+FN)
numerator: +ve labeled diabetic people.
denominator: all people who are diabetic (whether detected by our program or not)
F1 Score considers both precision and recall. It is the harmonic mean (average) of precision and recall. F1 Score is best if there is some sort of balance between precision (p) & recall (r) in the system. Oppositely, the F1 Score isn’t so high if one measure is improved at the expense of the other. For example, if P is 1 & R is 0, the F1 score is 0.

F1 Score = 2*(Recall * Precision) / (Recall + Precision)
Specificity is the ratio of the correctly -ve labeled by the program to all who are healthy in reality. Specificity answers the following question: Of all the people who are healthy, how many of those did we correctly predict?

Specificity = TN/(TN+FP)
numerator: -ve labeled healthy people.
denominator: all people who are healthy in reality (whether +ve or -ve labeled)
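Putting the five formulas together, here is a small Python sketch computing them from raw confusion-matrix counts (the counts below are made up for illustration):

```python
# Made-up counts for a 200-student scan
tp, tn, fp, fn = 70, 60, 40, 30

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * (recall * precision) / (recall + precision)
specificity = tn / (tn + fp)

print(f"{accuracy=} {precision=:.3f} {recall=} {f1=:.3f} {specificity=}")
# accuracy=0.65 precision=0.636 recall=0.7 f1=0.667 specificity=0.6
```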
Yes, accuracy is a great measure, but only when you have symmetric datasets (false negatives & false positives counts are close) and false negatives & false positives have similar costs. If the costs of false positives and false negatives are different, then F1 is your savior. F1 is best if you have an uneven class distribution.
Precision is how sure you are of your true positives whilst recall is how sure you are that you are not missing any positives.
Choose Recall if the idea of false positives is far better than false negatives; in other words, if the occurrence of false negatives is unaccepted/intolerable, you’d rather get some extra false positives (false alarms) over letting some false negatives slip through, like in our diabetes example. You’d rather get some healthy people labeled diabetic over leaving a diabetic person labeled healthy.
Choose Precision if you want to be more confident of your true positives. For example, spam emails: you’d rather have some spam emails in your inbox than some regular emails in your spam box. So the email company wants to be extra sure that email Y is spam before they put it in the spam box and you never get to see it.
Choose Specificity if you want to cover all true negatives, meaning you don’t want any false alarms, i.e. no false positives. For example, you’re running a drug test in which all people who test positive will immediately go to jail, and you don’t want anyone drug-free going to jail. False positives here are intolerable.
— Accuracy value of 90% means that 1 of every 10 labels is incorrect, and 9 are correct. — Precision value of 80% means that, on average, 2 of every 10 students labeled diabetic by our program are healthy, and 8 are diabetic. — Recall value of 70% means that 3 of every 10 diabetic people in reality are missed by our program, and 7 are labeled as diabetic. — Specificity value of 60% means that 4 of every 10 healthy people in reality are mislabeled as diabetic, and 6 are correctly labeled as healthy.
Wikipedia will explain it better than me
In the field of machine learning and specifically the problem of statistical classification, a confusion matrix, also known as an error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one (in unsupervised learning it is usually called a matching matrix). Each row of the matrix represents the instances in a predicted class while each column represents the instances in an actual class (or vice versa). The name stems from the fact that it makes it easy to see if the system is confusing two classes (i.e. commonly mislabeling one as another).
A nice & easy how-to of calculating a confusion matrix is here.
>>> from sklearn.metrics import confusion_matrix
>>> tn, fp, fn, tp = confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0]).ravel()
>>> # true negatives, false positives, false negatives, true positives
>>> (tn, fp, fn, tp)
(0, 2, 1, 1)
Probability Learning II: How Bayes’ Theorem is applied in Machine Learning | by z_ai | Towards Data Science
|
In the previous post we saw what Bayes’ Theorem is, and went through an easy, intuitive example of how it works. You can find this post here. If you don’t know what Bayes’ Theorem is, and you have not had the pleasure to read it yet, I recommend you do, as it will make understanding this present article a lot easier.
In this post, we will see the uses of this theorem in Machine Learning.
Before we start, here you have some additional resources to skyrocket your Machine Learning career:
Awesome Machine Learning Resources:- For learning resources go to How to Learn Machine Learning! - For professional resources (jobs, events, skill tests) go to AIgents.co — A career community for Data Scientists & Machine Learning Engineers.
Ready? Lets go then!
As mentioned in the previous post, Bayes’ theorem tells us how to gradually update our knowledge of something as we get more evidence about that something.
Generally, in Supervised Machine Learning, when we want to train a model the main building blocks are a set of data points that contain features (the attributes that define such data points), the labels of such data points (the numeric or categorical tag which we later want to predict on new data points), and a hypothesis function or model that links such features with their corresponding labels. We also have a loss function, which measures the difference between the predictions of the model and the real labels, and which we want to reduce to achieve the best possible results.
These supervised Machine Learning problems can be divided into two main categories: regression, where we want to calculate a number or numeric value associated with some data (like for example the price of a house), and classification, where we want to assign the data point to a certain category (for example saying if an image shows a dog or a cat).
Bayes’ theorem can be used in both regression, and classification.
Let’s see how!
Imagine we have a very simple set of data, which represents the temperature of each day of the year in a certain area of a town (the feature of the data points), and the number of water bottles sold by a local shop in that area every single day (the label of the data points).
By making a very simple model, we could see if these two are related, and if they are, then use this model to make predictions in order to stock up on water bottles depending on the temperature and never run out of stock, or avoid having too much inventory.
We could try a very simple linear regression model to see how these variables are related. In this linear model, y = θ0 + θ1·x, y is the target label (the number of water bottles in our example), the θs are the parameters of the model (θ0 the cut with the y-axis and θ1 the slope), and x is our feature (the temperature in our example).
The goal of this training would be to reduce the mentioned loss function, so that the predictions that the model makes for the known data points, are close to the actual values of the labels of such data points.
After having trained the model with the available data we would get a value for each of the θs. This training can be performed using an iterative process like gradient descent or a probabilistic method like Maximum Likelihood. Either way, we would end up with ONE single value for each of the parameters.
In this manner, when we get new data without a label (new temperature forecasts) as we know the value of the θs, we could just use this simple equation to obtain the wanted Ys (number of water bottles needed for each day).
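As a concrete illustration of this classical single-value fit, here is a minimal sketch; the temperature and sales figures are invented for the example:

```python
import numpy as np

# Hypothetical data: daily temperature (degrees C) and water bottles sold
x = np.array([15.0, 20.0, 25.0, 30.0, 35.0])
y = np.array([32.0, 40.0, 51.0, 59.0, 71.0])

# Least-squares fit of y = theta0 + theta1 * x -> ONE value per parameter
theta1, theta0 = np.polyfit(x, y, deg=1)  # polyfit returns highest degree first

# Predict the bottles needed for a new temperature forecast
forecast = 28.0
prediction = theta0 + theta1 * forecast
print(theta0, theta1, prediction)
```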
When we use Bayes’ theorem for regression, instead of thinking of the parameters (the θs) of the model as having a single, unique value, we represent them as parameters having a certain distribution: the prior distribution of the parameters. The following figures show the generic Bayes formula, and under it how it can be applied to a machine learning model.
The idea behind this is that we have some previous knowledge of the parameters of the model before we have any actual data: P(model) is this prior probability. Then, when we get some new data, we update the distribution of the parameters of the model, making it the posterior probability P(model|data).
What this means is that our parameter set (the θs of our model) is not constant, but instead has its own distribution. Based on previous knowledge (from experts for example, or from other works) we make a first hypothesis about the distribution of the parameters of our model. Then as we train our models with more data, this distribution gets updated and grows more exact (in practice the variance gets smaller).
This figure shows the initial distribution of the parameters of the model p(θ), and how as we add more data this distribution gets updated, making it grow more exact to p(θ|x), where x denotes this new data. The θ here is equivalent to the model in the formula shown above, and the x here is equivalent to the data in such formula.
Bayes’ formula, as always, tells us how to go from the prior to the posterior probabilities. We do this in an iterative process as we get more and more data, having the posterior probabilities become the prior probabilities for the next iteration. Once we have trained the model with enough data, to choose the set of final parameters we would search for the Maximum posterior (MAP) estimation to use a concrete set of values for the parameters of the model.
This kind of analysis gets its strength from the initial prior distribution: if we do not have any previous information, and can’t make any assumption about it, other probabilistic approaches like Maximum Likelihood are better suited.
However, if we have some prior information about the distribution of the parameters, the Bayes’ approach proves to be very powerful, especially in the case of unreliable training data. In this case, as we are not building the model and calculating its parameters from scratch using this data, but rather using some kind of previous knowledge to infer an initial distribution for these parameters, this previous distribution makes the parameters more robust and less affected by inaccurate data.
I don’t want to get very technical in this part, but the maths behind all this reasoning is beautiful; if you want to know about it don’t hesitate to email me at [email protected] or contact me on LinkedIn.
We have seen how Bayes’ theorem can be used for regression, by estimating the parameters of a linear model. The same reasoning could be applied to other kinds of regression algorithms.
Now we will see how to use Bayes’ theorem for classification. This is known as Bayes’ optimal classifier. The reasoning now is very similar to the previous one.
Imagine we have a classification problem with i different classes. The thing we are after here is the class probability for each class wi. Like in the previous regression case, we also differentiate between prior and posterior probabilities, but now we have prior class probabilities p(wi) and posterior class probabilities, after using data or observations p(wi|x).
Here P(x) is the density function common to all the data points, P(x|wi) is the density function of the data points belonging to class wi, and P(wi) is the prior distribution of class wi. P(x|wi) is calculated from the training data, assuming a certain distribution and calculating a mean vector for each class and the covariance of the features of the data points belonging to such class. The prior class distributions P(wi) are estimated based on domain knowledge, expert advice or previous works, like in the regression example.
Let’s see an example of how this works: Imagine we have measured the height of 34 individuals: 25 males (blue) and 9 females (red), and we get a new height observation of 172 cm which we want to classify as male or female. The following figure represents the predictions obtained using a Maximum Likelihood classifier and a Bayes optimal classifier.
In this case we have used the number of samples in the training data as the prior knowledge for our class distributions, but if for example we were doing this same differentiation between height and gender for a specific country, and we knew the women there are especially tall, and also knew the mean height of the men, we could have used this information to build our prior class distributions.
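The class counts from this example plug directly into Bayes’ rule. A sketch assuming Gaussian class-conditional densities; the means and standard deviations below are invented, since the article’s figure doesn’t give them:

```python
from math import exp, pi, sqrt

def gaussian_pdf(x, mean, std):
    # Density of Normal(mean, std**2) evaluated at x
    return exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * sqrt(2 * pi))

# Priors from the sample counts in the example: 25 males, 9 females
prior = {"male": 25 / 34, "female": 9 / 34}

# Hypothetical class-conditional height distributions, in cm: (mean, std)
likelihood = {"male": (178.0, 7.0), "female": (165.0, 6.0)}

def classify(height):
    # Unnormalized posteriors P(x|wi) * P(wi); P(x) is the normalizing constant
    scores = {w: gaussian_pdf(height, *likelihood[w]) * prior[w] for w in prior}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

posterior = classify(172.0)
print(posterior)
```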
As we can see from the example, using this prior knowledge leads to different results than not using it. Assuming this previous knowledge is of high quality (otherwise we wouldn’t use it), these predictions should be more accurate than similar trials that don’t incorporate it.
After this, as always, as we get more data these distributions would get updated to reflect the knowledge obtained from this data.
As in the previous case, I don’t want to get too technical, or extend the article too much, so I won’t go into the mathematical details, but feel free to contact me if you are curious about them.
We have seen how Bayes’ theorem is used in Machine learning; both in regression and classification, to incorporate previous knowledge into our models and improve them.
In the following post we will see how simplifications of Bayes’ theorem are one of the most used techniques for Natural Language Processing and how they are applied to many real world use cases like spam filters or sentiment analysis tools. To check it out follow me on Medium, and stay tuned!
That is all, I hope you liked the post. Feel free to connect with me on LinkedIn or follow me on Twitter at @jaimezorno. Also, you can take a look at my other posts on Data Science and Machine Learning here. Have a good read!
In case you want to go more in depth into Bayes and Machine Learning, check out these other resources:
How Bayesian Inference works
Bayesian statistics Youtube Series
Machine Learning Bayesian Learning slides
Bayesian Inference
Statistics and Probability online courses
For further resources on Machine Learning and Data Science check out the following repository: How to Learn Machine Learning! For career resources (jobs, events, skill tests) go to AIgents.co — A career community for Data Scientists & Machine Learning Engineers and as always, contact me with any questions. Have a fantastic day and keep learning.
Quick Sort | Practice | GeeksforGeeks
|
Quick Sort is a Divide and Conquer algorithm. It picks an element as pivot and partitions the given array around the picked pivot.
Given an array arr[], its starting position low and its ending position high.
Implement the partition() and quickSort() functions to sort the array.
Example 1:
Input:
N = 5
arr[] = { 4, 1, 3, 9, 7}
Output:
1 3 4 7 9
Example 2:
Input:
N = 9
arr[] = { 2, 1, 6, 10, 4, 1, 3, 9, 7}
Output:
1 1 2 3 4 6 7 9 10
Your Task:
You don't need to read input or print anything. Your task is to complete the functions partition() and quickSort() which takes the array arr[], low and high as input parameters and partitions the array. Consider the last element as the pivot such that all the elements less than(or equal to) the pivot lie before it and the elements greater than it lie after the pivot.
Expected Time Complexity: O(N*logN)
Expected Auxiliary Space: O(1)
Constraints:
1 <= N <= 10^3
1 <= arr[i] <= 10^4
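One possible sketch of the two required functions in Python, using the last element as pivot (Lomuto partition) as the task specifies:

```python
def partition(arr, low, high):
    # Last element is the pivot; elements <= pivot end up before it
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quickSort(arr, low, high):
    # Recurse on the two halves around the pivot's final position
    if low < high:
        p = partition(arr, low, high)
        quickSort(arr, low, p - 1)
        quickSort(arr, p + 1, high)

a = [4, 1, 3, 9, 7]
quickSort(a, 0, len(a) - 1)
print(a)  # [1, 3, 4, 7, 9]
```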
0
arpita biswal4 days ago
class Solution {
    partition(arr, low, high) {
        // Your code here
        let pivot = arr[low];
        let s = low, e = high;
        while (s < e) {
            while (arr[s] <= pivot) {
                s++;
            }
            while (arr[e] > pivot) {
                e--;
            }
            if (s < e) {
                this.swap(arr, s, e);
            }
        }
        this.swap(arr, low, e);
        return e;
    }
    swap(arr, i, j) {
        let temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
    quickSort(arr, low, high) {
        // code here
        if (low < high) {
            let loc = this.partition(arr, low, high);
            this.quickSort(arr, low, loc - 1);
            this.quickSort(arr, loc + 1, high);
        }
    }
}
0
prab010420001 week ago
// Java solution
class Solution {
    // Function to sort an array using quick sort algorithm.
    static void quickSort(int arr[], int low, int high) {
        // code here
        if (low < high) {
            int p = partition(arr, low, high);
            quickSort(arr, low, p - 1);
            quickSort(arr, p + 1, high);
        }
    }
    static int partition(int arr[], int low, int high) {
        // your code here
        int i = low - 1;
        int pivot = arr[high];
        for (int j = low; j <= high - 1; j++) {
            if (arr[j] <= pivot) {
                i++;
                int temp = arr[i];
                arr[i] = arr[j];
                arr[j] = temp;
            }
        }
        int temp = arr[i + 1];
        arr[i + 1] = arr[high];
        arr[high] = temp;
        return i + 1;
    }
}
0
viveksharma731 week ago
class Solution {
  public:
    void swap(int arr[], int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
    // Function to sort an array using quick sort algorithm.
    void quickSort(int arr[], int low, int high) {
        // code here
        if (low < high) {
            int pi = partition(arr, low, high);
            quickSort(arr, low, pi - 1);
            quickSort(arr, pi + 1, high);
        }
    }
  public:
    int partition(int arr[], int low, int high) {
        // Your code here
        int pivot = arr[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                swap(arr, i, j);
            }
        }
        swap(arr, i + 1, high);
        return i + 1;
    }
};
0
indraneelghosh740 · 1 week ago

This is my code. When I put "1 2" in the custom input it runs, but after submitting it shows an error on the same input "1 2":

public:
    //Function to sort an array using quick sort algorithm.
    void quickSort(int arr[], int low, int high)
    {
        if(low < high){
            int pi = partition(arr, low, high);
            quickSort(arr, low, pi - 1);
            quickSort(arr, pi + 1, high);
        }
    }
public:
    int partition(int arr[], int low, int high)
    {
        int p = arr[low];
        int i = low, j = high;
        while(i < j){
            while(arr[i] <= p)
                i++;
            while(arr[j] >= p)
                j--;
            if(i < j){
                swap(arr[i], arr[j]);
            }
        }
        swap(arr[low], arr[j]);
        return j;
    }
0
irmanivaibhav · 2 weeks ago

class Solution:
    def quickSort(self, arr, low, high):
        # code here
        if low < high:
            pi = self.partition(arr, low, high)
            self.quickSort(arr, low, pi - 1)
            self.quickSort(arr, pi + 1, high)

    def partition(self, arr, low, high):
        # code here
        i = low
        j = high - 1
        pivot = arr[high]
        while i < j:
            while i < high and arr[i] < pivot:
                i += 1
            while j > low and arr[j] >= pivot:
                j -= 1
            if i < j:
                arr[i], arr[j] = arr[j], arr[i]
        if arr[i] > pivot:
            arr[i], arr[high] = arr[high], arr[i]
        return i
0
vivekfaujdar48 · 2 weeks ago

JAVA CODE :

static void quickSort(int arr[], int low, int high)
{
    if(low < high){
        int pivot = partition(arr, low, high);
        quickSort(arr, low, pivot - 1);
        quickSort(arr, pivot + 1, high);
    }
}
static int partition(int arr[], int low, int high)
{
    // your code here
    Solution obj = new Solution();
    int pivot = arr[low];
    int i = low;
    int j = high;
    while(i < j){
        while(arr[i] <= pivot && i < j)
            i++;
        while(arr[j] > pivot)
            j--;
        if(i < j)
            obj.swap(arr, i, j);
    }
    obj.swap(arr, low, j);
    return j;
}
public void swap(int a[], int i, int j){
    int temp = a[i];
    a[i] = a[j];
    a[j] = temp;
}
0
himanshu56784 · 3 weeks ago

//Function to sort an array using quick sort algorithm.
static void quickSort(int arr[], int low, int high)
{
    if(high <= low) return;
    int p = partition(arr, low, high);
    quickSort(arr, low, p - 1);
    quickSort(arr, p + 1, high);
}
static int partition(int x[], int low, int high)
{
    // code here
    int i;
    int pi = x[high], p = i = low;
    for(; i <= high; i++)
    {
        if(x[i] < pi)
        {
            int t = x[p];
            x[p] = x[i];
            x[i] = t;
            p++;
        }
    }
    x[high] = x[p];
    x[p] = pi;
    return p;
}
0
surajdas1907 · 3 weeks ago
class Solution
{
public:
    void swap(int arr[],int i,int j){
int temp=arr[i];
arr[i]=arr[j];
arr[j]=temp;
}
//Function to sort an array using quick sort algorithm.
void quickSort(int arr[], int low, int high)
{
if(low<high){
int pi=partition(arr,low,high);
quickSort(arr,low,pi-1);
quickSort(arr,pi+1,high);
}
}
public:
int partition (int arr[], int low, int high)
{
// Your code here
int pivot=arr[high];
int i=low-1;
for(int j=low;j<high;j++){
if(arr[j]<pivot){
i++;
swap(arr,i,j);
}
}
swap(arr,i+1,high);
return i+1;
}
};
0
tarun11904312 · 3 weeks ago
Hoare’s Partition Scheme:
class Solution
{
public:
//Function to sort an array using quick sort algorithm.
void quickSort(int arr[], int low, int high)
{
if(low < high){
int p= partition(arr,low,high);
quickSort(arr,low,p);
quickSort(arr,p+1,high);
}
}
public:
int partition (int arr[], int low, int high)
{
int p=arr[low];
low--;high++;
while(true){
do{
low++;
}while(arr[low] < p);
do{
high--;
}while(arr[high] > p);
if(low>=high)
return high;
int temp=arr[low];
arr[low]=arr[high];
arr[high]=temp;
}
}
};
0
aachalsaxena001 · 4 weeks ago

void quickSort(int arr[], int low, int high){
    // code here
    if(low < high){
        int pivot = partition(arr, low, high);
        quickSort(arr, low, pivot - 1);
        quickSort(arr, pivot + 1, high);
    }
}

int partition(int arr[], int low, int high){
    int pivot = arr[high];
    int i = low - 1;
    for(int j = low; j < high; j++){
        if(arr[j] < pivot){
            i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[high]);
    return (i + 1);
}
descendants generator – Python Beautifulsoup - GeeksforGeeks
25 Oct, 2020
The descendants generator is provided by Beautiful Soup, a web scraping framework for Python. Web scraping is the process of extracting data from a website using automated tools to make the process faster. The .contents and .children attributes only consider a tag's direct children. The descendants generator is used to iterate over all of the tag's children, recursively. Each child is a Tag object for elements and a NavigableString for strings.
Syntax:
tag.descendants
The examples given below explain the concept of the descendants generator in Beautiful Soup. Example 1: In this example, we are going to get the descendants of an element.
Python3
# Import Beautiful Soup
from bs4 import BeautifulSoup

# Create the document
doc = "<body><b> Hello world </b><body>"

# Initialize the object with the document
soup = BeautifulSoup(doc, "html.parser")

# Get the body tag
tag = soup.body

# Print all the descendants of tag
for descendant in tag.descendants:
    print(descendant)
Output:
<b> Hello world </b>
Hello world
<body></body>
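For contrast (an addition, not part of the original examples): iterating the same tag's .children yields only the direct children, while .descendants recurses into them:

```python
from bs4 import BeautifulSoup

# A well-formed version of the document from Example 1
doc = "<body><b> Hello world </b></body>"
soup = BeautifulSoup(doc, "html.parser")
tag = soup.body

# .children: direct children only; .descendants: children, grandchildren, ...
print(len(list(tag.children)))      # 1  (just the <b> tag)
print(len(list(tag.descendants)))   # 2  (the <b> tag and its string)
```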
Example 2: In this example, we are going to see the type of descendants.
Python3
# Import Beautiful Soup
from bs4 import BeautifulSoup

# Create the document
doc = "<body><b> Hello world </b><body>"

# Initialize the object with the document
soup = BeautifulSoup(doc, "html.parser")

# Get the body tag
tag = soup.body

# Print the type of the descendants of tag
for descendant in tag.descendants:
    print(type(descendant))
Output:
<class 'bs4.element.Tag'>
<class 'bs4.element.NavigableString'>
<class 'bs4.element.Tag'>
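Since each descendant is either a Tag or a NavigableString, a common follow-up (my own example, not from the article) is to filter the generator by type:

```python
from bs4 import BeautifulSoup, NavigableString

doc = "<body><b> Hello world </b></body>"
soup = BeautifulSoup(doc, "html.parser")

# Keep only the string descendants, dropping the Tag objects
strings = [d for d in soup.body.descendants if isinstance(d, NavigableString)]
print(strings)  # [' Hello world ']
```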
Web-scraping
Python
Time Forecast with TPOT. a Python Automated Machine Learning... | by Susan Li | Towards Data Science
My colleague at work recommended me several wonderful Machine Learning libraries and some of them were new to me. Therefore, I decided to try them out, one by one. Today is TPOT’s turn.
The data set is from Daimler's Mercedes-Benz and is about predicting the time cars spend on the testing bench; the purpose is to reduce the time that cars spend on testing. It has over three hundred features. Frankly, I have little or no domain expertise in the automobile industry. Regardless, I will try to make the best predictions I can, using TPOT, a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
This data set contains an anonymized set of variables, each representing a custom feature in a Mercedes car.
The target feature is labeled “y” and represents the time (in seconds) that the car took to pass testing for each variable. The data set can be found here.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

train = pd.read_csv('mer_train.csv')
print('Train shape: ', train.shape)
We know what problems we have here: too many features (columns) and not enough rows.
In addition, we don’t know what those features are except “y” and “ID”.
Target feature “y” is the time (in seconds) that the car took to pass testing for each variable. Let’s see its distribution.
plt.figure(figsize=(10, 6))
n, bins, patches = plt.hist(train['y'], 50, facecolor='blue', alpha=0.75)
plt.xlabel('y value in seconds')
plt.ylabel('count')
plt.title('Histogram of y value')
plt.show();
train['y'].describe()
plt.figure(figsize=(10, 6))
plt.scatter(range(train.shape[0]), np.sort(train['y'].values))
plt.xlabel('index')
plt.ylabel('y')
plt.title("Time Distribution")
plt.show();
There is one outlier which was the maximum time at 265 seconds.
cols = [c for c in train.columns if 'X' in c]
print('Number of features except ID and target feature: {}'.format(len(cols)))
print('Feature types :')
train[cols].dtypes.value_counts()
Out of all the features, we have 8 categorical features and 368 integer features. What about the cardinality of the features? The following ideas and scripts are from Mikel Bober Irizar.
counts = [[], [], []]
for c in cols:
    typ = train[c].dtypes
    uniq = len(train[c].unique())
    if uniq == 1:
        counts[0].append(c)
    elif uniq == 2 and typ == np.int64:
        counts[1].append(c)
    else:
        counts[2].append(c)

print('Constant features: {} Binary features: {} Categorical features: {}\n'.format(*[len(c) for c in counts]))
print('Constant features: ', counts[0])
print()
print('Categorical features: ', counts[2])
There are 12 features that contain only a single value (0); these are useless for supervised algorithms, and we will drop them later.
The rest of our data set is made up of 356 binary features, and 8 categorical features. Let’s explore categorical features first.
for cat in ['X0', 'X1', 'X2', 'X3', 'X4', 'X5', 'X6', 'X8']:
    print("Number of levels in category '{0}': \b {1:2}".format(cat, train[cat].nunique()))
Feature X0
sort_X0 = train.groupby('X0').size()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.countplot(x='X0', data=train, order=sort_X0)
plt.xlabel('X0')
plt.ylabel('Occurrences')
plt.title('Feature X0')
sns.despine();
X0 vs. target feature y
sort_y = train.groupby('X0')['y']\
              .median()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(14, 6))
sns.boxplot(y='y', x='X0', data=train, order=sort_y)
ax = plt.gca()
ax.set_xticklabels(ax.get_xticklabels())
plt.title('X0 vs. y value')
plt.show();
Feature X1
sort_X1 = train.groupby('X1').size()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.countplot(x='X1', data=train, order=sort_X1)
plt.xlabel('X1')
plt.ylabel('Occurrences')
plt.title('Feature X1')
sns.despine();
X1 vs. target feature y
sort_y = train.groupby('X1')['y']\
              .median()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(10, 6))
sns.boxplot(y='y', x='X1', data=train, order=sort_y)
ax = plt.gca()
ax.set_xticklabels(ax.get_xticklabels())
plt.title('X1 vs. y value')
plt.show();
Feature X2
sort_X2 = train.groupby('X2').size()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.countplot(x='X2', data=train, order=sort_X2)
plt.xlabel('X2')
plt.ylabel('Occurrences')
plt.title('Feature X2')
sns.despine();
X2 vs. target feature y
sort_y = train.groupby('X2')['y']\
              .median()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.boxplot(y='y', x='X2', data=train, order=sort_y)
ax = plt.gca()
ax.set_xticklabels(ax.get_xticklabels())
plt.title('X2 vs. y value')
plt.show();
Feature X3
sort_X3 = train.groupby('X3').size()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.countplot(x='X3', data=train, order=sort_X3)
plt.xlabel('X3')
plt.ylabel('Occurrences')
plt.title('Feature X3')
sns.despine();
X3 vs. target feature y
sort_y = train.groupby('X3')['y']\
              .median()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(10, 6))
sns.boxplot(y='y', x='X3', data=train, order=sort_y)
ax = plt.gca()
ax.set_xticklabels(ax.get_xticklabels())
plt.title('X3 vs. y value')
plt.show();
Feature X4
sort_X4 = train.groupby('X4').size()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.countplot(x='X4', data=train, order=sort_X4)
plt.xlabel('X4')
plt.ylabel('Occurrences')
plt.title('Feature X4')
sns.despine();
X4 vs. target feature y
sort_y = train.groupby('X4')['y']\
              .median()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(10, 6))
sns.boxplot(y='y', x='X4', data=train, order=sort_y)
ax = plt.gca()
ax.set_xticklabels(ax.get_xticklabels())
plt.title('X4 vs. y value')
plt.show();
Feature X5
sort_X5 = train.groupby('X5').size()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.countplot(x='X5', data=train, order=sort_X5)
plt.xlabel('X5')
plt.ylabel('Occurrences')
plt.title('Feature X5')
sns.despine();
X5 vs. target feature y
sort_y = train.groupby('X5')['y']\
              .median()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.boxplot(y='y', x='X5', data=train, order=sort_y)
ax = plt.gca()
ax.set_xticklabels(ax.get_xticklabels())
plt.title('X5 vs. y value')
plt.show();
Feature X6
sort_X6 = train.groupby('X6').size()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.countplot(x='X6', data=train, order=sort_X6)
plt.xlabel('X6')
plt.ylabel('Occurrences')
plt.title('Feature X6')
sns.despine();
X6 vs. target feature y
sort_y = train.groupby('X6')['y']\
              .median()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.boxplot(y='y', x='X6', data=train, order=sort_y)
ax = plt.gca()
ax.set_xticklabels(ax.get_xticklabels())
plt.title('X6 vs. y value')
plt.show();
Feature X8
sort_X8 = train.groupby('X8').size()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.countplot(x='X8', data=train, order=sort_X8)
plt.xlabel('X8')
plt.ylabel('Occurrences')
plt.title('Feature X8')
sns.despine();
X8 vs. target feature y
sort_y = train.groupby('X8')['y']\
              .median()\
              .sort_values(ascending=False)\
              .index
plt.figure(figsize=(12, 6))
sns.boxplot(y='y', x='X8', data=train, order=sort_y)
ax = plt.gca()
ax.set_xticklabels(ax.get_xticklabels())
plt.title('X8 vs. y value')
plt.show();
Unfortunately, we did not learn much from the above EDA; such is life. However, we did notice that some categorical features have an effect on "y", and "X0" seems to have the strongest effect.
After exploring, we are now going to encode these categorical features' levels as digits using Scikit-learn's MultiLabelBinarizer and treat them as new features.
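The encoding code itself is not reproduced in the article text; below is a hedged sketch of the MultiLabelBinarizer pattern on a toy column (the toy values and the names col and col_trans are my own illustrations, standing in for columns like train['X0']):

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Toy stand-in for one categorical column such as train['X0']
col = ['a', 'b', 'a', 'c']

# Wrapping each value in a one-element set makes MultiLabelBinarizer
# emit one-hot rows, which can later be np.hstack-ed as new features
mlb = MultiLabelBinarizer()
col_trans = mlb.fit_transform([{v} for v in col])

print(list(mlb.classes_))   # ['a', 'b', 'c']
print(col_trans.tolist())   # [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
```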
We then drop the constant features and categorical features which have been encoded, as well as our target feature “y”.
train_new = train.drop(['y', 'X11', 'X93', 'X107', 'X233', 'X235', 'X268',
                        'X289', 'X290', 'X293', 'X297', 'X330', 'X347',
                        'X0', 'X1', 'X2', 'X3', 'X4', 'X5', 'X6', 'X8'], axis=1)
We then add the encoded features to form the final data set to be used with TPOT.
train_new = np.hstack((train_new.values, X0_trans, X1_trans, X2_trans, X3_trans,
                       X4_trans, X5_trans, X6_trans, X8_trans))
The final data set is in the form of a numpy array, in the shape of (4209, 552).
It’s time to construct and fit the TPOT regressor. When it is finished, TPOT will display the “best” model’s hyperparameters (based on test-data MSE in our case), and will also output the pipeline as an execution-ready Python script file for later use or investigation.
Running the TPOT search on this data set will output a discovered pipeline that achieves a mean squared error (MSE) of about 56 on the test set:
print("TPOT cross-validation MSE")
print(tpot.score(X_test, y_test))
You may have noticed that the MSE is a negative number. According to this thread, TPOTRegressor's default scorer is neg_mean_squared_error, which stands for the negated value of the mean squared error. Let's try it again.
from sklearn.metrics import mean_squared_error

print('MSE:')
print(mean_squared_error(y_test, tpot.predict(X_test)))
print('RMSE:')
print(np.sqrt(mean_squared_error(y_test, tpot.predict(X_test))))
So, the difference between our predicted time and the real time is about 7.5 seconds. That’s a pretty good result! And the model that produces this result is one that fits a RandomForestRegressor stacked with KNeighborsRegressor algorithm on the data set.
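TPOT's exported script expresses this stacking with its own tpot.builtins.StackingEstimator helper; as an illustrative stand-in using only scikit-learn (toy data and assumed hyperparameters, not the pipeline TPOT actually discovered), the same idea of feeding KNeighborsRegressor predictions into a RandomForestRegressor can be sketched with sklearn's StackingRegressor:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.neighbors import KNeighborsRegressor

# Toy regression data standing in for the Mercedes features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

# passthrough=True feeds the raw features to the final forest
# alongside the KNN predictions, mirroring TPOT's stacked layout
model = StackingRegressor(
    estimators=[('knn', KNeighborsRegressor(n_neighbors=5))],
    final_estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    passthrough=True,
)
model.fit(X, y)

# In-sample R^2 is high on this easy, nearly linear target
print(model.score(X, y) > 0.9)
```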
Finally, we are going to export this pipeline:
tpot.export('tpot_Mercedes_testing_time_pipeline.py')
I enjoyed learning and using TPOT, hope you are the same. Jupyter notebook can be found on Github. Have a great weekend!
Reference: TPOT Tutorial
},
{
"code": null,
"e": 4353,
"s": 4342,
"text": "Feature X2"
},
{
"code": null,
"e": 4620,
"s": 4353,
"text": "sort_X2 = train.groupby('X2').size()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize=(12,6))sns.countplot(x='X2', data=train, order = sort_X2)plt.xlabel('X2')plt.ylabel('Occurances')plt.title('Feature X2')sns.despine();"
},
{
"code": null,
"e": 4644,
"s": 4620,
"text": "X2 vs. target feature y"
},
{
"code": null,
"e": 4958,
"s": 4644,
"text": "sort_y = train.groupby('X2')['y']\\ .median()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize = (12, 6))sns.boxplot(y='y', x='X2', data=train, order=sort_y)ax = plt.gca()ax.set_xticklabels(ax.get_xticklabels())plt.title('X2 vs. y value')plt.show();"
},
{
"code": null,
"e": 4969,
"s": 4958,
"text": "Feature X3"
},
{
"code": null,
"e": 5236,
"s": 4969,
"text": "sort_X3 = train.groupby('X3').size()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize=(12,6))sns.countplot(x='X3', data=train, order = sort_X3)plt.xlabel('X3')plt.ylabel('Occurances')plt.title('Feature X3')sns.despine();"
},
{
"code": null,
"e": 5260,
"s": 5236,
"text": "X3 vs. target feature y"
},
{
"code": null,
"e": 5576,
"s": 5260,
"text": "sort_y = train.groupby('X3')['y']\\ .median()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize = (10, 6))sns.boxplot(y='y', x='X3', data=train, order = sort_y)ax = plt.gca()ax.set_xticklabels(ax.get_xticklabels())plt.title('X3 vs. y value')plt.show();"
},
{
"code": null,
"e": 5587,
"s": 5576,
"text": "Feature X4"
},
{
"code": null,
"e": 5854,
"s": 5587,
"text": "sort_X4 = train.groupby('X4').size()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize=(12,6))sns.countplot(x='X4', data=train, order = sort_X4)plt.xlabel('X4')plt.ylabel('Occurances')plt.title('Feature X4')sns.despine();"
},
{
"code": null,
"e": 5878,
"s": 5854,
"text": "X4 vs. target feature y"
},
{
"code": null,
"e": 6194,
"s": 5878,
"text": "sort_y = train.groupby('X4')['y']\\ .median()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize = (10, 6))sns.boxplot(y='y', x='X4', data=train, order = sort_y)ax = plt.gca()ax.set_xticklabels(ax.get_xticklabels())plt.title('X4 vs. y value')plt.show();"
},
{
"code": null,
"e": 6205,
"s": 6194,
"text": "Feature X5"
},
{
"code": null,
"e": 6472,
"s": 6205,
"text": "sort_X5 = train.groupby('X5').size()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize=(12,6))sns.countplot(x='X5', data=train, order = sort_X5)plt.xlabel('X5')plt.ylabel('Occurances')plt.title('Feature X5')sns.despine();"
},
{
"code": null,
"e": 6496,
"s": 6472,
"text": "X5 vs. target feature y"
},
{
"code": null,
"e": 6810,
"s": 6496,
"text": "sort_y = train.groupby('X5')['y']\\ .median()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize = (12, 6))sns.boxplot(y='y', x='X5', data=train, order=sort_y)ax = plt.gca()ax.set_xticklabels(ax.get_xticklabels())plt.title('X5 vs. y value')plt.show();"
},
{
"code": null,
"e": 6821,
"s": 6810,
"text": "Feature X6"
},
{
"code": null,
"e": 7088,
"s": 6821,
"text": "sort_X6 = train.groupby('X6').size()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize=(12,6))sns.countplot(x='X6', data=train, order = sort_X6)plt.xlabel('X6')plt.ylabel('Occurances')plt.title('Feature X6')sns.despine();"
},
{
"code": null,
"e": 7112,
"s": 7088,
"text": "X6 vs. target feature y"
},
{
"code": null,
"e": 7429,
"s": 7112,
"text": "sort_y = train.groupby('X6')['y']\\ .median()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize = (12, 6))sns.boxplot(y='y', x='X6', data=train, order=sort_y)ax = plt.gca()ax.set_xticklabels(ax.get_xticklabels())plt.title('X6 vs. y value')plt.show();"
},
{
"code": null,
"e": 7440,
"s": 7429,
"text": "Feature X8"
},
{
"code": null,
"e": 7707,
"s": 7440,
"text": "sort_X8 = train.groupby('X8').size()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize=(12,6))sns.countplot(x='X8', data=train, order = sort_X8)plt.xlabel('X8')plt.ylabel('Occurances')plt.title('Feature X8')sns.despine();"
},
{
"code": null,
"e": 7731,
"s": 7707,
"text": "X8 vs. target feature y"
},
{
"code": null,
"e": 8045,
"s": 7731,
"text": "sort_y = train.groupby('X8')['y']\\ .median()\\ .sort_values(ascending=False)\\ .indexplt.figure(figsize = (12, 6))sns.boxplot(y='y', x='X8', data=train, order=sort_y)ax = plt.gca()ax.set_xticklabels(ax.get_xticklabels())plt.title('X8 vs. y value')plt.show();"
},
{
"code": null,
"e": 8241,
"s": 8045,
"text": "Unfortunately, we did not learn much from the above EDA, this is life. However, we did notice that some categorical features have effects on the “y” and the “X0” seems to have the highest effect."
},
{
"code": null,
"e": 8399,
"s": 8241,
"text": "After exploring, we now going to encode these categorical features’ levels as digits using Scikit-learn’s MultiLabelBinarizer and treat them as new features."
},
{
"code": null,
"e": 8519,
"s": 8399,
"text": "We then drop the constant features and categorical features which have been encoded, as well as our target feature “y”."
},
{
"code": null,
"e": 8698,
"s": 8519,
"text": "train_new = train.drop(['y','X11', 'X93', 'X107', 'X233', 'X235', 'X268', 'X289', 'X290', 'X293', 'X297', 'X330', 'X347', 'X0', 'X1', 'X2', 'X3', 'X4', 'X5', 'X6', 'X8'], axis=1)"
},
{
"code": null,
"e": 8780,
"s": 8698,
"text": "We then add the encoded features to form the final data set to be used with TPOT."
},
{
"code": null,
"e": 8902,
"s": 8780,
"text": "train_new = np.hstack((train_new.values, X0_trans, X1_trans, X2_trans, X3_trans, X4_trans, X5_trans, X6_trans, X8_trans))"
},
{
"code": null,
"e": 8983,
"s": 8902,
"text": "The final data set is in the form of a numpy array, in the shape of (4209, 552)."
},
{
"code": null,
"e": 9254,
"s": 8983,
"text": "It’s time to construct and fit TPOT regressor. When it is finished, TPOT will display the “best” model (based on test data MSE in our case) hyperparameters, and will also output the pipelines as an execution-ready Python script file for a later use or our investigation."
},
{
"code": null,
"e": 9367,
"s": 9254,
"text": "Running above code will discover a pipeline as output that achieves 56 mean squared error (MSE) on the test set:"
},
{
"code": null,
"e": 9435,
"s": 9367,
"text": "print(\"TPOT cross-validation MSE\")print(tpot.score(X_test, y_test))"
},
{
"code": null,
"e": 9633,
"s": 9435,
"text": "You may have noticed that MSE is a negative number, according to this thread, it is neg_mean_squared_error for TPOTRegressor that stands for negated value of mean squared error. Let’s try it again."
},
{
"code": null,
"e": 9748,
"s": 9633,
"text": "from sklearn.metrics import mean_squared_errorprint('MSE:')print(mean_squared_error(y_test, tpot.predict(X_test)))"
},
{
"code": null,
"e": 9827,
"s": 9748,
"text": "print('RMSE:')print(np.sqrt(mean_squared_error(y_test, tpot.predict(X_test))))"
},
{
"code": null,
"e": 10083,
"s": 9827,
"text": "So, the difference between our predicted time and the real time is about 7.5 seconds. That’s a pretty good result! And the model that produces this result is one that fits a RandomForestRegressor stacked with KNeighborsRegressor algorithm on the data set."
},
{
"code": null,
"e": 10130,
"s": 10083,
"text": "Finally, we are going to export this pipeline:"
},
{
"code": null,
"e": 10184,
"s": 10130,
"text": "tpot.export('tpot_Mercedes_testing_time_pipeline.py')"
},
{
"code": null,
"e": 10305,
"s": 10184,
"text": "I enjoyed learning and using TPOT, hope you are the same. Jupyter notebook can be found on Github. Have a great weekend!"
}
] |
How to return only value of a field in MongoDB?
|
In order to return only the value of a field in MongoDB, you need to write a query and use a forEach loop. Let us first create a collection with documents
> db.returnOnlyValueOfFieldDemo.insertOne({"ClientName":"Larry"});
{
"acknowledged" : true,
"insertedId" : ObjectId("5c9ea537d628fa4220163b6e")
}
> db.returnOnlyValueOfFieldDemo.insertOne({"ClientName":"Chris"});
{
"acknowledged" : true,
"insertedId" : ObjectId("5c9ea53bd628fa4220163b6f")
}
> db.returnOnlyValueOfFieldDemo.insertOne({"ClientName":"Robert"});
{
"acknowledged" : true,
"insertedId" : ObjectId("5c9ea541d628fa4220163b70")
}
> db.returnOnlyValueOfFieldDemo.insertOne({"ClientName":"Ramit"});
{
"acknowledged" : true,
"insertedId" : ObjectId("5c9ea549d628fa4220163b71")
}
Following is the query to display all documents from a collection with the help of find() method
> db.returnOnlyValueOfFieldDemo.find().pretty();
This will produce the following output
{ "_id" : ObjectId("5c9ea537d628fa4220163b6e"), "ClientName" : "Larry" }
{ "_id" : ObjectId("5c9ea53bd628fa4220163b6f"), "ClientName" : "Chris" }
{ "_id" : ObjectId("5c9ea541d628fa4220163b70"), "ClientName" : "Robert" }
{ "_id" : ObjectId("5c9ea549d628fa4220163b71"), "ClientName" : "Ramit" }
Following is the query to return only value of a field in MongoDB
> var output = []
> db.returnOnlyValueOfFieldDemo.find().forEach(function(document) {output.push(document.ClientName) })
In order to get the values of the field, type the variable name output at the Mongo shell (the values are stored in the output array). Following is the query
> output
This will produce the following output
[ "Larry", "Chris", "Robert", "Ramit" ]
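The forEach-and-push pattern above is plain JavaScript, so it can be sketched outside the shell as well. In this minimal sketch the collection is simulated with an ordinary array of documents (the array contents and variable names are illustrative, mirroring the example above):

```javascript
// Simulated result set, shaped like what db.collection.find() returns
const documents = [
  { _id: 1, ClientName: "Larry" },
  { _id: 2, ClientName: "Chris" },
  { _id: 3, ClientName: "Robert" },
  { _id: 4, ClientName: "Ramit" }
];

// Same pattern as the shell query: push one field per document
const output = [];
documents.forEach(function (document) {
  output.push(document.ClientName);
});
console.log(output); // [ 'Larry', 'Chris', 'Robert', 'Ramit' ]

// Equivalent one-liner using map()
const names = documents.map(doc => doc.ClientName);
console.log(names);
```

The legacy mongo shell's cursor also exposes a map() helper, so `db.returnOnlyValueOfFieldDemo.find().map(d => d.ClientName)` should produce the same array in one step.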
|
[
{
"code": null,
"e": 1211,
"s": 1062,
"text": "In order to return only value of a field in MongoDB, you need to write a query and use forEach loop. Let us first create a collection with documents"
},
{
"code": null,
"e": 1820,
"s": 1211,
"text": "> db.returnOnlyValueOfFieldDemo.insertOne({\"ClientName\":\"Larry\"});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5c9ea537d628fa4220163b6e\")\n}\n> db.returnOnlyValueOfFieldDemo.insertOne({\"ClientName\":\"Chris\"});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5c9ea53bd628fa4220163b6f\")\n}\n> db.returnOnlyValueOfFieldDemo.insertOne({\"ClientName\":\"Robert\"});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5c9ea541d628fa4220163b70\")\n}\n> db.returnOnlyValueOfFieldDemo.insertOne({\"ClientName\":\"Ramit\"});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5c9ea549d628fa4220163b71\")\n}"
},
{
"code": null,
"e": 1917,
"s": 1820,
"text": "Following is the query to display all documents from a collection with the help of find() method"
},
{
"code": null,
"e": 1966,
"s": 1917,
"text": "> db.returnOnlyValueOfFieldDemo.find().pretty();"
},
{
"code": null,
"e": 2005,
"s": 1966,
"text": "This will produce the following output"
},
{
"code": null,
"e": 2298,
"s": 2005,
"text": "{ \"_id\" : ObjectId(\"5c9ea537d628fa4220163b6e\"), \"ClientName\" : \"Larry\" }\n{ \"_id\" : ObjectId(\"5c9ea53bd628fa4220163b6f\"), \"ClientName\" : \"Chris\" }\n{ \"_id\" : ObjectId(\"5c9ea541d628fa4220163b70\"), \"ClientName\" : \"Robert\" }\n{ \"_id\" : ObjectId(\"5c9ea549d628fa4220163b71\"), \"ClientName\" : \"Ramit\" }"
},
{
"code": null,
"e": 2364,
"s": 2298,
"text": "Following is the query to return only value of a field in MongoDB"
},
{
"code": null,
"e": 2485,
"s": 2364,
"text": "> var output = []\n> db.returnOnlyValueOfFieldDemo.find().forEach(function(document) {output.push(document.ClientName) })"
},
{
"code": null,
"e": 2661,
"s": 2485,
"text": "In order to get value of a field in MongoDB, you need to write variable name output at the Mongo shell (as we know the value is stored in output array). Following is the query"
},
{
"code": null,
"e": 2670,
"s": 2661,
"text": "> output"
},
{
"code": null,
"e": 2709,
"s": 2670,
"text": "This will produce the following output"
},
{
"code": null,
"e": 2749,
"s": 2709,
"text": "[ \"Larry\", \"Chris\", \"Robert\", \"Ramit\" ]"
}
] |
\eqalign - Tex Command
|
\eqalign - Used for equation alignment; for aligning multi-line displays at a single place.
{ \eqalign{ <math> & <math> \cr <repeat as needed> } }
\eqalign command is used for equation alignment; for aligning multi-line displays at a single place. The ampersand is placed where alignment is desired; a double-backslash can be used in place of the \cr ; the final \\ or \cr is optional; supports only a single \tag, which is vertically centered.
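The single \tag mentioned above attaches one equation number, vertically centered, to the whole aligned block. A minimal sketch:

```latex
\eqalign{
3x - 4y &= 5 \cr
x + 7 &= -2y
} \tag{1}
```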
\eqalign{
3x - 4y &= 5\cr
x + 7 &= -2y
}
3x−4y=5x+7=−2y
\eqalign{
(a+b)^2 &= (a+b)(a+b) \\
&= a^2 + ab + ba + b^2 \\
&= a^2 + 2ab + b^2
}
(a+b)2=(a+b)(a+b)=a2+ab+ba+b2=a2+2ab+b2
\left\{
\eqalign{
a &= 1\\
b &= 2\\
c &= 3
}\right\}
\qquad
\eqalign{
ax + by &= c \\
x + 2y &= 3
}
{a=1b=2c=3}ax+by=cx+2y=3
|
[
{
"code": null,
"e": 8078,
"s": 7986,
"text": "\\eqalign - Used for equation alignment; for aligning multi-line displays at a single place."
},
{
"code": null,
"e": 8133,
"s": 8078,
"text": "{ \\eqalign{ <math> & <math> \\cr <repeat as needed> } }"
},
{
"code": null,
"e": 8440,
"s": 8133,
"text": "\\eqalign command is used for equation alignment; for aligning multi-line displays at a single place. The ampersand is placed where alignment is desired; a double-backslash can be used in place of the \\cr ; the final \\\\ or \\cr is optional; supports only a single \\tag, which is vertically centered."
},
{
"code": null,
"e": 8777,
"s": 8440,
"text": "\n\\eqalign{\n3x - 4y &= 5\\cr\nx + 7 &= -2y\n}\n\n\n3x−4y=5x+7=−2y\n\n\n\\eqalign{\n(a+b)^2 &= (a+b)(a+b) \\\\\n &= a^2 + ab + ba + b^2 \\\\\n &= a^2 + 2ab + b^2\n}\n\n\n(a+b)2=(a+b)(a+b)=a2+ab+ba+b2=a2+2ab+b2\n\n\n\\left\\{\n\\eqalign{\na &= 1\\\\\nb &= 2\\\\\nc &= 3\n}\\right\\}\n\\qquad\n\\eqalign{\nax + by &= c \\\\\n x + 2y &= 3\n }\n\n\n{a=1b=2c=3}ax+by=cx+2y=3\n\n\n"
},
{
"code": null,
"e": 8839,
"s": 8777,
"text": "\\eqalign{\n3x - 4y &= 5\\cr\nx + 7 &= -2y\n}\n\n\n3x−4y=5x+7=−2y\n\n"
},
{
"code": null,
"e": 8883,
"s": 8839,
"text": "\\eqalign{\n3x - 4y &= 5\\cr\nx + 7 &= -2y\n}\n"
},
{
"code": null,
"e": 9025,
"s": 8883,
"text": "\\eqalign{\n(a+b)^2 &= (a+b)(a+b) \\\\\n &= a^2 + ab + ba + b^2 \\\\\n &= a^2 + 2ab + b^2\n}\n\n\n(a+b)2=(a+b)(a+b)=a2+ab+ba+b2=a2+2ab+b2\n\n"
},
{
"code": null,
"e": 9124,
"s": 9025,
"text": "\\eqalign{\n(a+b)^2 &= (a+b)(a+b) \\\\\n &= a^2 + ab + ba + b^2 \\\\\n &= a^2 + 2ab + b^2\n}\n"
},
{
"code": null,
"e": 9255,
"s": 9124,
"text": "\\left\\{\n\\eqalign{\na &= 1\\\\\nb &= 2\\\\\nc &= 3\n}\\right\\}\n\\qquad\n\\eqalign{\nax + by &= c \\\\\n x + 2y &= 3\n }\n\n\n{a=1b=2c=3}ax+by=cx+2y=3\n\n"
},
{
"code": null,
"e": 9358,
"s": 9255,
"text": "\\left\\{\n\\eqalign{\na &= 1\\\\\nb &= 2\\\\\nc &= 3\n}\\right\\}\n\\qquad\n\\eqalign{\nax + by &= c \\\\\n x + 2y &= 3\n }\n"
   }
] |
How to Detect the key points of an image using OpenCV Java library?
|
The detect() method of the org.opencv.features2d.Feature2D (abstract) class detects the key points of the given image. To this method, you need to pass a Mat object representing the source image and an empty MatOfKeyPoint object to hold the detected key points.
You can draw the detected key points on the image using the drawKeypoints() method of the org.opencv.features2d.Features2d class.
Since Feature2D is an abstract class you need to instantiate one of its subclasses to invoke the detect() method. Here we have used the FastFeatureDetector class.
Features2D and Features2d are two different classes of the features2d package; don't get confused between them.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.Scalar;
import org.opencv.features2d.FastFeatureDetector;
import org.opencv.features2d.Features2d;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
public class DetectingKeyPoints{
public static void main(String args[]) throws Exception {
//Loading the OpenCV core library
System.loadLibrary( Core.NATIVE_LIBRARY_NAME );
//Reading the contents of the image
String file ="D:\\Images\\javafx_graphical.jpg";
Mat src = Imgcodecs.imread(file);
//Reading the key points of the image
Mat dst = new Mat();
MatOfKeyPoint matOfKeyPoints = new MatOfKeyPoint();
FastFeatureDetector featureDetector = FastFeatureDetector.create();
featureDetector.detect(src, matOfKeyPoints);
//Drawing the detected key points
Features2d.drawKeypoints(src, matOfKeyPoints, dst, new Scalar(0, 0, 255));
HighGui.imshow("Feature Detection", dst);
HighGui.waitKey();
}
}
|
[
{
"code": null,
"e": 1324,
"s": 1062,
"text": "The detect() method of the org.opencv.features2d.Feature2D (abstract) class detects the key points of the given image. To this method, you need to pass a Mat the object representing the source image and an empty MatOfKeyPoint object to hold the read key points."
},
{
"code": null,
"e": 1450,
"s": 1324,
"text": "You can draw the draw key points on the image using the drawKeypoints() method of the org.opencv.features2d.Features2d class."
},
{
"code": null,
"e": 1613,
"s": 1450,
"text": "Since Feature2D is an abstract class you need to instantiate one of its subclasses to invoke the detect() method. Here we have used the FastFeatureDetector class."
},
{
"code": null,
"e": 1776,
"s": 1613,
"text": "Since Feature2D is an abstract class you need to instantiate one of its subclasses to invoke the detect() method. Here we have used the FastFeatureDetector class."
},
{
"code": null,
"e": 1876,
"s": 1776,
"text": "Features2D and Features2d are two different classes of the package features2d don’t get confused..."
},
{
"code": null,
"e": 1976,
"s": 1876,
"text": "Features2D and Features2d are two different classes of the package features2d don’t get confused..."
},
{
"code": null,
"e": 3048,
"s": 1976,
"text": "import org.opencv.core.Core;\nimport org.opencv.core.Mat;\nimport org.opencv.core.MatOfKeyPoint;\nimport org.opencv.core.Scalar;\nimport org.opencv.features2d.FastFeatureDetector;\nimport org.opencv.features2d.Features2d;\nimport org.opencv.highgui.HighGui;z\nimport org.opencv.imgcodecs.Imgcodecs;\npublic class DetectingKeyPoints{\n public static void main(String args[]) throws Exception {\n //Loading the OpenCV core library\n System.loadLibrary( Core.NATIVE_LIBRARY_NAME );\n //Reading the contents of the image\n String file =\"D:\\\\Images\\\\javafx_graphical.jpg\";\n Mat src = Imgcodecs.imread(file);\n //Reading the key points of the image\n Mat dst = new Mat();\n MatOfKeyPoint matOfKeyPoints = new MatOfKeyPoint();\n FastFeatureDetector featureDetector = FastFeatureDetector.create();\n featureDetector.detect(src, matOfKeyPoints);\n //Drawing the detected key points\n Features2d.drawKeypoints(src, matOfKeyPoints, dst, new Scalar(0, 0, 255));\n HighGui.imshow(\"Feature Detection\", dst);\n HighGui.waitKey();\n }\n}"
}
] |
How to handle StringIndexOutOfBoundsException in Java?
|
Strings are used to store a sequence of characters in Java, they are treated as objects. The String class of the java.lang package represents a String.
You can create a String either by using the new keyword (like any other object) or, by assigning value to the literal (like any other primitive datatype).
public class StringDemo {
public static void main(String args[]) {
String stringObject = new String("Hello how are you");
System.out.println(stringObject);
String stringLiteral = "Welcome to Tutorialspoint";
System.out.println(stringLiteral);
}
}
Hello how are you
Welcome to Tutorialspoint
An array is a data structure/container/object that stores a fixed-size sequential collection of elements of the same type. The size/length of the array is determined at the time of creation.
The position of an element in the array is called its index or subscript. The first element of the array is stored at index 0, the second element at index 1, and so on.
Since the string stores an array of characters, just like arrays the position of each character is represented by an index (starting from 0). For example, if we have created a String as −
String str = "Hello";
The characters in it are positioned as −
If you try to access the character of a String at the index which is greater than its length a StringIndexOutOfBoundsException is thrown.
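One way to avoid the exception is to validate the index against length() before calling charAt(). The class and method names in this sketch are illustrative, not from the article:

```java
public class SafeCharAt {
    // Returns the character at index, or a fallback when index is out of range
    public static char charAtOrDefault(String str, int index, char fallback) {
        if (str == null || index < 0 || index >= str.length()) {
            return fallback;
        }
        return str.charAt(index);
    }

    public static void main(String[] args) {
        String str = "Hello";
        System.out.println(charAtOrDefault(str, 1, '?'));  // e
        System.out.println(charAtOrDefault(str, 40, '?')); // ? instead of an exception
    }
}
```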
The String class in Java provides various methods to manipulate Strings. You can find the character at a particular index using the charAt() method of this class.
This method accepts an integer value specifying the index of the String and returns the character in the String at the specified index.
In the following Java program, we are creating a String of length 17 and trying to print the element at the index 40.
public class Test {
public static void main(String[] args) {
String str = "Hello how are you";
System.out.println("Length of the String: "+str.length());
for(int i=0; i<str.length(); i++) {
System.out.println(str.charAt(i));
}
//Accessing element at greater than the length of the String
System.out.println(str.charAt(40));
}
}
Since we are accessing the element at the index greater than its length StringIndexOutOfBoundsException is thrown.
Length of the String: 17
H
e
l
l
o
h
o
w
a
r
e
y
o
u
Exception in thread "main" java.lang.StringIndexOutOfBoundsException: String index out of range: 40
at java.base/java.lang.StringLatin1.charAt(Unknown Source)
at java.base/java.lang.String.charAt(Unknown Source)
at Test.main(Test.java:9)
Just like other exceptions, you can handle this exception by wrapping the code that is prone to it within a try-catch block. In the catch block, catch the exception of type IndexOutOfBoundsException or StringIndexOutOfBoundsException.
public class Test {
public static void main(String[] args) {
String str = "Hello how are you";
      for(int i=0; i<str.length(); i++) {
System.out.println(str.charAt(i));
}
System.out.println(str.length());
//Accessing element at greater than the length of the String
try {
System.out.println(str.charAt(40));
}catch(StringIndexOutOfBoundsException e) {
System.out.println("Exception occurred . . . . . . . . ");
}
}
}
H
e
l
l
o
h
o
w
a
r
e
y
o
u
17
Exception occurred . . . . . . . .
|
[
{
"code": null,
"e": 1214,
"s": 1062,
"text": "Strings are used to store a sequence of characters in Java, they are treated as objects. The String class of the java.lang package represents a String."
},
{
"code": null,
"e": 1369,
"s": 1214,
"text": "You can create a String either by using the new keyword (like any other object) or, by assigning value to the literal (like any other primitive datatype)."
},
{
"code": null,
"e": 1646,
"s": 1369,
"text": "public class StringDemo {\n public static void main(String args[]) {\n String stringObject = new String(\"Hello how are you\");\n System.out.println(stringObject);\n String stringLiteral = \"Welcome to Tutorialspoint\";\n System.out.println(stringLiteral);\n }\n}"
},
{
"code": null,
"e": 1690,
"s": 1646,
"text": "Hello how are you\nWelcome to Tutorialspoint"
},
{
"code": null,
"e": 1881,
"s": 1690,
"text": "An array is a data structure/container/object that stores a fixed-size sequential collection of elements of the same type. The size/length of the array is determined at the time of creation."
},
{
"code": null,
"e": 2065,
"s": 1881,
"text": "The position of the elements in the array is called as index or subscript. The first element of the array is stored at the index 0 and, the second element is at the index 1 and so on."
},
{
"code": null,
"e": 2253,
"s": 2065,
"text": "Since the string stores an array of characters, just like arrays the position of each character is represented by an index (starting from 0). For example, if we have created a String as −"
},
{
"code": null,
"e": 2275,
"s": 2253,
"text": "String str = \"Hello\";"
},
{
"code": null,
"e": 2316,
"s": 2275,
"text": "The characters in it are positioned as −"
},
{
"code": null,
"e": 2454,
"s": 2316,
"text": "If you try to access the character of a String at the index which is greater than its length a StringIndexOutOfBoundsException is thrown."
},
{
"code": null,
"e": 2617,
"s": 2454,
"text": "The String class in Java provides various methods to manipulate Strings. You can find the character at a particular index using the charAt() method of this class."
},
{
"code": null,
"e": 2751,
"s": 2617,
"text": "This method accepts an integer value specifying the index of theStringand returns the character in the String at the specified index."
},
{
"code": null,
"e": 2869,
"s": 2751,
"text": "In the following Java program, we are creating a String of length 17 and trying to print the element at the index 40."
},
{
"code": null,
"e": 3248,
"s": 2869,
"text": "public class Test {\n public static void main(String[] args) {\n String str = \"Hello how are you\";\n System.out.println(\"Length of the String: \"+str.length());\n for(int i=0; i<str.length(); i++) {\n System.out.println(str.charAt(i));\n }\n //Accessing element at greater than the length of the String\n System.out.println(str.charAt(40));\n }\n}"
},
{
"code": null,
"e": 3363,
"s": 3248,
"text": "Since we are accessing the element at the index greater than its length StringIndexOutOfBoundsException is thrown."
},
{
"code": null,
"e": 3666,
"s": 3363,
"text": "Length of the String: 17\nH\ne\nl\nl\no\n\nh\no\nw\n\na\nr\ne\n\ny\no\nu\nException in thread \"main\" java.lang.StringIndexOutOfBoundsException: String index out of range: 40\n at java.base/java.lang.StringLatin1.charAt(Unknown Source)\n at java.base/java.lang.String.charAt(Unknown Source)\n at Test.main(Test.java:9)"
},
{
"code": null,
"e": 3892,
"s": 3666,
"text": "Just like other exceptions you can handle this exception by wrapping the code that is prone to it within try catch. In the catch block catch the exception of type IndexOutOfBoundsException or, StringIndexOutOfBoundsException."
},
{
"code": null,
"e": 4386,
"s": 3892,
"text": "public class Test {\n public static void main(String[] args) {\n String str = \"Hello how are you\";\n for(int i=0; i<tr.length(); i++) {\n System.out.println(str.charAt(i));\n }\n System.out.println(str.length());\n //Accessing element at greater than the length of the String\n try {\n System.out.println(str.charAt(40));\n }catch(StringIndexOutOfBoundsException e) {\n System.out.println(\"Exception occurred . . . . . . . . \");\n }\n }\n}"
},
{
"code": null,
"e": 4455,
"s": 4386,
"text": "H\ne\nl\nl\no\n\nh\no\nw\n\na\nr\ne\n\ny\no\nu\n17\nException occurred . . . . . . . ."
}
] |
Extend data class in Kotlin
|
Data class is a class that holds the data for an application. It is just like a POJO class that we use in Java in order to hold the data.
In Java, for a data class, we need to create getter and setter methods in order to access the properties of that class. In Kotlin, when a class is declared as a data class, the compiler automatically creates the supporting methods required to access the member variables of the class. The compiler will create getters (and, for var parameters, setters) for the constructor parameters, as well as hashCode(), equals(), toString(), and copy().
For a class to be considered as a data class in Kotlin, the following conditions are to be fulfilled −
The primary constructor needs to have at least one parameter.
All primary constructor parameters need to be marked as val or var.
Data classes cannot be abstract, open, sealed, or inner.
We cannot inherit from a data class, but in order to achieve a similar effect, we can declare a superclass and override its properties in the data class.
In the following example, we will create two data classes "Student" and "Book". We will also create an abstract class "Resource". Inside "Book", we will override the properties of the "Resource" class.
data class Student(val name: String, val age: Int)
fun main(args: Array<String>) {
val stu = Student("Student1", 29)
val stu2 = Student("Student2", 30)
println("Student1 Name is: ${stu.name}")
println("Student1 Age is: ${stu.age}")
println("Student2 Name is: ${stu2.name}")
println("Student2 Age is: ${stu2.age}")
val b=Book(1L,"India","123222") // implementing abstract class
println(b.location)
}
// declaring super class
abstract class Resource {
abstract var id: Long
abstract var location: String
}
// override the properties of the Resource class
data class Book (
override var id: Long = 0,
override var location: String = "",
var isbn: String
) : Resource()
It will generate the following output −
Student1 Name is: Student1
Student1 Age is: 29
Student2 Name is: Student2
Student2 Age is: 30
India
|
[
{
"code": null,
"e": 1200,
"s": 1062,
"text": "Data class is a class that holds the data for an application. It is just like a POJO class that we use in Java in order to hold the data."
},
{
"code": null,
"e": 1602,
"s": 1200,
"text": "In Java, for data class, we need to create getter and setter methods in order to access the properties of that class. In Kotlin, when a class is declared as a data class, the compiler automatically creates some supporting methods required to access the member variable of the class. The compiler will create getters and setters for the constructor parameters, hashCode(), equals(), toString(), copy()."
},
{
"code": null,
"e": 1705,
"s": 1602,
"text": "For a class to be considered as a data class in Kotlin, the following conditions are to be fulfilled −"
},
{
"code": null,
"e": 1767,
"s": 1705,
"text": "The primary constructor needs to have at least one parameter."
},
{
"code": null,
"e": 1829,
"s": 1767,
"text": "The primary constructor needs to have at least one parameter."
},
{
"code": null,
"e": 1897,
"s": 1829,
"text": "All primary constructor parameters need to be marked as val or var."
},
{
"code": null,
"e": 1965,
"s": 1897,
"text": "All primary constructor parameters need to be marked as val or var."
},
{
"code": null,
"e": 2022,
"s": 1965,
"text": "Data classes cannot be abstract, open, sealed, or inner."
},
{
"code": null,
"e": 2079,
"s": 2022,
"text": "Data classes cannot be abstract, open, sealed, or inner."
},
{
"code": null,
"e": 2226,
"s": 2079,
"text": "We cannot extend a data class but in order to implement the same feature, we can declare a super class and override the properties in a sub-class."
},
{
"code": null,
"e": 2428,
"s": 2226,
"text": "In the following example, we will create two data classes \"Student\" and \"Book\". We will also create an abstract class \"Resource\". Inside \"Book\", we will override the properties of the \"Resource\" class."
},
{
"code": null,
"e": 3128,
"s": 2428,
"text": "data class Student(val name: String, val age: Int)\n\nfun main(args: Array) {\n val stu = Student(\"Student1\", 29)\n val stu2 = Student(\"Student2\", 30)\n println(\"Student1 Name is: ${stu.name}\")\n println(\"Student1 Age is: ${stu.age}\")\n println(\"Student2 Name is: ${stu2.name}\")\n println(\"Student2 Age is: ${stu2.age}\")\n val b=Book(1L,\"India\",\"123222\") // implementing abstract class\n println(b.location)\n}\n\n// declaring super class\nabstract class Resource {\n abstract var id: Long\n abstract var location: String\n}\n\n// override the properties of the Resource class\ndata class Book (\n override var id: Long = 0,\n override var location: String = \"\",\n var isbn: String\n) : Resource()"
},
{
"code": null,
"e": 3168,
"s": 3128,
"text": "It will generate the following output −"
},
{
"code": null,
"e": 3268,
"s": 3168,
"text": "Student1 Name is: Student1\nStudent1 Age is: 29\nStudent2 Name is: Student2\nStudent2 Age is: 30\nIndia"
}
] |
AtomicInteger getAndIncrement() method in Java with examples - GeeksforGeeks
|
29 Jan, 2019
The java.util.concurrent.atomic.AtomicInteger.getAndIncrement() is an inbuilt method in Java that atomically increases the given value by one and returns the value held before the update, which is of data type int.
Syntax:
public final int getAndIncrement()
Parameters: The function does not accept any parameter.
Return value: The function returns the value held before the increment operation was performed.
The programs below demonstrate the function:
Program 1:
// Java program that demonstrates
// the getAndIncrement() function
import java.util.concurrent.atomic.AtomicInteger;

public class GFG {
    public static void main(String args[])
    {
        // Initially value as 0
        AtomicInteger val = new AtomicInteger(0);

        // Increments and gets
        // the previous value
        int res = val.getAndIncrement();
        System.out.println("Previous value: " + res);

        // Prints the updated value
        System.out.println("Current value: " + val);
    }
}
Previous value: 0
Current value: 1
Program 2:
// Java program that demonstrates
// the getAndIncrement() function
import java.util.concurrent.atomic.AtomicInteger;

public class GFG {
    public static void main(String args[])
    {
        // Initially value as 18
        AtomicInteger val = new AtomicInteger(18);

        // Increments and gets
        // the previous value
        int res = val.getAndIncrement();
        System.out.println("Previous value: " + res);

        // Prints the updated value
        System.out.println("Current value: " + val);
    }
}
Previous value: 18
Current value: 19
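AtomicInteger also provides incrementAndGet(), which performs the same atomic increment but returns the updated value rather than the previous one. A short sketch contrasting the two (the class name is only for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CompareIncrements {
    public static void main(String[] args)
    {
        AtomicInteger val = new AtomicInteger(5);

        // getAndIncrement(): returns the OLD value; counter becomes 6
        System.out.println("getAndIncrement returned: " + val.getAndIncrement()); // 5
        System.out.println("value is now: " + val.get());                         // 6

        // incrementAndGet(): increments first, returns the NEW value
        System.out.println("incrementAndGet returned: " + val.incrementAndGet()); // 7
    }
}
```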
Reference: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicInteger.html#getAndIncrement--
Java - util package
Java-AtomicInteger
Java-Functions
Java
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
|
[
{
"code": null,
"e": 24402,
"s": 24374,
"text": "\n29 Jan, 2019"
},
{
"code": null,
"e": 24598,
"s": 24402,
"text": "The java.util.concurrent.atomic.AtomicInteger.getAndIncrement() is an inbuilt method in java that increases the given value by one and returns the value before updation which is of data-type int."
},
{
"code": null,
"e": 24606,
"s": 24598,
"text": "Syntax:"
},
{
"code": null,
"e": 24642,
"s": 24606,
"text": "public final int getAndIncrement()\n"
},
{
"code": null,
"e": 24704,
"s": 24642,
"text": "Parameters: The function does not accepts a single parameter."
},
{
"code": null,
"e": 24812,
"s": 24704,
"text": "Return value: The function returns the value before increment operation is performed to the previous value."
},
{
"code": null,
"e": 24853,
"s": 24812,
"text": "Program below demonstrates the function:"
},
{
"code": null,
"e": 24864,
"s": 24853,
"text": "Program 1:"
},
{
"code": "// Java program that demonstrates// the getAndIncrement() function import java.util.concurrent.atomic.AtomicInteger;public class GFG { public static void main(String args[]) { // Initially value as 0 AtomicInteger val = new AtomicInteger(0); // Decreases and gets // the previous value int res = val.getAndIncrement(); System.out.println(\"Previous value: \" + res); // Prints the updated value System.out.println(\"Current value: \" + val); }}",
"e": 25453,
"s": 24864,
"text": null
},
{
"code": null,
"e": 25489,
"s": 25453,
"text": "Previous value: 0\nCurrent value: 1\n"
},
{
"code": null,
"e": 25500,
"s": 25489,
"text": "Program 2:"
},
{
"code": "// Java program that demonstrates// the getAndIncrement() function import java.util.concurrent.atomic.AtomicInteger; public class GFG { public static void main(String args[]) { // Initially value as 18 AtomicInteger val = new AtomicInteger(18); // Increases 1 and gets // the previous value int res = val.getAndIncrement(); System.out.println(\"Previous value: \" + res); // Prints the updated value System.out.println(\"Current value: \" + val); }}",
"e": 26082,
"s": 25500,
"text": null
},
{
"code": null,
"e": 26120,
"s": 26082,
"text": "Previous value: 18\nCurrent value: 19\n"
},
{
"code": null,
"e": 26237,
"s": 26120,
"text": "Reference: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicInteger.html#getAndIncrement–"
},
{
"code": null,
"e": 26257,
"s": 26237,
"text": "Java - util package"
},
{
"code": null,
"e": 26276,
"s": 26257,
"text": "Java-AtomicInteger"
},
{
"code": null,
"e": 26291,
"s": 26276,
"text": "Java-Functions"
},
{
"code": null,
"e": 26296,
"s": 26291,
"text": "Java"
},
{
"code": null,
"e": 26301,
"s": 26296,
"text": "Java"
},
{
"code": null,
"e": 26399,
"s": 26301,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26408,
"s": 26399,
"text": "Comments"
},
{
"code": null,
"e": 26421,
"s": 26408,
"text": "Old Comments"
},
{
"code": null,
"e": 26453,
"s": 26421,
"text": "Initialize an ArrayList in Java"
},
{
"code": null,
"e": 26483,
"s": 26453,
"text": "HashMap in Java with Examples"
},
{
"code": null,
"e": 26502,
"s": 26483,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 26553,
"s": 26502,
"text": "Object Oriented Programming (OOPs) Concept in Java"
},
{
"code": null,
"e": 26571,
"s": 26553,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 26602,
"s": 26571,
"text": "How to iterate any Map in Java"
},
{
"code": null,
"e": 26634,
"s": 26602,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 26653,
"s": 26634,
"text": "LinkedList in Java"
},
{
"code": null,
"e": 26673,
"s": 26653,
"text": "Stack Class in Java"
}
] |
C# | Number of elements in HashSet - GeeksforGeeks
|
01 Feb, 2019
A HashSet is an unordered collection of unique elements. It is found in the System.Collections.Generic namespace. It is used in situations where we want to prevent duplicates from being inserted into the collection. As far as performance is concerned, it is better in comparison to a list. You can use the HashSet.Count property to count the number of elements in a HashSet.
Syntax:
mySet.Count;
Here mySet is the HashSet
Below given are some examples to understand the implementation in a better way:
Example 1:
// C# code to get the number of
// elements that are contained in HashSet
using System;
using System.Collections.Generic;

class GFG {

    // Driver code
    public static void Main()
    {
        // Creating a HashSet of integers
        HashSet<int> mySet = new HashSet<int>();

        // Inserting elements in HashSet
        for (int i = 0; i < 5; i++) {
            mySet.Add(i * 2);
        }

        // To get the number of
        // elements that are contained in HashSet
        Console.WriteLine(mySet.Count);
    }
}
5
Example 2:
// C# code to get the number of
// elements that are contained in HashSet
using System;
using System.Collections.Generic;

class GFG {

    // Driver code
    public static void Main()
    {
        // Creating a HashSet of integers
        HashSet<int> mySet = new HashSet<int>();

        // To get the number of
        // elements that are contained in HashSet.
        // Note that, here the HashSet is empty
        Console.WriteLine(mySet.Count);
    }
}
0
Reference:
https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.hashset-1.count?view=netframework-4.7.2#System_Collections_Generic_HashSet_1_Count
CSharp-Generic-HashSet
CSharp-Generic-Namespace
C#
|
[
{
"code": null,
"e": 24302,
"s": 24274,
"text": "\n01 Feb, 2019"
},
{
"code": null,
"e": 24674,
"s": 24302,
"text": "A HashSet is an unordered collection of the unique elements. It is found in System.Collections.Generic namespace. It is used in a situation where we want to prevent duplicates from being inserted in the collection. As far as performance is concerned, it is better in comparison to the list. You can use HashSet.Count Property to count the number of elements in a HashSet."
},
{
"code": null,
"e": 24682,
"s": 24674,
"text": "Syntax:"
},
{
"code": null,
"e": 24696,
"s": 24682,
"text": "mySet.Count;\n"
},
{
"code": null,
"e": 24722,
"s": 24696,
"text": "Here mySet is the HashSet"
},
{
"code": null,
"e": 24802,
"s": 24722,
"text": "Below given are some examples to understand the implementation in a better way:"
},
{
"code": null,
"e": 24813,
"s": 24802,
"text": "Example 1:"
},
{
"code": "// C# code to get the number of// elements that are contained in HashSetusing System;using System.Collections.Generic; class GFG { // Driver code public static void Main() { // Creating a HashSet of integers HashSet<int> mySet = new HashSet<int>(); // Inserting elements in HashSet for (int i = 0; i < 5; i++) { mySet.Add(i * 2); } // To get the number of // elements that are contained in HashSet Console.WriteLine(mySet.Count); }}",
"e": 25334,
"s": 24813,
"text": null
},
{
"code": null,
"e": 25337,
"s": 25334,
"text": "5\n"
},
{
"code": null,
"e": 25348,
"s": 25337,
"text": "Example 2:"
},
{
"code": "// C# code to get the number of// elements that are contained in HashSetusing System;using System.Collections.Generic; class GFG { // Driver code public static void Main() { // Creating a HashSet of integers HashSet<int> mySet = new HashSet<int>(); // To get the number of // elements that are contained in HashSet. // Note that, here the HashSet is empty Console.WriteLine(mySet.Count); }}",
"e": 25800,
"s": 25348,
"text": null
},
{
"code": null,
"e": 25803,
"s": 25800,
"text": "0\n"
},
{
"code": null,
"e": 25814,
"s": 25803,
"text": "Reference:"
},
{
"code": null,
"e": 25968,
"s": 25814,
"text": "https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.hashset-1.count?view=netframework-4.7.2#System_Collections_Generic_HashSet_1_Count"
},
{
"code": null,
"e": 25991,
"s": 25968,
"text": "CSharp-Generic-HashSet"
},
{
"code": null,
"e": 26016,
"s": 25991,
"text": "CSharp-Generic-Namespace"
},
{
"code": null,
"e": 26019,
"s": 26016,
"text": "C#"
},
{
"code": null,
"e": 26117,
"s": 26019,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26126,
"s": 26117,
"text": "Comments"
},
{
"code": null,
"e": 26139,
"s": 26126,
"text": "Old Comments"
},
{
"code": null,
"e": 26162,
"s": 26139,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 26202,
"s": 26162,
"text": "Top 50 C# Interview Questions & Answers"
},
{
"code": null,
"e": 26230,
"s": 26202,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 26273,
"s": 26230,
"text": "C# | How to insert an element in an Array?"
},
{
"code": null,
"e": 26295,
"s": 26273,
"text": "Partial Classes in C#"
},
{
"code": null,
"e": 26312,
"s": 26295,
"text": "C# | Inheritance"
},
{
"code": null,
"e": 26328,
"s": 26312,
"text": "C# | List Class"
},
{
"code": null,
"e": 26353,
"s": 26328,
"text": "Lambda Expressions in C#"
},
{
"code": null,
"e": 26403,
"s": 26353,
"text": "Difference between Hashtable and Dictionary in C#"
}
] |
Python | Grayscaling of Images using OpenCV - GeeksforGeeks
|
26 Jul, 2021
Grayscaling is the process of converting an image from other color spaces e.g. RGB, CMYK, HSV, etc. to shades of gray. It varies between complete black and complete white.
Dimension reduction: For example, In RGB images there are three color channels and has three dimensions while grayscale images are single-dimensional.
Reduces model complexity: Consider training a neural network on RGB images of 10x10x3 pixels. The input layer will have 300 input nodes. On the other hand, the same neural network will need only 100 input nodes for grayscale images.
For other algorithms to work: Many algorithms are customized to work only on grayscale images e.g. Canny edge detection function pre-implemented in OpenCV library works on Grayscale images only.
Let’s learn the different image processing methods to convert a colored image into a grayscale image.
Method 1: Using the cv2.cvtColor() function
Python3
# import opencv
import cv2

# Load the input image
image = cv2.imread('C:\\Documents\\full_path\\tomatoes.jpg')
cv2.imshow('Original', image)
cv2.waitKey(0)

# Use the cvtColor() function to grayscale the image
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

cv2.imshow('Grayscale', gray_image)
cv2.waitKey(0)

# Window shown waits for any key pressing event
cv2.destroyAllWindows()
Input image:
Output Image:
Method 2: Reading the image directly in grayscale mode with cv2.imread()
Python3
# Import opencv
import cv2

# Use the second argument or (flag value) zero
# that specifies the image is to be read in grayscale mode
img = cv2.imread('C:\\Documents\\full_path\\tomatoes.jpg', 0)

cv2.imshow('Grayscale Image', img)
cv2.waitKey(0)

# Window shown waits for any key pressing event
cv2.destroyAllWindows()
Output Image:
Method 3: Averaging the BGR channel values pixel by pixel
Python3
# Import opencv
import cv2

# Load the input image
img = cv2.imread('C:\\Documents\\full_path\\tomatoes.jpg')

# Obtain the dimensions of the image array
# using the shape method
(row, col) = img.shape[0:2]

# Take the average of pixel values of the BGR Channels
# to convert the colored image to grayscale image
for i in range(row):
    for j in range(col):
        # Find the average of the BGR pixel values
        img[i, j] = sum(img[i, j]) * 0.33

cv2.imshow('Grayscale Image', img)
cv2.waitKey(0)

# Window shown waits for any key pressing event
cv2.destroyAllWindows()
Output Image:
Hope you have understood the above discussed image processing techniques to convert a colored image into a grayscale image in Python!
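The averaging loop weights all three channels equally, whereas cv2.cvtColor uses the standard ITU-R BT.601 luma weights, which better match perceived brightness. A minimal NumPy-only sketch of that weighted conversion (the function name and the tiny test image are illustrative):

```python
import numpy as np

def to_grayscale(img_bgr):
    # Weighted sum of the B, G, R channels using the ITU-R BT.601
    # luma coefficients (the weighting cv2.COLOR_BGR2GRAY is based on)
    b = img_bgr[:, :, 0].astype(np.float64)
    g = img_bgr[:, :, 1].astype(np.float64)
    r = img_bgr[:, :, 2].astype(np.float64)
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)

# A 1x2 BGR test image: one pure-blue pixel, one white pixel
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
print(to_grayscale(img))  # blue maps to 29, white stays 255
```

Note how a pure-blue pixel becomes quite dark (29) under the weighted formula, while the equal-weight average would give roughly 85.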
sanju6890
saurabh1990aror
Image-Processing
OpenCV
Python
|
[
{
"code": null,
"e": 25180,
"s": 25152,
"text": "\n26 Jul, 2021"
},
{
"code": null,
"e": 25352,
"s": 25180,
"text": "Grayscaling is the process of converting an image from other color spaces e.g. RGB, CMYK, HSV, etc. to shades of gray. It varies between complete black and complete white."
},
{
"code": null,
"e": 25503,
"s": 25352,
"text": "Dimension reduction: For example, In RGB images there are three color channels and has three dimensions while grayscale images are single-dimensional."
},
{
"code": null,
"e": 25733,
"s": 25503,
"text": "Reduces model complexity: Consider training neural article on RGB images of 10x10x3 pixel. The input layer will have 300 input nodes. On the other hand, the same neural network will need only 100 input nodes for grayscale images."
},
{
"code": null,
"e": 25928,
"s": 25733,
"text": "For other algorithms to work: Many algorithms are customized to work only on grayscale images e.g. Canny edge detection function pre-implemented in OpenCV library works on Grayscale images only."
},
{
"code": null,
"e": 26030,
"s": 25928,
"text": "Let’s learn the different image processing methods to convert a colored image into a grayscale image."
},
{
"code": null,
"e": 26038,
"s": 26030,
"text": "Python3"
},
{
"code": "# import opencvimport cv2 # Load the input imageimage = cv2.imread('C:\\\\Documents\\\\full_path\\\\tomatoes.jpg')cv2.imshow('Original', image)cv2.waitKey(0) # Use the cvtColor() function to grayscale the imagegray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) cv2.imshow('Grayscale', gray_image)cv2.waitKey(0) # Window shown waits for any key pressing eventcv2.destroyAllWindows()",
"e": 26417,
"s": 26038,
"text": null
},
{
"code": null,
"e": 26431,
"s": 26417,
"text": "Input image: "
},
{
"code": null,
"e": 26446,
"s": 26431,
"text": "Output Image: "
},
{
"code": null,
"e": 26454,
"s": 26446,
"text": "Python3"
},
{
"code": "# Import opencvimport cv2 # Use the second argument or (flag value) zero# that specifies the image is to be read in grayscale modeimg = cv2.imread('C:\\\\Documents\\\\full_path\\\\tomatoes.jpg', 0) cv2.imshow('Grayscale Image', img)cv2.waitKey(0) # Window shown waits for any key pressing eventcv2.destroyAllWindows()",
"e": 26766,
"s": 26454,
"text": null
},
{
"code": null,
"e": 26780,
"s": 26766,
"text": "Output Image:"
},
{
"code": null,
"e": 26788,
"s": 26780,
"text": "Python3"
},
{
"code": "# Import opencvimport cv2 # Load the input imageimg = cv2.imread('C:\\\\Documents\\\\full_path\\\\tomatoes.jpg') # Obtain the dimensions of the image array# using the shape method(row, col) = img.shape[0:2] # Take the average of pixel values of the BGR Channels# to convert the colored image to grayscale imagefor i in range(row): for j in range(col): # Find the average of the BGR pixel values img[i, j] = sum(img[i, j]) * 0.33 cv2.imshow('Grayscale Image', img)cv2.waitKey(0) # Window shown waits for any key pressing eventcv2.destroyAllWindows()",
"e": 27348,
"s": 26788,
"text": null
},
{
"code": null,
"e": 27362,
"s": 27348,
"text": "Output Image:"
},
{
"code": null,
"e": 27496,
"s": 27362,
"text": "Hope you have understood the above discussed image processing techniques to convert a colored image into a grayscale image in Python!"
},
{
"code": null,
"e": 27506,
"s": 27496,
"text": "sanju6890"
},
{
"code": null,
"e": 27522,
"s": 27506,
"text": "saurabh1990aror"
},
{
"code": null,
"e": 27539,
"s": 27522,
"text": "Image-Processing"
},
{
"code": null,
"e": 27546,
"s": 27539,
"text": "OpenCV"
},
{
"code": null,
"e": 27553,
"s": 27546,
"text": "Python"
},
{
"code": null,
"e": 27651,
"s": 27553,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27660,
"s": 27651,
"text": "Comments"
},
{
"code": null,
"e": 27673,
"s": 27660,
"text": "Old Comments"
},
{
"code": null,
"e": 27691,
"s": 27673,
"text": "Python Dictionary"
},
{
"code": null,
"e": 27726,
"s": 27691,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 27758,
"s": 27726,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 27780,
"s": 27758,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 27810,
"s": 27780,
"text": "Iterate over a list in Python"
},
{
"code": null,
"e": 27852,
"s": 27810,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 27878,
"s": 27852,
"text": "Python String | replace()"
},
{
"code": null,
"e": 27921,
"s": 27878,
"text": "Python program to convert a list to string"
},
{
"code": null,
"e": 27958,
"s": 27921,
"text": "Create a Pandas DataFrame from Lists"
}
] |
Get the last record from a table in MySQL database with Java?
|
To get data from a MySQL database, you need to use the executeQuery() method in Java. First, create a table in the MySQL database. Here, we will create the following table in the ‘sample’ database
mysql> create table javaGetDataDemo
-> (
-> Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
-> FirstName varchar(10),
-> LastName varchar(10)
-> );
Query OK, 0 rows affected (0.80 sec)
Now you can insert some records in the table using insert command.
The query is as follows
mysql> insert into javaGetDataDemo(FirstName,LastName) values('John','Smith');
Query OK, 1 row affected (0.19 sec)
mysql> insert into javaGetDataDemo(FirstName,LastName) values('Carol','Taylor');
Query OK, 1 row affected (0.12 sec)
Display all records from the table using select statement.
The query is as follows
mysql> select *from javaGetDataDemo;
The following is the output
+----+-----------+----------+
| Id | FirstName | LastName |
+----+-----------+----------+
| 1 | John | Smith |
| 2 | Carol | Taylor |
+----+-----------+----------+
2 rows in set (0.00 sec)
Now here is the Java code to get the last record from the table with the help of ORDER BY DESC clause
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
public class GetDataFromMySQLToJava {
public static void main(String[] args) {
String JdbcURL = "jdbc:mysql://localhost:3306/sample?useSSL=false";
String Username = "root";
String password = "123456";
Connection con = null;
Statement stmt = null;
ResultSet rs = null;
try {
System.out.println("Connecting to database..............." + JdbcURL);
con = DriverManager.getConnection(JdbcURL, Username, password);
Statement st = con.createStatement();
String query = ("SELECT * FROM javaGetDataDemo ORDER BY Id DESC LIMIT 1;");
rs = st.executeQuery(query);
if (rs.next()) {
String fname = rs.getString("FirstName");
String lname = rs.getString("LastName");
System.out.println("FirstName:" + fname);
System.out.println("LastName:" + lname);
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
The screenshot of Java code is as follows
The following is the screenshot of the output displaying the last record from the table
|
[
{
"code": null,
"e": 1254,
"s": 1062,
"text": "To get data from MySQL database, you need to use executeQuery() method from java. First create a table in the MySQL database. Here, we will create the following table in the ‘sample’ database"
},
{
"code": null,
"e": 1440,
"s": 1254,
"text": "mysql> create table javaGetDataDemo\n- > (\n- > Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,\n- > FirstName varchar(10),\n- > LastName varchar(10)\n- > );\nQuery OK, 0 rows affected (0.80 sec)"
},
{
"code": null,
"e": 1507,
"s": 1440,
"text": "Now you can insert some records in the table using insert command."
},
{
"code": null,
"e": 1531,
"s": 1507,
"text": "The query is as follows"
},
{
"code": null,
"e": 1763,
"s": 1531,
"text": "mysql> insert into javaGetDataDemo(FirstName,LastName) values('John','Smith');\nQuery OK, 1 row affected (0.19 sec)\nmysql> insert into javaGetDataDemo(FirstName,LastName) values('Carol','Taylor');\nQuery OK, 1 row affected (0.12 sec)"
},
{
"code": null,
"e": 1822,
"s": 1763,
"text": "Display all records from the table using select statement."
},
{
"code": null,
"e": 1846,
"s": 1822,
"text": "The query is as follows"
},
{
"code": null,
"e": 1883,
"s": 1846,
"text": "mysql> select *from javaGetDataDemo;"
},
{
"code": null,
"e": 1911,
"s": 1883,
"text": "The following is the output"
},
{
"code": null,
"e": 2116,
"s": 1911,
"text": "+----+-----------+----------+\n| Id | FirstName | LastName |\n+----+-----------+----------+\n| 1 | John | Smith |\n| 2 | Carol | Taylor |\n+----+-----------+----------+\n2 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2218,
"s": 2116,
"text": "Now here is the Java code to get the last record from the table with the help of ORDER BY DESC clause"
},
{
"code": null,
"e": 3299,
"s": 2218,
"text": "import java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.ResultSet;\nimport java.sql.Statement;\npublic class GetDataFromMySQLToJava {\n public static void main(String[] args) {\n String JdbcURL = \"jdbc:mysql://localhost:3306/sample?useSSL=false\";\n String Username = \"root\";\n String password = \"123456\";\n Connection con = null;\n Statement stmt = null;\n ResultSet rs = null;\n try {\n System.out.println(\"Connecting to database...............\" + JdbcURL);\n con = DriverManager.getConnection(JdbcURL, Username, password);\n Statement st = con.createStatement();\n String query = (\"SELECT * FROM javaGetDataDemo ORDER BY Id DESC LIMIT 1;\");\n rs = st.executeQuery(query);\n if (rs.next()) {\n String fname = rs.getString(\"FirstName\");\n String lname = rs.getString(\"LastName\");\n System.out.println(\"FirstName:\" + fname);\n System.out.println(\"LastName:\" + lname);\n }\n } catch (Exception e) {\n e.printStackTrace();\n }\n }\n}"
},
{
"code": null,
"e": 3341,
"s": 3299,
"text": "The screenshot of Java code is as follows"
},
{
"code": null,
"e": 3429,
"s": 3341,
"text": "The following is the screenshot of the output displaying the last record from the table"
}
] |
Minimum number of moves to make all elements equal using C++.
|
Given an array of N elements and an integer K, it is allowed to perform the following operation any number of times on the given array −
Insert the Kth element at the end of the array and delete the first element of the array.
The task is to find the minimum number of moves needed to make all elements of the array equal. Print -1 if it is not possible.
If arr[] = {1, 2, 3, 4, 5, 6} and k = 6 then minimum 5 moves are
required:
Move-1: {2, 3, 4, 5, 6, 6}
Move-2: {3, 4, 5, 6, 6, 6}
Move-3: {4, 5, 6, 6, 6, 6}
Move-4: {5, 6, 6, 6, 6, 6}
Move-5: {6, 6, 6, 6, 6, 6}
1. First we copy a[k] to the end, then a[k+1] and so on
2. To make sure that we only copy equal elements, all elements in the range K to N should be equal. We need to remove all elements in range 1 to K that are not equal to a[k]
3. Keep applying operations until we reach the rightmost term in range 1 to K that is not equal to a[k].
#include <iostream>
#define SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
using namespace std;
int getMinMoves(int *arr, int n, int k){
int i;
for (i = k - 1; i < n; ++i) {
if (arr[i] != arr[k - 1]) {
return -1;
}
}
for (i = k - 1; i >= 0; --i) {
if (arr[i] != arr[k - 1]) {
return i + 1;
}
}
return 0;
}
int main(){
int arr[] = {1, 2, 3, 4, 5, 6};
int k = 6;
cout << "Minimum moves required = " << getMinMoves(arr, SIZE(arr), k) << endl;
return 0;
}
When you compile and execute the above program, it generates the following output −
Minimum moves required = 5
|
[
{
"code": null,
"e": 1200,
"s": 1062,
"text": "Given an array of N elements and an integer K., It is allowed to perform the following operation any number of times on the given array −"
},
{
"code": null,
"e": 1290,
"s": 1200,
"text": "Insert the Kth element at the end of the array and delete the first element of the array."
},
{
"code": null,
"e": 1380,
"s": 1290,
"text": "Insert the Kth element at the end of the array and delete the first element of the array."
},
{
"code": null,
"e": 1507,
"s": 1380,
"text": "The task is to find the minimum number of moves needed to make all elements of the array equal. Print -1 if it is not possible"
},
{
"code": null,
"e": 1717,
"s": 1507,
"text": "If arr[] = {1, 2, 3, 4, 5, 6} and k = 6 then minimum 5 moves are\nrequired:\nMove-1: {2, 3, 4, 5, 6, 6}\nMove-2: {3, 4, 5, 6, 6, 6}\nMove-3: {4, 5, 6, 6, 6, 6}\nMove-4: {5, 6, 6, 6, 6, 6}\nMove-5: {6, 6, 6, 6, 6, 6}"
},
{
"code": null,
"e": 2052,
"s": 1717,
"text": "1. First we copy a[k] to the end, then a[k+1] and so on\n2. To make sure that we only copy equal elements, all elements in the range K to N should be equal. We need to remove all elements in range 1 to K that are not equal to a[k]\n3. Keep applying operations until we reach the rightmost term in range 1 to K that is not equal to a[k]."
},
{
"code": null,
"e": 2570,
"s": 2052,
"text": "#include <iostream>\n#define SIZE(arr) (sizeof(arr) / sizeof(arr[0]))\nusing namespace std;\nint getMinMoves(int *arr, int n, int k){\n int i;\n for (i = k - 1; i < n; ++i) {\n if (arr[i] != arr[k - 1]) {\n return -1;\n }\n }\n for (i = k - 1; i >= 0; --i) {\n if (arr[i] != arr[k - 1]) {\n return i + 1;\n }\n }\n return 0;\n}\nint main(){\n int arr[] = {1, 2, 3, 4, 5, 6};\n int k = 6;\n cout << \"Minimum moves required = \" << getMinMoves(arr, SIZE(arr), k) << endl;\n return 0;\n}"
},
{
"code": null,
"e": 2654,
"s": 2570,
"text": "When you compile and execute the above program. It generates the following output −"
},
{
"code": null,
"e": 2681,
"s": 2654,
"text": "Minimum moves required = 5"
}
] |
Deletion in a Binary Tree in C++ Program
|
In this tutorial, we are going to learn how to delete a node in a binary tree.
The nodes in a binary tree don't follow any order like binary search trees. So, how to arrange the nodes after deleting a node in a binary tree?
Well, we will replace the node being deleted with the deepest node of the tree, and then delete the deepest node from the tree.
Let's see the steps to solve the problem.
1. Initialize the tree with a binary node struct.
2. Write a function (preorder, inorder, or postorder) to print the nodes of the tree.
3. Write a function to delete the node: initialize a queue to iterate through the tree, iterate until the queue is empty, find the node with the given key and store it in a variable; the last node taken from the queue is the deepest node.
4. Delete the deepest node using another function: use the queue to traverse through the tree, and when we find the deepest node, delete it and return.
5. Print the tree to see if the node is deleted or not.
Let's see the code.
#include <bits/stdc++.h>
using namespace std;
struct Node {
int data;
struct Node *left, *right;
};
struct Node* newNode(int data) {
struct Node* temp = new Node;
temp->data = data;
temp->left = temp->right = NULL;
return temp;
};
void inorder(struct Node* node) {
if (node == NULL) {
return;
}
inorder(node->left);
cout << node->data << " ";
inorder(node->right);
}
// BFS from the root to find the given node, unlink it from its parent and free it
void deleteDeepestNode(struct Node* root, struct Node* deleting_node){
queue<struct Node*> nodes;
nodes.push(root);
struct Node* temp;
while (!nodes.empty()) {
temp = nodes.front();
nodes.pop();
if (temp == deleting_node) {
temp = NULL;
delete (deleting_node);
return;
}
if (temp->right) {
if (temp->right == deleting_node) {
temp->right = NULL;
delete deleting_node;
return;
}
else {
nodes.push(temp->right);
}
}
if (temp->left) {
if (temp->left == deleting_node) {
temp->left = NULL;
delete deleting_node;
return;
}
else {
nodes.push(temp->left);
}
}
}
}
// Copy the deepest node's data into the key node, then delete the deepest node
Node* deleteNode(struct Node* root, int key) {
if (root == NULL){
return NULL;
}
if (root->left == NULL && root->right == NULL) {
if (root->data == key) {
return NULL;
}
else {
return root;
}
}
queue<struct Node*> nodes;
nodes.push(root);
struct Node* temp;
struct Node* key_node = NULL;
while (!nodes.empty()) {
temp = nodes.front();
nodes.pop();
if (temp->data == key) {
key_node = temp;
}
if (temp->left) {
nodes.push(temp->left);
}
if (temp->right) {
nodes.push(temp->right);
}
}
if (key_node != NULL) {
int deepest_node_data = temp->data;
deleteDeepestNode(root, temp);
key_node->data = deepest_node_data;
}
return root;
}
int main() {
struct Node* root = newNode(1);
root->left = newNode(2);
root->left->left = newNode(3);
root->left->right = newNode(4);
root->right = newNode(5);
root->right->left = newNode(6);
root->right->right = newNode(7);
root->right->left->left = newNode(8);
root->right->left->right = newNode(9);
cout << "Tree before deleting key: ";
inorder(root);
int key = 5;
root = deleteNode(root, key);
cout << "\nTree after deleting key: ";
inorder(root);
cout << endl;
return 0;
}
If you run the above code, then you will get the following result.
Tree before deleting key: 3 2 4 1 8 6 9 5 7
Tree after deleting key: 3 2 4 1 8 6 9 7
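The same strategy translates directly to other languages. As a cross-check, here is a minimal Python sketch of the identical algorithm (BFS to find the key node and the deepest node, copy the data, then unlink the deepest node); the class and function names here are my own, not from the article:

```python
from collections import deque

class Node:
    def __init__(self, data):
        self.data = data
        self.left = self.right = None

def delete_node(root, key):
    key_node = None
    parent = side = None               # parent link of the most recently enqueued node
    queue = deque([root])
    while queue:
        last = queue.popleft()         # after the loop, `last` is the deepest node
        if last.data == key:
            key_node = last
        for attr in ('left', 'right'):
            child = getattr(last, attr)
            if child:
                parent, side = last, attr
                queue.append(child)
    if key_node is not None:
        key_node.data = last.data      # overwrite the key with the deepest node's data
        if parent is not None and getattr(parent, side) is last:
            setattr(parent, side, None)  # unlink the deepest node
    return root

def inorder(node, out):
    if node:
        inorder(node.left, out)
        out.append(node.data)
        inorder(node.right, out)
    return out
```

Building the same tree as above and deleting key 5 reproduces the article's output (inorder 3 2 4 1 8 6 9 5 7 before, 3 2 4 1 8 6 9 7 after).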
If you have any queries in the tutorial, mention them in the comment section.
|
[
{
"code": null,
"e": 1141,
"s": 1062,
"text": "In this tutorial, we are going to learn how to delete a node in a binary tree."
},
{
"code": null,
"e": 1286,
"s": 1141,
"text": "The nodes in a binary tree don't follow any order like binary search trees. So, how to arrange the nodes after deleting a node in a binary tree?"
},
{
"code": null,
"e": 1417,
"s": 1286,
"text": "Well, we will replace the deepest node of the tree with the deleting node. And then we will delete the deepest node from the node."
},
{
"code": null,
"e": 1459,
"s": 1417,
"text": "Let's see the steps to solve the problem."
},
{
"code": null,
"e": 1504,
"s": 1459,
"text": "Initialize the tree with binary node struct."
},
{
"code": null,
"e": 1549,
"s": 1504,
"text": "Initialize the tree with binary node struct."
},
{
"code": null,
"e": 1634,
"s": 1549,
"text": "Write a function (preorder, in order, and postorder) to print the nodes of the tree."
},
{
"code": null,
"e": 1719,
"s": 1634,
"text": "Write a function (preorder, in order, and postorder) to print the nodes of the tree."
},
{
"code": null,
"e": 1949,
"s": 1719,
"text": "Write a function to delete the node.Initialize a queue to iterate through the tree.Iterate until the queue is empty.Find the node with the given key and store it in a variable.And the last node from the queue is the deepest node."
},
{
"code": null,
"e": 1986,
"s": 1949,
"text": "Write a function to delete the node."
},
{
"code": null,
"e": 2034,
"s": 1986,
"text": "Initialize a queue to iterate through the tree."
},
{
"code": null,
"e": 2082,
"s": 2034,
"text": "Initialize a queue to iterate through the tree."
},
{
"code": null,
"e": 2116,
"s": 2082,
"text": "Iterate until the queue is empty."
},
{
"code": null,
"e": 2150,
"s": 2116,
"text": "Iterate until the queue is empty."
},
{
"code": null,
"e": 2211,
"s": 2150,
"text": "Find the node with the given key and store it in a variable."
},
{
"code": null,
"e": 2272,
"s": 2211,
"text": "Find the node with the given key and store it in a variable."
},
{
"code": null,
"e": 2326,
"s": 2272,
"text": "And the last node from the queue is the deepest node."
},
{
"code": null,
"e": 2380,
"s": 2326,
"text": "And the last node from the queue is the deepest node."
},
{
"code": null,
"e": 2517,
"s": 2380,
"text": "Delete the deepest node using another function.Use the queue to traverse through the tree.When we find the node delete it and return it."
},
{
"code": null,
"e": 2565,
"s": 2517,
"text": "Delete the deepest node using another function."
},
{
"code": null,
"e": 2609,
"s": 2565,
"text": "Use the queue to traverse through the tree."
},
{
"code": null,
"e": 2653,
"s": 2609,
"text": "Use the queue to traverse through the tree."
},
{
"code": null,
"e": 2700,
"s": 2653,
"text": "When we find the node delete it and return it."
},
{
"code": null,
"e": 2747,
"s": 2700,
"text": "When we find the node delete it and return it."
},
{
"code": null,
"e": 2800,
"s": 2747,
"text": "Print the tree to see if the node is deleted or not."
},
{
"code": null,
"e": 2853,
"s": 2800,
"text": "Print the tree to see if the node is deleted or not."
},
{
"code": null,
"e": 2873,
"s": 2853,
"text": "Let's see the code."
},
{
"code": null,
"e": 2884,
"s": 2873,
"text": " Live Demo"
},
{
"code": null,
"e": 5441,
"s": 2884,
"text": "#include <bits/stdc++.h>\nusing namespace std;\nstruct Node {\n int data;\n struct Node *left, *right;\n};\nstruct Node* newNode(int data) {\n struct Node* temp = new Node;\n temp->data = data;\n temp->left = temp->right = NULL;\n return temp;\n};\nvoid inorder(struct Node* node) {\n if (node == NULL) {\n return;\n }\n inorder(node->left);\n cout << node->data << \" \";\n inorder(node->right);\n}\nvoid deleteDeepestNode(struct Node* root, struct Node* deleting_node){\n queue<struct Node*> nodes;\n nodes.push(root);\n struct Node* temp;\n while (!nodes.empty()) {\n temp = nodes.front();\n nodes.pop();\n if (temp == deleting_node) {\n temp = NULL;\n delete (deleting_node);\n return;\n }\n if (temp->right) {\n if (temp->right == deleting_node) {\n temp->right = NULL;\n delete deleting_node;\n return;\n }\n else {\n nodes.push(temp->right);\n }\n }\n if (temp->left) {\n if (temp->left == deleting_node) {\n temp->left = NULL;\n delete deleting_node;\n return;\n }\n else {\n nodes.push(temp->left);\n }\n }\n }\n}\nNode* deleteNode(struct Node* root, int key) {\n if (root == NULL){\n return NULL;\n }\n if (root->left == NULL && root->right == NULL) {\n if (root->data == key) {\n return NULL;\n }\n else {\n return root;\n }\n }\n queue<struct Node*> nodes;\n nodes.push(root);\n struct Node* temp;\n struct Node* key_node = NULL;\n while (!nodes.empty()) {\n temp = nodes.front();\n nodes.pop();\n if (temp->data == key) {\n key_node = temp;\n }\n if (temp->left) {\n nodes.push(temp->left);\n }\n if (temp->right) {\n nodes.push(temp->right);\n }\n }\n if (key_node != NULL) {\n int deepest_node_data = temp->data;\n deleteDeepestNode(root, temp);\n key_node->data = deepest_node_data;\n }\n return root;\n}\nint main() {\n struct Node* root = newNode(1);\n root->left = newNode(2);\n root->left->left = newNode(3);\n root->left->right = newNode(4);\n root->right = newNode(5);\n root->right->left = newNode(6);\n root->right->right = newNode(7);\n root->right->left->left = 
newNode(8);\n root->right->left->right = newNode(9);\n cout << \"Tree before deleting key: \";\n inorder(root);\n int key = 5;\n root = deleteNode(root, key);\n cout << \"\\nTree after deleting key: \";\n inorder(root);\n cout << endl;\n return 0;\n}"
},
{
"code": null,
"e": 5508,
"s": 5441,
"text": "If you run the above code, then you will get the following result."
},
{
"code": null,
"e": 5593,
"s": 5508,
"text": "Tree before deleting key: 3 2 4 1 8 6 9 5 7\nTree after deleting key: 3 2 4 1 8 6 9 7"
},
{
"code": null,
"e": 5671,
"s": 5593,
"text": "If you have any queries in the tutorial, mention them in the comment section."
}
] |
Fabric.js Image cropX Property - GeeksforGeeks
|
31 Aug, 2020
Fabric.js is a JavaScript library used to work with canvas. The canvas Image is one of the classes of Fabric.js and is used to create image instances. A canvas image is movable and can be stretched according to requirements. The cropX property of the image is used to crop a certain amount of the canvas image along the x-axis. The size is given in pixels.
Approach: First, import the fabric.js library. After importing the library, create a canvas block in the body tag that will contain the image. After this, initialize instances of the Canvas and Image classes provided by Fabric.js and set the width to be cropped from the canvas image, in pixels, using the cropX property of the image object.
Syntax:
fabric.Image(image, {
cropX:Number
});
Parameters: The above function takes two parameters as mentioned above and described below:
image: This parameter takes the image.
cropX: This parameter is the amount of the image to crop, in pixels, from the original size of the image along the x-axis.
Example: This example uses FabricJS to crop the section of the canvas image along the x-axis as shown in the below given example.
<!DOCTYPE html>
<html>

<head>
    <!-- Adding the FabricJS library -->
    <script src=
"https://cdnjs.cloudflare.com/ajax/libs/fabric.js/3.6.2/fabric.min.js">
    </script>
</head>

<body>
    <h1 style="color: green;">GeeksforGeeks</h1>
    <b>Fabric.js | Image cropX Property</b>

    <canvas id="canvas" width="400" height="300"
        style="border:2px solid #000000">
    </canvas>

    <img src="https://media.geeksforgeeks.org/wp-content/uploads/20200327230544/g4gicon.png"
        width="100" height="100" id="my-image"
        style="display: none;"><br>

    <button onclick="cropX()">Click me</button>

    <script>
        // Create the instance of canvas object
        var canvas = new fabric.Canvas("canvas");

        // Getting the image
        var img = document.getElementById('my-image');

        // Creating the image instance
        var imgInstance = new fabric.Image(img, {});

        function cropX() {
            imgInstance = new fabric.Image(img, {
                cropX: 80
            });
            canvas.clear();
            canvas.add(imgInstance);
        }

        // Rendering the image to canvas
        canvas.add(imgInstance);
    </script>
</body>

</html>
Output:
Before clicking the button:
After clicking the button:
Fabric.js
JavaScript
Web Technologies
|
[
{
"code": null,
"e": 25034,
"s": 25006,
"text": "\n31 Aug, 2020"
},
{
"code": null,
"e": 25398,
"s": 25034,
"text": "Fabric.js is a javascript library that is used to work with canvas. The canvas image is one of the class of fabric.js that is used to create image instances. The canvas image means the Image is movable and can be stretched according to requirement. The cropX property of the image is used to crop a certain amount of the canvas image. The size is given in pixels."
},
{
"code": null,
"e": 25733,
"s": 25398,
"text": "Approach: First import the fabric.js library. After importing the library, create a canvas block in the body tag which will contain the image. After this, initialize an instance of Canvas and image class provided by Fabric.JS and give the width to be cropped of the canvas image in pixels using the cropX property of the image object."
},
{
"code": null,
"e": 25741,
"s": 25733,
"text": "Syntax:"
},
{
"code": null,
"e": 25784,
"s": 25741,
"text": "fabric.Image(image, {\n cropX:Number\n});"
},
{
"code": null,
"e": 25876,
"s": 25784,
"text": "Parameters: The above function takes two parameters as mentioned above and described below:"
},
{
"code": null,
"e": 25915,
"s": 25876,
"text": "image: This parameter takes the image."
},
{
"code": null,
"e": 26012,
"s": 25915,
"text": "cropX: This parameter is the amount of image crop in pixels from the original size of the image."
},
{
"code": null,
"e": 26142,
"s": 26012,
"text": "Example: This example uses FabricJS to crop the section of the canvas image along the x-axis as shown in the below given example."
},
{
"code": " <!DOCTYPE html> <html> <head> <!-- Adding the FabricJS library --> <script src= \"https://cdnjs.cloudflare.com/ajax/libs/fabric.js/3.6.2/fabric.min.js\"> </script> </head> <body> <h1 style=\"color: green;\">GeeksforGeeks</h1> <b>Fabric.js | Image cropX Property </b> <canvas id=\"canvas\" width=\"400\" height=\"300\" style=\"border:2px solid #000000\"> </canvas> <img src =\"https://media.geeksforgeeks.org/wp-content/uploads/20200327230544/g4gicon.png\" width=\"100\" height=\"100\" id=\"my-image\" style=\"display: none;\"><br> <button onclick=\"cropX()\">Click me</button> <script> // Create the instance of canvas object var canvas = new fabric.Canvas(\"canvas\"); // Getting the image var img= document.getElementById('my-image'); // Creating the image instance var imgInstance = new fabric.Image(img, { }); function cropX() { imgInstance = new fabric.Image(img, { cropX:80 }); canvas.clear(); canvas.add(imgInstance); } // Rendering the image to canvas canvas.add(imgInstance); </script> </body> </html>",
"e": 27358,
"s": 26142,
"text": null
},
{
"code": null,
"e": 27366,
"s": 27358,
"text": "Output:"
},
{
"code": null,
"e": 27394,
"s": 27366,
"text": "Before clicking the button:"
},
{
"code": null,
"e": 27421,
"s": 27394,
"text": "After clicking the button:"
},
{
"code": null,
"e": 27431,
"s": 27421,
"text": "Fabric.js"
},
{
"code": null,
"e": 27442,
"s": 27431,
"text": "JavaScript"
},
{
"code": null,
"e": 27459,
"s": 27442,
"text": "Web Technologies"
},
{
"code": null,
"e": 27557,
"s": 27459,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27566,
"s": 27557,
"text": "Comments"
},
{
"code": null,
"e": 27579,
"s": 27566,
"text": "Old Comments"
},
{
"code": null,
"e": 27624,
"s": 27579,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 27685,
"s": 27624,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 27757,
"s": 27685,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 27809,
"s": 27757,
"text": "How to append HTML code to a div using JavaScript ?"
},
{
"code": null,
"e": 27855,
"s": 27809,
"text": "How to Open URL in New Tab using JavaScript ?"
},
{
"code": null,
"e": 27897,
"s": 27855,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 27930,
"s": 27897,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 27973,
"s": 27930,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 28035,
"s": 27973,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
}
] |
Deep Learning For NLP with PyTorch and Torchtext | by Arie Pratama Sutiono | Towards Data Science
|
PyTorch has been an awesome deep learning framework that I have been working with. However, when it comes to NLP, I could not find as good a utility library as torchvision. It turns out PyTorch has torchtext, which, in my opinion, lacks examples of how to use it, and the documentation [6] could be improved. Moreover, there are some great tutorials like [1] and [2], but we still need more examples.
This article’s purpose is to give readers sample codes on how to use torchtext, in particular, to use pre-trained word embedding, use dataset API, use iterator API for mini-batch, and finally how to use these in conjunction to train a model.
There have been some alternatives in pre-trained word embeddings such as Spacy [3], Stanza (Stanford NLP)[4], Gensim [5] but in this article, I wanted to focus on doing word embedding with torchtext.
You can see the list of pre-trained word embeddings at torchtext. At the time of writing, there are 3 pre-trained word embedding classes supported: GloVe, FastText, and CharNGram, with no additional detail on how to load them. The exhaustive list is stated here, but it took me some time to read, so I will lay out the list here.
charngram.100d
fasttext.en.300d
fasttext.simple.300d
glove.42B.300d
glove.840B.300d
glove.twitter.27B.25d
glove.twitter.27B.50d
glove.twitter.27B.100d
glove.twitter.27B.200d
glove.6B.50d
glove.6B.100d
glove.6B.200d
glove.6B.300d
There are two ways we can load pre-trained word embeddings: initiate word embedding object or using Field instance.
Using Field Instance
You need some toy dataset to use this so let’s set one up.
df = pd.DataFrame([
    ['my name is Jack', 'Y'],
    ['Hi I am Jack', 'Y'],
    ['Hello There!', 'Y'],
    ['Hi I am cooking', 'N'],
    ['Hello are you there?', 'N'],
    ['There is a bird there', 'N'],
], columns=['text', 'label'])
then we can construct Field objects that hold metadata of feature column and label column.
from torchtext.data import Field

text_field = Field(
    tokenize='basic_english',
    lower=True
)
label_field = Field(sequential=False, use_vocab=False)

# sadly have to apply preprocess manually
preprocessed_text = df['text'].apply(lambda x: text_field.preprocess(x))

# load fasttext simple embedding with 300d
text_field.build_vocab(
    preprocessed_text,
    vectors='fasttext.simple.300d'
)

# get the vocab instance
vocab = text_field.vocab
to get the real instance of pre-trained word embedding, you can use
vocab.vectors
Initiate Word Embedding Object
For each of these codes, it will download a big size of word embeddings so you have to be patient and do not execute all of the below codes all at once.
FastText
The FastText object has one parameter: language, and it can be 'simple' or 'en'. Currently, they only support 300 embedding dimensions, as mentioned in the above embedding list.
from torchtext.vocab import FastText

embedding = FastText('simple')
CharNGram
from torchtext.vocab import CharNGram

embedding_charngram = CharNGram()
GloVe
The GloVe object has 2 parameters: name and dim. You can look up the available embedding list to see what each parameter supports.
from torchtext.vocab import GloVe

embedding_glove = GloVe(name='6B', dim=100)
Using the torchtext API for word embedding is super easy! Say you have stored your embedding in the variable embedding; then you can use it like a Python dict.
# known token, in my case print 12
print(vocab['are'])

# unknown token, will print 0
print(vocab['crazy'])
As you can see, it has handled the unknown token without throwing an error! If you play with encoding the words into integers, you can notice that by default the unknown token will be encoded as 0 while the pad token will be encoded as 1.
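To illustrate this behavior, the vocabulary acts like a dict that falls back to a default index for unknown tokens. The snippet below is a plain-Python imitation of that lookup, not torchtext's actual implementation, and the token indices are made up:

```python
from collections import defaultdict

# Hypothetical mini-vocab: index 0 is reserved for <unk> and 1 for <pad>,
# mirroring torchtext's default special tokens.
stoi = defaultdict(int, {'<unk>': 0, '<pad>': 1, 'are': 12})

print(stoi['are'])    # known token -> 12
print(stoi['crazy'])  # unknown token -> falls back to 0, no KeyError
```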
Assuming the variable df has been defined as above, we now proceed to prepare the data by constructing a Field for both the feature and the label.
from torchtext.data import Field

text_field = Field(
    sequential=True,
    tokenize='basic_english',
    fix_length=5,
    lower=True
)
label_field = Field(sequential=False, use_vocab=False)

# sadly have to apply preprocess manually
preprocessed_text = df['text'].apply(
    lambda x: text_field.preprocess(x)
)

# load fasttext simple embedding with 300d
text_field.build_vocab(
    preprocessed_text,
    vectors='fasttext.simple.300d'
)

# get the vocab instance
vocab = text_field.vocab
A bit of warning here: Dataset.split may return 3 datasets (train, val, test) instead of 2 values as defined
I did not find any ready Dataset API to load a pandas DataFrame into a torchtext dataset, but it is pretty easy to form one.
from torchtext.data import Dataset, Example

ltoi = {l: i for i, l in enumerate(df['label'].unique())}
df['label'] = df['label'].apply(lambda y: ltoi[y])

class DataFrameDataset(Dataset):
    def __init__(self, df: pd.DataFrame, fields: list):
        super(DataFrameDataset, self).__init__(
            [
                Example.fromlist(list(r), fields)
                for i, r in df.iterrows()
            ],
            fields
        )
we can now construct the DataFrameDataset and initiate it with the pandas dataframe.
train_dataset, test_dataset = DataFrameDataset(
    df=df,
    fields=(
        ('text', text_field),
        ('label', label_field)
    )
).split()
we then use the BucketIterator class to easily construct a minibatching iterator.
from torchtext.data import BucketIterator

train_iter, test_iter = BucketIterator.splits(
    datasets=(train_dataset, test_dataset),
    batch_sizes=(2, 2),
    sort=False
)
Remember to use sort=False, otherwise it will lead to an error when you try to iterate test_iter, because we haven't defined the sort function; yet somehow, by default, test_iter is defined to be sorted.
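Conceptually, bucketing just groups examples of similar length into the same minibatch so that less padding is needed. A stripped-down, pure-Python illustration of the idea (this is not torchtext's implementation, and the function name is mine):

```python
def bucket_batches(examples, batch_size):
    """Sort tokenized examples by length, then slice into batches so each
    batch contains sequences of similar length (minimizing padding)."""
    ordered = sorted(examples, key=len)
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

sentences = [['hi'], ['my', 'name', 'is', 'jack'],
             ['hello', 'there'], ['hi', 'i', 'am', 'cooking']]
batches = bucket_batches(sentences, batch_size=2)
# Short sentences end up batched together, long ones together.
```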
A little note: while I do agree that we should use the DataLoader API to handle minibatching, at this moment I have not explored how to use DataLoader with torchtext.
Let’s define an arbitrary PyTorch model using 1 embedding layer and 1 linear layer. In the current example, I do not use a pre-trained word embedding; instead, I use a new, untrained word embedding.
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam

class ModelParam(object):
    def __init__(self, param_dict: dict = dict()):
        self.input_size = param_dict.get('input_size', 0)
        self.vocab_size = param_dict.get('vocab_size')
        self.embedding_dim = param_dict.get('embedding_dim', 300)
        self.target_dim = param_dict.get('target_dim', 2)

class MyModel(nn.Module):
    def __init__(self, model_param: ModelParam):
        super().__init__()
        self.embedding = nn.Embedding(
            model_param.vocab_size,
            model_param.embedding_dim
        )
        self.lin = nn.Linear(
            model_param.input_size * model_param.embedding_dim,
            model_param.target_dim
        )

    def forward(self, x):
        features = self.embedding(x).view(x.size()[0], -1)
        features = F.relu(features)
        features = self.lin(features)
        return features
Then I can easily iterate the training (and testing) routine as follows.
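The original post embeds that routine as a gist that did not survive extraction here. Below is a self-contained sketch of such a training loop; the synthetic batches, sizes, and seed are my own stand-ins for the BucketIterator output, not the article's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam

torch.manual_seed(0)
# Illustrative sizes, not from the article.
vocab_size, fix_length, embedding_dim, target_dim = 20, 5, 8, 2

class TinyModel(nn.Module):
    """Same shape as MyModel above: embedding -> ReLU -> linear."""
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lin = nn.Linear(fix_length * embedding_dim, target_dim)

    def forward(self, x):
        features = self.embedding(x).view(x.size()[0], -1)
        return self.lin(F.relu(features))

model = TinyModel()
optimizer = Adam(model.parameters())
criterion = nn.CrossEntropyLoss()

# Synthetic (text, label) minibatches standing in for train_iter.
batches = [(torch.randint(0, vocab_size, (2, fix_length)),
            torch.randint(0, target_dim, (2,)))
           for _ in range(3)]

model.train()
for epoch in range(2):
    for text, label in batches:
        optimizer.zero_grad()
        loss = criterion(model(text), label)
        loss.backward()
        optimizer.step()
```

With the real iterator, each batch's text tensor plays the role of `text` here (transposed to batch-first if needed) and batch.label the role of `label`.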
It is easy to modify the current defined model to a model that used pre-trained embedding.
class MyModelWithPretrainedEmbedding(nn.Module):
    def __init__(self, model_param: ModelParam, embedding):
        super().__init__()
        self.embedding = embedding
        self.lin = nn.Linear(
            model_param.input_size * model_param.embedding_dim,
            model_param.target_dim
        )

    def forward(self, x):
        features = self.embedding[x].reshape(x.size()[0], -1)
        features = F.relu(features)
        features = self.lin(features)
        return features
I made 3 lines of modifications. You should notice that I have changed the constructor input to accept an embedding. Additionally, I have also changed the view method to reshape and used the get operator [] instead of the call operator () to access the embedding.
model = MyModelWithPretrainedEmbedding(model_param, vocab.vectors)
I have finished laying out my own exploration of using torchtext to handle text data in PyTorch. I began writing this article because I had trouble using it with the current tutorials available on the internet. I hope this article may reduce overhead for others too.
Need help writing this code? Here’s a link to Google Colab.
Link to Google Colab
[1] Nie, A. A Tutorial on Torchtext. 2017. http://anie.me/On-Torchtext/
[2] Text Classification with TorchText Tutorial. https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html
[3] Spacy Documentation. https://spacy.io/
[4] Stanza Documentation. https://stanfordnlp.github.io/stanza/
[5] Gensim Documentation. https://radimrehurek.com/gensim/
[6] Torchtext Documentation. https://pytorch.org/text/
|
[
{
"code": null,
"e": 582,
"s": 172,
"text": "PyTorch has been an awesome deep learning framework that I have been working with. However, when it comes to NLP somehow I could not found as good utility library like torchvision. Turns out PyTorch has this torchtext, which, in my opinion, lack of examples on how to use it and the documentation [6] can be improved. Moreover, there are some great tutorials like [1] and [2] but, we still need more examples."
},
{
"code": null,
"e": 824,
"s": 582,
"text": "This article’s purpose is to give readers sample codes on how to use torchtext, in particular, to use pre-trained word embedding, use dataset API, use iterator API for mini-batch, and finally how to use these in conjunction to train a model."
},
{
"code": null,
"e": 1024,
"s": 824,
"text": "There have been some alternatives in pre-trained word embeddings such as Spacy [3], Stanza (Stanford NLP)[4], Gensim [5] but in this article, I wanted to focus on doing word embedding with torchtext."
},
{
"code": null,
"e": 1353,
"s": 1024,
"text": "You can see the list of pre-trained word embeddings at torchtext. At this time of writing, there are 3 pre-trained word embedding classes supported: GloVe, FastText, and CharNGram, with no additional detail on how to load. The exhaustive list is stated here, but it took me sometimes to read that so I will layout the list here."
},
{
"code": null,
"e": 1570,
"s": 1353,
"text": "charngram.100dfasttext.en.300dfasttext.simple.300dglove.42B.300dglove.840B.300dglove.twitter.27B.25dglove.twitter.27B.50dglove.twitter.27B.100dglove.twitter.27B.200dglove.6B.50dglove.6B.100dglove.6B.200dglove.6B.300d"
},
{
"code": null,
"e": 1686,
"s": 1570,
"text": "There are two ways we can load pre-trained word embeddings: initiate word embedding object or using Field instance."
},
{
"code": null,
"e": 1707,
"s": 1686,
"text": "Using Field Instance"
},
{
"code": null,
"e": 1766,
"s": 1707,
"text": "You need some toy dataset to use this so let’s set one up."
},
{
"code": null,
"e": 1994,
"s": 1766,
"text": "df = pd.DataFrame([ ['my name is Jack', 'Y'], ['Hi I am Jack', 'Y'], ['Hello There!', 'Y'], ['Hi I am cooking', 'N'], ['Hello are you there?', 'N'], ['There is a bird there', 'N'],], columns=['text', 'label'])"
},
{
"code": null,
"e": 2085,
"s": 1994,
"text": "then we can construct Field objects that hold metadata of feature column and label column."
},
{
"code": null,
"e": 2519,
"s": 2085,
"text": "from torchtext.data import Fieldtext_field = Field( tokenize='basic_english', lower=True)label_field = Field(sequential=False, use_vocab=False)# sadly have to apply preprocess manuallypreprocessed_text = df['text'].apply(lambda x: text_field.preprocess(x))# load fastext simple embedding with 300dtext_field.build_vocab( preprocessed_text, vectors='fasttext.simple.300d')# get the vocab instancevocab = text_field.vocab"
},
{
"code": null,
"e": 2587,
"s": 2519,
"text": "to get the real instance of pre-trained word embedding, you can use"
},
{
"code": null,
"e": 2601,
"s": 2587,
"text": "vocab.vectors"
},
{
"code": null,
"e": 2632,
"s": 2601,
"text": "Initiate Word Embedding Object"
},
{
"code": null,
"e": 2785,
"s": 2632,
"text": "For each of these codes, it will download a big size of word embeddings so you have to be patient and do not execute all of the below codes all at once."
},
{
"code": null,
"e": 2794,
"s": 2785,
"text": "FastText"
},
{
"code": null,
"e": 2966,
"s": 2794,
"text": "FastText object has one parameter: language, and it can be ‘simple’ or ‘en’. Currently they only support 300 embedding dimensions as mentioned at the above embedding list."
},
{
"code": null,
"e": 3033,
"s": 2966,
"text": "from torchtext.vocab import FastTextembedding = FastText('simple')"
},
{
"code": null,
"e": 3043,
"s": 3033,
"text": "CharNGram"
},
{
"code": null,
"e": 3114,
"s": 3043,
"text": "from torchtext.vocab import CharNGramembedding_charngram = CharNGram()"
},
{
"code": null,
"e": 3120,
"s": 3114,
"text": "GloVe"
},
{
"code": null,
"e": 3242,
"s": 3120,
"text": "GloVe object has 2 parameters: name and dim. You can look up the available embedding list on what each parameter support."
},
{
"code": null,
"e": 3319,
"s": 3242,
"text": "from torchtext.vocab import GloVeembedding_glove = GloVe(name='6B', dim=100)"
},
{
"code": null,
"e": 3480,
"s": 3319,
"text": "Using the torchtext API to use word embedding is super easy! Say you have stored your embedding at variable embedding, then you can use it like a python’s dict."
},
{
"code": null,
"e": 3584,
"s": 3480,
"text": "# known token, in my case print 12print(vocab['are'])# unknown token, will print 0print(vocab['crazy'])"
},
{
"code": null,
"e": 3811,
"s": 3584,
"text": "As you can see, it has handled unknown token without throwing error! If you play with encoding the words into an integer, you can notice that by default unknown token will be encoded as 0 while pad token will be encoded as 1 ."
},
{
"code": null,
"e": 3947,
"s": 3811,
"text": "Assuming variable df has been defined as above, we now proceed to prepare the data by constructing Fieldfor both the feature and label."
},
{
"code": null,
"e": 4422,
"s": 3947,
"text": "from torchtext.data import Fieldtext_field = Field( sequential=True, tokenize='basic_english', fix_length=5, lower=True)label_field = Field(sequential=False, use_vocab=False)# sadly have to apply preprocess manuallypreprocessed_text = df['text'].apply( lambda x: text_field.preprocess(x))# load fastext simple embedding with 300dtext_field.build_vocab( preprocessed_text, vectors='fasttext.simple.300d')# get the vocab instancevocab = text_field.vocab"
},
{
"code": null,
"e": 4530,
"s": 4422,
"text": "A bit of warning here, Dataset.splitmay return 3 datasets (train, val, test) instead of 2 values as defined"
},
{
"code": null,
"e": 4647,
"s": 4530,
"text": "I do not found any ready DatasetAPI to load pandas DataFrameto torchtext dataset, but it is pretty easy to form one."
},
{
"code": null,
"e": 5078,
"s": 4647,
"text": "from torchtext.data import Dataset, Exampleltoi = {l: i for i, l in enumerate(df['label'].unique())}df['label'] = df['label'].apply(lambda y: ltoi[y])class DataFrameDataset(Dataset): def __init__(self, df: pd.DataFrame, fields: list): super(DataFrameDataset, self).__init__( [ Example.fromlist(list(r), fields) for i, r in df.iterrows() ], fields )"
},
{
"code": null,
"e": 5162,
"s": 5078,
"text": "we can now construct the DataFrameDatasetand initiate it with the pandas dataframe."
},
{
"code": null,
"e": 5306,
"s": 5162,
"text": "train_dataset, test_dataset = DataFrameDataset( df=df, fields=( ('text', text_field), ('label', label_field) )).split()"
},
{
"code": null,
"e": 5381,
"s": 5306,
"text": "we then use BucketIteratorclass to easily construct minibatching iterator."
},
{
"code": null,
"e": 5551,
"s": 5381,
"text": "from torchtext.data import BucketIteratortrain_iter, test_iter = BucketIterator.splits( datasets=(train_dataset, test_dataset), batch_sizes=(2, 2), sort=False)"
},
{
"code": null,
"e": 5749,
"s": 5551,
"text": "Remember to use sort=False otherwise it will lead to an error when you try to iterate test_iter because we haven’t defined the sort function, yet somehow, by default test_iter defined to be sorted."
},
{
"code": null,
"e": 5917,
"s": 5749,
"text": "A little note: while I do agree that we should use DataLoader API to handle the minibatch, but at this moment I have not explored how to use DataLoader with torchtext."
},
{
"code": null,
"e": 6113,
"s": 5917,
"text": "Let’s define an arbitrary PyTorch model using 1 embedding layer and 1 linear layer. In the current example, I do not use pre-trained word embedding but instead I use new untrained word embedding."
},
{
"code": null,
"e": 7051,
"s": 6113,
"text": "import torch.nn as nnimport torch.nn.functional as Ffrom torch.optim import Adamclass ModelParam(object): def __init__(self, param_dict: dict = dict()): self.input_size = param_dict.get('input_size', 0) self.vocab_size = param_dict.get('vocab_size') self.embedding_dim = param_dict.get('embedding_dim', 300) self.target_dim = param_dict.get('target_dim', 2) class MyModel(nn.Module): def __init__(self, model_param: ModelParam): super().__init__() self.embedding = nn.Embedding( model_param.vocab_size, model_param.embedding_dim ) self.lin = nn.Linear( model_param.input_size * model_param.embedding_dim, model_param.target_dim ) def forward(self, x): features = self.embedding(x).view(x.size()[0], -1) features = F.relu(features) features = self.lin(features) return features"
},
{
"code": null,
"e": 7124,
"s": 7051,
"text": "Then I can easily iterate the training (and testing) routine as follows."
},
{
"code": null,
"e": 7215,
"s": 7124,
"text": "It is easy to modify the current defined model to a model that used pre-trained embedding."
},
{
"code": null,
"e": 7708,
"s": 7215,
"text": "class MyModelWithPretrainedEmbedding(nn.Module): def __init__(self, model_param: ModelParam, embedding): super().__init__() self.embedding = embedding self.lin = nn.Linear( model_param.input_size * model_param.embedding_dim, model_param.target_dim ) def forward(self, x): features = self.embedding[x].reshape(x.size()[0], -1) features = F.relu(features) features = self.lin(features) return features"
},
{
"code": null,
"e": 7958,
"s": 7708,
"text": "I made 3 lines of modifications. You should notice that I have changed constructor input to accept an embedding. Additionally, I have also change the view method to reshape and use get operator [] instead of call operator () to access the embedding."
},
{
"code": null,
"e": 8025,
"s": 7958,
"text": "model = MyModelWithPretrainedEmbedding(model_param, vocab.vectors)"
},
{
"code": null,
"e": 8292,
"s": 8025,
"text": "I have finished laying out my own exploration of using torchtext to handle text data in PyTorch. I began writing this article because I had trouble using it with the current tutorials available on the internet. I hope this article may reduce overhead for others too."
},
{
"code": null,
"e": 8357,
"s": 8292,
"text": "You need help to write this code? Here’s a link to google Colab."
},
{
"code": null,
"e": 8378,
"s": 8357,
"text": "Link to Google Colab"
},
{
"code": null,
"e": 8450,
"s": 8378,
"text": "[1] Nie, A. A Tutorial on Torchtext. 2017. http://anie.me/On-Torchtext/"
},
{
"code": null,
"e": 8574,
"s": 8450,
"text": "[2] Text Classification with TorchText Tutorial. https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html"
},
{
"code": null,
"e": 8638,
"s": 8574,
"text": "[3] Stanza Documentation. https://stanfordnlp.github.io/stanza/"
},
{
"code": null,
"e": 8697,
"s": 8638,
"text": "[4] Gensim Documentation. https://radimrehurek.com/gensim/"
},
{
"code": null,
"e": 8740,
"s": 8697,
"text": "[5] Spacy Documentation. https://spacy.io/"
}
] |
Julia Programming - Data Frames
|
DataFrame may be defined as a table or spreadsheet which can be used to sort as well as explore a set of related data values. In other words, we can call it a smarter array for holding tabular data. Before we use it, we need to download and install the DataFrames and CSV packages as follows −
(@v1.5) pkg> add DataFrames
(@v1.5) pkg> add CSV
To start using the DataFrames package, type the following command −
julia> using DataFrames
There are several ways to create new DataFrames (which we will discuss later in this section) but one of the quickest ways to load data into DataFrames is to load the Anscombe dataset. For better understanding, let us see the example below −
anscombe = DataFrame(
[10 10 10 8 8.04 9.14 7.46 6.58;
8 8 8 8 6.95 8.14 6.77 5.76;
13 13 13 8 7.58 8.74 12.74 7.71;
9 9 9 8 8.81 8.77 7.11 8.84;
11 11 11 8 8.33 9.26 7.81 8.47;
14 14 14 8 9.96 8.1 8.84 7.04;
6 6 6 8 7.24 6.13 6.08 5.25;
4 4 4 19 4.26 3.1 5.39 12.5;
12 12 12 8 10.84 9.13 8.15 5.56;
7 7 7 8 4.82 7.26 6.42 7.91;
5 5 5 8 5.68 4.74 5.73 6.89]);
julia> rename!(anscombe, [Symbol.(:N, 1:4); Symbol.(:M, 1:4)])
11×8 DataFrame
│ Row │ N1 │ N2 │ N3 │ N4 │ M1 │ M2 │ M3 │ M4 │
│ │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ 1 │ 10.0 │ 10.0 │ 10.0 │ 8.0 │ 8.04 │ 9.14 │ 7.46 │ 6.58 │
│ 2 │ 8.0 │ 8.0 │ 8.0 │ 8.0 │ 6.95 │ 8.14 │ 6.77 │ 5.76 │
│ 3 │ 13.0 │ 13.0 │ 13.0 │ 8.0 │ 7.58 │ 8.74 │ 12.74 │ 7.71 │
│ 4 │ 9.0 │ 9.0 │ 9.0 │ 8.0 │ 8.81 │ 8.77 │ 7.11 │ 8.84 │
│ 5 │ 11.0 │ 11.0 │ 11.0 │ 8.0 │ 8.33 │ 9.26 │ 7.81 │ 8.47 │
│ 6 │ 14.0 │ 14.0 │ 14.0 │ 8.0 │ 9.96 │ 8.1 │ 8.84 │ 7.04 │
│ 7 │ 6.0 │ 6.0 │ 6.0 │ 8.0 │ 7.24 │ 6.13 │ 6.08 │ 5.25 │
│ 8 │ 4.0 │ 4.0 │ 4.0 │ 19.0 │ 4.26 │ 3.1 │ 5.39 │ 12.5 │
│ 9 │ 12.0 │ 12.0 │ 12.0 │ 8.0 │ 10.84 │ 9.13 │ 8.15 │ 5.56 │
│10 │ 7.0 │ 7.0 │ 7.0 │ 8.0 │ 4.82 │ 7.26 │ 6.42 │ 7.91 │
│11 │ 5.0 │ 5.0 │ 5.0 │ 8.0 │ 5.68 │ 4.74 │ 5.73 │ 6.89 │
We created a DataFrame from the array, assigned it to a variable named anscombe, and then renamed the columns.
We can also use another dataset package named RDatasets package. It contains several other famous datasets including Anscombe’s. Before we start using it, we need to first download and install it as follows −
(@v1.5) pkg> add RDatasets
To start using this package, type the following command −
julia> using RDatasets
julia> anscombe = dataset("datasets","anscombe")
11×8 DataFrame
│ Row │ X1 │ X2 │ X3 │ X4 │ Y1 │ Y2 │ Y3 │ Y4 │
│ │ Int64 │ Int64 │ Int64 │ Int64 │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼───────┼───────┼───────┼───────┼─────────┼─────────┼─────────┼─────────┤
│ 1 │ 10 │ 10 │ 10 │ 8 │ 8.04 │ 9.14 │ 7.46 │ 6.58 │
│ 2 │ 8 │ 8 │ 8 │ 8 │ 6.95 │ 8.14 │ 6.77 │ 5.76 │
│ 3 │ 13 │ 13 │ 13 │ 8 │ 7.58 │ 8.74 │ 12.74│ 7.71 │
│ 4 │ 9 │ 9 │ 9 │ 8 │ 8.81 │ 8.77 │ 7.11 │ 8.84 │
│ 5 │ 11 │ 11 │ 11 │ 8 │ 8.33 │ 9.26 │ 7.81 │ 8.47 │
│ 6 │ 14 │ 14 │ 14 │ 8 │ 9.96 │ 8.1 │ 8.84 │ 7.04 │
│ 7 │ 6 │ 6 │ 6 │ 8 │ 7.24 │ 6.13 │ 6.08 │ 5.25 │
│ 8 │ 4 │ 4 │ 4 │ 19 │ 4.26 │ 3.1 │ 5.39 │ 12.5 │
│ 9 │ 12 │ 12 │ 12 │ 8 │ 10.84 │ 9.13 │ 8.15 │ 5.56 │
│ 10 │ 7 │ 7 │ 7 │ 8 │ 4.82 │ 7.26 │ 6.42 │ 7.91 │
│ 11 │ 5 │ 5 │ 5 │ 8 │ 5.68 │ 4.74 │ 5.73 │ 6.89 │
We can also create DataFrames by simply providing information about rows and columns, as we would give in an array.
julia> empty_df = DataFrame(X = 1:10, Y = 21:30)
10×2 DataFrame
│ Row │ X │ Y │
│ │ Int64 │ Int64 │
├─────┼───────┼───────┤
│ 1 │ 1 │ 21 │
│ 2 │ 2 │ 22 │
│ 3 │ 3 │ 23 │
│ 4 │ 4 │ 24 │
│ 5 │ 5 │ 25 │
│ 6 │ 6 │ 26 │
│ 7 │ 7 │ 27 │
│ 8 │ 8 │ 28 │
│ 9 │ 9 │ 29 │
│ 10 │ 10 │ 30 │
To create a completely empty DataFrame, we only need to supply the column names and define their types as follows −
julia> Complete_empty_df = DataFrame(Name=String[],
W=Float64[],
H=Float64[],
M=Float64[],
V=Float64[])
0×5 DataFrame
julia> Complete_empty_df = vcat(Complete_empty_df, DataFrame(Name="EmptyTestDataFrame", W=5.0, H=5.0, M=3.0, V=5.0))
1×5 DataFrame
│ Row │ Name │ W │ H │ M │ V │
│ │ String │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼────────────────────┼─────────┼─────────┼─────────┼─────────┤
│ 1 │ EmptyTestDataFrame │ 5.0 │ 5.0 │ 3.0 │ 5.0 │
julia> Complete_empty_df = vcat(Complete_empty_df, DataFrame(Name="EmptyTestDataFrame2", W=6.0, H=6.0, M=5.0, V=7.0))
2×5 DataFrame
│ Row │ Name │ W │ H │ M │ V │
│ │ String │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼─────────────────────┼─────────┼─────────┼─────────┼─────────┤
│ 1 │ EmptyTestDataFrame │ 5.0 │ 5.0 │ 3.0 │ 5.0 │
│ 2 │ EmptyTestDataFrame2 │ 6.0 │ 6.0 │ 5.0 │ 7.0 │
Now that the Anscombe dataset has been loaded, we can also compute some statistics on it. The inbuilt function named describe() enables us to calculate statistical properties of the columns of a dataset. You can supply the symbols, given below, for the properties −
mean
std
min
q25
median
q75
max
eltype
nunique
first
last
nmissing
julia> describe(anscombe, :mean, :std, :min, :median, :q25)
8×6 DataFrame
│ Row │ variable │ mean │ std │ min │ median │ q25 │
│ │ Symbol │ Float64 │ Float64 │ Real │ Float64 │ Float64 │
├─────┼──────────┼─────────┼─────────┼──────┼─────────┼─────────┤
│ 1 │ X1 │ 9.0 │ 3.31662 │ 4 │ 9.0 │ 6.5 │
│ 2 │ X2 │ 9.0 │ 3.31662 │ 4 │ 9.0 │ 6.5 │
│ 3 │ X3 │ 9.0 │ 3.31662 │ 4 │ 9.0 │ 6.5 │
│ 4 │ X4 │ 9.0 │ 3.31662 │ 8 │ 8.0 │ 8.0 │
│ 5 │ Y1 │ 7.50091 │ 2.03157 │ 4.26 │ 7.58 │ 6.315 │
│ 6 │ Y2 │ 7.50091 │ 2.03166 │ 3.1 │ 8.14 │ 6.695 │
│ 7 │ Y3 │ 7.5 │ 2.03042 │ 5.39 │ 7.11 │ 6.25 │
│ 8 │ Y4 │ 7.50091 │ 2.03058 │ 5.25 │ 7.04 │ 6.17 │
We can also do a comparison between XY datasets as follows −
julia> [describe(anscombe[:, xy], :mean, :std, :median, :q25) for xy in [[:X1, :Y1], [:X2, :Y2], [:X3, :Y3], [:X4, :Y4]]]
4-element Array{DataFrame,1}:
2×5 DataFrame
│ Row │ variable │ mean │ std │ median │ q25 │
│ │ Symbol │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼──────────┼─────────┼─────────┼─────────┼─────────┤
│ 1 │ X1 │ 9.0 │ 3.31662 │ 9.0 │ 6.5 │
│ 2 │ Y1 │ 7.50091 │ 2.03157 │ 7.58 │ 6.315 │
2×5 DataFrame
│ Row │ variable │ mean │ std │ median │ q25 │
│ │ Symbol │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼──────────┼─────────┼─────────┼─────────┼─────────┤
│ 1 │ X2 │ 9.0 │ 3.31662 │ 9.0 │ 6.5 │
│ 2 │ Y2 │ 7.50091 │ 2.03166 │ 8.14 │ 6.695 │
2×5 DataFrame
│ Row │ variable │ mean │ std │ median │ q25 │
│ │ Symbol │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼──────────┼─────────┼─────────┼─────────┼─────────┤
│ 1 │ X3 │ 9.0 │ 3.31662 │ 9.0 │ 6.5 │
│ 2 │ Y3 │ 7.5 │ 2.03042 │ 7.11 │ 6.25 │
2×5 DataFrame
│ Row │ variable │ mean │ std │ median │ q25 │
│ │ Symbol │ Float64 │ Float64 │ Float64 │ Float64 │
├─────┼──────────┼─────────┼─────────┼─────────┼─────────┤
│ 1 │ X4 │ 9.0 │ 3.31662 │ 8.0 │ 8.0 │
│ 2 │ Y4 │ 7.50091 │ 2.03058 │ 7.04 │ 6.17 │
Let us reveal the true purpose of Anscombe’s quartet, i.e., plot its four sets as follows −
julia> using StatsPlots
[ Info: Precompiling StatsPlots [f3b207a7-027a-5e70-b257-86293d7955fd]
julia> @df anscombe scatter([:X1 :X2 :X3 :X4], [:Y1 :Y2 :Y3 :Y4],
smooth=true,
line = :red,
linewidth = 2,
title= ["X$i vs Y$i" for i in (1:4)'],
legend = false,
layout = 4,
xlimits = (2, 20),
ylimits = (2, 14))
In this section, we will be working with the linear regression line for the dataset. For this we need to use the Generalized Linear Model (GLM) package, which you need to add first as follows −
(@v1.5) pkg> add GLM
Now let us create a linear regression model by specifying a formula using the @formula macro and supplying column names as well as the name of the DataFrame. An example for the same is given below −
julia> linearregressionmodel = fit(LinearModel, @formula(Y1 ~ X1), anscombe)
StatsModels.TableRegressionModel{LinearModel{GLM.LmResp{Array{Float64,1}},GLM.DensePredChol{Float64,LinearAlgebra.Cholesky{Float64,Array{Float64,2}}}},Array{Float64,2}}
Y1 ~ 1 + X1
Coefficients:
───────────────────────────────────────────────────────────────────────
Coef. Std. Error t Pr(>|t|) Lower 95% Upper 95%
───────────────────────────────────────────────────────────────────────
(Intercept) 3.00009 1.12475 2.67 0.0257 0.455737 5.54444
X1 0.500091 0.117906 4.24 0.0022 0.23337 0.766812
───────────────────────────────────────────────────────────────────────
Let us check the summary and the coefficient of the above created linear regression model −
julia> summary(linearregressionmodel)
"StatsModels.TableRegressionModel{LinearModel{GLM.LmResp{Array{Float64,1}},GLM.DensePredChol{Float64,LinearAlgebra.Cholesky{Float64,Array{Float64,2}}}},Array{Float64,2}}"
julia> coef(linearregressionmodel)
2-element Array{Float64,1}:
3.0000909090909054
0.5000909090909096
Now let us produce a function for the regression line. The form of the function is y = ax + c.
julia> f(x) = coef(linearregressionmodel)[2] * x + coef(linearregressionmodel)[1]
f (generic function with 1 method)
Once we have the function that describes the regression line, we can draw a plot as follows −
julia> p1 = plot(anscombe[!, :X1], anscombe[!, :Y1],
smooth=true,
seriestype=:scatter,
title = "X1 vs Y1",
linewidth=8,
linealpha=0.5,
label="data")
julia> plot!(f, 2, 20, label="correlation")
As we know, nothing is perfect. This is also true in the case of datasets, because not all datasets are consistent and tidy. To show how we can work with different items of a DataFrame, let us create a test DataFrame −
julia> testdf = DataFrame( Number = [3, 5, 7, 8, 20 ],
Name = ["Lithium", "Boron", "Nitrogen", "Oxygen", "Calcium" ],
AtomicWeight = [6.941, 10.811, 14.0067, 15.9994, 40.078 ],
Symbol = ["Li", "B", "N", "O", "Ca" ],
Discovered = [1817, 1808, 1772, 1774, missing ])
5×5 DataFrame
│ Row │ Number │ Name │ AtomicWeight │ Symbol │ Discovered │
│ │ Int64 │ String │ Float64 │ String │ Int64? │
├─────┼────────┼──────────┼──────────────┼────────┼────────────┤
│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │
│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │
│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │
│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │
│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │
There can be some missing values in datasets. These can be checked with the help of the describe() function as follows −
julia> describe(testdf)
5×8 DataFrame
│ Row │ variable │ mean │ min │ median │ max │ nunique │ nmissing │ eltype │
│ │ Symbol │ Union... │ Any │ Union... │ Any │ Union... │ Union... │ Type │
├─────┼──────────────┼─────────┼───────┼─────────┼────────┼─────────┼──────────┼───────────────────────┤
│ 1 │ Number │ 8.6 │ 3 │ 7.0 │ 20 │ │ │ Int64 │
│ 2 │ Name │ │ Boron │ │ Oxygen │ 5 │ │ String │
│ 3 │ AtomicWeight │ 17.5672 │ 6.941 │ 14.0067 │ 40.078 │ │ │ Float64 │
│ 4 │ Symbol │ │ B │ │ O │ 5 │ │ String │
│ 5 │ Discovered │ 1792.75 │ 1772 │ 1791.0 │ 1817 │ │ 1 │ Union{Missing, Int64} │
Julia provides a special datatype called Missing to address this issue. This datatype indicates that there is no usable value at this location. That is why the DataFrames package allows us to get the most out of our datasets and makes sure that the calculations are not tampered with by missing values.
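The behaviour of missing can be seen with a few lines in the REPL. A quick sketch using only Julia Base functions (no extra packages needed) −

```julia
julia> 1 + missing        # arithmetic with missing propagates missing
missing

julia> ismissing(missing)
true

julia> sum(skipmissing([1, 2, missing, 4]))   # skipmissing ignores missing values
7
```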
We can use the ismissing() function to check whether the DataFrame has any missing values.
julia> for row in 1:size(testdf, 1)
          for col in 1:size(testdf, 2)
             if ismissing(testdf[row, col])
                println("$(names(testdf)[col]) value for $(testdf[row, :Name]) is missing!")
             end
          end
       end
Discovered value for Calcium is missing!
We can use the following code to change values that are not acceptable, like “n/a”, “0”, or 0, to missing. The code below will look in every cell for the above-mentioned non-acceptable values.
julia> for row in 1:size(testdf, 1)      # or nrow(testdf)
          for col in 1:size(testdf, 2)   # or ncol(testdf)
             println("processing row $row column $col ")
             temp = testdf[row, col]
             if ismissing(temp)
                println("skipping missing")
             elseif temp == "n/a" || temp == "0" || temp == 0
                testdf[row, col] = missing
                println("changed row $row column $col ")
             end
          end
       end
processing row 1 column 1
processing row 1 column 2
processing row 1 column 3
processing row 1 column 4
processing row 1 column 5
processing row 2 column 1
processing row 2 column 2
processing row 2 column 3
processing row 2 column 4
processing row 2 column 5
processing row 3 column 1
processing row 3 column 2
processing row 3 column 3
processing row 3 column 4
processing row 3 column 5
processing row 4 column 1
processing row 4 column 2
processing row 4 column 3
processing row 4 column 4
processing row 4 column 5
processing row 5 column 1
processing row 5 column 2
processing row 5 column 3
processing row 5 column 4
processing row 5 column 5
skipping missing
Julia provides support for representing missing values in the statistical sense, that is for situations where no value is available for a variable in an observation, but a valid value theoretically exists.
The completecases() function returns a vector indicating which rows of the DataFrame are complete, i.e., contain no missing values. We can use it to select only the complete rows, for example to find the maximum value of a column that contains missing values.
Example
julia> maximum(testdf[completecases(testdf), :].Discovered)
1817
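Besides selecting complete rows, missing values can also be replaced with a default using the coalesce() function from Julia Base, broadcast over the column with the dot syntax. A sketch — the 0 below is just an arbitrary placeholder value −

```julia
julia> coalesce.(testdf.Discovered, 0)   # replace each missing with 0
5-element Array{Int64,1}:
 1817
 1808
 1772
 1774
    0
```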
The dropmissing() function is used to get a copy of the DataFrame without the rows that contain missing values.
Example
julia> dropmissing(testdf)
4×5 DataFrame
│ Row │ Number │ Name │ AtomicWeight │ Symbol │ Discovered │
│ │ Int64 │ String │ Float64 │ String │ Int64 │
├─────┼────────┼──────────┼──────────────┼────────┼────────────┤
│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │
│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │
│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │
│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │
The DataFrames package of Julia provides various methods with which you can add, remove, and rename columns, and add or delete rows.
We can use the hcat() function to add a column of integers (here, the row indices) to the DataFrame. It can be used as follows −
julia> hcat(testdf, axes(testdf, 1))
5×6 DataFrame
│ Row │ Number │ Name │ AtomicWeight │ Symbol │ Discovered │ x1 │
│ │ Int64 │ String │ Float64 │ String │ Int64? │ Int64 │
├─────┼────────┼──────────┼──────────────┼────────┼────────────┼───────┤
│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │ 1 │
│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │ 2 │
│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │ 3 │
│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │ 4 │
│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │ 5 │
But as you can notice, we haven’t changed the DataFrame or assigned any new DataFrame to a symbol. We can add another column as follows −
julia> testdf[!, :MP] = [180.7, 2300, -209.86, -222.65, 839]
5-element Array{Float64,1}:
180.7
2300.0
-209.86
-222.65
839.0
julia> testdf
5×6 DataFrame
│ Row │ Number │ Name │ AtomicWeight │ Symbol │ Discovered │ MP │
│ │ Int64 │ String │ Float64 │ String │ Int64? │ Float64 │
├─────┼────────┼──────────┼──────────────┼────────┼────────────┼─────────┤
│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │ 180.7 │
│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │ 2300.0 │
│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │ -209.86 │
│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │ -222.65 │
│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │ 839.0 │
We have added a column having melting points of all the elements to our test DataFrame.
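Another way to add a column is the insertcols!() function from DataFrames.jl. The sketch below operates on a copy so that the running testdf example is left unchanged; the boiling-point values are only illustrative −

```julia
julia> insertcols!(copy(testdf), :BP => [1342.0, 4000.0, -195.79, -182.96, 1484.0])
       # returns the copy with a new BP column; testdf itself is untouched
```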
We can use the select!() function to remove a column from the DataFrame. It mutates the DataFrame in place so that it contains only the selected columns; hence, to remove a particular column, we need to use select!() with Not. It is shown in the given example −
julia> select!(testdf, Not(:MP))
5×5 DataFrame
│ Row │ Number │ Name │ AtomicWeight │ Symbol │ Discovered │
│ │ Int64 │ String │ Float64 │ String │ Int64? │
├─────┼────────┼──────────┼──────────────┼────────┼────────────┤
│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │
│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │
│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │
│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │
│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │
We have removed the column MP from our Data Frame.
We can use rename!() function to rename a column in the DataFrame. We will be renaming the AtomicWeight column to AW as follows −
julia> rename!(testdf, :AtomicWeight => :AW)
5×5 DataFrame
│ Row │ Number │ Name │ AW │ Symbol │ Discovered │
│ │ Int64 │ String │ Float64 │ String │ Int64? │
├─────┼────────┼──────────┼─────────┼────────┼────────────┤
│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │
│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │
│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │
│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │
│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │
We can use the push!() function with suitable data to add rows to the DataFrame. In the example given below, we will be adding a row for the element Copper −
Example
julia> push!(testdf, [29, "Copper", 63.546, "Cu", missing])
6×5 DataFrame
│ Row │ Number │ Name │ AW │ Symbol │ Discovered │
│ │ Int64 │ String │ Float64 │ String │ Int64? │
├─────┼────────┼──────────┼─────────┼────────┼────────────┤
│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │
│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │
│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │
│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │
│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │
│ 6 │ 29 │ Copper │ 63.546 │ Cu │ missing │
We can use the deleterows!() function to delete rows from the DataFrame. In the example given below, we will be deleting three rows (the 4th, 5th, and 6th) from our test data frame −
Example
julia> deleterows!(testdf, 4:6)
3×5 DataFrame
│ Row │ Number │ Name │ AW │ Symbol │ Discovered │
│ │ Int64 │ String │ Float64 │ String │ Int64? │
├─────┼────────┼──────────┼─────────┼────────┼────────────┤
│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │
│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │
│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │
To find values in a DataFrame, we need to use an elementwise operator examining all the rows. This operator will return an array of Boolean values indicating whether each cell meets the criteria.
Example
julia> testdf[:, :AW] .< 10
3-element BitArray{1}:
1
0
0
julia> testdf[testdf[:, :AW] .< 10, :]
1×5 DataFrame
│ Row │ Number │ Name │ AW │ Symbol │ Discovered │
│ │ Int64 │ String │ Float64 │ String │ Int64? │
├─────┼────────┼─────────┼─────────┼────────┼────────────┤
│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │
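The same lookup can also be written with the filter() function of DataFrames.jl, which takes a predicate over whole rows. A sketch −

```julia
julia> filter(row -> row.AW < 10, testdf)   # keep only rows whose AW is below 10
```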
To sort the values in DataFrame, we can use sort!() function. We need to give the columns on which we want to sort.
Example
julia> sort!(testdf, [order(:AW)])
3×5 DataFrame
│ Row │ Number │ Name │ AW │ Symbol │ Discovered │
│ │ Int64 │ String │ Float64 │ String │ Int64? │
├─────┼────────┼──────────┼─────────┼────────┼────────────┤
│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │
│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │
│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │
The DataFrame is sorted based on the values of column AW.
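order() also accepts a rev keyword, so a descending sort can be sketched as follows (using the non-mutating sort() so that testdf keeps its current order) −

```julia
julia> sort(testdf, order(:AW, rev=true))   # heaviest element first
```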
|
[
{
"code": null,
"e": 2370,
"s": 2078,
"text": "DataFrame may be defined as a table or spreadsheet which we can be used to sort as well as explore a set of related data values. In other words, we can call it a smarter array for holding tabular data. Before we use it, we need to download and install DataFrame and CSV packages as follows −"
},
{
"code": null,
"e": 2420,
"s": 2370,
"text": "(@v1.5) pkg> add DataFrames\n(@v1.5) pkg> add CSV\n"
},
{
"code": null,
"e": 2488,
"s": 2420,
"text": "To start using the DataFrames package, type the following command −"
},
{
"code": null,
"e": 2513,
"s": 2488,
"text": "julia> using DataFrames\n"
},
{
"code": null,
"e": 2755,
"s": 2513,
"text": "There are several ways to create new DataFrames (which we will discuss later in this section) but one of the quickest ways to load data into DataFrames is to load the Anscombe dataset. For better understanding, let us see the example below −"
},
{
"code": null,
"e": 3245,
"s": 2755,
"text": "anscombe = DataFrame(\n [10 10 10 8 8.04 9.14 7.46 6.58;\n 8 8 8 8 6.95 8.14 6.77 5.76;\n 13 13 13 8 7.58 8.74 12.74 7.71;\n 9 9 9 8 8.81 8.77 7.11 8.84;\n 11 11 11 8 8.33 9.26 7.81 8.47;\n 14 14 14 8 9.96 8.1 8.84 7.04;\n 6 6 6 8 7.24 6.13 6.08 5.25;\n 4 4 4 19 4.26 3.1 5.39 12.5;\n 12 12 12 8 10.84 9.13 8.15 5.56;\n 7 7 7 8 4.82 7.26 6.42 7.91;\n 5 5 5 8 5.68 4.74 5.73 6.89]);\n"
},
{
"code": null,
"e": 4556,
"s": 3245,
"text": "julia> rename!(anscombe, [Symbol.(:N, 1:4); Symbol.(:M, 1:4)])\n11×8 DataFrame\n│ Row │ N1 │ N2 │ N3 │ N4 │ M1 │ M2 │ M3 │ M4 │\n│ │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │ Float64 │\n├─────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤\n│ 1 │ 10.0 │ 10.0 │ 10.0 │ 8.0 │ 8.04 │ 9.14 │ 7.46 │ 6.58 │\n│ 2 │ 8.0 │ 8.0 │ 8.0 │ 8.0 │ 6.95 │ 8.14 │ 6.77 │ 5.76 │\n│ 3 │ 13.0 │ 13.0 │ 13.0 │ 8.0 │ 7.58 │ 8.74 │ 12.74 │ 7.71 │\n│ 4 │ 9.0 │ 9.0 │ 9.0 │ 8.0 │ 8.81 │ 8.77 │ 7.11 │ 8.84 │\n│ 5 │ 11.0 │ 11.0 │ 11.0 │ 8.0 │ 8.33 │ 9.26 │ 7.81 │ 8.47 │\n│ 6 │ 14.0 │ 14.0 │ 14.0 │ 8.0 │ 9.96 │ 8.1 │ 8.84 │ 7.04 │\n│ 7 │ 6.0 │ 6.0 │ 6.0 │ 8.0 │ 7.24 │ 6.13 │ 6.08 │ 5.25 │\n│ 8 │ 4.0 │ 4.0 │ 4.0 │ 19.0 │ 4.26 │ 3.1 │ 5.39 │ 12.5 │\n│ 9 │ 12.0 │ 12.0 │ 12.0 │ 8.0 │ 10.84 │ 9.13 │ 8.15 │ 5.56 │\n│10 │ 7.0 │ 7.0 │ 7.0 │ 8.0 │ 4.82 │ 7.26 │ 6.42 │ 7.91 │\n│11 │ 5.0 │ 5.0 │ 5.0 │ 8.0 │ 5.68 │ 4.74 │ 5.73 │ 6.89 │\n"
},
{
"code": null,
"e": 4662,
"s": 4556,
"text": "We assigned the DataFrame to a variable named Anscombe, convert them to an array and then rename columns."
},
{
"code": null,
"e": 4871,
"s": 4662,
"text": "We can also use another dataset package named RDatasets package. It contains several other famous datasets including Anscombe’s. Before we start using it, we need to first download and install it as follows −"
},
{
"code": null,
"e": 4899,
"s": 4871,
"text": "(@v1.5) pkg> add RDatasets\n"
},
{
"code": null,
"e": 4957,
"s": 4899,
"text": "To start using this package, type the following command −"
},
{
"code": null,
"e": 6166,
"s": 4957,
"text": "julia> using DataFrames\njulia> anscombe = dataset(\"datasets\",\"anscombe\")\n11×8 DataFrame\n│ Row │ X1 │ X2 │ X3 │ X4 │ Y1 │ Y2 │ Y3 │ Y4 │\n│ │ Int64 │ Int64 │ Int64 │ Int64 │ Float64 │ Float64 │ Float64 │ Float64 │\n├─────┼───────┼───────┼───────┼───────┼─────────┼─────────┼─────────┼─────────┤\n│ 1 │ 10 │ 10 │ 10 │ 8 │ 8.04 │ 9.14 │ 7.46 │ 6.58 │\n│ 2 │ 8 │ 8 │ 8 │ 8 │ 6.95 │ 8.14 │ 6.77 │ 5.76 │\n│ 3 │ 13 │ 13 │ 13 │ 8 │ 7.58 │ 8.74 │ 12.74│ 7.71 │\n│ 4 │ 9 │ 9 │ 9 │ 8 │ 8.81 │ 8.77 │ 7.11 │ 8.84 │\n│ 5 │ 11 │ 11 │ 11 │ 8 │ 8.33 │ 9.26 │ 7.81 │ 8.47 │\n│ 6 │ 14 │ 14 │ 14 │ 8 │ 9.96 │ 8.1 │ 8.84 │ 7.04 │\n│ 7 │ 6 │ 6 │ 6 │ 8 │ 7.24 │ 6.13 │ 6.08 │ 5.25 │\n│ 8 │ 4 │ 4 │ 4 │ 19 │ 4.26 │ 3.1 │ 5.39 │ 12.5 │\n│ 9 │ 12 │ 12 │ 12 │ 8 │ 10.84 │ 9.13 │ 8.15 │ 5.56 │\n│ 10 │ 7 │ 7 │ 7 │ 8 │ 4.82 │ 7.26 │ 6.42 │ 7.91 │\n│ 11 │ 5 │ 5 │ 5 │ 8 │ 5.68 │ 4.74 │ 5.73 │ 6.89 │\n"
},
{
"code": null,
"e": 6276,
"s": 6166,
"text": "We can also create DataFrames by simply providing the information about rows, columns as we give in an array."
},
{
"code": null,
"e": 6652,
"s": 6276,
"text": "julia> empty_df = DataFrame(X = 1:10, Y = 21:30)\n10×2 DataFrame\n│ Row │ X │ Y │\n│ │ Int64 │ Int64 │\n├─────┼───────┼───────┤\n│ 1 │ 1 │ 21 │\n│ 2 │ 2 │ 22 │\n│ 3 │ 3 │ 23 │\n│ 4 │ 4 │ 24 │\n│ 5 │ 5 │ 25 │\n│ 6 │ 6 │ 26 │\n│ 7 │ 7 │ 27 │\n│ 8 │ 8 │ 28 │\n│ 9 │ 9 │ 29 │\n│ 10 │ 10 │ 30 │"
},
{
"code": null,
"e": 6766,
"s": 6652,
"text": "To create completely empty DataFrame, we only need to supply the Column Names and define their types as follows −"
},
{
"code": null,
"e": 6920,
"s": 6766,
"text": "julia> Complete_empty_df = DataFrame(Name=String[],\n W=Float64[],\n H=Float64[],\n M=Float64[],\n V=Float64[])\n0×5 DataFrame"
},
{
"code": null,
"e": 7327,
"s": 6920,
"text": "julia> Complete_empty_df = vcat(Complete_empty_df, DataFrame(Name=\"EmptyTestDataFrame\", W=5.0, H=5.0, M=3.0, V=5.0))\n1×5 DataFrame\n│ Row │ Name │ W │ H │ M │ V │\n│ │ String │ Float64 │ Float64 │ Float64 │ Float64 │\n├─────┼────────────────────┼─────────┼─────────┼─────────┼─────────┤\n│ 1 │ EmptyTestDataFrame │ 5.0 │ 5.0 │ 3.0 │ 5.0 │"
},
{
"code": null,
"e": 7809,
"s": 7327,
"text": "julia> Complete_empty_df = vcat(Complete_empty_df, DataFrame(Name=\"EmptyTestDataFrame2\", W=6.0, H=6.0, M=5.0, V=7.0))\n2×5 DataFrame\n│ Row │ Name │ W │ H │ M │ V │\n│ │ String │ Float64 │ Float64 │ Float64 │ Float64 │\n├─────┼─────────────────────┼─────────┼─────────┼─────────┼─────────┤\n│ 1 │ EmptyTestDataFrame │ 5.0 │ 5.0 │ 3.0 │ 5.0 │\n│ 2 │ EmptyTestDataFrame2 │ 6.0 │ 6.0 │ 5.0 │ 7.0 │"
},
{
"code": null,
"e": 8070,
"s": 7809,
"text": "Now the Anscombe dataset has been loaded, we can do some statistics with it also. The inbuilt function named describe() enables us to calculate the statistics properties of the columns of a dataset. You can supply the symbols, given below, for the properties −"
},
{
"code": null,
"e": 8075,
"s": 8070,
"text": "mean"
},
{
"code": null,
"e": 8084,
"s": 8080,
"text": "std"
},
{
"code": null,
"e": 8092,
"s": 8088,
"text": "min"
},
{
"code": null,
"e": 8100,
"s": 8096,
"text": "q25"
},
{
"code": null,
"e": 8111,
"s": 8104,
"text": "median"
},
{
"code": null,
"e": 8122,
"s": 8118,
"text": "q75"
},
{
"code": null,
"e": 8130,
"s": 8126,
"text": "max"
},
{
"code": null,
"e": 8141,
"s": 8134,
"text": "eltype"
},
{
"code": null,
"e": 8156,
"s": 8148,
"text": "nunique"
},
{
"code": null,
"e": 8170,
"s": 8164,
"text": "first"
},
{
"code": null,
"e": 8181,
"s": 8176,
"text": "last"
},
{
"code": null,
"e": 8195,
"s": 8186,
"text": "nmissing"
},
{
"code": null,
"e": 9005,
"s": 8204,
"text": "julia> describe(anscombe, :mean, :std, :min, :median, :q25)\n8×6 DataFrame\n│ Row │ variable │ mean │ std │ min │ median │ q25 │\n│ │ Symbol │ Float64 │ Float64 │ Real │ Float64 │ Float64 │\n├─────┼──────────┼─────────┼─────────┼──────┼─────────┼─────────┤\n│ 1 │ X1 │ 9.0 │ 3.31662 │ 4 │ 9.0 │ 6.5 │\n│ 2 │ X2 │ 9.0 │ 3.31662 │ 4 │ 9.0 │ 6.5 │\n│ 3 │ X3 │ 9.0 │ 3.31662 │ 4 │ 9.0 │ 6.5 │\n│ 4 │ X4 │ 9.0 │ 3.31662 │ 8 │ 8.0 │ 8.0 │\n│ 5 │ Y1 │ 7.50091 │ 2.03157 │ 4.26 │ 7.58 │ 6.315 │\n│ 6 │ Y2 │ 7.50091 │ 2.03166 │ 3.1 │ 8.14 │ 6.695 │\n│ 7 │ Y3 │ 7.5 │ 2.03042 │ 5.39 │ 7.11 │ 6.25 │\n│ 8 │ Y4 │ 7.50091 │ 2.03058 │ 5.25 │ 7.04 │ 6.17 │\n"
},
{
"code": null,
"e": 9066,
"s": 9005,
"text": "We can also do a comparison between XY datasets as follows −"
},
{
"code": null,
"e": 10455,
"s": 9066,
"text": "julia> [describe(anscombe[:, xy], :mean, :std, :median, :q25) for xy in [[:X1, :Y1], [:X2, :Y2], [:X3, :Y3], [:X4, :Y4]]]\n4-element Array{DataFrame,1}:\n2×5 DataFrame\n│ Row │ variable │ mean │ std │ median │ q25 │\n│ │ Symbol │ Float64 │ Float64 │ Float64 │ Float64 │\n├─────┼──────────┼─────────┼─────────┼─────────┼─────────┤\n│ 1 │ X1 │ 9.0 │ 3.31662 │ 9.0 │ 6.5 │\n│ 2 │ Y1 │ 7.50091 │ 2.03157 │ 7.58 │ 6.315 │\n2×5 DataFrame\n│ Row │ variable │ mean │ std │ median │ q25 │\n│ │ Symbol │ Float64 │ Float64 │ Float64 │ Float64 │\n├─────┼──────────┼─────────┼─────────┼─────────┼─────────┤\n│ 1 │ X2 │ 9.0 │ 3.31662 │ 9.0 │ 6.5 │\n│ 2 │ Y2 │ 7.50091 │ 2.03166 │ 8.14 │ 6.695 │\n2×5 DataFrame\n│ Row │ variable │ mean │ std │ median │ q25 │\n│ │ Symbol │ Float64 │ Float64 │ Float64 │ Float64 │\n├─────┼──────────┼─────────┼─────────┼─────────┼─────────┤\n│ 1 │ X3 │ 9.0 │ 3.31662 │ 9.0 │ 6.5 │\n│ 2 │ Y3 │ 7.5 │ 2.03042 │ 7.11 │ 6.25 │\n2×5 DataFrame\n│ Row │ variable │ mean │ std │ median │ q25 │\n│ │ Symbol │ Float64 │ Float64 │ Float64 │ Float64 │\n├─────┼──────────┼─────────┼─────────┼─────────┼─────────┤\n│ 1 │ X4 │ 9.0 │ 3.31662 │ 8.0 │ 8.0 │\n│ 2 │ Y4 │ 7.50091 │ 2.03058 │ 7.04 │ 6.17 │\n"
},
{
"code": null,
"e": 10552,
"s": 10455,
"text": "Let us reveal the true purpose of Anscombe, i.e., plot the four sets of its quartet as follows −"
},
{
"code": null,
"e": 10956,
"s": 10552,
"text": "julia> using StatsPlots\n[ Info: Precompiling StatsPlots [f3b207a7-027a-5e70-b257-86293d7955fd]\n\njulia> @df anscombe scatter([:X1 :X2 :X3 :X4], [:Y1 :Y2 :Y3 :Y4],\n smooth=true,\n line = :red,\n linewidth = 2,\n title= [\"X$i vs Y$i\" for i in (1:4)'],\n legend = false,\n layout = 4,\n xlimits = (2, 20),\n ylimits = (2, 14))"
},
{
"code": null,
"e": 11141,
"s": 10956,
"text": "In this section, we will be working with Linear Regression line for the dataset. For this we need to use Generalized Linear Model (GLM) package which you need to first add as follows −"
},
{
"code": null,
"e": 11163,
"s": 11141,
"text": "(@v1.5) pkg> add GLM\n"
},
{
"code": null,
"e": 11358,
"s": 11163,
"text": "Now let us create a liner regression model by specifying a formula using the @formula macro and supplying columns names as well as name of the DataFrame. An example for the same is given below −"
},
{
"code": null,
"e": 12067,
"s": 11358,
"text": "julia> linearregressionmodel = fit(LinearModel, @formula(Y1 ~ X1), anscombe)\nStatsModels.TableRegressionModel{LinearModel{GLM.LmResp{Array{Float64,1}},GLM.DensePredChol{Float64,LinearAlgebra.Cholesky{Float64,Array{Float64,2}}}},Array{Float64,2}}\n\n\nY1 ~ 1 + X1\n\nCoefficients:\n───────────────────────────────────────────────────────────────────────\n Coef. Std. Error t Pr(>|t|) Lower 95% Upper 95%\n───────────────────────────────────────────────────────────────────────\n(Intercept) 3.00009 1.12475 2.67 0.0257 0.455737 5.54444\n X1 0.500091 0.117906 4.24 0.0022 0.23337 0.766812\n───────────────────────────────────────────────────────────────────────"
},
{
"code": null,
"e": 12159,
"s": 12067,
"text": "Let us check the summary and the coefficient of the above created linear regression model −"
},
{
"code": null,
"e": 12472,
"s": 12159,
"text": "julia> summary(linearregressionmodel)\n\"StatsModels.TableRegressionModel{LinearModel{GLM.LmResp{Array{Float64,1}},GLM.DensePredChol{Float64,LinearAlgebra.Cholesky{Float64,Array{Float64,2}}}},Array{Float64,2}}\"\n\njulia> coef(linearregressionmodel)\n2-element Array{Float64,1}:\n 3.0000909090909054\n 0.5000909090909096"
},
{
"code": null,
"e": 12566,
"s": 12472,
"text": "Now let us produce a function for the regression line. The form of the function is y = ax +c."
},
{
"code": null,
"e": 12663,
"s": 12566,
"text": "julia> f(x) = coef(linearmodel)[2] * x + coef(linearmodel)[1]\nf (generic function with 1 method)"
},
{
"code": null,
"e": 12757,
"s": 12663,
"text": "Once we have the function that describes the regression line, we can draw a plot as follows −"
},
{
"code": null,
"e": 13029,
"s": 12757,
"text": "julia> p1 = plot(anscombe[:X1], anscombe[:Y1],\n smooth=true,\n seriestype=:scatter,\n title = \"X1 vs Y1\",\n linewidth=8,\n linealpha=0.5,\n label=\"data\")\n \njulia> plot!(f, 2, 20, label=\"correlation\")"
},
{
"code": null,
"e": 13249,
"s": 13029,
"text": "As we know that nothing is perfect. This is also true in case of datasets because not all the datasets are consistent and tidy. To show how we can work with different items of DataFrame, let us create a test DataFrame −"
},
{
"code": null,
"e": 14169,
"s": 13249,
"text": "julia> testdf = DataFrame( Number = [3, 5, 7, 8, 20 ],\n Name = [\"Lithium\", \"Boron\", \"Nitrogen\", \"Oxygen\", \"Calcium\" ],\n AtomicWeight = [6.941, 10.811, 14.0067, 15.9994, 40.078 ],\n Symbol = [\"Li\", \"B\", \"N\", \"O\", \"Ca\" ],\n Discovered = [1817, 1808, 1772, 1774, missing ])\n5×5 DataFrame\n│ Row │ Number │ Name │ AtomicWeight │ Symbol │ Discovered │\n│ │ Int64 │ String │ Float64 │ String │ Int64? │\n├─────┼────────┼──────────┼──────────────┼────────┼────────────┤\n│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │\n│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │\n│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │\n│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │\n│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │ "
},
{
"code": null,
"e": 14283,
"s": 14169,
"text": "There can be some missing values in datasets. It can be checked with the help of describe() function as follows −"
},
{
"code": null,
"e": 15169,
"s": 14283,
"text": "julia> describe(testdf)\n5×8 DataFrame\n│ Row │ variable │ mean │ min │ median │ max │ nunique │ nmissing │ eltype │\n│ │ Symbol │ Union... │ Any │ Union... │ Any │ Union... │ Union... │ Type │\n├─────┼──────────────┼─────────┼───────┼─────────┼────────┼─────────┼──────────┼───────────────────────┤\n│ 1 │ Number │ 8.6 │ 3 │ 7.0 │ 20 │ │ │ Int64 │\n│ 2 │ Name │ │ Boron │ │ Oxygen │ 5 │ │ String │\n│ 3 │ AtomicWeight │ 17.5672 │ 6.941 │ 14.0067 │ 40.078 │ │ │ Float64 │\n│ 4 │ Symbol │ │ B │ │ O │ 5 │ │ String │\n│ 5 │ Discovered │ 1792.75 │ 1772 │ 1791.0 │ 1817 │ │ 1 │ Union{Missing, Int64} │"
},
{
"code": null,
"e": 15465,
"s": 15169,
"text": "Julia provides a special datatype called Missing to address such issue. This datatype indicates that there is not a usable value at this location. That is why the DataFrames packages allow us to get most of our datasets and make sure that the calculations are not tampered due to missing values."
},
{
"code": null,
"e": 15561,
"s": 15465,
"text": "We can check with ismissing() function that whether the DataFrame has any missing value or not."
},
{
"code": null,
"e": 15806,
"s": 15561,
"text": "julia> for row in 1:nrows\n for col in 1:ncols\n if ismissing(testdf [row,col])\n println(\"$(names(testdf)[col]) value for $(testdf[row,:Name]) is missing!\")\n end\n end\n end"
},
{
"code": null,
"e": 15847,
"s": 15806,
"text": "Discovered value for Calcium is missing!"
},
{
"code": null,
"e": 16028,
"s": 15847,
"text": "We can use the following code to change values that are not acceptable like “n/a”, “0”, “missing”. The below code will look in every cell for above mentioned non-acceptable values."
},
{
"code": null,
"e": 17170,
"s": 16028,
"text": "julia> for row in 1:size(testdf, 1) # or nrow(testdf)\n for col in 1:size(testdf, 2) # or ncol(testdf)\n println(\"processing row $row column $col \")\n temp = testdf [row,col]\n if ismissing(temp)\n println(\"skipping missing\")\n elseif temp == \"n/a\" || temp == \"0\" || temp == 0\n testdf [row, col] = missing\n println(\"changed row $row column $col \")\n end\n end\n end\nprocessing row 1 column 1\nprocessing row 1 column 2\nprocessing row 1 column 3\nprocessing row 1 column 4\nprocessing row 1 column 5\nprocessing row 2 column 1\nprocessing row 2 column 2\nprocessing row 2 column 3\nprocessing row 2 column 4\nprocessing row 2 column 5\nprocessing row 3 column 1\nprocessing row 3 column 2\nprocessing row 3 column 3\nprocessing row 3 column 4\nprocessing row 3 column 5\nprocessing row 4 column 1\nprocessing row 4 column 2\nprocessing row 4 column 3\nprocessing row 4 column 4\nprocessing row 4 column 5\nprocessing row 5 column 1\nprocessing row 5 column 2\nprocessing row 5 column 3\nprocessing row 5 column 4\nprocessing row 5 column 5\nskipping missing"
},
{
"code": null,
"e": 17376,
"s": 17170,
"text": "Julia provides support for representing missing values in the statistical sense, that is for situations where no value is available for a variable in an observation, but a valid value theoretically exists."
},
{
"code": null,
"e": 17486,
"s": 17376,
"text": "The completecases() function is used to find the maximum value of the column that contains the missing value."
},
{
"code": null,
"e": 17494,
"s": 17486,
"text": "Example"
},
{
"code": null,
"e": 17559,
"s": 17494,
"text": "julia> maximum(testdf[completecases(testdf), :].Discovered)\n1817"
},
{
"code": null,
"e": 17659,
"s": 17559,
"text": "The dropmissing() function is used to get the copy of DataFrames without having the missing values."
},
{
"code": null,
"e": 17667,
"s": 17659,
"text": "Example"
},
{
"code": null,
"e": 18164,
"s": 17667,
"text": "julia> dropmissing(testdf)\n4×5 DataFrame\n│ Row │ Number │ Name │ AtomicWeight │ Symbol │ Discovered │\n│ │ Int64 │ String │ Float64 │ String │ Int64 │\n├─────┼────────┼──────────┼──────────────┼────────┼────────────┤\n│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │\n│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │\n│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │\n│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │\n"
},
{
"code": null,
"e": 18291,
"s": 18164,
"text": "The DataFrames package of Julia provides various methods using which you can add, remove, rename columns, and add/delete rows."
},
{
"code": null,
"e": 18392,
"s": 18291,
"text": "We can use hcat() function to add a column of integers to the DataFrame. It can be used as follows −"
},
{
"code": null,
"e": 19028,
"s": 18392,
"text": "julia> hcat(testdf, axes(testdf, 1))\n5×6 DataFrame\n│ Row │ Number │ Name │ AtomicWeight │ Symbol │ Discovered │ x1 │\n│ │ Int64 │ String │ Float64 │ String │ Int64? │ Int64 │\n├─────┼────────┼──────────┼──────────────┼────────┼────────────┼───────┤\n│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │ 1 │\n│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │ 2 │\n│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │ 3 │\n│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │ 4 │\n│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │ 5 │\n"
},
{
"code": null,
"e": 19170,
"s": 19028,
"text": "But as you can notice that we haven’t changed the DataFrame or assigned any new DataFrame to a symbol. We can add another column as follows −"
},
{
"code": null,
"e": 19928,
"s": 19170,
"text": "julia> testdf [!, :MP] = [180.7, 2300, -209.86, -222.65, 839]\n5-element Array{Float64,1}:\n 180.7\n 2300.0\n -209.86\n -222.65\n 839.0\njulia> testdf\n5×6 DataFrame\n│ Row │ Number │ Name │ AtomicWeight │ Symbol │ Discovered │ MP │\n│ │ Int64 │ String │ Float64 │ String │ Int64? │ Float64 │\n├─────┼────────┼──────────┼──────────────┼────────┼────────────┼─────────┤\n│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │ 180.7 │\n│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │ 2300.0 │\n│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │ -209.86 │\n│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │ -222.65 │\n│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │ 839.0 │"
},
{
"code": null,
"e": 20016,
"s": 19928,
"text": "We have added a column having melting points of all the elements to our test DataFrame."
},
{
"code": null,
"e": 20259,
"s": 20016,
"text": "We can use select!() function to remove a column from the DataFrame. It will create a new DataFrame that contains the selected columns, hence to remove a particular column, we need to use select!() with Not. It is shown in the given example −"
},
{
"code": null,
"e": 20826,
"s": 20259,
"text": "julia> select!(testdf, Not(:MP))\n5×5 DataFrame\n│ Row │ Number │ Name │ AtomicWeight │ Symbol │ Discovered │\n│ │ Int64 │ String │ Float64 │ String │ Int64? │\n├─────┼────────┼──────────┼──────────────┼────────┼────────────┤\n│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │\n│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │\n│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │\n│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │\n│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │"
},
{
"code": null,
"e": 20877,
"s": 20826,
"text": "We have removed the column MP from our Data Frame."
},
{
"code": null,
"e": 21007,
"s": 20877,
"text": "We can use rename!() function to rename a column in the DataFrame. We will be renaming the AtomicWeight column to AW as follows −"
},
{
"code": null,
"e": 21546,
"s": 21007,
"text": "julia> rename!(testdf, :AtomicWeight => :AW)\n5×5 DataFrame\n│ Row │ Number │ Name │ AW │ Symbol │ Discovered │\n│ │ Int64 │ String │ Float64 │ String │ Int64? │\n├─────┼────────┼──────────┼─────────┼────────┼────────────┤\n│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │\n│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │\n│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │\n│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │\n│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │"
},
{
"code": null,
"e": 21698,
"s": 21546,
"text": "We can use push!() function with suitable data to add rows in the DataFrame. In the below given example we will be adding a row having element Cooper −"
},
{
"code": null,
"e": 21706,
"s": 21698,
"text": "Example"
},
{
"code": null,
"e": 22320,
"s": 21706,
"text": "julia> push!(testdf, [29, \"Copper\", 63.546, \"Cu\", missing])\n6×5 DataFrame\n│ Row │ Number │ Name │ AW │ Symbol │ Discovered │\n│ │ Int64 │ String │ Float64 │ String │ Int64? │\n├─────┼────────┼──────────┼─────────┼────────┼────────────┤\n│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │\n│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │\n│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │\n│ 4 │ 8 │ Oxygen │ 15.9994 │ O │ 1774 │\n│ 5 │ 20 │ Calcium │ 40.078 │ Ca │ missing │\n│ 6 │ 29 │ Copper │ 63.546 │ Cu │ missing │"
},
{
"code": null,
"e": 22512,
"s": 22320,
"text": "We can use deleterows!() function with suitable data to delete rows from the DataFrame. In the below given example we will be deleting three rows (4th, 5th,and 6th) from our test data frame −"
},
{
"code": null,
"e": 22520,
"s": 22512,
"text": "Example"
},
{
"code": null,
"e": 22926,
"s": 22520,
"text": "julia> deleterows!(testdf, 4:6)\n3×5 DataFrame\n│ Row │ Number │ Name │ AW │ Symbol │ Discovered │\n│ │ Int64 │ String │ Float64 │ String │ Int64? │\n├─────┼────────┼──────────┼─────────┼────────┼────────────┤\n│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │\n│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │\n│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │"
},
{
"code": null,
"e": 23127,
"s": 22926,
"text": "To find the values in DataFrame, we need to use an elementwise operator examining all the rows. This operator will return an array of Boolean values to indicate whether cells meet the criteria or not."
},
{
"code": null,
"e": 23135,
"s": 23127,
"text": "Example"
},
{
"code": null,
"e": 23482,
"s": 23135,
"text": "julia> testdf[:, :AW] .< 10\n3-element BitArray{1}:\n1\n0\n0\n\njulia> testdf[testdf[:, :AW] .< 10, :]\n1×5 DataFrame\n│ Row │ Number │ Name │ AW │ Symbol │ Discovered │\n│ │ Int64 │ String │ Float64 │ String │ Int64? │\n├─────┼────────┼─────────┼─────────┼────────┼────────────┤\n│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │"
},
{
"code": null,
"e": 23598,
"s": 23482,
"text": "To sort the values in DataFrame, we can use sort!() function. We need to give the columns on which we want to sort."
},
{
"code": null,
"e": 23606,
"s": 23598,
"text": "Example"
},
{
"code": null,
"e": 24015,
"s": 23606,
"text": "julia> sort!(testdf, [order(:AW)])\n3×5 DataFrame\n│ Row │ Number │ Name │ AW │ Symbol │ Discovered │\n│ │ Int64 │ String │ Float64 │ String │ Int64? │\n├─────┼────────┼──────────┼─────────┼────────┼────────────┤\n│ 1 │ 3 │ Lithium │ 6.941 │ Li │ 1817 │\n│ 2 │ 5 │ Boron │ 10.811 │ B │ 1808 │\n│ 3 │ 7 │ Nitrogen │ 14.0067 │ N │ 1772 │"
},
{
"code": null,
"e": 24073,
"s": 24015,
"text": "The DataFrame is sorted based on the values of column AW."
}
] |
YAML - Quick Guide
|
YAML Ain't Markup Language is a data serialization language that matches users' expectations about data. It is designed to be human friendly and works well with other programming languages. It is useful for managing data and includes Unicode printable characters. This chapter gives you an introduction to YAML and an idea of its features.
Consider the text shown below −
Quick brown fox jumped over the lazy dog.
The YAML text for this will be represented as shown below −
>>> yaml.load('Quick brown fox jumped over the lazy dog.')
'Quick brown fox jumped over the lazy dog.'
Note that YAML takes the value in string format and represents the output as mentioned above.
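As a runnable sketch (assuming the third-party PyYAML package is installed, e.g. `pip install pyyaml`), the same load can be performed with `yaml.safe_load`, the recommended safe entry point:

```python
import yaml  # PyYAML, a third-party package (assumed installed)

# A bare YAML document containing a single plain scalar
document = "Quick brown fox jumped over the lazy dog."

# safe_load parses the character stream into a native Python value
value = yaml.safe_load(document)
print(value)        # Quick brown fox jumped over the lazy dog.
print(type(value))  # <class 'str'>
```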
Let us understand the formats in YAML with the help of the following examples −
Consider the following floating-point number "pi", which has a value of 3.1415926536. In YAML, it is represented as a floating-point number as shown below −
>>> yaml.load('3.1415926536')
3.1415926536
Suppose, multiple values are to be loaded in specific data structure as mentioned below −
eggs
ham
spam
French basil salmon terrine
When you load this into YAML, the values are taken in an array data structure which is a form of list. The output is as shown below −
>>> yaml.load('''
- eggs
- ham
- spam
- French basil salmon terrine
''')
['eggs', 'ham', 'spam', 'French basil salmon terrine']
YAML includes a markup language with important constructs to distinguish a data-oriented language from document markup. The design goals and features of YAML are given below −
Matches the native data structures of agile languages such as Perl, Python, PHP, Ruby and JavaScript
YAML data is portable between programming languages
Includes a consistent data model
Easily readable by humans
Supports one-direction processing
Ease of implementation and usage
Now that you have an idea about YAML and its features, let us learn its basics with syntax and other operations. Remember that YAML includes a human readable structured format.
When you are creating a file in YAML, you should remember the following basic rules −
YAML is case sensitive
The files should have .yaml as the extension
YAML does not allow the use of tabs while creating YAML files; spaces are allowed instead
The basic components of YAML are described below −
This block format uses hyphen+space to begin a new item in a specified list. Observe the example shown below −
--- # Favorite movies
- Casablanca
- North by Northwest
- The Man Who Wasn't There
Inline Format
Inline format is delimited with comma and space and the items are enclosed in JSON. Observe the example shown below −
--- # Shopping list
[milk, groceries, eggs, juice, fruits]
Folded Text
Folded text converts newlines to spaces and removes the leading whitespace. Observe the example shown below −
- >
  Folded text converts
  newlines to spaces
Mapping entries can be written either inline (in curly braces) or in block form, as shown below −
- {name: John Smith, age: 33}
- name: Mary Smith
  age: 27
The structure which follows all the basic conventions of YAML is shown below −
men: [John Smith, Bill Jones]
women:
- Mary Smith
- Susan Williams
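A small sketch of how this structure loads into native data (using the third-party PyYAML package, an assumption): the inline sequence and the block sequence both become ordinary Python lists under their keys.

```python
import yaml  # PyYAML (assumed installed)

document = """
men: [John Smith, Bill Jones]
women:
  - Mary Smith
  - Susan Williams
"""

data = yaml.safe_load(document)
print(data["men"])    # ['John Smith', 'Bill Jones']
print(data["women"])  # ['Mary Smith', 'Susan Williams']
```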
The synopsis of YAML basic elements is given here: comments in YAML begin with the (#) character.
Comments must be separated from other tokens by whitespace.
Indentation of whitespace is used to denote structure.
Tabs are not used as indentation in YAML files.
List members are denoted by a leading hyphen (-).
List members can also be enclosed in square brackets and separated by commas.
Associative arrays are represented using a colon ( : ) in the format of key value pairs. They are enclosed in curly braces {}.
Multiple documents within a single stream are separated by 3 hyphens (---).
Repeated nodes in a file are first denoted by an ampersand (&) and later referenced by an asterisk (*).
YAML always requires colons and commas used as list separators to be followed by a space when scalar values follow.
Nodes should be labelled with an exclamation mark (!) or double exclamation mark (!!), followed by a string which can be expanded into a URI or URL.
Indentation and separation are two main concepts when you are learning any programming language. This chapter talks about these two concepts related to YAML in detail.
YAML does not mandate any specific number of spaces for indentation. Further, the amount need not be consistent. The valid YAML indentation is shown below −
a:
b:
- c
- d
- e
f:
"ghi"
You should remember the following rules while working with indentation in YAML:
Flow blocks must be indented with at least some spaces beyond the surrounding current block level.
Flow content of YAML can span multiple lines. The beginning of flow content is marked with { or [.
Block list items include the same indentation as the surrounding block level because - is considered a part of the indentation.
Observe the following code that shows indentation with examples −
--- !clarkevans.com/^invoice
invoice: 34843
date : 2001-01-23
bill-to: &id001
given : Chris
family : Dumars
address:
lines: |
458 Walkman Dr.
Suite #292
city : Royal Oak
state : MI
postal : 48046
ship-to: *id001
product:
- sku : BL394D
quantity : 4
description : Basketball
price : 450.00
- sku : BL4438H
quantity : 1
description : Super Hoop
price : 2392.00
tax : 251.42
total: 4443.52
comments: >
Late afternoon is best.
Backup contact is Nancy
Billsmer @ 338-4338.
Strings can be written in the double-quoted style. If you escape the newline characters in such a string, they are removed and each is translated into a space value.
This example focuses on a listing of animals, represented as an array structure whose elements are of the string data type. Every new element is listed with a hyphen prefix, and nesting is expressed through indentation.
-
- Cat
- Dog
- Goldfish
-
- Python
- Lion
- Tiger
Another example to explain string representation in YAML is mentioned below.
errors:
messages:
already_confirmed: "was already confirmed, please try signing in"
confirmation_period_expired: "needs to be confirmed within %{period}, please request a new one"
expired: "has expired, please request a new one"
not_found: "not found"
not_locked: "was not locked"
not_saved:
one: "1 error prohibited this %{resource} from being saved:"
other: "%{count} errors prohibited this %{resource} from being saved:"
This example shows a set of error messages from which a user can fetch a value simply by specifying the corresponding keys. This pattern of YAML follows the structure of JSON, which can be understood by users who are new to YAML.
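As a hedged sketch with the third-party PyYAML package (an assumption), a trimmed version of the messages above can be loaded and a value fetched by walking the nested keys, exactly as one would with a JSON object:

```python
import yaml  # PyYAML (assumed installed)

document = """
errors:
  messages:
    not_found: "not found"
    expired: "has expired, please request a new one"
"""

data = yaml.safe_load(document)
# Fetch a message through the nested keys
print(data["errors"]["messages"]["not_found"])  # not found
```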
Now that you are comfortable with the syntax and basics of YAML, let us proceed further into its details. In this chapter, we will see how to use comments in YAML.
YAML supports single line comments. Its structure is explained below with the help of an example −
# this is single line comment.
YAML does not support multi line comments. If you want to provide comments for multiple lines, you can do so as shown in the example below −
# this
# is a multiple
# line comment
The features of comments in YAML are given below −
A commented block is skipped during execution.
Comments help to add a description for a specified code block.
Comments must not appear inside scalars.
YAML does not include any way to escape the hash symbol (#) within a multi-line string, so there is no way to separate a comment from the raw string value.
The comments within a collection are shown below −
key: #comment 1
- value line 1
#comment 2
- value line 2
#comment 3
- value line 3
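Because a commented block is skipped during parsing, comments never appear in the loaded data. A quick check with the third-party PyYAML package (an assumption) on a collection like the one above:

```python
import yaml  # PyYAML (assumed installed)

document = """
key:           # comment 1
  - value line 1
  # comment 2
  - value line 2
  # comment 3
  - value line 3
"""

data = yaml.safe_load(document)
# All three comments were discarded by the parser
print(data)  # {'key': ['value line 1', 'value line 2', 'value line 3']}
```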
The shortcut key combination for commenting YAML blocks is Ctrl+Q.
If you are using Sublime Text editor, the steps for commenting the block are mentioned below −
Select the block. Use “CTRL + /” on Linux and Windows and “CMD+/” for Mac operating system. Execute the block.
Note that the same steps are applicable if you are using the Visual Studio Code editor. It is always recommended to use the Sublime Text editor for creating YAML files, as it is supported by most operating systems and includes developer friendly shortcut keys.
YAML includes block collections which use indentation for scope. Here, each entry begins on a new line. Block sequences in collections indicate each entry with a dash and space (-). In YAML, block collection styles are not denoted by any specific indicator. A block collection in YAML can be distinguished from other scalar quantities by the key value pairs included in it.
Mappings are the representation of key value pairs, as included in the JSON structure. They are used often in multi-lingual support systems and in the creation of APIs for mobile applications. Mappings use key value pair representation with a colon and space (:).
Consider an example of sequence of scalars, for example a list of ball players as shown below −
- Mark Joseph
- James Stephen
- Ken Griffey
The following example shows mapping scalars to scalars −
hr: 87
avg: 0.298
rbi: 149
The following example shows mapping scalars to sequences −
European:
- Boston Red Sox
- Detroit Tigers
- New York Yankees
national:
- New York Mets
- Chicago Cubs
- Atlanta Braves
Collections can be used for sequence mappings which are shown below −
-
name: Mark Joseph
hr: 87
avg: 0.278
-
name: James Stephen
hr: 63
avg: 0.288
With collections, YAML includes flow styles using explicit indicators instead of indentation to denote scope. The flow sequence in collections is written as a comma separated list enclosed in square brackets. Such flow collections appear, for example, in PHP frameworks like Symfony.
[PHP, Perl, Python]
These collections are stored in documents. The separation of documents in YAML is denoted with three hyphens or dashes (---). The end of document is marked with three dots (...).
The document representation is referred as structure format which is mentioned below −
# Ranking of 1998 home runs
---
- Mark Joseph
- James Stephen
- Ken Griffey
# Team ranking
---
- Chicago Cubs
- St Louis Cardinals
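With the third-party PyYAML package (an assumption), a multi-document stream like the one above is read with `safe_load_all`, which yields one native value per document:

```python
import yaml  # PyYAML (assumed installed)

stream = """
# Ranking of 1998 home runs
---
- Mark Joseph
- James Stephen
- Ken Griffey

# Team ranking
---
- Chicago Cubs
- St Louis Cardinals
"""

# safe_load_all yields one native value per document in the stream
documents = list(yaml.safe_load_all(stream))
print(len(documents))   # 2
print(documents[1][0])  # Chicago Cubs
```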
A question mark followed by a space indicates a complex mapping key in the structure. Within a block collection, a user can include structures with a dash, colon and question mark. The following example shows the mapping between sequences −
- 2001-07-23
? [ New York Yankees,Atlanta Braves ]
: [ 2001-07-02, 2001-08-12, 2001-08-14]
Scalars in YAML can be written in block format using the literal style, denoted by (|), which preserves line breaks. Scalars can also be written in the folded style (>), where each line break is folded to a space unless it ends an empty or a more-indented line.
Newlines preserved in literals are shown below −
ASCII Art
--- |
\//||\/||
// || ||__
The folded newlines are preserved for more indented lines and blank lines as shown below −
>
Sammy Sosa completed another
fine season with great stats.
63 Home Runs
0.288 Batting Average
What a year!
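The difference between the two styles can be verified with the third-party PyYAML package (an assumption): the literal style (|) preserves newlines, while the folded style (>) joins lines with spaces.

```python
import yaml  # PyYAML (assumed installed)

# Literal style keeps the line break; folded style turns it into a space
literal = yaml.safe_load("text: |\n  line one\n  line two\n")
folded = yaml.safe_load("text: >\n  line one\n  line two\n")

print(repr(literal["text"]))  # 'line one\nline two\n'
print(repr(folded["text"]))   # 'line one line two\n'
```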
YAML flow scalars include plain styles and quoted styles. The double quoted style includes various escape sequences. Flow scalars can include multiple lines; line breaks are always folded in this structure.
plain:
This unquoted scalar
spans many lines.
quoted: "So does this
quoted scalar.\n"
In YAML, untagged nodes are resolved to a type specific to the application. Tag specification examples generally use the seq, map and str types from the YAML tag repository. The tags are represented in the examples mentioned below −
These tags include integer values in them. They are also called numeric tags.
canonical: 12345
decimal: +12,345
sexagesimal: 3:25:45
octal: 014
hexadecimal: 0xC
These tags include decimal and exponential values. They are also called exponential tags.
canonical: 1.23015e+3
exponential: 12.3015e+02
sexagesimal: 20:30.15
fixed: 1,230.15
negative infinity: -.inf
not a number: .NaN
These tags include a variety of integer, floating and string values embedded in them. Hence they are called miscellaneous tags.
null: ~
true: y
false: n
string: '12345'
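Tag resolution decides the native type of each untagged scalar. A sketch with the third-party PyYAML package (an assumption) shows a few such values resolving to different Python types:

```python
import yaml  # PyYAML (assumed installed)

document = """
canonical: 12345
float_value: 1.23015e+3
nothing: ~
quoted: '12345'
"""

data = yaml.safe_load(document)
# Untagged scalars are resolved to native types by their format
print(type(data["canonical"]))    # <class 'int'>
print(type(data["float_value"]))  # <class 'float'>
print(data["nothing"])            # None
print(type(data["quoted"]))       # <class 'str'>
```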
The following full-length example specifies the construct of YAML which includes symbols and various representations which will be helpful while converting or processing them in JSON format. These attributes are also called as key names in JSON documents. These notations are created for security purposes.
The YAML format below represents various attributes of defaults, adapter, and host, along with various other attributes. YAML also keeps a log of every file generated, which maintains a track of the error messages produced. On converting the specified YAML file to JSON format, we get the desired output as shown after it −
defaults: &defaults
adapter: postgres
host: localhost
development:
database: myapp_development
<<: *defaults
test:
database: myapp_test
<<: *defaults
Let’s convert the YAML to JSON format and check the output.
{
"defaults": {
"adapter": "postgres",
"host": "localhost"
},
"development": {
"database": "myapp_development",
"adapter": "postgres",
"host": "localhost"
},
"test": {
"database": "myapp_test",
"adapter": "postgres",
"host": "localhost"
}
}
The defaults mapping, anchored with &defaults and merged in with the “<<: *defaults” key, is included as and when required, with no need to write the same code snippet repeatedly.
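This merge behaviour can be checked with the third-party PyYAML package (an assumption): `safe_load` understands the anchor (&), the alias (*) and the merge key (<<), so each environment receives a copy of the defaults.

```python
import yaml  # PyYAML (assumed installed)

document = """
defaults: &defaults
  adapter: postgres
  host: localhost

development:
  database: myapp_development
  <<: *defaults

test:
  database: myapp_test
  <<: *defaults
"""

data = yaml.safe_load(document)
# The merge key (<<) copies every entry of the anchored mapping
print(data["development"]["adapter"])  # postgres
print(data["test"]["host"])            # localhost
```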
YAML follows a standard procedure for process flow. The native data structure in YAML includes simple representations such as nodes. It is also called a representation node graph.
It includes mapping, sequence and scalar quantities, which are serialized to create a serialization tree. With serialization, the objects are converted into a stream of bytes.
The serialization event tree helps in creating presentation of character streams as represented in the following diagram.
The reverse procedure parses the stream of bytes into serialized event tree. Later, the nodes are converted into node graph. These values are later converted in YAML native data structure. The figure below explains this −
The information in YAML is used in two ways: machine processing and human consumption. The processor in YAML is used as a tool for the procedure of converting information between complementary views in the diagram given above. This chapter describes the information structures a YAML processor must provide within a given application.
YAML includes a serialization procedure for representing data objects in serial format. The processing of YAML information includes four stages: representation, serialization, presentation and parsing. Let us discuss each of them in detail.
YAML represents the data structure using three kinds of nodes: sequence, mapping and scalar.
Sequence refers to an ordered number of entries, while mapping refers to an unordered association of key value pairs. A sequence corresponds to a Perl or Python array list.
The code shown below is an example of sequence representation −
product:
- sku : BL394D
quantity : 4
description : Football
price : 450.00
- sku : BL4438H
quantity : 1
description : Super Hoop
price : 2392.00
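Loaded with the third-party PyYAML package (an assumption), the sequence node above becomes a Python list whose entries are dictionaries:

```python
import yaml  # PyYAML (assumed installed)

document = """
product:
  - sku: BL394D
    quantity: 4
    description: Football
    price: 450.00
  - sku: BL4438H
    quantity: 1
    description: Super Hoop
    price: 2392.00
"""

data = yaml.safe_load(document)
# The sequence node becomes a list of dictionaries
print(len(data["product"]))         # 2
print(data["product"][0]["sku"])    # BL394D
print(data["product"][1]["price"])  # 2392.0
```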
Mapping, on the other hand, represents a dictionary data structure or hash table. An example for the same is mentioned below −
batchLimit: 1000
threadCountLimit: 2
key: value
keyMapping: <What goes here?>
Scalars represent standard values of strings, integers, dates and atomic data types. Note that YAML also includes nodes which specify the data type structure. For more information on scalars, please refer to chapter 6 of this tutorial.
The serialization process is required in YAML to impose a human friendly key order and anchor names. The result of serialization is a YAML serialization tree. It can be traversed to produce a series of event calls of YAML data.
An example for serialization is given below −
consumer:
class: 'AppBundle\Entity\consumer'
attributes:
filters: ['customer.search', 'customer.order', 'customer.boolean']
collectionOperations:
get:
method: 'GET'
normalization_context:
groups: ['customer_list']
itemOperations:
get:
method: 'GET'
normalization_context:
groups: ['customer_get']
The final output of YAML serialization is called presentation. It represents a character stream in a human-friendly manner. The YAML processor includes various presentation details for creating the stream, handling indentation and formatting content. This complete process is guided by the preferences of the user.
An example of the presentation process is the JSON value produced from the serialized data above. Observe the code given below for a better understanding −
{
"consumer": {
"class": "AppBundle\\Entity\\consumer",
"attributes": {
"filters": [
"customer.search",
"customer.order",
"customer.boolean"
]
},
"collectionOperations": {
"get": {
"method": "GET",
"normalization_context": {
"groups": [
"customer_list"
]
}
}
},
"itemOperations": {
"get": {
"method": "GET",
"normalization_context": {
"groups": [
"customer_get"
]
}
}
}
}
}
Parsing is the inverse process of presentation: it takes a stream of characters and produces a series of events. It discards the details introduced during the presentation process and yields the serialization events. Parsing can fail due to ill-formed input; it is basically the procedure that checks whether a YAML document is well-formed or not.
Consider a YAML example which is mentioned below −
---
environment: production
classes:
nfs::server:
exports:
- /srv/share1
- /srv/share3
parameters:
   parameter1
The three hyphens represent the start of the document, with various attributes defined after them.
YAML lint is the online parser of YAML and helps in parsing the YAML structure to check whether it is valid or not. The official link for YAML lint is mentioned below: http://www.yamllint.com/
YAML Lint reports whether the given document is valid and displays the parsed output.
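The same well-formedness check can be sketched programmatically. This assumes the third-party PyYAML library is installed; it is not part of the original tutorial:

```python
# Hedged sketch: checking whether a YAML document is well-formed with PyYAML.
import yaml

valid = "environment: production"
invalid = "environment: [production"   # unterminated flow sequence

def is_well_formed(text):
    """Return True if the text parses as YAML, False otherwise."""
    try:
        yaml.safe_load(text)
        return True
    except yaml.YAMLError:
        return False

print(is_well_formed(valid))    # True
print(is_well_formed(invalid))  # False
```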
This chapter explains the details of the procedures and processes that we discussed in the last chapter. Information models in YAML specify the features of the serialization and presentation procedures in a systematic format using a specific diagram.
For an information model, it is important that the application information be portable between programming environments.
The diagram shown above represents a normal information model in graph format. In YAML, the representation of native data is a rooted, connected, directed graph of tagged nodes: a set of nodes connected by directed edges. As mentioned in the information model, YAML supports three kinds of nodes namely −
Sequences
Scalars
Mappings
The basic definitions of these representation nodes were discussed in the last chapter. In this chapter, we will focus on a schematic view of these terms. The following sequence diagram represents the workflow of legends with various types of tags and mapping nodes.
There are three types of nodes: sequence node, scalar node and mapping node.
Sequence node follows a sequential architecture and includes an ordered series of zero or more nodes. A YAML sequence may contain the same node repeatedly or a single node.
The content of a scalar node is a series of zero or more Unicode characters. In general, a scalar node holds a scalar quantity.
Mapping node includes the key value pair representation. The content of mapping node includes a combination of key-value pair with a mandatory condition that key name should be maintained unique. Sequences and mappings collectively form a collection.
Note that as represented in the diagram shown above, scalars, sequences and mappings are represented in a systematic format.
Various types of characters are used for various functionalities. This chapter talks in detail about syntax used in YAML and focuses on character manipulation.
Indicator characters include a special semantics used to describe the content of YAML document. The following table shows this in detail.
-  It denotes a block sequence entry
?  It denotes a mapping key
:  It denotes a mapping value
,  It denotes a flow collection entry
[  It starts a flow sequence
]  It ends a flow sequence
{  It starts a flow mapping
}  It ends a flow mapping
#  It denotes a comment
&  It denotes a node's anchor property
*  It denotes an alias node
!  It denotes a node's tag
|  It denotes a literal block scalar
>  It denotes a folded block scalar
'  A single quote surrounds a single-quoted flow scalar
"  A double quote surrounds a double-quoted flow scalar
%  It denotes a directive
The following example shows the characters used in syntax −
%YAML 1.1
---
!!map {
? !!str "sequence"
: !!seq [
!!str "one", !!str "two"
],
? !!str "mapping"
: !!map {
? !!str "sky" : !!str "blue",
? !!str "sea" : !!str "green",
}
}
# This represents
# only comments.
---
!!map {
? !!str "anchored"
: !local &A1 "value",
? !!str "alias"
: *A1,
}
!!str "text"
In this chapter you will learn about the following aspects of syntax primitives in YAML −
Production parameters
Indentation Spaces
Separation Spaces
Ignored Line Prefix
Line folding
Let us understand each aspect in detail.
Production parameters include a set of parameters and the range of allowed values which are used on a specific production. The following list of production parameters are used in YAML −
Indentation − It is denoted by n or m. The character stream depends on the indentation level of the blocks included in it. Many productions have parameterized this feature.
Context − It is denoted by c. YAML supports two groups of contexts: block styles and flow styles.
Style − It is denoted by s. Scalar content may be presented in one of five styles: plain, double-quoted flow, single-quoted flow, literal block and folded block.
Chomping − It is denoted by t. Block scalars offer three mechanisms for trimming the trailing newlines of a block: strip, clip and keep. Chomping is used in block style representation and controls how the final line break and trailing empty lines of a string are interpreted, with the help of indicators: the (-) indicator strips the final newline, while the (+) indicator keeps trailing newlines.
An example for chomping process is shown below −
strip: |-
text↓
clip: |
text↓
keep: |+
text↓
After parsing the specified YAML example, strip yields "text", clip yields "text\n", and keep yields "text\n" (keep would also retain any trailing empty lines).
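The three chomping indicators can be demonstrated directly. This sketch assumes the third-party PyYAML library is installed:

```python
# Hedged sketch of the three chomping indicators with PyYAML (assumed installed).
import yaml

document = """\
strip: |-
  text
clip: |
  text
keep: |+
  text
"""

data = yaml.safe_load(document)
print(repr(data["strip"]))  # 'text'   (final break removed)
print(repr(data["clip"]))   # 'text\n' (single final break kept)
print(repr(data["keep"]))   # 'text\n' (all trailing breaks kept; none extra here)
```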
In a YAML character stream, indentation is defined as a line break character followed by zero or more space characters. The most important point to keep in mind is that indentation must not contain any tab characters. The characters in indentation should never be considered part of the node's content information. Observe the following code for better understanding −
%YAML 1.1
---
!!map {
? !!str "Not indented"
: !!map {
? !!str "By one space"
: !!str "By four\n spaces\n",
? !!str "Flow style"
: !!seq [
!!str "By two",
!!str "Still by two",
!!str "Again by two",
]
}
}
The output that you can see after indentation is as follows −
{
"Not indented": {
"By one space": "By four\n spaces\n",
"Flow style": [
"By two",
"Still by two",
"Again by two"
]
}
}
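The rule that indentation must not contain tabs can be observed in practice. This sketch assumes the third-party PyYAML library is installed:

```python
# Hedged sketch: PyYAML (assumed installed) rejects tab characters as indentation.
import yaml

spaces = "key:\n  nested: value"     # indented with spaces - accepted
tabs = "key:\n\tnested: value"       # indented with a tab - rejected

print(yaml.safe_load(spaces))        # {'key': {'nested': 'value'}}

try:
    yaml.safe_load(tabs)
    tab_ok = True
except yaml.YAMLError:
    tab_ok = False
print(tab_ok)                        # False
```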
YAML uses space characters for separation between tokens. The most important note is that separation in YAML should not contain tab characters.
The following line of code shows the usage of separation spaces −
{ · first: · Sammy, · last: · Sosa · }
{
"\u00b7 last": "\u00b7 Sosa \u00b7",
"\u00b7 first": "\u00b7 Sammy"
}
The ignored line prefix is an empty prefix that always includes indentation and, depending on the scalar type, may also include leading whitespace. Plain scalars should not contain any tab characters. On the other hand, quoted scalars may contain tab characters. Block scalars completely depend on indentation.
The following example shows the working of ignored line prefix in a systematic manner −
%YAML 1.1
---
!!map {
? !!str "plain"
: !!str "text lines",
? !!str "quoted"
: !!str "text lines",
? !!str "block"
: !!str "text·®lines\n"
}
The output achieved for the block streams is as follows −
{
"plain": "text lines",
"quoted": "text lines",
"block": "text\u00b7\u00aelines\n"
}
Line folding allows breaking long lines for readability: a greater number of short lines is easier to read. Line folding is achieved by noting the original semantics of the long line. The following example demonstrates line folding −
%YAML 1.1
--- !!str
"specific\L\
trimmed\n\n\n\
as space"
You can see the output for line folding in JSON format as follows −
"specific\u2028trimmed\n\n\nas space"
In YAML, you come across various character streams as follows −
Directives
Document Boundary Markers
Documents
Complete Stream
In this chapter, we will discuss them in detail.
Directives are basic instructions used by the YAML processor. Directives are presentation details, like comments, which are not reflected in the serialization tree. In YAML, there is no way to define private directives. This section discusses various types of directives with relevant examples −
Reserved directives begin with the percent (%) indicator followed by a directive name, as shown in the example below.
%YAML 1.1
--- !!str
"foo"
YAML directives are default directives. The %YAML directive specifies the version of the YAML specification to which the document conforms.
%YAML 1.1
---
!!str "foo"
YAML uses these markers to allow more than one document to be contained in one stream. These markers are specially used to convey the structure of a YAML document. Note that a line beginning with "---" is used to start a new document.
The following code explains about this with examples −
%YAML 1.1
---
!!str "foo"
%YAML 1.1
---
!!str "bar"
%YAML 1.1
---
!!str "baz"
YAML document is considered as a single native data structure presented as a single root node. The presentation details in YAML document such as directives, comments, indentation and styles are not considered as contents included in them.
There are two types of documents used in YAML. They are explained in this section −
It begins with the document start marker followed by the presentation of the root node. The example of YAML explicit declaration is given below −
---
some: yaml
...
It includes explicit start and end markers, which are "---" and "..." in the given example. On converting the specified YAML into JSON format, we get the output as shown below −
{
"some": "yaml"
}
These documents do not begin with a document start marker. Observe the code given below −
fruits:
- Apple
- Orange
- Pineapple
- Mango
Converting these values in JSON format we get the output as a simple JSON object as given below −
{
"fruits": [
"Apple",
"Orange",
"Pineapple",
"Mango"
]
}
YAML includes a sequence of bytes called a character stream. The complete stream may begin with a prefix containing a byte order mark denoting the character encoding, followed by optional comments and directives.
An example of complete stream (character stream) is shown below −
%YAML 1.1
---
!!str "Text content\n"
Each presentation node includes two major characteristics called anchor and tag. Node properties may be specified before the node content, or omitted from the character stream.
The basic example of node representation is as follows −
%YAML 1.1
---
!!map {
? &A1 !!str "foo"
: !!str "bar",
? &A2 !!str "baz"
: *A1
}
The anchor property represents a node for future reference. An anchor is denoted in the character stream with the ampersand (&) indicator. The YAML processor need not preserve the anchor name in the composed representation. The following code explains this −
%YAML 1.1
---
!!map {
? !!str "First occurence"
: &A !!str "Value",
? !!str "Second occurence"
: *A
}
The output of YAML generated with anchor nodes is shown below −
---
!!map {
? !!str "First occurence"
: !!str "Value",
? !!str "Second occurence"
: !!str "Value",
}
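The same anchor-and-alias behavior can be observed in a loaded document. This sketch assumes the third-party PyYAML library is installed:

```python
# Hedged sketch: anchors (&) and aliases (*) with PyYAML (assumed installed).
# An alias refers back to the node marked by the anchor of the same name.
import yaml

data = yaml.safe_load("""\
First occurrence: &A Value
Second occurrence: *A
""")
print(data)   # {'First occurrence': 'Value', 'Second occurrence': 'Value'}

# For collections, the alias resolves to the very same object, not a copy.
shared = yaml.safe_load("a: &X [1, 2]\nb: *X")
print(shared["a"] is shared["b"])   # True
```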
The tag property represents the type of native data structure which defines a node completely. A tag is represented with the (!) indicator. Tags are considered an inherent part of the representation graph. The following example explains node tags in detail −
%YAML 1.1
---
!!map {
? !<tag:yaml.org,2002:str> "foo"
: !<!bar> "baz"
}
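An explicit tag overrides the type a parser would otherwise infer. This sketch assumes the third-party PyYAML library is installed:

```python
# Hedged sketch: explicit tags override implicit type resolution (PyYAML assumed installed).
import yaml

data = yaml.safe_load("plain: 123\ntagged: !!str 123")
print(type(data["plain"]))    # <class 'int'>  - resolved implicitly
print(type(data["tagged"]))   # <class 'str'>  - forced by the !!str tag
```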
Node content can be represented in a flow content or block format. Block content extends to the end of line and uses indentation to denote structure. Each collection kind can be represented in a specific single flow collection style or can be considered as a single block. The following code explains this in detail −
%YAML 1.1
---
!!map {
? !!str "foo"
: !!str "bar baz"
}
%YAML 1.1
---
!!str "foo bar"
%YAML 1.1
---
!!str "foo bar"
%YAML 1.1
---
!!str "foo bar\n"
In this chapter, we will focus on various scalar types which are used for representing the content. In YAML, comments may either precede or follow scalar content. It is important to note that comments should not be included within scalar content.
Note that all flow scalar styles can span multiple lines, except when used as implicit keys.
The representation of scalars is given below −
%YAML 1.1
---
!!map {
? !!str "simple key"
: !!map {
? !!str "also simple"
: !!str "value",
? !!str "not a simple key"
: !!str "any value"
}
}
The generated output in JSON format is shown below −
{
"simple key": {
"not a simple key": "any value",
"also simple": "value"
}
}
All characters in this example are considered as content, including the inner space characters.
%YAML 1.1
---
!!map {
? !!str "---"
: !!str "foo",
? !!str "...",
: !!str "bar"
}
%YAML 1.1
---
!!seq [
!!str "---",
!!str "...",
!!map {
? !!str "---"
: !!str "..."
}
]
The plain line breaks are represented with the example given below −
%YAML 1.1
---
!!str "as space \
trimmed\n\
specific\L\n\
none"
The corresponding JSON output for the same is mentioned below −
"as space trimmed\nspecific\u2028\nnone"
Flow styles in YAML can be thought of as a natural extension of JSON, adding content-line folding for better readability and anchors and aliases for reusing object instances. In this chapter, we will focus on the flow representation of the following concepts −
Alias Nodes
Empty Nodes
Flow Scalar styles
Flow collection styles
Flow nodes
The example of alias nodes is shown below −
%YAML 1.2
---
!!map {
? !!str "First occurrence"
: &A !!str "Foo",
? !!str "Override anchor"
: &B !!str "Bar",
? !!str "Second occurrence"
: *A,
? !!str "Reuse anchor"
: *B,
}
The JSON output of the code given above is given below −
{
"First occurrence": "Foo",
"Second occurrence": "Foo",
"Override anchor": "Bar",
"Reuse anchor": "Bar"
}
Nodes with empty content are considered as empty nodes. The following example shows this −
%YAML 1.2
---
!!map {
? !!str "foo" : !!str "",
? !!str "" : !!str "bar",
}
The output of empty nodes in JSON is represented as below −
{
"": "bar",
"foo": ""
}
Flow scalar styles include double-quoted, single-quoted and plain types. The basic example for the same is given below −
%YAML 1.2
---
!!map {
? !!str "implicit block key"
: !!seq [
!!map {
? !!str "implicit flow key"
: !!str "value",
}
]
}
The output in JSON format for the example given above is shown below −
{
"implicit block key": [
{
"implicit flow key": "value"
}
]
}
A flow collection in YAML can be nested within a block collection or within another flow collection. Flow collection entries are terminated with the comma (,) indicator. The following example explains the flow collection block in detail −
%YAML 1.2
---
!!seq [
!!seq [
!!str "one",
!!str "two",
],
!!seq [
!!str "three",
!!str "four",
],
]
The output for flow collection in JSON is shown below −
[
[
"one",
"two"
],
[
"three",
"four"
]
]
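The nested flow collection above can also be written on a single line and loaded directly. This sketch assumes the third-party PyYAML library is installed:

```python
# Hedged sketch: a nested flow sequence loaded with PyYAML (assumed installed).
import yaml

data = yaml.safe_load("[ [one, two], [three, four] ]")
print(data)   # [['one', 'two'], ['three', 'four']]
```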
Flow styles, like JSON, include explicit start and end indicators. The only flow style without any such indicators is the plain scalar.
%YAML 1.2
---
!!seq [
!!seq [ !!str "a", !!str "b" ],
!!map { ? !!str "a" : !!str "b" },
!!str "a",
!!str "b",
!!str "c",]
The output for the code shown above in JSON format is given below −
[
[
"a",
"b"
],
{
"a": "b"
},
"a",
"b",
"c"
]
YAML includes two block scalar styles: literal and folded. Block scalars are controlled with a few indicators in a header preceding the content itself. An example of block scalar headers is given below −
%YAML 1.2
---
!!seq [
!!str "literal\n",
!!str "·folded\n",
!!str "keep\n\n",
!!str "·strip",
]
The output in JSON format with a default behavior is given below −
[
"literal\n",
"\u00b7folded\n",
"keep\n\n",
"\u00b7strip"
]
There are two block styles, literal and folded, and three chomping methods: strip, clip and keep. These behaviors are defined with the help of a block chomping scenario. An example of a block chomping scenario is given below −
%YAML 1.2
---
!!map {
? !!str "strip"
: !!str "# text",
? !!str "clip"
: !!str "# text\n",
? !!str "keep"
: !!str "# text\n",
}
You can see the output generated with three formats in JSON as given below −
{
"strip": "# text",
"clip": "# text\n",
"keep": "# text\n"
}
Chomping in YAML controls the final breaks and trailing empty lines which are interpreted in various forms.
Stripping excludes the final line break and trailing empty lines from the scalar content. It is specified by the chomping indicator "-".
Clipping is the default behavior when no explicit chomping indicator is specified: the final line break is preserved in the scalar's content, but trailing empty lines are excluded. In the example above, the clipped scalar terminates with a newline "\n" character.
Keeping is specified by the "+" chomping indicator: the final line break and any trailing empty lines are retained, and the additional lines are not subject to folding.
To understand sequence styles, it is important to understand collections; the two concepts work in parallel. A collection in YAML is represented with a proper sequence style. Collections in YAML are indexed by sequential integers starting with zero, as in arrays. The focus of sequence styles begins with collections.
Let us consider the number of planets in universe as a sequence which can be created as a collection. The following code shows how to represent the sequence styles of planets in universe −
# Ordered sequence of nodes in YAML STRUCTURE
Block style: !!seq
- Mercury # Rotates - no light/dark sides.
- Venus # Deadliest. Aptly named.
- Earth # Mostly dirt.
- Mars # Seems empty.
- Jupiter # The king.
- Saturn # Pretty.
- Uranus # Where the sun hardly shines.
- Neptune # Boring. No rings.
- Pluto # You call this a planet?
Flow style: !!seq [ Mercury, Venus, Earth, Mars, # Rocks
Jupiter, Saturn, Uranus, Neptune, # Gas
Pluto ] # Overrated
Then, you can see the following output for ordered sequence in JSON format −
{
"Flow style": [
"Mercury",
"Venus",
"Earth",
"Mars",
"Jupiter",
"Saturn",
"Uranus",
"Neptune",
"Pluto"
],
"Block style": [
"Mercury",
"Venus",
"Earth",
"Mars",
"Jupiter",
"Saturn",
"Uranus",
"Neptune",
"Pluto"
]
}
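Both styles load to the same native value, and the trailing comments are discarded during parsing, which is why they do not appear in the JSON output above. This sketch assumes the third-party PyYAML library is installed:

```python
# Hedged sketch: block style and flow style load to identical values, and
# comments are discarded during parsing (PyYAML assumed installed).
import yaml

block = """\
- Mercury  # Rotates - no light/dark sides.
- Venus    # Deadliest. Aptly named.
- Earth    # Mostly dirt.
"""
flow = "[ Mercury, Venus, Earth ]"

print(yaml.safe_load(block) == yaml.safe_load(flow))   # True
```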
Flow mappings in YAML represent unordered collections of key-value pairs; they are also called mapping nodes. Note that keys must be kept unique: duplicate keys in a flow mapping structure generate an error. The key order is generated in the serialization tree.
An example of flow mapping structure is shown below −
%YAML 1.1
paper:
uuid: 8a8cbf60-e067-11e3-8b68-0800200c9a66
name: On formally undecidable propositions of Principia Mathematica and related systems I.
author: Kurt Gödel.
tags:
- tag:
uuid: 98fb0d90-e067-11e3-8b68-0800200c9a66
name: Mathematics
- tag:
uuid: 3f25f680-e068-11e3-8b68-0800200c9a66
name: Logic
The output of mapped sequence (unordered list) in JSON format is as shown below −
{
"paper": {
"uuid": "8a8cbf60-e067-11e3-8b68-0800200c9a66",
"name": "On formally undecidable propositions of Principia Mathematica and related systems I.",
"author": "Kurt Gödel."
},
"tags": [
{
"tag": {
"uuid": "98fb0d90-e067-11e3-8b68-0800200c9a66",
"name": "Mathematics"
}
},
{
"tag": {
"uuid": "3f25f680-e068-11e3-8b68-0800200c9a66",
"name": "Logic"
}
}
]
}
Observe in the output shown above that the key names are kept unique in the YAML mapping structure.
The block sequences of YAML represent a series of nodes. Each item is denoted by a leading "-" indicator, which must be separated from the node by white space.
The basic representation of block sequence is given below −
block sequence:
··- one↓
- two : three↓
Observe the following example for a better understanding of how block collections combine with anchors −
port: &ports
adapter: postgres
host: localhost
development:
database: myapp_development
<<: *ports
The output of block sequences in JSON format is given below −
{
"port": {
"adapter": "postgres",
"host": "localhost"
},
"development": {
"database": "myapp_development",
"adapter": "postgres",
"host": "localhost"
}
}
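The "<<: *ports" line in the example above is a merge key: it copies the entries of the anchored mapping into the current mapping, which is how "adapter" and "host" appear under "development" in the JSON output. The merge key is a YAML 1.1 feature; this sketch assumes the third-party PyYAML library is installed and that its loader resolves merge keys:

```python
# Hedged sketch: the merge key "<<: *ports" copies the anchored mapping's
# entries into the current mapping (PyYAML assumed installed).
import yaml

document = """\
port: &ports
  adapter: postgres
  host: localhost
development:
  database: myapp_development
  <<: *ports
"""

data = yaml.safe_load(document)
print(data["development"]["adapter"])   # postgres
```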
A YAML schema is defined as a combination of set of tags and includes a mechanism for resolving non-specific tags. The failsafe schema in YAML is created in such a manner that it can be used with any YAML document. It is also considered as a recommended schema for a generic YAML document.
The failsafe schema defines generic mapping, generic sequence and generic string tags; the two discussed here are Generic Mapping and Generic Sequence.
It represents an associative container. Here, each key is unique in the association and mapped to exactly one value. YAML includes no restrictions for key definitions.
An example for representing generic mapping is given below −
Clark : Evans
Ingy : döt Net
Oren : Ben-Kiki
Flow style: !!map { Clark: Evans, Ingy: döt Net, Oren: Ben-Kiki }
The output of generic mapping structure in JSON format is shown below −
{
"Oren": "Ben-Kiki",
"Ingy": "d\u00f6t Net",
"Clark": "Evans",
"Flow style": {
"Oren": "Ben-Kiki",
"Ingy": "d\u00f6t Net",
"Clark": "Evans"
}
}
It represents a type of sequence: a collection indexed by sequential integers starting with zero. It is represented with the !!seq tag.
Block style: !!seq
- Clark Evans
- Ingy döt Net
- Oren Ben-Kiki
Flow style: !!seq [ Clark Evans, Ingy döt Net, Oren Ben-Kiki ]
The output for this generic sequence of the failsafe schema is shown below −
[
   "Clark Evans",
   "Ingy d\u00f6t Net",
   "Oren Ben-Kiki"
]
The JSON schema in YAML is considered the common denominator of most modern computer languages. It allows parsing of JSON files. It is strongly recommended in YAML that other schemas be based on the JSON schema. The primary reason for this is that it includes key-value combinations which are user friendly; messages can be encoded as keys and used as and when needed.
In the JSON schema, the null scalar denotes the lack of a value. A mapping entry in the JSON schema is represented as a key-value pair, where null is treated as valid.
A null JSON schema is represented as shown below −
!!null null: value for null key
key with null value: !!null null
The output of JSON representation is mentioned below −
{
"null": "value for null key",
"key with null value": null
}
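Several spellings resolve to null when loaded. This sketch assumes the third-party PyYAML library is installed:

```python
# Hedged sketch: spellings that resolve to null (PyYAML assumed installed).
import yaml

# "null", "~", and an empty value all become Python's None.
data = yaml.safe_load("a: null\nb: ~\nc:")
print(data)   # {'a': None, 'b': None, 'c': None}
```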
The following example represents the Boolean JSON schema −
YAML is a superset of JSON: !!bool true
Pluto is a planet: !!bool false
The following is the output for the same in JSON format −
{
"YAML is a superset of JSON": true,
"Pluto is a planet": false
}
The following example represents the integer JSON schema −
negative: !!int -12
zero: !!int 0
positive: !!int 34
The output in JSON format is shown below −
{
"positive": 34,
"zero": 0,
"negative": -12
}
The tags in the JSON schema are represented with the following example −
A null: null
Booleans: [ true, false ]
Integers: [ 0, -0, 3, -19 ]
Floats: [ 0., -0.0, 12e03, -2E+05 ]
Invalid: [ True, Null, 0o7, 0x3A, +12.3 ]
You can find the JSON Output as shown below −
{
"Integers": [
0,
0,
3,
-19
],
"Booleans": [
true,
false
],
"A null": null,
"Invalid": [
true,
null,
"0o7",
58,
12.300000000000001
],
"Floats": [
0.0,
-0.0,
"12e03",
"-2E+05"
]
}
|
[
{
"code": null,
"e": 2404,
"s": 2048,
"text": "YAML Ain't Markup Language is a data serialization language that matches user’s expectations about data. It designed to be human friendly and works perfectly with other programming languages. It is useful to manage data and includes Unicode printable characters. This chapter will give you an introduction to YAML and gives you an idea about its features."
},
{
"code": null,
"e": 2436,
"s": 2404,
"text": "Consider the text shown below −"
},
{
"code": null,
"e": 2479,
"s": 2436,
"text": "Quick brown fox jumped over the lazy dog.\n"
},
{
"code": null,
"e": 2539,
"s": 2479,
"text": "The YAML text for this will be represented as shown below −"
},
{
"code": null,
"e": 2638,
"s": 2539,
"text": "yaml.load(Quick brown fox jumped over the lazy dog.)\n>>'Quick brown fox jumped over the lazy dog.'"
},
{
"code": null,
"e": 2732,
"s": 2638,
"text": "Note that YAML takes the value in string format and represents the output as mentioned above."
},
{
"code": null,
"e": 2812,
"s": 2732,
"text": "Let us understand the formats in YAML with the help of the following examples −"
},
{
"code": null,
"e": 2954,
"s": 2812,
"text": "Consider the following point number of “pi”, which has a value of 3.1415926. In YAML, it is represented as a floating number as shown below −"
},
{
"code": null,
"e": 2997,
"s": 2954,
"text": ">>> yaml.load('3.1415926536')\n3.1415926536"
},
{
"code": null,
"e": 3087,
"s": 2997,
"text": "Suppose, multiple values are to be loaded in specific data structure as mentioned below −"
},
{
"code": null,
"e": 3130,
"s": 3087,
"text": "eggs\nham\nspam\nFrench basil salmon terrine\n"
},
{
"code": null,
"e": 3264,
"s": 3130,
"text": "When you load this into YAML, the values are taken in an array data structure which is a form of list. The output is as shown below −"
},
{
"code": null,
"e": 3408,
"s": 3264,
"text": ">>> yaml.load('''\n - eggs\n - ham\n - spam\n - French basil salmon terrine\n ''')\n['eggs', 'ham', 'spam', 'French basil salmon terrine']\n"
},
{
"code": null,
"e": 3586,
"s": 3408,
"text": "YAML includes a markup language with important construct, to distinguish data-oriented language with the document markup. The design goals and features of YAML are given below −"
},
{
"code": null,
"e": 3703,
"s": 3586,
"text": "Matches native data structures of agile methodology and its languages such as Perl, Python, PHP, Ruby and JavaScript"
},
{
"code": null,
"e": 3820,
"s": 3703,
"text": "Matches native data structures of agile methodology and its languages such as Perl, Python, PHP, Ruby and JavaScript"
},
{
"code": null,
"e": 3872,
"s": 3820,
"text": "YAML data is portable between programming languages"
},
{
"code": null,
"e": 3924,
"s": 3872,
"text": "YAML data is portable between programming languages"
},
{
"code": null,
"e": 3960,
"s": 3924,
"text": "Includes data consistent data model"
},
{
"code": null,
"e": 3996,
"s": 3960,
"text": "Includes data consistent data model"
},
{
"code": null,
"e": 4022,
"s": 3996,
"text": "Easily readable by humans"
},
{
"code": null,
"e": 4048,
"s": 4022,
"text": "Easily readable by humans"
},
{
"code": null,
"e": 4082,
"s": 4048,
"text": "Supports one-direction processing"
},
{
"code": null,
"e": 4116,
"s": 4082,
"text": "Supports one-direction processing"
},
{
"code": null,
"e": 4149,
"s": 4116,
"text": "Ease of implementation and usage"
},
{
"code": null,
"e": 4182,
"s": 4149,
"text": "Ease of implementation and usage"
},
{
"code": null,
"e": 4359,
"s": 4182,
"text": "Now that you have an idea about YAML and its features, let us learn its basics with syntax and other operations. Remember that YAML includes a human readable structured format."
},
{
"code": null,
"e": 4445,
"s": 4359,
"text": "When you are creating a file in YAML, you should remember the following basic rules −"
},
{
"code": null,
"e": 4468,
"s": 4445,
"text": "YAML is case sensitive"
},
{
"code": null,
"e": 4491,
"s": 4468,
"text": "YAML is case sensitive"
},
{
"code": null,
"e": 4536,
"s": 4491,
"text": "The files should have .yaml as the extension"
},
{
"code": null,
"e": 4581,
"s": 4536,
"text": "The files should have .yaml as the extension"
},
{
"code": null,
"e": 4671,
"s": 4581,
"text": "YAML does not allow the use of tabs while creating YAML files; spaces are allowed instead"
},
{
"code": null,
"e": 4761,
"s": 4671,
"text": "YAML does not allow the use of tabs while creating YAML files; spaces are allowed instead"
},
{
"code": null,
"e": 4812,
"s": 4761,
"text": "The basic components of YAML are described below −"
},
{
"code": null,
"e": 4923,
"s": 4812,
"text": "This block format uses hyphen+space to begin a new item in a specified list. Observe the example shown below −"
},
{
"code": null,
"e": 5010,
"s": 4923,
"text": "--- # Favorite movies\n - Casablanca\n - North by Northwest\n - The Man Who Wasn't There\n"
},
{
"code": null,
"e": 5024,
"s": 5010,
"text": "Inline Format"
},
{
"code": null,
"e": 5142,
"s": 5024,
"text": "Inline format is delimited with comma and space and the items are enclosed in JSON. Observe the example shown below −"
},
{
"code": null,
"e": 5205,
"s": 5142,
"text": "--- # Shopping list\n [milk, groceries, eggs, juice, fruits]\n"
},
{
"code": null,
"e": 5217,
"s": 5205,
"text": "Folded Text"
},
{
"code": null,
"e": 5327,
"s": 5217,
"text": "Folded text converts newlines to spaces and removes the leading whitespace. Observe the example shown below −"
},
{
"code": null,
"e": 5387,
"s": 5327,
"text": "- {name: John Smith, age: 33}\n- name: Mary Smith\n age: 27\n"
},
{
"code": null,
"e": 5466,
"s": 5387,
"text": "The structure which follows all the basic conventions of YAML is shown below −"
},
{
"code": null,
"e": 5538,
"s": 5466,
"text": "men: [John Smith, Bill Jones]\nwomen:\n - Mary Smith\n - Susan Williams\n"
},
{
"code": null,
"e": 5637,
"s": 5538,
"text": "The synopsis of YAML basic elements is given here: Comments in YAML begins with the (#) character."
},
{
"code": null,
"e": 5736,
"s": 5637,
"text": "The synopsis of YAML basic elements is given here: Comments in YAML begins with the (#) character."
},
{
"code": null,
"e": 5797,
"s": 5736,
"text": "Comments must be separated from other tokens by whitespaces."
},
{
"code": null,
"e": 5858,
"s": 5797,
"text": "Comments must be separated from other tokens by whitespaces."
},
{
"code": null,
"e": 5913,
"s": 5858,
"text": "Indentation of whitespace is used to denote structure."
},
{
"code": null,
"e": 5968,
"s": 5913,
"text": "Indentation of whitespace is used to denote structure."
},
{
"code": null,
"e": 6021,
"s": 5968,
"text": "Tabs are not included as indentation for YAML files."
},
{
"code": null,
"e": 6074,
"s": 6021,
"text": "Tabs are not included as indentation for YAML files."
},
{
"code": null,
"e": 6124,
"s": 6074,
"text": "List members are denoted by a leading hyphen (-)."
},
{
"code": null,
"e": 6174,
"s": 6124,
"text": "List members are denoted by a leading hyphen (-)."
},
{
"code": null,
"e": 6244,
"s": 6174,
"text": "List members are enclosed in square brackets and separated by commas."
},
{
"code": null,
"e": 6314,
"s": 6244,
"text": "List members are enclosed in square brackets and separated by commas."
},
{
"code": null,
"e": 6438,
"s": 6314,
"text": "Associative arrays are represented using colon ( : ) in the format of key value pair. They are enclosed in curly braces {}."
},
{
"code": null,
"e": 6562,
"s": 6438,
"text": "Associative arrays are represented using colon ( : ) in the format of key value pair. They are enclosed in curly braces {}."
},
{
"code": null,
"e": 6637,
"s": 6562,
"text": "Multiple documents with single streams are separated with 3 hyphens (---)."
},
{
"code": null,
"e": 6712,
"s": 6637,
"text": "Multiple documents with single streams are separated with 3 hyphens (---)."
},
{
"code": null,
"e": 6817,
"s": 6712,
"text": "Repeated nodes in each file are initially denoted by an ampersand (&) and by an asterisk (*) mark later."
},
{
"code": null,
"e": 7023,
"s": 6922,
"text": "YAML always requires colons and commas used as list separators followed by space with scalar values."
},
{
"code": null,
"e": 7272,
"s": 7124,
"text": "Nodes should be labelled with an exclamation mark (!) or double exclamation mark (!!), followed by string which can be expanded into an URI or URL."
},
{
"code": null,
"e": 7588,
"s": 7420,
"text": "Indentation and separation are two main concepts when you are learning any programming language. This chapter talks about these two concepts related to YAML in detail."
},
{
"code": null,
"e": 7720,
"s": 7588,
"text": "YAML does not include any mandatory spaces. Further, there is no need to be consistent. The valid YAML indentation is shown below −"
},
{
"code": null,
"e": 7775,
"s": 7720,
"text": "a:\n b:\n - c\n - d\n - e\nf:\n \"ghi\""
},
{
"code": null,
"e": 7947,
"s": 7775,
"text": "You should remember the following rules while working with indentation in YAML: Flow blocks must be indented with at least some spaces relative to the surrounding current block level."
},
{
"code": null,
"e": 8212,
"s": 8119,
"text": "Flow content of YAML spans multiple lines. The beginning of flow content begins with { or [."
},
{
"code": null,
"e": 8428,
"s": 8305,
"text": "Block list items include same indentation as the surrounding block level because - is considered as a part of indentation."
},
{
"code": null,
"e": 8617,
"s": 8551,
"text": "Observe the following code that shows indentation with examples −"
},
{
"code": null,
"e": 9241,
"s": 8617,
"text": "--- !clarkevans.com/^invoice\ninvoice: 34843\ndate : 2001-01-23\nbill-to: &id001\n given : Chris\n family : Dumars\n address:\n lines: |\n 458 Walkman Dr.\n Suite #292\n city : Royal Oak\n state : MI\n postal : 48046\nship-to: *id001\nproduct:\n - sku : BL394D\n quantity : 4\n description : Basketball\n price : 450.00\n - sku : BL4438H\n quantity : 1\n description : Super Hoop\n price : 2392.00\ntax : 251.42\ntotal: 4443.52\ncomments: >\n Late afternoon is best.\n Backup contact is Nancy\n Billsmer @ 338-4338."
},
{
"code": null,
"e": 9405,
"s": 9241,
"text": "Strings are delimited using double quotes. If you escape the newline characters in a given string, they are completely removed and translated into a space value."
},
{
"code": null,
"e": 9587,
"s": 9405,
"text": "This example focuses on a listing of animals represented as an array structure with the string data type. Every new element is listed with a leading hyphen as its prefix."
},
{
"code": null,
"e": 9645,
"s": 9587,
"text": "-\n - Cat\n - Dog\n - Goldfish\n-\n - Python\n - Lion\n - Tiger\n"
},
{
"code": null,
"e": 9722,
"s": 9645,
"text": "Another example to explain string representation in YAML is mentioned below."
},
{
"code": null,
"e": 10232,
"s": 9722,
"text": " errors:\n messages:\n already_confirmed: \"was already confirmed, please try signing in\"\n confirmation_period_expired: \"needs to be confirmed within %{period}, please request a new one\"\n expired: \"has expired, please request a new one\"\n not_found: \"not found\"\n not_locked: \"was not locked\"\n not_saved:\n one: \"1 error prohibited this %{resource} from being saved:\"\n other: \"%{count} errors prohibited this %{resource} from being saved:\"\n"
},
{
"code": null,
"e": 10476,
"s": 10232,
"text": "This example refers to the set of error messages which a user can use just by mentioning the key to fetch the values accordingly. This pattern of YAML follows the structure of JSON, which can be understood by users who are new to YAML."
},
{
"code": null,
"e": 10640,
"s": 10476,
"text": "Now that you are comfortable with the syntax and basics of YAML, let us proceed further into its details. In this chapter, we will see how to use comments in YAML."
},
{
"code": null,
"e": 10739,
"s": 10640,
"text": "YAML supports single line comments. Its structure is explained below with the help of an example −"
},
{
"code": null,
"e": 10771,
"s": 10739,
"text": "# this is single line comment.\n"
},
{
"code": null,
"e": 10912,
"s": 10771,
"text": "YAML does not support multi line comments. If you want to provide comments for multiple lines, you can do so as shown in the example below −"
},
{
"code": null,
"e": 10951,
"s": 10912,
"text": "# this\n# is a multiple\n# line comment\n"
},
{
"code": null,
"e": 11002,
"s": 10951,
"text": "The features of comments in YAML are given below −"
},
{
"code": null,
"e": 11049,
"s": 11002,
"text": "A commented block is skipped during execution."
},
{
"code": null,
"e": 11155,
"s": 11096,
"text": "Comments help to add description for specified code block."
},
{
"code": null,
"e": 11255,
"s": 11214,
"text": "Comments must not appear inside scalars."
},
{
"code": null,
"e": 11452,
"s": 11296,
"text": "YAML does not include any way to escape the hash symbol (#) within a multi-line string, so there is no way to separate the comment from the raw string value."
},
{
"code": null,
"e": 11659,
"s": 11608,
"text": "The comments within a collection are shown below −"
},
{
"code": null,
"e": 11758,
"s": 11659,
"text": "key: #comment 1\n - value line 1\n #comment 2\n - value line 2\n #comment 3\n - value line 3\n"
},
{
"code": null,
"e": 11825,
"s": 11758,
"text": "The shortcut key combination for commenting YAML blocks is Ctrl+Q."
},
{
"code": null,
"e": 11920,
"s": 11825,
"text": "If you are using Sublime Text editor, the steps for commenting the block are mentioned below −"
},
{
"code": null,
"e": 12031,
"s": 11920,
"text": "Select the block. Use “CTRL + /” on Linux and Windows and “CMD+/” for Mac operating system. Execute the block."
},
{
"code": null,
"e": 12280,
"s": 12031,
"text": "Note that the same steps are applicable if you are using Visual Studio Code Editor. It is always recommended to use Sublime Text Editor for creating YAML files as it supported by most operating systems and includes developer friendly shortcut keys."
},
{
"code": null,
"e": 12672,
"s": 12280,
"text": "YAML includes block collections which use indentation for scope. Here, each entry begins with a new line. Block sequences in collections indicate each entry with a dash and space (-). In YAML, block collection styles are not denoted by any specific indicator. A block collection in YAML can be distinguished from other scalar quantities by the key-value pairs included in it."
},
{
"code": null,
"e": 12924,
"s": 12672,
"text": "Mappings are the representation of key-value pairs as included in the JSON structure. They are used often in multi-lingual support systems and in the creation of APIs for mobile applications. Mappings use the key-value pair representation with the usage of colon and space (:)."
},
{
"code": null,
"e": 13020,
"s": 12924,
"text": "Consider an example of sequence of scalars, for example a list of ball players as shown below −"
},
{
"code": null,
"e": 13065,
"s": 13020,
"text": "- Mark Joseph\n- James Stephen\n- Ken Griffey\n"
},
{
"code": null,
"e": 13122,
"s": 13065,
"text": "The following example shows mapping scalars to scalars −"
},
{
"code": null,
"e": 13150,
"s": 13122,
"text": "hr: 87\navg: 0.298\nrbi: 149\n"
},
{
"code": null,
"e": 13209,
"s": 13150,
"text": "The following example shows mapping scalars to sequences −"
},
{
"code": null,
"e": 13332,
"s": 13209,
"text": "European:\n- Boston Red Sox\n- Detroit Tigers\n- New York Yankees\n\nnational:\n- New York Mets\n- Chicago Cubs\n- Atlanta Braves\n"
},
{
"code": null,
"e": 13402,
"s": 13332,
"text": "Collections can be used for sequences of mappings, as shown below −"
},
{
"code": null,
"e": 13499,
"s": 13402,
"text": "-\n name: Mark Joseph\n hr: 87\n avg: 0.278\n-\n name: James Stephen\n hr: 63\n avg: 0.288\n"
},
{
"code": null,
"e": 13800,
"s": 13499,
"text": "With collections, YAML includes flow styles using explicit indicators instead of using indentation to denote scope. The flow sequence in collections is written as a comma-separated list enclosed in square brackets. A typical illustration of such a collection can be found in PHP frameworks like Symfony."
},
{
"code": null,
"e": 13821,
"s": 13800,
"text": "[PHP, Perl, Python]\n"
},
{
"code": null,
"e": 14000,
"s": 13821,
"text": "These collections are stored in documents. The separation of documents in YAML is denoted with three hyphens or dashes (---). The end of document is marked with three dots (...)."
},
{
"code": null,
"e": 14215,
"s": 14128,
"text": "The document representation is referred as structure format which is mentioned below −"
},
{
"code": null,
"e": 14349,
"s": 14215,
"text": "# Ranking of 1998 home runs\n---\n- Mark Joseph\n- James Stephen\n- Ken Griffey \n\n# Team ranking\n---\n- Chicago Cubs\n- St Louis Cardinals\n"
},
{
"code": null,
"e": 14589,
"s": 14349,
"text": "A question mark followed by a space indicates a complex mapping key in the structure. Within a block collection, a user can include structure with a dash, colon and question mark. The following example shows the mapping between sequences −"
},
{
"code": null,
"e": 14681,
"s": 14589,
"text": "- 2001-07-23\n? [ New York Yankees,Atlanta Braves ]\n: [ 2001-07-02, 2001-08-12, 2001-08-14]\n"
},
{
"code": null,
"e": 14941,
"s": 14681,
"text": "Scalars in YAML are written in block format using the literal style, which is denoted by (|); it preserves line breaks. Scalars can also be written in the folded style (>), where each line break is folded to a space unless it ends an empty or a more indented line."
},
{
"code": null,
"e": 14995,
"s": 14941,
"text": "Newlines are preserved in literals, as shown below −"
},
{
"code": null,
"e": 15033,
"s": 14995,
"text": "ASCII Art\n--- |\n\\//||\\/||\n// || ||__\n"
},
{
"code": null,
"e": 15124,
"s": 15033,
"text": "The folded newlines are preserved for more indented lines and blank lines as shown below −"
},
{
"code": null,
"e": 15233,
"s": 15124,
"text": ">\nSammy Sosa completed another\nfine season with great stats.\n63 Home Runs\n0.288 Batting Average\nWhat a year!"
},
{
"code": null,
"e": 15440,
"s": 15233,
"text": "YAML flow scalars include plain styles and quoted styles. The double quoted style includes various escape sequences. Flow scalars can include multiple lines; line breaks are always folded in this structure."
},
{
"code": null,
"e": 15527,
"s": 15440,
"text": "plain:\nThis unquoted scalar\nspans many lines.\nquoted: \"So does this\nquoted scalar.\\n\"\n"
},
{
"code": null,
"e": 15771,
"s": 15527,
"text": "In YAML, untagged nodes are given a type specific to the application. The tag specification examples generally use the seq, map and str types from the YAML tag repository. The tags are represented in the examples mentioned below −"
},
{
"code": null,
"e": 15852,
"s": 15771,
"text": "These tags include integer values in them. They are also called numeric tags."
},
{
"code": null,
"e": 15936,
"s": 15852,
"text": "canonical: 12345\ndecimal: +12,345\nsexagecimal: 3:25:45\noctal: 014\nhexadecimal: 0xC\n"
},
{
"code": null,
"e": 16029,
"s": 15936,
"text": "These tags include decimal and exponential values. They are also called exponential tags."
},
{
"code": null,
"e": 16159,
"s": 16029,
"text": "canonical: 1.23015e+3\nexponential: 12.3015e+02\nsexagecimal: 20:30.15\nfixed: 1,230.15\nnegative infinity: -.inf\nnot a number: .NaN\n"
},
{
"code": null,
"e": 16277,
"s": 16159,
"text": "These tags include a variety of integer, floating-point and string values embedded in them. Hence they are called miscellaneous tags."
},
{
"code": null,
"e": 16319,
"s": 16277,
"text": "null: ~\ntrue: y\nfalse: n\nstring: '12345'\n"
},
{
"code": null,
"e": 16627,
"s": 16319,
"text": "The following full-length example specifies the construct of YAML which includes symbols and various representations which will be helpful while converting or processing them in JSON format. These attributes are also called as key names in JSON documents. These notations are created for security purposes. "
},
{
"code": null,
"e": 16938,
"s": 16627,
"text": "The following YAML format represents various attributes of defaults, adapter and host, with various other attributes. YAML also keeps a log of every file generated, which maintains a track of the error messages generated. On converting the specified YAML file into JSON format, we get the desired output as mentioned below −"
},
{
"code": null,
"e": 17113,
"s": 16938,
"text": "defaults: &defaults\n adapter: postgres\n host: localhost\n\ndevelopment:\n database: myapp_development\n <<: *defaults\n\ntest:\n database: myapp_test\n <<: *defaults"
},
{
"code": null,
"e": 17176,
"s": 17113,
"text": "Let’s convert the YAML to JSON format and check on the output."
},
{
"code": null,
"e": 17483,
"s": 17176,
"text": "{\n \"defaults\": {\n \"adapter\": \"postgres\",\n \"host\": \"localhost\"\n },\n \"development\": {\n \"database\": \"myapp_development\",\n \"adapter\": \"postgres\",\n \"host\": \"localhost\"\n },\n \"test\": {\n \"database\": \"myapp_test\",\n \"adapter\": \"postgres\",\n \"host\": \"localhost\"\n }\n}"
},
{
"code": null,
"e": 17615,
"s": 17483,
"text": "The defaults key, referenced with a prefix of “<<: *”, is included as and when required, with no need to write the same code snippet repeatedly."
},
{
"code": null,
"e": 17796,
"s": 17615,
"text": "YAML follows a standard procedure for Process flow. The native data structure in YAML includes simple representations such as nodes. It is also called as Representation Node Graph."
},
{
"code": null,
"e": 17973,
"s": 17796,
"text": "It includes mapping, sequence and scalar quantities which are serialized to create a serialization tree. With serialization, the objects are converted into a stream of bytes."
},
{
"code": null,
"e": 18095,
"s": 17973,
"text": "The serialization event tree helps in creating presentation of character streams as represented in the following diagram."
},
{
"code": null,
"e": 18317,
"s": 18095,
"text": "The reverse procedure parses the stream of bytes into serialized event tree. Later, the nodes are converted into node graph. These values are later converted in YAML native data structure. The figure below explains this −"
},
{
"code": null,
"e": 18652,
"s": 18317,
"text": "The information in YAML is used in two ways: machine processing and human consumption. The processor in YAML is used as a tool for the procedure of converting information between complementary views in the diagram given above. This chapter describes the information structures a YAML processor must provide within a given application."
},
{
"code": null,
"e": 18894,
"s": 18652,
"text": "YAML includes a serialization procedure for representing data objects in serial format. The processing of YAML information includes four stages: representation, serialization, presentation and parsing. Let us discuss each of them in detail."
},
{
"code": null,
"e": 18988,
"s": 18894,
"text": "YAML represents the data structure using three kinds of nodes: sequence, mapping and scalar."
},
{
"code": null,
"e": 19143,
"s": 18988,
"text": "A sequence refers to an ordered number of entries, while a mapping is an unordered association of key-value pairs. A sequence corresponds to a Perl or Python array list."
},
{
"code": null,
"e": 19207,
"s": 19143,
"text": "The code shown below is an example of sequence representation −"
},
{
"code": null,
"e": 19422,
"s": 19207,
"text": "product:\n - sku : BL394D\n quantity : 4\n description : Football\n price : 450.00\n - sku : BL4438H\n quantity : 1\n description : Super Hoop\n price : 2392.00"
},
{
"code": null,
"e": 19545,
"s": 19422,
"text": "Mapping on the other hand represents dictionary data structure or hash table. An example for the same is mentioned below −"
},
{
"code": null,
"e": 19624,
"s": 19545,
"text": "batchLimit: 1000\nthreadCountLimit: 2\nkey: value\nkeyMapping: <What goes here?>\n"
},
{
"code": null,
"e": 19864,
"s": 19624,
"text": "Scalars represent standard values of strings, integers, dates and atomic data types. Note that YAML also includes nodes which specify the data type structure. For more information on scalars, please refer to chapter 6 of this tutorial."
},
{
"code": null,
"e": 20087,
"s": 19864,
"text": "A serialization process is required in YAML to preserve the human-friendly key order and anchor names. The result of serialization is a YAML serialization tree. It can be traversed to produce a series of event calls of YAML data."
},
{
"code": null,
"e": 20133,
"s": 20087,
"text": "An example for serialization is given below −"
},
{
"code": null,
"e": 20516,
"s": 20133,
"text": "consumer:\n class: 'AppBundle\\Entity\\consumer'\n attributes:\n filters: ['customer.search', 'customer.order', 'customer.boolean']\n collectionOperations:\n get:\n method: 'GET'\n normalization_context:\n groups: ['customer_list']\n itemOperations:\n get:\n method: 'GET'\n normalization_context:\n groups: ['customer_get']\n"
},
{
"code": null,
"e": 20819,
"s": 20516,
"text": "The final output of YAML serialization is called presentation. It represents a character stream in a human-friendly manner. The YAML processor includes various presentation details for creating a stream, handling indentation and formatting content. This complete process is guided by the preferences of the user."
},
{
"code": null,
"e": 20955,
"s": 20819,
"text": "An example of the YAML presentation process is the JSON value it produces. Observe the code given below for a better understanding −"
},
{
"code": null,
"e": 21635,
"s": 20955,
"text": "{\n \"consumer\": {\n \"class\": \"AppBundle\\\\Entity\\\\consumer\",\n \"attributes\": {\n \"filters\": [\n \"customer.search\",\n \"customer.order\",\n \"customer.boolean\"\n ]\n },\n \"collectionOperations\": {\n \"get\": {\n \"method\": \"GET\",\n \"normalization_context\": {\n \"groups\": [\n \"customer_list\"\n ]\n }\n }\n },\n \"itemOperations\": {\n \"get\": {\n \"method\": \"GET\",\n \"normalization_context\": {\n \"groups\": [\n \"customer_get\"\n ]\n }\n }\n }\n }\n}"
},
{
"code": null,
"e": 21974,
"s": 21635,
"text": "Parsing is the inverse process of presentation; it takes a stream of characters and creates a series of events. It discards the details introduced in the presentation process which caused the serialization events. The parsing procedure can fail due to ill-formed input. It is basically a procedure to check whether the YAML is well-formed or not."
},
{
"code": null,
"e": 22025,
"s": 21974,
"text": "Consider a YAML example which is mentioned below −"
},
{
"code": null,
"e": 22188,
"s": 22025,
"text": "---\n environment: production\n classes:\n nfs::server:\n exports:\n - /srv/share1\n - /srv/share3\n parameters:\n paramter1"
},
{
"code": null,
"e": 22289,
"s": 22188,
"text": "The three hyphens represent the start of the document, with various attributes later defined in it."
},
{
"code": null,
"e": 22482,
"s": 22289,
"text": "YAML lint is the online parser of YAML and helps in parsing the YAML structure to check whether it is valid or not. The official link for YAML lint is mentioned below: http://www.yamllint.com/"
},
{
"code": null,
"e": 22533,
"s": 22482,
"text": "You can see the output of parsing as shown below −"
},
{
"code": null,
"e": 22786,
"s": 22533,
"text": "This chapter explains in detail the procedures and processes that we discussed in the last chapter. Information models in YAML specify the features of the serialization and presentation procedures in a systematic format using a specific diagram."
},
{
"code": null,
"e": 22922,
"s": 22786,
"text": "For an information model, it is important to represent application information in a form that is portable between programming environments."
},
{
"code": null,
"e": 23286,
"s": 22922,
"text": "The diagram shown above represents a normal information model in graph format. In YAML, the representation of native data is a rooted, connected, directed graph of tagged nodes, where a directed graph is a set of nodes connected by directed edges. As mentioned in the information model, YAML supports three kinds of nodes, namely −"
},
{
"code": null,
"e": 23296,
"s": 23286,
"text": "Sequences"
},
{
"code": null,
"e": 23304,
"s": 23296,
"text": "Scalars"
},
{
"code": null,
"e": 23313,
"s": 23304,
"text": "Mappings"
},
{
"code": null,
"e": 23574,
"s": 23313,
"text": "The basic definitions of these representation nodes were discussed in the last chapter. In this chapter, we will focus on a schematic view of these terms. The following diagram represents the legend with various types of tag and mapping nodes."
},
{
"code": null,
"e": 23651,
"s": 23574,
"text": "There are three types of nodes: sequence node, scalar node and mapping node."
},
{
"code": null,
"e": 23824,
"s": 23651,
"text": "Sequence node follows a sequential architecture and includes an ordered series of zero or more nodes. A YAML sequence may contain the same node repeatedly or a single node."
},
{
"code": null,
"e": 23997,
"s": 23824,
"text": "The content of a scalar in YAML is a series of zero or more Unicode characters. In general, a scalar node includes scalar quantities."
},
{
"code": null,
"e": 24248,
"s": 23997,
"text": "A mapping node includes the key-value pair representation. The content of a mapping node is a combination of key-value pairs, with the mandatory condition that each key name must be unique. Sequences and mappings collectively form a collection."
},
{
"code": null,
"e": 24373,
"s": 24248,
"text": "Note that as represented in the diagram shown above, scalars, sequences and mappings are represented in a systematic format."
},
{
"code": null,
"e": 24533,
"s": 24373,
"text": "Various types of characters are used for various functionalities. This chapter talks in detail about syntax used in YAML and focuses on character manipulation."
},
{
"code": null,
"e": 24671,
"s": 24533,
"text": "Indicator characters carry special semantics used to describe the content of a YAML document. The following table shows this in detail."
},
{
"code": null,
"e": 24673,
"s": 24671,
"text": "-"
},
{
"code": null,
"e": 24707,
"s": 24673,
"text": "It denotes a block sequence entry"
},
{
"code": null,
"e": 24709,
"s": 24707,
"text": "?"
},
{
"code": null,
"e": 24734,
"s": 24709,
"text": "It denotes a mapping key"
},
{
"code": null,
"e": 24736,
"s": 24734,
"text": ":"
},
{
"code": null,
"e": 24763,
"s": 24736,
"text": "It denotes a mapping value"
},
{
"code": null,
"e": 24765,
"s": 24763,
"text": ","
},
{
"code": null,
"e": 24798,
"s": 24765,
"text": "It denotes flow collection entry"
},
{
"code": null,
"e": 24800,
"s": 24798,
"text": "["
},
{
"code": null,
"e": 24826,
"s": 24800,
"text": "It starts a flow sequence"
},
{
"code": null,
"e": 24828,
"s": 24826,
"text": "]"
},
{
"code": null,
"e": 24852,
"s": 24828,
"text": "It ends a flow sequence"
},
{
"code": null,
"e": 24854,
"s": 24852,
"text": "{"
},
{
"code": null,
"e": 24879,
"s": 24854,
"text": "It starts a flow mapping"
},
{
"code": null,
"e": 24881,
"s": 24879,
"text": "}"
},
{
"code": null,
"e": 24904,
"s": 24881,
"text": "It ends a flow mapping"
},
{
"code": null,
"e": 24906,
"s": 24904,
"text": "#"
},
{
"code": null,
"e": 24930,
"s": 24906,
"text": "It denotes the comments"
},
{
"code": null,
"e": 24932,
"s": 24930,
"text": "&"
},
{
"code": null,
"e": 24966,
"s": 24932,
"text": "It denotes node’s anchor property"
},
{
"code": null,
"e": 24968,
"s": 24966,
"text": "*"
},
{
"code": null,
"e": 24990,
"s": 24968,
"text": "It denotes alias node"
},
{
"code": null,
"e": 24992,
"s": 24990,
"text": "!"
},
{
"code": null,
"e": 25014,
"s": 24992,
"text": "It denotes node’s tag"
},
{
"code": null,
"e": 25016,
"s": 25014,
"text": "|"
},
{
"code": null,
"e": 25050,
"s": 25016,
"text": "It denotes a literal block scalar"
},
{
"code": null,
"e": 25052,
"s": 25050,
"text": ">"
},
{
"code": null,
"e": 25085,
"s": 25052,
"text": "It denotes a folded block scalar"
},
{
"code": null,
"e": 25087,
"s": 25085,
"text": "'"
},
{
"code": null,
"e": 25131,
"s": 25087,
"text": "Single quote surrounds a quoted flow scalar"
},
{
"code": null,
"e": 25133,
"s": 25131,
"text": "\""
},
{
"code": null,
"e": 25182,
"s": 25133,
"text": "Double quote surrounds double quoted flow scalar"
},
{
"code": null,
"e": 25184,
"s": 25182,
"text": "%"
},
{
"code": null,
"e": 25214,
"s": 25184,
"text": "It denotes the directive used"
},
{
"code": null,
"e": 25274,
"s": 25214,
"text": "The following example shows the characters used in syntax −"
},
{
"code": null,
"e": 25622,
"s": 25274,
"text": "%YAML 1.1\n---\n!!map {\n ? !!str \"sequence\"\n : !!seq [\n !!str \"one\", !!str \"two\"\n ],\n ? !!str \"mapping\"\n : !!map {\n ? !!str \"sky\" : !!str \"blue\",\n ? !!str \"sea\" : !!str \"green\",\n }\n}\n\n# This represents\n# only comments.\n---\n!!map1 {\n ? !!str \"anchored\"\n : !local &A1 \"value\",\n ? !!str \"alias\"\n : *A1,\n}\n!!str \"text\""
},
{
"code": null,
"e": 25712,
"s": 25622,
"text": "In this chapter you will learn about the following aspects of syntax primitives in YAML −"
},
{
"code": null,
"e": 25734,
"s": 25712,
"text": "Production parameters"
},
{
"code": null,
"e": 25753,
"s": 25734,
"text": "Indentation Spaces"
},
{
"code": null,
"e": 25771,
"s": 25753,
"text": "Separation Spaces"
},
{
"code": null,
"e": 25791,
"s": 25771,
"text": "Ignored Line Prefix"
},
{
"code": null,
"e": 25804,
"s": 25791,
"text": "Line folding"
},
{
"code": null,
"e": 25845,
"s": 25804,
"text": "Let us understand each aspect in detail."
},
{
"code": null,
"e": 26031,
"s": 25845,
"text": "Production parameters include a set of parameters and the range of allowed values which are used on a specific production. The following list of production parameters are used in YAML −"
},
{
"code": null,
"e": 26193,
"s": 26031,
"text": "It is denoted by the character n or m. The character stream depends on the indentation level of the blocks included in it. Many productions have parameterized these features."
},
{
"code": null,
"e": 26281,
"s": 26193,
"text": "It is denoted by c. YAML supports two groups of contexts: block styles and flow styles."
},
{
"code": null,
"e": 26431,
"s": 26281,
"text": "It is denoted by s. Scalar content may be presented in one of the five styles: plain, double quoted and single quoted flow, literal and folded block."
},
{
"code": null,
"e": 26848,
"s": 26431,
"text": "It is denoted by t. Block scalars offer three mechanisms which help in trimming the block: strip, clip and keep. Chomping helps in formatting newline strings and is used in block style representation. The chomping process happens with the help of indicators, which control what output should be produced for the trailing newlines of a string. Newlines are removed with the (-) indicator and kept with the (+) indicator."
},
{
"code": null,
"e": 26897,
"s": 26848,
"text": "An example for chomping process is shown below −"
},
{
"code": null,
"e": 26952,
"s": 26897,
"text": "strip: |-\n text↓\nclip: |\n text↓\nkeep: |+\n text↓\n"
},
{
"code": null,
"e": 27020,
"s": 26952,
"text": "The output after parsing the specified YAML example is as follows −"
},
{
"code": null,
"e": 27376,
"s": 27020,
"text": "In a YAML character stream, indentation is defined as a line break followed by zero or more space characters. The most important point to be kept in mind is that indentation must not contain any tab characters. The characters in indentation should never be considered as a part of the node’s content information. Observe the following code for better understanding −"
},
{
"code": null,
"e": 27647,
"s": 27376,
"text": "%YAML 1.1\n---\n!!map {\n ? !!str \"Not indented\"\n : !!map {\n ? !!str \"By one space\"\n : !!str \"By four\\n spaces\\n\",\n ? !!str \"Flow style\"\n : !!seq [\n !!str \"By two\",\n !!str \"Still by two\",\n !!str \"Again by two\",\n ]\n }\n}"
},
{
"code": null,
"e": 27709,
"s": 27647,
"text": "The output that you can see after indentation is as follows −"
},
{
"code": null,
"e": 27885,
"s": 27709,
"text": "{\n \"Not indented\": {\n \"By one space\": \"By four\\n spaces\\n\", \n \"Flow style\": [\n \"By two\", \n \"Still by two\", \n \"Again by two\"\n ]\n }\n}\n"
},
{
"code": null,
"e": 28029,
"s": 27885,
"text": "YAML uses space characters for separation between tokens. The most important note is that separation in YAML should not contain tab characters."
},
{
"code": null,
"e": 28095,
"s": 28029,
"text": "The following line of code shows the usage of separation spaces −"
},
{
"code": null,
"e": 28134,
"s": 28095,
"text": "{ · first: · Sammy, · last: · Sosa · }"
},
{
"code": null,
"e": 28214,
"s": 28134,
"text": "{\n \"\\u00b7 last\": \"\\u00b7 Sosa \\u00b7\", \n \"\\u00b7 first\": \"\\u00b7 Sammy\"\n}\n"
},
{
"code": null,
"e": 28489,
"s": 28214,
"text": "The ignored line prefix always includes indentation, depending on the scalar type, and may also include a leading whitespace. Plain scalars should not contain any tab characters. On the other hand, quoted scalars may contain tab characters. Block scalars completely depend on indentation."
},
{
"code": null,
"e": 28577,
"s": 28489,
"text": "The following example shows the working of ignored line prefix in a systematic manner −"
},
{
"code": null,
"e": 28736,
"s": 28577,
"text": "%YAML 1.1\n---\n!!map {\n ? !!str \"plain\"\n : !!str \"text lines\",\n ? !!str \"quoted\"\n : !!str \"text lines\",\n ? !!str \"block\"\n : !!str \"text·®lines\\n\"\n}"
},
{
"code": null,
"e": 28794,
"s": 28736,
"text": "The output achieved for the block streams is as follows −"
},
{
"code": null,
"e": 28892,
"s": 28794,
"text": "{\n \"plain\": \"text lines\", \n \"quoted\": \"text lines\", \n \"block\": \"text\\u00b7\\u00aelines\\n\"\n}\n"
},
{
"code": null,
"e": 29120,
"s": 28892,
"text": "Line folding allows breaking long lines for readability. More short lines mean better readability. Line folding is achieved by preserving the original semantics of the long line. The following example demonstrates line folding −"
},
{
"code": null,
"e": 29178,
"s": 29120,
"text": "%YAML 1.1\n--- !!str\n\"specific\\L\\\ntrimmed\\n\\n\\n\\\nas space\""
},
{
"code": null,
"e": 29246,
"s": 29178,
"text": "You can see the output for line folding in JSON format as follows −"
},
{
"code": null,
"e": 29285,
"s": 29246,
"text": "\"specific\\u2028trimmed\\n\\n\\nas space\"\n"
},
{
"code": null,
"e": 29349,
"s": 29285,
"text": "In YAML, you come across various character streams as follows −"
},
{
"code": null,
"e": 29360,
"s": 29349,
"text": "Directives"
},
{
"code": null,
"e": 29386,
"s": 29360,
"text": "Document Boundary Markers"
},
{
"code": null,
"e": 29396,
"s": 29386,
"text": "Documents"
},
{
"code": null,
"e": 29412,
"s": 29396,
"text": "Complete Stream"
},
{
"code": null,
"e": 29461,
"s": 29412,
"text": "In this chapter, we will discuss them in detail."
},
{
"code": null,
"e": 29751,
"s": 29461,
"text": "Directives are basic instructions used in YAML processor. Directives are the presentation details like comments which are not reflected in serialization tree. In YAML, there is no way to define private directives. This section discusses various types of directives with relevant examples −"
},
{
"code": null,
"e": 29920,
"s": 29751,
"text": "Reserved directives are initialized with three hyphen characters (---) as shown in the example below. The reserved directives are converted into a specific JSON value."
},
{
"code": null,
"e": 29947,
"s": 29920,
"text": "%YAML 1.1\n--- !!str\n\"foo\"\n"
},
{
"code": null,
"e": 30101,
"s": 29947,
"text": "YAML directives are default directives. If converted to JSON, the fetched value is enclosed in quotation marks as its preceding and terminating characters."
},
{
"code": null,
"e": 30128,
"s": 30101,
"text": "%YAML 1.1\n---\n!!str \"foo\"\n"
},
{
"code": null,
"e": 30360,
"s": 30128,
"text": "YAML uses these markers to allow more than one document to be contained in one stream. These markers are specially used to convey the structure of a YAML document. Note that a line beginning with “---” is used to start a new document."
},
{
"code": null,
"e": 30415,
"s": 30360,
"text": "The following code explains about this with examples −"
},
{
"code": null,
"e": 30494,
"s": 30415,
"text": "%YAML 1.1\n---\n!!str \"foo\"\n%YAML 1.1\n---\n!!str \"bar\"\n%YAML 1.1\n---\n!!str \"baz\"\n"
},
{
"code": null,
"e": 30733,
"s": 30494,
"text": "YAML document is considered as a single native data structure presented as a single root node. The presentation details in YAML document such as directives, comments, indentation and styles are not considered as contents included in them."
},
{
"code": null,
"e": 30817,
"s": 30733,
"text": "There are two types of documents used in YAML. They are explained in this section −"
},
{
"code": null,
"e": 30963,
"s": 30817,
"text": "It begins with the document start marker followed by the presentation of the root node. The example of YAML explicit declaration is given below −"
},
{
"code": null,
"e": 30985,
"s": 30963,
"text": "---\n\nsome: yaml\n\n...\n"
},
{
"code": null,
"e": 31157,
"s": 30985,
"text": "It includes explicit start and end markers, which are “---” and “...” in the given example. On converting the specified YAML into JSON format, we get the output as shown below −"
},
{
"code": null,
"e": 31180,
"s": 31157,
"text": "{\n \"some\": \"yaml\"\n}\n"
},
{
"code": null,
"e": 31270,
"s": 31180,
"text": "These documents do not begin with a document start marker. Observe the code given below −"
},
{
"code": null,
"e": 31328,
"s": 31270,
"text": "fruits:\n - Apple\n - Orange\n - Pineapple\n - Mango\n"
},
{
"code": null,
"e": 31426,
"s": 31328,
"text": "Converting these values in JSON format we get the output as a simple JSON object as given below −"
},
{
"code": null,
"e": 31515,
"s": 31426,
"text": "{\n \"fruits\": [\n \"Apple\",\n \"Orange\",\n \"Pineapple\",\n \"Mango\"\n ]\n}\n"
},
{
"code": null,
"e": 31760,
"s": 31515,
"text": "YAML includes a sequence of bytes called a character stream. The stream may begin with a prefix containing a byte order mark denoting the character encoding. The complete stream begins with this prefix, followed by comments."
},
{
"code": null,
"e": 31826,
"s": 31760,
"text": "An example of complete stream (character stream) is shown below −"
},
{
"code": null,
"e": 31864,
"s": 31826,
"text": "%YAML 1.1\n---\n!!str \"Text content\\n\"\n"
},
{
"code": null,
"e": 32032,
"s": 31864,
"text": "Each presentation node includes two major characteristics called anchor and tag. Node properties may be specified with node content, omitted from the character stream."
},
{
"code": null,
"e": 32089,
"s": 32032,
"text": "The basic example of node representation is as follows −"
},
{
"code": null,
"e": 32183,
"s": 32089,
"text": "%YAML 1.1\n---\n!!map {\n ? &A1 !!str \"foo\"\n : !!str \"bar\",\n ? !!str &A2 \"baz\"\n : *a1\n}\n"
},
{
"code": null,
"e": 32476,
"s": 32183,
"text": "The anchor property marks a node for future reference. In the character stream of a YAML representation, an anchor is denoted with the ampersand (&) indicator. The YAML processor need not preserve the anchor name with the representation details composed in it. The following code explains this −"
},
{
"code": null,
"e": 32590,
"s": 32476,
"text": "%YAML 1.1\n---\n!!map {\n ? !!str \"First occurence\"\n : &A !!str \"Value\",\n ? !!str \"Second occurence\"\n : *A\n}"
},
{
"code": null,
"e": 32654,
"s": 32590,
"text": "The output of YAML generated with anchor nodes is shown below −"
},
{
"code": null,
"e": 32768,
"s": 32654,
"text": "---\n!!map {\n ? !!str \"First occurence\"\n : !!str \"Value\",\n ? !!str \"Second occurence\"\n : !!str \"Value\",\n}\n"
},
{
"code": null,
"e": 33034,
"s": 32768,
"text": "The tag property represents the type of native data structure which defines a node completely. A tag is represented with the (!) indicator. Tags are considered as an inherent part of the representation graph. The following example explains node tags in detail −"
},
{
"code": null,
"e": 33113,
"s": 33034,
"text": "%YAML 1.1\n---\n!!map {\n ? !<tag:yaml.org,2002:str> \"foo\"\n : !<!bar> \"baz\"\n}"
},
{
"code": null,
"e": 33431,
"s": 33113,
"text": "Node content can be represented in a flow content or block format. Block content extends to the end of line and uses indentation to denote structure. Each collection kind can be represented in a specific single flow collection style or can be considered as a single block. The following code explains this in detail −"
},
{
"code": null,
"e": 33588,
"s": 33431,
"text": "%YAML 1.1\n---\n!!map {\n ? !!str \"foo\"\n : !!str \"bar baz\"\n}\n\n%YAML 1.1\n---\n!!str \"foo bar\"\n\n%YAML 1.1\n---\n!!str \"foo bar\"\n\n%YAML 1.1\n---\n!!str \"foo bar\\n\""
},
{
"code": null,
"e": 33835,
"s": 33588,
"text": "In this chapter, we will focus on various scalar types which are used for representing the content. In YAML, comments may either precede or follow scalar content. It is important to note that comments should not be included within scalar content."
},
{
"code": null,
"e": 33932,
"s": 33835,
"text": "Note that all flow scalar styles can include multiple lines, except with usage in multiple keys."
},
{
"code": null,
"e": 33979,
"s": 33932,
"text": "The representation of scalars is given below −"
},
{
"code": null,
"e": 34155,
"s": 33979,
"text": "%YAML 1.1\n---\n!!map {\n ? !!str \"simple key\"\n : !!map {\n ? !!str \"also simple\"\n : !!str \"value\",\n ? !!str \"not a simple key\"\n : !!str \"any value\"\n }\n}"
},
{
"code": null,
"e": 34217,
"s": 34155,
"text": "The generated output of this scalar representation is shown below −"
},
{
"code": null,
"e": 34314,
"s": 34217,
"text": "{\n \"simple key\": {\n \"not a simple key\": \"any value\", \n \"also simple\": \"value\"\n }\n}"
},
{
"code": null,
"e": 34410,
"s": 34314,
"text": "All characters in this example are considered as content, including the inner space characters."
},
{
"code": null,
"e": 34617,
"s": 34410,
"text": "%YAML 1.1\n---\n!!map {\n ? !!str \"---\"\n : !!str \"foo\",\n ? !!str \"...\",\n : !!str \"bar\"\n}\n\n%YAML 1.1\n---\n!!seq [\n !!str \"---\",\n !!str \"...\",\n !!map {\n ? !!str \"---\"\n : !!str \"...\"\n }\n]"
},
{
"code": null,
"e": 34686,
"s": 34617,
"text": "The plain line breaks are represented with the example given below −"
},
{
"code": null,
"e": 34749,
"s": 34686,
"text": "%YAML 1.1\n---\n!!str \"as space \\\ntrimmed\\n\\\nspecific\\L\\n\\\nnone\""
},
{
"code": null,
"e": 34813,
"s": 34749,
"text": "The corresponding JSON output for the same is mentioned below −"
},
{
"code": null,
"e": 34855,
"s": 34813,
"text": "\"as space trimmed\\nspecific\\u2028\\nnone\"\n"
},
{
"code": null,
"e": 35132,
"s": 34855,
"text": "Flow styles in YAML can be thought of as a natural extension of JSON: they cover folding of long content lines for better readability and use anchors and aliases to create object instances. In this chapter, we will focus on the flow representation of the following concepts −"
},
{
"code": null,
"e": 35144,
"s": 35132,
"text": "Alias Nodes"
},
{
"code": null,
"e": 35156,
"s": 35144,
"text": "Empty Nodes"
},
{
"code": null,
"e": 35175,
"s": 35156,
"text": "Flow Scalar styles"
},
{
"code": null,
"e": 35198,
"s": 35175,
"text": "Flow collection styles"
},
{
"code": null,
"e": 35209,
"s": 35198,
"text": "Flow nodes"
},
{
"code": null,
"e": 35253,
"s": 35209,
"text": "The example of alias nodes is shown below −"
},
{
"code": null,
"e": 35453,
"s": 35253,
"text": "%YAML 1.2\n---\n!!map {\n ? !!str \"First occurrence\"\n : &A !!str \"Foo\",\n ? !!str \"Override anchor\"\n : &B !!str \"Bar\",\n ? !!str \"Second occurrence\"\n : *A,\n ? !!str \"Reuse anchor\"\n : *B,\n}"
},
{
"code": null,
"e": 35510,
"s": 35453,
"text": "The JSON output of the code given above is given below −"
},
{
"code": null,
"e": 35633,
"s": 35510,
"text": "{\n \"First occurrence\": \"Foo\", \n \"Second occurrence\": \"Foo\", \n \"Override anchor\": \"Bar\", \n \"Reuse anchor\": \"Bar\"\n}\n"
},
{
"code": null,
"e": 35724,
"s": 35633,
"text": "Nodes with empty content are considered as empty nodes. The following example shows this −"
},
{
"code": null,
"e": 35806,
"s": 35724,
"text": "%YAML 1.2\n---\n!!map {\n ? !!str \"foo\" : !!str \"\",\n ? !!str \"\" : !!str \"bar\",\n}"
},
{
"code": null,
"e": 35866,
"s": 35806,
"text": "The output of empty nodes in JSON is represented as below −"
},
{
"code": null,
"e": 35899,
"s": 35866,
"text": "{\n \"\": \"bar\", \n \"foo\": \"\"\n}\n"
},
{
"code": null,
"e": 36020,
"s": 35899,
"text": "Flow scalar styles include double-quoted, single-quoted and plain types. The basic example for the same is given below −"
},
{
"code": null,
"e": 36181,
"s": 36020,
"text": "%YAML 1.2\n---\n!!map {\n ? !!str \"implicit block key\"\n : !!seq [\n !!map {\n ? !!str \"implicit flow key\"\n : !!str \"value\",\n }\n ] \n}"
},
{
"code": null,
"e": 36252,
"s": 36181,
"text": "The output in JSON format for the example given above is shown below −"
},
{
"code": null,
"e": 36344,
"s": 36252,
"text": "{\n \"implicit block key\": [\n {\n \"implicit flow key\": \"value\"\n }\n ] \n}\n"
},
{
"code": null,
"e": 36568,
"s": 36344,
"text": "A flow collection in YAML can be nested within a block collection or within another flow collection. Flow collection entries are terminated with the comma (,) indicator. The following example explains the flow collection block in detail −"
},
{
"code": null,
"e": 36709,
"s": 36568,
"text": "%YAML 1.2\n---\n!!seq [\n !!seq [\n !!str \"one\",\n !!str \"two\",\n ],\n \n !!seq [\n !!str \"three\",\n !!str \"four\",\n ],\n]"
},
{
"code": null,
"e": 36765,
"s": 36709,
"text": "The output for flow collection in JSON is shown below −"
},
{
"code": null,
"e": 36847,
"s": 36765,
"text": "[\n [\n \"one\", \n \"two\"\n ], \n [\n \"three\", \n \"four\"\n ]\n]\n"
},
{
"code": null,
"e": 36976,
"s": 36847,
"text": "Flow styles like JSON include start and end indicators. The only flow style that does not have any property is the plain scalar."
},
{
"code": null,
"e": 37099,
"s": 36976,
"text": "%YAML 1.2\n---\n!!seq [\n!!seq [ !!str \"a\", !!str \"b\" ],\n!!map { ? !!str \"a\" : !!str \"b\" },\n!!str \"a\",\n!!str \"b\",\n!!str \"c\",]"
},
{
"code": null,
"e": 37167,
"s": 37099,
"text": "The output for the code shown above in JSON format is given below −"
},
{
"code": null,
"e": 37266,
"s": 37167,
"text": "[\n [\n \"a\", \n \"b\"\n ], \n \n {\n \"a\": \"b\"\n }, \n \n \"a\", \n \"b\", \n \"c\"\n]\n"
},
{
"code": null,
"e": 37470,
"s": 37266,
"text": "YAML includes two block scalar styles: literal and folded. Block scalars are controlled with few indicators with a header preceding the content itself. An example of block scalar headers is given below −"
},
{
"code": null,
"e": 37578,
"s": 37470,
"text": "%YAML 1.2\n---\n!!seq [\n !!str \"literal\\n\",\n !!str \"·folded\\n\",\n !!str \"keep\\n\\n\",\n !!str \"·strip\",\n]"
},
{
"code": null,
"e": 37645,
"s": 37578,
"text": "The output in JSON format with a default behavior is given below −"
},
{
"code": null,
"e": 37722,
"s": 37645,
"text": "[\n \"literal\\n\", \n \"\\u00b7folded\\n\", \n \"keep\\n\\n\", \n \"\\u00b7strip\"\n]\n"
},
{
"code": null,
"e": 37928,
"s": 37722,
"text": "Block scalars come in two styles, literal and folded, and their trailing line breaks are controlled by three chomping modes: strip, clip and keep. These modes are defined with the help of the block chomping scenario. An example of a block chomping scenario is given below −"
},
{
"code": null,
"e": 38075,
"s": 37928,
"text": "%YAML 1.2\n---\n!!map {\n ? !!str \"strip\"\n : !!str \"# text\",\n ? !!str \"clip\"\n : !!str \"# text\\n\",\n ? !!str \"keep\"\n : !!str \"# text\\n\",\n}\n"
},
{
"code": null,
"e": 38152,
"s": 38075,
"text": "You can see the output generated with three formats in JSON as given below −"
},
{
"code": null,
"e": 38226,
"s": 38152,
"text": "{\n \"strip\": \"# text\", \n \"clip\": \"# text\\n\", \n \"keep\": \"# text\\n\"\n}\n"
},
{
"code": null,
"e": 38334,
"s": 38226,
"text": "Chomping in YAML controls the final breaks and trailing empty lines which are interpreted in various forms."
},
{
"code": null,
"e": 38465,
"s": 38334,
"text": "In this case, the final line break and trailing empty lines are excluded from the scalar content. It is specified by the chomping indicator “-”."
},
{
"code": null,
"e": 38732,
"s": 38465,
"text": "Clipping is considered as a default behavior if no explicit chomping indicator is specified. The final break character is preserved in the scalar’s content. The best example of clipping is demonstrated in the example above. It terminates with newline “\\n” character."
},
{
"code": null,
"e": 38912,
"s": 38732,
"text": "Keeping is specified by the “+” chomping indicator. The final line break and any trailing empty lines are preserved in the scalar content, and the additional lines are not subject to folding."
},
{
"code": null,
"e": 39349,
"s": 38912,
"text": "To understand sequence styles, it is important to understand collections; the two concepts work in parallel. A collection in YAML is represented with a proper sequence style. If you want to refer to proper sequencing of tags, always refer to collections. Collections in YAML are indexed by sequential integers starting with zero, as in arrays. The focus of sequence styles begins with collections."
},
{
"code": null,
"e": 39538,
"s": 39349,
"text": "Let us consider the number of planets in universe as a sequence which can be created as a collection. The following code shows how to represent the sequence styles of planets in universe −"
},
{
"code": null,
"e": 40087,
"s": 39538,
"text": "# Ordered sequence of nodes in YAML STRUCTURE\nBlock style: !!seq\n- Mercury # Rotates - no light/dark sides.\n- Venus # Deadliest. Aptly named.\n- Earth # Mostly dirt.\n- Mars # Seems empty.\n- Jupiter # The king.\n- Saturn # Pretty.\n- Uranus # Where the sun hardly shines.\n- Neptune # Boring. No rings.\n- Pluto # You call this a planet?\nFlow style: !!seq [ Mercury, Venus, Earth, Mars, # Rocks\n Jupiter, Saturn, Uranus, Neptune, # Gas\n Pluto ] # Overrated"
},
{
"code": null,
"e": 40164,
"s": 40087,
"text": "Then, you can see the following output for ordered sequence in JSON format −"
},
{
"code": null,
"e": 40522,
"s": 40164,
"text": "{\n \"Flow style\": [\n \"Mercury\", \n \"Venus\", \n \"Earth\", \n \"Mars\", \n \"Jupiter\", \n \"Saturn\", \n \"Uranus\", \n \"Neptune\", \n \"Pluto\"\n ], \n \n \"Block style\": [\n \"Mercury\", \n \"Venus\", \n \"Earth\", \n \"Mars\", \n \"Jupiter\", \n \"Saturn\", \n \"Uranus\", \n \"Neptune\", \n \"Pluto\"\n ]\n}\n"
},
{
"code": null,
"e": 40819,
"s": 40522,
"text": "Flow mappings in YAML represent an unordered collection of key-value pairs; they are also called mapping nodes. Note that keys should be kept unique; if a key is duplicated in a flow mapping structure, it will generate an error. The key order is generated in the serialization tree."
},
{
"code": null,
"e": 40873,
"s": 40819,
"text": "An example of flow mapping structure is shown below −"
},
{
"code": null,
"e": 41221,
"s": 40873,
"text": "%YAML 1.1\npaper:\n uuid: 8a8cbf60-e067-11e3-8b68-0800200c9a66\n name: On formally undecidable propositions of Principia Mathematica and related systems I.\n author: Kurt Gödel.\ntags:\n - tag:\n uuid: 98fb0d90-e067-11e3-8b68-0800200c9a66\n name: Mathematics\n - tag:\n uuid: 3f25f680-e068-11e3-8b68-0800200c9a66\n name: Logic"
},
{
"code": null,
"e": 41303,
"s": 41221,
"text": "The output of mapped sequence (unordered list) in JSON format is as shown below −"
},
{
"code": null,
"e": 41806,
"s": 41303,
"text": "{\n \"paper\": {\n \"uuid\": \"8a8cbf60-e067-11e3-8b68-0800200c9a66\",\n \"name\": \"On formally undecidable propositions of Principia Mathematica and related systems I.\",\n \"author\": \"Kurt Gödel.\"\n },\n \"tags\": [\n {\n \"tag\": {\n \"uuid\": \"98fb0d90-e067-11e3-8b68-0800200c9a66\",\n \"name\": \"Mathematics\"\n }\n },\n {\n \"tag\": {\n \"uuid\": \"3f25f680-e068-11e3-8b68-0800200c9a66\",\n \"name\": \"Logic\"\n }\n }\n ]\n}\n"
},
{
"code": null,
"e": 41932,
"s": 41806,
"text": "If you observe the output shown above, you can see that the key names are kept unique in the YAML mapping structure."
},
{
"code": null,
"e": 42129,
"s": 41932,
"text": "The block sequences of YAML represent a series of nodes. Each item is denoted by a leading “-” indicator. Note that the “-” indicator in YAML should be separated from the node with a white space."
},
{
"code": null,
"e": 42189,
"s": 42129,
"text": "The basic representation of block sequence is given below −"
},
{
"code": null,
"e": 42232,
"s": 42189,
"text": "block sequence:\n··- one↓\n - two : three↓\n"
},
{
"code": null,
"e": 42310,
"s": 42232,
"text": "Observe the following examples for a better understanding of block sequences."
},
{
"code": null,
"e": 42423,
"s": 42310,
"text": "port: &ports\n adapter: postgres\n host: localhost\n\ndevelopment:\n database: myapp_development\n <<: *ports"
},
{
"code": null,
"e": 42485,
"s": 42423,
"text": "The output of block sequences in JSON format is given below −"
},
{
"code": null,
"e": 42683,
"s": 42485,
"text": "{\n \"port\": {\n \"adapter\": \"postgres\",\n \"host\": \"localhost\"\n },\n \"development\": {\n \"database\": \"myapp_development\",\n \"adapter\": \"postgres\",\n \"host\": \"localhost\"\n }\n}\n"
},
{
"code": null,
"e": 42973,
"s": 42683,
"text": "A YAML schema is defined as a combination of a set of tags and includes a mechanism for resolving non-specific tags. The failsafe schema in YAML is created in such a manner that it can be used with any YAML document. It is also considered the recommended schema for a generic YAML document."
},
{
"code": null,
"e": 43050,
"s": 42973,
"text": "There are two types of failsafe schema: Generic Mapping and Generic Sequence"
},
{
"code": null,
"e": 43218,
"s": 43050,
"text": "It represents an associative container. Here, each key is unique in the association and mapped to exactly one value. YAML includes no restrictions for key definitions."
},
{
"code": null,
"e": 43279,
"s": 43218,
"text": "An example for representing generic mapping is given below −"
},
{
"code": null,
"e": 43392,
"s": 43279,
"text": "Clark : Evans\nIngy : döt Net\nOren : Ben-Kiki\nFlow style: !!map { Clark: Evans, Ingy: döt Net, Oren: Ben-Kiki }"
},
{
"code": null,
"e": 43464,
"s": 43392,
"text": "The output of generic mapping structure in JSON format is shown below −"
},
{
"code": null,
"e": 43648,
"s": 43464,
"text": "{\n \"Oren\": \"Ben-Kiki\", \n \"Ingy\": \"d\\u00f6t Net\", \n \"Clark\": \"Evans\", \n \"Flow style\": {\n \"Oren\": \"Ben-Kiki\", \n \"Ingy\": \"d\\u00f6t Net\", \n \"Clark\": \"Evans\"\n }\n}\n"
},
{
"code": null,
"e": 43792,
"s": 43648,
"text": "It represents a type of sequence. It includes a collection indexed by sequential integers starting with zero. It is represented with !!seq tag."
},
{
"code": null,
"e": 43905,
"s": 43792,
"text": "Clark : Evans\nIngy : döt Net\nOren : Ben-Kiki\nFlow style: !!seq { Clark: Evans, Ingy: döt Net, Oren: Ben-Kiki }"
},
{
"code": null,
"e": 43954,
"s": 43905,
"text": "The output for this generic sequence of the failsafe schema is shown below −"
},
{
"code": null,
"e": 44160,
"s": 43954,
"text": "{\n \"Oren\": \"Ben-Kiki\", \n \"Ingy\": \"d\\u00f6t Net\", \n \"Clark\": \"Evans\", \n \"Flow style\": {\n \"Oren\": \"Ben-Kiki\", \n \"Ingy\": \"d\\u00f6t Net\", \n \"Clark\": \"Evans\"\n }\n}"
},
{
"code": null,
"e": 44542,
"s": 44160,
"text": "The JSON schema in YAML is considered the common denominator of most modern computer languages. It allows JSON files to be parsed as YAML. It is strongly recommended in YAML that other schemas be based on the JSON schema, primarily because it uses a user-friendly key-value combination: messages can be encoded as keys and used as and when needed."
},
{
"code": null,
"e": 44706,
"s": 44542,
"text": "The null type in the JSON schema is a scalar that lacks a value. A mapping entry in the JSON schema is represented as a key-value pair, where null is treated as valid."
},
{
"code": null,
"e": 44757,
"s": 44706,
"text": "A null JSON schema is represented as shown below −"
},
{
"code": null,
"e": 44822,
"s": 44757,
"text": "!!null null: value for null key\nkey with null value: !!null null"
},
{
"code": null,
"e": 44877,
"s": 44822,
"text": "The output of JSON representation is mentioned below −"
},
{
"code": null,
"e": 44947,
"s": 44877,
"text": "{\n \"null\": \"value for null key\", \n \"key with null value\": null\n}\n"
},
{
"code": null,
"e": 45006,
"s": 44947,
"text": "The following example represents the Boolean JSON schema −"
},
{
"code": null,
"e": 45078,
"s": 45006,
"text": "YAML is a superset of JSON: !!bool true\nPluto is a planet: !!bool false"
},
{
"code": null,
"e": 45136,
"s": 45078,
"text": "The following is the output for the same in JSON format −"
},
{
"code": null,
"e": 45211,
"s": 45136,
"text": "{\n \"YAML is a superset of JSON\": true, \n \"Pluto is a planet\": false\n}\n"
},
{
"code": null,
"e": 45270,
"s": 45211,
"text": "The following example represents the integer JSON schema −"
},
{
"code": null,
"e": 45324,
"s": 45270,
"text": "negative: !!int -12\nzero: !!int 0\npositive: !!int 34\n"
},
{
"code": null,
"e": 45383,
"s": 45324,
"text": "{\n \"positive\": 34, \n \"zero\": 0, \n \"negative\": -12\n}\n"
},
{
"code": null,
"e": 45447,
"s": 45383,
"text": "The tags in JSON schema is represented with following example −"
},
{
"code": null,
"e": 45592,
"s": 45447,
"text": "A null: null\nBooleans: [ true, false ]\nIntegers: [ 0, -0, 3, -19 ]\nFloats: [ 0., -0.0, 12e03, -2E+05 ]\nInvalid: [ True, Null, 0o7, 0x3A, +12.3 ]"
},
{
"code": null,
"e": 45638,
"s": 45592,
"text": "You can find the JSON Output as shown below −"
},
{
"code": null,
"e": 45975,
"s": 45638,
"text": "{\n \"Integers\": [\n 0, \n 0, \n 3, \n -19\n ], \n \n \"Booleans\": [\n true, \n false\n ], \n \"A null\": null, \n\n \"Invalid\": [\n true, \n null, \n \"0o7\", \n 58, \n 12.300000000000001\n ], \n \n \"Floats\": [\n 0.0, \n -0.0, \n \"12e03\", \n \"-2E+05\"\n ]\n}\n"
}
]
Spring Boot Hibernate Example | Spring Boot Hibernate Integration | Boot CRUD Hibernate
In this tutorial, we are going to show a simple Spring Boot with Hibernate example. Spring Boot and Hibernate make a powerful combination, since Hibernate has its own importance as an ORM framework.
Technologies:
Spring Boot 1.2.3.RELEASE
Java 1.7
Hibernate 4.3
Maven
MySql
A typical Maven project structure.
Project Dependencies:
pom.xml
Here the most important thing is to add the spring-boot-starter-data-jpa dependency; it will pull in all the required internal dependencies.
Recommended: Spring Boot with JPA Integration
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.springframework.samples</groupId>
<artifactId>SpringBoot_Hibernate_Example</artifactId>
<version>0.0.1-SNAPSHOT</version>
<properties> <!-- Generic properties -->
<java.version>1.7</java.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
</properties>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.2.3.RELEASE</version>
<relativePath />
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
Database Schema :
Create a person table in your database, since we are going to access it from our application (Spring Boot Hibernate integration).
CREATE TABLE person (
id BIGINT(20) NOT NULL AUTO_INCREMENT,
pcity VARCHAR(255) NULL DEFAULT NULL,
name VARCHAR(255) NULL DEFAULT NULL,
PRIMARY KEY (`id`)
)
Create an Entity Class for person table.
package com.onlinetutorialspoint.model;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
@Entity
@Table(name = "person")
public class Person {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String name;
@Column(name = "pcity")
private String city;
public Person() {
super();
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getCity() {
return city;
}
public void setCity(String city) {
this.city = city;
}
@Override
public String toString() {
return "Person [pid=" + id + ", pName=" + name + ", pCity=" + city
+ "]";
}
}
There is nothing special in the Person.java class as part of Spring Boot Hibernate; it is the same simple entity file you would write in any plain Hibernate application.
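Apart from its JPA annotations, the entity above is a plain Java bean. The following sketch uses a trimmed-down copy of the class (annotations removed, values purely illustrative) to show how its accessors and toString() behave:

```java
// Trimmed copy of the entity above, without the JPA annotations,
// to show that it behaves as a plain Java bean
class Person {
    private Long id;
    private String name;
    private String city;

    public void setId(Long id) { this.id = id; }
    public void setName(String name) { this.name = name; }
    public void setCity(String city) { this.city = city; }

    @Override
    public String toString() {
        return "Person [pid=" + id + ", pName=" + name + ", pCity=" + city + "]";
    }
}

public class PersonDemo {
    public static void main(String[] args) {
        Person p = new Person();
        p.setId(1L);
        p.setName("John");
        p.setCity("Hyderabad");
        // Prints the same format the real entity's toString() produces
        System.out.println(p);
    }
}
```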
Configuration Properties :
This file holds the configuration information needed to get a connection to the database, and it also consists of Hibernate settings such as hibernate.hbm2ddl.auto.
Properties File :
# Database
db.driver: com.mysql.jdbc.Driver
db.url: jdbc:mysql://localhost:3306/onlinetutorialspoint
db.username: root
db.password: 12345
# Hibernate
hibernate.dialect: org.hibernate.dialect.MySQL5Dialect
hibernate.show_sql: true
hibernate.hbm2ddl.auto: create
entitymanager.packagesToScan: com
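Spring Boot resolves these keys at startup before injecting them through @Value. Conceptually the lookup behaves like java.util.Properties, which accepts ':' as well as '=' as the key/value separator. A minimal sketch using two of the keys from the file above (this is an illustration of the lookup, not Spring's actual property machinery):

```java
import java.io.StringReader;
import java.util.Properties;

public class PropsDemo {

    // Loads the same key/value pairs as in the properties file above;
    // java.util.Properties accepts ':' as a key/value separator
    static Properties load() throws Exception {
        String config = "db.driver: com.mysql.jdbc.Driver\n"
                      + "hibernate.hbm2ddl.auto: create\n";
        Properties props = new Properties();
        props.load(new StringReader(config));
        return props;
    }

    public static void main(String[] args) throws Exception {
        Properties props = load();
        System.out.println(props.getProperty("db.driver"));
        System.out.println(props.getProperty("hibernate.hbm2ddl.auto"));
    }
}
```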
Create a DBConfiguration:
As part of the Spring Boot Hibernate integration, this is the main configuration file which is used to create a data source, Hibernate session Factory and managing transactions.
package com.onlinetutorialspoint.config;
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.orm.hibernate4.HibernateTransactionManager;
import org.springframework.orm.hibernate4.LocalSessionFactoryBean;
import org.springframework.transaction.annotation.EnableTransactionManagement;
@Configuration
@EnableTransactionManagement
public class DBConfiguration {
@Value("${db.driver}")
private String DRIVER;
@Value("${db.password}")
private String PASSWORD;
@Value("${db.url}")
private String URL;
@Value("${db.username}")
private String USERNAME;
@Value("${hibernate.dialect}")
private String DIALECT;
@Value("${hibernate.show_sql}")
private String SHOW_SQL;
@Value("${hibernate.hbm2ddl.auto}")
private String HBM2DDL_AUTO;
@Value("${entitymanager.packagesToScan}")
private String PACKAGES_TO_SCAN;
@Bean
public DataSource dataSource() {
DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName(DRIVER);
dataSource.setUrl(URL);
dataSource.setUsername(USERNAME);
dataSource.setPassword(PASSWORD);
return dataSource;
}
@Bean
public LocalSessionFactoryBean sessionFactory() {
LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
sessionFactory.setDataSource(dataSource());
sessionFactory.setPackagesToScan(PACKAGES_TO_SCAN);
Properties hibernateProperties = new Properties();
hibernateProperties.put("hibernate.dialect", DIALECT);
hibernateProperties.put("hibernate.show_sql", SHOW_SQL);
hibernateProperties.put("hibernate.hbm2ddl.auto", HBM2DDL_AUTO);
sessionFactory.setHibernateProperties(hibernateProperties);
return sessionFactory;
}
@Bean
public HibernateTransactionManager transactionManager() {
HibernateTransactionManager transactionManager = new HibernateTransactionManager();
transactionManager.setSessionFactory(sessionFactory().getObject());
return transactionManager;
}
}
DBConfiguration.java is a configuration class which Spring Boot processes while loading the application context.
@Configuration allows you to define bean configurations. You can learn more about @Configuration and @Bean here.
@EnableTransactionManagement enables the annotation-driven transaction management capability; we can also enable transactions by using the <tx:*> XML namespace.
@Value is an annotation provided by the Spring Framework since the Spring 3.0 release. It is used for expression-driven dependency injection; a typical use case is to assign field values using “${db.driver}”-style expressions.
Create a DAO Class:
The PersonDAO.java class performs the basic CRUD operations. To make the Spring Boot Hibernate example as simple as possible, I have created the method to get all persons from the database here.
package com.onlinetutorialspoint.dao;
import java.util.List;
import javax.transaction.Transactional;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import com.onlinetutorialspoint.model.Person;
@Repository
@Transactional
public class PersonDAO {
@Autowired
private SessionFactory sessionFactory;
private Session getSession() {
return sessionFactory.getCurrentSession();
}
public String savePerson(Person person) {
// save() returns the generated identifier, not a status flag
Long generatedId = (Long) getSession().save(person);
if (generatedId != null) {
return "Success";
}else{
return "Error while Saving Person";
}
}
public boolean delete(Person person) {
getSession().delete(person);
return true;
}
@SuppressWarnings("unchecked")
public List<Person> getAllPersons() {
return getSession().createQuery("from Person").list();
}
}
Create a Spring Controller :
package com.onlinetutorialspoint.controller;

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

import com.onlinetutorialspoint.dao.PersonDAO;
import com.onlinetutorialspoint.model.Person;

@Controller
@RequestMapping(value = "/person")
public class PersonController {
    @Autowired
    private PersonDAO personDao;

    @RequestMapping(value = "/delete")
    @ResponseBody
    public String delete(long id) {
        try {
            Person person = new Person();
            person.setId(id);
            personDao.delete(person);
        } catch (Exception ex) {
            return ex.getMessage();
        }
        return "Person successfully deleted!";
    }

    @RequestMapping(value = "/save")
    @ResponseBody
    public String create(String name, String city) {
        try {
            Person person = new Person();
            person.setName(name);
            person.setCity(city);
            personDao.savePerson(person);
        } catch (Exception ex) {
            return ex.getMessage();
        }
        return "Person successfully saved!";
    }

    @RequestMapping(value = "/allPersons")
    @ResponseBody
    public List<Person> getAllPersons() {
        try {
            return personDao.getAllPersons();
        } catch (Exception ex) {
            return null;
        }
    }
}
Create a Spring Boot Application Class:
package com.onlinetutorialspoint;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
Well! We are done with the Spring Boot Hibernate example. We need to run the Application.java class now. If everything went well, you should see output logs like the ones below.
Now we can access the application at http://localhost:8080/person/
Insert the Person :
http://localhost:8080/person/save?name=chandra shekhar Goka&city=Hiderabad
Show All Persons :
http://localhost:8080/person/allPersons
Delete a Person By Id :
http://localhost:8080/person/delete?id=1
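Note that the sample save URL above contains a raw space in the name; browsers tolerate that, but a proper HTTP client should encode query parameters. A small sketch of building such a URL with the JDK's URLEncoder (the SaveUrlDemo class and its sample values are hypothetical, not part of the tutorial project):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class SaveUrlDemo {

    // Build the /person/save URL with properly encoded query parameters.
    static String saveUrl(String base, String name, String city) {
        try {
            return base + "/person/save?name=" + URLEncoder.encode(name, "UTF-8")
                    + "&city=" + URLEncoder.encode(city, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always supported
        }
    }

    public static void main(String[] args) {
        // URLEncoder encodes the space in the name as '+'
        System.out.println(saveUrl("http://localhost:8080", "chandra shekhar", "Hyderabad"));
        // prints http://localhost:8080/person/save?name=chandra+shekhar&city=Hyderabad
    }
}
```

Spring decodes the parameters before binding them to the `name` and `city` arguments of the controller's `create` method.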
Happy Learning 🙂
springboot_hibernate_example
Spring Boot Hibernate With CRUD Operations
File size: 22 MB
Downloads: 5269
Spring Boot JPA Integration Example
Spring Boot Security MySQL Database Integration Example
Spring Boot PDF iText integration with String Template
How to set Spring Boot SetTimeZone
Spring Boot How to change the Tomcat to Jetty Server
How to change Spring Boot Tomcat Port Number
Simple Spring Boot Example
How To Change Spring Boot Context Path
Spring Boot Actuator Database Health Check
Spring Boot Multiple Data Sources Example
Spring Boot MVC Example Tutorials
Spring Boot MockMvc JUnit Test Example
External Apache ActiveMQ Spring Boot Example
Spring Boot H2 Database + JDBC Template Example
Spring Boot Kafka Producer Example
prabhakar
June 28, 2017 at 10:51 am - Reply
Excellent article
but i have one requirement how to get a single person details using id
Viswanath
July 31, 2017 at 5:11 pm - Reply
I believe using the Repository of Spring Data is more straightforward .. you need to write any persistence code, ie handling session, sessionfactory, datasource, etc.. it is very simple
Neelam Panwar
August 14, 2017 at 6:18 pm - Reply
Hi Chandrashekhar,
I am new to spring boot and I have followed exactly this configuration and getting following response.
i just want me response to be list of json objects(present under entity tag in below response).
how can i remove other unwanted stuff from this resoponse body.
please suggest.
{
"context": {
"headers": {},
"entity": [
{
"name": "abe",
"age": "23"
},
{
"name": "omy",
"age": "42",
}
],
"entityType": "java.util.ArrayList",
"entityAnnotations": [],
"entityStream": {
"committed": false,
"closed": false
},
"length": -1,
"language": null,
"location": null,
"date": null,
"lastModified": null,
"committed": false,
"allowedMethods": [],
"mediaType": null,
"acceptableMediaTypes": [
{
"type": "*",
"subtype": "*",
"parameters": {},
"quality": 1000,
"wildcardType": true,
"wildcardSubtype": true
}
],
"lengthLong": -1,
"links": [],
"entityTag": null,
"stringHeaders": {},
"entityClass": "java.util.ArrayList",
"responseCookies": {},
"acceptableLanguages": [
"*"
],
"requestCookies": {}
},
Shailendra
September 18, 2017 at 4:03 am - Reply
@Configuration is an annotation, you can get more about @Configuration and @Bean here..
Its a link, Please mkae it look like a link. 🙂
Ramandeep Singh
September 26, 2017 at 10:09 am - Reply
download example is empty
Melanie Glastrong
November 9, 2017 at 8:01 am - Reply
Such a nice blog.
I have read an amazing article here.
Guru
December 12, 2018 at 3:25 pm - Reply
session factory not getting inject.throwing null pointer exception
Vishwa
April 8, 2019 at 1:00 pm - Reply
After importing project
Project build error: Non-resolvable parent POM for org.springframework.samples:SpringBoot_Hibernate_Example:0.0.1-SNAPSHOT: Failure to transfer org.springframework.boot:spring-boot-starter-parent:pom:
1.2.3.RELEASE from https://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced. Original
error: Could not transfer artifact org.springframework.boot:spring-boot-starter-parent:pom:1.2.3.RELEASE from/to central (https://repo.maven.apache.org/maven2): Failed to connect to repo.maven.apache.org/
151.101.40.215:443 and ‘parent.relativePath’ points at no local POM
},
{
"code": null,
"e": 17901,
"s": 17499,
"text": "Hi Chandrashekhar,\nI am new to spring boot and I have followed exactly this configuration and getting following response.\ni just want me response to be list of json objects(present under entity tag in below response).\nhow can i remove other unwanted stuff from this resoponse body.\nplease suggest.\n{\n“context”: {\n“headers”: {},\n“entity”: [\n{\n“name”: “abe”,\n“age”: “23”\n},\n{\n“name”: “omy”,\n“age”: “42”,"
},
{
"code": null,
"e": 18515,
"s": 17901,
"text": " }\n],\n“entityType”: “java.util.ArrayList”,\n“entityAnnotations”: [],\n“entityStream”: {\n“committed”: false,\n“closed”: false\n},\n“length”: -1,\n“language”: null,\n“location”: null,\n“date”: null,\n“lastModified”: null,\n“committed”: false,\n“allowedMethods”: [],\n“mediaType”: null,\n“acceptableMediaTypes”: [\n{\n“type”: “*”,\n“subtype”: “*”,\n“parameters”: {},\n“quality”: 1000,\n“wildcardType”: true,\n“wildcardSubtype”: true\n}\n],\n“lengthLong”: -1,\n“links”: [],\n“entityTag”: null,\n“stringHeaders”: {},\n“entityClass”: “java.util.ArrayList”,\n“responseCookies”: {},\n“acceptableLanguages”: [\n“*”\n],\n“requestCookies”: {}\n},"
},
{
"code": null,
"e": 18710,
"s": 18515,
"text": "\n\n\n\n\nShailendra\nSeptember 18, 2017 at 4:03 am - Reply \n\n@Configuration is an annotation, you can get more about @Configuration and @Bean here..\nIts a link, Please mkae it look like a link. 🙂\n\n\n\n"
},
{
"code": null,
"e": 18798,
"s": 18710,
"text": "@Configuration is an annotation, you can get more about @Configuration and @Bean here.."
},
{
"code": null,
"e": 18845,
"s": 18798,
"text": "Its a link, Please mkae it look like a link. 🙂"
},
{
"code": null,
"e": 18937,
"s": 18845,
"text": "\n\n\n\n\nRamandeep Singh\nSeptember 26, 2017 at 10:09 am - Reply \n\ndownload example is empty\n\n\n\n"
},
{
"code": null,
"e": 18963,
"s": 18937,
"text": "download example is empty"
},
{
"code": null,
"e": 19083,
"s": 18963,
"text": "\n\n\n\n\nMelanie Glastrong\nNovember 9, 2017 at 8:01 am - Reply \n\nSuch a nice blog.\nI have read an amazing article here.\n\n\n\n"
},
{
"code": null,
"e": 19101,
"s": 19083,
"text": "Such a nice blog."
},
{
"code": null,
"e": 19138,
"s": 19101,
"text": "I have read an amazing article here."
},
{
"code": null,
"e": 19258,
"s": 19138,
"text": "\n\n\n\n\nGuru\nDecember 12, 2018 at 3:25 pm - Reply \n\nsession factory not getting inject.throwing null pointer exception\n\n\n\n"
},
{
"code": null,
"e": 19325,
"s": 19258,
"text": "session factory not getting inject.throwing null pointer exception"
},
{
"code": null,
"e": 20082,
"s": 19325,
"text": "\n\n\n\n\nVishwa\nApril 8, 2019 at 1:00 pm - Reply \n\nAfter importing project\nProject build error: Non-resolvable parent POM for org.springframework.samples:SpringBoot_Hibernate_Example:0.0.1-SNAPSHOT: Failure to transfer org.springframework.boot:spring-boot-starter-parent:pom:\n1.2.3.RELEASE from https://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced. Original\nerror: Could not transfer artifact org.springframework.boot:spring-boot-starter-parent:pom:1.2.3.RELEASE from/to central (https://repo.maven.apache.org/maven2): Failed to connect to repo.maven.apache.org/\n151.101.40.215:443 and ‘parent.relativePath’ points at no local POM\n\n\n\n"
},
{
"code": null,
"e": 20106,
"s": 20082,
"text": "After importing project"
},
{
"code": null,
"e": 20788,
"s": 20106,
"text": "Project build error: Non-resolvable parent POM for org.springframework.samples:SpringBoot_Hibernate_Example:0.0.1-SNAPSHOT: Failure to transfer org.springframework.boot:spring-boot-starter-parent:pom:\n1.2.3.RELEASE from https://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced. Original\nerror: Could not transfer artifact org.springframework.boot:spring-boot-starter-parent:pom:1.2.3.RELEASE from/to central (https://repo.maven.apache.org/maven2): Failed to connect to repo.maven.apache.org/\n151.101.40.215:443 and ‘parent.relativePath’ points at no local POM"
},
{
"code": null,
"e": 20794,
"s": 20792,
"text": "Δ"
},
{
"code": null,
"e": 20821,
"s": 20794,
"text": " Spring Boot – Hello World"
},
{
"code": null,
"e": 20848,
"s": 20821,
"text": " Spring Boot – MVC Example"
},
{
"code": null,
"e": 20882,
"s": 20848,
"text": " Spring Boot- Change Context Path"
},
{
"code": null,
"e": 20923,
"s": 20882,
"text": " Spring Boot – Change Tomcat Port Number"
},
{
"code": null,
"e": 20968,
"s": 20923,
"text": " Spring Boot – Change Tomcat to Jetty Server"
},
{
"code": null,
"e": 21006,
"s": 20968,
"text": " Spring Boot – Tomcat session timeout"
},
{
"code": null,
"e": 21040,
"s": 21006,
"text": " Spring Boot – Enable Random Port"
},
{
"code": null,
"e": 21071,
"s": 21040,
"text": " Spring Boot – Properties File"
},
{
"code": null,
"e": 21105,
"s": 21071,
"text": " Spring Boot – Beans Lazy Loading"
},
{
"code": null,
"e": 21138,
"s": 21105,
"text": " Spring Boot – Set Favicon image"
},
{
"code": null,
"e": 21171,
"s": 21138,
"text": " Spring Boot – Set Custom Banner"
},
{
"code": null,
"e": 21211,
"s": 21171,
"text": " Spring Boot – Set Application TimeZone"
},
{
"code": null,
"e": 21236,
"s": 21211,
"text": " Spring Boot – Send Mail"
},
{
"code": null,
"e": 21267,
"s": 21236,
"text": " Spring Boot – FileUpload Ajax"
},
{
"code": null,
"e": 21291,
"s": 21267,
"text": " Spring Boot – Actuator"
},
{
"code": null,
"e": 21337,
"s": 21291,
"text": " Spring Boot – Actuator Database Health Check"
},
{
"code": null,
"e": 21360,
"s": 21337,
"text": " Spring Boot – Swagger"
},
{
"code": null,
"e": 21387,
"s": 21360,
"text": " Spring Boot – Enable CORS"
},
{
"code": null,
"e": 21433,
"s": 21387,
"text": " Spring Boot – External Apache ActiveMQ Setup"
},
{
"code": null,
"e": 21473,
"s": 21433,
"text": " Spring Boot – Inmemory Apache ActiveMq"
},
{
"code": null,
"e": 21502,
"s": 21473,
"text": " Spring Boot – Scheduler Job"
},
{
"code": null,
"e": 21536,
"s": 21502,
"text": " Spring Boot – Exception Handling"
},
{
"code": null,
"e": 21566,
"s": 21536,
"text": " Spring Boot – Hibernate CRUD"
},
{
"code": null,
"e": 21602,
"s": 21566,
"text": " Spring Boot – JPA Integration CRUD"
},
{
"code": null,
"e": 21635,
"s": 21602,
"text": " Spring Boot – JPA DataRest CRUD"
},
{
"code": null,
"e": 21668,
"s": 21635,
"text": " Spring Boot – JdbcTemplate CRUD"
},
{
"code": null,
"e": 21712,
"s": 21668,
"text": " Spring Boot – Multiple Data Sources Config"
},
{
"code": null,
"e": 21746,
"s": 21712,
"text": " Spring Boot – JNDI Configuration"
},
{
"code": null,
"e": 21778,
"s": 21746,
"text": " Spring Boot – H2 Database CRUD"
},
{
"code": null,
"e": 21806,
"s": 21778,
"text": " Spring Boot – MongoDB CRUD"
},
{
"code": null,
"e": 21837,
"s": 21806,
"text": " Spring Boot – Redis Data CRUD"
},
{
"code": null,
"e": 21878,
"s": 21837,
"text": " Spring Boot – MVC Login Form Validation"
},
{
"code": null,
"e": 21912,
"s": 21878,
"text": " Spring Boot – Custom Error Pages"
},
{
"code": null,
"e": 21937,
"s": 21912,
"text": " Spring Boot – iText PDF"
},
{
"code": null,
"e": 21971,
"s": 21937,
"text": " Spring Boot – Enable SSL (HTTPs)"
},
{
"code": null,
"e": 22007,
"s": 21971,
"text": " Spring Boot – Basic Authentication"
},
{
"code": null,
"e": 22053,
"s": 22007,
"text": " Spring Boot – In Memory Basic Authentication"
},
{
"code": null,
"e": 22104,
"s": 22053,
"text": " Spring Boot – Security MySQL Database Integration"
},
{
"code": null,
"e": 22146,
"s": 22104,
"text": " Spring Boot – Redis Cache – Redis Server"
},
{
"code": null,
"e": 22177,
"s": 22146,
"text": " Spring Boot – Hazelcast Cache"
},
{
"code": null,
"e": 22200,
"s": 22177,
"text": " Spring Boot – EhCache"
},
{
"code": null,
"e": 22230,
"s": 22200,
"text": " Spring Boot – Kafka Producer"
},
{
"code": null,
"e": 22260,
"s": 22230,
"text": " Spring Boot – Kafka Consumer"
},
{
"code": null,
"e": 22309,
"s": 22260,
"text": " Spring Boot – Kafka JSON Message to Kafka Topic"
},
{
"code": null,
"e": 22343,
"s": 22309,
"text": " Spring Boot – RabbitMQ Publisher"
},
{
"code": null,
"e": 22376,
"s": 22343,
"text": " Spring Boot – RabbitMQ Consumer"
},
{
"code": null,
"e": 22405,
"s": 22376,
"text": " Spring Boot – SOAP Consumer"
},
{
"code": null,
"e": 22437,
"s": 22405,
"text": " Spring Boot – Soap WebServices"
},
{
"code": null,
"e": 22474,
"s": 22437,
"text": " Spring Boot – Batch Csv to Database"
},
{
"code": null,
"e": 22503,
"s": 22474,
"text": " Spring Boot – Eureka Server"
},
{
"code": null,
"e": 22532,
"s": 22503,
"text": " Spring Boot – MockMvc JUnit"
}
] |
Investigation Using Emails
|
The previous chapters discussed the importance and the process of network forensics and the concepts involved. In this chapter, let us learn about the role of emails in digital forensics and their investigation using Python.
Emails play a very important role in business communications and have emerged as one of the most important applications on the internet. They are a convenient mode for sending messages as well as documents, not only from computers but also from other electronic gadgets such as mobile phones and tablets.
The negative side of emails is that criminals may leak important information about their company. Hence, the role of emails in digital forensics has increased in recent years. In digital forensics, emails are considered crucial evidence, and Email Header Analysis has become important for collecting evidence during the forensic process.
An investigator has the following goals while performing email forensics −
To identify the main criminal
To collect necessary evidence
To present the findings
To build the case
Email forensics plays a very important role in an investigation, as most communication in the present era relies on emails. However, an email forensic investigator may face the following challenges during the investigation −
The biggest challenge in email forensics is the use of fake emails that are created by manipulating and scripting headers, etc. In this category, criminals also use temporary email, which is a service that allows a registered user to receive email at a temporary address that expires after a certain time period.
Another challenge in email forensics is spoofing, in which criminals present an email as someone else's. In this case, the machine will receive both the fake as well as the original IP address.
In some cases, the email server strips identifying information from the email message before forwarding it further. This leads to another big challenge for email investigations.
Email forensics is the study of the source and content of email as evidence, to identify the actual sender and recipient of a message along with some other information such as the date/time of transmission and the intention of the sender. It involves investigating metadata, port scanning, as well as keyword searching.
Some of the common techniques which can be used for email forensic investigation are −
Header Analysis
Server investigation
Network Device Investigation
Sender Mailer Fingerprints
Software Embedded Identifiers
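As a small illustration of the first technique, header analysis can be performed with Python's standard email package. The sketch below parses a made-up raw message (the addresses and relay host are invented for illustration) and walks its Received chain:

```python
from email.parser import BytesParser
from email.policy import default

# A made-up raw message for illustration only.
raw = (b"Received: from mail.example.com ([203.0.113.5]) "
       b"by mx.example.org; Mon, 1 Jan 2018 10:00:00 +0000\r\n"
       b"From: Alice <alice@example.com>\r\n"
       b"To: Bob <bob@example.org>\r\n"
       b"Subject: Quarterly report\r\n"
       b"\r\n"
       b"Body text\r\n")

msg = BytesParser(policy=default).parsebytes(raw)

# Each Received header records one relay hop; reading them bottom-up
# traces the path the message took towards the recipient.
for hop in msg.get_all("Received", []):
    print(hop)
print(msg["From"], "->", msg["To"])
```

On a real message there is usually one Received header per relay, and inconsistencies between hops are a common indicator of forged headers.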
In the following sections, we are going to learn how to fetch information using Python for the purpose of email investigation.
EML files are basically emails in a file format that is widely used for storing email messages. They are structured text files that are compatible across multiple email clients such as Microsoft Outlook, Outlook Express, and Windows Live Mail.
An EML file stores email headers, body content, and attachment data as plain text. It uses base64 to encode binary data and Quoted-Printable (QP) encoding to store content information. A Python script that can be used to extract information from an EML file is given below −
First, import the following Python libraries as shown below −
from __future__ import print_function
from argparse import ArgumentParser, FileType
from email import message_from_file
import os
import quopri
import base64
In the above libraries, quopri is used to decode the QP-encoded values from EML files. Any base64-encoded data can be decoded with the help of the base64 library.
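The two encodings can be seen in isolation with a couple of throwaway values (the byte strings below are illustrative, not taken from a real message):

```python
import base64
import quopri

# Quoted-Printable stores non-ASCII bytes as =XX escape sequences.
qp_encoded = b"Caf=C3=A9"
print(quopri.decodestring(qp_encoded).decode("utf-8"))   # Café

# Base64 is typically used for binary payloads such as attachments.
b64_encoded = base64.b64encode(b"attachment bytes")
print(base64.b64decode(b64_encoded))                     # b'attachment bytes'
```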
Next, let us provide an argument for the command-line handler. Note that here it will accept only one argument, which would be the path to the EML file, as shown below −
if __name__ == '__main__':
parser = ArgumentParser('Extracting information from EML file')
parser.add_argument("EML_FILE",help="Path to EML File", type=FileType('r'))
args = parser.parse_args()
main(args.EML_FILE)
Now, we need to define the main() function, in which we will use the message_from_file() method from the email library to read the file-like object. Here we will access the headers, body content, attachments, and other payload information using the resulting variable named emlfile, as shown in the code given below −
def main(input_file):
emlfile = message_from_file(input_file)
for key, value in emlfile._headers:
print("{}: {}".format(key, value))
print("\nBody\n")
if emlfile.is_multipart():
for part in emlfile.get_payload():
process_payload(part)
else:
      process_payload(emlfile)  # a non-multipart message is itself the payload container
Now, we need to define the process_payload() method, in which we will extract the message body content using the get_payload() method. We will decode QP-encoded data using the quopri.decodestring() function. We will also check the content MIME type so that the storage of the email can be handled properly. Observe the code given below −
def process_payload(payload):
print(payload.get_content_type() + "\n" + "=" * len(payload.get_content_type()))
body = quopri.decodestring(payload.get_payload())
if payload.get_charset():
body = body.decode(payload.get_charset())
else:
try:
body = body.decode()
except UnicodeDecodeError:
body = body.decode('cp1252')
if payload.get_content_type() == "text/html":
outfile = os.path.basename(args.EML_FILE.name) + ".html"
open(outfile, 'w').write(body)
elif payload.get_content_type().startswith('application'):
outfile = open(payload.get_filename(), 'wb')
body = base64.b64decode(payload.get_payload())
outfile.write(body)
outfile.close()
print("Exported: {}\n".format(outfile.name))
else:
print(body)
After executing the above script, we will get the header information along with various payloads on the console.
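The same flow can be exercised without an evidence file by parsing an in-memory message with message_from_string() (the addresses and text below are made up for illustration):

```python
from email import message_from_string

sample = """\
From: analyst@example.com
To: team@example.org
Subject: Test evidence
Content-Type: text/plain

Hello from the lab.
"""

eml = message_from_string(sample)

# Headers come back as (key, value) pairs, just as in the script above.
for key, value in eml.items():
    print("{}: {}".format(key, value))

# A non-multipart message exposes its body directly via get_payload().
print(eml.get_payload())
```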
Email messages come in many different formats. MSG is one such format, used by Microsoft Outlook and Exchange. Files with the MSG extension may contain plain ASCII text for the headers and the main message body, as well as hyperlinks and attachments.
In this section, we will learn how to extract information from an MSG file using the Outlook API. Note that the following Python script will work only on Windows. For this, we need to install a third-party Python library named pywin32 as follows −
pip install pywin32
Now, import the following libraries using the commands shown −
from __future__ import print_function
from argparse import ArgumentParser
import os
import win32com.client
import pywintypes
Now, let us provide an argument for the command-line handler. Here it will accept two arguments: one would be the path to the MSG file and the other would be the desired output folder, as follows −
if __name__ == '__main__':
   parser = ArgumentParser('Extracting information from MSG file')
parser.add_argument("MSG_FILE", help="Path to MSG file")
parser.add_argument("OUTPUT_DIR", help="Path to output folder")
args = parser.parse_args()
out_dir = args.OUTPUT_DIR
if not os.path.exists(out_dir):
os.makedirs(out_dir)
main(args.MSG_FILE, args.OUTPUT_DIR)
Now, we need to define the main() function, in which we will call the win32com library to set up the Outlook API, which further allows access to the MAPI namespace.
def main(msg_file, output_dir):
mapi = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
msg = mapi.OpenSharedItem(os.path.abspath(args.MSG_FILE))
display_msg_attribs(msg)
display_msg_recipients(msg)
extract_msg_body(msg, output_dir)
extract_attachments(msg, output_dir)
Now, define the different functions which we are using in this script. The code given below shows the definition of the display_msg_attribs() function, which allows us to display various attributes of a message, like Subject, To, BCC, CC, Size, SenderName, Sent, etc.
def display_msg_attribs(msg):
attribs = [
'Application', 'AutoForwarded', 'BCC', 'CC', 'Class',
'ConversationID', 'ConversationTopic', 'CreationTime',
'ExpiryTime', 'Importance', 'InternetCodePage', 'IsMarkedAsTask',
'LastModificationTime', 'Links','ReceivedTime', 'ReminderSet',
'ReminderTime', 'ReplyRecipientNames', 'Saved', 'Sender',
'SenderEmailAddress', 'SenderEmailType', 'SenderName', 'Sent',
'SentOn', 'SentOnBehalfOfName', 'Size', 'Subject',
'TaskCompletedDate', 'TaskDueDate', 'To', 'UnRead'
]
print("\nMessage Attributes")
for entry in attribs:
print("{}: {}".format(entry, getattr(msg, entry, 'N/A')))
Now, define the display_msg_recipients() function, which iterates through the messages and displays the recipient details.
def display_msg_recipients(msg):
recipient_attrib = ['Address', 'AutoResponse', 'Name', 'Resolved', 'Sendable']
i = 1
while True:
try:
recipient = msg.Recipients(i)
except pywintypes.com_error:
break
print("\nRecipient {}".format(i))
print("=" * 15)
for entry in recipient_attrib:
print("{}: {}".format(entry, getattr(recipient, entry, 'N/A')))
i += 1
Next, we define the extract_msg_body() function, which extracts the body content, HTML as well as plain text, from the message.
def extract_msg_body(msg, out_dir):
html_data = msg.HTMLBody.encode('cp1252')
outfile = os.path.join(out_dir, os.path.basename(args.MSG_FILE))
open(outfile + ".body.html", 'wb').write(html_data)
print("Exported: {}".format(outfile + ".body.html"))
body_data = msg.Body.encode('cp1252')
open(outfile + ".body.txt", 'wb').write(body_data)
print("Exported: {}".format(outfile + ".body.txt"))
Next, we shall define the extract_attachments() function, which exports attachment data into the desired output directory.
def extract_attachments(msg, out_dir):
attachment_attribs = ['DisplayName', 'FileName', 'PathName', 'Position', 'Size']
i = 1 # Attachments start at 1
while True:
try:
attachment = msg.Attachments(i)
except pywintypes.com_error:
break
Once all the functions are defined, we will print all the attributes to the console with the following lines of code −
print("\nAttachment {}".format(i))
print("=" * 15)
for entry in attachment_attribs:
print('{}: {}'.format(entry, getattr(attachment, entry,"N/A")))
outfile = os.path.join(os.path.abspath(out_dir),os.path.split(args.MSG_FILE)[-1])
if not os.path.exists(outfile):
os.makedirs(outfile)
outfile = os.path.join(outfile, attachment.FileName)
attachment.SaveAsFile(outfile)
print("Exported: {}".format(outfile))
i += 1
After running the above script, we will get the attributes of the message and its attachments in the console window, along with several files in the output directory.
MBOX files are text files with special formatting that splits up the messages stored within. They are often found in association with UNIX systems, Thunderbird, and Google Takeout.
In this section, you will see a Python script where we will be structuring MBOX files obtained from Google Takeout. But before that, we must know how we can generate these MBOX files using our Google or Gmail account.
Acquiring a Google account mailbox implies taking a backup of our Gmail account. A backup can be taken for various personal or professional reasons. Note that Google provides a facility for backing up Gmail data. To acquire our Google account mailbox in MBOX format, you need to follow the steps given below −
Open My account dashboard.
Go to Personal info & privacy section and select Control your content link.
You can create a new archive or manage an existing one. If we click the CREATE ARCHIVE link, then we will get some check boxes for each Google product we wish to include.
After selecting the products, we will get the freedom to choose the file type and maximum size for our archive, along with the delivery method to select from the list.
Finally, we will get this backup in MBOX format.
Now, the MBOX file discussed above can be structured using Python as shown below −
First, we need to import the Python libraries as follows −
from __future__ import print_function
from argparse import ArgumentParser
import mailbox
import os
import time
import csv
from tqdm import tqdm
import base64
All these libraries have been used and explained in earlier scripts, except the mailbox library, which is used to parse MBOX files.
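To get a feel for the mailbox API before parsing a full Takeout archive, a tiny MBOX file can be created and read back. This is a self-contained sketch: the file lives in a temporary directory and the address is made up.

```python
import mailbox
import os
import tempfile

# Write a one-message mbox file so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "sample.mbox")
box = mailbox.mbox(path)
msg = mailbox.mboxMessage()
msg["From"] = "alice@example.com"
msg["Subject"] = "Hello"
msg.set_payload("First message body")
box.add(msg)
box.flush()

# Reading it back mirrors what the script below does with the Takeout file.
for message in mailbox.mbox(path):
    print(message["From"], "-", message["Subject"])
```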
Now, provide an argument for the command-line handler. Here it will accept two arguments: one would be the path to the MBOX file, and the other would be the desired output folder.
if __name__ == '__main__':
parser = ArgumentParser('Parsing MBOX files')
parser.add_argument("MBOX", help="Path to mbox file")
parser.add_argument(
"OUTPUT_DIR",help = "Path to output directory to write report ""and exported content")
args = parser.parse_args()
main(args.MBOX, args.OUTPUT_DIR)
Now, we will define the main() function and call the mbox class of the mailbox library, with the help of which we can parse an MBOX file by providing its path −
def main(mbox_file, output_dir):
print("Reading mbox file")
mbox = mailbox.mbox(mbox_file, factory=custom_reader)
print("{} messages to parse".format(len(mbox)))
Now, define a reader method for the mailbox library as follows −
def custom_reader(data_stream):
data = data_stream.read()
try:
content = data.decode("ascii")
except (UnicodeDecodeError, UnicodeEncodeError) as e:
content = data.decode("cp1252", errors="replace")
return mailbox.mboxMessage(content)
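The decode fallback in custom_reader() can be seen on its own: a cp1252-encoded byte string fails ASCII decoding and is recovered via the fallback codec (the sample word is arbitrary):

```python
data = "résumé".encode("cp1252")

try:
    content = data.decode("ascii")
except (UnicodeDecodeError, UnicodeEncodeError):
    # Fall back to cp1252, replacing anything that still cannot be decoded.
    content = data.decode("cp1252", errors="replace")

print(content)   # résumé
```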
Now, create some variables for further processing as follows −
parsed_data = []
attachments_dir = os.path.join(output_dir, "attachments")
if not os.path.exists(attachments_dir):
os.makedirs(attachments_dir)
columns = [
"Date", "From", "To", "Subject", "X-Gmail-Labels", "Return-Path", "Received",
"Content-Type", "Message-ID","X-GM-THRID", "num_attachments_exported", "export_path"]
Next, use tqdm to generate a progress bar and to track the iteration process as follows −
for message in tqdm(mbox):
msg_data = dict()
header_data = dict(message._headers)
for hdr in columns:
msg_data[hdr] = header_data.get(hdr, "N/A")
Now, check whether the message has payloads or not. If it does, then we will define the write_payload() method as follows −
if len(message.get_payload()):
export_path = write_payload(message, attachments_dir)
msg_data['num_attachments_exported'] = len(export_path)
msg_data['export_path'] = ", ".join(export_path)
Now, the data needs to be appended. Then we will call the create_report() method as follows −
parsed_data.append(msg_data)
create_report(
parsed_data, os.path.join(output_dir, "mbox_report.csv"), columns)
def write_payload(msg, out_dir):
pyld = msg.get_payload()
export_path = []
if msg.is_multipart():
for entry in pyld:
export_path += write_payload(entry, out_dir)
else:
content_type = msg.get_content_type()
if "application/" in content_type.lower():
content = base64.b64decode(msg.get_payload())
export_path.append(export_content(msg, out_dir, content))
elif "image/" in content_type.lower():
content = base64.b64decode(msg.get_payload())
export_path.append(export_content(msg, out_dir, content))
elif "video/" in content_type.lower():
content = base64.b64decode(msg.get_payload())
export_path.append(export_content(msg, out_dir, content))
elif "audio/" in content_type.lower():
content = base64.b64decode(msg.get_payload())
export_path.append(export_content(msg, out_dir, content))
elif "text/csv" in content_type.lower():
content = base64.b64decode(msg.get_payload())
export_path.append(export_content(msg, out_dir, content))
elif "info/" in content_type.lower():
export_path.append(export_content(msg, out_dir,
msg.get_payload()))
elif "text/calendar" in content_type.lower():
export_path.append(export_content(msg, out_dir,
msg.get_payload()))
elif "text/rtf" in content_type.lower():
export_path.append(export_content(msg, out_dir,
msg.get_payload()))
else:
if "name=" in msg.get('Content-Disposition', "N/A"):
content = base64.b64decode(msg.get_payload())
export_path.append(export_content(msg, out_dir, content))
elif "name=" in msg.get('Content-Type', "N/A"):
content = base64.b64decode(msg.get_payload())
export_path.append(export_content(msg, out_dir, content))
return export_path
Observe that the above if-else statements are easy to understand. Now, we need to define a method that will extract the filename from the msg object as follows −
def export_content(msg, out_dir, content_data):
file_name = get_filename(msg)
file_ext = "FILE"
if "." in file_name: file_ext = file_name.rsplit(".", 1)[-1]
file_name = "{}_{:.4f}.{}".format(file_name.rsplit(".", 1)[0], time.time(), file_ext)
file_name = os.path.join(out_dir, file_name)
Now, with the help of following lines of code, you can actually export the file −
if isinstance(content_data, str):
open(file_name, 'w').write(content_data)
else:
open(file_name, 'wb').write(content_data)
return file_name
Now, let us define a function to extract filenames from the message to accurately represent the names of these files as follows −
def get_filename(msg):
if 'name=' in msg.get("Content-Disposition", "N/A"):
fname_data = msg["Content-Disposition"].replace("\r\n", " ")
fname = [x for x in fname_data.split("; ") if 'name=' in x]
file_name = fname[0].split("=", 1)[-1]
elif 'name=' in msg.get("Content-Type", "N/A"):
fname_data = msg["Content-Type"].replace("\r\n", " ")
fname = [x for x in fname_data.split("; ") if 'name=' in x]
file_name = fname[0].split("=", 1)[-1]
else:
file_name = "NO_FILENAME"
fchars = [x for x in file_name if x.isalnum() or x.isspace() or x == "."]
return "".join(fchars)
Now, we can write a CSV file by defining the create_report() function as follows −
def create_report(output_data, output_file, columns):
with open(output_file, 'w', newline="") as outfile:
csvfile = csv.DictWriter(outfile, columns)
csvfile.writeheader()
csvfile.writerows(output_data)
Once you run the script given above, you will get the CSV report and a directory full of attachments.
|
[
{
"code": null,
"e": 2184,
"s": 1953,
"text": "The previous chapters discussed about the importance and the process of network forensics and the concepts involved. In this chapter, let us learn about the role of emails in digital forensics and their investigation using Python."
},
{
"code": null,
"e": 2485,
"s": 2184,
"text": "Emails play a very important role in business communications and have emerged as one of the most important applications on internet. They are a convenient mode for sending messages as well as documents, not only from computers but also from other electronic gadgets such as mobile phones and tablets."
},
{
"code": null,
"e": 2823,
"s": 2485,
"text": "The negative side of emails is that criminals may leak important information about their company. Hence, the role of emails in digital forensics has been increased in recent years. In digital forensics, emails are considered as crucial evidences and Email Header Analysis has become important to collect evidence during forensic process."
},
{
"code": null,
"e": 2898,
"s": 2823,
"text": "An investigator has the following goals while performing email forensics −"
},
{
"code": null,
"e": 2928,
"s": 2898,
"text": "To identify the main criminal"
},
{
"code": null,
"e": 2959,
"s": 2928,
"text": "To collect necessary evidences"
},
{
"code": null,
"e": 2986,
"s": 2959,
"text": "To presenting the findings"
},
{
"code": null,
"e": 3004,
"s": 2986,
"text": "To build the case"
},
{
"code": null,
"e": 3227,
"s": 3004,
"text": "Email forensics play a very important role in investigation as most of the communication in present era relies on emails. However, an email forensic investigator may face the following challenges during the investigation −"
},
{
"code": null,
"e": 3538,
"s": 3227,
"text": "The biggest challenge in email forensics is the use of fake e-mails that are created by manipulating and scripting headers etc. In this category criminals also use temporary email which is a service that allows a registered user to receive email at a temporary address that expires after a certain time period."
},
{
"code": null,
"e": 3730,
"s": 3538,
"text": "Another challenge in email forensics is spoofing in which criminals used to present an email as someone else’s. In this case the machine will receive both fake as well as original IP address."
},
{
"code": null,
"e": 3899,
"s": 3730,
"text": "Here, the Email server strips identifying information from the email message before forwarding it further. This leads to another big challenge for email investigations."
},
{
"code": null,
"e": 4201,
"s": 3899,
"text": "Email forensics is the study of source and content of email as evidence to identify the actual sender and recipient of a message along with some other information such as date/time of transmission and intention of sender. It involves investigating metadata, port scanning as well as keyword searching."
},
{
"code": null,
"e": 4286,
"s": 4201,
"text": "Some of the common techniques which can be used for email forensic investigation are"
},
{
"code": null,
"e": 4302,
"s": 4286,
"text": "Header Analysis"
},
{
"code": null,
"e": 4323,
"s": 4302,
"text": "Server investigation"
},
{
"code": null,
"e": 4352,
"s": 4323,
"text": "Network Device Investigation"
},
{
"code": null,
"e": 4379,
"s": 4352,
"text": "Sender Mailer Fingerprints"
},
{
"code": null,
"e": 4409,
"s": 4379,
"text": "Software Embedded Identifiers"
},
{
"code": null,
"e": 4536,
"s": 4409,
"text": "In the following sections, we are going to learn how to fetch information using Python for the purpose of email investigation."
},
{
"code": null,
"e": 4780,
"s": 4536,
"text": "EML files are basically emails in file format which are widely used for storing email messages. They are structured text files that are compatible across multiple email clients such as Microsoft Outlook, Outlook Express, and Windows Live Mail."
},
{
"code": null,
"e": 5050,
"s": 4780,
"text": "An EML file stores email headers, body content, attachment data as plain text. It uses base64 to encode binary data and Quoted-Printable (QP) encoding to store content information. The Python script that can be used to extract information from EML file is given below −"
},
{
"code": null,
"e": 5112,
"s": 5050,
"text": "First, import the following Python libraries as shown below −"
},
{
"code": null,
"e": 5271,
"s": 5112,
"text": "from __future__ import print_function\nfrom argparse import ArgumentParser, FileType\nfrom email import message_from_file\n\nimport os\nimport quopri\nimport base64"
},
{
"code": null,
"e": 5430,
"s": 5271,
"text": "In the above libraries, quopri is used to decode the QP encoded values from EML files. Any base64 encoded data can be decoded with the help of base64 library."
},
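As a quick, self-contained sanity check (the sample data below is invented), the two decoders mentioned above behave as follows:

```python
import base64
import quopri

# Quoted-Printable: "=2C" is the QP escape for a comma
qp_decoded = quopri.decodestring(b"Hello=2C world")

# base64 round-trips arbitrary binary data losslessly
b64_decoded = base64.b64decode(base64.b64encode(b"\x00\xffbinary"))

print(qp_decoded)
print(b64_decoded)
```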
{
"code": null,
"e": 5587,
"s": 5430,
"text": "Next, let us provide argument for command-line handler. Note that here it will accept only one argument which would be the path to EML file as shown below −"
},
{
"code": null,
"e": 5813,
"s": 5587,
"text": "if __name__ == '__main__':\n parser = ArgumentParser('Extracting information from EML file')\n parser.add_argument(\"EML_FILE\",help=\"Path to EML File\", type=FileType('r'))\n args = parser.parse_args()\n main(args.EML_FILE)"
},
{
"code": null,
"e": 6125,
"s": 5813,
"text": "Now, we need to define main() function in which we will use the method named message_from_file() from email library to read the file like object. Here we will access the headers, body content, attachments and other payload information by using resulting variable named emlfile as shown in the code given below −"
},
{
"code": null,
"e": 6419,
"s": 6125,
"text": "def main(input_file):\n emlfile = message_from_file(input_file)\n for key, value in emlfile._headers:\n print(\"{}: {}\".format(key, value))\nprint(\"\\nBody\\n\")\n\nif emlfile.is_multipart():\n for part in emlfile.get_payload():\n process_payload(part)\nelse:\n process_payload(emlfile[1])"
},
{
"code": null,
"e": 6745,
"s": 6419,
"text": "Now, we need to define process_payload() method in which we will extract message body content by using get_payload() method. We will decode QP encoded data by using quopri.decodestring() function. We will also check the content MIME type so that it can handle the storage of the email properly. Observe the code given below −"
},
{
"code": null,
"e": 7508,
"s": 6745,
"text": "def process_payload(payload):\n print(payload.get_content_type() + \"\\n\" + \"=\" * len(payload.get_content_type()))\n body = quopri.decodestring(payload.get_payload())\n \n if payload.get_charset():\n body = body.decode(payload.get_charset())\nelse:\n try:\n body = body.decode()\n except UnicodeDecodeError:\n body = body.decode('cp1252')\n\nif payload.get_content_type() == \"text/html\":\n outfile = os.path.basename(args.EML_FILE.name) + \".html\"\n open(outfile, 'w').write(body)\nelif payload.get_content_type().startswith('application'):\n outfile = open(payload.get_filename(), 'wb')\n body = base64.b64decode(payload.get_payload())\n outfile.write(body)\n outfile.close()\n print(\"Exported: {}\\n\".format(outfile.name))\nelse:\n print(body)"
},
{
"code": null,
"e": 7621,
"s": 7508,
"text": "After executing the above script, we will get the header information along with various payloads on the console."
},
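The same header and payload access can be tried without an EML file on disk. The sketch below uses email.message_from_string on an invented message; message_from_file behaves identically on a file-like object:

```python
from email import message_from_string

# a minimal, made-up RFC 822 message
raw = (
    "From: alice@example.com\n"
    "To: bob@example.com\n"
    "Subject: Test message\n"
    "\n"
    "Hello Bob\n"
)
msg = message_from_string(raw)

# same header iteration as in the script above
for key, value in msg._headers:
    print("{}: {}".format(key, value))

print(msg.get_payload())
```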
{
"code": null,
"e": 7874,
"s": 7621,
"text": "Email messages come in many different formats. MSG is one such kind of format used by Microsoft Outlook and Exchange. Files with MSG extension may contain plain ASCII text for the headers and the main message body as well as hyperlinks and attachments."
},
{
"code": null,
"e": 8113,
"s": 7874,
"text": "In this section, we will learn how to extract information from MSG file using Outlook API. Note that the following Python script will work only on Windows. For this, we need to install third party Python library named pywin32 as follows −"
},
{
"code": null,
"e": 8134,
"s": 8113,
"text": "pip install pywin32\n"
},
{
"code": null,
"e": 8197,
"s": 8134,
"text": "Now, import the following libraries using the commands shown −"
},
{
"code": null,
"e": 8323,
"s": 8197,
"text": "from __future__ import print_function\nfrom argparse import ArgumentParser\n\nimport os\nimport win32com.client\nimport pywintypes"
},
{
"code": null,
"e": 8507,
"s": 8323,
"text": "Now, let us provide an argument for command-line handler. Here it will accept two arguments one would be the path to MSG file and other would be the desired output folder as follows −"
},
{
"code": null,
"e": 8893,
"s": 8507,
"text": "if __name__ == '__main__':\n parser = ArgumentParser(‘Extracting information from MSG file’)\n parser.add_argument(\"MSG_FILE\", help=\"Path to MSG file\")\n parser.add_argument(\"OUTPUT_DIR\", help=\"Path to output folder\")\n args = parser.parse_args()\n out_dir = args.OUTPUT_DIR\n \n if not os.path.exists(out_dir):\n os.makedirs(out_dir)\n main(args.MSG_FILE, args.OUTPUT_DIR)"
},
{
"code": null,
"e": 9049,
"s": 8893,
"text": "Now, we need to define main() function in which we will call win32com library for setting up Outlook API which further allows access to the MAPI namespace."
},
{
"code": null,
"e": 9365,
"s": 9049,
"text": "def main(msg_file, output_dir):\n mapi = win32com.client.Dispatch(\"Outlook.Application\").GetNamespace(\"MAPI\")\n msg = mapi.OpenSharedItem(os.path.abspath(args.MSG_FILE))\n \n display_msg_attribs(msg)\n display_msg_recipients(msg)\n \n extract_msg_body(msg, output_dir)\n extract_attachments(msg, output_dir)"
},
{
"code": null,
"e": 9617,
"s": 9365,
"text": "Now, define different functions which we are using in this script. The code given below shows defining the display_msg_attribs() function that allow us to display various attributes of a message like subject, to , BCC, CC, Size, SenderName, sent, etc."
},
{
"code": null,
"e": 10298,
"s": 9617,
"text": "def display_msg_attribs(msg):\n attribs = [\n 'Application', 'AutoForwarded', 'BCC', 'CC', 'Class',\n 'ConversationID', 'ConversationTopic', 'CreationTime',\n 'ExpiryTime', 'Importance', 'InternetCodePage', 'IsMarkedAsTask',\n 'LastModificationTime', 'Links','ReceivedTime', 'ReminderSet',\n 'ReminderTime', 'ReplyRecipientNames', 'Saved', 'Sender',\n 'SenderEmailAddress', 'SenderEmailType', 'SenderName', 'Sent',\n 'SentOn', 'SentOnBehalfOfName', 'Size', 'Subject',\n 'TaskCompletedDate', 'TaskDueDate', 'To', 'UnRead'\n ]\n print(\"\\nMessage Attributes\")\n for entry in attribs:\n print(\"{}: {}\".format(entry, getattr(msg, entry, 'N/A')))"
},
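The getattr(msg, entry, 'N/A') idiom above is what keeps the loop from crashing when a message lacks an attribute. It can be demonstrated without Outlook; FakeMsg below is a stand-in object for illustration, not a real MAPI message:

```python
class FakeMsg:
    """Stand-in for a COM message object (illustration only)."""
    Subject = "Quarterly report"
    Size = 2048

msg = FakeMsg()

# attributes that exist are printed; missing ones fall back to 'N/A'
for entry in ["Subject", "Size", "SenderName"]:
    print("{}: {}".format(entry, getattr(msg, entry, "N/A")))
```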
{
"code": null,
"e": 10419,
"s": 10298,
"text": "Now, define the display_msg_recipeints() function that iterates through the messages and displays the recipient details."
},
{
"code": null,
"e": 10824,
"s": 10419,
"text": "def display_msg_recipients(msg):\n recipient_attrib = ['Address', 'AutoResponse', 'Name', 'Resolved', 'Sendable']\n i = 1\n \n while True:\n try:\n recipient = msg.Recipients(i)\n except pywintypes.com_error:\n break\n print(\"\\nRecipient {}\".format(i))\n print(\"=\" * 15)\n \n for entry in recipient_attrib:\n print(\"{}: {}\".format(entry, getattr(recipient, entry, 'N/A')))\n i += 1"
},
{
"code": null,
"e": 10946,
"s": 10824,
"text": "Next, we define extract_msg_body() function that extracts the body content, HTML as well as Plain text, from the message."
},
{
"code": null,
"e": 11364,
"s": 10946,
"text": "def extract_msg_body(msg, out_dir):\n html_data = msg.HTMLBody.encode('cp1252')\n outfile = os.path.join(out_dir, os.path.basename(args.MSG_FILE))\n \n open(outfile + \".body.html\", 'wb').write(html_data)\n print(\"Exported: {}\".format(outfile + \".body.html\"))\n body_data = msg.Body.encode('cp1252')\n \n open(outfile + \".body.txt\", 'wb').write(body_data)\n print(\"Exported: {}\".format(outfile + \".body.txt\"))"
},
{
"code": null,
"e": 11481,
"s": 11364,
"text": "Next, we shall define the extract_attachments() function that exports attachment data into desired output directory."
},
{
"code": null,
"e": 11753,
"s": 11481,
"text": "def extract_attachments(msg, out_dir):\n attachment_attribs = ['DisplayName', 'FileName', 'PathName', 'Position', 'Size']\n i = 1 # Attachments start at 1\n \n while True:\n try:\n attachment = msg.Attachments(i)\n except pywintypes.com_error:\n break"
},
{
"code": null,
"e": 11872,
"s": 11753,
"text": "Once all the functions are defined, we will print all the attributes to the console with the following line of codes −"
},
{
"code": null,
"e": 12299,
"s": 11872,
"text": "print(\"\\nAttachment {}\".format(i))\nprint(\"=\" * 15)\n \nfor entry in attachment_attribs:\n print('{}: {}'.format(entry, getattr(attachment, entry,\"N/A\")))\noutfile = os.path.join(os.path.abspath(out_dir),os.path.split(args.MSG_FILE)[-1])\n \nif not os.path.exists(outfile):\nos.makedirs(outfile)\noutfile = os.path.join(outfile, attachment.FileName)\nattachment.SaveAsFile(outfile)\n \nprint(\"Exported: {}\".format(outfile))\ni += 1"
},
{
"code": null,
"e": 12461,
"s": 12299,
"text": "After running the above script, we will get the attributes of message and its attachments in the console window along with several files in the output directory."
},
{
"code": null,
"e": 12635,
"s": 12461,
"text": "MBOX files are text files with special formatting that split messages stored within. They are often found in association with UNIX systems, Thunderbolt, and Google Takeouts."
},
{
"code": null,
"e": 12865,
"s": 12635,
"text": "In this section, you will see a Python script, where we will be structuring MBOX files got from Google Takeouts. But before that we must know that how we can generate these MBOX files by using our Google account or Gmail account."
},
{
"code": null,
"e": 13162,
"s": 12865,
"text": "Acquiring of Google account mailbox implies taking backup of our Gmail account. Backup can be taken for various personal or professional reasons. Note that Google provides backing up of Gmail data. To acquire our Google account mailbox into MBOX format, you need to follow the steps given below −"
},
{
"code": null,
"e": 13189,
"s": 13162,
"text": "Open My account dashboard."
},
{
"code": null,
"e": 13292,
"s": 13216,
"text": "Go to Personal info & privacy section and select Control your content link."
},
{
"code": null,
"e": 13537,
"s": 13368,
"text": "You can create a new archive or can manage existing one. If we click, CREATE ARCHIVE link, then we will get some check boxes for each Google product we wish to include."
},
{
"code": null,
"e": 13865,
"s": 13706,
"text": "After selecting the products, we will get the freedom to choose file type and maximum size for our archive along with the delivery method to select from list."
},
{
"code": null,
"e": 14073,
"s": 14024,
"text": "Finally, we will get this backup in MBOX format."
},
{
"code": null,
"e": 14205,
"s": 14122,
"text": "Now, the MBOX file discussed above can be structured using Python as shown below −"
},
{
"code": null,
"e": 14257,
"s": 14205,
"text": "First, need to import Python libraries as follows −"
},
{
"code": null,
"e": 14417,
"s": 14257,
"text": "from __future__ import print_function\nfrom argparse import ArgumentParser\n\nimport mailbox\nimport os\nimport time\nimport csv\nfrom tqdm import tqdm\n\nimport base64"
},
{
"code": null,
"e": 14546,
"s": 14417,
"text": "All the libraries have been used and explained in earlier scripts, except the mailbox library which is used to parse MBOX files."
},
{
"code": null,
"e": 14718,
"s": 14546,
"text": "Now, provide an argument for command-line handler. Here it will accept two arguments− one would be the path to MBOX file, and the other would be the desired output folder."
},
{
"code": null,
"e": 15034,
"s": 14718,
"text": "if __name__ == '__main__':\n parser = ArgumentParser('Parsing MBOX files')\n parser.add_argument(\"MBOX\", help=\"Path to mbox file\")\n parser.add_argument(\n \"OUTPUT_DIR\",help = \"Path to output directory to write report \"\"and exported content\")\n args = parser.parse_args()\n main(args.MBOX, args.OUTPUT_DIR)"
},
{
"code": null,
"e": 15178,
"s": 15034,
"text": "Now, will define main() function and call mbox class of mailbox library with the help of which we can parse a MBOX file by providing its path −"
},
{
"code": null,
"e": 15349,
"s": 15178,
"text": "def main(mbox_file, output_dir):\n print(\"Reading mbox file\")\n mbox = mailbox.mbox(mbox_file, factory=custom_reader)\n print(\"{} messages to parse\".format(len(mbox)))"
},
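mailbox.mbox can be exercised on a throwaway file to see what the parser returns (the message below is invented, and the default message factory is used here instead of custom_reader):

```python
import mailbox
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sample.mbox")

# write one made-up message, then re-open and parse it
mbox = mailbox.mbox(path)
mbox.add("From: alice@example.com\nSubject: Hi\n\nFirst message\n")
mbox.flush()

mbox = mailbox.mbox(path)
print("{} messages to parse".format(len(mbox)))
print(mbox[0]["Subject"])
```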
{
"code": null,
"e": 15410,
"s": 15349,
"text": "Now, define a reader method for mailbox library as follows −"
},
{
"code": null,
"e": 15668,
"s": 15410,
"text": "def custom_reader(data_stream):\n data = data_stream.read()\n try:\n content = data.decode(\"ascii\")\n except (UnicodeDecodeError, UnicodeEncodeError) as e:\n content = data.decode(\"cp1252\", errors=\"replace\")\n return mailbox.mboxMessage(content)"
},
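The fallback in custom_reader() matters because mbox archives often mix encodings. The pattern is easy to see in isolation (the byte string below is invented; 0xE9 is 'é' in cp1252 but invalid in ASCII):

```python
data = b"caf\xe9"  # not decodable as ASCII

try:
    content = data.decode("ascii")
except (UnicodeDecodeError, UnicodeEncodeError):
    content = data.decode("cp1252", errors="replace")

print(content)
```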
{
"code": null,
"e": 15731,
"s": 15668,
"text": "Now, create some variables for further processing as follows −"
},
{
"code": null,
"e": 16062,
"s": 15731,
"text": "parsed_data = []\nattachments_dir = os.path.join(output_dir, \"attachments\")\n\nif not os.path.exists(attachments_dir):\n os.makedirs(attachments_dir)\ncolumns = [\n \"Date\", \"From\", \"To\", \"Subject\", \"X-Gmail-Labels\", \"Return-Path\", \"Received\", \n \"Content-Type\", \"Message-ID\",\"X-GM-THRID\", \"num_attachments_exported\", \"export_path\"]"
},
{
"code": null,
"e": 16152,
"s": 16062,
"text": "Next, use tqdm to generate a progress bar and to track the iteration process as follows −"
},
{
"code": null,
"e": 16307,
"s": 16152,
"text": "for message in tqdm(mbox):\n msg_data = dict()\n header_data = dict(message._headers)\nfor hdr in columns:\n msg_data[hdr] = header_data.get(hdr, \"N/A\")"
},
{
"code": null,
"e": 16433,
"s": 16307,
"text": "Now, check weather message is having payloads or not. If it is having then we will define write_payload() method as follows −"
},
{
"code": null,
"e": 16632,
"s": 16433,
"text": "if len(message.get_payload()):\n export_path = write_payload(message, attachments_dir)\n msg_data['num_attachments_exported'] = len(export_path)\n msg_data['export_path'] = \", \".join(export_path)"
},
{
"code": null,
"e": 16717,
"s": 16632,
"text": "Now, data need to be appended. Then we will call create_report() method as follows −"
},
{
"code": null,
"e": 18603,
"s": 16717,
"text": "parsed_data.append(msg_data)\ncreate_report(\n parsed_data, os.path.join(output_dir, \"mbox_report.csv\"), columns)\ndef write_payload(msg, out_dir):\n pyld = msg.get_payload()\n export_path = []\n \nif msg.is_multipart():\n for entry in pyld:\n export_path += write_payload(entry, out_dir)\nelse:\n content_type = msg.get_content_type()\n if \"application/\" in content_type.lower():\n content = base64.b64decode(msg.get_payload())\n export_path.append(export_content(msg, out_dir, content))\n elif \"image/\" in content_type.lower():\n content = base64.b64decode(msg.get_payload())\n export_path.append(export_content(msg, out_dir, content))\n\n elif \"video/\" in content_type.lower():\n content = base64.b64decode(msg.get_payload())\n export_path.append(export_content(msg, out_dir, content))\n elif \"audio/\" in content_type.lower():\n content = base64.b64decode(msg.get_payload())\n export_path.append(export_content(msg, out_dir, content))\n elif \"text/csv\" in content_type.lower():\n content = base64.b64decode(msg.get_payload())\n export_path.append(export_content(msg, out_dir, content))\n elif \"info/\" in content_type.lower():\n export_path.append(export_content(msg, out_dir,\n msg.get_payload()))\n elif \"text/calendar\" in content_type.lower():\n export_path.append(export_content(msg, out_dir,\n msg.get_payload()))\n elif \"text/rtf\" in content_type.lower():\n export_path.append(export_content(msg, out_dir,\n msg.get_payload()))\n else:\n if \"name=\" in msg.get('Content-Disposition', \"N/A\"):\n content = base64.b64decode(msg.get_payload())\n export_path.append(export_content(msg, out_dir, content))\n elif \"name=\" in msg.get('Content-Type', \"N/A\"):\n content = base64.b64decode(msg.get_payload())\n export_path.append(export_content(msg, out_dir, content))\nreturn export_path"
},
{
"code": null,
"e": 18765,
"s": 18603,
"text": "Observe that the above if-else statements are easy to understand. Now, we need to define a method that will extract the filename from the msg object as follows −"
},
{
"code": null,
"e": 19072,
"s": 18765,
"text": "def export_content(msg, out_dir, content_data):\n file_name = get_filename(msg)\n file_ext = \"FILE\"\n \n if \".\" in file_name: file_ext = file_name.rsplit(\".\", 1)[-1]\n file_name = \"{}_{:.4f}.{}\".format(file_name.rsplit(\".\", 1)[0], time.time(), file_ext)\n file_name = os.path.join(out_dir, file_name)"
},
{
"code": null,
"e": 19154,
"s": 19072,
"text": "Now, with the help of following lines of code, you can actually export the file −"
},
{
"code": null,
"e": 19300,
"s": 19154,
"text": "if isinstance(content_data, str):\n open(file_name, 'w').write(content_data)\nelse:\n open(file_name, 'wb').write(content_data)\nreturn file_name"
},
{
"code": null,
"e": 19430,
"s": 19300,
"text": "Now, let us define a function to extract filenames from the message to accurately represent the names of these files as follows −"
},
{
"code": null,
"e": 20053,
"s": 19430,
"text": "def get_filename(msg):\n if 'name=' in msg.get(\"Content-Disposition\", \"N/A\"):\n fname_data = msg[\"Content-Disposition\"].replace(\"\\r\\n\", \" \")\n fname = [x for x in fname_data.split(\"; \") if 'name=' in x]\n file_name = fname[0].split(\"=\", 1)[-1]\n elif 'name=' in msg.get(\"Content-Type\", \"N/A\"):\n fname_data = msg[\"Content-Type\"].replace(\"\\r\\n\", \" \")\n fname = [x for x in fname_data.split(\"; \") if 'name=' in x]\n file_name = fname[0].split(\"=\", 1)[-1]\n else:\n file_name = \"NO_FILENAME\"\n fchars = [x for x in file_name if x.isalnum() or x.isspace() or x == \".\"]\n return \"\".join(fchars)"
},
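The final list comprehension in get_filename() sanitizes the name by keeping only alphanumerics, whitespace and dots. Its effect is easy to check on made-up names:

```python
def sanitize(file_name):
    # keep only characters that are safe in a file name
    fchars = [x for x in file_name if x.isalnum() or x.isspace() or x == "."]
    return "".join(fchars)

print(sanitize("bad/na*me.txt"))
print(sanitize("report 2019.pdf"))
```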
{
"code": null,
"e": 20136,
"s": 20053,
"text": "Now, we can write a CSV file by defining the create_report() function as follows −"
},
{
"code": null,
"e": 20359,
"s": 20136,
"text": "def create_report(output_data, output_file, columns):\n with open(output_file, 'w', newline=\"\") as outfile:\n csvfile = csv.DictWriter(outfile, columns)\n csvfile.writeheader()\n csvfile.writerows(output_data)"
},
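The same csv.DictWriter pattern can be verified against an in-memory buffer instead of a file (the row below is invented):

```python
import csv
import io

columns = ["Date", "From", "To"]
rows = [{"Date": "2019-01-01", "From": "alice@example.com", "To": "bob@example.com"}]

buf = io.StringIO()
writer = csv.DictWriter(buf, columns)
writer.writeheader()
writer.writerows(rows)

report = buf.getvalue()
print(report)
```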
{
"code": null,
"e": 20458,
"s": 20359,
"text": "Once you run the script given above, we will get the CSV report and directory full of attachments."
}
] |
Program to Insert new item in array on any position in PHP - GeeksforGeeks
|
01 Oct, 2018
A new item can be inserted into an array with the help of the array_splice() function of PHP. This function removes a portion of an array and replaces it with something else. If offset and length are such that nothing is removed, then the elements from the replacement array are inserted at the place specified by the offset.
Syntax:
array array_splice ($input, $offset [, $length [, $replacement]])
Parameters: This function takes four parameters out of which 2 are mandatory and 2 are optional:
$input: This parameter takes the value of an array on which operations are needed to perform.
$offset: If this parameter is positive then the start of removed portion is at that position from the beginning of the input array and if this parameter is negative then it starts that far from the end of the input array.
$length: (optional) If this parameter is omitted then it removes everything from offset to the end of the array. If length is specified and is positive, then that many elements will be removed. If length is specified and is negative, then the end of the removed portion will be that many elements from the end of the array. If length is specified and is zero, no elements will be removed.
If length is specified and is positive, then that many elements will be removed.
If length is specified and is negative then the end of the removed portion will be that many elements from the end of the array.
If length is specified and is zero, no elements will be removed.
$replacement: (optional) This parameter is an optional parameter which takes value as an array and if this replacement array is specified, then the removed elements are replaced with elements from this replacement array.
Return Value: This function returns an array containing the extracted elements.
Note that keys in replacement array are not preserved.
Program
<?php
// Original array on which operations are to be performed
$original_array = array('1', '2', '3', '4', '5');

echo 'Original array : ';
foreach ($original_array as $x) {
    echo "$x ";
}
echo "\n";

// value of the new item
$inserted_value = '11';

// position at which the insertion is to be done
$position = 2;

// array_splice() function
array_splice($original_array, $position, 0, $inserted_value);

echo "After inserting 11 in the array is : ";
foreach ($original_array as $x) {
    echo "$x ";
}
?>
Original array : 1 2 3 4 5
After inserting 11 in the array is : 1 2 11 3 4 5
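For comparison only (a Python analogue, not part of the PHP example), the same positional insertion can be sketched with slice assignment, which likewise shifts the later elements instead of replacing them:

```python
original = ['1', '2', '3', '4', '5']
position = 2

# assigning to an empty slice removes nothing, like a zero $length
original[position:position] = ['11']

print(original)
```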
References : http://php.net/manual/en/function.array-splice.php
PHP-array
Picked
PHP
PHP Programs
PHP
|
[
{
"code": null,
"e": 31431,
"s": 31403,
"text": "\n01 Oct, 2018"
},
{
"code": null,
"e": 31749,
"s": 31431,
"text": "New item in an array can be inserted with the help of array_splice() function of PHP. This function removes a portion of an array and replaces it with something else. If offset and length are such that nothing is removed, then the elements from the replacement array are inserted in the place specified by the offset."
},
{
"code": null,
"e": 31757,
"s": 31749,
"text": "Syntax:"
},
{
"code": null,
"e": 31823,
"s": 31757,
"text": "array array_splice ($input, $offset [, $length [, $replacement]])"
},
{
"code": null,
"e": 31920,
"s": 31823,
"text": "Parameters: This function takes four parameters out of which 2 are mandatory and 2 are optional:"
},
{
"code": null,
"e": 32014,
"s": 31920,
"text": "$input: This parameter takes the value of an array on which operations are needed to perform."
},
{
"code": null,
"e": 32236,
"s": 32014,
"text": "$offset: If this parameter is positive then the start of removed portion is at that position from the beginning of the input array and if this parameter is negative then it starts that far from the end of the input array."
},
{
"code": null,
"e": 32621,
"s": 32236,
"text": "$length: (optional) If this parameter is omitted then it removes everything from offset to the end of the array.If length is specified and is positive, then that many elements will be removed.If length is specified and is negative then the end of the removed portion will be that many elements from the end of the array.If length is specified and is zero, no elements will be removed."
},
{
"code": null,
"e": 32702,
"s": 32621,
"text": "If length is specified and is positive, then that many elements will be removed."
},
{
"code": null,
"e": 32831,
"s": 32702,
"text": "If length is specified and is negative then the end of the removed portion will be that many elements from the end of the array."
},
{
"code": null,
"e": 32896,
"s": 32831,
"text": "If length is specified and is zero, no elements will be removed."
},
{
"code": null,
"e": 33117,
"s": 32896,
"text": "$replacement: (optional) This parameter is an optional parameter which takes value as an array and if this replacement array is specified, then the removed elements are replaced with elements from this replacement array."
},
{
"code": null,
"e": 33208,
"s": 33117,
"text": "Return Value: It returns the last value of the array, shortening the array by one element."
},
{
"code": null,
"e": 33263,
"s": 33208,
"text": "Note that keys in replacement array are not preserved."
},
{
"code": null,
"e": 33271,
"s": 33263,
"text": "Program"
},
{
"code": "<?php//Original Array on which operations is to be perform $original_array = array( '1', '2', '3', '4', '5' ); echo 'Original array : ';foreach ($original_array as $x) {echo \"$x \";} echo \"\\n\"; //value of new item$inserted_value = '11'; //value of position at which insertion is to be done $position = 2; //array_splice() function array_splice( $original_array, $position, 0, $inserted_value ); echo \"After inserting 11 in the array is : \";foreach ($original_array as $x) {echo \"$x \";}?>",
"e": 33769,
"s": 33271,
"text": null
},
{
"code": null,
"e": 33848,
"s": 33769,
"text": "Original array : 1 2 3 4 5 \nAfter inserting 11 in the array is : 1 2 11 3 4 5\n"
},
{
"code": null,
"e": 33912,
"s": 33848,
"text": "References : http://php.net/manual/en/function.array-splice.php"
},
{
"code": null,
"e": 33922,
"s": 33912,
"text": "PHP-array"
},
{
"code": null,
"e": 33929,
"s": 33922,
"text": "Picked"
},
{
"code": null,
"e": 33933,
"s": 33929,
"text": "PHP"
},
{
"code": null,
"e": 33946,
"s": 33933,
"text": "PHP Programs"
},
{
"code": null,
"e": 33950,
"s": 33946,
"text": "PHP"
}
] |
Introducing Copula in Monte Carlo Simulation | by Rina Buoy | Towards Data Science
|
In the oil and gas industry, uncertainties are everywhere from the surface to the sub-surface. To embed the uncertainties in any estimation, probabilistic approaches are required.
One of the simple cases is a volumetric estimation. The formulas to estimate hydrocarbon (oil/gas) initially in place (HCIIP) are given below:
To probabilistically estimate the HCIIP, a Monte Carlo method is used and it follows the following steps (source):
Define a domain of possible inputs. Generate inputs randomly from a probability distribution over the domain. Perform a deterministic computation on the inputs. Aggregate the results.
Define a domain of possible inputs
Generate inputs randomly from a probability distribution over the domain
Perform a deterministic computation on the inputs
Aggregate the results
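With independent inputs, the four steps can be sketched in NumPy for a simplified STOIIP = 7758 × GRV × NTG × POR × SHC / FVF calculation (all input ranges below are illustrative, not from any real field):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# steps 1-2: define input domains and sample them (independent, illustrative)
grv = rng.uniform(10_000, 10_300, n)  # gross rock volume, acre-ft
ntg = rng.uniform(0.5, 1.0, n)        # net-to-gross ratio
por = rng.uniform(0.1, 0.3, n)        # porosity
shc = rng.uniform(0.5, 0.7, n)        # hydrocarbon saturation
fvf = rng.uniform(1.1, 1.5, n)        # formation volume factor

# step 3: deterministic computation on every sample (7758 bbl per acre-ft)
stoiip = 7758 * grv * ntg * por * shc / fvf / 1e6  # MMbbl

# step 4: aggregate (industry convention: P90 is the 10th percentile)
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
print(p90, p50, p10)
```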
An MS Excel implementation of probabilistic oil and gas prospect evaluation can be obtained here.
If all inputs are independent random variables or normally distributed, random sampling is rather straightforward. However, some inputs, for example porosity and HC saturation, are correlated to some degree and have different distributions. In such a case, random sampling is difficult. This is when copulas come to our rescue.
According to Wikipedia, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform.
The above definition is abstract and hard to comprehend. However, the implementation is rather easy. Thomas Wiecki wrote an inspiring post which gives an intuitive illustration of copulas.
To fully grasp the concept of copulas, an understanding of random variable transformation is required.
Let’s start by sampling from a uniform distribution between 0 and 1.
x = stats.uniform(0, 1).rvs(10000)
Instead of being uniform, these samples can be transformed to any probability distribution of interest via the inverse of the cumulative density function (CDF). Let's say we want these samples to be normally distributed. We can pass these samples to the inverse CDF of a normal distribution (e.g., the ppf function in scipy.stats). We will get the following samples, which are normally distributed.
norm = stats.distributions.norm()x_trans = norm.ppf(x)
If we plot the uniform samples, the transformed samples, and the inverse CDF curve on the same plot, we get:
When we want to draw samples from a given distribution, the computer does this transformation under the hood.
Importantly, the process is reversible; that means we can transform samples of any distribution back to the uniform distribution via the same CDF.
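A quick check confirms both directions: ppf turns uniform samples into normal ones, and cdf maps them back to the original uniforms:

```python
import numpy as np
from scipy import stats

u = stats.uniform(0, 1).rvs(1000, random_state=42)
norm = stats.norm()

x_trans = norm.ppf(u)       # uniform -> normal
u_back = norm.cdf(x_trans)  # normal -> back to uniform

print(np.max(np.abs(u_back - u)))
```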
The steps of Gaussian copula are as follows:
Draw samples from a correlated multivariate normal distribution, with variable correlations specified via the covariance matrix. Transform the correlated samples so that the marginals (each input) are uniform. Transform the uniform marginals to any distribution of interest, for example, porosity to a truncated normal distribution, HC saturation to a triangle distribution, etc.
Draw samples from a correlated multivariate normal distribution. Variable correlations are specified via the covariance matrix.
Transform the correlated samples so that marginals (each input) are uniform.
Transform the uniform marginals to any distribution of interest. For example, porosity to a truncated normal distribution, HC saturation to a triangle distribution etc.
Let’s start by drawing samples from a correlated multivariate normal distribution.
mvnorm = stats.multivariate_normal([0, 0], [[1., 0.5], [0.5, 1.]])
x = mvnorm.rvs((10000,))
Next, we transform the marginals to a uniform distribution.
norm = stats.norm([0], [1])
x_unif = norm.cdf(x)
As we can see, the joint distribution of X1 and X2 is correlated while their marginals are uniform.
We can now transform the marginals to any distribution of interest while preserving the correlation. For example, we can draw X1 from a triangle distribution and X2 from a normal distribution.
x1_tri = stats.triang.ppf(x_unif[:, 0], c=0.158, loc=36, scale=21)
x2_norm = stats.norm(525, 112).ppf(x_unif[:, 1])
Now we obtain the desired joint distribution of X1 and X2 which are drawn from different distributions.
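Because ppf and cdf are monotone maps, the rank (Spearman) correlation of the original Gaussian samples carries over unchanged to the transformed marginals. A quick self-contained sketch to verify, reusing the illustrative parameters above:

```python
from scipy import stats

# Correlated Gaussian pair -> uniform marginals -> target marginals
mvnorm = stats.multivariate_normal([0, 0], [[1., 0.5], [0.5, 1.]])
x = mvnorm.rvs(10000, random_state=42)
x_unif = stats.norm().cdf(x)

x1_tri = stats.triang.ppf(x_unif[:, 0], c=0.158, loc=36, scale=21)
x2_norm = stats.norm(525, 112).ppf(x_unif[:, 1])

rho_before = stats.spearmanr(x[:, 0], x[:, 1]).correlation
rho_after = stats.spearmanr(x1_tri, x2_norm).correlation
print(rho_before, rho_after)  # identical: monotone transforms preserve ranks
```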
Armed with copulas, we are ready to introduce variable correlations into the sampling phase of Monte Carlo. Here is the complete Python code for calculating OIIP:
import seaborn as sns
from scipy import stats
import numpy as np
import matplotlib.pyplot as plt

# HCIIP = GRV*NTG*POR*SHC/FVF
means = [0.]*5
cov = [[1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.]]
mvnorm_std = stats.multivariate_normal(means, cov)
x = mvnorm_std.rvs(10000, random_state=42)
norm_std = stats.norm()
x_unif = norm_std.cdf(x)

# create individual distributions
grv = stats.triang(c=0.1, loc=10000, scale=300).ppf(x_unif[:, 0])
ntg = stats.triang(c=0.2, loc=0.5, scale=0.5).ppf(x_unif[:, 1])
phi = stats.truncnorm(-2*1.96, 1.96, 0.2, 0.05).ppf(x_unif[:, 2])
shc = stats.norm(0.6, 0.05).ppf(x_unif[:, 3])
fvf = stats.truncnorm(-1.96, 2*1.96, 1.3, 0.1).ppf(x_unif[:, 4])

stoiip = 7758*grv*ntg*phi*shc/fvf/1e6

sns.distplot(stoiip, kde=False, norm_hist=True)
plt.figure()
sns.distplot(stoiip, hist_kws=dict(cumulative=True), kde_kws=dict(cumulative=True))
plt.show()
For the case of no correlation, the covariance matrix is a diagonal matrix (zeros everywhere except the diagonal cells).
cov = [[1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.]]
Running the above code, we obtain the following OIIP histogram and cumulative probability distribution.
To illustrate the case of variable correlations, we arbitrarily assign positive correlations among NTG, POR and SHC.
cov = [[1., 0., 0., 0., 0.],
       [0., 1., 0.7, 0.6, 0.],
       [0., 0.7, 1., 0.8, 0.],
       [0., 0.6, 0.8, 1., 0.],
       [0., 0., 0., 0., 1.]]
By observing the OIIP histogram and cumulative distribution, we can see that including variable correlations or dependencies widens the P10–P90 range of the calculated OIIP. This is reasonable: by including correlations, sampling can explore extreme regions of the input space that may otherwise be missed in the uncorrelated case. In the next post, we will look at Bayesian decline curve analysis (DCA).
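The P10–P90 range discussed above can be read directly off the Monte Carlo samples with np.percentile. A sketch on stand-in lognormal samples (not the actual OIIP run), using the oil-industry percentile convention:

```python
import numpy as np

rng = np.random.default_rng(42)
stoiip = rng.lognormal(mean=5.0, sigma=0.3, size=10000)  # stand-in samples

# Oil-industry convention: P90 is the value exceeded with 90% probability,
# i.e. the 10th percentile of the distribution (and vice versa for P10).
p90, p50, p10 = np.percentile(stoiip, [10, 50, 90])
print(p90 < p50 < p10)
```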
|
[
{
"code": null,
"e": 352,
"s": 172,
"text": "In the oil and gas industry, uncertainties are everywhere from the surface to the sub-surface. To embed the uncertainties in any estimation, probabilistic approaches are required."
},
{
"code": null,
"e": 495,
"s": 352,
"text": "One of the simple cases is a volumetric estimation. The formulas to estimate hydrocarbon (oil/gas) initially in place (HCIIP) are given below:"
},
{
"code": null,
"e": 610,
"s": 495,
"text": "To probabilistically estimate the HCIIP, a Monte Carlo method is used and it follows the following steps (source):"
},
{
"code": null,
"e": 787,
"s": 610,
"text": "Define a domain of possible inputsGenerate inputs randomly from a probability distribution over the domainPerform a deterministic computation on the inputsAggregate the results"
},
{
"code": null,
"e": 822,
"s": 787,
"text": "Define a domain of possible inputs"
},
{
"code": null,
"e": 895,
"s": 822,
"text": "Generate inputs randomly from a probability distribution over the domain"
},
{
"code": null,
"e": 945,
"s": 895,
"text": "Perform a deterministic computation on the inputs"
},
{
"code": null,
"e": 967,
"s": 945,
"text": "Aggregate the results"
},
{
"code": null,
"e": 1064,
"s": 967,
"text": "A Ms Excel implementation of probabilistic oil and gas prospect evaluation can be obtained here."
},
{
"code": null,
"e": 1390,
"s": 1064,
"text": "If all inputs are independent random variables or normally distributed, random sampling is rather straightforward. However, the inputs, for example, porosity and HC saturation are correlated to some degree and have different distributions. In such a case, random sampling is difficult. This is when Copula come to our rescue."
},
{
"code": null,
"e": 1551,
"s": 1390,
"text": "According to Wikipedia, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform."
},
{
"code": null,
"e": 1738,
"s": 1551,
"text": "The above definition is abstract and hard to comprehend. However, the implementation is rather easy. Thomas Wiecki wrote an inspiring post which gives a intuitive illustration of copula."
},
{
"code": null,
"e": 1836,
"s": 1738,
"text": "To fully grab the concept of copula, understanding of random variable transformation is required."
},
{
"code": null,
"e": 1905,
"s": 1836,
"text": "Let’s start by sampling from a uniform distribution between 0 and 1."
},
{
"code": null,
"e": 1940,
"s": 1905,
"text": "x = stats.uniform(0, 1).rvs(10000)"
},
{
"code": null,
"e": 2338,
"s": 1940,
"text": "Instead of being uniform, these samples can be transformed to any probability distribution of interest via the inverse of the cumulative density function (CDF). Let’s say we want these samples to be normally distributed. We can pass these samples to inverse CDF function of a normal distribution ( ex. ppf function in scipy.stats). We will get the following samples which are normally distributed."
},
{
"code": null,
"e": 2393,
"s": 2338,
"text": "norm = stats.distributions.norm()x_trans = norm.ppf(x)"
},
{
"code": null,
"e": 2502,
"s": 2393,
"text": "If we plot the uniform samples, the transformed samples, and the inverse CDF curve on the same plot, we get:"
},
{
"code": null,
"e": 2612,
"s": 2502,
"text": "When we want to draw samples from a given distribution, the computer does this transformation under the hood."
},
{
"code": null,
"e": 2759,
"s": 2612,
"text": "Importantly, the process is reversible; that means that we can transform samples of any distribution back to unform distribution via the same CDF."
},
{
"code": null,
"e": 2804,
"s": 2759,
"text": "The steps of Gaussian copula are as follows:"
},
{
"code": null,
"e": 3176,
"s": 2804,
"text": "Draw samples from a correlated multivariate normal distribution. Variable correlations are specified via the covariance matrix.Transform the correlated samples so that marginals (each input) are uniform.Transform the uniform marginals to any distribution of interest. For example, porosity to a truncated normal distribution, HC saturation to a triangle distribution etc."
},
{
"code": null,
"e": 3304,
"s": 3176,
"text": "Draw samples from a correlated multivariate normal distribution. Variable correlations are specified via the covariance matrix."
},
{
"code": null,
"e": 3381,
"s": 3304,
"text": "Transform the correlated samples so that marginals (each input) are uniform."
},
{
"code": null,
"e": 3550,
"s": 3381,
"text": "Transform the uniform marginals to any distribution of interest. For example, porosity to a truncated normal distribution, HC saturation to a triangle distribution etc."
},
{
"code": null,
"e": 3633,
"s": 3550,
"text": "Let’s start by drawing samples from a correlated multivariate normal distribution."
},
{
"code": null,
"e": 3724,
"s": 3633,
"text": "mvnorm = stats.multivariate_normal([0, 0], [[1., 0.5], [0.5, 1.]])x = mvnorm.rvs((10000,))"
},
{
"code": null,
"e": 3784,
"s": 3724,
"text": "Next, we transform the marginals to a uniform distribution."
},
{
"code": null,
"e": 3831,
"s": 3784,
"text": "norm = stats.norm([0],[1])x_unif = norm.cdf(x)"
},
{
"code": null,
"e": 3932,
"s": 3831,
"text": "As we can see, the joint distribution of X1 and X2 are correlated while their marginals are uniform."
},
{
"code": null,
"e": 4131,
"s": 3932,
"text": "We can, now, transform the marginals to any distribution of interest while preserving the correlation. For example, we want to draw X1 from a triangle distribution and X2 from a normal distribution."
},
{
"code": null,
"e": 4248,
"s": 4131,
"text": "x1_tri = stats.triang.ppf(x_unif[:, 0], c=0.158 , loc=36, scale=21)x2_norm =stats.norm(525, 112).ppf(x_unif[:, 1])"
},
{
"code": null,
"e": 4352,
"s": 4248,
"text": "Now we obtain the desired joint distribution of X1 and X2 which are drawn from different distributions."
},
{
"code": null,
"e": 4512,
"s": 4352,
"text": "Armed with copula, we are ready to introduce variable correlations to the sampling phase of Monte Carlo. Here is the complete python codes of calculating OIIP:"
},
{
"code": null,
"e": 5409,
"s": 4512,
"text": "import seaborn as snsfrom scipy import statsimport numpy as npimport matplotlib.pyplot as plt# HCIIP = GRV*NTG*POR*SHC/FVFmeans = [0.]*5cov = [[1., 0., 0., 0., 0.],[0., 1., 0., 0., 0.],[0., 0., 1., 0., 0.],[0., 0., 0., 1., 0.],[0., 0., 0., 0., 1.]]mvnorm_std = stats.multivariate_normal(means,cov)x = mvnorm_std.rvs(10000,random_state=42)norm_std = stats.norm()x_unif = norm_std.cdf(x)#create individual distr.grv = stats.triang(c=0.1 , loc=10000, scale=300).ppf(x_unif[:, 0])ntg = stats.triang(c=0.2 , loc=0.5, scale=0.5).ppf(x_unif[:, 1])phi = stats.truncnorm(-2*1.96,1.96,0.2,0.05).ppf(x_unif[:, 2])shc = stats.norm(0.6,0.05).ppf(x_unif[:, 3])fvf= stats.truncnorm(-1.96,2*1.96,1.3,0.1).ppf(x_unif[:, 4])stoiip = 7758*grv*ntg*phi*shc/fvf/1e6sns.distplot(stoiip , kde=False, norm_hist=True)plt.figure()sns.distplot(stoiip ,hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))plt.show()"
},
{
"code": null,
"e": 5530,
"s": 5409,
"text": "For the case of no correlation, the covariance matrix is a diagonal matrix (zeros everywhere except the diagonal cells)."
},
{
"code": null,
"e": 5643,
"s": 5530,
"text": "cov = [[1., 0., 0., 0., 0.],[0., 1., 0., 0., 0.],[0., 0., 1., 0., 0.],[0., 0., 0., 1., 0.],[0., 0., 0., 0., 1.]]"
},
{
"code": null,
"e": 5748,
"s": 5643,
"text": "Running the above codes, we obtain the following OIIP histogram and cumulative probability distribution."
},
{
"code": null,
"e": 5865,
"s": 5748,
"text": "To illustrate the case of variable correlations, we arbitrarily assign positive correlations among NTG, POR and SHC."
},
{
"code": null,
"e": 5984,
"s": 5865,
"text": "cov = [[1., 0., 0., 0., 0.],[0., 1., 0.7, 0.6, 0.],[0., 0.7, 1., 0.8, 0.],[0., 0.6, 0.8, 1., 0.],[0., 0., 0., 0., 1.]]"
}
] |
How to write the first C++ program?
|
So you've decided to learn how to program in C++ but don't know where to start. Here's a brief overview of how you can get started.
This is the first step you'd want to take before you start learning to program in C++. There are good free C++ compilers available for all major OS platforms. Download one that suits your platform, or you can use tutorialspoint.com's online compiler at https://www.tutorialspoint.com/compile_cpp_online.php
GCC − GCC is the GNU Compiler Collection, basically a collection of different compilers created by GNU. You can download and install this compiler from http://gcc.gnu.org/
Clang − Clang is a compiler collection released by the LLVM community. It is available on all platforms, and you can download it and find install instructions at http://clang.llvm.org/get_started.html
Visual C++ 2017 Community − This is a free C++ compiler built for Windows by Microsoft. You can download and install this compiler from https://www.visualstudio.com/vs/cplusplus/
Now that you have a compiler installed, it's time to write a C++ program. Let's start with the epitome of programming examples: the Hello World program. We'll print "Hello World" to the screen using C++ in this example. Create a new file called hello.cpp and write the following code in it −
#include<iostream>
int main() {
std::cout << "Hello World\n";
}
Let's dissect this program.
Line 1− We start with the #include<iostream> line, which essentially tells the compiler to copy the code from the iostream file (used for managing input and output streams) and paste it into our source file. The iostream header allows performing standard input and output operations, such as writing the output of this program (Hello World) to the screen. Lines beginning with a hash sign (#) are directives read and interpreted by what is known as the preprocessor.
Line 2− A blank line: Blank lines have no effect on a program.
Line 3− We then declare a function called main with the return type of int. main() is the entry point of our program. Whenever we run a C++ program, we start with the main function and begin execution from the first line within this function, executing each line until we reach the end. We start a block using the opening curly brace ({) here. This marks the beginning of main's function definition, and the closing brace (}) at line 5 marks its end. All statements between these braces are the function's body, which defines what happens when main is called.
Line 4−
std::cout << "Hello World\n";
This line is a C++ statement. This statement has three parts: First, std::cout, which identifies the standard console output device. Second the insertion operator << which indicates that what follows is inserted into std::cout. Last, we have a sentence within quotes that we'd like printed on the screen. This will become more clear to you as we proceed in learning C++.
In short, we provide a cout object with a string "Hello world\n" to be printed to the standard output device.
Note that the statement ends with a semicolon (;). This character marks the end of the statement.
Now that we've written the program, we need to translate it to a language that the processor understands, i.e., binary machine code. We do this using the compiler we installed in the first step. You need to open your terminal/cmd and navigate to the location of the hello.cpp file using the cd command. Assuming you installed GCC, you can use the following command to compile the program −
$ g++ -o hello hello.cpp
This command means that you want the g++ compiler to create an output file, hello using the source file hello.cpp.
Now that we've written our program and compiled it, time to run it! You can run the program using −
$ ./hello
You will get the output−
Hello World
|
[
{
"code": null,
"e": 1194,
"s": 1062,
"text": "So you've decided to learn how to program in C++ but don't know where to start. Here's a brief overview of how you can get started."
},
{
"code": null,
"e": 1501,
"s": 1194,
"text": "This is the first step you'd want to do before starting learning to program in C++. There are good free C++ compilers available for all major OS platforms. Download one that suits your platform or you can use the tutorialspoint.com's online compiler on https://www.tutorialspoint.com/compile_cpp_online.php"
},
{
"code": null,
"e": 1686,
"s": 1501,
"text": "GCC − GCC is the GNU Compiler chain that is basically a collection of a bunch of different compilers created by GNU. You can download and install this compiler from http://gcc.gnu.org/"
},
{
"code": null,
"e": 1881,
"s": 1686,
"text": "Clang−Clang is a compiler collection released by the LLVM community. It is available on all platforms and you can download and find install instructions on http://clang.llvm.org/get_started.html"
},
{
"code": null,
"e": 2059,
"s": 1881,
"text": "Visual C++ 2017 Community− This is a free C++ compiler built for windows by Microsoft. You can download and install this compiler from https://www.visualstudio.com/vs/cplusplus/"
},
{
"code": null,
"e": 2353,
"s": 2059,
"text": "Now that you have a compiler installed, its time to write a C++ program. Let's start with the epitome of programming example's, it, the Hello world program. We'll print hello world to the screen using C++ in this example. Create a new file called hello.cpp and write the following code to it −"
},
{
"code": null,
"e": 2420,
"s": 2353,
"text": "#include<iostream>\nint main() {\n std::cout << \"Hello World\\n\";\n}"
},
{
"code": null,
"e": 2448,
"s": 2420,
"text": "Let's dissect this program."
},
{
"code": null,
"e": 2914,
"s": 2448,
"text": "Line 1− We start with the #include<iostream> line which essentially tells the compiler to copy the code from the iostream file(used for managing input and output streams) and paste it in our source file. Header iostream, that allows performing standard input and output operations, such as writing the output of this program (Hello World) to the screen. Lines beginning with a hash sign (#) are directives read and interpreted by what is known as the preprocessor."
},
{
"code": null,
"e": 2977,
"s": 2914,
"text": "Line 2− A blank line: Blank lines have no effect on a program."
},
{
"code": null,
"e": 3534,
"s": 2977,
"text": "Line 3− We then declare a function called main with the return type of int. main() is the entry point of our program. Whenever we run a C++ program, we start with the main function and begin execution from the first line within this function and keep executing each line till we reach the end. We start a block using the curly brace({) here. This marks the beginning of main's function definition, and the closing brace (}) at line 5, marks its end. All statements between these braces are the function's body that defines what happens when main is called."
},
{
"code": null,
"e": 3543,
"s": 3534,
"text": "Line 4− "
},
{
"code": null,
"e": 3573,
"s": 3543,
"text": "std::cout << \"Hello World\\n\";"
},
{
"code": null,
"e": 3944,
"s": 3573,
"text": "This line is a C++ statement. This statement has three parts: First, std::cout, which identifies the standard console output device. Second the insertion operator << which indicates that what follows is inserted into std::cout. Last, we have a sentence within quotes that we'd like printed on the screen. This will become more clear to you as we proceed in learning C++."
},
{
"code": null,
"e": 4054,
"s": 3944,
"text": "In short, we provide a cout object with a string \"Hello world\\n\" to be printed to the standard output device."
},
{
"code": null,
"e": 4151,
"s": 4054,
"text": "Note that the statement ends with a semicolon (;). This character marks the end of the statement"
},
{
"code": null,
"e": 4544,
"s": 4151,
"text": "Now that we've written the program, we need to translate it to a language that the processor understands, ie, in binary machine code. We do this using a compiler we installed in the first step. You need to open your terminal/cmd and navigate to the location of the hello.cpp file using the cd command. Assuming you installed the GCC, you can use the following command to compile the program −"
},
{
"code": null,
"e": 4569,
"s": 4544,
"text": "$ g++ -o hello hello.cpp"
},
{
"code": null,
"e": 4684,
"s": 4569,
"text": "This command means that you want the g++ compiler to create an output file, hello using the source file hello.cpp."
},
{
"code": null,
"e": 4784,
"s": 4684,
"text": "Now that we've written our program and compiled it, time to run it! You can run the program using −"
},
{
"code": null,
"e": 4794,
"s": 4784,
"text": "$ ./hello"
},
{
"code": null,
"e": 4819,
"s": 4794,
"text": "You will get the output−"
},
{
"code": null,
"e": 4831,
"s": 4819,
"text": "Hello world"
}
] |
Learn Beginner SQL in 5 steps in 5 minutes! | by Terence Shin | Towards Data Science
|
So you want to learn SQL? Great, you should!
In this article, I’m going to explain to you how to query with SQL in the easiest way possible. But first, let me define a couple of terms...
If this is the kind of stuff that you like, be one of the FIRST to subscribe to my new YouTube channel here! While there aren’t any videos yet, I’ll be sharing lots of amazing content like this but in video form. Thanks for your support :)
A row, also called a record, is a collection of attributes (variables) that represent a single entity. For example, one row may represent one hospital patient and may have attributes/variables like age, weight, height, etc...
A table is a collection of rows with the same attributes (with the same variables). What helps me the most is to think of a table as an Excel table.
A query is a request for data from a database table or combination of tables. Using the table above, I would write a query if I wanted to find all patients that were older than 23 years old.
Since this is a tutorial for beginners, I’m going to show you how to write a query if you wanted to extract data from one table.
There are five components to a basic query:
SELECT (mandatory)
FROM (mandatory)
WHERE (optional)
GROUP BY (optional)
ORDER BY (optional)
The structure is as follows:
SELECT [column_name_1], [column_name_2], [column_name_n]
FROM [table_name]
WHERE [condition 1]
GROUP BY [column_name]
ORDER BY [column_name]
Let’s bring back my example as a reference:
SELECT determines which columns you want to pull from a given table. For example, if I wanted to pull Name then my code would look like:
SELECT Name
A neat trick is if you want to pull all columns, you can use an asterisk — see below:
SELECT *
FROM determines which table you want to pull the information from. For example, if you wanted to pull the Name of the patient, you would want to pull the data FROM the table called patient_info (see above). The code would look something like this:
SELECT Name
FROM patient_info
And there’s your first functional query! Let's go through the 3 additional optional steps.
What if you wanted to select the Names of patients who are older than 23? This is when WHERE comes in. WHERE is a statement used to filter your table, the same way you would use the filter tool in Excel!
The code to get the Names of patients who are older than 23 is to the left. A visual representation is shown to the right:
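The query itself (shown as an image in the original article) would look like this, assuming the table and column names from the earlier example:

```sql
SELECT Name
FROM patient_info
WHERE Age > 23
```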
If you want the Names of patients that satisfy two clauses, you can use AND. Eg. Find the Names of patients who are older than 23 and weigh more than 130 lbs.
SELECT Name
FROM patient_info
WHERE Age > 23 AND Weight_lbs > 130
If you want the Names of patients that satisfy one of two clauses, you can use OR. Eg. Find the Names of patients who are younger than 22 or older than 23.
SELECT Name
FROM patient_info
WHERE Age < 22 OR Age > 23
GROUP BY does what it says — it groups rows that have the same values into summary rows. It is typically used with aggregate functions like COUNT, MIN, MAX, SUM, AVG.
Let's use the example below:
If we wanted to get the number of hospital visits for each patient, we could use the code below and get the following result:
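The query referenced here appears only as an image in the original article; a sketch of what it would look like, assuming a hypothetical table with one row per hospital visit:

```sql
SELECT Name, COUNT(*) AS Number_of_Visits
FROM patient_visits
GROUP BY Name
```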
ORDER BY allows you to sort your results based on a particular attribute or a number of attributes in ascending or descending order. Let’s show an example.
SELECT *
FROM patient_info
ORDER BY Age asc
‘ORDER BY Age asc’ means that your result set will order the rows by age in ascending order (see the left table in the image above). If you want to order it in descending order (right table in the image above), you would replace asc with desc.
And that’s how you construct a query! You’ve just learned one of the most in-demand skills in the tech world. I’ll provide some links below where you can practice your SQL skills. Enjoy!
If you like my work and want to support me...
Be one of the FIRST to subscribe to my new YouTube channel here! While there aren’t any videos yet, I’ll be sharing lots of amazing content like this but in video form.
Also be one of the first to follow me on Twitter here.
Follow me on LinkedIn here.
Sign up on my email list here.
Check out my website, terenceshin.com.
|
[
{
"code": null,
"e": 217,
"s": 172,
"text": "So you want to learn SQL? Great, you should!"
},
{
"code": null,
"e": 359,
"s": 217,
"text": "In this article, I’m going to explain to you how to query with SQL in the easiest way possible. But first, let me define a couple of terms..."
},
{
"code": null,
"e": 599,
"s": 359,
"text": "If this is the kind of stuff that you like, be one of the FIRST to subscribe to my new YouTube channel here! While there aren’t any videos yet, I’ll be sharing lots of amazing content like this but in video form. Thanks for your support :)"
},
{
"code": null,
"e": 825,
"s": 599,
"text": "A row, also called a record, is a collection of attributes (variables) that represent a single entity. For example, one row may represent one hospital patient and may have attributes/variables like age, weight, height, etc..."
},
{
"code": null,
"e": 974,
"s": 825,
"text": "A table is a collection of rows with the same attributes (with the same variables). What helps me the most is to think of a table as an Excel table."
},
{
"code": null,
"e": 1165,
"s": 974,
"text": "A query is a request for data from a database table or combination of tables. Using the table above, I would write a query if I wanted to find all patients that were older than 23 years old."
},
{
"code": null,
"e": 1294,
"s": 1165,
"text": "Since this is a tutorial for beginners, I’m going to show you how to write a query if you wanted to extract data from one table."
},
{
"code": null,
"e": 1338,
"s": 1294,
"text": "There are five components to a basic query:"
},
{
"code": null,
"e": 1427,
"s": 1338,
"text": "SELECT (mandatory)FROM (mandatory)WHERE (optional)GROUP BY (optional)ORDER BY (optional)"
},
{
"code": null,
"e": 1446,
"s": 1427,
"text": "SELECT (mandatory)"
},
{
"code": null,
"e": 1463,
"s": 1446,
"text": "FROM (mandatory)"
},
{
"code": null,
"e": 1480,
"s": 1463,
"text": "WHERE (optional)"
},
{
"code": null,
"e": 1500,
"s": 1480,
"text": "GROUP BY (optional)"
},
{
"code": null,
"e": 1520,
"s": 1500,
"text": "ORDER BY (optional)"
},
{
"code": null,
"e": 1549,
"s": 1520,
"text": "The structure is as follows:"
},
{
"code": null,
"e": 1701,
"s": 1549,
"text": "SELECT [column_name_1], [column_name_2], [column_name_n]FROM [table_name]WHERE [condition 1]GROUP BY [column_name] ORDER BY [column_name]"
},
{
"code": null,
"e": 1745,
"s": 1701,
"text": "Let’s bring back my example as a reference:"
},
{
"code": null,
"e": 1882,
"s": 1745,
"text": "SELECT determines which columns you want to pull from a given table. For example, if I wanted to pull Name then my code would look like:"
},
{
"code": null,
"e": 1894,
"s": 1882,
"text": "SELECT Name"
},
{
"code": null,
"e": 1980,
"s": 1894,
"text": "A neat trick is if you want to pull all columns, you can use an asterisk — see below:"
},
{
"code": null,
"e": 1989,
"s": 1980,
"text": "SELECT *"
},
{
"code": null,
"e": 2237,
"s": 1989,
"text": "FROM determines which table you want to pull the information from. For example, if you wanted to pull the Name of the patient, you would want to pull the data FROM the table called patient_info (see above). The code would look something like this:"
},
{
"code": null,
"e": 2270,
"s": 2237,
"text": "SELECT NameFROM patient_info"
},
{
"code": null,
"e": 2361,
"s": 2270,
"text": "And there’s your first functional query! Let's go through the 3 additional optional steps."
},
{
"code": null,
"e": 2565,
"s": 2361,
"text": "What if you wanted to select the Names of patients who are older than 23? This is when WHERE comes in. WHERE is a statement used to filter your table, the same way you would use the filter tool in Excel!"
},
{
"code": null,
"e": 2688,
"s": 2565,
"text": "The code to get the Names of patients who are older than 23 is to the left. A visual representation is shown to the right:"
},
{
"code": null,
"e": 2847,
"s": 2688,
"text": "If you want the Names of patients that satisfy two clauses, you can use AND. Eg. Find the Names of patients who are older than 23 and weigh more than 130 lbs."
},
{
"code": null,
"e": 2921,
"s": 2847,
"text": "SELECT NameFROM patient_infoWHERE Age > 23 AND Weight_lbs > 130"
},
{
"code": null,
"e": 3077,
"s": 2921,
"text": "If you want the Names of patients that satisfy one of two clauses, you can use OR. Eg. Find the Names of patients who are younger than 22 or older than 23."
},
{
"code": null,
"e": 3142,
"s": 3077,
"text": "SELECT NameFROM patient_infoWHERE Age < 22 OR Age > 23"
},
{
"code": null,
"e": 3309,
"s": 3142,
"text": "GROUP BY does what it says — it groups rows that have the same values into summary rows. It is typically used with aggregate functions like COUNT, MIN, MAX, SUM, AVG."
},
{
"code": null,
"e": 3338,
"s": 3309,
"text": "Let's use the example below:"
},
{
"code": null,
"e": 3464,
"s": 3338,
"text": "If we wanted to get the number of hospital visits for each patient, we could use the code below and get the following result:"
},
{
"code": null,
"e": 3620,
"s": 3464,
"text": "ORDER BY allows you to sort your results based on a particular attribute or a number of attributes in ascending or descending order. Let’s show an example."
},
{
"code": null,
"e": 3668,
"s": 3620,
"text": "SELECT *FROM patient_infoORDER BY Age asc"
},
{
"code": null,
"e": 3912,
"s": 3668,
"text": "‘ORDER BY Age asc’ means that your result set will order the rows by age in ascending order (see the left table in the image above). If you want to order it in descending order (right table in the image above), you would replace asc with desc."
},
{
"code": null,
"e": 4099,
"s": 3912,
"text": "And that’s how you construct a query! You’ve just learned one of the most in-demand skills in the tech world. I’ll provide some links below where you can practice your SQL skills. Enjoy!"
},
{
"code": null,
"e": 4145,
"s": 4099,
"text": "If you like my work and want to support me..."
},
{
"code": null,
"e": 4314,
"s": 4145,
"text": "Be one of the FIRST to subscribe to my new YouTube channel here! While there aren’t any videos yet, I’ll be sharing lots of amazing content like this but in video form."
},
{
"code": null,
"e": 4369,
"s": 4314,
"text": "Also be one of the first to follow me on Twitter here."
},
{
"code": null,
"e": 4397,
"s": 4369,
"text": "Follow me on LinkedIn here."
},
{
"code": null,
"e": 4428,
"s": 4397,
"text": "Sign up on my email list here."
}
] |
\lbrack - Tex Command
|
\lbrack - Used to draw left bracket symbol.
{ \lbrack }
\lbrack command is used to draw the left bracket symbol.
\lbrack \frac ab, c \rbrack
[ab,c]
\left\lbrack \frac ab, c \right\rbrack
[ab,c]
\lbrack \frac ab, c \rbrack
[ab,c]
\lbrack \frac ab, c \rbrack
\left\lbrack \frac ab, c \right\rbrack
[ab,c]
\left\lbrack \frac ab, c \right\rbrack
|
[
{
"code": null,
"e": 8030,
"s": 7986,
"text": "\\lbrack - Used to draw left bracket symbol."
},
{
"code": null,
"e": 8042,
"s": 8030,
"text": "{ \\lbrack }"
},
{
"code": null,
"e": 8093,
"s": 8042,
"text": "\\lbrack command is used to draw left brace symbol."
},
{
"code": null,
"e": 8182,
"s": 8093,
"text": "\n\\lbrack \\frac ab, c \\rbrack\n\n[ab,c]\n\n\n\\left\\lbrack \\frac ab, c \\right\\rbrack\n\n[ab,c]\n\n\n"
},
{
"code": null,
"e": 8220,
"s": 8182,
"text": "\\lbrack \\frac ab, c \\rbrack\n\n[ab,c]\n\n"
},
{
"code": null,
"e": 8248,
"s": 8220,
"text": "\\lbrack \\frac ab, c \\rbrack"
},
{
"code": null,
"e": 8297,
"s": 8248,
"text": "\\left\\lbrack \\frac ab, c \\right\\rbrack\n\n[ab,c]\n\n"
},
{
"code": null,
"e": 8336,
"s": 8297,
"text": "\\left\\lbrack \\frac ab, c \\right\\rbrack"
},
{
"code": null,
"e": 8368,
"s": 8336,
"text": "\n 14 Lectures \n 52 mins\n"
},
{
"code": null,
"e": 8381,
"s": 8368,
"text": " Ashraf Said"
},
{
"code": null,
"e": 8414,
"s": 8381,
"text": "\n 11 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 8427,
"s": 8414,
"text": " Ashraf Said"
},
{
"code": null,
"e": 8459,
"s": 8427,
"text": "\n 9 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 8495,
"s": 8459,
"text": " Emenwa Global, Ejike IfeanyiChukwu"
},
{
"code": null,
"e": 8530,
"s": 8495,
"text": "\n 29 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 8547,
"s": 8530,
"text": " Mohammad Nauman"
},
{
"code": null,
"e": 8580,
"s": 8547,
"text": "\n 14 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 8594,
"s": 8580,
"text": " Daniel Stern"
},
{
"code": null,
"e": 8626,
"s": 8594,
"text": "\n 15 Lectures \n 47 mins\n"
},
{
"code": null,
"e": 8641,
"s": 8626,
"text": " Nishant Kumar"
},
{
"code": null,
"e": 8648,
"s": 8641,
"text": " Print"
},
{
"code": null,
"e": 8659,
"s": 8648,
"text": " Add Notes"
}
] |
MySQL edit and update records including employee salary
|
The UPDATE command is used in MySQL to update records. With it, the SET command is used to set new values. Let us first create a table −
mysql> create table DemoTable
(
EmployeeId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
EmployeeName varchar(50),
EmployeeSalary int
);
Query OK, 0 rows affected (0.57 sec)
Insert some records in the table using insert command −
mysql> insert into DemoTable(EmployeeName,EmployeeSalary) values('Chris',56780);
Query OK, 1 row affected (0.14 sec)
mysql> insert into DemoTable(EmployeeName,EmployeeSalary) values('Robert',45670);
Query OK, 1 row affected (0.10 sec)
mysql> insert into DemoTable(EmployeeName,EmployeeSalary) values('Mike',87654);
Query OK, 1 row affected (0.13 sec)
mysql> insert into DemoTable(EmployeeName,EmployeeSalary) values('David',34569);
Query OK, 1 row affected (0.11 sec)
Display all records from the table using select statement −
mysql> select *from DemoTable;
This will produce the following output −
+------------+--------------+----------------+
| EmployeeId | EmployeeName | EmployeeSalary |
+------------+--------------+----------------+
| 1 | Chris | 56780 |
| 2 | Robert | 45670 |
| 3 | Mike | 87654 |
| 4 | David | 34569 |
+------------+--------------+----------------+
4 rows in set (0.00 sec)
Following is the query to update records and set new values −
mysql> update DemoTable
set EmployeeSalary=EmployeeSalary+12346;
Query OK, 4 rows affected (0.14 sec)
Rows matched: 4 Changed: 4 Warnings: 0
Let us check the table records −
mysql> select *from DemoTable;
This will produce the following output −
+------------+--------------+----------------+
| EmployeeId | EmployeeName | EmployeeSalary |
+------------+--------------+----------------+
| 1 | Chris | 69126 |
| 2 | Robert | 58016 |
| 3 | Mike | 100000 |
| 4 | David | 46915 |
+------------+--------------+----------------+
4 rows in set (0.00 sec)
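The same workflow can be sketched in Python with the standard-library sqlite3 module. This is only an illustrative stand-in for the MySQL session above (SQLite uses AUTOINCREMENT instead of AUTO_INCREMENT, but the UPDATE ... SET syntax is identical); the table and column names follow the article.

```python
import sqlite3

# In-memory SQLite database standing in for the MySQL server
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE DemoTable (
    EmployeeId INTEGER PRIMARY KEY AUTOINCREMENT,
    EmployeeName TEXT,
    EmployeeSalary INTEGER)""")
con.executemany(
    "INSERT INTO DemoTable(EmployeeName, EmployeeSalary) VALUES (?, ?)",
    [("Chris", 56780), ("Robert", 45670), ("Mike", 87654), ("David", 34569)])

# Bulk update: raise every salary by 12346, exactly as in the MySQL query
con.execute("UPDATE DemoTable SET EmployeeSalary = EmployeeSalary + 12346")

for row in con.execute("SELECT * FROM DemoTable ORDER BY EmployeeId"):
    print(row)  # e.g. (1, 'Chris', 69126)
```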
|
[
{
"code": null,
"e": 1199,
"s": 1062,
"text": "The UPDATE command is used in MySQL to update records. With it, the SET command is used to set new values. Let us first create a table −"
},
{
"code": null,
"e": 1377,
"s": 1199,
"text": "mysql> create table DemoTable\n(\n EmployeeId int NOT NULL AUTO_INCREMENT PRIMARY KEY,\n EmployeeName varchar(50),\n EmployeeSalary int\n);\nQuery OK, 0 rows affected (0.57 sec)"
},
{
"code": null,
"e": 1433,
"s": 1377,
"text": "Insert some records in the table using insert command −"
},
{
"code": null,
"e": 1901,
"s": 1433,
"text": "mysql> insert into DemoTable(EmployeeName,EmployeeSalary) values('Chris',56780);\nQuery OK, 1 row affected (0.14 sec)\nmysql> insert into DemoTable(EmployeeName,EmployeeSalary) values('Robert',45670);\nQuery OK, 1 row affected (0.10 sec)\nmysql> insert into DemoTable(EmployeeName,EmployeeSalary) values('Mike',87654);\nQuery OK, 1 row affected (0.13 sec)\nmysql> insert into DemoTable(EmployeeName,EmployeeSalary) values('David',34569);\nQuery OK, 1 row affected (0.11 sec)"
},
{
"code": null,
"e": 1961,
"s": 1901,
"text": "Display all records from the table using select statement −"
},
{
"code": null,
"e": 1992,
"s": 1961,
"text": "mysql> select *from DemoTable;"
},
{
"code": null,
"e": 2033,
"s": 1992,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2434,
"s": 2033,
"text": "+------------+--------------+----------------+\n| EmployeeId | EmployeeName | EmployeeSalary |\n+------------+--------------+----------------+\n| 1 | Chris | 56780 |\n| 2 | Robert | 45670 |\n| 3 | Mike | 87654 |\n| 4 | David | 34569 |\n+------------+--------------+----------------+\n4 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2496,
"s": 2434,
"text": "Following is the query to update records and set new values −"
},
{
"code": null,
"e": 2640,
"s": 2496,
"text": "mysql> update DemoTable\n set EmployeeSalary=EmployeeSalary+12346;\nQuery OK, 4 rows affected (0.14 sec)\nRows matched: 4 Changed: 4 Warnings: 0"
},
{
"code": null,
"e": 2673,
"s": 2640,
"text": "Let us check the table records −"
},
{
"code": null,
"e": 2704,
"s": 2673,
"text": "mysql> select *from DemoTable;"
},
{
"code": null,
"e": 2745,
"s": 2704,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 3146,
"s": 2745,
"text": "+------------+--------------+----------------+\n| EmployeeId | EmployeeName | EmployeeSalary |\n+------------+--------------+----------------+\n| 1 | Chris | 69126 |\n| 2 | Robert | 58016 |\n| 3 | Mike | 100000 |\n| 4 | David | 46915 |\n+------------+--------------+----------------+\n4 rows in set (0.00 sec)"
}
] |
When to use static methods in Java?
|
You should use static methods whenever:
The code in the method is not dependent on instance creation and is not using any instance variable.
A particular piece of code is to be shared by all the instance methods.
The definition of the method should not be changed or overridden.
You are writing utility classes which should not be changed.
Live Demo
public class InstanceCounter {
private static int numInstances = 0;
protected static int getCount() {
return numInstances;
}
private static void addInstance() {
numInstances++;
}
InstanceCounter() {
InstanceCounter.addInstance();
}
public static void main(String[] arguments) {
System.out.println("Starting with "+InstanceCounter.getCount()+" instances");
for (int i = 0; i < 500; ++i) {
new InstanceCounter();
}
System.out.println("Created " + InstanceCounter.getCount() + " instances");
}
}
Starting with 0 instances
Created 500 instances
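For comparison only (this is not part of the original article), the same shared-counter idea can be sketched in Python, where a class-level attribute and @staticmethod play the role of Java's static field and static methods:

```python
class InstanceCounter:
    # Class-level state shared by all instances
    # (analogue of Java's private static int numInstances)
    num_instances = 0

    def __init__(self):
        InstanceCounter.add_instance()

    @staticmethod
    def add_instance():
        InstanceCounter.num_instances += 1

    @staticmethod
    def get_count():
        return InstanceCounter.num_instances


print("Starting with", InstanceCounter.get_count(), "instances")
for _ in range(500):
    InstanceCounter()
print("Created", InstanceCounter.get_count(), "instances")
```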
|
[
{
"code": null,
"e": 1102,
"s": 1062,
"text": "You should use static methods whenever,"
},
{
"code": null,
"e": 1203,
"s": 1102,
"text": "The code in the method is not dependent on instance creation and is not using any instance variable."
},
{
"code": null,
"e": 1275,
"s": 1203,
"text": "A particular piece of code is to be shared by all the instance methods."
},
{
"code": null,
"e": 1341,
"s": 1275,
"text": "The definition of the method should not be changed or overridden."
},
{
"code": null,
"e": 1402,
"s": 1341,
"text": "you are writing utility classes which should not be changed."
},
{
"code": null,
"e": 1413,
"s": 1402,
"text": " Live Demo"
},
{
"code": null,
"e": 1991,
"s": 1413,
"text": "public class InstanceCounter {\n private static int numInstances = 0;\n protected static int getCount() {\n return numInstances;\n }\n private static void addInstance() {\n numInstances++;\n }\n InstanceCounter() {\n InstanceCounter.addInstance();\n }\n public static void main(String[] arguments) {\n System.out.println(\"Starting with \"+InstanceCounter.getCount()+\" instances\");\n \n for (int i = 0; i < 500; ++i) {\n new InstanceCounter();\n }\n System.out.println(\"Created \" + InstanceCounter.getCount() + \" instances\");\n }\n}"
},
{
"code": null,
"e": 2039,
"s": 1991,
"text": "Started with 0 instances\nCreated 500 instances\n"
}
] |
Calculate the sum of the diagonal elements of a NumPy array - GeeksforGeeks
|
05 Sep, 2020
Sometimes we need to find the sum of the upper-right, upper-left, lower-right, or lower-left diagonal elements. NumPy provides the facility to compute the sum of different diagonal elements using the numpy.trace() and numpy.diagonal() methods.
Method 1: Finding the sum of diagonal elements using numpy.trace()
Syntax : numpy.trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None)
Example 1: For 3X3 Numpy matrix
Python3
# importing Numpy packageimport numpy as np # creating a 3X3 Numpy matrixn_array = np.array([[55, 25, 15], [30, 44, 2], [11, 45, 77]]) # Displaying the Matrixprint("Numpy Matrix is:")print(n_array) # calculating the Trace of a matrixtrace = np.trace(n_array) print("\nTrace of given 3X3 matrix:")print(trace)
Output:
Example 2: For 4X4 Numpy matrix
Python3
# importing Numpy packageimport numpy as np # creating a 4X4 Numpy matrixn_array = np.array([[55, 25, 15, 41], [30, 44, 2, 54], [11, 45, 77, 11], [11, 212, 4, 20]]) # Displaying the Matrixprint("Numpy Matrix is:")print(n_array) # calculating the Trace of a matrixtrace = np.trace(n_array) print("\nTrace of given 4X4 matrix:")print(trace)
Output:
Method 2: Finding the sum of diagonal elements using numpy.diagonal()
Syntax :
numpy.diagonal(a, offset=0, axis1=0, axis2=1)
Example 1: For 3X3 Numpy Matrix
Python3
# importing Numpy packageimport numpy as np # creating a 3X3 Numpy matrixn_array = np.array([[55, 25, 15], [30, 44, 2], [11, 45, 77]]) # Displaying the Matrixprint("Numpy Matrix is:")print(n_array) # Finding the diagonal elements of a matrixdiag = np.diagonal(n_array) print("\nDiagonal elements are:")print(diag) print("\nSum of Diagonal elements is:")print(sum(diag))
Output:
Example 2: For 5X5 Numpy Matrix
Python3
# importing Numpy packageimport numpy as np # creating a 5X5 Numpy matrixn_array = np.array([[5, 2, 1, 4, 6], [9, 4, 2, 5, 2], [11, 5, 7, 3, 9], [5, 6, 6, 7, 2], [7, 5, 9, 3, 3]]) # Displaying the Matrixprint("Numpy Matrix is:")print(n_array) # Finding the diagonal elements of a matrixdiag = np.diagonal(n_array) print("\nDiagonal elements are:")print(diag) print("\nSum of Diagonal elements is:")print(sum(diag))
Output:
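The rendered output images are not reproduced above. As a quick numeric check (the expected value below is computed by hand from the article's 3X3 matrix: 55 + 44 + 77 = 176), both methods agree:

```python
import numpy as np

# The article's 3X3 matrix
n_array = np.array([[55, 25, 15],
                    [30, 44, 2],
                    [11, 45, 77]])

# Method 1: trace sums the main diagonal directly
t = np.trace(n_array)

# Method 2: extract the main diagonal, then sum it
d = np.diagonal(n_array).sum()

print(t, d)  # both give 55 + 44 + 77 = 176
```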
Python numpy-Matrix Function
Python-numpy
Python
|
[
{
"code": null,
"e": 24355,
"s": 24327,
"text": "\n05 Sep, 2020"
},
{
"code": null,
"e": 24598,
"s": 24355,
"text": "Sometimes we need to find the sum of the Upper right, Upper left, Lower right, or lower left diagonal elements. Numpy provides us the facility to compute the sum of different diagonals elements using numpy.trace() and numpy.diagonal() method."
},
{
"code": null,
"e": 24665,
"s": 24598,
"text": "Method 1: Finding the sum of diagonal elements using numpy.trace()"
},
{
"code": null,
"e": 24741,
"s": 24665,
"text": "Syntax : numpy.trace(a, offset=0, axis1=0, axis2=1, dtype=None, out=None) "
},
{
"code": null,
"e": 24773,
"s": 24741,
"text": "Example 1: For 3X3 Numpy matrix"
},
{
"code": null,
"e": 24781,
"s": 24773,
"text": "Python3"
},
{
"code": "# importing Numpy packageimport numpy as np # creating a 3X3 Numpy matrixn_array = np.array([[55, 25, 15], [30, 44, 2], [11, 45, 77]]) # Displaying the Matrixprint(\"Numpy Matrix is:\")print(n_array) # calculating the Trace of a matrixtrace = np.trace(n_array) print(\"\\nTrace of given 3X3 matrix:\")print(trace)",
"e": 25134,
"s": 24781,
"text": null
},
{
"code": null,
"e": 25142,
"s": 25134,
"text": "Output:"
},
{
"code": null,
"e": 25174,
"s": 25142,
"text": "Example 2: For 4X4 Numpy matrix"
},
{
"code": null,
"e": 25182,
"s": 25174,
"text": "Python3"
},
{
"code": "# importing Numpy packageimport numpy as np # creating a 4X4 Numpy matrixn_array = np.array([[55, 25, 15, 41], [30, 44, 2, 54], [11, 45, 77, 11], [11, 212, 4, 20]]) # Displaying the Matrixprint(\"Numpy Matrix is:\")print(n_array) # calculating the Trace of a matrixtrace = np.trace(n_array) print(\"\\nTrace of given 4X4 matrix:\")print(trace)",
"e": 25584,
"s": 25182,
"text": null
},
{
"code": null,
"e": 25592,
"s": 25584,
"text": "Output:"
},
{
"code": null,
"e": 25662,
"s": 25592,
"text": "Method 2: Finding the sum of diagonal elements using numpy.diagonal()"
},
{
"code": null,
"e": 25671,
"s": 25662,
"text": "Syntax :"
},
{
"code": null,
"e": 25717,
"s": 25671,
"text": "numpy.diagonal(a, offset=0, axis1=0, axis2=1\n"
},
{
"code": null,
"e": 25749,
"s": 25717,
"text": "Example 1: For 3X3 Numpy Matrix"
},
{
"code": null,
"e": 25757,
"s": 25749,
"text": "Python3"
},
{
"code": "# importing Numpy packageimport numpy as np # creating a 3X3 Numpy matrixn_array = np.array([[55, 25, 15], [30, 44, 2], [11, 45, 77]]) # Displaying the Matrixprint(\"Numpy Matrix is:\")print(n_array) # Finding the diagonal elements of a matrixdiag = np.diagonal(n_array) print(\"\\nDiagonal elements are:\")print(diag) print(\"\\nSum of Diagonal elements is:\")print(sum(diag))",
"e": 26170,
"s": 25757,
"text": null
},
{
"code": null,
"e": 26178,
"s": 26170,
"text": "Output:"
},
{
"code": null,
"e": 26210,
"s": 26178,
"text": "Example 2: For 5X5 Numpy Matrix"
},
{
"code": null,
"e": 26218,
"s": 26210,
"text": "Python3"
},
{
"code": "# importing Numpy packageimport numpy as np # creating a 5X5 Numpy matrixn_array = np.array([[5, 2, 1, 4, 6], [9, 4, 2, 5, 2], [11, 5, 7, 3, 9], [5, 6, 6, 7, 2], [7, 5, 9, 3, 3]]) # Displaying the Matrixprint(\"Numpy Matrix is:\")print(n_array) # Finding the diagonal elements of a matrixdiag = np.diagonal(n_array) print(\"\\nDiagonal elements are:\")print(diag) print(\"\\nSum of Diagonal elements is:\")print(sum(diag))",
"e": 26714,
"s": 26218,
"text": null
},
{
"code": null,
"e": 26722,
"s": 26714,
"text": "Output:"
},
{
"code": null,
"e": 26751,
"s": 26722,
"text": "Python numpy-Matrix Function"
},
{
"code": null,
"e": 26764,
"s": 26751,
"text": "Python-numpy"
},
{
"code": null,
"e": 26771,
"s": 26764,
"text": "Python"
},
{
"code": null,
"e": 26869,
"s": 26771,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26878,
"s": 26869,
"text": "Comments"
},
{
"code": null,
"e": 26891,
"s": 26878,
"text": "Old Comments"
},
{
"code": null,
"e": 26909,
"s": 26891,
"text": "Python Dictionary"
},
{
"code": null,
"e": 26931,
"s": 26909,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 26963,
"s": 26931,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 27005,
"s": 26963,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 27031,
"s": 27005,
"text": "Python String | replace()"
},
{
"code": null,
"e": 27075,
"s": 27031,
"text": "Reading and Writing to text files in Python"
},
{
"code": null,
"e": 27100,
"s": 27075,
"text": "sum() function in Python"
},
{
"code": null,
"e": 27137,
"s": 27100,
"text": "Create a Pandas DataFrame from Lists"
},
{
"code": null,
"e": 27193,
"s": 27137,
"text": "How to drop one or multiple columns in Pandas Dataframe"
}
] |
Achieve Glow Effect with CSS Filters
|
The glow effect is used to create a glow around an object. If the object is a transparent image, the glow is created around its opaque pixels.
The following parameters can be used in this filter −
You can try to run the following code to create a glow around the object −
Live Demo
<html>
<head>
</head>
<body>
<img src="/css/images/logo.png"
alt="CSS Logo"
style="Filter: Chroma(Color = #000000) Glow(Color=#00FF00, Strength=20)">
<p>Text Example:</p>
<div style="width: 357;
height: 50;
font-size: 30pt;
font-family: Arial Black;
color: red;
Filter: Glow(Color=#00FF00, Strength=20)">CSS Tutorials</div>
</body>
</html>
|
[
{
"code": null,
"e": 1205,
"s": 1062,
"text": "The glow effect is used to create a glow around the object. If it is a transparent image, then glow is created around the opaque pixels of it."
},
{
"code": null,
"e": 1259,
"s": 1205,
"text": "The following parameters can be used in this filter −"
},
{
"code": null,
"e": 1334,
"s": 1259,
"text": "You can try to run the following code to create a glow around the object −"
},
{
"code": null,
"e": 1344,
"s": 1334,
"text": "Live Demo"
},
{
"code": null,
"e": 1780,
"s": 1344,
"text": "<html>\n <head>\n </head>\n\n <body>\n <img src=\"/css/images/logo.png\"\n alt=\"CSS Logo\"\n style=\"Filter: Chroma(Color = #000000) Glow(Color=#00FF00, Strength=20)\">\n\n <p>Text Example:</p>\n\n <div style=\"width: 357;\n height: 50;\n font-size: 30pt;\n font-family: Arial Black;\n color: red;\n Filter: Glow(Color=#00FF00, Strength=20)\">CSS Tutorials</div>\n </body>\n</html>"
}
] |
Comments in Octave GNU - GeeksforGeeks
|
01 Aug, 2020
Octave is open-source and freely available for many platforms. It is a high-level language. It comes with a text interface along with an experimental graphical interface. It is also used in various Machine Learning algorithms for solving numeric problems. You can say that it is similar to MATLAB, but slower than MATLAB.
Comments are generic English sentences, mostly written in a program to explain what it does or what a piece of code is supposed to do. More specifically, they carry information that the programmer should be concerned with, and they have nothing to do with the logic of the code. They are completely ignored by the interpreter and are thus never reflected in the output.
In Octave comments are of two types:
Single-line Comments
Block Comments
Single-line comments are comments that require only one line. They are usually drafted to explain what a single line of code does or what it is supposed to produce, so that they can help someone referring to the source code. In Octave, to make a single-line comment, just put a ‘#’ or ‘%’ in front of that line.
Syntax :
# comment statement
% comment statement
Example :
char1 = '#';printf("Below is the comment using %c\n", char1);# this is a comment char2 = '%';printf("Below is the comment using %c\n", char2);% this is also a comment
Output :
Below is the comment using #
Below is the comment using %
In Octave, to make a block comment, just put a ‘#{’ or ‘%{’ at the start of the block and a ‘#}’ or ‘%}’ at the end of the block.
Syntax :
#{
this is
a block
comment
#}
%{
this is
a block
comment
%}
Example :
char1 = '#';printf("Below is the block comment using %c\n", char1);#{this is a comment using # #} char2 = '%';printf("Below is the block comment using %c\n", char2);%{this is a comment using % %}
Output :
Below is the block comment using #
Below is the block comment using %
Octave-GNU
Programming Language
|
[
{
"code": null,
"e": 24452,
"s": 24424,
"text": "\n01 Aug, 2020"
},
{
"code": null,
"e": 24787,
"s": 24452,
"text": "Octave is open-source, free available for many of the platforms. It is a high-level language. It comes up with a text interface along with an experimental graphical interface. It is also used for various Machine Learning algorithms for solving various numeric problems. You can say that it is similar to MATLAB but slower than MATLAB."
},
{
"code": null,
"e": 25137,
"s": 24787,
"text": "Comments are generic English sentences, mostly written in a program to explain what it does or what a piece of code is supposed to do. More specifically, information that programmer should be concerned with and it has nothing to do with the logic of the code. They are completely ignored by the compiler and are thus never reflected on to the input."
},
{
"code": null,
"e": 25174,
"s": 25137,
"text": "In Octave comments are of two types:"
},
{
"code": null,
"e": 25195,
"s": 25174,
"text": "Single-line Comments"
},
{
"code": null,
"e": 25210,
"s": 25195,
"text": "Block Comments"
},
{
"code": null,
"e": 25510,
"s": 25210,
"text": "Single-line comments are comments that require only one line. They are usually drafted to explain what a single line of code does or what it is supposed to produce so that it can help someone referring to the source code. In octave to make a line comment just put a ‘#’ or ‘%’ in front of that line."
},
{
"code": null,
"e": 25519,
"s": 25510,
"text": "Syntax :"
},
{
"code": null,
"e": 25562,
"s": 25519,
"text": "# comment statement \n% comment statement \n"
},
{
"code": null,
"e": 25572,
"s": 25562,
"text": "Example :"
},
{
"code": "char1 = '#';printf(\"Below is the comment using %c\\n\", char1);# this is a comment char2 = '%';printf(\"Below is the comment using %c\\n\", char2);% this is also a comment",
"e": 25740,
"s": 25572,
"text": null
},
{
"code": null,
"e": 25749,
"s": 25740,
"text": "Output :"
},
{
"code": null,
"e": 25808,
"s": 25749,
"text": "Below is the comment using #\nBelow is the comment using %\n"
},
{
"code": null,
"e": 25937,
"s": 25808,
"text": "In octave to make a block comment just put a ‘#{‘ or ‘%{‘ in the starting of the block and ‘#}’ or ‘%}’ at the end of the block."
},
{
"code": null,
"e": 25946,
"s": 25937,
"text": "Syntax :"
},
{
"code": null,
"e": 26018,
"s": 25946,
"text": "#{ \nthis is\n a block\n comment\n#}\n\n%{ \nthis is \n a block \n comment\n%} \n"
},
{
"code": null,
"e": 26028,
"s": 26018,
"text": "Example :"
},
{
"code": "char1 = '#';printf(\"Below is the block comment using %c\\n\", char1);#{this is a comment using # #} char2 = '%';printf(\"Below is the block comment using %c\\n\", char2);%{this is a comment using % %}",
"e": 26225,
"s": 26028,
"text": null
},
{
"code": null,
"e": 26234,
"s": 26225,
"text": "Output :"
},
{
"code": null,
"e": 26305,
"s": 26234,
"text": "Below is the block comment using #\nBelow is the block comment using %\n"
},
{
"code": null,
"e": 26316,
"s": 26305,
"text": "Octave-GNU"
},
{
"code": null,
"e": 26337,
"s": 26316,
"text": "Programming Language"
},
{
"code": null,
"e": 26435,
"s": 26337,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26444,
"s": 26435,
"text": "Comments"
},
{
"code": null,
"e": 26457,
"s": 26444,
"text": "Old Comments"
},
{
"code": null,
"e": 26503,
"s": 26457,
"text": "Top 10 Programming Languages to Learn in 2022"
},
{
"code": null,
"e": 26555,
"s": 26503,
"text": "Difference between Shallow and Deep copy of a class"
},
{
"code": null,
"e": 26591,
"s": 26555,
"text": "Advantages and Disadvantages of OOP"
},
{
"code": null,
"e": 26616,
"s": 26591,
"text": "Prolog | An Introduction"
},
{
"code": null,
"e": 26654,
"s": 26616,
"text": "Program to calculate Electricity Bill"
},
{
"code": null,
"e": 26691,
"s": 26654,
"text": "Top 10 Fastest Programming Languages"
},
{
"code": null,
"e": 26728,
"s": 26691,
"text": "Java Swing | JComboBox with examples"
},
{
"code": null,
"e": 26748,
"s": 26728,
"text": "JLabel | Java Swing"
},
{
"code": null,
"e": 26793,
"s": 26748,
"text": "10 Best IDEs for C or C++ Developers in 2021"
}
] |
Water drop problem - GeeksforGeeks
|
06 Oct, 2021
Consider a pipe of length L. The pipe has N water droplets at N different positions within it. Each water droplet is moving towards the end of the pipe (x = L) at a different rate. When a water droplet mixes with another water droplet, it assumes the speed of the water droplet it is mixing with. Determine the number of droplets that come out of the end of the pipe. Refer to the figure below:
The numbers on the circles indicate the speed of the water droplets
Examples:
Input: length = 12, position = [10, 8, 0, 5, 3],
speed = [2, 4, 1, 1, 3]
Output: 3
Explanation:
Droplets starting at x=10 and x=8 become a droplet,
meeting each other at x=12 at time =1 sec.
The droplet starting at 0 doesn't mix with any
other droplet, so it is a drop by itself.
Droplets starting at x=5 and x=3 become a single
drop, mixing with each other at x=6 at time = 1 sec.
Note that no other droplets meet these drops before
the end of the pipe, so the answer is 3.
Refer to the figure below
Numbers on the circles indicate the speed of the water droplets.
Approach: This problem uses a greedy technique. A drop will mix with another drop if two conditions are met: 1. The drop is faster than the drop it is mixing with. 2. The faster drop starts behind the slower drop.
We use an array of pairs to store, for the ith drop, its position and the time it would take to reach the end of the pipe. Then we sort the array according to the positions of the drops. Now we have a fair idea of which drops lie behind which drops, and their respective times to reach the end. More time means less speed, and less time means more speed. All the drops behind a slower drop will mix with it, all the drops behind the next slower drop will mix with that one, and so on. For example, if the times to reach the end are: 12, 3, 7, 8, 1 (sorted according to positions), the 0th drop is the slowest, so it won’t mix with the next drop. The 1st drop is faster than the 2nd drop, so they will mix, and the 2nd drop is faster than the 3rd drop, so all three will mix together. They cannot mix with the 4th drop because that one is faster. Number of local maxima + residue (drops after the last local maximum) = total number of drops.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
#include <bits/stdc++.h>using namespace std; // Function to find the number// of the drops that come out of the// pipeint drops(int length, int position[], int speed[], int n){ // stores position and time // taken by a single // drop to reach the end as a pair vector<pair<int, double> > m(n); int i; for (i = 0; i < n; i++) { // calculates distance needs to be // covered by the ith drop int p = length - position[i]; // inserts initial position of the // ith drop to the pair m[i].first = position[i]; // inserts time taken by ith // drop to reach // the end to the pair m[i].second = p * 1.0 / speed[i]; } // sorts the pair according to increasing // order of their positions sort(m.begin(), m.end()); int k = 0; // counter for no of final drops int curr_max = m[n-1].second; // we traverse the array demo // right to left // to determine the slower drop for (i = n - 2; i >= 0; i--) { // checks for next slower drop if (m[i].second > curr_max) { k++; curr_max=m[i].second; } } // calculating residual // drops in the pipe k++; return k;} // Driver Codeint main(){ // length of pipe int length = 12; // position of droplets int position[] = { 10, 8, 0, 5, 3 }; // speed of each droplets int speed[] = { 2, 4, 1, 1, 3 }; int n = sizeof(speed)/sizeof(speed[0]); cout << drops(length, position, speed, n); return 0;}
import java.util.*; // User defined Pair classclass Pair { int x; int y; // Constructor public Pair(int x, int y) { this.x = x; this.y = y; }} // class to define user defined conparatorclass Compare { static void compare(Pair arr[], int n) { // Comparator to sort the pair according to second element Arrays.sort(arr, new Comparator<Pair>() { @Override public int compare(Pair p1, Pair p2) { return p1.x - p2.x; } }); }} public class Main{ // Function to find the number // of the drops that come out of the // pipe static int drops(int length, int[] position, int[] speed, int n) { // stores position and time // taken by a single // drop to reach the end as a pair Pair m[] = new Pair[n]; int i; for (i = 0; i < n; i++) { // calculates distance needs to be // covered by the ith drop int p = length - position[i]; // inserts initial position of the // ith drop to the pair // inserts time taken by ith // drop to reach // the end to the pair m[i] = new Pair(position[i], p / speed[i]); } // sorts the pair according to increasing // order of their positions Compare obj = new Compare(); obj.compare(m, n); int k = 0; // counter for no of final drops int curr_max = (int)(m[n - 1].y); // we traverse the array demo // right to left // to determine the slower drop for (i = n - 2; i >= 0; i--) { // checks for next slower drop if (m[i].y > curr_max) { k++; curr_max = (int)(m[i].y); } } // calculating residual // drops in the pipe k++; return k; } public static void main(String[] args) { // length of pipe int length = 12; // position of droplets int[] position = { 10, 8, 0, 5, 3 }; // speed of each droplets int[] speed = { 2, 4, 1, 1, 3 }; int n = speed.length; System.out.println(drops(length, position, speed, n)); }} // This code is contributed by decode2207.
# Function to find the number# of the drops that come out of the# pipedef drops(length, position, speed, n): # Stores position and time # taken by a single drop to # reach the end as a pair m = [] for i in range(n): # Calculates distance needs to be # covered by the ith drop p = length - position[i] # Inserts initial position of the # ith drop to the pair # inserts time taken by ith # drop to reach # the end to the pair m.append([position[i], (p * 1.0) / speed[i]]) # Sorts the pair according to increasing # order of their positions m.sort() # Counter for no of final drops k = 0 curr_max = m[n - 1][1] # We traverse the array demo # right to left # to determine the slower drop for i in range(n - 2, -1, -1): # Checks for next slower drop if (m[i][1] > curr_max): k += 1 curr_max = m[i][1] # Calculating residual # drops in the pipe k += 1 return k # Driver Code # Length of pipelength = 12 # Position of dropletsposition = [ 10, 8, 0, 5, 3 ] # Speed of each dropletsspeed = [ 2, 4, 1, 1, 3 ]n = len(speed) print(drops(length, position, speed, n)) # This code is contributed by divyeshrabadiya07
using System;using System.Collections.Generic;class GFG{ // Function to find the number // of the drops that come out of the // pipe static int drops(int length, int[] position, int[] speed, int n) { // stores position and time // taken by a single // drop to reach the end as a pair List<Tuple<int,double>> m = new List<Tuple<int,double>>(); int i; for (i = 0; i < n; i++) { // calculates distance needs to be // covered by the ith drop int p = length - position[i]; // inserts initial position of the // ith drop to the pair // inserts time taken by ith // drop to reach // the end to the pair m.Add(new Tuple<int,double>(position[i], p * 1.0 / speed[i])); } // sorts the pair according to increasing // order of their positions m.Sort(); int k = 0; // counter for no of final drops int curr_max = (int)m[n - 1].Item2; // we traverse the array demo // right to left // to determine the slower drop for (i = n - 2; i >= 0; i--) { // checks for next slower drop if (m[i].Item2 > curr_max) { k++; curr_max = (int)m[i].Item2; } } // calculating residual // drops in the pipe k++; return k; } // Driver code static void Main() { // length of pipe int length = 12; // position of droplets int[] position = { 10, 8, 0, 5, 3 }; // speed of each droplets int[] speed = { 2, 4, 1, 1, 3 }; int n = speed.Length; Console.WriteLine(drops(length, position, speed, n)); }} // This code is contributed by divyesh072019
<script> // Function to find the number// of the drops that come out of the// pipefunction drops(length, position, speed, n){ // stores position and time // taken by a single // drop to reach the end as a pair var m = Array.from(Array(n), ()=>Array(2)); var i; for (i = 0; i < n; i++) { // calculates distance needs to be // covered by the ith drop var p = length - position[i]; // inserts initial position of the // ith drop to the pair m[i][0] = position[i]; // inserts time taken by ith // drop to reach // the end to the pair m[i][1] = p * 1.0 / speed[i]; } // sorts the pair according to increasing // order of their positions m.sort(); var k = 0; // counter for no of final drops var curr_max = m[n-1][1]; // we traverse the array demo // right to left // to determine the slower drop for (i = n - 2; i >= 0; i--) { // checks for next slower drop if (m[i][1] > curr_max) { k++; curr_max=m[i][1]; } } // calculating residual // drops in the pipe k++; return k;} // Driver Code// length of pipevar length = 12; // position of dropletsvar position = [10, 8, 0, 5, 3]; // speed of each dropletsvar speed = [2, 4, 1, 1, 3];var n = speed.length;document.write( drops(length, position, speed, n)); </script>
3
sasmitvaidya007
divyeshrabadiya07
divyesh072019
rutvik_56
decode2207
be1398
Competitive Programming
Greedy
Sorting
Stack
Greedy
Stack
Sorting
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Multistage Graph (Shortest Path)
Breadth First Traversal ( BFS ) on a 2D array
Difference between Backtracking and Branch-N-Bound technique
Most important type of Algorithms
5 Best Languages for Competitive Programming
Dijkstra's shortest path algorithm | Greedy Algo-7
Kruskal’s Minimum Spanning Tree Algorithm | Greedy Algo-2
Prim’s Minimum Spanning Tree (MST) | Greedy Algo-5
Program for array rotation
Write a program to print all permutations of a given string
|
[
{
"code": null,
"e": 26467,
"s": 26439,
"text": "\n06 Oct, 2021"
},
{
"code": null,
"e": 26855,
"s": 26467,
"text": "Consider a pipe of length L. The pipe has N water droplets at N different positions within it. Each water droplet is moving towards the end of the pipe(x=L) at different rates. When a water droplet mixes with another water droplet, it assumes the speed of the water droplet it is mixing with. Determine the no of droplets that come out of the end of the pipe. Refer to the figure below: "
},
{
"code": null,
"e": 26914,
"s": 26855,
"text": "The numbers on circles indicates speed of water droplets "
},
{
"code": null,
"e": 26925,
"s": 26914,
"text": "Examples: "
},
{
"code": null,
"e": 27492,
"s": 26925,
"text": "Input: length = 12, position = [10, 8, 0, 5, 3], \n speed = [2, 4, 1, 1, 3]\nOutput: 3\nExplanation:\nDroplets starting at x=10 and x=8 become a droplet, \nmeeting each other at x=12 at time =1 sec.\nThe droplet starting at 0 doesn't mix with any \nother droplet, so it is a drop by itself.\nDroplets starting at x=5 and x=3 become a single \ndrop, mixing with each other at x=6 at time = 1 sec.\nNote that no other droplets meet these drops before \nthe end of the pipe, so the answer is 3.\nRefer to the figure below\nNumbers on circles indicates speed of water droplets."
},
{
"code": null,
"e": 27722,
"s": 27492,
"text": "Approach: This problem uses greedy technique. A drop will mix with another drop if two conditions are met: 1. If the drop is faster than the drop it is mixing with 2. If the position of the faster drop is behind the slower drop. "
},
{
"code": null,
"e": 28636,
"s": 27722,
"text": "We use an array of pairs to store the position and the time that ith drop would take to reach the end of the pipe. Then we sort the array according to the position of the drops. Now we have a fair idea of which drops lie behind which drops and their respective time taken to reach the end. More time means less speed and less time means more speed. Now all the drops before a slower drop will mix with it. And all the drops after the slower drop with mix with the next slower drop and so on. For example, if the times to reach the end are: 12, 3, 7, 8, 1 (sorted according to positions) 0th drop is slowest, it won’t mix with the next drop 1st drop is faster than the 2nd drop, So they will mix and 2nd drop is faster than the third drop so all three will mix together. They cannot mix with the 4th drop because that is faster. No of local maximal + residue(drops after last local maxima) = Total number of drops."
},
{
"code": null,
"e": 28687,
"s": 28636,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 28691,
"s": 28687,
"text": "C++"
},
{
"code": null,
"e": 28696,
"s": 28691,
"text": "Java"
},
{
"code": null,
"e": 28704,
"s": 28696,
"text": "Python3"
},
{
"code": null,
"e": 28707,
"s": 28704,
"text": "C#"
},
{
"code": null,
"e": 28718,
"s": 28707,
"text": "Javascript"
},
{
"code": "#include <bits/stdc++.h>using namespace std; // Function to find the number// of the drops that come out of the// pipeint drops(int length, int position[], int speed[], int n){ // stores position and time // taken by a single // drop to reach the end as a pair vector<pair<int, double> > m(n); int i; for (i = 0; i < n; i++) { // calculates distance needs to be // covered by the ith drop int p = length - position[i]; // inserts initial position of the // ith drop to the pair m[i].first = position[i]; // inserts time taken by ith // drop to reach // the end to the pair m[i].second = p * 1.0 / speed[i]; } // sorts the pair according to increasing // order of their positions sort(m.begin(), m.end()); int k = 0; // counter for no of final drops int curr_max = m[n-1].second; // we traverse the array demo // right to left // to determine the slower drop for (i = n - 2; i >= 0; i--) { // checks for next slower drop if (m[i].second > curr_max) { k++; curr_max=m[i].second; } } // calculating residual // drops in the pipe k++; return k;} // Driver Codeint main(){ // length of pipe int length = 12; // position of droplets int position[] = { 10, 8, 0, 5, 3 }; // speed of each droplets int speed[] = { 2, 4, 1, 1, 3 }; int n = sizeof(speed)/sizeof(speed[0]); cout << drops(length, position, speed, n); return 0;}",
"e": 30277,
"s": 28718,
"text": null
},
{
"code": "import java.util.*; // User defined Pair classclass Pair { int x; int y; // Constructor public Pair(int x, int y) { this.x = x; this.y = y; }} // class to define user defined conparatorclass Compare { static void compare(Pair arr[], int n) { // Comparator to sort the pair according to second element Arrays.sort(arr, new Comparator<Pair>() { @Override public int compare(Pair p1, Pair p2) { return p1.x - p2.x; } }); }} public class Main{ // Function to find the number // of the drops that come out of the // pipe static int drops(int length, int[] position, int[] speed, int n) { // stores position and time // taken by a single // drop to reach the end as a pair Pair m[] = new Pair[n]; int i; for (i = 0; i < n; i++) { // calculates distance needs to be // covered by the ith drop int p = length - position[i]; // inserts initial position of the // ith drop to the pair // inserts time taken by ith // drop to reach // the end to the pair m[i] = new Pair(position[i], p / speed[i]); } // sorts the pair according to increasing // order of their positions Compare obj = new Compare(); obj.compare(m, n); int k = 0; // counter for no of final drops int curr_max = (int)(m[n - 1].y); // we traverse the array demo // right to left // to determine the slower drop for (i = n - 2; i >= 0; i--) { // checks for next slower drop if (m[i].y > curr_max) { k++; curr_max = (int)(m[i].y); } } // calculating residual // drops in the pipe k++; return k; } public static void main(String[] args) { // length of pipe int length = 12; // position of droplets int[] position = { 10, 8, 0, 5, 3 }; // speed of each droplets int[] speed = { 2, 4, 1, 1, 3 }; int n = speed.length; System.out.println(drops(length, position, speed, n)); }} // This code is contributed by decode2207.",
"e": 32409,
"s": 30277,
"text": null
},
{
"code": "# Function to find the number# of the drops that come out of the# pipedef drops(length, position, speed, n): # Stores position and time # taken by a single drop to # reach the end as a pair m = [] for i in range(n): # Calculates distance needs to be # covered by the ith drop p = length - position[i] # Inserts initial position of the # ith drop to the pair # inserts time taken by ith # drop to reach # the end to the pair m.append([position[i], (p * 1.0) / speed[i]]) # Sorts the pair according to increasing # order of their positions m.sort() # Counter for no of final drops k = 0 curr_max = m[n - 1][1] # We traverse the array demo # right to left # to determine the slower drop for i in range(n - 2, -1, -1): # Checks for next slower drop if (m[i][1] > curr_max): k += 1 curr_max = m[i][1] # Calculating residual # drops in the pipe k += 1 return k # Driver Code # Length of pipelength = 12 # Position of dropletsposition = [ 10, 8, 0, 5, 3 ] # Speed of each dropletsspeed = [ 2, 4, 1, 1, 3 ]n = len(speed) print(drops(length, position, speed, n)) # This code is contributed by divyeshrabadiya07",
"e": 33725,
"s": 32409,
"text": null
},
{
"code": "using System;using System.Collections.Generic;class GFG{ // Function to find the number // of the drops that come out of the // pipe static int drops(int length, int[] position, int[] speed, int n) { // stores position and time // taken by a single // drop to reach the end as a pair List<Tuple<int,double>> m = new List<Tuple<int,double>>(); int i; for (i = 0; i < n; i++) { // calculates distance needs to be // covered by the ith drop int p = length - position[i]; // inserts initial position of the // ith drop to the pair // inserts time taken by ith // drop to reach // the end to the pair m.Add(new Tuple<int,double>(position[i], p * 1.0 / speed[i])); } // sorts the pair according to increasing // order of their positions m.Sort(); int k = 0; // counter for no of final drops int curr_max = (int)m[n - 1].Item2; // we traverse the array demo // right to left // to determine the slower drop for (i = n - 2; i >= 0; i--) { // checks for next slower drop if (m[i].Item2 > curr_max) { k++; curr_max = (int)m[i].Item2; } } // calculating residual // drops in the pipe k++; return k; } // Driver code static void Main() { // length of pipe int length = 12; // position of droplets int[] position = { 10, 8, 0, 5, 3 }; // speed of each droplets int[] speed = { 2, 4, 1, 1, 3 }; int n = speed.Length; Console.WriteLine(drops(length, position, speed, n)); }} // This code is contributed by divyesh072019",
"e": 35342,
"s": 33725,
"text": null
},
{
"code": "<script> // Function to find the number// of the drops that come out of the// pipefunction drops(length, position, speed, n){ // stores position and time // taken by a single // drop to reach the end as a pair var m = Array.from(Array(n), ()=>Array(2)); var i; for (i = 0; i < n; i++) { // calculates distance needs to be // covered by the ith drop var p = length - position[i]; // inserts initial position of the // ith drop to the pair m[i][0] = position[i]; // inserts time taken by ith // drop to reach // the end to the pair m[i][1] = p * 1.0 / speed[i]; } // sorts the pair according to increasing // order of their positions m.sort(); var k = 0; // counter for no of final drops var curr_max = m[n-1][1]; // we traverse the array demo // right to left // to determine the slower drop for (i = n - 2; i >= 0; i--) { // checks for next slower drop if (m[i][1] > curr_max) { k++; curr_max=m[i][1]; } } // calculating residual // drops in the pipe k++; return k;} // Driver Code// length of pipevar length = 12; // position of dropletsvar position = [10, 8, 0, 5, 3]; // speed of each dropletsvar speed = [2, 4, 1, 1, 3];var n = speed.length;document.write( drops(length, position, speed, n)); </script>",
"e": 36750,
"s": 35342,
"text": null
},
{
"code": null,
"e": 36752,
"s": 36750,
"text": "3"
},
{
"code": null,
"e": 36768,
"s": 36752,
"text": "sasmitvaidya007"
},
{
"code": null,
"e": 36786,
"s": 36768,
"text": "divyeshrabadiya07"
},
{
"code": null,
"e": 36800,
"s": 36786,
"text": "divyesh072019"
},
{
"code": null,
"e": 36810,
"s": 36800,
"text": "rutvik_56"
},
{
"code": null,
"e": 36821,
"s": 36810,
"text": "decode2207"
},
{
"code": null,
"e": 36828,
"s": 36821,
"text": "be1398"
},
{
"code": null,
"e": 36852,
"s": 36828,
"text": "Competitive Programming"
},
{
"code": null,
"e": 36859,
"s": 36852,
"text": "Greedy"
},
{
"code": null,
"e": 36867,
"s": 36859,
"text": "Sorting"
},
{
"code": null,
"e": 36873,
"s": 36867,
"text": "Stack"
},
{
"code": null,
"e": 36880,
"s": 36873,
"text": "Greedy"
},
{
"code": null,
"e": 36886,
"s": 36880,
"text": "Stack"
},
{
"code": null,
"e": 36894,
"s": 36886,
"text": "Sorting"
},
{
"code": null,
"e": 36992,
"s": 36894,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 37025,
"s": 36992,
"text": "Multistage Graph (Shortest Path)"
},
{
"code": null,
"e": 37071,
"s": 37025,
"text": "Breadth First Traversal ( BFS ) on a 2D array"
},
{
"code": null,
"e": 37132,
"s": 37071,
"text": "Difference between Backtracking and Branch-N-Bound technique"
},
{
"code": null,
"e": 37166,
"s": 37132,
"text": "Most important type of Algorithms"
},
{
"code": null,
"e": 37211,
"s": 37166,
"text": "5 Best Languages for Competitive Programming"
},
{
"code": null,
"e": 37262,
"s": 37211,
"text": "Dijkstra's shortest path algorithm | Greedy Algo-7"
},
{
"code": null,
"e": 37320,
"s": 37262,
"text": "Kruskal’s Minimum Spanning Tree Algorithm | Greedy Algo-2"
},
{
"code": null,
"e": 37371,
"s": 37320,
"text": "Prim’s Minimum Spanning Tree (MST) | Greedy Algo-5"
},
{
"code": null,
"e": 37398,
"s": 37371,
"text": "Program for array rotation"
}
] |
p5.js | setVolume() Function - GeeksforGeeks
|
21 Nov, 2021
The setVolume() function is an inbuilt function in the p5.js library. This function is used to control the volume of audio played on the web. It accepts values ranging from 0.0, which means total silence, to 1.0, which means full volume. The volume can also be controlled by a slider variable by dividing its value into different ranges.
Syntax:
setVolume( volume, rampTime, timeFromNow )
Note: All the sound-related functions work only when the sound library is included in the head section of the index.html file.
Parameters: This function accepts three parameters, as mentioned above and described below.
volume: This parameter holds a float number that defines the volume of the playback.
rampTime: This parameter holds the time, in seconds, over which the sound fades to the target volume. It is optional.
timeFromNow: This parameter holds the time, in seconds, after which the volume change will take place.
The below examples illustrate the setVolume() function in p5.js. Example 1: In this example, we set a fixed volume of 0.5 in the code.
javascript
var sound;

function preload() {
    // Initialize sound
    sound = loadSound("pfivesound.mp3");
}

function setup() {
    // Playing the preloaded sound
    sound.play();

    // Setting the volume to half
    sound.setVolume(0.5);
}
Example 2: In this example, we create a slider that lets the user change the volume in steps of 0.2; the starting volume is set to 0.2.
javascript
var sound;
var slider;

function preload() {
    // Initialize sound
    sound = loadSound("pfivesound.mp3");
}

function setup() {
    // Playing the preloaded sound
    sound.play();

    // Creating the volume slider
    slider = createSlider(0, 1, 0.2, 0.2);
}

function draw() {
    sound.setVolume(slider.value());
}
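Since setVolume() only accepts values from 0.0 to 1.0, a value computed from user input may need clamping before it is passed in. Below is a minimal helper sketch in plain JavaScript (independent of p5.js; the name clampVolume is illustrative):

```javascript
// Clamp a requested volume into setVolume()'s valid range [0.0, 1.0]
function clampVolume(v) {
    return Math.min(1.0, Math.max(0.0, v));
}

console.log(clampVolume(1.4));  // 1
console.log(clampVolume(-0.2)); // 0
console.log(clampVolume(0.5));  // 0.5
```

In the slider example, this would guard against a computed value drifting out of range, e.g. sound.setVolume(clampVolume(slider.value())).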
Online editor: https://editor.p5js.org/
Environment Setup: https://www.geeksforgeeks.org/p5-js-soundfile-object-installation-and-methods/
Supported Browsers: The browsers supported by the p5.js setVolume() function are listed below:
Google Chrome
Internet Explorer
Firefox
Safari
Opera
sumitgumber28
JavaScript-p5.js
JavaScript
Web Technologies
Remove elements from a JavaScript Array
Convert a string to an integer in JavaScript
Difference between var, let and const keywords in JavaScript
Differences between Functional Components and Class Components in React
Difference Between PUT and PATCH Request
Remove elements from a JavaScript Array
Installation of Node.js on Linux
Convert a string to an integer in JavaScript
How to fetch data from an API in ReactJS ?
How to insert spaces/tabs in text using HTML/CSS?
|
[
{
"code": null,
"e": 25943,
"s": 25915,
"text": "\n21 Nov, 2021"
},
{
"code": null,
"e": 26280,
"s": 25943,
"text": "The setVolume() function is an inbuilt function in p5.js library. This function is used to control the volume of the played audio on the web. This function has a range of between (0.0) which means total silence to (1.0) which means full volume. This volume also can be controllable by a slider var by dividing that in different ranges. "
},
{
"code": null,
"e": 26290,
"s": 26280,
"text": "Syntax: "
},
{
"code": null,
"e": 26333,
"s": 26290,
"text": "setVolume( volume, rampTime, timeFromNow )"
},
{
"code": null,
"e": 26550,
"s": 26333,
"text": "Note: All the sound-related functions only work when the sound library is included in the head section of the index.html file.Parameter: This function accept three parameters as mentioned above and described below. "
},
{
"code": null,
"e": 26635,
"s": 26550,
"text": "volume: This parameter holds a float number that defines the volume of the playback."
},
{
"code": null,
"e": 26763,
"s": 26635,
"text": "rampTime: This parameter holds an integer value of time in the second format after that the sound will be fade. It is optional."
},
{
"code": null,
"e": 26880,
"s": 26763,
"text": "timeFromNow: This parameter holds an integer value of time in the second format after that define event will happen."
},
{
"code": null,
"e": 27029,
"s": 26880,
"text": "Below examples illustrate the p5.setVolume() function in JavaScript: Example 1: In this example, we set the fixed volume in the code which is 0.5. "
},
{
"code": null,
"e": 27040,
"s": 27029,
"text": "javascript"
},
{
"code": "var sound; function preload() { // Initialize sound sound = loadSound(\"pfivesound.mp3\");} function setup() { // Playing the preloaded sound sound.play(); //stopping the played sound after 5 seconds sound.setVolume(0.5);}",
"e": 27281,
"s": 27040,
"text": null
},
{
"code": null,
"e": 27427,
"s": 27281,
"text": "Example 2: In this example, we will create a slide that will help the user to increase the volume by 0.2, and the starting volume is set to 0.2. "
},
{
"code": null,
"e": 27438,
"s": 27427,
"text": "javascript"
},
{
"code": "var sound;var slider; function preload() { // Initialize sound sound = loadSound(\"pfivesound.mp3\");} function setup() { // Playing the preloaded sound sound.play(); //creating sound rocker slider = createSlider(0, 1, 0.2, 0.2); } function draw() { sound.setVolume(slider.value());}",
"e": 27748,
"s": 27438,
"text": null
},
{
"code": null,
"e": 27982,
"s": 27748,
"text": "Online editor: https://editor.p5js.org/ Environment Setup: https://www.geeksforgeeks.org/p5-js-soundfile-object-installation-and-methods/Supported Browsers: The browsers are supported by p5.js setVolume() function are listed below: "
},
{
"code": null,
"e": 27996,
"s": 27982,
"text": "Google Chrome"
},
{
"code": null,
"e": 28014,
"s": 27996,
"text": "Internet Explorer"
},
{
"code": null,
"e": 28022,
"s": 28014,
"text": "Firefox"
},
{
"code": null,
"e": 28029,
"s": 28022,
"text": "Safari"
},
{
"code": null,
"e": 28036,
"s": 28029,
"text": "Opera "
},
{
"code": null,
"e": 28050,
"s": 28036,
"text": "sumitgumber28"
},
{
"code": null,
"e": 28067,
"s": 28050,
"text": "JavaScript-p5.js"
},
{
"code": null,
"e": 28078,
"s": 28067,
"text": "JavaScript"
},
{
"code": null,
"e": 28095,
"s": 28078,
"text": "Web Technologies"
},
{
"code": null,
"e": 28193,
"s": 28095,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28233,
"s": 28193,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 28278,
"s": 28233,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 28339,
"s": 28278,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 28411,
"s": 28339,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 28452,
"s": 28411,
"text": "Difference Between PUT and PATCH Request"
},
{
"code": null,
"e": 28492,
"s": 28452,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 28525,
"s": 28492,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 28570,
"s": 28525,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 28613,
"s": 28570,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
LISP - Cond Construct
|
The cond construct in LISP is most commonly used to permit branching.
Syntax for cond is −
(cond (test1 action1)
(test2 action2)
...
(testn actionn))
Each clause within the cond statement consists of a conditional test and an action to be performed.
If the first test following cond, test1, is evaluated to be true, then the related action part, action1, is executed, its value is returned and the rest of the clauses are skipped over.
If test1 evaluates to be nil, then control moves to the second clause without executing action1, and the same process is followed.
If none of the test conditions are evaluated to be true, then the cond statement returns nil.
Create a new source code file named main.lisp and type the following code in it −
(setq a 10)
(cond ((> a 20)
(format t "~% a is greater than 20"))
(t (format t "~% value of a is ~d " a)))
When you click the Execute button, or type Ctrl+E, LISP executes it immediately and the result returned is −
value of a is 10
Please note that the t in the second clause ensures that the last action is performed if no other test succeeds.
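Because the first successful test short-circuits the remaining clauses, cond naturally expresses multi-way branching, similar to if/else-if chains in other languages. A small illustrative sketch (the function name and thresholds are made up for this example):

```lisp
(defun grade (score)
  (cond ((>= score 90) 'a)   ; first true clause wins
        ((>= score 75) 'b)
        ((>= score 60) 'c)
        (t 'fail)))          ; default when no other test succeeds

(write (grade 80))           ; prints B
```

Here (grade 80) skips the first clause, matches (>= score 75), and returns b without evaluating the remaining clauses.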
|
[
{
"code": null,
"e": 2130,
"s": 2060,
"text": "The cond construct in LISP is most commonly used to permit branching."
},
{
"code": null,
"e": 2151,
"s": 2130,
"text": "Syntax for cond is −"
},
{
"code": null,
"e": 2230,
"s": 2151,
"text": "(cond (test1 action1)\n (test2 action2)\n ...\n (testn actionn))\n"
},
{
"code": null,
"e": 2330,
"s": 2230,
"text": "Each clause within the cond statement consists of a conditional test and an action to be performed."
},
{
"code": null,
"e": 2516,
"s": 2330,
"text": "If the first test following cond, test1, is evaluated to be true, then the related action part, action1, is executed, its value is returned and the rest of the clauses are skipped over."
},
{
"code": null,
"e": 2647,
"s": 2516,
"text": "If test1 evaluates to be nil, then control moves to the second clause without executing action1, and the same process is followed."
},
{
"code": null,
"e": 2741,
"s": 2647,
"text": "If none of the test conditions are evaluated to be true, then the cond statement returns nil."
},
{
"code": null,
"e": 2823,
"s": 2741,
"text": "Create a new source code file named main.lisp and type the following code in it −"
},
{
"code": null,
"e": 2936,
"s": 2823,
"text": "(setq a 10)\n(cond ((> a 20)\n (format t \"~% a is greater than 20\"))\n (t (format t \"~% value of a is ~d \" a)))"
},
{
"code": null,
"e": 3045,
"s": 2936,
"text": "When you click the Execute button, or type Ctrl+E, LISP executes it immediately and the result returned is −"
},
{
"code": null,
"e": 3063,
"s": 3045,
"text": "value of a is 10\n"
},
{
"code": null,
"e": 3170,
"s": 3063,
"text": "Please note that the t in the second clause ensures that the last action is performed if none other would."
},
{
"code": null,
"e": 3203,
"s": 3170,
"text": "\n 79 Lectures \n 7 hours \n"
},
{
"code": null,
"e": 3218,
"s": 3203,
"text": " Arnold Higuit"
},
{
"code": null,
"e": 3225,
"s": 3218,
"text": " Print"
},
{
"code": null,
"e": 3236,
"s": 3225,
"text": " Add Notes"
}
] |
DateTimeOffset.ToUnixTimeSeconds() Method in C# - GeeksforGeeks
|
19 Mar, 2019
The DateTimeOffset.ToUnixTimeSeconds method is used to return the number of seconds that have elapsed since 1970-01-01T00:00:00Z. Before returning the Unix time, this method converts the current instance to UTC. It returns a negative value for date and time values before 1970-01-01T00:00:00Z.
Syntax: public long ToUnixTimeSeconds ();
Return Value: This method returns the number of seconds that have elapsed since 1970-01-01T00:00:00Z.
Below programs illustrate the use of DateTimeOffset.ToUnixTimeSeconds() Method:
Example 1:
// C# program to demonstrate the
// DateTimeOffset.ToUnixTimeSeconds()
// Method
using System;
using System.Globalization;

class GFG {

    // Main Method
    public static void Main()
    {
        // Creating an object of DateTimeOffset
        DateTimeOffset offset = new DateTimeOffset(2017, 6, 1, 7, 55, 0,
                                                   new TimeSpan(-5, 0, 0));

        // Returns the number of seconds that have
        // elapsed since 1970-01-01T00:00:00Z
        // using the ToUnixTimeSeconds() method
        long value = offset.ToUnixTimeSeconds();

        // Display the result
        Console.WriteLine("Returns the number of" +
                          " seconds : {0}", value);
    }
}
Returns the number of seconds : 1496321700
Example 2:
// C# program to demonstrate the
// DateTimeOffset.ToUnixTimeSeconds()
// Method
using System;
using System.Globalization;

class GFG {

    // Main Method
    public static void Main()
    {
        // Creating an object of DateTimeOffset
        DateTimeOffset offset = new DateTimeOffset(2017, 6, 1, 7, 55, 0,
                                                   new TimeSpan(-5, 0, 0));

        // Returns the number of seconds that have
        // elapsed since 1970-01-01T00:00:00Z
        // using the ToUnixTimeSeconds() method
        long value = offset.ToUnixTimeSeconds();

        // Display the result
        Console.WriteLine("Returns the number of" +
                          " seconds : {0}", value);
    }
}
Returns the number of seconds : 1496321700
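As noted above, the method returns a negative value for dates before 1970-01-01T00:00:00Z. A short sketch illustrating this (the chosen date is hypothetical, one hour before the Unix epoch):

```
using System;

class GFG {

    public static void Main()
    {
        // 1969-12-31 23:00:00 UTC, i.e. one hour before the epoch
        DateTimeOffset offset = new DateTimeOffset(1969, 12, 31, 23, 0, 0,
                                                   TimeSpan.Zero);

        // One hour = 3600 seconds before the epoch
        Console.WriteLine(offset.ToUnixTimeSeconds()); // -3600
    }
}
```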
Reference:
https://docs.microsoft.com/en-us/dotnet/api/system.datetimeoffset.tounixtimeseconds?view=netframework-4.7.2
CSharp-DateTimeOffset-Struct
CSharp-method
C#
Extension Method in C#
HashSet in C# with Examples
C# | Inheritance
Partial Classes in C#
C# | Generics - Introduction
Top 50 C# Interview Questions & Answers
Switch Statement in C#
C# | How to insert an element in an Array?
Convert String to Character Array in C#
Lambda Expressions in C#
|
[
{
"code": null,
"e": 25657,
"s": 25629,
"text": "\n19 Mar, 2019"
},
{
"code": null,
"e": 25973,
"s": 25657,
"text": "DateTimeOffset.ToUnixTimeSeconds Method is used to return the number of seconds that have elapsed since 1970-01-01T00:00:00Z. Before returning the Unix time, this method will convert the current instance to the UTC. And also, it will return a negative value for the date and time values before 1970-01-01T00:00:00Z."
},
{
"code": null,
"e": 26015,
"s": 25973,
"text": "Syntax: public long ToUnixTimeSeconds ();"
},
{
"code": null,
"e": 26116,
"s": 26015,
"text": "Return Value: This method return the number of seconds that have elapsed since 1970-01-01T00:00:00Z."
},
{
"code": null,
"e": 26196,
"s": 26116,
"text": "Below programs illustrate the use of DateTimeOffset.ToUnixTimeSeconds() Method:"
},
{
"code": null,
"e": 26207,
"s": 26196,
"text": "Example 1:"
},
{
"code": "// C# program to demonstrate the// DateTimeOffset.ToUnixTimeMilliseconds()// Methodusing System;using System.Globalization; class GFG { // Main Method public static void Main() { // creating object of DateTimeOffset DateTimeOffset offset = new DateTimeOffset(2017, 6, 1, 7, 55, 0, new TimeSpan(-5, 0, 0)); // Returns the number of seconds // that have elapsed since 1970-01-01T00:00:00Z. // instance using ToUnixTimeSeconds() method long value = offset.ToUnixTimeSeconds(); // Display the time Console.WriteLine(\"Returns the number of\"+ \" seconds : {0}\", value); }}",
"e": 26888,
"s": 26207,
"text": null
},
{
"code": null,
"e": 26932,
"s": 26888,
"text": "Returns the number of seconds : 1496321700\n"
},
{
"code": null,
"e": 26943,
"s": 26932,
"text": "Example 2:"
},
{
"code": "// C# program to demonstrate the// DateTimeOffset.ToUnixTimeSeconds()// Methodusing System;using System.Globalization; class GFG { // Main Method public static void Main() { // creating object of DateTimeOffset DateTimeOffset offset = new DateTimeOffset(2017, 6, 1, 7, 55, 0, new TimeSpan(-5, 0, 0)); // Returns the number of seconds // that have elapsed since 1970-01-01T00:00:00Z. // instance using ToUnixTimeSeconds() method long value = offset.ToUnixTimeSeconds(); // Display the time Console.WriteLine(\"Returns the number of\"+ \" seconds : {0}\", value); }}",
"e": 27619,
"s": 26943,
"text": null
},
{
"code": null,
"e": 27663,
"s": 27619,
"text": "Returns the number of seconds : 1496321700\n"
},
{
"code": null,
"e": 27674,
"s": 27663,
"text": "Reference:"
},
{
"code": null,
"e": 27782,
"s": 27674,
"text": "https://docs.microsoft.com/en-us/dotnet/api/system.datetimeoffset.tounixtimeseconds?view=netframework-4.7.2"
},
{
"code": null,
"e": 27811,
"s": 27782,
"text": "CSharp-DateTimeOffset-Struct"
},
{
"code": null,
"e": 27825,
"s": 27811,
"text": "CSharp-method"
},
{
"code": null,
"e": 27828,
"s": 27825,
"text": "C#"
},
{
"code": null,
"e": 27926,
"s": 27828,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27949,
"s": 27926,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 27977,
"s": 27949,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 27994,
"s": 27977,
"text": "C# | Inheritance"
},
{
"code": null,
"e": 28016,
"s": 27994,
"text": "Partial Classes in C#"
},
{
"code": null,
"e": 28045,
"s": 28016,
"text": "C# | Generics - Introduction"
},
{
"code": null,
"e": 28085,
"s": 28045,
"text": "Top 50 C# Interview Questions & Answers"
},
{
"code": null,
"e": 28108,
"s": 28085,
"text": "Switch Statement in C#"
},
{
"code": null,
"e": 28151,
"s": 28108,
"text": "C# | How to insert an element in an Array?"
},
{
"code": null,
"e": 28191,
"s": 28151,
"text": "Convert String to Character Array in C#"
}
] |
Underscore.js _.first() Function - GeeksforGeeks
|
24 Nov, 2021
Underscore.js is a JavaScript library that provides many useful functions, such as map, filter, and invoke, that help with programming without extending any built-in objects. The _.first() function is used to return the first element of an array, i.e. the element at index zero. It can also return the first n elements of an array of size m (n < m) by passing the variable n along with the array. It is a very easy-to-use function of the underscore.js library and is widely used when working with array elements.
Syntax:
_.first(array, [n])
Parameters: This function accepts two parameters, as mentioned above and described below:
array: This parameter is used to hold the array of elements.
n: This parameter tells the number of elements wanted. It is optional.
Return value: This function returns the first element of the array, or an array of the first n elements when n is passed.
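The documented behavior can be approximated with Array.prototype.slice. The following stand-in sketch (plain JavaScript; not Underscore's actual source, just an illustration of the contract described above):

```javascript
// Approximation of _.first:
// without n, return the element at index 0;
// with n, return the first n elements as a new array.
function first(array, n) {
    if (array == null) {
        return n == null ? undefined : [];
    }
    return n == null ? array[0]
                     : Array.prototype.slice.call(array, 0, n);
}

console.log(first([5, 4, 3, 2, 1]));    // 5
console.log(first([5, 4, 3, 2, 1], 2)); // [ 5, 4 ]
```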
Passing an array to the _.first() function: The _.first() function will return the first element of the passed array along with all its properties. Here, the elements have two properties, name and age, so the final result contains both properties of the first element, as the variable n is not passed.
Example:
<html>

<head>
    <script type="text/javascript"
        src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore.js">
    </script>
</head>

<body>
    <script type="text/javascript">
        console.log(_.first([{name: 'jack', age: 14},
                             {name: 'jill', age: 15},
                             {name: 'humpty', age: 16}]));
    </script>
</body>

</html>
Output:
Passing a structure to the _.first() function: The _.first() function will return the first element along with all its properties, as the variable n is not passed here. Here, each element has four properties: category, title, value, and id. So, the final result contains all of these properties of the first element.
Example:
<html>

<head>
    <script type="text/javascript"
        src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js">
    </script>
</head>

<body>
    <script type="text/javascript">
        var goal = [
            {
                "category": "other",
                "title": "harry University",
                "value": 50000,
                "id": "1"
            },
            {
                "category": "traveling",
                "title": "tommy University",
                "value": 50000,
                "id": "2"
            },
            {
                "category": "education",
                "title": "jerry University",
                "value": 50000,
                "id": "3"
            },
            {
                "category": "business",
                "title": "Charlie University",
                "value": 50000,
                "id": "4"
            }
        ]

        console.log(_.first(goal));
    </script>
</body>

</html>
Output:
Passing an array with one property as true/false to the _.first() function: This works exactly the same as the above two examples. The false/true property is only displayed for the first element; it is not used in any logic here.
Example:
<html>

<head>
    <script type="text/javascript"
        src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js">
    </script>
</head>

<body>
    <script type="text/javascript">
        var people = [
            {"name": "sakshi", "hasLong": "false"},
            {"name": "aishwarya", "hasLong": "true"},
            {"name": "akansha", "hasLong": "true"},
            {"name": "preeti", "hasLong": "true"}
        ]

        console.log(_.first(people));
    </script>
</body>

</html>
Output:
Passing the count n to the _.first() function: To get more than one element, pass the number of elements wanted as the second argument. The elements always come from the start of the array.
Example:
<html>
   <head>
      <script type="text/javascript"
         src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js">
      </script>
   </head>
   <body>
      <script type="text/javascript">
         var users = [{"num":"1"}, {"num":"2"}, {"num":"3"},
                      {"num":"4"}, {"num":"5"}];
         console.log(_.first(users, 2));
      </script>
   </body>
</html>
Output:
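To make the semantics concrete, here is a minimal plain-JavaScript sketch of what _.first does, built on Array.prototype.slice. This is an illustration only — the real Underscore implementation also handles guard arguments and its chaining wrapper.

```javascript
// Simplified sketch of _.first's behaviour (not Underscore's actual code).
function first(array, n) {
  if (array == null || array.length < 1) return n == null ? undefined : [];
  if (n == null) return array[0];  // no count given: return the single first element
  return array.slice(0, n);        // count given: return the first n elements
}

var users = [{num: "1"}, {num: "2"}, {num: "3"}];
console.log(first(users));     // the first element object
console.log(first(users, 2));  // an array of the first two elements
```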
How to convert number to words in an R data frame column?
To convert numbers to words in an R data frame column, we can use the english() function from the english package. For example, if we have a data frame called df that contains a numeric column x, we can convert the numbers into words with the command as.character(english(df$x)).
Consider the below data frame −
x<-rpois(20,5)
df1<-data.frame(x)
df1
x
1 3
2 5
3 4
4 7
5 7
6 3
7 6
8 1
9 11
10 6
11 6
12 6
13 5
14 7
15 4
16 1
17 3
18 1
19 1
20 1
Loading english package and converting numbers in column x to words −
library(english)
df1$x<-as.character(english(df1$x))
df1
x
1 three
2 five
3 four
4 seven
5 seven
6 three
7 six
8 one
9 eleven
10 six
11 six
12 six
13 five
14 seven
15 four
16 one
17 three
18 one
19 one
20 one
y<-rpois(20,10)
df2<-data.frame(y)
df2
y
1 6
2 12
3 10
4 11
5 13
6 5
7 7
8 8
9 2
10 11
11 11
12 11
13 12
14 13
15 15
16 6
17 11
18 6
19 11
20 10
Converting numbers in column y to words −
df2$y<-as.character(english(df2$y))
df2
y
1 six
2 twelve
3 ten
4 eleven
5 thirteen
6 five
7 seven
8 eight
9 two
10 eleven
11 eleven
12 eleven
13 twelve
14 thirteen
15 fifteen
16 six
17 eleven
18 six
19 eleven
20 ten
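The english package handles the number-to-word mapping internally. To make the idea concrete, here is a minimal JavaScript sketch of the same conversion for 0–99 only — a hypothetical illustration, not the package's real implementation (which supports arbitrary integers).

```javascript
// Minimal number-to-words converter for 0..99, illustrating the kind of
// lookup a package like R's `english` performs internally.
var ones = ["zero", "one", "two", "three", "four", "five", "six", "seven",
            "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
            "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"];
var tens = ["", "", "twenty", "thirty", "forty", "fifty",
            "sixty", "seventy", "eighty", "ninety"];

function toWords(n) {
  if (n < 20) return ones[n];                 // 0..19 are direct lookups
  var rest = n % 10;                          // remaining ones digit
  return tens[Math.floor(n / 10)] + (rest ? "-" + ones[rest] : "");
}

console.log([3, 5, 11].map(toWords));  // [ 'three', 'five', 'eleven' ]
```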
How to use subList() in android CopyOnWriteArrayList?
Before getting into the example, we should know what CopyOnWriteArrayList is. It is a thread-safe variant of ArrayList whose mutating operations (add, set, and so on) work by making a fresh copy of the underlying array.
This example demonstrates how to use subList() on an Android CopyOnWriteArrayList.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version = "1.0" encoding = "utf-8"?>
<LinearLayout xmlns:android = "http://schemas.android.com/apk/res/android"
xmlns:app = "http://schemas.android.com/apk/res-auto"
xmlns:tools = "http://schemas.android.com/tools"
android:layout_width = "match_parent"
android:gravity = "center"
android:layout_height = "match_parent"
tools:context = ".MainActivity"
android:orientation = "vertical">
<TextView
android:id = "@+id/actionEvent"
android:textSize = "40sp"
android:layout_marginTop = "30dp"
android:layout_width = "wrap_content"
android:layout_height = "match_parent" />
</LinearLayout>
In the above code, we have taken a text view to show CopyOnWriteArrayList elements.
Step 3 − Add the following code to src/MainActivity.java
package com.example.myapplication;
import android.os.Build;
import android.os.Bundle;
import android.support.annotation.RequiresApi;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.TextView;
import java.util.concurrent.CopyOnWriteArrayList;

public class MainActivity extends AppCompatActivity {
    CopyOnWriteArrayList<String> copyOnWriteArrayList;

    @RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        copyOnWriteArrayList = new CopyOnWriteArrayList<String>();
        final TextView actionEvent = findViewById(R.id.actionEvent);
        copyOnWriteArrayList.add("sai");
        copyOnWriteArrayList.add("ram");
        copyOnWriteArrayList.add("krishna");
        copyOnWriteArrayList.add("prasad");
        copyOnWriteArrayList.add("ram");
        actionEvent.setText("" + copyOnWriteArrayList);
        actionEvent.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // subList(0, 2) returns a view of the first two elements
                actionEvent.setText("" + copyOnWriteArrayList.subList(0, 2));
            }
        });
    }
}
Let's try to run your application. I assume you have connected your actual Android mobile device to your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon in the toolbar. Select your mobile device as an option, and your device will display the default screen –
Click on textview, It will give the result as shown below –
wxPython - Create Static Box using Create() method - GeeksforGeeks
08 Jul, 2020
In this article we are going to learn about the Static Box in wxPython. A static box is a rectangle drawn around other windows to denote a logical grouping of items. Here we will create a Static Box using two-step creation; to do that, we will use the Create() method.
Syntax: wx.StaticBox.Create(parent, id=ID_ANY, label="", pos=DefaultPosition, size=DefaultSize, style=0, name=StaticBoxNameStr)
Parameters
Return Type: bool
Code Example:
import wx

class FrameUI(wx.Frame):
    def __init__(self, parent, title):
        super(FrameUI, self).__init__(parent, title=title, size=(300, 200))
        # function for in-frame components
        self.InitUI()

    def InitUI(self):
        # parent panel for the static box
        pnl = wx.Panel(self)
        # initialize static box
        self.sb = wx.StaticBox()
        # create static box
        self.sb.Create(pnl, 2, label="Static Box", pos=(20, 20), size=(100, 100))
        # set frame in centre
        self.Centre()
        # set size of frame
        self.SetSize((400, 250))
        # show output frame
        self.Show(True)

# wx App instance
ex = wx.App()
# Example instance
FrameUI(None, 'RadioButton and RadioBox')
ex.MainLoop()
Output Window:
Python Pandas - Extract the quarter of the date from the DateTimeIndex with specific time series frequency
To extract the quarter of the date from the DateTimeIndex with a specific time series frequency, use the DateTimeIndex.quarter property.
At first, import the required libraries −
import pandas as pd
Create a DatetimeIndex with period 6 and frequency as M i.e. Month. The timezone is Australia/Sydney −
datetimeindex = pd.date_range('2021-10-20 02:30:50', periods=6, tz='Australia/Sydney', freq='2M')
Display DateTimeIndex frequency −
print("DateTimeIndex frequency...\n", datetimeindex.freq)
Get the quarter of the date −
print("\nGet the quarter of the date..\n",datetimeindex.quarter)
The result is based on the following quarters of a year −
Quarter 1 = 1st January to 31st March
Quarter 2 = 1st April to 30th June
Quarter 3 = 1st July to 30th September
Quarter 4 = 1st October to 31st December
Following is the code −
import pandas as pd
# DatetimeIndex with period 6 and frequency as M i.e. Month
# The timezone is Australia/Sydney
datetimeindex = pd.date_range('2021-10-20 02:30:50', periods=6, tz='Australia/Sydney', freq='2M')
# display DateTimeIndex
print("DateTimeIndex...\n", datetimeindex)
# display DateTimeIndex frequency
print("DateTimeIndex frequency...\n", datetimeindex.freq)
# Get the quarter of the date
# Result is based on the following quarters of an year:
# Quarter 1 = 1st January to 31st March
# Quarter 2 = 1st April to 30th June
# Quarter 3 = 1st July to 30th September
# Quarter 4 = 1st October to 31st December
print("\nGet the quarter of the date..\n",datetimeindex.quarter)
This will produce the following output −
DateTimeIndex...
DatetimeIndex(['2021-10-31 02:30:50+11:00', '2021-12-31 02:30:50+11:00',
'2022-02-28 02:30:50+11:00', '2022-04-30 02:30:50+10:00',
'2022-06-30 02:30:50+10:00', '2022-08-31 02:30:50+10:00'],
dtype='datetime64[ns, Australia/Sydney]', freq='2M')
DateTimeIndex frequency...
<2 * MonthEnds>
Get the quarter of the date..
Int64Index([4, 4, 1, 2, 2, 3], dtype='int64')
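The quarter value is a pure function of the month. As a language-neutral illustration of the mapping that DateTimeIndex.quarter applies (this is plain JavaScript, not pandas), the quarter of a 1-based month number is floor((month − 1) / 3) + 1:

```javascript
// Quarter from a 1-based month number: months 1-3 -> Q1, 4-6 -> Q2, and so on.
function quarterOfMonth(month) {
  return Math.floor((month - 1) / 3) + 1;
}

// Months of the dates in the example output above: Oct, Dec, Feb, Apr, Jun, Aug
var months = [10, 12, 2, 4, 6, 8];
console.log(months.map(quarterOfMonth));  // [ 4, 4, 1, 2, 2, 3 ]
```

This reproduces the Int64Index values shown in the example output.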
How to create a breadcrumb navigation with CSS?
Following is the code to create breadcrumb navigation using CSS −
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<style>
body {
margin: 0px;
margin-top: 10px;
padding: 0px;
}
.breadcrumb {
background-color: rgb(39, 39, 39);
overflow: auto;
height: auto;
}
li {
display: inline-block;
text-align: center;
padding: 10px;
font-size: 17px;
}
.links {
text-decoration: none;
color: rgb(178, 137, 253);
}
.links:hover {
text-decoration: underline;
}
.breadcrumb ul li:before {
padding: 8px;
color: white;
content: "/\00a0";
text-decoration: none;
}
ul:last-child {
color: white;
font-weight: bolder;
font-family: monospace;
}
</style>
</head>
<body>
<h1>Breadcrumb Navigation Example</h1>
<div class="breadcrumb">
<ul>
<li><a class="links" href="#">Root</a></li>
<li><a class="links" href="#">Home</a></li>
<li><a class="links" href="#">User</a></li>
<li><a class="links" href="#">Desktop</a></li>
<li>Games</li>
</ul>
</div>
<h2>Hover over the links to see effect</h2>
</body>
</html>
The above code will produce the following output −
Java NIO - Socket Channel
A Java NIO socket channel is a selectable channel — meaning it can be multiplexed using a selector — used for stream-oriented data flow over connected sockets. A socket channel can be created by invoking its static open() method, provided no pre-existing socket is supplied. A channel created this way is opened but not yet connected; to connect it, the connect() method must be called. Note that if any I/O operation is attempted while the channel is not connected, the channel throws NotYetConnectedException, so one must ensure that the channel is connected before performing any I/O operation. Once the channel is connected, it remains connected until it is closed. The connection state of a socket channel may be determined by invoking its isConnected() method.
A socket channel's connection can be completed by invoking its finishConnect() method. Whether or not a connection operation is in progress may be determined by invoking the isConnectionPending() method. Socket channels support non-blocking connection by default. They also support asynchronous shutdown, which is similar to the asynchronous close operation specified in the Channel class.
Socket channels are safe for use by multiple concurrent threads. They support concurrent reading and writing, though at most one thread may be reading and at most one thread may be writing at any given time. The connect and finishConnect methods are mutually synchronized against each other, and an attempt to initiate a read or write operation while an invocation of one of these methods is in progress will block until that invocation is complete.
bind(SocketAddress local) − This method binds the channel's socket to the local address provided as the parameter.

connect(SocketAddress remote) − This method connects the channel's socket to the given remote address.

finishConnect() − This method finishes the process of connecting a socket channel.

getRemoteAddress() − This method returns the remote address to which the channel's socket is connected.

isConnected() − As already mentioned, this method returns the connection status of the socket channel, i.e. whether it is connected or not.

open() and open(SocketAddress remote) − The no-argument open() method opens a socket channel without connecting it, while the parameterized form opens a channel for the specified remote address and also connects to it. The convenience method works as if by invoking open(), invoking connect() upon the resulting socket channel with the remote address, and then returning that channel.

read(ByteBuffer dst) − This method reads data from the socket channel into the given buffer.

isConnectionPending() − This method tells whether or not a connection operation is in progress on this channel.
The following example shows how to send data over a Java NIO SocketChannel. Assume the file C:/Test/temp.txt, which will be sent, contains:
Hello World!
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.EnumSet;

public class SocketChannelClient {
   public static void main(String[] args) throws IOException {
      ServerSocketChannel serverSocket = null;
      SocketChannel client = null;
      serverSocket = ServerSocketChannel.open();
      serverSocket.socket().bind(new InetSocketAddress(9000));
      client = serverSocket.accept();
      System.out.println("Connection Set: " + client.getRemoteAddress());
      Path path = Paths.get("C:/Test/temp1.txt");
      FileChannel fileChannel = FileChannel.open(path,
         EnumSet.of(StandardOpenOption.CREATE,
            StandardOpenOption.TRUNCATE_EXISTING,
            StandardOpenOption.WRITE)
      );
      ByteBuffer buffer = ByteBuffer.allocate(1024);
      while(client.read(buffer) > 0) {
         buffer.flip();
         fileChannel.write(buffer);
         buffer.clear();
      }
      fileChannel.close();
      System.out.println("File Received");
      client.close();
   }
}
Running the client will not print anything until server starts.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SocketChannelServer {
   public static void main(String[] args) throws IOException {
      SocketChannel server = SocketChannel.open();
      SocketAddress socketAddr = new InetSocketAddress("localhost", 9000);
      server.connect(socketAddr);

      Path path = Paths.get("C:/Test/temp.txt");
      FileChannel fileChannel = FileChannel.open(path);
      ByteBuffer buffer = ByteBuffer.allocate(1024);
      while(fileChannel.read(buffer) > 0) {
         buffer.flip();
         server.write(buffer);
         buffer.clear();
      }
      fileChannel.close();
      System.out.println("File Sent");
      server.close();
   }
}
Running the server will print the following.
Connection Set: /127.0.0.1:49558
File Received
Create a yellow button (warning) with Bootstrap
|
Use the .btn-warning class in Bootstrap to create a yellow button that indicates a warning.
You can try to run the following code to implement the .btn-warning class −
<!DOCTYPE html>
<html>
<head>
<title>Bootstrap Example</title>
<link rel = "stylesheet" href = "https://maxcdn.bootstrapcdn.com/bootstrap/4.1.1/css/bootstrap.min.css">
<script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src = "https://maxcdn.bootstrapcdn.com/bootstrap/4.1.1/js/bootstrap.min.js"></script>
</head>
<body>
<button type = "button" class = "btn btn-warning">You are WARNED!</button>
</body>
</html>
Return two prime numbers | Practice | GeeksforGeeks
|
Given an even number N (greater than 2), return two prime numbers whose sum will be equal to given number. There are several combinations possible. Print only the pair whose minimum value is the smallest among all the minimum values of pairs and print the minimum element first.
NOTE: A solution will always exist; read about Goldbach's conjecture.
Example 1:
Input: N = 74
Output: 3 71
Explanation: There are several possibilities
like 37 37. But the minimum value of this pair
is 3 which is smallest among all possible
minimum values of all the pairs.
Example 2:
Input: 4
Output: 2 2
Explanation: This is the only possible
partitioning of 4.
Your Task:
You do not need to read input or print anything. Your task is to complete the function primeDivision() which takes N as input parameter and returns the partition satisfying the condition.
Expected Time Complexity: O(N*log(logN))
Expected Auxiliary Space: O(N)
Constraints:
4 ≤ N ≤ 10^4
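Tying the statement and the expected complexity together, here is a hedged sketch (class and helper names other than primeDivision are ours, not the judge's template): it builds a sieve of Eratosthenes in O(N log log N), then scans from the smallest prime upward, so the first pair found has the smallest minimum element:

```java
import java.util.List;

public class PrimeDivisionSketch {
    // Sieve of Eratosthenes: composite[i] == false means i is prime (for i >= 2).
    static boolean[] sieve(int n) {
        boolean[] composite = new boolean[n + 1];
        for (int i = 2; (long) i * i <= n; i++) {
            if (!composite[i]) {
                for (int j = i * i; j <= n; j += i) {
                    composite[j] = true;
                }
            }
        }
        return composite;
    }

    // Return the Goldbach pair with the smallest minimum element, smaller first.
    static List<Integer> primeDivision(int n) {
        boolean[] composite = sieve(n);
        for (int i = 2; i <= n / 2; i++) {
            if (!composite[i] && !composite[n - i]) {
                return List.of(i, n - i);
            }
        }
        return List.of(); // unreachable for even n > 2, per Goldbach's conjecture
    }

    public static void main(String[] args) {
        System.out.println(primeDivision(74)); // [3, 71]
        System.out.println(primeDivision(4));  // [2, 2]
    }
}
```

Scanning upward from 2 is what guarantees the minimum element of the returned pair is as small as possible.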
0
iamalizaidi110 6 days ago
simple code with simple logic
class Solution{
static boolean isPrime(int n)
{
int f=0;
for(int i=2;i<=n/2;i++)
{
if(n%i==0)
f++;
}
return f==0;
}
static List<Integer> primeDivision(int N){
List<Integer> l=new ArrayList<>();
String s="#";
for(int i=N-1;i>=2;i--)
{
if(isPrime(i))
s+=i+"#";
}
for(int i=2;i<=N-1;i++)
{
if(s.contains("#"+i+"#") && s.contains("#"+(N-i)+"#"))
{
l.add(i);
l.add(N-i);
break;
}
}
return l;
}
}
0
cs21m059 2 weeks ago
Use the sieve of Eratosthenes to find the prime numbers:
CODE(C++) :-
vector<int> prime(n+1, 1);
prime[0] = prime[1] = 0;
for(int i = 2; i <= sqrt(n); i++) {
    for(int j = 2; j*i < n; j++) {
        prime[i*j] = 0;
    }
}
for(int i = 2; i <= n; i++) {
    if(prime[i] == 1 && prime[n-i] == 1) {
        return {i, n-i};
    }
}
0
gupta2411sumit 2 weeks ago
bool isPrime(int n) {
    if(n == 2) {
        return true;
    }
    if(n % 2 == 0) {
        return false;
    }
    for(int i = 2; i <= sqrt(n); i++) {
        if(n % i == 0) {
            return false;
        }
    }
    return true;
}
vector<int> primeDivision(int N) {
    // code here
    vector<int> ans;
    for(int i = 2; i <= N/2; i++) {
        if(isPrime(i) && isPrime(N-i)) {
            ans.push_back(i);
            ans.push_back(N-i);
            return ans;
        }
    }
}
0
reahaansheriff 1 month ago
Python 2 pointer approach (0.5 sec)
class Solution:
def primeDivision(self, N):
# code here
prime = []
for i in range(1,N+1):
if(i > 1):
for j in range(2,int(i**0.5)+1):
if(i%j == 0):
break
else:
prime.append(i)
#print(prime)
l=[]
i=0
j=len(prime)-1
while(i<len(prime) and j < len(prime) and i <= j):
if(prime[i]+prime[j] == N):
l.append([prime[i],prime[j]])
i+=1
j-=1
elif(prime[i]+prime[j] > N):
j-=1
elif(prime[i]+prime[j] < N):
i+=1
return l[0]
0
gujjulassr 2 months ago
0.2/11.1
class Solution {
    static List<Integer> primeDivision(int N) {
        // code here
        ArrayList<Integer> a = new ArrayList<Integer>();
        for(int i = 2; i <= N; i++) {
            if(prime(i)) {
                int k = N - i;
                if(prime(k)) {
                    a.add(i);
                    a.add(k);
                    return a;
                }
            }
        }
        return a;
    }
    public static boolean prime(int n) {
        for(int i = 2; i*i <= n; i++) {
            if(n % i == 0) {
                return false;
            }
        }
        return true;
    }
}
0
adityashuklajsr09 2 months ago
simplest python code
import math
def isprime(n):
if n <= 1:
return False
if n == 2:
return True
if n > 2 and n % 2 == 0:
return False
max_div = math.floor(math.sqrt(n))
for i in range(3, 1 + max_div, 2):
if n % i == 0:
return False
return True
class Solution:
def primeDivision(self, N):
# code here
i=2
while 1:
if isprime(i) and isprime(N-i):
return i,N-i
i+=1
0
adityashuklajsr09
This comment was deleted.
0
sujayghorpade 2 months ago
Simple Java code:
static List<Integer> primeDivision(int N){
// code here
List<Integer> res = new ArrayList<>();
//get all primes from 2 to N
int[] primes = java.util.stream.IntStream.rangeClosed(2,N).filter(Solution::isPrime).toArray();
int l = 0;
int r = primes.length-1;
while(l<=r){
if(primes[l] + primes[r] == N){
res.add(primes[l]);
res.add(primes[r]);
break;
}else if(primes[l] + primes[r] < N){
l++;
}else {
r--;
}
}
return res;
}
//helper method to check if number is prime
static boolean isPrime(int n){
return java.util.stream.IntStream.rangeClosed(2,(int)Math.sqrt(n)).noneMatch(num -> n%num==0);
}
-1
rko16 2 months ago
C++ Easy Solution
Total Time Taken:
0.0/3.4
bool isP(int n)
{
if(n==1)
return false;
if(n==2 || n==3)
return true;
if(n%2==0 || n%3==0)
return false;
for(int i=5;i*i<=n;i+=6)
if(n%i==0 || n%(i+2)==0)
return false;
return true;
}
vector<int> primeDivision(int N){
vector<int>v;
for(int i=1;i<=N;i++)
if(isP(i))
v.push_back(i);
int i=0,n=v.size()-1;
while(i<v.size())
if(v[i]+v[n]==N)
return {v[i],v[n]};
else if(v[i]+v[n]>N)
n--;
else
i++;
return {};
}
0
tarunchawla7463 2 months ago
Easy to understand C++ solution
class Solution{
public:
vector<int> primeDivision(int N){
vector<bool> table(N+1,false);
for(int i=2;i<sqrt(N);i++){
if(table[i]==false){
int t=i;
for(int j=i*t;j<N;j=i*(++t)){
table[j]=true;
}
}
}
vector<int> prime;
unordered_set<int> set;
for(int i=2;i<N;i++){
if(!table[i]){
prime.push_back(i);
set.insert(i);
}
}
vector<int> re;
for(int i=prime.size()-1;i>=0;i--){
if(set.count(N-prime[i])){
re.push_back(N-prime[i]);
re.push_back(prime[i]);
break;
}
}
return re;
}
};
Sort by subdocument in MongoDB
|
To sort by subdocument, use $sort in MongoDB. Let us create a collection with documents −
> db.demo245.insertOne(
... {
... "_id": 101,
... "deatils": [
... { "DueDate": new ISODate("2019-01-10"), "Value": 45},
... {"DueDate": new ISODate("2019-11-10"), "Value": 34 }
... ]
... }
...);
{ "acknowledged" : true, "insertedId" : 101 }
> db.demo245.insertOne(
... {
... "_id": 102,
... "details": [
... { "DueDate": new ISODate("2019-12-11"), "Value": 29},
... {"DueDate": new ISODate("2019-03-10"), "Value": 78}
... ]
... }
...);
{ "acknowledged" : true, "insertedId" : 102 }
Display all documents from a collection with the help of find() method −
> db.demo245.find();
This will produce the following output −
{
"_id" : 101, "deatils" : [
{ "DueDate" : ISODate("2019-01-10T00:00:00Z"), "Value" : 45 },
{ "DueDate" : ISODate("2019-11-10T00:00:00Z"), "Value" : 34 }
]
}
{
"_id" : 102, "details" : [
{ "DueDate" : ISODate("2019-12-11T00:00:00Z"), "Value" : 29 },
{ "DueDate" : ISODate("2019-03-10T00:00:00Z"), "Value" : 78 }
]
}
Following is the query to sort by subdocument −
> db.demo245.aggregate([
... { "$unwind": "$details" },
... { "$sort": { "_id": 1, "details.Value": -1 } },
... { "$group": {
... "_id": "$_id",
... "details": { "$push": "$details" }
... }},
... { "$sort": { "details.Value": -1 } }
...])
This will produce the following output. Note that only the document with _id 102 appears: in the first document the array field is misspelled "deatils", so the $unwind stage on "$details" drops it from the pipeline −
{ "_id" : 102, "details" : [ { "DueDate" : ISODate("2019-03-10T00:00:00Z"), "Value" : 78 }, { "DueDate" : ISODate("2019-12-11T00:00:00Z"), "Value" : 29 } ] }
init - Unix, Linux Command
Runlevels 0, 1, and 6 are reserved. Runlevel 0 is used to
halt the system, runlevel 6 is used to reboot the system, and runlevel
1 is used to get the system down into single user mode. Runlevel S
is not really meant to be used directly, but more for the scripts that are
executed when entering runlevel 1. For more information on this,
see the manpages for shutdown(8) and inittab(5).
Runlevels 7-9 are also valid, though not really documented. This is
because "traditional" Unix variants don’t use them.
In case you’re curious, runlevels S and s are in fact the same.
Internally they are aliases for the same runlevel.
Runlevel S or s bring the system to single user mode
and do not require an /etc/inittab file. In single user mode,
a root shell is opened on /dev/console.
When entering single user mode, init initializes the console's
stty settings to sane values. Clocal mode is set. Hardware
speed and handshaking are not changed.
When entering a multi-user mode for the first time, init performs the
boot and bootwait entries to allow file systems to be
mounted before users can log in. Then all entries matching the runlevel
are processed.
When starting a new process, init first checks whether the file
/etc/initscript exists. If it does, it uses this script to
start the process.
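A sample makes this hook clearer. The file below is a hypothetical sketch of /etc/initscript, not taken from any particular distribution; the positional-argument convention follows the usual sysvinit layout, where the script is invoked with the inittab id, runlevels, action, and process fields.

```shell
# Hypothetical /etc/initscript (sketch): init runs this wrapper instead of
# exec'ing the inittab command directly, so global defaults can be set here.
# Arguments: $1 = inittab id, $2 = runlevels, $3 = action, $4 = process.

ulimit -c 0   # example policy: no core dumps for init-spawned processes
umask 022

# Finally hand control to the real command from /etc/inittab.
eval exec "$4"
```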
Each time a child terminates, init records the fact and the reason
it died in /var/run/utmp and /var/log/wtmp,
provided that these files exist.
If init is not in single user mode and receives a powerfail
signal (SIGPWR), it reads the file /etc/powerstatus. It then starts
a command based on the contents of this file:
Usage of SIGPWR and /etc/powerstatus is discouraged. Someone
wanting to interact with init should use the /dev/initctl
control channel - see the source code of the sysvinit package
for more documentation about this.
When init is requested to change the runlevel, it sends the
warning signal SIGTERM to all processes that are undefined
in the new runlevel. It then waits 5 seconds before forcibly
terminating these processes via the SIGKILL signal.
Note that init assumes that all these processes (and their
descendants) remain in the same process group which init
originally created for them. If any process changes its process group
affiliation it will not receive these signals. Such processes need to
be terminated separately.
telinit can be invoked only by users with appropriate
privileges.
The init binary checks if it is init or telinit by looking
at its process id; the real init’s process id is always 1.
From this it follows that instead of calling telinit one can also
just use init instead as a shortcut.
/etc/inittab
/etc/initscript
/dev/console
/var/run/utmp
/var/log/wtmp
/dev/initctl
getty (1)
login (1)
sh (1)
runlevel (8)
shutdown (8)
kill (1)
Getting Started with OpenCV. First steps towards computer vision... | by Thiago Carvalho | Towards Data Science
A while back, I trained an object detection model for a college project, but honestly, I don’t remember much about it besides the fact it required lots of Redbulls and long nights watching my model train.
I’ve recently regained some interest in those topics, and I decided to start over and learn it again, but this time I’m taking notes and sharing my learnings.
— I wonder if someday we’ll be able to use style transfer to copy styles from one data viz to another without compromising its integrity.
OpenCV is an open-source library, initially developed by Intel, and it’s filled with handy methods and functions that support computer vision and machine learning.
In this article, I’ll get my feet wet learning how to read images, display them in a Jupyter Notebook, and how we can inspect and change some of its properties.
import cv2
import numpy as np
import matplotlib.pyplot as plt
Let’s start with .imread to load the picture, and then we can use .imshow for displaying it in a new window.
image = cv2.imread('img.jpg')
cv2.imshow('Some title', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The methods .waitKey and .destroyAllWindows are essential to keep our code from crashing. The first tells Jupyter to keep running that block until some key is pressed, and the second closes the window at the end.
We can also try displaying the image with Matplotlib .imshow; that way, it’ll be displayed inline instead of in a new window.
image = cv2.imread('img.jpg')
plt.imshow(image)
Uh, ok. That looks weird. The colors are all messed up.
OpenCV loads images as NumPy arrays, and those have three dimensions: Reds, Greens, and Blues. The dimensions are often referred to as channels, and they hold values from 0 to 255 that represent the intensity of color for each pixel.
>>> print(type(image))
>>> print(image.shape)
<class 'numpy.ndarray'>
(776, 960, 3)
That means it’s RGB, right? Not really. It’s BGR, which is the same thing but in a different order.
Matplotlib uses RGB, and that’s why our pic was looking weird. That’s not an issue since OpenCV has some very convenient methods for converting colors.
image = cv2.imread('img.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
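Since the loaded image is just a NumPy array, the same BGR-to-RGB swap can also be done by reversing the channel axis with a slice. This is not an OpenCV call, only array indexing, shown here on a tiny hand-made two-pixel array:

```python
import numpy as np

# A 1x2 "image" in BGR order: a pure-blue pixel and a pure-red pixel.
bgr = np.array([[[255, 0, 0],      # blue in BGR
                 [0, 0, 255]]],    # red in BGR
               dtype=np.uint8)

# Reversing the last axis swaps the channel order: BGR -> RGB.
rgb = bgr[:, :, ::-1]

print(rgb[0, 0])  # the blue pixel, now [0, 0, 255] in RGB order
print(rgb[0, 1])  # the red pixel, now [255, 0, 0] in RGB order
```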
Cool, we read and displayed our image with OpenCV and got a peek at how to convert BGR colors into RGB to display them inline with Matplotlib.
Other color formats can be handled with OpenCV, like HSV, CMYK, and more.
Since we’ll be repeating this a lot, let’s create a method for plotting with Matplotlib. We can set the size of the plot and remove the axis to make it even better.
def show(img):
    fig, ax = plt.subplots(1, figsize=(12, 8))
    ax.axis('off')
    plt.imshow(img, cmap='Greys')
Note that I’ve also set the colormap in .imshow to ‘Greys’; that parameter is ignored when we plot RGB images but will be helpful later on when we draw the individual dimensions of the arrays. For now, let’s try our method.
image = cv2.imread('img2.jpeg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
show(image)
Alright, now let’s try converting it to grayscale and then to RGB.
image = cv2.imread('img2.jpeg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)
show(image)
We can use .split to get individual arrays for the colors and assemble the picture back together with .merge. That’s practical for modifying, inspecting, and filtering a single dimension of our array.
For example, we can multiply the array by zero to remove it;
img = cv2.imread('img2.jpeg')
B, G, R = cv2.split(img)

img = cv2.merge([B*0, G, R*0])
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
show(img)
We can increase or decrease the intensity of a color, or build a new Numpy array with the same shape to replace it, or whatever you can think.
img = cv2.merge([np.ones_like(B)*255, G, R])
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
show(img)
The same concept of split and merge can be applied to other formats such as HSV and HSL.
img = cv2.imread('img2.jpeg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
H, S, V = cv2.split(img)

img = cv2.merge([np.ones_like(H)*30, S+10, V-20])
img = cv2.cvtColor(img, cv2.COLOR_HSV2RGB)
show(img)
HSV: Hue, Saturation, and Value.
That format is handy for filtering colors since it works with hue — That means, instead of having to figure out the ranges of combinations between red, green, and blue, we can use ranges of angles.
We can define lower and upper HSV boundaries with NumPy, apply the .inRange method to filter those values and create a mask, and then apply this mask to the saturation channel with .bitwise_and, which turns everything outside the boundaries to zero.
In other words: we can keep some colors and turn everything else grayscale.
# read img and convert to HSV
img = cv2.imread('img2.jpeg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# split dimensions
H, S, V = cv2.split(img)

# upper and lower boundaries
lower = np.array([80, 0, 0])
upper = np.array([120, 255, 255])

# build mask
mask = cv2.inRange(img, lower, upper)

# apply mask to saturation
S = cv2.bitwise_and(S, S, mask=mask)

# assemble image
img = cv2.merge([H, S, V])

# convert to RGB and display
img = cv2.cvtColor(img, cv2.COLOR_HSV2RGB)
show(img)
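If cv2.inRange feels opaque, its core test can be written out with NumPy alone: a pixel passes the mask only when every channel falls inside the [lower, upper] boundaries. A toy two-pixel HSV array makes that visible (this mimics the boundary test only, not the full OpenCV function, which also scales the mask to 0/255):

```python
import numpy as np

# Two HSV pixels: the first hue (90) is inside [80, 120], the second (10) is not.
hsv = np.array([[[90, 120, 200],
                 [10, 120, 200]]], dtype=np.uint8)
lower = np.array([80, 0, 0])
upper = np.array([120, 255, 255])

# Keep a pixel only if every channel is within its boundaries.
mask = ((hsv >= lower) & (hsv <= upper)).all(axis=-1)
print(mask)  # [[ True False]]
```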
Splitting the image also allows us to inspect its composition more easily.
We can plot a color from RGB, a Saturation from HSV, or any other channel we want.
img = cv2.imread('img2.jpeg')
B, G, R = cv2.split(img)
show(B)

img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
H, S, V = cv2.split(img)
show(S)
With the ‘Greys’ colormap, the values go from white (low) to black (high).
We can tell by looking at the first map that the intensity of blue is higher in the ground than it is in the building, and we can see with the saturation plot that the values around the skateboard are higher than in other parts of the image.
I’ll stop here for today. We explored how to load and display our pictures, how to convert the array to different color formats, and how to access, modify, and filter the dimensions.
For the next one, I’ll try to explore transformations and how to move, resize, crop, and rotate images.
Thanks for reading my article. I hope you enjoyed it.
Resources:
- OpenCV Read Image
- OpenCV Color Conversions
- Matplotlib Display Image
- OpenCV Operations on Arrays
- OpenCV Basic Operations
ByteBuffer getInt() method in Java with Examples - GeeksforGeeks
17 Jun, 2019
The getInt() method of java.nio.ByteBuffer class is used to read the next four bytes at this buffer’s current position, composing them into an int value according to the current byte order, and then increments the position by four.
Syntax:
public abstract int getInt()
Return Value: This method returns the int value at the buffer’s current position
Throws: This method throws BufferUnderflowException if there are fewer than four bytes remaining in this buffer.

Below are the examples to illustrate the getInt() method:
Example 1:
// Java program to demonstrate
// getInt() method

import java.nio.*;
import java.util.*;

public class GFG {
    public static void main(String[] args)
    {
        // Declaring the capacity of the ByteBuffer
        int capacity = 12;

        // Creating the ByteBuffer
        try {

            // creating object of ByteBuffer
            // and allocating size capacity
            ByteBuffer bb = ByteBuffer.allocate(capacity);

            // putting the int value in the bytebuffer
            bb.asIntBuffer()
                .put(10)
                .put(20)
                .put(30);

            // rewind the Bytebuffer
            bb.rewind();

            // print the ByteBuffer
            System.out.println("Original ByteBuffer: ");
            for (int i = 1; i <= capacity / 4; i++)
                System.out.print(bb.getInt() + " ");

            // rewind the Bytebuffer
            bb.rewind();

            // Reads the int at this buffer's current position
            // using getInt() method
            int value = bb.getInt();

            // print the int value
            System.out.println("\n\nByte Value: " + value);

            // Reads the int at this buffer's next position
            // using getInt() method
            int value1 = bb.getInt();

            // print the int value
            System.out.println("Next Byte Value: " + value1);
        }
        catch (BufferUnderflowException e) {
            System.out.println("\nException Thrown : " + e);
        }
    }
}
Original ByteBuffer:
10 20 30
Byte Value: 10
Next Byte Value: 20
Example 2:
// Java program to demonstrate
// getInt() method

import java.nio.*;
import java.util.*;

public class GFG {
    public static void main(String[] args)
    {
        // Declaring the capacity of the ByteBuffer
        int capacity = 8;

        // Creating the ByteBuffer
        try {

            // creating object of ByteBuffer
            // and allocating size capacity
            ByteBuffer bb = ByteBuffer.allocate(capacity);

            // putting the int value in the bytebuffer
            bb.asIntBuffer()
                .put(10)
                .put(20);

            // rewind the Bytebuffer
            bb.rewind();

            // print the ByteBuffer
            System.out.println("Original ByteBuffer: ");
            for (int i = 1; i <= capacity / 4; i++)
                System.out.print(bb.getInt() + " ");

            // rewind the Bytebuffer
            bb.rewind();

            // Reads the int at this buffer's current position
            // using getInt() method
            int value = bb.getInt();

            // print the int value
            System.out.println("\n\nByte Value: " + value);

            // Reads the int at this buffer's next position
            // using getInt() method
            int value1 = bb.getInt();

            // print the int value
            System.out.println("Next Byte Value: " + value1);

            // Reads the int at this buffer's next position;
            // no bytes remain, so this throws
            int value2 = bb.getInt();
        }
        catch (BufferUnderflowException e) {
            System.out.println("\nthere are fewer than "
                               + "four bytes remaining in this buffer");
            System.out.println("Exception Thrown : " + e);
        }
    }
}
Original ByteBuffer:
10 20
Byte Value: 10
Next Byte Value: 20
there are fewer than four bytes remaining in this buffer
Exception Thrown : java.nio.BufferUnderflowException
Reference: https://docs.oracle.com/javase/9/docs/api/java/nio/ByteBuffer.html#getInt–
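A BufferUnderflowException can also be avoided up front by checking remaining() before each read. The class and method names below are my own sketch, not from the article; the loop consumes whole ints only while at least Integer.BYTES bytes remain, so it stops cleanly at the end of the buffer:

```java
import java.nio.ByteBuffer;

public class SafeGetInt {

    // Read every complete int left in the buffer; stops before a
    // partial int instead of letting getInt() throw.
    static int[] drainInts(ByteBuffer bb) {
        int[] out = new int[bb.remaining() / Integer.BYTES];
        int i = 0;
        while (bb.remaining() >= Integer.BYTES) {
            out[i++] = bb.getInt();
        }
        return out;
    }

    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.allocate(8);
        bb.asIntBuffer().put(10).put(20);

        for (int v : drainInts(bb))
            System.out.println(v);  // prints 10, then 20
    }
}
```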
The getInt(int index) method of ByteBuffer is used to read four bytes at the given index, composing them into a int value according to the current byte order.
Syntax:
public abstract int getInt(int index)
Parameters: This method takes index (The index from which the Byte will be read) as a parameter.
Return Value: This method returns The int value at the given index.
Exception: This method throws IndexOutOfBoundsException if index is negative or not smaller than the buffer’s limit, minus three.
Below are the examples to illustrate the getInt(int index) method:
Example 1:
// Java program to demonstrate
// getInt(int index) method

import java.nio.*;
import java.util.*;

public class GFG {
    public static void main(String[] args)
    {
        // Declaring the capacity of the ByteBuffer
        int capacity = 8;

        // Creating the ByteBuffer
        try {

            // creating object of ByteBuffer
            // and allocating size capacity
            ByteBuffer bb = ByteBuffer.allocate(capacity);

            // putting the int value in the bytebuffer
            bb.asIntBuffer()
                .put(10)
                .put(20);

            // rewind the Bytebuffer
            bb.rewind();

            // print the ByteBuffer
            System.out.println("Original ByteBuffer: ");
            for (int i = 1; i <= capacity / 4; i++)
                System.out.print(bb.getInt() + " ");

            // rewind the Bytebuffer
            bb.rewind();

            // Reads the int at index 0
            // using getInt(int index) method
            int value = bb.getInt(0);

            // print the int value
            System.out.println("\n\nByte Value: " + value);

            // Reads the int at index 4
            // using getInt(int index) method
            int value1 = bb.getInt(4);

            // print the int value
            System.out.println("Next Byte Value: " + value1);
        }
        catch (IndexOutOfBoundsException e) {
            System.out.println("\nindex is negative or smaller "
                               + "than the buffer's limit, minus seven");
            System.out.println("Exception Thrown : " + e);
        }
    }
}
Original ByteBuffer:
10 20
Byte Value: 10
Next Byte Value: 20
Example 2:
// Java program to demonstrate
// getInt(int index) method

import java.nio.*;
import java.util.*;

public class GFG {
    public static void main(String[] args)
    {
        // Declaring the capacity of the ByteBuffer
        int capacity = 8;

        // Creating the ByteBuffer
        try {

            // creating object of ByteBuffer
            // and allocating size capacity
            ByteBuffer bb = ByteBuffer.allocate(capacity);

            // putting the int value in the bytebuffer
            bb.asIntBuffer()
                .put(10)
                .put(20);

            // rewind the Bytebuffer
            bb.rewind();

            // print the ByteBuffer
            System.out.println("Original ByteBuffer: ");
            for (int i = 1; i <= capacity / 4; i++)
                System.out.print(bb.getInt() + " ");

            // rewind the Bytebuffer
            bb.rewind();

            // Reads the int at index 0
            // using getInt(int index) method
            int value = bb.getInt(0);

            // print the int value
            System.out.println("\n\nByte Value: " + value);

            // Reads the int at index 7; fewer than four bytes
            // remain past that index, so this throws
            int value1 = bb.getInt(7);

            // print the int value
            System.out.println("Next Byte Value: " + value1);
        }
        catch (IndexOutOfBoundsException e) {
            System.out.println("\nindex is negative or smaller"
                               + " than the buffer's limit, minus seven");
            System.out.println("Exception Thrown : " + e);
        }
    }
}
Original ByteBuffer:
10 20
Byte Value: 10
index is negative or smaller than the buffer's limit, minus seven
Exception Thrown : java.lang.IndexOutOfBoundsException
Reference: https://docs.oracle.com/javase/9/docs/api/java/nio/ByteBuffer.html#getInt-int-
Java-ByteBuffer
Java-Functions
Java-NIO package
Java
"e": 28065,
"s": 26329,
"text": null
},
{
"code": null,
"e": 28242,
"s": 28065,
"text": "Original ByteBuffer: \n10 20 \n\nByte Value: 10\nNext Byte Value: 20\n\nthere are fewer than four bytes remaining in this buffer\nException Thrown : java.nio.BufferUnderflowException\n"
},
{
"code": null,
"e": 28328,
"s": 28242,
"text": "Reference: https://docs.oracle.com/javase/9/docs/api/java/nio/ByteBuffer.html#getInt–"
},
{
"code": null,
"e": 28487,
"s": 28328,
"text": "The getInt(int index) method of ByteBuffer is used to read four bytes at the given index, composing them into a int value according to the current byte order."
},
{
"code": null,
"e": 28496,
"s": 28487,
"text": "Syntax :"
},
{
"code": null,
"e": 28534,
"s": 28496,
"text": "public abstract int getInt(int index)"
},
{
"code": null,
"e": 28631,
"s": 28534,
"text": "Parameters: This method takes index (The index from which the Byte will be read) as a parameter."
},
{
"code": null,
"e": 28699,
"s": 28631,
"text": "Return Value: This method returns The int value at the given index."
},
{
"code": null,
"e": 28842,
"s": 28699,
"text": "Exception: This method throws IndexOutOfBoundsException. If index is negative or not smaller than the buffer’s limit this exception is thrown."
},
{
"code": null,
"e": 28909,
"s": 28842,
"text": "Below are the examples to illustrate the getInt(int index) method:"
},
{
"code": null,
"e": 28921,
"s": 28909,
"text": "Examples 1:"
},
{
"code": "// Java program to demonstrate// getInt() method import java.nio.*;import java.util.*; public class GFG { public static void main(String[] args) { // Declaring the capacity of the ByteBuffer int capacity = 8; // Creating the ByteBuffer try { // creating object of ByteBuffer // and allocating size capacity ByteBuffer bb = ByteBuffer.allocate(capacity); // putting the int value in the bytebuffer bb.asIntBuffer() .put(10) .put(20); // rewind the Bytebuffer bb.rewind(); // print the ByteBuffer System.out.println(\"Original ByteBuffer: \"); for (int i = 1; i <= capacity / 4; i++) System.out.print(bb.getInt() + \" \"); // rewind the Bytebuffer bb.rewind(); // Reads the Int at this buffer's current position // using getInt() method int value = bb.getInt(0); // print the int value System.out.println(\"\\n\\nByte Value: \" + value); // Reads the int at this buffer's next position // using getInt() method int value1 = bb.getInt(4); // print the int value System.out.println(\"Next Byte Value: \" + value1); } catch (IndexOutOfBoundsException e) { System.out.println(\"\\nindex is negative or smaller \" + \"than the buffer's limit, minus seven\"); System.out.println(\"Exception Thrown : \" + e); } }}",
"e": 30537,
"s": 28921,
"text": null
},
{
"code": null,
"e": 30603,
"s": 30537,
"text": "Original ByteBuffer: \n10 20 \n\nByte Value: 10\nNext Byte Value: 20\n"
},
{
"code": null,
"e": 30615,
"s": 30603,
"text": "Examples 2:"
},
{
"code": "// Java program to demonstrate// getInt() method import java.nio.*;import java.util.*; public class GFG { public static void main(String[] args) { // Declaring the capacity of the ByteBuffer int capacity = 8; // Creating the ByteBuffer try { // creating object of ByteBuffer // and allocating size capacity ByteBuffer bb = ByteBuffer.allocate(capacity); // putting the int value in the bytebuffer bb.asIntBuffer() .put(10) .put(20); // rewind the Bytebuffer bb.rewind(); // print the ByteBuffer System.out.println(\"Original ByteBuffer: \"); for (int i = 1; i <= capacity / 4; i++) System.out.print(bb.getInt() + \" \"); // rewind the Bytebuffer bb.rewind(); // Reads the Int at this buffer's current position // using getInt() method int value = bb.getInt(0); // print the int value System.out.println(\"\\n\\nByte Value: \" + value); // Reads the int at this buffer's next position // using getInt() method int value1 = bb.getInt(7); // print the int value System.out.println(\"Next Byte Value: \" + value1); } catch (IndexOutOfBoundsException e) { System.out.println(\"\\nindex is negative or smaller\" + \" than the buffer's limit, minus seven\"); System.out.println(\"Exception Thrown : \" + e); } }}",
"e": 32231,
"s": 30615,
"text": null
},
{
"code": null,
"e": 32399,
"s": 32231,
"text": "Original ByteBuffer: \n10 20 \n\nByte Value: 10\n\nindex is negative or smaller than the buffer's limit, minus seven\nException Thrown : java.lang.IndexOutOfBoundsException\n"
},
{
"code": null,
"e": 32489,
"s": 32399,
"text": "Reference: https://docs.oracle.com/javase/9/docs/api/java/nio/ByteBuffer.html#getInt-int-"
},
{
"code": null,
"e": 32505,
"s": 32489,
"text": "Java-ByteBuffer"
},
{
"code": null,
"e": 32520,
"s": 32505,
"text": "Java-Functions"
},
{
"code": null,
"e": 32537,
"s": 32520,
"text": "Java-NIO package"
},
{
"code": null,
"e": 32542,
"s": 32537,
"text": "Java"
},
{
"code": null,
"e": 32547,
"s": 32542,
"text": "Java"
},
{
"code": null,
"e": 32645,
"s": 32547,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 32654,
"s": 32645,
"text": "Comments"
},
{
"code": null,
"e": 32667,
"s": 32654,
"text": "Old Comments"
},
{
"code": null,
"e": 32718,
"s": 32667,
"text": "Object Oriented Programming (OOPs) Concept in Java"
},
{
"code": null,
"e": 32748,
"s": 32718,
"text": "HashMap in Java with Examples"
},
{
"code": null,
"e": 32779,
"s": 32748,
"text": "How to iterate any Map in Java"
},
{
"code": null,
"e": 32798,
"s": 32779,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 32830,
"s": 32798,
"text": "Initialize an ArrayList in Java"
},
{
"code": null,
"e": 32848,
"s": 32830,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 32868,
"s": 32848,
"text": "Stack Class in Java"
},
{
"code": null,
"e": 32900,
"s": 32868,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 32924,
"s": 32900,
"text": "Singleton Class in Java"
}
] |
Count common elements in two arrays containing multiples of N and M - GeeksforGeeks
|
15 Mar, 2021
Given two arrays such that the first array contains multiples of an integer n that are less than or equal to k and, similarly, the second array contains multiples of an integer m that are less than or equal to k. The task is to find the number of common elements between the arrays. Examples:
Input: n = 2, m = 3, k = 9
Output: 1
First array would be = [2, 4, 6, 8]
Second array would be = [3, 6, 9]
6 is the only common element

Input: n = 1, m = 2, k = 5
Output: 2
Approach: Find the LCM of n and m. As the LCM is the least common multiple of n and m, all multiples of the LCM are common to both arrays. The number of multiples of the LCM that are less than or equal to k is k / LCM(n, m). To find the LCM, first calculate the GCD of the two numbers using the Euclidean algorithm; then lcm(n, m) = n * m / gcd(n, m). Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ implementation of the above approach
#include <bits/stdc++.h>
using namespace std;

// Recursive function to find
// gcd using euclidean algorithm
int gcd(int a, int b)
{
    if (a == 0)
        return b;
    return gcd(b % a, a);
}

// Function to find lcm
// of two numbers using gcd
int lcm(int n, int m)
{
    return (n * m) / gcd(n, m);
}

// Driver code
int main()
{
    int n = 2, m = 3, k = 5;
    cout << k / lcm(n, m) << endl;
    return 0;
}
// Java implementation of the above approach
import java.util.*;
import java.lang.*;
import java.io.*;

class GFG {

    // Recursive function to find
    // gcd using euclidean algorithm
    static int gcd(int a, int b)
    {
        if (a == 0)
            return b;
        return gcd(b % a, a);
    }

    // Function to find lcm
    // of two numbers using gcd
    static int lcm(int n, int m)
    {
        return (n * m) / gcd(n, m);
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 2, m = 3, k = 5;
        System.out.print(k / lcm(n, m));
    }
}

// This code is contributed by mohit kumar 29
# Python3 implementation of the above approach

# Recursive function to find
# gcd using euclidean algorithm
def gcd(a, b):
    if a == 0:
        return b
    return gcd(b % a, a)

# Function to find lcm
# of two numbers using gcd
def lcm(n, m):
    return (n * m) // gcd(n, m)

# Driver code
if __name__ == "__main__":
    n = 2
    m = 3
    k = 5
    print(k // lcm(n, m))

# This code is contributed by AnkitRai01
// C# implementation of the above approach
using System;

class GFG {

    // Recursive function to find
    // gcd using euclidean algorithm
    static int gcd(int a, int b)
    {
        if (a == 0)
            return b;
        return gcd(b % a, a);
    }

    // Function to find lcm
    // of two numbers using gcd
    static int lcm(int n, int m)
    {
        return (n * m) / gcd(n, m);
    }

    // Driver code
    public static void Main(String[] args)
    {
        int n = 2, m = 3, k = 5;
        Console.WriteLine(k / lcm(n, m));
    }
}

// This code is contributed by Princi Singh
<script>

// Javascript implementation of the above approach

// Recursive function to find
// gcd using euclidean algorithm
function gcd(a, b)
{
    if (a == 0)
        return b;
    return gcd(b % a, a);
}

// Function to find lcm
// of two numbers using gcd
function lcm(n, m)
{
    return (n * m) / gcd(n, m);
}

// Driver code
var n = 2, m = 3, k = 5;
document.write(parseInt(k / lcm(n, m)));

// This code is contributed by Amit Katiyar

</script>
0
Time Complexity: O(log(min(n, m)))
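The closed form k / lcm(n, m) can be cross-checked against a direct brute-force count. The sketch below (class and method names are mine, not from the article) compares the two approaches on the sample inputs:

```java
public class CommonMultiples {
    // Euclidean GCD, as in the article's implementations
    static int gcd(int a, int b) { return a == 0 ? b : gcd(b % a, a); }

    // Closed form: multiples of lcm(n, m) that are <= k
    static int countCommon(int n, int m, int k) {
        return k / (n * m / gcd(n, m));
    }

    // Brute force: count x <= k divisible by both n and m
    static int bruteForce(int n, int m, int k) {
        int count = 0;
        for (int x = 1; x <= k; x++)
            if (x % n == 0 && x % m == 0)
                count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countCommon(2, 3, 9)); // 1 (only 6 is common)
        System.out.println(countCommon(1, 2, 5)); // 2
        System.out.println(countCommon(2, 3, 9) == bruteForce(2, 3, 9)); // true
    }
}
```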
ankthon
mohit kumar 29
princi singh
amit143katiyar
GCD-LCM
Competitive Programming
Greedy
Mathematical
Greedy
Mathematical
|
[
{
"code": null,
"e": 25560,
"s": 25532,
"text": "\n15 Mar, 2021"
},
{
"code": null,
"e": 25854,
"s": 25560,
"text": "Given two arrays such that the first array contains multiples of an integer n which are less than or equal to k and similarly, the second array contains multiples of an integer m which are less than or equal to k.The task is to find the number of common elements between the arrays.Examples: "
},
{
"code": null,
"e": 26018,
"s": 25854,
"text": "Input :n=2 m=3 k=9 Output : 1 First array would be = [ 2, 4, 6, 8 ] Second array would be = [ 3, 6, 9 ] 6 is the only common elementInput :n=1 m=2 k=5 Output : 2 "
},
{
"code": null,
"e": 26433,
"s": 26020,
"text": "Approach : Find the LCM of n and m .As LCM is the least common multiple of n and m, all the multiples of LCM would be common in both the arrays. The number of multiples of LCM which are less than or equal to k would be equal to k/(LCM(m, n)).To find the LCM first calculate the GCD of two numbers using the Euclidean algorithm and lcm of n, m is n*m/gcd(n, m).Below is the implementation of the above approach: "
},
{
"code": null,
"e": 26437,
"s": 26433,
"text": "C++"
},
{
"code": null,
"e": 26442,
"s": 26437,
"text": "Java"
},
{
"code": null,
"e": 26450,
"s": 26442,
"text": "Python3"
},
{
"code": null,
"e": 26453,
"s": 26450,
"text": "C#"
},
{
"code": null,
"e": 26464,
"s": 26453,
"text": "Javascript"
},
{
"code": "// C++ implementation of the above approach#include <bits/stdc++.h> using namespace std; // Recursive function to find// gcd using euclidean algorithmint gcd(int a, int b){ if (a == 0) return b; return gcd(b % a, a);} // Function to find lcm// of two numbers using gcdint lcm(int n, int m){ return (n * m) / gcd(n, m);} // Driver codeint main(){ int n = 2, m = 3, k = 5; cout << k / lcm(n, m) << endl; return 0;}",
"e": 26904,
"s": 26464,
"text": null
},
{
"code": "// Java implementation of the above approachimport java.util.*;import java.lang.*;import java.io.*; class GFG{ // Recursive function to find// gcd using euclidean algorithmstatic int gcd(int a, int b){ if (a == 0) return b; return gcd(b % a, a);} // Function to find lcm// of two numbers using gcdstatic int lcm(int n, int m){ return (n * m) / gcd(n, m);} // Driver codepublic static void main(String[] args){ int n = 2, m = 3, k = 5; System.out.print( k / lcm(n, m));}} // This code is contributed by mohit kumar 29",
"e": 27444,
"s": 26904,
"text": null
},
{
"code": "# Python3 implementation of the above approach # Recursive function to find# gcd using euclidean algorithmdef gcd(a, b) : if (a == 0) : return b; return gcd(b % a, a); # Function to find lcm# of two numbers using gcddef lcm(n, m) : return (n * m) // gcd(n, m); # Driver codeif __name__ == \"__main__\" : n = 2; m = 3; k = 5; print(k // lcm(n, m)); # This code is contributed by AnkitRai01",
"e": 27867,
"s": 27444,
"text": null
},
{
"code": "// C# implementation of the above approachusing System; class GFG{ // Recursive function to find// gcd using euclidean algorithmstatic int gcd(int a, int b){ if (a == 0) return b; return gcd(b % a, a);} // Function to find lcm// of two numbers using gcdstatic int lcm(int n, int m){ return (n * m) / gcd(n, m);} // Driver codepublic static void Main(String[] args){ int n = 2, m = 3, k = 5; Console.WriteLine( k / lcm(n, m));}} // This code is contributed by Princi Singh",
"e": 28366,
"s": 27867,
"text": null
},
{
"code": "<script> // javascript implementation of the above approach// Recursive function to find// gcd using euclidean algorithmfunction gcd(a, b){ if (a == 0) return b; return gcd(b % a, a);} // Function to find lcm// of two numbers using gcdfunction lcm(n, m){ return (n * m) / gcd(n, m);} // Driver code var n = 2, m = 3, k = 5; document.write( parseInt(k / lcm(n, m))); // This code is contributed by Amit Katiyar </script>",
"e": 28802,
"s": 28366,
"text": null
},
{
"code": null,
"e": 28804,
"s": 28802,
"text": "0"
},
{
"code": null,
"e": 28842,
"s": 28806,
"text": "Time Complexity : O(log(min(n,m))) "
},
{
"code": null,
"e": 28850,
"s": 28842,
"text": "ankthon"
},
{
"code": null,
"e": 28865,
"s": 28850,
"text": "mohit kumar 29"
},
{
"code": null,
"e": 28878,
"s": 28865,
"text": "princi singh"
},
{
"code": null,
"e": 28893,
"s": 28878,
"text": "amit143katiyar"
},
{
"code": null,
"e": 28901,
"s": 28893,
"text": "GCD-LCM"
},
{
"code": null,
"e": 28925,
"s": 28901,
"text": "Competitive Programming"
},
{
"code": null,
"e": 28932,
"s": 28925,
"text": "Greedy"
},
{
"code": null,
"e": 28945,
"s": 28932,
"text": "Mathematical"
},
{
"code": null,
"e": 28952,
"s": 28945,
"text": "Greedy"
},
{
"code": null,
"e": 28965,
"s": 28952,
"text": "Mathematical"
},
{
"code": null,
"e": 29063,
"s": 28965,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29090,
"s": 29063,
"text": "Modulo 10^9+7 (1000000007)"
},
{
"code": null,
"e": 29128,
"s": 29090,
"text": "Bits manipulation (Important tactics)"
},
{
"code": null,
"e": 29206,
"s": 29128,
"text": "Prefix Sum Array - Implementation and Applications in Competitive Programming"
},
{
"code": null,
"e": 29261,
"s": 29206,
"text": "Top 15 Websites for Coding Challenges and Competitions"
},
{
"code": null,
"e": 29286,
"s": 29261,
"text": "Formatted output in Java"
},
{
"code": null,
"e": 29337,
"s": 29286,
"text": "Dijkstra's shortest path algorithm | Greedy Algo-7"
},
{
"code": null,
"e": 29395,
"s": 29337,
"text": "Kruskal’s Minimum Spanning Tree Algorithm | Greedy Algo-2"
},
{
"code": null,
"e": 29446,
"s": 29395,
"text": "Prim’s Minimum Spanning Tree (MST) | Greedy Algo-5"
},
{
"code": null,
"e": 29473,
"s": 29446,
"text": "Program for array rotation"
}
] |
GATE | GATE-CS-2015 (Set 1) | Question 65 - GeeksforGeeks
|
16 Nov, 2018
Consider the following C function.
int fun1 (int n)
{
    int i, j, k, p, q = 0;
    for (i = 1; i < n; ++i)
    {
        p = 0;
        for (j = n; j > 1; j = j / 2)
            ++p;
        for (k = 1; k < p; k = k * 2)
            ++q;
    }
    return q;
}
Which one of the following most closely approximates the return value of the function fun1?
(A) n^3
(B) n (log n)^2
(C) n log n
(D) n log(log n)
Answer: (D)
Explanation:
int fun1 (int n)
{
int i, j, k, p, q = 0;
// This loop runs Θ(n) time
for (i = 1; i < n; ++i)
{
p = 0;
        // This loop runs Θ(Log n) times
for (j=n; j > 1; j=j/2)
++p;
// Since above loop runs Θ(Log n) times, p = Θ(Log n)
        // This loop runs Θ(Log p) times, which is Θ(log log n)
for (k=1; k < p; k=k*2)
++q;
}
return q;
}
T(n) = n(log n + log log n); the n log n term dominates the total running time.
But note that the function returns q, which is incremented in the log log n inner loop, so the return value grows as n log(log n).
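The claim about the return value can be checked empirically with a Java port of fun1 (my translation of the C code above). For n = 16, each outer iteration computes p = 4 (16 → 8 → 4 → 2) and the inner loop increments q twice (k = 1, 2), over 15 outer iterations, giving 30:

```java
public class Fun1Check {
    static int fun1(int n) {
        int p, q = 0;
        for (int i = 1; i < n; ++i) {
            p = 0;
            for (int j = n; j > 1; j = j / 2) // Θ(log n) halvings
                ++p;
            for (int k = 1; k < p; k = k * 2) // Θ(log p) = Θ(log log n) doublings
                ++q;
        }
        return q;
    }

    public static void main(String[] args) {
        System.out.println(fun1(16)); // 30
    }
}
```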
GATE-CS-2015 (Set 1)
GATE-GATE-CS-2015 (Set 1)
GATE
|
[
{
"code": null,
"e": 24602,
"s": 24574,
"text": "\n16 Nov, 2018"
},
{
"code": null,
"e": 24637,
"s": 24602,
"text": "Consider the following C function."
},
{
"code": "int fun1 (int n){ int i, j, k, p, q = 0; for (i = 1; i<n; ++i) { p = 0; for (j = n; j > 1; j = j/2) ++p; for (k = 1; k < p; k = k*2) ++q; } return q;}",
"e": 24829,
"s": 24637,
"text": null
},
{
"code": null,
"e": 24986,
"s": 24829,
"text": "Which one of the following most closely approximates the return value of the function fun1?(A) n3(B) n (logn)2(C) nlogn(D) nlog(logn)Answer: (D)Explanation:"
},
{
"code": null,
"e": 25392,
"s": 24986,
"text": "int fun1 (int n)\n{\n int i, j, k, p, q = 0;\n\n // This loop runs Θ(n) time\n for (i = 1; i < n; ++i)\n {\n p = 0;\n\n // This loop runs Θ(Log n) times. Refer this \n for (j=n; j > 1; j=j/2)\n ++p;\n \n // Since above loop runs Θ(Log n) times, p = Θ(Log n)\n // This loop runs Θ(Log p) times which loglogn\n for (k=1; k < p; k=k*2)\n ++q;\n \n }\n return q;\n}"
},
{
"code": null,
"e": 25440,
"s": 25392,
"text": "T(n) = n(logn + loglogn)T(n) = n(logn) dominant"
},
{
"code": null,
"e": 25532,
"s": 25440,
"text": "But please note here we are return q which lies in loglogn so ans should be T(n) = nloglogn"
},
{
"code": null,
"e": 25577,
"s": 25532,
"text": "Refer this for details.Quiz of this Question"
},
{
"code": null,
"e": 25598,
"s": 25577,
"text": "GATE-CS-2015 (Set 1)"
},
{
"code": null,
"e": 25624,
"s": 25598,
"text": "GATE-GATE-CS-2015 (Set 1)"
},
{
"code": null,
"e": 25629,
"s": 25624,
"text": "GATE"
},
{
"code": null,
"e": 25727,
"s": 25629,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25761,
"s": 25727,
"text": "GATE | GATE-IT-2004 | Question 66"
},
{
"code": null,
"e": 25803,
"s": 25761,
"text": "GATE | GATE-CS-2014-(Set-3) | Question 65"
},
{
"code": null,
"e": 25837,
"s": 25803,
"text": "GATE | GATE CS 2010 | Question 24"
},
{
"code": null,
"e": 25870,
"s": 25837,
"text": "GATE | GATE CS 2011 | Question 7"
},
{
"code": null,
"e": 25904,
"s": 25870,
"text": "GATE | GATE-IT-2004 | Question 71"
},
{
"code": null,
"e": 25937,
"s": 25904,
"text": "GATE | GATE-CS-2004 | Question 3"
},
{
"code": null,
"e": 25971,
"s": 25937,
"text": "GATE | GATE CS 2019 | Question 27"
},
{
"code": null,
"e": 26013,
"s": 25971,
"text": "GATE | GATE-CS-2016 (Set 1) | Question 65"
},
{
"code": null,
"e": 26055,
"s": 26013,
"text": "GATE | GATE-CS-2016 (Set 2) | Question 61"
}
] |
Kali Linux – Password Cracking Tool
|
02 Jun, 2021
Password cracking is a mechanism used in most areas of hacking. Exploitation uses it to break into applications by cracking their administrator or other account passwords; information gathering uses it when we have to get into the social media or other accounts of the C.E.O. or other employees of the target organization; Wi-Fi hacking uses it when we have to crack the hash from a captured Wi-Fi password hash file; and so on.
So to be a good ethical hacker, one must be aware of password cracking techniques. Though it is possible to crack passwords by plain guessing, doing so is time consuming and inefficient, so there are many tools to automate the task. When it comes to such tools, Kali Linux is the operating system that stands first, so here is a list of tools in Kali Linux that may be used for password cracking.
1. Crunch
In order to crack a password, we have to try many candidates to find the right one. When an attacker uses thousands or millions of words or character combinations, there is no guarantee that any one of those combinations will work. Such a collection of character combinations is called a wordlist, and to crack a password or a hash we need a good wordlist that can break it. For this purpose Kali Linux provides a tool called crunch.
crunch is a wordlist-generating tool that comes pre-installed with Kali Linux. It generates custom wordlists by permutation and combination, and specific patterns and symbols can be used to control the generated words.
To use crunch, enter the following command in the terminal.
crunch
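crunch itself is a native command-line tool; the underlying idea, enumerating every fixed-length combination of a character set, can be sketched as follows (an illustration of the concept only, not crunch's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class WordlistSketch {
    // Generate every string of the given length over the charset.
    static List<String> generate(String charset, int length) {
        List<String> words = new ArrayList<>();
        build(charset, length, "", words);
        return words;
    }

    // Recursively extend the prefix one character at a time.
    static void build(String charset, int length, String prefix, List<String> out) {
        if (prefix.length() == length) {
            out.add(prefix);
            return;
        }
        for (char c : charset.toCharArray())
            build(charset, length, prefix + c, out);
    }

    public static void main(String[] args) {
        // 2-character candidates over {a, b}: aa, ab, ba, bb
        System.out.println(generate("ab", 2));
    }
}
```

A real wordlist grows exponentially with length, which is why crunch streams its output to a file rather than holding it in memory.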
2. RainbowCrack
RainbowCrack is a tool that uses the time-memory trade-off technique to crack password hashes. Instead of the traditional brute-force method, it uses rainbow tables: possible plaintexts and their hashes are precomputed, and a captured hash is then matched against the table. When a matching hash is found, the corresponding plaintext is the cracked password.
To use RainbowCrack, enter the following command in the terminal.
rcrack
3. Burp Suite
Burp Suite is one of the most popular web application security testing tools. It is used as a proxy, so all requests from a browser configured with the proxy pass through it. As each request passes through Burp Suite, we can modify it as needed, which is useful for testing vulnerabilities like XSS or SQLi, or any other web-related vulnerability. Kali Linux comes with Burp Suite Community Edition, which is free, but there is a paid edition known as Burp Suite Professional with many more features than the community edition. It includes an Intruder tool that automates password cracking with wordlists.
To use burp suite:
Read this to learn how to setup burp suite.
Open terminal and type “burpsuite” there.
Go to the Proxy tab and turn the interceptor switch to on.
Now visit any URL and it could be seen that the request is captured.
4. Maltego
Maltego is a platform developed to convey and put forward a clear picture of the environment that an organization owns and operates. Maltego offers a unique perspective on both network and resource-based entities through the aggregation of information delivered all over the internet; whether it is the current configuration of a router poised on the edge of our network or any other information, Maltego can locate, aggregate, and visualize it. It offers the user unprecedented information, which translates into leverage and power.
Maltego’s Uses:
It is used to exhibit the complexity and severity of single points of failure, as well as the trust relationships that currently exist within the scope of the infrastructure.
It is used in the collection of information on all security-related work. It will save time and will allow us to work more accurately and in a smarter way.
It aids the thinking process by visually demonstrating interconnected links between searched items.
It provides a much more powerful search, giving smarter results.
It helps to discover “hidden” information.
To use Maltego: Go to applications menu and then select “maltego” tool to execute it.
5. John the Ripper
John the Ripper is a great tool for cracking passwords using well-known brute-force approaches such as the dictionary attack or a custom wordlist attack. It can even crack the hashes or passwords of zipped or compressed files, and of locked files as well. It has many options for cracking hashes or passwords.
To use John the Ripper
John the ripper comes pre-installed in Kali Linux.
Just type “john” in the terminal to use the tool.
saurabh1990aror
Kali-Linux
Linux-Unix
|
[
{
"code": null,
"e": 54,
"s": 26,
"text": "\n02 Jun, 2021"
},
{
"code": null,
"e": 489,
"s": 54,
"text": "Password cracking is a mechanism that is used in most of the parts of hacking. Exploitation uses it to exploit the applications by cracking their administrator or other account passwords, Information Gathering uses it when we have to get the social media or other accounts of the C.E.O. or other employees of the target organization, Wifi Hacking uses it when we have to crack the hash from the captured wifi password hash file, etc. "
},
{
"code": null,
"e": 910,
"s": 489,
"text": "So to be a good Ethical hacker one must be aware of password cracking techniques. Though it is easy to crack passwords by just using guessing techniques, it is very time consuming and less efficient so in order to automate the task, we have a lot of tools. When it comes to tools Kali Linux is the Operating System that stands first, So here we have a list of tools in Kali Linux that may be used for Password Cracking. "
},
{
"code": null,
"e": 920,
"s": 910,
"text": "1. Crunch"
},
{
"code": null,
"e": 1432,
"s": 920,
"text": "In order to hack a password, we have to try a lot of passwords to get the right one. When an attacker uses thousands or millions of words or character combinations to crack a password there is no surety that any one of those millions of combinations will work or not. This collection of a different combination of characters is called a wordlist. And in order to crack a password or a hash, we need to have a good wordlist which could break the password. So to do so we have a tool in Kali Linux called crunch. "
},
{
"code": null,
"e": 1701,
"s": 1432,
"text": "crunch is a wordlist generating tool that comes pre-installed with Kali Linux. It is used to generate custom keywords based on wordlists. It generates a wordlist with permutation and combination. We could use some specific patterns and symbols to generate a wordlist. "
},
{
"code": null,
"e": 1762,
"s": 1701,
"text": "To use crunch, enter the following command in the terminal. "
},
{
"code": null,
"e": 1769,
"s": 1762,
"text": "crunch"
},
{
"code": null,
"e": 1785,
"s": 1769,
"text": "2. RainbowCrack"
},
{
"code": null,
"e": 2261,
"s": 1785,
"text": "Rainbow crack is a tool that uses the time-memory trade-off technique in order to crack hashes of passwords. It uses rainbow tables in order to crack hashes of passwords. It doesn’t use the traditional brute force method for cracking passwords. It generates all the possible plaintexts and computes the hashes respectively. After that, it matches hash with the hashes of all the words in a wordlist. And when it finds the matching hashes, it results in the cracked password. "
},
{
"code": null,
"e": 2328,
"s": 2261,
"text": "To use RainbowCrack, enter the following command in the terminal. "
},
{
"code": null,
"e": 2336,
"s": 2328,
"text": "rcrack "
},
{
"code": null,
"e": 2350,
"s": 2336,
"text": "3. Burp Suite"
},
{
"code": null,
"e": 3055,
"s": 2350,
"text": "Burp Suite is one of the most popular web application security testing software. It is used as a proxy, so all the requests from the browser with the proxy pass through it. And as the request passes through the burp suite, it allows us to make changes to those requests as per our need which is good for testing vulnerabilities like XSS or SQLi or even any vulnerability related to the web. Kali Linux comes with burp suite community edition which is free but there is a paid edition of this tool known as burp suite professional which has a lot many functions as compared to burp suite community edition. It comes with an intruder tool that automates the process of password cracking through wordlists. "
},
{
"code": null,
"e": 3075,
"s": 3055,
"text": "To use burp suite: "
},
{
"code": null,
"e": 3119,
"s": 3075,
"text": "Read this to learn how to setup burp suite."
},
{
"code": null,
"e": 3161,
"s": 3119,
"text": "Open terminal and type “burpsuite” there."
},
{
"code": null,
"e": 3220,
"s": 3161,
"text": "Go to the Proxy tab and turn the interceptor switch to on."
},
{
"code": null,
"e": 3290,
"s": 3220,
"text": "Now visit any URL and it could be seen that the request is captured. "
},
{
"code": null,
"e": 3301,
"s": 3290,
"text": "4. Maltego"
},
{
"code": null,
"e": 3841,
"s": 3301,
"text": "Maltego is a platform developed to convey and put forward a clear picture of the environment that an organization owns and operates. Maltego offers a unique perspective to both network and resource-based entities which is the aggregation of information delivered all over the internet – whether it’s the current configuration of a router poised on the edge of our network or any other information, Maltego can locate, aggregate and visualize this information. It offers the user with unprecedented information which is leverage and power. "
},
{
"code": null,
"e": 3858,
"s": 3841,
"text": "Maltego’s Uses: "
},
{
"code": null,
"e": 4028,
"s": 3858,
"text": "It is used to exhibit the complexity and severity of single points of failure as well as trust relationships that exist currently within the scope of the infrastructure."
},
{
"code": null,
"e": 4184,
"s": 4028,
"text": "It is used in the collection of information on all security-related work. It will save time and will allow us to work more accurately and in a smarter way."
},
{
"code": null,
"e": 4286,
"s": 4184,
"text": "It aids us in thinking process by visually demonstrating interconnected links between searched items."
},
{
"code": null,
"e": 4351,
"s": 4286,
"text": "It provides a much more powerful search, giving smarter results."
},
{
"code": null,
"e": 4394,
"s": 4351,
"text": "It helps to discover “hidden” information."
},
{
"code": null,
"e": 4482,
"s": 4394,
"text": "To use Maltego: Go to applications menu and then select “maltego” tool to execute it. "
},
{
"code": null,
"e": 4501,
"s": 4482,
"text": "5. John the Ripper"
},
{
"code": null,
"e": 4822,
"s": 4501,
"text": "John the Ripper is a great tool for cracking passwords using some famous brute for attacks like dictionary attack or custom wordlist attack etc. It is even used to crack the hashes or passwords for the zipped or compressed files and even locked files as well. It has many available options to crack hashes or passwords. "
},
{
"code": null,
"e": 4846,
"s": 4822,
"text": "To use John the Ripper "
},
{
"code": null,
"e": 4897,
"s": 4846,
"text": "John the ripper comes pre-installed in Kali Linux."
},
{
"code": null,
"e": 4948,
"s": 4897,
"text": "Just type “john” in the terminal to use the tool. "
},
{
"code": null,
"e": 4966,
"s": 4950,
"text": "saurabh1990aror"
},
{
"code": null,
"e": 4977,
"s": 4966,
"text": "Kali-Linux"
},
{
"code": null,
"e": 4988,
"s": 4977,
"text": "Linux-Unix"
},
{
"code": null,
"e": 5086,
"s": 4988,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 5121,
"s": 5086,
"text": "tar command in Linux with examples"
},
{
"code": null,
"e": 5159,
"s": 5121,
"text": "Conditional Statements | Shell Script"
},
{
"code": null,
"e": 5192,
"s": 5159,
"text": "'crontab' in Linux with Examples"
},
{
"code": null,
"e": 5228,
"s": 5192,
"text": "Tail command in Linux with examples"
},
{
"code": null,
"e": 5254,
"s": 5228,
"text": "Docker - COPY Instruction"
},
{
"code": null,
"e": 5289,
"s": 5254,
"text": "scp command in Linux with Examples"
},
{
"code": null,
"e": 5327,
"s": 5289,
"text": "UDP Server-Client implementation in C"
},
{
"code": null,
"e": 5362,
"s": 5327,
"text": "Cat command in Linux with examples"
},
{
"code": null,
"e": 5398,
"s": 5362,
"text": "echo command in Linux with Examples"
}
] |
Using GitHub to host a free static website
|
02 May, 2022
In this tutorial, we are going to learn about using GitHub to host our website, without paying anything at all, using GitHub Pages.
It’s important that you follow this link as a prerequisite to this tutorial.
A beginner may not be sure whether to invest in buying server space; GitHub gets the work done, for free, in a clean and elegant manner, using a feature called “GitHub Pages”.
Firstly, let us give a quick introduction to all the technologies we are going to use.
Let’s Begin!!
Step 1: Create or sign into your GitHub account.
Step 2: Create a new GitHub repository. Go to your dashboard and find the “New Repository” button. Click on it to create a new repository and name it whatever you want; let's say I named my repo “first-repo-gfg”. In the description, you can give some information about your repository.
While creating a new repository, GitHub asks for a variety of details about it. You need not worry about them at all: just write the name of the repo and check “Add a README file”, which creates that file in your root folder (if you want to know what a README file is, you can check it out here). There will be 2 more options, one for adding a .gitignore and one to choose a license; you can ignore these two for now. Make sure your repo is public and just click on “Create repository”.
After you have created your repository, it will look something like this. You can edit your README file by clicking on the pencil icon and commit the changes to the master branch.
Step 3: First we have to clone this repo onto our local system, so click on “Code” and copy the URL of the repository.
Step 4: Now open Git Bash or any terminal on your system, making sure that Git is already installed and configured properly. Run the clone command; this will create a folder on your local system with the same name as your GitHub repository.
git clone “url of repo that you have copied“
Here we go! Your website is now up and running.
Now go inside this folder; it will contain only a single file, README.md. Here you have to create your webpage or a whole website. Let's make a simple webpage and host it on GitHub.
Make an index.html file and write “Hello World this is my first web page.” like this:
HTML
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>GFG</title>
</head>
<body>
    <h1>Hello World this is my first web page.</h1>
</body>
</html>
Step 5: Now run some basic commands to push this file to GitHub.
git add -A
git commit -a -m “first commit”
git push origin master
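The three git commands above can also be scripted from Python with the standard subprocess module. The sketch below is only an illustration: the `publish` helper and its `dry_run` flag are our own, not part of the article.

```python
import subprocess

def publish(repo_dir, message, dry_run=False):
    """Run the article's three git commands in order (hypothetical helper)."""
    cmds = [
        ["git", "add", "-A"],
        ["git", "commit", "-a", "-m", message],
        ["git", "push", "origin", "master"],
    ]
    if dry_run:
        return cmds  # only report what would run, don't touch the repo
    for cmd in cmds:
        subprocess.run(cmd, cwd=repo_dir, check=True)
    return cmds

for cmd in publish(".", "first commit", dry_run=True):
    print(" ".join(cmd))
# git add -A
# git commit -a -m first commit
# git push origin master
```

With `dry_run=False` the helper actually runs the commands inside `repo_dir`, so only call it from a cloned repository.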
Now the webpage is pushed to GitHub. You can see the index.html file in your repository; it will now contain 2 files (README.md and index.html), and we are good to go to host our webpage.
Step 6: Now just follow these steps carefully.
Go to Settings and scroll down to GitHub Pages.
Click on “check it out here”
Click on the dropdown (currently showing “None”) and select your branch; in our case it is master.
Click on save. You can see a URL at the top in the format https://<username>.github.io/<repo name>. This is the URL of your hosted webpage.
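The URL pattern from the last step can be expressed as a tiny helper. This is a sketch only: the `pages_url` function and the `octocat` username are illustrative, not anything GitHub provides.

```python
def pages_url(username, repo):
    # GitHub Pages serves a project site at https://<username>.github.io/<repo name>
    return "https://{}.github.io/{}".format(username, repo)

print(pages_url("octocat", "first-repo-gfg"))
# https://octocat.github.io/first-repo-gfg
```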
Congratulations, you have successfully hosted your first web page for free.
Now you can use GitHub to test your HTML5 & CSS3 projects and showcase them to the real world, on a real and beautiful website!
So that’s how GitHub Pages works, really easy! You can make unlimited repository sites.
111arpit1
rkbhola5
GitHub
Git
|
[
{
"code": null,
"e": 54,
"s": 26,
"text": "\n02 May, 2022"
},
{
"code": null,
"e": 186,
"s": 54,
"text": "In this tutorial, we are going to learn about using GitHub to host our website, without paying anything at all, using GitHub Pages."
},
{
"code": null,
"e": 263,
"s": 186,
"text": "It’s important that you follow this link as a prerequisite to this tutorial."
},
{
"code": null,
"e": 472,
"s": 263,
"text": "A beginner may not be sure of whether to invest in buying server space, and GitHub gets their work done, for free, in a much clean and elegant manner, using a feature provided by GitHub called “GitHub Pages”."
},
{
"code": null,
"e": 574,
"s": 472,
"text": "Firstly, let us ask all the technologies we are going to use give their quick and sweet introduction."
},
{
"code": null,
"e": 588,
"s": 574,
"text": "Let’s Begin!!"
},
{
"code": null,
"e": 638,
"s": 588,
"text": " Step 1: Create/ Sign into your GitHub account. "
},
{
"code": null,
"e": 922,
"s": 638,
"text": " Step 2 Create a new GitHub Repository Go into your dashboard and find a “New Repository” button. Click on it to create a new repository and name it whatever you want let say I named my repo to “first-repo-gfg” and in description you can give some information about your repository."
},
{
"code": null,
"e": 1434,
"s": 922,
"text": "While creating a new repository, GitHub asks for a variety of details to be filled about the new repository, You need not worry about it at all, just write the name of repo and check Add a README file, it will create a file in your root folder, if you want to know about what is read me file you can check it out here. There will 2 more options there one is for adding .gitignore and choose a license you can ignore these two options for now. Make sure your repo is public and just click on create Repository."
},
{
"code": null,
"e": 1611,
"s": 1436,
"text": "After You have created your Repository it will look something like this you can edit your read me file by clicking on the pencil icon and commit the changes in master branch."
},
{
"code": null,
"e": 1748,
"s": 1613,
"text": "step 3: Now we firstly have to clone this repo on our local system, so for that click on the code and copy the url of this repository."
},
{
"code": null,
"e": 2013,
"s": 1750,
"text": "step 4: Now open git bash or any terminal in system, make sure that git is already installed and configured properly. Now write the command for cloning the repository this will create a folder on your local system with the same name as of your Github Repository."
},
{
"code": null,
"e": 2058,
"s": 2013,
"text": "git clone “url of repo that you have copied“"
},
{
"code": null,
"e": 2108,
"s": 2060,
"text": "Here we go! Your website is now up and running."
},
{
"code": null,
"e": 2291,
"s": 2108,
"text": "Now go inside this folder it will contain only a single file read.md, here you have to create your webpage or a whole website , lets make a simple webpage here and host it on Github."
},
{
"code": null,
"e": 2376,
"s": 2291,
"text": "Make a index.html file and write “Hello World this is my first web page.” like this "
},
{
"code": null,
"e": 2381,
"s": 2376,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html lang=\"en\"><head> <meta charset=\"UTF-8\"> <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\"> <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\"> <title>GFG</title></head><body> <h1>Hello World this is my first web page.</h1></body></html>",
"e": 2676,
"s": 2381,
"text": null
},
{
"code": null,
"e": 2746,
"s": 2680,
"text": "step 5: Now run some basic commands and push this file to Github."
},
{
"code": null,
"e": 2757,
"s": 2746,
"text": "git add -A"
},
{
"code": null,
"e": 2789,
"s": 2757,
"text": "git commit -a -m “first commit”"
},
{
"code": null,
"e": 2812,
"s": 2789,
"text": "git push origin master"
},
{
"code": null,
"e": 3014,
"s": 2814,
"text": "Now the webpage is pushed to Github you can see this index.html file in your repository, now your repository will contain 2 files (readme.md and index.html),now we are good to go to host our webpage."
},
{
"code": null,
"e": 3063,
"s": 3016,
"text": "step 6: Now just follow these steps carefully."
},
{
"code": null,
"e": 3374,
"s": 3063,
"text": "Go to setting and scroll down to Github pagesClick on “check it out here”Click on the dropdown currently showing none and select your branch in our case it is master.Click on save, you can see a url on the top of it in the format of https://<username>.github.io/<repo name>. This is url of your hosted webpage."
},
{
"code": null,
"e": 3420,
"s": 3374,
"text": "Go to setting and scroll down to Github pages"
},
{
"code": null,
"e": 3449,
"s": 3420,
"text": "Click on “check it out here”"
},
{
"code": null,
"e": 3543,
"s": 3449,
"text": "Click on the dropdown currently showing none and select your branch in our case it is master."
},
{
"code": null,
"e": 3688,
"s": 3543,
"text": "Click on save, you can see a url on the top of it in the format of https://<username>.github.io/<repo name>. This is url of your hosted webpage."
},
{
"code": null,
"e": 3767,
"s": 3692,
"text": "Congratulations you have hosted your first web page successfully for free."
},
{
"code": null,
"e": 3890,
"s": 3769,
"text": "Now you can use GitHub to test your HTML5 & CSS3 projects, showcase them to real world, on a real and beautiful website!"
},
{
"code": null,
"e": 3977,
"s": 3890,
"text": "So that’s how GitHub pages work, really easy! You can make unlimited repository sites."
},
{
"code": null,
"e": 3991,
"s": 3981,
"text": "111arpit1"
},
{
"code": null,
"e": 4000,
"s": 3991,
"text": "rkbhola5"
},
{
"code": null,
"e": 4007,
"s": 4000,
"text": "GitHub"
},
{
"code": null,
"e": 4011,
"s": 4007,
"text": "Git"
}
] |
Shapiro–Wilk Test in R Programming
|
16 Jul, 2020
The Shapiro-Wilk’s test or Shapiro test is a normality test in frequentist statistics. The null hypothesis of Shapiro’s test is that the population is distributed normally. It is among the three tests for normality designed for detecting all kinds of departure from normality. If the value of p is equal to or less than 0.05, then the hypothesis of normality will be rejected by the Shapiro test. On failing, the test can state that the data will not fit the distribution normally with 95% confidence. However, on passing, the test can state that there exists no significant departure from normality. This test can be done very easily in R programming.
Suppose a sample, say x1, x2, ..., xn, has come from a normally distributed population. Then the Shapiro-Wilk test assesses its null hypothesis with the statistic

W = ( a1 x(1) + a2 x(2) + ... + an x(n) )^2 / ( (x1 - mean(x))^2 + ... + (xn - mean(x))^2 )

where,
x(i) : it is the ith smallest number in the given sample.
mean(x) : (x1 + x2 + ... + xn) / n, i.e. the sample mean.
ai : coefficients that can be calculated as (a1, a2, ..., an) = (m^T V^-1) / C. Here V is the covariance matrix of the order statistics, m = (m1, m2, ..., mn) is the vector of their expected values, and C is the vector norm C = || V^-1 m ||.
To perform the Shapiro Wilk Test, R provides shapiro.test() function.
Syntax:
shapiro.test(x)
Parameter:
x : a numeric vector containing the data values. It allows missing values but the number of missing values should be of the range 3 to 5000.
Let us see how to perform the Shapiro Wilk’s test step by step.
Step 1: At first install the required package. The package required to perform the test is dplyr, which is needed for efficient data manipulation. One can install it from the R console in the following way:
install.packages("dplyr")
Step 2: Now load the installed packages into the R Script. It can be done by using the library() function in the following way.
R
# loading the package
library(dplyr)
Step 3: The most important task is to select a proper data set. Here let’s work with the ToothGrowth data set. It is an in-built data set in the R library.
R
# loading the package
library("dplyr")

# Using the ToothGrowth data set
# loading the data set
my_data <- ToothGrowth
One can also create their own data set. For that, first prepare the data, save the file, and then import the data set into the script. The file can be imported using the following syntax:
data <- read.delim(file.choose()), if the format of the file is .txt
data <- read.csv(file.choose()), if the format of the file is .csv
Step 4: Now set a seed for random number generation using the set.seed() function. Then we display an output sample of 10 rows chosen randomly using the sample_n() function of the dplyr package. This is how we check our data.
R
# loading the package
library("dplyr")

# Using the ToothGrowth data set
# loading the data set
my_data <- ToothGrowth

# Using the set.seed() for
# random number generation
set.seed(1234)

# Using the sample_n() for
# random sample of 10 rows
dplyr::sample_n(my_data, 10)
Output:
    len supp dose
1  11.2   VC  0.5
2   8.2   OJ  0.5
3  10.0   OJ  0.5
4  27.3   OJ  2.0
5  14.5   OJ  1.0
6  26.4   OJ  2.0
7   4.2   VC  0.5
8  15.2   VC  1.0
9  14.5   OJ  0.5
10 26.7   VC  2.0
Step 5: At last perform the Shapiro Wilk’s test using the shapiro.test() function.
R
# loading the package
library("dplyr")

# Using the ToothGrowth data set
# loading the data set
my_data <- ToothGrowth

# Using the set.seed()
# for random number generation
set.seed(1234)

# Using the sample_n()
# for random sample of 10 rows
dplyr::sample_n(my_data, 10)

# Using the shapiro.test() to check
# for normality based
# on the len parameter
shapiro.test(my_data$len)
Output:
> dplyr::sample_n(my_data, 10)
    len supp dose
1  11.2   VC  0.5
2   8.2   OJ  0.5
3  10.0   OJ  0.5
4  27.3   OJ  2.0
5  14.5   OJ  1.0
6  26.4   OJ  2.0
7   4.2   VC  0.5
8  15.2   VC  1.0
9  14.5   OJ  0.5
10 26.7   VC  2.0
> shapiro.test(my_data$len)

        Shapiro-Wilk normality test

data:  my_data$len
W = 0.96743, p-value = 0.1091
From the output obtained we can assume normality: the p-value is greater than 0.05. Hence, the distribution of the given data is not significantly different from a normal distribution.
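The same test exists outside R as well; for example, Python's SciPy exposes it as `scipy.stats.shapiro`. The sketch below assumes NumPy and SciPy are installed and uses seeded synthetic data, not the ToothGrowth data set, so the W and p values will differ from the R output above.

```python
import numpy as np
from scipy import stats

# Reproducible sample from a normal population, mirroring
# set.seed() + random sampling in the R workflow.
rng = np.random.default_rng(1234)
x = rng.normal(loc=18.8, scale=7.6, size=60)

result = stats.shapiro(x)
print("W =", round(float(result.statistic), 5),
      "p-value =", round(float(result.pvalue), 5))
# A p-value above 0.05 means no significant departure from normality.
```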
data-science
Picked
Machine Learning
R Language
Machine Learning
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n16 Jul, 2020"
},
{
"code": null,
"e": 681,
"s": 28,
"text": "The Shapiro-Wilk’s test or Shapiro test is a normality test in frequentist statistics. The null hypothesis of Shapiro’s test is that the population is distributed normally. It is among the three tests for normality designed for detecting all kinds of departure from normality. If the value of p is equal to or less than 0.05, then the hypothesis of normality will be rejected by the Shapiro test. On failing, the test can state that the data will not fit the distribution normally with 95% confidence. However, on passing, the test can state that there exists no significant departure from normality. This test can be done very easily in R programming."
},
{
"code": null,
"e": 833,
"s": 681,
"text": "Suppose a sample, say x1,x2.......xn, has come from a normally distributed population. Then according to the Shapiro-Wilk’s tests null hypothesis test"
},
{
"code": null,
"e": 840,
"s": 833,
"text": "where,"
},
{
"code": null,
"e": 898,
"s": 840,
"text": "x(i) : it is the ith smallest number in the given sample."
},
{
"code": null,
"e": 952,
"s": 898,
"text": "mean(x) : ( x1+x2+......+xn) / n i.e the sample mean."
},
{
"code": null,
"e": 1159,
"s": 952,
"text": "ai : coefficient that can be calculated as (a1,a2,....,an) = (mT V-1)/C . Here V is the covariance matrix, m and C are the vector norms that can be calculated as C= || V-1 m || and m = (m1, m2,......, mn )."
},
{
"code": null,
"e": 1230,
"s": 1159,
"text": "To perform the Shapiro Wilk Test, R provides shapiro.test() function. "
},
{
"code": null,
"e": 1238,
"s": 1230,
"text": "Syntax:"
},
{
"code": null,
"e": 1254,
"s": 1238,
"text": "shapiro.test(x)"
},
{
"code": null,
"e": 1265,
"s": 1254,
"text": "Parameter:"
},
{
"code": null,
"e": 1407,
"s": 1265,
"text": "x : a numeric vector containing the data values. It allows missing values but the number of missing values should be of the range 3 to 5000. "
},
{
"code": null,
"e": 1471,
"s": 1407,
"text": "Let us see how to perform the Shapiro Wilk’s test step by step."
},
{
"code": null,
"e": 1716,
"s": 1471,
"text": "Step 1: At first install the required packages. The two packages that are required to perform the test are dplyr. The dplyr package is needed for efficient data manipulation. One can install the packages from the R console in the following way:"
},
{
"code": null,
"e": 1743,
"s": 1716,
"text": "install.packages(\"dplyr\")\n"
},
{
"code": null,
"e": 1871,
"s": 1743,
"text": "Step 2: Now load the installed packages into the R Script. It can be done by using the library() function in the following way."
},
{
"code": null,
"e": 1873,
"s": 1871,
"text": "R"
},
{
"code": "# loading the packagelibrary(dplyr)",
"e": 1909,
"s": 1873,
"text": null
},
{
"code": null,
"e": 2065,
"s": 1909,
"text": "Step 3: The most important task is to select a proper data set. Here let’s work with the ToothGrowth data set. It is an in-built data set in the R library."
},
{
"code": null,
"e": 2067,
"s": 2065,
"text": "R"
},
{
"code": "# loading the packagelibrary(\"dplyr\") # Using the ToothGrowth data set# loading the data setmy_data <- ToothGrowth",
"e": 2183,
"s": 2067,
"text": null
},
{
"code": null,
"e": 2370,
"s": 2183,
"text": "One can also create their own data set. For that first prepare the data, then save the file and then import the data set into the script. The file can include using the following syntax:"
},
{
"code": null,
"e": 2507,
"s": 2370,
"text": "data <- read.delim(file.choose()) ,if the format of the file is .txt\ndata <- read.csv(file.choose()), if the format of the file is .csv "
},
{
"code": null,
"e": 2736,
"s": 2507,
"text": "Step 4: Now select a random number using the set.seed() function. Following which we start displaying an output sample of 10 rows chosen randomly using the sample_n() function of the dplyr package. This is how we check our data."
},
{
"code": null,
"e": 2738,
"s": 2736,
"text": "R"
},
{
"code": "# loading the packagelibrary(\"dplyr\") # Using the ToothGrowth package# loading the data setmy_data <- ToothGrowth # Using the set.seed() for # random number generationset.seed(1234) # Using the sample_n() for # random sample of 10 rowsdplyr::sample_n(my_data, 10)",
"e": 3005,
"s": 2738,
"text": null
},
{
"code": null,
"e": 3013,
"s": 3005,
"text": "Output:"
},
{
"code": null,
"e": 3211,
"s": 3013,
"text": " len supp dose\n1 11.2 VC 0.5\n2 8.2 OJ 0.5\n3 10.0 OJ 0.5\n4 27.3 OJ 2.0\n5 14.5 OJ 1.0\n6 26.4 OJ 2.0\n7 4.2 VC 0.5\n8 15.2 VC 1.0\n9 14.5 OJ 0.5\n10 26.7 VC 2.0\n"
},
{
"code": null,
"e": 3294,
"s": 3211,
"text": "Step 5: At last perform the Shapiro Wilk’s test using the shapiro.test() function."
},
{
"code": null,
"e": 3296,
"s": 3294,
"text": "R"
},
{
"code": "# loading the packagelibrary(\"dplyr\") # Using the ToothGrowth package# loading the data setmy_data <- ToothGrowth # Using the set.seed() # for random number generationset.seed(1234) # Using the sample_n() # for random sample of 10 rowsdplyr::sample_n(my_data, 10) # Using the shapiro.test() to check# for normality based # on the len parametershapiro.test(my_data$len)",
"e": 3669,
"s": 3296,
"text": null
},
{
"code": null,
"e": 3677,
"s": 3669,
"text": "Output:"
},
{
"code": null,
"e": 4018,
"s": 3677,
"text": "> dplyr::sample_n(my_data, 10)\n len supp dose\n1 11.2 VC 0.5\n2 8.2 OJ 0.5\n3 10.0 OJ 0.5\n4 27.3 OJ 2.0\n5 14.5 OJ 1.0\n6 26.4 OJ 2.0\n7 4.2 VC 0.5\n8 15.2 VC 1.0\n9 14.5 OJ 0.5\n10 26.7 VC 2.0\n> shapiro.test(my_data$len)\n\n Shapiro-Wilk normality test\n\ndata: my_data$len\nW = 0.96743, p-value = 0.1091\n"
},
{
"code": null,
"e": 4201,
"s": 4018,
"text": "From the output obtained we can assume normality. The p-value is greater than 0.05. Hence, the distribution of the given data is not different from normal distribution significantly."
},
{
"code": null,
"e": 4214,
"s": 4201,
"text": "data-science"
},
{
"code": null,
"e": 4221,
"s": 4214,
"text": "Picked"
},
{
"code": null,
"e": 4238,
"s": 4221,
"text": "Machine Learning"
},
{
"code": null,
"e": 4249,
"s": 4238,
"text": "R Language"
},
{
"code": null,
"e": 4266,
"s": 4249,
"text": "Machine Learning"
}
] |
Handling 404 Error in Flask
|
15 Oct, 2020
Prerequisite: Creating simple application in Flask
A 404 error is shown whenever a page is not found. Maybe the owner changed its URL and forgot to change the link, or maybe they deleted the page itself. Every site needs a custom error page to prevent the user from seeing the default ugly error page.
GeeksforGeeks also has a customized error page. If we type a URL like www.geeksforgeeks.org/ajneawnewiaiowjf
Default 404 Error
GeeksForGeeks Customized Error Page
It will show an Error 404 page since this URL doesn’t exist. But an error page provides a beautiful layout, helps the user to go back, or even takes them to the homepage after a specific time interval. That is why Custom Error pages are necessary for every website.
Flask provides us with a way to handle the error and return our Custom Error page.
For this, we need to download and import Flask. Download Flask with the following command on CMD.
pip install flask
We will use app.py as our Python file to manage templates, 404.html as the file we will return in the case of a 404 error, and header.html as the file with the header and navbar of the website.
app.py

Flask allows us to make a Python file to define all routes and functions. In app.py we have defined the route to the main page (‘/’) and the error handler function, which is a Flask function to which we passed the 404 error as a parameter.
from flask import Flask, render_template

app = Flask(__name__)  # app name

@app.route('/')  # route to the main page, used by url_for('index')
def index():
    return render_template("index.html")

@app.errorhandler(404)  # inbuilt function which takes error as parameter
def not_found(e):  # defining function
    return render_template("404.html")
The above Python program will return the 404.html file whenever the user opens a broken link.
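A quick way to check such a handler without a browser is Flask's built-in test client. This is a minimal sketch, not the article's code: it returns a plain string with an explicit 404 status instead of rendering 404.html, so it runs without a templates folder.

```python
from flask import Flask

app = Flask(__name__)

@app.errorhandler(404)
def not_found(e):
    # Plain string body; the second tuple element keeps the real 404 status.
    return "Oops! Looks like the page doesn't exist anymore", 404

with app.test_client() as client:
    resp = client.get("/no-such-page")
    print(resp.status_code)  # 404
    print(resp.get_data(as_text=True))
```

Any unknown path triggers the handler, so `client.get()` on a made-up URL is enough to exercise it.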
404.html

The following code exports the header and navbar from header.html. Both files should be stored in the templates folder according to Flask.
{% extends "header.html" %}
<!-- Exports header and navbar from header.html or any file you want-->
{% block title %}Page Not Found{% endblock %}
{% block body %}
    <h1>Oops! Looks like the page doesn't exist anymore</h1>
    <p><a href="{{ url_for('index') }}">Click Here</a> to go to the Home Page</p>
    <!-- {{ url_for('index') }} returns the url of the index route -->
{% endblock %}
The app.py code for this example stays the same as above. The following code shows the custom 404 error page and starts a countdown of 5 seconds. After the 5 seconds are completed, it redirects the user back to the homepage.

404.html

The following code exports the header and navbar from header.html. Both files should be stored in the templates folder according to Flask. After 5 seconds, the user will get redirected to the Home Page automatically.
<html>
<head>
<title>Page Not Found</title>
<script language="JavaScript" type="text/javascript">
    // countdown timer; took 6 because the page takes approx 1 sec to load
    var seconds = 6;
    // variable for the index page url
    var url = "{{ url_for('index') }}";
    function redirect(){
        if (seconds <= 0){
            // redirect to the new url after the countdown
            window.location = url;
        } else {
            seconds--;
            document.getElementById("pageInfo").innerHTML =
                "Redirecting to Home Page after " + seconds + " seconds.";
            setTimeout("redirect()", 1000);
        }
    }
</script>
</head>
{% extends "header.html" %}
<!-- exporting navbar and header from header.html -->
{% block body %}
<body onload="redirect()"><p id="pageInfo"></p>
{% endblock %}
</body>
</html>
Sample header.html

This is a sample header.html which includes a navbar just like the one shown in the image. It’s made with Bootstrap; you can also make one of your own. For this one, refer to the Bootstrap documentation.
<!DOCTYPE html>
<html>
<head>
    <!-- LINKING ALL SCRIPTS/CSS REQUIRED FOR NAVBAR -->
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
    <title>Flask</title>
</head>
<body>
<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous"></script>
<header>
    <!-- Starting header -->
    <nav class="navbar navbar-expand-lg navbar-light bg-light">
        <a class="navbar-brand" href="#">Navbar</a> <!-- bootstrap classes for navbar -->
        <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
            <span class="navbar-toggler-icon"></span>
        </button>
        <div class="collapse navbar-collapse" id="navbarSupportedContent">
            <ul class="navbar-nav mr-auto">
                <li class="nav-item active">
                    <a class="nav-link" href="#">Home <span class="sr-only">(current)</span></a>
                </li>
                <li class="nav-item">
                    <a class="nav-link" href="#">Link</a>
                </li>
                <li class="nav-item dropdown">
                    <a class="nav-link dropdown-toggle" href="#" id="navbarDropdown" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
                        Dropdown
                    </a>
                    <div class="dropdown-menu" aria-labelledby="navbarDropdown">
                        <a class="dropdown-item" href="#">Action</a>
                        <a class="dropdown-item" href="#">Another action</a>
                        <div class="dropdown-divider"></div>
                        <a class="dropdown-item" href="#">Something else here</a>
                    </div>
                </li>
                <li class="nav-item">
                    <a class="nav-link disabled" href="#">Disabled</a>
                </li>
            </ul>
            <form class="form-inline my-2 my-lg-0">
                <input class="form-control mr-sm-2" type="search" placeholder="Search" aria-label="Search">
                <button class="btn btn-outline-success my-2 my-sm-0" type="submit">Search</button>
            </form>
        </div>
    </nav>
</header>
{% block body %}
{% endblock %}
</body>
</html>
Output:

The output will be a custom error page with the header.html that the user exported. The following is an example output with my custom header, footer, and 404.html file.
Python
Web Technologies
|
[
{
"code": null,
"e": 54,
"s": 26,
"text": "\n15 Oct, 2020"
},
{
"code": null,
"e": 105,
"s": 54,
"text": "Prerequisite: Creating simple application in Flask"
},
{
"code": null,
"e": 349,
"s": 105,
"text": "A 404 Error is showed whenever a page is not found. Maybe the owner changed its URL and forgot to change the link or maybe they deleted the page itself. Every site needs a Custom Error page to avoid the user to see the default Ugly Error page."
},
{
"code": null,
"e": 457,
"s": 349,
"text": "GeeksforGeeks also has a customized error page. If we type a URL likewww.geeksforgeeks.org/ajneawnewiaiowjf"
},
{
"code": null,
"e": 475,
"s": 457,
"text": "Default 404 Error"
},
{
"code": null,
"e": 511,
"s": 475,
"text": "GeeksForGeeks Customized Error Page"
},
{
"code": null,
"e": 777,
"s": 511,
"text": "It will show an Error 404 page since this URL doesn’t exist. But an error page provides a beautiful layout, helps the user to go back, or even takes them to the homepage after a specific time interval. That is why Custom Error pages are necessary for every website."
},
{
"code": null,
"e": 860,
"s": 777,
"text": "Flask provides us with a way to handle the error and return our Custom Error page."
},
{
"code": null,
"e": 966,
"s": 860,
"text": "For this, we need to download and import flask. Download the flask through the following commands on CMD."
},
{
"code": null,
"e": 985,
"s": 966,
"text": "pip install flask\n"
},
{
"code": null,
"e": 1166,
"s": 985,
"text": "Using app.py as our Python file to manage templates, 404.html be the file we will return in the case of a 404 error and header.html be the file with header and navbar of a website."
},
{
"code": null,
"e": 1398,
"s": 1166,
"text": "app.pyFlask allows us to make a python file to define all routes and functions. In app.py we have defined the route to the main page (‘/’) and error handler function which is a flask function and we passed 404 error as a parameter."
},
{
"code": "from flask import Flask, render_template app = Flask(__name__) # app [email protected](404) # inbuilt function which takes error as parameterdef not_found(e): # defining function return render_template(\"404.html\")",
"e": 1621,
"s": 1398,
"text": null
},
{
"code": null,
"e": 1711,
"s": 1621,
"text": "The above python program will return 404.html file whenever the user opens a broken link."
},
{
"code": null,
"e": 1853,
"s": 1711,
"text": "404.htmlThe following code exports header and navbar from header.html.Both files should be stored in templates folder according to the flask."
},
{
"code": "{% extends \"header.html\" %}<!-- Exports header and navbar from header.html or any file you want-->{% block title %}Page Not Found{% endblock %}{% block body %} <h1>Oops! Looks like the page doesn't exist anymore</h1> <a href=\"{{ url_for('index') }}\"><p>Click Here</a>To go to the Home Page</p> <!-- {{ url_for('index') }} is a var which returns url of index.html-->{% endblock %}",
"e": 2242,
"s": 1853,
"text": null
},
{
"code": null,
"e": 2683,
"s": 2242,
"text": "The app.py code for this example stays the same as above.The following code Shows the Custom 404 Error page and starts a countdown of 5 seconds.After 5 seconds are completed, it redirects the user back to the homepage.404.htmlThe following code exports header and navbar from header.html.Both files should be stored in the templates folder according to the flask.After 5 seconds, the user will get redirected to the Home Page Automatically."
},
{
"code": "<html><head><title>Page Not Found</title><script language=\"JavaScript\" type=\"text/javascript\"> var seconds =6;// countdown timer. took 6 because page takes approx 1 sec to load var url=\"{{url_for(index)}}\";// variable for index.html url function redirect(){ if (seconds <=0){ // redirect to new url after counter down. window.location = url; } else { seconds--; document.getElementById(\"pageInfo\").innerHTML=\"Redirecting to Home Page after \"+seconds+\" seconds.\" setTimeout(\"redirect()\", 1000) }}</script></head> {% extends \"header.html\" %}//exporting navbar and header from header.html{% block body %} <body onload=\"redirect()\"><p id=\"pageInfo\"></p>{% endblock %} </html>",
"e": 3372,
"s": 2683,
"text": null
},
{
"code": null,
"e": 3582,
"s": 3372,
"text": "Sample header.htmlThis is a sample header.html which includes a navbar just like shown in the image.It’s made up of bootstrap. You can also make one of your own.For this one, refer the bootstrap documentation."
},
{
"code": "<!DOCTYPE html><html><head> <!-- LINKING ALL SCRIPTS/CSS REQUIRED FOR NAVBAR --> <link rel=\"stylesheet\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css\" integrity=\"sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E26 3XmFcJlSAwiGgFAW/dAiS6JXm\" crossorigin=\"anonymous\"> <title>Flask</title></head><body><script src=\"https://code.jquery.com/jquery-3.2.1.slim.min.js\" integrity=\"sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN\" crossorigin=\"anonymous\"></script> <script src=\"https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js\" integrity=\"sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q\" crossorigin=\"anonymous\"></script> <script src=\"https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js\" integrity=\"sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl\" crossorigin=\"anonymous\"></script> <header> <!-- Starting header --> <nav class=\"navbar navbar-expand-lg navbar-light bg-light\"> <a class=\"navbar-brand\" href=\"#\">Navbar</a> <!-- bootstrap classes for navbar --> <button class=\"navbar-toggler\" type=\"button\" data-toggle=\"collapse\" data-target=\"#navbarSupportedContent\" aria-controls=\"navbarSupportedContent\" aria-expanded=\"false\" aria-label=\"Toggle navigation\"> <span class=\"navbar-toggler-icon\"></span> </button> <div class=\"collapse navbar-collapse\" id=\"navbarSupportedContent\"> <ul class=\"navbar-nav mr-auto\"> <li class=\"nav-item active\"> <a class=\"nav-link\" href=\"#\">Home <span class=\"sr-only\">(current)</span></a> </li> <li class=\"nav-item\"> <a class=\"nav-link\" href=\"#\">Link</a> </li> <li class=\"nav-item dropdown\"> <a class=\"nav-link dropdown-toggle\" href=\"#\" id=\"navbarDropdown\" role=\"button data-toggle=\"dropdown\" aria-haspopup=\"true\" aria-expanded=\"false\"> Dropdown </a> <div class=\"dropdown-menu\" aria-labelledby=\"navbarDropdown\"> <a class=\"dropdown-item\" 
href=\"#\">Action</a> <a class=\"dropdown-item\" href=\"#\">Another action</a> <div class=\"dropdown-divider\"></div> <a class=\"dropdown-item\" href=\"#\">Something else here</a> </div> </li> <li class=\"nav-item\"> <a class=\"nav-link disabled\" href=\"#\">Disabled</a> </li> </ul> <form class=\"form-inline my-2 my-lg-0\"> <input class=\"form-control mr-sm-2\" type=\"search\" placeholder=\"Search\" aria-label=\"Search\"> <button class=\"btn btn-outline-success my-2 my-sm-0\" type=\"submit\">Search</button> </form> </div></nav></head> <body > {%block body%} {%endblock%} </body></html>",
"e": 6244,
"s": 3582,
"text": null
},
{
"code": null,
"e": 6415,
"s": 6244,
"text": "Output:The output will be a custom error page with header.html that the user exported.The following is an example output with my custom header, footer, and 404.html file."
},
{
"code": null,
"e": 6422,
"s": 6415,
"text": "Python"
},
{
"code": null,
"e": 6439,
"s": 6422,
"text": "Web Technologies"
},
{
"code": null,
"e": 6537,
"s": 6439,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 6555,
"s": 6537,
"text": "Python Dictionary"
},
{
"code": null,
"e": 6597,
"s": 6555,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 6619,
"s": 6597,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 6645,
"s": 6619,
"text": "Python String | replace()"
},
{
"code": null,
"e": 6677,
"s": 6645,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 6710,
"s": 6677,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 6772,
"s": 6710,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 6833,
"s": 6772,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 6883,
"s": 6833,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
Java SQL Timestamp getTime() function with examples
|
06 Mar, 2019
The getTime() function is a part of the Timestamp class of Java SQL. The function is used to get the time of the Timestamp object. It returns the time in milliseconds elapsed since 1st January 1970 (the Unix epoch).
Function Signature:
public long getTime()
Syntax:
ts1.getTime();
Parameters: The function does not require any parameter.
Return value: The function returns a long value representing the time in milliseconds.
Exception: The function does not throw any exceptions.
The examples below illustrate the use of the getTime() function.
Example 1: Create a timestamp and use getTime() to get the time of the timestamp object.
// Java program to demonstrate the// use of getTime() function import java.sql.*; class GFG { public static void main(String args[]) { // Create two timestamp objects Timestamp ts = new Timestamp(10000); // Display the timestamp object System.out.println("Timestamp time : " + ts.toString()); System.out.println("Time in milliseconds : " + ts.getTime()); }}
Timestamp time : 1970-01-01 00:00:10.0
Time in milliseconds : 10000
Example 2: Create a timestamp and use getTime() to get the time of the timestamp object, with a time set before 1st January 1970. A negative long value represents a time before 1st January 1970.
// Java program to demonstrate the// use of getTime() function import java.sql.*; public class solution { public static void main(String args[]) { // Create two timestamp objects Timestamp ts = new Timestamp(-10000); // Display the timestamp object System.out.println("Timestamp time : " + ts.toString()); System.out.println("Time in milliseconds : " + ts.getTime()); }}
Timestamp time : 1969-12-31 23:59:50.0
Time in milliseconds : -10000
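Because getTime() returns plain milliseconds since the epoch, subtracting the values of two timestamps yields the elapsed duration directly. The sketch below illustrates this; the timestamp values are made up for the example and are not from the article.

```java
import java.sql.Timestamp;

public class ElapsedTime {
    public static void main(String[] args) {
        // Two illustrative timestamps, 15 seconds apart
        Timestamp start = new Timestamp(10000);
        Timestamp end = new Timestamp(25000);

        // getTime() returns milliseconds since 1st January 1970,
        // so a simple subtraction gives the elapsed time
        long elapsedMs = end.getTime() - start.getTime();
        System.out.println("Elapsed ms: " + elapsedMs); // prints 15000
    }
}
```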
Reference: https://docs.oracle.com/javase/7/docs/api/java/sql/Timestamp.html
Java - util package
Java-Functions
Java-Sql package
Java
Java
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Object Oriented Programming (OOPs) Concept in Java
How to iterate any Map in Java
Interfaces in Java
HashMap in Java with Examples
Stream In Java
ArrayList in Java
Collections in Java
Singleton Class in Java
Multidimensional Arrays in Java
Set in Java
|
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n06 Mar, 2019"
},
{
"code": null,
"e": 262,
"s": 28,
"text": "The getTime() function is a part of Timestamp class of Java SQL.The function is used to get the time of the Timestamp object. The function returns time in milliseconds which represents the time in milliseconds after 1st January 1970."
},
{
"code": null,
"e": 282,
"s": 262,
"text": "Function Signature:"
},
{
"code": null,
"e": 304,
"s": 282,
"text": "public long getTime()"
},
{
"code": null,
"e": 312,
"s": 304,
"text": "Syntax:"
},
{
"code": null,
"e": 327,
"s": 312,
"text": "ts1.getTime();"
},
{
"code": null,
"e": 384,
"s": 327,
"text": "Parameters: The function does not require any parameter."
},
{
"code": null,
"e": 465,
"s": 384,
"text": "Return value: The function returns long value representing time in milliseconds."
},
{
"code": null,
"e": 520,
"s": 465,
"text": "Exception: The function does not throw any exceptions."
},
{
"code": null,
"e": 576,
"s": 520,
"text": "Below examples illustrate the use of getTime() function"
},
{
"code": null,
"e": 665,
"s": 576,
"text": "Example 1: Create a timestamp and use the getTime() to get the time of timestamp object."
},
{
"code": "// Java program to demonstrate the// use of getTime() function import java.sql.*; class GFG { public static void main(String args[]) { // Create two timestamp objects Timestamp ts = new Timestamp(10000); // Display the timestamp object System.out.println(\"Timestamp time : \" + ts.toString()); System.out.println(\"Time in milliseconds : \" + ts.getTime()); }}",
"e": 1123,
"s": 665,
"text": null
},
{
"code": null,
"e": 1192,
"s": 1123,
"text": "Timestamp time : 1970-01-01 00:00:10.0\nTime in milliseconds : 10000\n"
},
{
"code": null,
"e": 1390,
"s": 1192,
"text": "Example 2: Create a timestamp and use the getTime() to get the time of timestamp object and set the time before 1st January 1970. The negative long value represents the time before 1st January 1970"
},
{
"code": "// Java program to demonstrate the// use of getTime() function import java.sql.*; public class solution { public static void main(String args[]) { // Create two timestamp objects Timestamp ts = new Timestamp(-10000); // Display the timestamp object System.out.println(\"Timestamp time : \" + ts.toString()); System.out.println(\"Time in milliseconds : \" + ts.getTime()); }}",
"e": 1865,
"s": 1390,
"text": null
},
{
"code": null,
"e": 1935,
"s": 1865,
"text": "Timestamp time : 1969-12-31 23:59:50.0\nTime in milliseconds : -10000\n"
},
{
"code": null,
"e": 2013,
"s": 1935,
"text": "Reference: https:// docs.oracle.com/javase/7/docs/api/java/sql/Timestamp.html"
},
{
"code": null,
"e": 2033,
"s": 2013,
"text": "Java - util package"
},
{
"code": null,
"e": 2048,
"s": 2033,
"text": "Java-Functions"
},
{
"code": null,
"e": 2065,
"s": 2048,
"text": "Java-Sql package"
},
{
"code": null,
"e": 2070,
"s": 2065,
"text": "Java"
},
{
"code": null,
"e": 2075,
"s": 2070,
"text": "Java"
},
{
"code": null,
"e": 2173,
"s": 2075,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2224,
"s": 2173,
"text": "Object Oriented Programming (OOPs) Concept in Java"
},
{
"code": null,
"e": 2255,
"s": 2224,
"text": "How to iterate any Map in Java"
},
{
"code": null,
"e": 2274,
"s": 2255,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 2304,
"s": 2274,
"text": "HashMap in Java with Examples"
},
{
"code": null,
"e": 2319,
"s": 2304,
"text": "Stream In Java"
},
{
"code": null,
"e": 2337,
"s": 2319,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 2357,
"s": 2337,
"text": "Collections in Java"
},
{
"code": null,
"e": 2381,
"s": 2357,
"text": "Singleton Class in Java"
},
{
"code": null,
"e": 2413,
"s": 2381,
"text": "Multidimensional Arrays in Java"
}
] |
Program to accept Strings starting with a Vowel
|
20 May, 2021
Given a string str consisting of alphabets, the task is to check whether the given string starts with a vowel or not.
Examples:
Input: str = "Animal"
Output: Accepted
Input: str = "GeeksforGeeks"
Output: Not Accepted
Approach:
Find the first character of the string
Check if the first character of the string is a vowel or not
If yes, print Accepted
Else print Not Accepted
Below is the implementation of the above approach:
CPP
Java
Python3
C#
Javascript
// C++ program to accept String// starting with Vowel #include <iostream>using namespace std; // Function to check if first character is vowelint checkIfStartsWithVowels(string str){ if (!(str[0] == 'A' || str[0] == 'a' || str[0] == 'E' || str[0] == 'e' || str[0] == 'I' || str[0] == 'i' || str[0] == 'O' || str[0] == 'o' || str[0] == 'U' || str[0] == 'u')) return 1; else return 0;} // Function to checkvoid check(string str){ if (checkIfStartsWithVowels(str)) cout << "Not Accepted\n"; else cout << "Accepted\n";} // Driver functionint main(){ string str = "animal"; check(str); str = "zebra"; check(str); return 0;}
// Java program to accept String// starting with Vowelclass GFG{ // Function to check if first character is vowelstatic int checkIfStartsWithVowels(char []str){ if (!(str[0] == 'A' || str[0] == 'a' || str[0] == 'E' || str[0] == 'e' || str[0] == 'I' || str[0] == 'i' || str[0] == 'O' || str[0] == 'o' || str[0] == 'U' || str[0] == 'u')) return 1; else return 0;} // Function to checkstatic void check(String str){ if (checkIfStartsWithVowels(str.toCharArray()) == 1) System.out.print("Not Accepted\n"); else System.out.print("Accepted\n");} // Driver codepublic static void main(String[] args){ String str = "animal"; check(str); str = "zebra"; check(str);}} // This code is contributed by PrinciRaj1992
# Python3 program to accept String# starting with Vowel # Function to check if first character is voweldef checkIfStartsWithVowels(string) : if (not(string[0] == 'A' or string[0] == 'a' or string[0] == 'E' or string[0] == 'e' or string[0] == 'I' or string[0] == 'i' or string[0] == 'O' or string[0] == 'o' or string[0] == 'U' or string[0] == 'u')) : return 1; else : return 0; # Function to checkdef check(string) : if (checkIfStartsWithVowels(string)) : print("Not Accepted"); else : print("Accepted"); # Driver functionif __name__ == "__main__" : string = "animal"; check(string); string = "zebra"; check(string); # This code is contributed by AnkitRai01
// C# program to accept String// starting with Vowelusing System; class GFG{ // Function to check if first character is vowelstatic int checkIfStartsWithVowels(char []str){ if (!(str[0] == 'A' || str[0] == 'a' || str[0] == 'E' || str[0] == 'e' || str[0] == 'I' || str[0] == 'i' || str[0] == 'O' || str[0] == 'o' || str[0] == 'U' || str[0] == 'u')) return 1; else return 0;} // Function to checkstatic void check(String str){ if (checkIfStartsWithVowels(str.ToCharArray()) == 1) Console.Write("Not Accepted\n"); else Console.Write("Accepted\n");} // Driver codepublic static void Main(String[] args){ String str = "animal"; check(str); str = "zebra"; check(str);}} // This code is contributed by PrinciRaj1992
<script>// Javascript program to accept String// starting with Vowel // Function to check if first character is vowelfunction checkIfStartsWithVowels(str){ if (!(str[0] == 'A' || str[0] == 'a' || str[0] == 'E' || str[0] == 'e' || str[0] == 'I' || str[0] == 'i' || str[0] == 'O' || str[0] == 'o' || str[0] == 'U' || str[0] == 'u')) return 1; else return 0;} // Function to checkfunction check(str){ if (checkIfStartsWithVowels(str)) document.write( "Not Accepted<br>"); else document.write("Accepted<br>");} var str = "animal";check(str); str = "zebra";check(str); // This code is contributed by SoumikMondal</script>
Accepted
Not Accepted
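The chain of per-character comparisons above can also be written as a single lookup in a vowel string. This is an alternative sketch, not part of the original article; the class and method names are illustrative.

```java
public class VowelCheck {
    // Compact alternative: look the first character up (case-insensitively)
    // in a string containing all the vowels.
    static boolean startsWithVowel(String s) {
        return !s.isEmpty()
            && "aeiou".indexOf(Character.toLowerCase(s.charAt(0))) >= 0;
    }

    public static void main(String[] args) {
        System.out.println(startsWithVowel("animal") ? "Accepted" : "Not Accepted");
        System.out.println(startsWithVowel("zebra") ? "Accepted" : "Not Accepted");
    }
}
```

The output matches the article's examples: "animal" is accepted, "zebra" is not.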
princiraj1992
ankthon
SoumikMondal
School Programming
Strings
Strings
|
[
{
"code": null,
"e": 54,
"s": 26,
"text": "\n20 May, 2021"
},
{
"code": null,
"e": 175,
"s": 54,
"text": "Given string str consisting of alphabets, the task is to check whether the given string is starting with a Vowel or Not."
},
{
"code": null,
"e": 186,
"s": 175,
"text": "Examples: "
},
{
"code": null,
"e": 277,
"s": 186,
"text": "Input: str = \"Animal\"\nOutput: Accepted\n\nInput: str = \"GeeksforGeeks\"\nOutput: Not Accepted "
},
{
"code": null,
"e": 289,
"s": 277,
"text": "Approach: "
},
{
"code": null,
"e": 328,
"s": 289,
"text": "Find the first character of the string"
},
{
"code": null,
"e": 389,
"s": 328,
"text": "Check if the first character of the string is a vowel or not"
},
{
"code": null,
"e": 412,
"s": 389,
"text": "If yes, print Accepted"
},
{
"code": null,
"e": 438,
"s": 412,
"text": "Else print Not Accepted "
},
{
"code": null,
"e": 490,
"s": 438,
"text": "Below is the implementation of the above approach: "
},
{
"code": null,
"e": 494,
"s": 490,
"text": "CPP"
},
{
"code": null,
"e": 499,
"s": 494,
"text": "Java"
},
{
"code": null,
"e": 507,
"s": 499,
"text": "Python3"
},
{
"code": null,
"e": 510,
"s": 507,
"text": "C#"
},
{
"code": null,
"e": 521,
"s": 510,
"text": "Javascript"
},
{
"code": "// C++ program to accept String// starting with Vowel #include <iostream>using namespace std; // Function to check if first character is vowelint checkIfStartsWithVowels(string str){ if (!(str[0] == 'A' || str[0] == 'a' || str[0] == 'E' || str[0] == 'e' || str[0] == 'I' || str[0] == 'i' || str[0] == 'O' || str[0] == 'o' || str[0] == 'U' || str[0] == 'u')) return 1; else return 0;} // Function to checkvoid check(string str){ if (checkIfStartsWithVowels(str)) cout << \"Not Accepted\\n\"; else cout << \"Accepted\\n\";} // Driver functionint main(){ string str = \"animal\"; check(str); str = \"zebra\"; check(str); return 0;}",
"e": 1234,
"s": 521,
"text": null
},
{
"code": "// Java program to accept String// starting with Vowelclass GFG{ // Function to check if first character is vowelstatic int checkIfStartsWithVowels(char []str){ if (!(str[0] == 'A' || str[0] == 'a' || str[0] == 'E' || str[0] == 'e' || str[0] == 'I' || str[0] == 'i' || str[0] == 'O' || str[0] == 'o' || str[0] == 'U' || str[0] == 'u')) return 1; else return 0;} // Function to checkstatic void check(String str){ if (checkIfStartsWithVowels(str.toCharArray()) == 1) System.out.print(\"Not Accepted\\n\"); else System.out.print(\"Accepted\\n\");} // Driver codepublic static void main(String[] args){ String str = \"animal\"; check(str); str = \"zebra\"; check(str);}} // This code is contributed by PrinciRaj1992",
"e": 2019,
"s": 1234,
"text": null
},
{
"code": "# Python3 program to accept String# starting with Vowel # Function to check if first character is voweldef checkIfStartsWithVowels(string) : if (not(string[0] == 'A' or string[0] == 'a' or string[0] == 'E' or string[0] == 'e' or string[0] == 'I' or string[0] == 'i' or string[0] == 'O' or string[0] == 'o' or string[0] == 'U' or string[0] == 'u')) : return 1; else : return 0; # Function to checkdef check(string) : if (checkIfStartsWithVowels(string)) : print(\"Not Accepted\"); else : print(\"Accepted\"); # Driver functionif __name__ == \"__main__\" : string = \"animal\"; check(string); string = \"zebra\"; check(string); # This code is contributed by AnkitRai01",
"e": 2760,
"s": 2019,
"text": null
},
{
"code": "// C# program to accept String// starting with Vowelusing System; class GFG{ // Function to check if first character is vowelstatic int checkIfStartsWithVowels(char []str){ if (!(str[0] == 'A' || str[0] == 'a' || str[0] == 'E' || str[0] == 'e' || str[0] == 'I' || str[0] == 'i' || str[0] == 'O' || str[0] == 'o' || str[0] == 'U' || str[0] == 'u')) return 1; else return 0;} // Function to checkstatic void check(String str){ if (checkIfStartsWithVowels(str.ToCharArray()) == 1) Console.Write(\"Not Accepted\\n\"); else Console.Write(\"Accepted\\n\");} // Driver codepublic static void Main(String[] args){ String str = \"animal\"; check(str); str = \"zebra\"; check(str);}} // This code is contributed by PrinciRaj1992",
"e": 3551,
"s": 2760,
"text": null
},
{
"code": "<script>// Javascript program to accept String// starting with Vowel // Function to check if first character is vowelfunction checkIfStartsWithVowels(str){ if (!(str[0] == 'A' || str[0] == 'a' || str[0] == 'E' || str[0] == 'e' || str[0] == 'I' || str[0] == 'i' || str[0] == 'O' || str[0] == 'o' || str[0] == 'U' || str[0] == 'u')) return 1; else return 0;} // Function to checkfunction check(str){ if (checkIfStartsWithVowels(str)) document.write( \"Not Accepted<br>\"); else document.write(\"Accepted<br>\");} var str = \"animal\";check(str); str = \"zebra\";check(str); // This code is contributed by SoumikMondal</script>",
"e": 4246,
"s": 3551,
"text": null
},
{
"code": null,
"e": 4268,
"s": 4246,
"text": "Accepted\nNot Accepted"
},
{
"code": null,
"e": 4284,
"s": 4270,
"text": "princiraj1992"
},
{
"code": null,
"e": 4292,
"s": 4284,
"text": "ankthon"
},
{
"code": null,
"e": 4305,
"s": 4292,
"text": "SoumikMondal"
},
{
"code": null,
"e": 4324,
"s": 4305,
"text": "School Programming"
},
{
"code": null,
"e": 4332,
"s": 4324,
"text": "Strings"
},
{
"code": null,
"e": 4340,
"s": 4332,
"text": "Strings"
}
] |
Print all the combinations of a string in lexicographical order
|
23 Nov, 2021
Given a string str, print all the combinations of the string in lexicographical order. Examples:
Input: str = "ABC"
Output:
A
AB
ABC
AC
ACB
B
BA
BAC
BC
BCA
C
CA
CAB
CB
CBA
Input: ED
Output:
D
DE
E
ED
Approach: Count the occurrences of all the characters in the string using a map, then print all the possible combinations using recursion. Store the characters and their counts in two different arrays. Three arrays are used: input[], which holds the distinct characters; count[], which holds the count of each character; and result[], a temporary array used during recursion to build the combinations. Using recursion and backtracking, all the combinations can be printed. Below is the implementation of the above approach.
C++
Java
Python3
C#
Javascript
// C++ program to find all combinations// of a string in lexicographical order#include <bits/stdc++.h>using namespace std; // function to print stringvoid printResult(char* result, int len){ for (int i = 0; i <= len; i++) cout << result[i]; cout << endl;} // Method to found all combination// of string it is based in treevoid stringCombination(char result[], char str[], int count[], int level, int size, int length){ // return if level is equal size of string if (level == size) return; for (int i = 0; i < length; i++) { // if occurrence of char is 0 then // skip the iteration of loop if (count[i] == 0) continue; // decrease the char occurrence by 1 count[i]--; // store the char in result result[level] = str[i]; // print the string till level printResult(result, level); // call the function from level +1 stringCombination(result, str, count, level + 1, size, length); // backtracking count[i]++; }} void combination(string str){ // declare the map for store // each char with occurrence map<char, int> mp; for (int i = 0; i < str.size(); i++) { if (mp.find(str[i]) != mp.end()) mp[str[i]] = mp[str[i]] + 1; else mp[str[i]] = 1; } // initialize the input array // with all unique char char* input = new char[mp.size()]; // initialize the count array with // occurrence the unique char int* count = new int[mp.size()]; // temporary char array for store the result char* result = new char[str.size()]; map<char, int>::iterator it = mp.begin(); int i = 0; for (it; it != mp.end(); it++) { // store the element of input array input[i] = it->first; // store the element of count array count[i] = it->second; i++; } // size of map(no of unique char) int length = mp.size(); // size of original string int size = str.size(); // call function for print string combination stringCombination(result, input, count, 0, size, length);} // Driver codeint main(){ string str = "ABC"; combination(str); return 0;}
// Java program to find all combinations// of a string in lexicographical orderimport java.util.HashMap; class GFG{ // function to print string static void printResult(char[] result, int len) { for (int i = 0; i <= len; i++) System.out.print(result[i]); System.out.println(); } // Method to found all combination // of string it is based in tree static void stringCombination(char[] result, char[] str, int[] count, int level, int size, int length) { // return if level is equal size of string if (level == size) return; for (int i = 0; i < length; i++) { // if occurrence of char is 0 then // skip the iteration of loop if (count[i] == 0) continue; // decrease the char occurrence by 1 count[i]--; // store the char in result result[level] = str[i]; // print the string till level printResult(result, level); // call the function from level +1 stringCombination(result, str, count, level + 1, size, length); // backtracking count[i]++; } } static void combination(String str) { // declare the map for store // each char with occurrence HashMap<Character, Integer> mp = new HashMap<>(); for (int i = 0; i < str.length(); i++) mp.put(str.charAt(i), mp.get(str.charAt(i)) == null ? 
1 : mp.get(str.charAt(i)) + 1); // initialize the input array // with all unique char char[] input = new char[mp.size()]; // initialize the count array with // occurrence the unique char int[] count = new int[mp.size()]; // temporary char array for store the result char[] result = new char[str.length()]; int i = 0; for (HashMap.Entry<Character, Integer> entry : mp.entrySet()) { // store the element of input array input[i] = entry.getKey(); // store the element of count array count[i] = entry.getValue(); i++; } // size of map(no of unique char) int length = mp.size(); // size of original string int size = str.length(); // call function for print string combination stringCombination(result, input, count, 0, size, length); } // Driver code public static void main (String[] args) { String str = "ABC"; combination(str); }} // This code is contributed by// sanjeev2552
# Python 3 program to find all combinations# of a string in lexicographical orderfrom collections import defaultdict # function to print string def printResult(result, length): for i in range(length+1): print(result[i], end="") print() # Method to found all combination# of string it is based in tree def stringCombination(result, st, count, level, size, length): # return if level is equal size of string if (level == size): return for i in range(length): # if occurrence of char is 0 then # skip the iteration of loop if (count[i] == 0): continue # decrease the char occurrence by 1 count[i] -= 1 # store the char in result result[level] = st[i] # print the string till level printResult(result, level) # call the function from level +1 stringCombination(result, st, count, level + 1, size, length) # backtracking count[i] += 1 def combination(st): # declare the map for store # each char with occurrence mp = defaultdict(int) for i in range(len(st)): if (st[i] in mp.keys()): mp[st[i]] = mp[st[i]] + 1 else: mp[st[i]] = 1 # initialize the input array # with all unique char input = ['']*len(mp) # initialize the count array with # occurrence the unique char count = [0] * len(mp) # temporary char array for store the result result = ['']*len(st) i = 0 for key, value in mp.items(): # store the element of input array input[i] = key # store the element of count array count[i] = value i += 1 # size of map(no of unique char) length = len(mp) # size of original string size = len(st) # call function for print string combination stringCombination(result, input, count, 0, size, length) # Driver codeif __name__ == "__main__": st = "ABC" combination(st) # This code is contributed by ukasp.
// C# program to find all combinations// of a string in lexicographical orderusing System;using System.Collections.Generic; class GFG{ // function to print string static void printResult(char[] result, int len) { for (int i = 0; i <= len; i++) Console.Write(result[i]); Console.WriteLine(); } // Method to found all combination // of string it is based in tree static void stringCombination(char[] result, char[] str, int[] count, int level, int size, int length) { // return if level is equal size of string if (level == size) return; for (int i = 0; i < length; i++) { // if occurrence of char is 0 then // skip the iteration of loop if (count[i] == 0) continue; // decrease the char occurrence by 1 count[i]--; // store the char in result result[level] = str[i]; // print the string till level printResult(result, level); // call the function from level +1 stringCombination(result, str, count, level + 1, size, length); // backtracking count[i]++; } } static void combination(String str) { int i; // declare the map for store // each char with occurrence Dictionary<char,int> mp = new Dictionary<char,int>(); for (i= 0; i < str.Length; i++) if(mp.ContainsKey(str[i])) mp[str[i]] = mp[str[i]] + 1; else mp.Add(str[i], 1); // initialize the input array // with all unique char char[] input = new char[mp.Count]; // initialize the count array with // occurrence the unique char int[] count = new int[mp.Count]; // temporary char array for store the result char[] result = new char[str.Length]; i = 0; foreach(KeyValuePair<char, int> entry in mp) { // store the element of input array input[i] = entry.Key; // store the element of count array count[i] = entry.Value; i++; } // size of map(no of unique char) int length = mp.Count; // size of original string int size = str.Length; // call function for print string combination stringCombination(result, input, count, 0, size, length); } // Driver code public static void Main(String[] args) { String str = "ABC"; combination(str); }} // This code is 
contributed by Rajput-Ji
<script> // JavaScript program to find all combinations // of a string in lexicographical order // function to print string function printResult(result, len) { for (var i = 0; i <= len; i++) document.write(result[i]); document.write("<br>"); } // Method to found all combination // of string it is based in tree function stringCombination(result, str, count, level, size, len) { // return if level is equal size of string if (level === size) return; for (var i = 0; i < len; i++) { // if occurrence of char is 0 then // skip the iteration of loop if (count[i] === 0) continue; // decrease the char occurrence by 1 count[i]--; // store the char in result result[level] = str[i]; // print the string till level printResult(result, level); // call the function from level +1 stringCombination(result, str, count, level + 1, size, len); // backtracking count[i]++; } } function combination(str) { var i; // declare the map for store // each char with occurrence var mp = {}; for (i = 0; i < str.length; i++) if (mp.hasOwnProperty(str[i])) mp[str[i]] = mp[str[i]] + 1; else mp[str[i]] = 1; // initialize the input array // with all unique char var input = new Array(Object.keys(mp).length).fill(0); // initialize the count array with // occurrence the unique char var count = new Array(Object.keys(mp).length).fill(0); // temporary char array for store the result var result = new Array(str.length).fill(0); i = 0; for (const [key, value] of Object.entries(mp)) { // store the element of input array input[i] = key; // store the element of count array count[i] = value; i++; } // size of map(no of unique char) var len = Object.keys(mp).length; // size of original string var size = str.length; // call function for print string combination stringCombination(result, input, count, 0, size, len); } // Driver code var str = "ABC"; combination(str); </script>
A
AB
ABC
AC
ACB
B
BA
BAC
BC
BCA
C
CA
CAB
CB
CBA
Time Complexity: O() where N is the size of the string. Auxiliary Space: O(N)
sanjeev2552
Rajput-Ji
tusharamrit
rdtank
pankajsharmagfg
ukasp
Algorithms-Backtracking
Algorithms-Recursion
C-String-Question
lexicographic-ordering
Algorithms
Backtracking
Recursion
Java Program to Search an Element in a Linked List
|
02 Nov, 2020
Prerequisite: LinkedList in java
LinkedList is a linear data structure where the elements are not stored in contiguous memory locations. Every element is a separate object known as a node with a data part and an address part. The elements are linked using pointers or references. Linked Lists are preferred over arrays in case of deletions and insertions as they take O(1) time for the respective operations.
Advantages of Linked List:
Insertions and deletions take O(1) time.
Dynamic in nature.
Memory is not contiguous.
Syntax:
LinkedList<ClassName> variableName = new LinkedList<>();
Example:
LinkedList<Integer> ll = new LinkedList<>();
Task:
Search for an element in a linked list.
Approach:
When the linked list is provided to us directly, we can use a for loop to traverse the list and find the element. If we are not allowed to use pre-built libraries, we need to create our very own linked list and search for the element.
Examples:
Input: ll1 = [10, 20, 30, -12, 0, 23, -2, 12]
element = 23
Output: 5
Input: ll2 = [1, 2, 3, 4, 5]
element = 3
Output: 2
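The examples above can be sanity-checked with a short Python traversal sketch (illustrative only; the article's Java implementations follow):

```python
def find_index(values, target):
    # Walk the sequence once and return the first matching index,
    # or -1 when the target is absent -- O(n) time.
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

print(find_index([10, 20, 30, -12, 0, 23, -2, 12], 23))  # 5
print(find_index([1, 2, 3, 4, 5], 3))                    # 2
```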
The following are the two methods with which we can search for an element in a Linked List.
Method 1: When we are allowed to use in-built libraries
First, a Linked list is initialized.
A for loop is used to traverse through the elements present in the Linked List.
Below is the implementation of the above approach:
Java
// Java Program to find an element in a Linked List

// Importing the Linked List class
import java.util.LinkedList;

class SearchInALinkedList {
    public static void main(String[] args)
    {
        // Initializing the Linked List
        LinkedList<Integer> ll = new LinkedList<>();

        // Adding elements to the Linked List
        ll.add(1);
        ll.add(2);
        ll.add(3);
        ll.add(4);
        ll.add(5);
        ll.add(6);
        ll.add(7);

        // Element to be searched
        int element = 4;

        // Initializing the answer to the index -1
        int ans = -1;

        // Traversing through the Linked List
        for (int i = 0; i < ll.size(); i++) {

            // Extracting each element in the Linked List
            int llElement = ll.get(i);

            // Checking if the extracted element is equal to
            // the element to be searched
            if (llElement == element) {

                // Assigning the index of the element to answer
                ans = i;
                break;
            }
        }

        // Checking if the element is present in the Linked List
        if (ans == -1) {
            System.out.println("Element not found");
        }
        else {
            System.out.println(
                "Element found in Linked List at " + ans);
        }
    }
}
Element found in Linked List at 3
Time Complexity: O(n²) as written, where n is the number of elements in the linked list — each ll.get(i) is itself O(i) on a LinkedList. Iterating with a for-each loop (or using ll.indexOf(element)) brings the search down to O(n).
Method 2: When we are not allowed to use in-built libraries
First, create a generic node class.
Create a LinkedList class and initialize the head node to null.
Create the required add and search functions.
Initialize the LinkedList in the main method.
Use the search method to find the element.
Below is the implementation of the above approach:
Java
// A generic Node class used to build a Linked List
class Node<E> {
    // Data stored in each node of the Linked List
    E data;

    // Pointer to the next node in the Linked List
    Node<E> next;

    // Node constructor initializes the data in each node
    Node(E data) { this.data = data; }
}

class LinkedList<E> {
    // Points to the head of the Linked List,
    // i.e. the first element
    Node<E> head = null;
    int size = 0;

    // Add an element at the tail of the Linked List
    public void add(E element)
    {
        // If the head does not exist yet, create it
        if (head == null) {
            head = new Node<>(element);
            size++;
            return;
        }

        // The node that needs to be added at the tail
        Node<E> add = new Node<>(element);

        // Start from the head pointer
        Node<E> temp = head;

        // The while loop takes us to the tail of the Linked List
        while (temp.next != null) {
            temp = temp.next;
        }

        // The new node is added at the tail
        temp.next = add;

        // The size is incremented as elements are added
        size++;
    }

    // Searches the Linked List for the given element and
    // returns its index if found, otherwise returns -1
    public int search(E element)
    {
        if (head == null) {
            return -1;
        }

        int index = 0;
        Node<E> temp = head;

        // The while loop searches the entire Linked List
        // starting from the head
        while (temp != null) {

            // Returns the index of that particular element,
            // if found. equals() is used instead of == so that
            // boxed values outside the Integer cache also
            // compare correctly.
            if (temp.data.equals(element)) {
                return index;
            }

            // Gradually increases index while
            // traversing through the Linked List
            index++;
            temp = temp.next;
        }

        // Returns -1 if the element is not found
        return -1;
    }
}

public class GFG {
    public static void main(String[] args) throws Exception
    {
        // Initializing the Linked List
        LinkedList<Integer> ll = new LinkedList<>();

        // Adding elements to the Linked List
        ll.add(1);
        ll.add(10);
        ll.add(12);
        ll.add(-1);
        ll.add(0);
        ll.add(-19);
        ll.add(34);

        // Element to be searched
        int element = -1;

        // Searching the Linked List for the element
        int ans = ll.search(element);

        if (ans == -1) {
            System.out.println(
                "Element not found in the Linked List");
        }
        else
            System.out.println(
                "Element found in the Linked List at " + ans);
    }
}
Element found in the Linked List at 3
Time Complexity: O(n) where n is the number of elements present in the linked list.
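For comparison, the node-based structure of Method 2 can be sketched in a few lines of Python (an illustrative port, not part of the original Java program):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def search(head, target):
    # Traverse from the head and return the index of the first
    # node whose data equals target, or -1 if it is not present.
    index = 0
    node = head
    while node is not None:
        if node.data == target:
            return index
        index += 1
        node = node.next
    return -1

# Build 1 -> 10 -> 12 -> -1 and search for -1.
head = Node(1)
head.next = Node(10)
head.next.next = Node(12)
head.next.next.next = Node(-1)
print(search(head, -1))  # 3
```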
Technical Scripter 2020
Java
Java Programs
Technical Scripter
How to build a URL Shortener with Django ?
|
26 Nov, 2020
Building a URL shortener is one of the best beginner projects to hone your skills. In this article, we have shared the steps to build a URL shortener using the Django framework. To know more about Django, visit – Django Tutorial
We need some things setup before we start with our project. We will be using Virtual Environment for our project.
pip install virtualenv
virtualenv urlShort
source urlShort/bin/activate
The commands above create and activate a virtual environment named urlShort.
We need to install some packages beforehand:
pip install django
First of all, we need to create our project:
django-admin startproject urlShort
cd urlShort
The command above creates a Django project, and then we cd into that directory. After that, we also need to create an app inside our project. An app is a sort of container where we will store our code. A project can have multiple apps, and they can be interconnected.
python manage.py startapp url
The command above creates an app named url in our project. Our file structure will now be —
urlShort
├── manage.py
├── url
│ ├── admin.py
│ ├── apps.py
│ ├── __init__.py
│ ├── migrations
│ │ └── __init__.py
│ ├── models.py
│ ├── tests.py
│ └── views.py
└── urlShort
├── asgi.py
├── __init__.py
├── __pycache__
│ ├── __init__.cpython-37.pyc
│ └── settings.cpython-37.pyc
├── settings.py
├── urls.py
└── wsgi.py
You can check that everything is working by typing this on the command line. Make sure you first cd into the main folder, here urlShort.
python manage.py runserver
runserver starts a local development server where our website will load. Open the URL
http://localhost:8000
Keep Your Console Window Open.
Tighten your seat belts, as we are starting to code. First of all, we will work with views.py, which is used to connect our database and APIs with our frontend. Open views.py and type:
from django.http import HttpResponse
def index(request):
return HttpResponse("Hello World")
Save it, open localhost, and check whether it changes. It does not change, because we have not mapped it to any route. Writing a function inside views.py is not enough on its own; we need to map it inside urls.py. So, create a urls.py inside the url folder.
from django.urls import path
from . import views
app_name = "url"
urlpatterns = [
path("", views.index, name="home")
]
Don’t forget to add your app – “url” in INSTALLED_APPS in settings.py
First of all, we need a Database to store our Shorten URL’s. For That, We need to create a Schema for our Database Table in models.py.
models.py
from django.db import models
class UrlData(models.Model):
url = models.CharField(max_length=200)
slug = models.CharField(max_length=15)
def __str__(self):
return f"Short Url for: {self.url} is {self.slug}"
The code above creates a table UrlData in our database with the columns url and slug. We will use the url column to store the original URL and the slug column to store a 10-character string that serves as the shortened URL.
For Example,
Original URL — https://medium.com/satyam-kulkarni/
Shortened form — http://localhost:8000/u/sEqlKdsIUL
The URL’s maximum length is 200 characters and the slug’s maximum length is 15 (considering our website’s address). After creating the model for our website, let’s create a form for taking input from the user.
Create a forms.py in our Django App Folder.
forms.py
from django import forms
class Url(forms.Form):
url = forms.CharField(label="URL")
We simply import forms from django and create a class Url, which we will use in views.py and render in our HTML. The Url form has only a url field to take the original URL as input.
Now, We will create the Interface of our App using views.py. Let’s divide this part in Functions.
urlShort() — This function is where our main logic lives. After the user submits the form, it takes the URL, generates a random slug, and stores the slug together with the original URL in the database. It is also the function that renders index.html (the entry point of our app).
views.py urlShort()
import random
import string

from django.shortcuts import redirect, render

from .forms import Url
from .models import UrlData


def urlShort(request):
    if request.method == 'POST':
        form = Url(request.POST)
        if form.is_valid():
            # Generate a random 10-letter slug
            slug = ''.join(random.choice(string.ascii_letters)
                           for x in range(10))
            url = form.cleaned_data["url"]
            new_url = UrlData(url=url, slug=slug)
            new_url.save()
            return redirect('/')
    else:
        form = Url()
    data = UrlData.objects.all()
    context = {
        'form': form,
        'data': data
    }
    return render(request, 'index.html', context)
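The view renders index.html, which is not shown in the article; below is a minimal sketch of what it could look like (the file location url/templates/index.html and all the markup are assumptions, not the article's own template):

```html
<!-- url/templates/index.html (assumed minimal template) -->
<form method="post">
  {% csrf_token %}
  {{ form.as_p }}
  <button type="submit">Shorten</button>
</form>

<ul>
  {% for item in data %}
    <li>{{ item.url }} &rarr; /u/{{ item.slug }}</li>
  {% endfor %}
</ul>
```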
urlRedirect() — This function looks up the original URL for a given slug and redirects to it.
views.py urlRedirect()
def urlRedirect(request, slugs):
data = UrlData.objects.get(slug=slugs)
return redirect(data.url)
Before running this app, we need to specify the URL paths in the app's urls.py
urls.py
from django.urls import path
from . import views
app_name = "url"
urlpatterns = [
path("", views.urlShort, name="home"),
path("u/<str:slugs>", views.urlRedirect, name="redirect")
]
Open Console in Main Project Directory.
python manage.py runserver
Pandas Built-in Data Visualization | ML
24 Jun, 2019
Data Visualization is the presentation of data in graphical format. It helps people understand the significance of data by summarizing and presenting a huge amount of data in a simple and easy-to-understand format and helps communicate information clearly and effectively.
In this tutorial, we will learn about pandas' built-in capabilities for data visualization! It's built off of matplotlib, but it's baked into pandas for easier usage!
Let’s take a look!
Installation: The easiest way to install pandas is to use pip:
pip install pandas
or, Download it from here
This article demonstrates an illustration of using built-in data visualization feature in pandas by plotting different types of charts.
The Sample csv files df1 and df2 used in this tutorial can be downloaded from here.
import numpy as np
import pandas as pd

# There are some fake data csv files
# you can read in as dataframes
df1 = pd.read_csv('df1', index_col = 0)
df2 = pd.read_csv('df2')
Matplotlib has style sheets which can be used to make plots look a little nicer. These style sheets include 'bmh', 'fivethirtyeight', 'ggplot' and more. They basically create a set of style rules that your plots follow. We recommend using them; they make all your plots have the same look and feel more professional. We can even create our own if we want our company's plots to all have the same look (it is a bit tedious to create one, though).
Here is how to use them.
Before plt.style.use() plots look like this:
df1['A'].hist()
Output :
Call the style:
Now, plots look like this after calling ggplot style:
import matplotlib.pyplot as plt
plt.style.use('ggplot')
df1['A'].hist()
Output :
Plots look like this after calling bmh style:
plt.style.use('bmh')
df1['A'].hist()
Output :
Plots look like this after calling dark_background style:
plt.style.use('dark_background')
df1['A'].hist()
Output :
Plots look like this after calling fivethirtyeight style:
plt.style.use('fivethirtyeight')
df1['A'].hist()
Output :
There are several plot types built-in to pandas, most of them statistical plots by nature:
df.plot.area
df.plot.barh
df.plot.density
df.plot.hist
df.plot.line
df.plot.scatter
df.plot.bar
df.plot.box
df.plot.hexbin
df.plot.kde
df.plot.pie
You can also just call df.plot(kind='hist') or replace that kind argument with any of the key terms shown in the list above (e.g. ‘box’, ‘barh’, etc.). Let’s start going through them!
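As a quick self-contained check of that call style (the synthetic DataFrame and the non-interactive Agg backend are assumptions so the snippet runs headlessly, without the article's csv files):

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen; no display needed
import numpy as np
import pandas as pd

# Synthetic stand-in for df1
df = pd.DataFrame(np.random.randn(200, 2), columns=["a", "b"])

ax = df.plot(kind="hist", bins=20)  # equivalent to df.plot.hist(bins=20)
ax.figure.savefig("hist.png")       # persist the figure to a file
```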
An area chart or area graph displays graphically quantitative data. It is based on the line chart. The area between axis and line are commonly emphasized with colors, textures and hatchings. Commonly one compares two or more quantities with an area chart.
df2.plot.area(alpha = 0.4)
Output :
A bar chart or bar graph is a chart or graph that presents categorical data with rectangular bars with heights or lengths proportional to the values that they represent. The bars can be plotted vertically or horizontally. A vertical bar chart is sometimes called a column chart.
df2.head()
Output :
df2.plot.bar()
Output :
df2.plot.bar(stacked = True)
Output :
A histogram is a plot that lets you discover, and show, the underlying frequency distribution (shape) of a set of continuous data. This allows the inspection of the data for its underlying distribution (e.g., normal distribution), outliers, skewness, etc.
df1['A'].plot.hist(bins = 50)
Output :
A line plot is a graph that shows frequency of data along a number line. It is best to use a line plot when the data is time series. It is a quick, simple way to organize data.
df1.plot.line(x = df1.index, y ='B', figsize =(12, 3), lw = 1)
Output :
Scatter plots are used when you want to show the relationship between two variables. Scatter plots are sometimes called correlation plots because they show how two variables are correlated.
df1.plot.scatter(x ='A', y ='B')
Output :
You can use c to color based off another column value. Use cmap to indicate which colormap to use. For all the colormaps, check out: http://matplotlib.org/users/colormaps.html
df1.plot.scatter(x ='A', y ='B', c ='C', cmap ='coolwarm')
Output :
Or use s to indicate size based off another column. s parameter needs to be an array, not just the name of a column:
df1.plot.scatter(x ='A', y ='B', s = df1['C']*200)
Output :
It is a plot in which a rectangle is drawn to represent the second and third quartiles, usually with a vertical line inside to indicate the median value. The lower and upper quartiles are shown as horizontal lines either side of the rectangle. A boxplot is a standardized way of displaying the distribution of data based on a five number summary (“minimum”, first quartile (Q1), median, third quartile (Q3), and “maximum”). It can tell you about your outliers and what their values are. It can also tell you if your data is symmetrical, how tightly your data is grouped, and if and how your data is skewed.
df2.plot.box() # Can also pass a by = argument for groupby
Output :
Hexagonal binning is another way to manage the problem of having too many points that start to overlap. Hexagonal binning plots density, rather than points. Points are binned into gridded hexagons and the distribution (the number of points per hexagon) is displayed using either the color or the area of the hexagons. Useful for bivariate data, as an alternative to a scatterplot:
df = pd.DataFrame(np.random.randn(1000, 2), columns =['a', 'b'])
df.plot.hexbin(x ='a', y ='b', gridsize = 25, cmap ='Oranges')
Output :
KDE (kernel density estimation) is a technique that lets you create a smooth curve given a set of data.
This can be useful if you want to visualize just the “shape” of some data, as a kind of continuous replacement for the discrete histogram. It can also be used to generate points that look like they came from a certain dataset – this behavior can power simple simulations, where simulated objects are modeled off of real data.
df2['a'].plot.kde()
Output :
df2.plot.density()
Output :
That’s it! Hopefully you can see why this method of plotting will be a lot easier to use than full-on matplotlib; it balances ease of use with control over the figure. A lot of the plot calls also accept additional arguments of their parent matplotlib plt call.
Lexicographically smallest K-length substring containing maximum number of vowels
12 May, 2022
Given string str containing only the lowercase English alphabet and an integer K, the task is to find a K length substring that contains the maximum number of vowels (i.e. ‘a’, ‘e’, ‘i’, ‘o’, ‘u’). If there are multiple such substrings, return the substring which is lexicographically smallest.
Examples:
Input: str = “geeksforgeeks”, K = 4
Output: eeks
Explanation: The substrings with the maximum count of vowels are “geek” and “eeks”, each with 2 vowels, but “eeks” is lexicographically smallest.

Input: str = “ceebbaceeffo”, K = 3
Output: ace
Explanation: Lexicographically, the smallest substring with the maximum count of vowels is “ace”.
Naive Approach: To solve the problem mentioned above, generate all the substrings of length K and keep the lexicographically smallest among those containing the maximum number of vowels. Time Complexity: O(N²)
Efficient Approach: The above procedure can be optimized by creating a prefix sum array pref[] of vowels, where the ith index contains the count of vowels from index 0 to i. The count of vowels for any substring str[l : r] is then given by pref[r] - pref[l - 1]. Using it, find the lexicographically smallest substring with the maximum count of vowels. Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program to find
// lexicographically smallest
// K-length substring containing
// maximum number of vowels

#include <bits/stdc++.h>
using namespace std;

// Function that prints the
// lexicographically smallest
// K-length substring containing
// maximum number of vowels
string maxVowelSubString(string str, int K)
{
    // Store the length of the string
    int N = str.length();

    // Initialize a prefix sum array
    int pref[N];

    // Loop through the string to
    // create the prefix sum array
    for (int i = 0; i < N; i++) {

        // Store 1 at the index
        // if it is a vowel
        if (str[i] == 'a' or str[i] == 'e'
            or str[i] == 'i' or str[i] == 'o'
            or str[i] == 'u')
            pref[i] = 1;

        // Otherwise, store 0
        else
            pref[i] = 0;

        // Process the prefix array
        if (i)
            pref[i] += pref[i - 1];
    }

    // Initialize the variable to store
    // maximum count of vowels
    int maxCount = pref[K - 1];

    // Initialize the variable
    // to store substring
    // with maximum count of vowels
    string res = str.substr(0, K);

    // Loop through the prefix array
    for (int i = K; i < N; i++) {

        // Store the current
        // count of vowels
        int currCount = pref[i] - pref[i - K];

        // Update the result if current count
        // is greater than maximum count
        if (currCount > maxCount) {
            maxCount = currCount;
            res = str.substr(i - K + 1, K);
        }

        // Update lexicographically smallest
        // substring if the current count
        // is equal to the maximum count
        else if (currCount == maxCount) {
            string temp = str.substr(i - K + 1, K);
            if (temp < res)
                res = temp;
        }
    }

    // Return the result
    return res;
}

// Driver Program
int main()
{
    string str = "ceebbaceeffo";
    int K = 3;
    cout << maxVowelSubString(str, K);
    return 0;
}
// Java program to find
// lexicographically smallest
// K-length substring containing
// maximum number of vowels
class GFG{

// Function that prints the
// lexicographically smallest
// K-length substring containing
// maximum number of vowels
static String maxVowelSubString(String str, int K)
{
    // Store the length of the string
    int N = str.length();

    // Initialize a prefix sum array
    int []pref = new int[N];

    // Loop through the string to
    // create the prefix sum array
    for (int i = 0; i < N; i++)
    {
        // Store 1 at the index
        // if it is a vowel
        if (str.charAt(i) == 'a' || str.charAt(i) == 'e' ||
            str.charAt(i) == 'i' || str.charAt(i) == 'o' ||
            str.charAt(i) == 'u')
            pref[i] = 1;

        // Otherwise, store 0
        else
            pref[i] = 0;

        // Process the prefix array
        if (i != 0)
            pref[i] += pref[i - 1];
    }

    // Initialize the variable to store
    // maximum count of vowels
    int maxCount = pref[K - 1];

    // Initialize the variable
    // to store substring
    // with maximum count of vowels
    String res = str.substring(0, K);

    // Loop through the prefix array
    for (int i = K; i < N; i++)
    {
        // Store the current
        // count of vowels
        int currCount = pref[i] - pref[i - K];

        // Update the result if current count
        // is greater than maximum count
        if (currCount > maxCount)
        {
            maxCount = currCount;
            res = str.substring(i - K + 1, i + 1);
        }

        // Update lexicographically smallest
        // substring if the current count
        // is equal to the maximum count
        else if (currCount == maxCount)
        {
            String temp = str.substring(i - K + 1, i + 1);
            if (temp.compareTo(res) < 0)
                res = temp;
        }
    }

    // Return the result
    return res;
}

// Driver Code
public static void main(String []args)
{
    String str = "ceebbaceeffo";
    int K = 3;
    System.out.print(maxVowelSubString(str, K));
}
}

// This code is contributed by Chitranayal
# Python3 program to find
# lexicographically smallest
# K-length substring containing
# maximum number of vowels

# Function that prints the
# lexicographically smallest
# K-length substring containing
# maximum number of vowels
def maxVowelSubString(str1, K):

    # Store the length of the string
    N = len(str1)

    # Initialize a prefix sum array
    pref = [0 for i in range(N)]

    # Loop through the string to
    # create the prefix sum array
    for i in range(N):

        # Store 1 at the index
        # if it is a vowel
        if (str1[i] == 'a' or str1[i] == 'e' or
            str1[i] == 'i' or str1[i] == 'o' or
            str1[i] == 'u'):
            pref[i] = 1

        # Otherwise, store 0
        else:
            pref[i] = 0

        # Process the prefix array
        if (i):
            pref[i] += pref[i - 1]

    # Initialize the variable to
    # store maximum count of vowels
    maxCount = pref[K - 1]

    # Initialize the variable
    # to store substring with
    # maximum count of vowels
    res = str1[0:K]

    # Loop through the prefix array
    for i in range(K, N):

        # Store the current
        # count of vowels
        currCount = pref[i] - pref[i - K]

        # Update the result if current count
        # is greater than maximum count
        if (currCount > maxCount):
            maxCount = currCount
            res = str1[i - K + 1 : i + 1]

        # Update lexicographically smallest
        # substring if the current count
        # is equal to the maximum count
        elif (currCount == maxCount):
            temp = str1[i - K + 1 : i + 1]
            if (temp < res):
                res = temp

    # Return the result
    return res

# Driver code
if __name__ == '__main__':

    str1 = "ceebbaceeffo"
    K = 3

    print(maxVowelSubString(str1, K))

# This code is contributed by Surendra_Gangwar
// C# program to find// lexicographically smallest// K-length substring containing// maximum number of vowelsusing System;class GFG{ // Function that prints the// lexicographically smallest// K-length substring containing// maximum number of vowelsstatic string maxVowelSubString(string str, int K){ // Store the length of the string int N = str.Length; // Initialize a prefix sum array int []pref = new int[N]; // Loop through the string to // create the prefix sum array for (int i = 0; i < N; i++) { // Store 1 at the index // if it is a vowel if (str[i] == 'a' || str[i] == 'e' || str[i] == 'i' || str[i] == 'o' || str[i] == 'u') pref[i] = 1; // Otherwise, store 0 else pref[i] = 0; // Process the prefix array if (i != 0) pref[i] += pref[i - 1]; } // Initialize the variable to store // maximum count of vowels int maxCount = pref[K - 1]; // Initialize the variable // to store substring // with maximum count of vowels string res = str.Substring(0, K); // Loop through the prefix array for (int i = K; i < N; i++) { // Store the current // count of vowels int currCount = pref[i] - pref[i - K]; // Update the result if current count // is greater than maximum count if (currCount > maxCount) { maxCount = currCount; res = str.Substring(i - K + 1, K); } // Update lexicographically smallest // substring if the current count // is equal to the maximum count else if (currCount == maxCount) { string temp = str.Substring(i - K + 1, K); if (string.Compare(temp, res) == -1) res = temp; } } // Return the result return res;} // Driver Codepublic static void Main(){ string str = "ceebbaceeffo"; int K = 3; Console.Write(maxVowelSubString(str, K));}} // This code is contributed by Code_Mech
<script> // Javascript program to find// lexicographically smallest// K-length substring containing// maximum number of vowels // Function that prints the// lexicographically smallest// K-length substring containing// maximum number of vowelsfunction maxVowelSubString(str, K){     // Store the length of the string    var N = str.length;     // Initialize a prefix sum array    var pref = Array(N);     // Loop through the string to    // create the prefix sum array    for(var i = 0; i < N; i++)    {         // Store 1 at the index        // if it is a vowel        if (str[i] == 'a' || str[i] == 'e' ||            str[i] == 'i' || str[i] == 'o' ||            str[i] == 'u')            pref[i] = 1;         // Otherwise, store 0        else            pref[i] = 0;         // Process the prefix array        if (i)            pref[i] += pref[i - 1];    }     // Initialize the variable to store    // maximum count of vowels    var maxCount = pref[K - 1];     // Initialize the variable    // to store substring    // with maximum count of vowels    var res = str.substring(0, K);     // Loop through the prefix array    for (var i = K; i < N; i++)    {         // Store the current        // count of vowels        var currCount = pref[i] - pref[i - K];         // Update the result if current count        // is greater than maximum count        if (currCount > maxCount)        {            maxCount = currCount;            res = str.substring(i - K + 1, i + 1);        }         // Update lexicographically smallest        // substring if the current count        // is equal to the maximum count        else if (currCount == maxCount)        {            var temp = str.substring(                i - K + 1, i + 1);             if (temp < res)                res = temp;        }    }     // Return the result    return res;} // Driver Programvar str = "ceebbaceeffo";var K = 3;document.write(    maxVowelSubString(str, K)); </script>
ace
Time Complexity: O(N), Auxiliary Space: O(N) for the prefix sum array
Space Optimized Approach :
Instead of storing prefix sums, we can use a sliding window and update our result at each maximum.
C++
Java
Python3
Javascript
#include <iostream>using namespace std; // Helper function to check if a character is a vowelbool isVowel(char c){ return c == 'a' or c == 'e' or c == 'i' or c == 'o' or c == 'u';} // Function to find the maximum vowel substringstring maxVowelSubstring(string s, int k){ int maxCount = 0; // initialize maxCount as 0 string res = s.substr( 0, k); // and result as first substring of size k for (int i = 0, count = 0; i < s.size(); i++) // iterate through the string { if (isVowel( s[i])) // if current character is a vowel count++; // then increase count if (i >= k and isVowel( s[i - k])) // if character that is leaving // the window is a vowel count--; // then decrease count if (count > maxCount) // if we get a substring // having more vowels { maxCount = count; // update count if (i >= k) res = s.substr(i - k + 1, k); // and update result } if (count == maxCount and i >= k) // if we get a substring with same // maximum number of vowels { string t = s.substr(i - k + 1, k); if (t < res) // then check if it is // lexicographically smaller than // current result and update it res = t; } } return res;} // Driver codeint main(){ string str = "geeksforgeeks"; int k = 4; cout << maxVowelSubstring(str, k); return 0;}
/*package whatever //do not write package name here */import java.io.*; class GFG { static boolean isVowel(char c){ return c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u'; } // Function to find the maximum vowel subString static String maxVowelSubString(String s, int k) { int maxCount = 0; // initialize maxCount as 0 String res = s.substring(0, k); // and result as first subString of size k for (int i = 0, count = 0; i < s.length();i++) // iterate through the String { if (isVowel(s.charAt(i))) // if current character is a vowel count++; // then increase count if (i >= k && isVowel(s.charAt(i - k))) // if character that is leaving // the window is a vowel count--; // then decrease count if (count > maxCount) // if we get a subString // having more vowels { maxCount = count; // update count if (i >= k) res = s.substring(i - k + 1,i + 1); // and update result } if (count == maxCount && i >= k) // if we get a subString with same // maximum number of vowels { String t = s.substring(i - k + 1, i + 1); if (t.compareTo(res) < 0){ // then check if it is // lexicographically smaller than // current result and update it res = t; } } } return res; } public static void main (String[] args) { String str = "geeksforgeeks"; int k = 4; System.out.println(maxVowelSubString(str, k)); }} // This code is contributed by shinjanpatra.
# Helper function to check if a character is a voweldef isVowel(c): return (c == 'a' or c == 'e' or c == 'i' or c == 'o' or c == 'u') # Function to find the maximum vowel substringdef maxVowelSubstring(s, k): # initialize maxCount as 0 maxCount = 0 # and result as first substring of size k res = s[0:k] # iterate through the string count = 0 for i in range(len(s)): # if current character is a vowel if (isVowel(s[i])): count += 1 # then increase count # if character that is leaving if (i >= k and isVowel(s[i - k])): # the window is a vowel count -= 1 # then decrease count if (count > maxCount): # if we get a substring # having more vowels maxCount = count # update count if (i >= k): # and update result res = s[i - k + 1: i + 1] # if we get a substring with same if (count == maxCount and i >= k): # maximum number of vowels t = s[i - k + 1: i+1] if (t < res): # then check if it is # lexicographically smaller than # current result and update it res = t return res # driver codestr = "geeksforgeeks"k = 4print(maxVowelSubstring(str, k)) # This code is contributed by shinjanpatra
<script> // Helper function to check if a character is a vowel function isVowel(c) { return (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u'); } // Function to find the maximum vowel substring function maxVowelSubstring(s, k) { // initialize maxCount as 0 let maxCount = 0; // and result as first substring of size k let res = s.substr(0, k); // iterate through the string for (let i = 0, count = 0; i < s.length; i++) { // if current character is a vowel if (isVowel(s[i])) count++; // then increase count // if character that is leaving if (i >= k && isVowel(s[i - k])) // the window is a vowel count--; // then decrease count if (count > maxCount) // if we get a substring // having more vowels { maxCount = count; // update count if (i >= k) // and update result res = s.substr(i - k + 1, k); } // if we get a substring with same if (count == maxCount && i >= k) // maximum number of vowels { let t = s.substr(i - k + 1, k); if (t < res) // then check if it is // lexicographically smaller than // current result and update it res = t; } } return res; } let str = "geeksforgeeks"; let k = 4; document.write(maxVowelSubstring(str, k)); </script>
eeks
Time Complexity: O(N) Space Complexity: O(1)
Working of top down parser
12 Oct, 2021
In this article, we are going to cover the working of a top-down parser, see how it takes input and parses it, and also cover some basics of top-down parsing.
Pre-requisite – Parsing
Top Down Parser :
In the top-down technique, the parse tree is constructed from the top and the input is read from left to right. A top-down parser starts from the start symbol and proceeds towards the input string.
It follows the leftmost derivation.
The difficulty with a top-down parser is that when a variable has more than one production on its right-hand side, selecting the correct one is difficult.
Working of Top Down Parser: Let's consider an example where a grammar is given and you need to construct a parse tree using the top-down parsing technique.
Example –
S -> aABe
A -> Abc | b
B -> d
Now, let’s consider the input to read and to construct a parse tree with top down approach.
Input –
abbcde$
Now let's see how the top-down approach works, i.e., how the input string is derived from the grammar.
First, start with S -> aABe; the derived string now has a at the beginning and e at the end.
Now, you need to generate abbcde.
Expand A -> Abc and expand B -> d.
Now, you have the string aAbcde and your input string is abbcde.
Expand A -> b.
Finally, you get the string abbcde.
Given below is the diagram explaining the construction of the top-down parse tree. You can see clearly in the diagram how the input string is generated from the grammar with the top-down approach.
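The derivation above can be sketched as a small recognizer. Note that A -> Abc | b is left-recursive, so a naive recursive-descent parser would loop forever on it; the sketch below assumes the standard rewrite A -> b (bc)*, which generates the same language. Function names and the rewrite are illustrative assumptions, not part of the article, and the trailing $ in the input is treated only as an end-of-input marker.

```python
# Top-down recognizer for the example grammar,
# under the assumed left-recursion rewrite  A -> b (b c)* :
#   S -> a A B e
#   A -> A b c | b
#   B -> d

def parse_A(s, i):
    # A -> b (b c)* : consume one 'b', then as many "bc" pairs as possible
    if i < len(s) and s[i] == 'b':
        i += 1
        while s[i:i + 2] == 'bc':
            i += 2
        return i          # index just past the part derived from A
    return -1             # A cannot derive anything starting here

def parse_S(s):
    # S -> a A B e
    if not s.startswith('a'):
        return False
    i = parse_A(s, 1)
    if i == -1 or s[i:i + 1] != 'd':   # B -> d
        return False
    return s[i + 1:] == 'e'            # the rest of the input must be 'e'

print(parse_S("abbcde"))   # True  -- the article's input, without the $
print(parse_S("abcde"))    # False
```

Greedily consuming "bc" pairs is safe here because B -> d can never start with b, so this particular grammar needs no backtracking after the rewrite.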
Athena Health Interview Experience for Associate Member of Technical Staff 2020 (Virtual) - GeeksforGeeks
04 Nov, 2020
Round 1 (Online Test – Day 1):
Platform: HackerRank
Duration: 90 min
MCQs were based on data structures, OS, DBMS, and pseudo-code output.
Both coding questions were based on arrays, and the difficulty level was moderate.
I was able to complete all 10 MCQs and one coding question worth 75 marks, and got shortlisted for the 2nd round.
24 candidates were shortlisted after this round.
Round 2 (Technical Interview – Day 2):
Platform: HackerRank CodePair
Duration: 1hr
Three coding questions were asked in this round; I was supposed to explain my approach to the interviewer and then implement it in the IDE.
1. Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to the target. You may assume that each input has exactly one solution, and you may not use the same element twice.
Input: Array = [3,2,4,8,1], target = 4
Output: [0,4]
2. Given a sentence, reverse each word in the sentence.
Input: "This is geeksforgeeks"
Output: "sihT si skeegrofskeeg"
3. Given an array with duplicate values, remove the duplicates from the array and then print the array with its length.
Input: Array = [2,1,3,5,6,2,8,2,5]
Output: [1, 2, 3, 5, 6, 8] 6 (length)
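All three problems have short array/string solutions. The sketches below are illustrative Python versions (the language used in the interview was not specified, and the function names are my own):

```python
def two_sum(nums, target):
    """Return indices of the two numbers that add up to target."""
    seen = {}  # value -> index of values visited so far
    for i, n in enumerate(nums):
        if target - n in seen:            # complement already seen
            return [seen[target - n], i]
        seen[n] = i

def reverse_each_word(sentence):
    """Reverse every word in the sentence, keeping word order."""
    return " ".join(word[::-1] for word in sentence.split())

def remove_duplicates(arr):
    """Remove duplicates and return the array with its length."""
    unique = sorted(set(arr))
    return unique, len(unique)

print(two_sum([3, 2, 4, 8, 1], 4))                     # [0, 4]
print(reverse_each_word("This is geeksforgeeks"))      # sihT si skeegrofskeeg
print(remove_duplicates([2, 1, 3, 5, 6, 2, 8, 2, 5]))  # ([1, 2, 3, 5, 6, 8], 6)
```

Note that sorting the de-duplicated array matches the expected output above; if the original element order must be preserved instead, `list(dict.fromkeys(arr))` would do it.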
Tip: Jumping directly into the problem statement is a big NO; never do that. Take some time to understand the problem, and if you have already practiced it before, pretend that you are thinking it through, take some time, and then give your approach to the interviewer.
While implementing, ask the interviewer whether you should explain the code while writing or not (some interviewers don't like it when you speak while writing code, so it's better to ask first).
10 candidates were shortlisted after this round.
Round 3 (Managerial – Day 3):
Platform: Microsoft Teams
Duration: 1 hr
This round focused more on Projects mentioned in the resume. Many technical questions were asked regarding the technologies used in the project and at each point, the interviewer was testing the patience level and presence of the mind of the candidate. The interviewer was from a database background, so he asked a lot of questions about my Database project and also asked queries used in the project.
After a long discussion on projects, he asked me to tell everything about myself like how my day starts and about my weaknesses and strengths. He then asked what I know about the company and its products and why I wanted to join this particular Company.
Tip: While describing your weaknesses, strengths, or any positive quality, keep giving real-life examples that demonstrate them. This will have a great impact on the interviewer.
Read about the company's products in detail; having detailed knowledge about them can be a plus point.
6 candidates were shortlisted after this round.
Round 4 (HR – Day 3):
Platform: Microsoft Teams
Duration: 30-40 min
Some basic questions were asked, such as:
Why I chose to be an engineer.
What are my coding language preferences?
How I practice coding.
Plans for higher education, etc.
After this, she asked some resume-based questions, and we also discussed the deep learning project mentioned in my resume.
Tip: “TELL ME ABOUT YOURSELF” — prepare this question really well; it is a must for any interview.
Final selects Announcement:
Platform: Microsoft Teams
All selected candidates received a mail with a meeting link, and the whole recruiting team was present in the meeting.
2 candidates were selected, and I was one of them.
Best of Luck.
Athena-Health
Marketing
Interview Experiences
How to use ImageView as a Button in Android? - GeeksforGeeks
11 Feb, 2021
ImageView is used when we want to work with images or display them in our application. This article gives you a complete idea of using an ImageView as a Button in Android Studio. So, without further delay, let's see how we can achieve this.
We will be building a simple application in which we will be displaying an ImageView and when we click on that ImageView we will get into a new activity or simply we can say that we are going to use ImageView as a button to switch between different activities. A sample video is given below to get an idea about what we are going to do in this article. Note that we are going to implement this project using the Java language.
Step 1: Create a New Project
To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio. Note that select Java as the programming language.
Step 2: Create another new Activity
Now, we will create another Empty Activity (SecondActivity) to move from one activity to another by clicking the ImageView. To create the second activity, go to File > New > Activity > Empty Activity in your Android project.
Step 3: Working with the activity_main.xml file
Now it’s time to design the layout of the application. For that, go to app > res > layout > activity_main.xml and paste the below-written code into the activity_main.xml file.
XML
<?xml version="1.0" encoding="utf-8"?><RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/relative_layout" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <!--ImageView which will used as a button to switch from one activity to another--> <ImageView android:id="@+id/imageView" android:layout_width="200dp" android:layout_height="wrap_content" android:layout_centerInParent="true" app:srcCompat="@drawable/geeksforgeeks" /> </RelativeLayout>
Step 4: Working with the MainActivity.java file
Go to the app > java > package name > MainActivity.java file and refer to the following code. Below is the code for the MainActivity.java file. Comments are added inside the code to understand the code in more detail.
Java
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.ImageView;

import androidx.appcompat.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {

    ImageView imageView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // initialize imageView with findViewById()
        imageView = findViewById(R.id.imageView);

        // Apply an OnClickListener to imageView to
        // switch from one activity to another
        imageView.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // An Intent takes us to the next activity.
                // SecondActivity is the newly created Empty Activity.
                Intent intent = new Intent(MainActivity.this, SecondActivity.class);
                startActivity(intent);
            }
        });
    }
}
That’s all, now the application is ready to install on the device. Here is what the output of the application looks like.
android
Technical Scripter 2020
Android
Java
Technical Scripter
Bulma - Tabs
Bulma provides a tabbed navigation menu with different styles for displaying content. You can create the tabbed navigation menu with the base class tabs and an unordered list.
The example below demonstrates using the tabs class to create a tabbed navigation menu, including tabs with icons −
<!DOCTYPE html>
<html>
<head>
<meta charset = "utf-8">
<meta name = "viewport" content = "width = device-width, initial-scale = 1">
<title>Bulma Elements Example</title>
<link rel = "stylesheet" href = "https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css">
<script src = "https://use.fontawesome.com/releases/v5.1.0/js/all.js"></script>
<script src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
</head>
<body>
<section class = "section">
<div class = "container">
<span class = "title">Tabs</span><br><br>
<span class = "is-size-5">Simple Tab</span><br><br>
<div class = "tabs">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
<br>
<br>
<span class = "is-size-5">
Centered Tab
</span>
<br>
<br>
<div class = "tabs is-centered">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
<br>
<br>
<span class = "is-size-5">
Right Tab
</span>
<br>
<br>
<div class = "tabs is-right">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
<br>
<br>
<span class = "is-size-5">
Tabs with Icons
</span>
<br>
<br>
<div class = "tabs">
<ul>
<li class = "is-active">
<a>
<span class = "icon is-small">
<i class = "fas fa-home" aria-hidden = "true"></i>
</span>
<span>Home</span>
</a>
</li>
<li>
<a>
<span class = "icon is-small">
<i class = "fas fa-building" aria-hidden = "true"></i>
</span>
<span>About Us</span>
</a>
</li>
<li>
<a>
<span class = "icon is-small">
<i class = "fas fa-cogs" aria-hidden = "true"></i>
</span>
<span>Services</span>
</a>
</li>
<li>
<a>
<span class = "icon is-small">
<i class = "fas fa-file-signature" aria-hidden = "true"></i>
</span>
<span>Contact Us</span>
</a>
</li>
</ul>
</div>
</div>
</section>
</body>
</html>
It renders the four tab menus described above (simple, centered, right-aligned, and with icons), each containing the Home, About Us, Services, and Contact Us tabs.
You can control the size of tabs by adding one of three modifiers (is-small, is-medium, or is-large) to the tabs component.
The example below shows how to display tabs in different sizes −
<!DOCTYPE html>
<html>
<head>
<meta charset = "utf-8">
<meta name = "viewport" content = "width = device-width, initial-scale = 1">
<title>Bulma Elements Example</title>
<link rel = "stylesheet" href = "https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css">
<script src = "https://use.fontawesome.com/releases/v5.1.0/js/all.js"></script>
<script src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
</head>
<body>
<section class = "section">
<div class = "container">
<span class = "title">
Tab Sizes
</span>
<br>
<br>
<span class = "is-size-5">
Small Tab
</span>
<br>
<br>
<div class = "tabs is-small">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
<br>
<br>
<span class = "is-size-5">
Medium Tab
</span>
<br>
<br>
<div class = "tabs is-medium">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
<br>
<br>
<span class = "is-size-5">
Large Tab
</span>
<br>
<br>
<div class = "tabs is-large">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
</div>
</section>
</body>
</html>
It renders the tab menu in small, medium, and large sizes, each with the same four tabs.
You can style tabs with borders (is-boxed), as toggle buttons (is-toggle), or as rounded toggles (is-toggle is-toggle-rounded), as shown in the example below −
<!DOCTYPE html>
<html>
<head>
<meta charset = "utf-8">
<meta name = "viewport" content = "width = device-width, initial-scale = 1">
<title>Bulma Elements Example</title>
<link rel = "stylesheet" href = "https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css">
<script src = "https://use.fontawesome.com/releases/v5.1.0/js/all.js"></script>
<script src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
</head>
<body>
<section class = "section">
<div class = "container">
<span class = "title">
Tab Styles
</span>
<br>
<br>
<span class = "is-size-5">
Bordered Tab
</span>
<br>
<br>
<div class = "tabs is-boxed">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
<br>
<br>
<span class = "is-size-5">
Toggle Tab
</span>
<br>
<br>
<div class = "tabs is-toggle">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
<br>
<br>
<span class = "is-size-5">
Rounded Tab
</span>
<br>
<br>
<div class = "tabs is-toggle is-toggle-rounded">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
</div>
</section>
</body>
</html>
It renders the boxed, toggle, and rounded tab styles, each with the same four tabs.
Bulma allows you to combine tabs with several modifiers at once, such as is-centered, is-boxed, is-medium, and is-fullwidth, as shown in the example below −
<!DOCTYPE html>
<html>
<head>
<meta charset = "utf-8">
<meta name = "viewport" content = "width = device-width, initial-scale = 1">
<title>Bulma Elements Example</title>
<link rel = "stylesheet" href = "https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css">
<script src = "https://use.fontawesome.com/releases/v5.1.0/js/all.js"></script>
<script src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
</head>
<body>
<section class = "section">
<div class = "container">
<span class = "title">
Combining Tabs
</span>
<br>
<br>
<span class = "is-size-5">
Centered and Boxed Tab
</span>
<br>
<br>
<div class = "tabs is-centered is-boxed">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
<br>
<br>
<span class = "is-size-5">
Toggle and Fullwidth Tab
</span>
<br>
<br>
<div class = "tabs is-toggle is-fullwidth">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
<br>
<br>
<span class = "is-size-5">
Centered, Boxed and Medium Tab
</span>
<br>
<br>
<div class = "tabs is-centered is-boxed is-medium">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
<br>
<br>
<span class = "is-size-5">
Toggle, Fullwidth and Large Tab
</span>
<br>
<br>
<div class = "tabs is-toggle is-fullwidth is-large">
<ul>
<li class = "is-active"><a>Home</a></li>
<li><a>About Us</a></li>
<li><a>Services</a></li>
<li><a>Contact Us</a></li>
</ul>
</div>
</div>
</section>
</body>
</html>
It displays the four combined tab menus (centered and boxed, toggle and fullwidth, centered boxed and medium, toggle fullwidth and large), each with the items Home, About Us, Services and Contact Us.
Check if product of digits of a number at even and odd places is equal - GeeksforGeeks
18 Jan, 2022
Given an integer N, the task is to check whether the product of digits at even and odd places of a number are equal. If they are equal, print Yes otherwise print No.
Examples:
Input: N = 2841
Output: Yes
Product of digits at odd places = 2 * 4 = 8
Product of digits at even places = 8 * 1 = 8

Input: N = 4324
Output: No
Product of digits at odd places = 4 * 2 = 8
Product of digits at even places = 3 * 4 = 12
Approach:
Find the product of digits at even places and store it in prodEven.
Find the product of digits at odd places and store it in prodOdd.
If prodEven = prodOdd then print Yes otherwise print No.
Below is the implementation of the above approach:
C++
Java
Python3
C#
PHP
Javascript
// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// Function that returns true if the product
// of even positioned digits is equal to
// the product of odd positioned digits in n
bool productEqual(int n)
{
    // If n is a single digit number
    if (n < 10)
        return false;

    int prodOdd = 1, prodEven = 1;

    while (n > 0) {

        // Take two consecutive digits
        // at a time

        // Last digit
        int digit = n % 10;
        prodEven *= digit;
        n /= 10;

        // If n becomes 0 then there's no more
        // digit (avoids multiplying by 0 when
        // the number of digits is odd)
        if (n == 0)
            break;

        // Second last digit
        digit = n % 10;
        prodOdd *= digit;
        n /= 10;
    }

    // If the products are equal
    if (prodEven == prodOdd)
        return true;

    // If products are not equal
    return false;
}

// Driver code
int main()
{
    int n = 4324;

    if (productEqual(n))
        cout << "Yes";
    else
        cout << "No";

    return 0;
}
// Java implementation of the approach
class GFG {

    // Function that returns true
    // if the product of even positioned
    // digits is equal to the product of
    // odd positioned digits in n
    static boolean productEqual(int n)
    {
        // If n is a single digit number
        if (n < 10)
            return false;

        int prodOdd = 1, prodEven = 1;

        while (n > 0) {

            // Take two consecutive digits
            // at a time

            // First digit
            int digit = n % 10;
            prodOdd *= digit;
            n /= 10;

            // If n becomes 0 then
            // there's no more digit
            if (n == 0)
                break;

            // Second digit
            digit = n % 10;
            prodEven *= digit;
            n /= 10;
        }

        // If the products are equal
        if (prodEven == prodOdd)
            return true;

        // If products are not equal
        return false;
    }

    // Driver code
    public static void main(String args[])
    {
        int n = 4324;

        if (productEqual(n))
            System.out.println("Yes");
        else
            System.out.println("No");
    }

    // This code is contributed by Ryuga
}
# Python implementation of the approach

# Function that returns true if the product
# of even positioned digits is equal to
# the product of odd positioned digits in n
def productEqual(n):

    # If n is a single digit number
    if n < 10:
        return False

    prodOdd = 1
    prodEven = 1

    # Take two consecutive digits
    # at a time
    while n > 0:

        # First digit
        digit = n % 10
        prodOdd *= digit
        n = n // 10

        # If n becomes 0 then
        # there's no more digit
        if n == 0:
            break

        # Second digit
        digit = n % 10
        prodEven *= digit
        n = n // 10

    # If the products are equal
    if prodOdd == prodEven:
        return True

    # If the products are not equal
    return False

# Driver code
n = 4324
if productEqual(n):
    print("Yes")
else:
    print("No")

# This code is contributed by Shrikant13
// C# implementation of the approach
using System;

class GFG {

    // Function that returns true
    // if the product of even positioned
    // digits is equal to the product of
    // odd positioned digits in n
    static bool productEqual(int n)
    {
        // If n is a single digit number
        if (n < 10)
            return false;

        int prodOdd = 1, prodEven = 1;

        while (n > 0) {

            // Take two consecutive digits
            // at a time

            // First digit
            int digit = n % 10;
            prodOdd *= digit;
            n /= 10;

            // If n becomes 0 then
            // there's no more digit
            if (n == 0)
                break;

            // Second digit
            digit = n % 10;
            prodEven *= digit;
            n /= 10;
        }

        // If the products are equal
        if (prodEven == prodOdd)
            return true;

        // If products are not equal
        return false;
    }

    // Driver code
    static void Main()
    {
        int n = 4324;

        if (productEqual(n))
            Console.WriteLine("Yes");
        else
            Console.WriteLine("No");
    }
}

// This code is contributed by mits
<?php
// PHP implementation of the approach

// Function that returns true if the product
// of even positioned digits is equal to
// the product of odd positioned digits in n
function productEqual($n)
{
    // If n is a single digit number
    if ($n < 10)
        return false;

    $prodOdd = 1;
    $prodEven = 1;

    while ($n > 0)
    {
        // Take two consecutive digits
        // at a time

        // First digit
        $digit = $n % 10;
        $prodOdd *= $digit;

        // Integer division (PHP's / would
        // produce a float and never reach 0)
        $n = intdiv($n, 10);

        // If n becomes 0 then
        // there's no more digit
        if ($n == 0)
            break;

        // Second digit
        $digit = $n % 10;
        $prodEven *= $digit;
        $n = intdiv($n, 10);
    }

    // If the products are equal
    if ($prodEven == $prodOdd)
        return true;

    // If products are not equal
    return false;
}

// Driver code
$n = 4324;
if (productEqual($n))
    echo "Yes";
else
    echo "No";

// This code is contributed by jit_t
?>
<script>

// JavaScript implementation of the approach

// Function that returns true if the product
// of even positioned digits is equal to
// the product of odd positioned digits in n
function productEqual(n)
{
    // If n is a single digit number
    if (n < 10)
        return false;

    let prodOdd = 1, prodEven = 1;

    while (n > 0) {

        // Take two consecutive digits
        // at a time

        // First digit
        let digit = n % 10;
        prodOdd *= digit;
        n = Math.floor(n / 10);

        // If n becomes 0 then
        // there's no more digit
        if (n == 0)
            break;

        // Second digit
        digit = n % 10;
        prodEven *= digit;
        n = Math.floor(n / 10);
    }

    // If the products are equal
    if (prodEven == prodOdd)
        return true;

    // If products are not equal
    return false;
}

// Driver code
let n = 4324;

if (productEqual(n))
    document.write("Yes");
else
    document.write("No");

// This code is contributed by Surbhi Tyagi.

</script>
Output:
No
Time complexity: O(log10 n)
Auxiliary Space: O(1)
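The digit-pairing loop above can be sanity-checked against both examples from the problem statement. The sketch below is an illustrative Python re-implementation (the name `product_equal` is mine, not from the article):

```python
def product_equal(n: int) -> bool:
    # Multiply digits at alternating positions, walking from the last digit.
    if n < 10:
        return False
    prod_a, prod_b = 1, 1
    while n > 0:
        prod_a *= n % 10
        n //= 10
        # Stop when digits run out, so an odd digit count
        # never multiplies a product by 0.
        if n == 0:
            break
        prod_b *= n % 10
        n //= 10
    return prod_a == prod_b

for n in (2841, 4324):
    print(n, "Yes" if product_equal(n) else "No")
```

Running it prints `2841 Yes` and `4324 No`, matching the expected outputs above.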
Another approach:
Convert the integer to a string. Traverse the string, keeping the product of digits at even indices in one variable and the product of digits at odd indices in another.
If both products are equal, print Yes; otherwise print No.
Below is the implementation:
C++
Java
Python3
C#
Javascript
// C++ implementation of the approach
#include <iostream>
using namespace std;

void getResult(int n)
{
    // To store the respective product
    int proOdd = 1;
    int proEven = 1;

    // Converting integer to string
    string num = to_string(n);

    // Traversing the string
    for(int i = 0; i < num.size(); i++)
        if (i % 2 == 0)
            proOdd = proOdd * (num[i] - '0');
        else
            proEven = proEven * (num[i] - '0');

    if (proOdd == proEven)
        cout << "Yes";
    else
        cout << "No";
}

// Driver code
int main()
{
    int n = 4324;

    getResult(n);
    return 0;
}

// This code is contributed by sudhanshugupta2019a
// Java implementation of the approach
import java.util.*;

class GFG{

static void getResult(int n)
{
    // To store the respective product
    int proOdd = 1;
    int proEven = 1;

    // Converting integer to String
    String num = String.valueOf(n);

    // Traversing the String
    for(int i = 0; i < num.length(); i++)
        if (i % 2 == 0)
            proOdd = proOdd * (num.charAt(i) - '0');
        else
            proEven = proEven * (num.charAt(i) - '0');

    if (proOdd == proEven)
        System.out.print("Yes");
    else
        System.out.print("No");
}

// Driver code
public static void main(String[] args)
{
    int n = 4324;

    getResult(n);
}
}

// This code is contributed by 29AjayKumar
# Python3 implementation of the approach

def getResult(n):

    # To store the respective product
    proOdd = 1
    proEven = 1

    # Converting integer to string
    num = str(n)

    # Traversing the string
    for i in range(len(num)):
        if(i % 2 == 0):
            proOdd = proOdd * int(num[i])
        else:
            proEven = proEven * int(num[i])

    if(proOdd == proEven):
        print("Yes")
    else:
        print("No")

# Driver code
if __name__ == "__main__":

    n = 4324
    getResult(n)

# This code is contributed by vikkycirus
// C# implementation of the approach
using System;

public class GFG{

static void getResult(int n)
{
    // To store the respective product
    int proOdd = 1;
    int proEven = 1;

    // Converting integer to String
    String num = String.Join("", n);

    // Traversing the String
    for(int i = 0; i < num.Length; i++)
        if (i % 2 == 0)
            proOdd = proOdd * (num[i] - '0');
        else
            proEven = proEven * (num[i] - '0');

    if (proOdd == proEven)
        Console.Write("Yes");
    else
        Console.Write("No");
}

// Driver code
public static void Main(String[] args)
{
    int n = 4324;

    getResult(n);
}
}

// This code is contributed by 29AjayKumar
<script>

// Javascript implementation of the approach

function getResult(n)
{
    // To store the respective product
    let proOdd = 1;
    let proEven = 1;

    // Converting integer to String
    let num = n.toString();

    // Traversing the String
    for(let i = 0; i < num.length; i++)
        if (i % 2 == 0)
            proOdd = proOdd * (num[i].charCodeAt() - '0'.charCodeAt());
        else
            proEven = proEven * (num[i].charCodeAt() - '0'.charCodeAt());

    if (proOdd == proEven)
        document.write("Yes");
    else
        document.write("No");
}

// Driver code
let n = 4324;
getResult(n);

</script>
Output:
No
Time complexity: O(d), where d is the number of digits in the integer.
Auxiliary Space: O(1)
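For reference, the string-based approach condenses naturally in modern Python: `math.prod` (Python 3.8+) multiplies the digits at even and odd string indices directly. This compact form is my illustration, not code from the article:

```python
from math import prod

def product_equal_str(n: int) -> bool:
    # Single-digit numbers have no second group of digits to compare.
    s = str(n)
    if len(s) < 2:
        return False
    # Even indices hold the 1st, 3rd, ... digits; odd indices the 2nd, 4th, ...
    return prod(map(int, s[0::2])) == prod(map(int, s[1::2]))

print("Yes" if product_equal_str(2841) else "No")
print("Yes" if product_equal_str(4324) else "No")
```

It agrees with the loop-based version on both examples (Yes for 2841, No for 4324).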
[
{
"code": null,
"e": 24301,
"s": 24273,
"text": "\n18 Jan, 2022"
},
{
"code": null,
"e": 24467,
"s": 24301,
"text": "Given an integer N, the task is to check whether the product of digits at even and odd places of a number are equal. If they are equal, print Yes otherwise print No."
},
{
"code": null,
"e": 24478,
"s": 24467,
"text": "Examples: "
},
{
"code": null,
"e": 24595,
"s": 24478,
"text": "Input: N = 2841 Output: Yes Product of digits at odd places = 2 * 4 = 8 Product of digits at even places = 8 * 1 = 8"
},
{
"code": null,
"e": 24715,
"s": 24595,
"text": "Input: N = 4324 Output: No Product of digits at odd places = 4 * 2 = 8 Product of digits at even places = 3 * 4 = 12 "
},
{
"code": null,
"e": 24727,
"s": 24715,
"text": "Approach: "
},
{
"code": null,
"e": 24795,
"s": 24727,
"text": "Find the product of digits at even places and store it in prodEven."
},
{
"code": null,
"e": 24861,
"s": 24795,
"text": "Find the product of digits at odd places and store it in prodOdd."
},
{
"code": null,
"e": 24918,
"s": 24861,
"text": "If prodEven = prodOdd then print Yes otherwise print No."
},
{
"code": null,
"e": 24971,
"s": 24918,
"text": "Below is the implementation of the above approach: "
},
{
"code": null,
"e": 24975,
"s": 24971,
"text": "C++"
},
{
"code": null,
"e": 24980,
"s": 24975,
"text": "Java"
},
{
"code": null,
"e": 24988,
"s": 24980,
"text": "Python3"
},
{
"code": null,
"e": 24991,
"s": 24988,
"text": "C#"
},
{
"code": null,
"e": 24995,
"s": 24991,
"text": "PHP"
},
{
"code": null,
"e": 25006,
"s": 24995,
"text": "Javascript"
},
{
"code": "// C++ implementation of the approach#include <bits/stdc++.h>using namespace std; // Function that returns true if the product// of even positioned digits is equal to// the product of odd positioned digits in nbool productEqual(int n){ // If n is a single digit number if (n < 10) return false; int prodOdd = 1, prodEven = 1; while (n > 0) { // Take two consecutive digits // at a time // last digit int digit = n % 10; prodEven *= digit; n /= 10; // Second last digit digit = n % 10; prodOdd *= digit; n /= 10; } // If the products are equal if (prodEven == prodOdd) return true; // If products are not equal return false;} // Driver codeint main(){ int n = 4324; if (productEqual(n)) cout << \"Yes\"; else cout << \"No\"; return 0;}",
"e": 25882,
"s": 25006,
"text": null
},
{
"code": "// Java implementation of the approach class GFG { // Function that returns true // if the product of even positioned // digits is equal to the product of // odd positioned digits in n static boolean productEqual(int n) { // If n is a single digit number if (n < 10) return false; int prodOdd = 1, prodEven = 1; while (n > 0) { // Take two consecutive digits // at a time // First digit int digit = n % 10; prodOdd *= digit; n /= 10; // If n becomes 0 then // there's no more digit if (n == 0) break; // Second digit digit = n % 10; prodEven *= digit; n /= 10; } // If the products are equal if (prodEven == prodOdd) return true; // If products are not equal return false; } // Driver code public static void main(String args[]) { int n = 4324; if (productEqual(n)) System.out.println(\"Yes\"); else System.out.println(\"No\"); } // This code is contributed by Ryuga}",
"e": 27078,
"s": 25882,
"text": null
},
{
"code": "# Python implementation of the approach # Function that returns true if the product# of even positioned digits is equal to# the product of odd positioned digits in n def productEqual(n): if n < 10: return False prodOdd = 1 prodEven = 1 # Take two consecutive digits # at a time # First digit while n > 0: digit = n % 10 prodOdd *= digit n = n//10 # If n becomes 0 then # there's no more digit if n == 0: break digit = n % 10 prodEven *= digit n = n//10 # If the products are equal if prodOdd == prodEven: return True # If the products are not equal return False # Driver coden = 4324if productEqual(n): print(\"Yes\")else: print(\"No\") # This code is contributed by Shrikant13",
"e": 27885,
"s": 27078,
"text": null
},
{
"code": "// C# implementation of the approachusing System; class GFG { // Function that returns true // if the product of even positioned // digits is equal to the product of // odd positioned digits in n static bool productEqual(int n) { // If n is a single digit number if (n < 10) return false; int prodOdd = 1, prodEven = 1; while (n > 0) { // Take two consecutive digits // at a time // First digit int digit = n % 10; prodOdd *= digit; n /= 10; // If n becomes 0 then // there's no more digit if (n == 0) break; // Second digit digit = n % 10; prodEven *= digit; n /= 10; } // If the products are equal if (prodEven == prodOdd) return true; // If products are not equal return false; } // Driver code static void Main() { int n = 4324; if (productEqual(n)) Console.WriteLine(\"Yes\"); else Console.WriteLine(\"No\"); }} // This code is contributed by mits",
"e": 29063,
"s": 27885,
"text": null
},
{
"code": "<?php// PHP implementation of the approach // Function that returns true if the product// of even positioned digits is equal to// the product of odd positioned digits in nfunction productEqual($n){ // If n is a single digit number if ($n < 10) return false; $prodOdd = 1; $prodEven = 1; while ($n > 0) { // Take two consecutive digits // at a time // First digit $digit = $n % 10; $prodOdd *= $digit; $n /= 10; // If n becomes 0 then // there's no more digit if ($n == 0) break; // Second digit $digit = $n % 10; $prodEven *= $digit; $n /= 10; } // If the products are equal if ($prodEven == $prodOdd) return true; // If products are not equal return false;} // Driver code$n = 4324;if (productEqual(!$n)) echo \"Yes\";else echo \"No\"; // This code is contributed by jit_t?>",
"e": 30006,
"s": 29063,
"text": null
},
{
"code": "<script> // JavaScript implementation of the approach // Function that returns true if the product// of even positioned digits is equal to// the product of odd positioned digits in nfunction productEqual(n){ // If n is a single digit number if (n < 10) return false; let prodOdd = 1, prodEven = 1; while (n > 0) { // Take two consecutive digits // at a time // First digit let digit = n % 10; prodOdd *= digit; n = Math.floor(n / 10); // If n becomes 0 then // there's no more digit if (n == 0) break; // Second digit digit = n % 10; prodEven *= digit; n = Math.floor(n / 10); } // If the products are equal if (prodEven == prodOdd) return true; // If products are not equal return false;} // Driver code let n = 4324; if (productEqual(n)) document.write(\"Yes\"); else document.write(\"No\"); // This code is contributed by Surbhi Tyagi. </script>",
"e": 31027,
"s": 30006,
"text": null
},
{
"code": null,
"e": 31030,
"s": 31027,
"text": "No"
},
{
"code": null,
"e": 31059,
"s": 31032,
"text": "Time complexity: O(log10n)"
},
{
"code": null,
"e": 31081,
"s": 31059,
"text": "Auxiliary Space: O(1)"
},
{
"code": null,
"e": 31276,
"s": 31081,
"text": "Convert the integer to string. Traverse the string and store all even indices’ products in one variable and all odd indices’ products in another variable.If both are equal then print Yes else No"
},
{
"code": null,
"e": 31431,
"s": 31276,
"text": "Convert the integer to string. Traverse the string and store all even indices’ products in one variable and all odd indices’ products in another variable."
},
{
"code": null,
"e": 31472,
"s": 31431,
"text": "If both are equal then print Yes else No"
},
{
"code": null,
"e": 31501,
"s": 31472,
"text": "Below is the implementation:"
},
{
"code": null,
"e": 31505,
"s": 31501,
"text": "C++"
},
{
"code": null,
"e": 31510,
"s": 31505,
"text": "Java"
},
{
"code": null,
"e": 31518,
"s": 31510,
"text": "Python3"
},
{
"code": null,
"e": 31521,
"s": 31518,
"text": "C#"
},
{
"code": null,
"e": 31532,
"s": 31521,
"text": "Javascript"
},
{
"code": "// C++ implementation of the approach#include <iostream>using namespace std; void getResult(int n){ // To store the respective product int proOdd = 1; int proEven = 1; // Converting integer to string string num = to_string(n); // Traversing the string for(int i = 0; i < num.size(); i++) if (i % 2 == 0) proOdd = proOdd * (num[i] - '0'); else proEven = proEven * (num[i] - '0'); if (proOdd == proEven) cout << \"Yes\"; else cout << \"No\";} // Driver codeint main(){ int n = 4324; getResult(n); return 0;} // This code is contributed by sudhanshugupta2019a",
"e": 32189,
"s": 31532,
"text": null
},
{
"code": "// Java implementation of the approach import java.util.*; class GFG{ static void getResult(int n){ // To store the respective product int proOdd = 1; int proEven = 1; // Converting integer to String String num = String.valueOf(n); // Traversing the String for(int i = 0; i < num.length(); i++) if (i % 2 == 0) proOdd = proOdd * (num.charAt(i) - '0'); else proEven = proEven * (num.charAt(i) - '0'); if (proOdd == proEven) System.out.print(\"Yes\"); else System.out.print(\"No\");} // Driver codepublic static void main(String[] args){ int n = 4324; getResult(n); }} // This code is contributed by 29AjayKumar",
"e": 32895,
"s": 32189,
"text": null
},
{
"code": "# Python3 implementation of the approach def getResult(n): # To store the respective product proOdd = 1 proEven = 1 # Converting integer to string num = str(n) # Traversing the string for i in range(len(num)): if(i % 2 == 0): proOdd = proOdd*int(num[i]) else: proEven = proEven*int(num[i]) if(proOdd == proEven): print(\"Yes\") else: print(\"No\") # Driver codeif __name__ == \"__main__\": n = 4324 getResult(n) # This code is contributed by vikkycirus",
"e": 33438,
"s": 32895,
"text": null
},
{
"code": "// C# implementation of the approachusing System;public class GFG{ static void getResult(int n){ // To store the respective product int proOdd = 1; int proEven = 1; // Converting integer to String String num = String.Join(\"\",n); // Traversing the String for(int i = 0; i < num.Length; i++) if (i % 2 == 0) proOdd = proOdd * (num[i] - '0'); else proEven = proEven * (num[i] - '0'); if (proOdd == proEven) Console.Write(\"Yes\"); else Console.Write(\"No\");} // Driver codepublic static void Main(String[] args){ int n = 4324; getResult(n);}} // This code is contributed by 29AjayKumar",
"e": 34109,
"s": 33438,
"text": null
},
{
"code": "<script> // Javascript implementation of the approach function getResult(n) { // To store the respective product let proOdd = 1; let proEven = 1; // Converting integer to String let num = n.toString(); // Traversing the String for(let i = 0; i < num.length; i++) if (i % 2 == 0) proOdd = proOdd * (num[i].charCodeAt() - '0'.charCodeAt()); else proEven = proEven * (num[i].charCodeAt() - '0'.charCodeAt()); if (proOdd == proEven) document.write(\"Yes\"); else document.write(\"No\"); } let n = 4324; getResult(n); </script>",
"e": 34803,
"s": 34109,
"text": null
},
{
"code": null,
"e": 34811,
"s": 34803,
"text": "Output:"
},
{
"code": null,
"e": 34814,
"s": 34811,
"text": "No"
},
{
"code": null,
"e": 34885,
"s": 34814,
"text": "Time complexity: O(d), where d is the number of digits in the integer."
},
{
"code": null,
"e": 34907,
"s": 34885,
"text": "Auxiliary Space: O(1)"
},
{
"code": null,
"e": 34920,
"s": 34907,
"text": "Mithun Kumar"
},
{
"code": null,
"e": 34928,
"s": 34920,
"text": "ankthon"
},
{
"code": null,
"e": 34940,
"s": 34928,
"text": "shrikanth13"
},
{
"code": null,
"e": 34946,
"s": 34940,
"text": "jit_t"
},
{
"code": null,
"e": 34957,
"s": 34946,
"text": "vikkycirus"
},
{
"code": null,
"e": 34977,
"s": 34957,
"text": "sudhanshugupta2019a"
},
{
"code": null,
"e": 34991,
"s": 34977,
"text": "surbhityagi15"
},
{
"code": null,
"e": 35003,
"s": 34991,
"text": "29AjayKumar"
},
{
"code": null,
"e": 35014,
"s": 35003,
"text": "decode2207"
},
{
"code": null,
"e": 35025,
"s": 35014,
"text": "alok900000"
},
{
"code": null,
"e": 35035,
"s": 35025,
"text": "samim2000"
},
{
"code": null,
"e": 35049,
"s": 35035,
"text": "number-digits"
},
{
"code": null,
"e": 35062,
"s": 35049,
"text": "Mathematical"
},
{
"code": null,
"e": 35075,
"s": 35062,
"text": "Mathematical"
},
{
"code": null,
"e": 35173,
"s": 35075,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 35182,
"s": 35173,
"text": "Comments"
},
{
"code": null,
"e": 35195,
"s": 35182,
"text": "Old Comments"
},
{
"code": null,
"e": 35240,
"s": 35195,
"text": "Find all factors of a natural number | Set 1"
},
{
"code": null,
"e": 35272,
"s": 35240,
"text": "Check if a number is Palindrome"
},
{
"code": null,
"e": 35316,
"s": 35272,
"text": "Program to print prime numbers from 1 to N."
},
{
"code": null,
"e": 35350,
"s": 35316,
"text": "Program to add two binary strings"
},
{
"code": null,
"e": 35383,
"s": 35350,
"text": "Program to multiply two matrices"
},
{
"code": null,
"e": 35408,
"s": 35383,
"text": "Fizz Buzz Implementation"
},
{
"code": null,
"e": 35447,
"s": 35408,
"text": "Find pair with maximum GCD in an array"
},
{
"code": null,
"e": 35498,
"s": 35447,
"text": "Find Union and Intersection of two unsorted arrays"
},
{
"code": null,
"e": 35569,
"s": 35498,
"text": "Count all possible paths from top left to bottom right of a mXn matrix"
}
] |
Beautiful Boxplots With Statistical Significance Annotation | by Serafeim Loukas | Towards Data Science
|
I always remember myself reading some nice scientific publications where the authors would have created some nice boxplots with statistical annotations. In most of these cases, a statistical test had been used to determine whether there was a statistically significant difference in the mean value of a specific feature between different groups.
I have now managed to create some custom python code to do exactly this: produce beautiful boxplots with statistical annotations integrated. In this short article, I just show how to create such beautiful boxplots in Python.
We will use the Iris Dataset as we have done in all my previous posts. The dataset contains four features (length and width of sepals and petals) of 50 samples of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). The dataset is often used in data mining, classification and clustering examples and to test algorithms.
For reference, here are pictures of the three flower species:
For this short tutorial, we will be only using 2 out of the 3 classes i.e. the setosa and versicolor classes. This is done only for the sake of simplicity.
Step 1: Let’s load the data and sub-select the desired 2 flower classes:
from sklearn.datasets import load_iris
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np

# Load the Iris dataset
X = load_iris().data
y = load_iris().target
feature_names = load_iris().feature_names
classes_names = load_iris().target_names

# Use only 2 classes for this example
mask = y != 2
X, y = X[mask, :], y[mask]

# Get the remaining class names
classes_names[[0, 1]]
# array(['setosa', 'versicolor'], dtype='<U10')
Step 2: We have now selected all the samples for the 2 classes: the setosa & versicolor flower classes. We will put the data into a pandas dataframe to make our lives easier:
df = pd.DataFrame(X, columns=feature_names)
df['Group'] = [i for i in y]

# The long format is needed for the boxplots later on
df_long = pd.melt(df, 'Group', var_name='Feature', value_name='Value')
df.head()
Step 3: Let’s inspect the dataframe:
As we can see, we have 4 features and the last column denotes the group membership of the corresponding sample.
Step 4: Now it’s time to do the statistical tests. We will use a two-sample t-test (since our groups are independent) to test if the mean value of any of these 4 features (i.e. sepal length, sepal width, petal length, petal width) is statistically different between the 2 groups of flowers (setosa and versicolor).
#* Statistical tests for differences in the features across groups
from scipy import stats

all_t = list()
all_p = list()
for case in range(len(feature_names)):
    sub_df = df_long[df_long.Feature == feature_names[case]]
    g1 = sub_df[sub_df['Group'] == 0]['Value'].values
    g2 = sub_df[sub_df['Group'] == 1]['Value'].values
    t, p = stats.ttest_ind(g1, g2)
    all_t.append(t)
    all_p.append(p)
To do the statistical test we just used:
t, p = stats.ttest_ind(g1, g2)
Here we compare the mean of g1 (group 1: setosa) to the mean of g2 (group 2: versicolor) and we do that for all 4 features (using the for loop).
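For intuition, here is a hand-rolled sketch of the pooled two-sample t-statistic that `stats.ttest_ind` computes by default (`equal_var=True`). The toy samples below are made up for illustration; they are not the Iris data:

```python
import math

def pooled_t_statistic(g1, g2):
    """Two-sample t-statistic with pooled variance (equal variances assumed)."""
    n1, n2 = len(g1), len(g2)
    m1 = sum(g1) / n1
    m2 = sum(g2) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in g1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in g2) / (n2 - 1)
    # Pooled variance across both groups
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

# Toy example: group 1 has the smaller mean, so t is negative
print(round(pooled_t_statistic([1, 2, 3], [2, 4, 6]), 4))  # -1.5492
```

The sign of the numerator (mean of g1 minus mean of g2) is what drives the interpretation discussed next.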
But how can we know if the mean of g1 (group 1: setosa) was significantly greater or smaller than the mean of g2 (group 2: versicolor)?
For this, we need to look at the t-values.
print(all_t)
# [-10.52098626754911, 9.454975848128596, -39.492719391538095, -34.08034154357719]

print(feature_names)
# ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
Interpretation:
If the t-value is positive (>0) then the mean of g1 (group 1: setosa) was significantly greater than the mean of g2 (group 2: versicolor).
If the t-value is negative (<0) then the mean of g1 (group 1: setosa) was significantly smaller than the mean of g2 (group 2: versicolor).
Reminder: feature_names = [‘sepal length (cm)’, ‘sepal width (cm)’, ‘petal length (cm)’, ‘petal width (cm)’].
We can conclude that only the mean value of sepal width of g1 (setosa) was statistically greater than the mean value of sepal width of g2 (versicolor).
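The conclusion above can be checked programmatically by pairing each feature with the sign of its t-value. This small sketch hardcodes the numbers copied from the printed output above:

```python
feature_names = ['sepal length (cm)', 'sepal width (cm)',
                 'petal length (cm)', 'petal width (cm)']
all_t = [-10.52098626754911, 9.454975848128596,
         -39.492719391538095, -34.08034154357719]

for name, t in zip(feature_names, all_t):
    direction = "greater" if t > 0 else "smaller"
    print(f"{name}: setosa mean is {direction} than versicolor mean (t = {t:.2f})")
```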
Step 5: Check the t-test results
print(np.count_nonzero(np.array(feature_names)[np.array(all_p) < 0.05]))
# 4
Interpretation: We can see that there is a statistically significant difference in all 4 features between setosa and versicolor classes.
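When reporting such tests, a common convention (a hypothetical addition here, not part of the original code) maps p-value thresholds to star labels instead of a fixed annotation text:

```python
def p_to_stars(p):
    """Map a p-value to a conventional significance label."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "n.s."  # not significant

print(p_to_stars(0.0004))  # ***
print(p_to_stars(0.03))    # *
print(p_to_stars(0.2))     # n.s.
```

A helper like this could feed the `axes[idx].text(...)` call in the plotting code below, so each subplot shows its own significance level.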
Step 6: Here is the magic. Let’s create some beautiful boxplots and annotate them with the estimated statistical significance.
# Renaming so that class 0 will appear as setosa and class 1 as versicolor
df_long.loc[df_long.Group == 0, 'Group'] = classes_names[0]
df_long.loc[df_long.Group == 1, 'Group'] = classes_names[1]

# Boxplots
fig, axes = plt.subplots(2, 2, figsize=(14, 10), dpi=100)
axes = axes.flatten()
for idx, feature in enumerate(feature_names):
    ax = sns.boxplot(x="Feature", hue="Group", y="Value",
                     data=df_long[df_long.Feature == feature],
                     linewidth=2, showmeans=True,
                     meanprops={"marker": "*", "markerfacecolor": "white",
                                "markeredgecolor": "black"},
                     ax=axes[idx])

    #* tick params
    axes[idx].set_xticklabels([str(feature)], rotation=0)
    axes[idx].set(xlabel=None)
    axes[idx].set(ylabel=None)
    axes[idx].grid(alpha=0.5)
    axes[idx].legend(loc="lower right", prop={'size': 11})

    #* set edge color = black
    for b in range(len(ax.artists)):
        ax.artists[b].set_edgecolor('black')
        ax.artists[b].set_alpha(0.8)

    #* statistical tests
    x1, x2 = -0.20, 0.20
    y, h, col = df_long[df_long.Feature == feature]["Value"].max() + 1, 2, 'k'
    axes[idx].plot([x1, x1, x2, x2], [y, y + h, y + h, y], lw=1.5, c=col)
    axes[idx].text((x1 + x2) * .5, y + h, "statistically significant",
                   ha='center', va='bottom', color=col)

fig.suptitle("Significant feature differences between setosa and versicolor classes/groups",
             size=14, y=0.93)
plt.show()
As we can see from the statistical tests, we can conclude that only the mean value of sepal width of group 1 (setosa) was statistically greater than the mean value of sepal width of group 2 (versicolor).
On the other hand, the mean value of sepal length, petal length and petal width of the Setosa group was statistically smaller than the mean value of the Versicolor group.
These observations can be also verified by looking at boxplots.
That’s all folks! Hope you liked this article!
If you liked and found this article useful, follow me to be able to see all my new posts.
Questions? Post them as a comment and I will reply as soon as possible.
LinkedIn: https://www.linkedin.com/in/serafeim-loukas/
ResearchGate: https://www.researchgate.net/profile/Serafeim_Loukas
EPFL profile: https://people.epfl.ch/serafeim.loukas
Stack Overflow: https://stackoverflow.com/users/5025009/seralouk
|
[
{
"code": null,
"e": 517,
"s": 171,
"text": "I always remember myself reading some nice scientific publications where the authors would have created some nice boxplots with statistical annotations. In most of these cases, a statistical test had been used to determine whether there was a statistically significant difference in the mean value of a specific feature between different groups."
},
{
"code": null,
"e": 742,
"s": 517,
"text": "I have now managed to create some custom python code to do exactly this: produce beautiful boxplots with statistical annotations integrated. In this short article, I just show how to create such beautiful boxplots in Python."
},
{
"code": null,
"e": 1083,
"s": 742,
"text": "We will use the Iris Dataset as we have done in all my previous posts. The dataset contains four features (length and width of sepals and petals) of 50 samples of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). The dataset is often used in data mining, classification and clustering examples and to test algorithms."
},
{
"code": null,
"e": 1146,
"s": 1083,
"text": "For reference, here are pictures of the three flowers species:"
},
{
"code": null,
"e": 1302,
"s": 1146,
"text": "For this short tutorial, we will be only using 2 out of the 3 classes i.e. the setosa and versicolor classes. This is done only for the sake of simplicity."
},
{
"code": null,
"e": 1375,
"s": 1302,
"text": "Step 1: Let’s load the data and sub-select the desired 2 flower classes:"
},
{
"code": null,
"e": 1819,
"s": 1375,
"text": "from sklearn.datasets import load_irisimport pandas as pdimport seaborn as snsimport matplotlib.pyplot as pltimport numpy as np# Load the Iris datasetX = load_iris().datay = load_iris().targetfeature_names = load_iris().feature_namesclasses_names = load_iris().target_names# Use only 2 classes for this examplemask = y!=2X,y = X[mask,:], y[mask]# Get the remained class namesclasses_names[[0,1]]# array(['setosa', 'versicolor'], dtype='<U10')"
},
{
"code": null,
"e": 1988,
"s": 1819,
"text": "Step 2:We have now selected all the samples for the 2 classes: setosa & versicolor flower classes. We will put the data into a panda dataframe to make our lives easier:"
},
{
"code": null,
"e": 2181,
"s": 1988,
"text": "df = pd.DataFrame(X,columns=feature_names)df['Group'] = [i for i in y]df_long = pd.melt(df, 'Group', var_name='Feature', value_name='Value') # this is needed for the boxplots later ondf.head()"
},
{
"code": null,
"e": 2218,
"s": 2181,
"text": "Step 3: Let’s inspect the dataframe:"
},
{
"code": null,
"e": 2329,
"s": 2218,
"text": "As we can see, we have 4 features and the last column denote the group membership of the corresponding sample."
},
{
"code": null,
"e": 2643,
"s": 2329,
"text": "Step 4: Now it’s time to do the statistical tests. We will use a two-sample t-test (since our group are independent) to test if the mean value of any of these 4 features (i.e. sepal length, sepal width, petal length, petal width) is statistically different between the 2 groups of flowers (setosa and versicolor)."
},
{
"code": null,
"e": 3037,
"s": 2643,
"text": "#* Statistical tests for differences in the features across groupsfrom scipy import statsall_t = list()all_p = list()for case in range(len(feature_names)): sub_df = df_long[df_long.Feature == feature_names[case]] g1 = sub_df[sub_df['Group'] == 0]['Value'].values g2 = sub_df[sub_df['Group'] == 1]['Value'].values t, p = stats.ttest_ind(g1, g2) all_t.append(t) all_p.append(p)"
},
{
"code": null,
"e": 3078,
"s": 3037,
"text": "To do the statistical test we just used:"
},
{
"code": null,
"e": 3109,
"s": 3078,
"text": "t, p = stats.ttest_ind(g1, g2)"
},
{
"code": null,
"e": 3254,
"s": 3109,
"text": "Here we compare the mean of g1 (group 1: setosa) to the mean of g2 (group 2: versicolor) and we do that for all 4 features (using the for loop)."
},
{
"code": null,
"e": 3390,
"s": 3254,
"text": "But how can we know if the mean of g1 (group 1: setosa) was significantly greater or smaller than the mean of g2 (group 2: versicolor)?"
},
{
"code": null,
"e": 3433,
"s": 3390,
"text": "For this, we need to look at the t-values."
},
{
"code": null,
"e": 3628,
"s": 3433,
"text": "print(all_t)[-10.52098626754911, 9.454975848128596, -39.492719391538095, -34.08034154357719]print(feature_names)['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']"
},
{
"code": null,
"e": 3644,
"s": 3628,
"text": "Interpretation:"
},
{
"code": null,
"e": 3783,
"s": 3644,
"text": "If the t-value is positive (>0) then the mean of g1 (group 1: setosa) was significantly greater than the mean of g2 (group 2: versicolor)."
},
{
"code": null,
"e": 3922,
"s": 3783,
"text": "If the t-value is negative (<0) then the mean of g1 (group 1: setosa) was significantly smaller than the mean of g2 (group 2: versicolor)."
},
{
"code": null,
"e": 4032,
"s": 3922,
"text": "Reminder: feature_names = [‘sepal length (cm)’, ‘sepal width (cm)’, ‘petal length (cm)’, ‘petal width (cm)’]."
},
{
"code": null,
"e": 4184,
"s": 4032,
"text": "We can conclude that only the mean value of sepal width of g1 (setosa) was statistically greater than the mean value of sepal width of g2 (versicolor)."
},
{
"code": null,
"e": 4217,
"s": 4184,
"text": "Step 5: Check the t-test results"
},
{
"code": null,
"e": 4293,
"s": 4217,
"text": "print(np.count_nonzero(np.array(feature_names)[np.array(all_p) < 0.05]))# 4"
},
{
"code": null,
"e": 4430,
"s": 4293,
"text": "Interpretation: We can see that there is a statistically significant difference in all 4 features between setosa and versicolor classes."
},
{
"code": null,
"e": 4557,
"s": 4430,
"text": "Step 6: Here is the magic. Let’s create some beautiful boxplots and annotate them with the estimated statistical significance."
},
{
"code": null,
"e": 5883,
"s": 4557,
"text": "# renaming so that class 0 will appear as setosa and class 1 as versicolordf_long.loc[df_long.Group==0, 'Group'] = classes_names[0]df_long.loc[df_long.Group==1, 'Group'] = classes_names[1]# Boxplotsfig, axes = plt.subplots(2,2, figsize=(14,10), dpi=100)axes = axes.flatten()for idx, feature in enumerate(feature_names): ax = sns.boxplot(x=”Feature”, hue=”Group”, y=”Value”, data = df_long[df_long.Feature == feature], linewidth=2, showmeans=True, meanprops={“marker”:”*”,”markerfacecolor”:”white”, “markeredgecolor”:”black”}, ax=axes[idx]) #* tick params axes[idx].set_xticklabels([str(feature)], rotation=0) axes[idx].set(xlabel=None) axes[idx].set(ylabel=None) axes[idx].grid(alpha=0.5) axes[idx].legend(loc=”lower right”, prop={‘size’: 11}) #*set edge color = black for b in range(len(ax.artists)): ax.artists[b].set_edgecolor(‘black’) ax.artists[b].set_alpha(0.8) #* statistical tests x1, x2 = -0.20, 0.20 y, h, col = df_long[df_long.Feature == feature][“Value”].max()+1, 2, ‘k’ axes[idx].plot([x1, x1, x2, x2], [y, y+h, y+h, y], lw=1.5, c=col) axes[idx].text((x1+x2)*.5, y+h, “statistically significant”, ha=’center’, va=’bottom’, color=col)fig.suptitle(\"Significant feature differences between setosa and versicolor classes/groups\", size=14, y=0.93)plt.show()"
},
{
"code": null,
"e": 6087,
"s": 5883,
"text": "As we can see from the statistical tests, we can conclude that only the mean value of sepal width of group 1 (setosa) was statistically greater than the mean value of sepal width of group 2 (versicolor)."
},
{
"code": null,
"e": 6258,
"s": 6087,
"text": "On the other hand, the mean value of sepal length, petal length and petal width of the Setosa group was statistically smaller than the mean value of the Versicolor group."
},
{
"code": null,
"e": 6322,
"s": 6258,
"text": "These observations can be also verified by looking at boxplots."
},
{
"code": null,
"e": 6369,
"s": 6322,
"text": "That’s all folks! Hope you liked this article!"
},
{
"code": null,
"e": 6459,
"s": 6369,
"text": "If you liked and found this article useful, follow me to be able to see all my new posts."
},
{
"code": null,
"e": 6531,
"s": 6459,
"text": "Questions? Post them as a comment and I will reply as soon as possible."
},
{
"code": null,
"e": 6724,
"s": 6669,
"text": "LinkedIn: https://www.linkedin.com/in/serafeim-loukas/"
},
{
"code": null,
"e": 6791,
"s": 6724,
"text": "ResearchGate: https://www.researchgate.net/profile/Serafeim_Loukas"
},
{
"code": null,
"e": 6844,
"s": 6791,
"text": "EPFL profile: https://people.epfl.ch/serafeim.loukas"
}
] |
Machine Learning Workflow on Diabetes Data: Part 01 | by Lahiru Liyanapathirana | Towards Data Science
|
“Machine learning in a medical setting can help enhance medical diagnosis dramatically.”
This article will portray how data related to diabetes can be leveraged to predict if a person has diabetes or not. More specifically, this article will focus on how machine learning can be utilized to predict diseases such as diabetes. By the end of this article series you will be able to understand concepts like data exploration, data cleansing, feature selection, model selection, model evaluation and apply them in a practical way.
Diabetes is a disease that occurs when the blood glucose level becomes high, which ultimately leads to other health problems such as heart disease, kidney disease, etc. Diabetes is caused mainly by the consumption of highly processed food, bad consumption habits, etc. According to WHO, the number of people with diabetes has increased over the years.
Python 3.+
Anaconda (Scikit Learn, Numpy, Pandas, Matplotlib, Seaborn)
Jupyter Notebook.
Basic understanding of supervised machine learning methods: specifically classification.
As data scientists, the most tedious task we encounter is the acquisition and preparation of a data set. Even though there is an abundance of data in this era, it is still hard to find a suitable data set that suits the problem you are trying to tackle. If there aren’t any suitable data sets to be found, you might have to create your own.
In this tutorial we aren’t going to create our own data set, instead, we will be using an existing data set called the “Pima Indians Diabetes Database” provided by the UCI Machine Learning Repository (famous repository for machine learning data sets). We will be performing the machine learning workflow with the Diabetes Data set provided above.
When encountered with a data set, first we should analyze and “get to know” the data set. This step is necessary to familiarize with the data, to gain some understanding of the potential features and to see if data cleaning is needed.
First, we will import the necessary libraries and import our data set to the Jupyter notebook. We can observe the mentioned columns in the data set.
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

diabetes = pd.read_csv('datasets/diabetes.csv')
diabetes.columns
Index(['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age', 'Outcome'], dtype='object')
Important: It should be noted that the above data set contains only limited features, where as in reality numerous features come into play.
We can examine the data set using the pandas’ head() method.
diabetes.head()
We can find the dimensions of the data set using the pandas DataFrame’s ‘shape’ attribute.
print("Diabetes data set dimensions : {}".format(diabetes.shape))
Diabetes data set dimensions : (768, 9)
We can observe that the data set contains 768 rows and 9 columns. ‘Outcome’ is the column we are going to predict, which says whether the patient is diabetic or not: 1 means the person is diabetic and 0 means the person is not. We can identify that out of the 768 persons, 500 are labeled as 0 (non-diabetic) and 268 as 1 (diabetic).
diabetes.groupby('Outcome').size()
Visualization of data is an imperative aspect of data science. It helps to understand data and also to explain the data to another person. Python has several interesting visualization libraries such as Matplotlib, Seaborn, etc.
In this tutorial we will use pandas’ visualization which is built on top of matplotlib, to find the data distribution of the features.
We can use the following code to draw histograms for the two responses separately. (The images are not displayed here.)
diabetes.groupby('Outcome').hist(figsize=(9, 9))
The next phase of the machine learning workflow is data cleaning. It is considered one of the crucial steps of the workflow, because it can make or break the model. There is a saying in machine learning, “Better data beats fancier algorithms”, which suggests that better data gives you better resulting models.
There are several factors to consider in the data cleaning process.
Duplicate or irrelevant observations.
Bad labeling of data, same category occurring multiple times.
Missing or null data points.
Unexpected outliers.
We won’t be discussing about the data cleaning procedure in detail in this tutorial.
Since we are using a standard data set, we can safely assume that factors 1, 2 are already dealt with. Unexpected outliers are either useful or potentially harmful.
We can find any missing or null data points of the data set (if there is any) using the following pandas function.
diabetes.isnull().sum()
diabetes.isna().sum()
We can observe that there are no data points missing in the data set. If there were any, we should deal with them accordingly.
When analyzing the histogram we can identify that there are some outliers in some columns. We will further analyze those outliers and determine what we can do about them.
Blood pressure: By observing the data we can see that there are 0 values for blood pressure. And it is evident that the readings of the data set seem wrong because a living person cannot have a diastolic blood pressure of zero. By observing the data we can see 35 counts where the value is 0.
print("Total : ", diabetes[diabetes.BloodPressure == 0].shape[0])
# Total :  35

print(diabetes[diabetes.BloodPressure == 0].groupby('Outcome')['Age'].count())
# Outcome
# 0    19
# 1    16
# Name: Age, dtype: int64
Plasma glucose levels: Even after fasting glucose levels would not be as low as zero. Therefore zero is an invalid reading. By observing the data we can see 5 counts where the value is 0.
print("Total : ", diabetes[diabetes.Glucose == 0].shape[0])
# Total :  5

print(diabetes[diabetes.Glucose == 0].groupby('Outcome')['Age'].count())
# Outcome
# 0    3
# 1    2
# Name: Age, dtype: int64
Skin Fold Thickness: For normal people, skin fold thickness can’t be less than 10 mm, let alone zero. Total count where value is 0: 227.
print("Total : ", diabetes[diabetes.SkinThickness == 0].shape[0])
# Total :  227

print(diabetes[diabetes.SkinThickness == 0].groupby('Outcome')['Age'].count())
# Outcome
# 0    139
# 1    88
# Name: Age, dtype: int64
BMI: Should not be 0 or close to zero unless the person is really underweight which could be life-threatening.
print("Total : ", diabetes[diabetes.BMI == 0].shape[0])
# Total :  11

print(diabetes[diabetes.BMI == 0].groupby('Outcome')['Age'].count())
# Outcome
# 0    9
# 1    2
# Name: Age, dtype: int64
Insulin: In a rare situation a person can have zero insulin but by observing the data, we can find that there is a total of 374 counts.
print("Total : ", diabetes[diabetes.Insulin == 0].shape[0])
# Total :  374

print(diabetes[diabetes.Insulin == 0].groupby('Outcome')['Age'].count())
# Outcome
# 0    236
# 1    138
# Name: Age, dtype: int64
Here are several ways to handle invalid data values :
Ignore/remove these cases: This is not actually possible in most cases because that would mean losing valuable information. And in this case “skin thickness” and “insulin” columns mean to have a lot of invalid points. But it might work for “BMI”, “glucose ”and “blood pressure” data points.
Put average/mean values: This might work for some data sets, but in our case putting a mean value to the blood pressure column would send a wrong signal to the model.
Avoid using features: It is possible to not use the features with a lot of invalid values for the model. This may work for “skin thickness” but it's hard to predict that.
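As a minimal sketch of the “put average/mean values” option (using the median instead of the mean, which is a judgment call, and a made-up sample that only borrows this data set’s column names): zeros are first marked as missing, then each column is filled with its own central value.

```python
import numpy as np
import pandas as pd

# Tiny made-up sample; the column names mirror the real data set
df = pd.DataFrame({'Glucose': [148, 0, 183, 89],
                   'BloodPressure': [72, 66, 0, 66]})

cols_with_invalid_zeros = ['Glucose', 'BloodPressure']
df[cols_with_invalid_zeros] = df[cols_with_invalid_zeros].replace(0, np.nan)
df = df.fillna(df.median())  # impute each column with its own median

print((df[cols_with_invalid_zeros] == 0).sum().sum())  # 0 invalid zeros remain
```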
By the end of the data cleaning process, we have come to the conclusion that this given data set is incomplete. Since this is a demonstration for machine learning we will proceed with the given data with some minor adjustments.
We will remove the rows which the “BloodPressure”, “BMI” and “Glucose” are zero.
diabetes_mod = diabetes[(diabetes.BloodPressure != 0) &
                        (diabetes.BMI != 0) &
                        (diabetes.Glucose != 0)]
print(diabetes_mod.shape)
# (724, 9)
Feature engineering is the process of transforming the gathered data into features that better represent the problem to the model, in order to improve its performance and accuracy.
Feature engineering creates more input features from the existing features and also combines several features to produce more intuitive features to feed to the model.
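As a toy illustration of combining existing features into a new, more intuitive one (the ratio below is hypothetical and is not used anywhere in this tutorial; the values are made up):

```python
import numpy as np
import pandas as pd

# Toy frame whose column names mirror the real data set
toy = pd.DataFrame({'Glucose': [148, 85, 183],
                    'Insulin': [0, 94, 168]})

# Treat the invalid zero insulin reading as missing, then build a ratio feature
toy['Insulin'] = toy['Insulin'].replace(0, np.nan)
toy['GlucosePerInsulin'] = toy['Glucose'] / toy['Insulin']
print(toy)
```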
“Feature engineering enables us to highlight the important features and facilitates bringing domain expertise on the problem to the table. It also allows us to avoid overfitting the model despite providing many input features”.
The domain of the problem we are trying to tackle requires lots of related features. Since the data set is already provided, and by examining the data we can’t further create or dismiss any data at this point. In the data set, we have the following features.
‘Pregnancies’, ‘Glucose’, ‘Blood Pressure’, ‘Skin Thickness’, ‘Insulin’, ‘BMI’, ‘Diabetes Pedigree Function’, ‘Age’
By a crude observation, we can say that the ‘Skin Thickness’ is not an indicator of diabetes. But we can’t conclude that it is unusable at this point.
Therefore we will use all the features available. We separate the data set into features and the response that we are going to predict. We will assign the features to the X variable and the response to the y variable.
feature_names = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness',
                 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']
X = diabetes_mod[feature_names]
y = diabetes_mod.Outcome
Generally feature engineering is performed before selecting the model. However, for this tutorial we follow a different approach. Initially, we will feed all the features provided in the data set to the model; we will revisit feature engineering to discuss feature importance for the selected model.
The given article gives a very good explanation of Feature Engineering.
Model selection or algorithm selection phase is the most exciting and the heart of machine learning. It is the phase where we select the model which performs best for the data set at hand.
First, we will be calculating the “Classification Accuracy (Testing Accuracy)” of a given set of classification models with their default parameters to determine which model performs better with the diabetes data set.
We will import the necessary libraries for the notebook. We import 7 classifiers, namely K-Nearest Neighbors, Support Vector Classifier, Logistic Regression, Decision Tree, Gaussian Naive Bayes, Random Forest and Gradient Boost, to be contenders for the best classifier.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
We will initialize the classifier models with their default parameters and add them to a model list.
models = []
models.append(('KNN', KNeighborsClassifier()))
models.append(('SVC', SVC()))
models.append(('LR', LogisticRegression()))
models.append(('DT', DecisionTreeClassifier()))
models.append(('GNB', GaussianNB()))
models.append(('RF', RandomForestClassifier()))
models.append(('GB', GradientBoostingClassifier()))
Ex: Generally training the model with Scikit learn is as follows.
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
It is a general practice to avoid training and testing on the same data. The reasons are that the goal of the model is to predict out-of-sample data, and the model could be overly complex leading to overfitting. To avoid the aforementioned problems, there are two precautions.
Train/Test Split
K-Fold Cross-Validation
We will import “train_test_split” for train/test split and “cross_val_score” for k-fold cross-validation. “accuracy_score” is to evaluate the accuracy of the model in the train/test split method.
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score
We will perform the mentioned methods to find the best performing base models.
This method splits the data set into two portions: a training set and a testing set. The training set is used to train the model, and the testing set is used to test the model and evaluate the accuracy.
Pros : Train/test split is still useful because of its flexibility and speed.
Cons : Provides a high-variance estimate of out-of-sample accuracy
Train/Test Split with Scikit Learn :
Next, we can split the features and responses into train and test portions. We stratify (a process where each response class should be represented with equal proportions in each of the portions) the samples.
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify = diabetes_mod.Outcome, random_state=0)
Then we fit each model in a loop and calculate the accuracy of the respective model using the “accuracy_score”.
names = []
scores = []
for name, model in models:
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    scores.append(accuracy_score(y_test, y_pred))
    names.append(name)
tr_split = pd.DataFrame({'Name': names, 'Score': scores})
print(tr_split)
This method splits the data set into K equal partitions (“folds”), then uses one fold as the testing set and the union of the other folds as the training set. The model is then tested for accuracy. The process is repeated K times, using a different fold as the testing set each time. The average testing accuracy across the iterations is the overall testing accuracy.
Pros : More accurate estimate of out-of-sample accuracy. More “efficient” use of data (every observation is used for both training and testing)
Cons : Much slower than Train/Test split.
It is preferred to use this method where computation capability is not scarce. We will be using this method from here on out.
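The fold mechanics can be sketched in plain Python. The helper below is a simplified stand-in for scikit-learn's KFold, producing K contiguous train/test index splits:

```python
def kfold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k contiguous folds; each fold
    serves once as the test set, with the remaining folds as the training set."""
    # Earlier folds absorb the remainder, so fold sizes differ by at most one
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        splits.append((train, test))
        start += size
    return splits

# 10 samples, 3 folds -> test folds of sizes 4, 3, 3
for train, test in kfold_indices(10, 3):
    print(len(train), test)
```

Every sample lands in exactly one test fold, which is why each observation is used for both training and testing across the K iterations.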
K-Fold Cross Validation with Scikit Learn:
We will move forward with K-fold cross-validation as it is more accurate and uses the data more efficiently. We will train the models using 10-fold cross-validation and calculate the mean accuracy of each model. “cross_val_score” provides its own training and accuracy calculation interface.
from sklearn.model_selection import KFold

names = []
scores = []
for name, model in models:
    kfold = KFold(n_splits=10, random_state=10)
    score = cross_val_score(model, X, y, cv=kfold, scoring='accuracy').mean()
    names.append(name)
    scores.append(score)
kf_cross_val = pd.DataFrame({'Name': names, 'Score': scores})
print(kf_cross_val)
We can plot the accuracy scores using Seaborn.
axis = sns.barplot(x = 'Name', y = 'Score', data = kf_cross_val)
axis.set(xlabel='Classifier', ylabel='Accuracy')
for p in axis.patches:
    height = p.get_height()
    axis.text(p.get_x() + p.get_width()/2, height + 0.005, '{:1.4f}'.format(height), ha="center")

plt.show()
We can see that Logistic Regression, Gaussian Naive Bayes, Random Forest and Gradient Boosting have performed better than the rest. At this base level, Logistic Regression performs best among the algorithms.
At the baseline, Logistic Regression managed to achieve a classification accuracy of 77.64%. It will be selected as the prime candidate for the next phases.
In this article we discussed the basic machine learning workflow steps, such as data exploration, data cleaning, feature engineering basics and model selection, using the Scikit-learn library. In the next article I will discuss feature engineering and hyperparameter tuning in more detail.
You can find the Part 02 of this series at the following link.
towardsdatascience.com
Source code that created this post can be found below.
github.com
If you have any problem or question regarding this article, please do not hesitate to leave a comment below or drop me an email: [email protected]
Hope you enjoyed the article. Cheers !!!
|
[
{
"code": null,
"e": 261,
"s": 172,
"text": "“Machine learning in a medical setting can help enhance medical diagnosis dramatically.”"
},
{
"code": null,
"e": 699,
"s": 261,
"text": "This article will portray how data related to diabetes can be leveraged to predict if a person has diabetes or not. More specifically, this article will focus on how machine learning can be utilized to predict diseases such as diabetes. By the end of this article series you will be able to understand concepts like data exploration, data cleansing, feature selection, model selection, model evaluation and apply them in a practical way."
},
{
"code": null,
"e": 1061,
"s": 699,
"text": "Diabetes is a disease that occurs when the blood glucose level becomes high, which ultimately leads to other health problems such as heart diseases, kidney disease, etc. Diabetes is caused mainly due to the consumption of highly processed food, bad consumption habits, etc. According to WHO, the number of people with diabetes has been increased over the years."
},
{
"code": null,
"e": 1072,
"s": 1061,
"text": "Python 3.+"
},
{
"code": null,
"e": 1132,
"s": 1072,
"text": "Anaconda (Scikit Learn, Numpy, Pandas, Matplotlib, Seaborn)"
},
{
"code": null,
"e": 1150,
"s": 1132,
"text": "Jupyter Notebook."
},
{
"code": null,
"e": 1239,
"s": 1150,
"text": "Basic understanding of supervised machine learning methods: specifically classification."
},
{
"code": null,
"e": 1589,
"s": 1239,
"text": "As a Data Scientist, the most tedious task which we encounter is the acquiring and the preparation of a data set. Even though there is an abundance of data in this era, it is still hard to find a suitable data set that suits the problem you are trying to tackle. If there aren’t any suitable data sets to be found, you might have to create your own."
},
{
"code": null,
"e": 1936,
"s": 1589,
"text": "In this tutorial we aren’t going to create our own data set, instead, we will be using an existing data set called the “Pima Indians Diabetes Database” provided by the UCI Machine Learning Repository (famous repository for machine learning data sets). We will be performing the machine learning workflow with the Diabetes Data set provided above."
},
{
"code": null,
"e": 2171,
"s": 1936,
"text": "When encountered with a data set, first we should analyze and “get to know” the data set. This step is necessary to familiarize with the data, to gain some understanding of the potential features and to see if data cleaning is needed."
},
{
"code": null,
"e": 2320,
"s": 2171,
"text": "First, we will import the necessary libraries and import our data set to the Jupyter notebook. We can observe the mentioned columns in the data set."
},
{
"code": null,
"e": 2492,
"s": 2320,
"text": "%matplotlib inlineimport pandas as pdimport numpy as npimport matplotlib.pyplot as pltimport seaborn as snsdiabetes = pd.read_csv('datasets/diabetes.csv')diabetes.columns "
},
{
"code": null,
"e": 2640,
"s": 2492,
"text": "Index(['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age', 'Outcome'], dtype='object')"
},
{
"code": null,
"e": 2780,
"s": 2640,
"text": "Important: It should be noted that the above data set contains only limited features, where as in reality numerous features come into play."
},
{
"code": null,
"e": 2841,
"s": 2780,
"text": "We can examine the data set using the pandas’ head() method."
},
{
"code": null,
"e": 2857,
"s": 2841,
"text": "diabetes.head()"
},
{
"code": null,
"e": 2947,
"s": 2857,
"text": "We can find the dimensions of the data set using the panda Dataframes’ ‘shape’ attribute."
},
{
"code": null,
"e": 3013,
"s": 2947,
"text": "print(\"Diabetes data set dimensions : {}\".format(diabetes.shape))"
},
{
"code": null,
"e": 3053,
"s": 3013,
"text": "Diabetes data set dimensions : (768, 9)"
},
{
"code": null,
"e": 3384,
"s": 3053,
"text": "We can observe that the data set contain 768 rows and 9 columns. ‘Outcome’ is the column which we are going to predict, which says if the patient is diabetic or not. 1 means the person is diabetic and 0 means a person is not. We can identify that out of the 768 persons, 500 are labeled as 0 (non-diabetic) and 268 as 1 (diabetic)"
},
{
"code": null,
"e": 3419,
"s": 3384,
"text": "diabetes.groupby('Outcome').size()"
},
{
"code": null,
"e": 3647,
"s": 3419,
"text": "Visualization of data is an imperative aspect of data science. It helps to understand data and also to explain the data to another person. Python has several interesting visualization libraries such as Matplotlib, Seaborn, etc."
},
{
"code": null,
"e": 3782,
"s": 3647,
"text": "In this tutorial we will use pandas’ visualization which is built on top of matplotlib, to find the data distribution of the features."
},
{
"code": null,
"e": 3902,
"s": 3782,
"text": "We can use the following code to draw histograms for the two responses separately. (The images are not displayed here.)"
},
{
"code": null,
"e": 3951,
"s": 3902,
"text": "diabetes.groupby(‘Outcome’).hist(figsize=(9, 9))"
},
{
"code": null,
"e": 4257,
"s": 3951,
"text": "The next phase of the machine learning work flow is data cleaning. Considered to be one of the crucial steps of the workflow, because it can make or break the model. There is a saying in machine learning “Better data beats fancier algorithms”, which suggests better data gives you better resulting models."
},
{
"code": null,
"e": 4325,
"s": 4257,
"text": "There are several factors to consider in the data cleaning process."
},
{
"code": null,
"e": 4472,
"s": 4325,
"text": "Duplicate or irrelevant observations.Bad labeling of data, same category occurring multiple times.Missing or null data points.Unexpected outliers."
},
{
"code": null,
"e": 4510,
"s": 4472,
"text": "Duplicate or irrelevant observations."
},
{
"code": null,
"e": 4572,
"s": 4510,
"text": "Bad labeling of data, same category occurring multiple times."
},
{
"code": null,
"e": 4601,
"s": 4572,
"text": "Missing or null data points."
},
{
"code": null,
"e": 4622,
"s": 4601,
"text": "Unexpected outliers."
},
{
"code": null,
"e": 4707,
"s": 4622,
"text": "We won’t be discussing about the data cleaning procedure in detail in this tutorial."
},
{
"code": null,
"e": 4872,
"s": 4707,
"text": "Since we are using a standard data set, we can safely assume that factors 1, 2 are already dealt with. Unexpected outliers are either useful or potentially harmful."
},
{
"code": null,
"e": 4987,
"s": 4872,
"text": "We can find any missing or null data points of the data set (if there is any) using the following pandas function."
},
{
"code": null,
"e": 5032,
"s": 4987,
"text": "diabetes.isnull().sum()diabetes.isna().sum()"
},
{
"code": null,
"e": 5159,
"s": 5032,
"text": "We can observe that there are no data points missing in the data set. If there were any, we should deal with them accordingly."
},
{
"code": null,
"e": 5330,
"s": 5159,
"text": "When analyzing the histogram we can identify that there are some outliers in some columns. We will further analyze those outliers and determine what we can do about them."
},
{
"code": null,
"e": 5623,
"s": 5330,
"text": "Blood pressure: By observing the data we can see that there are 0 values for blood pressure. And it is evident that the readings of the data set seem wrong because a living person cannot have a diastolic blood pressure of zero. By observing the data we can see 35 counts where the value is 0."
},
{
"code": null,
"e": 5822,
"s": 5623,
"text": "print(\"Total : \", diabetes[diabetes.BloodPressure == 0].shape[0])Total : 35print(diabetes[diabetes.BloodPressure == 0].groupby('Outcome')['Age'].count())Outcome0 191 16Name: Age, dtype: int64"
},
{
"code": null,
"e": 6010,
"s": 5822,
"text": "Plasma glucose levels: Even after fasting glucose levels would not be as low as zero. Therefore zero is an invalid reading. By observing the data we can see 5 counts where the value is 0."
},
{
"code": null,
"e": 6204,
"s": 6010,
"text": "print(\"Total : \", diabetes[diabetes.Glucose == 0].shape[0])Total : 5print(diabetes[diabetes.Glucose == 0].groupby('Outcome')['Age'].count())Total : 5Outcome0 31 2Name: Age, dtype: int64"
},
{
"code": null,
"e": 6341,
"s": 6204,
"text": "Skin Fold Thickness: For normal people, skin fold thickness can’t be less than 10 mm better yet zero. Total count where value is 0: 227."
},
{
"code": null,
"e": 6543,
"s": 6341,
"text": "print(\"Total : \", diabetes[diabetes.SkinThickness == 0].shape[0])Total : 227print(diabetes[diabetes.SkinThickness == 0].groupby('Outcome')['Age'].count())Outcome0 1391 88Name: Age, dtype: int64"
},
{
"code": null,
"e": 6654,
"s": 6543,
"text": "BMI: Should not be 0 or close to zero unless the person is really underweight which could be life-threatening."
},
{
"code": null,
"e": 6831,
"s": 6654,
"text": "print(\"Total : \", diabetes[diabetes.BMI == 0].shape[0])Total : 11print(diabetes[diabetes.BMI == 0].groupby('Outcome')['Age'].count())Outcome0 91 2Name: Age, dtype: int64"
},
{
"code": null,
"e": 6967,
"s": 6831,
"text": "Insulin: In a rare situation a person can have zero insulin but by observing the data, we can find that there is a total of 374 counts."
},
{
"code": null,
"e": 7157,
"s": 6967,
"text": "print(\"Total : \", diabetes[diabetes.Insulin == 0].shape[0])Total : 374print(diabetes[diabetes.Insulin == 0].groupby('Outcome')['Age'].count())Outcome0 2361 138Name: Age, dtype: int64"
},
{
"code": null,
"e": 7211,
"s": 7157,
"text": "Here are several ways to handle invalid data values :"
},
{
"code": null,
"e": 7838,
"s": 7211,
"text": "Ignore/remove these cases: This is not actually possible in most cases because that would mean losing valuable information. And in this case “skin thickness” and “insulin” columns mean to have a lot of invalid points. But it might work for “BMI”, “glucose ”and “blood pressure” data points.Put average/mean values: This might work for some data sets, but in our case putting a mean value to the blood pressure column would send a wrong signal to the model.Avoid using features: It is possible to not use the features with a lot of invalid values for the model. This may work for “skin thickness” but it's hard to predict that."
},
{
"code": null,
"e": 8129,
"s": 7838,
"text": "Ignore/remove these cases: This is not actually possible in most cases because that would mean losing valuable information. And in this case “skin thickness” and “insulin” columns mean to have a lot of invalid points. But it might work for “BMI”, “glucose ”and “blood pressure” data points."
},
{
"code": null,
"e": 8296,
"s": 8129,
"text": "Put average/mean values: This might work for some data sets, but in our case putting a mean value to the blood pressure column would send a wrong signal to the model."
},
{
"code": null,
"e": 8467,
"s": 8296,
"text": "Avoid using features: It is possible to not use the features with a lot of invalid values for the model. This may work for “skin thickness” but it's hard to predict that."
},
{
"code": null,
"e": 8695,
"s": 8467,
"text": "By the end of the data cleaning process, we have come to the conclusion that this given data set is incomplete. Since this is a demonstration for machine learning we will proceed with the given data with some minor adjustments."
},
{
"code": null,
"e": 8776,
"s": 8695,
"text": "We will remove the rows which the “BloodPressure”, “BMI” and “Glucose” are zero."
},
{
"code": null,
"e": 8912,
"s": 8776,
"text": "diabetes_mod = diabetes[(diabetes.BloodPressure != 0) & (diabetes.BMI != 0) & (diabetes.Glucose != 0)]print(diabetes_mod.shape)(724, 9)"
},
{
"code": null,
"e": 9112,
"s": 8912,
"text": "Feature engineering is the process of transforming the gathered data into features that better represent the problem that we are trying to solve to the model, to improve its performance and accuracy."
},
{
"code": null,
"e": 9279,
"s": 9112,
"text": "Feature engineering creates more input features from the existing features and also combines several features to produce more intuitive features to feed to the model."
},
{
"code": null,
"e": 9504,
"s": 9279,
"text": "“ Feature engineering enables us to highlight the important features and facilitate to bring domain expertise on the problem to the table. It also allows avoiding overfitting the model despite providing many input features”."
},
{
"code": null,
"e": 9763,
"s": 9504,
"text": "The domain of the problem we are trying to tackle requires lots of related features. Since the data set is already provided, and by examining the data we can’t further create or dismiss any data at this point. In the data set, we have the following features."
},
{
"code": null,
"e": 9879,
"s": 9763,
"text": "‘Pregnancies’, ‘Glucose’, ‘Blood Pressure’, ‘Skin Thickness’, ‘Insulin’, ‘BMI’, ‘Diabetes Pedigree Function’, ‘Age’"
},
{
"code": null,
"e": 10035,
"s": 9879,
"text": "By a crude observation, we can say that the ‘Skin Thickness’ is not an indicator of diabetes. But we can’t deny the fact that it is unusable at this point."
},
{
"code": null,
"e": 10253,
"s": 10035,
"text": "Therefore we will use all the features available. We separate the data set into features and the response that we are going to predict. We will assign the features to the X variable and the response to the y variable."
},
{
"code": null,
"e": 10438,
"s": 10253,
"text": "feature_names = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']X = diabetes_mod[feature_names]y = diabetes_mod.Outcome"
},
{
"code": null,
"e": 10746,
"s": 10438,
"text": "Generally feature engineering is performed before selecting the model. However, for this tutorial we follow a different approach. Initially, we will be utilizing all the features provided in the data set to the model, we will revisit features engineering to discuss feature importance on the selected model."
},
{
"code": null,
"e": 10818,
"s": 10746,
"text": "The given article gives a very good explanation of Feature Engineering."
},
{
"code": null,
"e": 11007,
"s": 10818,
"text": "Model selection or algorithm selection phase is the most exciting and the heart of machine learning. It is the phase where we select the model which performs best for the data set at hand."
},
{
"code": null,
"e": 11225,
"s": 11007,
"text": "First, we will be calculating the “Classification Accuracy (Testing Accuracy)” of a given set of classification models with their default parameters to determine which model performs better with the diabetes data set."
},
{
"code": null,
"e": 11480,
"s": 11225,
"text": "We will import the necessary libraries for the notebook. We import 7 classifiers namely K-Nearest Neighbors, Support Vector Classifier, Logistic Regression, Gaussian Naive Bayes, Random Forest, and Gradient Boost to be contenders for the best classifier."
},
{
"code": null,
"e": 11804,
"s": 11480,
"text": "from sklearn.neighbors import KNeighborsClassifierfrom sklearn.svm import SVCfrom sklearn.linear_model import LogisticRegressionfrom sklearn.tree import DecisionTreeClassifierfrom sklearn.naive_bayes import GaussianNBfrom sklearn.ensemble import RandomForestClassifierfrom sklearn.ensemble import GradientBoostingClassifier"
},
{
"code": null,
"e": 11905,
"s": 11804,
"text": "We will initialize the classifier models with their default parameters and add them to a model list."
},
{
"code": null,
"e": 12216,
"s": 11905,
"text": "models = []models.append(('KNN', KNeighborsClassifier()))models.append(('SVC', SVC()))models.append(('LR', LogisticRegression()))models.append(('DT', DecisionTreeClassifier()))models.append(('GNB', GaussianNB()))models.append(('RF', RandomForestClassifier()))models.append(('GB', GradientBoostingClassifier()))"
},
{
"code": null,
"e": 12282,
"s": 12216,
"text": "Ex: Generally training the model with Scikit learn is as follows."
},
{
"code": null,
"e": 12336,
"s": 12282,
"text": "knn = KNeighborsClassifier()knn.fit(X_train, y_train)"
},
{
"code": null,
"e": 12613,
"s": 12336,
"text": "It is a general practice to avoid training and testing on the same data. The reasons are that the goal of the model is to predict out-of-sample data, and the model could be overly complex leading to overfitting. To avoid the aforementioned problems, there are two precautions."
},
{
"code": null,
"e": 12653,
"s": 12613,
"text": "Train/Test SplitK-Fold Cross-Validation"
},
{
"code": null,
"e": 12670,
"s": 12653,
"text": "Train/Test Split"
},
{
"code": null,
"e": 12694,
"s": 12670,
"text": "K-Fold Cross-Validation"
},
{
"code": null,
"e": 12890,
"s": 12694,
"text": "We will import “train_test_split” for train/test split and “cross_val_score” for k-fold cross-validation. “accuracy_score” is to evaluate the accuracy of the model in the train/test split method."
},
{
"code": null,
"e": 13036,
"s": 12890,
"text": "from sklearn.model_selection import train_test_splitfrom sklearn.model_selection import cross_val_scorefrom sklearn.metrics import accuracy_score"
},
{
"code": null,
"e": 13115,
"s": 13036,
"text": "We will perform the mentioned methods to find the best performing base models."
},
{
"code": null,
"e": 13318,
"s": 13115,
"text": "This method split the data set into two portions: a training set and a testing set. The training set is used to train the model. And the testing set is used to test the model, and evaluate the accuracy."
},
{
"code": null,
"e": 13400,
"s": 13318,
"text": "Pros : But, train/test split is still useful because of its flexibility and speed"
},
{
"code": null,
"e": 13467,
"s": 13400,
"text": "Cons : Provides a high-variance estimate of out-of-sample accuracy"
},
{
"code": null,
"e": 13504,
"s": 13467,
"text": "Train/Test Split with Scikit Learn :"
},
{
"code": null,
"e": 13712,
"s": 13504,
"text": "Next, we can split the features and responses into train and test portions. We stratify (a process where each response class should be represented with equal proportions in each of the portions) the samples."
},
{
"code": null,
"e": 13819,
"s": 13712,
"text": "X_train, X_test, y_train, y_test = train_test_split(X, y, stratify = diabetes_mod.Outcome, random_state=0)"
},
{
"code": null,
"e": 13931,
"s": 13819,
"text": "Then we fit each model in a loop and calculate the accuracy of the respective model using the “accuracy_score”."
},
{
"code": null,
"e": 14187,
"s": 13931,
"text": "names = []scores = []for name, model in models: model.fit(X_train, y_train) y_pred = model.predict(X_test) scores.append(accuracy_score(y_test, y_pred)) names.append(name)tr_split = pd.DataFrame({'Name': names, 'Score': scores})print(tr_split)"
},
{
"code": null,
"e": 14552,
"s": 14187,
"text": "This method splits the data set into K equal partitions (“folds”), then use 1 fold as the testing set and the union of the other folds as the training set. Then the model is tested for accuracy. The process will follow the above steps K times, using different folds as the testing set each time. The average testing accuracy of the process is the testing accuracy."
},
{
"code": null,
"e": 14696,
"s": 14552,
"text": "Pros : More accurate estimate of out-of-sample accuracy. More “efficient” use of data (every observation is used for both training and testing)"
},
{
"code": null,
"e": 14738,
"s": 14696,
"text": "Cons : Much slower than Train/Test split."
},
{
"code": null,
"e": 14864,
"s": 14738,
"text": "It is preferred to use this method where computation capability is not scarce. We will be using this method from here on out."
},
{
"code": null,
"e": 14908,
"s": 14864,
"text": "K-Fold Cross Validation with Scikit Learn :"
},
{
"code": null,
"e": 15194,
"s": 14908,
"text": "We will move forward with K-Fold cross validation as it is more accurate and use the data efficiently. We will train the models using 10 fold cross validation and calculate the mean accuracy of the models. “cross_val_score” provides its own training and accuracy calculation interface."
},
{
"code": null,
"e": 15501,
"s": 15194,
"text": "names = []scores = []for name, model in models: kfold = KFold(n_splits=10, random_state=10) score = cross_val_score(model, X, y, cv=kfold, scoring='accuracy').mean() names.append(name) scores.append(score)kf_cross_val = pd.DataFrame({'Name': names, 'Score': scores})print(kf_cross_val)"
},
{
"code": null,
"e": 15547,
"s": 15501,
"text": "We can plot the accuracy scores using seaborn"
},
{
"code": null,
"e": 15821,
"s": 15547,
"text": "axis = sns.barplot(x = 'Name', y = 'Score', data = kf_cross_val)axis.set(xlabel='Classifier', ylabel='Accuracy')for p in axis.patches: height = p.get_height() axis.text(p.get_x() + p.get_width()/2, height + 0.005, '{:1.4f}'.format(height), ha=\"center\") plt.show()"
},
{
"code": null,
"e": 16059,
"s": 15821,
"text": "We can see the Logistic Regression, Gaussian Naive Bayes, Random Forest and Gradient Boosting have performed better than the rest. From the base level we can observe that the Logistic Regression performs better than the other algorithms."
},
{
"code": null,
"e": 16218,
"s": 16059,
"text": "At the baseline Logistic Regression managed to achieve a classification accuracy of 77.64 %. This will be selected as the prime candidate for the next phases."
},
{
"code": null,
"e": 16518,
"s": 16218,
"text": "In this article we discussed about the basic machine learning workflow steps such as data exploration, data cleaning steps, feature engineering basics and model selection using Scikit Learn library. In the next article I will be discussing more about feature engineering, and hyper parameter tuning."
},
{
"code": null,
"e": 16581,
"s": 16518,
"text": "You can find the Part 02 of this series at the following link."
},
{
"code": null,
"e": 16604,
"s": 16581,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 16659,
"s": 16604,
"text": "Source code that created this post can be found below."
},
{
"code": null,
"e": 16670,
"s": 16659,
"text": "github.com"
},
{
"code": null,
"e": 16821,
"s": 16670,
"text": "If you have any problem or question regarding this article, please do not hesitate to leave a comment below or drop me an email: [email protected]"
}
] |
C# | Get or set the value associated with specified key in ListDictionary - GeeksforGeeks
|
01 Feb, 2019
ListDictionary.Item[Object] property is used to get or set the value associated with the specified key.
Syntax:
public object this[object key] { get; set; }
Here, key is the key whose value to get or set.
Return Value : The value associated with the specified key. If the specified key is not found, attempting to get it returns null, and attempting to set it creates a new entry using the specified key.
Exception: This property will give ArgumentNullException if the key is null.
Example:
// C# code to get or set the value
// associated with the specified key
using System;
using System.Collections;
using System.Collections.Specialized;

class GFG {

    // Driver code
    public static void Main()
    {
        // Creating a ListDictionary named myDict
        ListDictionary myDict = new ListDictionary();

        // Adding key/value pairs in myDict
        myDict.Add("Australia", "Canberra");
        myDict.Add("Belgium", "Brussels");
        myDict.Add("Netherlands", "Amsterdam");
        myDict.Add("China", "Beijing");
        myDict.Add("Russia", "Moscow");
        myDict.Add("India", "New Delhi");

        // Displaying the key/value pairs in myDict
        foreach(DictionaryEntry de in myDict)
        {
            Console.WriteLine(de.Key + " " + de.Value);
        }

        // Displaying the value associated
        // with key "Russia"
        Console.WriteLine(myDict["Russia"]);

        // Setting the value associated with key "Russia"
        myDict["Russia"] = "Saint Petersburg";

        // Displaying the value associated
        // with key "Russia"
        Console.WriteLine(myDict["Russia"]);

        // Displaying the value associated
        // with key "India"
        Console.WriteLine(myDict["India"]);

        // Setting the value associated with key "India"
        myDict["India"] = "Mumbai";

        // Displaying the value associated
        // with key "India"
        Console.WriteLine(myDict["India"]);

        // Displaying the key/value pairs in myDict
        foreach(DictionaryEntry de in myDict)
        {
            Console.WriteLine(de.Key + " " + de.Value);
        }
    }
}
Output:
Australia Canberra
Belgium Brussels
Netherlands Amsterdam
China Beijing
Russia Moscow
India New Delhi
Moscow
Saint Petersburg
New Delhi
Mumbai
Australia Canberra
Belgium Brussels
Netherlands Amsterdam
China Beijing
Russia Saint Petersburg
India Mumbai
Note:
This property provides the ability to access a specific element in the collection by using the syntax: myCollection[key].
A key cannot be null, but a value can.
Getting or setting the value of this property is an O(n) operation, where n is Count.
Reference:
https://docs.microsoft.com/en-us/dotnet/api/system.collections.specialized.listdictionary.item?view=netframework-4.7.2
CSharp-Specialized-ListDictionary
CSharp-Specialized-Namespace
C#
C# | Method Overriding
Destructors in C#
Difference between Ref and Out keywords in C#
C# | Delegates
C# | String.IndexOf( ) Method | Set - 1
C# | Constructors
Extension Method in C#
Introduction to .NET Framework
C# | Class and Object
C# | Abstract Classes
|
[
{
"code": null,
"e": 24928,
"s": 24900,
"text": "\n01 Feb, 2019"
},
{
"code": null,
"e": 25032,
"s": 24928,
"text": "ListDictionary.Item[Object] property is used to get or set the value associated with the specified key."
},
{
"code": null,
"e": 25040,
"s": 25032,
"text": "Syntax:"
},
{
"code": null,
"e": 25086,
"s": 25040,
"text": "public object this[object key] { get; set; }\n"
},
{
"code": null,
"e": 25134,
"s": 25086,
"text": "Here, key is the key whose value to get or set."
},
{
"code": null,
"e": 25334,
"s": 25134,
"text": "Return Value : The value associated with the specified key. If the specified key is not found, attempting to get it returns null, and attempting to set it creates a new entry using the specified key."
},
{
"code": null,
"e": 25411,
"s": 25334,
"text": "Exception: This property will give ArgumentNullException if the key is null."
},
{
"code": null,
"e": 25420,
"s": 25411,
"text": "Example:"
},
{
"code": "// C# code to get or set the value// associated with the specified keyusing System;using System.Collections;using System.Collections.Specialized; class GFG { // Driver code public static void Main() { // Creating a ListDictionary named myDict ListDictionary myDict = new ListDictionary(); // Adding key/value pairs in myDict myDict.Add(\"Australia\", \"Canberra\"); myDict.Add(\"Belgium\", \"Brussels\"); myDict.Add(\"Netherlands\", \"Amsterdam\"); myDict.Add(\"China\", \"Beijing\"); myDict.Add(\"Russia\", \"Moscow\"); myDict.Add(\"India\", \"New Delhi\"); // Displaying the key/value pairs in myDict foreach(DictionaryEntry de in myDict) { Console.WriteLine(de.Key + \" \" + de.Value); } // Displaying the value associated // with key \"Russia\" Console.WriteLine(myDict[\"Russia\"]); // Setting the value associated with key \"Russia\" myDict[\"Russia\"] = \"Saint Petersburg\"; // Displaying the value associated // with key \"Russia\" Console.WriteLine(myDict[\"Russia\"]); // Displaying the value associated // with key \"India\" Console.WriteLine(myDict[\"India\"]); // Setting the value associated with key \"India\" myDict[\"India\"] = \"Mumbai\"; // Displaying the value associated // with key \"India\" Console.WriteLine(myDict[\"India\"]); // Displaying the key/value pairs in myDict foreach(DictionaryEntry de in myDict) { Console.WriteLine(de.Key + \" \" + de.Value); } }}",
"e": 27040,
"s": 25420,
"text": null
},
{
"code": null,
"e": 27048,
"s": 27040,
"text": "Output:"
},
{
"code": null,
"e": 27301,
"s": 27048,
"text": "Australia Canberra\nBelgium Brussels\nNetherlands Amsterdam\nChina Beijing\nRussia Moscow\nIndia New Delhi\nMoscow\nSaint Petersburg\nNew Delhi\nMumbai\nAustralia Canberra\nBelgium Brussels\nNetherlands Amsterdam\nChina Beijing\nRussia Saint Petersburg\nIndia Mumbai\n"
},
{
"code": null,
"e": 27307,
"s": 27301,
"text": "Note:"
},
{
"code": null,
"e": 27430,
"s": 27307,
"text": "This property provides the ability to access a specific element in the collection by using the syntax : myCollection[key]."
},
{
"code": null,
"e": 27469,
"s": 27430,
"text": "A key cannot be null, but a value can."
},
{
"code": null,
"e": 27521,
"s": 27469,
"text": "This method is an O(n) operation, where n is Count."
},
{
"code": null,
"e": 27532,
"s": 27521,
"text": "Reference:"
},
{
"code": null,
"e": 27651,
"s": 27532,
"text": "https://docs.microsoft.com/en-us/dotnet/api/system.collections.specialized.listdictionary.item?view=netframework-4.7.2"
},
{
"code": null,
"e": 27685,
"s": 27651,
"text": "CSharp-Specialized-ListDictionary"
},
{
"code": null,
"e": 27714,
"s": 27685,
"text": "CSharp-Specialized-Namespace"
},
{
"code": null,
"e": 27717,
"s": 27714,
"text": "C#"
},
{
"code": null,
"e": 27815,
"s": 27717,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27838,
"s": 27815,
"text": "C# | Method Overriding"
},
{
"code": null,
"e": 27856,
"s": 27838,
"text": "Destructors in C#"
},
{
"code": null,
"e": 27902,
"s": 27856,
"text": "Difference between Ref and Out keywords in C#"
},
{
"code": null,
"e": 27917,
"s": 27902,
"text": "C# | Delegates"
},
{
"code": null,
"e": 27957,
"s": 27917,
"text": "C# | String.IndexOf( ) Method | Set - 1"
},
{
"code": null,
"e": 27975,
"s": 27957,
"text": "C# | Constructors"
},
{
"code": null,
"e": 27998,
"s": 27975,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 28029,
"s": 27998,
"text": "Introduction to .NET Framework"
},
{
"code": null,
"e": 28051,
"s": 28029,
"text": "C# | Class and Object"
}
] |
A Complete Machine Learning Walk-Through in Python: Part Two | by Will Koehrsen | Towards Data Science
|
Assembling all the machine learning pieces needed to solve a problem can be a daunting task. In this series of articles, we are walking through implementing a machine learning workflow using a real-world dataset to see how the individual techniques come together.
In the first post, we cleaned and structured the data, performed an exploratory data analysis, developed a set of features to use in our model, and established a baseline against which we can measure performance. In this article, we will look at how to implement and compare several machine learning models in Python, perform hyperparameter tuning to optimize the best model, and evaluate the final model on the test set.
The full code for this project is on GitHub and the second notebook corresponding to this article is here. Feel free to use, share, and modify the code in any way you want!
As a reminder, we are working on a supervised regression task: using New York City building energy data, we want to develop a model that can predict the Energy Star Score of a building. Our focus is on both accuracy of the predictions and interpretability of the model.
There are a ton of machine learning models to choose from and deciding where to start can be intimidating. While there are some charts that try to show you which algorithm to use, I prefer to just try out several and see which one works best! Machine learning is still a field driven primarily by empirical (experimental) rather than theoretical results, and it’s almost impossible to know ahead of time which model will do the best.
Generally, it’s a good idea to start out with simple, interpretable models such as linear regression, and if the performance is not adequate, move on to more complex, but usually more accurate methods. The following chart shows a (highly unscientific) version of the accuracy vs interpretability trade-off:
We will evaluate five different models covering the complexity spectrum:
Linear Regression
K-Nearest Neighbors Regression
Random Forest Regression
Gradient Boosted Regression
Support Vector Machine Regression
In this post we will focus on implementing these methods rather than the theory behind them. For anyone interested in learning the background, I highly recommend An Introduction to Statistical Learning (available free online) or Hands-On Machine Learning with Scikit-Learn and TensorFlow. Both of these textbooks do a great job of explaining the theory and showing how to effectively use the methods in R and Python respectively.
While we dropped the columns with more than 50% missing values when we cleaned the data, there are still quite a few missing observations. Machine learning models cannot deal with any absent values, so we have to fill them in, a process known as imputation.
First, we’ll read in all the data and remind ourselves what it looks like:
import pandas as pd
import numpy as np

# Read in data into dataframes
train_features = pd.read_csv('data/training_features.csv')
test_features = pd.read_csv('data/testing_features.csv')
train_labels = pd.read_csv('data/training_labels.csv')
test_labels = pd.read_csv('data/testing_labels.csv')

Training Feature Size: (6622, 64)
Testing Feature Size:  (2839, 64)
Training Labels Size:  (6622, 1)
Testing Labels Size:   (2839, 1)
Every value that is NaN represents a missing observation. While there are a number of ways to fill in missing data, we will use a relatively simple method, median imputation. This replaces all the missing values in a column with the median value of the column.
In the following code, we create a Scikit-Learn Imputer object with the strategy set to median. We then train this object on the training data (using imputer.fit) and use it to fill in the missing values in both the training and testing data (using imputer.transform). This means missing values in the test data are filled in with the corresponding median value from the training data.
(We have to do imputation this way rather than training on all the data to avoid the problem of test data leakage, where information from the testing dataset spills over into the training data.)
from sklearn.preprocessing import Imputer

# Create an imputer object with a median filling strategy
imputer = Imputer(strategy='median')

# Train on the training features
imputer.fit(train_features)

# Transform both training data and testing data
X = imputer.transform(train_features)
X_test = imputer.transform(test_features)

Missing values in training features: 0
Missing values in testing features:  0
All of the features now have real, finite values with no missing examples.
Scaling refers to the general process of changing the range of a feature. This is necessary because features are measured in different units, and therefore cover different ranges. Methods such as support vector machines and K-nearest neighbors that take into account distance measures between observations are significantly affected by the range of the features and scaling allows them to learn. While methods such as Linear Regression and Random Forest do not actually require feature scaling, it is still best practice to take this step when we are comparing multiple algorithms.
We will scale the features by putting each one in a range between 0 and 1. This is done by taking each value of a feature, subtracting the minimum value of the feature, and dividing by the maximum minus the minimum (the range). This specific version of scaling is often called normalization and the other main version is known as standardization.
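The formula described above is easy to verify by hand. A minimal sketch with NumPy, using a made-up feature matrix to show that each column ends up spanning exactly 0 to 1:

```python
import numpy as np

# Toy feature matrix: 4 observations, 2 features on very different scales
X = np.array([[10.0, 2000.0],
              [20.0, 1000.0],
              [30.0, 4000.0],
              [40.0, 3000.0]])

# Min-max normalization: (x - min) / (max - min), computed per column
X_min = X.min(axis=0)
X_range = X.max(axis=0) - X_min
X_scaled = (X - X_min) / X_range

print(X_scaled.min(axis=0))  # each feature now starts at 0
print(X_scaled.max(axis=0))  # and ends at 1
```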
While this process would be easy to implement by hand, we can do it using a MinMaxScaler object in Scikit-Learn. The code for this method is identical to that for imputation except with a scaler instead of imputer! Again, we make sure to train only using training data and then transform all the data.
from sklearn.preprocessing import MinMaxScaler

# Create the scaler object with a range of 0-1
scaler = MinMaxScaler(feature_range=(0, 1))

# Fit on the training data
scaler.fit(X)

# Transform both the training and testing data
X = scaler.transform(X)
X_test = scaler.transform(X_test)
Every feature now has a minimum value of 0 and a maximum value of 1. Missing value imputation and feature scaling are two steps required in nearly any machine learning pipeline so it’s a good idea to understand how they work!
After all the work we spent cleaning and formatting the data, actually creating, training, and predicting with the models is relatively simple. We will use the Scikit-Learn library in Python, which has great documentation and a consistent model building syntax. Once you know how to make one model in Scikit-Learn, you can quickly implement a diverse range of algorithms.
We can illustrate one example of model creation, training (using .fit ) and testing (using .predict ) with the Gradient Boosting Regressor:
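The code for this step was embedded as a gist in the original post and is not in the text. A minimal sketch of the three lines it describes, using synthetic stand-in data in place of the prepared building arrays (the names X, y, X_test, y_test mirror the article's):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for the prepared building data
rng = np.random.RandomState(42)
coef = np.array([3.0, -2.0, 1.0, 0.5, 4.0])
X = rng.rand(200, 5)
y = X @ coef + rng.normal(0, 0.1, 200)
X_test = rng.rand(50, 5)
y_test = X_test @ coef

# Model creation, training, and prediction -- one line each
model = GradientBoostingRegressor(random_state=42)
model.fit(X, y)
predictions = model.predict(X_test)

mae = mean_absolute_error(y_test, predictions)
print(f'Gradient Boosted Performance on the test set: MAE = {mae:.4f}')
```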
Gradient Boosted Performance on the test set: MAE = 10.0132
Model creation, training, and testing are each one line! To build the other models, we use the same syntax, with the only change being the name of the algorithm. The results are presented below:
To put these figures in perspective, the naive baseline calculated using the median value of the target was 24.5. Clearly, machine learning is applicable to our problem because of the significant improvement over the baseline!
The gradient boosted regressor (MAE = 10.013) slightly beats out the random forest (10.014 MAE). These results aren’t entirely fair because we are mostly using the default values for the hyperparameters. Especially in models such as the support vector machine, the performance is highly dependent on these settings. Nonetheless, from these results we will select the gradient boosted regressor for model optimization.
In machine learning, after we have selected a model, we can optimize it for our problem by tuning the model hyperparameters.
First off, what are hyperparameters and how do they differ from parameters?
Model hyperparameters are best thought of as settings for a machine learning algorithm that are set by the data scientist before training. Examples would be the number of trees in a random forest or the number of neighbors used in K-nearest neighbors algorithm.
Model parameters are what the model learns during training, such as weights in a linear regression.
Controlling the hyperparameters affects the model performance by altering the balance between underfitting and overfitting in a model. Underfitting is when our model is not complex enough (it does not have enough degrees of freedom) to learn the mapping from features to target. An underfit model has high bias, which we can correct by making our model more complex.
Overfitting is when our model essentially memorizes the training data. An overfit model has high variance, which we can correct by limiting the complexity of the model through regularization. Both an underfit and an overfit model will not be able to generalize well to the testing data.
The problem with choosing the right hyperparameters is that the optimal set will be different for every machine learning problem! Therefore, the only way to find the best settings is to try out a number of them on each new dataset. Luckily, Scikit-Learn has a number of methods to allow us to efficiently evaluate hyperparameters. Moreover, projects such as TPOT by Epistasis Lab are trying to optimize the hyperparameter search using methods like genetic programming. In this project, we will stick to doing this with Scikit-Learn, but stay tuned for more work on the auto-ML scene!
The particular hyperparameter tuning method we will implement is called random search with cross validation:
Random Search refers to the technique we will use to select hyperparameters. We define a grid and then randomly sample different combinations, rather than grid search where we exhaustively try out every single combination. (Surprisingly, random search performs nearly as well as grid search with a drastic reduction in run time.)
Cross Validation is the technique we use to evaluate a selected combination of hyperparameters. Rather than splitting the training set up into separate training and validation sets, which reduces the amount of training data we can use, we use K-Fold Cross Validation. This involves dividing the training data into K number of folds, and then going through an iterative process where we first train on K-1 of the folds and then evaluate performance on the Kth fold. We repeat this process K times and at the end of K-fold cross validation, we take the average error on each of the K iterations as the final performance measure.
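The K-fold procedure just described can be sketched by hand with Scikit-Learn's KFold splitter (toy data here; in practice a single call to cross_val_score does the same thing):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X.sum(axis=1) + rng.normal(0, 0.05, 100)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_errors = []

for train_idx, val_idx in kf.split(X):
    # Train on K-1 folds, evaluate on the held-out Kth fold
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds = model.predict(X[val_idx])
    fold_errors.append(mean_absolute_error(y[val_idx], preds))

# The average error across the K folds is the performance estimate
cv_mae = np.mean(fold_errors)
print(f'5-fold CV MAE: {cv_mae:.4f}')
```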
The idea of K-Fold cross validation with K = 5 is shown below:
The entire process of performing random search with cross validation is:
Set up a grid of hyperparameters to evaluate
Randomly sample a combination of hyperparameters
Create a model with the selected combination
Evaluate the model using K-fold cross validation
Decide which hyperparameters worked the best
Of course, we don’t actually do this manually, but rather let Scikit-Learn’s RandomizedSearchCV handle all the work!
Since we will be using the Gradient Boosted Regression model, I should give at least a little background! This model is an ensemble method, meaning that it is built out of many weak learners, in this case individual decision trees. While a bagging algorithm such as random forest trains the weak learners in parallel and has them vote to make a prediction, a boosting method like Gradient Boosting, trains the learners in sequence, with each learner “concentrating” on the mistakes made by the previous ones.
Boosting methods have become popular in recent years and frequently win machine learning competitions. The Gradient Boosting Method is one particular implementation that uses Gradient Descent to minimize the cost function by sequentially training learners on the residuals of previous ones. The Scikit-Learn implementation of Gradient Boosting is generally regarded as less efficient than other libraries such as XGBoost, but it will work well enough for our small dataset and is quite accurate.
There are many hyperparameters to tune in a Gradient Boosted Regressor and you can look at the Scikit-Learn documentation for the details. We will optimize the following hyperparameters:
loss: the loss function to minimize
n_estimators: the number of weak learners (decision trees) to use
max_depth: the maximum depth of each decision tree
min_samples_leaf: the minimum number of examples required at a leaf node of the decision tree
min_samples_split: the minimum number of examples required to split a node of the decision tree
max_features: the maximum number of features to use for splitting nodes
I’m not sure if there is anyone who truly understands how all of these interact, and the only way to find the best combination is to try them out!
In the following code, we build a hyperparameter grid, create a RandomizedSearchCV object, and perform hyperparameter search using 4-fold cross validation over 25 different combinations of hyperparameters:
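The search code was embedded as a gist in the original post. A minimal sketch of the setup it describes, on synthetic data with a smaller grid and fewer iterations so it runs quickly (the exact grid values in the original notebook may differ):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for the building data
rng = np.random.RandomState(42)
X = rng.rand(200, 5)
y = X.sum(axis=1) + rng.normal(0, 0.1, 200)

# Grid of hyperparameter values to sample from
hyperparameter_grid = {
    'n_estimators': [50, 100, 200],
    'max_depth': [2, 3, 5],
    'min_samples_leaf': [1, 2, 4],
    'min_samples_split': [2, 4, 8],
    'max_features': [None, 'sqrt'],
}

# Randomly sample combinations, scoring each with 4-fold cross validation
random_cv = RandomizedSearchCV(
    estimator=GradientBoostingRegressor(random_state=42),
    param_distributions=hyperparameter_grid,
    n_iter=5,                  # the article uses 25; fewer here for speed
    cv=4,
    scoring='neg_mean_absolute_error',
    random_state=42,
)
random_cv.fit(X, y)
print(random_cv.best_params_)
```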
After performing the search, we can inspect the RandomizedSearchCV object to find the best model:
# Find the best combination of settings
random_cv.best_estimator_

GradientBoostingRegressor(loss='lad', max_depth=5, max_features=None,
                          min_samples_leaf=6, min_samples_split=6,
                          n_estimators=500)
We can then use these results to perform grid search by choosing parameters for our grid that are close to these optimal values. However, further tuning is unlikely to significantly improve our model. As a general rule, proper feature engineering will have a much larger impact on model performance than even the most extensive hyperparameter tuning. It’s the law of diminishing returns applied to machine learning: feature engineering gets you most of the way there, and hyperparameter tuning generally only provides a small benefit.
One experiment we can try is to change the number of estimators (decision trees) while holding the rest of the hyperparameters steady. This directly lets us observe the effect of this particular setting. See the notebook for the implementation, but here are the results:
As the number of trees used by the model increases, both the training and the testing error decrease. However, the training error decreases much more rapidly than the testing error and we can see that our model is overfitting: it performs very well on the training data, but is not able to achieve that same performance on the testing set.
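A minimal sketch of that experiment, on synthetic data (the notebook runs it on the building dataset): loop over tree counts, holding everything else fixed, and record train and test error at each setting.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.RandomState(1)
X_train = rng.rand(150, 4)
y_train = X_train.sum(axis=1) + rng.normal(0, 0.3, 150)
X_test = rng.rand(60, 4)
y_test = X_test.sum(axis=1) + rng.normal(0, 0.3, 60)

# Vary only n_estimators; keep all other hyperparameters at their defaults
results = []
for n_trees in [10, 50, 100, 300]:
    model = GradientBoostingRegressor(n_estimators=n_trees, random_state=1)
    model.fit(X_train, y_train)
    train_mae = mean_absolute_error(y_train, model.predict(X_train))
    test_mae = mean_absolute_error(y_test, model.predict(X_test))
    results.append((n_trees, train_mae, test_mae))
    print(f'{n_trees:4d} trees: train MAE {train_mae:.3f}, test MAE {test_mae:.3f}')
```

With enough trees the training error keeps shrinking while the test error levels off, which is the train/test gap the article identifies as overfitting.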
We always expect at least some decrease in performance on the testing set (after all, the model can see the true answers for the training set), but a significant gap indicates overfitting. We can address overfitting by getting more training data, or decreasing the complexity of our model through the hyperparameters. In this case, we will leave the hyperparameters where they are, but I encourage anyone to try and reduce the overfitting.
For the final model, we will use 800 estimators because that resulted in the lowest error in cross validation. Now, time to test out this model!
As responsible machine learning engineers, we made sure to not let our model see the test set at any point of training. Therefore, we can use the test set performance as an indicator of how well our model would perform when deployed in the real world.
Making predictions on the test set and calculating the performance is relatively straightforward. Here, we compare the performance of the default Gradient Boosted Regressor to the tuned model:
# Make predictions on the test set using default and final model
default_pred = default_model.predict(X_test)
final_pred = final_model.predict(X_test)

Default model performance on the test set: MAE = 10.0118.
Final model performance on the test set:   MAE = 9.0446.
Hyperparameter tuning improved the accuracy of the model by about 10%. Depending on the use case, 10% could be a massive improvement, but it came at a significant time investment!
We can also time how long it takes to train the two models using the %timeit magic command in Jupyter Notebooks. First is the default model:
%%timeit -n 1 -r 5
default_model.fit(X, y)

1.09 s ± 153 ms per loop (mean ± std. dev. of 5 runs, 1 loop each)
1 second to train seems very reasonable. The final tuned model is not so fast:
%%timeit -n 1 -r 5
final_model.fit(X, y)

12.1 s ± 1.33 s per loop (mean ± std. dev. of 5 runs, 1 loop each)
This demonstrates a fundamental aspect of machine learning: it is always a game of trade-offs. We constantly have to balance accuracy vs interpretability, bias vs variance, accuracy vs run time, and so on. The right blend will ultimately depend on the problem. In our case, a 12 times increase in run-time is large in relative terms, but in absolute terms it’s not that significant.
Once we have the final predictions, we can investigate them to see if they exhibit any noticeable skew. On the left is a density plot of the predicted and actual values, and on the right is a histogram of the residuals:
The model predictions seem to follow the distribution of the actual values, although the peak in the density occurs closer to the median value (66) on the training set than to the true peak in density (which is near 100). The residuals are nearly normally distributed, although we see a few large negative values where the model predictions were far below the true values. We will take a deeper look at interpreting the results of the model in the next post.
In this article we covered several steps in the machine learning workflow:
Imputation of missing values and scaling of features
Evaluating and comparing several machine learning models
Hyperparameter tuning using random grid search and cross validation
Evaluating the best model on the test set
The results of this work showed us that machine learning is applicable to the task of predicting a building’s Energy Star Score using the available data. Using a gradient boosted regressor we were able to predict the scores on the test set to within 9.1 points of the true value. Moreover, we saw that hyperparameter tuning can increase the performance of a model at a significant cost in terms of time invested. This is one of many trade-offs we have to consider when developing a machine learning solution.
In the third post (available here), we will look at peering into the black box we have created and try to understand how our model makes predictions. We also will determine the greatest factors influencing the Energy Star Score. While we know that our model is accurate, we want to know why it makes the predictions it does and what this tells us about the problem!
As always, I welcome feedback and constructive criticism and can be reached on Twitter @koehrsen_will.
|
[
{
"code": null,
"e": 436,
"s": 172,
"text": "Assembling all the machine learning pieces needed to solve a problem can be a daunting task. In this series of articles, we are walking through implementing a machine learning workflow using a real-world dataset to see how the individual techniques come together."
},
{
"code": null,
"e": 858,
"s": 436,
"text": "In the first post, we cleaned and structured the data, performed an exploratory data analysis, developed a set of features to use in our model, and established a baseline against which we can measure performance. In this article, we will look at how to implement and compare several machine learning models in Python, perform hyperparameter tuning to optimize the best model, and evaluate the final model on the test set."
},
{
"code": null,
"e": 1031,
"s": 858,
"text": "The full code for this project is on GitHub and the second notebook corresponding to this article is here. Feel free to use, share, and modify the code in any way you want!"
},
{
"code": null,
"e": 1301,
"s": 1031,
"text": "As a reminder, we are working on a supervised regression task: using New York City building energy data, we want to develop a model that can predict the Energy Star Score of a building. Our focus is on both accuracy of the predictions and interpretability of the model."
},
{
"code": null,
"e": 1735,
"s": 1301,
"text": "There are a ton of machine learning models to choose from and deciding where to start can be intimidating. While there are some charts that try to show you which algorithm to use, I prefer to just try out several and see which one works best! Machine learning is still a field driven primarily by empirical (experimental) rather than theoretical results, and it’s almost impossible to know ahead of time which model will do the best."
},
{
"code": null,
"e": 2042,
"s": 1735,
"text": "Generally, it’s a good idea to start out with simple, interpretable models such as linear regression, and if the performance is not adequate, move on to more complex, but usually more accurate methods. The following chart shows a (highly unscientific) version of the accuracy vs interpretability trade-off:"
},
{
"code": null,
"e": 2115,
"s": 2042,
"text": "We will evaluate five different models covering the complexity spectrum:"
},
{
"code": null,
"e": 2133,
"s": 2115,
"text": "Linear Regression"
},
{
"code": null,
"e": 2164,
"s": 2133,
"text": "K-Nearest Neighbors Regression"
},
{
"code": null,
"e": 2189,
"s": 2164,
"text": "Random Forest Regression"
},
{
"code": null,
"e": 2217,
"s": 2189,
"text": "Gradient Boosted Regression"
},
{
"code": null,
"e": 2251,
"s": 2217,
"text": "Support Vector Machine Regression"
},
{
"code": null,
"e": 2682,
"s": 2251,
"text": "In this post we will focus on implementing these methods rather than the theory behind them. For anyone interesting in learning the background, I highly recommend An Introduction to Statistical Learning (available free online) or Hands-On Machine Learning with Scikit-Learn and TensorFlow. Both of these textbooks do a great job of explaining the theory and showing how to effectively use the methods in R and Python respectively."
},
{
"code": null,
"e": 2940,
"s": 2682,
"text": "While we dropped the columns with more than 50% missing values when we cleaned the data, there are still quite a few missing observations. Machine learning models cannot deal with any absent values, so we have to fill them in, a process known as imputation."
},
{
"code": null,
"e": 3015,
"s": 2940,
"text": "First, we’ll read in all the data and remind ourselves what it looks like:"
},
{
"code": null,
"e": 3438,
"s": 3015,
"text": "import pandas as pdimport numpy as np# Read in data into dataframes train_features = pd.read_csv('data/training_features.csv')test_features = pd.read_csv('data/testing_features.csv')train_labels = pd.read_csv('data/training_labels.csv')test_labels = pd.read_csv('data/testing_labels.csv')Training Feature Size: (6622, 64)Testing Feature Size: (2839, 64)Training Labels Size: (6622, 1)Testing Labels Size: (2839, 1)"
},
{
"code": null,
"e": 3699,
"s": 3438,
"text": "Every value that is NaN represents a missing observation. While there are a number of ways to fill in missing data, we will use a relatively simple method, median imputation. This replaces all the missing values in a column with the median value of the column."
},
{
"code": null,
"e": 4085,
"s": 3699,
"text": "In the following code, we create a Scikit-Learn Imputer object with the strategy set to median. We then train this object on the training data (using imputer.fit) and use it to fill in the missing values in both the training and testing data (using imputer.transform). This means missing values in the test data are filled in with the corresponding median value from the training data."
},
{
"code": null,
"e": 4280,
"s": 4085,
"text": "(We have to do imputation this way rather than training on all the data to avoid the problem of test data leakage, where information from the testing dataset spills over into the training data.)"
},
{
"code": null,
"e": 4636,
"s": 4280,
"text": "# Create an imputer object with a median filling strategyimputer = Imputer(strategy='median')# Train on the training featuresimputer.fit(train_features)# Transform both training data and testing dataX = imputer.transform(train_features)X_test = imputer.transform(test_features)Missing values in training features: 0Missing values in testing features: 0"
},
{
"code": null,
"e": 4711,
"s": 4636,
"text": "All of the features now have real, finite values with no missing examples."
},
{
"code": null,
"e": 5293,
"s": 4711,
"text": "Scaling refers to the general process of changing the range of a feature. This is necessary because features are measured in different units, and therefore cover different ranges. Methods such as support vector machines and K-nearest neighbors that take into account distance measures between observations are significantly affected by the range of the features and scaling allows them to learn. While methods such as Linear Regression and Random Forest do not actually require feature scaling, it is still best practice to take this step when we are comparing multiple algorithms."
},
{
"code": null,
"e": 5640,
"s": 5293,
"text": "We will scale the features by putting each one in a range between 0 and 1. This is done by taking each value of a feature, subtracting the minimum value of the feature, and dividing by the maximum minus the minimum (the range). This specific version of scaling is often called normalization and the other main version is known as standardization."
},
{
"code": null,
"e": 5942,
"s": 5640,
"text": "While this process would be easy to implement by hand, we can do it using a MinMaxScaler object in Scikit-Learn. The code for this method is identical to that for imputation except with a scaler instead of imputer! Again, we make sure to train only using training data and then transform all the data."
},
{
"code": null,
"e": 6173,
"s": 5942,
"text": "# Create the scaler object with a range of 0-1scaler = MinMaxScaler(feature_range=(0, 1))# Fit on the training datascaler.fit(X)# Transform both the training and testing dataX = scaler.transform(X)X_test = scaler.transform(X_test)"
},
{
"code": null,
"e": 6399,
"s": 6173,
"text": "Every feature now has a minimum value of 0 and a maximum value of 1. Missing value imputation and feature scaling are two steps required in nearly any machine learning pipeline so it’s a good idea to understand how they work!"
},
{
"code": null,
"e": 6771,
"s": 6399,
"text": "After all the work we spent cleaning and formatting the data, actually creating, training, and predicting with the models is relatively simple. We will use the Scikit-Learn library in Python, which has great documentation and a consistent model building syntax. Once you know how to make one model in Scikit-Learn, you can quickly implement a diverse range of algorithms."
},
{
"code": null,
"e": 6911,
"s": 6771,
"text": "We can illustrate one example of model creation, training (using .fit ) and testing (using .predict ) with the Gradient Boosting Regressor:"
},
{
"code": null,
"e": 6971,
"s": 6911,
"text": "Gradient Boosted Performance on the test set: MAE = 10.0132"
},
{
"code": null,
"e": 7160,
"s": 6971,
"text": "Model creation, training, and testing are each one line! To build the other models, we use the same syntax, with the only change the name of the algorithm. The results are presented below:"
},
{
"code": null,
"e": 7387,
"s": 7160,
"text": "To put these figures in perspective, the naive baseline calculated using the median value of the target was 24.5. Clearly, machine learning is applicable to our problem because of the significant improvement over the baseline!"
},
{
"code": null,
"e": 7805,
"s": 7387,
"text": "The gradient boosted regressor (MAE = 10.013) slightly beats out the random forest (10.014 MAE). These results aren’t entirely fair because we are mostly using the default values for the hyperparameters. Especially in models such as the support vector machine, the performance is highly dependent on these settings. Nonetheless, from these results we will select the gradient boosted regressor for model optimization."
},
{
"code": null,
"e": 7930,
"s": 7805,
"text": "In machine learning, after we have selected a model, we can optimize it for our problem by tuning the model hyperparameters."
},
{
"code": null,
"e": 8006,
"s": 7930,
"text": "First off, what are hyperparameters and how do they differ from parameters?"
},
{
"code": null,
"e": 8268,
"s": 8006,
"text": "Model hyperparameters are best thought of as settings for a machine learning algorithm that are set by the data scientist before training. Examples would be the number of trees in a random forest or the number of neighbors used in K-nearest neighbors algorithm."
},
{
"code": null,
"e": 8368,
"s": 8268,
"text": "Model parameters are what the model learns during training, such as weights in a linear regression."
},
{
"code": null,
"e": 8735,
"s": 8368,
"text": "Controlling the hyperparameters affects the model performance by altering the balance between underfitting and overfitting in a model. Underfitting is when our model is not complex enough (it does not have enough degrees of freedom) to learn the mapping from features to target. An underfit model has high bias, which we can correct by making our model more complex."
},
{
"code": null,
"e": 9022,
"s": 8735,
"text": "Overfitting is when our model essentially memorizes the training data. An overfit model has high variance, which we can correct by limiting the complexity of the model through regularization. Both an underfit and an overfit model will not be able to generalize well to the testing data."
},
{
"code": null,
"e": 9608,
"s": 9022,
"text": "The problem with choosing the right hyperparameters is that the optimal set will be different for every machine learning problem! Therefore, the only way to find the best settings is to try out a number of them on each new dataset. Luckily, Scikit-Learn has a number of methods to allow us to efficiently evaluate hyperparameters. Moreover, projects such as TPOT by Epistasis Lab are trying to optimize the hyperparameter search using methods like genetic programming. In this project, we will stick to doing this with Scikit-Learn, but stayed tuned for more work on the auto-ML scene!"
},
{
"code": null,
"e": 9717,
"s": 9608,
"text": "The particular hyperparameter tuning method we will implement is called random search with cross validation:"
},
{
"code": null,
"e": 10047,
"s": 9717,
"text": "Random Search refers to the technique we will use to select hyperparameters. We define a grid and then randomly sample different combinations, rather than grid search where we exhaustively try out every single combination. (Surprisingly, random search performs nearly as well as grid search with a drastic reduction in run time.)"
},
{
"code": null,
"e": 10674,
"s": 10047,
"text": "Cross Validation is the technique we use to evaluate a selected combination of hyperparameters. Rather than splitting the training set up into separate training and validation sets, which reduces the amount of training data we can use, we use K-Fold Cross Validation. This involves dividing the training data into K number of folds, and then going through an iterative process where we first train on K-1 of the folds and then evaluate performance on the Kth fold. We repeat this process K times and at the end of K-fold cross validation, we take the average error on each of the K iterations as the final performance measure."
},
{
"code": null,
"e": 10737,
"s": 10674,
"text": "The idea of K-Fold cross validation with K = 5 is shown below:"
},
{
"code": null,
"e": 10810,
"s": 10737,
"text": "The entire process of performing random search with cross validation is:"
},
{
"code": null,
"e": 11039,
"s": 10810,
"text": "Set up a grid of hyperparameters to evaluateRandomly sample a combination of hyperparametersCreate a model with the selected combinationEvaluate the model using K-fold cross validationDecide which hyperparameters worked the best"
},
{
"code": null,
"e": 11084,
"s": 11039,
"text": "Set up a grid of hyperparameters to evaluate"
},
{
"code": null,
"e": 11133,
"s": 11084,
"text": "Randomly sample a combination of hyperparameters"
},
{
"code": null,
"e": 11178,
"s": 11133,
"text": "Create a model with the selected combination"
},
{
"code": null,
"e": 11227,
"s": 11178,
"text": "Evaluate the model using K-fold cross validation"
},
{
"code": null,
"e": 11272,
"s": 11227,
"text": "Decide which hyperparameters worked the best"
},
{
"code": null,
"e": 11392,
"s": 11272,
"text": "Of course, we don’t actually do this manually, but rather let Scikit-Learn’s RandomizedSearchCV handle all the work!"
},
{
"code": null,
"e": 11901,
"s": 11392,
"text": "Since we will be using the Gradient Boosted Regression model, I should give at least a little background! This model is an ensemble method, meaning that it is built out of many weak learners, in this case individual decision trees. While a bagging algorithm such as random forest trains the weak learners in parallel and has them vote to make a prediction, a boosting method like Gradient Boosting, trains the learners in sequence, with each learner “concentrating” on the mistakes made by the previous ones."
},
{
"code": null,
"e": 12398,
"s": 11901,
"text": "Boosting methods have become popular in recent years and frequently win machine learning competitions. The Gradient Boosting Method is one particular implementation that uses Gradient Descent to minimize the cost function by sequentially training learners on the residuals of previous ones. The Scikit-Learn implementation of Gradient Boosting is generally regarded as less efficient than other libraries such as XGBoost , but it will work well enough for our small dataset and is quite accurate."
},
{
"code": null,
"e": 12585,
"s": 12398,
"text": "There are many hyperparameters to tune in a Gradient Boosted Regressor and you can look at the Scikit-Learn documentation for the details. We will optimize the following hyperparameters:"
},
{
"code": null,
"e": 12621,
"s": 12585,
"text": "loss: the loss function to minimize"
},
{
"code": null,
"e": 12687,
"s": 12621,
"text": "n_estimators: the number of weak learners (decision trees) to use"
},
{
"code": null,
"e": 12738,
"s": 12687,
"text": "max_depth: the maximum depth of each decision tree"
},
{
"code": null,
"e": 12832,
"s": 12738,
"text": "min_samples_leaf: the minimum number of examples required at a leaf node of the decision tree"
},
{
"code": null,
"e": 12928,
"s": 12832,
"text": "min_samples_split: the minimum number of examples required to split a node of the decision tree"
},
{
"code": null,
"e": 13000,
"s": 12928,
"text": "max_features: the maximum number of features to use for splitting nodes"
},
{
"code": null,
"e": 13147,
"s": 13000,
"text": "I’m not sure if there is anyone who truly understands how all of these interact, and the only way to find the best combination is to try them out!"
},
{
"code": null,
"e": 13353,
"s": 13147,
"text": "In the following code, we build a hyperparameter grid, create a RandomizedSearchCV object, and perform hyperparameter search using 4-fold cross validation over 25 different combinations of hyperparameters:"
},
{
"code": null,
"e": 13451,
"s": 13353,
"text": "After performing the search, we can inspect the RandomizedSearchCV object to find the best model:"
},
{
"code": null,
"e": 13744,
"s": 13451,
"text": "# Find the best combination of settingsrandom_cv.best_estimator_GradientBoostingRegressor(loss='lad', max_depth=5, max_features=None, min_samples_leaf=6, min_samples_split=6, n_estimators=500)"
},
{
"code": null,
"e": 14277,
"s": 13744,
"text": "We can then use these results to perform grid search by choosing parameters for our grid that are close to these optimal values. However, further tuning is unlikely to significantly improve our model. As a general rule, proper feature engineering will have a much larger impact on model performance than even the most extensive hyperparameter tuning. It’s the law of diminishing returns applied to machine learning: feature engineering gets you most of the way there, and hyperparameter tuning generally only provides a small benefit."
},
{
"code": null,
"e": 14548,
"s": 14277,
"text": "One experiment we can try is to change the number of estimators (decision trees) while holding the rest of the hyperparameters steady. This directly lets us observe the effect of this particular setting. See the notebook for the implementation, but here are the results:"
},
{
"code": null,
"e": 14888,
"s": 14548,
"text": "As the number of trees used by the model increases, both the training and the testing error decrease. However, the training error decreases much more rapidly than the testing error and we can see that our model is overfitting: it performs very well on the training data, but is not able to achieve that same performance on the testing set."
},
{
"code": null,
"e": 15327,
"s": 14888,
"text": "We always expect at least some decrease in performance on the testing set (after all, the model can see the true answers for the training set), but a significant gap indicates overfitting. We can address overfitting by getting more training data, or decreasing the complexity of our model through the hyperparameters. In this case, we will leave the hyperparameters where they are, but I encourage anyone to try and reduce the overfitting."
},
{
"code": null,
"e": 15472,
"s": 15327,
"text": "For the final model, we will use 800 estimators because that resulted in the lowest error in cross validation. Now, time to test out this model!"
},
{
"code": null,
"e": 15724,
"s": 15472,
"text": "As responsible machine learning engineers, we made sure to not let our model see the test set at any point of training. Therefore, we can use the test set performance as an indicator of how well our model would perform when deployed in the real world."
},
{
"code": null,
"e": 15917,
"s": 15724,
"text": "Making predictions on the test set and calculating the performance is relatively straightforward. Here, we compare the performance of the default Gradient Boosted Regressor to the tuned model:"
},
{
"code": null,
"e": 16179,
"s": 15917,
"text": "# Make predictions on the test set using default and final modeldefault_pred = default_model.predict(X_test)final_pred = final_model.predict(X_test)Default model performance on the test set: MAE = 10.0118.Final model performance on the test set: MAE = 9.0446."
},
{
"code": null,
"e": 16359,
"s": 16179,
"text": "Hyperparameter tuning improved the accuracy of the model by about 10%. Depending on the use case, 10% could be a massive improvement, but it came at a significant time investment!"
},
{
"code": null,
"e": 16500,
"s": 16359,
"text": "We can also time how long it takes to train the two models using the %timeit magic command in Jupyter Notebooks. First is the default model:"
},
{
"code": null,
"e": 16608,
"s": 16500,
"text": "%%timeit -n 1 -r 5default_model.fit(X, y)1.09 s ± 153 ms per loop (mean ± std. dev. of 5 runs, 1 loop each)"
},
{
"code": null,
"e": 16687,
"s": 16608,
"text": "1 second to train seems very reasonable. The final tuned model is not so fast:"
},
{
"code": null,
"e": 16793,
"s": 16687,
"text": "%%timeit -n 1 -r 5final_model.fit(X, y)12.1 s ± 1.33 s per loop (mean ± std. dev. of 5 runs, 1 loop each)"
},
{
"code": null,
"e": 17176,
"s": 16793,
"text": "This demonstrates a fundamental aspect of machine learning: it is always a game of trade-offs. We constantly have to balance accuracy vs interpretability, bias vs variance, accuracy vs run time, and so on. The right blend will ultimately depend on the problem. In our case, a 12 times increase in run-time is large in relative terms, but in absolute terms it’s not that significant."
},
{
"code": null,
"e": 17396,
"s": 17176,
"text": "Once we have the final predictions, we can investigate them to see if they exhibit any noticeable skew. On the left is a density plot of the predicted and actual values, and on the right is a histogram of the residuals:"
},
{
"code": null,
"e": 17855,
"s": 17396,
"text": "The model predictions seem to follow the distribution of the actual values although the peak in the density occurs closer to the median value (66) on the training set than to the true peak in density (which is near 100). The residuals are nearly normally distributed, although we see a few large negative values where the model predictions were far below the true values. We will take a deeper look at interpreting the results of the model in the next post."
},
{
"code": null,
"e": 17930,
"s": 17855,
"text": "In this article we covered several steps in the machine learning workflow:"
},
{
"code": null,
"e": 17983,
"s": 17930,
"text": "Imputation of missing values and scaling of features"
},
{
"code": null,
"e": 18040,
"s": 17983,
"text": "Evaluating and comparing several machine learning models"
},
{
"code": null,
"e": 18108,
"s": 18040,
"text": "Hyperparameter tuning using random grid search and cross validation"
},
{
"code": null,
"e": 18150,
"s": 18108,
"text": "Evaluating the best model on the test set"
},
{
"code": null,
"e": 18659,
"s": 18150,
"text": "The results of this work showed us that machine learning is applicable to the task of predicting a building’s Energy Star Score using the available data. Using a gradient boosted regressor we were able to predict the scores on the test set to within 9.1 points of the true value. Moreover, we saw that hyperparameter tuning can increase the performance of a model at a significant cost in terms of time invested. This is one of many trade-offs we have to consider when developing a machine learning solution."
},
{
"code": null,
"e": 19025,
"s": 18659,
"text": "In the third post (available here), we will look at peering into the black box we have created and try to understand how our model makes predictions. We also will determine the greatest factors influencing the Energy Star Score. While we know that our model is accurate, we want to know why it makes the predictions it does and what this tells us about the problem!"
}
] |
How to sum negative and positive values using GroupBy in Pandas? - GeeksforGeeks
|
30 May, 2021
In this article, we will discuss how to calculate the sum of all negative numbers and positive numbers in DataFrame using the GroupBy method in Pandas.
To use the groupby() method use the given below syntax.
Syntax: df.groupby(column_name)
Step 1: Creating lambda functions to calculate positive-sum and negative-sum values.
pos = lambda col : col[col > 0].sum()
neg = lambda col : col[col < 0].sum()
Step 2: We will use the groupby() method and apply the lambda function to calculate the sum.
d = df.groupby(df['Alphabet'])
print(d['Frequency'].agg([('negative_values', neg),
('positive_values', pos)
]))
print(d['BandWidth'].agg([('negative_values', neg),
('positive_values', pos)
]))
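Since pandas 0.25, the same renamed aggregations can also be written with keyword-based "named aggregation", which some readers may find clearer than the list-of-tuples form. A minimal sketch on a small synthetic frame (not the article's data):

```python
import pandas as pd

# Small synthetic frame to illustrate named aggregation
df = pd.DataFrame({'Alphabet': ['a', 'b', 'a', 'b'],
                   'Frequency': [-10, 29, 72, -3]})

# Keyword form: the result columns are named directly
result = df.groupby('Alphabet')['Frequency'].agg(
    negative_values=lambda col: col[col < 0].sum(),
    positive_values=lambda col: col[col > 0].sum(),
)

print(result)
```

The output has one row per group ('a' and 'b') with the two named sum columns.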
Example 1:
Calculate the sum of all positive as well as negative values of a, b, c for both columns, i.e., Frequency and BandWidth.
Python3
# Import necessary libraries
import pandas as pd
import numpy as np

# Creating a DataFrame with random values
df = pd.DataFrame({'Alphabet': ['a', 'b', 'c', 'c', 'a', 'a', 'c', 'b'],
                   'Frequency': [-10, 29, -12, -190, 72, -98, -12, 0],
                   'BandWidth': [10, 34, 23, -10, -87, -76, 365, 10]})

print(df)

# Group the DataFrame on the categorical column
d = df.groupby(df['Alphabet'])

# Functions to calculate positive and negative sums
def pos(col):
    return col[col > 0].sum()

def neg(col):
    return col[col < 0].sum()

# Apply the functions to particular columns
print(d['Frequency'].agg([('negative_values', neg),
                          ('positive_values', pos)]))

# Note: the column is named 'BandWidth' (capital W), not 'Bandwidth'
print(d['BandWidth'].agg([('negative_values', neg),
                          ('positive_values', pos)]))
Output:
Example 2:
Calculate the sum of all positive as well as negative values of a, b for both columns i.e., X and Y
Python3
# Import necessary libraries
import pandas as pd
import numpy as np

# Creating a DataFrame with random values
df = pd.DataFrame({'Function': ['F(x)', 'F(x)', 'F(y)', 'F(x)',
                                'F(y)', 'F(x)', 'F(x)', 'F(y)'],
                   'X': [-10, 29, -12, -190, 72, -98, -12, 0],
                   'Y': [10, 34, 23, -10, -87, -76, 365, 10]})

print(df)

# Group the DataFrame on the categorical column
d = df.groupby(df['Function'])

# Functions to calculate positive and negative sums
def pos(col):
    return col[col > 0].sum()

def neg(col):
    return col[col < 0].sum()

# Apply the functions to particular columns
print(d['X'].agg([('negative_values', neg),
                  ('positive_values', pos)]))

print(d['Y'].agg([('negative_values', neg),
                  ('positive_values', pos)]))
Output:
DataFrame
X Output
Y Output
Example 3:
Calculate the sum of all positive as well as negative values of every name i.e., Marks. The next step is to make the lambda function to calculate the sum. In the last step, we will group the data according to the names and call the lambda functions to calculate the sum of the values.
Python3
# Import necessary libraries
import pandas as pd
import numpy as np

# Creating a DataFrame with random values
df = pd.DataFrame({'Name': ['Aryan', 'Nityaa', 'Dhruv', 'Dhruv', 'Nityaa',
                            'Aryan', 'Nityaa', 'Aryan', 'Aryan', 'Dhruv',
                            'Nityaa', 'Dhruv', 'Dhruv'],
                   'Marks': [90, 93, 78, 56, 34, 12, 67,
                             45, 78, 92, 29, 88, 81]})
print(df)

# Group the DataFrame on the categorical column
d = df.groupby(df['Name'])

# Functions to calculate positive and negative sums
def pos(col):
    return col[col > 0].sum()

def neg(col):
    return col[col < 0].sum()

# Apply the functions to a particular column
print(d['Marks'].agg([('negative_values', neg),
                      ('positive_values', pos)]))
Output:
Names
Marks
|
[
{
"code": null,
"e": 23901,
"s": 23873,
"text": "\n30 May, 2021"
},
{
"code": null,
"e": 24053,
"s": 23901,
"text": "In this article, we will discuss how to calculate the sum of all negative numbers and positive numbers in DataFrame using the GroupBy method in Pandas."
},
{
"code": null,
"e": 24109,
"s": 24053,
"text": "To use the groupby() method use the given below syntax."
},
{
"code": null,
"e": 24141,
"s": 24109,
"text": "Syntax: df.groupby(column_name)"
},
{
"code": null,
"e": 24226,
"s": 24141,
"text": "Step 1: Creating lambda functions to calculate positive-sum and negative-sum values."
},
{
"code": null,
"e": 24302,
"s": 24226,
"text": "pos = lambda col : col[col > 0].sum()\nneg = lambda col : col[col < 0].sum()"
},
{
"code": null,
"e": 24395,
"s": 24302,
"text": "Step 2: We will use the groupby() method and apply the lambda function to calculate the sum."
},
{
"code": null,
"e": 24688,
"s": 24395,
"text": "d = df.groupby(df['Alphabet'])\nprint(d['Frequency'].agg([('negative_values', neg),\n ('positive_values', pos)\n ]))\nprint(d['Bandwidth'].agg([('negative_values', neg),\n ('positive_values', pos)\n ]))"
},
{
"code": null,
"e": 24700,
"s": 24688,
"text": "Example 1: "
},
{
"code": null,
"e": 24819,
"s": 24700,
"text": "Calculate the sum of all positive as well as negative values of a, b, c for both columns i.e., Frequency and bandwidth"
},
{
"code": null,
"e": 24827,
"s": 24819,
"text": "Python3"
},
{
"code": "# Import Necessary Librariesimport pandas as pdimport numpy as np # Creating a DataFrame with # random valuesdf = pd.DataFrame({'Alphabet': ['a', 'b', 'c', 'c', 'a', 'a', 'c', 'b'], 'Frequency': [-10, 29, -12, -190, 72, -98, -12, 0], 'BandWidth': [10, 34, 23, -10, -87, -76, 365, 10]}) print(df) # Group By dataframe on categorical# valuesd = df.groupby(df['Alphabet']) # creating lambda function to calculate# positive as well as negative valuesdef pos(col): return col[col > 0].sum() def neg(col): return col[col < 0].sum() # Apply lambda function to particular # columnprint(d['Frequency'].agg([('negative_values', neg), ('positive_values', pos) ])) print(d['Bandwidth'].agg([('negative_values', neg), ('positive_values', pos) ]))",
"e": 25847,
"s": 24827,
"text": null
},
{
"code": null,
"e": 25855,
"s": 25847,
"text": "Output:"
},
{
"code": null,
"e": 25866,
"s": 25855,
"text": "Example 2:"
},
{
"code": null,
"e": 25966,
"s": 25866,
"text": "Calculate the sum of all positive as well as negative values of a, b for both columns i.e., X and Y"
},
{
"code": null,
"e": 25974,
"s": 25966,
"text": "Python3"
},
{
"code": "# Import Necessary Librariesimport pandas as pdimport numpy as np # Creating a DataFrame with random valuesdf = pd.DataFrame({'Function': ['F(x)', 'F(x)', 'F(y)', 'F(x)', 'F(y)', 'F(x)', 'F(x)', 'F(y)'], 'X': [-10, 29, -12, -190, 72, -98, -12, 0], 'Y': [10, 34, 23, -10, -87, -76, 365, 10]}) print(df) # Group By dataframe on categorical valuesd = df.groupby(df['Function']) # creating lambda function to calculate# positive as well as negative valuesdef pos(col): return col[col > 0].sum() def neg(col): return col[col < 0].sum() # Apply lambda function to particular # columnprint(d['X'].agg([('negative_values', neg), ('positive_values', pos) ])) print(d['Y'].agg([('negative_values', neg), ('positive_values', pos) ]))",
"e": 26965,
"s": 25974,
"text": null
},
{
"code": null,
"e": 26973,
"s": 26965,
"text": "Output:"
},
{
"code": null,
"e": 26983,
"s": 26973,
"text": "DataFrame"
},
{
"code": null,
"e": 26992,
"s": 26983,
"text": "X Output"
},
{
"code": null,
"e": 27001,
"s": 26992,
"text": "Y Output"
},
{
"code": null,
"e": 27012,
"s": 27001,
"text": "Example 3:"
},
{
"code": null,
"e": 27297,
"s": 27012,
"text": "Calculate the sum of all positive as well as negative values of every name i.e., Marks. The next step is to make the lambda function to calculate the sum. In the last step, we will group the data according to the names and call the lambda functions to calculate the sum of the values."
},
{
"code": null,
"e": 27305,
"s": 27297,
"text": "Python3"
},
{
"code": "# Import Necessary Librariesimport pandas as pdimport numpy as np # Creating a DataFrame with random valuesdf = pd.DataFrame({'Name': ['Aryan', 'Nityaa', 'Dhruv', 'Dhruv', 'Nityaa', 'Aryan', 'Nityaa', 'Aryan', 'Aryan', 'Dhruv', 'Nityaa', 'Dhruv', 'Dhruv'], 'Marks': [90, 93, 78, 56, 34, 12, 67, 45, 78, 92, 29, 88, 81]})print(df) # Group By dataframe on categorical valuesd = df.groupby(df['Name']) # creating lambda function to calculate# positive as well as negative valuesdef pos(col): return col[col > 0].sum() def neg(col): return col[col < 0].sum() # Apply lambda function to particular# columnprint(d['Marks'].agg([('negative_values', neg), ('positive_values', pos) ]))",
"e": 28192,
"s": 27305,
"text": null
},
{
"code": null,
"e": 28200,
"s": 28192,
"text": "Output:"
},
{
"code": null,
"e": 28206,
"s": 28200,
"text": "Names"
},
{
"code": null,
"e": 28212,
"s": 28206,
"text": "Marks"
},
{
"code": null,
"e": 28219,
"s": 28212,
"text": "Picked"
},
{
"code": null,
"e": 28242,
"s": 28219,
"text": "Python Pandas-exercise"
},
{
"code": null,
"e": 28264,
"s": 28242,
"text": "Python pandas-groupby"
},
{
"code": null,
"e": 28278,
"s": 28264,
"text": "Python-pandas"
},
{
"code": null,
"e": 28285,
"s": 28278,
"text": "Python"
},
{
"code": null,
"e": 28383,
"s": 28285,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28392,
"s": 28383,
"text": "Comments"
},
{
"code": null,
"e": 28405,
"s": 28392,
"text": "Old Comments"
},
{
"code": null,
"e": 28441,
"s": 28405,
"text": "Box Plot in Python using Matplotlib"
},
{
"code": null,
"e": 28464,
"s": 28441,
"text": "Bar Plot in Matplotlib"
},
{
"code": null,
"e": 28503,
"s": 28464,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 28536,
"s": 28503,
"text": "Python | Convert set into a list"
},
{
"code": null,
"e": 28585,
"s": 28536,
"text": "Ways to filter Pandas DataFrame by column values"
},
{
"code": null,
"e": 28626,
"s": 28585,
"text": "Python - Call function from another file"
},
{
"code": null,
"e": 28642,
"s": 28626,
"text": "loops in python"
},
{
"code": null,
"e": 28693,
"s": 28642,
"text": "Multithreading in Python | Set 2 (Synchronization)"
},
{
"code": null,
"e": 28725,
"s": 28693,
"text": "Python Dictionary keys() method"
}
] |
Plotly with Matplotlib and Chart Studio
|
This chapter deals with data visualization library titled Matplotlib and online plot maker named Chart Studio.
Matplotlib is a popular Python data visualization library capable of producing production-ready but static plots. You can convert your static Matplotlib figures into interactive plots with the help of the mpl_to_plotly() function in the plotly.tools module.
The following script produces a sine-wave line plot using Matplotlib’s PyPlot API.
from matplotlib import pyplot as plt
import numpy as np
import math
#needed for definition of pi
x = np.arange(0, math.pi*2, 0.05)
y = np.sin(x)
plt.plot(x,y)
plt.xlabel("angle")
plt.ylabel("sine")
plt.title('sine wave')
plt.show()
Now we shall convert it into a plotly figure as follows −
import plotly.tools as tls      # provides mpl_to_plotly()
import plotly.offline as py     # assumed here; the original snippet omits its imports

fig = plt.gcf()
plotly_fig = tls.mpl_to_plotly(fig)
py.iplot(plotly_fig)
The output of the code is as given below −
Chart Studio is an online plot maker tool made available by Plotly. It provides a graphical user interface for importing and analyzing data into a grid and using stats tools. Graphs can be embedded or downloaded. It is mainly used to enable creating graphs faster and more efficiently.
After logging in to plotly’s account, start the Chart Studio app by visiting the link https://plot.ly/create. The web page offers a blank worksheet below the plot area. Chart Studio lets you add plot traces by pushing the + Trace button.
Various plot structure elements such as annotations, style etc. as well as facility to save, export and share the plots is available in the menu.
Let us add data in the worksheet and choose a bar plot trace from the trace types.
Click in the type text box and select bar plot.
Then, provide data columns for x and y axes and enter plot title.
|
[
{
"code": null,
"e": 2471,
"s": 2360,
"text": "This chapter deals with data visualization library titled Matplotlib and online plot maker named Chart Studio."
},
{
"code": null,
"e": 2721,
"s": 2471,
"text": "Matplotlib is a popular Python data visualization library capable of producing production-ready but static plots. you can convert your static matplotlib figures into interactive plots with the help of mpl_to_plotly() function in plotly.tools module."
},
{
"code": null,
"e": 2800,
"s": 2721,
"text": "Following script produces a Sine wave Line plot using Matplotlib’s PyPlot API."
},
{
"code": null,
"e": 3033,
"s": 2800,
"text": "from matplotlib import pyplot as plt\nimport numpy as np\nimport math \n#needed for definition of pi\nx = np.arange(0, math.pi*2, 0.05)\ny = np.sin(x)\nplt.plot(x,y)\nplt.xlabel(\"angle\")\nplt.ylabel(\"sine\")\nplt.title('sine wave')\nplt.show()"
},
{
"code": null,
"e": 3091,
"s": 3033,
"text": "Now we shall convert it into a plotly figure as follows −"
},
{
"code": null,
"e": 3164,
"s": 3091,
"text": "fig = plt.gcf()\nplotly_fig = tls.mpl_to_plotly(fig)\npy.iplot(plotly_fig)"
},
{
"code": null,
"e": 3207,
"s": 3164,
"text": "The output of the code is as given below −"
},
{
"code": null,
"e": 3493,
"s": 3207,
"text": "Chart Studio is an online plot maker tool made available by Plotly. It provides a graphical user interface for importing and analyzing data into a grid and using stats tools. Graphs can be embedded or downloaded. It is mainly used to enable creating graphs faster and more efficiently."
},
{
"code": null,
"e": 3731,
"s": 3493,
"text": "After logging in to plotly’s account, start the chart studio app by visiting the link https://plot.ly/create. The web page offers a blank work sheet below the plot area. Chart Studio lets you to add plot traces by pushing + trace button."
},
{
"code": null,
"e": 3877,
"s": 3731,
"text": "Various plot structure elements such as annotations, style etc. as well as facility to save, export and share the plots is available in the menu."
},
{
"code": null,
"e": 3962,
"s": 3877,
"text": "Let us add data in the worksheet and add choose bar plot trace from the trace types."
},
{
"code": null,
"e": 4010,
"s": 3962,
"text": "Click in the type text box and select bar plot."
},
{
"code": null,
"e": 4076,
"s": 4010,
"text": "Then, provide data columns for x and y axes and enter plot title."
},
{
"code": null,
"e": 4108,
"s": 4076,
"text": "\n 12 Lectures \n 53 mins\n"
},
{
"code": null,
"e": 4128,
"s": 4108,
"text": " Pranjal Srivastava"
},
{
"code": null,
"e": 4135,
"s": 4128,
"text": " Print"
},
{
"code": null,
"e": 4146,
"s": 4135,
"text": " Add Notes"
}
] |
C++ Program for Comb Sort?
|
The basic idea of comb sort and bubble sort is the same. In other words, comb sort is an improvement on bubble sort. In the bubble sorting technique, each item is compared with the next item in each phase. But in comb sort, items are compared across a specific gap. After completing each phase, the gap is decreased. The decreasing factor, or shrink factor, for this sort is 1.3; it means that after completing each phase the gap is divided by 1.3. The time complexity is O(n log n) for the best case, O(n²/2^p) (where p is the number of increments) for the average case, and O(n²) for the worst case.
Input − An array of data, and the total number in the array
Output − The sorted Array
Begin
gap := size
flag := true
while the gap ≠ 1 OR flag = true do
gap = floor(gap/1.3) // take the floor value after division
if gap < 1 then
gap := 1
flag = false
for i := 0 to size – gap -1 do
if array[i] > array[i+gap] then
swap array[i] with array[i+gap]
flag = true;
done
done
End
#include<iostream>
#include<algorithm>
using namespace std;
void display(int *array, int size){
for(int i = 0; i<size; i++)
cout << array[i] << " ";
cout << endl;
}
void combSort(int *array, int size){
int gap = size; //initialize gap size with size of array
bool flag = true;
while(gap != 1 || flag == true){
gap = (gap*10)/13; //minimize gap by shrink factor
if(gap<1)
gap = 1;
flag = false;
for(int i = 0; i<size-gap; i++){ //compare elements with gap
if(array[i] > array[i+gap]){
swap(array[i], array[i+gap]);
flag = true;
}
}
}
}
int main(){
int n;
cout << "Enter the number of elements: ";
cin >> n;
int arr[n]; //create an array with given number of elements
cout << "Enter elements:" << endl;
for(int i = 0; i<n; i++){
cin >> arr[i];
}
cout << "Array before Sorting: ";
display(arr, n);
combSort(arr, n);
cout << "Array after Sorting: ";
display(arr, n);
}
Enter the number of elements: 10
Enter elements:
108 96 23 74 12 56 85 42 13 47
Array before Sorting: 108 96 23 74 12 56 85 42 13 47
Array after Sorting: 12 13 23 42 47 56 74 85 96 108
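For readers following along in another language, the same algorithm can be sketched in Python (a hypothetical port, not part of the original article; it mirrors the C++ integer shrink `(gap*10)/13`):

```python
def comb_sort(arr):
    """Sort a list in place using comb sort with shrink factor 1.3."""
    gap = len(arr)
    swapped = True
    while gap != 1 or swapped:
        # Shrink the gap using integer arithmetic, mirroring (gap*10)/13 in C++
        gap = max(1, (gap * 10) // 13)
        swapped = False
        for i in range(len(arr) - gap):
            if arr[i] > arr[i + gap]:
                # Swap elements that are out of order across the gap
                arr[i], arr[i + gap] = arr[i + gap], arr[i]
                swapped = True
    return arr

data = [108, 96, 23, 74, 12, 56, 85, 42, 13, 47]
comb_sort(data)
print(data)  # [12, 13, 23, 42, 47, 56, 74, 85, 96, 108]
```

The loop terminates once the gap has shrunk to 1 and a full pass completes with no swaps, exactly as in the pseudocode above.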
|
[
{
"code": null,
"e": 1646,
"s": 1062,
"text": "The basic idea of comb sort and the bubble sort is same. In other words, comb sort is an improvement on the bubble sort. In the bubble sorting technique, the items are compared with the next item in each phase. But for the comb sort, the items are sorted in a specific gap. After completing each phase, the gap is decreased. The decreasing factor or the shrink factor for this sort is 1.3. It means that after completing each phase the gap is divided by 1.3. Time Complexity is O(n log n) for best case. O(n2/2nP) (p is number of increment) for average case and O(n2) for worst case."
},
{
"code": null,
"e": 1706,
"s": 1646,
"text": "Input − An array of data, and the total number in the array"
},
{
"code": null,
"e": 1732,
"s": 1706,
"text": "Output − The sorted Array"
},
{
"code": null,
"e": 2102,
"s": 1732,
"text": "Begin\n gap := size\n flag := true\n while the gap ≠ 1 OR flag = true do\n gap = floor(gap/1.3) //the the floor value after division\n if gap < 1 then\n gap := 1\n flag = false\n for i := 0 to size – gap -1 do\n if array[i] > array[i+gap] then\n swap array[i] with array[i+gap]\n flag = true;\n done\n done\nEnd"
},
{
"code": null,
"e": 3113,
"s": 2102,
"text": "include<iostream>\n#include<algorithm>\nusing namespace std;\nvoid display(int *array, int size){\n for(int i = 0; i<size; i++)\n cout << array[i] << \" \";\n cout << endl;\n}\nvoid combSort(int *array, int size){\n int gap = size; //initialize gap size with size of array\n bool flag = true;\n while(gap != 1 || flag == true){\n gap = (gap*10)/13; //minimize gap by shrink factor\n if(gap<1)\n gap = 1;\n flag = false;\n for(int i = 0; i<size-gap; i++){ //compare elements with gap\n if(array[i] > array[i+gap]){\n swap(array[i], array[i+gap]);\n flag = true;\n }\n }\n }\n}\nint main(){\n int n;\n cout << \"Enter the number of elements: \";\n cin >> n;\n int arr[n]; //create an array with given number of elements\n cout << \"Enter elements:\" << endl;\n for(int i = 0; i<n; i++){\n cin >> arr[i];\n }\n cout << \"Array before Sorting: \";\n display(arr, n);\n combSort(arr, n);\n cout << \"Array after Sorting: \";\n display(arr, n);\n}"
},
{
"code": null,
"e": 3298,
"s": 3113,
"text": "Enter the number of elements: 10\nEnter elements:\n108 96 23 74 12 56 85 42 13 47\nArray before Sorting: 108 96 23 74 12 56 85 42 13 47\nArray after Sorting: 12 13 23 42 47 56 74 85 96 108"
}
] |
Building A Deep Learning Model using Keras | by Eijaz Allibhai | Towards Data Science
|
Deep learning is an increasingly popular subset of machine learning. Deep learning models are built using neural networks. A neural network takes in inputs, which are then processed in hidden layers using weights that are adjusted during training. Then the model spits out a prediction. The weights are adjusted to find patterns in order to make better predictions. The user does not need to specify what patterns to look for — the neural network learns on its own.
Keras is a user-friendly neural network library written in Python. In this tutorial, I will go over two deep learning models using Keras: one for regression and one for classification. We will build a regression model to predict an employee’s wage per hour, and we will build a classification model to predict whether or not a patient has diabetes.
Note: The datasets we will be using are relatively clean, so we will not perform any data preprocessing in order to get our data ready for modeling. Datasets that you will use in future projects may not be so clean — for example, they may have missing values — so you may need to use data preprocessing techniques to alter your datasets to get more accurate results.
For our regression deep learning model, the first step is to read in the data we will use as input. For this example, we are using the ‘hourly wages’ dataset. To start, we will use Pandas to read in the data. I will not go into detail on Pandas, but it is a library you should become familiar with if you’re looking to dive further into data science and machine learning.
‘df’ stands for dataframe. Pandas reads in the csv file as a dataframe. The ‘head()’ function will show the first 5 rows of the dataframe so you can check that the data has been read in properly and can take an initial look at how the data is structured.
import pandas as pd

# read in data using pandas
train_df = pd.read_csv('data/hourly_wages_data.csv')

# check data has been read in properly
train_df.head()
Next, we need to split up our dataset into inputs (train_X) and our target (train_y). Our input will be every column except ‘wage_per_hour’ because ‘wage_per_hour’ is what we will be attempting to predict. Therefore, ‘wage_per_hour’ will be our target.
We will use pandas ‘drop’ function to drop the column ‘wage_per_hour’ from our dataframe and store it in the variable ‘train_X’. This will be our input.
#create a dataframe with all training data except the target column
train_X = train_df.drop(columns=['wage_per_hour'])

#check that the target variable has been removed
train_X.head()
We will insert the column ‘wage_per_hour’ into our target variable (train_y).
#create a dataframe with only the target column
train_y = train_df[['wage_per_hour']]

#view dataframe
train_y.head()
Next, we have to build the model. Here is the code:
from keras.models import Sequential
from keras.layers import Dense

#create model
model = Sequential()

#get number of columns in training data
n_cols = train_X.shape[1]

#add model layers
model.add(Dense(10, activation='relu', input_shape=(n_cols,)))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
The model type that we will be using is Sequential. Sequential is the easiest way to build a model in Keras. It allows you to build a model layer by layer. Each layer has weights that correspond to the layer that follows it.
We use the ‘add()’ function to add layers to our model. We will add two layers and an output layer.
‘Dense’ is the layer type. Dense is a standard layer type that works for most cases. In a dense layer, all nodes in the previous layer connect to the nodes in the current layer.
We have 10 nodes in each of our hidden layers. This number can also be in the hundreds or thousands. Increasing the number of nodes in each layer increases model capacity. I will go into further detail about the effects of increasing model capacity shortly.
‘Activation’ is the activation function for the layer. An activation function allows models to take into account nonlinear relationships. For example, if you are predicting diabetes in patients, going from age 10 to 11 is different than going from age 60–61.
The activation function we will be using is ReLU or Rectified Linear Activation. Although it is two linear pieces, it has been proven to work well in neural networks.
The first layer needs an input shape. The input shape specifies the number of rows and columns in the input. The number of columns in our input is stored in ‘n_cols’. There is nothing after the comma which indicates that there can be any amount of rows.
The last layer is the output layer. It only has one node, which is for our prediction.
Next, we need to compile our model. Compiling the model takes two parameters: optimizer and loss.
The optimizer controls the learning rate. We will be using ‘adam’ as our optimizer. Adam is generally a good optimizer to use for many cases. The adam optimizer adjusts the learning rate throughout training.
The learning rate determines how fast the optimal weights for the model are calculated. A smaller learning rate may lead to more accurate weights (up to a certain point), but the time it takes to compute the weights will be longer.
For our loss function, we will use ‘mean_squared_error’. It is calculated by taking the average squared difference between the predicted and actual values. It is a popular loss function for regression problems. The closer to 0 this is, the better the model performed.
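As a quick illustration of what this loss measures, here is the same calculation done by hand with NumPy (the wage values below are made up for the example; this snippet is not part of the original tutorial):

```python
import numpy as np

# hypothetical actual and predicted wage-per-hour values
actual = np.array([10.0, 12.0, 8.0])
predicted = np.array([11.0, 10.0, 8.0])

# mean squared error: average of the squared differences
mse = np.mean((predicted - actual) ** 2)
print(mse)  # (1 + 4 + 0) / 3
```

A perfect model would score 0; larger errors are penalized quadratically, which is why a few very bad predictions can dominate this loss.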
#compile model using mse as a measure of model performance
model.compile(optimizer='adam', loss='mean_squared_error')
Now we will train our model. To train, we will use the ‘fit()’ function on our model with the following five parameters: training data (train_X), target data (train_y), validation split, the number of epochs and callbacks.
The validation split will randomly split the data into training and testing sets. During training, we will be able to see the validation loss, which gives the mean squared error of our model on the validation set. We will set the validation split at 0.2, which means that 20% of the training data we provide to the model will be set aside for testing model performance.
The number of epochs is the number of times the model will cycle through the data. The more epochs we run, the more the model will improve, up to a certain point. After that point, the model will stop improving during each epoch. In addition, the more epochs, the longer the model will take to run. To monitor this, we will use ‘early stopping’.
Early stopping will stop the model from training before the number of epochs is reached if the model stops improving. We will set our early stopping monitor to 3. This means that after 3 epochs in a row in which the model doesn’t improve, training will stop. Sometimes, the validation loss can stop improving then improve in the next epoch, but after 3 epochs in which the validation loss doesn’t improve, it usually won’t improve again.
from keras.callbacks import EarlyStopping

#set early stopping monitor so the model stops training when it won't improve anymore
early_stopping_monitor = EarlyStopping(patience=3)

#train model
model.fit(train_X, train_y, validation_split=0.2, epochs=30, callbacks=[early_stopping_monitor])
If you want to use this model to make predictions on new data, you would use the ‘predict()’ function, passing in the new data. The output would be ‘wage_per_hour’ predictions.
#example of using our newly trained model to make predictions on unseen data
#(we will pretend our new data is saved in a dataframe called 'test_X')
test_y_predictions = model.predict(test_X)
Congrats! You have built a deep learning model in Keras! It is not very accurate yet, but that can improve with using a larger amount of training data and ‘model capacity’.
As you increase the number of nodes and layers in a model, the model capacity increases. Increasing model capacity can lead to a more accurate model, up to a certain point, at which the model will stop improving. Generally, the more training data you provide, the larger the model should be. We are only using a tiny amount of data, so our model is pretty small. The larger the model, the more computational capacity it requires and it will take longer to train.
Let’s create a new model using the same training data as our previous model. This time, we will add a layer and increase the nodes in each layer to 200. We will train the model to see if increasing the model capacity will improve our validation score.
#training a new model on the same data to show the effect of increasing model capacity

#create model
model_mc = Sequential()

#add model layers
model_mc.add(Dense(200, activation='relu', input_shape=(n_cols,)))
model_mc.add(Dense(200, activation='relu'))
model_mc.add(Dense(200, activation='relu'))
model_mc.add(Dense(1))

#compile model using mse as a measure of model performance
model_mc.compile(optimizer='adam', loss='mean_squared_error')

#train model
model_mc.fit(train_X, train_y, validation_split=0.2, epochs=30, callbacks=[early_stopping_monitor])
We can see that by increasing our model capacity, we have improved our validation loss from 32.63 in our old model to 28.06 in our new model.
Now let’s move on to building our model for classification. Since many steps will be a repeat from the previous model, I will only go over new concepts.
For this next model, we are going to predict if patients have diabetes or not.
#read in training data
train_df_2 = pd.read_csv('documents/data/diabetes_data.csv')

#view data structure
train_df_2.head()
#create a dataframe with all training data except the target column
train_X_2 = train_df_2.drop(columns=['diabetes'])

#check that the target variable has been removed
train_X_2.head()
When separating the target column, we need to call the ‘to_categorical()’ function so that column will be ‘one-hot encoded’. Currently, a patient with no diabetes is represented with a 0 in the diabetes column and a patient with diabetes is represented with a 1. With one-hot encoding, the integer will be removed and a binary variable is inputted for each category. In our case, we have two categories: no diabetes and diabetes. A patient with no diabetes will be represented by [1 0] and a patient with diabetes will be represented by [0 1].
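The encoding itself is easy to reproduce with plain NumPy, which may make the [1 0] / [0 1] representation more concrete (the label values here are hypothetical, not taken from the dataset):

```python
import numpy as np

# hypothetical diabetes column: 0 = no diabetes, 1 = diabetes
labels = np.array([0, 1, 1, 0])

# one-hot encode: row i gets a 1 in column labels[i], 0 elsewhere
one_hot = np.eye(2)[labels]
print(one_hot)
# a patient with no diabetes -> [1. 0.]; with diabetes -> [0. 1.]
```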
from keras.utils import to_categorical

#one-hot encode target column
train_y_2 = to_categorical(train_df_2.diabetes)

#check that target column has been converted
train_y_2[0:5]
#create model
model_2 = Sequential()

#get number of columns in training data
n_cols_2 = train_X_2.shape[1]

#add layers to model
model_2.add(Dense(250, activation='relu', input_shape=(n_cols_2,)))
model_2.add(Dense(250, activation='relu'))
model_2.add(Dense(250, activation='relu'))
model_2.add(Dense(2, activation='softmax'))
The last layer of our model has 2 nodes — one for each option: the patient has diabetes or they don’t.
The activation is ‘softmax’. Softmax makes the output sum up to 1 so the output can be interpreted as probabilities. The model will then make its prediction based on which option has a higher probability.
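A minimal NumPy sketch of softmax (the textbook formula, not Keras's internal implementation; the raw output values are invented) shows why the outputs can be read as probabilities:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability, then normalize
    e = np.exp(z - np.max(z))
    return e / e.sum()

# hypothetical raw outputs of the two final-layer nodes
probs = softmax(np.array([2.0, 1.0]))
print(probs)        # two non-negative values
print(probs.sum())  # they sum to 1
```

Here the first node's larger raw output becomes the larger probability, so the model would predict that class.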
#compile model using accuracy to measure model performance
model_2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
We will use ‘categorical_crossentropy’ for our loss function. This is the most common choice for classification. A lower score indicates that the model is performing better.
To make things even easier to interpret, we will use the ‘accuracy’ metric to see the accuracy score on the validation set at the end of each epoch.
#train model
model_2.fit(train_X_2, train_y_2, epochs=30, validation_split=0.2, callbacks=[early_stopping_monitor])
Congrats! You are now well on your way to building amazing deep learning models in Keras!
Thanks for reading! The github repository for this tutorial can be found here.
Karatsuba algorithm for fast multiplication using Divide and Conquer algorithm - GeeksforGeeks
23 Feb, 2022
Given two binary strings that represent value of two integers, find the product of two strings. For example, if the first bit string is “1100” and second bit string is “1010”, output should be 120.
For simplicity, let the length of two strings be same and be n.
A Naive Approach is to follow the process we study in school. One by one take all bits of second number and multiply it with all bits of first number. Finally add all multiplications. This algorithm takes O(n^2) time.
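The schoolbook method can be sketched in a few lines of Python (illustrative only, using integer shifts for brevity; the article's own implementation below works directly on bit strings in C++):

```python
def naive_multiply(x, y):
    """Multiply two binary strings the schoolbook way: O(n^2) bit operations."""
    a = int(x, 2)
    result = 0
    shift = 0
    # take bits of the second number from least significant to most significant
    for bit in reversed(y):
        if bit == '1':
            result += a << shift  # add the appropriately shifted first number
        shift += 1
    return result

print(naive_multiply("1100", "1010"))  # 120
```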
Using Divide and Conquer, we can multiply two integers in less time complexity. We divide the given numbers in two halves. Let the given numbers be X and Y.
For simplicity let us assume that n is even
X = Xl * 2^(n/2) + Xr    [Xl and Xr contain leftmost and rightmost n/2 bits of X]
Y = Yl * 2^(n/2) + Yr    [Yl and Yr contain leftmost and rightmost n/2 bits of Y]
The product XY can be written as following.
XY = (Xl * 2^(n/2) + Xr) * (Yl * 2^(n/2) + Yr)
   = 2^n * XlYl + 2^(n/2) * (XlYr + XrYl) + XrYr
If we take a look at the above formula, there are four multiplications of size n/2, so we basically divided the problem of size n into four sub-problems of size n/2. But that doesn’t help because solution of recurrence T(n) = 4T(n/2) + O(n) is O(n^2). The tricky part of this algorithm is to change the middle two terms to some other form so that only one extra multiplication would be sufficient. The following is tricky expression for middle two terms.
XlYr + XrYl = (Xl + Xr)(Yl + Yr) - XlYl - XrYr
So the final value of XY becomes
XY = 2^n * XlYl + 2^(n/2) * [(Xl + Xr)(Yl + Yr) - XlYl - XrYr] + XrYr
With the above trick, the recurrence becomes T(n) = 3T(n/2) + O(n), whose solution is O(n^1.59).
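The same three-multiplication recursion can be sketched compactly for Python integers before turning to the article's string-based C++ version (this sketch splits on bit positions rather than string halves):

```python
def karatsuba(x, y):
    # base case: small numbers are multiplied directly
    if x < 10 or y < 10:
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    # split each number into a high half and a low half of `half` bits
    xl, xr = x >> half, x & ((1 << half) - 1)
    yl, yr = y >> half, y & ((1 << half) - 1)
    # only three recursive multiplications instead of four
    p1 = karatsuba(xl, yl)
    p2 = karatsuba(xr, yr)
    p3 = karatsuba(xl + xr, yl + yr)
    # combine: XY = 2^(2*half)*P1 + 2^half*(P3 - P1 - P2) + P2
    return (p1 << (2 * half)) + ((p3 - p1 - p2) << half) + p2

print(karatsuba(12, 10))  # 120, matching "1100" x "1010"
```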
What if the lengths of input strings are different and are not even? To handle the different length case, we append 0’s in the beginning. To handle odd length, we put floor(n/2) bits in left half and ceil(n/2) bits in right half. So the expression for XY changes to following.
XY = 2^(2*ceil(n/2)) * XlYl + 2^(ceil(n/2)) * [(Xl + Xr)(Yl + Yr) - XlYl - XrYr] + XrYr
The above algorithm is called Karatsuba algorithm and it can be used for any base.
Following is C++ implementation of above algorithm.
C++
// C++ implementation of Karatsuba algorithm for bit string multiplication.
#include <iostream>
#include <stdio.h>

using namespace std;

// FOLLOWING TWO FUNCTIONS ARE COPIED FROM http://goo.gl/q0OhZ
// Helper method: given two unequal sized bit strings, converts them to
// same length by adding leading 0s in the smaller string. Returns the
// new length
int makeEqualLength(string &str1, string &str2)
{
    int len1 = str1.size();
    int len2 = str2.size();
    if (len1 < len2)
    {
        for (int i = 0; i < len2 - len1; i++)
            str1 = '0' + str1;
        return len2;
    }
    else if (len1 > len2)
    {
        for (int i = 0; i < len1 - len2; i++)
            str2 = '0' + str2;
    }
    return len1; // If len1 >= len2
}

// The main function that adds two bit sequences and returns the addition
string addBitStrings(string first, string second)
{
    string result; // To store the sum bits

    // make the lengths same before adding
    int length = makeEqualLength(first, second);

    int carry = 0; // Initialize carry

    // Add all bits one by one
    for (int i = length - 1; i >= 0; i--)
    {
        int firstBit = first.at(i) - '0';
        int secondBit = second.at(i) - '0';

        // boolean expression for sum of 3 bits
        int sum = (firstBit ^ secondBit ^ carry) + '0';

        result = (char)sum + result;

        // boolean expression for 3-bit addition
        carry = (firstBit & secondBit) | (secondBit & carry) | (firstBit & carry);
    }

    // if overflow, then add a leading 1
    if (carry)
        result = '1' + result;

    return result;
}

// A utility function to multiply single bits of strings a and b
int multiplyiSingleBit(string a, string b)
{
    return (a[0] - '0') * (b[0] - '0');
}

// The main function that multiplies two bit strings X and Y and returns
// result as long integer
long int multiply(string X, string Y)
{
    // Find the maximum of lengths of X and Y and make length
    // of smaller string same as that of larger string
    int n = makeEqualLength(X, Y);

    // Base cases
    if (n == 0) return 0;
    if (n == 1) return multiplyiSingleBit(X, Y);

    int fh = n / 2;    // First half of string, floor(n/2)
    int sh = (n - fh); // Second half of string, ceil(n/2)

    // Find the first half and second half of first string.
    // Refer http://goo.gl/lLmgn for substr method
    string Xl = X.substr(0, fh);
    string Xr = X.substr(fh, sh);

    // Find the first half and second half of second string
    string Yl = Y.substr(0, fh);
    string Yr = Y.substr(fh, sh);

    // Recursively calculate the three products of inputs of size n/2
    long int P1 = multiply(Xl, Yl);
    long int P2 = multiply(Xr, Yr);
    long int P3 = multiply(addBitStrings(Xl, Xr), addBitStrings(Yl, Yr));

    // Combine the three products to get the final result.
    return P1 * (1 << (2 * sh)) + (P3 - P1 - P2) * (1 << sh) + P2;
}

// Driver program to test above functions
int main()
{
    printf("%ld\n", multiply("1100", "1010"));
    printf("%ld\n", multiply("110", "1010"));
    printf("%ld\n", multiply("11", "1010"));
    printf("%ld\n", multiply("1", "1010"));
    printf("%ld\n", multiply("0", "1010"));
    printf("%ld\n", multiply("111", "111"));
    printf("%ld\n", multiply("11", "11"));
}
Output:
120
60
30
10
0
49
9
Time Complexity: Time complexity of the above solution is O(n^(log2 3)) = O(n^1.59). Time complexity of multiplication can be further improved using another Divide and Conquer algorithm, the fast Fourier transform. We will soon be discussing fast Fourier transform as a separate post.
Exercise: The above program returns a long int value and will not work for big strings. Extend the above program to return a string instead of a long int value.
Solution: Multiplication of large numbers is an important problem in Computer Science. The given approach uses the Divide and Conquer methodology. Run the code to see the time comparison between normal binary multiplication and the Karatsuba algorithm. You can see the full code in this repository.
Examples:
First Binary Input : 101001010101010010101001010100101010010101010010101
Second Binary Input : 101001010101010010101001010100101010010101010010101
Decimal Output : Not Representable
Output : 2.1148846e+30
First Binary Input : 1011
Second Binary Input : 1000
Decimal Output : 88
Output : 5e-05
C++
#include <iostream>#include <ctime>#include <fstream>#include <string.h>#include <cmath>#include <sstream> using namespace std; // classical method classclass BinaryMultiplier{public: string MakeMultiplication(string,string); string MakeShifting(string,int); string addBinary(string,string); void BinaryStringToDecimal(string);}; // karatsuba method classclass Karatsuba{public: int lengthController(string &,string &); string addStrings(string,string); string multiply(string,string); string DecimalToBinary(long long int); string Subtraction(string,string); string MakeShifting(string,int);}; // this function get strings and go over str2 bit// if it sees 1 it calculates the shifted version according to position bit// Makes add operation for binary strings// returns result stringstring BinaryMultiplier::MakeMultiplication(string str1, string str2){ string allSum = ""; for (int j = 0 ; j<str2.length(); j++) { int secondDigit = str2[j] - '0'; if (secondDigit == 1) { string shifted = MakeShifting(str1,str2.size()-(j+1)); allSum = addBinary(shifted, allSum); } else { continue; } } return allSum;} // this function adds binary strings with carrystring BinaryMultiplier::addBinary(string a, string b){ string result = ""; int s = 0; int i = a.size() - 1; int j = b.size() - 1; while (i >= 0 || j >= 0 || s == 1) { s += ((i >= 0)? a[i] - '0': 0); s += ((j >= 0)? 
b[j] - '0': 0); result = char(s % 2 + '0') + result; s /= 2; i--; j--; } return result;} // this function shifts the given string according to given number// returns shifted versionstring BinaryMultiplier::MakeShifting(string str, int stepnum){ string shifted = str; for (int i = 0 ; i < stepnum ; i++) shifted = shifted + '0'; return shifted;} // this function converts Binary String Number to Decimal Number// After 32 bits it gives 0 because it overflows the size of intvoid BinaryMultiplier::BinaryStringToDecimal(string result){ cout<<"Binary Result : "<<result<<endl; unsigned long long int val = 0; for (int i = result.length()-1; i >= 0; i--) { if (result[i] == '1') { val += pow(2,(result.length()-1)-i); } } cout<<"Decimal Result (Not proper for Large Binary Numbers):" <<val<<endl;} // this function controls lengths of strings and make their lengths equal// returns the maximum lengthint Karatsuba::lengthController(string &str1, string &str2){ int len1 = str1.size(); int len2 = str2.size(); if (len1 < len2) { for (int i = 0 ; i < len2 - len1 ; i++) str1 = '0' + str1; return len2; } else if (len1 > len2) { for (int i = 0 ; i < len1 - len2 ; i++) str2 = '0' + str2; } return len1;} // this function add strings with carry// uses one by one bit addition methodology// returns result stringstring Karatsuba::addStrings(string first, string second){ string result; // To store the sum bits // make the lengths same before adding int length = lengthController(first, second); int carry = 0; // Initialize carry // Add all bits one by one for (int i = length-1 ; i >= 0 ; i--) { int firstBit = first.at(i) - '0'; int secondBit = second.at(i) - '0'; // boolean expression for sum of 3 bits int sum = (firstBit ^ secondBit ^ carry)+'0'; result = (char)sum + result; // Boolean expression for 3-bit addition carry = (firstBit&secondBit) | (secondBit&carry) | (firstBit&carry); } // if overflow, then add a leading 1 if (carry) { result = '1' + result; } return result;} // this function 
converts decimal number to binary stringstring Karatsuba::DecimalToBinary(long long int number){ string result = ""; if (number <= 0) { return "0"; } else { int i = 0; while (number > 0) { long long int num= number % 2; stringstream ss; ss<<num; result = ss.str() + result; number = number / 2; i++; } return result; }} // this function makes binary string subtraction with overflowstring Karatsuba::Subtraction(string lhs, string rhs){ int length = lengthController(lhs, rhs); int diff; string result; for (int i = length-1; i >= 0; i--) { diff = (lhs[i]-'0') - (rhs[i]-'0'); if (diff >= 0) { result = DecimalToBinary(diff) + result; } else { for (int j = i-1; j>=0; j--) { lhs[j] = ((lhs[j]-'0') - 1) % 10 + '0'; if (lhs[j] != '1') { break; } } result = DecimalToBinary(diff+2) + result; } } return result;} // this function makes shiftingstring Karatsuba::MakeShifting(string str, int stepnum){ string shifted = str; for (int i = 0 ; i < stepnum ; i++) shifted = shifted + '0'; return shifted;} // this function is the core of the Karatsuba// divides problem into 4 subproblems// recursively multiplies them// returns the result stringstring Karatsuba::multiply(string X, string Y){ int n = lengthController(X, Y); if (n == 1) return ((Y[0]-'0' == 1) && (X[0]-'0' == 1)) ? "1" : "0"; int fh = n/2; // First half of string, floor(n/2) int sh = (n-fh); // Second half of string, ceil(n/2) // Find the first half and second half of first string. 
string Xl = X.substr(0, fh); string Xr = X.substr(fh, sh); // Find the first half and second half of second string string Yl = Y.substr(0, fh); string Yr = Y.substr(fh, sh); // Recursively calculate the three products of inputs of size n/2 string P1 = multiply(Xl, Yl); string P2 = multiply(Xr, Yr); string P3 = multiply(addStrings(Xl, Xr), addStrings(Yl, Yr)); // return added string version return addStrings(addStrings(MakeShifting(P1, 2*(n-n/2)),P2),MakeShifting(Subtraction(P3,addStrings(P1,P2)), n-(n/2)));} int main(int argc, const char * argv[]){ // get the binary numbers as strings string firstNumber,secondNumber; cout<<"Please give the First Binary number : "; cin>>firstNumber; cout<<endl<<"Please give the Second Binary number : "; cin>>secondNumber; cout << endl; // make the initial lengths equal by adding zeros int len1 = firstNumber.size(); int len2 = secondNumber.size(); int general_len = firstNumber.size(); if (len1 < len2) { for (int i = 0 ; i < len2 - len1 ; i++) firstNumber = '0' + firstNumber; general_len = firstNumber.size(); } else if (len1 > len2) { for (int i = 0 ; i < len1 - len2 ; i++) secondNumber = '0' + secondNumber; general_len = secondNumber.size(); } // In classical methodology Binary String Multiplication cout<<"Classical Algorithm : "<<endl; BinaryMultiplier newobj; const clock_t classical_time = clock(); string classic = newobj.MakeMultiplication(firstNumber, secondNumber); cout << float( clock () - classical_time ) / CLOCKS_PER_SEC<<endl<<endl; float c_time = float( clock () - classical_time ) / CLOCKS_PER_SEC; newobj.BinaryStringToDecimal(classic); // Using Karatsuba Multiplication Algorithm Binary String Multiplication cout<<endl<<"Karatsuba Algorithm : "<<endl; Karatsuba obj; const clock_t karatsuba_time = clock(); string karatsuba = obj.multiply(firstNumber, secondNumber); cout << float( clock () - karatsuba_time ) / CLOCKS_PER_SEC<<endl<<endl; // fixed: measure Karatsuba from karatsuba_time, not classical_time float k_time = float( clock () - karatsuba_time ) / CLOCKS_PER_SEC; 
newobj.BinaryStringToDecimal(karatsuba); return 0;}
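As an aside, the three-multiplication recursion at the heart of Karatsuba can be sketched compactly on plain integers. This is a hypothetical Java sketch for illustration only, not the article's string-based implementation; it splits decimal digits rather than bits, but the recurrence XY = P1·B² + (P3 − P1 − P2)·B + P2 is the same:

```java
public class KaratsubaSketch {
    // Multiply two non-negative longs with Karatsuba's three-product recursion.
    // Real implementations operate on arbitrary-precision digit strings.
    static long karatsuba(long x, long y) {
        if (x < 10 || y < 10) return x * y;            // base case: a single digit
        int n = Math.max(Long.toString(x).length(), Long.toString(y).length());
        long pow = (long) Math.pow(10, n / 2);         // split point B = 10^(n/2)
        long xl = x / pow, xr = x % pow;               // left/right halves of x
        long yl = y / pow, yr = y % pow;               // left/right halves of y
        long p1 = karatsuba(xl, yl);                   // high parts
        long p2 = karatsuba(xr, yr);                   // low parts
        long p3 = karatsuba(xl + xr, yl + yr);         // combined parts
        // XY = p1*B^2 + (p3 - p1 - p2)*B + p2  -- only three recursive multiplies
        return p1 * pow * pow + (p3 - p1 - p2) * pow + p2;
    }

    public static void main(String[] args) {
        // 1100 (binary) = 12 and 1010 (binary) = 10, so the product is 120,
        // matching the article's first test case.
        System.out.println(karatsuba(12, 10));         // prints 120
        System.out.println(karatsuba(1234, 5678));     // prints 7006652
    }
}
```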
Related Article: Multiply Large Numbers Represented as Strings
References: Wikipedia page for Karatsuba algorithm; Algorithms, 1st Edition, by Sanjoy Dasgupta, Christos Papadimitriou and Umesh Vazirani; http://courses.csail.mit.edu/6.006/spring11/exams/notes3-karatsuba; http://www.cc.gatech.edu/~ninamf/Algos11/lectures/lect0131.pdf

Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
Divide and Conquer
Strings
|
[
{
"code": null,
"e": 24947,
"s": 24919,
"text": "\n23 Feb, 2022"
},
{
"code": null,
"e": 25145,
"s": 24947,
"text": "Given two binary strings that represent value of two integers, find the product of two strings. For example, if the first bit string is “1100” and second bit string is “1010”, output should be 120."
},
{
"code": null,
"e": 25209,
"s": 25145,
"text": "For simplicity, let the length of two strings be same and be n."
},
{
"code": null,
"e": 25427,
"s": 25209,
"text": "A Naive Approach is to follow the process we study in school. One by one take all bits of second number and multiply it with all bits of first number. Finally add all multiplications. This algorithm takes O(n^2) time."
},
{
"code": null,
"e": 25584,
"s": 25427,
"text": "Using Divide and Conquer, we can multiply two integers in less time complexity. We divide the given numbers in two halves. Let the given numbers be X and Y."
},
{
"code": null,
"e": 25629,
"s": 25584,
"text": "For simplicity let us assume that n is even "
},
{
"code": null,
"e": 25785,
"s": 25629,
"text": "X = Xl*2n/2 + Xr [Xl and Xr contain leftmost and rightmost n/2 bits of X]\nY = Yl*2n/2 + Yr [Yl and Yr contain leftmost and rightmost n/2 bits of Y]"
},
{
"code": null,
"e": 25830,
"s": 25785,
"text": "The product XY can be written as following. "
},
{
"code": null,
"e": 25904,
"s": 25830,
"text": "XY = (Xl*2n/2 + Xr)(Yl*2n/2 + Yr)\n = 2n XlYl + 2n/2(XlYr + XrYl) + XrYr"
},
{
"code": null,
"e": 26361,
"s": 25904,
"text": "If we take a look at the above formula, there are four multiplications of size n/2, so we basically divided the problem of size n into four sub-problems of size n/2. But that doesn’t help because solution of recurrence T(n) = 4T(n/2) + O(n) is O(n^2). The tricky part of this algorithm is to change the middle two terms to some other form so that only one extra multiplication would be sufficient. The following is tricky expression for middle two terms. "
},
{
"code": null,
"e": 26407,
"s": 26361,
"text": "XlYr + XrYl = (Xl + Xr)(Yl + Yr) - XlYl- XrYr"
},
{
"code": null,
"e": 26442,
"s": 26407,
"text": "So the final value of XY becomes "
},
{
"code": null,
"e": 26506,
"s": 26442,
"text": "XY = 2n XlYl + 2n/2 * [(Xl + Xr)(Yl + Yr) - XlYl - XrYr] + XrYr"
},
{
"code": null,
"e": 26614,
"s": 26506,
"text": "With above trick, the recurrence becomes T(n) = 3T(n/2) + O(n) and solution of this recurrence is O(n1.59)."
},
{
"code": null,
"e": 26893,
"s": 26614,
"text": "What if the lengths of input strings are different and are not even? To handle the different length case, we append 0’s in the beginning. To handle odd length, we put floor(n/2) bits in left half and ceil(n/2) bits in right half. So the expression for XY changes to following. "
},
{
"code": null,
"e": 26972,
"s": 26893,
"text": "XY = 22ceil(n/2) XlYl + 2ceil(n/2) * [(Xl + Xr)(Yl + Yr) - XlYl - XrYr] + XrYr"
},
{
"code": null,
"e": 27056,
"s": 26972,
"text": "The above algorithm is called Karatsuba algorithm and it can be used for any base. "
},
{
"code": null,
"e": 27108,
"s": 27056,
"text": "Following is C++ implementation of above algorithm."
},
{
"code": null,
"e": 27112,
"s": 27108,
"text": "C++"
},
{
"code": "// C++ implementation of Karatsuba algorithm for bit string multiplication.#include<iostream>#include<stdio.h> using namespace std; // FOLLOWING TWO FUNCTIONS ARE COPIED FROM http://goo.gl/q0OhZ// Helper method: given two unequal sized bit strings, converts them to// same length by adding leading 0s in the smaller string. Returns the// the new lengthint makeEqualLength(string &str1, string &str2){ int len1 = str1.size(); int len2 = str2.size(); if (len1 < len2) { for (int i = 0 ; i < len2 - len1 ; i++) str1 = '0' + str1; return len2; } else if (len1 > len2) { for (int i = 0 ; i < len1 - len2 ; i++) str2 = '0' + str2; } return len1; // If len1 >= len2} // The main function that adds two bit sequences and returns the additionstring addBitStrings( string first, string second ){ string result; // To store the sum bits // make the lengths same before adding int length = makeEqualLength(first, second); int carry = 0; // Initialize carry // Add all bits one by one for (int i = length-1 ; i >= 0 ; i--) { int firstBit = first.at(i) - '0'; int secondBit = second.at(i) - '0'; // boolean expression for sum of 3 bits int sum = (firstBit ^ secondBit ^ carry)+'0'; result = (char)sum + result; // boolean expression for 3-bit addition carry = (firstBit&secondBit) | (secondBit&carry) | (firstBit&carry); } // if overflow, then add a leading 1 if (carry) result = '1' + result; return result;} // A utility function to multiply single bits of strings a and bint multiplyiSingleBit(string a, string b){ return (a[0] - '0')*(b[0] - '0'); } // The main function that multiplies two bit strings X and Y and returns// result as long integerlong int multiply(string X, string Y){ // Find the maximum of lengths of x and Y and make length // of smaller string same as that of larger string int n = makeEqualLength(X, Y); // Base cases if (n == 0) return 0; if (n == 1) return multiplyiSingleBit(X, Y); int fh = n/2; // First half of string, floor(n/2) int sh = (n-fh); // Second half of string, 
ceil(n/2) // Find the first half and second half of first string. // Refer http://goo.gl/lLmgn for substr method string Xl = X.substr(0, fh); string Xr = X.substr(fh, sh); // Find the first half and second half of second string string Yl = Y.substr(0, fh); string Yr = Y.substr(fh, sh); // Recursively calculate the three products of inputs of size n/2 long int P1 = multiply(Xl, Yl); long int P2 = multiply(Xr, Yr); long int P3 = multiply(addBitStrings(Xl, Xr), addBitStrings(Yl, Yr)); // Combine the three products to get the final result. return P1*(1<<(2*sh)) + (P3 - P1 - P2)*(1<<sh) + P2;} // Driver program to test above functionsint main(){ printf (\"%ld\\n\", multiply(\"1100\", \"1010\")); printf (\"%ld\\n\", multiply(\"110\", \"1010\")); printf (\"%ld\\n\", multiply(\"11\", \"1010\")); printf (\"%ld\\n\", multiply(\"1\", \"1010\")); printf (\"%ld\\n\", multiply(\"0\", \"1010\")); printf (\"%ld\\n\", multiply(\"111\", \"111\")); printf (\"%ld\\n\", multiply(\"11\", \"11\"));}",
"e": 30301,
"s": 27112,
"text": null
},
{
"code": null,
"e": 30310,
"s": 30301,
"text": "Output: "
},
{
"code": null,
"e": 30330,
"s": 30310,
"text": "120\n60\n30\n10\n0\n49\n9"
},
{
"code": null,
"e": 30605,
"s": 30330,
"text": "Time Complexity: Time complexity of the above solution is O(nlog23) = O(n1.59).Time complexity of multiplication can be further improved using another Divide and Conquer algorithm, fast Fourier transform. We will soon be discussing fast Fourier transform as a separate post."
},
{
"code": null,
"e": 30765,
"s": 30605,
"text": "Exercise The above program returns a long int value and will not work for big strings. Extend the above program to return a string instead of a long int value."
},
{
"code": null,
"e": 31065,
"s": 30765,
"text": "SolutionMultiplication process for large numbers is an important problem in Computer Science. Given approach uses Divide and Conquer methodology. Run the code to see the time complexity comparison for normal Binary Multiplication and Karatsuba Algorithm. You can see the full code in this repository"
},
{
"code": null,
"e": 31076,
"s": 31065,
"text": "Examples: "
},
{
"code": null,
"e": 31283,
"s": 31076,
"text": "First Binary Input : 101001010101010010101001010100101010010101010010101 \nSecond Binary Input : 101001010101010010101001010100101010010101010010101\nDecimal Output : Not Representable \nOutput : 2.1148846e+30"
},
{
"code": null,
"e": 31372,
"s": 31283,
"text": "First Binary Input : 1011 \nSecond Binary Input : 1000\nDecimal Output : 88\nOutput : 5e-05"
},
{
"code": null,
"e": 31376,
"s": 31372,
"text": "C++"
},
{
"code": "#include <iostream>#include <ctime>#include <fstream>#include <string.h>#include <cmath>#include <sstream> using namespace std; // classical method classclass BinaryMultiplier{public: string MakeMultiplication(string,string); string MakeShifting(string,int); string addBinary(string,string); void BinaryStringToDecimal(string);}; // karatsuba method classclass Karatsuba{public: int lengthController(string &,string &); string addStrings(string,string); string multiply(string,string); string DecimalToBinary(long long int); string Subtraction(string,string); string MakeShifting(string,int);}; // this function get strings and go over str2 bit// if it sees 1 it calculates the shifted version according to position bit// Makes add operation for binary strings// returns result stringstring BinaryMultiplier::MakeMultiplication(string str1, string str2){ string allSum = \"\"; for (int j = 0 ; j<str2.length(); j++) { int secondDigit = str2[j] - '0'; if (secondDigit == 1) { string shifted = MakeShifting(str1,str2.size()-(j+1)); allSum = addBinary(shifted, allSum); } else { continue; } } return allSum;} // this function adds binary strings with carrystring BinaryMultiplier::addBinary(string a, string b){ string result = \"\"; int s = 0; int i = a.size() - 1; int j = b.size() - 1; while (i >= 0 || j >= 0 || s == 1) { s += ((i >= 0)? a[i] - '0': 0); s += ((j >= 0)? 
b[j] - '0': 0); result = char(s % 2 + '0') + result; s /= 2; i--; j--; } return result;} // this function shifts the given string according to given number// returns shifted versionstring BinaryMultiplier::MakeShifting(string str, int stepnum){ string shifted = str; for (int i = 0 ; i < stepnum ; i++) shifted = shifted + '0'; return shifted;} // this function converts Binary String Number to Decimal Number// After 32 bits it gives 0 because it overflows the size of intvoid BinaryMultiplier::BinaryStringToDecimal(string result){ cout<<\"Binary Result : \"<<result<<endl; unsigned long long int val = 0; for (int i = result.length()-1; i >= 0; i--) { if (result[i] == '1') { val += pow(2,(result.length()-1)-i); } } cout<<\"Decimal Result (Not proper for Large Binary Numbers):\" <<val<<endl;} // this function controls lengths of strings and make their lengths equal// returns the maximum lengthint Karatsuba::lengthController(string &str1, string &str2){ int len1 = str1.size(); int len2 = str2.size(); if (len1 < len2) { for (int i = 0 ; i < len2 - len1 ; i++) str1 = '0' + str1; return len2; } else if (len1 > len2) { for (int i = 0 ; i < len1 - len2 ; i++) str2 = '0' + str2; } return len1;} // this function add strings with carry// uses one by one bit addition methodology// returns result stringstring Karatsuba::addStrings(string first, string second){ string result; // To store the sum bits // make the lengths same before adding int length = lengthController(first, second); int carry = 0; // Initialize carry // Add all bits one by one for (int i = length-1 ; i >= 0 ; i--) { int firstBit = first.at(i) - '0'; int secondBit = second.at(i) - '0'; // boolean expression for sum of 3 bits int sum = (firstBit ^ secondBit ^ carry)+'0'; result = (char)sum + result; // Boolean expression for 3-bit addition carry = (firstBit&secondBit) | (secondBit&carry) | (firstBit&carry); } // if overflow, then add a leading 1 if (carry) { result = '1' + result; } return result;} // this function 
converts decimal number to binary stringstring Karatsuba::DecimalToBinary(long long int number){ string result = \"\"; if (number <= 0) { return \"0\"; } else { int i = 0; while (number > 0) { long long int num= number % 2; stringstream ss; ss<<num; result = ss.str() + result; number = number / 2; i++; } return result; }} // this function makes binary string subtraction with overflowstring Karatsuba::Subtraction(string lhs, string rhs){ int length = lengthController(lhs, rhs); int diff; string result; for (int i = length-1; i >= 0; i--) { diff = (lhs[i]-'0') - (rhs[i]-'0'); if (diff >= 0) { result = DecimalToBinary(diff) + result; } else { for (int j = i-1; j>=0; j--) { lhs[j] = ((lhs[j]-'0') - 1) % 10 + '0'; if (lhs[j] != '1') { break; } } result = DecimalToBinary(diff+2) + result; } } return result;} // this function makes shiftingstring Karatsuba::MakeShifting(string str, int stepnum){ string shifted = str; for (int i = 0 ; i < stepnum ; i++) shifted = shifted + '0'; return shifted;} // this function is the core of the Karatsuba// divides problem into 4 subproblems// recursively multiplies them// returns the result stringstring Karatsuba::multiply(string X, string Y){ int n = lengthController(X, Y); if (n == 1) return ((Y[0]-'0' == 1) && (X[0]-'0' == 1)) ? \"1\" : \"0\"; int fh = n/2; // First half of string, floor(n/2) int sh = (n-fh); // Second half of string, ceil(n/2) // Find the first half and second half of first string. 
string Xl = X.substr(0, fh); string Xr = X.substr(fh, sh); // Find the first half and second half of second string string Yl = Y.substr(0, fh); string Yr = Y.substr(fh, sh); // Recursively calculate the three products of inputs of size n/2 string P1 = multiply(Xl, Yl); string P2 = multiply(Xr, Yr); string P3 = multiply(addStrings(Xl, Xr), addStrings(Yl, Yr)); // return added string version return addStrings(addStrings(MakeShifting(P1, 2*(n-n/2)),P2),MakeShifting(Subtraction(P3,addStrings(P1,P2)), n-(n/2)));} int main(int argc, const char * argv[]){ // get the binary numbers as strings string firstNumber,secondNumber; cout<<\"Please give the First Binary number : \"; cin>>firstNumber; cout<<endl<<\"Please give the Second Binary number : \"; cin>>secondNumber; cout << endl; // make the initial lengths equal by adding zeros int len1 = firstNumber.size(); int len2 = secondNumber.size(); int general_len = firstNumber.size(); if (len1 < len2) { for (int i = 0 ; i < len2 - len1 ; i++) firstNumber = '0' + firstNumber; general_len = firstNumber.size(); } else if (len1 > len2) { for (int i = 0 ; i < len1 - len2 ; i++) secondNumber = '0' + secondNumber; general_len = secondNumber.size(); } // In classical methodology Binary String Multiplication cout<<\"Classical Algorithm : \"<<endl; BinaryMultiplier newobj; const clock_t classical_time = clock(); string classic = newobj.MakeMultiplication(firstNumber, secondNumber); cout << float( clock () - classical_time ) / CLOCKS_PER_SEC<<endl<<endl; float c_time = float( clock () - classical_time ) / CLOCKS_PER_SEC; newobj.BinaryStringToDecimal(classic); // Using Karatsuba Multiplication Algorithm Binary String Multiplication cout<<endl<<\"Karatsuba Algorithm : \"<<endl; Karatsuba obj; const clock_t karatsuba_time = clock(); string karatsuba = obj.multiply(firstNumber, secondNumber); cout << float( clock () - karatsuba_time ) / CLOCKS_PER_SEC<<endl<<endl; float k_time = float( clock () - classical_time ) / CLOCKS_PER_SEC; 
newobj.BinaryStringToDecimal(karatsuba); return 0;}",
"e": 39392,
"s": 31376,
"text": null
},
{
"code": null,
"e": 39456,
"s": 39392,
"text": "Related Article : Multiply Large Numbers Represented as Strings"
},
{
"code": null,
"e": 39847,
"s": 39456,
"text": "References: Wikipedia page for Karatsuba algorithm Algorithms 1st Edition by Sanjoy Dasgupta, Christos Papadimitriou and Umesh Vazirani http://courses.csail.mit.edu/6.006/spring11/exams/notes3-karatsuba http://www.cc.gatech.edu/~ninamf/Algos11/lectures/lect0131.pdfPlease write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 39863,
"s": 39847,
"text": "harshitSingh_11"
},
{
"code": null,
"e": 39873,
"s": 39863,
"text": "emreuysal"
},
{
"code": null,
"e": 39889,
"s": 39873,
"text": "ShayekhBinIslam"
},
{
"code": null,
"e": 39906,
"s": 39889,
"text": "arorakashish0911"
},
{
"code": null,
"e": 39919,
"s": 39906,
"text": "simmytarika5"
},
{
"code": null,
"e": 39938,
"s": 39919,
"text": "Divide and Conquer"
},
{
"code": null,
"e": 39946,
"s": 39938,
"text": "Strings"
},
{
"code": null,
"e": 39954,
"s": 39946,
"text": "Strings"
},
{
"code": null,
"e": 39973,
"s": 39954,
"text": "Divide and Conquer"
},
{
"code": null,
"e": 40071,
"s": 39973,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 40080,
"s": 40071,
"text": "Comments"
},
{
"code": null,
"e": 40093,
"s": 40080,
"text": "Old Comments"
},
{
"code": null,
"e": 40120,
"s": 40093,
"text": "Program for Tower of Hanoi"
},
{
"code": null,
"e": 40164,
"s": 40120,
"text": "Divide and Conquer Algorithm | Introduction"
},
{
"code": null,
"e": 40202,
"s": 40164,
"text": "Write a program to calculate pow(x,n)"
},
{
"code": null,
"e": 40263,
"s": 40202,
"text": "Count number of occurrences (or frequency) in a sorted array"
},
{
"code": null,
"e": 40288,
"s": 40263,
"text": "Quick Sort vs Merge Sort"
},
{
"code": null,
"e": 40313,
"s": 40288,
"text": "Reverse a string in Java"
},
{
"code": null,
"e": 40359,
"s": 40313,
"text": "Write a program to reverse an array or string"
},
{
"code": null,
"e": 40393,
"s": 40359,
"text": "Longest Common Subsequence | DP-4"
},
{
"code": null,
"e": 40453,
"s": 40393,
"text": "Write a program to print all permutations of a given string"
}
] |
Spring Boot - CrudRepository with Example - GeeksforGeeks
|
22 Dec, 2021
Spring Boot is built on top of Spring and includes all of Spring's features. It has become a favorite of developers because its production-ready environment lets them focus directly on business logic instead of struggling with configuration and setup. Spring Boot is a microservice-based framework, and building a production-ready application with it takes very little time. Following are some of the features of Spring Boot:
It avoids the heavy XML configuration that plain Spring requires
It provides easy maintenance and creation of REST endpoints
It includes an embedded Tomcat server
Deployment is easy: WAR and JAR files can be deployed directly to the Tomcat server
For more information please refer to this article: Introduction to Spring Boot. In this article, we are going to discuss how to use CrudRepository to manage data in a Spring Boot application.
Spring Data provides an interface named CrudRepository that contains methods for CRUD operations, giving a repository generic create, read, update, and delete functionality. It is defined in the package org.springframework.data.repository and extends the Spring Data Repository interface. To use CrudRepository in a Spring Boot application, create an interface that extends CrudRepository.
Syntax:
public interface CrudRepository<T, ID> extends Repository<T, ID>
Where:
T: Domain type that repository manages (Generally the Entity/Model class name)
ID: Type of the id of the entity that repository manages (Generally the wrapper class of your @Id that is created inside the Entity/Model class)
Illustration:
public interface DepartmentRepository extends CrudRepository<Department, Long> {}
Now let us discuss some of the most important methods available in CrudRepository:
Method 1: save(): Saves a given entity. Use the returned instance for further operations as the save operation might have changed the entity instance completely.
Syntax:
<S extends T> S save(S entity)
Parameters: entity – must not be null.
Returns: the saved entity; will never be null.
Throws: IllegalArgumentException – in case the given entity is null.
Method 2: findById(): Retrieves an entity by its id.
Syntax:
Optional<T> findById(ID id)
Parameters: id – must not be null.
Returns: the entity with the given id or Optional#empty() if none found.
Exception Thrown: IllegalArgumentException is thrown if the ‘id’ is null.
Method 3: findAll(): Returns all instances of the type.
Syntax:
Iterable<T> findAll()
Return Type: All entities
Method 4: count(): Returns the number of entities available.
Syntax:
long count()
Return Type: the number of entities.
Method 5: deleteById(): Deletes the entity with the given id.
Syntax:
void deleteById(ID id)
Parameters: Id (must not be null)
Exception Thrown: IllegalArgumentException in case the given id is null.
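To make the five contracts above concrete without a database, here is a hypothetical in-memory stand-in that mirrors their semantics. This is only an illustration: in a real application Spring Data generates the implementation at runtime, and the class and field names below are invented for the sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// Minimal in-memory illustration of the CrudRepository contracts described above.
class InMemoryDepartmentRepo {
    private final Map<Long, String> store = new LinkedHashMap<>();
    private long nextId = 1;

    // save(): persists the entity and returns its generated id
    long save(String departmentName) {
        long id = nextId++;
        store.put(id, departmentName);
        return id;
    }

    // findById(): returns Optional.empty() when no entity has that id
    Optional<String> findById(Long id) {
        return Optional.ofNullable(store.get(id));
    }

    // findAll(): returns all stored entities
    Iterable<String> findAll() {
        return store.values();
    }

    // count(): number of entities currently stored
    long count() {
        return store.size();
    }

    // deleteById(): removes the entity with the given id
    void deleteById(Long id) {
        store.remove(id);
    }
}
```

A typical call sequence — save, look up, count, delete — behaves exactly as the method descriptions above specify, which is the point of the sketch.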
Example
The following Spring Boot application manages a Department entity with CrudRepository. The data is saved in the H2 database. We use a RESTful controller.
Step 1: Refer to this article How to Create a Spring Boot Project with IntelliJ IDEA and create a Spring Boot project.
Step 2: Add the following dependency
Spring Web
H2 Database
Lombok
Spring Data JPA
Below is the complete code for the pom.xml file. Please check if you have missed something.
XML
<?xml version="1.0" encoding="UTF-8"?><project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.5.5</version> <relativePath/> <!-- lookup parent from repository --> </parent> <groupId>com.amiya</groupId> <artifactId>Spring-Boot-Demo-Project</artifactId> <version>1.0.0-SNAPSHOT</version> <name>Spring-Boot-Demo-Project</name> <description>Demo project for Spring Boot</description> <properties> <java.version>11</java.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> <scope>runtime</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-devtools</artifactId> <scope>runtime</scope> <optional>true</optional> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <configuration> <excludes> <exclude> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> </exclude> </excludes> </configuration> </plugin> </plugins> </build> </project>
Step 3: Create 4 packages and create some classes and interfaces inside these packages as seen in the below image
entity
repository
service
controller
Note:
Green Rounded Icon ‘I’ Buttons are Interface
Blue Rounded Icon ‘C’ Buttons are Classes
Step 4: Inside the entity package
Create a simple POJO class inside the Department.java file. Below is the code for the Department.java file
Java
package com.amiya.springbootdemoproject.entity; import lombok.AllArgsConstructor;import lombok.Builder;import lombok.Data;import lombok.NoArgsConstructor; import javax.persistence.Entity;import javax.persistence.GeneratedValue;import javax.persistence.GenerationType;import javax.persistence.Id; @Entity@Data@NoArgsConstructor@AllArgsConstructor@Builderpublic class Department { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long departmentId; private String departmentName; private String departmentAddress; private String departmentCode;}
Step 5: Inside the repository package
Create a simple interface named DepartmentRepository. This interface extends CrudRepository, as discussed above.
Java
// Java Program to Illustrate DepartmentRepository.java File // Importing package module to this codepackage com.amiya.springbootdemoproject.repository;// Importing required classesimport com.amiya.springbootdemoproject.entity.Department;import org.springframework.data.repository.CrudRepository;import org.springframework.stereotype.Repository; // Annotation@Repository // Classpublic interface DepartmentRepository extends CrudRepository<Department, Long> {}
Step 6: Inside the service package
Inside the package create one interface named DepartmentService and one class named DepartmentServiceImpl. Below is the code for the DepartmentService.java file.
Example 1-A
Java
package com.amiya.springbootdemoproject.service; import com.amiya.springbootdemoproject.entity.Department; import java.util.List; public interface DepartmentService { // save operation Department saveDepartment(Department department); // read operation List<Department> fetchDepartmentList(); // update operation Department updateDepartment(Department department, Long departmentId); // delete operation void deleteDepartmentById(Long departmentId);}
Example 1-B
Java
// Below is the code for the DepartmentServiceImpl.java file.package com.amiya.springbootdemoproject.service; import com.amiya.springbootdemoproject.entity.Department;import com.amiya.springbootdemoproject.repository.DepartmentRepository;import org.springframework.beans.factory.annotation.Autowired;import org.springframework.stereotype.Service; import java.util.List;import java.util.Objects; @Servicepublic class DepartmentServiceImpl implements DepartmentService{ @Autowired private DepartmentRepository departmentRepository; // save operation @Override public Department saveDepartment(Department department) { return departmentRepository.save(department); } // read operation @Override public List<Department> fetchDepartmentList() { return (List<Department>) departmentRepository.findAll(); } // update operation @Override public Department updateDepartment(Department department, Long departmentId) { Department depDB = departmentRepository.findById(departmentId).get(); if (Objects.nonNull(department.getDepartmentName()) && !"".equalsIgnoreCase(department.getDepartmentName())) { depDB.setDepartmentName(department.getDepartmentName()); } if (Objects.nonNull(department.getDepartmentAddress()) && !"".equalsIgnoreCase(department.getDepartmentAddress())) { depDB.setDepartmentAddress(department.getDepartmentAddress()); } if (Objects.nonNull(department.getDepartmentCode()) && !"".equalsIgnoreCase(department.getDepartmentCode())) { depDB.setDepartmentCode(department.getDepartmentCode()); } return departmentRepository.save(depDB); } // delete operation @Override public void deleteDepartmentById(Long departmentId) { departmentRepository.deleteById(departmentId); } }
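The null-and-blank guard used for each field in updateDepartment above can be isolated as a plain helper. This is a hypothetical sketch (the method name `merge` is invented); it reproduces the same rule: take the requested value only when it is non-null and not an empty string, otherwise keep the stored value:

```java
import java.util.Objects;

// Standalone sketch of the partial-update rule from DepartmentServiceImpl:
// a request field overwrites the stored field only when it carries a value.
class PartialUpdate {
    static String merge(String requested, String existing) {
        if (Objects.nonNull(requested) && !"".equalsIgnoreCase(requested)) {
            return requested;   // overwrite with the requested value
        }
        return existing;        // keep the value already in the database
    }
}
```

This is what lets a PUT request update only the fields it supplies, leaving the others untouched.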
Step 7: Inside the controller package
Inside the package create one class named DepartmentController.
Java
// Java Program to Illustrate DepartmentController File // Importing package modulepackage com.amiya.springbootdemoproject.controller;// Importing required classesimport com.amiya.springbootdemoproject.entity.Department;import com.amiya.springbootdemoproject.service.DepartmentService;import java.util.List;import javax.validation.Valid;import org.springframework.beans.factory.annotation.Autowired;import org.springframework.web.bind.annotation.*; // Annotation@RestController // Classpublic class DepartmentController { // Annotation @Autowired private DepartmentService departmentService; // Save operation @PostMapping("/departments") public Department saveDepartment( @Valid @RequestBody Department department) { return departmentService.saveDepartment(department); } // Read operation @GetMapping("/departments") public List<Department> fetchDepartmentList() { return departmentService.fetchDepartmentList(); } // Update operation @PutMapping("/departments/{id}") public Department updateDepartment(@RequestBody Department department, @PathVariable("id") Long departmentId) { return departmentService.updateDepartment( department, departmentId); } // Delete operation @DeleteMapping("/departments/{id}") public String deleteDepartmentById(@PathVariable("id") Long departmentId) { departmentService.deleteDepartmentById( departmentId); return "Deleted Successfully"; }}
Step 8: Below is the code for the application.properties file
server.port = 8082
# H2 Database
spring.h2.console.enabled=true
spring.datasource.url=jdbc:h2:mem:dcbapp
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=password
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
Now run your application and let’s test the endpoints in Postman and also refer to our H2 Database.
Endpoint 1: POST – http://localhost:8082/departments/
Endpoint 2: GET – http://localhost:8082/departments/
Endpoint 3: PUT – http://localhost:8082/departments/1
Endpoint 4: DELETE – http://localhost:8082/departments/1
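For the POST and PUT calls, the request body mirrors the Department entity (departmentId is auto-generated, so it can be omitted; the values below are illustrative):

```json
{
  "departmentName": "IT",
  "departmentAddress": "Bangalore",
  "departmentCode": "IT-06"
}
```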
The H2 database is as depicted in the media below.
Java-Spring-Boot
Java
|
How do we create multiline comments in Python?
|
A comment is a piece of text in a computer program that is meant to be a programmer-readable explanation or annotation in the source code, and is ignored by the compiler/interpreter. In Python scripts, the symbol # indicates the start of a comment line.
C-like block comments (/* .. */) are not available in Python. If more than one consecutive line is to be commented, a # symbol must be put at the beginning of each line:
##comment1
##comment2
##comment3
print ("Hello World")
A triple quoted multi-line string is also treated as comment if it is not a docstring of a function or class.
'''
comment1
comment2
comment3
'''
print ("Hello World")
|
How to compare float and double in C++?
|
How you compare float and double variables depends on your end goal. If you want a workable solution without going into too much detail, and you can tolerate some inaccuracy in the comparison, you can use the following function −
#include<iostream>
#include<cmath>
using namespace std;

// Define the error that you can tolerate
#define EPSILON 0.000001

bool areSame(double a, double b) {
   return fabs(a - b) < EPSILON;
}

int main() {
   double a = 1.005;
   double b = 1.006;
   cout << areSame(a, a) << "\n";
   cout << areSame(a, b) << "\n";
}
This will give the output −
1
0
This function takes your tolerance for error and checks whether the difference between the numbers you're comparing is within that threshold. If you need something much more accurate, you're better off reading this excellent blog post: https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
|
Estimating Uncertainty in Machine Learning Models — Part 3 | by Gideon Mendels | Towards Data Science
|
Check out part 1 (here) and part 2 (here) of this series
Author: Dhruv Nair, data scientist, Comet.ml
In the last part of our series on uncertainty estimation, we addressed the limitations of approaches like bootstrapping for large models, and demonstrated how we might estimate uncertainty in the predictions of a neural network using MC Dropout.
So far, the approaches we looked at involved creating variations in the dataset, or the model parameters to estimate uncertainty. The main drawback here is that it requires us to either train multiple models, or make multiple predictions in order to figure out the variance in our model’s predictions.
In situations with latency constraints, techniques such as MC Dropout might not be appropriate for estimating a prediction interval. What can we do to reduce the number of predictions we need to estimate the interval?
In part 1 of this series, we made an assumption that the mean response of our dependent variable, μ(y|x), is normally distributed.
The MLE method involves building two models, one to estimate the conditional mean response, μ(y|x), and another to estimate the variance σ² in the predicted response.
We do this by first, splitting our training data into two halves. The first half model, mμ is trained as a regular regression model, using the first half of the data. This model is then used to make predictions on the second half of the data.
The second model, mσ², is trained using the second half of the data, with the squared residuals of mμ as the dependent variable.
The final prediction interval can be expressed in the following way
Here α is the desired level of confidence according to the Gaussian Distribution.
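Once both models are trained, turning their outputs into an interval is a one-liner. A plain-NumPy sketch (mu_hat and var_hat are illustrative stand-ins for the two models' predictions; z = 1.96 corresponds to a 95% Gaussian interval):

```python
import numpy as np

# Hypothetical outputs of the mean model and the variance model
mu_hat = np.array([20.0, 25.0, 30.0])   # predicted mean MPG
var_hat = np.array([4.0, 1.0, 9.0])     # predicted variance of the residuals

z = 1.96  # Gaussian critical value for a 95% interval
lower = mu_hat - z * np.sqrt(var_hat)
upper = mu_hat + z * np.sqrt(var_hat)
print(lower)  # [16.08 23.04 24.12]
print(upper)  # [23.92 26.96 35.88]
```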
We’re going to be using the Auto MPG dataset again. Notice how the training data is split again in the last step.
Mean Variance Estimation Method

dataset_path = keras.utils.get_file("auto-mpg.data",
    "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")

column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
                'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
                          na_values="?", comment='\t',
                          sep=" ", skipinitialspace=True)

dataset = raw_dataset.copy()
dataset = dataset.dropna()

origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0

train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)

mean_dataset = train_dataset.sample(frac=0.5, random_state=0)
var_dataset = train_dataset.drop(mean_dataset.index)
Next, we’re going to create two models to estimate the mean and variance in our data
import keras
from keras.models import Model
from keras.layers import Input, Dense, Dropout

dropout_rate = 0.5

def model_fn():
    inputs = Input(shape=(9,))
    x = Dense(64, activation='relu')(inputs)
    x = Dropout(dropout_rate)(x)
    x = Dense(64, activation='relu')(x)
    x = Dropout(dropout_rate)(x)
    outputs = Dense(1)(x)
    model = Model(inputs, outputs)
    return model

mean_model = model_fn()
mean_model.compile(loss="mean_squared_error", optimizer='adam')

var_model = model_fn()
var_model.compile(loss="mean_squared_error", optimizer='adam')
Finally, we’re going to normalize our data, and start training
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats.transpose()

def norm(x):
    return (x - train_stats.loc['mean']) / train_stats.loc['std']

normed_train_data = norm(train_dataset)
normed_mean_data = norm(mean_dataset)
normed_var_data = norm(var_dataset)
normed_test_data = norm(test_dataset)

train_labels = train_dataset.pop('MPG')
mean_labels = mean_dataset.pop('MPG')
var_labels = var_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
Once the mean model has been trained, we can use it to make predictions on the second half of our dataset and compute the squared residuals.
EPOCHS = 100

mean_model.fit(normed_mean_data, mean_labels, epochs=EPOCHS,
               validation_split=0.2, verbose=0)

mean_predictions = mean_model.predict(normed_var_data)
squared_residuals = (var_labels.values.reshape(-1, 1) - mean_predictions) ** 2

var_model.fit(normed_var_data, squared_residuals, epochs=EPOCHS,
              validation_split=0.2, verbose=0)
Let’s take a look at the intervals produced by this approach.
You will notice that the highly inaccurate predictions have much larger intervals around the mean.
What if we do not want to make assumptions about the distribution of our response variable, and want to directly estimate the upper and lower limit of our target variable?
A quantile loss can help us estimate a target percentile response instead of a mean response. For example, predicting the 0.25 quantile of our target tells us that, given our current set of features, we expect 25% of the target values to be equal to or less than our prediction.
If we train two separate regression models, one for the 0.025 quantile and another for the 0.975 quantile, we are effectively saying that we expect 95% of our target values to fall within this interval, i.e., a 95% prediction interval.

Quantile Regression Loss Function

Keras does not come with a default quantile loss, so we're going to use the following implementation from Sachin Abeywardana:

import keras.backend as K

def tilted_loss(q, y, f):
    e = (y - f)
    return K.mean(K.maximum(q*e, (q-1)*e), axis=-1)

model = model_fn()
model.compile(loss=lambda y, f: tilted_loss(0.5, y, f), optimizer='adam')

lowerq_model = model_fn()
lowerq_model.compile(loss=lambda y, f: tilted_loss(0.025, y, f), optimizer='adam')

upperq_model = model_fn()
upperq_model.compile(loss=lambda y, f: tilted_loss(0.975, y, f), optimizer='adam')
The resulting predictions look like this
One of the disadvantages of this approach is that it tends to produce very wide intervals. You will also notice that the intervals are not symmetric about the median estimated values (blue dots).
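The tilted (pinball) loss used above is easy to sanity-check outside Keras. A plain-NumPy sketch with illustrative values:

```python
import numpy as np

def tilted_loss_np(q, y, f):
    # Under-prediction (y > f) is penalized with weight q,
    # over-prediction (y < f) with weight (1 - q).
    e = y - f
    return np.mean(np.maximum(q * e, (q - 1) * e))

y = np.array([10.0, 20.0, 30.0])
f = np.array([12.0, 18.0, 30.0])
# With q = 0.9 the under-prediction (20 vs 18) dominates the loss
print(tilted_loss_np(0.9, y, f))  # ~0.667
```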
In the last post, we introduced two metrics to assess the quality of our interval predictions, PICP, and MPIW. The table below compares these metrics across the last three approaches we have used to estimate uncertainty in a Neural Network.
Comparison of techniques to estimate uncertainty in Neural Networks
We see that the Mean-Variance estimation method produces the intervals with the smallest width, which results in a reduction of its PICP score. MC Dropout, and Quantile Regression produce very wide intervals, leading to a perfect PICP score.
Balancing between MPIW and PICP is an open ended question, and completely dependent on how the model is being applied. Ideally, we would like our intervals to be as tight as possible, with a low mean width, and also includes our target values the majority of time.
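Both metrics are straightforward to compute from an array of targets and interval bounds. A plain-NumPy sketch (values illustrative):

```python
import numpy as np

def picp(y, lower, upper):
    # Prediction Interval Coverage Probability:
    # fraction of targets that fall inside their interval
    return np.mean((y >= lower) & (y <= upper))

def mpiw(lower, upper):
    # Mean Prediction Interval Width
    return np.mean(upper - lower)

y     = np.array([20.0, 25.0, 30.0])
lower = np.array([16.0, 24.0, 31.0])
upper = np.array([24.0, 26.0, 35.0])
print(picp(y, lower, upper))  # ~0.667 (the third target falls below its interval)
print(mpiw(lower, upper))     # ~4.667
```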
These techniques can readily be implemented on top of your existing models with very few changes, and providing uncertainty estimates makes your predictions significantly more trustworthy.
I hope you enjoyed our series on uncertainty. Keep watching this space for more great content!!
|
[
{
"code": null,
"e": 228,
"s": 172,
"text": "Check out part 1 (here)and part 2 (here) of this series"
},
{
"code": null,
"e": 273,
"s": 228,
"text": "Author: Dhruv Nair, data scientist, Comet.ml"
},
{
"code": null,
"e": 519,
"s": 273,
"text": "In the last part of our series on uncertainty estimation, we addressed the limitations of approaches like bootstrapping for large models, and demonstrated how we might estimate uncertainty in the predictions of a neural network using MC Dropout."
},
{
"code": null,
"e": 821,
"s": 519,
"text": "So far, the approaches we looked at involved creating variations in the dataset, or the model parameters to estimate uncertainty. The main drawback here is that it requires us to either train multiple models, or make multiple predictions in order to figure out the variance in our model’s predictions."
},
{
"code": null,
"e": 1039,
"s": 821,
"text": "In situations with latency constraints, techniques such as MC Dropout might not be appropriate for estimating a prediction interval. What can we do to reduce the number of predictions we need to estimate the interval?"
},
{
"code": null,
"e": 1170,
"s": 1039,
"text": "In part 1 of this series, we made an assumption that the mean response of our dependent variable, μ(y|x), is normally distributed."
},
{
"code": null,
"e": 1339,
"s": 1170,
"text": "The MLE method involves building two models, one to estimate the conditional mean response, μ(y|x) , and another to estimate the variance, σ2 in the predicted response."
},
{
"code": null,
"e": 1582,
"s": 1339,
"text": "We do this by first, splitting our training data into two halves. The first half model, mμ is trained as a regular regression model, using the first half of the data. This model is then used to make predictions on the second half of the data."
},
{
"code": null,
"e": 1709,
"s": 1582,
"text": "The second model, mσ2 is trained using the second half of the data, and the squared residuals of mμ as the dependent variable."
},
{
"code": null,
"e": 1777,
"s": 1709,
"text": "The final prediction interval can be expressed in the following way"
},
{
"code": null,
"e": 1859,
"s": 1777,
"text": "Here α is the desired level of confidence according to the Gaussian Distribution."
},
{
"code": null,
"e": 1973,
"s": 1859,
"text": "We’re going to be using the Auto MPG dataset again. Notice how the training data is split again in the last step."
},
{
"code": null,
"e": 2838,
"s": 1973,
"text": "Mean Variance Estimation Methoddataset_path = keras.utils.get_file(\"auto-mpg.data\", \"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data\")column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight', 'Acceleration', 'Model Year', 'Origin']raw_dataset = pd.read_csv(dataset_path, names=column_names, na_values = \"?\", comment='\\t', sep=\" \", skipinitialspace=True)dataset = raw_dataset.copy()dataset = dataset.dropna()origin = dataset.pop('Origin')dataset['USA'] = (origin == 1)*1.0dataset['Europe'] = (origin == 2)*1.0dataset['Japan'] = (origin == 3)*1.0train_dataset = dataset.sample(frac=0.8,random_state=0)test_dataset = dataset.drop(train_dataset.index)mean_dataset = train_dataset.sample(frac=0.5 , random_state=0)var_dataset = train_dataset.drop(mean_dataset.index)"
},
{
"code": null,
"e": 2923,
"s": 2838,
"text": "Next, we’re going to create two models to estimate the mean and variance in our data"
},
{
"code": null,
"e": 3475,
"s": 2923,
"text": "import kerasfrom keras.models import Modelfrom keras.layers import Input, Dense, Dropoutdropout_rate = 0.5def model_fn(): inputs = Input(shape=(9,)) x = Dense(64, activation='relu')(inputs) x = Dropout(dropout_rate)(x) x = Dense(64, activation='relu')(x) x = Dropout(dropout_rate)(x) outputs = Dense(1)(x) model = Model(inputs, outputs) return modelmean_model = model_fn()mean_model.compile(loss=\"mean_squared_error\", optimizer='adam')var_model = model_fn()var_model.compile(loss=\"mean_squared_error\", optimizer='adam')"
},
{
"code": null,
"e": 3538,
"s": 3475,
"text": "Finally, we’re going to normalize our data, and start training"
},
{
"code": null,
"e": 3994,
"s": 3538,
"text": "train_stats = train_dataset.describe()train_stats.pop(\"MPG\")train_stats.transpose()def norm(x): return (x - train_stats.loc['mean'])/ train_stats.loc['std']normed_train_data = norm(train_dataset)normed_mean_data = norm(mean_dataset)normed_var_data = norm(var_dataset)normed_test_data = norm(test_dataset)train_labels = train_dataset.pop('MPG')mean_labels = mean_dataset.pop('MPG')var_labels = var_dataset.pop('MPG')test_labels = test_dataset.pop('MPG')"
},
{
"code": null,
"e": 4135,
"s": 3994,
"text": "Once the mean model has been trained, we can use it to make predictions on the second half of our dataset and compute the squared residuals."
},
{
"code": null,
"e": 4469,
"s": 4135,
"text": "EPOCHS = 100mean_model.fit(normed_mean_data, mean_labels, epochs=EPOCHS, validation_split=0.2, verbose=0)mean_predictions = mean_model.predict(normed_var_data)squared_residuals = (var_labels.values.reshape(-1,1) - mean_predictions) ** 2var_model.fit(normed_var_data, squared_residuals, epochs=EPOCHS, validation_split=0.2, verbose=0)"
},
{
"code": null,
"e": 4531,
"s": 4469,
"text": "Let’s take a look at the intervals produced by this approach."
},
{
"code": null,
"e": 4630,
"s": 4531,
"text": "You will notice that the highly inaccurate predictions have much larger intervals around the mean."
},
{
"code": null,
"e": 4802,
"s": 4630,
"text": "What if we do not want to make assumptions about the distribution of our response variable, and want to directly estimate the upper and lower limit of our target variable?"
},
{
"code": null,
"e": 5086,
"s": 4802,
"text": "A quantile loss can help us estimate a target percentile response instead of a mean response. For example, predicting the 0.25 quantile of our target tells us that, given our current set of features, we expect 25% of the target values to be equal to or less than our prediction."
},
{
"code": null,
"e": 5324,
"s": 5086,
"text": "If we train two separate regression models, one for the 0.025 quantile and another for the 0.975 quantile, we are effectively saying that we expect 95% of our target values to fall within this interval, i.e. a 95% prediction interval"
},
{
"code": null,
"e": 5358,
"s": 5324,
"text": "Quantile Regression Loss Function"
},
{
"code": null,
"e": 5484,
"s": 5358,
"text": "Keras does not come with a default quantile loss, so we’re going to use the following implementation from Sachin Abeywardana"
},
{
"code": null,
"e": 5894,
"s": 5484,
"text": "import keras.backend as Kdef tilted_loss(q,y,f): e = (y-f) return K.mean(K.maximum(q*e, (q-1)*e), axis=-1)model = model_fn()model.compile(loss=lambda y,f: tilted_loss(0.5,y,f), optimizer='adam')lowerq_model = model_fn()lowerq_model.compile(loss=lambda y,f: tilted_loss(0.025,y,f), optimizer='adam')upperq_model = model_fn()upperq_model.compile(loss=lambda y,f: tilted_loss(0.975,y,f), optimizer='adam')"
},
{
"code": null,
"e": 5935,
"s": 5894,
"text": "The resulting predictions look like this"
},
{
"code": null,
"e": 6131,
"s": 5935,
"text": "One of the disadvantages of this approach is that it tends to produce very wide intervals. You will also notice that the intervals are not symmetric about the median estimated values (blue dots)."
},
{
"code": null,
"e": 6372,
"s": 6131,
"text": "In the last post, we introduced two metrics to assess the quality of our interval predictions, PICP, and MPIW. The table below compares these metrics across the last three approaches we have used to estimate uncertainty in a Neural Network."
},
{
"code": null,
"e": 6440,
"s": 6372,
"text": "Comparison of techniques to estimate uncertainty in Neural Networks"
},
{
"code": null,
"e": 6682,
"s": 6440,
"text": "We see that the Mean-Variance estimation method produces the intervals with the smallest width, which results in a reduction of its PICP score. MC Dropout, and Quantile Regression produce very wide intervals, leading to a perfect PICP score."
},
{
"code": null,
"e": 6947,
"s": 6682,
"text": "Balancing between MPIW and PICP is an open-ended question, and completely dependent on how the model is being applied. Ideally, we would like our intervals to be as tight as possible, with a low mean width, while still containing our target values the majority of the time."
},
{
"code": null,
"e": 7145,
"s": 6947,
"text": "These techniques can readily be implemented on top of your existing models with very few changes, and providing uncertainty estimates to your predictions, makes them significantly more trustworthy."
}
] |
Python String isupper() Method
|
Python string method isupper() checks whether all the case-based characters (letters) of the string are uppercase.
Following is the syntax for isupper() method −
str.isupper()
NA
NA
This method returns true if all cased characters in the string are uppercase and there is at least one cased character, false otherwise.
The following example shows the usage of isupper() method.
#!/usr/bin/python
str = "THIS IS STRING EXAMPLE....WOW!!!";
print str.isupper()
str = "THIS is string example....wow!!!";
print str.isupper()
When we run above program, it produces following result −
True
False
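Note the "at least one cased character" clause: a string containing no letters at all is not considered uppercase, even though nothing in it is lowercase. A short sketch of the edge cases:

```python
# Strings with no cased characters return False
print("12345".isupper())    # False - digits only, no letters
print("ABC123".isupper())   # True - digits are ignored, letters are uppercase
print("".isupper())         # False - empty string has no cased characters
```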
|
[
{
"code": null,
"e": 2360,
"s": 2244,
"text": "Python string method isupper() checks whether all the case-based characters (letters) of the string are uppercase."
},
{
"code": null,
"e": 2407,
"s": 2360,
"text": "Following is the syntax for isupper() method −"
},
{
"code": null,
"e": 2422,
"s": 2407,
"text": "str.isupper()\n"
},
{
"code": null,
"e": 2425,
"s": 2422,
"text": "NA"
},
{
"code": null,
"e": 2428,
"s": 2425,
"text": "NA"
},
{
"code": null,
"e": 2565,
"s": 2428,
"text": "This method returns true if all cased characters in the string are uppercase and there is at least one cased character, false otherwise."
},
{
"code": null,
"e": 2624,
"s": 2565,
"text": "The following example shows the usage of isupper() method."
},
{
"code": null,
"e": 2769,
"s": 2624,
"text": "#!/usr/bin/python\n\nstr = \"THIS IS STRING EXAMPLE....WOW!!!\"; \nprint str.isupper()\n\nstr = \"THIS is string example....wow!!!\";\nprint str.isupper()"
},
{
"code": null,
"e": 2827,
"s": 2769,
"text": "When we run above program, it produces following result −"
},
{
"code": null,
"e": 2839,
"s": 2827,
"text": "True\nFalse\n"
}
] |
NumPy - Quick Guide
|
NumPy is a Python package. It stands for 'Numerical Python'. It is a library consisting of multidimensional array objects and a collection of routines for processing arrays.
Numeric, the ancestor of NumPy, was developed by Jim Hugunin. Another package Numarray was also developed, having some additional functionalities. In 2005, Travis Oliphant created NumPy package by incorporating the features of Numarray into Numeric package. There are many contributors to this open source project.
Using NumPy, a developer can perform the following operations −
Mathematical and logical operations on arrays.
Fourier transforms and routines for shape manipulation.
Operations related to linear algebra. NumPy has in-built functions for linear algebra and random number generation.
NumPy is often used along with packages like SciPy (Scientific Python) and Matplotlib (plotting library). This combination is widely used as a replacement for MatLab, a popular platform for technical computing. However, this Python alternative to MatLab is now seen as a more modern and complete programming language.
It is open source, which is an added advantage of NumPy.
Standard Python distribution doesn't come bundled with NumPy module. A lightweight alternative is to install NumPy using popular Python package installer, pip.
pip install numpy
The best way to enable NumPy is to use an installable binary package specific to your operating system. These binaries contain full SciPy stack (inclusive of NumPy, SciPy, matplotlib, IPython, SymPy and nose packages along with core Python).
Anaconda (from https://www.continuum.io) is a free Python distribution for SciPy stack. It is also available for Linux and Mac.
Canopy (https://www.enthought.com/products/canopy/) is available as free as well as commercial distribution with full SciPy stack for Windows, Linux and Mac.
Python (x,y): It is a free Python distribution with SciPy stack and Spyder IDE for Windows OS. (Downloadable from https://www.python-xy.github.io/)
Package managers of respective Linux distributions are used to install one or more packages in SciPy stack.
sudo apt-get install python-numpy python-scipy python-matplotlib ipython
ipython-notebook python-pandas python-sympy python-nose
sudo yum install numpy scipy python-matplotlib ipython
python-pandas sympy python-nose atlas-devel
Core Python (2.6.x, 2.7.x and 3.2.x onwards) must be installed with distutils and zlib module should be enabled.
GNU gcc (4.2 and above) C compiler must be available.
To install NumPy, run the following command.
Python setup.py install
To test whether NumPy module is properly installed, try to import it from Python prompt.
import numpy
If it is not installed, the following error message will be displayed.
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import numpy
ImportError: No module named 'numpy'
Alternatively, NumPy package is imported using the following syntax −
import numpy as np
The most important object defined in NumPy is an N-dimensional array type called ndarray. It describes the collection of items of the same type. Items in the collection can be accessed using a zero-based index.
Every item in an ndarray takes the same size of block in the memory. Each element in ndarray is an object of data-type object (called dtype).
Any item extracted from ndarray object (by slicing) is represented by a Python object of one of array scalar types. The following diagram shows a relationship between ndarray, data type object (dtype) and array scalar type −
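The diagram is an image on the original page; the same relationship can be observed directly in code. A minimal sketch (the exact scalar type, e.g. int64 versus int32, is platform dependent):

```python
import numpy as np

a = np.array([1, 2, 3])
print(type(a))       # the container is an ndarray
print(a.dtype)       # the dtype object shared by every element

# an item extracted by indexing is an array scalar, not a plain Python int
item = a[0]
print(isinstance(item, np.generic))  # True - np.generic is the scalar base class
```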
An instance of ndarray class can be constructed by different array creation routines described later in the tutorial. The basic ndarray is created using an array function in NumPy as follows −
numpy.array
It creates an ndarray from any object exposing array interface, or from any method that returns an array.
numpy.array(object, dtype = None, copy = True, order = None, subok = False, ndmin = 0)
The above constructor takes the following parameters −
object
Any object exposing the array interface method returns an array, or any (nested) sequence.
dtype
Desired data type of array, optional
copy
Optional. By default (true), the object is copied
order
C (row major) or F (column major) or A (any) (default)
subok
By default, returned array forced to be a base class array. If true, sub-classes passed through
ndmin
Specifies minimum dimensions of resultant array
Take a look at the following examples to understand better.
import numpy as np
a = np.array([1,2,3])
print a
The output is as follows −
[1, 2, 3]
# more than one dimensions
import numpy as np
a = np.array([[1, 2], [3, 4]])
print a
The output is as follows −
[[1, 2]
[3, 4]]
# minimum dimensions
import numpy as np
a = np.array([1, 2, 3,4,5], ndmin = 2)
print a
The output is as follows −
[[1, 2, 3, 4, 5]]
# dtype parameter
import numpy as np
a = np.array([1, 2, 3], dtype = complex)
print a
The output is as follows −
[ 1.+0.j, 2.+0.j, 3.+0.j]
The ndarray object consists of contiguous one-dimensional segment of computer memory, combined with an indexing scheme that maps each item to a location in the memory block. The memory block holds the elements in a row-major order (C style) or a column-major order (FORTRAN or MatLab style).
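The memory order can be chosen at creation time and inspected through the flags attribute; a small sketch:

```python
import numpy as np

c_arr = np.array([[1, 2, 3], [4, 5, 6]], order='C')  # row-major (C style)
f_arr = np.array([[1, 2, 3], [4, 5, 6]], order='F')  # column-major (FORTRAN style)

print(c_arr.flags['C_CONTIGUOUS'])  # True
print(f_arr.flags['F_CONTIGUOUS'])  # True
```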
NumPy supports a much greater variety of numerical types than Python does. The following table shows different scalar data types defined in NumPy.
bool_
Boolean (True or False) stored as a byte
int_
Default integer type (same as C long; normally either int64 or int32)
intc
Identical to C int (normally int32 or int64)
intp
Integer used for indexing (same as C ssize_t; normally either int32 or int64)
int8
Byte (-128 to 127)
int16
Integer (-32768 to 32767)
int32
Integer (-2147483648 to 2147483647)
int64
Integer (-9223372036854775808 to 9223372036854775807)
uint8
Unsigned integer (0 to 255)
uint16
Unsigned integer (0 to 65535)
uint32
Unsigned integer (0 to 4294967295)
uint64
Unsigned integer (0 to 18446744073709551615)
float_
Shorthand for float64
float16
Half precision float: sign bit, 5 bits exponent, 10 bits mantissa
float32
Single precision float: sign bit, 8 bits exponent, 23 bits mantissa
float64
Double precision float: sign bit, 11 bits exponent, 52 bits mantissa
complex_
Shorthand for complex128
complex64
Complex number, represented by two 32-bit floats (real and imaginary components)
complex128
Complex number, represented by two 64-bit floats (real and imaginary components)
NumPy numerical types are instances of dtype (data-type) objects, each having unique characteristics. The dtypes are available as np.bool_, np.float32, etc.
A data type object describes interpretation of fixed block of memory corresponding to an array, depending on the following aspects −
Type of data (integer, float or Python object)
Size of data
Byte order (little-endian or big-endian)
In case of structured type, the names of fields, data type of each field and part of the memory block taken by each field.
If data type is a subarray, its shape and data type
The byte order is decided by prefixing '<' or '>' to the data type. '<' means that encoding is little-endian (the least significant byte is stored in the smallest address). '>' means that encoding is big-endian (the most significant byte is stored in the smallest address).
A dtype object is constructed using the following syntax −
numpy.dtype(object, align, copy)
The parameters are −
Object − To be converted to data type object
Align − If true, adds padding to the field to make it similar to C-struct
Copy − Makes a new copy of dtype object. If false, the result is reference to builtin data type object
# using array-scalar type
import numpy as np
dt = np.dtype(np.int32)
print dt
The output is as follows −
int32
#int8, int16, int32, int64 can be replaced by equivalent string 'i1', 'i2','i4', etc.
import numpy as np
dt = np.dtype('i4')
print dt
The output is as follows −
int32
# using endian notation
import numpy as np
dt = np.dtype('>i4')
print dt
The output is as follows −
>i4
The following examples show the use of structured data type. Here, the field name and the corresponding scalar data type is to be declared.
# first create structured data type
import numpy as np
dt = np.dtype([('age',np.int8)])
print dt
The output is as follows −
[('age', 'i1')]
# now apply it to ndarray object
import numpy as np
dt = np.dtype([('age',np.int8)])
a = np.array([(10,),(20,),(30,)], dtype = dt)
print a
The output is as follows −
[(10,) (20,) (30,)]
# field name can be used to access content of age column
import numpy as np
dt = np.dtype([('age',np.int8)])
a = np.array([(10,),(20,),(30,)], dtype = dt)
print a['age']
The output is as follows −
[10 20 30]
The following examples define a structured data type called student with a string field 'name', an integer field 'age' and a float field 'marks'. This dtype is applied to ndarray object.
import numpy as np
student = np.dtype([('name','S20'), ('age', 'i1'), ('marks', 'f4')])
print student
The output is as follows −
[('name', 'S20'), ('age', 'i1'), ('marks', '<f4')]
import numpy as np
student = np.dtype([('name','S20'), ('age', 'i1'), ('marks', 'f4')])
a = np.array([('abc', 21, 50),('xyz', 18, 75)], dtype = student)
print a
The output is as follows −
[('abc', 21, 50.0), ('xyz', 18, 75.0)]
Each built-in data type has a character code that uniquely identifies it.
'b' − boolean
'i' − (signed) integer
'u' − unsigned integer
'f' − floating-point
'c' − complex-floating point
'm' − timedelta
'M' − datetime
'O' − (Python) objects
'S', 'a' − (byte-)string
'U' − Unicode
'V' − raw data (void)
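The character code for a given dtype can be queried through its kind attribute, for example:

```python
import numpy as np

# each dtype reports its one-character type code via .kind
print(np.dtype(np.int32).kind)    # 'i' - signed integer
print(np.dtype(np.float64).kind)  # 'f' - floating-point
print(np.dtype('S10').kind)       # 'S' - byte string
print(np.dtype(bool).kind)        # 'b' - boolean
```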
In this chapter, we will discuss the various array attributes of NumPy.
This array attribute returns a tuple consisting of array dimensions. It can also be used to resize the array.
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
print a.shape
The output is as follows −
(2, 3)
# this resizes the ndarray
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
a.shape = (3,2)
print a
The output is as follows −
[[1, 2]
[3, 4]
[5, 6]]
NumPy also provides a reshape function to resize an array.
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
b = a.reshape(3,2)
print b
The output is as follows −
[[1, 2]
[3, 4]
[5, 6]]
This array attribute returns the number of array dimensions.
# an array of evenly spaced numbers
import numpy as np
a = np.arange(24)
print a
The output is as follows −
[0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23]
# this is one dimensional array
import numpy as np
a = np.arange(24)
a.ndim
# now reshape it
b = a.reshape(2,4,3)
print b
# b is having three dimensions
The output is as follows −
[[[ 0, 1, 2]
[ 3, 4, 5]
[ 6, 7, 8]
[ 9, 10, 11]]
[[12, 13, 14]
[15, 16, 17]
[18, 19, 20]
[21, 22, 23]]]
This array attribute returns the length of each element of array in bytes.
# dtype of array is int8 (1 byte)
import numpy as np
x = np.array([1,2,3,4,5], dtype = np.int8)
print x.itemsize
The output is as follows −
1
# dtype of array is now float32 (4 bytes)
import numpy as np
x = np.array([1,2,3,4,5], dtype = np.float32)
print x.itemsize
The output is as follows −
4
The ndarray object has the following attributes. Its current values are returned by this function.
C_CONTIGUOUS (C)
The data is in a single, C-style contiguous segment
F_CONTIGUOUS (F)
The data is in a single, Fortran-style contiguous segment
OWNDATA (O)
The array owns the memory it uses or borrows it from another object
WRITEABLE (W)
The data area can be written to. Setting this to False locks the data, making it read-only
ALIGNED (A)
The data and all elements are aligned appropriately for the hardware
UPDATEIFCOPY (U)
This array is a copy of some other array. When this array is deallocated, the base array will be updated with the contents of this array
The following example shows the current values of flags.
import numpy as np
x = np.array([1,2,3,4,5])
print x.flags
The output is as follows −
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
A new ndarray object can be constructed by any of the following array creation routines or using a low-level ndarray constructor.
It creates an uninitialized array of specified shape and dtype. It uses the following constructor −
numpy.empty(shape, dtype = float, order = 'C')
The constructor takes the following parameters.
Shape
Shape of an empty array in int or tuple of int
Dtype
Desired output data type. Optional
Order
'C' for C-style row-major array, 'F' for FORTRAN style column-major array
The following code shows an example of an empty array.
import numpy as np
x = np.empty([3,2], dtype = int)
print x
The output is as follows −
[[22649312 1701344351]
[1818321759 1885959276]
[16779776 156368896]]
Note − The elements in an array show random values as they are not initialized.
Returns a new array of specified size, filled with zeros.
numpy.zeros(shape, dtype = float, order = 'C')
The constructor takes the following parameters.
Shape
Shape of an empty array in int or sequence of int
Dtype
Desired output data type. Optional
Order
'C' for C-style row-major array, 'F' for FORTRAN style column-major array
# array of five zeros. Default dtype is float
import numpy as np
x = np.zeros(5)
print x
The output is as follows −
[ 0. 0. 0. 0. 0.]
import numpy as np
x = np.zeros((5,), dtype = np.int)
print x
Now, the output would be as follows −
[0 0 0 0 0]
# custom type
import numpy as np
x = np.zeros((2,2), dtype = [('x', 'i4'), ('y', 'i4')])
print x
It should produce the following output −
[[(0,0)(0,0)]
[(0,0)(0,0)]]
Returns a new array of specified size and type, filled with ones.
numpy.ones(shape, dtype = None, order = 'C')
The constructor takes the following parameters.
Shape
Shape of an empty array in int or tuple of int
Dtype
Desired output data type. Optional
Order
'C' for C-style row-major array, 'F' for FORTRAN style column-major array
# array of five ones. Default dtype is float
import numpy as np
x = np.ones(5)
print x
The output is as follows −
[ 1. 1. 1. 1. 1.]
import numpy as np
x = np.ones([2,2], dtype = int)
print x
Now, the output would be as follows −
[[1 1]
[1 1]]
In this chapter, we will discuss how to create an array from existing data.
This function is similar to numpy.array except for the fact that it has fewer parameters. This routine is useful for converting Python sequence into ndarray.
numpy.asarray(a, dtype = None, order = None)
The constructor takes the following parameters.
a
Input data in any form such as list, list of tuples, tuples, tuple of tuples or tuple of lists
dtype
By default, the data type of input data is applied to the resultant ndarray
order
C (row major) or F (column major). C is default
The following examples show how you can use the asarray function.
# convert list to ndarray
import numpy as np
x = [1,2,3]
a = np.asarray(x)
print a
Its output would be as follows −
[1 2 3]
# dtype is set
import numpy as np
x = [1,2,3]
a = np.asarray(x, dtype = float)
print a
Now, the output would be as follows −
[ 1. 2. 3.]
# ndarray from tuple
import numpy as np
x = (1,2,3)
a = np.asarray(x)
print a
Its output would be −
[1 2 3]
# ndarray from list of tuples
import numpy as np
x = [(1,2,3),(4,5)]
a = np.asarray(x)
print a
Here, the output would be as follows −
[(1, 2, 3) (4, 5)]
This function interprets a buffer as one-dimensional array. Any object that exposes the buffer interface is used as parameter to return an ndarray.
numpy.frombuffer(buffer, dtype = float, count = -1, offset = 0)
The constructor takes the following parameters.
buffer
Any object that exposes buffer interface
dtype
Data type of returned ndarray. Defaults to float
count
The number of items to read, default -1 means all data
offset
The starting position to read from. Default is 0
The following examples demonstrate the use of frombuffer function.
import numpy as np
s = 'Hello World'
a = np.frombuffer(s, dtype = 'S1')
print a
Here is its output −
['H' 'e' 'l' 'l' 'o' ' ' 'W' 'o' 'r' 'l' 'd']
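The example above is Python 2 style; in Python 3, str no longer exposes the buffer interface, so a bytes object must be used instead:

```python
import numpy as np

# In Python 3, frombuffer requires a bytes-like object
s = b'Hello World'
a = np.frombuffer(s, dtype='S1')
print(a)  # [b'H' b'e' b'l' b'l' b'o' b' ' b'W' b'o' b'r' b'l' b'd']
```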
This function builds an ndarray object from any iterable object. A new one-dimensional array is returned by this function.
numpy.fromiter(iterable, dtype, count = -1)
Here, the constructor takes the following parameters.
iterable
Any iterable object
dtype
Data type of resultant array
count
The number of items to be read from iterator. Default is -1 which means all data to be read
The following examples show how to use the built-in range() function to return a list object. An iterator of this list is used to form an ndarray object.
# create list object using range function
import numpy as np
list = range(5)
print list
Its output is as follows −
[0, 1, 2, 3, 4]
# obtain iterator object from list
import numpy as np
list = range(5)
it = iter(list)
# use iterator to create ndarray
x = np.fromiter(it, dtype = float)
print x
Now, the output would be as follows −
[0. 1. 2. 3. 4.]
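In Python 3, range() already returns an iterable object, so the explicit iter() call is optional; a sketch:

```python
import numpy as np

# range objects are iterable, so fromiter accepts them directly
x = np.fromiter(range(5), dtype=float)
print(x)  # [0. 1. 2. 3. 4.]
```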
In this chapter, we will see how to create an array from numerical ranges.
This function returns an ndarray object containing evenly spaced values within a given range. The format of the function is as follows −
numpy.arange(start, stop, step, dtype)
The constructor takes the following parameters.
start
The start of an interval. If omitted, defaults to 0
stop
The end of an interval (not including this number)
step
Spacing between values, default is 1
dtype
Data type of resulting ndarray. If not given, data type of input is used
The following examples show how you can use this function.
import numpy as np
x = np.arange(5)
print x
Its output would be as follows −
[0 1 2 3 4]
import numpy as np
# dtype set
x = np.arange(5, dtype = float)
print x
Here, the output would be −
[0. 1. 2. 3. 4.]
# start and stop parameters set
import numpy as np
x = np.arange(10,20,2)
print x
Its output is as follows −
[10 12 14 16 18]
This function is similar to arange() function. In this function, instead of step size, the number of evenly spaced values between the interval is specified. The usage of this function is as follows −
numpy.linspace(start, stop, num, endpoint, retstep, dtype)
The constructor takes the following parameters.
start
The starting value of the sequence
stop
The end value of the sequence, included in the sequence if endpoint set to true
num
The number of evenly spaced samples to be generated. Default is 50
endpoint
True by default, hence the stop value is included in the sequence. If false, it is not included
retstep
If true, returns samples and step between the consecutive numbers
dtype
Data type of output ndarray
The following examples demonstrate the use linspace function.
import numpy as np
x = np.linspace(10,20,5)
print x
Its output would be −
[10. 12.5 15. 17.5 20.]
# endpoint set to false
import numpy as np
x = np.linspace(10,20, 5, endpoint = False)
print x
The output would be −
[10. 12. 14. 16. 18.]
# find retstep value
import numpy as np
x = np.linspace(1,2,5, retstep = True)
print x
# retstep here is 0.25
Now, the output would be −
(array([ 1. , 1.25, 1.5 , 1.75, 2. ]), 0.25)
This function returns an ndarray object that contains the numbers that are evenly spaced on a log scale. Start and stop endpoints of the scale are indices of the base, usually 10.
numpy.logspace(start, stop, num, endpoint, base, dtype)
Following parameters determine the output of logspace function.
start
The starting point of the sequence is base**start
stop
The final value of the sequence is base**stop
num
The number of values between the range. Default is 50
endpoint
If true, stop is the last value in the range
base
Base of log space, default is 10
dtype
Data type of output array. If not given, it depends upon other input arguments
The following examples will help you understand the logspace function.
import numpy as np
# default base is 10
a = np.logspace(1.0, 2.0, num = 10)
print a
Its output would be as follows −
[ 10. 12.91549665 16.68100537 21.5443469 27.82559402
35.93813664 46.41588834 59.94842503 77.42636827 100. ]
# set base of log space to 2
import numpy as np
a = np.logspace(1,10,num = 10, base = 2)
print a
Now, the output would be −
[ 2. 4. 8. 16. 32. 64. 128. 256. 512. 1024.]
Contents of ndarray object can be accessed and modified by indexing or slicing, just like Python's in-built container objects.
As mentioned earlier, items in ndarray object follows zero-based index. Three types of indexing methods are available − field access, basic slicing and advanced indexing.
Basic slicing is an extension of Python's basic concept of slicing to n dimensions. A Python slice object is constructed by giving start, stop, and step parameters to the built-in slice function. This slice object is passed to the array to extract a part of array.
import numpy as np
a = np.arange(10)
s = slice(2,7,2)
print a[s]
Its output is as follows −
[2 4 6]
In the above example, an ndarray object is prepared by arange() function. Then a slice object is defined with start, stop, and step values 2, 7, and 2 respectively. When this slice object is passed to the ndarray, a part of it starting with index 2 up to 7 with a step of 2 is sliced.
The same result can also be obtained by giving the slicing parameters separated by a colon : (start:stop:step) directly to the ndarray object.
import numpy as np
a = np.arange(10)
b = a[2:7:2]
print b
Here, we will get the same output −
[2 4 6]
If only one parameter is put, a single item corresponding to the index will be returned. If a : is inserted in front of it, all items from that index onwards will be extracted. If two parameters (with : between them) are used, items between the two indexes (not including the stop index) with default step one are sliced.
# slice single item
import numpy as np
a = np.arange(10)
b = a[5]
print b
Its output is as follows −
5
# slice items starting from index
import numpy as np
a = np.arange(10)
print a[2:]
Now, the output would be −
[2 3 4 5 6 7 8 9]
# slice items between indexes
import numpy as np
a = np.arange(10)
print a[2:5]
Here, the output would be −
[2 3 4]
The above description applies to multi-dimensional ndarray too.
import numpy as np
a = np.array([[1,2,3],[3,4,5],[4,5,6]])
print a
# slice items starting from index
print 'Now we will slice the array from the index a[1:]'
print a[1:]
The output is as follows −
[[1 2 3]
[3 4 5]
[4 5 6]]
Now we will slice the array from the index a[1:]
[[3 4 5]
[4 5 6]]
Slicing can also include ellipsis (...) to make a selection tuple of the same length as the dimension of an array. If the ellipsis is used at the row position, it will return an ndarray comprising the items in rows.
# array to begin with
import numpy as np
a = np.array([[1,2,3],[3,4,5],[4,5,6]])
print 'Our array is:'
print a
print '\n'
# this returns array of items in the second column
print 'The items in the second column are:'
print a[...,1]
print '\n'
# Now we will slice all items from the second row
print 'The items in the second row are:'
print a[1,...]
print '\n'
# Now we will slice all items from column 1 onwards
print 'The items column 1 onwards are:'
print a[...,1:]
The output of this program is as follows −
Our array is:
[[1 2 3]
[3 4 5]
[4 5 6]]
The items in the second column are:
[2 4 5]
The items in the second row are:
[3 4 5]
The items column 1 onwards are:
[[2 3]
[4 5]
[5 6]]
Advanced indexing is triggered when the selection object is a non-tuple sequence, an ndarray object of integer or Boolean data type, or a tuple with at least one item being a sequence object. Advanced indexing always returns a copy of the data; in contrast, slicing only presents a view.
There are two types of advanced indexing − Integer and Boolean.
This mechanism helps in selecting any arbitrary item in an array based on its N-dimensional index. Each integer array represents the number of indexes into that dimension. When the index consists of as many integer arrays as the dimensions of the target ndarray, it becomes straightforward.
In the following example, one element of specified column from each row of ndarray object is selected. Hence, the row index contains all row numbers, and the column index specifies the element to be selected.
import numpy as np
x = np.array([[1, 2], [3, 4], [5, 6]])
y = x[[0,1,2], [0,1,0]]
print y
Its output would be as follows −
[1 4 5]
The selection includes elements at (0,0), (1,1) and (2,0) from the first array.
In the following example, elements placed at corners of a 4X3 array are selected. The row indices of selection are [0, 0] and [3,3] whereas the column indices are [0,2] and [0,2].
import numpy as np
x = np.array([[ 0, 1, 2],[ 3, 4, 5],[ 6, 7, 8],[ 9, 10, 11]])
print 'Our array is:'
print x
print '\n'
rows = np.array([[0,0],[3,3]])
cols = np.array([[0,2],[0,2]])
y = x[rows,cols]
print 'The corner elements of this array are:'
print y
The output of this program is as follows −
Our array is:
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]]
The corner elements of this array are:
[[ 0 2]
[ 9 11]]
The resultant selection is an ndarray object containing corner elements.
Advanced and basic indexing can be combined by using one slice (:) or ellipsis (...) with an index array. The following example uses slice for row and advanced index for column. The result is the same when slice is used for both. But advanced index results in copy and may have different memory layout.
import numpy as np
x = np.array([[ 0, 1, 2],[ 3, 4, 5],[ 6, 7, 8],[ 9, 10, 11]])
print 'Our array is:'
print x
print '\n'
# slicing
z = x[1:4,1:3]
print 'After slicing, our array becomes:'
print z
print '\n'
# using advanced index for column
y = x[1:4,[1,2]]
print 'Slicing using advanced index for column:'
print y
The output of this program would be as follows −
Our array is:
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]]
After slicing, our array becomes:
[[ 4 5]
[ 7 8]
[10 11]]
Slicing using advanced index for column:
[[ 4 5]
[ 7 8]
[10 11]]
This type of advanced indexing is used when the resultant object is meant to be the result of Boolean operations, such as comparison operators.
In this example, items greater than 5 are returned as a result of Boolean indexing.
import numpy as np
x = np.array([[ 0, 1, 2],[ 3, 4, 5],[ 6, 7, 8],[ 9, 10, 11]])
print 'Our array is:'
print x
print '\n'
# Now we will print the items greater than 5
print 'The items greater than 5 are:'
print x[x > 5]
The output of this program would be −
Our array is:
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]]
The items greater than 5 are:
[ 6 7 8 9 10 11]
In this example, NaN (Not a Number) elements are omitted by using ~ (complement operator).
import numpy as np
a = np.array([np.nan, 1,2,np.nan,3,4,5])
print a[~np.isnan(a)]
Its output would be −
[ 1. 2. 3. 4. 5.]
The following example shows how to filter out the non-complex elements from an array.
import numpy as np
a = np.array([1, 2+6j, 5, 3.5+5j])
print a[np.iscomplex(a)]
Here, the output is as follows −
[2.0+6.j 3.5+5.j]
The term broadcasting refers to the ability of NumPy to treat arrays of different shapes during arithmetic operations. Arithmetic operations on arrays are usually done on corresponding elements. If two arrays are of exactly the same shape, then these operations are smoothly performed.
import numpy as np
a = np.array([1,2,3,4])
b = np.array([10,20,30,40])
c = a * b
print c
Its output is as follows −
[10 40 90 160]
If the dimensions of two arrays are dissimilar, element-to-element operations are not possible. However, operations on arrays of non-similar shapes is still possible in NumPy, because of the broadcasting capability. The smaller array is broadcast to the size of the larger array so that they have compatible shapes.
Broadcasting is possible if the following rules are satisfied −
Array with smaller ndim than the other is prepended with '1' in its shape.
Size in each dimension of the output shape is maximum of the input sizes in that dimension.
An input can be used in calculation, if its size in a particular dimension matches the output size or its value is exactly 1.
If an input has a dimension size of 1, the first data entry in that dimension is used for all calculations along that dimension.
A set of arrays is said to be broadcastable if the above rules produce a valid result and one of the following is true −
Arrays have exactly the same shape.
Arrays have the same number of dimensions and the length of each dimension is either a common length or 1.
Array having too few dimensions can have its shape prepended with a dimension of length 1, so that the above stated property is true.
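These rules can also be checked programmatically with the numpy.broadcast object, which computes the common output shape for two inputs and raises an error when the shapes are incompatible. This sketch uses Python 3 syntax, unlike the Python 2 print statements in the rest of this tutorial:

```python
import numpy as np

a = np.zeros((4, 3))   # shape (4, 3)
b = np.zeros(3)        # shape (3,), treated as (1, 3) by rule 1

# numpy.broadcast combines the two shapes according to the rules above
result_shape = np.broadcast(a, b).shape
print(result_shape)    # (4, 3)

# an incompatible pair, e.g. (4, 3) with (4,), raises a ValueError
try:
    np.broadcast(np.zeros((4, 3)), np.zeros(4))
except ValueError:
    print("not broadcastable")
```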
The following program shows an example of broadcasting.
import numpy as np
a = np.array([[0.0,0.0,0.0],[10.0,10.0,10.0],[20.0,20.0,20.0],[30.0,30.0,30.0]])
b = np.array([1.0,2.0,3.0])
print 'First array:'
print a
print '\n'
print 'Second array:'
print b
print '\n'
print 'First Array + Second Array'
print a + b
The output of this program would be as follows −
First array:
[[ 0. 0. 0.]
[ 10. 10. 10.]
[ 20. 20. 20.]
[ 30. 30. 30.]]
Second array:
[ 1. 2. 3.]
First Array + Second Array
[[ 1. 2. 3.]
[ 11. 12. 13.]
[ 21. 22. 23.]
[ 31. 32. 33.]]
The following figure demonstrates how array b is broadcast to become compatible with a.
NumPy package contains an iterator object numpy.nditer. It is an efficient multidimensional iterator object using which it is possible to iterate over an array. Each element of an array is visited using Python’s standard Iterator interface.
Let us create a 3X4 array using arange() function and iterate over it using nditer.
import numpy as np
a = np.arange(0,60,5)
a = a.reshape(3,4)
print 'Original array is:'
print a
print '\n'
print 'Modified array is:'
for x in np.nditer(a):
print x,
The output of this program is as follows −
Original array is:
[[ 0 5 10 15]
[20 25 30 35]
[40 45 50 55]]
Modified array is:
0 5 10 15 20 25 30 35 40 45 50 55
The order of iteration is chosen to match the memory layout of an array, without considering a particular ordering. This can be seen by iterating over the transpose of the above array.
import numpy as np
a = np.arange(0,60,5)
a = a.reshape(3,4)
print 'Original array is:'
print a
print '\n'
print 'Transpose of the original array is:'
b = a.T
print b
print '\n'
print 'Modified array is:'
for x in np.nditer(b):
print x,
The output of the above program is as follows −
Original array is:
[[ 0 5 10 15]
[20 25 30 35]
[40 45 50 55]]
Transpose of the original array is:
[[ 0 20 40]
[ 5 25 45]
[10 30 50]
[15 35 55]]
Modified array is:
0 5 10 15 20 25 30 35 40 45 50 55
If the same elements are stored using F-style order, the iterator chooses the more efficient way of iterating over an array.
import numpy as np
a = np.arange(0,60,5)
a = a.reshape(3,4)
print 'Original array is:'
print a
print '\n'
print 'Transpose of the original array is:'
b = a.T
print b
print '\n'
print 'Sorted in C-style order:'
c = b.copy(order='C')
print c
for x in np.nditer(c):
print x,
print '\n'
print 'Sorted in F-style order:'
c = b.copy(order='F')
print c
for x in np.nditer(c):
print x,
Its output would be as follows −
Original array is:
[[ 0 5 10 15]
[20 25 30 35]
[40 45 50 55]]
Transpose of the original array is:
[[ 0 20 40]
[ 5 25 45]
[10 30 50]
[15 35 55]]
Sorted in C-style order:
[[ 0 20 40]
[ 5 25 45]
[10 30 50]
[15 35 55]]
0 20 40 5 25 45 10 30 50 15 35 55
Sorted in F-style order:
[[ 0 20 40]
[ 5 25 45]
[10 30 50]
[15 35 55]]
0 5 10 15 20 25 30 35 40 45 50 55
It is possible to force nditer object to use a specific order by explicitly mentioning it.
import numpy as np
a = np.arange(0,60,5)
a = a.reshape(3,4)
print 'Original array is:'
print a
print '\n'
print 'Sorted in C-style order:'
for x in np.nditer(a, order = 'C'):
print x,
print '\n'
print 'Sorted in F-style order:'
for x in np.nditer(a, order = 'F'):
print x,
Its output would be −
Original array is:
[[ 0 5 10 15]
[20 25 30 35]
[40 45 50 55]]
Sorted in C-style order:
0 5 10 15 20 25 30 35 40 45 50 55
Sorted in F-style order:
0 20 40 5 25 45 10 30 50 15 35 55
The nditer object has another optional parameter called op_flags. Its default value is read-only, but can be set to read-write or write-only mode. This will enable modifying array elements using this iterator.
import numpy as np
a = np.arange(0,60,5)
a = a.reshape(3,4)
print 'Original array is:'
print a
print '\n'
for x in np.nditer(a, op_flags = ['readwrite']):
x[...] = 2*x
print 'Modified array is:'
print a
Its output is as follows −
Original array is:
[[ 0 5 10 15]
[20 25 30 35]
[40 45 50 55]]
Modified array is:
[[ 0 10 20 30]
[ 40 50 60 70]
[ 80 90 100 110]]
The nditer class constructor has a ‘flags’ parameter, which can take the following values −
c_index
C-order index can be tracked
f_index
Fortran-order index is tracked
multi_index
Type of indexes with one per iteration dimension can be tracked
external_loop
Causes values given to be one-dimensional arrays with multiple values instead of zero-dimensional arrays
In the following example, one-dimensional arrays corresponding to each column is traversed by the iterator.
import numpy as np
a = np.arange(0,60,5)
a = a.reshape(3,4)
print 'Original array is:'
print a
print '\n'
print 'Modified array is:'
for x in np.nditer(a, flags = ['external_loop'], order = 'F'):
print x,
The output is as follows −
Original array is:
[[ 0 5 10 15]
[20 25 30 35]
[40 45 50 55]]
Modified array is:
[ 0 20 40] [ 5 25 45] [10 30 50] [15 35 55]
If two arrays are broadcastable, a combined nditer object is able to iterate upon them concurrently. Assuming that array a has dimensions 3X4, and there is another array b of dimension 1X4, an iterator of the following type is used (array b is broadcast to the size of a).
import numpy as np
a = np.arange(0,60,5)
a = a.reshape(3,4)
print 'First array is:'
print a
print '\n'
print 'Second array is:'
b = np.array([1, 2, 3, 4], dtype = int)
print b
print '\n'
print 'Modified array is:'
for x,y in np.nditer([a,b]):
print "%d:%d" % (x,y),
Its output would be as follows −
First array is:
[[ 0 5 10 15]
[20 25 30 35]
[40 45 50 55]]
Second array is:
[1 2 3 4]
Modified array is:
0:1 5:2 10:3 15:4 20:1 25:2 30:3 35:4 40:1 45:2 50:3 55:4
Several routines are available in NumPy package for manipulation of elements in ndarray object. They can be classified into the following types −
numpy.reshape − Gives a new shape to an array without changing its data
ndarray.flat − A 1-D iterator over the array
ndarray.flatten − Returns a copy of the array collapsed into one dimension
numpy.ravel − Returns a contiguous flattened array
numpy.transpose − Permutes the dimensions of an array
ndarray.T − Same as self.transpose()
numpy.rollaxis − Rolls the specified axis backwards
numpy.swapaxes − Interchanges the two axes of an array
numpy.broadcast − Produces an object that mimics broadcasting
numpy.broadcast_to − Broadcasts an array to a new shape
numpy.expand_dims − Expands the shape of an array
numpy.squeeze − Removes single-dimensional entries from the shape of an array
numpy.concatenate − Joins a sequence of arrays along an existing axis
numpy.stack − Joins a sequence of arrays along a new axis
numpy.hstack − Stacks arrays in sequence horizontally (column wise)
numpy.vstack − Stacks arrays in sequence vertically (row wise)
numpy.split − Splits an array into multiple sub-arrays
numpy.hsplit − Splits an array into multiple sub-arrays horizontally (column-wise)
numpy.vsplit − Splits an array into multiple sub-arrays vertically (row-wise)
numpy.resize − Returns a new array with the specified shape
numpy.append − Appends the values to the end of an array
numpy.insert − Inserts the values along the given axis before the given indices
numpy.delete − Returns a new array with sub-arrays along an axis deleted
numpy.unique − Finds the unique elements of an array
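As a quick illustration of a few of these routines, the following minimal sketch (in Python 3 syntax, unlike the Python 2 examples elsewhere in this tutorial) reshapes, joins, splits and deduplicates an array:

```python
import numpy as np

a = np.arange(6)

# reshape: same data, new shape
b = a.reshape(2, 3)
print(b.shape)            # (2, 3)

# concatenate along the existing axis 0
c = np.concatenate([b, b], axis=0)
print(c.shape)            # (4, 3)

# split back into two equal sub-arrays along axis 0
first, second = np.split(c, 2)
print(first.shape)        # (2, 3)

# unique elements of the combined array
print(np.unique(c))       # [0 1 2 3 4 5]
```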
Following are the functions for bitwise operations available in NumPy package.
numpy.bitwise_and − Computes bitwise AND operation of array elements
numpy.bitwise_or − Computes bitwise OR operation of array elements
numpy.invert − Computes bitwise NOT
numpy.left_shift − Shifts bits of a binary representation to the left
numpy.right_shift − Shifts bits of a binary representation to the right
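A short sketch of these operations on scalars (Python 3 syntax; the shift functions pad with zeros on the vacated side):

```python
import numpy as np

# 13 is 01101 and 17 is 10001 in binary
print(np.bitwise_and(13, 17))   # 1  (00001)
print(np.bitwise_or(13, 17))    # 29 (11101)

# left_shift appends zeros on the right: 10 << 2 == 40
print(np.left_shift(10, 2))     # 40
# right_shift drops bits on the right: 40 >> 2 == 10
print(np.right_shift(40, 2))    # 10
```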
The following functions are used to perform vectorized string operations for arrays of dtype numpy.string_ or numpy.unicode_. They are based on the standard string functions in Python's built-in library.
numpy.char.add() − Returns element-wise string concatenation for two arrays of str or Unicode
numpy.char.multiply() − Returns the string with multiple concatenation, element-wise
numpy.char.center() − Returns a copy of the given string with elements centered in a string of specified length
numpy.char.capitalize() − Returns a copy of the string with only the first character capitalized
numpy.char.title() − Returns the element-wise title cased version of the string or unicode
numpy.char.lower() − Returns an array with the elements converted to lowercase
numpy.char.upper() − Returns an array with the elements converted to uppercase
numpy.char.split() − Returns a list of the words in the string, using the specified separator as the delimiter
numpy.char.splitlines() − Returns a list of the lines in the element, breaking at the line boundaries
numpy.char.strip() − Returns a copy with the leading and trailing characters removed
numpy.char.join() − Returns a string which is the concatenation of the strings in the sequence
numpy.char.replace() − Returns a copy of the string with all occurrences of substring replaced by the new string
numpy.char.decode() − Calls str.decode element-wise
numpy.char.encode() − Calls str.encode element-wise
These functions are defined in character array class (numpy.char). The older Numarray package contained chararray class. The above functions in numpy.char class are useful in performing vectorized string operations.
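A minimal sketch of a few numpy.char functions (Python 3 syntax):

```python
import numpy as np

# element-wise concatenation of two string arrays
print(np.char.add(['hello '], ['world']))       # ['hello world']

# case conversions work element-wise on the whole array
a = np.array(['NumPy', 'Tutorial'])
print(np.char.upper(a))                         # ['NUMPY' 'TUTORIAL']
print(np.char.lower(a))                         # ['numpy' 'tutorial']

# replace a substring in every element
print(np.char.replace(a, 'Tutorial', 'Guide'))  # ['NumPy' 'Guide']
```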
Quite understandably, NumPy contains a large number of various mathematical operations. NumPy provides standard trigonometric functions, functions for arithmetic operations, handling complex numbers, etc.
NumPy has standard trigonometric functions which return trigonometric ratios for a given angle in radians.
Example
import numpy as np
a = np.array([0,30,45,60,90])
print 'Sine of different angles:'
# Convert to radians by multiplying with pi/180
print np.sin(a*np.pi/180)
print '\n'
print 'Cosine values for angles in array:'
print np.cos(a*np.pi/180)
print '\n'
print 'Tangent values for given angles:'
print np.tan(a*np.pi/180)
Here is its output −
Sine of different angles:
[ 0. 0.5 0.70710678 0.8660254 1. ]
Cosine values for angles in array:
[ 1.00000000e+00 8.66025404e-01 7.07106781e-01 5.00000000e-01
6.12323400e-17]
Tangent values for given angles:
[ 0.00000000e+00 5.77350269e-01 1.00000000e+00 1.73205081e+00
1.63312394e+16]
arcsin, arcos, and arctan functions return the trigonometric inverse of sin, cos, and tan of the given angle. The result of these functions can be verified by numpy.degrees() function by converting radians to degrees.
Example
import numpy as np
a = np.array([0,30,45,60,90])
print 'Array containing sine values:'
sin = np.sin(a*np.pi/180)
print sin
print '\n'
print 'Compute sine inverse of angles. Returned values are in radians.'
inv = np.arcsin(sin)
print inv
print '\n'
print 'Check result by converting to degrees:'
print np.degrees(inv)
print '\n'
print 'arccos and arctan functions behave similarly:'
cos = np.cos(a*np.pi/180)
print cos
print '\n'
print 'Inverse of cos:'
inv = np.arccos(cos)
print inv
print '\n'
print 'In degrees:'
print np.degrees(inv)
print '\n'
print 'Tan function:'
tan = np.tan(a*np.pi/180)
print tan
print '\n'
print 'Inverse of tan:'
inv = np.arctan(tan)
print inv
print '\n'
print 'In degrees:'
print np.degrees(inv)
Its output is as follows −
Array containing sine values:
[ 0. 0.5 0.70710678 0.8660254 1. ]
Compute sine inverse of angles. Returned values are in radians.
[ 0. 0.52359878 0.78539816 1.04719755 1.57079633]
Check result by converting to degrees:
[ 0. 30. 45. 60. 90.]
arccos and arctan functions behave similarly:
[ 1.00000000e+00 8.66025404e-01 7.07106781e-01 5.00000000e-01
6.12323400e-17]
Inverse of cos:
[ 0. 0.52359878 0.78539816 1.04719755 1.57079633]
In degrees:
[ 0. 30. 45. 60. 90.]
Tan function:
[ 0.00000000e+00 5.77350269e-01 1.00000000e+00 1.73205081e+00
1.63312394e+16]
Inverse of tan:
[ 0. 0.52359878 0.78539816 1.04719755 1.57079633]
In degrees:
[ 0. 30. 45. 60. 90.]
This is a function that returns the value rounded to the desired precision. The function takes the following parameters.
numpy.around(a,decimals)
Where,
a
Input data
decimals
The number of decimals to round to. Default is 0. If negative, the value is rounded to the position to the left of the decimal point
Example
import numpy as np
a = np.array([1.0,5.55, 123, 0.567, 25.532])
print 'Original array:'
print a
print '\n'
print 'After rounding:'
print np.around(a)
print np.around(a, decimals = 1)
print np.around(a, decimals = -1)
It produces the following output −
Original array:
[ 1. 5.55 123. 0.567 25.532]
After rounding:
[ 1. 6. 123. 1. 26. ]
[ 1. 5.6 123. 0.6 25.5]
[ 0. 10. 120. 0. 30. ]
This function returns the largest integer not greater than the input parameter. The floor of the scalar x is the largest integer i, such that i <= x. Note that flooring is always rounded towards negative infinity, so negative fractions are rounded away from 0 (the floor of -1.7 is -2).
Example
import numpy as np
a = np.array([-1.7, 1.5, -0.2, 0.6, 10])
print 'The given array:'
print a
print '\n'
print 'The modified array:'
print np.floor(a)
It produces the following output −
The given array:
[ -1.7 1.5 -0.2 0.6 10. ]
The modified array:
[ -2. 1. -1. 0. 10.]
The ceil() function returns the ceiling of an input value, i.e. the ceil of the scalar x is the smallest integer i, such that i >= x.
Example
import numpy as np
a = np.array([-1.7, 1.5, -0.2, 0.6, 10])
print 'The given array:'
print a
print '\n'
print 'The modified array:'
print np.ceil(a)
It will produce the following output −
The given array:
[ -1.7 1.5 -0.2 0.6 10. ]
The modified array:
[ -1. 2. -0. 1. 10.]
Input arrays for performing arithmetic operations such as add(), subtract(), multiply(), and divide() must be either of the same shape or should conform to array broadcasting rules.
import numpy as np
a = np.arange(9, dtype = np.float_).reshape(3,3)
print 'First array:'
print a
print '\n'
print 'Second array:'
b = np.array([10,10,10])
print b
print '\n'
print 'Add the two arrays:'
print np.add(a,b)
print '\n'
print 'Subtract the two arrays:'
print np.subtract(a,b)
print '\n'
print 'Multiply the two arrays:'
print np.multiply(a,b)
print '\n'
print 'Divide the two arrays:'
print np.divide(a,b)
It will produce the following output −
First array:
[[ 0. 1. 2.]
[ 3. 4. 5.]
[ 6. 7. 8.]]
Second array:
[10 10 10]
Add the two arrays:
[[ 10. 11. 12.]
[ 13. 14. 15.]
[ 16. 17. 18.]]
Subtract the two arrays:
[[-10. -9. -8.]
[ -7. -6. -5.]
[ -4. -3. -2.]]
Multiply the two arrays:
[[ 0. 10. 20.]
[ 30. 40. 50.]
[ 60. 70. 80.]]
Divide the two arrays:
[[ 0. 0.1 0.2]
[ 0.3 0.4 0.5]
[ 0.6 0.7 0.8]]
Let us now discuss some of the other important arithmetic functions available in NumPy.
This function returns the reciprocal of the argument, element-wise. For integer elements with absolute values larger than 1, the result is always 0 because of the way in which Python handles integer division, and for integer 0 an overflow warning is issued.
import numpy as np
a = np.array([0.25, 1.33, 1, 0, 100])
print 'Our array is:'
print a
print '\n'
print 'After applying reciprocal function:'
print np.reciprocal(a)
print '\n'
b = np.array([100], dtype = int)
print 'The second array is:'
print b
print '\n'
print 'After applying reciprocal function:'
print np.reciprocal(b)
It will produce the following output −
Our array is:
[ 0.25 1.33 1. 0. 100. ]
After applying reciprocal function:
main.py:9: RuntimeWarning: divide by zero encountered in reciprocal
print np.reciprocal(a)
[ 4. 0.7518797 1. inf 0.01 ]
The second array is:
[100]
After applying reciprocal function:
[0]
This function treats elements in the first input array as base and returns it raised to the power of the corresponding element in the second input array.
import numpy as np
a = np.array([10,100,1000])
print 'Our array is:'
print a
print '\n'
print 'Applying power function:'
print np.power(a,2)
print '\n'
print 'Second array:'
b = np.array([1,2,3])
print b
print '\n'
print 'Applying power function again:'
print np.power(a,b)
It will produce the following output −
Our array is:
[ 10 100 1000]
Applying power function:
[ 100 10000 1000000]
Second array:
[1 2 3]
Applying power function again:
[ 10 10000 1000000000]
This function returns the remainder of division of the corresponding elements in the input array. The function numpy.remainder() also produces the same result.
import numpy as np
a = np.array([10,20,30])
b = np.array([3,5,7])
print 'First array:'
print a
print '\n'
print 'Second array:'
print b
print '\n'
print 'Applying mod() function:'
print np.mod(a,b)
print '\n'
print 'Applying remainder() function:'
print np.remainder(a,b)
It will produce the following output −
First array:
[10 20 30]
Second array:
[3 5 7]
Applying mod() function:
[1 0 2]
Applying remainder() function:
[1 0 2]
The following functions are used to perform operations on arrays with complex numbers.
numpy.real() − returns the real part of the complex data type argument.
numpy.imag() − returns the imaginary part of the complex data type argument.
numpy.conj() − returns the complex conjugate, which is obtained by changing the sign of the imaginary part.
numpy.angle() − returns the angle of the complex argument. The function has a degree parameter. If true, the angle is returned in degrees, otherwise in radians.
import numpy as np
a = np.array([-5.6j, 0.2j, 11. , 1+1j])
print 'Our array is:'
print a
print '\n'
print 'Applying real() function:'
print np.real(a)
print '\n'
print 'Applying imag() function:'
print np.imag(a)
print '\n'
print 'Applying conj() function:'
print np.conj(a)
print '\n'
print 'Applying angle() function:'
print np.angle(a)
print '\n'
print 'Applying angle() function again (result in degrees)'
print np.angle(a, deg = True)
It will produce the following output −
Our array is:
[ 0.-5.6j 0.+0.2j 11.+0.j 1.+1.j ]
Applying real() function:
[ 0. 0. 11. 1.]
Applying imag() function:
[-5.6 0.2 0. 1. ]
Applying conj() function:
[ 0.+5.6j 0.-0.2j 11.-0.j 1.-1.j ]
Applying angle() function:
[-1.57079633 1.57079633 0. 0.78539816]
Applying angle() function again (result in degrees)
[-90. 90. 0. 45.]
NumPy has quite a few useful statistical functions for finding minimum, maximum, percentile standard deviation and variance, etc. from the given elements in the array. The functions are explained as follows −
These functions return the minimum and the maximum from the elements in the given array along the specified axis.
import numpy as np
a = np.array([[3,7,5],[8,4,3],[2,4,9]])
print 'Our array is:'
print a
print '\n'
print 'Applying amin() function:'
print np.amin(a,1)
print '\n'
print 'Applying amin() function again:'
print np.amin(a,0)
print '\n'
print 'Applying amax() function:'
print np.amax(a)
print '\n'
print 'Applying amax() function again:'
print np.amax(a, axis = 0)
It will produce the following output −
Our array is:
[[3 7 5]
[8 4 3]
[2 4 9]]
Applying amin() function:
[3 3 2]
Applying amin() function again:
[2 4 3]
Applying amax() function:
9
Applying amax() function again:
[8 7 9]
The numpy.ptp() function returns the range (maximum-minimum) of values along an axis.
import numpy as np
a = np.array([[3,7,5],[8,4,3],[2,4,9]])
print 'Our array is:'
print a
print '\n'
print 'Applying ptp() function:'
print np.ptp(a)
print '\n'
print 'Applying ptp() function along axis 1:'
print np.ptp(a, axis = 1)
print '\n'
print 'Applying ptp() function along axis 0:'
print np.ptp(a, axis = 0)
It will produce the following output −
Our array is:
[[3 7 5]
[8 4 3]
[2 4 9]]
Applying ptp() function:
7
Applying ptp() function along axis 1:
[4 5 7]
Applying ptp() function along axis 0:
[6 3 6]
Percentile (or a centile) is a measure used in statistics indicating the value below which a given percentage of observations in a group of observations fall. The function numpy.percentile() takes the following arguments.
numpy.percentile(a, q, axis)
Where,
a
Input array
q
The percentile to compute; must be between 0 and 100
axis
The axis along which the percentile is to be calculated
import numpy as np
a = np.array([[30,40,70],[80,20,10],[50,90,60]])
print 'Our array is:'
print a
print '\n'
print 'Applying percentile() function:'
print np.percentile(a,50)
print '\n'
print 'Applying percentile() function along axis 1:'
print np.percentile(a,50, axis = 1)
print '\n'
print 'Applying percentile() function along axis 0:'
print np.percentile(a,50, axis = 0)
It will produce the following output −
Our array is:
[[30 40 70]
[80 20 10]
[50 90 60]]
Applying percentile() function:
50.0
Applying percentile() function along axis 1:
[ 40. 20. 60.]
Applying percentile() function along axis 0:
[ 50. 40. 60.]
Median is defined as the value separating the higher half of a data sample from the lower half. The numpy.median() function is used as shown in the following program.
import numpy as np
a = np.array([[30,65,70],[80,95,10],[50,90,60]])
print 'Our array is:'
print a
print '\n'
print 'Applying median() function:'
print np.median(a)
print '\n'
print 'Applying median() function along axis 0:'
print np.median(a, axis = 0)
print '\n'
print 'Applying median() function along axis 1:'
print np.median(a, axis = 1)
It will produce the following output −
Our array is:
[[30 65 70]
[80 95 10]
[50 90 60]]
Applying median() function:
65.0
Applying median() function along axis 0:
[ 50. 90. 60.]
Applying median() function along axis 1:
[ 65. 80. 60.]
Arithmetic mean is the sum of elements along an axis divided by the number of elements. The numpy.mean() function returns the arithmetic mean of elements in the array. If the axis is mentioned, it is calculated along it.
import numpy as np
a = np.array([[1,2,3],[3,4,5],[4,5,6]])
print 'Our array is:'
print a
print '\n'
print 'Applying mean() function:'
print np.mean(a)
print '\n'
print 'Applying mean() function along axis 0:'
print np.mean(a, axis = 0)
print '\n'
print 'Applying mean() function along axis 1:'
print np.mean(a, axis = 1)
It will produce the following output −
Our array is:
[[1 2 3]
[3 4 5]
[4 5 6]]
Applying mean() function:
3.66666666667
Applying mean() function along axis 0:
[ 2.66666667 3.66666667 4.66666667]
Applying mean() function along axis 1:
[ 2. 4. 5.]
Weighted average is an average resulting from the multiplication of each component by a factor reflecting its importance. The numpy.average() function computes the weighted average of elements in an array according to their respective weight given in another array. The function can have an axis parameter. If the axis is not specified, the array is flattened.
Considering an array [1,2,3,4] and corresponding weights [4,3,2,1], the weighted average is calculated by adding the product of the corresponding elements and dividing the sum by the sum of weights.
Weighted average = (1*4+2*3+3*2+4*1)/(4+3+2+1)
import numpy as np
a = np.array([1,2,3,4])
print 'Our array is:'
print a
print '\n'
print 'Applying average() function:'
print np.average(a)
print '\n'
# this is same as mean when weight is not specified
wts = np.array([4,3,2,1])
print 'Applying average() function again:'
print np.average(a,weights = wts)
print '\n'
# Returns the sum of weights, if the returned parameter is set to True.
print 'Sum of weights'
print np.average([1,2,3, 4],weights = [4,3,2,1], returned = True)
It will produce the following output −
Our array is:
[1 2 3 4]
Applying average() function:
2.5
Applying average() function again:
2.0
Sum of weights
(2.0, 10.0)
In a multi-dimensional array, the axis for computation can be specified.
import numpy as np
a = np.arange(6).reshape(3,2)
print 'Our array is:'
print a
print '\n'
print 'Modified array:'
wt = np.array([3,5])
print np.average(a, axis = 1, weights = wt)
print '\n'
print 'Modified array:'
print np.average(a, axis = 1, weights = wt, returned = True)
It will produce the following output −
Our array is:
[[0 1]
[2 3]
[4 5]]
Modified array:
[ 0.625 2.625 4.625]
Modified array:
(array([ 0.625, 2.625, 4.625]), array([ 8., 8., 8.]))
Standard deviation is the square root of the average of squared deviations from mean. The formula for standard deviation is as follows −
std = sqrt(mean(abs(x - x.mean())**2))
If the array is [1, 2, 3, 4], then its mean is 2.5. Hence the squared deviations are [2.25, 0.25, 0.25, 2.25]; their mean is 5/4 = 1.25, and its square root, sqrt(1.25), is 1.1180339887498949.
import numpy as np
print np.std([1,2,3,4])
It will produce the following output −
1.1180339887498949
Variance is the average of squared deviations, i.e., mean(abs(x - x.mean())**2). In other words, the standard deviation is the square root of variance.
import numpy as np
print np.var([1,2,3,4])
It will produce the following output −
1.25
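The relationship between the two can be verified directly (Python 3 syntax, unlike the Python 2 examples above):

```python
import numpy as np

a = np.array([1, 2, 3, 4])

var = np.var(a)   # mean of squared deviations from the mean
std = np.std(a)   # square root of the variance

print(var)                             # 1.25
print(np.isclose(std, np.sqrt(var)))   # True
```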
A variety of sorting related functions are available in NumPy. These sorting functions implement different sorting algorithms, each of them characterized by the speed of execution, worst case performance, the workspace required and the stability of algorithms. Following table shows the comparison of the three sorting algorithms.

kind          speed   worst case    work space   stable
'quicksort'   1       O(n^2)        0            no
'mergesort'   2       O(n*log(n))   ~n/2         yes
'heapsort'    3       O(n*log(n))   0            no
The sort() function returns a sorted copy of the input array. It has the following parameters −
numpy.sort(a, axis, kind, order)
Where,
a
Array to be sorted
axis
The axis along which the array is to be sorted. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis
kind
The sorting algorithm to be used. Default is 'quicksort'
order
If the array contains fields, the order of fields to be sorted
import numpy as np
a = np.array([[3,7],[9,1]])
print 'Our array is:'
print a
print '\n'
print 'Applying sort() function:'
print np.sort(a)
print '\n'
print 'Sort along axis 0:'
print np.sort(a, axis = 0)
print '\n'
# Order parameter in sort function
dt = np.dtype([('name', 'S10'),('age', int)])
a = np.array([("raju",21),("anil",25),("ravi", 17), ("amar",27)], dtype = dt)
print 'Our array is:'
print a
print '\n'
print 'Order by name:'
print np.sort(a, order = 'name')
It will produce the following output −
Our array is:
[[3 7]
[9 1]]
Applying sort() function:
[[3 7]
[1 9]]
Sort along axis 0:
[[3 1]
[9 7]]
Our array is:
[('raju', 21) ('anil', 25) ('ravi', 17) ('amar', 27)]
Order by name:
[('amar', 27) ('anil', 25) ('raju', 21) ('ravi', 17)]
The numpy.argsort() function performs an indirect sort on input array, along the given axis and using a specified kind of sort to return the array of indices of data. This indices array is used to construct the sorted array.
import numpy as np

x = np.array([3, 1, 2])

print('Our array is:')
print(x)
print('\n')

print('Applying argsort() to x:')
y = np.argsort(x)
print(y)
print('\n')

print('Reconstruct original array in sorted order:')
print(x[y])
print('\n')

print('Reconstruct the original array using loop:')
for i in y:
    print(x[i], end=' ')
It will produce the following output −
Our array is:
[3 1 2]
Applying argsort() to x:
[1 2 0]
Reconstruct original array in sorted order:
[1 2 3]
Reconstruct the original array using loop:
1 2 3
The numpy.lexsort() function performs an indirect sort using a sequence of keys. The keys can be seen as columns in a spreadsheet. The function returns an array of indices, using which the sorted data can be obtained. Note that the last key happens to be the primary key of the sort.
import numpy as np

nm = ('raju','anil','ravi','amar')
dv = ('f.y.', 's.y.', 's.y.', 'f.y.')
ind = np.lexsort((dv,nm))

print('Applying lexsort() function:')
print(ind)
print('\n')

print('Use this index to get sorted data:')
print([nm[i] + ", " + dv[i] for i in ind])
It will produce the following output −
Applying lexsort() function:
[3 1 0 2]
Use this index to get sorted data:
['amar, f.y.', 'anil, s.y.', 'raju, f.y.', 'ravi, s.y.']
NumPy module has a number of functions for searching inside an array. Functions for finding the maximum, the minimum as well as the elements satisfying a given condition are available.
These two functions return the indices of maximum and minimum elements respectively along the given axis.
import numpy as np

a = np.array([[30,40,70],[80,20,10],[50,90,60]])

print('Our array is:')
print(a)
print('\n')

print('Applying argmax() function:')
print(np.argmax(a))
print('\n')

print('Index of maximum number in flattened array')
print(a.flatten())
print('\n')

print('Array containing indices of maximum along axis 0:')
maxindex = np.argmax(a, axis = 0)
print(maxindex)
print('\n')

print('Array containing indices of maximum along axis 1:')
maxindex = np.argmax(a, axis = 1)
print(maxindex)
print('\n')

print('Applying argmin() function:')
minindex = np.argmin(a)
print(minindex)
print('\n')

print('Flattened array:')
print(a.flatten()[minindex])
print('\n')

print('Flattened array along axis 0:')
minindex = np.argmin(a, axis = 0)
print(minindex)
print('\n')

print('Flattened array along axis 1:')
minindex = np.argmin(a, axis = 1)
print(minindex)
It will produce the following output −
Our array is:
[[30 40 70]
[80 20 10]
[50 90 60]]
Applying argmax() function:
7
Index of maximum number in flattened array
[30 40 70 80 20 10 50 90 60]
Array containing indices of maximum along axis 0:
[1 2 0]
Array containing indices of maximum along axis 1:
[2 0 1]
Applying argmin() function:
5
Flattened array:
10
Flattened array along axis 0:
[0 1 1]
Flattened array along axis 1:
[0 2 0]
The numpy.nonzero() function returns the indices of non-zero elements in the input array.
import numpy as np

a = np.array([[30,40,0],[0,20,10],[50,0,60]])

print('Our array is:')
print(a)
print('\n')

print('Applying nonzero() function:')
print(np.nonzero(a))
It will produce the following output −
Our array is:
[[30 40 0]
[ 0 20 10]
[50 0 60]]
Applying nonzero() function:
(array([0, 0, 1, 1, 2, 2]), array([0, 1, 1, 2, 0, 2]))
The where() function returns the indices of elements in an input array where the given condition is satisfied.
import numpy as np

x = np.arange(9.).reshape(3, 3)

print('Our array is:')
print(x)

print('Indices of elements > 3')
y = np.where(x > 3)
print(y)

print('Use these indices to get elements satisfying the condition')
print(x[y])
It will produce the following output −
Our array is:
[[ 0. 1. 2.]
[ 3. 4. 5.]
[ 6. 7. 8.]]
Indices of elements > 3
(array([1, 1, 2, 2, 2]), array([1, 2, 0, 1, 2]))
Use these indices to get elements satisfying the condition
[ 4. 5. 6. 7. 8.]
The extract() function returns the elements satisfying a given condition.
import numpy as np

x = np.arange(9.).reshape(3, 3)

print('Our array is:')
print(x)

# define a condition
condition = np.mod(x,2) == 0

print('Element-wise value of condition')
print(condition)

print('Extract elements using condition')
print(np.extract(condition, x))
It will produce the following output −
Our array is:
[[ 0. 1. 2.]
[ 3. 4. 5.]
[ 6. 7. 8.]]
Element-wise value of condition
[[ True False True]
[False True False]
[ True False True]]
Extract elements using condition
[ 0. 2. 4. 6. 8.]
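As a side note beyond the original text, boolean indexing gives the same result as extract() and is the more common idiom:

```python
import numpy as np

x = np.arange(9.).reshape(3, 3)
condition = np.mod(x, 2) == 0

# Boolean indexing selects the same elements as np.extract()
print(np.extract(condition, x))   # [0. 2. 4. 6. 8.]
print(x[condition])               # [0. 2. 4. 6. 8.]
```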
We have seen that the data stored in the memory of a computer depends on which architecture the CPU uses. It may be little-endian (the least significant byte is stored at the smallest address) or big-endian (the most significant byte is stored at the smallest address).
The numpy.ndarray.byteswap() function toggles between the two representations: big-endian and little-endian.
import numpy as np

a = np.array([1, 256, 8755], dtype = np.int16)

print('Our array is:')
print(a)

print('Representation of data in memory in hexadecimal form:')
print(list(map(hex, a)))

# byteswap() function swaps in place when inplace = True is passed
print('Applying byteswap() function:')
print(a.byteswap(inplace = True))

print('In hexadecimal form:')
print(list(map(hex, a)))
# We can see the bytes being swapped
It will produce the following output −
Our array is:
[1 256 8755]
Representation of data in memory in hexadecimal form:
['0x1', '0x100', '0x2233']
Applying byteswap() function:
[256 1 13090]
In hexadecimal form:
['0x100', '0x1', '0x3322']
While executing functions, some of them return a copy of the input array, while others return a view. When the contents are physically stored in another location, it is called a copy. If, on the other hand, a different view of the same memory content is provided, we call it a view.
Simple assignment does not make a copy of the array object. Instead, it uses the same id() as the original array to access it. The id() function returns the unique identifier of a Python object, similar to a pointer in C.
Furthermore, any changes in either gets reflected in the other. For example, the changing shape of one will change the shape of the other too.
import numpy as np

a = np.arange(6)

print('Our array is:')
print(a)

print('Applying id() function:')
print(id(a))

print('a is assigned to b:')
b = a
print(b)

print('b has same id():')
print(id(b))

print('Change shape of b:')
b.shape = 3,2
print(b)

print('Shape of a also gets changed:')
print(a)
It will produce the following output −
Our array is:
[0 1 2 3 4 5]
Applying id() function:
139747815479536
a is assigned to b:
[0 1 2 3 4 5]
b has same id():
139747815479536
Change shape of b:
[[0 1]
[2 3]
[4 5]]
Shape of a also gets changed:
[[0 1]
[2 3]
[4 5]]
NumPy has an ndarray.view() method which returns a new array object that looks at the same data as the original array. Unlike the earlier case, a change in the dimensions of the new array doesn't change the dimensions of the original.
import numpy as np

# To begin with, a is 3X2 array
a = np.arange(6).reshape(3,2)

print('Array a:')
print(a)

print('Create view of a:')
b = a.view()
print(b)

print('id() for both the arrays are different:')
print('id() of a:')
print(id(a))
print('id() of b:')
print(id(b))

# Change the shape of b. It does not change the shape of a
b.shape = 2,3

print('Shape of b:')
print(b)

print('Shape of a:')
print(a)
It will produce the following output −
Array a:
[[0 1]
[2 3]
[4 5]]
Create view of a:
[[0 1]
[2 3]
[4 5]]
id() for both the arrays are different:
id() of a:
140424307227264
id() of b:
140424151696288
Shape of b:
[[0 1 2]
[3 4 5]]
Shape of a:
[[0 1]
[2 3]
[4 5]]
Slicing an array creates a view.
import numpy as np

a = np.array([[10,10], [2,3], [4,5]])

print('Our array is:')
print(a)

print('Create a slice:')
s = a[:, :2]
print(s)
It will produce the following output −
Our array is:
[[10 10]
[ 2 3]
[ 4 5]]
Create a slice:
[[10 10]
[ 2 3]
[ 4 5]]
The ndarray.copy() function creates a deep copy. It is a complete copy of the array and its data, and doesn’t share with the original array.
import numpy as np

a = np.array([[10,10], [2,3], [4,5]])

print('Array a is:')
print(a)

print('Create a deep copy of a:')
b = a.copy()
print('Array b is:')
print(b)

# b does not share any memory of a
print('Can we write b is a')
print(b is a)

print('Change the contents of b:')
b[0,0] = 100
print('Modified array b:')
print(b)

print('a remains unchanged:')
print(a)
It will produce the following output −
Array a is:
[[10 10]
[ 2 3]
[ 4 5]]
Create a deep copy of a:
Array b is:
[[10 10]
[ 2 3]
[ 4 5]]
Can we write b is a
False
Change the contents of b:
Modified array b:
[[100 10]
[ 2 3]
[ 4 5]]
a remains unchanged:
[[10 10]
[ 2 3]
[ 4 5]]
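One way to verify the difference between a view and a copy is np.shares_memory(), a helper not mentioned in the text above; a minimal sketch:

```python
import numpy as np

a = np.array([[10, 10], [2, 3], [4, 5]])
view = a.view()   # shares the underlying data buffer
copy = a.copy()   # owns a separate data buffer

print(np.shares_memory(a, view))   # True
print(np.shares_memory(a, copy))   # False
```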
The NumPy package contains a matrix library, numpy.matlib. This module has functions that return matrices instead of ndarray objects.
The matlib.empty() function returns a new matrix without initializing the entries. The function takes the following parameters.
numpy.matlib.empty(shape, dtype, order)
Where,
shape
int or tuple of int defining the shape of the new matrix
dtype
Optional. Data type of the output
order
C or F
import numpy.matlib
import numpy as np

print(np.matlib.empty((2,2)))
# filled with random data
It will produce output similar to the following (the entries are uninitialized) −
[[ 2.12199579e-314, 4.24399158e-314]
[ 4.24399158e-314, 2.12199579e-314]]
This function returns the matrix filled with zeros.
import numpy.matlib
import numpy as np

print(np.matlib.zeros((2,2)))
It will produce the following output −
[[ 0. 0.]
[ 0. 0.]]
This function returns the matrix filled with 1s.
import numpy.matlib
import numpy as np

print(np.matlib.ones((2,2)))
It will produce the following output −
[[ 1. 1.]
[ 1. 1.]]
This function returns a matrix with 1 along the diagonal elements and zeros elsewhere. The function takes the following parameters.
numpy.matlib.eye(n, M,k, dtype)
Where,
n
The number of rows in the resulting matrix
M
The number of columns, defaults to n
k
Index of diagonal
dtype
Data type of the output
import numpy.matlib
import numpy as np

print(np.matlib.eye(n = 3, M = 4, k = 0, dtype = float))
It will produce the following output −
[[ 1. 0. 0. 0.]
[ 0. 1. 0. 0.]
[ 0. 0. 1. 0.]]
The numpy.matlib.identity() function returns the Identity matrix of the given size. An identity matrix is a square matrix with all diagonal elements as 1.
import numpy.matlib
import numpy as np

print(np.matlib.identity(5, dtype = float))
It will produce the following output −
[[ 1. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0.]
[ 0. 0. 1. 0. 0.]
[ 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 1.]]
The numpy.matlib.rand() function returns a matrix of the given size filled with random values.
import numpy.matlib
import numpy as np

print(np.matlib.rand(3,3))
It will produce output similar to the following (the values are random) −
[[ 0.82674464 0.57206837 0.15497519]
[ 0.33857374 0.35742401 0.90895076]
[ 0.03968467 0.13962089 0.39665201]]
Note that a matrix is always two-dimensional, whereas ndarray is an n-dimensional array. Both the objects are inter-convertible.
import numpy.matlib
import numpy as np

i = np.matrix('1,2;3,4')
print(i)
It will produce the following output −
[[1 2]
[3 4]]
import numpy.matlib
import numpy as np

i = np.matrix('1,2;3,4')
j = np.asarray(i)
print(j)
It will produce the following output −
[[1 2]
[3 4]]
import numpy.matlib
import numpy as np

i = np.matrix('1,2;3,4')
j = np.asarray(i)
k = np.asmatrix(j)
print(k)
It will produce the following output −
[[1 2]
[3 4]]
The NumPy package contains the numpy.linalg module that provides all the functionality required for linear algebra. Some of the important functions in this module are described below.
dot
Dot product of the two arrays
vdot
Dot product of the two vectors
inner
Inner product of the two arrays
matmul
Matrix product of the two arrays
determinant
Computes the determinant of the array
solve
Solves the linear matrix equation
inv
Finds the multiplicative inverse of the matrix
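As an illustration (an addition, not from the original text), here is a minimal sketch exercising a few of these routines:

```python
import numpy as np

a = np.array([[1., 2.], [3., 4.]])
b = np.array([6., 8.])

print(np.dot(a, a))          # matrix product of a with itself
print(np.linalg.det(a))      # determinant: 1*4 - 2*3 = -2
x = np.linalg.solve(a, b)    # solve the linear system a @ x = b
print(x)
print(np.linalg.inv(a))      # multiplicative inverse of a
```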
Matplotlib is a plotting library for Python. It is used along with NumPy to provide an environment that is an effective open source alternative for MatLab. It can also be used with graphics toolkits like PyQt and wxPython.
Matplotlib module was first written by John D. Hunter. Since 2012, Michael Droettboom is the principal developer. Currently, Matplotlib ver. 1.5.1 is the stable version available. The package is available in binary distribution as well as in the source code form on www.matplotlib.org.
Conventionally, the package is imported into the Python script by adding the following statement −
from matplotlib import pyplot as plt
Here, pyplot is the plotting submodule of matplotlib, used to plot 2D data. The following script plots the equation y = 2x + 5.
import numpy as np
from matplotlib import pyplot as plt
x = np.arange(1,11)
y = 2 * x + 5
plt.title("Matplotlib demo")
plt.xlabel("x axis caption")
plt.ylabel("y axis caption")
plt.plot(x,y)
plt.show()
An ndarray object x is created from np.arange() function as the values on the x axis. The corresponding values on the y axis are stored in another ndarray object y. These values are plotted using plot() function of pyplot submodule of matplotlib package.
The graphical representation is displayed by show() function.
The above code should produce the following output −
Instead of the linear graph, the values can be displayed discretely by adding a format string to the plot() function. Following formatting characters can be used.
'-'
Solid line style
'--'
Dashed line style
'-.'
Dash-dot line style
':'
Dotted line style
'.'
Point marker
','
Pixel marker
'o'
Circle marker
'v'
Triangle_down marker
'^'
Triangle_up marker
'<'
Triangle_left marker
'>'
Triangle_right marker
'1'
Tri_down marker
'2'
Tri_up marker
'3'
Tri_left marker
'4'
Tri_right marker
's'
Square marker
'p'
Pentagon marker
'*'
Star marker
'h'
Hexagon1 marker
'H'
Hexagon2 marker
'+'
Plus marker
'x'
X marker
'D'
Diamond marker
'd'
Thin_diamond marker
'|'
Vline marker
'_'
Hline marker
The following color abbreviations are also defined.
'b'
Blue
'g'
Green
'r'
Red
'c'
Cyan
'm'
Magenta
'y'
Yellow
'k'
Black
'w'
White
To display the circles representing points, instead of the line in the above example, use “ob” as the format string in plot() function.
import numpy as np
from matplotlib import pyplot as plt
x = np.arange(1,11)
y = 2 * x + 5
plt.title("Matplotlib demo")
plt.xlabel("x axis caption")
plt.ylabel("y axis caption")
plt.plot(x,y,"ob")
plt.show()
The above code should produce the following output −
The following script produces the sine wave plot using matplotlib.
import numpy as np
import matplotlib.pyplot as plt
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
plt.title("sine wave form")
# Plot the points using matplotlib
plt.plot(x, y)
plt.show()
The subplot() function allows you to plot different things in the same figure. In the following script, sine and cosine values are plotted.
import numpy as np
import matplotlib.pyplot as plt
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
The above code should produce the following output −
The pyplot submodule provides bar() function to generate bar graphs. The following example produces the bar graph of two sets of x and y arrays.
from matplotlib import pyplot as plt
x = [5,8,10]
y = [12,16,6]
x2 = [6,9,11]
y2 = [6,15,7]
plt.bar(x, y, align = 'center')
plt.bar(x2, y2, color = 'g', align = 'center')
plt.title('Bar graph')
plt.ylabel('Y axis')
plt.xlabel('X axis')
plt.show()
This code should produce the following output −
NumPy has a numpy.histogram() function that gives a numerical representation of the frequency distribution of data. In a histogram, rectangles of equal horizontal size correspond to a class interval called a bin, and their variable height corresponds to the frequency.
The numpy.histogram() function takes the input array and bins as two parameters. The successive elements in bin array act as the boundary of each bin.
import numpy as np

a = np.array([22,87,5,43,56,73,55,54,11,20,51,5,79,31,27])
np.histogram(a, bins = [0,20,40,60,80,100])
hist, bins = np.histogram(a, bins = [0,20,40,60,80,100])
print(hist)
print(bins)
It will produce the following output −
[3 4 5 2 1]
[0 20 40 60 80 100]
Matplotlib can convert this numerical representation of a histogram into a graph. The hist() function of the pyplot submodule takes the array containing the data and the bin array as parameters and converts it into a histogram.
from matplotlib import pyplot as plt
import numpy as np
a = np.array([22,87,5,43,56,73,55,54,11,20,51,5,79,31,27])
plt.hist(a, bins = [0,20,40,60,80,100])
plt.title("histogram")
plt.show()
It should produce the following output −
The ndarray objects can be saved to and loaded from the disk files. The IO functions available are −
load() and save() functions handle NumPy binary files (with .npy extension)
loadtxt() and savetxt() functions handle normal text files
NumPy introduces a simple file format for ndarray objects. This .npy file stores data, shape, dtype and other information required to reconstruct the ndarray in a disk file such that the array is correctly retrieved even if the file is on another machine with different architecture.
The numpy.save() file stores the input array in a disk file with npy extension.
import numpy as np
a = np.array([1,2,3,4,5])
np.save('outfile',a)
To reconstruct array from outfile.npy, use load() function.
import numpy as np

b = np.load('outfile.npy')
print(b)
It will produce the following output −
[1 2 3 4 5]
The save() and load() functions accept an additional Boolean parameter allow_pickle. A pickle in Python is used to serialize and de-serialize objects before saving them to, or reading them from, a disk file.
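A short sketch of allow_pickle (an addition beyond the text; 'objfile.npy' is a hypothetical file name) — object arrays need pickling to be saved and loaded:

```python
import numpy as np

# An array of Python objects requires pickling
a = np.array([{'x': 1}, {'y': 2}], dtype=object)
np.save('objfile.npy', a, allow_pickle=True)

# Loading object arrays likewise requires allow_pickle=True
b = np.load('objfile.npy', allow_pickle=True)
print(b)
```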
The storage and retrieval of array data in simple text file format is done with savetxt() and loadtxt() functions.
import numpy as np

a = np.array([1,2,3,4,5])
np.savetxt('out.txt',a)
b = np.loadtxt('out.txt')
print(b)
It will produce the following output −
[ 1. 2. 3. 4. 5.]
The savetxt() and loadtxt() functions accept additional optional parameters such as header, footer, and delimiter.
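A brief sketch of these optional parameters (an addition, not from the original text; 'out.csv' is a hypothetical file name):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# Write integers as comma-separated values; header lines are prefixed with '# ' by default
np.savetxt('out.csv', a, fmt='%d', delimiter=',', header='c0,c1,c2')

# loadtxt skips '#' comment lines by default, so the header is ignored on read
b = np.loadtxt('out.csv', delimiter=',')
print(b)
```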
|
[
{
"code": null,
"e": 2419,
"s": 2243,
"text": "NumPy is a Python package. It stands for 'Numerical Python'. It is a library consisting of multidimensional array objects and a collection of routines for processing of array."
},
{
"code": null,
"e": 2734,
"s": 2419,
"text": "Numeric, the ancestor of NumPy, was developed by Jim Hugunin. Another package Numarray was also developed, having some additional functionalities. In 2005, Travis Oliphant created NumPy package by incorporating the features of Numarray into Numeric package. There are many contributors to this open source project."
},
{
"code": null,
"e": 2798,
"s": 2734,
"text": "Using NumPy, a developer can perform the following operations −"
},
{
"code": null,
"e": 2845,
"s": 2798,
"text": "Mathematical and logical operations on arrays."
},
{
"code": null,
"e": 2892,
"s": 2845,
"text": "Mathematical and logical operations on arrays."
},
{
"code": null,
"e": 2948,
"s": 2892,
"text": "Fourier transforms and routines for shape manipulation."
},
{
"code": null,
"e": 3004,
"s": 2948,
"text": "Fourier transforms and routines for shape manipulation."
},
{
"code": null,
"e": 3120,
"s": 3004,
"text": "Operations related to linear algebra. NumPy has in-built functions for linear algebra and random number generation."
},
{
"code": null,
"e": 3236,
"s": 3120,
"text": "Operations related to linear algebra. NumPy has in-built functions for linear algebra and random number generation."
},
{
"code": null,
"e": 3550,
"s": 3236,
"text": "NumPy is often used along with packages like SciPy (Scientific Python) and Mat−plotlib (plotting library). This combination is widely used as a replacement for MatLab, a popular platform for technical computing. However, Python alternative to MatLab is now seen as a more modern and complete programming language."
},
{
"code": null,
"e": 3607,
"s": 3550,
"text": "It is open source, which is an added advantage of NumPy."
},
{
"code": null,
"e": 3767,
"s": 3607,
"text": "Standard Python distribution doesn't come bundled with NumPy module. A lightweight alternative is to install NumPy using popular Python package installer, pip."
},
{
"code": null,
"e": 3786,
"s": 3767,
"text": "pip install numpy\n"
},
{
"code": null,
"e": 4028,
"s": 3786,
"text": "The best way to enable NumPy is to use an installable binary package specific to your operating system. These binaries contain full SciPy stack (inclusive of NumPy, SciPy, matplotlib, IPython, SymPy and nose packages along with core Python)."
},
{
"code": null,
"e": 4156,
"s": 4028,
"text": "Anaconda (from https://www.continuum.io) is a free Python distribution for SciPy stack. It is also available for Linux and Mac."
},
{
"code": null,
"e": 4314,
"s": 4156,
"text": "Canopy (https://www.enthought.com/products/canopy/) is available as free as well as commercial distribution with full SciPy stack for Windows, Linux and Mac."
},
{
"code": null,
"e": 4462,
"s": 4314,
"text": "Python (x,y): It is a free Python distribution with SciPy stack and Spyder IDE for Windows OS. (Downloadable from https://www.python-xy.github.io/)"
},
{
"code": null,
"e": 4570,
"s": 4462,
"text": "Package managers of respective Linux distributions are used to install one or more packages in SciPy stack."
},
{
"code": null,
"e": 4699,
"s": 4570,
"text": "sudo apt-get install python-numpy \npython-scipy python-matplotlibipythonipythonnotebook python-pandas \npython-sympy python-nose\n"
},
{
"code": null,
"e": 4798,
"s": 4699,
"text": "sudo yum install numpyscipy python-matplotlibipython \npython-pandas sympy python-nose atlas-devel\n"
},
{
"code": null,
"e": 4911,
"s": 4798,
"text": "Core Python (2.6.x, 2.7.x and 3.2.x onwards) must be installed with distutils and zlib module should be enabled."
},
{
"code": null,
"e": 4965,
"s": 4911,
"text": "GNU gcc (4.2 and above) C compiler must be available."
},
{
"code": null,
"e": 5010,
"s": 4965,
"text": "To install NumPy, run the following command."
},
{
"code": null,
"e": 5035,
"s": 5010,
"text": "Python setup.py install\n"
},
{
"code": null,
"e": 5124,
"s": 5035,
"text": "To test whether NumPy module is properly installed, try to import it from Python prompt."
},
{
"code": null,
"e": 5138,
"s": 5124,
"text": "import numpy\n"
},
{
"code": null,
"e": 5209,
"s": 5138,
"text": "If it is not installed, the following error message will be displayed."
},
{
"code": null,
"e": 5347,
"s": 5209,
"text": "Traceback (most recent call last): \n File \"<pyshell#0>\", line 1, in <module> \n import numpy \nImportError: No module named 'numpy'\n"
},
{
"code": null,
"e": 5417,
"s": 5347,
"text": "Alternatively, NumPy package is imported using the following syntax −"
},
{
"code": null,
"e": 5437,
"s": 5417,
"text": "import numpy as np\n"
},
{
"code": null,
"e": 5648,
"s": 5437,
"text": "The most important object defined in NumPy is an N-dimensional array type called ndarray. It describes the collection of items of the same type. Items in the collection can be accessed using a zero-based index."
},
{
"code": null,
"e": 5790,
"s": 5648,
"text": "Every item in an ndarray takes the same size of block in the memory. Each element in ndarray is an object of data-type object (called dtype)."
},
{
"code": null,
"e": 6015,
"s": 5790,
"text": "Any item extracted from ndarray object (by slicing) is represented by a Python object of one of array scalar types. The following diagram shows a relationship between ndarray, data type object (dtype) and array scalar type −"
},
{
"code": null,
"e": 6208,
"s": 6015,
"text": "An instance of ndarray class can be constructed by different array creation routines described later in the tutorial. The basic ndarray is created using an array function in NumPy as follows −"
},
{
"code": null,
"e": 6222,
"s": 6208,
"text": "numpy.array \n"
},
{
"code": null,
"e": 6328,
"s": 6222,
"text": "It creates an ndarray from any object exposing array interface, or from any method that returns an array."
},
{
"code": null,
"e": 6416,
"s": 6328,
"text": "numpy.array(object, dtype = None, copy = True, order = None, subok = False, ndmin = 0)\n"
},
{
"code": null,
"e": 6471,
"s": 6416,
"text": "The above constructor takes the following parameters −"
},
{
"code": null,
"e": 6478,
"s": 6471,
"text": "object"
},
{
"code": null,
"e": 6569,
"s": 6478,
"text": "Any object exposing the array interface method returns an array, or any (nested) sequence."
},
{
"code": null,
"e": 6575,
"s": 6569,
"text": "dtype"
},
{
"code": null,
"e": 6612,
"s": 6575,
"text": "Desired data type of array, optional"
},
{
"code": null,
"e": 6617,
"s": 6612,
"text": "copy"
},
{
"code": null,
"e": 6667,
"s": 6617,
"text": "Optional. By default (true), the object is copied"
},
{
"code": null,
"e": 6673,
"s": 6667,
"text": "order"
},
{
"code": null,
"e": 6728,
"s": 6673,
"text": "C (row major) or F (column major) or A (any) (default)"
},
{
"code": null,
"e": 6734,
"s": 6728,
"text": "subok"
},
{
"code": null,
"e": 6830,
"s": 6734,
"text": "By default, returned array forced to be a base class array. If true, sub-classes passed through"
},
{
"code": null,
"e": 6836,
"s": 6830,
"text": "ndmin"
},
{
"code": null,
"e": 6884,
"s": 6836,
"text": "Specifies minimum dimensions of resultant array"
},
{
"code": null,
"e": 6944,
"s": 6884,
"text": "Take a look at the following examples to understand better."
},
{
"code": null,
"e": 6995,
"s": 6944,
"text": "import numpy as np \na = np.array([1,2,3]) \nprint a"
},
{
"code": null,
"e": 7022,
"s": 6995,
"text": "The output is as follows −"
},
{
"code": null,
"e": 7033,
"s": 7022,
"text": "[1, 2, 3]\n"
},
{
"code": null,
"e": 7121,
"s": 7033,
"text": "# more than one dimensions \nimport numpy as np \na = np.array([[1, 2], [3, 4]]) \nprint a"
},
{
"code": null,
"e": 7148,
"s": 7121,
"text": "The output is as follows −"
},
{
"code": null,
"e": 7167,
"s": 7148,
"text": "[[1, 2] \n [3, 4]]\n"
},
{
"code": null,
"e": 7257,
"s": 7167,
"text": "# minimum dimensions \nimport numpy as np \na = np.array([1, 2, 3,4,5], ndmin = 2) \nprint a"
},
{
"code": null,
"e": 7284,
"s": 7257,
"text": "The output is as follows −"
},
{
"code": null,
"e": 7303,
"s": 7284,
"text": "[[1, 2, 3, 4, 5]]\n"
},
{
"code": null,
"e": 7392,
"s": 7303,
"text": "# dtype parameter \nimport numpy as np \na = np.array([1, 2, 3], dtype = complex) \nprint a"
},
{
"code": null,
"e": 7419,
"s": 7392,
"text": "The output is as follows −"
},
{
"code": null,
"e": 7448,
"s": 7419,
"text": "[ 1.+0.j, 2.+0.j, 3.+0.j]\n"
},
{
"code": null,
"e": 7740,
"s": 7448,
"text": "The ndarray object consists of contiguous one-dimensional segment of computer memory, combined with an indexing scheme that maps each item to a location in the memory block. The memory block holds the elements in a row-major order (C style) or a column-major order (FORTRAN or MatLab style)."
},
{
"code": null,
"e": 7887,
"s": 7740,
"text": "NumPy supports a much greater variety of numerical types than Python does. The following table shows different scalar data types defined in NumPy."
},
{
"code": null,
"e": 7893,
"s": 7887,
"text": "bool_"
},
{
"code": null,
"e": 7934,
"s": 7893,
"text": "Boolean (True or False) stored as a byte"
},
{
"code": null,
"e": 7939,
"s": 7934,
"text": "int_"
},
{
"code": null,
"e": 8009,
"s": 7939,
"text": "Default integer type (same as C long; normally either int64 or int32)"
},
{
"code": null,
"e": 8014,
"s": 8009,
"text": "intc"
},
{
"code": null,
"e": 8059,
"s": 8014,
"text": "Identical to C int (normally int32 or int64)"
},
{
"code": null,
"e": 8064,
"s": 8059,
"text": "intp"
},
{
"code": null,
"e": 8142,
"s": 8064,
"text": "Integer used for indexing (same as C ssize_t; normally either int32 or int64)"
},
{
"code": null,
"e": 8147,
"s": 8142,
"text": "int8"
},
{
"code": null,
"e": 8166,
"s": 8147,
"text": "Byte (-128 to 127)"
},
{
"code": null,
"e": 8172,
"s": 8166,
"text": "int16"
},
{
"code": null,
"e": 8198,
"s": 8172,
"text": "Integer (-32768 to 32767)"
},
{
"code": null,
"e": 8204,
"s": 8198,
"text": "int32"
},
{
"code": null,
"e": 8240,
"s": 8204,
"text": "Integer (-2147483648 to 2147483647)"
},
{
"code": null,
"e": 8246,
"s": 8240,
"text": "int64"
},
{
"code": null,
"e": 8300,
"s": 8246,
"text": "Integer (-9223372036854775808 to 9223372036854775807)"
},
{
"code": null,
"e": 8306,
"s": 8300,
"text": "uint8"
},
{
"code": null,
"e": 8334,
"s": 8306,
"text": "Unsigned integer (0 to 255)"
},
{
"code": null,
"e": 8341,
"s": 8334,
"text": "uint16"
},
{
"code": null,
"e": 8371,
"s": 8341,
"text": "Unsigned integer (0 to 65535)"
},
{
"code": null,
"e": 8378,
"s": 8371,
"text": "uint32"
},
{
"code": null,
"e": 8413,
"s": 8378,
"text": "Unsigned integer (0 to 4294967295)"
},
{
"code": null,
"e": 8420,
"s": 8413,
"text": "uint64"
},
{
"code": null,
"e": 8465,
"s": 8420,
"text": "Unsigned integer (0 to 18446744073709551615)"
},
{
"code": null,
"e": 8472,
"s": 8465,
"text": "float_"
},
{
"code": null,
"e": 8494,
"s": 8472,
"text": "Shorthand for float64"
},
{
"code": null,
"e": 8502,
"s": 8494,
"text": "float16"
},
{
"code": null,
"e": 8568,
"s": 8502,
"text": "Half precision float: sign bit, 5 bits exponent, 10 bits mantissa"
},
{
"code": null,
"e": 8576,
"s": 8568,
"text": "float32"
},
{
"code": null,
"e": 8644,
"s": 8576,
"text": "Single precision float: sign bit, 8 bits exponent, 23 bits mantissa"
},
{
"code": null,
"e": 8652,
"s": 8644,
"text": "float64"
},
{
"code": null,
"e": 8721,
"s": 8652,
"text": "Double precision float: sign bit, 11 bits exponent, 52 bits mantissa"
},
{
"code": null,
"e": 8730,
"s": 8721,
"text": "complex_"
},
{
"code": null,
"e": 8755,
"s": 8730,
"text": "Shorthand for complex128"
},
{
"code": null,
"e": 8765,
"s": 8755,
"text": "complex64"
},
{
"code": null,
"e": 8846,
"s": 8765,
"text": "Complex number, represented by two 32-bit floats (real and imaginary components)"
},
{
"code": null,
"e": 8857,
"s": 8846,
"text": "complex128"
},
{
"code": null,
"e": 8938,
"s": 8857,
"text": "Complex number, represented by two 64-bit floats (real and imaginary components)"
},
{
"code": null,
"e": 9095,
"s": 8938,
"text": "NumPy numerical types are instances of dtype (data-type) objects, each having unique characteristics. The dtypes are available as np.bool_, np.float32, etc."
},
{
"code": null,
"e": 9228,
"s": 9095,
"text": "A data type object describes interpretation of fixed block of memory corresponding to an array, depending on the following aspects −"
},
{
"code": null,
"e": 9275,
"s": 9228,
"text": "Type of data (integer, float or Python object)"
},
{
"code": null,
"e": 9322,
"s": 9275,
"text": "Type of data (integer, float or Python object)"
},
{
"code": null,
"e": 9335,
"s": 9322,
"text": "Size of data"
},
{
"code": null,
"e": 9348,
"s": 9335,
"text": "Size of data"
},
{
"code": null,
"e": 9389,
"s": 9348,
"text": "Byte order (little-endian or big-endian)"
},
{
"code": null,
"e": 9430,
"s": 9389,
"text": "Byte order (little-endian or big-endian)"
},
{
"code": null,
"e": 9553,
"s": 9430,
"text": "In case of structured type, the names of fields, data type of each field and part of the memory block taken by each field."
},
{
"code": null,
"e": 9676,
"s": 9553,
"text": "In case of structured type, the names of fields, data type of each field and part of the memory block taken by each field."
},
{
"code": null,
"e": 9728,
"s": 9676,
"text": "If data type is a subarray, its shape and data type"
},
{
"code": null,
"e": 9780,
"s": 9728,
"text": "If data type is a subarray, its shape and data type"
},
{
"code": null,
"e": 10029,
"s": 9780,
"text": "The byte order is decided by prefixing '<' or '>' to data type. '<' means that encoding is little-endian (least significant is stored in smallest address). '>' means that encoding is big-endian (most significant byte is stored in smallest address)."
},
{
"code": null,
"e": 10088,
"s": 10029,
"text": "A dtype object is constructed using the following syntax −"
},
{
"code": null,
"e": 10122,
"s": 10088,
"text": "numpy.dtype(object, align, copy)\n"
},
{
"code": null,
"e": 10143,
"s": 10122,
"text": "The parameters are −"
},
{
"code": null,
"e": 10188,
"s": 10143,
"text": "Object − To be converted to data type object"
},
{
"code": null,
"e": 10307,
"s": 10233,
"text": "Align − If true, adds padding to the field to make it similar to C-struct"
},
{
"code": null,
"e": 10484,
"s": 10381,
"text": "Copy − Makes a new copy of dtype object. If false, the result is reference to builtin data type object"
},
{
"code": null,
"e": 10668,
"s": 10587,
"text": "# using array-scalar type \nimport numpy as np \ndt = np.dtype(np.int32) \nprint dt"
},
{
"code": null,
"e": 10695,
"s": 10668,
"text": "The output is as follows −"
},
{
"code": null,
"e": 10702,
"s": 10695,
"text": "int32\n"
},
{
"code": null,
"e": 10840,
"s": 10702,
"text": "#int8, int16, int32, int64 can be replaced by equivalent string 'i1', 'i2','i4', etc. \nimport numpy as np \n\ndt = np.dtype('i4')\nprint dt "
},
{
"code": null,
"e": 10867,
"s": 10840,
"text": "The output is as follows −"
},
{
"code": null,
"e": 10874,
"s": 10867,
"text": "int32\n"
},
{
"code": null,
"e": 10950,
"s": 10874,
"text": "# using endian notation \nimport numpy as np \ndt = np.dtype('>i4') \nprint dt"
},
{
"code": null,
"e": 10977,
"s": 10950,
"text": "The output is as follows −"
},
{
"code": null,
"e": 10982,
"s": 10977,
"text": ">i4\n"
},
{
"code": null,
"e": 11122,
"s": 10982,
"text": "The following examples show the use of structured data type. Here, the field name and the corresponding scalar data type is to be declared."
},
{
"code": null,
"e": 11223,
"s": 11122,
"text": "# first create structured data type \nimport numpy as np \ndt = np.dtype([('age',np.int8)]) \nprint dt "
},
{
"code": null,
"e": 11250,
"s": 11223,
"text": "The output is as follows −"
},
{
"code": null,
"e": 11268,
"s": 11250,
"text": "[('age', 'i1')] \n"
},
{
"code": null,
"e": 11412,
"s": 11268,
"text": "# now apply it to ndarray object \nimport numpy as np \n\ndt = np.dtype([('age',np.int8)]) \na = np.array([(10,),(20,),(30,)], dtype = dt) \nprint a"
},
{
"code": null,
"e": 11439,
"s": 11412,
"text": "The output is as follows −"
},
{
"code": null,
"e": 11460,
"s": 11439,
"text": "[(10,) (20,) (30,)]\n"
},
{
"code": null,
"e": 11634,
"s": 11460,
"text": "# file name can be used to access content of age column \nimport numpy as np \n\ndt = np.dtype([('age',np.int8)]) \na = np.array([(10,),(20,),(30,)], dtype = dt) \nprint a['age']"
},
{
"code": null,
"e": 11661,
"s": 11634,
"text": "The output is as follows −"
},
{
"code": null,
"e": 11673,
"s": 11661,
"text": "[10 20 30]\n"
},
{
"code": null,
"e": 11860,
"s": 11673,
"text": "The following examples define a structured data type called student with a string field 'name', an integer field 'age' and a float field 'marks'. This dtype is applied to ndarray object."
},
{
"code": null,
"e": 11964,
"s": 11860,
"text": "import numpy as np \nstudent = np.dtype([('name','S20'), ('age', 'i1'), ('marks', 'f4')]) \nprint student"
},
{
"code": null,
"e": 11991,
"s": 11964,
"text": "The output is as follows −"
},
{
"code": null,
"e": 12044,
"s": 11991,
"text": "[('name', 'S20'), ('age', 'i1'), ('marks', '<f4')])\n"
},
{
"code": null,
"e": 12209,
"s": 12044,
"text": "import numpy as np \n\nstudent = np.dtype([('name','S20'), ('age', 'i1'), ('marks', 'f4')]) \na = np.array([('abc', 21, 50),('xyz', 18, 75)], dtype = student) \nprint a"
},
{
"code": null,
"e": 12236,
"s": 12209,
"text": "The output is as follows −"
},
{
"code": null,
"e": 12276,
"s": 12236,
"text": "[('abc', 21, 50.0), ('xyz', 18, 75.0)]\n"
},
{
"code": null,
"e": 12350,
"s": 12276,
"text": "Each built-in data type has a character code that uniquely identifies it."
},
{
"code": null,
"e": 12364,
"s": 12350,
"text": "'b' − boolean"
},
{
"code": null,
"e": 12401,
"s": 12378,
"text": "'i' − (signed) integer"
},
{
"code": null,
"e": 12447,
"s": 12424,
"text": "'u' − unsigned integer"
},
{
"code": null,
"e": 12491,
"s": 12470,
"text": "'f' − floating-point"
},
{
"code": null,
"e": 12541,
"s": 12512,
"text": "'c' − complex-floating point"
},
{
"code": null,
"e": 12586,
"s": 12570,
"text": "'m' − timedelta"
},
{
"code": null,
"e": 12617,
"s": 12602,
"text": "'M' − datetime"
},
{
"code": null,
"e": 12655,
"s": 12632,
"text": "'O' − (Python) objects"
},
{
"code": null,
"e": 12703,
"s": 12678,
"text": "'S', 'a' − (byte-)string"
},
{
"code": null,
"e": 12742,
"s": 12728,
"text": "'U' − Unicode"
},
{
"code": null,
"e": 12778,
"s": 12756,
"text": "'V' − raw data (void)"
},
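{
"code": null,
"e": 12800,
"s": 12800,
"text": "As a brief illustrative sketch (an addition, not part of the original tutorial), a character code can be combined with an item size and passed directly to np.dtype −"
},
{
"code": null,
"e": 12800,
"s": 12800,
"text": "# 'i4' is a 32-bit integer, 'f8' a 64-bit float, 'S10' a 10-byte string \nimport numpy as np \ndt = np.dtype('f8') \nprint dt"
},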
{
"code": null,
"e": 12872,
"s": 12800,
"text": "In this chapter, we will discuss the various array attributes of NumPy."
},
{
"code": null,
"e": 12982,
"s": 12872,
"text": "This array attribute returns a tuple consisting of array dimensions. It can also be used to resize the array."
},
{
"code": null,
"e": 13049,
"s": 12982,
"text": "import numpy as np \na = np.array([[1,2,3],[4,5,6]]) \nprint a.shape"
},
{
"code": null,
"e": 13076,
"s": 13049,
"text": "The output is as follows −"
},
{
"code": null,
"e": 13084,
"s": 13076,
"text": "(2, 3)\n"
},
{
"code": null,
"e": 13192,
"s": 13084,
"text": "# this resizes the ndarray \nimport numpy as np \n\na = np.array([[1,2,3],[4,5,6]]) \na.shape = (3,2) \nprint a "
},
{
"code": null,
"e": 13219,
"s": 13192,
"text": "The output is as follows −"
},
{
"code": null,
"e": 13247,
"s": 13219,
"text": "[[1, 2] \n [3, 4] \n [5, 6]]\n"
},
{
"code": null,
"e": 13306,
"s": 13247,
"text": "NumPy also provides a reshape function to resize an array."
},
{
"code": null,
"e": 13387,
"s": 13306,
"text": "import numpy as np \na = np.array([[1,2,3],[4,5,6]]) \nb = a.reshape(3,2) \nprint b"
},
{
"code": null,
"e": 13414,
"s": 13387,
"text": "The output is as follows −"
},
{
"code": null,
"e": 13442,
"s": 13414,
"text": "[[1, 2] \n [3, 4] \n [5, 6]]\n"
},
{
"code": null,
"e": 13503,
"s": 13442,
"text": "This array attribute returns the number of array dimensions."
},
{
"code": null,
"e": 13587,
"s": 13503,
"text": "# an array of evenly spaced numbers \nimport numpy as np \na = np.arange(24) \nprint a"
},
{
"code": null,
"e": 13614,
"s": 13587,
"text": "The output is as follows −"
},
{
"code": null,
"e": 13695,
"s": 13614,
"text": "[0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23] \n"
},
{
"code": null,
"e": 13857,
"s": 13695,
"text": "# this is one dimensional array \nimport numpy as np \na = np.arange(24) \na.ndim \n\n# now reshape it \nb = a.reshape(2,4,3) \nprint b \n# b is having three dimensions"
},
{
"code": null,
"e": 13884,
"s": 13857,
"text": "The output is as follows −"
},
{
"code": null,
"e": 14020,
"s": 13884,
"text": "[[[ 0, 1, 2] \n [ 3, 4, 5] \n [ 6, 7, 8] \n [ 9, 10, 11]] \n [[12, 13, 14] \n [15, 16, 17]\n [18, 19, 20] \n [21, 22, 23]]] \n"
},
{
"code": null,
"e": 14095,
"s": 14020,
"text": "This array attribute returns the length of each element of array in bytes."
},
{
"code": null,
"e": 14211,
"s": 14095,
"text": "# dtype of array is int8 (1 byte) \nimport numpy as np \nx = np.array([1,2,3,4,5], dtype = np.int8) \nprint x.itemsize"
},
{
"code": null,
"e": 14238,
"s": 14211,
"text": "The output is as follows −"
},
{
"code": null,
"e": 14241,
"s": 14238,
"text": "1\n"
},
{
"code": null,
"e": 14368,
"s": 14241,
"text": "# dtype of array is now float32 (4 bytes) \nimport numpy as np \nx = np.array([1,2,3,4,5], dtype = np.float32) \nprint x.itemsize"
},
{
"code": null,
"e": 14395,
"s": 14368,
"text": "The output is as follows −"
},
{
"code": null,
"e": 14398,
"s": 14395,
"text": "4\n"
},
{
"code": null,
"e": 14497,
"s": 14398,
"text": "The ndarray object has the following attributes. Its current values are returned by this function."
},
{
"code": null,
"e": 14514,
"s": 14497,
"text": "C_CONTIGUOUS (C)"
},
{
"code": null,
"e": 14566,
"s": 14514,
"text": "The data is in a single, C-style contiguous segment"
},
{
"code": null,
"e": 14583,
"s": 14566,
"text": "F_CONTIGUOUS (F)"
},
{
"code": null,
"e": 14641,
"s": 14583,
"text": "The data is in a single, Fortran-style contiguous segment"
},
{
"code": null,
"e": 14653,
"s": 14641,
"text": "OWNDATA (O)"
},
{
"code": null,
"e": 14721,
"s": 14653,
"text": "The array owns the memory it uses or borrows it from another object"
},
{
"code": null,
"e": 14735,
"s": 14721,
"text": "WRITEABLE (W)"
},
{
"code": null,
"e": 14826,
"s": 14735,
"text": "The data area can be written to. Setting this to False locks the data, making it read-only"
},
{
"code": null,
"e": 14838,
"s": 14826,
"text": "ALIGNED (A)"
},
{
"code": null,
"e": 14907,
"s": 14838,
"text": "The data and all elements are aligned appropriately for the hardware"
},
{
"code": null,
"e": 14924,
"s": 14907,
"text": "UPDATEIFCOPY (U)"
},
{
"code": null,
"e": 15061,
"s": 14924,
"text": "This array is a copy of some other array. When this array is deallocated, the base array will be updated with the contents of this array"
},
{
"code": null,
"e": 15118,
"s": 15061,
"text": "The following example shows the current values of flags."
},
{
"code": null,
"e": 15179,
"s": 15118,
"text": "import numpy as np \nx = np.array([1,2,3,4,5]) \nprint x.flags"
},
{
"code": null,
"e": 15206,
"s": 15179,
"text": "The output is as follows −"
},
{
"code": null,
"e": 15320,
"s": 15206,
"text": "C_CONTIGUOUS : True \nF_CONTIGUOUS : True \nOWNDATA : True \nWRITEABLE : True \nALIGNED : True \nUPDATEIFCOPY : False\n"
},
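{
"code": null,
"e": 15320,
"s": 15320,
"text": "As an illustrative addition (not in the original text), the WRITEABLE flag can be cleared to make an array read-only −"
},
{
"code": null,
"e": 15320,
"s": 15320,
"text": "import numpy as np \nx = np.array([1,2,3,4,5]) \nx.flags.writeable = False \n# an assignment such as x[0] = 10 would now raise an error \nprint x.flags['WRITEABLE']"
},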
{
"code": null,
"e": 15450,
"s": 15320,
"text": "A new ndarray object can be constructed by any of the following array creation routines or using a low-level ndarray constructor."
},
{
"code": null,
"e": 15550,
"s": 15450,
"text": "It creates an uninitialized array of specified shape and dtype. It uses the following constructor −"
},
{
"code": null,
"e": 15598,
"s": 15550,
"text": "numpy.empty(shape, dtype = float, order = 'C')\n"
},
{
"code": null,
"e": 15646,
"s": 15598,
"text": "The constructor takes the following parameters."
},
{
"code": null,
"e": 15652,
"s": 15646,
"text": "Shape"
},
{
"code": null,
"e": 15699,
"s": 15652,
"text": "Shape of an empty array in int or tuple of int"
},
{
"code": null,
"e": 15705,
"s": 15699,
"text": "Dtype"
},
{
"code": null,
"e": 15740,
"s": 15705,
"text": "Desired output data type. Optional"
},
{
"code": null,
"e": 15746,
"s": 15740,
"text": "Order"
},
{
"code": null,
"e": 15820,
"s": 15746,
"text": "'C' for C-style row-major array, 'F' for FORTRAN style column-major array"
},
{
"code": null,
"e": 15875,
"s": 15820,
"text": "The following code shows an example of an empty array."
},
{
"code": null,
"e": 15937,
"s": 15875,
"text": "import numpy as np \nx = np.empty([3,2], dtype = int) \nprint x"
},
{
"code": null,
"e": 15964,
"s": 15937,
"text": "The output is as follows −"
},
{
"code": null,
"e": 16045,
"s": 15964,
"text": "[[22649312 1701344351] \n [1818321759 1885959276] \n [16779776 156368896]]\n"
},
{
"code": null,
"e": 16125,
"s": 16045,
"text": "Note − The elements in an array show random values as they are not initialized."
},
{
"code": null,
"e": 16183,
"s": 16125,
"text": "Returns a new array of specified size, filled with zeros."
},
{
"code": null,
"e": 16231,
"s": 16183,
"text": "numpy.zeros(shape, dtype = float, order = 'C')\n"
},
{
"code": null,
"e": 16279,
"s": 16231,
"text": "The constructor takes the following parameters."
},
{
"code": null,
"e": 16285,
"s": 16279,
"text": "Shape"
},
{
"code": null,
"e": 16335,
"s": 16285,
"text": "Shape of an empty array in int or sequence of int"
},
{
"code": null,
"e": 16341,
"s": 16335,
"text": "Dtype"
},
{
"code": null,
"e": 16376,
"s": 16341,
"text": "Desired output data type. Optional"
},
{
"code": null,
"e": 16382,
"s": 16376,
"text": "Order"
},
{
"code": null,
"e": 16456,
"s": 16382,
"text": "'C' for C-style row-major array, 'F' for FORTRAN style column-major array"
},
{
"code": null,
"e": 16548,
"s": 16456,
"text": "# array of five zeros. Default dtype is float \nimport numpy as np \nx = np.zeros(5) \nprint x"
},
{
"code": null,
"e": 16575,
"s": 16548,
"text": "The output is as follows −"
},
{
"code": null,
"e": 16598,
"s": 16575,
"text": "[ 0. 0. 0. 0. 0.]\n"
},
{
"code": null,
"e": 16662,
"s": 16598,
"text": "import numpy as np \nx = np.zeros((5,), dtype = np.int) \nprint x"
},
{
"code": null,
"e": 16700,
"s": 16662,
"text": "Now, the output would be as follows −"
},
{
"code": null,
"e": 16717,
"s": 16700,
"text": "[0 0 0 0 0]\n"
},
{
"code": null,
"e": 16818,
"s": 16717,
"text": "# custom type \nimport numpy as np \nx = np.zeros((2,2), dtype = [('x', 'i4'), ('y', 'i4')]) \nprint x"
},
{
"code": null,
"e": 16859,
"s": 16818,
"text": "It should produce the following output −"
},
{
"code": null,
"e": 16898,
"s": 16859,
"text": "[[(0,0)(0,0)]\n [(0,0)(0,0)]] \n"
},
{
"code": null,
"e": 16964,
"s": 16898,
"text": "Returns a new array of specified size and type, filled with ones."
},
{
"code": null,
"e": 17010,
"s": 16964,
"text": "numpy.ones(shape, dtype = None, order = 'C')\n"
},
{
"code": null,
"e": 17058,
"s": 17010,
"text": "The constructor takes the following parameters."
},
{
"code": null,
"e": 17064,
"s": 17058,
"text": "Shape"
},
{
"code": null,
"e": 17111,
"s": 17064,
"text": "Shape of an empty array in int or tuple of int"
},
{
"code": null,
"e": 17117,
"s": 17111,
"text": "Dtype"
},
{
"code": null,
"e": 17152,
"s": 17117,
"text": "Desired output data type. Optional"
},
{
"code": null,
"e": 17158,
"s": 17152,
"text": "Order"
},
{
"code": null,
"e": 17232,
"s": 17158,
"text": "'C' for C-style row-major array, 'F' for FORTRAN style column-major array"
},
{
"code": null,
"e": 17322,
"s": 17232,
"text": "# array of five ones. Default dtype is float \nimport numpy as np \nx = np.ones(5) \nprint x"
},
{
"code": null,
"e": 17349,
"s": 17322,
"text": "The output is as follows −"
},
{
"code": null,
"e": 17372,
"s": 17349,
"text": "[ 1. 1. 1. 1. 1.]\n"
},
{
"code": null,
"e": 17433,
"s": 17372,
"text": "import numpy as np \nx = np.ones([2,2], dtype = int) \nprint x"
},
{
"code": null,
"e": 17471,
"s": 17433,
"text": "Now, the output would be as follows −"
},
{
"code": null,
"e": 17490,
"s": 17471,
"text": "[[1 1] \n [1 1]]\n"
},
{
"code": null,
"e": 17566,
"s": 17490,
"text": "In this chapter, we will discuss how to create an array from existing data."
},
{
"code": null,
"e": 17724,
"s": 17566,
"text": "This function is similar to numpy.array except for the fact that it has fewer parameters. This routine is useful for converting Python sequence into ndarray."
},
{
"code": null,
"e": 17770,
"s": 17724,
"text": "numpy.asarray(a, dtype = None, order = None)\n"
},
{
"code": null,
"e": 17818,
"s": 17770,
"text": "The constructor takes the following parameters."
},
{
"code": null,
"e": 17820,
"s": 17818,
"text": "a"
},
{
"code": null,
"e": 17915,
"s": 17820,
"text": "Input data in any form such as list, list of tuples, tuples, tuple of tuples or tuple of lists"
},
{
"code": null,
"e": 17921,
"s": 17915,
"text": "dtype"
},
{
"code": null,
"e": 17997,
"s": 17921,
"text": "By default, the data type of input data is applied to the resultant ndarray"
},
{
"code": null,
"e": 18003,
"s": 17997,
"text": "order"
},
{
"code": null,
"e": 18051,
"s": 18003,
"text": "C (row major) or F (column major). C is default"
},
{
"code": null,
"e": 18117,
"s": 18051,
"text": "The following examples show how you can use the asarray function."
},
{
"code": null,
"e": 18205,
"s": 18117,
"text": "# convert list to ndarray \nimport numpy as np \n\nx = [1,2,3] \na = np.asarray(x) \nprint a"
},
{
"code": null,
"e": 18238,
"s": 18205,
"text": "Its output would be as follows −"
},
{
"code": null,
"e": 18250,
"s": 18238,
"text": "[1 2 3] \n"
},
{
"code": null,
"e": 18341,
"s": 18250,
"text": "# dtype is set \nimport numpy as np \n\nx = [1,2,3]\na = np.asarray(x, dtype = float) \nprint a"
},
{
"code": null,
"e": 18379,
"s": 18341,
"text": "Now, the output would be as follows −"
},
{
"code": null,
"e": 18395,
"s": 18379,
"text": "[ 1. 2. 3.] \n"
},
{
"code": null,
"e": 18478,
"s": 18395,
"text": "# ndarray from tuple \nimport numpy as np \n\nx = (1,2,3) \na = np.asarray(x) \nprint a"
},
{
"code": null,
"e": 18500,
"s": 18478,
"text": "Its output would be −"
},
{
"code": null,
"e": 18511,
"s": 18500,
"text": "[1 2 3]\n"
},
{
"code": null,
"e": 18611,
"s": 18511,
"text": "# ndarray from list of tuples \nimport numpy as np \n\nx = [(1,2,3),(4,5)] \na = np.asarray(x) \nprint a"
},
{
"code": null,
"e": 18650,
"s": 18611,
"text": "Here, the output would be as follows −"
},
{
"code": null,
"e": 18670,
"s": 18650,
"text": "[(1, 2, 3) (4, 5)]\n"
},
{
"code": null,
"e": 18818,
"s": 18670,
"text": "This function interprets a buffer as one-dimensional array. Any object that exposes the buffer interface is used as parameter to return an ndarray."
},
{
"code": null,
"e": 18883,
"s": 18818,
"text": "numpy.frombuffer(buffer, dtype = float, count = -1, offset = 0)\n"
},
{
"code": null,
"e": 18931,
"s": 18883,
"text": "The constructor takes the following parameters."
},
{
"code": null,
"e": 18938,
"s": 18931,
"text": "buffer"
},
{
"code": null,
"e": 18979,
"s": 18938,
"text": "Any object that exposes buffer interface"
},
{
"code": null,
"e": 18985,
"s": 18979,
"text": "dtype"
},
{
"code": null,
"e": 19034,
"s": 18985,
"text": "Data type of returned ndarray. Defaults to float"
},
{
"code": null,
"e": 19040,
"s": 19034,
"text": "count"
},
{
"code": null,
"e": 19095,
"s": 19040,
"text": "The number of items to read, default -1 means all data"
},
{
"code": null,
"e": 19102,
"s": 19095,
"text": "offset"
},
{
"code": null,
"e": 19151,
"s": 19102,
"text": "The starting position to read from. Default is 0"
},
{
"code": null,
"e": 19218,
"s": 19151,
"text": "The following examples demonstrate the use of frombuffer function."
},
{
"code": null,
"e": 19301,
"s": 19218,
"text": "import numpy as np \ns = 'Hello World' \na = np.frombuffer(s, dtype = 'S1') \nprint a"
},
{
"code": null,
"e": 19322,
"s": 19301,
"text": "Here is its output −"
},
{
"code": null,
"e": 19379,
"s": 19322,
"text": "['H' 'e' 'l' 'l' 'o' ' ' 'W' 'o' 'r' 'l' 'd']\n"
},
{
"code": null,
"e": 19502,
"s": 19379,
"text": "This function builds an ndarray object from any iterable object. A new one-dimensional array is returned by this function."
},
{
"code": null,
"e": 19547,
"s": 19502,
"text": "numpy.fromiter(iterable, dtype, count = -1)\n"
},
{
"code": null,
"e": 19601,
"s": 19547,
"text": "Here, the constructor takes the following parameters."
},
{
"code": null,
"e": 19610,
"s": 19601,
"text": "iterable"
},
{
"code": null,
"e": 19630,
"s": 19610,
"text": "Any iterable object"
},
{
"code": null,
"e": 19636,
"s": 19630,
"text": "dtype"
},
{
"code": null,
"e": 19665,
"s": 19636,
"text": "Data type of resultant array"
},
{
"code": null,
"e": 19671,
"s": 19665,
"text": "count"
},
{
"code": null,
"e": 19763,
"s": 19671,
"text": "The number of items to be read from iterator. Default is -1 which means all data to be read"
},
{
"code": null,
"e": 19917,
"s": 19763,
"text": "The following examples show how to use the built-in range() function to return a list object. An iterator of this list is used to form an ndarray object."
},
{
"code": null,
"e": 20008,
"s": 19917,
"text": "# create list object using range function \nimport numpy as np \nlist = range(5) \nprint list"
},
{
"code": null,
"e": 20035,
"s": 20008,
"text": "Its output is as follows −"
},
{
"code": null,
"e": 20056,
"s": 20035,
"text": "[0, 1, 2, 3, 4]\n"
},
{
"code": null,
"e": 20226,
"s": 20056,
"text": "# obtain iterator object from list \nimport numpy as np \nlist = range(5) \nit = iter(list) \n\n# use iterator to create ndarray \nx = np.fromiter(it, dtype = float) \nprint x"
},
{
"code": null,
"e": 20264,
"s": 20226,
"text": "Now, the output would be as follows −"
},
{
"code": null,
"e": 20290,
"s": 20264,
"text": "[0. 1. 2. 3. 4.]\n"
},
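{
"code": null,
"e": 20290,
"s": 20290,
"text": "As a further sketch (an addition, not from the original tutorial), fromiter also accepts a generator expression as its iterable −"
},
{
"code": null,
"e": 20290,
"s": 20290,
"text": "# squares generated lazily and collected into an ndarray \nimport numpy as np \nx = np.fromiter((i*i for i in range(5)), dtype = float) \nprint x"
},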
{
"code": null,
"e": 20365,
"s": 20290,
"text": "In this chapter, we will see how to create an array from numerical ranges."
},
{
"code": null,
"e": 20502,
"s": 20365,
"text": "This function returns an ndarray object containing evenly spaced values within a given range. The format of the function is as follows −"
},
{
"code": null,
"e": 20542,
"s": 20502,
"text": "numpy.arange(start, stop, step, dtype)\n"
},
{
"code": null,
"e": 20590,
"s": 20542,
"text": "The constructor takes the following parameters."
},
{
"code": null,
"e": 20596,
"s": 20590,
"text": "start"
},
{
"code": null,
"e": 20648,
"s": 20596,
"text": "The start of an interval. If omitted, defaults to 0"
},
{
"code": null,
"e": 20653,
"s": 20648,
"text": "stop"
},
{
"code": null,
"e": 20704,
"s": 20653,
"text": "The end of an interval (not including this number)"
},
{
"code": null,
"e": 20709,
"s": 20704,
"text": "step"
},
{
"code": null,
"e": 20746,
"s": 20709,
"text": "Spacing between values, default is 1"
},
{
"code": null,
"e": 20752,
"s": 20746,
"text": "dtype"
},
{
"code": null,
"e": 20825,
"s": 20752,
"text": "Data type of resulting ndarray. If not given, data type of input is used"
},
{
"code": null,
"e": 20884,
"s": 20825,
"text": "The following examples show how you can use this function."
},
{
"code": null,
"e": 20930,
"s": 20884,
"text": "import numpy as np \nx = np.arange(5) \nprint x"
},
{
"code": null,
"e": 20963,
"s": 20930,
"text": "Its output would be as follows −"
},
{
"code": null,
"e": 20980,
"s": 20963,
"text": "[0 1 2 3 4]\n"
},
{
"code": null,
"e": 21053,
"s": 20980,
"text": "import numpy as np \n# dtype set \nx = np.arange(5, dtype = float)\nprint x"
},
{
"code": null,
"e": 21081,
"s": 21053,
"text": "Here, the output would be −"
},
{
"code": null,
"e": 21104,
"s": 21081,
"text": "[0. 1. 2. 3. 4.] \n"
},
{
"code": null,
"e": 21189,
"s": 21104,
"text": "# start and stop parameters set \nimport numpy as np \nx = np.arange(10,20,2) \nprint x"
},
{
"code": null,
"e": 21216,
"s": 21189,
"text": "Its output is as follows −"
},
{
"code": null,
"e": 21239,
"s": 21216,
"text": "[10 12 14 16 18] \n"
},
{
"code": null,
"e": 21439,
"s": 21239,
"text": "This function is similar to arange() function. In this function, instead of step size, the number of evenly spaced values between the interval is specified. The usage of this function is as follows −"
},
{
"code": null,
"e": 21499,
"s": 21439,
"text": "numpy.linspace(start, stop, num, endpoint, retstep, dtype)\n"
},
{
"code": null,
"e": 21547,
"s": 21499,
"text": "The constructor takes the following parameters."
},
{
"code": null,
"e": 21553,
"s": 21547,
"text": "start"
},
{
"code": null,
"e": 21588,
"s": 21553,
"text": "The starting value of the sequence"
},
{
"code": null,
"e": 21593,
"s": 21588,
"text": "stop"
},
{
"code": null,
"e": 21673,
"s": 21593,
"text": "The end value of the sequence, included in the sequence if endpoint set to true"
},
{
"code": null,
"e": 21677,
"s": 21673,
"text": "num"
},
{
"code": null,
"e": 21744,
"s": 21677,
"text": "The number of evenly spaced samples to be generated. Default is 50"
},
{
"code": null,
"e": 21753,
"s": 21744,
"text": "endpoint"
},
{
"code": null,
"e": 21849,
"s": 21753,
"text": "True by default, hence the stop value is included in the sequence. If false, it is not included"
},
{
"code": null,
"e": 21857,
"s": 21849,
"text": "retstep"
},
{
"code": null,
"e": 21923,
"s": 21857,
"text": "If true, returns samples and step between the consecutive numbers"
},
{
"code": null,
"e": 21929,
"s": 21923,
"text": "dtype"
},
{
"code": null,
"e": 21957,
"s": 21929,
"text": "Data type of output ndarray"
},
{
"code": null,
"e": 22019,
"s": 21957,
"text": "The following examples demonstrate the use linspace function."
},
{
"code": null,
"e": 22073,
"s": 22019,
"text": "import numpy as np \nx = np.linspace(10,20,5) \nprint x"
},
{
"code": null,
"e": 22095,
"s": 22073,
"text": "Its output would be −"
},
{
"code": null,
"e": 22127,
"s": 22095,
"text": "[10. 12.5 15. 17.5 20.]\n"
},
{
"code": null,
"e": 22225,
"s": 22127,
"text": "# endpoint set to false \nimport numpy as np \nx = np.linspace(10,20, 5, endpoint = False) \nprint x"
},
{
"code": null,
"e": 22247,
"s": 22225,
"text": "The output would be −"
},
{
"code": null,
"e": 22278,
"s": 22247,
"text": "[10. 12. 14. 16. 18.]\n"
},
{
"code": null,
"e": 22393,
"s": 22278,
"text": "# find retstep value \nimport numpy as np \n\nx = np.linspace(1,2,5, retstep = True) \nprint x \n# retstep here is 0.25"
},
{
"code": null,
"e": 22420,
"s": 22393,
"text": "Now, the output would be −"
},
{
"code": null,
"e": 22472,
"s": 22420,
"text": "(array([ 1. , 1.25, 1.5 , 1.75, 2. ]), 0.25)\n"
},
{
"code": null,
"e": 22652,
"s": 22472,
"text": "This function returns an ndarray object that contains the numbers that are evenly spaced on a log scale. Start and stop endpoints of the scale are indices of the base, usually 10."
},
{
"code": null,
"e": 22709,
"s": 22652,
"text": "numpy.logspace(start, stop, num, endpoint, base, dtype)\n"
},
{
"code": null,
"e": 22773,
"s": 22709,
"text": "Following parameters determine the output of logspace function."
},
{
"code": null,
"e": 22779,
"s": 22773,
"text": "start"
},
{
"code": null,
"e": 22827,
"s": 22779,
"text": "The starting point of the sequence is basestart"
},
{
"code": null,
"e": 22832,
"s": 22827,
"text": "stop"
},
{
"code": null,
"e": 22872,
"s": 22832,
"text": "The final value of sequence is basestop"
},
{
"code": null,
"e": 22876,
"s": 22872,
"text": "num"
},
{
"code": null,
"e": 22930,
"s": 22876,
"text": "The number of values between the range. Default is 50"
},
{
"code": null,
"e": 22939,
"s": 22930,
"text": "endpoint"
},
{
"code": null,
"e": 22984,
"s": 22939,
"text": "If true, stop is the last value in the range"
},
{
"code": null,
"e": 22989,
"s": 22984,
"text": "base"
},
{
"code": null,
"e": 23022,
"s": 22989,
"text": "Base of log space, default is 10"
},
{
"code": null,
"e": 23028,
"s": 23022,
"text": "dtype"
},
{
"code": null,
"e": 23107,
"s": 23028,
"text": "Data type of output array. If not given, it depends upon other input arguments"
},
{
"code": null,
"e": 23178,
"s": 23107,
"text": "The following examples will help you understand the logspace function."
},
{
"code": null,
"e": 23265,
"s": 23178,
"text": "import numpy as np \n# default base is 10 \na = np.logspace(1.0, 2.0, num = 10) \nprint a"
},
{
"code": null,
"e": 23298,
"s": 23265,
"text": "Its output would be as follows −"
},
{
"code": null,
"e": 23452,
"s": 23298,
"text": "[ 10. 12.91549665 16.68100537 21.5443469 27.82559402 \n 35.93813664 46.41588834 59.94842503 77.42636827 100. ]\n"
},
{
"code": null,
"e": 23552,
"s": 23452,
"text": "# set base of log space to 2 \nimport numpy as np \na = np.logspace(1,10,num = 10, base = 2) \nprint a"
},
{
"code": null,
"e": 23579,
"s": 23552,
"text": "Now, the output would be −"
},
{
"code": null,
"e": 23652,
"s": 23579,
"text": "[ 2. 4. 8. 16. 32. 64. 128. 256. 512. 1024.] \n"
},
{
"code": null,
"e": 23779,
"s": 23652,
"text": "Contents of ndarray object can be accessed and modified by indexing or slicing, just like Python's in-built container objects."
},
{
"code": null,
"e": 23950,
"s": 23779,
"text": "As mentioned earlier, items in ndarray object follows zero-based index. Three types of indexing methods are available − field access, basic slicing and advanced indexing."
},
{
"code": null,
"e": 24215,
"s": 23950,
"text": "Basic slicing is an extension of Python's basic concept of slicing to n dimensions. A Python slice object is constructed by giving start, stop, and step parameters to the built-in slice function. This slice object is passed to the array to extract a part of array."
},
{
"code": null,
"e": 24283,
"s": 24215,
"text": "import numpy as np \na = np.arange(10) \ns = slice(2,7,2) \nprint a[s]"
},
{
"code": null,
"e": 24310,
"s": 24283,
"text": "Its output is as follows −"
},
{
"code": null,
"e": 24321,
"s": 24310,
"text": "[2 4 6]\n"
},
{
"code": null,
"e": 24606,
"s": 24321,
"text": "In the above example, an ndarray object is prepared by arange() function. Then a slice object is defined with start, stop, and step values 2, 7, and 2 respectively. When this slice object is passed to the ndarray, a part of it starting with index 2 up to 7 with a step of 2 is sliced."
},
{
"code": null,
"e": 24749,
"s": 24606,
"text": "The same result can also be obtained by giving the slicing parameters separated by a colon : (start:stop:step) directly to the ndarray object."
},
{
"code": null,
"e": 24810,
"s": 24749,
"text": "import numpy as np \na = np.arange(10) \nb = a[2:7:2] \nprint b"
},
{
"code": null,
"e": 24846,
"s": 24810,
"text": "Here, we will get the same output −"
},
{
"code": null,
"e": 24857,
"s": 24846,
"text": "[2 4 6]\n"
},
{
"code": null,
"e": 25178,
"s": 24857,
"text": "If only one parameter is put, a single item corresponding to the index will be returned. If a : is inserted in front of it, all items from that index onwards will be extracted. If two parameters (with : between them) is used, items between the two indexes (not including the stop index) with default step one are sliced."
},
{
"code": null,
"e": 25257,
"s": 25178,
"text": "# slice single item \nimport numpy as np \n\na = np.arange(10) \nb = a[5] \nprint b"
},
{
"code": null,
"e": 25284,
"s": 25257,
"text": "Its output is as follows −"
},
{
"code": null,
"e": 25287,
"s": 25284,
"text": "5\n"
},
{
"code": null,
"e": 25373,
"s": 25287,
"text": "# slice items starting from index \nimport numpy as np \na = np.arange(10) \nprint a[2:]"
},
{
"code": null,
"e": 25400,
"s": 25373,
"text": "Now, the output would be −"
},
{
"code": null,
"e": 25426,
"s": 25400,
"text": "[2 3 4 5 6 7 8 9]\n"
},
{
"code": null,
"e": 25509,
"s": 25426,
"text": "# slice items between indexes \nimport numpy as np \na = np.arange(10) \nprint a[2:5]"
},
{
"code": null,
"e": 25537,
"s": 25509,
"text": "Here, the output would be −"
},
{
"code": null,
"e": 25549,
"s": 25537,
"text": "[2 3 4] \n"
},
{
"code": null,
"e": 25613,
"s": 25549,
"text": "The above description applies to multi-dimensional ndarray too."
},
{
"code": null,
"e": 25789,
"s": 25613,
"text": "import numpy as np \na = np.array([[1,2,3],[3,4,5],[4,5,6]]) \nprint a \n\n# slice items starting from index\nprint 'Now we will slice the array from the index a[1:]' \nprint a[1:]"
},
{
"code": null,
"e": 25816,
"s": 25789,
"text": "The output is as follows −"
},
{
"code": null,
"e": 25914,
"s": 25816,
"text": "[[1 2 3]\n [3 4 5]\n [4 5 6]]\n\nNow we will slice the array from the index a[1:]\n[[3 4 5]\n [4 5 6]]\n"
},
{
"code": null,
"e": 26125,
"s": 25914,
"text": "Slicing can also include ellipsis (...) to make a selection tuple of the same length as the dimension of an array. If ellipsis is used at the row position, it will return an ndarray comprising of items in rows."
},
{
"code": null,
"e": 26617,
"s": 26125,
"text": "# array to begin with \nimport numpy as np \na = np.array([[1,2,3],[3,4,5],[4,5,6]]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\n# this returns array of items in the second column \nprint 'The items in the second column are:' \nprint a[...,1] \nprint '\\n' \n\n# Now we will slice all items from the second row \nprint 'The items in the second row are:' \nprint a[1,...] \nprint '\\n' \n\n# Now we will slice all items from column 1 onwards \nprint 'The items column 1 onwards are:' \nprint a[...,1:]"
},
{
"code": null,
"e": 26660,
"s": 26617,
"text": "The output of this program is as follows −"
},
{
"code": null,
"e": 26850,
"s": 26660,
"text": "Our array is:\n[[1 2 3]\n [3 4 5]\n [4 5 6]] \n \nThe items in the second column are: \n[2 4 5] \n\nThe items in the second row are:\n[3 4 5]\n\nThe items column 1 onwards are:\n[[2 3]\n [4 5]\n [5 6]] \n"
},
{
"code": null,
"e": 27139,
"s": 26850,
"text": "It is possible to make a selection from ndarray that is a non-tuple sequence, ndarray object of integer or Boolean data type, or a tuple with at least one item being a sequence object. Advanced indexing always returns a copy of the data. As against this, the slicing only presents a view."
},
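The copy-versus-view distinction above can be sketched as follows (a minimal illustration; it uses Python 3 print syntax, unlike the Python 2 examples on this page):

```python
import numpy as np

a = np.arange(6)

# Basic slicing returns a view: writing through it changes the original.
view = a[1:4]
view[0] = 99
print(a)  # element 1 of a is now 99

# Advanced (integer) indexing returns a copy: the original is untouched.
copy = a[[1, 2, 3]]
copy[0] = -1
print(a)  # a is unchanged by the write to copy
```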
{
"code": null,
"e": 27203,
"s": 27139,
"text": "There are two types of advanced indexing − Integer and Boolean."
},
{
"code": null,
"e": 27494,
"s": 27203,
"text": "This mechanism helps in selecting any arbitrary item in an array based on its N-dimensional index. Each integer array represents the number of indexes into that dimension. When the index consists of as many integer arrays as the dimensions of the target ndarray, it becomes straightforward."
},
{
"code": null,
"e": 27703,
"s": 27494,
"text": "In the following example, one element of specified column from each row of ndarray object is selected. Hence, the row index contains all row numbers, and the column index specifies the element to be selected."
},
{
"code": null,
"e": 27797,
"s": 27703,
"text": "import numpy as np \n\nx = np.array([[1, 2], [3, 4], [5, 6]]) \ny = x[[0,1,2], [0,1,0]] \nprint y"
},
{
"code": null,
"e": 27830,
"s": 27797,
"text": "Its output would be as follows −"
},
{
"code": null,
"e": 27841,
"s": 27830,
"text": "[1 4 5]\n"
},
{
"code": null,
"e": 27921,
"s": 27841,
"text": "The selection includes elements at (0,0), (1,1) and (2,0) from the first array."
},
{
"code": null,
"e": 28101,
"s": 27921,
"text": "In the following example, elements placed at corners of a 4X3 array are selected. The row indices of selection are [0, 0] and [3,3] whereas the column indices are [0,2] and [0,2]."
},
{
"code": null,
"e": 28380,
"s": 28101,
"text": "import numpy as np \nx = np.array([[ 0, 1, 2],[ 3, 4, 5],[ 6, 7, 8],[ 9, 10, 11]]) \n \nprint 'Our array is:' \nprint x \nprint '\\n' \n\nrows = np.array([[0,0],[3,3]])\ncols = np.array([[0,2],[0,2]]) \ny = x[rows,cols] \n \nprint 'The corner elements of this array are:' \nprint y"
},
{
"code": null,
"e": 28423,
"s": 28380,
"text": "The output of this program is as follows −"
},
{
"code": null,
"e": 28924,
"s": 28423,
"text": "Our array is: \n[[ 0 1 2] \n [ 3 4 5] \n [ 6 7 8] \n [ 9 10 11]]\n \nThe corner elements of this array are: \n[[ 0 2] \n [ 9 11]] \n"
},
{
"code": null,
"e": 28997,
"s": 28924,
"text": "The resultant selection is an ndarray object containing corner elements."
},
{
"code": null,
"e": 29300,
"s": 28997,
"text": "Advanced and basic indexing can be combined by using one slice (:) or ellipsis (...) with an index array. The following example uses slice for row and advanced index for column. The result is the same when slice is used for both. But advanced index results in copy and may have different memory layout."
},
{
"code": null,
"e": 29642,
"s": 29300,
"text": "import numpy as np \nx = np.array([[ 0, 1, 2],[ 3, 4, 5],[ 6, 7, 8],[ 9, 10, 11]]) \n\nprint 'Our array is:' \nprint x \nprint '\\n' \n\n# slicing \nz = x[1:4,1:3] \n\nprint 'After slicing, our array becomes:' \nprint z \nprint '\\n' \n\n# using advanced index for column \ny = x[1:4,[1,2]] \n\nprint 'Slicing using advanced index for column:' \nprint y"
},
{
"code": null,
"e": 29691,
"s": 29642,
"text": "The output of this program would be as follows −"
},
{
"code": null,
"e": 29892,
"s": 29691,
"text": "Our array is:\n[[ 0 1 2] \n [ 3 4 5] \n [ 6 7 8]\n [ 9 10 11]]\n \nAfter slicing, our array becomes:\n[[ 4 5]\n [ 7 8]\n [10 11]]\n\nSlicing using advanced index for column:\n[[ 4 5]\n [ 7 8]\n [10 11]] \n"
},
{
"code": null,
"e": 30036,
"s": 29892,
"text": "This type of advanced indexing is used when the resultant object is meant to be the result of Boolean operations, such as comparison operators."
},
{
"code": null,
"e": 30120,
"s": 30036,
"text": "In this example, items greater than 5 are returned as a result of Boolean indexing."
},
{
"code": null,
"e": 30356,
"s": 30120,
"text": "import numpy as np \nx = np.array([[ 0, 1, 2],[ 3, 4, 5],[ 6, 7, 8],[ 9, 10, 11]]) \n\nprint 'Our array is:' \nprint x \nprint '\\n' \n\n# Now we will print the items greater than 5 \nprint 'The items greater than 5 are:' \nprint x[x > 5]"
},
{
"code": null,
"e": 30394,
"s": 30356,
"text": "The output of this program would be −"
},
{
"code": null,
"e": 30516,
"s": 30394,
"text": "Our array is: \n[[ 0 1 2] \n [ 3 4 5] \n [ 6 7 8] \n [ 9 10 11]] \n \nThe items greater than 5 are:\n[ 6 7 8 9 10 11] \n"
},
{
"code": null,
"e": 30607,
"s": 30516,
"text": "In this example, NaN (Not a Number) elements are omitted by using ~ (complement operator)."
},
{
"code": null,
"e": 30691,
"s": 30607,
"text": "import numpy as np \na = np.array([np.nan, 1,2,np.nan,3,4,5]) \nprint a[~np.isnan(a)]"
},
{
"code": null,
"e": 30713,
"s": 30691,
"text": "Its output would be −"
},
{
"code": null,
"e": 30741,
"s": 30713,
"text": "[ 1. 2. 3. 4. 5.] \n"
},
{
"code": null,
"e": 30827,
"s": 30741,
"text": "The following example shows how to filter out the non-complex elements from an array."
},
{
"code": null,
"e": 30908,
"s": 30827,
"text": "import numpy as np \na = np.array([1, 2+6j, 5, 3.5+5j]) \nprint a[np.iscomplex(a)]"
},
{
"code": null,
"e": 30941,
"s": 30908,
"text": "Here, the output is as follows −"
},
{
"code": null,
"e": 30962,
"s": 30941,
"text": "[2.0+6.j 3.5+5.j] \n"
},
{
"code": null,
"e": 31248,
"s": 30962,
"text": "The term broadcasting refers to the ability of NumPy to treat arrays of different shapes during arithmetic operations. Arithmetic operations on arrays are usually done on corresponding elements. If two arrays are of exactly the same shape, then these operations are smoothly performed."
},
{
"code": null,
"e": 31342,
"s": 31248,
"text": "import numpy as np \n\na = np.array([1,2,3,4]) \nb = np.array([10,20,30,40]) \nc = a * b \nprint c"
},
{
"code": null,
"e": 31369,
"s": 31342,
"text": "Its output is as follows −"
},
{
"code": null,
"e": 31391,
"s": 31369,
"text": "[10 40 90 160]\n"
},
{
"code": null,
"e": 31707,
"s": 31391,
"text": "If the dimensions of two arrays are dissimilar, element-to-element operations are not possible. However, operations on arrays of non-similar shapes is still possible in NumPy, because of the broadcasting capability. The smaller array is broadcast to the size of the larger array so that they have compatible shapes."
},
{
"code": null,
"e": 31771,
"s": 31707,
"text": "Broadcasting is possible if the following rules are satisfied −"
},
{
"code": null,
"e": 31846,
"s": 31771,
"text": "Array with smaller ndim than the other is prepended with '1' in its shape."
},
{
"code": null,
"e": 32013,
"s": 31921,
"text": "Size in each dimension of the output shape is maximum of the input sizes in that dimension."
},
{
"code": null,
"e": 32231,
"s": 32105,
"text": "An input can be used in calculation, if its size in a particular dimension matches the output size or its value is exactly 1."
},
{
"code": null,
"e": 32486,
"s": 32357,
"text": "If an input has a dimension size of 1, the first data entry in that dimension is used for all calculations along that dimension."
},
{
"code": null,
"e": 32736,
"s": 32615,
"text": "A set of arrays is said to be broadcastable if the above rules produce a valid result and one of the following is true −"
},
{
"code": null,
"e": 32772,
"s": 32736,
"text": "Arrays have exactly the same shape."
},
{
"code": null,
"e": 32915,
"s": 32808,
"text": "Arrays have the same number of dimensions and the length of each dimension is either a common length or 1."
},
{
"code": null,
"e": 33156,
"s": 33022,
"text": "Array having too few dimensions can have its shape prepended with a dimension of length 1, so that the above stated property is true."
},
{
"code": null,
"e": 33346,
"s": 33290,
"text": "The following program shows an example of broadcasting."
},
{
"code": null,
"e": 33627,
"s": 33346,
"text": "import numpy as np \na = np.array([[0.0,0.0,0.0],[10.0,10.0,10.0],[20.0,20.0,20.0],[30.0,30.0,30.0]]) \nb = np.array([1.0,2.0,3.0]) \n \nprint 'First array:' \nprint a \nprint '\\n' \n \nprint 'Second array:' \nprint b \nprint '\\n' \n \nprint 'First Array + Second Array' \nprint a + b"
},
{
"code": null,
"e": 33676,
"s": 33627,
"text": "The output of this program would be as follows −"
},
{
"code": null,
"e": 33869,
"s": 33676,
"text": "First array:\n[[ 0. 0. 0.]\n [ 10. 10. 10.]\n [ 20. 20. 20.]\n [ 30. 30. 30.]]\n\nSecond array:\n[ 1. 2. 3.]\n\nFirst Array + Second Array\n[[ 1. 2. 3.]\n [ 11. 12. 13.]\n [ 21. 22. 23.]\n [ 31. 32. 33.]]\n"
},
{
"code": null,
"e": 33957,
"s": 33869,
"text": "The following figure demonstrates how array b is broadcast to become compatible with a."
},
{
"code": null,
"e": 34198,
"s": 33957,
"text": "NumPy package contains an iterator object numpy.nditer. It is an efficient multidimensional iterator object using which it is possible to iterate over an array. Each element of an array is visited using Python’s standard Iterator interface."
},
{
"code": null,
"e": 34282,
"s": 34198,
"text": "Let us create a 3X4 array using arange() function and iterate over it using nditer."
},
{
"code": null,
"e": 34452,
"s": 34282,
"text": "import numpy as np\na = np.arange(0,60,5)\na = a.reshape(3,4)\n\nprint 'Original array is:'\nprint a\nprint '\\n'\n\nprint 'Modified array is:'\nfor x in np.nditer(a):\n print x,"
},
{
"code": null,
"e": 34495,
"s": 34452,
"text": "The output of this program is as follows −"
},
{
"code": null,
"e": 34614,
"s": 34495,
"text": "Original array is:\n[[ 0 5 10 15]\n [20 25 30 35]\n [40 45 50 55]]\n\nModified array is:\n0 5 10 15 20 25 30 35 40 45 50 55\n"
},
{
"code": null,
"e": 34799,
"s": 34614,
"text": "The order of iteration is chosen to match the memory layout of an array, without considering a particular ordering. This can be seen by iterating over the transpose of the above array."
},
{
"code": null,
"e": 35063,
"s": 34799,
"text": "import numpy as np \na = np.arange(0,60,5) \na = a.reshape(3,4) \n \nprint 'Original array is:'\nprint a \nprint '\\n' \n \nprint 'Transpose of the original array is:' \nb = a.T \nprint b \nprint '\\n' \n \nprint 'Modified array is:' \nfor x in np.nditer(b): \n print x,"
},
{
"code": null,
"e": 35111,
"s": 35063,
"text": "The output of the above program is as follows −"
},
{
"code": null,
"e": 35316,
"s": 35111,
"text": "Original array is:\n[[ 0 5 10 15]\n [20 25 30 35]\n [40 45 50 55]]\n\nTranspose of the original array is:\n[[ 0 20 40]\n [ 5 25 45]\n [10 30 50]\n [15 35 55]]\n\nModified array is:\n0 5 10 15 20 25 30 35 40 45 50 55\n"
},
{
"code": null,
"e": 35441,
"s": 35316,
"text": "If the same elements are stored using F-style order, the iterator chooses the more efficient way of iterating over an array."
},
{
"code": null,
"e": 35829,
"s": 35441,
"text": "import numpy as np\na = np.arange(0,60,5)\na = a.reshape(3,4)\nprint 'Original array is:'\nprint a\nprint '\\n'\n\nprint 'Transpose of the original array is:'\nb = a.T\nprint b\nprint '\\n'\n\nprint 'Sorted in C-style order:'\nc = b.copy(order='C')\nprint c\nfor x in np.nditer(c):\n print x,\n\nprint '\\n'\n\nprint 'Sorted in F-style order:'\nc = b.copy(order='F')\nprint c\nfor x in np.nditer(c):\n print x,"
},
{
"code": null,
"e": 35862,
"s": 35829,
"text": "Its output would be as follows −"
},
{
"code": null,
"e": 36231,
"s": 35862,
"text": "Original array is:\n[[ 0 5 10 15]\n [20 25 30 35]\n [40 45 50 55]]\n\nTranspose of the original array is:\n[[ 0 20 40]\n [ 5 25 45]\n [10 30 50]\n [15 35 55]]\n\nSorted in C-style order:\n[[ 0 20 40]\n [ 5 25 45]\n [10 30 50]\n [15 35 55]]\n0 20 40 5 25 45 10 30 50 15 35 55\n\nSorted in F-style order:\n[[ 0 20 40]\n [ 5 25 45]\n [10 30 50]\n [15 35 55]]\n0 5 10 15 20 25 30 35 40 45 50 55\n"
},
{
"code": null,
"e": 36322,
"s": 36231,
"text": "It is possible to force nditer object to use a specific order by explicitly mentioning it."
},
{
"code": null,
"e": 36618,
"s": 36322,
"text": "import numpy as np \na = np.arange(0,60,5) \na = a.reshape(3,4) \n\nprint 'Original array is:' \nprint a \nprint '\\n' \n\nprint 'Sorted in C-style order:' \nfor x in np.nditer(a, order = 'C'): \n print x, \nprint '\\n' \n\nprint 'Sorted in F-style order:' \nfor x in np.nditer(a, order = 'F'): \n print x,"
},
{
"code": null,
"e": 36640,
"s": 36618,
"text": "Its output would be −"
},
{
"code": null,
"e": 36825,
"s": 36640,
"text": "Original array is:\n[[ 0 5 10 15]\n [20 25 30 35]\n [40 45 50 55]]\n\nSorted in C-style order:\n0 5 10 15 20 25 30 35 40 45 50 55\n\nSorted in F-style order:\n0 20 40 5 25 45 10 30 50 15 35 55\n"
},
{
"code": null,
"e": 37035,
"s": 36825,
"text": "The nditer object has another optional parameter called op_flags. Its default value is read-only, but can be set to read-write or write-only mode. This will enable modifying array elements using this iterator."
},
{
"code": null,
"e": 37242,
"s": 37035,
"text": "import numpy as np\na = np.arange(0,60,5)\na = a.reshape(3,4)\nprint 'Original array is:'\nprint a\nprint '\\n'\n\nfor x in np.nditer(a, op_flags = ['readwrite']):\n x[...] = 2*x\nprint 'Modified array is:'\nprint a"
},
{
"code": null,
"e": 37269,
"s": 37242,
"text": "Its output is as follows −"
},
{
"code": null,
"e": 37404,
"s": 37269,
"text": "Original array is:\n[[ 0 5 10 15]\n [20 25 30 35]\n [40 45 50 55]]\n\nModified array is:\n[[ 0 10 20 30]\n [ 40 50 60 70]\n [ 80 90 100 110]]\n"
},
{
"code": null,
"e": 37496,
"s": 37404,
"text": "The nditer class constructor has a ‘flags’ parameter, which can take the following values −"
},
{
"code": null,
"e": 37504,
"s": 37496,
"text": "c_index"
},
{
"code": null,
"e": 37533,
"s": 37504,
"text": "C_order index can be tracked"
},
{
"code": null,
"e": 37541,
"s": 37533,
"text": "f_index"
},
{
"code": null,
"e": 37572,
"s": 37541,
"text": "Fortran_order index is tracked"
},
{
"code": null,
"e": 37584,
"s": 37572,
"text": "multi-index"
},
{
"code": null,
"e": 37638,
"s": 37584,
"text": "Type of indexes with one per iteration can be tracked"
},
{
"code": null,
"e": 37652,
"s": 37638,
"text": "external_loop"
},
{
"code": null,
"e": 37756,
"s": 37652,
"text": "Causes values given to be one-dimensional arrays with multiple values instead of zero-dimensional array"
},
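A minimal sketch of the index-tracking flags listed above (Python 3 syntax, unlike the Python 2 examples on this page; multi_index and index are the attributes the nditer object exposes for these flags):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# Track a (row, col) multi-index while iterating.
it = np.nditer(a, flags=['multi_index'])
for x in it:
    print(it.multi_index, int(x))

# Track a flat C-order index instead.
it = np.nditer(a, flags=['c_index'])
for x in it:
    print(it.index, int(x))
```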
{
"code": null,
"e": 37864,
"s": 37756,
"text": "In the following example, one-dimensional arrays corresponding to each column is traversed by the iterator."
},
{
"code": null,
"e": 38083,
"s": 37864,
"text": "import numpy as np \na = np.arange(0,60,5) \na = a.reshape(3,4) \n\nprint 'Original array is:' \nprint a \nprint '\\n' \n\nprint 'Modified array is:' \nfor x in np.nditer(a, flags = ['external_loop'], order = 'F'): \n print x,"
},
{
"code": null,
"e": 38110,
"s": 38083,
"text": "The output is as follows −"
},
{
"code": null,
"e": 38239,
"s": 38110,
"text": "Original array is:\n[[ 0 5 10 15]\n [20 25 30 35]\n [40 45 50 55]]\n\nModified array is:\n[ 0 20 40] [ 5 25 45] [10 30 50] [15 35 55]\n"
},
{
"code": null,
"e": 38507,
"s": 38239,
"text": "If two arrays are broadcastable, a combined nditer object is able to iterate upon them concurrently. Assuming that an array a has dimension 3X4, and there is another array b of dimension 1X4, the iterator of following type is used (array b is broadcast to size of a)."
},
{
"code": null,
"e": 38793,
"s": 38507,
"text": "import numpy as np \na = np.arange(0,60,5) \na = a.reshape(3,4) \n\nprint 'First array is:' \nprint a \nprint '\\n' \n\nprint 'Second array is:' \nb = np.array([1, 2, 3, 4], dtype = int) \nprint b \nprint '\\n' \n\nprint 'Modified array is:' \nfor x,y in np.nditer([a,b]): \n print \"%d:%d\" % (x,y),"
},
{
"code": null,
"e": 38826,
"s": 38793,
"text": "Its output would be as follows −"
},
{
"code": null,
"e": 38994,
"s": 38826,
"text": "First array is:\n[[ 0 5 10 15]\n [20 25 30 35]\n [40 45 50 55]]\n\nSecond array is:\n[1 2 3 4]\n\nModified array is:\n0:1 5:2 10:3 15:4 20:1 25:2 30:3 35:4 40:1 45:2 50:3 55:4\n"
},
{
"code": null,
"e": 39140,
"s": 38994,
"text": "Several routines are available in NumPy package for manipulation of elements in ndarray object. They can be classified into the following types −"
},
{
"code": null,
"e": 39196,
"s": 39140,
"text": "Gives a new shape to an array without changing its data"
},
{
"code": null,
"e": 39226,
"s": 39196,
"text": "A 1-D iterator over the array"
},
{
"code": null,
"e": 39283,
"s": 39226,
"text": "Returns a copy of the array collapsed into one dimension"
},
{
"code": null,
"e": 39320,
"s": 39283,
"text": "Returns a contiguous flattened array"
},
{
"code": null,
"e": 39356,
"s": 39320,
"text": "Permutes the dimensions of an array"
},
{
"code": null,
"e": 39381,
"s": 39356,
"text": "Same as self.transpose()"
},
{
"code": null,
"e": 39416,
"s": 39381,
"text": "Rolls the specified axis backwards"
},
{
"code": null,
"e": 39454,
"s": 39416,
"text": "Interchanges the two axes of an array"
},
{
"code": null,
"e": 39498,
"s": 39454,
"text": "Produces an object that mimics broadcasting"
},
{
"code": null,
"e": 39533,
"s": 39498,
"text": "Broadcasts an array to a new shape"
},
{
"code": null,
"e": 39563,
"s": 39533,
"text": "Expands the shape of an array"
},
{
"code": null,
"e": 39625,
"s": 39563,
"text": "Removes single-dimensional entries from the shape of an array"
},
{
"code": null,
"e": 39675,
"s": 39625,
"text": "Joins a sequence of arrays along an existing axis"
},
{
"code": null,
"e": 39719,
"s": 39675,
"text": "Joins a sequence of arrays along a new axis"
},
{
"code": null,
"e": 39772,
"s": 39719,
"text": "Stacks arrays in sequence horizontally (column wise)"
},
{
"code": null,
"e": 39820,
"s": 39772,
"text": "Stacks arrays in sequence vertically (row wise)"
},
{
"code": null,
"e": 39861,
"s": 39820,
"text": "Splits an array into multiple sub-arrays"
},
{
"code": null,
"e": 39929,
"s": 39861,
"text": "Splits an array into multiple sub-arrays horizontally (column-wise)"
},
{
"code": null,
"e": 39992,
"s": 39929,
"text": "Splits an array into multiple sub-arrays vertically (row-wise)"
},
{
"code": null,
"e": 40037,
"s": 39992,
"text": "Returns a new array with the specified shape"
},
{
"code": null,
"e": 40079,
"s": 40037,
"text": "Appends the values to the end of an array"
},
{
"code": null,
"e": 40144,
"s": 40079,
"text": "Inserts the values along the given axis before the given indices"
},
{
"code": null,
"e": 40202,
"s": 40144,
"text": "Returns a new array with sub-arrays along an axis deleted"
},
{
"code": null,
"e": 40240,
"s": 40202,
"text": "Finds the unique elements of an array"
},
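A few of the routines listed above can be sketched together (a minimal illustration in Python 3 syntax, unlike the Python 2 examples on this page; not an exhaustive tour):

```python
import numpy as np

a = np.arange(8)

b = a.reshape(2, 4)             # new shape, same data
print(b.T)                      # transpose: shape becomes (4, 2)
print(np.ravel(b))              # contiguous flattened array
print(np.concatenate([b, b]))   # join along the existing first axis (rows)
print(np.unique([1, 2, 2, 3]))  # sorted unique elements
```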
{
"code": null,
"e": 40319,
"s": 40240,
"text": "Following are the functions for bitwise operations available in NumPy package."
},
{
"code": null,
"e": 40368,
"s": 40319,
"text": "Computes bitwise AND operation of array elements"
},
{
"code": null,
"e": 40416,
"s": 40368,
"text": "Computes bitwise OR operation of array elements"
},
{
"code": null,
"e": 40437,
"s": 40416,
"text": "Computes bitwise NOT"
},
{
"code": null,
"e": 40488,
"s": 40437,
"text": "Shifts bits of a binary representation to the left"
},
{
"code": null,
"e": 40538,
"s": 40488,
"text": "Shifts bits of binary representation to the right"
},
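A minimal sketch of these bitwise functions (Python 3 syntax, unlike the Python 2 examples on this page; the uint8 dtype is chosen here only to make the invert result easy to read):

```python
import numpy as np

a = np.array([13], dtype=np.uint8)  # 0b00001101
b = np.array([17], dtype=np.uint8)  # 0b00010001

print(np.bitwise_and(a, b))   # [1]  - only bit 0 is set in both
print(np.bitwise_or(a, b))    # [29] - union of the set bits
print(np.invert(a))           # [242] - NOT flips all 8 bits of a uint8
print(np.left_shift(a, 2))    # [52] - 13 << 2
print(np.right_shift(a, 2))   # [3]  - 13 >> 2
```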
{
"code": null,
"e": 40742,
"s": 40538,
"text": "The following functions are used to perform vectorized string operations for arrays of dtype numpy.string_ or numpy.unicode_. They are based on the standard string functions in Python's built-in library."
},
{
"code": null,
"e": 40817,
"s": 40742,
"text": "Returns element-wise string concatenation for two arrays of str or Unicode"
},
{
"code": null,
"e": 40878,
"s": 40817,
"text": "Returns the string with multiple concatenation, element-wise"
},
{
"code": null,
"e": 40968,
"s": 40878,
"text": "Returns a copy of the given string with elements centered in a string of specified length"
},
{
"code": null,
"e": 41039,
"s": 40968,
"text": "Returns a copy of the string with only the first character capitalized"
},
{
"code": null,
"e": 41109,
"s": 41039,
"text": "Returns the element-wise title cased version of the string or unicode"
},
{
"code": null,
"e": 41167,
"s": 41109,
"text": "Returns an array with the elements converted to lowercase"
},
{
"code": null,
"e": 41225,
"s": 41167,
"text": "Returns an array with the elements converted to uppercase"
},
{
"code": null,
"e": 41293,
"s": 41225,
"text": "Returns a list of the words in the string, using separatordelimiter"
},
{
"code": null,
"e": 41369,
"s": 41293,
"text": "Returns a list of the lines in the element, breaking at the line boundaries"
},
{
"code": null,
"e": 41433,
"s": 41369,
"text": "Returns a copy with the leading and trailing characters removed"
},
{
"code": null,
"e": 41508,
"s": 41433,
"text": "Returns a string which is the concatenation of the strings in the sequence"
},
{
"code": null,
"e": 41598,
"s": 41508,
"text": "Returns a copy of the string with all occurrences of substring replaced by the new string"
},
{
"code": null,
"e": 41628,
"s": 41598,
"text": "Calls str.decode element-wise"
},
{
"code": null,
"e": 41658,
"s": 41628,
"text": "Calls str.encode element-wise"
},
{
"code": null,
"e": 41874,
"s": 41658,
"text": "These functions are defined in character array class (numpy.char). The older Numarray package contained chararray class. The above functions in numpy.char class are useful in performing vectorized string operations."
},
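A short sketch of a few numpy.char functions (Python 3 syntax, unlike the Python 2 examples on this page; the sample strings are arbitrary):

```python
import numpy as np

print(np.char.add(['hello '], ['world']))         # element-wise concatenation
print(np.char.center('numpy', 11, fillchar='*'))  # '***numpy***'
print(np.char.capitalize('hello world'))          # 'Hello world'
print(np.char.split('tutorials point', sep=' '))  # list of words per element
print(np.char.replace('He is a good boy', 'is', 'was'))
```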
{
"code": null,
"e": 42079,
"s": 41874,
"text": "Quite understandably, NumPy contains a large number of various mathematical operations. NumPy provides standard trigonometric functions, functions for arithmetic operations, handling complex numbers, etc."
},
{
"code": null,
"e": 42186,
"s": 42079,
"text": "NumPy has standard trigonometric functions which return trigonometric ratios for a given angle in radians."
},
{
"code": null,
"e": 42194,
"s": 42186,
"text": "Example"
},
{
"code": null,
"e": 42525,
"s": 42194,
"text": "import numpy as np \na = np.array([0,30,45,60,90]) \n\nprint 'Sine of different angles:' \n# Convert to radians by multiplying with pi/180 \nprint np.sin(a*np.pi/180) \nprint '\\n' \n\nprint 'Cosine values for angles in array:' \nprint np.cos(a*np.pi/180) \nprint '\\n' \n\nprint 'Tangent values for given angles:' \nprint np.tan(a*np.pi/180) "
},
{
"code": null,
"e": 42546,
"s": 42525,
"text": "Here is its output −"
},
{
"code": null,
"e": 42980,
"s": 42546,
"text": "Sine of different angles: \n[ 0. 0.5 0.70710678 0.8660254 1. ]\n\nCosine values for angles in array:\n[ 1.00000000e+00 8.66025404e-01 7.07106781e-01 5.00000000e-01\n 6.12323400e-17]\n\nTangent values for given angles: \n[ 0.00000000e+00 5.77350269e-01 1.00000000e+00 1.73205081e+00\n 1.63312394e+16]\n"
},
{
"code": null,
"e": 43198,
"s": 42980,
"text": "arcsin, arcos, and arctan functions return the trigonometric inverse of sin, cos, and tan of the given angle. The result of these functions can be verified by numpy.degrees() function by converting radians to degrees."
},
{
"code": null,
"e": 43206,
"s": 43198,
"text": "Example"
},
{
"code": null,
"e": 43981,
"s": 43206,
"text": "import numpy as np \na = np.array([0,30,45,60,90]) \n\nprint 'Array containing sine values:' \nsin = np.sin(a*np.pi/180) \nprint sin \nprint '\\n' \n\nprint 'Compute sine inverse of angles. Returned values are in radians.' \ninv = np.arcsin(sin) \nprint inv \nprint '\\n' \n\nprint 'Check result by converting to degrees:' \nprint np.degrees(inv) \nprint '\\n' \n\nprint 'arccos and arctan functions behave similarly:' \ncos = np.cos(a*np.pi/180) \nprint cos \nprint '\\n' \n\nprint 'Inverse of cos:' \ninv = np.arccos(cos) \nprint inv \nprint '\\n' \n\nprint 'In degrees:' \nprint np.degrees(inv) \nprint '\\n' \n\nprint 'Tan function:' \ntan = np.tan(a*np.pi/180) \nprint tan\nprint '\\n' \n\nprint 'Inverse of tan:' \ninv = np.arctan(tan) \nprint inv \nprint '\\n' \n\nprint 'In degrees:' \nprint np.degrees(inv) "
},
{
"code": null,
"e": 44008,
"s": 43981,
"text": "Its output is as follows −"
},
{
"code": null,
"e": 44795,
"s": 44008,
"text": "Array containing sine values:\n[ 0. 0.5 0.70710678 0.8660254 1. ]\n\nCompute sine inverse of angles. Returned values are in radians.\n[ 0. 0.52359878 0.78539816 1.04719755 1.57079633] \n\nCheck result by converting to degrees:\n[ 0. 30. 45. 60. 90.]\n\narccos and arctan functions behave similarly:\n[ 1.00000000e+00 8.66025404e-01 7.07106781e-01 5.00000000e-01 \n 6.12323400e-17] \n\nInverse of cos:\n[ 0. 0.52359878 0.78539816 1.04719755 1.57079633] \n\nIn degrees:\n[ 0. 30. 45. 60. 90.] \n\nTan function:\n[ 0.00000000e+00 5.77350269e-01 1.00000000e+00 1.73205081e+00 \n 1.63312394e+16]\n\nInverse of tan:\n[ 0. 0.52359878 0.78539816 1.04719755 1.57079633]\n\nIn degrees:\n[ 0. 30. 45. 60. 90.]\n"
},
{
"code": null,
"e": 44916,
"s": 44795,
"text": "This is a function that returns the value rounded to the desired precision. The function takes the following parameters."
},
{
"code": null,
"e": 44942,
"s": 44916,
"text": "numpy.around(a,decimals)\n"
},
{
"code": null,
"e": 44949,
"s": 44942,
"text": "Where,"
},
{
"code": null,
"e": 44951,
"s": 44949,
"text": "a"
},
{
"code": null,
"e": 44962,
"s": 44951,
"text": "Input data"
},
{
"code": null,
"e": 44971,
"s": 44962,
"text": "decimals"
},
{
"code": null,
"e": 45102,
"s": 44971,
"text": "The number of decimals to round to. Default is 0. If negative, the integer is rounded to position to the left of the decimal point"
},
{
"code": null,
"e": 45110,
"s": 45102,
"text": "Example"
},
{
"code": null,
"e": 45338,
"s": 45110,
"text": "import numpy as np \na = np.array([1.0,5.55, 123, 0.567, 25.532]) \n\nprint 'Original array:' \nprint a \nprint '\\n' \n\nprint 'After rounding:' \nprint np.around(a) \nprint np.around(a, decimals = 1) \nprint np.around(a, decimals = -1)"
},
{
"code": null,
"e": 45373,
"s": 45338,
"text": "It produces the following output −"
},
{
"code": null,
"e": 45772,
"s": 45373,
"text": "Original array: \n[ 1. 5.55 123. 0.567 25.532] \n\nAfter rounding: \n[ 1. 6. 123. 1. 26. ] \n[ 1. 5.6 123. 0.6 25.5] \n[ 0. 10. 120. 0. 30. ]\n"
},
{
"code": null,
"e": 45983,
"s": 45772,
"text": "This function returns the largest integer not greater than the input parameter. The floor of the scalar x is the largest integer i, such that i <= x. Note that in Python, flooring always is rounded away from 0."
},
{
"code": null,
"e": 45991,
"s": 45983,
"text": "Example"
},
{
"code": null,
"e": 46150,
"s": 45991,
"text": "import numpy as np \na = np.array([-1.7, 1.5, -0.2, 0.6, 10]) \n\nprint 'The given array:' \nprint a \nprint '\\n' \n\nprint 'The modified array:' \nprint np.floor(a)"
},
{
"code": null,
"e": 46185,
"s": 46150,
"text": "It produces the following output −"
},
{
"code": null,
"e": 46404,
"s": 46185,
"text": "The given array: \n[ -1.7 1.5 -0.2 0.6 10. ]\n\nThe modified array: \n[ -2. 1. -1. 0. 10.]\n"
},
{
"code": null,
"e": 46538,
"s": 46404,
"text": "The ceil() function returns the ceiling of an input value, i.e. the ceil of the scalar x is the smallest integer i, such that i >= x."
},
{
"code": null,
"e": 46546,
"s": 46538,
"text": "Example"
},
{
"code": null,
"e": 46704,
"s": 46546,
"text": "import numpy as np \na = np.array([-1.7, 1.5, -0.2, 0.6, 10]) \n\nprint 'The given array:' \nprint a \nprint '\\n' \n\nprint 'The modified array:' \nprint np.ceil(a)"
},
{
"code": null,
"e": 46743,
"s": 46704,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 46962,
"s": 46743,
"text": "The given array: \n[ -1.7 1.5 -0.2 0.6 10. ]\n\nThe modified array: \n[ -1. 2. -0. 1. 10.]\n"
},
{
"code": null,
"e": 47144,
"s": 46962,
"text": "Input arrays for performing arithmetic operations such as add(), subtract(), multiply(), and divide() must be either of the same shape or should conform to array broadcasting rules."
},
{
"code": null,
"e": 47591,
"s": 47144,
"text": "import numpy as np \na = np.arange(9, dtype = np.float_).reshape(3,3) \n\nprint 'First array:' \nprint a \nprint '\\n' \n\nprint 'Second array:' \nb = np.array([10,10,10]) \nprint b \nprint '\\n' \n\nprint 'Add the two arrays:' \nprint np.add(a,b) \nprint '\\n' \n\nprint 'Subtract the two arrays:' \nprint np.subtract(a,b) \nprint '\\n' \n\nprint 'Multiply the two arrays:' \nprint np.multiply(a,b) \nprint '\\n' \n\nprint 'Divide the two arrays:' \nprint np.divide(a,b)"
},
{
"code": null,
"e": 47630,
"s": 47591,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 48001,
"s": 47630,
"text": "First array:\n[[ 0. 1. 2.]\n [ 3. 4. 5.]\n [ 6. 7. 8.]]\n\nSecond array:\n[10 10 10]\n\nAdd the two arrays:\n[[ 10. 11. 12.]\n [ 13. 14. 15.]\n [ 16. 17. 18.]]\n\nSubtract the two arrays:\n[[-10. -9. -8.]\n [ -7. -6. -5.]\n [ -4. -3. -2.]]\n\nMultiply the two arrays:\n[[ 0. 10. 20.]\n [ 30. 40. 50.]\n [ 60. 70. 80.]]\n\nDivide the two arrays:\n[[ 0. 0.1 0.2]\n [ 0.3 0.4 0.5]\n [ 0.6 0.7 0.8]]\n"
},
{
"code": null,
"e": 48089,
"s": 48001,
"text": "Let us now discuss some of the other important arithmetic functions available in NumPy."
},
{
"code": null,
"e": 48332,
"s": 48089,
"text": "This function returns the reciprocal of argument, element-wise. For elements with absolute values larger than 1, the result is always 0 because of the way in which Python handles integer division. For integer 0, an overflow warning is issued."
},
{
"code": null,
"e": 48677,
"s": 48332,
"text": "import numpy as np \na = np.array([0.25, 1.33, 1, 0, 100]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'After applying reciprocal function:' \nprint np.reciprocal(a) \nprint '\\n' \n\nb = np.array([100], dtype = int) \nprint 'The second array is:' \nprint b \nprint '\\n' \n\nprint 'After applying reciprocal function:' \nprint np.reciprocal(b) "
},
{
"code": null,
"e": 48716,
"s": 48677,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 49137,
"s": 48716,
"text": "Our array is: \n[ 0.25 1.33 1. 0. 100. ]\n\nAfter applying reciprocal function: \nmain.py:9: RuntimeWarning: divide by zero encountered in reciprocal\n print np.reciprocal(a)\n[ 4. 0.7518797 1. inf 0.01 ]\n\nThe second array is:\n[100]\n\nAfter applying reciprocal function:\n[0]\n"
},
{
"code": null,
"e": 49291,
"s": 49137,
"text": "This function treats elements in the first input array as base and returns it raised to the power of the corresponding element in the second input array."
},
{
"code": null,
"e": 49585,
"s": 49291,
"text": "import numpy as np \na = np.array([10,100,1000]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Applying power function:' \nprint np.power(a,2) \nprint '\\n' \n\nprint 'Second array:' \nb = np.array([1,2,3]) \nprint b \nprint '\\n' \n\nprint 'Applying power function again:' \nprint np.power(a,b)"
},
{
"code": null,
"e": 49624,
"s": 49585,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 49863,
"s": 49624,
"text": "Our array is: \n[ 10 100 1000]\n\nApplying power function:\n[ 100 10000 1000000]\n\nSecond array:\n[1 2 3]\n\nApplying power function again:\n[ 10 10000 1000000000]\n"
},
{
"code": null,
"e": 50023,
"s": 49863,
"text": "This function returns the remainder of division of the corresponding elements in the input array. The function numpy.remainder() also produces the same result."
},
{
"code": null,
"e": 50316,
"s": 50023,
"text": "import numpy as np \na = np.array([10,20,30]) \nb = np.array([3,5,7]) \n\nprint 'First array:' \nprint a \nprint '\\n' \n\nprint 'Second array:' \nprint b \nprint '\\n' \n\nprint 'Applying mod() function:' \nprint np.mod(a,b) \nprint '\\n' \n\nprint 'Applying remainder() function:' \nprint np.remainder(a,b) "
},
{
"code": null,
"e": 50355,
"s": 50316,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 50710,
"s": 50355,
"text": "First array: \n[10 20 30]\n\nSecond array: \n[3 5 7]\n\nApplying mod() function: \n[1 0 2]\n\nApplying remainder() function: \n[1 0 2]\n"
},
{
"code": null,
"e": 50796,
"s": 50710,
"text": "The following functions are used to perform operations on array with complex numbers."
},
{
"code": null,
"e": 50868,
"s": 50796,
"text": "numpy.real() − returns the real part of the complex data type argument."
},
{
"code": null,
"e": 51017,
"s": 50940,
"text": "numpy.imag() − returns the imaginary part of the complex data type argument."
},
{
"code": null,
"e": 51202,
"s": 51094,
"text": "numpy.conj() − returns the complex conjugate, which is obtained by changing the sign of the imaginary part."
},
{
"code": null,
"e": 51485,
"s": 51310,
"text": "numpy.angle() − returns the angle of the complex argument. The function has degree parameter. If true, the angle in the degree is returned, otherwise the angle is in radians."
},
{
"code": null,
"e": 52129,
"s": 51660,
"text": "import numpy as np \na = np.array([-5.6j, 0.2j, 11. , 1+1j]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Applying real() function:' \nprint np.real(a) \nprint '\\n' \n\nprint 'Applying imag() function:' \nprint np.imag(a) \nprint '\\n' \n\nprint 'Applying conj() function:' \nprint np.conj(a) \nprint '\\n' \n\nprint 'Applying angle() function:' \nprint np.angle(a) \nprint '\\n' \n\nprint 'Applying angle() function again (result in degrees)' \nprint np.angle(a, deg = True)"
},
{
"code": null,
"e": 52168,
"s": 52129,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 52506,
"s": 52168,
"text": "Our array is:\n[ 0.-5.6j 0.+0.2j 11.+0.j 1.+1.j ]\n\nApplying real() function:\n[ 0. 0. 11. 1.]\n\nApplying imag() function:\n[-5.6 0.2 0. 1. ]\n\nApplying conj() function:\n[ 0.+5.6j 0.-0.2j 11.-0.j 1.-1.j ]\n\nApplying angle() function:\n[-1.57079633 1.57079633 0. 0.78539816]\n\nApplying angle() function again (result in degrees)\n[-90. 90. 0. 45.]\n"
},
{
"code": null,
"e": 52715,
"s": 52506,
"text": "NumPy has quite a few useful statistical functions for finding minimum, maximum, percentile standard deviation and variance, etc. from the given elements in the array. The functions are explained as follows −"
},
{
"code": null,
"e": 52829,
"s": 52715,
"text": "These functions return the minimum and the maximum from the elements in the given array along the specified axis."
},
{
"code": null,
"e": 53217,
"s": 52829,
"text": "import numpy as np \na = np.array([[3,7,5],[8,4,3],[2,4,9]]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Applying amin() function:' \nprint np.amin(a,1) \nprint '\\n' \n\nprint 'Applying amin() function again:' \nprint np.amin(a,0) \nprint '\\n' \n\nprint 'Applying amax() function:' \nprint np.amax(a) \nprint '\\n' \n\nprint 'Applying amax() function again:' \nprint np.amax(a, axis = 0)"
},
{
"code": null,
"e": 53256,
"s": 53217,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 53443,
"s": 53256,
"text": "Our array is:\n[[3 7 5]\n[8 4 3]\n[2 4 9]]\n\nApplying amin() function:\n[3 3 2]\n\nApplying amin() function again:\n[2 4 3]\n\nApplying amax() function:\n9\n\nApplying amax() function again:\n[8 7 9]\n"
},
{
"code": null,
"e": 53529,
"s": 53443,
"text": "The numpy.ptp() function returns the range (maximum-minimum) of values along an axis."
},
{
"code": null,
"e": 53864,
"s": 53529,
"text": "import numpy as np \na = np.array([[3,7,5],[8,4,3],[2,4,9]]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Applying ptp() function:' \nprint np.ptp(a) \nprint '\\n' \n\nprint 'Applying ptp() function along axis 1:' \nprint np.ptp(a, axis = 1) \nprint '\\n' \n\nprint 'Applying ptp() function along axis 0:'\nprint np.ptp(a, axis = 0) "
},
{
"code": null,
"e": 53903,
"s": 53864,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 54066,
"s": 53903,
"text": "Our array is:\n[[3 7 5]\n[8 4 3]\n[2 4 9]]\n\nApplying ptp() function:\n7\n\nApplying ptp() function along axis 1:\n[4 5 7]\n\nApplying ptp() function along axis 0:\n[6 3 6]\n"
},
{
"code": null,
"e": 54288,
"s": 54066,
"text": "Percentile (or a centile) is a measure used in statistics indicating the value below which a given percentage of observations in a group of observations fall. The function numpy.percentile() takes the following arguments."
},
{
"code": null,
"e": 54318,
"s": 54288,
"text": "numpy.percentile(a, q, axis)\n"
},
{
"code": null,
"e": 54325,
"s": 54318,
"text": "Where,"
},
{
"code": null,
"e": 54327,
"s": 54325,
"text": "a"
},
{
"code": null,
"e": 54339,
"s": 54327,
"text": "Input array"
},
{
"code": null,
"e": 54341,
"s": 54339,
"text": "q"
},
{
"code": null,
"e": 54389,
"s": 54341,
"text": "The percentile to compute must be between 0-100"
},
{
"code": null,
"e": 54394,
"s": 54389,
"text": "axis"
},
{
"code": null,
"e": 54450,
"s": 54394,
"text": "The axis along which the percentile is to be calculated"
},
{
"code": null,
"e": 54844,
"s": 54450,
"text": "import numpy as np \na = np.array([[30,40,70],[80,20,10],[50,90,60]]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Applying percentile() function:' \nprint np.percentile(a,50) \nprint '\\n' \n\nprint 'Applying percentile() function along axis 1:' \nprint np.percentile(a,50, axis = 1) \nprint '\\n' \n\nprint 'Applying percentile() function along axis 0:' \nprint np.percentile(a,50, axis = 0)"
},
{
"code": null,
"e": 54883,
"s": 54844,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 55095,
"s": 54883,
"text": "Our array is:\n[[30 40 70]\n [80 20 10]\n [50 90 60]]\n\nApplying percentile() function:\n50.0\n\nApplying percentile() function along axis 1:\n[ 40. 20. 60.]\n\nApplying percentile() function along axis 0:\n[ 50. 40. 60.]\n"
},
{
"code": null,
"e": 55262,
"s": 55095,
"text": "Median is defined as the value separating the higher half of a data sample from the lower half. The numpy.median() function is used as shown in the following program."
},
{
"code": null,
"e": 55624,
"s": 55262,
"text": "import numpy as np \na = np.array([[30,65,70],[80,95,10],[50,90,60]]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Applying median() function:' \nprint np.median(a) \nprint '\\n' \n\nprint 'Applying median() function along axis 0:' \nprint np.median(a, axis = 0) \nprint '\\n' \n \nprint 'Applying median() function along axis 1:' \nprint np.median(a, axis = 1)"
},
{
"code": null,
"e": 55663,
"s": 55624,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 55863,
"s": 55663,
"text": "Our array is:\n[[30 65 70]\n [80 95 10]\n [50 90 60]]\n\nApplying median() function:\n65.0\n\nApplying median() function along axis 0:\n[ 50. 90. 60.]\n\nApplying median() function along axis 1:\n[ 65. 80. 60.]\n"
},
{
"code": null,
"e": 56084,
"s": 55863,
"text": "Arithmetic mean is the sum of elements along an axis divided by the number of elements. The numpy.mean() function returns the arithmetic mean of elements in the array. If the axis is mentioned, it is calculated along it."
},
{
"code": null,
"e": 56424,
"s": 56084,
"text": "import numpy as np \na = np.array([[1,2,3],[3,4,5],[4,5,6]]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Applying mean() function:' \nprint np.mean(a) \nprint '\\n' \n\nprint 'Applying mean() function along axis 0:' \nprint np.mean(a, axis = 0) \nprint '\\n' \n\nprint 'Applying mean() function along axis 1:' \nprint np.mean(a, axis = 1)"
},
{
"code": null,
"e": 56463,
"s": 56424,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 56675,
"s": 56463,
"text": "Our array is:\n[[1 2 3]\n [3 4 5]\n [4 5 6]]\n\nApplying mean() function:\n3.66666666667\n\nApplying mean() function along axis 0:\n[ 2.66666667 3.66666667 4.66666667]\n\nApplying mean() function along axis 1:\n[ 2. 4. 5.]\n"
},
{
"code": null,
"e": 57036,
"s": 56675,
"text": "Weighted average is an average resulting from the multiplication of each component by a factor reflecting its importance. The numpy.average() function computes the weighted average of elements in an array according to their respective weight given in another array. The function can have an axis parameter. If the axis is not specified, the array is flattened."
},
{
"code": null,
"e": 57235,
"s": 57036,
"text": "Considering an array [1,2,3,4] and corresponding weights [4,3,2,1], the weighted average is calculated by adding the product of the corresponding elements and dividing the sum by the sum of weights."
},
{
"code": null,
"e": 57282,
"s": 57235,
"text": "Weighted average = (1*4+2*3+3*2+4*1)/(4+3+2+1)"
},
{
"code": null,
"e": 57784,
"s": 57282,
"text": "import numpy as np \na = np.array([1,2,3,4]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Applying average() function:' \nprint np.average(a) \nprint '\\n' \n\n# this is same as mean when weight is not specified \nwts = np.array([4,3,2,1]) \n\nprint 'Applying average() function again:' \nprint np.average(a,weights = wts) \nprint '\\n' \n\n# Returns the sum of weights, if the returned parameter is set to True. \nprint 'Sum of weights' \nprint np.average([1,2,3, 4],weights = [4,3,2,1], returned = True)"
},
{
"code": null,
"e": 57823,
"s": 57784,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 57950,
"s": 57823,
"text": "Our array is:\n[1 2 3 4]\n\nApplying average() function:\n2.5\n\nApplying average() function again:\n2.0\n\nSum of weights\n(2.0, 10.0)\n"
},
{
"code": null,
"e": 58023,
"s": 57950,
"text": "In a multi-dimensional array, the axis for computation can be specified."
},
{
"code": null,
"e": 58313,
"s": 58023,
"text": "import numpy as np \na = np.arange(6).reshape(3,2) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Modified array:' \nwt = np.array([3,5]) \nprint np.average(a, axis = 1, weights = wt) \nprint '\\n' \n\nprint 'Modified array:' \nprint np.average(a, axis = 1, weights = wt, returned = True)"
},
{
"code": null,
"e": 58352,
"s": 58313,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 58498,
"s": 58352,
"text": "Our array is:\n[[0 1]\n [2 3]\n [4 5]]\n\nModified array:\n[ 0.625 2.625 4.625]\n\nModified array:\n(array([ 0.625, 2.625, 4.625]), array([ 8., 8., 8.]))\n"
},
{
"code": null,
"e": 58635,
"s": 58498,
"text": "Standard deviation is the square root of the average of squared deviations from mean. The formula for standard deviation is as follows −"
},
{
"code": null,
"e": 58675,
"s": 58635,
"text": "std = sqrt(mean(abs(x - x.mean())**2))\n"
},
{
"code": null,
"e": 58871,
"s": 58675,
"text": "If the array is [1, 2, 3, 4], then its mean is 2.5. Hence the squared deviations are [2.25, 0.25, 0.25, 2.25] and the square root of its mean divided by 4, i.e., sqrt (5/4) is 1.1180339887498949."
},
{
"code": null,
"e": 58915,
"s": 58871,
"text": "import numpy as np \nprint np.std([1,2,3,4])"
},
{
"code": null,
"e": 58954,
"s": 58915,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 58975,
"s": 58954,
"text": "1.1180339887498949 \n"
},
{
"code": null,
"e": 59127,
"s": 58975,
"text": "Variance is the average of squared deviations, i.e., mean(abs(x - x.mean())**2). In other words, the standard deviation is the square root of variance."
},
{
"code": null,
"e": 59171,
"s": 59127,
"text": "import numpy as np \nprint np.var([1,2,3,4])"
},
{
"code": null,
"e": 59210,
"s": 59171,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 59216,
"s": 59210,
"text": "1.25\n"
},
{
"code": null,
"e": 59543,
"s": 59216,
"text": "A variety of sorting related functions are available in NumPy. These sorting functions implement different sorting algorithms, each of them characterized by the speed of execution, worst case performance, the workspace required and the stability of algorithms. Following table shows the comparison of three sorting algorithms."
},
{
"code": null,
"e": 59639,
"s": 59543,
"text": "The sort() function returns a sorted copy of the input array. It has the following parameters −"
},
{
"code": null,
"e": 59673,
"s": 59639,
"text": "numpy.sort(a, axis, kind, order)\n"
},
{
"code": null,
"e": 59680,
"s": 59673,
"text": "Where,"
},
{
"code": null,
"e": 59682,
"s": 59680,
"text": "a"
},
{
"code": null,
"e": 59701,
"s": 59682,
"text": "Array to be sorted"
},
{
"code": null,
"e": 59706,
"s": 59701,
"text": "axis"
},
{
"code": null,
"e": 59812,
"s": 59706,
"text": "The axis along which the array is to be sorted. If none, the array is flattened, sorting on the last axis"
},
{
"code": null,
"e": 59817,
"s": 59812,
"text": "kind"
},
{
"code": null,
"e": 59838,
"s": 59817,
"text": "Default is quicksort"
},
{
"code": null,
"e": 59844,
"s": 59838,
"text": "order"
},
{
"code": null,
"e": 59907,
"s": 59844,
"text": "If the array contains fields, the order of fields to be sorted"
},
{
"code": null,
"e": 60406,
"s": 59907,
"text": "import numpy as np \na = np.array([[3,7],[9,1]]) \n\nprint 'Our array is:' \nprint a \nprint '\\n'\n\nprint 'Applying sort() function:' \nprint np.sort(a) \nprint '\\n' \n \nprint 'Sort along axis 0:' \nprint np.sort(a, axis = 0) \nprint '\\n' \n\n# Order parameter in sort function \ndt = np.dtype([('name', 'S10'),('age', int)]) \na = np.array([(\"raju\",21),(\"anil\",25),(\"ravi\", 17), (\"amar\",27)], dtype = dt) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Order by name:' \nprint np.sort(a, order = 'name')"
},
{
"code": null,
"e": 60445,
"s": 60406,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 60691,
"s": 60445,
"text": "Our array is:\n[[3 7]\n [9 1]]\n\nApplying sort() function:\n[[3 7]\n [1 9]]\n\nSort along axis 0:\n[[3 1]\n [9 7]]\n\nOur array is:\n[('raju', 21) ('anil', 25) ('ravi', 17) ('amar', 27)]\n\nOrder by name:\n[('amar', 27) ('anil', 25) ('raju', 21) ('ravi', 17)]\n"
},
{
"code": null,
"e": 60916,
"s": 60691,
"text": "The numpy.argsort() function performs an indirect sort on input array, along the given axis and using a specified kind of sort to return the array of indices of data. This indices array is used to construct the sorted array."
},
{
"code": null,
"e": 61243,
"s": 60916,
"text": "import numpy as np \nx = np.array([3, 1, 2]) \n\nprint 'Our array is:' \nprint x \nprint '\\n' \n\nprint 'Applying argsort() to x:' \ny = np.argsort(x) \nprint y \nprint '\\n' \n\nprint 'Reconstruct original array in sorted order:' \nprint x[y] \nprint '\\n' \n\nprint 'Reconstruct the original array using loop:' \nfor i in y: \n print x[i],"
},
{
"code": null,
"e": 61282,
"s": 61243,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 61442,
"s": 61282,
"text": "Our array is:\n[3 1 2]\n\nApplying argsort() to x:\n[1 2 0]\n\nReconstruct original array in sorted order:\n[1 2 3]\n\nReconstruct the original array using loop:\n1 2 3\n"
},
{
"code": null,
"e": 61704,
"s": 61442,
"text": "function performs an indirect sort using a sequence of keys. The keys can be seen as a column in a spreadsheet. The function returns an array of indices, using which the sorted data can be obtained. Note, that the last key happens to be the primary key of sort."
},
{
"code": null,
"e": 61978,
"s": 61704,
"text": "import numpy as np \n\nnm = ('raju','anil','ravi','amar') \ndv = ('f.y.', 's.y.', 's.y.', 'f.y.') \nind = np.lexsort((dv,nm)) \n\nprint 'Applying lexsort() function:' \nprint ind \nprint '\\n' \n\nprint 'Use this index to get sorted data:' \nprint [nm[i] + \", \" + dv[i] for i in ind] "
},
{
"code": null,
"e": 62017,
"s": 61978,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 62150,
"s": 62017,
"text": "Applying lexsort() function:\n[3 1 0 2]\n\nUse this index to get sorted data:\n['amar, f.y.', 'anil, s.y.', 'raju, f.y.', 'ravi, s.y.']\n"
},
{
"code": null,
"e": 62335,
"s": 62150,
"text": "NumPy module has a number of functions for searching inside an array. Functions for finding the maximum, the minimum as well as the elements satisfying a given condition are available."
},
{
"code": null,
"e": 62441,
"s": 62335,
"text": "These two functions return the indices of maximum and minimum elements respectively along the given axis."
},
{
"code": null,
"e": 63315,
"s": 62441,
"text": "import numpy as np \na = np.array([[30,40,70],[80,20,10],[50,90,60]]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Applying argmax() function:' \nprint np.argmax(a) \nprint '\\n' \n\nprint 'Index of maximum number in flattened array' \nprint a.flatten() \nprint '\\n' \n\nprint 'Array containing indices of maximum along axis 0:' \nmaxindex = np.argmax(a, axis = 0) \nprint maxindex \nprint '\\n' \n\nprint 'Array containing indices of maximum along axis 1:' \nmaxindex = np.argmax(a, axis = 1) \nprint maxindex \nprint '\\n' \n\nprint 'Applying argmin() function:' \nminindex = np.argmin(a) \nprint minindex \nprint '\\n' \n \nprint 'Flattened array:' \nprint a.flatten()[minindex] \nprint '\\n' \n\nprint 'Flattened array along axis 0:' \nminindex = np.argmin(a, axis = 0) \nprint minindex\nprint '\\n'\n\nprint 'Flattened array along axis 1:' \nminindex = np.argmin(a, axis = 1) \nprint minindex"
},
{
"code": null,
"e": 63354,
"s": 63315,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 63758,
"s": 63354,
"text": "Our array is:\n[[30 40 70]\n [80 20 10]\n [50 90 60]]\n\nApplying argmax() function:\n7\n\nIndex of maximum number in flattened array\n[30 40 70 80 20 10 50 90 60]\n\nArray containing indices of maximum along axis 0:\n[1 2 0]\n\nArray containing indices of maximum along axis 1:\n[2 0 1]\n\nApplying argmin() function:\n5\n\nFlattened array:\n10\n\nFlattened array along axis 0:\n[0 1 1]\n\nFlattened array along axis 1:\n[0 2 0]\n"
},
{
"code": null,
"e": 63848,
"s": 63758,
"text": "The numpy.nonzero() function returns the indices of non-zero elements in the input array."
},
{
"code": null,
"e": 64021,
"s": 63848,
"text": "import numpy as np \na = np.array([[30,40,0],[0,20,10],[50,0,60]]) \n\nprint 'Our array is:' \nprint a \nprint '\\n' \n\nprint 'Applying nonzero() function:' \nprint np.nonzero (a)"
},
{
"code": null,
"e": 64060,
"s": 64021,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 64195,
"s": 64060,
"text": "Our array is:\n[[30 40 0]\n [ 0 20 10]\n [50 0 60]]\n\nApplying nonzero() function:\n(array([0, 0, 1, 1, 2, 2]), array([0, 1, 1, 2, 0, 2]))\n"
},
{
"code": null,
"e": 64306,
"s": 64195,
"text": "The where() function returns the indices of elements in an input array where the given condition is satisfied."
},
{
"code": null,
"e": 64538,
"s": 64306,
"text": "import numpy as np \nx = np.arange(9.).reshape(3, 3) \n\nprint 'Our array is:' \nprint x \n\nprint 'Indices of elements > 3' \ny = np.where(x > 3) \nprint y \n\nprint 'Use these indices to get elements satisfying the condition' \nprint x[y]"
},
{
"code": null,
"e": 64577,
"s": 64538,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 64784,
"s": 64577,
"text": "Our array is:\n[[ 0. 1. 2.]\n [ 3. 4. 5.]\n [ 6. 7. 8.]]\n\nIndices of elements > 3\n(array([1, 1, 2, 2, 2]), array([1, 2, 0, 1, 2]))\n\nUse these indices to get elements satisfying the condition\n[ 4. 5. 6. 7. 8.]\n"
},
{
"code": null,
"e": 64854,
"s": 64784,
"text": "The extract() function returns the elements satisfying any condition."
},
{
"code": null,
"e": 65128,
"s": 64854,
"text": "import numpy as np \nx = np.arange(9.).reshape(3, 3) \n\nprint 'Our array is:' \nprint x \n\n# define a condition \ncondition = np.mod(x,2) == 0 \n\nprint 'Element-wise value of condition' \nprint condition \n\nprint 'Extract elements using condition' \nprint np.extract(condition, x)"
},
{
"code": null,
"e": 65167,
"s": 65128,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 65368,
"s": 65167,
"text": "Our array is:\n[[ 0. 1. 2.]\n [ 3. 4. 5.]\n [ 6. 7. 8.]]\n\nElement-wise value of condition\n[[ True False True]\n [False True False]\n [ True False True]]\n\nExtract elements using condition\n[ 0. 2. 4. 6. 8.]\n"
},
{
"code": null,
"e": 65615,
"s": 65368,
"text": "We have seen that the data stored in the memory of a computer depends on which architecture the CPU uses. It may be little-endian (least significant is stored in the smallest address) or big-endian (most significant byte in the smallest address)."
},
{
"code": null,
"e": 65723,
"s": 65615,
"text": "The numpy.ndarray.byteswap() function toggles between the two representations: bigendian and little-endian."
},
{
"code": null,
"e": 66123,
"s": 65723,
"text": "import numpy as np \na = np.array([1, 256, 8755], dtype = np.int16) \n\nprint 'Our array is:' \nprint a \n\nprint 'Representation of data in memory in hexadecimal form:' \nprint map(hex,a) \n# byteswap() function swaps in place by passing True parameter \n\nprint 'Applying byteswap() function:' \nprint a.byteswap(True) \n\nprint 'In hexadecimal form:' \nprint map(hex,a) \n# We can see the bytes being swapped"
},
{
"code": null,
"e": 66162,
"s": 66123,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 66366,
"s": 66162,
"text": "Our array is:\n[1 256 8755]\n\nRepresentation of data in memory in hexadecimal form:\n['0x1', '0x100', '0x2233']\n\nApplying byteswap() function:\n[256 1 13090]\n\nIn hexadecimal form:\n['0x100', '0x1', '0x3322']\n"
},
{
"code": null,
"e": 66651,
"s": 66366,
"text": "While executing the functions, some of them return a copy of the input array, while some return the view. When the contents are physically stored in another location, it is called Copy. If on the other hand, a different view of the same memory content is provided, we call it as View."
},
{
"code": null,
"e": 66862,
"s": 66651,
"text": "Simple assignments do not make the copy of array object. Instead, it uses the same id() of the original array to access it. The id() returns a universal identifier of Python object, similar to the pointer in C."
},
{
"code": null,
"e": 67005,
"s": 66862,
"text": "Furthermore, any changes in either gets reflected in the other. For example, the changing shape of one will change the shape of the other too."
},
{
"code": null,
"e": 67315,
"s": 67005,
"text": "import numpy as np \na = np.arange(6) \n\nprint 'Our array is:' \nprint a \n\nprint 'Applying id() function:' \nprint id(a) \n\nprint 'a is assigned to b:' \nb = a \nprint b \n\nprint 'b has same id():' \nprint id(b) \n\nprint 'Change shape of b:' \nb.shape = 3,2 \nprint b \n\nprint 'Shape of a also gets changed:' \nprint a"
},
{
"code": null,
"e": 67354,
"s": 67315,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 67587,
"s": 67354,
"text": "Our array is:\n[0 1 2 3 4 5]\n\nApplying id() function:\n139747815479536\n\na is assigned to b:\n[0 1 2 3 4 5]\nb has same id():\n139747815479536\n\nChange shape of b:\n[[0 1]\n [2 3]\n [4 5]]\n\nShape of a also gets changed:\n[[0 1]\n [2 3]\n [4 5]]\n"
},
{
"code": null,
"e": 67804,
"s": 67587,
"text": "NumPy has ndarray.view() method which is a new array object that looks at the same data of the original array. Unlike the earlier case, change in dimensions of the new array doesn’t change dimensions of the original."
},
{
"code": null,
"e": 68224,
"s": 67804,
"text": "import numpy as np \n# To begin with, a is 3X2 array \na = np.arange(6).reshape(3,2) \n\nprint 'Array a:' \nprint a \n\nprint 'Create view of a:' \nb = a.view() \nprint b \n\nprint 'id() for both the arrays are different:' \nprint 'id() of a:'\nprint id(a) \nprint 'id() of b:' \nprint id(b) \n\n# Change the shape of b. It does not change the shape of a \nb.shape = 2,3 \n\nprint 'Shape of b:' \nprint b \n\nprint 'Shape of a:' \nprint a"
},
{
"code": null,
"e": 68263,
"s": 68224,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 68498,
"s": 68263,
"text": "Array a:\n[[0 1]\n [2 3]\n [4 5]]\n\nCreate view of a:\n[[0 1]\n [2 3]\n [4 5]]\n\nid() for both the arrays are different:\nid() of a:\n140424307227264\nid() of b:\n140424151696288\n\nShape of b:\n[[0 1 2]\n [3 4 5]]\n\nShape of a:\n[[0 1]\n [2 3]\n [4 5]]\n"
},
{
"code": null,
"e": 68532,
"s": 68498,
"text": "Slice of an array creates a view."
},
{
"code": null,
"e": 68674,
"s": 68532,
"text": "import numpy as np \na = np.array([[10,10], [2,3], [4,5]]) \n\nprint 'Our array is:' \nprint a \n\nprint 'Create a slice:' \ns = a[:, :2] \nprint s "
},
{
"code": null,
"e": 68713,
"s": 68674,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 68797,
"s": 68713,
"text": "Our array is:\n[[10 10]\n [ 2 3]\n [ 4 5]]\n\nCreate a slice:\n[[10 10]\n [ 2 3]\n [ 4 5]]\n"
},
{
"code": null,
"e": 68938,
"s": 68797,
"text": "The ndarray.copy() function creates a deep copy. It is a complete copy of the array and its data, and doesn’t share with the original array."
},
{
"code": null,
"e": 69315,
"s": 68938,
"text": "import numpy as np \na = np.array([[10,10], [2,3], [4,5]]) \n\nprint 'Array a is:' \nprint a \n\nprint 'Create a deep copy of a:' \nb = a.copy() \nprint 'Array b is:' \nprint b \n\n#b does not share any memory of a \nprint 'Can we write b is a' \nprint b is a \n\nprint 'Change the contents of b:' \nb[0,0] = 100 \n\nprint 'Modified array b:' \nprint b \n\nprint 'a remains unchanged:' \nprint a"
},
{
"code": null,
"e": 69354,
"s": 69315,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 69603,
"s": 69354,
"text": "Array a is:\n[[10 10]\n [ 2 3]\n [ 4 5]]\n\nCreate a deep copy of a:\nArray b is:\n[[10 10]\n [ 2 3]\n [ 4 5]]\nCan we write b is a\nFalse\n\nChange the contents of b:\nModified array b:\n[[100 10]\n [ 2 3]\n [ 4 5]]\n\na remains unchanged:\n[[10 10]\n [ 2 3]\n [ 4 5]]\n"
},
{
"code": null,
"e": 69732,
"s": 69603,
"text": "NumPy package contains a Matrix library numpy.matlib. This module has functions that return matrices instead of ndarray objects."
},
{
"code": null,
"e": 69860,
"s": 69732,
"text": "The matlib.empty() function returns a new matrix without initializing the entries. The function takes the following parameters."
},
{
"code": null,
"e": 69901,
"s": 69860,
"text": "numpy.matlib.empty(shape, dtype, order)\n"
},
{
"code": null,
"e": 69908,
"s": 69901,
"text": "Where,"
},
{
"code": null,
"e": 69914,
"s": 69908,
"text": "shape"
},
{
"code": null,
"e": 69971,
"s": 69914,
"text": "int or tuple of int defining the shape of the new matrix"
},
{
"code": null,
"e": 69977,
"s": 69971,
"text": "Dtype"
},
{
"code": null,
"e": 70011,
"s": 69977,
"text": "Optional. Data type of the output"
},
{
"code": null,
"e": 70017,
"s": 70011,
"text": "order"
},
{
"code": null,
"e": 70024,
"s": 70017,
"text": "C or F"
},
{
"code": null,
"e": 70122,
"s": 70024,
"text": "import numpy.matlib \nimport numpy as np \n\nprint np.matlib.empty((2,2)) \n# filled with random data"
},
{
"code": null,
"e": 70161,
"s": 70122,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 70243,
"s": 70161,
"text": "[[ 2.12199579e-314, 4.24399158e-314] \n [ 4.24399158e-314, 2.12199579e-314]] \n"
},
{
"code": null,
"e": 70295,
"s": 70243,
"text": "This function returns the matrix filled with zeros."
},
{
"code": null,
"e": 70366,
"s": 70295,
"text": "import numpy.matlib \nimport numpy as np \nprint np.matlib.zeros((2,2)) "
},
{
"code": null,
"e": 70405,
"s": 70366,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 70431,
"s": 70405,
"text": "[[ 0. 0.] \n [ 0. 0.]] \n"
},
{
"code": null,
"e": 70480,
"s": 70431,
"text": "This function returns the matrix filled with 1s."
},
{
"code": null,
"e": 70549,
"s": 70480,
"text": "import numpy.matlib \nimport numpy as np \nprint np.matlib.ones((2,2))"
},
{
"code": null,
"e": 70588,
"s": 70549,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 70614,
"s": 70588,
"text": "[[ 1. 1.] \n [ 1. 1.]] \n"
},
{
"code": null,
"e": 70750,
"s": 70614,
"text": "This function returns a matrix with 1 along the diagonal elements and the zeros elsewhere. The function takes the following parameters."
},
{
"code": null,
"e": 70783,
"s": 70750,
"text": "numpy.matlib.eye(n, M,k, dtype)\n"
},
{
"code": null,
"e": 70790,
"s": 70783,
"text": "Where,"
},
{
"code": null,
"e": 70792,
"s": 70790,
"text": "n"
},
{
"code": null,
"e": 70835,
"s": 70792,
"text": "The number of rows in the resulting matrix"
},
{
"code": null,
"e": 70837,
"s": 70835,
"text": "M"
},
{
"code": null,
"e": 70874,
"s": 70837,
"text": "The number of columns, defaults to n"
},
{
"code": null,
"e": 70876,
"s": 70874,
"text": "k"
},
{
"code": null,
"e": 70894,
"s": 70876,
"text": "Index of diagonal"
},
{
"code": null,
"e": 70900,
"s": 70894,
"text": "dtype"
},
{
"code": null,
"e": 70924,
"s": 70900,
"text": "Data type of the output"
},
{
"code": null,
"e": 71021,
"s": 70924,
"text": "import numpy.matlib \nimport numpy as np \nprint np.matlib.eye(n = 3, M = 4, k = 0, dtype = float)"
},
{
"code": null,
"e": 71060,
"s": 71021,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 71122,
"s": 71060,
"text": "[[ 1. 0. 0. 0.] \n [ 0. 1. 0. 0.] \n [ 0. 0. 1. 0.]] \n"
},
{
"code": null,
"e": 71277,
"s": 71122,
"text": "The numpy.matlib.identity() function returns the Identity matrix of the given size. An identity matrix is a square matrix with all diagonal elements as 1."
},
{
"code": null,
"e": 71361,
"s": 71277,
"text": "import numpy.matlib \nimport numpy as np \nprint np.matlib.identity(5, dtype = float)"
},
{
"code": null,
"e": 71400,
"s": 71361,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 71522,
"s": 71400,
"text": "[[ 1. 0. 0. 0. 0.] \n [ 0. 1. 0. 0. 0.] \n [ 0. 0. 1. 0. 0.] \n [ 0. 0. 0. 1. 0.] \n [ 0. 0. 0. 0. 1.]] \n"
},
{
"code": null,
"e": 71617,
"s": 71522,
"text": "The numpy.matlib.rand() function returns a matrix of the given size filled with random values."
},
{
"code": null,
"e": 71684,
"s": 71617,
"text": "import numpy.matlib \nimport numpy as np \nprint np.matlib.rand(3,3)"
},
{
"code": null,
"e": 71723,
"s": 71684,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 71844,
"s": 71723,
"text": "[[ 0.82674464 0.57206837 0.15497519] \n [ 0.33857374 0.35742401 0.90895076] \n [ 0.03968467 0.13962089 0.39665201]]\n"
},
{
"code": null,
"e": 71973,
"s": 71844,
"text": "Note that a matrix is always two-dimensional, whereas ndarray is an n-dimensional array. Both the objects are inter-convertible."
},
{
"code": null,
"e": 72051,
"s": 71973,
"text": "import numpy.matlib \nimport numpy as np \n\ni = np.matrix('1,2;3,4') \nprint i "
},
{
"code": null,
"e": 72090,
"s": 72051,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 72109,
"s": 72090,
"text": "[[1 2] \n [3 4]]\n"
},
{
"code": null,
"e": 72180,
"s": 72109,
"text": "import numpy.matlib \nimport numpy as np \n\nj = np.asarray(i) \nprint j "
},
{
"code": null,
"e": 72219,
"s": 72180,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 72239,
"s": 72219,
"text": "[[1 2] \n [3 4]] \n"
},
{
"code": null,
"e": 72311,
"s": 72239,
"text": "import numpy.matlib \nimport numpy as np \n\nk = np.asmatrix (j) \nprint k"
},
{
"code": null,
"e": 72350,
"s": 72311,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 72369,
"s": 72350,
"text": "[[1 2] \n [3 4]]\n"
},
{
"code": null,
"e": 72562,
"s": 72369,
"text": "NumPy package contains numpy.linalg module that provides all the functionality required for linear algebra. Some of the important functions in this module are described in the following table."
},
{
"code": null,
"e": 72592,
"s": 72562,
"text": "Dot product of the two arrays"
},
{
"code": null,
"e": 72623,
"s": 72592,
"text": "Dot product of the two vectors"
},
{
"code": null,
"e": 72655,
"s": 72623,
"text": "Inner product of the two arrays"
},
{
"code": null,
"e": 72688,
"s": 72655,
"text": "Matrix product of the two arrays"
},
{
"code": null,
"e": 72726,
"s": 72688,
"text": "Computes the determinant of the array"
},
{
"code": null,
"e": 72760,
"s": 72726,
"text": "Solves the linear matrix equation"
},
{
"code": null,
"e": 72807,
"s": 72760,
"text": "Finds the multiplicative inverse of the matrix"
},
{
"code": null,
"e": 73030,
"s": 72807,
"text": "Matplotlib is a plotting library for Python. It is used along with NumPy to provide an environment that is an effective open source alternative for MatLab. It can also be used with graphics toolkits like PyQt and wxPython."
},
{
"code": null,
"e": 73316,
"s": 73030,
"text": "Matplotlib module was first written by John D. Hunter. Since 2012, Michael Droettboom is the principal developer. Currently, Matplotlib ver. 1.5.1 is the stable version available. The package is available in binary distribution as well as in the source code form on www.matplotlib.org."
},
{
"code": null,
"e": 73415,
"s": 73316,
"text": "Conventionally, the package is imported into the Python script by adding the following statement −"
},
{
"code": null,
"e": 73453,
"s": 73415,
"text": "from matplotlib import pyplot as plt\n"
},
{
"code": null,
"e": 73603,
"s": 73453,
"text": "Here pyplot() is the most important function in matplotlib library, which is used to plot 2D data. The following script plots the equation y = 2x + 5"
},
{
"code": null,
"e": 73814,
"s": 73603,
"text": "import numpy as np \nfrom matplotlib import pyplot as plt \n\nx = np.arange(1,11) \ny = 2 * x + 5 \nplt.title(\"Matplotlib demo\") \nplt.xlabel(\"x axis caption\") \nplt.ylabel(\"y axis caption\") \nplt.plot(x,y) \nplt.show()"
},
{
"code": null,
"e": 74069,
"s": 73814,
"text": "An ndarray object x is created from np.arange() function as the values on the x axis. The corresponding values on the y axis are stored in another ndarray object y. These values are plotted using plot() function of pyplot submodule of matplotlib package."
},
{
"code": null,
"e": 74131,
"s": 74069,
"text": "The graphical representation is displayed by show() function."
},
{
"code": null,
"e": 74184,
"s": 74131,
"text": "The above code should produce the following output −"
},
{
"code": null,
"e": 74347,
"s": 74184,
"text": "Instead of the linear graph, the values can be displayed discretely by adding a format string to the plot() function. Following formatting characters can be used."
},
{
"code": null,
"e": 74351,
"s": 74347,
"text": "'-'"
},
{
"code": null,
"e": 74368,
"s": 74351,
"text": "Solid line style"
},
{
"code": null,
"e": 74373,
"s": 74368,
"text": "'--'"
},
{
"code": null,
"e": 74391,
"s": 74373,
"text": "Dashed line style"
},
{
"code": null,
"e": 74396,
"s": 74391,
"text": "'-.'"
},
{
"code": null,
"e": 74416,
"s": 74396,
"text": "Dash-dot line style"
},
{
"code": null,
"e": 74420,
"s": 74416,
"text": "':'"
},
{
"code": null,
"e": 74438,
"s": 74420,
"text": "Dotted line style"
},
{
"code": null,
"e": 74442,
"s": 74438,
"text": "'.'"
},
{
"code": null,
"e": 74455,
"s": 74442,
"text": "Point marker"
},
{
"code": null,
"e": 74459,
"s": 74455,
"text": "','"
},
{
"code": null,
"e": 74472,
"s": 74459,
"text": "Pixel marker"
},
{
"code": null,
"e": 74476,
"s": 74472,
"text": "'o'"
},
{
"code": null,
"e": 74490,
"s": 74476,
"text": "Circle marker"
},
{
"code": null,
"e": 74494,
"s": 74490,
"text": "'v'"
},
{
"code": null,
"e": 74515,
"s": 74494,
"text": "Triangle_down marker"
},
{
"code": null,
"e": 74519,
"s": 74515,
"text": "'^'"
},
{
"code": null,
"e": 74538,
"s": 74519,
"text": "Triangle_up marker"
},
{
"code": null,
"e": 74542,
"s": 74538,
"text": "'<'"
},
{
"code": null,
"e": 74563,
"s": 74542,
"text": "Triangle_left marker"
},
{
"code": null,
"e": 74567,
"s": 74563,
"text": "'>'"
},
{
"code": null,
"e": 74589,
"s": 74567,
"text": "Triangle_right marker"
},
{
"code": null,
"e": 74593,
"s": 74589,
"text": "'1'"
},
{
"code": null,
"e": 74609,
"s": 74593,
"text": "Tri_down marker"
},
{
"code": null,
"e": 74613,
"s": 74609,
"text": "'2'"
},
{
"code": null,
"e": 74627,
"s": 74613,
"text": "Tri_up marker"
},
{
"code": null,
"e": 74631,
"s": 74627,
"text": "'3'"
},
{
"code": null,
"e": 74647,
"s": 74631,
"text": "Tri_left marker"
},
{
"code": null,
"e": 74651,
"s": 74647,
"text": "'4'"
},
{
"code": null,
"e": 74668,
"s": 74651,
"text": "Tri_right marker"
},
{
"code": null,
"e": 74672,
"s": 74668,
"text": "'s'"
},
{
"code": null,
"e": 74686,
"s": 74672,
"text": "Square marker"
},
{
"code": null,
"e": 74690,
"s": 74686,
"text": "'p'"
},
{
"code": null,
"e": 74706,
"s": 74690,
"text": "Pentagon marker"
},
{
"code": null,
"e": 74710,
"s": 74706,
"text": "'*'"
},
{
"code": null,
"e": 74722,
"s": 74710,
"text": "Star marker"
},
{
"code": null,
"e": 74726,
"s": 74722,
"text": "'h'"
},
{
"code": null,
"e": 74742,
"s": 74726,
"text": "Hexagon1 marker"
},
{
"code": null,
"e": 74746,
"s": 74742,
"text": "'H'"
},
{
"code": null,
"e": 74762,
"s": 74746,
"text": "Hexagon2 marker"
},
{
"code": null,
"e": 74766,
"s": 74762,
"text": "'+'"
},
{
"code": null,
"e": 74778,
"s": 74766,
"text": "Plus marker"
},
{
"code": null,
"e": 74782,
"s": 74778,
"text": "'x'"
},
{
"code": null,
"e": 74791,
"s": 74782,
"text": "X marker"
},
{
"code": null,
"e": 74795,
"s": 74791,
"text": "'D'"
},
{
"code": null,
"e": 74810,
"s": 74795,
"text": "Diamond marker"
},
{
"code": null,
"e": 74814,
"s": 74810,
"text": "'d'"
},
{
"code": null,
"e": 74834,
"s": 74814,
"text": "Thin_diamond marker"
},
{
"code": null,
"e": 74838,
"s": 74834,
"text": "'|'"
},
{
"code": null,
"e": 74851,
"s": 74838,
"text": "Vline marker"
},
{
"code": null,
"e": 74855,
"s": 74851,
"text": "'_'"
},
{
"code": null,
"e": 74868,
"s": 74855,
"text": "Hline marker"
},
{
"code": null,
"e": 74920,
"s": 74868,
"text": "The following color abbreviations are also defined."
},
{
"code": null,
"e": 75056,
"s": 74920,
"text": "To display the circles representing points, instead of the line in the above example, use “ob” as the format string in plot() function."
},
{
"code": null,
"e": 75273,
"s": 75056,
"text": "import numpy as np \nfrom matplotlib import pyplot as plt \n\nx = np.arange(1,11) \ny = 2 * x + 5 \nplt.title(\"Matplotlib demo\") \nplt.xlabel(\"x axis caption\") \nplt.ylabel(\"y axis caption\") \nplt.plot(x,y,\"ob\") \nplt.show() "
},
{
"code": null,
"e": 75326,
"s": 75273,
"text": "The above code should produce the following output −"
},
{
"code": null,
"e": 75393,
"s": 75326,
"text": "The following script produces the sine wave plot using matplotlib."
},
{
"code": null,
"e": 75653,
"s": 75393,
"text": "import numpy as np \nimport matplotlib.pyplot as plt \n\n# Compute the x and y coordinates for points on a sine curve \nx = np.arange(0, 3 * np.pi, 0.1) \ny = np.sin(x) \nplt.title(\"sine wave form\") \n\n# Plot the points using matplotlib \nplt.plot(x, y) \nplt.show() "
},
{
"code": null,
"e": 75793,
"s": 75653,
"text": "The subplot() function allows you to plot different things in the same figure. In the following script, sine and cosine values are plotted."
},
{
"code": null,
"e": 76355,
"s": 75793,
"text": "import numpy as np \nimport matplotlib.pyplot as plt \n \n# Compute the x and y coordinates for points on sine and cosine curves \nx = np.arange(0, 3 * np.pi, 0.1) \ny_sin = np.sin(x) \ny_cos = np.cos(x) \n \n# Set up a subplot grid that has height 2 and width 1, \n# and set the first such subplot as active. \nplt.subplot(2, 1, 1)\n \n# Make the first plot \nplt.plot(x, y_sin) \nplt.title('Sine') \n \n# Set the second subplot as active, and make the second plot. \nplt.subplot(2, 1, 2) \nplt.plot(x, y_cos) \nplt.title('Cosine') \n \n# Show the figure. \nplt.show()"
},
{
"code": null,
"e": 76408,
"s": 76355,
"text": "The above code should produce the following output −"
},
{
"code": null,
"e": 76553,
"s": 76408,
"text": "The pyplot submodule provides bar() function to generate bar graphs. The following example produces the bar graph of two sets of x and y arrays."
},
{
"code": null,
"e": 76814,
"s": 76553,
"text": "from matplotlib import pyplot as plt \nx = [5,8,10] \ny = [12,16,6] \n\nx2 = [6,9,11] \ny2 = [6,15,7] \nplt.bar(x, y, align = 'center') \nplt.bar(x2, y2, color = 'g', align = 'center') \nplt.title('Bar graph') \nplt.ylabel('Y axis') \nplt.xlabel('X axis') \n\nplt.show()"
},
{
"code": null,
"e": 76862,
"s": 76814,
"text": "This code should produce the following output −"
},
{
"code": null,
"e": 77102,
"s": 76862,
"text": "NumPy has a numpy.histogram() function that is a graphical representation of the frequency distribution of data. Rectangles of equal horizontal size corresponding to class interval called bin and variable height corresponding to frequency."
},
{
"code": null,
"e": 77253,
"s": 77102,
"text": "The numpy.histogram() function takes the input array and bins as two parameters. The successive elements in bin array act as the boundary of each bin."
},
{
"code": null,
"e": 77461,
"s": 77253,
"text": "import numpy as np \n \na = np.array([22,87,5,43,56,73,55,54,11,20,51,5,79,31,27]) \nnp.histogram(a,bins = [0,20,40,60,80,100]) \nhist,bins = np.histogram(a,bins = [0,20,40,60,80,100]) \nprint hist \nprint bins "
},
{
"code": null,
"e": 77500,
"s": 77461,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 77533,
"s": 77500,
"text": "[3 4 5 2 1]\n[0 20 40 60 80 100]\n"
},
{
"code": null,
"e": 77745,
"s": 77533,
"text": "Matplotlib can convert this numeric representation of histogram into a graph. The plt() function of pyplot submodule takes the array containing the data and bin array as parameters and converts into a histogram."
},
{
"code": null,
"e": 77944,
"s": 77745,
"text": "from matplotlib import pyplot as plt \nimport numpy as np \n \na = np.array([22,87,5,43,56,73,55,54,11,20,51,5,79,31,27]) \nplt.hist(a, bins = [0,20,40,60,80,100]) \nplt.title(\"histogram\") \nplt.show()"
},
{
"code": null,
"e": 77985,
"s": 77944,
"text": "It should produce the following output −"
},
{
"code": null,
"e": 78086,
"s": 77985,
"text": "The ndarray objects can be saved to and loaded from the disk files. The IO functions available are −"
},
{
"code": null,
"e": 78162,
"s": 78086,
"text": "load() and save() functions handle /numPy binary files (with npy extension)"
},
{
"code": null,
"e": 78238,
"s": 78162,
"text": "load() and save() functions handle /numPy binary files (with npy extension)"
},
{
"code": null,
"e": 78297,
"s": 78238,
"text": "loadtxt() and savetxt() functions handle normal text files"
},
{
"code": null,
"e": 78356,
"s": 78297,
"text": "loadtxt() and savetxt() functions handle normal text files"
},
{
"code": null,
"e": 78640,
"s": 78356,
"text": "NumPy introduces a simple file format for ndarray objects. This .npy file stores data, shape, dtype and other information required to reconstruct the ndarray in a disk file such that the array is correctly retrieved even if the file is on another machine with different architecture."
},
{
"code": null,
"e": 78720,
"s": 78640,
"text": "The numpy.save() file stores the input array in a disk file with npy extension."
},
{
"code": null,
"e": 78788,
"s": 78720,
"text": "import numpy as np \na = np.array([1,2,3,4,5]) \nnp.save('outfile',a)"
},
{
"code": null,
"e": 78848,
"s": 78788,
"text": "To reconstruct array from outfile.npy, use load() function."
},
{
"code": null,
"e": 78905,
"s": 78848,
"text": "import numpy as np \nb = np.load('outfile.npy') \nprint b "
},
{
"code": null,
"e": 78944,
"s": 78905,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 78968,
"s": 78944,
"text": "array([1, 2, 3, 4, 5])\n"
},
{
"code": null,
"e": 79165,
"s": 78968,
"text": "The save() and load() functions accept an additional Boolean parameter allow_pickles. A pickle in Python is used to serialize and de-serialize objects before saving to or reading from a disk file."
},
{
"code": null,
"e": 79280,
"s": 79165,
"text": "The storage and retrieval of array data in simple text file format is done with savetxt() and loadtxt() functions."
},
{
"code": null,
"e": 79389,
"s": 79280,
"text": "import numpy as np \n\na = np.array([1,2,3,4,5]) \nnp.savetxt('out.txt',a) \nb = np.loadtxt('out.txt') \nprint b "
},
{
"code": null,
"e": 79428,
"s": 79389,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 79452,
"s": 79428,
"text": "[ 1. 2. 3. 4. 5.] \n"
},
{
"code": null,
"e": 79567,
"s": 79452,
"text": "The savetxt() and loadtxt() functions accept additional optional parameters such as header, footer, and delimiter."
},
{
"code": null,
"e": 79600,
"s": 79567,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 79617,
"s": 79600,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 79650,
"s": 79617,
"text": "\n 19 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 79685,
"s": 79650,
"text": " DATAhill Solutions Srinivas Reddy"
},
{
"code": null,
"e": 79718,
"s": 79685,
"text": "\n 12 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 79753,
"s": 79718,
"text": " DATAhill Solutions Srinivas Reddy"
},
{
"code": null,
"e": 79788,
"s": 79753,
"text": "\n 10 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 79800,
"s": 79788,
"text": " Akbar Khan"
},
{
"code": null,
"e": 79833,
"s": 79800,
"text": "\n 20 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 79848,
"s": 79833,
"text": " Pruthviraja L"
},
{
"code": null,
"e": 79881,
"s": 79848,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 79888,
"s": 79881,
"text": " Anmol"
},
{
"code": null,
"e": 79895,
"s": 79888,
"text": " Print"
},
{
"code": null,
"e": 79906,
"s": 79895,
"text": " Add Notes"
}
] |
Java ResultSetMetaData getColumnCount() method with example?
|
The getColumnCount() method of the ResultSetMetaData (interface) retrieves the number of columns in the current ResultSet object.
This method returns an integer value representing the number of columns.
To get the ResultSetMetaData object, you need to:
Register the Driver: Select the required database and register the Driver class of that particular database using the registerDriver() method of the DriverManager class, or the forName() method of the class named Class.
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
Get connection: Create a connection object by passing the URL of the database, username and password of a user in the database (in string format) as parameters to the getConnection() method of the DriverManager class.
Connection mysqlCon = DriverManager.getConnection(mysqlUrl, "root", "password");
Create a Statement object: Create a Statement object using the createStatement method of the connection interface.
Statement stmt = con.createStatement();
Execute the Query: Execute the SELECT query using the executeQuery() methods of the Statement interface and Retrieve the results into the ResultSet object.
String query = "Select * from MyPlayers";
ResultSet rs = stmt.executeQuery(query);
Get the ResultSetMetaData object: Retrieve the ResultSetMetaData object of the current ResultSet by invoking the getMetaData() method.
ResultSetMetaData resultSetMetaData = rs.getMetaData();
Finally, get the number of columns in the table using the getColumnCount() method of the ResultSetMetaData interface:
int columnCount = resultSetMetaData.getColumnCount();
Let us create a table named MyPlayers in the MySQL database using a CREATE statement as shown below:
CREATE TABLE MyPlayers(
ID INT,
First_Name VARCHAR(255),
Last_Name VARCHAR(255),
Date_Of_Birth date,
Place_Of_Birth VARCHAR(255),
Country VARCHAR(255),
PRIMARY KEY (ID)
);
Now, we will insert 7 records into the MyPlayers table using INSERT statements -
insert into MyPlayers values(1, 'Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India');
insert into MyPlayers values(2, 'Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica');
insert into MyPlayers values(3, 'Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka');
insert into MyPlayers values(4, 'Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India');
insert into MyPlayers values(5, 'Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India');
insert into MyPlayers values(6, 'Ravindra', 'Jadeja', DATE('1988-12-06'), 'Nagpur', 'India');
insert into MyPlayers values(7, 'James', 'Anderson', DATE('1982-06-30'), 'Burnley', 'England');
The following JDBC program establishes a connection with the MySQL database, then retrieves and displays the number of columns in the MyPlayers table using the getColumnCount() method.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
public class ResultSetMetaData_getColumnCount {
public static void main(String args[]) throws SQLException {
//Registering the Driver
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
//Getting the connection
String mysqlUrl = "jdbc:mysql://localhost/mydatabase";
Connection con = DriverManager.getConnection(mysqlUrl, "root", "password");
System.out.println("Connection established......");
//Creating the Statement
Statement stmt = con.createStatement();
//Query to retrieve records
String query = "Select * from MyPlayers";
//Executing the query
ResultSet rs = stmt.executeQuery(query);
//retrieving the ResultSetMetaData object
ResultSetMetaData resultSetMetaData = rs.getMetaData();
//Retrieving the column count of the current table
int columnCount = resultSetMetaData.getColumnCount();
      System.out.println("Number of columns in the table represented by the current ResultSet object are: " + columnCount);
}
}
Connection established......
Number of columns in the table represented by the current ResultSet object are: 6
|
[
{
"code": null,
"e": 1196,
"s": 1062,
"text": "The getColumnCount() method of the ResultSetMetaData (interface) retrieves the number of the columns of the current ResultSet object."
},
{
"code": null,
"e": 1269,
"s": 1196,
"text": "This method returns an integer value representing the number of columns."
},
{
"code": null,
"e": 1319,
"s": 1269,
"text": "To get the ResultSetMetaData object, you need to:"
},
{
"code": null,
"e": 1534,
"s": 1319,
"text": "Register the Driver: Select the required database register the Driver class of the particular database using the registerDriver() method of the DriverManager class or, the forName() method of the class named Class."
},
{
"code": null,
"e": 1593,
"s": 1534,
"text": "DriverManager.registerDriver(new com.mysql.jdbc.Driver());"
},
{
"code": null,
"e": 1811,
"s": 1593,
"text": "Get connection: Create a connection object by passing the URL of the database, username and password of a user in the database (in string format) as parameters to the getConnection() method of the DriverManager class."
},
{
"code": null,
"e": 1892,
"s": 1811,
"text": "Connection mysqlCon = DriverManager.getConnection(mysqlUrl, \"root\", \"password\");"
},
{
"code": null,
"e": 2007,
"s": 1892,
"text": "Create a Statement object: Create a Statement object using the createStatement method of the connection interface."
},
{
"code": null,
"e": 2047,
"s": 2007,
"text": "Statement stmt = con.createStatement();"
},
{
"code": null,
"e": 2203,
"s": 2047,
"text": "Execute the Query: Execute the SELECT query using the executeQuery() methods of the Statement interface and Retrieve the results into the ResultSet object."
},
{
"code": null,
"e": 2286,
"s": 2203,
"text": "String query = \"Select * from MyPlayers\";\nResultSet rs = stmt.executeQuery(query);"
},
{
"code": null,
"e": 2421,
"s": 2286,
"text": "Get the ResultSetMetsdata object: Retrieve the ResultSetMetsdata object of the current ResultSet by invoking the getMetaData() method."
},
{
"code": null,
"e": 2477,
"s": 2421,
"text": "ResultSetMetaData resultSetMetaData = rs.getMetaData();"
},
{
"code": null,
"e": 2598,
"s": 2477,
"text": "Finally, using the getColumnCount() method of the ResultSetMetaData interface get the number of columns in the table as:"
},
{
"code": null,
"e": 2652,
"s": 2598,
"text": "int columnCount = resultSetMetaData.getColumnCount();"
},
{
"code": null,
"e": 2752,
"s": 2652,
"text": "Let us create a table with name MyPlayers in MySQL database using CREATE statement as shown below: "
},
{
"code": null,
"e": 2945,
"s": 2752,
"text": "CREATE TABLE MyPlayers(\n ID INT,\n First_Name VARCHAR(255),\n Last_Name VARCHAR(255),\n Date_Of_Birth date,\n Place_Of_Birth VARCHAR(255),\n Country VARCHAR(255),\n PRIMARY KEY (ID)\n);"
},
{
"code": null,
"e": 3020,
"s": 2945,
"text": "Now, we will insert 7 records in MyPlayers table using INSERT statements -"
},
{
"code": null,
"e": 3682,
"s": 3020,
"text": "insert into MyPlayers values(1, 'Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India');\ninsert into MyPlayers values(2, 'Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica');\ninsert into MyPlayers values(3, 'Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka');\ninsert into MyPlayers values(4, 'Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India');\ninsert into MyPlayers values(5, 'Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India');\ninsert into MyPlayers values(6, 'Ravindra', 'Jadeja', DATE('1988-12-06'), 'Nagpur', 'India');\ninsert into MyPlayers values(7, 'James', 'Anderson', DATE('1982-06-30'), 'Burnley', 'England');"
},
{
"code": null,
"e": 3852,
"s": 3682,
"text": "Following JDBC program establishes connection with MySQL database, retrieves and displays the number of columns in the MyPlayers table using the getColumnCount() method."
},
{
"code": null,
"e": 5070,
"s": 3852,
"text": "import java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.ResultSet;\nimport java.sql.ResultSetMetaData;\nimport java.sql.SQLException;\nimport java.sql.Statement;\npublic class ResultSetMetaData_getColumnCount {\n public static void main(String args[]) throws SQLException {\n //Registering the Driver\n DriverManager.registerDriver(new com.mysql.jdbc.Driver());\n //Getting the connection\n String mysqlUrl = \"jdbc:mysql://localhost/mydatabase\";\n Connection con = DriverManager.getConnection(mysqlUrl, \"root\", \"password\");\n System.out.println(\"Connection established......\");\n //Creating the Statement\n Statement stmt = con.createStatement();\n //Query to retrieve records\n String query = \"Select * from MyPlayers\";\n //Executing the query\n ResultSet rs = stmt.executeQuery(query);\n //retrieving the ResultSetMetaData object\n ResultSetMetaData resultSetMetaData = rs.getMetaData();\n //Retrieving the column count of the current table\n int columnCount = resultSetMetaData.getColumnCount();\n System.out.println(\"Number of columns in the table represented by the current\n ResultSet object are: \"+ columnCount);\n }\n}"
},
{
"code": null,
"e": 5181,
"s": 5070,
"text": "Connection established......\nNumber of columns in the table represented by the current ResultSet object are: 6"
}
] |
Arduino - Connecting Switch
|
Pushbuttons or switches connect two open terminals in a circuit. This example turns on the LED on pin 2 when you press the pushbutton switch connected to pin 8.
Pull-down resistors are used in electronic logic circuits to ensure that inputs to the Arduino settle at expected logic levels if external devices are disconnected or are at high impedance. Just because nothing is connected to an input pin does not mean that it is a logical zero. Pull-down resistors are connected between the ground and the appropriate pin on the device.
An example of a pull-down resistor in a digital circuit is shown in the following figure. A pushbutton switch is connected between the supply voltage and a microcontroller pin. In such a circuit, when the switch is closed, the micro-controller input is at a logical high value, but when the switch is open, the pull-down resistor pulls the input voltage down to the ground (logical zero value), preventing an undefined state at the input.
The pull-down resistor must have a larger resistance than the impedance of the logic circuit, or else it might pull the voltage down too much and the input voltage at the pin would remain at a constant logical low value, regardless of the switch position.
You will need the following components −
1 × Arduino UNO board
1 × 330 ohm resistor
1 × 4.7K ohm resistor (pull down)
1 × LED
Follow the circuit diagram and make the connections as shown in the image given below.
Open the Arduino IDE software on your computer. Coding in the Arduino language will control your circuit. Open a new sketch File by clicking on New.
// constants won't change. They're used here to
// set pin numbers:
const int buttonPin = 8; // the number of the pushbutton pin
const int ledPin = 2; // the number of the LED pin
// variables will change:
int buttonState = 0; // variable for reading the pushbutton status
void setup() {
// initialize the LED pin as an output:
pinMode(ledPin, OUTPUT);
// initialize the pushbutton pin as an input:
pinMode(buttonPin, INPUT);
}
void loop() {
// read the state of the pushbutton value:
buttonState = digitalRead(buttonPin);
// check if the pushbutton is pressed.
// if it is, the buttonState is HIGH:
if (buttonState == HIGH) {
// turn LED on:
digitalWrite(ledPin, HIGH);
} else {
// turn LED off:
digitalWrite(ledPin, LOW);
}
}
When the switch is open, (pushbutton is not pressed), there is no connection between the two terminals of the pushbutton, so the pin is connected to the ground (through the pull-down resistor) and we read a LOW. When the switch is closed (pushbutton is pressed), it makes a connection between its two terminals, connecting the pin to 5 volts, so that we read a HIGH.
LED is turned ON when the pushbutton is pressed and OFF when it is released.
|
[
{
"code": null,
"e": 3031,
"s": 2870,
"text": "Pushbuttons or switches connect two open terminals in a circuit. This example turns on the LED on pin 2 when you press the pushbutton switch connected to pin 8."
},
{
"code": null,
"e": 3394,
"s": 3031,
"text": "Pull-down resistors are used in electronic logic circuits to ensure that inputs to Arduino settle at expected logic levels if external devices are disconnected or are at high-impedance. As nothing is connected to an input pin, it does not mean that it is a logical zero. Pull down resistors are connected between the ground and the appropriate pin on the device."
},
{
"code": null,
"e": 3833,
"s": 3394,
"text": "An example of a pull-down resistor in a digital circuit is shown in the following figure. A pushbutton switch is connected between the supply voltage and a microcontroller pin. In such a circuit, when the switch is closed, the micro-controller input is at a logical high value, but when the switch is open, the pull-down resistor pulls the input voltage down to the ground (logical zero value), preventing an undefined state at the input."
},
{
"code": null,
"e": 4089,
"s": 3833,
"text": "The pull-down resistor must have a larger resistance than the impedance of the logic circuit, or else it might pull the voltage down too much and the input voltage at the pin would remain at a constant logical low value, regardless of the switch position."
},
{
"code": null,
"e": 4130,
"s": 4089,
"text": "You will need the following components −"
},
{
"code": null,
"e": 4152,
"s": 4130,
"text": "1 × Arduino UNO board"
},
{
"code": null,
"e": 4173,
"s": 4152,
"text": "1 × 330 ohm resistor"
},
{
"code": null,
"e": 4207,
"s": 4173,
"text": "1 × 4.7K ohm resistor (pull down)"
},
{
"code": null,
"e": 4215,
"s": 4207,
"text": "1 × LED"
},
{
"code": null,
"e": 4302,
"s": 4215,
"text": "Follow the circuit diagram and make the connections as shown in the image given below."
},
{
"code": null,
"e": 4451,
"s": 4302,
"text": "Open the Arduino IDE software on your computer. Coding in the Arduino language will control your circuit. Open a new sketch File by clicking on New."
},
{
"code": null,
"e": 5238,
"s": 4451,
"text": "// constants won't change. They're used here to\n// set pin numbers:\nconst int buttonPin = 8; // the number of the pushbutton pin\nconst int ledPin = 2; // the number of the LED pin\n// variables will change:\nint buttonState = 0; // variable for reading the pushbutton status\n\nvoid setup() {\n // initialize the LED pin as an output:\n pinMode(ledPin, OUTPUT);\n // initialize the pushbutton pin as an input:\n pinMode(buttonPin, INPUT);\n}\n\nvoid loop() {\n // read the state of the pushbutton value:\n buttonState = digitalRead(buttonPin);\n // check if the pushbutton is pressed.\n // if it is, the buttonState is HIGH:\n if (buttonState == HIGH) {\n // turn LED on:\n digitalWrite(ledPin, HIGH);\n } else {\n // turn LED off:\n digitalWrite(ledPin, LOW);\n }\n}"
},
{
"code": null,
"e": 5605,
"s": 5238,
"text": "When the switch is open, (pushbutton is not pressed), there is no connection between the two terminals of the pushbutton, so the pin is connected to the ground (through the pull-down resistor) and we read a LOW. When the switch is closed (pushbutton is pressed), it makes a connection between its two terminals, connecting the pin to 5 volts, so that we read a HIGH."
},
{
"code": null,
"e": 5682,
"s": 5605,
"text": "LED is turned ON when the pushbutton is pressed and OFF when it is released."
},
{
"code": null,
"e": 5717,
"s": 5682,
"text": "\n 65 Lectures \n 6.5 hours \n"
},
{
"code": null,
"e": 5728,
"s": 5717,
"text": " Amit Rana"
},
{
"code": null,
"e": 5761,
"s": 5728,
"text": "\n 43 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 5772,
"s": 5761,
"text": " Amit Rana"
},
{
"code": null,
"e": 5805,
"s": 5772,
"text": "\n 20 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 5818,
"s": 5805,
"text": " Ashraf Said"
},
{
"code": null,
"e": 5853,
"s": 5818,
"text": "\n 19 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 5866,
"s": 5853,
"text": " Ashraf Said"
},
{
"code": null,
"e": 5898,
"s": 5866,
"text": "\n 11 Lectures \n 47 mins\n"
},
{
"code": null,
"e": 5911,
"s": 5898,
"text": " Ashraf Said"
},
{
"code": null,
"e": 5942,
"s": 5911,
"text": "\n 9 Lectures \n 41 mins\n"
},
{
"code": null,
"e": 5955,
"s": 5942,
"text": " Ashraf Said"
},
{
"code": null,
"e": 5962,
"s": 5955,
"text": " Print"
},
{
"code": null,
"e": 5973,
"s": 5962,
"text": " Add Notes"
}
] |
Minimizing the cost function: Gradient descent | by XuanKhanh Nguyen | Towards Data Science
|
Imagine you are at the top of a mountain and want to descend. There may be many available paths, but you want to reach the bottom with a minimum number of steps. How can you come up with a solution? To answer that question, we will solve the gradient descent problem.
Gradient descent is one of the simplest algorithms that is used, not only in linear regression but in many aspects of machine learning. Several ideas build on this algorithm and it is a crucial and fundamental piece of machine learning.
The structure of this note:
Gradient descent
Apply gradient descent to linear regression
Gradient descent variants
A case study
This is a very long note. Grab a cup of coffee or tea and let’s get started.
A quick recap from my last note:
So we have our hypothesis function and we have a way of measuring how well it fits the data. We now need to estimate the parameters theta zero and theta one in the hypothesis function.
So here’s the problem setup. Assume that we have a function J of theta zero and theta one. We want to minimize this function J(theta zero, theta one) over theta zero and theta one. It turns out gradient descent is an algorithm for solving this general problem. We’re going to start with some initial guesses for theta zero and theta one. It doesn’t matter what they are, but a common choice is to set theta zero to zero and theta one to one. What we’re going to do in gradient descent is keep changing theta zero and theta one a little bit to try to reduce J(theta zero, theta one), until we wind up at a minimum, or maybe at a local minimum.
Gradient descent is an efficient optimization algorithm that attempts to find a local or global minimum of the cost function.
A local minimum is a point where our function is lower than all neighboring points. It is not possible to decrease the value of the cost function by making infinitesimal steps.
A global minimum is a point that obtains the absolute lowest value of our function, but global minima are difficult to compute in practice.
Cost Function vs Gradient descent
We might argue that if the cost function and gradient descent are both used to minimize something then what is the difference and can we use one instead of the other?
Well, a cost function is something we want to minimize. For example, our cost function might be the sum of squared errors over the training set. Gradient descent is a method for finding the minimum of a function of multiple variables.
So we can use gradient descent as a tool to minimize our cost function.
Suppose we have a function with n variables, then the gradient is the length-n vector that defines the direction in which the cost is increasing most rapidly. So in gradient descent, we follow the negative of the gradient to the point where the cost is a minimum. In machine learning, the cost function is a function to which we are applying the gradient descent algorithm.
I assume that the readers are already familiar with calculus but will provide a brief overview of how calculus concepts relate to optimization here. So don’t worry friends, just stay with me... it’s kind of intuitive!
Machine learning uses derivatives in optimization problems. Derivatives are used to decide whether to increase or decrease the weights to increase or decrease an objective function. If we can compute the derivative of a function, we know in which direction to proceed to minimize it.
Suppose we have a function y = f(x). The derivative f’(x) gives the slope of f(x) at point x. It specifies how to scale a small change in the input to obtain the corresponding change in the output. Let’s say, f(x) = (1/2)x².
We can reduce f(x) by moving in small steps with the opposite sign of the derivative. When f’(x) = 0, the derivative provides no information about which direction to move. Points where f’(x) = 0 are known as critical points.
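As a minimal sketch (my own illustration, not code from the note), moving opposite to the sign of the derivative on f(x) = (1/2)x² walks the input toward the critical point at x = 0:

```python
# One-dimensional gradient descent on f(x) = 1/2 * x**2.
# The derivative is f'(x) = x, so each step moves x opposite
# to the sign of the derivative. The starting point and step
# size are arbitrary choices for illustration.

def f_prime(x):
    return x  # derivative of 1/2 * x**2

x = 4.0       # arbitrary starting point
alpha = 0.1   # step size
for _ in range(100):
    x -= alpha * f_prime(x)

print(x)  # close to the critical point at 0
```

Each step multiplies x by (1 - alpha), so the iterate shrinks toward zero geometrically.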
The concept of convergence is a well defined mathematical term. It means that “eventually” a sequence of elements gets closer and closer to a single value. So what does it mean for an algorithm to converge? Technically what converges is not the algorithm, but a value the algorithm is manipulating or iterating. To illustrate this, let's say we are writing an algorithm that prints all the digits of pi.
Our algorithm starts printing numbers like:
x0 = 3.1
x1 = 3.14
x2 = 3.141
x3 = 3.1415
x4 = 3.14159
...
As we can see, the algorithm prints numbers increasingly close to pi. We say our algorithm converges to pi. Functions shaped like a bowl, on which gradient descent reliably converges to the bottom, are called convex functions. Now, let us consider the formula of gradient descent:
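In case the rendered formula did not survive extraction, the standard simultaneous update rule (the textbook form, consistent with the partial derivatives used later in this note) is:

```latex
\text{repeat until convergence: } \quad
\theta_j := \theta_j - \alpha \, \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)
\qquad \text{for } j = 0, 1
```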
We implement this formula by taking the derivative (the tangential line to a function) of our cost function. The slope of the tangent line is the value of the derivative at that point and it will give us a direction to move towards. We make steps down the cost function in the direction with the steepest descent. The size of each step is determined by the parameter α (alpha), which is called the learning rate.
The learning rate determines the size of the steps that are taken by the gradient descent algorithm.
To reach a local minimum efficiently, we have to set our learning rate- parameter α appropriately, neither too high nor too low. Depending on where the initial point starts on the graph, it could end up at different points. Typically, the value of the learning rate is chosen manually, starting with 0.1, 0.01, or 0.001 as the common values.
If gradient descent is taking too long to make progress, we may need to increase the learning rate.
If our learning curve is just going up and down without reaching a lower point, we should try decreasing the learning rate.
Note:
If the learning rate is too big, the loss will bounce around and may not reach the local minimum.
If the learning rate is too small then gradient descent will eventually reach the local minimum but require a long time to do so.
The cost function should decrease over time if gradient descent is working properly.
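A hypothetical sketch (not from the original note) of these three regimes on the toy objective f(x) = (1/2)x², whose gradient is f'(x) = x; all alpha values below are illustrative:

```python
# Effect of the learning rate on gradient descent for f(x) = 1/2 * x**2.

def run(alpha, steps=50, x0=4.0):
    x = x0
    for _ in range(steps):
        x -= alpha * x  # gradient step: x <- x - alpha * f'(x)
    return x

too_small = run(0.001)   # barely moves toward the minimum at 0
reasonable = run(0.1)    # converges close to 0
too_big = run(2.5)       # overshoots and diverges (|x| grows each step)

print(abs(too_small), abs(reasonable), abs(too_big))
```

With alpha = 2.5 each step multiplies x by (1 - 2.5) = -1.5, so the iterate bounces across the minimum with growing magnitude, which is the "loss bounces around" failure mode described above.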
How can we visualize this idea?
Let’s say we are at Mount Lyell (the highest point in Yosemite National Park), and we hike down the hill following the path of the river. The job of gradient descent here is exactly what we aim to achieve: to reach the bottom-most point of the mountain. Mount Lyell is the data plotted in space, the surface representing the objective function, and the size of the step we move is the learning rate. The lowest point on the mountain is the value where the cost of the function reaches its minimum (the parameters where our model presents more accuracy).
Assume also that Mount Lyell is shaped in such a way that the river will not stop at any place and will straightaway arrive at the foothill (like a bowl shape). In machine learning, we would have achieved our global minimum. However, straightforward optimization is not the case in real-life. The river may face a lot of pits on the way down. It might be trapped in the pits and fail to move downwards, which is a local minimum in machine learning.
When we are in a valley, there is no way we can descend the hill further. We can say we have converged. In machine learning, when gradient descent can’t reduce the cost function anymore and the cost remains near the same level, we can say it has converged to an optimum. The number of iterations for convergence may vary a lot.
The takeaway here is the initial values and learning rate. Depending on where we start at the first point, we could wind up at different local optima. Also, depending on the size of the step we take (learning rate) we might arrive at the foothill differently. These values are important in determining whether we will reach the foothill (global minima) or get trapped in the pits (local minima).
So we know gradient descent is an optimization algorithm to find the minimum of a function. How can we apply the algorithm to our linear regression?
To apply gradient descent, the key term here is the derivative.
1. Take the cost function and take a partial derivative with respect to theta zero and theta one, which looks like this:
To take the partial derivative, we hold all of the other variables constant. Let’s say, we want to take the partial derivative with respect to theta zero, we just treat theta one as a constant and vice versa.
But why do we use partial derivatives in the equation? So that we’ll have a way of measuring how well our hypothesis function fits the data. We need to estimate the parameters (theta zero and theta one) in the hypothesis function — that is, we want to know the rate of change value for theta zero and theta one. In calculus, partial derivatives represent the rate of change of the functions as one variable change while the others are held constant. We apply the partial derivatives with respect to theta zero and theta one to the cost function to point us to the lowest point.
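For reference (the rendered equations may have been lost in extraction), applying those partial derivatives to the mean-squared-error cost with hypothesis h(x) = θ₀ + θ₁x gives the textbook forms:

```latex
\frac{\partial J}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\left(\theta_0 + \theta_1 x^{(i)} - y^{(i)}\right),
\qquad
\frac{\partial J}{\partial \theta_1} = \frac{1}{m}\sum_{i=1}^{m}\left(\theta_0 + \theta_1 x^{(i)} - y^{(i)}\right) x^{(i)}
```

These are exactly the t0_deriv and t1_deriv sums computed in the implementation later in this note.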
2. Plug them back into our gradient descent algorithm
To find the best minimum, repeat these steps with the updated values of theta zero and theta one. In other words, repeat the steps until convergence.
Finding the optimal values for theta zero and theta one then comes down to following these derivatives downhill until they approach zero.
Hence, to solve for the gradient at the next step of the iteration, we iterate through our data points using our updated theta zero and theta one values and compute their partial derivatives. This new gradient tells us the slope of our cost function at our current position and the direction we should move to update our parameters. The size of our update is controlled by the learning rate.
Pros and cons of gradient descent
A simple algorithm that is easy to implement and each iteration is cheap; we just need to compute a gradient
However, it’s often slow because many interesting problems are not strongly convex
Cannot handle non-differentiable functions (biggest downside)
There are three types of gradient descent methods based on the amount of data used to calculate the gradient:
Batch gradient descent
Stochastic gradient descent
Mini-batch gradient descent
Batch Gradient Descent
In batch gradient descent, to calculate the gradient of the cost function, we calculate the error for each example in the training dataset and then take the sum. The model is updated only after all examples have been evaluated.
What if we have 1000 samples or in a worst-case scenario, one million samples? The gradient descent algorithm would need to run one million times. So batch gradient descent is not a good fit for large datasets.
As we see, batch gradient descent is not an optimal solution here. It requires a large number of computational resources, as the entire dataset needs to remain in memory. So, if we just need to move a single step towards the minimum, should we calculate the cost a million times?
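A minimal sketch of what "sum the error over every example before one update" looks like in code; the synthetic data and all parameter values are my own assumptions, not from the note:

```python
import numpy as np

# Batch gradient descent for a one-feature linear regression.
# Synthetic data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)
y = 2.0 * X + 1.0 + rng.normal(0, 0.1, size=100)

theta_0, theta_1, alpha = 0.0, 0.0, 0.01

for _ in range(5000):                  # each iteration scans ALL examples
    error = (theta_1 * X + theta_0) - y
    grad_0 = error.mean()              # dJ/dtheta_0, averaged over the batch
    grad_1 = (error * X).mean()        # dJ/dtheta_1
    theta_0 -= alpha * grad_0          # one update after the full pass
    theta_1 -= alpha * grad_1

print(theta_0, theta_1)  # should approach the true (1.0, 2.0)
```

The cost of every update grows linearly with the dataset size, which is exactly why this variant struggles on millions of samples.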
Stochastic Gradient Descent (SGD)
In SGD, we use one training sample at each iteration instead of using the whole dataset to sum all for every step, that is — SGD performs a parameter update for each observation. So instead of looping over each observation, it just needs one to perform the parameter update.
Note: In SGD, before for-looping, we need to randomly shuffle the training examples.
SGD is usually faster than batch gradient descent, but its path to the minima is more random than batch gradient descent since SGD uses only one example at a time. But it’s ok as we are indifferent to the path, as long as it gives us the minimum and shorter training time.
SGD is widely used for larger dataset training, computationally faster, and can allow for parallel model training.
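An illustrative sketch of SGD (data, seed, and hyperparameters are assumptions for the example): the training examples are shuffled before each pass and the parameters are updated after every single observation:

```python
import numpy as np

# Stochastic gradient descent on noiseless synthetic data y = 3x + 0.5.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=200)
y = 3.0 * X + 0.5

theta_0, theta_1, alpha = 0.0, 0.0, 0.05

for epoch in range(50):
    order = rng.permutation(len(X))        # shuffle before each pass
    for i in order:
        error = (theta_1 * X[i] + theta_0) - y[i]
        theta_0 -= alpha * error           # update from a single example
        theta_1 -= alpha * error * X[i]

print(theta_0, theta_1)
```

Each parameter update costs O(1) in the dataset size, but the path toward (0.5, 3.0) is noisier than the batch version's.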
Mini-Batch Gradient Descent
Mini-batch gradient descent is a combination of both batch gradient descent and stochastic gradient descent.
Mini-batch gradient descent uses n data points (instead of one sample in SGD) at each iteration.
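A sketch of the mini-batch variant (the batch size of 16 and all other values are arbitrary illustration choices): each update averages the gradient over a small slice of the shuffled data:

```python
import numpy as np

# Mini-batch gradient descent on noiseless synthetic data y = -x + 2.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=256)
y = -1.0 * X + 2.0

theta = np.array([0.0, 0.0])      # [theta_0, theta_1]
alpha, batch_size = 0.1, 16

for epoch in range(200):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        error = theta[1] * xb + theta[0] - yb
        grad = np.array([error.mean(), (error * xb).mean()])
        theta -= alpha * grad     # one update per mini-batch

print(theta)
```

The batch size trades off the smooth gradients of batch descent against the cheap, frequent updates of SGD, and it also lets the per-batch computation be vectorized.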
We have learned all we need to implement Linear Regression. Now it’s time to see how it works on a dataset. I have learned so much by implementing a simple linear regression in Python. I hope you will learn a thing or two after reading my note.
I downloaded the Boston weather reports from the National Oceanic and Atmospheric Administration. You can search on Kaggle for competitions, datasets, and other solutions. Our dataset contains information on weather conditions recorded on each day at a weather station. Information includes average temperature (TAVG), cooling degree days season to date (CDSD), extreme maximum temperature for the period (EMXT), heating degree days season to date (HDSD), maximum temperature (TMAX), minimum temperature (TMIN). In this example, we want to predict the maximum temperature taking input feature as the minimum temperature.
Let’s get our hands dirty with Python, shall we?
1. Import all the required libraries:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as seabornInstance
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
%matplotlib inline
2. Import the CSV dataset using pandas:
df = pd.read_csv('climate.csv')
df.dropna(inplace=True)
We use the dropna() function to remove missing values.
3. Check the number of rows and columns in our datasets.
df.shape
We should receive output as (903,9), which means our data contains 903 rows and 9 columns.
We can see the statistical detail of our dataset by using describe() function:
df.describe()
4. Visualize our dataset to see if we can manually find any relationship between the data.
fig, (ax1) = plt.subplots(1, figsize=(12, 6))
ax1.scatter(X, y, s=8)
plt.title('Min vs Max Temp')
plt.xlabel('TMIN')
plt.ylabel('TMAX')
plt.show()
5. Divide the data into “attributes” and “labels”.
Attributes are the independent variables while labels are dependent variables whose values are to be predicted. In our dataset, we only have two columns. We want to predict TMAX depending upon the TMIN recorded. Therefore our attribute set will consist of the “TMIN” column which is stored in the X variable, and the label will be the “TMAX” column which is stored in y variable.
X = df['TMIN'].values.reshape(-1, 1).astype('float32')
y = df['TMAX'].values.reshape(-1, 1).astype('float32')
6. Split 80% of the data into the training set while 20% of the data go into the test set.
The test_size variable is where we specify the proportion of the test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
7. Train our algorithm.
For this, we need to import the LinearRegression class, instantiate it, and call the fit() method along with our training data.
h = LinearRegression()
h.fit(X_train, y_train)
print(h.intercept_)  # to retrieve theta_0
print(h.coef_)       # to retrieve theta_1
The result should be approximately 16.25 for theta_0 and 1.07 for theta_1.
8. Make some predictions.
To do so, we will use our test data and see how accurately our algorithm predicts the percentage score.
y_pred = h.predict(X_test)
compare = pd.DataFrame({'Actual': y_test.flatten(), 'Predicted': y_pred.flatten()})
compare
As we can see from the table above, the predicted percentages are close to the actual ones. Let’s plot a straight line with the test data :
fig, (ax1) = plt.subplots(1, figsize=(12, 6))
ax1.scatter(X_test, y_test, s=8)
plt.plot(X_test, y_pred, color='black', linewidth=2)
plt.show()
The predictions are pretty close to the actual plot, which indicates a small value of the variance.
9. Implement Linear Regression
# pick some random values to start with
theta_0 = np.random.random()
theta_1 = np.random.random()

def hypothesis(theta_0, theta_1, X):
    return theta_1 * X + theta_0

def cost_function(X, y, theta_0, theta_1):
    m = len(X)
    summation = 0.0
    for i in range(m):
        summation += ((theta_1 * X[i] + theta_0) - y[i]) ** 2
    return summation / (2 * m)

def gradient_descent(X, y, theta_0, theta_1, learning_rate):
    t0_deriv = 0
    t1_deriv = 0
    m = len(X)
    for i in range(m):
        t0_deriv += (theta_1 * X[i] + theta_0) - y[i]
        t1_deriv += ((theta_1 * X[i] + theta_0) - y[i]) * X[i]
    theta_0 -= (1 / m) * learning_rate * t0_deriv
    theta_1 -= (1 / m) * learning_rate * t1_deriv
    return theta_0, theta_1

def training(X, y, theta_0, theta_1, learning_rate, iters):
    cost_history = [0]
    t0_history = [0]
    t1_history = [0]
    for i in range(iters):
        theta_0, theta_1 = gradient_descent(X, y, theta_0, theta_1, learning_rate)
        t0_history.append(theta_0)
        t1_history.append(theta_1)
        cost = cost_function(X, y, theta_0, theta_1)
        cost_history.append(cost)
        if i % 10 == 0:
            print("iter={}, theta_0={}, theta_1={}, cost={}".format(i, theta_0, theta_1, cost))
    return t0_history, t1_history, cost_history
We choose a learning rate of 0.01, run for 2,000 iterations, and plot our cost function J:
t0_history, t1_history, cost_history = training(X, y, theta_0, theta_1, 0.01, 2000)

# Plot the cost function
plt.title('Cost Function J')
plt.xlabel('No. of iterations')
plt.ylabel('Cost')
plt.plot(cost_history)
plt.ylim(ymin=0)
plt.xlim(xmin=0)
plt.show()
I found a cool way to visualize our data using Animations with Matplotlib. It takes 449 iterations for the model to come quite close to the best fit line.
import matplotlib.animation as animation

fig = plt.figure()
ax = plt.axes()

# set up our plot
plt.ylabel("TMAX")
plt.xlabel("TMIN")
plt.title('Linear Regression')
plt.scatter(X, y, color='gray', s=8)
line, = ax.plot([], [], lw=2)
annotation = ax.annotate('', xy=(-4, 40))  # cost label; position chosen for illustration
plt.close()

# Generate the animation data
def init():
    line.set_data([], [])
    annotation.set_text('')
    return line, annotation

# animation function. This is called sequentially, drawing the fitted
# line for the parameters recorded at iteration i of training above.
def animate(i):
    x = np.linspace(-5, 20, 1000)
    y = t1_history[i] * x + t0_history[i]
    line.set_data(x, y)
    annotation.set_text('Cost = %.2f e10' % (cost_history[i] / 10000000000))
    return line, annotation

anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=np.arange(1, 400), interval=40, blit=True)

from IPython.display import HTML
HTML(anim.to_html5_video())
In this note, we studied the most fundamental machine learning algorithm — gradient descent. We implemented a simple linear regression with the help of the Scikit-Learning machine learning library. In the next note, we will focus on multiple linear regression.
The hardest part of any endeavor is the beginning, and you have passed that, so don’t stop!
Lucky for us, linear regression is well-taught in almost every machine learning curriculum, and there are a decent number of solid resources out there to help us understand the different parts of a linear regression model, including the mathematics behind. Below are some more resources if you find yourself wanting to learn even more.
Optimizing learning rate
Elimination of all bad local minima in deep learning (Cornell University)
Elimination of all bad local minima in deep learning (MIT)
What is convergence?
Why Visualize Gradient Descent Optimization Algorithms?
A case study — Moneyball — Linear Regression
Split your data into training and testing (80/20)
Partial derivative in gradient descent for two variables
Free data to work on: Lionbridge AI, Dataset Search
|
[
{
"code": null,
"e": 440,
"s": 172,
"text": "Imagine you are at the top of a mountain and want to descend. There may be many available paths, but you want to reach the bottom with a minimum number of steps. How can you come up with a solution? To answer that question, we will solve the gradient descent problem."
},
{
"code": null,
"e": 677,
"s": 440,
"text": "Gradient descent is one of the simplest algorithms that is used, not only in linear regression but in many aspects of machine learning. Several ideas build on this algorithm and it is a crucial and fundamental piece of machine learning."
},
{
"code": null,
"e": 705,
"s": 677,
"text": "The structure of this note:"
},
{
"code": null,
"e": 722,
"s": 705,
"text": "Gradient descent"
},
{
"code": null,
"e": 766,
"s": 722,
"text": "Apply gradient descent to linear regression"
},
{
"code": null,
"e": 792,
"s": 766,
"text": "Gradient descent variants"
},
{
"code": null,
"e": 805,
"s": 792,
"text": "A case study"
},
{
"code": null,
"e": 882,
"s": 805,
"text": "This is a very long note. Grab a cup of coffee or tea and let’s get started."
},
{
"code": null,
"e": 915,
"s": 882,
"text": "A quick recap from my last note:"
},
{
"code": null,
"e": 1100,
"s": 915,
"text": "So we have our hypothesis function and we have a way of measuring how well it fits the data. We now need to estimate the parameters theta zero and theta one in the hypothesis function."
},
{
"code": null,
"e": 1761,
"s": 1100,
"text": "So here’s the problem setup. Assume that we have a function J, as theta zero, theta one. We want to minimize over theta zero and theta one of this function J(theta zero, theta one). And it turns out gradient descent is an algorithm for solving this general problem. We’re going to start with some initial guesses for theta zero and theta one. It doesn’t matter what they are, but a common choice would be we set theta zero to zero and set theta one to one. What we’re going to do in gradient descent is we’ll keep changing theta zero and theta one a little bit to try to reduce J(theta zero, theta one), until we wind at a minimum, or maybe at a local minimum."
},
{
"code": null,
"e": 1887,
"s": 1761,
"text": "Gradient descent is an efficient optimization algorithm that attempts to find a local or global minimum of the cost function."
},
{
"code": null,
"e": 2064,
"s": 1887,
"text": "A local minimum is a point where our function is lower than all neighboring points. It is not possible to decrease the value of the cost function by making infinitesimal steps."
},
{
"code": null,
"e": 2204,
"s": 2064,
"text": "A global minimum is a point that obtains the absolute lowest value of our function, but global minima are difficult to compute in practice."
},
{
"code": null,
"e": 2238,
"s": 2204,
"text": "Cost Function vs Gradient descent"
},
{
"code": null,
"e": 2405,
"s": 2238,
"text": "We might argue that if the cost function and gradient descent are both used to minimize something then what is the difference and can we use one instead of the other?"
},
{
"code": null,
"e": 2640,
"s": 2405,
"text": "Well, a cost function is something we want to minimize. For example, our cost function might be the sum of squared errors over the training set. Gradient descent is a method for finding the minimum of a function of multiple variables."
},
{
"code": null,
"e": 2712,
"s": 2640,
"text": "So we can use gradient descent as a tool to minimize our cost function."
},
{
"code": null,
"e": 3086,
"s": 2712,
"text": "Suppose we have a function with n variables, then the gradient is the length-n vector that defines the direction in which the cost is increasing most rapidly. So in gradient descent, we follow the negative of the gradient to the point where the cost is a minimum. In machine learning, the cost function is a function to which we are applying the gradient descent algorithm."
},
{
"code": null,
"e": 3304,
"s": 3086,
"text": "I assume that the readers are already familiar with calculus but will provide a brief overview of how calculus concepts relate to optimization here. So don’t worry friends, just stay with me... it’s kind of intuitive!"
},
{
"code": null,
"e": 3588,
"s": 3304,
"text": "Machine learning uses derivatives in optimization problems. Derivatives are used to decide whether to increase or decrease the weights to increase or decrease an objective function. If we can compute the derivative of a function, we know in which direction to proceed to minimize it."
},
{
"code": null,
"e": 3812,
"s": 3588,
"text": "Suppose we have a function y = f(x) . The derivative f’(x) gives the slope of f(x) at point x. It specifies how to scale a small change in the input to obtain the corresponding change in the output. Let’s say, f(x) = 1/2 x2"
},
{
"code": null,
"e": 4036,
"s": 3812,
"text": "We can reduce f(x) by moving in small steps with the opposite sign of the derivative. When f’(x) = 0,the derivative provides no information about which direction to move. Points where f’(x) = 0 are known as critical points."
},
{
"code": null,
"e": 4440,
"s": 4036,
"text": "The concept of convergence is a well defined mathematical term. It means that “eventually” a sequence of elements gets closer and closer to a single value. So what does it mean for an algorithm to converge? Technically what converges is not the algorithm, but a value the algorithm is manipulating or iterating. To illustrate this, let's say we are writing an algorithm that prints all the digits of pi."
},
{
"code": null,
"e": 4484,
"s": 4440,
"text": "Our algorithm starts printing numbers like:"
},
{
"code": null,
"e": 4538,
"s": 4484,
"text": "x0 = 3.1x1 = 3.14x2 = 3.141x3 = 3.1415x4 = 3.14159..."
},
{
"code": null,
"e": 4763,
"s": 4538,
"text": "As we can see, the algorithm prints increasing numbers close to pi. We say our algorithm converges to pi. And we call such functions convex functions (like a bowl shape). Now, let us consider the formula of gradient descent:"
},
{
"code": null,
"e": 5176,
"s": 4763,
"text": "We implement this formula by taking the derivative (the tangential line to a function) of our cost function. The slope of the tangent line is the value of the derivative at that point and it will give us a direction to move towards. We make steps down the cost function in the direction with the steepest descent. The size of each step is determined by the parameter α (alpha), which is called the learning rate."
},
{
"code": null,
"e": 5277,
"s": 5176,
"text": "The learning rate determines the size of the steps that are taken by the gradient descent algorithm."
},
{
"code": null,
"e": 5619,
"s": 5277,
"text": "To reach a local minimum efficiently, we have to set our learning rate- parameter α appropriately, neither too high nor too low. Depending on where the initial point starts on the graph, it could end up at different points. Typically, the value of the learning rate is chosen manually, starting with 0.1, 0.01, or 0.001 as the common values."
},
{
"code": null,
"e": 5722,
"s": 5619,
"text": "In this case, gradient descent is taking too long to calculate; we need to increase the learning rate."
},
{
"code": null,
"e": 5846,
"s": 5722,
"text": "If our learning curve is just going up and down without reaching a lower point, we should try decreasing the learning rate."
},
{
"code": null,
"e": 5852,
"s": 5846,
"text": "Note:"
},
{
"code": null,
"e": 5950,
"s": 5852,
"text": "If the learning rate is too big, the loss will bounce around and may not reach the local minimum."
},
{
"code": null,
"e": 6080,
"s": 5950,
"text": "If the learning rate is too small then gradient descent will eventually reach the local minimum but require a long time to do so."
},
{
"code": null,
"e": 6165,
"s": 6080,
"text": "The cost function should decrease over time if gradient descent is working properly."
},
{
"code": null,
"e": 6197,
"s": 6165,
"text": "How can we visualize this idea?"
},
{
"code": null,
"e": 6748,
"s": 6197,
"text": "Let’s say we are at Mount Lyell (the highest point in Yosemite National Park), we hike down the hill following the path of the river. The job of gradient descent here is exactly what we aim to achieve— to reach the bottom-most point of the mountain. Mount Lyell is the data plotted in space, the surface representing the objective function, and the size of the step we move is the learning rate. The lowest point on the mountain is the value where the cost of the function reaches its minimum (the parameter α where our model presents more accuracy)."
},
{
"code": null,
"e": 7197,
"s": 6748,
"text": "Assume also that Mount Lyell is shaped in such a way that the river will not stop at any place and will straightaway arrive at the foothill (like a bowl shape). In machine learning, we would have achieved our global minimum. However, straightforward optimization is not the case in real-life. The river may face a lot of pits on the way down. It might be trapped in the pits and fail to move downwards, which is a local minimum in machine learning."
},
{
"code": null,
"e": 7525,
"s": 7197,
"text": "When we are in a valley, there is no way we can descend the hill further. We can say we have converged. In machine learning, when gradient descent can’t reduce the cost function anymore and the cost remains near the same level, we can say it has converged to an optimum. The number of iterations for convergence may vary a lot."
},
{
"code": null,
"e": 7921,
"s": 7525,
"text": "The takeaway here is the initial values and learning rate. Depending on where we start at the first point, we could wind up at different local optima. Also, depending on the size of the step we take (learning rate) we might arrive at the foothill differently. These values are important in determining whether we will reach the foothill (global minima) or get trapped in the pits (local minima)."
},
{
"code": null,
"e": 8070,
"s": 7921,
"text": "So we know gradient descent is an optimization algorithm to find the minimum of a function. How can we apply the algorithm to our linear regression?"
},
{
"code": null,
"e": 8134,
"s": 8070,
"text": "To apply gradient descent, the key term here is the derivative."
},
{
"code": null,
"e": 8252,
"s": 8134,
"text": "Take the cost function and take a partial derivative with respect to theta zero and theta one, which looks like this:"
},
{
"code": null,
"e": 8370,
"s": 8252,
"text": "Take the cost function and take a partial derivative with respect to theta zero and theta one, which looks like this:"
},
{
"code": null,
"e": 8579,
"s": 8370,
"text": "To take the partial derivative, we hold all of the other variables constant. Let’s say, we want to take the partial derivative with respect to theta zero, we just treat theta one as a constant and vice versa."
},
{
"code": null,
"e": 9157,
"s": 8579,
"text": "But why do we use partial derivatives in the equation? So that we’ll have a way of measuring how well our hypothesis function fits the data. We need to estimate the parameters (theta zero and theta one) in the hypothesis function — that is, we want to know the rate of change value for theta zero and theta one. In calculus, partial derivatives represent the rate of change of the functions as one variable change while the others are held constant. We apply the partial derivatives with respect to theta zero and theta one to the cost function to point us to the lowest point."
},
{
"code": null,
"e": 9211,
"s": 9157,
"text": "2. Plug them back into our gradient descent algorithm"
},
{
"code": null,
"e": 9356,
"s": 9211,
"text": "To find the best minimum, repeat steps to apply various values for theta zero and theta one. In other words, repeat the steps until convergence."
},
{
"code": null,
"e": 9464,
"s": 9356,
"text": "The process of finding the optimal values for theta zero and theta one is to then minimize our derivatives."
},
{
"code": null,
"e": 9856,
"s": 9464,
"text": "Hence, to solve for the gradient at the next step of the iteration, we iterate through our data points using our updated theta zero and theta one values and compute their partial derivatives. This new gradient tells us the slope of our cost function at our current position and the direction we should move to update our parameters. The size of our update is controlled by the learning rate."
},
{
"code": null,
"e": 9890,
"s": 9856,
"text": "Pros and cons of gradient descent"
},
{
"code": null,
"e": 9999,
"s": 9890,
"text": "A simple algorithm that is easy to implement and each iteration is cheap; we just need to compute a gradient"
},
{
"code": null,
"e": 10082,
"s": 9999,
"text": "However, it’s often slow because many interesting problems are not strongly convex"
},
{
"code": null,
"e": 10144,
"s": 10082,
"text": "Cannot handle non-differentiable functions (biggest downside)"
},
{
"code": null,
"e": 10254,
"s": 10144,
"text": "There are three types of gradient descent methods based on the amount of data used to calculate the gradient:"
},
{
"code": null,
"e": 10277,
"s": 10254,
"text": "Batch gradient descent"
},
{
"code": null,
"e": 10305,
"s": 10277,
"text": "Stochastic gradient descent"
},
{
"code": null,
"e": 10333,
"s": 10305,
"text": "Mini-batch gradient descent"
},
{
"code": null,
"e": 10356,
"s": 10333,
"text": "Batch Gradient Descent"
},
{
"code": null,
"e": 10584,
"s": 10356,
"text": "In batch gradient descent, to calculate the gradient of the cost function, we calculate the error for each example in the training dataset and then take the sum. The model is updated only after all examples have been evaluated."
},
{
"code": null,
"e": 10795,
"s": 10584,
"text": "What if we have 1000 samples or, in a worst-case scenario, one million samples? We would need to compute the error for one million samples just to take a single step. So batch gradient descent is not a good fit for large datasets."
},
{
"code": null,
"e": 11075,
"s": 10795,
"text": "As we see, batch gradient descent is not an optimal solution here. It requires a large number of computational resources, as the entire dataset needs to remain in memory. So, if we just need to move a single step towards the minimum, should we calculate the cost a million times?"
},
{
"code": null,
"e": 11109,
"s": 11075,
"text": "Stochastic Gradient Descent (SGD)"
},
{
"code": null,
"e": 11384,
"s": 11109,
"text": "In SGD, we use one training sample at each iteration instead of summing over the whole dataset at every step; that is, SGD performs a parameter update for each observation. So instead of looping over every observation, it needs just one to perform the parameter update."
},
{
"code": null,
"e": 11469,
"s": 11384,
"text": "Note: In SGD, before for-looping, we need to randomly shuffle the training examples."
},
{
"code": null,
"e": 11742,
"s": 11469,
"text": "SGD is usually faster than batch gradient descent, but its path to the minima is more random than batch gradient descent since SGD uses only one example at a time. But it’s ok as we are indifferent to the path, as long as it gives us the minimum and shorter training time."
},
{
"code": null,
"e": 11857,
"s": 11742,
"text": "SGD is widely used for larger dataset training, computationally faster, and can allow for parallel model training."
},
{
"code": null,
"e": 11885,
"s": 11857,
"text": "Mini-Batch Gradient Descent"
},
{
"code": null,
"e": 11993,
"s": 11885,
"text": "Mini-batch gradient descent is a combination of both batch gradient descent and stochastic gradient descent."
},
{
"code": null,
"e": 12090,
"s": 11993,
"text": "Mini-batch gradient descent uses n data points (instead of one sample in SGD) at each iteration."
},
{
"code": null,
"e": 12335,
"s": 12090,
"text": "We have learned all we need to implement Linear Regression. Now it’s time to see how it works on a dataset. I have learned so much by implementing a simple linear regression in Python. I hope you will learn a thing or two after reading my note."
},
{
"code": null,
"e": 12955,
"s": 12335,
"text": "I downloaded the Boston weather reports from the National Oceanic and Atmospheric Administration. You can search on Kaggle for competitions, datasets, and other solutions. Our dataset contains information on weather conditions recorded on each day at a weather station. Information includes average temperature (TAVG), cooling degree days season to date (CDSD), extreme maximum temperature for the period (EMXT), heating degree days season to date (HDSD), maximum temperature (TMAX), minimum temperature (TMIN). In this example, we want to predict the maximum temperature, taking the minimum temperature as the input feature."
},
{
"code": null,
"e": 13004,
"s": 12955,
"text": "Let’s get our hands dirty with Python, shall we?"
},
{
"code": null,
"e": 13039,
"s": 13004,
"text": "Import all the required libraries:"
},
{
"code": null,
"e": 13074,
"s": 13039,
"text": "Import all the required libraries:"
},
{
"code": null,
"e": 13295,
"s": 13074,
"text": "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as seabornInstance\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LinearRegression\n%matplotlib inline"
},
{
"code": null,
"e": 13335,
"s": 13295,
"text": "2. Import the CSV dataset using pandas:"
},
{
"code": null,
"e": 13392,
"s": 13335,
"text": "df = pd.read_csv('climate.csv')\ndf.dropna(inplace = True)"
},
{
"code": null,
"e": 13447,
"s": 13392,
"text": "We use the dropna() function to remove missing values."
},
{
"code": null,
"e": 13504,
"s": 13447,
"text": "3. Check the number of rows and columns in our datasets."
},
{
"code": null,
"e": 13513,
"s": 13504,
"text": "df.shape"
},
{
"code": null,
"e": 13604,
"s": 13513,
"text": "We should receive output as (903,9), which means our data contains 903 rows and 9 columns."
},
{
"code": null,
"e": 13683,
"s": 13604,
"text": "We can see the statistical detail of our dataset by using describe() function:"
},
{
"code": null,
"e": 13697,
"s": 13683,
"text": "df.describe()"
},
{
"code": null,
"e": 13788,
"s": 13697,
"text": "4. Visualize our dataset to see if we can manually find any relationship between the data."
},
{
"code": null,
"e": 13934,
"s": 13788,
"text": "fig,(ax1) = plt.subplots(1, figsize = (12,6))\nax1.scatter(X, y, s = 8)\nplt.title('Min vs Max Temp')\nplt.xlabel('TMIN')\nplt.ylabel('TMAX')\nplt.show()"
},
{
"code": null,
"e": 13985,
"s": 13934,
"text": "5. Divide the data into “attributes” and “labels”."
},
{
"code": null,
"e": 14365,
"s": 13985,
"text": "Attributes are the independent variables while labels are dependent variables whose values are to be predicted. In our dataset, we only have two columns. We want to predict TMAX depending upon the TMIN recorded. Therefore our attribute set will consist of the “TMIN” column which is stored in the X variable, and the label will be the “TMAX” column which is stored in y variable."
},
{
"code": null,
"e": 14472,
"s": 14365,
"text": "X = df['TMIN'].values.reshape(-1,1).astype('float32')\ny = df['TMAX'].values.reshape(-1,1).astype('float32')"
},
{
"code": null,
"e": 14563,
"s": 14472,
"text": "6. Split 80% of the data into the training set while 20% of the data go into the test set."
},
{
"code": null,
"e": 14638,
"s": 14563,
"text": "The test_size variable is where we specify the proportion of the test set."
},
{
"code": null,
"e": 14727,
"s": 14638,
"text": "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)"
},
{
"code": null,
"e": 14751,
"s": 14727,
"text": "7. Train our algorithm."
},
{
"code": null,
"e": 14879,
"s": 14751,
"text": "For this, we need to import the LinearRegression class, instantiate it, and call the fit() method along with our training data."
},
{
"code": null,
"e": 15001,
"s": 14879,
"text": "h = LinearRegression()\nh.fit(X_train, y_train)\nprint(h.intercept_)  # to retrieve theta_0\nprint(h.coef_)  # to retrieve theta_1"
},
{
"code": null,
"e": 15076,
"s": 15001,
"text": "The result should be approximately 16.25 for theta_0 and 1.07 for theta_1."
},
{
"code": null,
"e": 15102,
"s": 15076,
"text": "8. Make some predictions."
},
{
"code": null,
"e": 15206,
"s": 15102,
"text": "To do so, we will use our test data and see how accurately our algorithm predicts the percentage score."
},
{
"code": null,
"e": 15323,
"s": 15206,
"text": "y_pred = h.predict(X_test)\ncompare = pd.DataFrame({'Actual': y_test.flatten(), 'Predicted': y_pred.flatten()})\ncompare"
},
{
"code": null,
"e": 15463,
"s": 15323,
"text": "As we can see from the table above, the predicted percentages are close to the actual ones. Let’s plot a straight line with the test data :"
},
{
"code": null,
"e": 15610,
"s": 15463,
"text": "fig,(ax1) = plt.subplots(1, figsize = (12,6))\nax1.scatter(X_test, y_test, s = 8)\nplt.plot(X_test, y_pred, color = 'black', linewidth = 2)\nplt.show()"
},
{
"code": null,
"e": 15710,
"s": 15610,
"text": "The predictions are pretty close to the actual plot, which indicates a small value of the variance."
},
{
"code": null,
"e": 15741,
"s": 15710,
"text": "9. Implement Linear Regression"
},
{
"code": null,
"e": 16996,
"s": 15741,
"text": "# pick some random value to start with\ntheta_0 = np.random.random()\ntheta_1 = np.random.random()\n\ndef hypothesis(theta_0, theta_1, X):\n    return theta_1*X + theta_0\n\ndef cost_function(X, y, theta_0, theta_1):\n    m = len(X)\n    summation = 0.0\n    for i in range(m):\n        summation += ((theta_1 * X[i] + theta_0) - y[i])**2\n    return summation/(2*m)\n\ndef gradient_descent(X, y, theta_0, theta_1, learning_rate):\n    t0_deriv = 0\n    t1_deriv = 0\n    m = len(X)\n    for i in range(m):\n        t0_deriv += (theta_1 * X[i] + theta_0) - y[i]\n        t1_deriv += ((theta_1 * X[i] + theta_0) - y[i]) * X[i]\n    theta_0 -= (1/m) * learning_rate * t0_deriv\n    theta_1 -= (1/m) * learning_rate * t1_deriv\n    return theta_0, theta_1\n\ndef training(X, y, theta_0, theta_1, learning_rate, iters):\n    cost_history = [0]\n    t0_history = [0]\n    t1_history = [0]\n    for i in range(iters):\n        theta_0, theta_1 = gradient_descent(X, y, theta_0, theta_1, learning_rate)\n        t0_history.append(theta_0)\n        t1_history.append(theta_1)\n        cost = cost_function(X, y, theta_0, theta_1)\n        cost_history.append(cost)\n        if i%10 == 0:\n            print(\"iter={}, theta_0={}, theta_1={}, cost= {}\".format(i, theta_0, theta_1, cost))\n    return t0_history, t1_history, cost_history"
},
{
"code": null,
"e": 17082,
"s": 16996,
"text": "We choose learning rate equals 0.01 for 2000 iterations, and plot our cost function J"
},
{
"code": null,
"e": 17331,
"s": 17082,
"text": "t0_history, t1_history, cost_history = training(X, y, theta_0, theta_1, 0.01, 2000)\n\n# Plot the cost function\nplt.title('Cost Function J')\nplt.xlabel('No. of iterations')\nplt.ylabel('Cost')\nplt.plot(cost_history)\nplt.ylim(ymin=0)\nplt.xlim(xmin=0)\nplt.show()"
},
{
"code": null,
"e": 17486,
"s": 17331,
"text": "I found a cool way to visualize our data using Animations with Matplotlib. It takes 449 iterations for the model to come quite close to the best fit line."
},
{
"code": null,
"e": 18287,
"s": 17486,
"text": "import matplotlib.animation as animation\n\nfig = plt.figure()\nax = plt.axes()\n\n# set up our plot\nplt.ylabel(\"TMAX\")\nplt.xlabel(\"TMIN\")\nplt.title('Linear Regression')\nplt.scatter(X, y, color='gray', s=8)\nline, = ax.plot([], [], lw=2)\nplt.close()\n\n# Generate the animation data\ndef init():\n    line.set_data([], [])\n    annotation.set_text('')\n    return line, annotation\n\n# animation function. This is called sequentially\ndef animate(i):\n    x = np.linspace(-5, 20, 1000)\n    y = past_thetas[i][1]*x + past_thetas[i][0]\n    line.set_data(x, y)\n    annotation.set_text('Cost = %.2f e10' % (past_costs[i]/10000000000))\n    return line, annotation\n\nanim = animation.FuncAnimation(fig, animate, init_func=init, frames=np.arange(1,400), interval=40, blit=True)\n\nfrom IPython.display import HTML\nHTML(anim.to_html5_video())"
},
{
"code": null,
"e": 18548,
"s": 18287,
"text": "In this note, we studied the most fundamental machine learning algorithm — gradient descent. We implemented a simple linear regression with the help of the Scikit-Learning machine learning library. In the next note, we will focus on multiple linear regression."
},
{
"code": null,
"e": 18640,
"s": 18548,
"text": "The hardest part of any endeavor is the beginning, and you have passed that, so don’t stop!"
},
{
"code": null,
"e": 18976,
"s": 18640,
"text": "Lucky for us, linear regression is well-taught in almost every machine learning curriculum, and there are a decent number of solid resources out there to help us understand the different parts of a linear regression model, including the mathematics behind. Below are some more resources if you find yourself wanting to learn even more."
},
{
"code": null,
"e": 19407,
"s": 18976,
"text": "Optimizing learning rateElimination of all bad local minima in deep learning (Cornell University)Elimination of all bad local minima in deep learning (MIT)What is convergence?Why Visualize Gradient Descent Optimization Algorithms?A case study — Moneyball — Linear RegressionSplit your data into training and testing (80/20)Partial derivative in gradient descent for two variablesFree data to work on: Lionbridge AI, Dataset Search"
},
{
"code": null,
"e": 19432,
"s": 19407,
"text": "Optimizing learning rate"
},
{
"code": null,
"e": 19506,
"s": 19432,
"text": "Elimination of all bad local minima in deep learning (Cornell University)"
},
{
"code": null,
"e": 19565,
"s": 19506,
"text": "Elimination of all bad local minima in deep learning (MIT)"
},
{
"code": null,
"e": 19586,
"s": 19565,
"text": "What is convergence?"
},
{
"code": null,
"e": 19642,
"s": 19586,
"text": "Why Visualize Gradient Descent Optimization Algorithms?"
},
{
"code": null,
"e": 19687,
"s": 19642,
"text": "A case study — Moneyball — Linear Regression"
},
{
"code": null,
"e": 19737,
"s": 19687,
"text": "Split your data into training and testing (80/20)"
},
{
"code": null,
"e": 19794,
"s": 19737,
"text": "Partial derivative in gradient descent for two variables"
}
] |
🛰️ 4 Steps to Detect Coastline Changes from Satellite | by Alessio Vaccaro | Towards Data Science
|
Coasts are very dynamic systems in which the phenomena of erosion, and therefore of retreat, or of advancement of the coastline are governed by numerous meteoclimatic, geological, biological and anthropic factors.
In cases where the action of marine abrasion is greater than deposition, there are evident episodes of coastal erosion that literally lead to the disintegration and demolition of the earth’s surface.
In this article, we will use an algorithm called Canny Edge Detection on two satellite images acquired by the OLI (Operational Land Imager) sensor on Landsat 8 platform.
Through this methodology, we will be able to visualize and estimate the progress of the coastline over time of a particular European area subject to strong erosion actions: the Holderness Coast.
Here is the proposed workflow:
Let’s start! But before...
The Landsat 8 is an orbiting platform that mounts on board an 11-band multispectral sensor called OLI (Operational Land Imager).
Specifically, in this article, we will use only the bands with a resolution of 30 meters (i.e. the first 7).
The data can be downloaded free of charge, after registration, through the platform provided by the USGS: https://earthexplorer.usgs.gov/.
Moreover, as is common practice, rather than using the raw incident-sunlight data we will use reflectance, i.e. the fraction of sunlight reflected by the earth’s surfaces [0–1].
Among the various common packages, in this article we will use rasterio to easily deal with raster images, OpenCV to apply the Canny algorithm and Scikit-Learn to segment images.
Let’s define a variable that tells us the number of bands to keep and the ancillary data previously entered in a JSON:
This JSON is a collection of information about the Landsat OLI imager (created by me). A sort of instruction manual. It looks like this:
Remember that we will use only the bands with 30 m resolution, so only the first 7. If you are willing to have lower resolutions (100m) you can embed the significant TIRS 1 and TIRS 2 bands also.
As already mentioned a few lines above we will use two different acquisitions from Landsat-8 OLI:
2014/02/01
2019/07/25
To facilitate and speed up all the operations on the two acquisitions we will define an Acquisition() class that will allow us to encapsulate all the necessary functions.
During the execution of the code, this will allow us to perform some support functions such as:
GeoTIFF search in the specified path;
loading acquisitions;
acquisitions registration (aligning them);
acquisitions subsetting
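The first of these support functions, the GeoTIFF search, can be sketched with the standard library alone. The Acquisition() class itself is not shown in this excerpt, so the helper name and signature below are assumptions:

```python
import glob
import os

def find_tif_files(path):
    # Hypothetical helper: collect the GeoTIFF band files of one
    # acquisition folder, sorted so that bands 1..7 keep their order.
    print(f"Searching for 'tif' files in {path}")
    files = sorted(glob.glob(os.path.join(path, "*.tif")))
    print(f"Found {len(files)} 'tif' files")
    return files
```

In the real class each file found would then be opened (e.g. with rasterio) and stacked into a multispectral cube.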
Ok, we can now launch the entire code by using:
The result should be something like this:
Searching for 'tif' files in Data/2014-02-01Found 7 'tif' filesLoading imagesDoneSearching for 'tif' files in Data/2019-07-25Found 7 'tif' filesLoading imagesDone
Our 14 OLI images (2 acquisitions in 7 bands) are now loaded.
At this stage, after the “alignment” (or more formally registration) of the two multispectral cubes, we take care of cutting out the portion of the acquisition that doesn’t interest us.
Let’s use the function subsetImages() to “cut” the unwanted data.
Let’s therefore define the AOI (Area of Interest) and proceed to subsetting using the function subsetImages() within the Acquisition() class:
Done!
Let’s try to view all the bands of the 2019/07/25 acquisition. For purely aesthetic reasons, before making the image plot, let’s perform standardization of the images using the StandardScaler().
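The per-band standardization can be sketched as follows; the cube here is synthetic random data standing in for the actual acquisition:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
cube = rng.uniform(0.0, 0.6, size=(50, 60, 7))  # synthetic (rows, cols, bands) reflectance cube

# StandardScaler expects a 2-D (samples, features) array, so flatten the
# spatial dimensions, scale each band to zero mean / unit variance, and
# restore the original shape for plotting.
flat = cube.reshape(-1, cube.shape[-1])
scaled = StandardScaler().fit_transform(flat).reshape(cube.shape)
```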
This should be the result.
As you can see, some bands are brighter than others. This is pretty normal.
Let’s now try to visualize both acquisitions in an RGB composite obtained using bands 4 (Red), 3 (Green) and 2 (Blue).
BIAS and GAIN are defined only to get a pretty viz.
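A minimal sketch of such an RGB composite, assuming 0-based band indexing and purely illustrative GAIN/BIAS values:

```python
import numpy as np

rng = np.random.default_rng(1)
cube = rng.uniform(0.0, 0.4, size=(50, 60, 7))  # synthetic reflectance cube, OLI bands 1..7

GAIN, BIAS = 2.0, 0.05  # aesthetic stretch only, tune to taste
# OLI band 4 = Red, band 3 = Green, band 2 = Blue -> indices 3, 2, 1 here.
rgb = np.dstack([cube[..., 3], cube[..., 2], cube[..., 1]])
rgb = np.clip(rgb * GAIN + BIAS, 0.0, 1.0)  # clamp to the valid [0, 1] display range
```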
And here is the result! It is funny how the two acquisitions are particularly different in terms of reflectance.
Ok, move on to coastline detection.
In this paragraph, we will perform edge detection using Canny’s methodology.
Before going to the real detection it is necessary to prepare the dataset trying to segment it through a clustering algorithm to discriminate between oceans and lands.
At this stage we should reshape the two multispectral cubes for the clustering operations.
Let’s segment the two acquisitions through k-means (use the model you prefer).
Here are the two clusters identified representing emerged lands and water bodies.
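The segmentation step can be sketched with scikit-learn's KMeans. The cube below is synthetic, with two artificial spectral classes standing in for real water and land pixels:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
h, w, bands = 40, 50, 7
# Left half: dark "water" pixels; right half: bright "land" pixels.
cube = rng.normal(0.05, 0.01, size=(h, w, bands))
cube[:, w // 2:, :] = rng.normal(0.35, 0.01, size=(h, w - w // 2, bands))

# Reshape (rows, cols, bands) -> (pixels, bands), cluster, reshape back.
pixels = cube.reshape(-1, bands)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
mask = labels.reshape(h, w)  # 0/1 map of the two clusters
```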
Canny’s technique in a traditional key is divided into the following phases:
Noise reduction by convolution with a Gaussian filter;Image gradient calculation in the four directions (horizontal, vertical and 2 oblique);Extraction of gradient local maxima;Threshold with hysteresis for edge extraction.
Noise reduction by convolution with a Gaussian filter;
Image gradient calculation in the four directions (horizontal, vertical and 2 oblique);
Extraction of gradient local maxima;
Threshold with hysteresis for edge extraction.
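A drastically simplified, pure-NumPy sketch of these phases on a toy land/water mask. OpenCV's Canny() additionally computes the oblique gradients, real non-maximum suppression and hysteresis, all of which are collapsed into a single threshold here:

```python
import numpy as np

def gaussian_kernel1d(size=15, sigma=3.0):
    # Separable 1-D Gaussian, applied along rows and then columns.
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def toy_edges(mask, low=0.08):
    k = gaussian_kernel1d()
    pad = len(k) // 2
    smooth1d = lambda v: np.convolve(np.pad(v, pad, mode="edge"), k, mode="valid")
    # Phase 1: noise reduction by Gaussian smoothing.
    blurred = np.apply_along_axis(smooth1d, 1, mask.astype(float))
    blurred = np.apply_along_axis(smooth1d, 0, blurred)
    # Phase 2: horizontal and vertical gradients.
    gy, gx = np.gradient(blurred)
    # Phases 3-4, collapsed: keep pixels whose gradient magnitude passes a threshold.
    return np.hypot(gx, gy) > low

mask = np.zeros((40, 40))
mask[:, 20:] = 1.0  # vertical "coastline" between water (0) and land (1)
edges = toy_edges(mask)
```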
Let’s start immediately by converting the clustering result into images and then reducing the noise through a Gaussian filter with a 15x15 kernel:
After slightly blurring the images we can proceed to the execution of the real Canny technique with the OpenCV Canny() module:
In a single line of code, we obtained the gradient, extracted the local maxima and then applied the threshold with hysteresis for each acquisition.
NOTE: Play with the Canny() parameters to explore different results.
And here are the results.
And here are a few details:
As you can see from the results, Canny’s algorithm in its original pipeline works quite well, but its performance depends, as usual, on the data involved.
The clustering algorithm used, in fact, has allowed us to segment the starting multispectral cubes with performances that can certainly be improved. The use of several clustering models in parallel could overall improve the results.
🤝 If you have any doubts, feedback or requests for collaboration, please do not hesitate to contact me on Linkedin. I will be very happy to have a chat with you!
👉 For more contents like this one and to stay updated on upcoming articles don’t forget to follow me here on Medium.
👉 For any reference to this article please contact me. Thank you.
|
[
{
"code": null,
"e": 383,
"s": 172,
"text": "Coasts are very dynamic systems in which the phenomena of erosion, and therefore of retreat, or of advancement of the coastline are governed by numerous meteoclimatic, geological, biological and anthropic factors."
},
{
"code": null,
"e": 584,
"s": 383,
"text": "In cases where the action of marine abrasion is greater than deposition, there are evident episodes of coastal erosion that literally lead to the disintegration and demolition of the earth’s surface."
},
{
"code": null,
"e": 754,
"s": 584,
"text": "In this article, we will use an algorithm called Canny Edge Detection on two satellite images acquired by the OLI (Operational Land Imager) sensor on Landsat 8 platform."
},
{
"code": null,
"e": 949,
"s": 754,
"text": "Through this methodology, we will be able to visualize and estimate the progress of the coastline over time of a particular European area subject to strong erosion actions: the Holderness Coast."
},
{
"code": null,
"e": 980,
"s": 949,
"text": "Here is the proposed workflow:"
},
{
"code": null,
"e": 1007,
"s": 980,
"text": "Let’s start! But before..."
},
{
"code": null,
"e": 1136,
"s": 1007,
"text": "The Landsat 8 is an orbiting platform that mounts on board an 11-band multispectral sensor called OLI (Operational Land Imager)."
},
{
"code": null,
"e": 1245,
"s": 1136,
"text": "Specifically, in this article, we will use only the bands with a resolution of 30 meters (i.e. the first 7)."
},
{
"code": null,
"e": 1384,
"s": 1245,
"text": "The data can be downloaded free of charge, after registration, through the platform provided by the USGS: https://earthexplorer.usgs.gov/."
},
{
"code": null,
"e": 1562,
"s": 1384,
"text": "Moreover, as is common practice, rather than using the raw incident-sunlight data we will use reflectance, i.e. the fraction of sunlight reflected by the earth’s surfaces [0–1]."
},
{
"code": null,
"e": 1741,
"s": 1562,
"text": "Among the various common packages, in this article we will use rasterio to easily deal with raster images, OpenCV to apply the Canny algorithm and Scikit-Learn to segment images."
},
{
"code": null,
"e": 1860,
"s": 1741,
"text": "Let’s define a variable that tells us the number of bands to keep and the ancillary data previously entered in a JSON:"
},
{
"code": null,
"e": 1996,
"s": 1860,
"text": "This JSON is a collection of information about the Landsat OLI imager (created by me). A sort of instruction manual. It looks like this:"
},
{
"code": null,
"e": 2192,
"s": 1996,
"text": "Remember that we will use only the bands with 30 m resolution, so only the first 7. If you are willing to have lower resolutions (100m) you can embed the significant TIRS 1 and TIRS 2 bands also."
},
{
"code": null,
"e": 2290,
"s": 2192,
"text": "As already mentioned a few lines above we will use two different acquisitions from Landsat-8 OLI:"
},
{
"code": null,
"e": 2301,
"s": 2290,
"text": "2014/02/01"
},
{
"code": null,
"e": 2312,
"s": 2301,
"text": "2019/07/25"
},
{
"code": null,
"e": 2483,
"s": 2312,
"text": "To facilitate and speed up all the operations on the two acquisitions we will define an Acquisition() class that will allow us to encapsulate all the necessary functions."
},
{
"code": null,
"e": 2579,
"s": 2483,
"text": "During the execution of the code, this will allow us to perform some support functions such as:"
},
{
"code": null,
"e": 2617,
"s": 2579,
"text": "GeoTIFF search in the specified path;"
},
{
"code": null,
"e": 2639,
"s": 2617,
"text": "loading acquisitions;"
},
{
"code": null,
"e": 2682,
"s": 2639,
"text": "acquisitions registration (aligning them);"
},
{
"code": null,
"e": 2706,
"s": 2682,
"text": "acquisitions subsetting"
},
{
"code": null,
"e": 2754,
"s": 2706,
"text": "Ok, we can now launch the entire code by using:"
},
{
"code": null,
"e": 2796,
"s": 2754,
"text": "The result should be something like this:"
},
{
"code": null,
"e": 2959,
"s": 2796,
"text": "Searching for 'tif' files in Data/2014-02-01Found 7 'tif' filesLoading imagesDoneSearching for 'tif' files in Data/2019-07-25Found 7 'tif' filesLoading imagesDone"
},
{
"code": null,
"e": 3020,
"s": 2959,
"text": "Our 14 OLI images (2 acquisitions in 7 bands) are now loaded."
},
{
"code": null,
"e": 3207,
"s": 3020,
"text": "At this stage, after the “alignment” (or more formally registration) of the two multispectral cubes, we take care of cutting out the portion of the acquisition that doesn’t interest us."
},
{
"code": null,
"e": 3273,
"s": 3207,
"text": "Let’s use the function subsetImages() to “cut” the unwanted data."
},
{
"code": null,
"e": 3418,
"s": 3273,
"text": "Let’s therefore define the AOI (Area of Interest) and proceed to subsetting using the function subsetImages() within the Acquisition() class:"
},
{
"code": null,
"e": 3424,
"s": 3418,
"text": "Done!"
},
{
"code": null,
"e": 3619,
"s": 3424,
"text": "Let’s try to view all the bands of the 2019/07/25 acquisition. For purely aesthetic reasons, before making the image plot, let’s perform standardization of the images using the StandardScaler()."
},
{
"code": null,
"e": 3646,
"s": 3619,
"text": "This should be the result."
},
{
"code": null,
"e": 3722,
"s": 3646,
"text": "As you can see, some bands are brighter than others. This is pretty normal."
},
{
"code": null,
"e": 3841,
"s": 3722,
"text": "Let’s now try to visualize both acquisitions in an RGB composite obtained using bands 4 (Red), 3 (Green) and 2 (Blue)."
},
{
"code": null,
"e": 3893,
"s": 3841,
"text": "BIAS and GAIN are defined only to get a pretty viz."
},
{
"code": null,
"e": 4006,
"s": 3893,
"text": "And here is the result! It is funny how the two acquisitions are particularly different in terms of reflectance."
},
{
"code": null,
"e": 4042,
"s": 4006,
"text": "Ok, move on to coastline detection."
},
{
"code": null,
"e": 4119,
"s": 4042,
"text": "In this paragraph, we will perform edge detection using Canny’s methodology."
},
{
"code": null,
"e": 4287,
"s": 4119,
"text": "Before going to the real detection it is necessary to prepare the dataset trying to segment it through a clustering algorithm to discriminate between oceans and lands."
},
{
"code": null,
"e": 4376,
"s": 4287,
"text": "At this stage we should reshape the two multispectral cubes for the clustering operations."
},
{
"code": null,
"e": 4458,
"s": 4376,
"text": "Let’s segment the two acquisitions through k-means (use the model you prefer)."
},
{
"code": null,
"e": 4540,
"s": 4458,
"text": "Here are the two clusters identified representing emerged lands and water bodies."
},
{
"code": null,
"e": 4617,
"s": 4540,
"text": "Canny’s technique in a traditional key is divided into the following phases:"
},
{
"code": null,
"e": 4841,
"s": 4617,
"text": "Noise reduction by convolution with a Gaussian filter;Image gradient calculation in the four directions (horizontal, vertical and 2 oblique);Extraction of gradient local maxima;Threshold with hysteresis for edge extraction."
},
{
"code": null,
"e": 4896,
"s": 4841,
"text": "Noise reduction by convolution with a Gaussian filter;"
},
{
"code": null,
"e": 4984,
"s": 4896,
"text": "Image gradient calculation in the four directions (horizontal, vertical and 2 oblique);"
},
{
"code": null,
"e": 5021,
"s": 4984,
"text": "Extraction of gradient local maxima;"
},
{
"code": null,
"e": 5068,
"s": 5021,
"text": "Threshold with hysteresis for edge extraction."
},
{
"code": null,
"e": 5215,
"s": 5068,
"text": "Let’s start immediately by converting the clustering result into images and then reducing the noise through a Gaussian filter with a 15x15 kernel:"
},
{
"code": null,
"e": 5342,
"s": 5215,
"text": "After slightly blurring the images we can proceed to the execution of the real Canny technique with the OpenCV Canny() module:"
},
{
"code": null,
"e": 5490,
"s": 5342,
"text": "In a single line of code, we obtained the gradient, extracted the local maxima and then applied the threshold with hysteresis for each acquisition."
},
{
"code": null,
"e": 5559,
"s": 5490,
"text": "NOTE: Play with the Canny() parameters to explore different results."
},
{
"code": null,
"e": 5585,
"s": 5559,
"text": "And here are the results."
},
{
"code": null,
"e": 5613,
"s": 5585,
"text": "And here are a few details:"
},
{
"code": null,
"e": 5768,
"s": 5613,
"text": "As you can see from the results, Canny’s algorithm in its original pipeline works quite well, but its performance depends, as usual, on the data involved."
},
{
"code": null,
"e": 6001,
"s": 5768,
"text": "The clustering algorithm used, in fact, has allowed us to segment the starting multispectral cubes with performances that can certainly be improved. The use of several clustering models in parallel could overall improve the results."
},
{
"code": null,
"e": 6163,
"s": 6001,
"text": "🤝 If you have any doubts, feedback or requests for collaboration, please do not hesitate to contact me on Linkedin. I will be very happy to have a chat with you!"
},
{
"code": null,
"e": 6280,
"s": 6163,
"text": "👉 For more contents like this one and to stay updated on upcoming articles don’t forget to follow me here on Medium."
}
] |
How to create an unordered list with disc bullets in HTML?
|
To create an unordered list in HTML, use the <ul> tag. The unordered list starts with the <ul> tag. The list item starts with the <li> tag and can be marked as disc, square, circle, etc. The default is the bullet, which is a small black circle.
For creating an unordered list with disc bullets, use the CSS property list-style-type. We will be using the style attribute. The style attribute specifies an inline style for an element. The attribute is used with the HTML <ul> tag, with the CSS property list-style-type to add disc bullets to an unordered list.
Just keep in mind, the usage of style attribute overrides any style set globally. It will override any style set in the HTML <style> tag or external style sheet.
You can try to run the following code to create an unordered list with disc bullets in HTML −
Live Demo
<!DOCTYPE html>
<html>
<head>
<title>HTML Unordered List</title>
</head>
<body>
<h1>Developed Countries</h1>
<p>The list of developed countries:</p>
<ul style="list-style-type:disc">
<li>US</li>
<li>Australia</li>
<li>New Zealand</li>
</ul>
</body>
</html>
|
[
{
"code": null,
"e": 1301,
"s": 1062,
"text": "To create an unordered list in HTML, use the <ul> tag. The unordered list starts with the <ul> tag. The list item starts with the <li> tag and can be marked as disc, square, circle, etc. The default is the bullet, which is a small black circle."
},
{
"code": null,
"e": 1615,
"s": 1301,
"text": "For creating an unordered list with disc bullets, use the CSS property list-style-type. We will be using the style attribute. The style attribute specifies an inline style for an element. The attribute is used with the HTML <ul> tag, with the CSS property list-style-type to add disc bullets to an unordered list."
},
{
"code": null,
"e": 1778,
"s": 1615,
"text": " Just keep in mind, the usage of style attribute overrides any style set globally. It will override any style set in the HTML <style> tag or external style sheet."
},
{
"code": null,
"e": 1874,
"s": 1780,
"text": "You can try to run the following code to create an unordered list with disc bullets in HTML −"
},
{
"code": null,
"e": 1884,
"s": 1874,
"text": "Live Demo"
},
{
"code": null,
"e": 2210,
"s": 1884,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <title>HTML Unordered List</title>\n </head>\n <body>\n <h1>Developed Countries</h1>\n <p>The list of developed countries:</p>\n <ul style=\"list-style-type:disc\">\n <li>US</li>\n <li>Australia</li>\n <li>New Zealand</li>\n </ul>\n </body>\n</html>"
}
] |
Java Program to check if a Float is Infinite or Not a Number(NAN)
|
To check whether a Float is infinite, use the isInfinite() method, and to check whether it is NaN (Not a Number), use the isNaN() method.
Live Demo
public class Demo {
public static void main(String[] args) {
float value1 = (float) 1 / 0;
boolean res1 = Float.isInfinite(value1);
System.out.println("Checking for isInfinite? = "+res1);
float value2 = (float) Math.sqrt(9);
boolean res2 = Float.isNaN(value2);
System.out.println("Checking for isNan? = "+res2);
}
}
Checking for isInfinite? = true
Checking for isNan? = false
|
[
{
"code": null,
"e": 1171,
"s": 1062,
"text": "To check whether a Float is infinite, use the isInfinite() method, and to check whether it is NaN (Not a Number), use the isNaN() method."
},
{
"code": null,
"e": 1182,
"s": 1171,
"text": " Live Demo"
},
{
"code": null,
"e": 1540,
"s": 1182,
"text": "public class Demo {\n public static void main(String[] args) {\n float value1 = (float) 1 / 0;\n boolean res1 = Float.isInfinite(value1);\n System.out.println(\"Checking for isInfinite? = \"+res1);\n float value2 = (float) Math.sqrt(9);\n boolean res2 = Float.isNaN(value2);\n System.out.println(\"Checking for isNan? = \"+res2);\n }\n}"
},
{
"code": null,
"e": 1600,
"s": 1540,
"text": "Checking for isInfinite? = true\nChecking for isNan? = false"
}
] |
Drawing a network graph with networkX and Matplotlib
|
To draw a network graph with networkx and matplotlib, we can follow the steps below −
Set the figure size and adjust the padding between and around the subplots.
Make an object for a dataframe with the keys, from and to.
Get a graph containing an edgelist.
Draw a graph (Step 3) using draw() method with some node properties.
To display the figure, use show() method.
import pandas as pd
import networkx as nx
from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True
df = pd.DataFrame({'from': ['A', 'B', 'C', 'A'], 'to': ['D', 'A', 'E', 'C']})
G = nx.from_pandas_edgelist(df, 'from', 'to')
nx.draw(G, with_labels=True, node_size=100, alpha=1, linewidths=10)
plt.show()
|
Modelling Regression Trees. How to program this classic Machine... | by Diego Lopez Yse | Towards Data Science
|
Decision Trees (DTs) are probably one of the most popular Machine Learning algorithms. In my post “The Complete Guide to Decision Trees”, I describe DTs in detail: their real-life applications, different DT types and algorithms, and their pros and cons. I’ve detailed how to program Classification Trees, and now it’s the turn of Regression Trees.
Regression Trees work with numeric target variables. Unlike Classification Trees in which the target variable is qualitative, Regression Trees are used to predict continuous output variables. If you want to predict things like the probability of success of a medical treatment, the future price of a financial stock, or salaries in a given population, you can use this algorithm. Let’s see an implementation example in Python.
The Boston Housing dataset consists of the prices of houses in various places in Boston, USA. Alongside their prices, this dataset provides information such as the crime level, areas of non-retail business in the town, the age of people who own the house, and other attributes.
The variable called ‘MEDV’ indicates the prices of the houses and is the target variable. The rest of the variables are the predictors based on which we will predict the value of the house.
You can cut down the complexity of building DTs by dealing with simpler sub-steps: each individual sub-routine in a DT will connect to other ones to increase complexity, and this construction will let you reach more robust models that are easier to maintain and improve. Now, let’s build a Regression Tree (special type of DT) in Python.
Loading a data file is the easy part. The problem (and most time-consuming part) usually refers to the data preparation process: setting the right data formats, dealing with missing values and outliers, eliminating duplicates, etc.
Before loading the data, we’ll import the necessary libraries:
import pandas as pd
import numpy as np
from sklearn import datasets
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.metrics import r2_score
Now we load the dataset and convert it to a Pandas Dataframe:
boston = datasets.load_boston()
df = pd.DataFrame(boston.data)
And name the columns:
df.columns = boston.feature_names
df['MEDV'] = boston.target
First understand the dataset and describe it:
print(boston.DESCR)
df.info()
Nice: 506 records, 14 numeric variables and no missing values. We don’t need to preprocess the data and we’re ready to model.
You need to divide your given columns into two types of variables: dependent (or target variable) and independent variable (or feature variables). In our example, variable “MEDV” (the median value of owner-occupied homes) is the one we’re trying to predict.
X = df.iloc[:, 0:13].copy()
y = df.iloc[:, 13].copy()
To understand model performance, dividing the dataset into a training set and a test set is a good strategy. By splitting the dataset into two separate sets, we can train using one set and test using another.
Training set: this data is used to build your model. E.g. using the CART algorithm to create a Decision Tree.
Testing set: this data is used to see how the model performs on unseen data, as it would in a real-world situation. This data should be left completely unseen until you would like to test your model to evaluate performance.
Next, we split our dataset into 70% train and 30% test.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
Building a DT is as simple as this:
rt = DecisionTreeRegressor(criterion='mse', max_depth=5)
In this case, we only defined the splitting criterion (mean squared error) and a single hyperparameter (the maximum depth to which the tree will be built). Parameters that define the model architecture are referred to as hyperparameters, and the process of searching for the ideal model architecture (the one that maximizes model performance) is referred to as hyperparameter tuning. A hyperparameter is a parameter whose value is set before the learning process begins; it can't be learned directly from the data.
You can take a look at the rest of the hyperparameters you can tune by calling the model:
rt
Models can have many hyperparameters and there are different strategies for finding the best combination of parameters. You can take a look at some of them on this link.
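As a hedged sketch of one such strategy, here is how an exhaustive grid search with scikit-learn's GridSearchCV might look. The grid values, random_state, and the diabetes stand-in dataset are illustrative assumptions, not from the article (load_boston has been removed from recent scikit-learn releases):

```python
# Minimal sketch of hyperparameter tuning with an exhaustive grid search.
# The parameter grid below is an illustrative assumption, not a recommendation.
from sklearn.datasets import load_diabetes
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)  # stand-in for the Boston dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

param_grid = {"max_depth": [3, 5, 7, 10], "min_samples_leaf": [1, 5, 10]}
search = GridSearchCV(DecisionTreeRegressor(random_state=0), param_grid,
                      cv=5, scoring="r2")
search.fit(X_train, y_train)          # tries every combination with 5-fold CV
print(search.best_params_)            # best combination found on training folds
print(search.best_estimator_.score(X_test, y_test))  # R squared on held-out data
```

The best estimator is refit on the full training set automatically, so it can be used directly for prediction afterwards.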
Fitting your model to the training data represents the training part of the modelling process. After it is trained, the model can be used to make predictions, with a predict method call:
model_r = rt.fit(X_train, y_train)
A test dataset is a dataset that is independent of the training dataset. This test dataset is the unseen data for your model, which will help you generalize it:
y_pred = model_r.predict(X_test)
One of the biggest strengths of DTs is their interpretability. Visualizing DTs is not only a powerful way to understand your model, but also to communicate how your model works:
from sklearn import tree
import graphviz

dot_data = tree.export_graphviz(rt, feature_names=list(X), class_names=sorted(y.unique()), filled=True)
graphviz.Source(dot_data)
The variable “LSTAT” seems to be critical to define the partition of the Regression Tree. We’ll check this later once we calculate feature importances.
The quality of a model is related to how well its predictions match up against actual values. Evaluating your machine learning algorithm is an essential part of any project: how can you measure its success and when do you know that it shouldn’t be improved any more? Different machine learning algorithms have varying evaluation metrics, so let’s mention some of the main ones for regression problems:
Mean absolute error (MAE)
Is the mean of the absolute values of the individual prediction errors over all instances in the test set. It tells us how big an error we can expect on average.
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred))
Mean squared error (MSE)
Is the mean of the squared prediction errors over all instances in the test set. Because the MSE is squared, its units do not match that of the original output, and also because we are squaring the difference, the MSE will almost always be larger than the MAE: for this reason we can’t directly compare the MAE to the MSE.
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred))
The effect of the square term in the MSE equation is most apparent with the presence of outliers in our data: while each residual in MAE contributes proportionally to the total error, the error grows quadratically in MSE. This ultimately means that outliers in our data will contribute to much higher total error in the MSE than they would in the MAE, and the model will be penalized more for making predictions that differ greatly from the corresponding actual value.
Root mean squared error (RMSE)
Is the square root of the mean of the squared errors. By squaring the errors before we calculate their mean and then taking the square root of the mean, we arrive at a measure of the size of the error that gives more weight to large but infrequent errors than the mean does. We can also compare RMSE and MAE to determine whether the forecast contains large but infrequent errors: the larger the difference between RMSE and MAE, the more inconsistent the error size.
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
R Squared score (R2)
Explains in percentage terms the amount of variation in the response variable that is due to variation in the feature variables. R Squared can take any values between 0 to 1, and although it provides some useful insights regarding the regression model, you shouldn’t rely only on this measure for the assessment of your model.
print('R Squared Score is:', r2_score(y_test, y_pred))
The most common interpretation of R Squared is how well the regression model fits the observed data. In our example, an R Squared of 0.74 reveals that the model explains 74% of the variation in the data. Although a higher R Squared indicates a better fit for the model, it's not always the case that a high measure is good for the regression model: the quality of the statistical measure depends on many factors, such as the nature of the variables employed in the model, the units of measure of the variables, and the applied data transformation.
Feature importance
Another key metric consists of assigning scores to input features of a predictive model, indicating the relative importance of each feature when making a prediction. Feature importance provides insights into the data, the model, and represents the basis for dimensionality reduction and feature selection, which can improve the performance of a predictive model. The more an attribute is used to make key decisions with the DT, the higher its relative importance.
for importance, name in sorted(zip(rt.feature_importances_, X_train.columns), reverse=True):
    print(name, importance)
As highlighted in the visualization, the variable “LSTAT” has a higher importance in relation to other variables (being the main feature of the model). Let’s see that on a plot:
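A minimal, self-contained sketch of such a plot is below. The diabetes dataset stands in for Boston (which newer scikit-learn versions no longer ship), and the tree depth and headless backend are assumptions for illustration:

```python
# Hedged sketch: horizontal bar plot of a regression tree's feature importances.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.tree import DecisionTreeRegressor

data = load_diabetes()
X = pd.DataFrame(data.data, columns=data.feature_names)
rt = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, data.target)

# feature_importances_ of a fitted tree sums to 1 across all features
imp = pd.Series(rt.feature_importances_, index=X.columns).sort_values()
imp.plot(kind="barh")
plt.xlabel("relative importance")
plt.tight_layout()
plt.savefig("importances.png")
```

Sorting the series first makes the most important features appear at the top of the bar chart.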
Features “LSTAT” and “RM” account for more than 80% of the importance for making predictions.
We can only compare our model’s error metrics to those of a competing model (e.g. R Squared scores of 2 different models), and although these measures provide valuable insights regarding the model’s performance, always remember:
Just because a forecast has been accurate in the past, it doesn’t mean it will be accurate in the future.
We’ve covered several steps during our modelling, and each one of them is a discipline on its own: exploratory data analysis, feature engineering, or hyperparameter tuning are all extensive and complex aspects of any machine learning model. You should consider going deeper into those subjects.
One important aspect to look at regarding Decision Trees is the way they partition the data space in comparison to other algorithms. If you had chosen to solve the Boston housing price prediction with a linear regression, you would have visualized a graph like the following:
A linear regression will search for the linear relationship between the target and its predictor. In this example, both variables ("MEDV" and "RM") seem linearly related, which is why this method may work relatively well, but reality often shows non-linear relationships. Let's see how a Regression Tree would map the same relationship between target and predictor:
In this example, a Regression Tree that uses MSE as partition criteria and a max_depth of 5 divides the data space in a completely different way, identifying relationships that a linear regression can’t fit.
The way a Decision Tree partitions the data space looking to optimize a given criterion will depend not only on the criterion itself (e.g. MSE or MAE as the partition criterion), but on the setup of all hyperparameters. Hyperparameter optimization defines the way a Decision Tree works, and ultimately its performance. Some hyperparameters deeply affect the performance of the model, and finding their right levels is critical to reaching the best possible performance. In the example below, you can see how the hyperparameter max_depth has a huge influence on the Regression Tree's R squared score when set between 0 and 10, but above 10, any level you choose will have no impact on it:
In order to overcome the fact that you may overfit your model by trying to find the “perfect” hyperparameter levels for your DT, you should consider exploring ensemble methods. Ensemble methods combine several DTs to produce better predictive performance than single DTs. The main principle behind the ensemble model is that a group of weak learners come together to form a strong learner, significantly improving the performance of a single DT. They are used to decrease the model’s variance and bias and improve predictions. Now that you saw how a Decision Tree works, I suggest you move forward with ensemble methods like Bagging or Boosting.
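As a hedged illustration of the ensemble idea, the sketch below compares a single tree to a bagged ensemble of trees (a random forest). The dataset, tree depth, and number of estimators are illustrative assumptions, not tuned values:

```python
# Minimal sketch: single regression tree vs. a bagged ensemble of trees.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

single = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_train, y_train)
forest = RandomForestRegressor(n_estimators=200, max_depth=5,
                               random_state=0).fit(X_train, y_train)

# Averaging many trees typically reduces variance relative to one tree
print("single tree R2:", single.score(X_test, y_test))
print("forest R2:     ", forest.score(X_test, y_test))
```

On most splits of this dataset, the forest's held-out R squared beats the single tree's, which is the variance-reduction effect ensembles are designed for.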
Interested in these topics? Follow me on Linkedin or Twitter
|
What is the difference between GET and POST in Python CGI Programming?
|
You must have come across many situations where you need to pass some information from your browser to the web server and ultimately to your CGI program. Most frequently, the browser uses two methods to pass this information to the web server: the GET method and the POST method.
Passing Information using GET method
The GET method sends the encoded user information appended to the page request. The page and the encoded information are separated by the ? character as follows −
http://www.test.com/cgi-bin/hello.py?key1=value1&key2=value2
The GET method is the default method to pass information from the browser to the web server, and it produces a long string that appears in your browser's Location: box. Never use the GET method if you have a password or other sensitive information to pass to the server. The GET method has a size limitation: only 1024 characters can be sent in a request string. The GET method sends information using the QUERY_STRING header, and it will be accessible in your CGI program through the QUERY_STRING environment variable.
You can pass information by simply concatenating key and value pairs along with any URL or you can use HTML <FORM> tags to pass information using GET method.
Here is a simple URL, which passes two values to hello_get.py program using GET method.
/cgi-bin/hello_get.py?first_name=ZARA&last_name=ALI
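As a side note, the key/value pairs in such a query string can be decoded with the standard library. The sketch below uses Python 3's urllib.parse (not part of the original Python 2 CGI script) to show how the query string above breaks apart; in a real CGI script the server would supply the string via os.environ["QUERY_STRING"]:

```python
# How a GET query string decodes into key/value pairs.
from urllib.parse import parse_qs

query_string = "first_name=ZARA&last_name=ALI"
params = parse_qs(query_string)  # each key maps to a list of values
print(params["first_name"][0], params["last_name"][0])  # ZARA ALI
```

parse_qs returns lists because a key may legitimately appear more than once in a query string.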
Below is hello_get.py script to handle input given by web browser. We are going to use cgi module, which makes it very easy to access passed information −
#!/usr/bin/python3
# Import modules for CGI handling
import cgi, cgitb
# Enable CGI error reporting in the browser
cgitb.enable()
# Create instance of FieldStorage
form = cgi.FieldStorage()
# Get data from fields
first_name = form.getvalue('first_name')
last_name = form.getvalue('last_name')
print("Content-type:text/html\r\n\r\n")
print("<html>")
print("<head>")
print("<title>Hello - Second CGI Program</title>")
print("</head>")
print("<body>")
print("<h2>Hello %s %s</h2>" % (first_name, last_name))
print("</body>")
print("</html>")
This would generate the following result −
Hello ZARA ALI
This example passes two values using an HTML form and a submit button. We use the same CGI script hello_get.py to handle this input.
<form action = "/cgi-bin/hello_get.py" method = "get">
First Name: <input type = "text" name = "first_name"> <br />
Last Name: <input type = "text" name = "last_name" />
<input type = "submit" value = "Submit" />
</form>
Here is the actual output of the above form: enter a first and last name, then click the Submit button to see the result.
A generally more reliable method of passing information to a CGI program is the POST method. This packages the information in exactly the same way as the GET method, but instead of sending it as a text string after a ? in the URL, it sends it as a separate message. This message comes into the CGI script in the form of standard input.
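Because the POST payload arrives on standard input, the body can be read and decoded the same way as a query string. A sketch that simulates the stdin a server would provide (a real CGI script would read sys.stdin, up to CONTENT_LENGTH bytes):

```python
import io
from urllib.parse import parse_qs

# Simulate the request body a web server would pipe to the script's stdin
fake_stdin = io.StringIO("first_name=ZARA&last_name=ALI")
body = fake_stdin.read()

# The POST body uses the same key=value&... encoding as a query string
params = parse_qs(body)
print("Hello %s %s" % (params["first_name"][0], params["last_name"][0]))
```
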
Below is the same hello_get.py script, which handles the GET as well as the POST method.
#!/usr/bin/python3
# Import modules for CGI handling
import cgi, cgitb
# Enable CGI error reporting in the browser
cgitb.enable()
# Create instance of FieldStorage
form = cgi.FieldStorage()
# Get data from fields
first_name = form.getvalue('first_name')
last_name = form.getvalue('last_name')
print("Content-type:text/html\r\n\r\n")
print("<html>")
print("<head>")
print("<title>Hello - Second CGI Program</title>")
print("</head>")
print("<body>")
print("<h2>Hello %s %s</h2>" % (first_name, last_name))
print("</body>")
print("</html>")
Let us again take the same example as above, which passes two values using an HTML form and a submit button. We use the same CGI script hello_get.py to handle this input.
<form action = "/cgi-bin/hello_get.py" method = "post">
First Name: <input type = "text" name = "first_name"><br />
Last Name: <input type = "text" name = "last_name" />
<input type = "submit" value = "Submit" />
</form>
Here is the actual output of the above form: enter a first and last name, then click the Submit button to see the result.
Java Program to Count the Total Number of Vowels and Consonants in a String - GeeksforGeeks
18 Aug, 2021
Given a string, count the total number of vowels and consonants it contains. The string may contain special characters, white spaces, or a combination of all. The idea is to iterate over the string and check whether each character is present in a reference string of vowels. If a character is present in the reference, increment the number of vowels by 1; otherwise, increment the number of consonants by 1.
Example:
Input : String = "GeeksforGeeks"
Output: Number of Vowels = 5
Number of Consonants = 8
Input : String = "Alice"
Output: Number of Vowels = 3
Number of Consonants = 2
Approach:
Create two variables, vow and cons, and initialize them with 0.
Start traversing the string.
If the i-th character is a vowel, increment vow by 1.
Else if the character is a consonant, increment cons by 1.
Example
Java
// Java Program to Count Total Number of Vowels
// and Consonants in a String

// Importing all utility classes
import java.util.*;

// Main class
class GFG {

    // Method 1
    // Prints the number of vowels and consonants
    public static void count(String str)
    {
        // Initialize counters with zero
        // as nothing has been traversed yet
        int vow = 0, con = 0;

        // Declaring a reference String
        // which contains all the vowels
        String ref = "aeiouAEIOU";

        for (int i = 0; i < str.length(); i++) {

            // Skip any special characters present
            // in the given string
            if ((str.charAt(i) >= 'A' && str.charAt(i) <= 'Z')
                || (str.charAt(i) >= 'a' && str.charAt(i) <= 'z')) {

                if (ref.indexOf(str.charAt(i)) != -1)
                    vow++;
                else
                    con++;
            }
        }

        // Print the number of vowels and consonants
        // on the console
        System.out.println("Number of Vowels = " + vow
                           + "\nNumber of Consonants = " + con);
    }

    // Method 2
    // Main driver method
    public static void main(String[] args)
    {
        // Custom string as input
        String str = "#GeeksforGeeks";

        // Calling method 1
        count(str);
    }
}
Number of Vowels = 5
Number of Consonants = 8
Time Complexity: O(n), where n is the length of the string (each character is checked against a constant-length reference string of vowels).
MVC Framework - Quick Guide
The Model-View-Controller (MVC) is an architectural pattern that separates an application into three main logical components: the model, the view, and the controller. Each of these components is built to handle specific development aspects of an application. MVC is one of the most frequently used industry-standard web development frameworks for creating scalable and extensible projects.
Following are the components of MVC −
The Model component corresponds to all the data-related logic that the user works with. This can represent either the data that is being transferred between the View and Controller components or any other business logic-related data. For example, a Customer object will retrieve the customer information from the database, manipulate it and update its data back to the database, or use it to render data.
The View component is used for all the UI logic of the application. For example, the Customer view will include all the UI components such as text boxes, dropdowns, etc. that the final user interacts with.
Controllers act as an interface between Model and View components to process all the business logic and incoming requests, manipulate data using the Model component and interact with the Views to render the final output. For example, the Customer controller will handle all the interactions and inputs from the Customer View and update the database using the Customer Model. The same controller will be used to view the Customer data.
ASP.NET supports three major development models: Web Pages, Web Forms and MVC (Model View Controller). The ASP.NET MVC framework is a lightweight, highly testable presentation framework that is integrated with the existing ASP.NET features, such as master pages, authentication, etc. Within .NET, this framework is defined in the System.Web.Mvc assembly. The latest version of the MVC Framework is 5.0. We use Visual Studio to create ASP.NET MVC applications; the framework is available as a project template in Visual Studio.
ASP.NET MVC provides the following features −
Ideal for developing complex but lightweight applications.
Provides an extensible and pluggable framework, which can be easily replaced and customized. For example, if you do not wish to use the in-built Razor or ASPX View Engine, then you can use any other third-party view engines or even customize the existing ones.
Utilizes the component-based design of the application by logically dividing it into Model, View, and Controller components. This enables the developers to manage the complexity of large-scale projects and work on individual components.
MVC structure enhances the test-driven development and testability of the application, since all the components can be designed interface-based and tested using mock objects. Hence, ASP.NET MVC Framework is ideal for projects with large team of web developers.
Supports all the existing vast ASP.NET functionalities, such as Authorization and Authentication, Master Pages, Data Binding, User Controls, Memberships, ASP.NET Routing, etc.
Does not use the concept of View State (which is present in ASP.NET). This helps in building applications, which are lightweight and gives full control to the developers.
Thus, you can consider MVC Framework as a major framework built on top of ASP.NET providing a large set of added functionality focusing on component-based development and testing.
In the last chapter, we studied the high-level architecture flow of MVC Framework. Now let us take a look at how the execution of an MVC application takes place when there is a certain request from the client. The following diagram illustrates the flow.
Step 1 − The client browser sends request to the MVC Application.
Step 2 − Global.asax receives this request and performs routing based on the URL of the incoming request using the RouteTable, RouteData, UrlRoutingModule and MvcRouteHandler objects.
Step 3 − This routing operation calls the appropriate controller and executes it using the IControllerFactory object and MvcHandler object's Execute method.
Step 4 − The Controller processes the data using the Model and invokes the appropriate method using the ControllerActionInvoker object.
Step 5 − The processed Model is then passed to the View, which in turn renders the final output.
MVC and ASP.NET Web Forms are inter-related yet different models of development, depending on the requirement of the application and other factors. At a high level, you can consider that MVC is an advanced and sophisticated web application framework designed with separation of concerns and testability in mind. Both the frameworks have their advantages and disadvantages depending on specific requirements. This concept can be visualized using the following diagram −
Let us jump in and create our first MVC application using Views and Controllers. Once we have a small hands-on experience on how a basic MVC application works, we will learn all the individual components and concepts in the coming chapters.
Step 1 − Start your Visual Studio and select File → New → Project. Select Web → ASP.NET MVC Web Application and name this project as FirstMVCApplication. Select the Location as C:\MVC. Click OK.
Step 2 − This will open the Project Template option. Select Empty template and View Engine as Razor. Click OK.
Now, Visual Studio will create our first MVC project as shown in the following screenshot.
Step 3 − Now we will create the first Controller in our application. Controllers are just simple C# classes, which contains multiple public methods, known as action methods. To add a new Controller, right-click the Controllers folder in our project and select Add → Controller. Name the Controller as HomeController and click Add.
This will create a class file HomeController.cs under the Controllers folder with the following default code.
using System;
using System.Web.Mvc;
namespace FirstMVCApplication.Controllers {
public class HomeController : Controller {
public ViewResult Index() {
return View();
}
}
}
The above code basically defines a public method Index inside our HomeController and returns a ViewResult object. In the next steps, we will learn how to return a View using the ViewResult object.
Step 4 − Now we will add a new View to our Home Controller. To add a new View, right-click the Views folder and click Add → View.
Step 5 − Name the new View as Index and the View Engine as Razor (CSHTML). Click Add.
This will add a new cshtml file inside Views/Home folder with the following code −
@{
Layout = null;
}
<html>
<head>
<meta name = "viewport" content = "width = device-width" />
<title>Index</title>
</head>
<body>
<div>
</div>
</body>
</html>
Step 6 − Modify the above View's body content with the following code −
<body>
<div>
Welcome to My First MVC Application (<b>From Index View</b>)
</div>
</body>
Step 7 − Now run the application. This will give you the following output in the browser. This output is rendered based on the content in our View file. The application first calls the Controller which in turn calls this View and produces the output.
In Step 7, the output we received was based on the content of our View file and had no interaction with the Controller. Moving a step forward, we will now create a small example to display a Welcome message with the current time using an interaction of View and Controller.
Step 8 − MVC uses the ViewBag object to pass data between Controller and View. Open the HomeController.cs and edit the Index function to the following code.
public ViewResult Index() {
int hour = DateTime.Now.Hour;
ViewBag.Greeting =
hour < 12
? "Good Morning. Time is" + DateTime.Now.ToShortTimeString()
: "Good Afternoon. Time is " + DateTime.Now.ToShortTimeString();
return View();
}
In the above code, we set the value of the Greeting attribute of the ViewBag object. The code checks the current hour and returns the Good Morning/Afternoon message accordingly using return View() statement. Note that here Greeting is just an example attribute that we have used with ViewBag object. You can use any other attribute name in place of Greeting.
Step 9 − Open the Index.cshtml and copy the following code in the body section.
<body>
<div>
@ViewBag.Greeting (<b>From Index View</b>)
</div>
</body>
In the above code, we are accessing the value of Greeting attribute of the ViewBag object using @ (which would be set from the Controller).
Step 10 − Now run the application again. This time our code will run the Controller first, set the ViewBag and then render it using the View code. Following will be the output.
Now that we have already created a sample MVC application, let us understand the folder structure of an MVC project. We will create a new MVC project to learn this.
In your Visual Studio, open File → New → Project and select ASP.NET MVC Application. Name it as MVCFolderDemo.
Click OK. In the next window, select Internet Application as the Project Template and click OK.
This will create a sample MVC application as shown in the following screenshot.
Note − Files present in this project are coming out of the default template that we have selected. These may change slightly as per different versions.
This folder will contain all the Controller classes. MVC requires the name of all the controller files to end with Controller.
In our example, the Controllers folder contains two class files: AccountController and HomeController.
This folder will contain all the Model classes, which are used to work on application data.
In our example, the Models folder contains AccountModels. You can open and look at the code in this file to see how the data model is created for managing accounts in our example.
This folder stores the HTML files related to application display and user interface. It contains one folder for each controller.
In our example, you will see three sub-folders under Views, namely Account, Home and Shared which contains html files specific to that view area.
This folder contains all the files which are needed during the application load.
For e.g., the RouteConfig file is used to route the incoming URL to the correct Controller and Action.
This folder contains all the static files, such as css, images, icons, etc.
The Site.css file inside this folder is the default styling that the application applies.
This folder stores all the JS files in the project. By default, Visual Studio adds MVC, jQuery and other standard JS libraries.
The component ‘Model’ is responsible for managing the data of the application. It responds to the request from the view and it also responds to instructions from the controller to update itself.
Model classes can either be created manually or generated from database entities. We are going to see a lot of examples for manually creating Models in the coming chapters. Thus in this chapter, we will try the other option, i.e. generating from the database so that you have hands-on experience on both the methods.
Connect to SQL Server and create a new database.
Now run the following queries to create new tables.
CREATE TABLE [dbo].[Student](
[StudentID] INT IDENTITY (1,1) NOT NULL,
[LastName] NVARCHAR (50) NULL,
[FirstName] NVARCHAR (50) NULL,
[EnrollmentDate] DATETIME NULL,
PRIMARY KEY CLUSTERED ([StudentID] ASC)
)
CREATE TABLE [dbo].[Course](
[CourseID] INT IDENTITY (1,1) NOT NULL,
[Title] NVARCHAR (50) NULL,
[Credits] INT NULL,
PRIMARY KEY CLUSTERED ([CourseID] ASC)
)
CREATE TABLE [dbo].[Enrollment](
[EnrollmentID] INT IDENTITY (1,1) NOT NULL,
[Grade] DECIMAL(3,2) NULL,
[CourseID] INT NOT NULL,
[StudentID] INT NOT NULL,
PRIMARY KEY CLUSTERED ([EnrollmentID] ASC),
CONSTRAINT [FK_dbo.Enrollment_dbo.Course_CourseID] FOREIGN KEY ([CourseID])
REFERENCES [dbo].[Course]([CourseID]) ON DELETE CASCADE,
CONSTRAINT [FK_dbo.Enrollment_dbo.Student_StudentID] FOREIGN KEY ([StudentID])
REFERENCES [dbo].[Student]([StudentID]) ON DELETE CASCADE
)
After creating the database and setting up the tables, you can go ahead and create a new MVC Empty Application. Right-click on the Models folder in your project and select Add → New Item. Then, select ADO.NET Entity Data Model.
In the next wizard, choose Generate From Database and click Next. Set the Connection to your SQL database.
Select your database and click Test Connection. A screen similar to the following will follow. Click Next.
Select Tables, Views, and Stored Procedures and Functions. Click Finish. You will see the Model View created as shown in the following screenshot.
The above operations would automatically create a Model file for all the database entities. For example, the Student table that we created will result in a Model file Student.cs with the following code −
namespace MvcModelExample.Models {
using System;
using System.Collections.Generic;
public partial class Student {
public Student() {
this.Enrollments = new HashSet<Enrollment>();
}
public int StudentID { get; set; }
public string LastName { get; set; }
public string FirstName { get; set; }
public Nullable<System.DateTime> EnrollmentDate { get; set; }
public virtual ICollection<Enrollment> Enrollments { get; set; }
}
}
ASP.NET MVC controllers are responsible for controlling the flow of application execution. When you make a request (that is, request a page) to an MVC application, a controller is responsible for returning the response to that request. A controller can perform one or more actions, and a controller action can return different types of action results to a particular request.
The Controller is responsible for controlling the application logic and acts as the coordinator between the View and the Model. The Controller receives an input from the users via the View, then processes the user's data with the help of Model and passes the results back to the View.
To create a Controller −
Step 1 − Create an MVC Empty Application and then right-click on the Controller folder in your MVC application.
Step 2 − Select the menu option Add → Controller. After selection, the Add Controller dialog is displayed. Name the Controller as DemoController.
A Controller class file will be created as shown in the following screenshot.
In the MVC Framework, controller classes must implement the IController interface from the System.Web.Mvc namespace.
public interface IController {
void Execute(RequestContext requestContext);
}
This is a very simple interface. The sole method, Execute, is invoked when a request is targeted at the controller class. The MVC Framework knows which controller class has been targeted in a request by reading the value of the controller property generated by the routing data.
Step 1 − Add a new class file and name it as DemoCustomController. Now modify this class to implement the IController interface.
Step 2 − Copy the following code inside this class.
public class DemoCustomController:IController {
public void Execute(System.Web.Routing.RequestContext requestContext) {
var controller = (string)requestContext.RouteData.Values["controller"];
var action = (string)requestContext.RouteData.Values["action"];
requestContext.HttpContext.Response.Write(
string.Format("Controller: {0}, Action: {1}", controller, action));
}
}
Step 3 − Run the application and you will receive the following output.
As seen in the initial introductory chapters, View is the component involved with the application's User Interface. These Views are generally bind from the model data and have extensions such as html, aspx, cshtml, vbhtml, etc. In our First MVC Application, we had used Views with Controller to display data to the final user. For rendering these static and dynamic content to the browser, MVC Framework utilizes View Engines. View Engines are basically markup syntax implementation, which are responsible for rendering the final HTML to the browser.
MVC Framework comes with two built-in view engines −
Razor Engine − Razor is a markup syntax that enables embedding server-side C# or VB code in web pages. This server-side code can be used to create dynamic content when the web page is being loaded. Razor is an advanced engine compared to the ASPX engine and was launched in the later versions of MVC.
ASPX Engine − ASPX or the Web Forms engine is the default view engine that is included in the MVC Framework since the beginning. Writing a code with this engine is similar to writing a code in ASP.NET Web Forms.
Following are small code snippets comparing the Razor and ASPX engines.
@Html.ActionLink("Create New", "UserAdd")
<%: Html.ActionLink("SignUp", "SignUp") %>
Out of these two, Razor is an advanced View Engine as it comes with compact syntax, test driven development approaches, and better security features. We will use Razor engine in all our examples since it is the most dominantly used View engine.
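To illustrate the compact Razor syntax described above, here is a minimal sketch of a Razor view that mixes server-side C# with HTML (the string-list model used here is purely illustrative):

```cshtml
@model IEnumerable<string>
@{
    @* Server-side code block: runs when the page is rendered *@
    var heading = "Blog List";
}
<h2>@heading</h2>
<ul>
    @foreach (var name in Model) {
        <li>@name</li>
    }
</ul>
```

Notice how the `@` character switches between HTML markup and C# code without the `<% %>` delimiters that the ASPX engine requires.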
These View Engines can be coded and implemented in following two types −
Strongly typed
Dynamic typed
These approaches are similar to early-binding and late-binding respectively, in which the models will be bound to the View strongly or dynamically.
To understand this concept, let us create a sample MVC application (follow the steps in the previous chapters) and add a Controller class file named ViewDemoController.
Now, copy the following code in the controller file −
using System.Collections.Generic;
using System.Web.Mvc;
namespace ViewsInMVC.Controllers {
public class ViewDemoController : Controller {
public class Blog {
public string Name;
public string URL;
}
private readonly List<Blog> topBlogs = new List<Blog> {
new Blog { Name = "Joe Delage", URL = "http://tutorialspoint/joe/"},
new Blog {Name = "Mark Dsouza", URL = "http://tutorialspoint/mark"},
new Blog {Name = "Michael Shawn", URL = "http://tutorialspoint/michael"}
};
public ActionResult StonglyTypedIndex() {
return View(topBlogs);
}
public ActionResult IndexNotStonglyTyped() {
return View(topBlogs);
}
}
}
In the above code, we have two action methods defined: StonglyTypedIndex and IndexNotStonglyTyped. We will now add Views for these action methods.
Right-click on StonglyTypedIndex action method and click Add View. In the next window, check the 'Create a strongly-typed view' checkbox. This will also enable the Model Class and Scaffold template options. Select List from Scaffold Template option. Click Add.
A View file similar to the following screenshot will be created. As you can note, it has included the ViewDemoController's Blog model class at the top. You will also be able to use IntelliSense in your code with this approach.
To create dynamic typed views, right-click the IndexNotStonglyTyped action and click Add View.
This time, do not select the 'Create a strongly-typed view' checkbox.
The resulting view will have the following code −
@model dynamic
@{
ViewBag.Title = "IndexNotStonglyTyped";
}
<h2>Index Not Strongly Typed</h2>
<p>
<ul>
@foreach (var blog in Model) {
<li>
<a href = "@blog.URL">@blog.Name</a>
</li>
}
</ul>
</p>
As you can see in the above code, this time the Blog model class is not added to the View as in the previous case. Also, you will not be able to use IntelliSense, because the binding is done at run-time.
Strongly typed Views are considered a better approach, since we already know what data is being passed as the Model; with dynamic typed Views the data gets bound at runtime, which may lead to runtime errors if something changes in the linked model.
Layouts are used in MVC to provide a consistent look and feel on all the pages of our application. It is the same as defining the Master Pages but MVC provides some more functionalities.
Step 1 − Create a sample MVC application with Internet application as Template and create a Content folder in the root directory of the web application.
Step 2 − Create a Style Sheet file named MyStyleSheet.css under the Content folder. This CSS file will contain all the CSS classes necessary for a consistent web application page design.
Step 3 − Create a Shared folder under the View folder.
Step 4 − Create a MasterLayout.cshtml file under the Shared folder. The file MasterLayout.cshtml represents the layout of each page in the application. Right-click on the Shared folder in the Solution Explorer, then go to Add item and click View. Copy the following layout code.
<!DOCTYPE html>
<html lang = "en">
<head>
<meta charset = "utf-8" />
<title>@ViewBag.Title - Tutorial Point</title>
<link href = "~/favicon.ico" rel = "shortcut icon" type = "image/x-icon" />
<link rel = "stylesheet" href = "@Url.Content("~/Content/MyStyleSheet.css")" />
</head>
<body>
<header>
<div class = "content-wrapper">
<div class = "float-left">
<p class = "site-title">
@Html.ActionLink("Tutorial Point", "Index", "Home")
</p>
</div>
<div class = "float-right">
<nav>
<ul id = "menu">
<li>@Html.ActionLink("Home", "Index", "Home")</li>
<li>@Html.ActionLink("About", "About", "Home")</li>
</ul>
</nav>
</div>
</div>
</header>
<div id = "body">
@RenderSection("featured", required: false)
<section class = "content-wrapper main-content clear-fix">
@RenderBody()
</section>
</div>
<footer>
<div class = "content-wrapper">
<div class = "float-left">
<p>© @DateTime.Now.Year - Tutorial Point</p>
</div>
</div>
</footer>
</body>
</html>
In this layout, we are using an HTML helper method and some other system-defined methods, so let us look at these methods one by one.
Url.Content() − This method specifies the path of any file that we are using in our View code. It takes the virtual path as input and returns the absolute path.
Html.ActionLink() − This method renders HTML links that links to action of some controller. The first parameter specifies the display name, the second parameter specifies the Action name, and the third parameter specifies the Controller name.
RenderSection() − Specifies the name of the section that we want to display at that location in the template.
RenderBody() − Renders the actual body of the associated View.
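To see how RenderSection and RenderBody are consumed, a content page can optionally supply the featured section. A hypothetical Index.cshtml using this layout might look like the following sketch:

```cshtml
@{
    ViewBag.Title = "Home Page";
    Layout = "~/Views/Shared/MasterLayout.cshtml";
}
@section featured {
    @* Rendered where the layout calls RenderSection("featured", ...) *@
    <section class = "featured">
        <h1>Welcome to Tutorial Point</h1>
    </section>
}
<p>This markup is rendered where the layout calls RenderBody().</p>
```

Because the layout declared the section with required: false, pages that omit the @section featured block still render without error.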
Step 5 − Finally, open the _ViewStart.cshtml file inside Views folder and add the following code −
@{
Layout = "~/Views/Shared/MasterLayout.cshtml";
}
If the file is not present, you can create the file with this name.
Step 6 − Run the application now to see the modified home page.
ASP.NET MVC Routing enables the use of URLs that are descriptive of the user actions and are more easily understood by the users. At the same time, Routing can be used to hide data which is not intended to be shown to the final user.
For example, in an application that does not use routing, the user would be shown the URL as http://myapplication/Users.aspx?id=1, which would correspond to the file Users.aspx inside the myapplication path with ID sent as 1. Generally, we would not like to show such file names to our final user.
To handle MVC URLs, ASP.NET platform uses the routing system, which lets you create any pattern of URLs you desire, and express them in a clear and concise manner. Each route in MVC contains a specific URL pattern. This URL pattern is compared to the incoming request URL and if the URL matches this pattern, it is used by the routing engine to further process the request.
To understand the MVC routing, consider the following URL −
http://servername/Products/Phones
In the above URL, Products is the first segment and Phones is the second segment, which can be expressed in the following format −
{controller}/{action}
The MVC framework automatically considers the first segment as the Controller name and the second segment as one of the actions inside that Controller.
Note − If the name of your Controller is ProductsController, you would only mention Products in the routing URL. The MVC framework automatically understands the Controller suffix.
Routes are defined in the RouteConfig.cs file which is present under the App_Start project folder.
You will see the following code inside this file −
public class RouteConfig {
public static void RegisterRoutes(RouteCollection routes) {
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
name: "Default",
url: "{controller}/{action}/{id}",
defaults: new { controller = "Home", action = "Index",
id = UrlParameter.Optional }
);
}
}
This RegisterRoutes method is called from Global.asax when the application is started. The Application_Start method in Global.asax calls RegisterRoutes, whose MapRoute call sets the default Controller and its action (a method inside the Controller class).
To modify the above default mapping as per our example, change the following line of code −
defaults: new { controller = "Products", action = "Phones", id = UrlParameter.Optional }
This setting will pick the ProductsController and call the Phones method inside that. Similarly, if you have another method such as Electronics inside ProductsController, the URL for it would be −
http://servername/Products/Electronics
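Beyond modifying the default pattern, you can also register additional routes with static segments. The following sketch (the route name and "Shop" segment are illustrative) would map http://servername/Shop/Phones to the same ProductsController.Phones action:

```csharp
// Register BEFORE the "Default" route, since routes are evaluated in order
routes.MapRoute(
    name: "ShopRoute",             // must be unique among registered routes
    url: "Shop/{action}",          // static "Shop" segment replaces {controller}
    defaults: new { controller = "Products", action = "Phones" }
);
```

Because the routing engine uses the first route whose pattern matches the incoming URL, custom routes like this one should generally be added above the catch-all Default route inside RegisterRoutes.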
In ASP.NET MVC, controllers define action methods and these action methods generally have a one-to-one relationship with UI controls, such as clicking a button or a link, etc. For example, in one of our previous examples, the UserController class contained methods UserAdd, UserDelete, etc.
However, many times we would like to perform some action before or after a particular operation. For achieving this functionality, ASP.NET MVC provides a feature to add pre- and post-action behaviors on the controller's action methods.
ASP.NET MVC framework supports the following action filters −
Action Filters − Action filters are used to implement logic that gets executed before and after a controller action executes. We will look at Action Filters in detail in this chapter.
Authorization Filters − Authorization filters are used to implement authentication and authorization for controller actions.
Result Filters − Result filters contain logic that is executed before and after a view result is executed. For example, you might want to modify a view result right before the view is rendered to the browser.
Exception Filters − Exception filters are the last type of filter to run. You can use an exception filter to handle errors raised by either your controller actions or controller action results. You also can use exception filters to log errors.
Action filters are one of the most commonly used filters to perform additional data processing, or manipulating the return values or cancelling the execution of action or modifying the view structure at run time.
Action Filters are additional attributes that can be applied either to an individual controller action or to the entire controller to modify the way in which an action is executed. These attributes are special .NET classes derived from System.Attribute which can be attached to classes, methods, properties, and fields.
ASP.NET MVC provides the following action filters −
Output Cache − This action filter caches the output of a controller action for a specified amount of time.
Handle Error − This action filter handles errors raised when a controller action executes.
Authorize − This action filter enables you to restrict access to a particular user or role.
Now, we will see the code example to apply these filters on an example controller ActionFilterDemoController. (ActionFilterDemoController is just used as an example. You can use these filters on any of your controllers.)
Example − Specifies the return value to be cached for 10 seconds.
public class ActionFilterDemoController : Controller {
[HttpGet]
[OutputCache(Duration = 10)]
public string Index() {
return DateTime.Now.ToString("T");
}
}
Example − Redirects application to a custom error page when an error is triggered by the controller.
[HandleError]
public class ActionFilterDemoController : Controller {
public ActionResult Index() {
throw new NullReferenceException();
}
public ActionResult About() {
return View();
}
}
With the above code, if any error happens during the action execution, it will find a view named Error in the Views folder and render that page to the user.
Example − Allowing only authorized users to access the application.
public class ActionFilterDemoController: Controller {
[Authorize]
public ActionResult Index() {
ViewBag.Message = "This can be viewed by authenticated users only";
return View();
}
[Authorize(Roles="admin")]
public ActionResult AdminIndex() {
ViewBag.Message = "This can be viewed by users in the Admin role only";
return View();
}
}
With the above code, if you would try to access the application without logging in, it will throw an error similar to the one shown in the following screenshot.
In the first chapter, we learnt how Controllers and Views interact in MVC. In this tutorial, we are going to take a step forward and learn how to use Models and create an advanced application to create, edit, delete, and view the list of users in our application.
Step 1 − Select File → New → Project → ASP.NET MVC Web Application. Name it as AdvancedMVCApplication. Click Ok. In the next window, select Template as Internet Application and View Engine as Razor. Observe that we are using a template this time instead of an Empty application.
This will create a new solution project as shown in the following screenshot. Since we are using the default ASP.NET theme, it comes with sample Views, Controllers, Models and other files.
Step 2 − Build the solution and run the application to see its default output as shown in the following screenshot.
Step 3 − Add a new model which will define the structure of users data. Right-click on Models folder and click Add → Class. Name this as UserModel and click Add.
Step 4 − Copy the following code in the newly created UserModel.cs.
using System;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc.Html;
namespace AdvancedMVCApplication.Models {
public class UserModels {
[Required]
public int Id { get; set; }
[DisplayName("First Name")]
[Required(ErrorMessage = "First name is required")]
public string FirstName { get; set; }
[Required]
public string LastName { get; set; }
public string Address { get; set; }
[Required]
[StringLength(50)]
public string Email { get; set; }
[DataType(DataType.Date)]
public DateTime DOB { get; set; }
[Range(100,1000000)]
public decimal Salary { get; set; }
}
}
In the above code, we have specified all the parameters that the User model has, their data types and validations such as required fields and length.
Now that we have our User Model ready to hold the data, we will create a class file Users.cs, which will contain methods for viewing users, adding, editing, and deleting users.
Step 5 − Right-click on Models and click Add → Class. Name it as Users. This will create the Users.cs class inside the Models folder. Copy the following code into the Users.cs class.
using System;
using System.Collections.Generic;
using System.EnterpriseServices;
namespace AdvancedMVCApplication.Models {
public class Users {
public List<UserModels> UserList = new List<UserModels>();
//action to get user details
public UserModels GetUser(int id) {
UserModels usrMdl = null;
foreach (UserModels um in UserList)
if (um.Id == id)
usrMdl = um;
return usrMdl;
}
//action to create new user
public void CreateUser(UserModels userModel) {
UserList.Add(userModel);
}
//action to update existing user
public void UpdateUser(UserModels userModel) {
foreach (UserModels usrlst in UserList) {
if (usrlst.Id == userModel.Id) {
usrlst.Address = userModel.Address;
usrlst.DOB = userModel.DOB;
usrlst.Email = userModel.Email;
usrlst.FirstName = userModel.FirstName;
usrlst.LastName = userModel.LastName;
usrlst.Salary = userModel.Salary;
break;
}
}
}
//action to delete existing user
public void DeleteUser(UserModels userModel) {
foreach (UserModels usrlst in UserList) {
if (usrlst.Id == userModel.Id) {
UserList.Remove(usrlst);
break;
}
}
}
}
}
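As a side note, the lookup loop in GetUser above can be written more concisely with LINQ. This sketch is equivalent to the foreach version and, like it, returns null when no user matches:

```csharp
using System.Linq;

// Equivalent LINQ form of GetUser
public UserModels GetUser(int id) {
    return UserList.FirstOrDefault(um => um.Id == id);
}
```

This is the same FirstOrDefault pattern the UserController uses later when looking up users for the Details, Edit, and Delete actions.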
Once we have our UserModel.cs and Users.cs, we will add Views for viewing, adding, editing, and deleting users. First let us create a View to add a user.
Step 6 − Right-click on the Views folder and click Add → View.
Step 7 − In the next window, select the View Name as UserAdd, View Engine as Razor and select the Create a strongly-typed view checkbox.
Step 8 − Click Add. This will create the following CSHML code by default as shown below −
@model AdvancedMVCApplication.Models.UserModels
@{
ViewBag.Title = "UserAdd";
}
<h2>UserAdd</h2>
@using (Html.BeginForm()) {
@Html.ValidationSummary(true)
<fieldset>
<legend>UserModels</legend>
<div class = "editor-label">
@Html.LabelFor(model => model.FirstName)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.FirstName)
@Html.ValidationMessageFor(model => model.FirstName)
</div>
<div class = "editor-label">
@Html.LabelFor(model => model.LastName)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.LastName)
@Html.ValidationMessageFor(model => model.LastName)
</div>
<div class = "editor-label">
@Html.LabelFor(model => model.Address)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.Address)
@Html.ValidationMessageFor(model => model.Address)
</div>
<div class = "editor-label">
@Html.LabelFor(model => model.Email)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.Email)
@Html.ValidationMessageFor(model => model.Email)
</div>
<div class = "editor-label">
@Html.LabelFor(model => model.DOB)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.DOB)
@Html.ValidationMessageFor(model => model.DOB)
</div>
<div class = "editor-label">
@Html.LabelFor(model => model.Salary)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.Salary)
@Html.ValidationMessageFor(model => model.Salary)
</div>
<p>
<input type = "submit" value = "Create" />
</p>
</fieldset>
}
<div>
@Html.ActionLink("Back to List", "Index")
</div>
@section Scripts {
@Scripts.Render("~/bundles/jqueryval")
}
As you can see, this view contains input fields for all the attributes of the model, including their validation messages, labels, etc. This View will look like the following in our final application.
Similar to UserAdd, we will now add four more Views with the code given below −
This View will display all the users present in our system on the Index page.
@model IEnumerable<AdvancedMVCApplication.Models.UserModels>
@{
ViewBag.Title = "Index";
}
<h2>Index</h2>
<p>
@Html.ActionLink("Create New", "UserAdd")
</p>
<table>
<tr>
<th>
@Html.DisplayNameFor(model => model.FirstName)
</th>
<th>
@Html.DisplayNameFor(model => model.LastName)
</th>
<th>
@Html.DisplayNameFor(model => model.Address)
</th>
<th>
@Html.DisplayNameFor(model => model.Email)
</th>
<th>
@Html.DisplayNameFor(model => model.DOB)
</th>
<th>
@Html.DisplayNameFor(model => model.Salary)
</th>
<th></th>
</tr>
@foreach (var item in Model) {
<tr>
<td>
@Html.DisplayFor(modelItem => item.FirstName)
</td>
<td>
@Html.DisplayFor(modelItem => item.LastName)
</td>
<td>
@Html.DisplayFor(modelItem => item.Address)
</td>
<td>
@Html.DisplayFor(modelItem => item.Email)
</td>
<td>
@Html.DisplayFor(modelItem => item.DOB)
</td>
<td>
@Html.DisplayFor(modelItem => item.Salary)
</td>
<td>
@Html.ActionLink("Edit", "Edit", new { id = item.Id }) |
@Html.ActionLink("Details", "Details", new { id = item.Id }) |
@Html.ActionLink("Delete", "Delete", new { id = item.Id })
</td>
</tr>
}
</table>
This View will look like the following in our final application.
This View will display the details of a specific user when we click on the user record.
@model AdvancedMVCApplication.Models.UserModels
@{
ViewBag.Title = "Details";
}
<h2>Details</h2>
<fieldset>
<legend>UserModels</legend>
<div class = "display-label">
@Html.DisplayNameFor(model => model.FirstName)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.FirstName)
</div>
<div class = "display-label">
@Html.DisplayNameFor(model => model.LastName)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.LastName)
</div>
<div class = "display-label">
@Html.DisplayNameFor(model => model.Address)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.Address)
</div>
<div class = "display-label">
@Html.DisplayNameFor(model => model.Email)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.Email)
</div>
<div class = "display-label">
@Html.DisplayNameFor(model => model.DOB)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.DOB)
</div>
<div class = "display-label">
@Html.DisplayNameFor(model => model.Salary)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.Salary)
</div>
</fieldset>
<p>
@Html.ActionLink("Edit", "Edit", new { id = Model.Id }) |
@Html.ActionLink("Back to List", "Index")
</p>
This View will look like the following in our final application.
This View will display the edit form to edit the details of an existing user.
@model AdvancedMVCApplication.Models.UserModels
@{
ViewBag.Title = "Edit";
}
<h2>Edit</h2>
@using (Html.BeginForm()) {
@Html.AntiForgeryToken()
@Html.ValidationSummary(true)
<fieldset>
<legend>UserModels</legend>
@Html.HiddenFor(model => model.Id)
<div class = "editor-label">
@Html.LabelFor(model => model.FirstName)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.FirstName)
@Html.ValidationMessageFor(model => model.FirstName)
</div>
<div class = "editor-label">
@Html.LabelFor(model => model.LastName)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.LastName)
@Html.ValidationMessageFor(model => model.LastName)
</div>
<div class = "editor-label">
@Html.LabelFor(model => model.Address)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.Address)
@Html.ValidationMessageFor(model => model.Address)
</div>
<div class = "editor-label">
@Html.LabelFor(model => model.Email)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.Email)
@Html.ValidationMessageFor(model => model.Email)
</div>
<div class = "editor-label">
@Html.LabelFor(model => model.DOB)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.DOB)
@Html.ValidationMessageFor(model => model.DOB)
</div>
<div class = "editor-label">
@Html.LabelFor(model => model.Salary)
</div>
<div class = "editor-field">
@Html.EditorFor(model => model.Salary)
@Html.ValidationMessageFor(model => model.Salary)
</div>
<p>
<input type = "submit" value = "Save" />
</p>
</fieldset>
}
<div>
@Html.ActionLink("Back to List", "Index")
</div>
@section Scripts {
@Scripts.Render("~/bundles/jqueryval")
}
This View will look like the following in our application.
This View will display the form to delete the existing user.
@model AdvancedMVCApplication.Models.UserModels
@{
ViewBag.Title = "Delete";
}
<h2>Delete</h2>
<h3>Are you sure you want to delete this?</h3>
<fieldset>
<legend>UserModels</legend>
<div class = "display-label">
@Html.DisplayNameFor(model => model.FirstName)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.FirstName)
</div>
<div class = "display-label">
@Html.DisplayNameFor(model => model.LastName)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.LastName)
</div>
<div class = "display-label">
@Html.DisplayNameFor(model => model.Address)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.Address)
</div>
<div class = "display-label">
@Html.DisplayNameFor(model => model.Email)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.Email)
</div>
<div class = "display-label">
@Html.DisplayNameFor(model => model.DOB)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.DOB)
</div>
<div class = "display-label">
@Html.DisplayNameFor(model => model.Salary)
</div>
<div class = "display-field">
@Html.DisplayFor(model => model.Salary)
</div>
</fieldset>
@using (Html.BeginForm()) {
@Html.AntiForgeryToken()
<p>
<input type = "submit" value = "Delete" /> |
@Html.ActionLink("Back to List", "Index")
</p>
}
This View will look like the following in our final application.
Step 9 − We have already added the Models and Views in our application. Now finally we will add a controller for our view. Right-click on the Controllers folder and click Add → Controller. Name it as UserController.
By default, your Controller class will be created with the following code −
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using AdvancedMVCApplication.Models;
namespace AdvancedMVCApplication.Controllers {
public class UserController : Controller {
private static Users _users = new Users();
public ActionResult Index() {
return View(_users.UserList);
}
}
}
In the above code, the Index method will be used while rendering the list of users on the Index page.
Step 10 − Right-click on the Index method and select Create View to create a View for our Index page (which will list down all the users and provide options to create new users).
Step 11 − Now add the following code in the UserController.cs. In this code, we are creating action methods for different user actions and returning corresponding views that we created earlier.
We will add two methods for each operation: GET and POST. HttpGet will be used while fetching the data and rendering it. HttpPost will be used for creating/updating data. For example, when we are adding a new user, we will need a form to add a user, which is a GET operation. Once we fill the form and submit those values, we will need the POST method.
//Action for Index View
public ActionResult Index() {
return View(_users.UserList);
}
//Action for UserAdd View
[HttpGet]
public ActionResult UserAdd() {
return View();
}
[HttpPost]
public ActionResult UserAdd(UserModels userModel) {
_users.CreateUser(userModel);
return View("Index", _users.UserList);
}
//Action for Details View
[HttpGet]
public ActionResult Details(int id) {
return View(_users.UserList.FirstOrDefault(x => x.Id == id));
}
[HttpPost]
public ActionResult Details() {
return View("Index", _users.UserList);
}
//Action for Edit View
[HttpGet]
public ActionResult Edit(int id) {
return View(_users.UserList.FirstOrDefault(x=>x.Id==id));
}
[HttpPost]
public ActionResult Edit(UserModels userModel) {
_users.UpdateUser(userModel);
return View("Index", _users.UserList);
}
//Action for Delete View
[HttpGet]
public ActionResult Delete(int id) {
return View(_users.UserList.FirstOrDefault(x => x.Id == id));
}
[HttpPost]
public ActionResult Delete(UserModels userModel) {
_users.DeleteUser(userModel);
return View("Index", _users.UserList);
}
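One refinement worth noting: the POST actions above save the posted model as-is. Because UserModels carries validation attributes ([Required], [StringLength], [Range], etc.), a POST action would normally check ModelState before saving. A hedged sketch of UserAdd with that check added:

```csharp
[HttpPost]
public ActionResult UserAdd(UserModels userModel) {
    if (!ModelState.IsValid) {
        // Re-display the form so the validation messages
        // declared on the model can be shown to the user
        return View(userModel);
    }
    _users.CreateUser(userModel);
    return View("Index", _users.UserList);
}
```

The same pattern applies to the Edit POST action; without it, invalid data (for example, a missing first name) would be added to the list silently.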
Step 12 − The last thing to do is to go to the RouteConfig.cs file in the App_Start folder and change the default Controller to User.
defaults: new { controller = "User", action = "Index", id = UrlParameter.Optional }
That's all we need to get our advanced application up and running.
Step 13 − Now run the application. You will be able to see an application as shown in the following screenshot. You can perform all the functionalities of adding, viewing, editing, and deleting users as we saw in the earlier screenshots.
As you may know, Ajax is shorthand for Asynchronous JavaScript and XML. The MVC Framework contains built-in support for unobtrusive Ajax. You can use the helper methods to define your Ajax features without adding code throughout all the views. This feature in MVC is based on jQuery.
To enable the unobtrusive AJAX support in the MVC application, open the Web.Config file and set the UnobtrusiveJavaScriptEnabled property inside the appSettings section using the following code. If the key is already present in your application, you can ignore this step.
<add key = "UnobtrusiveJavaScriptEnabled" value = "true" />
After this, open the common layout file _Layout.cshtml file located under Views/Shared folder. We will add references to the jQuery libraries here using the following code −
<script src = "~/Scripts/jquery-ui-1.8.24.min.js" type = "text/javascript">
</script>
<script src = "~/Scripts/jquery.unobtrusive-ajax.min.js" type = "text/javascript">
</script>
In the example that follows, we will create a form which will display the list of users in the system. We will place a dropdown with four options: All, Admin, Normal, and Guest. When you select one of these values, it will display the list of users belonging to that category using the unobtrusive AJAX setup.
Step 1 − Create a Model file Model.cs and copy the following code.
using System;
namespace MVCAjaxSupportExample.Models {
public class User {
public int UserId { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public DateTime BirthDate { get; set; }
public Role Role { get; set; }
}
public enum Role {
Admin,
Normal,
Guest
}
}
Step 2 − Create a Controller file named UserController.cs and create two action methods inside that using the following code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;
using MVCAjaxSupportExample.Models;
namespace MVCAjaxSupportExample.Controllers {
public class UserController : Controller {
private readonly User[] userData =
{
new User {FirstName = "Edy", LastName = "Clooney", Role = Role.Admin},
new User {FirstName = "David", LastName = "Sanderson", Role = Role.Admin},
new User {FirstName = "Pandy", LastName = "Griffyth", Role = Role.Normal},
new User {FirstName = "Joe", LastName = "Gubbins", Role = Role.Normal},
new User {FirstName = "Mike", LastName = "Smith", Role = Role.Guest}
};
public ActionResult Index() {
return View(userData);
}
public PartialViewResult GetUserData(string selectedRole = "All") {
IEnumerable<User> data = userData;
if (selectedRole != "All") {
var selected = (Role) Enum.Parse(typeof (Role), selectedRole);
data = userData.Where(p => p.Role == selected);
}
return PartialView(data);
}
public ActionResult GetUser(string selectedRole = "All") {
return View((object) selectedRole);
}
}
}
Step 3 − Now create a partial View named GetUserData with the following code. This view will be used to render the list of users based on the role selected from the dropdown.
@model IEnumerable<MVCAjaxSupportExample.Models.User>
<table>
<tr>
<th>
@Html.DisplayNameFor(model => model.FirstName)
</th>
<th>
@Html.DisplayNameFor(model => model.LastName)
</th>
<th>
@Html.DisplayNameFor(model => model.BirthDate)
</th>
<th></th>
</tr>
@foreach (var item in Model) {
<tr>
<td>
@Html.DisplayFor(modelItem => item.FirstName)
</td>
<td>
@Html.DisplayFor(modelItem => item.LastName)
</td>
<td>
@Html.DisplayFor(modelItem => item.BirthDate)
</td>
<td>
</td>
</tr>
}
</table>
Step 4 − Now create a View GetUser with the following code. This view will asynchronously get the data from the previously created controller's GetUserData Action.
@using MVCAjaxSupportExample.Models
@model string

@{
   ViewBag.Title = "GetUser";

   AjaxOptions ajaxOpts = new AjaxOptions {
      UpdateTargetId = "tableBody"
   };
}

<h2>Get User</h2>

<table>
   <thead>
      <tr>
         <th>First</th>
         <th>Last</th>
         <th>Role</th>
      </tr>
   </thead>

   <tbody id="tableBody">
      @Html.Action("GetUserData", new { selectedRole = Model })
   </tbody>
</table>

@using (Ajax.BeginForm("GetUserData", ajaxOpts)) {
   <div>
      @Html.DropDownList("selectedRole", new SelectList(
         new [] {"All"}.Concat(Enum.GetNames(typeof(Role)))))
      <button type="submit">Submit</button>
   </div>
}
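Note that Ajax.BeginForm relies on jQuery and the unobtrusive AJAX helper script being loaded on the page; without them the form degrades to a normal full-page post. A sketch of the required script references (the exact file names and version numbers depend on the NuGet packages installed in your project):

```html
<script src="~/Scripts/jquery-1.10.2.min.js"></script>
<script src="~/Scripts/jquery.unobtrusive-ajax.min.js"></script>
```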
Step 5 − Finally, change the route defaults in App_Start/RouteConfig.cs so that the application launches with the User controller.
defaults: new { controller = "User", action = "GetUser", id = UrlParameter.Optional }
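For context, the defaults shown above go inside the MapRoute call in RegisterRoutes. A sketch of the full method after the change (the route name and URL pattern are the standard project-template defaults):

```csharp
public static void RegisterRoutes(RouteCollection routes) {
   routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

   routes.MapRoute(
      name: "Default",
      url: "{controller}/{action}/{id}",
      // Launch the User controller's GetUser action by default
      defaults: new { controller = "User", action = "GetUser", id = UrlParameter.Optional }
   );
}
```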
Step 6 − Run the application, which will look like the following screenshot.
If you select Admin from the dropdown, it will fetch all the users with the Admin role. This happens via AJAX, without reloading the entire page.
Bundling and Minification are two performance improvement techniques that improve the request load time of the application. Most current major browsers limit the number of simultaneous connections per hostname to six, which means that any additional requests are queued by the browser.
To enable bundling and minification in your MVC application, open the Web.config file inside your solution. In this file, search for compilation settings under system.web −
<system.web>
<compilation debug = "true" />
</system.web>
By default, you will see the debug parameter set to true, which means that bundling and minification is disabled. Set this parameter to false.
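Alternatively, bundling and minification can be forced on even while debug is true by setting BundleTable.EnableOptimizations, which is useful for testing bundles locally. A sketch (this flag is an addition, not part of the tutorial's own steps):

```csharp
using System.Web.Optimization;

public class BundleConfig {
   public static void RegisterBundles(BundleCollection bundles) {
      // ... bundle registrations go here ...

      // Overrides the web.config debug setting for bundling/minification
      BundleTable.EnableOptimizations = true;
   }
}
```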
To improve the performance of the application, ASP.NET MVC provides a built-in feature to bundle multiple files into a single file, which in turn improves page load performance because of fewer HTTP requests.
A bundle is a simple logical group of files that can be referenced by a unique name and loaded with a single HTTP request.
By default, the MVC application's BundleConfig (located inside the App_Start folder) comes with the following code −
public static void RegisterBundles(BundleCollection bundles) {
   // Following is the sample code to bundle all the CSS files in the project.
   // The code to bundle other JavaScript files is similar to this.
   bundles.Add(new StyleBundle("~/Content/themes/base/css").Include(
      "~/Content/themes/base/jquery.ui.core.css",
      "~/Content/themes/base/jquery.ui.tabs.css",
      "~/Content/themes/base/jquery.ui.datepicker.css",
      "~/Content/themes/base/jquery.ui.progressbar.css",
      "~/Content/themes/base/jquery.ui.theme.css"));
}
The above code basically bundles all the CSS files present in Content/themes/base folder into a single file.
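As the comment in the snippet notes, JavaScript files are bundled the same way, using ScriptBundle instead of StyleBundle. A sketch, assuming the default template's script layout:

```csharp
// Inside BundleConfig.RegisterBundles — {version} is a wildcard that the
// framework substitutes with the jQuery version found on disk
bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
   "~/Scripts/jquery-{version}.js"));
```

In a view (typically _Layout.cshtml) the bundles are then emitted with `@Styles.Render("~/Content/themes/base/css")` and `@Scripts.Render("~/bundles/jquery")`, each producing a single request when optimizations are enabled.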
Minification is another performance improvement technique that optimizes JavaScript and CSS code by shortening variable names and removing unnecessary white space, line breaks, and comments. This reduces file size and helps the application load faster.
For using this option, you will have to first install the Web Essentials extension in Visual Studio. After that, right-clicking any CSS or JavaScript file will show the option to create a minified version of that file.
Thus, if you have a CSS file named Site.css, it will create its minified version as Site.min.css.
The next time the application runs in the browser, it will bundle and minify all the CSS and JS files, improving the application performance.
In ASP.NET, error handling is done using the standard try/catch approach or using application events. ASP.NET MVC comes with built-in support for exception handling through a feature known as exception filters. We will look at two approaches here: one that overrides the OnException method and another that applies the HandleError attribute.
This approach is used when we want to handle all the exceptions across the Action methods at the controller level.
To understand this approach, create an MVC application (follow the steps covered in the previous chapters). Then add a new Controller class containing code that overrides the OnException method and explicitly throws an error in an Action method.
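The original controller code is not reproduced in this extract. A minimal sketch of such a controller (the class and action names are illustrative): it overrides OnException to mark the exception as handled and render the shared Error view, while an action throws to exercise the handler.

```csharp
using System;
using System.Web.Mvc;

namespace ExceptionHandlingMVC.Controllers {

   public class OnExceptionDemoController : Controller {

      protected override void OnException(ExceptionContext filterContext) {
         // Mark the exception as handled and render the shared Error view
         filterContext.ExceptionHandled = true;
         filterContext.Result = View("Error");
      }

      public ActionResult TestMethod() {
         // Explicitly throw so that the override above kicks in
         throw new Exception("Test Exception");
      }
   }
}
```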
Now let us create a common View named Error which will be shown to the user when any exception happens in the application. Inside the Views folder, create a new folder called Shared and add a new View named Error.
Add the view's markup inside the newly created Error.cshtml.
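The original markup is not reproduced in this extract; for this first approach a plain placeholder view is enough, since OnException simply renders it. A minimal sketch:

```html
@{
   Layout = null;
}

<!DOCTYPE html>
<html>
   <head>
      <meta name = "viewport" content = "width = device-width" />
      <title>Error</title>
   </head>
   <body>
      <h2>Sorry, an error occurred while processing your request.</h2>
   </body>
</html>
```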
If you try to run the application now, it will give the following result. The above code renders the Error View when any exception occurs in any of the action methods within this controller.
The advantage of this approach is that multiple actions within the same controller can share this error handling logic. However, the disadvantage is that we cannot use the same error handling logic across multiple controllers.
The HandleError attribute is one of the action filters that we studied in the Filters and Action Filters chapter. The HandleErrorAttribute is the default implementation of IExceptionFilter. This filter handles all the exceptions raised by controller actions, filters, and views.
To use this feature, first turn on the customErrors section in web.config. Open web.config, place the following element inside system.web, and set its mode to On.
<customErrors mode = "On"/>
We already have the Error View created inside the Shared folder under Views. This time change the code of this View file to the following, to strongly type it with the HandleErrorInfo model (which is present under System.Web.Mvc).
@model System.Web.Mvc.HandleErrorInfo

@{
   Layout = null;
}

<!DOCTYPE html>
<html>
   <head>
      <meta name = "viewport" content = "width = device-width" />
      <title>Error</title>
   </head>

   <body>
      <h2>
         Sorry, an error occurred while processing your request.
      </h2>

      <h2>Exception details</h2>

      <p>
         Controller: @Model.ControllerName <br>
         Action: @Model.ActionName <br>
         Exception: @Model.Exception
      </p>
   </body>
</html>
Now place the following code in your controller file, which applies the [HandleError] attribute at the controller level.
using System;
using System.Data.Common;
using System.Web.Mvc;

namespace ExceptionHandlingMVC.Controllers {

   [HandleError]
   public class ExceptionHandlingController : Controller {

      public ActionResult TestMethod() {
         throw new Exception("Test Exception");
      }
   }
}
If you try to run the application now, you will get an error similar to the one shown in the following screenshot.
As you can see, this time the error contains more information about the Controller and Action related details. In this manner, the HandleError attribute can be used at any level and across controllers to handle such errors.
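The HandleError attribute can also be applied per action and configured to route specific exception types to specific views. A sketch (the controller, action, exception choice, and SpecialError view name are illustrative, not from the tutorial):

```csharp
using System;
using System.Web.Mvc;

public class ProductController : Controller {

   // NullReferenceExceptions from this action render Views/Shared/SpecialError.cshtml;
   // any other exception still falls back to the default Error view
   [HandleError(ExceptionType = typeof(NullReferenceException), View = "SpecialError")]
   public ActionResult Detail() {
      string name = null;
      return Content(name.ToUpper());   // throws NullReferenceException
   }
}
```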
[
{
"code": null,
"e": 2412,
"s": 2025,
"text": "The Model-View-Controller (MVC) is an architectural pattern that separates an application into three main logical components: the model, the view, and the controller. Each of these components are built to handle specific development aspects of an application. MVC is one of the most frequently used industry-standard web development framework to create scalable and extensible projects."
},
{
"code": null,
"e": 2450,
"s": 2412,
"text": "Following are the components of MVC −"
},
{
"code": null,
"e": 2853,
"s": 2450,
"text": "The Model component corresponds to all the data-related logic that the user works with. This can represent either the data that is being transferred between the View and Controller components or any other business logic-related data. For example, a Customer object will retrieve the customer information from the database, manipulate it and update it data back to the database or use it to render data."
},
{
"code": null,
"e": 3059,
"s": 2853,
"text": "The View component is used for all the UI logic of the application. For example, the Customer view will include all the UI components such as text boxes, dropdowns, etc. that the final user interacts with."
},
{
"code": null,
"e": 3494,
"s": 3059,
"text": "Controllers act as an interface between Model and View components to process all the business logic and incoming requests, manipulate data using the Model component and interact with the Views to render the final output. For example, the Customer controller will handle all the interactions and inputs from the Customer View and update the database using the Customer Model. The same controller will be used to view the Customer data."
},
{
"code": null,
"e": 4000,
"s": 3494,
"text": "ASP.NET supports three major development models: Web Pages, Web Forms and MVC (Model View Controller). ASP.NET MVC framework is a lightweight, highly testable presentation framework that is integrated with the existing ASP.NET features, such as master pages, authentication, etc. Within .NET, this framework is defined in the System.Web.Mvc assembly. The latest version of the MVC Framework is 5.0. We use Visual Studio to create ASP.NET MVC applications which can be added as a template in Visual Studio."
},
{
"code": null,
"e": 4046,
"s": 4000,
"text": "ASP.NET MVC provides the following features −"
},
{
"code": null,
"e": 4105,
"s": 4046,
"text": "Ideal for developing complex but lightweight applications."
},
{
"code": null,
"e": 4164,
"s": 4105,
"text": "Ideal for developing complex but lightweight applications."
},
{
"code": null,
"e": 4425,
"s": 4164,
"text": "Provides an extensible and pluggable framework, which can be easily replaced and customized. For example, if you do not wish to use the in-built Razor or ASPX View Engine, then you can use any other third-party view engines or even customize the existing ones."
},
{
"code": null,
"e": 4686,
"s": 4425,
"text": "Provides an extensible and pluggable framework, which can be easily replaced and customized. For example, if you do not wish to use the in-built Razor or ASPX View Engine, then you can use any other third-party view engines or even customize the existing ones."
},
{
"code": null,
"e": 4923,
"s": 4686,
"text": "Utilizes the component-based design of the application by logically dividing it into Model, View, and Controller components. This enables the developers to manage the complexity of large-scale projects and work on individual components."
},
{
"code": null,
"e": 5160,
"s": 4923,
"text": "Utilizes the component-based design of the application by logically dividing it into Model, View, and Controller components. This enables the developers to manage the complexity of large-scale projects and work on individual components."
},
{
"code": null,
"e": 5421,
"s": 5160,
"text": "MVC structure enhances the test-driven development and testability of the application, since all the components can be designed interface-based and tested using mock objects. Hence, ASP.NET MVC Framework is ideal for projects with large team of web developers."
},
{
"code": null,
"e": 5682,
"s": 5421,
"text": "MVC structure enhances the test-driven development and testability of the application, since all the components can be designed interface-based and tested using mock objects. Hence, ASP.NET MVC Framework is ideal for projects with large team of web developers."
},
{
"code": null,
"e": 5858,
"s": 5682,
"text": "Supports all the existing vast ASP.NET functionalities, such as Authorization and Authentication, Master Pages, Data Binding, User Controls, Memberships, ASP.NET Routing, etc."
},
{
"code": null,
"e": 6034,
"s": 5858,
"text": "Supports all the existing vast ASP.NET functionalities, such as Authorization and Authentication, Master Pages, Data Binding, User Controls, Memberships, ASP.NET Routing, etc."
},
{
"code": null,
"e": 6205,
"s": 6034,
"text": "Does not use the concept of View State (which is present in ASP.NET). This helps in building applications, which are lightweight and gives full control to the developers."
},
{
"code": null,
"e": 6376,
"s": 6205,
"text": "Does not use the concept of View State (which is present in ASP.NET). This helps in building applications, which are lightweight and gives full control to the developers."
},
{
"code": null,
"e": 6556,
"s": 6376,
"text": "Thus, you can consider MVC Framework as a major framework built on top of ASP.NET providing a large set of added functionality focusing on component-based development and testing."
},
{
"code": null,
"e": 6810,
"s": 6556,
"text": "In the last chapter, we studied the high-level architecture flow of MVC Framework. Now let us take a look at how the execution of an MVC application takes place when there is a certain request from the client. The following diagram illustrates the flow."
},
{
"code": null,
"e": 6876,
"s": 6810,
"text": "Step 1 − The client browser sends request to the MVC Application."
},
{
"code": null,
"e": 7060,
"s": 6876,
"text": "Step 2 − Global.ascx receives this request and performs routing based on the URL of the incoming request using the RouteTable, RouteData, UrlRoutingModule and MvcRouteHandler objects."
},
{
"code": null,
"e": 7217,
"s": 7060,
"text": "Step 3 − This routing operation calls the appropriate controller and executes it using the IControllerFactory object and MvcHandler object's Execute method."
},
{
"code": null,
"e": 7344,
"s": 7217,
"text": "Step 4 − The Controller processes the data using Model and invokes the appropriate method using ControllerActionInvoker object"
},
{
"code": null,
"e": 7441,
"s": 7344,
"text": "Step 5 − The processed Model is then passed to the View, which in turn renders the final output."
},
{
"code": null,
"e": 7910,
"s": 7441,
"text": "MVC and ASP.NET Web Forms are inter-related yet different models of development, depending on the requirement of the application and other factors. At a high level, you can consider that MVC is an advanced and sophisticated web application framework designed with separation of concerns and testability in mind. Both the frameworks have their advantages and disadvantages depending on specific requirements. This concept can be visualized using the following diagram −"
},
{
"code": null,
"e": 8151,
"s": 7910,
"text": "Let us jump in and create our first MVC application using Views and Controllers. Once we have a small hands-on experience on how a basic MVC application works, we will learn all the individual components and concepts in the coming chapters."
},
{
"code": null,
"e": 8345,
"s": 8151,
"text": "Step 1 − Start your Visual Studio and select File → New → Project. Select Web → ASP.NET MVC Web Application and name this project as FirstMVCApplication. Select the Location as C:\\MVC. Click OK."
},
{
"code": null,
"e": 8456,
"s": 8345,
"text": "Step 2 − This will open the Project Template option. Select Empty template and View Engine as Razor. Click OK."
},
{
"code": null,
"e": 8547,
"s": 8456,
"text": "Now, Visual Studio will create our first MVC project as shown in the following screenshot."
},
{
"code": null,
"e": 8878,
"s": 8547,
"text": "Step 3 − Now we will create the first Controller in our application. Controllers are just simple C# classes, which contains multiple public methods, known as action methods. To add a new Controller, right-click the Controllers folder in our project and select Add → Controller. Name the Controller as HomeController and click Add."
},
{
"code": null,
"e": 8988,
"s": 8878,
"text": "This will create a class file HomeController.cs under the Controllers folder with the following default code."
},
{
"code": null,
"e": 9209,
"s": 8988,
"text": "using System; \nusing System.Web.Mvc; \n\nnamespace FirstMVCApplication.Controllers { \n \n public class HomeController : Controller { \n \n public ViewResult Index() { \n return View(); \n } \n } \n}"
},
{
"code": null,
"e": 9406,
"s": 9209,
"text": "The above code basically defines a public method Index inside our HomeController and returns a ViewResult object. In the next steps, we will learn how to return a View using the ViewResult object."
},
{
"code": null,
"e": 9530,
"s": 9406,
"text": "Step 4 − Now we will add a new View to our Home Controller. To add a new View, right-click the Views folder and click Add → View."
},
{
"code": null,
"e": 9612,
"s": 9530,
"text": "Step 5 − Name the new View as Index and View Engine as Razor (CSHTML). Click Add."
},
{
"code": null,
"e": 9695,
"s": 9612,
"text": "This will add a new cshtml file inside Views/Home folder with the following code −"
},
{
"code": null,
"e": 9916,
"s": 9695,
"text": "@{ \n Layout = null; \n} \n\n<html> \n <head> \n <meta name = \"viewport\" content = \"width = device-width\" /> \n <title>Index</title> \n </head> \n\n <body> \n <div> \n \n </div> \n </body> \n</html> "
},
{
"code": null,
"e": 9988,
"s": 9916,
"text": "Step 6 − Modify the above View's body content with the following code −"
},
{
"code": null,
"e": 10093,
"s": 9988,
"text": "<body> \n <div> \n Welcome to My First MVC Application (<b>From Index View</b>) \n </div> \n</body>"
},
{
"code": null,
"e": 10344,
"s": 10093,
"text": "Step 7 − Now run the application. This will give you the following output in the browser. This output is rendered based on the content in our View file. The application first calls the Controller which in turn calls this View and produces the output."
},
{
"code": null,
"e": 10618,
"s": 10344,
"text": "In Step 7, the output we received was based on the content of our View file and had no interaction with the Controller. Moving a step forward, we will now create a small example to display a Welcome message with the current time using an interaction of View and Controller."
},
{
"code": null,
"e": 10775,
"s": 10618,
"text": "Step 8 − MVC uses the ViewBag object to pass data between Controller and View. Open the HomeController.cs and edit the Index function to the following code."
},
{
"code": null,
"e": 11059,
"s": 10775,
"text": "public ViewResult Index() { \n int hour = DateTime.Now.Hour; \n \n ViewBag.Greeting =\n hour < 12 \n ? \"Good Morning. Time is \" + DateTime.Now.ToShortTimeString() \n : \"Good Afternoon. Time is \" + DateTime.Now.ToShortTimeString(); \n \n return View(); \n}"
},
{
"code": null,
"e": 11418,
"s": 11059,
"text": "In the above code, we set the value of the Greeting attribute of the ViewBag object. The code checks the current hour and returns the Good Morning/Afternoon message accordingly using return View() statement. Note that here Greeting is just an example attribute that we have used with ViewBag object. You can use any other attribute name in place of Greeting."
},
{
"code": null,
"e": 11498,
"s": 11418,
"text": "Step 9 − Open the Index.cshtml and copy the following code in the body section."
},
{
"code": null,
"e": 11586,
"s": 11498,
"text": "<body> \n <div> \n @ViewBag.Greeting (<b>From Index View</b>) \n </div> \n</body> "
},
{
"code": null,
"e": 11726,
"s": 11586,
"text": "In the above code, we are accessing the value of Greeting attribute of the ViewBag object using @ (which would be set from the Controller)."
},
{
"code": null,
"e": 11903,
"s": 11726,
"text": "Step 10 − Now run the application again. This time our code will run the Controller first, set the ViewBag and then render it using the View code. Following will be the output."
},
{
"code": null,
"e": 12068,
"s": 11903,
"text": "Now that we have already created a sample MVC application, let us understand the folder structure of an MVC project. We will create a new MVC project to learn this."
},
{
"code": null,
"e": 12179,
"s": 12068,
"text": "In your Visual Studio, open File → New → Project and select ASP.NET MVC Application. Name it as MVCFolderDemo."
},
{
"code": null,
"e": 12275,
"s": 12179,
"text": "Click OK. In the next window, select Internet Application as the Project Template and click OK."
},
{
"code": null,
"e": 12355,
"s": 12275,
"text": "This will create a sample MVC application as shown in the following screenshot."
},
{
"code": null,
"e": 12507,
"s": 12355,
"text": "Note − Files present in this project are coming out of the default template that we have selected. These may change slightly as per different versions."
},
{
"code": null,
"e": 12634,
"s": 12507,
"text": "This folder will contain all the Controller classes. MVC requires the name of all the controller files to end with Controller."
},
{
"code": null,
"e": 12737,
"s": 12634,
"text": "In our example, the Controllers folder contains two class files: AccountController and HomeController."
},
{
"code": null,
"e": 12829,
"s": 12737,
"text": "This folder will contain all the Model classes, which are used to work on application data."
},
{
"code": null,
"e": 13009,
"s": 12829,
"text": "In our example, the Models folder contains AccountModels. You can open and look at the code in this file to see how the data model is created for managing accounts in our example."
},
{
"code": null,
"e": 13138,
"s": 13009,
"text": "This folder stores the HTML files related to application display and user interface. It contains one folder for each controller."
},
{
"code": null,
"e": 13284,
"s": 13138,
"text": "In our example, you will see three sub-folders under Views, namely Account, Home and Shared which contains html files specific to that view area."
},
{
"code": null,
"e": 13365,
"s": 13284,
"text": "This folder contains all the files which are needed during the application load."
},
{
"code": null,
"e": 13468,
"s": 13365,
"text": "For e.g., the RouteConfig file is used to route the incoming URL to the correct Controller and Action."
},
{
"code": null,
"e": 13544,
"s": 13468,
"text": "This folder contains all the static files, such as css, images, icons, etc."
},
{
"code": null,
"e": 13634,
"s": 13544,
"text": "The Site.css file inside this folder is the default styling that the application applies."
},
{
"code": null,
"e": 13762,
"s": 13634,
"text": "This folder stores all the JS files in the project. By default, Visual Studio adds MVC, jQuery and other standard JS libraries."
},
{
"code": null,
"e": 13957,
"s": 13762,
"text": "The component ‘Model’ is responsible for managing the data of the application. It responds to the request from the view and it also responds to instructions from the controller to update itself."
},
{
"code": null,
"e": 14274,
"s": 13957,
"text": "Model classes can either be created manually or generated from database entities. We are going to see a lot of examples for manually creating Models in the coming chapters. Thus in this chapter, we will try the other option, i.e. generating from the database so that you have hands-on experience on both the methods."
},
{
"code": null,
"e": 14323,
"s": 14274,
"text": "Connect to SQL Server and create a new database."
},
{
"code": null,
"e": 14375,
"s": 14323,
"text": "Now run the following queries to create new tables."
},
{
"code": null,
"e": 15368,
"s": 14375,
"text": "CREATE TABLE [dbo].[Student]( \n [StudentID] INT IDENTITY (1,1) NOT NULL, \n [LastName] NVARCHAR (50) NULL, \n [FirstName] NVARCHAR (50) NULL, \n [EnrollmentDate] DATETIME NULL, \n PRIMARY KEY CLUSTERED ([StudentID] ASC) \n) \n\nCREATE TABLE [dbo].[Course]( \n [CourseID] INT IDENTITY (1,1) NOT NULL, \n [Title] NVARCHAR (50) NULL, \n [Credits] INT NULL, \n PRIMARY KEY CLUSTERED ([CourseID] ASC) \n) \n\nCREATE TABLE [dbo].[Enrollment]( \n [EnrollmentID] INT IDENTITY (1,1) NOT NULL, \n [Grade] DECIMAL(3,2) NULL, \n [CourseID] INT NOT NULL, \n [StudentID] INT NOT NULL, \n PRIMARY KEY CLUSTERED ([EnrollmentID] ASC), \n CONSTRAINT [FK_dbo.Enrollment_dbo.Course_CourseID] FOREIGN KEY ([CourseID]) \n REFERENCES [dbo].[Course]([CourseID]) ON DELETE CASCADE, \n CONSTRAINT [FK_dbo.Enrollment_dbo.Student_StudentID] FOREIGN KEY ([StudentID]) \n REFERENCES [dbo].[Student]([StudentID]) ON DELETE CASCADE \n)"
},
{
"code": null,
"e": 15596,
"s": 15368,
"text": "After creating the database and setting up the tables, you can go ahead and create a new MVC Empty Application. Right-click on the Models folder in your project and select Add → New Item. Then, select ADO.NET Entity Data Model."
},
{
"code": null,
"e": 15703,
"s": 15596,
"text": "In the next wizard, choose Generate From Database and click Next. Set the Connection to your SQL database."
},
{
"code": null,
"e": 15810,
"s": 15703,
"text": "Select your database and click Test Connection. A screen similar to the following will follow. Click Next."
},
{
"code": null,
"e": 15957,
"s": 15810,
"text": "Select Tables, Views, and Stored Procedures and Functions. Click Finish. You will see the Model View created as shown in the following screenshot."
},
{
"code": null,
"e": 16161,
"s": 15957,
"text": "The above operations would automatically create a Model file for all the database entities. For example, the Student table that we created will result in a Model file Student.cs with the following code −"
},
{
"code": null,
"e": 16637,
"s": 16161,
"text": "namespace MvcModelExample.Models { \n using System; \n using System.Collections.Generic; \n \n public partial class Student { \n \n public Student() { \n this.Enrollments = new HashSet<Enrollment>(); \n } \n \n public int StudentID { get; set; } \n public string LastName { get; set; } \n public string FirstName { get; set; } \n public Nullable<System.DateTime> EnrollmentDate { get; set; } \n public virtual ICollection<Enrollment> Enrollments { get; set; } \n } \n}"
},
{
"code": null,
"e": 17011,
"s": 16637,
"text": "Asp.net MVC Controllers are responsible for controlling the flow of the application execution. When you make a request (means request a page) to MVC application, a controller is responsible for returning the response to that request. The controller can perform one or more actions. The controller action can return different types of action results to a particular request."
},
{
"code": null,
"e": 17296,
"s": 17011,
"text": "The Controller is responsible for controlling the application logic and acts as the coordinator between the View and the Model. The Controller receives an input from the users via the View, then processes the user's data with the help of Model and passes the results back to the View."
},
{
"code": null,
"e": 17321,
"s": 17296,
"text": "To create a Controller −"
},
{
"code": null,
"e": 17433,
"s": 17321,
"text": "Step 1 − Create an MVC Empty Application and then right-click on the Controller folder in your MVC application."
},
{
"code": null,
"e": 17579,
"s": 17433,
"text": "Step 2 − Select the menu option Add → Controller. After selection, the Add Controller dialog is displayed. Name the Controller as DemoController."
},
{
"code": null,
"e": 17657,
"s": 17579,
"text": "A Controller class file will be created as shown in the following screenshot."
},
{
"code": null,
"e": 17774,
"s": 17657,
"text": "In the MVC Framework, controller classes must implement the IController interface from the System.Web.Mvc namespace."
},
{
"code": null,
"e": 17855,
"s": 17774,
"text": "public interface IController {\n void Execute(RequestContext requestContext);\n}"
},
{
"code": null,
"e": 18134,
"s": 17855,
"text": "This is a very simple interface. The sole method, Execute, is invoked when a request is targeted at the controller class. The MVC Framework knows which controller class has been targeted in a request by reading the value of the controller property generated by the routing data."
},
{
"code": null,
"e": 18257,
"s": 18134,
"text": "Step 1 − Add a new class file and name it as DemoCustomController. Now modify this class to inherit IController interface."
},
{
"code": null,
"e": 18309,
"s": 18257,
"text": "Step 2 − Copy the following code inside this class."
},
{
"code": null,
"e": 18722,
"s": 18309,
"text": "public class DemoCustomController:IController { \n \n public void Execute(System.Web.Routing.RequestContext requestContext) { \n var controller = (string)requestContext.RouteData.Values[\"controller\"]; \n var action = (string)requestContext.RouteData.Values[\"action\"]; \n requestContext.HttpContext.Response.Write( \n string.Format(\"Controller: {0}, Action: {1}\", controller, action)); \n } \n} "
},
{
"code": null,
"e": 18794,
"s": 18722,
"text": "Step 3 − Run the application and you will receive the following output."
},
{
"code": null,
"e": 19345,
"s": 18794,
"text": "As seen in the initial introductory chapters, View is the component involved with the application's User Interface. These Views are generally bind from the model data and have extensions such as html, aspx, cshtml, vbhtml, etc. In our First MVC Application, we had used Views with Controller to display data to the final user. For rendering these static and dynamic content to the browser, MVC Framework utilizes View Engines. View Engines are basically markup syntax implementation, which are responsible for rendering the final HTML to the browser."
},
{
"code": null,
"e": 19398,
"s": 19345,
"text": "MVC Framework comes with two built-in view engines −"
},
{
"code": null,
"e": 19694,
"s": 19398,
"text": "Razor Engine − Razor is a markup syntax that enables the server side C# or VB code into web pages. This server side code can be used to create dynamic content when the web page is being loaded. Razor is an advanced engine as compared to ASPX engine and was launched in the later versions of MVC."
},
{
"code": null,
"e": 19906,
"s": 19694,
"text": "ASPX Engine − ASPX or the Web Forms engine is the default view engine that is included in the MVC Framework since the beginning. Writing a code with this engine is similar to writing a code in ASP.NET Web Forms."
},
{
"code": null,
"e": 19978,
"s": 19906,
"text": "Following are small code snippets comparing both Razor and ASPX engine."
},
{
"code": null,
"e": 20021,
"s": 19978,
"text": "@Html.ActionLink(\"Create New\", \"UserAdd\") "
},
{
"code": null,
"e": 20064,
"s": 20021,
"text": "<% Html.ActionLink(\"SignUp\", \"SignUp\") %> "
},
{
"code": null,
"e": 20309,
"s": 20064,
"text": "Out of these two, Razor is an advanced View Engine as it comes with compact syntax, test driven development approaches, and better security features. We will use Razor engine in all our examples since it is the most dominantly used View engine."
},
{
"code": null,
"e": 20382,
"s": 20309,
"text": "These View Engines can be coded and implemented in following two types −"
},
{
"code": null,
"e": 20397,
"s": 20382,
"text": "Strongly typed"
},
{
"code": null,
"e": 20411,
"s": 20397,
"text": "Dynamic typed"
},
{
"code": null,
"e": 20557,
"s": 20411,
"text": "These approaches are similar to early-binding and late-binding respectively in which the models will be bind to the View strongly or dynamically."
},
{
"code": null,
"e": 20726,
"s": 20557,
"text": "To understand this concept, let us create a sample MVC application (follow the steps in the previous chapters) and add a Controller class file named ViewDemoController."
},
{
"code": null,
"e": 20780,
"s": 20726,
"text": "Now, copy the following code in the controller file −"
},
{
"code": null,
"e": 21554,
"s": 20780,
"text": "using System.Collections.Generic; \nusing System.Web.Mvc; \n\nnamespace ViewsInMVC.Controllers { \n \n public class ViewDemoController : Controller { \n \n public class Blog { \n public string Name; \n public string URL; \n } \n \n private readonly List<Blog> topBlogs = new List<Blog> { \n new Blog { Name = \"Joe Delage\", URL = \"http://tutorialspoint/joe/\"}, \n new Blog {Name = \"Mark Dsouza\", URL = \"http://tutorialspoint/mark\"}, \n new Blog {Name = \"Michael Shawn\", URL = \"http://tutorialspoint/michael\"} \n }; \n \n public ActionResult StonglyTypedIndex() { \n return View(topBlogs); \n } \n \n public ActionResult IndexNotStonglyTyped() { \n return View(topBlogs); \n } \n } \n}"
},
{
"code": null,
"e": 21702,
"s": 21554,
"text": "In the above code, we have two action methods defined: StronglyTypedIndex and IndexNotStonglyTyped. We will now add Views for these action methods."
},
{
"code": null,
"e": 21963,
"s": 21702,
"text": "Right-click on StonglyTypedIndex action method and click Add View. In the next window, check the 'Create a strongly-typed view' checkbox. This will also enable the Model Class and Scaffold template options. Select List from Scaffold Template option. Click Add."
},
{
"code": null,
"e": 22190,
"s": 21963,
"text": "A View file similar to the following screenshot will be created. As you can note, it has included the ViewDemoController's Blog model class at the top. You will also be able to use IntelliSense in your code with this approach."
},
{
"code": null,
"e": 22285,
"s": 22190,
"text": "To create dynamic typed views, right-click the IndexNotStonglyTyped action and click Add View."
},
{
"code": null,
"e": 22355,
"s": 22285,
"text": "This time, do not select the 'Create a strongly-typed view' checkbox."
},
{
"code": null,
"e": 22405,
"s": 22355,
"text": "The resulting view will have the following code −"
},
{
"code": null,
"e": 22691,
"s": 22405,
"text": "@model dynamic \n \n@{ \n ViewBag.Title = \"IndexNotStonglyTyped\"; \n}\n\n<h2>Index Not Stongly Typed</h2> \n<p> \n <ul> \n \n @foreach (var blog in Model) { \n <li> \n <a href = \"@blog.URL\">@blog.Name</a> \n </li> \n } \n \n </ul> \n</p>"
},
{
"code": null,
"e": 22916,
"s": 22691,
"text": "As you can see in the above code, the Blog model was not added to the View as in the previous case. Also, you will not be able to use IntelliSense here, because the binding is done at run time."
},
{
"code": null,
"e": 23173,
"s": 22916,
"text": "Strongly typed Views are considered a better approach, since we already know what data is being passed as the Model, unlike dynamically typed Views in which the data gets bound at run time and which may lead to run-time errors if something changes in the linked model."
},
{
"code": null,
"e": 23360,
"s": 23173,
"text": "Layouts are used in MVC to provide a consistent look and feel across all the pages of our application. It is similar to defining Master Pages, but MVC provides some additional functionality."
},
{
"code": null,
"e": 23513,
"s": 23360,
"text": "Step 1 − Create a sample MVC application with Internet application as Template and create a Content folder in the root directory of the web application."
},
{
"code": null,
"e": 23700,
"s": 23513,
"text": "Step 2 − Create a Style Sheet file named MyStyleSheet.css under the CONTENT folder. This CSS file will contain all the CSS classes necessary for a consistent web application page design."
},
{
"code": null,
"e": 23755,
"s": 23700,
"text": "Step 3 − Create a Shared folder under the View folder."
},
{
"code": null,
"e": 24034,
"s": 23755,
"text": "Step 4 − Create a MasterLayout.cshtml file under the Shared folder. The file MasterLayout.cshtml represents the layout of each page in the application. Right-click on the Shared folder in the Solution Explorer, then go to Add item and click View. Copy the following layout code."
},
{
"code": null,
"e": 25441,
"s": 24034,
"text": "<!DOCTYPE html> \n\n<html lang = \"en\"> \n <head> \n <meta charset = \"utf-8\" /> \n <title>@ViewBag.Title - Tutorial Point</title> \n <link href = \"~/favicon.ico\" rel = \"shortcut icon\" type = \"image/x-icon\" />\n <link rel = \"stylesheet\" href = \"@Url.Content(\"~/Content/MyStyleSheet.css\")\" />\n </head> \n \n <body> \n <header> \n \n <div class = \"content-wrapper\"> \n <div class = \"float-left\"> \n <p class = \"site-title\"> \n @Html.ActionLink(\"Tutorial Point\", \"Index\", \"Home\")\n </p> \n </div> \n \n <div class = \"float-right\">\n <nav> \n <ul id = \"menu\"> \n <li>@Html.ActionLink(\"Home\", \"Index\", \"Home\")</li> \n <li>@Html.ActionLink(\"About\", \"About\", \"Home\")</li>\n </ul> \n </nav> \n </div> \n </div> \n \n </header>\n <div id = \"body\"> \n @RenderSection(\"featured\", required: false) \n <section class = \"content-wrapper main-content clear-fix\"> \n @RenderBody() \n </section> \n </div>\n \n <footer>\n <div class = \"content-wrapper\">\n <div class = \"float-left\"> \n <p>© @DateTime.Now.Year - Tutorial Point</p> \n </div> \n </div> \n </footer>\n \n </body>\n</html>"
},
{
"code": null,
"e": 25577,
"s": 25441,
"text": "In this layout, we are using an HTML helper method and some other system-defined methods. Let's look at these methods one by one."
},
{
"code": null,
"e": 25738,
"s": 25577,
"text": "Url.Content() − This method specifies the path of any file that we are using in our View code. It takes the virtual path as input and returns the absolute path."
},
{
"code": null,
"e": 26142,
"s": 25899,
"text": "Html.ActionLink() − This method renders an HTML link that points to an action of some controller. The first parameter specifies the display name, the second parameter specifies the Action name, and the third parameter specifies the Controller name."
},
{
"code": null,
"e": 26495,
"s": 26385,
"text": "RenderSection() − Specifies the name of the section that we want to display at that location in the template."
},
{
"code": null,
"e": 26668,
"s": 26605,
"text": "RenderBody() − Renders the actual body of the associated View."
},
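The RenderSection call above only reserves a slot in the layout; a child view supplies the content with an @section block. A minimal sketch (the markup inside the section is illustrative, not from this tutorial):

```cshtml
@* In any content view that uses this layout: fills the optional
   "featured" slot declared by @RenderSection("featured", required: false) *@
@section featured {
    <section class = "featured">
        <h1>Welcome to Tutorial Point</h1>
    </section>
}
```

Because the section was declared with required: false, views that omit the @section block still render without error.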
{
"code": null,
"e": 26830,
"s": 26731,
"text": "Step 5 − Finally, open the _ViewStart.cshtml file inside Views folder and add the following code −"
},
{
"code": null,
"e": 26882,
"s": 26830,
"text": "@{ \n Layout = \"~/Views/Shared/MasterLayout.cshtml\"; \n}"
},
{
"code": null,
"e": 26950,
"s": 26882,
"text": "If the file is not present, you can create the file with this name."
},
{
"code": null,
"e": 27014,
"s": 26950,
"text": "Step 6 − Run the application now to see the modified home page."
},
{
"code": null,
"e": 27248,
"s": 27014,
"text": "ASP.NET MVC Routing enables the use of URLs that are descriptive of the user actions and are more easily understood by the users. At the same time, Routing can be used to hide data which is not intended to be shown to the final user."
},
{
"code": null,
"e": 27543,
"s": 27248,
"text": "For example, in an application that does not use routing, the user would be shown the URL http://myapplication/Users.aspx?id=1, which corresponds to the file Users.aspx inside the myapplication path with the ID passed as 1. Generally, we would not like to show such file names to our final user."
},
{
"code": null,
"e": 27917,
"s": 27543,
"text": "To handle MVC URLs, ASP.NET platform uses the routing system, which lets you create any pattern of URLs you desire, and express them in a clear and concise manner. Each route in MVC contains a specific URL pattern. This URL pattern is compared to the incoming request URL and if the URL matches this pattern, it is used by the routing engine to further process the request."
},
{
"code": null,
"e": 27977,
"s": 27917,
"text": "To understand the MVC routing, consider the following URL −"
},
{
"code": null,
"e": 28012,
"s": 27977,
"text": "http://servername/Products/Phones\n"
},
{
"code": null,
"e": 28141,
"s": 28012,
"text": "In the above URL, Products is the first segment and Phones is the second segment, which can be expressed in the following format −"
},
{
"code": null,
"e": 28165,
"s": 28141,
"text": "{controller}/{action} \n"
},
{
"code": null,
"e": 28317,
"s": 28165,
"text": "The MVC framework automatically considers the first segment as the Controller name and the second segment as one of the actions inside that Controller."
},
{
"code": null,
"e": 28497,
"s": 28317,
"text": "Note − If the name of your Controller is ProductsController, you would only mention Products in the routing URL. The MVC framework automatically understands the Controller suffix."
},
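To make the mapping concrete, the URL http://servername/Products/Phones implies a controller shaped like the sketch below (the namespace and the returned view are assumptions for illustration, not code from this tutorial):

```csharp
using System.Web.Mvc;

namespace MyStore.Controllers {
    // "Products" in the URL maps to ProductsController (the Controller suffix is implied)
    public class ProductsController : Controller {
        // "Phones" in the URL maps to this action method
        public ActionResult Phones() {
            return View();
        }
    }
}
```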
{
"code": null,
"e": 28596,
"s": 28497,
"text": "Routes are defined in the RouteConfig.cs file which is present under the App_Start project folder."
},
{
"code": null,
"e": 28647,
"s": 28596,
"text": "You will see the following code inside this file −"
},
{
"code": null,
"e": 29030,
"s": 28647,
"text": "public class RouteConfig { \n \n public static void RegisterRoutes(RouteCollection routes) { \n routes.IgnoreRoute(\"{resource}.axd/{*pathInfo}\"); \n \n routes.MapRoute( \n name: \"Default\", \n url: \"{controller}/{action}/{id}\", \n defaults: new { controller = \"Home\", action = \"Index\", \n id = UrlParameter.Optional } \n ); \n } \n} "
},
{
"code": null,
"e": 29282,
"s": 29030,
"text": "This RegisterRoutes method is called from Global.asax when the application starts. The Application_Start method in Global.asax calls this MapRoute function, which sets the default Controller and its action (a method inside the Controller class)."
},
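For reference, the standard MVC project template wires this up from Application_Start roughly as in the sketch below (shown here as a sketch of the usual template code, not copied from this tutorial):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication {
    protected void Application_Start() {
        // Hands the global route table to RouteConfig for registration
        RouteConfig.RegisterRoutes(RouteTable.Routes);
    }
}
```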
{
"code": null,
"e": 29374,
"s": 29282,
"text": "To modify the above default mapping as per our example, change the following line of code −"
},
{
"code": null,
"e": 29465,
"s": 29374,
"text": "defaults: new { controller = \"Products\", action = \"Phones\", id = UrlParameter.Optional } \n"
},
{
"code": null,
"e": 29661,
"s": 29465,
"text": "This setting will pick the ProductsController and call the Phones method inside it. Similarly, if you have another method such as Electronics inside ProductsController, the URL for it would be −"
},
{
"code": null,
"e": 29701,
"s": 29661,
"text": "http://servername/Products/Electronics "
},
{
"code": null,
"e": 29992,
"s": 29701,
"text": "In ASP.NET MVC, controllers define action methods and these action methods generally have a one-to-one relationship with UI controls, such as clicking a button or a link, etc. For example, in one of our previous examples, the UserController class contained methods UserAdd, UserDelete, etc."
},
{
"code": null,
"e": 30228,
"s": 29992,
"text": "However, many times we would like to perform some action before or after a particular operation. For achieving this functionality, ASP.NET MVC provides a feature to add pre- and post-action behaviors on the controller's action methods."
},
{
"code": null,
"e": 30290,
"s": 30228,
"text": "ASP.NET MVC framework supports the following action filters −"
},
{
"code": null,
"e": 30474,
"s": 30290,
"text": "Action Filters − Action filters are used to implement logic that gets executed before and after a controller action executes. We will look at Action Filters in detail in this chapter."
},
{
"code": null,
"e": 30783,
"s": 30658,
"text": "Authorization Filters − Authorization filters are used to implement authentication and authorization for controller actions."
},
{
"code": null,
"e": 31117,
"s": 30908,
"text": "Result Filters − Result filters contain logic that is executed before and after a view result is executed. For example, you might want to modify a view result right before the view is rendered to the browser."
},
{
"code": null,
"e": 31570,
"s": 31326,
"text": "Exception Filters − Exception filters are the last type of filter to run. You can use an exception filter to handle errors raised by either your controller actions or controller action results. You also can use exception filters to log errors."
},
{
"code": null,
"e": 32027,
"s": 31814,
"text": "Action filters are among the most commonly used filters. They can perform additional data processing, manipulate return values, cancel the execution of an action, or modify the view structure at run time."
},
{
"code": null,
"e": 32333,
"s": 32027,
"text": "Action Filters are additional attributes that can be applied either to an individual controller action method or to the entire controller to modify the way in which an action is executed. These attributes are special .NET classes derived from System.Attribute and can be attached to classes, methods, properties, and fields."
},
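Since action filters are just attributes, you can also write your own by deriving from ActionFilterAttribute and overriding its before/after hooks. A minimal sketch (LogActionFilter is a hypothetical name, not part of the framework or this tutorial):

```csharp
using System.Diagnostics;
using System.Web.Mvc;

public class LogActionFilter : ActionFilterAttribute {
    // Runs before the decorated action method executes
    public override void OnActionExecuting(ActionExecutingContext filterContext) {
        Debug.WriteLine("Executing: " + filterContext.ActionDescriptor.ActionName);
    }

    // Runs after the decorated action method has executed
    public override void OnActionExecuted(ActionExecutedContext filterContext) {
        Debug.WriteLine("Executed: " + filterContext.ActionDescriptor.ActionName);
    }
}
```

It would then be applied like the built-in filters, e.g. [LogActionFilter] on an action method or controller class.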
{
"code": null,
"e": 32385,
"s": 32333,
"text": "ASP.NET MVC provides the following action filters −"
},
{
"code": null,
"e": 32492,
"s": 32385,
"text": "Output Cache − This action filter caches the output of a controller action for a specified amount of time."
},
{
"code": null,
"e": 32690,
"s": 32599,
"text": "Handle Error − This action filter handles errors raised when a controller action executes."
},
{
"code": null,
"e": 32873,
"s": 32781,
"text": "Authorize − This action filter enables you to restrict access to a particular user or role."
},
{
"code": null,
"e": 33186,
"s": 32965,
"text": "Now, we will see the code example to apply these filters on an example controller ActionFilterDemoController. (ActionFilterDemoController is just used as an example. You can use these filters on any of your controllers.)"
},
{
"code": null,
"e": 33252,
"s": 33186,
"text": "Example − Specifies the return value to be cached for 10 seconds."
},
{
"code": null,
"e": 33437,
"s": 33252,
"text": "public class ActionFilterDemoController : Controller { \n [HttpGet] \n [OutputCache(Duration = 10)] \n \n public string Index() { \n return DateTime.Now.ToString(\"T\"); \n } \n}"
},
{
"code": null,
"e": 33538,
"s": 33437,
"text": "Example − Redirects application to a custom error page when an error is triggered by the controller."
},
{
"code": null,
"e": 33766,
"s": 33538,
"text": "[HandleError] \npublic class ActionFilterDemoController : Controller { \n \n public ActionResult Index() { \n throw new NullReferenceException(); \n } \n \n public ActionResult About() { \n return View(); \n } \n} "
},
{
"code": null,
"e": 33923,
"s": 33766,
"text": "With the above code, if any error happens during the action execution, it will find a view named Error in the Views folder and render that page to the user."
},
{
"code": null,
"e": 33991,
"s": 33923,
"text": "Example − Allowing only authorized users to log in to the application."
},
{
"code": null,
"e": 34393,
"s": 33991,
"text": "public class ActionFilterDemoController: Controller { \n [Authorize] \n \n public ActionResult Index() { \n ViewBag.Message = \"This can be viewed only by authenticated users only\"; \n return View(); \n } \n \n [Authorize(Roles=\"admin\")] \n public ActionResult AdminIndex() { \n ViewBag.Message = \"This can be viewed only by users in Admin role only\"; \n return View(); \n } \n}"
},
{
"code": null,
"e": 34554,
"s": 34393,
"text": "With the above code, if you would try to access the application without logging in, it will throw an error similar to the one shown in the following screenshot."
},
{
"code": null,
"e": 34818,
"s": 34554,
"text": "In the first chapter, we learnt how Controllers and Views interact in MVC. In this tutorial, we are going to take a step forward and learn how to use Models and create an advanced application to create, edit, delete, and view the list of users in our application."
},
{
"code": null,
"e": 35097,
"s": 34818,
"text": "Step 1 − Select File → New → Project → ASP.NET MVC Web Application. Name it as AdvancedMVCApplication. Click Ok. In the next window, select Template as Internet Application and View Engine as Razor. Observe that we are using a template this time instead of an Empty application."
},
{
"code": null,
"e": 35286,
"s": 35097,
"text": "This will create a new solution project as shown in the following screenshot. Since we are using the default ASP.NET theme, it comes with sample Views, Controllers, Models and other files."
},
{
"code": null,
"e": 35402,
"s": 35286,
"text": "Step 2 − Build the solution and run the application to see its default output as shown in the following screenshot."
},
{
"code": null,
"e": 35564,
"s": 35402,
"text": "Step 3 − Add a new model which will define the structure of the user data. Right-click on the Models folder and click Add → Class. Name it as UserModel and click Add."
},
{
"code": null,
"e": 35632,
"s": 35564,
"text": "Step 4 − Copy the following code in the newly created UserModel.cs."
},
{
"code": null,
"e": 36408,
"s": 35632,
"text": "using System; \nusing System.ComponentModel; \nusing System.ComponentModel.DataAnnotations; \nusing System.Web.Mvc.Html; \n\nnamespace AdvancedMVCApplication.Models { \n public class UserModels { \n \n [Required] \n public int Id { get; set; } \n [DisplayName(\"First Name\")] \n [Required(ErrorMessage = \"First name is required\")] \n public string FirstName { get; set; } \n [Required] \n public string LastName { get; set; } \n \n public string Address { get; set; } \n \n [Required] \n [StringLength(50)] \n public string Email { get; set; } \n \n [DataType(DataType.Date)] \n public DateTime DOB { get; set; } \n \n [Range(100,1000000)] \n public decimal Salary { get; set; } \n } \n} "
},
{
"code": null,
"e": 36558,
"s": 36408,
"text": "In the above code, we have specified all the properties that the User model has, their data types, and validations such as required fields and string length."
},
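These data annotations are enforced when the model is bound; a POST action typically checks ModelState.IsValid before using the model. A sketch of that pattern (the _users field and the redirect target are assumptions based on code that appears later in this tutorial):

```csharp
[HttpPost]
public ActionResult UserAdd(UserModels userModel) {
    if (!ModelState.IsValid) {
        // A [Required], [StringLength] or [Range] rule failed:
        // redisplay the form so the validation messages are shown
        return View(userModel);
    }
    _users.CreateUser(userModel);
    return RedirectToAction("Index");
}
```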
{
"code": null,
"e": 36735,
"s": 36558,
"text": "Now that we have our User model ready to hold the data, we will create a class file Users.cs, which will contain methods for viewing, adding, editing, and deleting users."
},
{
"code": null,
"e": 36901,
"s": 36735,
"text": "Step 5 − Right-click on Models and click Add → Class. Name it as Users. This will create users.cs class inside Models. Copy the following code in the users.cs class."
},
{
"code": null,
"e": 38451,
"s": 36901,
"text": "using System; \nusing System.Collections.Generic; \nusing System.EnterpriseServices; \n\nnamespace AdvancedMVCApplication.Models { \n \n public class Users { \n public List<UserModels> UserList = new List<UserModels>(); \n \n //action to get user details \n public UserModels GetUser(int id) { \n UserModels usrMdl = null; \n \n foreach (UserModels um in UserList) \n \n if (um.Id == id) \n usrMdl = um; \n return usrMdl; \n } \n \n //action to create new user \n public void CreateUser(UserModels userModel) { \n UserList.Add(userModel); \n } \n \n //action to update existing user \n public void UpdateUser(UserModels userModel) { \n \n foreach (UserModels usrlst in UserList) { \n \n if (usrlst.Id == userModel.Id) { \n usrlst.Address = userModel.Address; \n usrlst.DOB = userModel.DOB; \n usrlst.Email = userModel.Email; \n usrlst.FirstName = userModel.FirstName; \n usrlst.LastName = userModel.LastName; \n usrlst.Salary = userModel.Salary; \n break; \n } \n } \n } \n \n //action to delete existing user \n public void DeleteUser(UserModels userModel) { \n \n foreach (UserModels usrlst in UserList) { \n \n if (usrlst.Id == userModel.Id) { \n UserList.Remove(usrlst); \n break; \n } \n } \n } \n } \n} "
},
{
"code": null,
"e": 38626,
"s": 38451,
"text": "Once we have our UserModel.cs and Users.cs, we will add Views for viewing, adding, editing, and deleting users. First, let us create a View to create a user."
},
{
"code": null,
"e": 38689,
"s": 38626,
"text": "Step 6 − Right-click on the Views folder and click Add → View."
},
{
"code": null,
"e": 38826,
"s": 38689,
"text": "Step 7 − In the next window, select the View Name as UserAdd, View Engine as Razor and select the Create a strongly-typed view checkbox."
},
{
"code": null,
"e": 38916,
"s": 38826,
"text": "Step 8 − Click Add. This will create the following CSHML code by default as shown below −"
},
{
"code": null,
"e": 40991,
"s": 38916,
"text": "@model AdvancedMVCApplication.Models.UserModels \n\n@{ \n ViewBag.Title = \"UserAdd\"; \n}\n\n<h2>UserAdd</h2> \n\n@using (Html.BeginForm()) { \n @Html.ValidationSummary(true) \n \n <fieldset> \n <legend>UserModels</legend> \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.FirstName) \n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.FirstName) \n @Html.ValidationMessageFor(model => model.FirstName) \n </div> \n \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.LastName) \n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.LastName) \n @Html.ValidationMessageFor(model => model.LastName) \n </div> \n \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.Address) \n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.Address) \n @Html.ValidationMessageFor(model => model.Address) \n </div> \n \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.Email)\n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.Email) \n @Html.ValidationMessageFor(model => model.Email) \n </div> \n \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.DOB) \n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.DOB) \n @Html.ValidationMessageFor(model => model.DOB) \n </div> \n \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.Salary) \n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.Salary) \n @Html.ValidationMessageFor(model => model.Salary) \n </div> \n \n <p> \n <input type = \"submit\" value = \"Create\" /> \n </p> \n </fieldset> \n} \n<div> \n @Html.ActionLink(\"Back to List\", \"Index\") \n</div> \n\n@section Scripts { \n \n @Scripts.Render(\"~/bundles/jqueryval\") \n}"
},
{
"code": null,
"e": 41190,
"s": 40991,
"text": "As you can see, this view contains details of all the attributes of the fields, including their validation messages, labels, etc. This View will look like the following in our final application."
},
{
"code": null,
"e": 41276,
"s": 41190,
"text": "Similar to UserAdd, we will now add the four Views given below with the given code −"
},
{
"code": null,
"e": 41354,
"s": 41276,
"text": "This View will display all the users present in our system on the Index page."
},
{
"code": null,
"e": 42961,
"s": 41354,
"text": "@model IEnumerable<AdvancedMVCApplication.Models.UserModels> \n\n@{ \n ViewBag.Title = \"Index\"; \n} \n\n<h2>Index</h2> \n\n<p> \n @Html.ActionLink(\"Create New\", \"UserAdd\") \n</p> \n\n<table> \n <tr> \n <th> \n @Html.DisplayNameFor(model => model.FirstName) \n </th> \n \n <th> \n @Html.DisplayNameFor(model => model.LastName) \n </th> \n \n <th> \n @Html.DisplayNameFor(model => model.Address) \n </th> \n \n <th> \n @Html.DisplayNameFor(model => model.Email) \n </th> \n \n <th> \n @Html.DisplayNameFor(model => model.DOB) \n </th> \n \n <th> \n @Html.DisplayNameFor(model => model.Salary) \n </th> \n \n <th></th> \n </tr> \n \n @foreach (var item in Model) { \n <tr> \n <td>\n @Html.DisplayFor(modelItem => item.FirstName) \n </td> \n \n <td> \n @Html.DisplayFor(modelItem => item.LastName) \n </td> \n \n <td> \n @Html.DisplayFor(modelItem => item.Address) \n </td> \n \n <td> \n @Html.DisplayFor(modelItem => item.Email) \n </td> \n \n <td> \n @Html.DisplayFor(modelItem => item.DOB) \n </td> \n \n <td> \n @Html.DisplayFor(modelItem => item.Salary) \n </td> \n \n <td> \n @Html.ActionLink(\"Edit\", \"Edit\", new { id = item.Id }) | \n @Html.ActionLink(\"Details\", \"Details\", new { id = item.Id }) | \n @Html.ActionLink(\"Delete\", \"Delete\", new { id = item.Id }) \n </td> \n </tr> \n } \n</table>"
},
{
"code": null,
"e": 43026,
"s": 42961,
"text": "This View will look like the following in our final application."
},
{
"code": null,
"e": 43114,
"s": 43026,
"text": "This View will display the details of a specific user when we click on the user record."
},
{
"code": null,
"e": 44578,
"s": 43114,
"text": "@model AdvancedMVCApplication.Models.UserModels \n\n@{ \n ViewBag.Title = \"Details\"; \n} \n\n<h2>Details</h2> \n<fieldset> \n <legend>UserModels</legend> \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.FirstName) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.FirstName) \n </div> \n \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.LastName) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.LastName)\n </div> \n \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.Address) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.Address) \n </div> \n \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.Email) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.Email) \n </div> \n \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.DOB) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.DOB) \n </div> \n \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.Salary) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.Salary) \n </div> \n \n</fieldset> \n<p>\n @Html.ActionLink(\"Edit\", \"Edit\", new { id = Model.Id }) | \n @Html.ActionLink(\"Back to List\", \"Index\") \n</p>"
},
{
"code": null,
"e": 44643,
"s": 44578,
"text": "This View will look like the following in our final application."
},
{
"code": null,
"e": 44721,
"s": 44643,
"text": "This View will display the edit form to edit the details of an existing user."
},
{
"code": null,
"e": 46857,
"s": 44721,
"text": "@model AdvancedMVCApplication.Models.UserModels \n\n@{ \n ViewBag.Title = \"Edit\"; \n} \n\n<h2>Edit</h2> \n@using (Html.BeginForm()) { \n @Html.AntiForgeryToken() \n @Html.ValidationSummary(true) \n \n <fieldset> \n <legend>UserModels</legend> \n @Html.HiddenFor(model => model.Id) \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.FirstName) \n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.FirstName) \n @Html.ValidationMessageFor(model => model.FirstName) \n </div> \n \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.LastName) \n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.LastName) \n @Html.ValidationMessageFor(model => model.LastName) \n </div> \n \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.Address) \n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.Address) \n @Html.ValidationMessageFor(model => model.Address) \n </div> \n \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.Email) \n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.Email) \n @Html.ValidationMessageFor(model => model.Email) \n </div> \n \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.DOB)\n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.DOB) \n @Html.ValidationMessageFor(model => model.DOB) \n </div> \n \n <div class = \"editor-label\"> \n @Html.LabelFor(model => model.Salary) \n </div> \n \n <div class = \"editor-field\"> \n @Html.EditorFor(model => model.Salary) \n @Html.ValidationMessageFor(model => model.Salary) \n </div> \n \n <p> \n <input type = \"submit\" value = \"Save\" /> \n </p> \n </fieldset> \n} \n<div> \n @Html.ActionLink(\"Back to List\", \"Index\") \n</div> \n\n@section Scripts { \n @Scripts.Render(\"~/bundles/jqueryval\") \n}"
},
{
"code": null,
"e": 46916,
"s": 46857,
"text": "This View will look like the following in our application."
},
{
"code": null,
"e": 46977,
"s": 46916,
"text": "This View will display the form to delete the existing user."
},
{
"code": null,
"e": 48549,
"s": 46977,
"text": "@model AdvancedMVCApplication.Models.UserModels \n\n@{ \n ViewBag.Title = \"Delete\"; \n} \n\n<h2>Delete</h2> \n<h3>Are you sure you want to delete this?</h3> \n<fieldset> \n <legend>UserModels</legend> \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.FirstName) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.FirstName) \n </div> \n \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.LastName) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.LastName) \n </div> \n \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.Address) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.Address) \n </div> \n \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.Email) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.Email) \n </div> \n \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.DOB) \n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.DOB) \n </div> \n \n <div class = \"display-label\"> \n @Html.DisplayNameFor(model => model.Salary)\n </div> \n \n <div class = \"display-field\"> \n @Html.DisplayFor(model => model.Salary) \n </div> \n</fieldset> \n\n@using (Html.BeginForm()) { \n @Html.AntiForgeryToken() \n \n <p> \n <input type = \"submit\" value = \"Delete\" /> | \n @Html.ActionLink(\"Back to List\", \"Index\") \n </p> \n}"
},
{
"code": null,
"e": 48614,
"s": 48549,
"text": "This View will look like the following in our final application."
},
{
"code": null,
"e": 48830,
"s": 48614,
"text": "Step 9 − We have already added the Models and Views in our application. Now finally we will add a controller for our view. Right-click on the Controllers folder and click Add → Controller. Name it as UserController."
},
{
"code": null,
"e": 48906,
"s": 48830,
"text": "By default, your Controller class will be created with the following code −"
},
{
"code": null,
"e": 49310,
"s": 48906,
"text": "using System; \nusing System.Collections.Generic; \nusing System.Linq; \nusing System.Web; \nusing System.Web.Mvc; \nusing AdvancedMVCApplication.Models; \n\nnamespace AdvancedMVCApplication.Controllers { \n \n public class UserController : Controller { \n private static Users _users = new Users(); \n \n public ActionResult Index() { \n return View(_users.UserList); \n } \n } \n} "
},
{
"code": null,
"e": 49412,
"s": 49310,
"text": "In the above code, the Index method will be used while rendering the list of users on the Index page."
},
{
"code": null,
"e": 49591,
"s": 49412,
"text": "Step 10 − Right-click on the Index method and select Create View to create a View for our Index page (which will list down all the users and provide options to create new users)."
},
{
"code": null,
"e": 49785,
"s": 49591,
"text": "Step 11 − Now add the following code in the UserController.cs. In this code, we are creating action methods for different user actions and returning corresponding views that we created earlier."
},
{
"code": null,
"e": 50138,
"s": 49785,
"text": "We will add two methods for each operation: GET and POST. HttpGet will be used while fetching the data and rendering it. HttpPost will be used for creating/updating data. For example, when we are adding a new user, we will need a form to add a user, which is a GET operation. Once we fill the form and submit those values, we will need the POST method."
},
{
"code": null,
"e": 51310,
"s": 50138,
"text": "//Action for Index View \npublic ActionResult Index() { \n return View(_users.UserList); \n} \n\n//Action for UserAdd View \n[HttpGet] \npublic ActionResult UserAdd() { \n return View(); \n} \n\n[HttpPost] \npublic ActionResult UserAdd(UserModels userModel) { \n _users.CreateUser(userModel); \n return View(\"Index\", _users.UserList); \n} \n\n//Action for Details View \n[HttpGet] \npublic ActionResult Details(int id) { \n return View(_users.UserList.FirstOrDefault(x => x.Id == id)); \n} \n\n[HttpPost] \npublic ActionResult Details() { \n return View(\"Index\", _users.UserList); \n} \n\n//Action for Edit View \n[HttpGet] \npublic ActionResult Edit(int id) { \n return View(_users.UserList.FirstOrDefault(x => x.Id == id)); \n} \n\n[HttpPost] \npublic ActionResult Edit(UserModels userModel) { \n _users.UpdateUser(userModel); \n return View(\"Index\", _users.UserList); \n} \n \n//Action for Delete View \n[HttpGet] \npublic ActionResult Delete(int id) { \n return View(_users.UserList.FirstOrDefault(x => x.Id == id)); \n} \n\n[HttpPost] \npublic ActionResult Delete(UserModels userModel) { \n _users.DeleteUser(userModel); \n return View(\"Index\", _users.UserList); \n}"
},
{
"code": null,
"e": 51429,
"s": 51310,
"text": "Step 12 − Last thing to do is go to RouteConfig.cs file in App_Start folder and change the default Controller to User."
},
{
"code": null,
"e": 51515,
"s": 51429,
"text": "defaults: new { controller = \"User\", action = \"Index\", id = UrlParameter.Optional } \n"
},
{
"code": null,
"e": 51582,
"s": 51515,
"text": "That's all we need to get our advanced application up and running."
},
{
"code": null,
"e": 51820,
"s": 51582,
"text": "Step 13 − Now run the application. You will be able to see an application as shown in the following screenshot. You can perform all the functionalities of adding, viewing, editing, and deleting users as we saw in the earlier screenshots."
},
{
"code": null,
"e": 52129,
"s": 51820,
"text": "As you might be knowing, Ajax is a shorthand for Asynchronous JavaScript and XML. The MVC Framework contains built-in support for unobtrusive Ajax. You can use the helper methods to define your Ajax features without adding a code throughout all the views. This feature in MVC is based on the jQuery features."
},
{
"code": null,
"e": 52401,
"s": 52129,
"text": "To enable the unobtrusive AJAX support in the MVC application, open the Web.Config file and set the UnobtrusiveJavaScriptEnabled property inside the appSettings section using the following code. If the key is already present in your application, you can ignore this step."
},
{
"code": null,
"e": 52462,
"s": 52401,
"text": "<add key = \"UnobtrusiveJavaScriptEnabled\" value = \"true\" />\n"
},
{
"code": null,
"e": 52636,
"s": 52462,
"text": "After this, open the common layout file _Layout.cshtml file located under Views/Shared folder. We will add references to the jQuery libraries here using the following code −"
},
{
"code": null,
"e": 52817,
"s": 52636,
"text": "<script src = \"~/Scripts/jquery-ui-1.8.24.min.js\" type = \"text/javascript\">\n</script> \n\n<script src = \"~/Scripts/jquery.unobtrusive-ajax.min.js\" type = \"text/javascript\">\n</script>"
},
{
"code": null,
"e": 53126,
"s": 52817,
"text": "In the example that follows, we will create a form which will display the list of users in the system. We will place a dropdown having three options: Admin, Normal, and Guest. When you will select one of these values, it will display the list of users belonging to this category using unobtrusive AJAX setup."
},
{
"code": null,
"e": 53193,
"s": 53126,
"text": "Step 1 − Create a Model file Model.cs and copy the following code."
},
{
"code": null,
"e": 53579,
"s": 53193,
"text": "using System; \n\nnamespace MVCAjaxSupportExample.Models { \n \n public class User { \n public int UserId { get; set; } \n public string FirstName { get; set; } \n public string LastName { get; set; } \n public DateTime BirthDate { get; set; } \n public Role Role { get; set; } \n } \n \n public enum Role { \n Admin, \n Normal, \n Guest \n } \n} "
},
{
"code": null,
"e": 53705,
"s": 53579,
"text": "Step 2 − Create a Controller file named UserController.cs and create two action methods inside that using the following code."
},
{
"code": null,
"e": 55011,
"s": 53705,
"text": "using System; \nusing System.Collections.Generic; \nusing System.Linq; \nusing System.Web.Mvc; \nusing MVCAjaxSupportExample.Models; \n\nnamespace MVCAjaxSupportExample.Controllers {\n \n public class UserController : Controller { \n \n private readonly User[] userData = \n { \n new User {FirstName = \"Edy\", LastName = \"Clooney\", Role = Role.Admin}, \n new User {FirstName = \"David\", LastName = \"Sanderson\", Role = Role.Admin}, \n new User {FirstName = \"Pandy\", LastName = \"Griffyth\", Role = Role.Normal}, \n new User {FirstName = \"Joe\", LastName = \"Gubbins\", Role = Role.Normal}, \n new User {FirstName = \"Mike\", LastName = \"Smith\", Role = Role.Guest} \n }; \n \n public ActionResult Index() { \n return View(userData); \n } \n \n public PartialViewResult GetUserData(string selectedRole = \"All\") { \n IEnumerable data = userData; \n \n if (selectedRole != \"All\") { \n var selected = (Role) Enum.Parse(typeof (Role), selectedRole); \n data = userData.Where(p => p.Role == selected); \n } \n \n return PartialView(data); \n } \n \n public ActionResult GetUser(string selectedRole = \"All\") { \n return View((object) selectedRole); \n } \n } \n}"
},
{
"code": null,
"e": 55182,
"s": 55011,
"text": "Step 3 − Now create a partial View named GetUserData with the following code. This view will be used to render list of users based on the selected role from the dropdown."
},
{
"code": null,
"e": 55909,
"s": 55182,
"text": "@model IEnumerable<MVCAjaxSupportExample.Models.User> \n\n<table> \n <tr> \n <th> \n @Html.DisplayNameFor(model => model.FirstName) \n </th> \n \n <th> \n @Html.DisplayNameFor(model => model.LastName) \n </th> \n \n <th> \n @Html.DisplayNameFor(model => model.BirthDate) \n </th> \n <th></th> \n </tr> \n\n @foreach (var item in Model) { \n <tr> \n <td> \n @Html.DisplayFor(modelItem => item.FirstName) \n </td> \n \n <td> \n @Html.DisplayFor(modelItem => item.LastName) \n </td> \n \n <td> \n @Html.DisplayFor(modelItem => item.BirthDate) \n </td> \n \n <td> \n \n </td> \n </tr> \n} \n</table>"
},
{
"code": null,
"e": 56073,
"s": 55909,
"text": "Step 4 − Now create a View GetUser with the following code. This view will asynchronously get the data from the previously created controller's GetUserData Action."
},
{
"code": null,
"e": 56738,
"s": 56073,
"text": "@using MVCAjaxSupportExample.Models \n@model string \n\n@{ \nViewBag.Title = \"GetUser\"; \n\nAjaxOptions ajaxOpts = new AjaxOptions { \nUpdateTargetId = \"tableBody\" \n}; \n} \n\n<h2>Get User</h2> \n<table> \n <thead>\n <tr>\n <th>First</th>\n <th>Last</th>\n <th>Role</th>\n </tr>\n </thead> \n \n <tbody id=\"tableBody\"> \n @Html.Action(\"GetUserData\", new {selectedRole = Model }) \n </tbody> \n</table> \n\n@using (Ajax.BeginForm(\"GetUser\", ajaxOpts)) { \n <div> \n @Html.DropDownList(\"selectedRole\", new SelectList( \n new [] {\"All\"}.Concat(Enum.GetNames(typeof(Role))))) \n <button type=\"submit\">Submit</button> \n </div> \n}"
},
{
"code": null,
"e": 56819,
"s": 56738,
"text": "Step 5 − Finally, change the Route.config entries to launch the User Controller."
},
{
"code": null,
"e": 56906,
"s": 56819,
"text": "defaults: new { controller = \"User\", action = \"GetUser\", id = UrlParameter.Optional }\n"
},
{
"code": null,
"e": 56982,
"s": 56906,
"text": "Step 6 − Run the application which will look like the following screenshot."
},
{
"code": null,
"e": 57137,
"s": 56982,
"text": "If you select Admin from the dropdown, it will go and fetch all the users with Admin type. This is happening via AJAX and does not reload the entire page."
},
{
"code": null,
"e": 57447,
"s": 57137,
"text": "Bundling and Minification are two performance improvement techniques that improves the request load time of the application. Most of the current major browsers limit the number of simultaneous connections per hostname to six. It means that at a time, all the additional requests will be queued by the browser."
},
{
"code": null,
"e": 57620,
"s": 57447,
"text": "To enable bundling and minification in your MVC application, open the Web.config file inside your solution. In this file, search for compilation settings under system.web −"
},
{
"code": null,
"e": 57681,
"s": 57620,
"text": "<system.web>\n <compilation debug = \"true\" />\n</system.web>"
},
{
"code": null,
"e": 57824,
"s": 57681,
"text": "By default, you will see the debug parameter set to true, which means that bundling and minification is disabled. Set this parameter to false."
},
{
"code": null,
"e": 58034,
"s": 57824,
"text": "To improve the performance of the application, ASP.NET MVC provides inbuilt feature to bundle multiple files into a single, file which in turn improves the page load performance because of fewer HTTP requests."
},
{
"code": null,
"e": 58157,
"s": 58034,
"text": "Bundling is a simple logical group of files that could be referenced by unique name and loaded with a single HTTP request."
},
{
"code": null,
"e": 58270,
"s": 58157,
"text": "By default, the MVC application's BundleConfig (located inside App_Start folder) comes with the following code −"
},
{
"code": null,
"e": 58854,
"s": 58270,
"text": "public static void RegisterBundles(BundleCollection bundles) { \n \n // Following is the sample code to bundle all the css files in the project \n \n // The code to bundle other javascript files will also be similar to this \n \n bundles.Add(new StyleBundle(\"~/Content/themes/base/css\").Include( \n \"~/Content/themes/base/jquery.ui.core.css\", \n \"~/Content/themes/base/jquery.ui.tabs.css\", \n \"~/Content/themes/base/jquery.ui.datepicker.css\", \n \"~/Content/themes/base/jquery.ui.progressbar.css\", \n \"~/Content/themes/base/jquery.ui.theme.css\")); \n}"
},
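{
"code": null,
"e": 58854,
"s": 58854,
"text": "For JavaScript files, a ScriptBundle is used in the same way. The following sketch is not part of the original tutorial; the bundle path and script name are illustrative assumptions (the {version} wildcard picks up whichever jQuery version is present in the Scripts folder) −"
},
{
"code": null,
"e": 58854,
"s": 58854,
"text": "// Illustrative ScriptBundle (bundle path and file name are assumptions) \nbundles.Add(new ScriptBundle(\"~/bundles/jquery\").Include( \n   \"~/Scripts/jquery-{version}.js\")); \n\n// The bundles are then rendered in a view (e.g. _Layout.cshtml) using: \n// @Styles.Render(\"~/Content/themes/base/css\") \n// @Scripts.Render(\"~/bundles/jquery\")"
},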
{
"code": null,
"e": 58963,
"s": 58854,
"text": "The above code basically bundles all the CSS files present in Content/themes/base folder into a single file."
},
{
"code": null,
"e": 59247,
"s": 58963,
"text": "Minification is another such performance improvement technique in which it optimizes the javascript, css code by shortening the variable names, removing unnecessary white spaces, line breaks, comments, etc. This in turn reduces the file size and helps the application to load faster."
},
{
"code": null,
"e": 59493,
"s": 59247,
"text": "For using this option, you will have to first install the Web Essentials Extension in your Visual Studio. After that, when you will right-click on any css or javascript file, it will show you the option to create a minified version of that file."
},
{
"code": null,
"e": 59591,
"s": 59493,
"text": "Thus, if you have a css file named Site.css, it will create its minified version as Site.min.css."
},
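{
"code": null,
"e": 59591,
"s": 59591,
"text": "As an illustration (this example is not from the tutorial; the selector and values are made up), minification rewrites readable CSS into a compact equivalent −"
},
{
"code": null,
"e": 59591,
"s": 59591,
"text": "/* Before minification (Site.css) */ \n.header-title { \n   color: #ff0000; \n   margin-top: 10px;  /* space below the navbar */ \n} \n\n/* After minification (Site.min.css): white space and comments removed */ \n.header-title{color:#ff0000;margin-top:10px}"
},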
{
"code": null,
"e": 59753,
"s": 59591,
"text": "Now when the next time your application will run in the browser, it will bundle and minify all the css and js files, hence improving the application performance."
},
{
"code": null,
"e": 60097,
"s": 59753,
"text": "In ASP.NET, error handling is done using the standard try catch approach or using application events. ASP.NET MVC comes with built-in support for exception handling using a feature known as exception filters. We are going to learn two approaches here: one with overriding the onException method and another by defining the HandleError filters."
},
{
"code": null,
"e": 60212,
"s": 60097,
"text": "This approach is used when we want to handle all the exceptions across the Action methods at the controller level."
},
{
"code": null,
"e": 60467,
"s": 60212,
"text": "To understand this approach, create an MVC application (follow the steps covered in previous chapters). Now add a new Controller class and add the following code which overrides the onException method and explicitly throws an error in our Action method −"
},
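{
"code": null,
"e": 60467,
"s": 60467,
"text": "The original code listing was not captured here. A hedged sketch of such a controller, based on the ExceptionHandlingController that appears later in this chapter, might look like the following; the OnException override marks the exception as handled and renders the shared Error view −"
},
{
"code": null,
"e": 60467,
"s": 60467,
"text": "using System; \nusing System.Web.Mvc; \n\nnamespace ExceptionHandlingMVC.Controllers { \n \n   public class ExceptionHandlingController : Controller { \n \n      // Handle every exception raised by this controller's actions \n      protected override void OnException(ExceptionContext filterContext) { \n         filterContext.ExceptionHandled = true; \n \n         // Render the shared Error view instead of the default error page \n         filterContext.Result = new ViewResult { ViewName = \"Error\" }; \n      } \n \n      public ActionResult TestMethod() { \n         // Explicitly throw an error to demonstrate the handler \n         throw new Exception(\"Test Exception\"); \n      } \n   } \n}"
},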
{
"code": null,
"e": 60681,
"s": 60467,
"text": "Now let us create a common View named Error which will be shown to the user when any exception happens in the application. Inside the Views folder, create a new folder called Shared and add a new View named Error."
},
{
"code": null,
"e": 60745,
"s": 60681,
"text": "Copy the following code inside the newly created Error.cshtml −"
},
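{
"code": null,
"e": 60745,
"s": 60745,
"text": "The original view markup was not captured here. A minimal sketch, consistent with the strongly-typed version of Error.cshtml shown later in this chapter, might be −"
},
{
"code": null,
"e": 60745,
"s": 60745,
"text": "@{ \nLayout = null; \n} \n \n<!DOCTYPE html> \n<html> \n   <head> \n      <meta name = \"viewport\" content = \"width = device-width\" /> \n      <title>Error</title> \n   </head> \n \n   <body> \n      <h2> \n         Sorry, an error occurred while processing your request. \n      </h2> \n   </body> \n</html>"
},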
{
"code": null,
"e": 60936,
"s": 60745,
"text": "If you try to run the application now, it will give the following result. The above code renders the Error View when any exception occurs in any of the action methods within this controller."
},
{
"code": null,
"e": 61163,
"s": 60936,
"text": "The advantage of this approach is that multiple actions within the same controller can share this error handling logic. However, the disadvantage is that we cannot use the same error handling logic across multiple controllers."
},
{
"code": null,
"e": 61438,
"s": 61163,
"text": "The HandleError Attribute is one of the action filters that we studied in Filters and Action Filters chapter. The HandleErrorAttribute is the default implementation of IExceptionFilter. This filter handles all the exceptions raised by controller actions, filters, and views."
},
{
"code": null,
"e": 61612,
"s": 61438,
"text": "To use this feature, first of all turn on the customErrors section in web.config. Open the web.config and place the following code inside system.web and set its value as On."
},
{
"code": null,
"e": 61641,
"s": 61612,
"text": "<customErrors mode = \"On\"/>\n"
},
{
"code": null,
"e": 61872,
"s": 61641,
"text": "We already have the Error View created inside the Shared folder under Views. This time change the code of this View file to the following, to strongly-type it with the HandleErrorInfo model (which is present under System.Web.MVC)."
},
{
"code": null,
"e": 62400,
"s": 61872,
"text": "@model System.Web.Mvc.HandleErrorInfo \n\n@{ \nLayout = null; \n} \n \n<!DOCTYPE html> \n<html> \n <head> \n <meta name = \"viewport\" content = \"width = device-width\" /> \n <title>Error</title> \n </head> \n \n <body> \n <h2> \n Sorry, an error occurred while processing your request. \n </h2> \n <h2>Exception details</h2> \n \n <p> \n Controller: @Model.ControllerName <br> \n Action: @Model.ActionName \n Exception: @Model.Exception \n </p> \n \n </body> \n</html> "
},
{
"code": null,
"e": 62517,
"s": 62400,
"text": "Now place the following code in your controller file which specifies [HandleError] attribute at the Controller file."
},
{
"code": null,
"e": 62848,
"s": 62517,
"text": "using System; \nusing System.Data.Common; \nusing System.Web.Mvc; \n\nnamespace ExceptionHandlingMVC.Controllers { \n [HandleError] \n public class ExceptionHandlingController : Controller { \n \n public ActionResult TestMethod() { \n throw new Exception(\"Test Exception\"); \n return View(); \n } \n } \n}"
},
{
"code": null,
"e": 62955,
"s": 62848,
"text": "If you try to run the application now, you will get an error similar to shown in the following screenshot."
},
{
"code": null,
"e": 63169,
"s": 62955,
"text": "As you can see, this time the error contains more information about the Controller and Action related details. In this manner, the HandleError can be used at any level and across controllers to handle such errors."
}
]