sprintf() in C | 28 Jun, 2022
Syntax:
int sprintf(char *str, const char *format, ...);
Return:
If successful, it returns the total number of characters written, excluding the null character appended at the end of the string. In case of failure, a negative number is returned.
sprintf stands for “string print”. Instead of printing on the console, it stores the output in the char buffer specified as its first argument.
C
// Example program to demonstrate sprintf()
#include <stdio.h>

int main()
{
    char buffer[50];
    int a = 10, b = 20, c;
    c = a + b;
    sprintf(buffer, "Sum of %d and %d is %d", a, b, c);

    // The string "Sum of 10 and 20 is 30" is stored
    // into buffer instead of printing on stdout
    printf("%s", buffer);

    return 0;
}
Sum of 10 and 20 is 30
Time Complexity: O(n), where n is the number of characters written to the buffer.
Auxiliary Space: O(n), for the buffer that holds the output.
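One safety caveat worth noting (an addition, not part of the original article): sprintf() performs no bounds checking, so output longer than the destination buffer overflows it. The C99 function snprintf() additionally takes the buffer size and truncates safely. A minimal sketch:
#include <stdio.h>

int main(void)
{
    char buffer[16];

    // snprintf never writes more than sizeof(buffer) bytes,
    // including the terminating null character
    int n = snprintf(buffer, sizeof(buffer),
                     "Sum of %d and %d is %d", 10, 20, 30);

    // n is the length the full output would have had; if
    // n >= sizeof(buffer), the output was truncated
    printf("%s (needed %d chars)\n", buffer, n);

    return 0;
}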
How to make a vertical line using HTML ? | 10 May, 2022
To make a vertical line, use the border-left or border-right property. The height property sets the height of the border (vertical line) element, and the position property sets where the vertical line appears.
Example 1: It creates a vertical line using the border-left, height and position properties.
html
<!DOCTYPE html>
<html>

<head>
    <title>
        HTML border Property
    </title>

    <!-- style to create vertical line -->
    <style>
        .vertical {
            border-left: 6px solid black;
            height: 200px;
            position: absolute;
            left: 50%;
        }
    </style>
</head>

<body style="text-align: center;">
    <h1 style="color: green;">
        GeeksForGeeks
    </h1>

    <div class="vertical"></div>
</body>

</html>
Output:
Example 2: It creates a vertical line using the border-left and height properties.
html
<!DOCTYPE html>
<html>

<head>
    <title>
        HTML border Property
    </title>

    <!-- border-left property is used
         to create vertical line -->
    <style>
        .vertical {
            border-left: 5px solid black;
            height: 200px;
        }
    </style>
</head>

<body>
    <h1 style="color: green;">
        GeeksForGeeks
    </h1>

    <div class="vertical"></div>
</body>

</html>
Output:
Supported Browsers:
Google Chrome
Internet Explorer
Firefox
Opera
Safari
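As an aside (an alternative sketch, not from the original article), the same visual effect can be achieved without the border property at all, by giving a thin element a width and a background-color:
<!-- Hypothetical alternative: a 4px-wide, 200px-tall black bar -->
<div style="width: 4px; height: 200px; background-color: black;"></div>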
HTML is the foundation of webpages and is used for webpage development by structuring websites and web apps. You can learn HTML from the ground up by following this HTML Tutorial and HTML Examples.
How to create an image element dynamically using JavaScript ? | 20 Jul, 2021
Given an HTML element, the task is to create an <img> element and append it to the document using JavaScript. In these examples, the <img> element is created when someone clicks on the button. The click event can be replaced by any other JavaScript event.
Approach 1:
Create an empty img element using document.createElement() method.
Then set its attributes (src, height, width, alt, title, etc.).
Finally, insert it into the document.
Example 1: This example implements the above approach.
<!DOCTYPE HTML>
<html>

<head>
    <title>
        How to create an image element
        dynamically using JavaScript ?
    </title>
</head>

<body id="body" style="text-align:center;">
    <h1 style="color:green;">
        GeeksforGeeks
    </h1>

    <p id="GFG_UP" style="font-size: 15px; font-weight: bold;">
    </p>

    <button onclick="GFG_Fun()">
        click here
    </button>

    <p id="GFG_DOWN" style="color:green; font-size: 20px; font-weight: bold;">
    </p>

    <script>
        var up = document.getElementById('GFG_UP');
        up.innerHTML = "Click on the button to add image element";

        var down = document.getElementById('GFG_DOWN');

        function GFG_Fun() {

            // Create the img element and point it at the image URL
            var img = document.createElement('img');
            img.src =
'https://media.geeksforgeeks.org/wp-content/uploads/20190529122828/bs21.png';

            // Append it to the document body
            document.getElementById('body').appendChild(img);
            down.innerHTML = "Image Element Added.";
        }
    </script>
</body>

</html>
Output:
Before clicking on the button:
After clicking on the button:
Approach 2:
Create an empty image instance using new Image().
Then set its attributes (src, height, width, alt, title, etc.).
Finally, insert it into the document.
Example 2: This example implements the above approach.
<!DOCTYPE HTML>
<html>

<head>
    <title>
        How to create an image element
        dynamically using JavaScript ?
    </title>
</head>

<body id="body" style="text-align:center;">
    <h1 style="color:green;">
        GeeksforGeeks
    </h1>

    <p id="GFG_UP" style="font-size: 15px; font-weight: bold;">
    </p>

    <button onclick="GFG_Fun()">
        click here
    </button>

    <p id="GFG_DOWN" style="color:green; font-size: 20px; font-weight: bold;">
    </p>

    <script>
        var up = document.getElementById('GFG_UP');
        up.innerHTML = "Click on the button to add image element";

        var down = document.getElementById('GFG_DOWN');

        function GFG_Fun() {

            // Create an image instance and point it at the image URL
            var img = new Image();
            img.src =
'https://media.geeksforgeeks.org/wp-content/uploads/20190529122828/bs21.png';

            // Append it to the document body
            document.getElementById('body').appendChild(img);
            down.innerHTML = "Image Element Added.";
        }
    </script>
</body>

</html>
Output:
Before clicking on the button:
After clicking on the button:
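A hedged side note (not part of the original article): the browser fetches the image asynchronously after src is assigned, so attaching onload/onerror handlers before assigning src lets the page react once the image actually arrives (or fails). A minimal sketch reusing the same image URL:
<script>
    var img = new Image();

    // Register handlers before assigning src so no event is missed
    img.onload = function () {
        console.log('Loaded: ' + img.naturalWidth + 'x' + img.naturalHeight);
    };
    img.onerror = function () {
        console.log('Image failed to load');
    };

    // alt text serves as an accessible fallback
    img.alt = 'GeeksforGeeks logo';
    img.src =
'https://media.geeksforgeeks.org/wp-content/uploads/20190529122828/bs21.png';

    document.body.appendChild(img);
</script>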
JavaScript is best known for web page development but it is also used in a variety of non-browser environments. You can learn JavaScript from the ground up by following this JavaScript Tutorial and JavaScript Examples.
How to add data in JSON file using Node.js ? | 29 Dec, 2021
JSON stands for JavaScript Object Notation. It is one of the easiest ways to exchange information between applications and is generally used by websites/APIs to communicate. For getting started with Node.js, refer to this article.
First of all, you need to ensure that the JSON file that you are expecting is not going to consume a large memory. For data that is estimated to consume around 500MB, this method won’t be efficient and you should rather consider using a database system.
Node.js has an in-built module named fs, which stands for File System and enables the user to interact with the file system in a modeled way. To use it, type the code below in your server program.
const fs = require('fs');
The documentation for fs module can be found here.
We can start by creating a JSON file, which will contain an id, a name and a city for this example. Note that you can have as many key-value pairs as you want, but we are using three here for a start.
{
"id": 1,
"name": "John",
"city": "London"
}
Let’s name this JSON file as data.json.
Now that we have a JSON file to write to, first we will make a JavaScript object to access the file. For this, we will use fs.readFileSync() which will give us the data in raw format. To get the data in JSON format, we will use JSON.parse(). Thus, the code on our server side will look like this:
var data = fs.readFileSync('data.json');
var myObject= JSON.parse(data);
Now that we have our object ready, let’s suppose we have a key-value pair of data that we want to add:
let newData = {
"country": "England"
}
We need to use our object (i.e., myObject) to add this data. We will do this by using the .push() method as follows:
Note: To use push() function, data objects in json file must be stored in array. If JSON file is empty or having single object without array, push() will give error.
myObject.push(newData);
To write this new data to our JSON file, we will use fs.writeFile() which takes the JSON file and the data to be added as parameters. Note that we will have to first convert the object back into raw format before writing it. This will be done using the JSON.stringify() method.
var newData = JSON.stringify(myObject);
fs.writeFile('data.json', newData, err => {
// error checking
if(err) throw err;
console.log("New data added");
});
Now our data.json file would look like this:
{
"id": 1,
"name": "John",
"city": "London",
"country": "England"
}
Example: The index.js code for the above example.
index.js
// Requiring fs module
const fs = require("fs");

// Storing the JSON format data in myObject
var data = fs.readFileSync("data.json");
var myObject = JSON.parse(data);

// Defining new data to be added
let newData = {
    country: "England",
};

// Adding the new data to our object
myObject.push(newData);

// Writing to our JSON file
var newData2 = JSON.stringify(myObject);
fs.writeFile("data2.json", newData2, (err) => {
    // Error checking
    if (err) throw err;
    console.log("New data added");
});
Output:
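A hedged alternative (not part of the original article): recent Node.js versions also expose promise-based file APIs through the fs/promises module, which avoids nesting callbacks. A minimal sketch operating on the same data.json file as above:
// Promise-based variant using fs/promises (Node.js 14+)
const fsp = require("fs/promises");

async function addData(newEntry) {
    // Read and parse the existing array of records
    const raw = await fsp.readFile("data.json", "utf8");
    const records = JSON.parse(raw);

    // Append the new entry and write the file back
    records.push(newEntry);
    await fsp.writeFile("data.json", JSON.stringify(records, null, 2));
    console.log("New data added");
}

addData({ country: "England" }).catch(console.error);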
Sort an array of large numbers | 29 Mar, 2022
Given an array of numbers where every number is represented as a string. The numbers may be very large (they may not fit in long long int); the task is to sort these numbers. Examples:
Input : arr[] = {"5", "1237637463746732323", "12" };
Output : arr[] = {"5", "12", "1237637463746732323"};
Input : arr[] = {"50", "12", "12", "1"};
Output : arr[] = {"1", "12", "12", "50"};
The idea is to compare two numbers by their lengths first: the shorter string represents the smaller number (assuming no leading zeros). If the lengths are equal, compare the strings lexicographically. Below is the implementation of the above idea.
C++
Java
Python3
C#
Javascript
// C++ program to sort large numbers represented
// as strings.
#include <bits/stdc++.h>
using namespace std;

// Returns true if str1 is smaller than str2.
bool compareNumbers(string str1, string str2)
{
    // Calculate lengths of both strings
    int n1 = str1.length(), n2 = str2.length();
    if (n1 < n2)
        return true;
    if (n2 < n1)
        return false;

    // If lengths are same, compare digit by digit
    for (int i = 0; i < n1; i++)
    {
        if (str1[i] < str2[i])
            return true;
        if (str1[i] > str2[i])
            return false;
    }
    return false;
}

// Function to sort an array of large numbers
// represented as strings
void sortLargeNumbers(string arr[], int n)
{
    sort(arr, arr + n, compareNumbers);
}

// Driver code
int main()
{
    string arr[] = {"5", "1237637463746732323", "97987", "12"};
    int n = sizeof(arr) / sizeof(arr[0]);

    sortLargeNumbers(arr, n);

    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}
// Java program to sort large numbers represented
// as strings.
import java.io.*;
import java.util.*;

class main
{
    // Function to sort an array of large numbers
    // represented as strings
    static void sortLargeNumbers(String arr[])
    {
        // Refer below post for understanding below expression:
        // https://www.geeksforgeeks.org/lambda-expressions-java-8/
        Arrays.sort(arr, (left, right) ->
        {
            /* If lengths of left and right differ, the shorter
               string is the smaller number; else use compareTo
               to compare the values lexicographically. */
            if (left.length() != right.length())
                return left.length() - right.length();

            return left.compareTo(right);
        });
    }

    // Driver code
    public static void main(String args[])
    {
        String arr[] = {"5", "1237637463746732323", "97987", "12"};

        sortLargeNumbers(arr);

        for (String s : arr)
            System.out.print(s + " ");
    }
}
# Python3 program to sort large numbers
# represented as strings

# Function to sort an array of large
# numbers represented as strings
def sortLargeNumbers(arr, n):
    arr.sort(key = int)

# Driver Code
if __name__ == '__main__':
    arr = ["5", "1237637463746732323", "97987", "12"]
    n = len(arr)

    sortLargeNumbers(arr, n)

    for i in arr:
        print(i, end = ' ')

# This code is contributed by himanshu77
// C# program to sort large numbers
// represented as strings.
using System;

class GFG
{
    // Function to sort an array of large
    // numbers represented as strings
    static void sortLargeNumbers(String []arr)
    {
        // Bubble-sort style pass: swap adjacent strings when the
        // left one represents the larger number (a longer string,
        // or lexicographically greater at equal length), then step
        // back to re-check the previous pair
        for (int i = 0; i < arr.Length - 1; i++)
        {
            String left = arr[i], right = arr[i + 1];
            if (left.Length > right.Length ||
                (left.Length == right.Length &&
                 String.CompareOrdinal(left, right) > 0))
            {
                arr[i] = right;
                arr[i + 1] = left;

                // Never drop below index 0 after the loop's i++
                i = Math.Max(i - 2, -1);
            }
        }
    }

    // Driver code
    public static void Main()
    {
        String []arr = {"5", "1237637463746732323", "97987", "12"};

        sortLargeNumbers(arr);

        foreach (String s in arr)
            Console.Write(s + " ");
    }
}

// This code is contributed by PrinciRaj1992
<script>
    // JavaScript program to sort large numbers
    // represented as strings.

    // Function to sort an array of large numbers
    // represented as strings
    function sortLargeNumbers(arr)
    {
        // Bubble-sort style pass: swap adjacent strings when the
        // left one represents the larger number (a longer string,
        // or lexicographically greater at equal length), then step
        // back to re-check the previous pair
        for (let i = 0; i < arr.length - 1; i++)
        {
            let left = arr[i], right = arr[i + 1];
            if (left.length > right.length ||
                (left.length == right.length && left > right))
            {
                arr[i] = right;
                arr[i + 1] = left;

                // Never drop below index 0 after the loop's i++
                i = Math.max(i - 2, -1);
            }
        }
    }

    // Driver Code
    let arr = ["5", "1237637463746732323", "97987", "12"];

    sortLargeNumbers(arr);

    for (let s in arr)
        document.write(arr[s] + " ");
</script>
Output:
5 12 97987 1237637463746732323
Time complexity: O(k * n log n), where k is the maximum number of digits. The assumption here is that the sort() function uses an O(n log n) sorting algorithm.
Auxiliary Space: O(1)
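A hedged caveat (not stated in the original article): the length-first comparison assumes non-negative numbers written without leading zeros; an input like "007" would otherwise be treated as larger than "99". Normalizing the strings first keeps the comparison valid, for example:
#include <iostream>
#include <string>

// Strip leading zeros so that string length again reflects magnitude
std::string stripLeadingZeros(const std::string& s)
{
    size_t i = s.find_first_not_of('0');

    // A string of all zeros (or an empty string) collapses to "0"
    return (i == std::string::npos) ? "0" : s.substr(i);
}

int main()
{
    std::cout << stripLeadingZeros("007") << " "    // 7
              << stripLeadingZeros("000") << "\n";  // 0
    return 0;
}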
Similar Post: Sorting Big Integers. This article is contributed by DANISH KALEEM.
Python | Program to count number of lists in a list of lists | 07 May, 2019
Given a list of lists, write a Python program to count the number of lists contained within the list of lists.
Examples:
Input : [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
Output : 3
Input : [[1], ['Bob'], ['Delhi'], ['x', 'y']]
Output : 4
Method #1: Using len()
# Python3 program to Count number
# of lists in a list of lists

def countList(lst):
    return len(lst)

# Driver code
lst = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(countList(lst))
3
Method #2: Using type()

Use a for loop, and in every iteration check whether the type of the current item is a list; if it is, increment the ‘count’ variable. This method has a benefit over approach #1, as it works well for a list of heterogeneous elements.
# Python3 program to Count number
# of lists in a list of lists

def countList(lst):
    count = 0
    for el in lst:
        if type(el) == type([]):
            count += 1
    return count

# Driver code
lst = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(countList(lst))
3
A one-liner alternative approach for the above code is given below:
def countList(lst):
    return sum(type(el) == type([]) for el in lst)
Method #3: Using the isinstance() method
# Python3 program to Count number
# of lists in a list of lists

def countList(lst):
    return sum(isinstance(i, list) for i in lst)

# Driver code
lst = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(countList(lst))
3
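To see why the type-checking methods matter, here is a short demonstration on a hypothetical heterogeneous list (an addition, not from the original article): len() counts every element, while the isinstance() approach counts only the sublists.
# Hypothetical heterogeneous list: only two elements are lists
mixed = [[1, 2], 'abc', 7, [3]]

print(len(mixed))                               # 4 (Method #1 overcounts)
print(sum(isinstance(i, list) for i in mixed))  # 2 (Method #3 is accurate)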
Apriori Algorithm in R Programming | 20 Aug, 2021
Apriori algorithm is used for finding frequent itemsets in a dataset for association rule mining. It is called Apriori because it uses prior knowledge of frequent itemset properties. We apply an iterative approach or level-wise search where k-frequent itemsets are used to find k+1 itemsets. To improve the efficiency of the level-wise generation of frequent itemsets an important property is used called Apriori property which helps by reducing the search space. It’s very easy to implement this algorithm using the R programming language.
Apriori Property: All non-empty subsets of a frequent itemset must be frequent. Apriori assumes that all subsets of a frequent itemset must be frequent (Apriori property). If an itemset is infrequent, all its supersets will be infrequent.
Essentially, the Apriori algorithm takes each part of a larger data set and contrasts it with other sets in some ordered way. The resulting scores are used to generate sets that are classed as frequent appearances in a larger database for aggregated data collection. In a practical sense, one can get a better idea of the algorithm by looking at applications such as a Market Basket Tool that helps with figuring out which items are purchased together in a market basket, or a financial analysis tool that helps to show how various stocks trend together. The Apriori algorithm may be used in conjunction with other algorithms to effectively sort and contrast data to show a much better picture of how complex systems reflect patterns and trends.
Support: Support is an indication of how frequently the itemset appears in the dataset. It is the count of records containing an item ‘x’ divided by the total number of records in the database.
Confidence: Confidence is a measure of times such that if an item ‘x’ is bought, then item ‘y’ is also bought together. It is the support count of (x U y) divided by the support count of ‘x’.
Lift: Lift is the ratio of the observed support to that which is expected if ‘x’ and ‘y’ were independent. It is the support count of (x U y) divided by the product of individual support counts of ‘x’ and ‘y’.
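Written as formulas (a restatement of the three definitions above; here $N$ is the total number of transactions and $\text{count}(X)$ is the number of transactions containing itemset $X$):

$$\text{support}(X) = \frac{\text{count}(X)}{N}, \qquad \text{confidence}(X \Rightarrow Y) = \frac{\text{support}(X \cup Y)}{\text{support}(X)}, \qquad \text{lift}(X \Rightarrow Y) = \frac{\text{support}(X \cup Y)}{\text{support}(X)\,\text{support}(Y)}$$

A small worked example with hypothetical numbers: out of 100 transactions, suppose milk appears in 40, bread in 30, and both together in 20. Then support(milk ∪ bread) = 20/100 = 0.2, confidence(milk ⇒ bread) = 0.2/0.4 = 0.5, and lift = 0.2/(0.4 × 0.3) ≈ 1.67, meaning the two items co-occur more often than they would if they were independent.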
The algorithm works as follows:
Read each item in the transaction.
Calculate the support of every item.
If support is less than minimum support, discard the item. Else, insert it into frequent itemset.
Calculate confidence for each non- empty subset.
If confidence is less than minimum confidence, discard the subset. Else, insert it into strong rules.
RStudio provides popular open source and enterprise-ready professional software for the R statistical computing environment. R is a language developed to support statistical calculations and graphical computing/visualizations. It has a library called arules which implements the Apriori algorithm for Market Basket Analysis and computes the strong rules through Association Rule Mining, once we specify the minimum support and minimum confidence according to our needs. Given below are the required code and corresponding output for the Apriori algorithm. The Groceries dataset has been used for the same, which is available in the default database of R. It contains 9,835 transactions/records, each having ‘n’ number of items that were bought together from the grocery store.
Example:
Step 1: Load the required libraries
‘arules’ package provides the infrastructure for representing, manipulating, and analyzing transaction data and patterns.
library(arules)
‘arulesViz’ package is used for visualizing Association Rules and Frequent Itemsets. It extends the package ‘arules’ with various visualization techniques for association rules and itemsets. The package also includes several interactive visualizations for rule exploration.
library(arulesViz)
‘RColorBrewer‘ is a ColorBrewer Palette which provides color schemes for maps and other graphics.
library(RColorBrewer)
Step 2: Import the dataset
‘Groceries‘ dataset is predefined in the R package. It is a set of 9835 records/ transactions, each having ‘n’ number of items, which were bought together from the grocery store.
data("Groceries")
Step 3: Applying apriori() function
‘apriori()‘ function is in-built in R to mine frequent itemsets and association rules using the Apriori algorithm. Here, ‘Groceries’ is the transaction data. ‘parameter’ is a named list that specifies the minimum support and confidence for finding the association rules. The default behavior is to mine the rules with minimum support of 0.1 and 0.8 as the minimum confidence. Here, we have specified the minimum support to be 0.01 and the minimum confidence to be 0.2.
rules <- apriori(Groceries, parameter = list(supp = 0.01, conf = 0.2))
Step 4: Applying inspect() function
inspect() function from arules displays associations and transactions in a readable form. Here, it displays the first 10 strong association rules.
inspect(rules[1:10])
Step 5: Applying itemFrequencyPlot() function
itemFrequencyPlot() creates a bar plot of item frequencies/ support, for inspecting the distribution of items across the transactions. The items are plotted in order of descending support. Here, ‘topN = 20’ means that the 20 items with the highest item frequency (support) will be plotted.
arules::itemFrequencyPlot(Groceries, topN = 20,
col = brewer.pal(8, 'Pastel2'),
main = 'Relative Item Frequency Plot',
type = "relative",
ylab = "Item Frequency (Relative)")
The complete R code is given below.
R
# Loading Libraries
library(arules)
library(arulesViz)
library(RColorBrewer)

# import dataset
data("Groceries")

# using apriori() function
rules <- apriori(Groceries,
                 parameter = list(supp = 0.01, conf = 0.2))

# using inspect() function
inspect(rules[1:10])

# using itemFrequencyPlot() function
arules::itemFrequencyPlot(Groceries, topN = 20,
                          col = brewer.pal(8, 'Pastel2'),
                          main = 'Relative Item Frequency Plot',
                          type = "relative",
                          ylab = "Item Frequency (Relative)")
Output:
Strong Rules:
Strong rules obtained after applying the Apriori algorithm are shown in the output below.
After running the above code for the Apriori algorithm, we can see the following output, specifying the first 10 strongest Association rules, based on the support (minimum support of 0.01), confidence (minimum confidence of 0.2), and lift, along with mentioning the count of times the products occur together in the transactions.
Visualization:
Bar Plot of the Top 20 Items having the Highest Item Frequency (Relative)

We have used the ‘Groceries’ dataset, which has about 9,835 transactions that include ‘n’ number of items that were bought together from the store. On running the Apriori algorithm over the dataset with a minimum support value of 0.01 and minimum confidence of 0.2, we have filtered out the strong association rules in the transactions. We have listed the first 10 rules above, along with the bar plot of the top 20 items having the highest relative item frequency. Some association rules that we can conclude from this program are:
If hard cheese is bought, then whole milk is also bought.
If buttermilk is bought, then whole milk is also bought with it.
If buttermilk is bought, then other vegetables are also bought together.
Also, whole milk has high support as well as a confidence value.
Hence, it will be profitable to put ‘whole milk’ in a visible and reachable shelf as it is one of the most frequently bought items. Also, near the shelf where ‘buttermilk’ is put, there should be shelves for ‘whole milk’ and ‘other vegetables’ as their confidence value is quite high. So there is a higher probability of buying them along with buttermilk. Thus, with similar actions, we can aim at increasing the sales and profits of the grocery store by analyzing users’ shopping patterns.
XML parsing in Python | 28 Jun, 2022
This article focuses on how one can parse a given XML file and extract some useful data out of it in a structured way.
XML: XML stands for eXtensible Markup Language. It was designed to store and transport data, and to be both human- and machine-readable. That is why the design goals of XML emphasize simplicity, generality, and usability across the Internet. The XML file to be parsed in this tutorial is actually an RSS feed.
RSS: RSS (Rich Site Summary, often called Really Simple Syndication) uses a family of standard web feed formats to publish frequently updated information like blog entries, news headlines, audio, and video. RSS is XML-formatted plain text.
The RSS format itself is relatively easy to read, both by automated processes and by humans.
The RSS processed in this tutorial is the RSS feed of top news stories from a popular news website. You can check it out here. Our goal is to process this RSS feed (or XML file) and save it in some other format for future use.
Python Module used: This article will focus on using inbuilt xml module in python for parsing XML and the main focus will be on the ElementTree XML API of this module.
Implementation:
#Python code to illustrate parsing of XML files
# importing the required modules
import csv
import requests
import xml.etree.ElementTree as ET

def loadRSS():

    # url of rss feed
    url = 'http://www.hindustantimes.com/rss/topnews/rssfeed.xml'

    # creating HTTP response object from given url
    resp = requests.get(url)

    # saving the xml file
    with open('topnewsfeed.xml', 'wb') as f:
        f.write(resp.content)


def parseXML(xmlfile):

    # create element tree object
    tree = ET.parse(xmlfile)

    # get root element
    root = tree.getroot()

    # create empty list for news items
    newsitems = []

    # iterate news items
    for item in root.findall('./channel/item'):

        # empty news dictionary
        news = {}

        # iterate child elements of item
        for child in item:

            # special checking for namespace object content:media
            if child.tag == '{http://search.yahoo.com/mrss/}content':
                news['media'] = child.attrib['url']
            else:
                news[child.tag] = child.text.encode('utf8')

        # append news dictionary to news items list
        newsitems.append(news)

    # return news items list
    return newsitems


def savetoCSV(newsitems, filename):

    # specifying the fields for csv file
    fields = ['guid', 'title', 'pubDate', 'description', 'link', 'media']

    # writing to csv file
    with open(filename, 'w') as csvfile:

        # creating a csv dict writer object
        writer = csv.DictWriter(csvfile, fieldnames = fields)

        # writing headers (field names)
        writer.writeheader()

        # writing data rows
        writer.writerows(newsitems)


def main():
    # load rss from web to update existing xml file
    loadRSS()

    # parse xml file
    newsitems = parseXML('topnewsfeed.xml')

    # store news items in a csv file
    savetoCSV(newsitems, 'topnews.csv')


if __name__ == "__main__":

    # calling main function
    main()
The above code will:
Load RSS feed from specified URL and save it as an XML file.
Parse the XML file to save news as a list of dictionaries where each dictionary is a single news item.
Save the news items into a CSV file.
Let us try to understand the code in pieces:
Loading and saving RSS feed
def loadRSS():

    # url of rss feed
    url = 'http://www.hindustantimes.com/rss/topnews/rssfeed.xml'

    # creating HTTP response object from given url
    resp = requests.get(url)

    # saving the xml file
    with open('topnewsfeed.xml', 'wb') as f:
        f.write(resp.content)
Here, we first created an HTTP response object by sending an HTTP request to the URL of the RSS feed. The content of the response now contains the XML file data, which we save as topnewsfeed.xml in our local directory. For more insight on how the requests module works, follow this article: GET and POST requests using Python
Parsing XML

We have created the parseXML() function to parse the XML file. We know that XML is an inherently hierarchical data format, and the most natural way to represent it is with a tree.
Here, we are using the xml.etree.ElementTree (call it ET, in short) module. ElementTree has two classes for this purpose – ElementTree represents the whole XML document as a tree, and Element represents a single node in this tree. Interactions with the whole document (reading and writing to/from files) are usually done on the ElementTree level. Interactions with a single XML element and its sub-elements are done on the Element level.
Ok, so let’s go through the parseXML() function now:
tree = ET.parse(xmlfile)
Here, we create an ElementTree object by parsing the passed xmlfile.
root = tree.getroot()
getroot() function return the root of tree as an Element object.
for item in root.findall('./channel/item'):
Now, once you have taken a look at the structure of your XML file, you will notice that we are interested only in the item element. ./channel/item is actually XPath syntax (XPath is a language for addressing parts of an XML document). Here, we want to find all item grand-children of channel children of the root (denoted by ‘.’) element. You can read more about the supported XPath syntax here; a short aside with more XPath examples follows this walkthrough.
for item in root.findall('./channel/item'):

    # empty news dictionary
    news = {}

    # iterate child elements of item
    for child in item:

        # special checking for namespace object content:media
        if child.tag == '{http://search.yahoo.com/mrss/}content':
            news['media'] = child.attrib['url']
        else:
            news[child.tag] = child.text.encode('utf8')

    # append news dictionary to news items list
    newsitems.append(news)
Now, we know that we are iterating through item elements, where each item element contains one news story. So, we create an empty news dictionary in which we will store all data available about the news item. To iterate through each child element of an element, we simply iterate through it, like this:
for child in item:
Now, notice a sample item element here:
We will have to handle namespace tags separately as they get expanded to their original value, when parsed. So, we do something like this:
if child.tag == '{http://search.yahoo.com/mrss/}content':
    news['media'] = child.attrib['url']
child.attrib is a dictionary of all the attributes related to an element. Here, we are interested in the url attribute of the media:content namespace tag. Now, for all other children, we simply do:
news[child.tag] = child.text.encode('utf8')
child.tag contains the name of child element. child.text stores all the text inside that child element. So, finally, a sample item element is converted to a dictionary and looks like this:
{'description': 'Ignis has a tough competition already, from Hyun.... ,
'guid': 'http://www.hindustantimes.com/autos/maruti-ignis-launch.... ,
'link': 'http://www.hindustantimes.com/autos/maruti-ignis-launch.... ,
'media': 'http://www.hindustantimes.com/rf/image_size_630x354/HT/... ,
'pubDate': 'Thu, 12 Jan 2017 12:33:04 GMT ',
'title': 'Maruti Ignis launches on Jan 13: Five cars that threa..... }
Then, we simply append this dict element to the list newsitems. Finally, this list is returned.
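Before moving on, here is a short illustrative aside (not part of the original program) on the limited XPath subset that ElementTree's findall() accepts, run against the same saved feed:

# A few more XPath patterns understood by ElementTree, on the same tree.
import xml.etree.ElementTree as ET

root = ET.parse('topnewsfeed.xml').getroot()
print(len(root.findall('channel')))               # direct children named channel
print(len(root.findall('.//item')))               # all item elements, at any depth
print(len(root.findall('./channel/item/title')))  # the title of every news item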
Saving data to a CSV file

Now, we simply save the list of news items to a CSV file so that it can be used or modified easily in the future, using the savetoCSV() function. To know more about writing dictionary elements to a CSV file, go through this article: Working with CSV files in Python
So now, here is how our formatted data looks:
As you can see, the hierarchical XML file data has been converted to a simple CSV file, so that all news stories are stored in the form of a table. This makes it easier to extend the database too. Also, one can use the JSON-like data directly in their applications! This is the best alternative for extracting data from websites which do not provide a public API but do provide some RSS feeds.
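As a minimal sketch of that JSON alternative (newsitems below is a shortened, made-up stand-in for the list parseXML() returns, using plain strings rather than encoded bytes):

# Serializing the parsed news items to JSON instead of CSV.
import json

newsitems = [{"title": "Sample headline",
              "link": "http://example.com/story"}]

with open("topnews.json", "w") as f:
    json.dump(newsitems, f, indent=4)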
All the code and files used in the above article can be found here.
What next?
You can have a look at more RSS feeds of the news website used in the above example. You can try to create an extended version of the above example by parsing other RSS feeds too.
Are you a cricket fan? Then this rss feed must be of your interest! You can parse this XML file to scrape information about the live cricket matches and use to make a desktop notifier!
This article is contributed by Nikhil Kumar. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above
IntegerField – Django Forms | 13 Feb, 2020
IntegerField in Django Forms is an integer field, for input of integer numbers. The default widget for this input is NumberInput. It normalizes to a Python integer. It uses MaxValueValidator and MinValueValidator if max_value and min_value are provided. Otherwise, all inputs are valid.

IntegerField has the following optional arguments:

max_value and min_value :- If provided, these arguments ensure that the value entered is at most or at least the given value.
Syntax
field_name = forms.IntegerField(**options)
Illustration of IntegerField using an Example. Consider a project named geeksforgeeks having an app named geeks.
Refer to the following articles to check how to create a project and an app in Django.
How to Create a Basic Project using MVT in Django?
How to Create an App in Django ?
Enter the following code into forms.py file of geeks app.
from django import forms

# creating a form
class GeeksForm(forms.Form):
    geeks_field = forms.IntegerField(max_value = 200)
Add the geeks app to INSTALLED_APPS
# Application definition

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'geeks',
]
Now, to render this form we need a view and a URL mapped to that view. Let’s create the view first in views.py of the geeks app,
from django.shortcuts import render
from .forms import GeeksForm

# Create your views here.
def home_view(request):
    context = {}
    context['form'] = GeeksForm()
    return render(request, "home.html", context)
Here we are importing that particular form from forms.py and creating an object of it in the view so that it can be rendered in a template. Now, to initiate a Django form, you need to create home.html, where one can design the form as they like. Let’s create a form in home.html.
<form method="POST">
    {% csrf_token %}
    {{ form.as_p }}
    <input type="submit" value="Submit">
</form>
Finally, a URL to map to this view in urls.py
from django.urls import path

# importing views from views.py
from .views import home_view

urlpatterns = [
    path('', home_view),
]
Let’s run the server and check what has actually happened. Run:
Python manage.py runserver
Thus, a geeks_field IntegerField is created, its label generated by replacing “_” with ” “. It is a field for input of integer numbers.
IntegerField is used for input of integer numbers in the database. One can use it to input subject marks, age, etc. Till now we have discussed how to implement IntegerField, but how to use it in the view for performing the logical part? To perform some logic we would need to get the value entered into the field into a Python integer instance.
In views.py,
from django.shortcuts import render
from .forms import GeeksForm

# Create your views here.
def home_view(request):
    context = {}
    form = GeeksForm(request.POST or None)
    context['form'] = form
    if request.POST:
        if form.is_valid():
            temp = form.cleaned_data.get("geeks_field")
            print(type(temp))
    return render(request, "home.html", context)
Now let’s try entering some other data into the field.
You can clearly see that it asks you to enter a valid number. Let’s try entering integer data now.
Now this data can be fetched using the corresponding request dictionary. If the method is GET, the data would be available in request.GET, and if POST, in request.POST correspondingly. In the above example we have the value in temp, which we can use for any purpose. You can check that the data is converted to a Python integer instance in geeks_field.
Core Field arguments are the arguments given to each field for applying some constraint or imparting a particular characteristic to a particular Field. For example, adding an argument required = False to IntegerField will enable it to be left blank by the user. Each Field class constructor takes at least these arguments. Some Field classes take additional, field-specific arguments, but the following should always be accepted: required, label, label_suffix, initial, widget, help_text, error_messages, validators, localize, and disabled.
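As a minimal, hypothetical sketch (the form name, bounds, and texts below are arbitrary example values, not from the original article), an IntegerField combining core and field-specific arguments could look like this:

from django import forms

# Hypothetical form combining core arguments (required, label, initial,
# help_text) with IntegerField's own min_value/max_value bounds.
class MarksForm(forms.Form):
    marks = forms.IntegerField(
        required = False,                # field may be left blank
        label = "Marks obtained",        # custom label text
        initial = 0,                     # pre-filled value
        min_value = 0,                   # field-specific bounds
        max_value = 100,
        help_text = "Enter a value between 0 and 100.",
    )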
Generic Linked List in C | 27 Jun, 2022
Unlike C++ and Java, C doesn’t support generics. How can we create a linked list in C that can be used for any data type? In C, we can use a void pointer and a function pointer to implement the same functionality. The great thing about a void pointer is that it can be used to point to any data type. Also, the size of all types of pointers is always the same, so we can always allocate a linked list node. A function pointer is needed to process the actual content stored at the address pointed to by the void pointer.
Following is a sample C code to demonstrate the working of a generic linked list.
C
// C program for generic linked list
#include <stdio.h>
#include <stdlib.h>

/* A linked list node */
struct Node
{
    // Any data type can be stored in this node
    void *data;
    struct Node *next;
};

/* Function to add a node at the beginning of Linked List.
   This function expects a pointer to the data to be added
   and size of the data type */
void push(struct Node** head_ref, void *new_data, size_t data_size)
{
    // Allocate memory for node
    struct Node* new_node = (struct Node*)malloc(sizeof(struct Node));

    new_node->data = malloc(data_size);
    new_node->next = (*head_ref);

    // Copy contents of new_data to newly allocated memory.
    // Assumption: char takes 1 byte.
    size_t i;
    for (i = 0; i < data_size; i++)
        *((char *)new_node->data + i) = *((char *)new_data + i);

    // Change head pointer as new node is added at the beginning
    (*head_ref) = new_node;
}

/* Function to print nodes in a given linked list. fptr is used
   to access the function to be used for printing current node data.
   Note that different data types need different specifier in printf() */
void printList(struct Node *node, void (*fptr)(void *))
{
    while (node != NULL)
    {
        (*fptr)(node->data);
        node = node->next;
    }
}

// Function to print an integer
void printInt(void *n)
{
    printf(" %d", *(int *)n);
}

// Function to print a float
void printFloat(void *f)
{
    printf(" %f", *(float *)f);
}

/* Driver program to test above functions */
int main()
{
    struct Node *start = NULL;

    // Create and print an int linked list
    unsigned int_size = sizeof(int);
    int arr[] = {10, 20, 30, 40, 50}, i;
    for (i = 4; i >= 0; i--)
        push(&start, &arr[i], int_size);
    printf("Created integer linked list is \n");
    printList(start, printInt);

    // Create and print a float linked list
    unsigned float_size = sizeof(float);
    start = NULL;
    float arr2[] = {10.1, 20.2, 30.3, 40.4, 50.5};
    for (i = 4; i >= 0; i--)
        push(&start, &arr2[i], float_size);
    printf("\n\nCreated float linked list is \n");
    printList(start, printFloat);

    return 0;
}
Created integer linked list is
10 20 30 40 50
Created float linked list is
10.100000 20.200001 30.299999 40.400002 50.500000
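Since each push() call allocates memory for both the node and its payload, a complete program should release that memory once the list is no longer needed. Below is a minimal cleanup sketch of our own; the helper name freeList is not part of the original code:

// Free every node of a generic linked list along with
// the data block each node owns
void freeList(struct Node *node)
{
    while (node != NULL)
    {
        struct Node *next = node->next;
        free(node->data); // release the copied payload
        free(node);       // release the node itself
        node = next;
    }
}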
Display the contents of a VIEW in MySQL? | Following is the syntax −
select * from yourViewName;
Let us first create a table −
mysql> create table DemoTable1388
-> (
-> StudentId int NOT NULL AUTO_INCREMENT PRIMARY KEY,
-> StudentName varchar(40)
-> );
Query OK, 0 rows affected (0.71 sec)
Insert some records in the table using insert command −
mysql> insert into DemoTable1388(StudentName) values('Chris');
Query OK, 1 row affected (0.23 sec)
mysql> insert into DemoTable1388(StudentName) values('Bob');
Query OK, 1 row affected (0.17 sec)
mysql> insert into DemoTable1388(StudentName) values('David');
Query OK, 1 row affected (0.12 sec)
mysql> insert into DemoTable1388(StudentName) values('Mike');
Query OK, 1 row affected (0.29 sec)
Display all records from the table using select statement −
mysql> select * from DemoTable1388;
This will produce the following output −
+-----------+-------------+
| StudentId | StudentName |
+-----------+-------------+
| 1 | Chris |
| 2 | Bob |
| 3 | David |
| 4 | Mike |
+-----------+-------------+
4 rows in set (0.00 sec)
Following is the query to create view in MySQL −
mysql> create view view1388 as select * from DemoTable1388 where StudentId=3;
Query OK, 0 rows affected (0.13 sec)
Now, display the contents of view in MySQL −
mysql> select * from view1388;
This will produce the following output −
+-----------+-------------+
| StudentId | StudentName |
+-----------+-------------+
| 3 | David |
+-----------+-------------+
1 row in set (0.19 sec)
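To double-check how MySQL stored the view, the standard SHOW CREATE VIEW statement prints the definition back, and DROP VIEW removes it when it is no longer needed (view1388 is the view created above) −

mysql> show create view view1388;
mysql> drop view view1388;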
How to implement a filter() for Objects in JavaScript? | 28 Jan, 2020
The filter() method outputs all the elements/objects of an array that pass a specific test or satisfy a specific function. The return type of the filter() method is an array consisting of all the element(s)/object(s) satisfying the specified function.
Syntax:
var newArray = arr.filter(callback(object[, ind[, array]])[, Arg])
Parameters:
Callback is a predicate used to test each object of the array. It returns true to keep the object, false otherwise. It takes in three arguments:
Object: The current object being processed in the array.
ind (Optional): Index of the current object being processed in the array.
array (Optional): Array on which filter was called upon.
Arg (Optional): Value to use(.this) when executing callback.
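As a small illustration of the optional parameters (our own sketch, not part of the original article), the callback below uses the element together with its index, keeping only the elements at even positions:

<script>
    var letters = ["a", "b", "c", "d"];
    // "ind" is the optional index argument of the callback
    var evenPositioned = letters.filter((element, ind) => ind % 2 === 0);
    document.write(evenPositioned); // a,c
</script>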
Example 1:
<script>
    var array = [-1, -4, 5, 6, 8, 9, -12, -5, 4, -1];
    var new_array = array.filter(element => element >= 0);
    document.write("<h2>Output\n</h2>", "<h3>", new_array, "</h3>");
</script>
Output: 5,6,8,9,4. The above example returns all the non-negative (here, positive) elements of the given array.
Example 2:
<script>
    var employees = [
        {name: "Tony Stark", department: "IT"},
        {name: "Peter Parker", department: "Pizza Delivery"},
        {name: "Bruce Wayne", department: "IT"},
        {name: "Clark Kent", department: "Editing"}
    ];

    var output = employees.filter(employee => employee.department == "IT");
    for (var i = 0; i < output.length; i++) {
        document.write("<h2>", output[i].name, "</h2>", "<br/>");
    }
</script>
Output: Tony Stark and Bruce Wayne, the two employees whose department is "IT".
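The built-in filter() exists only on arrays, so to filter the properties of a plain object, which is what the title asks about, a common pattern is to convert the object to entries, filter those, and rebuild the object. The sketch below is our own illustration using the standard Object.entries() and Object.fromEntries() methods:

<script>
    var scores = {math: 92, history: 48, physics: 75, art: 33};

    // Keep only the properties whose value passes the test
    var passed = Object.fromEntries(
        Object.entries(scores).filter(([subject, score]) => score >= 50)
    );

    document.write(JSON.stringify(passed)); // {"math":92,"physics":75}
</script>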
Dynamic VideoView in Kotlin | 16 Dec, 2021
In Android, VideoView is used to load video files. We can rely on any of the external resources, URLs, or the local data in order to fetch the video content. In this article, we will be discussing how to create a VideoView in Kotlin dynamically.
Note: If we send the VideoView to the background or simply navigate back from a current video session, the playback position is not saved, i.e., the state where we last left the video is lost. To preserve it, we need to store the position in some external store, such as a database.
The VideoView class provides several methods to facilitate the whole procedure; commonly used ones include setVideoPath()/setVideoURI() to set the source, setMediaController() to attach playback controls, and start(), pause(), seekTo() and stopPlayback() to control playback.
To create a new project in Android Studio follow these steps:
Click on File, then New, then New Project, and give the project any name you like.
Choose “Empty Activity” for the project template.
Then, select Kotlin language support and click the Next button.
Select the minimum SDK you need.
This is how your project directory should look like:
XML
<?xml version="1.0" encoding="utf-8"?><LinearLayout android:id="@+id/layout" xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:gravity="center" android:orientation="vertical"></LinearLayout>
Now, we need to add the video. To do that, we have two options:
We can have a video file stored locally on our system: create a folder named "raw" in the res folder (Android resource folders must be lowercase), add the video file to it, and use the following code snippet.

val path = "android.resource://" + packageName + "/" + R.raw.your_videoFile_name
videoView.setVideoURI(Uri.parse(path))

We can use the video file from any web resource:

val uri = Uri.parse("your_custom_URL")
videoView.setVideoURI(uri)
Insert the following code in your MainActivity.kt.
Kotlin
package gfg.apps.videoview

import android.net.Uri
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.view.ViewGroup
import android.widget.*

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // creating a VideoView
        val videoView = VideoView(this)

        // setting height and width of the VideoView in our linear layout
        val layoutParams = LinearLayout.LayoutParams(
            ViewGroup.LayoutParams.MATCH_PARENT,
            ViewGroup.LayoutParams.MATCH_PARENT
        )
        layoutParams.setMargins(10, 10, 10, 10)
        videoView.layoutParams = layoutParams

        // accessing the media controller
        val mediaController = MediaController(this)
        mediaController.setAnchorView(videoView)
        videoView.setMediaController(mediaController)

        // setting the video access path
        val path = "android.resource://" + packageName + "/" + R.raw.gfg
        videoView.setVideoURI(Uri.parse(path))

        val linearLayout = findViewById<LinearLayout>(R.id.layout)

        // Add VideoView to LinearLayout
        linearLayout?.addView(videoView)
    }
}
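The code above only prepares the video; playback starts when the user presses play on the MediaController. If the video should start automatically once it is ready, a small addition like the following sketch of our own can be appended inside onCreate():

// Start playback automatically once the video has been prepared
videoView.setOnPreparedListener {
    videoView.start()
}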
XML
<?xml version="1.0" encoding="utf-8"?><manifest xmlns:android="http://schemas.android.com/apk/res/android" package="gfg.apps.videoview"> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <activity android:name=".MainActivity"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest>
How to generate Tkinter Buttons dynamically? | In this article, we will see how to create buttons dynamically in a tkinter window.
Creating buttons dynamically means customizing the buttons and their functionality by adding events to them.
First, we will import the tkinter library in the notebook, then we will create button instances using the Button function, which takes parameters such as the parent (root) of the window, textvariable (the value to assign to each button), and command (the callback to run on click).
Button(parent, textvariable, command)
from tkinter import *
import tkinter as tk
# create an instance of tkinter
win = tk.Tk()
#Define the size of the window
win.geometry("700x200")
#Name the title of the window
win.title("www.tutorialspoint.com")
# number of buttons
n=10
#Defining the row and column
i=3
#Iterating over the numbers till n and
#creating the button
for j in range(n):
   mybutton = Button(win, text=j)
mybutton.grid(row=i, column=j)
# Keep the window open
win.mainloop()
Running the above code will generate a window containing the ten dynamically created buttons, labelled 0 through 9, laid out in a single row.
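The example above creates the buttons but does not use the command parameter mentioned earlier. When attaching callbacks to dynamically generated buttons, a common pitfall is that a plain lambda captures the loop variable by reference, so every button ends up reporting the last value. Binding the value as a default argument avoids this; the following sketch of our own illustrates the pattern:

from tkinter import *
import tkinter as tk

win = tk.Tk()
win.geometry("700x200")

# j=j freezes the current loop value inside each lambda,
# so every button reports its own number when clicked
for j in range(10):
   mybutton = Button(win, text=j, command=lambda j=j: print("Button", j, "clicked"))
   mybutton.grid(row=3, column=j)

win.mainloop()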
How to send data back to the Main Activity in Android using Kotlin? | This example demonstrates how to send data back to the Main Activity in Android using Kotlin
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:gravity="center_horizontal"
android:orientation="vertical"
tools:context=".MainActivity">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="100dp"
android:layout_marginBottom="50dp"
android:text="Tutorials Point"
android:textAlignment="center"
android:textColor="@android:color/holo_green_dark"
android:textSize="32sp"
android:textStyle="bold" />
<TextView
android:id="@+id/textViewNumbers"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Numbers: "
android:textColor="@android:color/black"
android:textSize="24sp"
android:textStyle="bold" />
<Button
android:id="@+id/buttonAdd"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="10dp"
android:text="add" />
<Button
android:id="@+id/buttonSubtract"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="5dp"
android:text="subtract" />
</LinearLayout>
Step 3 − Add the following code to src/MainActivity.kt
import android.app.Activity
import android.content.Intent
import android.os.Bundle
import android.widget.Button
import android.widget.EditText
import android.widget.TextView
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
class MainActivity : AppCompatActivity() {
private lateinit var textViewResult: TextView
private lateinit var editTextNumber1: EditText
private lateinit var editTextNumber2: EditText
private lateinit var button: Button
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
title = "KotlinApp"
textViewResult = findViewById(R.id.textViewResult)
editTextNumber1 = findViewById(R.id.editTextNumber1)
editTextNumber2 = findViewById(R.id.editTextNumber2)
button = findViewById(R.id.btnOpenActivity2)
button.setOnClickListener {
if ((editTextNumber1.text.toString() == "" || editTextNumber2.text.toString() == "")) {
Toast.makeText(this@MainActivity, "Please insert numbers", Toast.LENGTH_SHORT).show()
} else {
val number1 = Integer.parseInt(editTextNumber1.text.toString())
val number2 = Integer.parseInt(editTextNumber2.text.toString())
val intent = Intent(this@MainActivity, SecondActivity::class.java)
intent.putExtra("number1", number1)
intent.putExtra("number2", number2)
startActivityForResult(intent, 1)
}
}
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == 1) {
if (resultCode == Activity.RESULT_OK) {
val result = data!!.getIntExtra("result", 0)
textViewResult.text = "" + result
}
if (resultCode == Activity.RESULT_CANCELED) {
textViewResult.text = "Nothing selected"
}
}
}
}
Step 4 − Create a new activity and add the following code −
activity_second.xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:gravity="center_horizontal"
android:orientation="vertical"
tools:context=".MainActivity">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="100dp"
android:layout_marginBottom="50dp"
android:text="Tutorials Point"
android:textAlignment="center"
android:textColor="@android:color/holo_green_dark"
android:textSize="32sp"
android:textStyle="bold" />
<TextView
android:id="@+id/textViewNumbers"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Numbers: "
android:textColor="@android:color/black"
android:textSize="24sp"
android:textStyle="bold" />
<Button
android:id="@+id/buttonAdd"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="10dp"
android:text="add" />
<Button
android:id="@+id/buttonSubtract"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="5dp"
android:text="subtract" />
</LinearLayout>
SecondActivity.kt
import android.app.Activity
import android.content.Intent
import android.os.Bundle
import android.widget.Button
import android.widget.TextView
import androidx.appcompat.app.AppCompatActivity
class SecondActivity : AppCompatActivity() {
lateinit var textViewNumber: TextView
lateinit var buttonAdd: Button
lateinit var buttonSubtract: Button
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_second)
val intent = intent
val number1 = intent.getIntExtra("number1", 0)
val number2 = intent.getIntExtra("number2", 0)
textViewNumber = findViewById(R.id.textViewNumbers)
textViewNumber.text = "Numbers: $number1, $number2"
buttonAdd = findViewById(R.id.buttonAdd)
buttonSubtract = findViewById(R.id.buttonSubtract)
buttonAdd.setOnClickListener {
val result = number1 + number2
val resultIntent = Intent()
resultIntent.putExtra("result", result)
setResult(Activity.RESULT_OK, resultIntent)
finish()
}
buttonSubtract.setOnClickListener {
val result = number1 - number2
val resultIntent = Intent()
resultIntent.putExtra("result", result)
setResult(Activity.RESULT_OK, resultIntent)
finish()
}
}
}
Step 5 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.q11">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
      </activity>
      <activity android:name=".SecondActivity" />
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the
Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
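As a side note, startActivityForResult() and onActivityResult() used above are deprecated in recent AndroidX releases in favor of the Activity Result API. A minimal sketch of the modern equivalent for the calling side is shown below; it assumes the same "result" extra key used in this example:

// Register once, e.g. as a property of MainActivity
private val launcher = registerForActivityResult(
   ActivityResultContracts.StartActivityForResult()
) { result ->
   if (result.resultCode == Activity.RESULT_OK) {
      val value = result.data?.getIntExtra("result", 0)
      textViewResult.text = "" + value
   }
}

// Then launch the second activity with:
// launcher.launch(intent) // instead of startActivityForResult(intent, 1)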
Unfolding Naïve Bayes from Scratch ! | by Aisha Javed | Towards Data Science | Whether you are a beginner in Machine Learning or you have been trying hard to understand the Super Natural Machine Learning Algorithms and you still feel that the dots do not connect somehow, this post is definitely for you!
I have tried to keep things simple and in plain English. The sole purpose is to deeply and clearly understand the working of a well-known text classification ML algorithm (Naïve Bayes) without being trapped in the gibberish mathematical jargon that is often used in the explanation of ML algorithms!
Anyone looking for an in-depth yet understandable explanation of ML algorithms from scratch
A complete, clear picture of the Naïve Bayes ML algorithm with all its mysterious mathematics demystified, plus a concrete step forward in your ML voyage!
Milestone # 1 : A Quick Short Intro to The Naïve Bayes Algorithm
Milestone # 2 : Training Phase of The Naïve Bayes Model
The Grand Milestone # 3 : The Testing Phase, Where Prediction Comes into Play!
Milestone # 4 : Digging Deeper into the Mathematics of Probability
Milestone # 5 : Avoiding the Common Pitfall of The Underflow Error!
Milestone # 6 : Concluding Notes....
Naive Bayes is one of the most common ML algorithms that is often used for the purpose of text classification. If you have just stepped into ML, it is one of the easiest classification algorithms to start with. Naive Bayes is a probabilistic classification algorithm as it uses probability to make predictions for the purpose of classification.
So, if you are looking to take a step forward in your Machine Learning voyage, the Naïve Bayes classifier is definitely your next stop!
Milestone # 1 Achieved 👍
Let’s say, there is a restaurant review, “Very good food and service!!!”, and you want to predict that whether this given review implies a positive or a negative sentiment. To do this, we will first need to train a model ( that essentially means to determine counts of words of each category) on a relevant labelled training data set and then this model itself will be able to automatically classify such reviews into one of the given sentiments against which it was trained for. Assume that you are given a training dataset which looks like something below (a review and it’s corresponding sentiment):
A Quick Side Note : Naive Bayes Classifier is a Supervised Machine Learning Algorithm
As part of the preprocessing phase (which is not covered in detail in this post), all words in the training corpus/training dataset are converted to lowercase, and everything apart from letters, like punctuation, is excluded from the training examples.
A Quick Side Note : A common pitfall is not preprocessing the test data in the same way as the training dataset was preprocessed and rather feeding the test example directly into the trained model. As a result, the trained model performs badly on the given test example on which it was supposed to perform quite good!
Simply make two bags of words (BoW), one for each category; each will contain words and their corresponding counts. All words belonging to the “Positive” sentiment/label go to one BoW and all words belonging to the “Negative” sentiment have their own BoW. Every sentence in the training set is split into words (using space as the tokenizer/separator), and this is how the word-count pairs are constructed, as demonstrated below:
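As a concrete illustration of this training step, here is a minimal Python sketch of our own (the tiny training set below is invented for demonstration and is not the article's dataset):

from collections import Counter

# Toy labelled training data: (review, sentiment)
train = [
    ("very good food and service", "positive"),
    ("terrible food and rude staff", "negative"),
]

# One bag of words (word -> count) per sentiment class
bow = {"positive": Counter(), "negative": Counter()}
for review, label in train:
    bow[label].update(review.lower().split())

print(bow["positive"]["good"])  # 1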
We are done with the training of The Naive Bayes Model!
Milestone # 2 Achieved 👍
Grab a cup of coffee or stretch your muscles before diving into the Grand Milestone # 3
Caution : We are about to begin the most essential part of the Naive Bayes Model, i.e., using the above trained model for prediction of restaurant reviews. I know it's a bit lengthy, but totally worth it, as we will be discussing each and every minute detail, leaving zero ambiguities as the end result!
Consider that your model is now given a restaurant review, “Very good food and service!!!”, and it needs to classify which particular category it belongs to: a positive review or a negative one? We need to find the probability of this given review belonging to each category, and then we will assign it either a positive or a negative label, depending on which particular category this test example scored a higher probability for.
Preprocess the test example in the same way as the training examples were preprocessed i.e changing examples to lower case and excluding everything apart from letters/alphabets.
Tokenize the test example i.e split it into single words.
A Quick Side Note : You must be already familiar with the term “feature” in machine learning. Here, in Naive Bayes, each word in the vocabulary of each class of the training data set constitutes a categorical feature. This implies that counts of all the unique words (i.e vocabulary/vocab) of each class are basically a set of features for that particular class. And why do we need “counts” ? because we need a numeric representation of the categorical word features as the Naive Bayes Model/Algorithm requires numeric features to find out the probabilistic scores!
The not so intimidating mathematical form of finding probability:

p ( i belonging to class c ) = p of class c * product over j of ( p of a test word “ j ” in class c )
let i = test example = “Very good food and service!!!”
Total number of words in i = 5, so values of j (representing feature number) vary from 1 to 5. It’s that simple!
Let’s map the above scenario to the given test example to make it more clear!
Before we start deducing probability of a test word j in a specific class c let’s quickly get familiar with some easy peasy notation that is being used in the not so distant lines of this blog post:
As we have only one example in our test set at the moment (for the sake of understanding), i = 1.
A Quick Side Note : During test time/prediction time, we map every word of test example against it’s count that was found during training phase. So, in this case, we are looking for in total 5 word counts for this given test example.
Before we start calculating product ( p of a test word “ j ” in class c ), we obviously first need to determine the p of a test word “ j ” in class c. There are two ways of doing this, as specified below; which one should actually be followed and is practically used will be discovered in just a few minutes...
Now we can multiply the probabilities of individual words ( as found above ) in order to find the numerical value of the term : product ( p of a test word “ j ” in class c )
By now, we have numerical values for both the terms i.e ( p of class c and product ( p of a test word “ j ” in class c ) ) for both the classes . So we can multiply both of these terms in order to determine p ( i belonging to class c ) for both the categories. This is demonstrated below :
The p ( i belonging to class c ) turns out to be zero for both the categories!!! but clearly the test example “Very good food and service!!!” belongs to positive class! Clearly, this happened because the product ( p of a test word “ j ” in class c ) was zero for both the categories and this in turn was zero because a few words in the given test example (highlighted in orange) NEVER EVER appeared in our training dataset and hence their probability was zero! and clearly they have caused all the destruction!
So does this imply that any word that appears in a test example but never occurred in the training dataset will always cause such destruction? And that in such a case our trained model will never be able to predict the correct sentiment, and will just randomly pick the positive or negative category (since both have the same zero probability) and predict wrongly? The answer is NO! This is where the second method (numbered 2) comes into play; in fact, this is the mathematical formula that is actually used to deduce p ( i belonging to class c ). But before we move on to method number 2, we should first get familiar with its mathematical brainy stuff!
So now after adding pseudocounts of 1’s , the probability p of a test word that NEVER EVER APPEARED IN THE TRAINING DATASET WILL NEVER BE ZERO and therefore, the numerical value of the term product ( p of a test word “ j ” in class c ) will never end up as zero which in turn implies that p ( i belonging to class c ) will never be zero as well! So all is well and no more destruction by zero probabilities!
So the numerator term of method number 2 will have an added 1, as we have added a one for every word in the vocabulary, and so it becomes:

count of test word “ j ” in class c + 1

Similarly, the denominator gets one extra count for every word in the vocabulary and becomes:

total count of all words in class c + number of unique words in the vocabulary (|V|)

And so the complete formula:

p of a test word “ j ” in class c = ( count of word “ j ” in class c + 1 ) / ( total count of words in class c + |V| )
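A minimal Python sketch of this smoothed scoring, continuing the Counter-based bags of words built in the earlier training sketch (our own illustration, not the article's original code; using a Counter means unseen words simply yield a count of 0):

def class_score(test_words, counts, class_prior, vocab_size):
    total = sum(counts.values())
    score = class_prior
    for w in test_words:
        # Laplace smoothing: +1 in the numerator, +|V| in the denominator
        score *= (counts[w] + 1) / (total + vocab_size)
    return score

# counts would be bow["positive"] or bow["negative"] from the training sketch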
You are almost there !
Now, as the probability of the test example "Very good food and service!!!" is higher for the positive class (9.33E-09) than for the negative class (7.74E-09), we can predict it as a positive sentiment! And that is how we simply predict a label for a test/unseen example.
Milestone # 3 Achieved !! 👍 👍 👍
Only a few final touch-ups are remaining !
A Quick Side Note : Like every other machine learning algorithm, Naive Bayes too needs a validation set to assess the trained model's effectiveness. But since this post is aimed at the algorithmic insights, I deliberately skipped it and jumped directly to the testing part.
Now that you have built a basic understanding of the probabilistic calculations needed to train the Naive Bayes Model and then use it to predict the probability for the given test sentence, I will now dig deeper into the probabilistic details. While doing the calculations of the probability of the given test sentence in the above section, we did nothing but implement the given probabilistic formula (Bayes' theorem) for our prediction at test time:

p(c|x) = p(x|c) * p(c) / p(x)
Decoding the above mathematical equation :
“|” = refers to a state which has already been given / or some filtering criteria
“c” = class/category
“x” = test example/test sentence
p (c|x) = given test example x, what is its probability of belonging to class c. This is also known as the posterior probability. This is the conditional probability that is to be found for the given test example x for each of the given training classes.
p(x|c) = given class c, what is the probability of example x belonging to class c. This is also known as the likelihood, as it implies how likely example x is to belong to class c. This is a conditional probability too, as we are finding the probability of x out of the instances of class c only, i.e., we have restricted/conditioned our search space to class c while finding the probability of x. We calculate this probability using the counts of words that are determined during the training phase.
We implicitly used this formula twice above in the calculations sections as we had two classes. Remember finding the numerical value of product ( p of a test word “ j ” in class c ) ?
p(c) = This implies the probability of class c. This is also known as the prior probability, and it is an unconditional probability. We calculated this too earlier above in the probability calculations section (in Step # 1, which was finding the value of the term: p of class c).
p(x) = This is also known as the normalizing constant, so that the probability p(c|x) actually falls in the range [0,1]. If you remove it, the probability p(c|x) may not necessarily fall in the range [0,1]. Intuitively, this is the probability of example x under any circumstances, irrespective of its class label, whether positive or negative. This is also reflected in the total probability theorem, which is used to calculate p(x) and dictates that to find p(x), we find its probability within each given class (because it is an unconditional probability) and simply add them up:

p(x) = sum over all classes c of [ p(x|c) * p(c) ]

This implies that if we have two classes, then we would have two terms, so in our particular case of positive and negative sentiments:

p(x) = p(x|positive) * p(positive) + p(x|negative) * p(negative)
Did we use it in the above calculations? No, we did not. Why? Because we are comparing the probabilities of the positive and negative classes, and since the denominator remains the same, omitting it doesn't affect the prediction of our trained model; it simply cancels out for both classes. So although we could include it, there is no logical reason to do so. But again, as we have eliminated the normalization constant, the probability p(c|x) may not necessarily fall in the range [0,1].
Milestone # 4 Achieved 👍
If you noticed, the numerical values of the probabilities of words (i.e., p of a test word “ j ” in class c) were quite small. Therefore, multiplying all these tiny probabilities to find product ( p of a test word “ j ” in class c ) will yield an even smaller numerical value that often results in underflow, which obviously means that for that given test sentence, the trained model will fail to predict its category/sentiment. To avoid this underflow error, we take help of the mathematical log as follows:

log( p of class c * product over j of p of a test word “ j ” in class c ) = log( p of class c ) + sum over j of log( p of a test word “ j ” in class c )
So now, instead of multiplying the tiny individual word probabilities, we simply add their logs. And why the log, and not some other function? Because the log increases monotonically, which means it will not affect the order of the probabilities. Probabilities that were smaller will still be smaller after the log has been applied to them, and vice versa. So let's say that a test word "is" has a smaller probability than the test word "happy"; after passing both through the log (which maps them to negative numbers of much larger magnitude, comfortably representable as floats), "is" would still score lower than "happy". Therefore, without affecting the predictions of our trained model, we can effectively avoid the common pitfall of the underflow error.
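To make the underflow concrete, here is a minimal Python sketch; the probability values are hypothetical, chosen only to trigger the effect:

import math

# Hypothetical values: each word of a 50-word test sentence has
# probability around 1e-7 under the class model.
word_probs = [1e-7] * 50

product = 1.0
for p in word_probs:
    product *= p              # naive multiplication of tiny probabilities
print(product)                # prints 0.0: the score underflowed to zero

log_score = sum(math.log(p) for p in word_probs)
print(log_score)              # about -805.9: easily representable as a float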
Milestone # 5 Achieved 👍
Although we live in an age of APIs and, in practice, rarely code from scratch, understanding the algorithmic theory in depth is extremely vital to develop a sound understanding of how machine learning algorithms actually work. It is this key understanding that differentiates a true data scientist from a naive one, and it is what actually matters when training a really good model. So before moving to APIs, I personally believe that a true data scientist should code from scratch to actually see behind the numbers and the reason why a particular algorithm is better than another.
One of the best characteristics of the Naive Bayes Model is that you can improve its accuracy by simply updating it with new vocabulary words instead of always retraining it. You just need to add words to the vocabulary and update the word counts accordingly. That's it!
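A minimal sketch of such an incremental update, assuming the trained model is simply the per-class word counts built in the training section (the review text and counts below are illustrative):

from collections import Counter

# Illustrative starting counts for the positive class.
positive_counts = Counter({"good": 3, "service": 2})

def update_model(counts, labelled_review):
    # Add each word of a newly labelled review to the existing
    # bag of words; no retraining from scratch is required.
    for word in labelled_review.lower().split():
        counts[word] += 1

update_model(positive_counts, "good food and friendly staff")
print(positive_counts["good"])    # 4: counts updated in place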
At last! Finally! Milestone # 6 Achieved 😤 😤 😤
So that’s all for this blog post & you have taken a step forward in your ML journey! 😄
Upcoming posts will include :
Unfolding Naïve Bayes from Scratch! Take-2 🎬 Implementation of Naive Bayes from Scratch in Python
Unfolding Naïve Bayes from Scratch! Take-3 🎬 Implementation of Naive Bayes using scikit-learn (Python’s Holy grail of Machine Learning!)
Stay Tuned ! 📻
If you have any thoughts, comments, or questions, feel free to comment below or connect 📞 with me on LinkedIn
Diagrams & text are licensed under Medium’s Terms of Service, i.e. the author owns the rights to the content (s)he created and posts on Medium. “Others cannot copy, distribute, or perform your work without your permission (or as permitted by fair use).”
The figures included in this blog post from other sources don’t fall under this license and can be recognized by a note in their caption (“Photo By...”). All such images have been taken from Unsplash and can be used for free, without needing permission from or providing credit to the photographer or Unsplash. The detailed licensing terms can be found here.
Citation: For attribution purposes, this work is to be cited as
Aisha Javed, “Unfolding Naïve Bayes from Scratch!”, Towards Data Science, 2018
BibTeX Citation
@ARTICLE{javed2018a,
  author  = "Javed, Aisha",
  title   = "Unfolding Naïve Bayes from Scratch!",
  journal = "Towards Data Science",
  year    = "2018",
  note    = "https://towardsdatascience.com/unfolding-na%C3%AFve-bayes-from-scratch-2e86dcae4b01",
  doi     = "2e86dcae4b01"
}
Angular 6 - Data Binding

Data Binding is available right from AngularJS and Angular 2/4, and is now available in Angular 6 as well. We use curly braces for data binding - {{}}; this process is called interpolation. We have already seen in our previous examples how we declared a value for the variable title and how the same is printed in the browser.
The variable in the app.component.html file is referred to as {{title}}; the value of title is initialized in the app.component.ts file, and app.component.html displays that value.
Let us now create a dropdown of months in the browser. To do that, we have created an array of months in app.component.ts as follows −
import { Component } from '@angular/core';
@Component({
   selector: 'app-root',
   templateUrl: './app.component.html',
   styleUrls: ['./app.component.css']
})
export class AppComponent {
   title = 'Angular 6 Project!';
   // declared array of months.
   months = ["January", "February", "March", "April", "May",
      "June", "July", "August", "September",
      "October", "November", "December"];
}
The months array shown above is to be displayed in a dropdown in the browser. For this, we will use the following lines of code −
<!--The content below is only a placeholder and can be replaced. -->
<div style = "text-align:center">
   <h1>
      Welcome to {{title}}.
   </h1>
</div>
<div> Months :
   <select>
      <option *ngFor = "let i of months">{{i}}</option>
   </select>
</div>
We have created a normal select tag with an option. In the option, we have used a for loop to iterate over the months array, which in turn creates one option tag for each value present in months.
The syntax of for in Angular is *ngFor = "let i of months", and to get the value of each month we display it in {{i}}.
The two curly brackets help with data binding. You declare the variables in your app.component.ts file and the same will be replaced using the curly brackets.
Let us see the output of the above months array in the browser.
The variable that is set in the app.component.ts can be bound with the app.component.html using the curly brackets; for example, {{}}.
Let us now display the data in the browser based on condition. Here, we have added a variable and assigned the value as true. Using the if statement, we can hide/show the content to be displayed.
import { Component } from '@angular/core';
@Component({
   selector: 'app-root',
   templateUrl: './app.component.html',
   styleUrls: ['./app.component.css']
})
export class AppComponent {
   title = 'Angular 6 Project!';
   //array of months.
   months = ["January", "February", "March", "April",
      "May", "June", "July", "August", "September",
      "October", "November", "December"];
   isavailable = true; //variable is set to true
}
<!--The content below is only a placeholder and can be replaced.-->
<div style = "text-align:center">
   <h1>
      Welcome to {{title}}.
   </h1>
</div>
<div> Months :
   <select>
      <option *ngFor = "let i of months">{{i}}</option>
   </select>
</div>
<br/>
<div>
   <span *ngIf = "isavailable">Condition is valid.</span>
   <!--Based on the if condition, the text "Condition is valid." is displayed.
   If the value of isavailable is set to false, the text will not be displayed.-->
</div>
Let us try the above example using an if else condition.
import { Component } from '@angular/core';
@Component({
   selector: 'app-root',
   templateUrl: './app.component.html',
   styleUrls: ['./app.component.css']
})
export class AppComponent {
   title = 'Angular 6 Project!';
   //array of months.
   months = ["January", "February", "March", "April",
      "May", "June", "July", "August", "September",
      "October", "November", "December"];
   isavailable = false;
}
In this case, we have set the isavailable variable to false. To print the else condition, we have to create the ng-template as follows −
<ng-template #condition1>Condition is invalid</ng-template>
The full code looks like this −
<!--The content below is only a placeholder and can be replaced.-->
<div style = "text-align:center">
   <h1>
      Welcome to {{title}}.
   </h1>
</div>
<div> Months :
   <select>
      <option *ngFor = "let i of months">{{i}}</option>
   </select>
</div>
<br/>
<div>
   <span *ngIf = "isavailable; else condition1">Condition is valid.</span>
   <ng-template #condition1>Condition is invalid</ng-template>
</div>
*ngIf is used with the else condition, and the template variable used is condition1. The same name is assigned as a reference to the ng-template, and when the isavailable variable is set to false, the text Condition is invalid is displayed.
The following screenshot shows the display in the browser.
Let us now use the if then else condition.
import { Component } from '@angular/core';
@Component({
   selector: 'app-root',
   templateUrl: './app.component.html',
   styleUrls: ['./app.component.css']
})
export class AppComponent {
   title = 'Angular 6 Project!';
   //array of months.
   months = ["January", "February", "March", "April",
      "May", "June", "July", "August", "September",
      "October", "November", "December"];
   isavailable = true;
}
Now, we will set the variable isavailable to true. In the html, the condition is written in the following way −
<!--The content below is only a placeholder and can be replaced.-->
<div style = "text-align:center">
   <h1>
      Welcome to {{title}}.
   </h1>
</div>
<div> Months :
   <select>
      <option *ngFor = "let i of months">{{i}}</option>
   </select>
</div>
<br/>
<div>
   <span *ngIf = "isavailable; then condition1 else condition2">Condition is valid.</span>
   <ng-template #condition1>Condition is valid</ng-template>
   <ng-template #condition2>Condition is invalid</ng-template>
</div>
If the variable is true, the condition1 template is shown; otherwise, condition2. Now, two templates are created with the references #condition1 and #condition2.
The display in the browser is as follows −
How to bring your Data Science Project in production

by René Bremer, Towards Data Science
Why did the model predict this?
Having a build/release pipeline for data science projects can help to answer this question. It enables you to trace back that:
Model M was trained on dataset D with algorithm A by person P
Model M was deployed in production in release R on time T
This audit trail is essential for every model running in production and is required in a lot of industries, e.g. finance.
Last update of blog/git repo: July 21, 2021. Credits to Pardeep Singla for fixing some breaking changes in Azure ML.
I have learned that this blog/repo is regularly used in demos, tutorials, etc. Don’t hesitate to contact me if you do so as well; I would love to know.
In this tutorial, a build/release pipeline for a machine learning project is created as follows:
An HTTP endpoint is created that predicts whether the income of a person is higher or lower than 50k per year, using features such as age, hours worked per week and education.
Azure Databricks with Spark, Azure ML and Azure DevOps are used to create a model and endpoint. Azure Kubernetes Service (AKS) is both used as test and production environment.
The project can be depicted in the following high level overview:
In the remainder of this blog, the following steps will be executed:
3. Prerequisites
4. Create machine learning model in Azure Databricks
5. Manage model in Azure Machine Learning Service
6,7. Build and release model in Azure DevOps
8. Conclusion
The follow-up of this blog can be found here, in which security is embedded in the build/release pipeline. Furthermore, the details of the audit trail are discussed in this blog. Finally, if you are interested in how to use Azure Databricks with Azure Data Factory, refer to this blog.
The following resources are required in this tutorial:
Azure Databricks
Azure Machine Learning Service
Azure DevOps
Azure Databricks is an Apache Spark-based analytics platform optimized for Azure. It can be used for many analytical workloads, amongst others machine learning and deep learning. In this step, the following is done:
4a. Create new cluster
4b. Import notebook
4c. Run notebook
Start your Azure Databricks workspace and go to Cluster. Create a new cluster with the following settings (edit September 2020: the DevOps pipeline uses Databricks Runtime 6.6; it is recommended to use this runtime version for interactive analysis in Databricks as well):
Go to your Azure Databricks workspace, right-click and then select import. In the radio button, select to import the following notebook using URL:
https://raw.githubusercontent.com/rebremer/devopsai_databricks/master/project/modelling/1_IncomeNotebookExploration.py
See also picture below:
Select the notebook you imported in 4b and attach it to the cluster you created in 4a. Make sure that the cluster is running; otherwise, start it. Read the steps in the notebook, in which the data is explored and several settings and algorithms are tried in order to create a model that predicts the income class of a person. Walk through the notebook cell by cell using the shortcut SHIFT+ENTER.
Azure Machine Learning Service (Azure ML) is a cloud service that you use to train, deploy, automate, and manage machine learning models. In this context, the model that was created in the previous step will be added to your Azure ML instance. The following steps will be executed:
5a. Add library to Databricks cluster
5b. Import notebook using Azure ML to Azure Databricks
5c. Review results in Azure ML
Right click in your workspace and select to “create library”
Select PyPi and then fill in: azureml-sdk[databricks]
Finally, attach the library to the cluster.
In the previous part of this tutorial, a model was created in Azure Databricks. In this part, you are going to add the created model to Azure Machine Learning Service.
Go to your Databricks service again, right-click, select import and import a notebook using the following URL:
https://raw.githubusercontent.com/rebremer/devopsai_databricks/master/project/modelling/2_IncomeNotebookAMLS.py
Again, make sure it is attached to a cluster and the cluster is running.
Subsequently, fill in the correct values for workspace, subscription_id and resource_grp. All values can be found in the overview tab of your Azure Machine Learning Service Workspace in the Azure Portal.
Now run the notebook cell by cell using the shortcut SHIFT+ENTER.
In cell 6, you will need to authenticate to Azure Machine Learning Service from the notebook. Follow the instructions in the notebook by opening the URL and entering the generated code to authenticate.
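As a reference, a minimal sketch of what this connection step boils down to; the placeholder values are assumptions to be replaced with your own settings, and the notebook itself performs the login via the printed URL and device code:

from azureml.core import Workspace
from azureml.core.authentication import InteractiveLoginAuthentication

# Placeholders below must be replaced with your own values.
ws = Workspace(subscription_id="<<Subscription id>>",
               resource_group="<<Name of your resource group with aml service>>",
               workspace_name="<<Name of your workspace>>",
               auth=InteractiveLoginAuthentication())
print(ws.name)   # confirms the connection to the workspace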
In step 5b, a notebook was run in which the results were written to Azure Machine Learning Service. In this, the following was done:
A new experiment was created in your Azure ML
Within this experiment, a root run with 6 child runs was created, in which the different attempts can be found.
A child run contains a description of the model (e.g. Logistic Regression with regularization 0) and the most important logging of the attempt (e.g. accuracy, number of false positives)
The model artifact (.mml) is also part of a child run. The artifact of the best child run can be taken and deployed into production.
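For reference, a minimal sketch of the logging pattern that produces such runs, metrics and artifacts; the metric value is illustrative, not the real result from the notebook:

from azureml.core import Experiment

experiment = Experiment(workspace=ws, name="experiment_model_int")
run = experiment.start_logging()
run.log("accuracy", 0.84)                  # appears as a metric of the run
run.upload_file("model.mml", "model.mml")  # the model artifact of the run
run.complete()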
Go to your Azure ML instance. Select the experiment name that was used in the notebook (e.g. experiment_model_int).
Now click on the experiment, then click on the run and child run for which you want to see the metrics.
When you go to output, you will find the model artifact, which you can also download. The model artifact of the best run will be used as the base of the container that is deployed using Azure DevOps in the next part of this tutorial.
Azure DevOps is the tool to continuously build, test, and deploy your code to any platform and cloud. In chapter 6, an Azure DevOps will be created and prepared. The project will be prepared using the following steps:
6a. Create Personal Access Token in Databricks
6b. Create AKS cluster
6c. Create Azure DevOps project and service connection
6d. Add variables to code
In chapter 7, the actual build-release pipeline will be created and run to create an endpoint of the model.
To run notebooks in Azure Databricks triggered from Azure DevOps (using REST APIs), a Databricks Access Token (PAT) is required for authentication.
Go to Azure Databricks and click the person icon in the upper right corner. Select User Settings and then generate a new token.
Make sure to copy the token now; you won’t be able to see it again. The token is needed to access Databricks from the Azure DevOps build pipeline later.
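A minimal sketch of how such a token is used against the Databricks REST API; the listed endpoint is just an example call, and the domain must match your own region:

import requests

domain = "westeurope.azuredatabricks.net"            # adjust when needed
token = "<<your Databricks Personal Access Token>>"

response = requests.get(f"https://{domain}/api/2.0/clusters/list",
                        headers={"Authorization": f"Bearer {token}"})
print(response.status_code)   # 200 means the token is accepted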
In this step, a test and a production environment are created in Azure Kubernetes Service (AKS). Typically, these would be 2 separate AKS environments; however, for simplicity and cost savings, only one environment is created. First, go to your Azure ML Service Workspace and select Compute. Use blog-devai-aks as the compute name and select Kubernetes Service as the compute type, see also below.
Creating an AKS cluster takes approximately 10 minutes. Continue to the next step.
Create a new project in Azure DevOps by following this tutorial. Once you have created a new project, click on the repository folder and select to import the following repository:
https://github.com/rebremer/devopsai_databricks.git
See also picture below:
A service connection is needed to access the resources in the resource group from Azure DevOps. Go to project settings, select service connections and then Azure Resource Manager.
Select Service Principal Authentication and limit scope to your resource group in which your Machine Learning Workspace Service is deployed. Make sure that you name the connection as follows: devopsaisec_service_connection.
In the repos you created in the previous step, the following file shall be changed:
\project\configcode_build_release.yml
Fill in the variables workspace, subscription_id and resource_grp with the values of your Machine Learning Service Workspace, as in step 5b. Also fill in your Databricks Personal Access Token generated in step 6a.
variables:
  # change 5 variables below with your own settings, make sure that
  # ": " with a space is kept and not replaced with "="
  workspace: '<<Name of your workspace>>'
  subscription_id: '<<Subscription id>>'
  resource_grp: '<<Name of your resource group with aml service>>'
  domain: 'westeurope.azuredatabricks.net' # change loc. when needed
  dbr_pat_token_raw: '<<your Databricks Personal Access Token>>'
The file can be changed in Azure DevOps by looking it up in the repos, clicking "edit", changing the variables and then "committing" the file. You can also clone the project and work from there. Notice that in a production situation, keys must never be added to code. Instead, secret variables in an Azure DevOps pipeline shall be used; this is dealt with in this follow-up tutorial.
In this chapter, an Azure DevOps project is created and prepared. Now the model is ready to be built and released in the Azure DevOps project.
In this part, the model is built and released in the Azure DevOps using the following steps:
7a. Create build-release pipeline
7b. Run build-release pipeline
7c. Consume HTTP endpoint with Postman
In this step, you are going to create a build-release pipeline. Go to the Azure DevOps project you created in 6c and then click on Pipelines. A wizard is shown in which your Azure Repos Git shall be selected, see also below.
Subsequently, select the Git repo attached to this project and then select "Existing Azure Pipelines YAML file". Then browse to the file \project\configcode_build_release_aci_only.yml, or \project\configcode_build_release.yml in case an AKS cluster was created in step 6b, see also below.
Finally review your pipeline and save your pipeline, see also below.
In this chapter, the pipeline was configured. In this pipeline the following steps will be executed:
Build:
Select Python 3.6 and install dependencies
Upload notebook to Databricks
Create model using Azure Databricks by running notebook. Add model to Azure Machine Learning service
Creation of build artifact as input for release deployTest and deployProd
Release deployTest:
Retrieve model created in Build step
Deploy model as docker image to AKS as test endpoint
Test “test endpoint” in AKS
Release deployProd:
Retrieve model created in Build step
Deploy model as docker image to AKS as prd endpoint
Test “prod endpoint” in AKS
In the next part, the pipeline will be run.
In this step, the build-release pipeline will be run in Azure DevOps. Go to the pipeline you created in the previous step, select it and then select queue, see also below.
When the pipeline is started, a docker image is created containing an ML model using Azure Databricks and Azure ML in the build step. Subsequently, the docker image is deployed/released in ACI and AKS. A successful run can be seen below.
Notice that if you decided not to deploy the docker image in AKS, the previous steps will still be executed and the AKS step will fail. For detailed logging, you can click on the various steps.
When you go to the Azure ML Workspace, you can find the endpoints of the models you deployed in 7b. These endpoints will now be consumed by Postman to create predictions. An example payload can be found in project/services/50_testEndpoint.py in the project. In this example, the income class of three persons is predicted.
The prediction for the first person is that the income is higher than 50k.
For the other two persons, the prediction is lower than 50k.
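If you prefer scripting over Postman, a hedged Python sketch of the same call is shown below; the scoring URL is a placeholder and the feature rows are illustrative, see project/services/50_testEndpoint.py for the actual payload:

import json
import requests

scoring_url = "http://<<your AKS or ACI endpoint>>/score"      # placeholder
payload = json.dumps({"data": [
    [39, "State-gov", "Bachelors", 13, "Never-married", 40],   # person 1
    [50, "Self-emp", "HS-grad", 9, "Married-civ-spouse", 13],  # person 2
    [38, "Private", "11th", 7, "Divorced", 40],                # person 3
]})
response = requests.post(scoring_url, data=payload,
                         headers={"Content-Type": "application/json"})
print(response.json())        # predicted income class per person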
In this tutorial, an end-to-end pipeline for a machine learning project was created. In this:
Azure Databricks with Spark was used to explore the data and create the machine learning models.
Azure Machine Learning Service was used to keep track of the models and its metrics.
Azure Devops was used to build an image of the best model and to release it as an endpoint.
This way you can orchestrate and monitor the entire pipeline, from idea to the moment that the model is brought into production. This enables you to answer the question: Why did the model predict this?
The architecture overview can be found below. In this follow-up tutorial, security of the pipeline is enhanced.
{
"code": null,
"e": 367,
"s": 172,
"text": "A lot of companies struggle to bring their data science projects into production. A common issue is that the closer the model is to production, the harder it is to answer the following question:"
},
{
"code": null,
"e": 399,
"s": 367,
"text": "Why did the model predict this?"
},
{
"code": null,
"e": 526,
"s": 399,
"text": "Having a build/release pipeline for data science projects can help to answer this question. It enables you to trace back that:"
},
{
"code": null,
"e": 588,
"s": 526,
"text": "Model M was trained on dataset D with algorithm A by person P"
},
{
"code": null,
"e": 646,
"s": 588,
"text": "Model M was deployed in production in release R on time T"
},
{
"code": null,
"e": 768,
"s": 646,
"text": "This audit trail is essential for every model running in production and is required in a lot of industries, e.g. finance."
},
{
"code": null,
"e": 885,
"s": 768,
"text": "Last update of blog/git repo: July 21, 2021. Credits to Pardeep Singla for fixing some breaking changes in Azure ML."
},
{
"code": null,
"e": 1037,
"s": 885,
"text": "I have learned that this blog/repo is regularly used in demos, tutorials, etc. Don’t hesitate to contact me if you do so as well, I would love to know."
},
{
"code": null,
"e": 1134,
"s": 1037,
"text": "In this tutorial, a build/release pipeline for a machine learning project is created as follows:"
},
{
"code": null,
"e": 1296,
"s": 1134,
"text": "An HTTP endpoint is created that predicts if the income of a person is higher or lower than 50k per year using features as age, hours of week working, education."
},
{
"code": null,
"e": 1472,
"s": 1296,
"text": "Azure Databricks with Spark, Azure ML and Azure DevOps are used to create a model and endpoint. Azure Kubernetes Service (AKS) is both used as test and production environment."
},
{
"code": null,
"e": 1538,
"s": 1472,
"text": "The project can be depicted in the following high level overview:"
},
{
"code": null,
"e": 1607,
"s": 1538,
"text": "In the remainder of this blog, the following steps will be executed:"
},
{
"code": null,
"e": 1624,
"s": 1607,
"text": "3. Prerequisites"
},
{
"code": null,
"e": 1677,
"s": 1624,
"text": "4. Create machine learning model in Azure Databricks"
},
{
"code": null,
"e": 1727,
"s": 1677,
"text": "5. Manage model in Azure Machine Learning Service"
},
{
"code": null,
"e": 1772,
"s": 1727,
"text": "6,7. Build and release model in Azure DevOps"
},
{
"code": null,
"e": 1786,
"s": 1772,
"text": "8. Conclusion"
},
{
"code": null,
"e": 2063,
"s": 1786,
"text": "The follow-up of the blog can be found here in which security is embedded in the build/release pipeline. Furthermore, the details of the audit trail are discussed in this blog. Finally, if you interested how to use Azure Databricks with Azure Data Factory, refer to this blog."
},
{
"code": null,
"e": 2118,
"s": 2063,
"text": "The following resources are required in this tutorial:"
},
{
"code": null,
"e": 2135,
"s": 2118,
"text": "Azure Databricks"
},
{
"code": null,
"e": 2166,
"s": 2135,
"text": "Azure Machine Learning Service"
},
{
"code": null,
"e": 2179,
"s": 2166,
"text": "Azure DevOps"
},
{
"code": null,
"e": 2395,
"s": 2179,
"text": "Azure Databricks is an Apache Spark-based analytics platform optimized for Azure. It can be used for many analytical workloads, amongst others machine learning and deep learning. In this step, the following is done:"
},
{
"code": null,
"e": 2418,
"s": 2395,
"text": "4a. Create new cluster"
},
{
"code": null,
"e": 2438,
"s": 2418,
"text": "4b. Import notebook"
},
{
"code": null,
"e": 2455,
"s": 2438,
"text": "4c. Run notebook"
},
{
"code": null,
"e": 2725,
"s": 2455,
"text": "Start your Azure Databricks workspace and go to Cluster. Create a new cluster with the following settings (edit September 2020: in devOps pipeline, Databricks Runtime 6.6. is used, recommended to use this runtime version for interactive analysis in Databricks as well):"
},
{
"code": null,
"e": 2872,
"s": 2725,
"text": "Go to your Azure Databricks workspace, right-click and then select import. In the radio button, select to import the following notebook using URL:"
},
{
"code": null,
"e": 2991,
"s": 2872,
"text": "https://raw.githubusercontent.com/rebremer/devopsai_databricks/master/project/modelling/1_IncomeNotebookExploration.py"
},
{
"code": null,
"e": 3015,
"s": 2991,
"text": "See also picture below:"
},
{
"code": null,
"e": 3414,
"s": 3015,
"text": "Select the notebook you imported in 4b and attach the notebook to the cluster you created in 4a. Make sure that the cluster is running and otherwise start it. Read the steps in the notebook, in which the data is explored and several settings and algorithms are tried to create a model that predicts the income class of a person. Walk through the notebook cell by cell by using shortcut SHIFT+ENTER."
},
{
"code": null,
"e": 3692,
"s": 3414,
"text": "Azure Machine Learning Service (Azure ML) is a cloud service that you use to train, deploy, automate, and manage machine learning models. In this context, the model that was created in previous step will be added to your Azuere ML instance. The following steps will be executed"
},
{
"code": null,
"e": 3730,
"s": 3692,
"text": "5a. Add library to Databricks cluster"
},
{
"code": null,
"e": 3785,
"s": 3730,
"text": "5b. Import notebook using Azure ML to Azure Databricks"
},
{
"code": null,
"e": 3816,
"s": 3785,
"text": "5c. Review results in Azure ML"
},
{
"code": null,
"e": 3877,
"s": 3816,
"text": "Right click in your workspace and select to “create library”"
},
{
"code": null,
"e": 3931,
"s": 3877,
"text": "Select PyPi and then fill in: azureml-sdk[databricks]"
},
{
"code": null,
"e": 3975,
"s": 3931,
"text": "Finally, attach the library to the cluster."
},
{
"code": null,
"e": 4141,
"s": 3975,
"text": "In the prevous part of this tutorial, a model was created in Azure Databricks. In this part you are going to add the created model to Azure Machine Learning Service."
},
{
"code": null,
"e": 4256,
"s": 4141,
"text": "Go to your Databricks Service again, right click, select import and import the a notebook using the following URL:"
},
{
"code": null,
"e": 4368,
"s": 4256,
"text": "https://raw.githubusercontent.com/rebremer/devopsai_databricks/master/project/modelling/2_IncomeNotebookAMLS.py"
},
{
"code": null,
"e": 4440,
"s": 4368,
"text": "Again, make sure it is attached to a cluster and the cluster is running"
},
{
"code": null,
"e": 4644,
"s": 4440,
"text": "Subsequently, fill in the correct values for workspace, subscription_id and resource_grp. All values can be found in the overview tab of your Azure Machine Learning Service Workspace in the Azure Portal."
},
{
"code": null,
"e": 4710,
"s": 4644,
"text": "Now run the notebook cell for cell by using shortcut SHIFT+ENTER."
},
{
"code": null,
"e": 4906,
"s": 4710,
"text": "In cell 6, you will need to authenticate to Azure Machine Learning Service in the notebook. Follow the instruction in the notebook by opening the URL and enter the generated code to authenticate."
},
{
"code": null,
"e": 5039,
"s": 4906,
"text": "In step 5b, a notebook was run in which the results were written to Azure Machine Learning Service. In this, the following was done:"
},
{
"code": null,
"e": 5084,
"s": 5039,
"text": "A new experiment was created in you Azure ML"
},
{
"code": null,
"e": 5180,
"s": 5084,
"text": "With in this experiment, a root run with 6 child runs were the different attempts can be found."
},
{
"code": null,
"e": 5364,
"s": 5180,
"text": "A childrun contains a description of the model (e.g. Logistic Regression with regularization 0) and the most important logging of the attempt (e.g. accuracy, number of false postives)"
},
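{
"code": null,
"e": null,
"s": null,
"text": "To give an impression of how such logging is produced (a sketch; the notebook's actual code may differ), each attempt can be written as a child run with its own description and metrics:"
},
{
"code": "# Sketch: log one child run per modelling attempt (names and values illustrative)
from azureml.core import Experiment

experiment = Experiment(ws, 'experiment_model_int')
root_run = experiment.start_logging()
for description, accuracy in [('LogisticRegression regParam=0.0', 0.82),
                              ('LogisticRegression regParam=0.5', 0.79)]:
    child = root_run.child_run(name=description)
    child.log('accuracy', accuracy)  # appears in the child run's metrics in Azure ML
    child.complete()
root_run.complete()",
"e": null,
"s": null,
"text": null
},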
{
"code": null,
"e": 5497,
"s": 5364,
"text": "The model artificact (.mml) is also part of a childrun. The artifact of the best childrun can be taken and deployed into production."
},
{
"code": null,
"e": 5612,
"s": 5497,
"text": "Go to you Azure ML instance. Select the experiment name that was used in the notebook (e.g. experiment_model_int)."
},
{
"code": null,
"e": 5700,
"s": 5612,
"text": "Now click on the experiment, click on the run and childrun you want to see the metrics."
},
{
"code": null,
"e": 5935,
"s": 5700,
"text": "When you go to output, you will find the model artifact, which you can also download. The model artifact of the best run will be used as the base of the containter that is deployed using Azure DevOps in the next part of this tutorial."
},
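{
"code": null,
"e": null,
"s": null,
"text": "A sketch of how the best child run can be selected programmatically and its artifact registered as a model (the model name and artifact path are assumptions):"
},
{
"code": "# Sketch: pick the child run with the highest accuracy and register its artifact
runs = list(experiment.get_runs())                  # most recent root run first
children = list(runs[0].get_children())
best = max(children, key=lambda r: r.get_metrics().get('accuracy', 0))

model = best.register_model(model_name='databricksmodel',   # assumed name
                            model_path='outputs/model.mml') # assumed artifact path
print(model.name, model.version)",
"e": null,
"s": null,
"text": null
},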
{
"code": null,
"e": 6153,
"s": 5935,
"text": "Azure DevOps is the tool to continuously build, test, and deploy your code to any platform and cloud. In chapter 6, an Azure DevOps will be created and prepared. The project will be prepared using the following steps:"
},
{
"code": null,
"e": 6200,
"s": 6153,
"text": "6a. Create Personal Access Token in Databricks"
},
{
"code": null,
"e": 6223,
"s": 6200,
"text": "6b. Create AKS cluster"
},
{
"code": null,
"e": 6278,
"s": 6223,
"text": "6c. Create Azure DevOps project and service connection"
},
{
"code": null,
"e": 6304,
"s": 6278,
"text": "6d. Add variables to code"
},
{
"code": null,
"e": 6412,
"s": 6304,
"text": "In chapter 7, the actual build-release pipeline will be created and run to create an endpoint of the model."
},
{
"code": null,
"e": 6559,
"s": 6412,
"text": "To run Notebooks in Azure Databricks triggered from Azure DevOps (using REST APIs), a Databrics Access Token (PAT) is required for authentication."
},
{
"code": null,
"e": 6690,
"s": 6559,
"text": "Go to Azure Databricks and click to the person icon in the upper right corner. Select User Settings and then generate a new token."
},
{
"code": null,
"e": 6838,
"s": 6690,
"text": "Make sure to copy the token now. You won’t be able to see it again. Token is needed to access Databricks from the Azure DevOps build pipeline later"
},
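{
"code": null,
"e": null,
"s": null,
"text": "As an illustration of how the pipeline uses this token (a sketch, not the pipeline's actual script), a notebook can for example be uploaded over the Databricks REST API 2.0 with the PAT as bearer token:"
},
{
"code": "# Sketch: upload a notebook over the Databricks REST API using the PAT
import base64
import requests

domain = 'westeurope.azuredatabricks.net'   # adjust to your region
token = '<<your Databricks Personal Access Token>>'

with open('1_IncomeNotebookExploration.py', 'rb') as f:
    content = base64.b64encode(f.read()).decode()

resp = requests.post(
    'https://' + domain + '/api/2.0/workspace/import',
    headers={'Authorization': 'Bearer ' + token},
    json={'path': '/devopsai/notebook',        # assumed target path
          'format': 'SOURCE', 'language': 'PYTHON',
          'content': content, 'overwrite': True})
resp.raise_for_status()",
"e": null,
"s": null,
"text": null
},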
{
"code": null,
"e": 7219,
"s": 6838,
"text": "In this step, a test and production environment is created in Azure Kubernetes Services (AKS). Typically, these are 2 separate AKS environments, however, for simplicity and cost savings only environment is created. First, go to to you Azure ML Service Workspace and select Compute. Take as compute name blog-devai-aks and select Kubernetes Service as compute type, see also below."
},
{
"code": null,
"e": 7302,
"s": 7219,
"text": "Creating an AKS cluster takes approximately 10 minutes. Continue to the next step."
},
{
"code": null,
"e": 7476,
"s": 7302,
"text": "Create a new project in Azure DevOps by following this tutorial. Once you create a new project, click on the repository folder and select to import the following repository:"
},
{
"code": null,
"e": 7528,
"s": 7476,
"text": "https://github.com/rebremer/devopsai_databricks.git"
},
{
"code": null,
"e": 7552,
"s": 7528,
"text": "See also picture below:"
},
{
"code": null,
"e": 7731,
"s": 7552,
"text": "A Service connection is needed to access the resources in the resource group from Azure DevOps. Go to project settings, service connection and then select Azure Resource Manager."
},
{
"code": null,
"e": 7955,
"s": 7731,
"text": "Select Service Principal Authentication and limit scope to your resource group in which your Machine Learning Workspace Service is deployed. Make sure that you name the connection as follows: devopsaisec_service_connection."
},
{
"code": null,
"e": 8040,
"s": 7955,
"text": "In the Repos you created in the previous step, the following files shall be changed:"
},
{
"code": null,
"e": 8078,
"s": 8040,
"text": "\\project\\configcode_build_release.yml"
},
{
"code": null,
"e": 8290,
"s": 8078,
"text": "With the same variables for workspace, subscription_id and resource with values of your Machine Learning Service Workspace as in step 5b. Also, fill in your Databricks Personal Access Token generated in step 6a."
},
{
"code": null,
"e": 8697,
"s": 8290,
"text": "variables: # change 5 variables below with your own settings, make sure that # : with a space is kept and not replaced with = workspace: '<<Name of your workspace>>' subscription_id: '<<Subscription id>>' resource_grp: '<<Name of your resource group with aml service>>' domain: 'westeurope.azuredatabricks.net' # change loc.when needed dbr_pat_token_raw: '<<your Databricks Personal Access Token>>'"
},
{
"code": null,
"e": 9082,
"s": 8697,
"text": "The files can be changed in Azure DevOps by looking up the file in the Repos, click on “edit”, change the variables and then “commit” the file. You can also clone the project and work from there. Notice that in a production situation, keys must never be added to a code. Instead, secret variables in an Azure DevOps pipeline shall be used and is dealt with in this follow-up tutorial."
},
{
"code": null,
"e": 9225,
"s": 9082,
"text": "In this chapter, an Azure DevOps project is created and prepared. Now the model is ready to be built and released in the Azure DevOps project."
},
{
"code": null,
"e": 9318,
"s": 9225,
"text": "In this part, the model is built and released in the Azure DevOps using the following steps:"
},
{
"code": null,
"e": 9352,
"s": 9318,
"text": "7a. Create build-release pipeline"
},
{
"code": null,
"e": 9383,
"s": 9352,
"text": "7b. Run build-release pipeline"
},
{
"code": null,
"e": 9422,
"s": 9383,
"text": "7c. Consume HTTP endpoint with Postman"
},
{
"code": null,
"e": 9648,
"s": 9422,
"text": "In this step, you are going to create a build-release pipeline. Go to Azure DevOps project you have created in 6c and then click on Pipelines. A wizard is shown in which your Azure Repos Git shall be selected, see also below."
},
{
"code": null,
"e": 9938,
"s": 9648,
"text": "Subsequently, select your Git repo attached to this project and then select “Existing Azure Pipelines YAML file”. Then browse the directory \\project\\configcode_build_release_aci_only.yml or \\project\\configcode_build_release.yml in case an AKS cluster is created in step 6b, see also below."
},
{
"code": null,
"e": 10007,
"s": 9938,
"text": "Finally review your pipeline and save your pipeline, see also below."
},
{
"code": null,
"e": 10108,
"s": 10007,
"text": "In this chapter, the pipeline was configured. In this pipeline the following steps will be executed:"
},
{
"code": null,
"e": 10115,
"s": 10108,
"text": "Build:"
},
{
"code": null,
"e": 10158,
"s": 10115,
"text": "Select Python 3.6 and install dependencies"
},
{
"code": null,
"e": 10188,
"s": 10158,
"text": "Upload notebook to Databricks"
},
{
"code": null,
"e": 10289,
"s": 10188,
"text": "Create model using Azure Databricks by running notebook. Add model to Azure Machine Learning service"
},
{
"code": null,
"e": 10363,
"s": 10289,
"text": "Creation of build artifact as input for release deployTest and deployProd"
},
{
"code": null,
"e": 10383,
"s": 10363,
"text": "Release deployTest:"
},
{
"code": null,
"e": 10420,
"s": 10383,
"text": "Retrieve model created in Build step"
},
{
"code": null,
"e": 10473,
"s": 10420,
"text": "Deploy model as docker image to AKS as test endpoint"
},
{
"code": null,
"e": 10501,
"s": 10473,
"text": "Test “test endpoint” in AKS"
},
{
"code": null,
"e": 10521,
"s": 10501,
"text": "Release deployProd:"
},
{
"code": null,
"e": 10558,
"s": 10521,
"text": "Retrieve model created in Build step"
},
{
"code": null,
"e": 10610,
"s": 10558,
"text": "Deploy model as docker image to AKS as prd endpoint"
},
{
"code": null,
"e": 10638,
"s": 10610,
"text": "Test “prod endpoint” in AKS"
},
{
"code": null,
"e": 10682,
"s": 10638,
"text": "In the next part, the pipeline will be run."
},
{
"code": null,
"e": 10862,
"s": 10682,
"text": "In this step, the build-release pipeline will be run in Azure DevOps. Go to your pipeline deployed in the previous step, select the pipeline and then select queue, see also below."
},
{
"code": null,
"e": 11100,
"s": 10862,
"text": "When the pipeline is started, a docker image is created containing an ML model using Azure Databricks and Azure ML in the build step. Subsequently, the docker image is deployed/released in ACI and AKS. A successful run can be seen below."
},
{
"code": null,
"e": 11294,
"s": 11100,
"text": "Notice that if you decided to not deploy the docker image in AKS, the previous steps will still be executed and the AKS step will fail. For detailed logging, you can click on the various steps."
},
{
"code": null,
"e": 11621,
"s": 11294,
"text": "When you go to the Azure ML Workspace, you can find the endpoints of the models you deployed in 7b. These endpoints will now be consumed by Postman to create predictions. An example payload can be found in the project/services/50_testEndpoint.py in the project. In this example, the income class of three persons is predicted."
},
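{
"code": null,
"e": null,
"s": null,
"text": "Instead of Postman, the endpoint can also be consumed with a few lines of Python; the URL, key and payload schema below are placeholders, the real schema follows from the score script in the repository:"
},
{
"code": "# Sketch: call the deployed scoring endpoint (all values are placeholders)
import requests

scoring_uri = 'http://<<ip of endpoint>>/api/v1/service/<<service name>>/score'
headers = {'Content-Type': 'application/json',
           'Authorization': 'Bearer <<service key>>'}
payload = {'data': [[39, 'State-gov', 77516, 'Bachelors', 13]]}   # assumed schema

resp = requests.post(scoring_uri, json=payload, headers=headers)
print(resp.json())   # predicted income class per person",
"e": null,
"s": null,
"text": null
},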
{
"code": null,
"e": 11695,
"s": 11621,
"text": "The prediction for the first person is that the income is higer than 50k,"
},
{
"code": null,
"e": 11755,
"s": 11695,
"text": "For the other two persons the prediction is lower than 50k."
},
{
"code": null,
"e": 11849,
"s": 11755,
"text": "In this tutorial, an end to end pipeline for a machine learning project was created. In this:"
},
{
"code": null,
"e": 11946,
"s": 11849,
"text": "Azure Databricks with Spark was used to explore the data and create the machine learning models."
},
{
"code": null,
"e": 12031,
"s": 11946,
"text": "Azure Machine Learning Service was used to keep track of the models and its metrics."
},
{
"code": null,
"e": 12123,
"s": 12031,
"text": "Azure Devops was used to build an image of the best model and to release it as an endpoint."
},
{
"code": null,
"e": 12323,
"s": 12123,
"text": "This way you can orchestrate and monitor the entire pipeline from idea to the moment that the model is brought into production. This enables you to answer to question: Why did the model predict this?"
}
] |
CycleGANS and Pix2Pix. Credits: Presenting abridged version of... | by Manish Chablani | Towards Data Science | Credits: Presenting abridged version of these blogs to explain the idea and concepts behind pix2pix and cycleGANs.
Christopher Hesse blog:
affinelayer.com
Olga Liakhovich blog:
www.microsoft.com
paper: https://phillipi.github.io/pix2pix/
pix2pix uses a conditional generative adversarial network (cGAN) to learn a mapping from an input image to an output image.
An example of a dataset would be that the input image is a black and white picture and the target image is the color version of the picture. The generator in this case is trying to learn how to colorize a black and white image. The discriminator is looking at the generator’s colorization attempts and trying to learn to tell the difference between the colorizations the generator provides and the true colorized target image provided in the dataset.
The structure of the generator is called an “encoder-decoder” and in pix2pix the encoder-decoder looks more or less like this:
The volumes are there to give you a sense of the shape of the tensor dimensions next to them. The input in this example is a 256x256 image with 3 color channels (red, green, and blue, all equal for a black and white image), and the output is the same.
The generator takes some input and tries to reduce it with a series of encoders (convolution + activation function) into a much smaller representation. The idea is that by compressing it this way we hopefully have a higher level representation of the data after the final encode layer. The decode layers do the opposite (deconvolution + activation function) and reverse the action of the encoder layers.
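As a minimal sketch (TensorFlow 1.x layers API, assumed here for consistency with the loss snippets further down), one encoder step halves the spatial resolution and one decoder step doubles it again:

# sketch of one encoder and one decoder step; sizes are illustrative
import tensorflow as tf

def encode(x, out_channels):
    # stride-2 convolution halves height/width, e.g. 256x256 -> 128x128
    conv = tf.layers.conv2d(x, out_channels, kernel_size=4, strides=2, padding="same")
    return tf.nn.leaky_relu(conv, alpha=0.2)

def decode(x, out_channels):
    # stride-2 transposed convolution doubles height/width again
    deconv = tf.layers.conv2d_transpose(x, out_channels, kernel_size=4, strides=2, padding="same")
    return tf.nn.relu(deconv)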
In order to improve the performance of the image-to-image transform in the paper, the authors used a “U-Net” instead of an encoder-decoder. This is the same thing, but with “skip connections” directly connecting encoder layers to decoder layers:
The skip connections give the network the option of bypassing the encoding/decoding part if it doesn’t have a use for it.
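Concretely, a skip connection is just a channel-wise concatenation of the matching encoder output with the decoder input (a sketch, NHWC layout assumed):

# sketch: U-Net skip connection, axis 3 is the channel axis in NHWC layout
decoder_input = tf.concat([previous_decoder_output, matching_encoder_output], axis=3)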
These diagrams are a slight simplification. For instance, the first and last layers of the network have no batch norm layer and a few layers in the middle have dropout units.
The Discriminator
The Discriminator has the job of taking two images, an input image and an unknown image (which will be either a target or output image from the generator), and deciding if the second image was produced by the generator or not.
The structure looks a lot like the encoder section of the generator, but works a little differently. The output is a 30x30 image where each pixel value (0 to 1) represents how believable the corresponding section of the unknown image is. In the pix2pix implementation, each pixel from this 30x30 image corresponds to the believability of a 70x70 patch of the input image (the patches overlap a lot since the input images are 256x256). The architecture is called a “PatchGAN”.
Training
To train this network, there are two steps: training the discriminator and training the generator.
To train the discriminator, first the generator generates an output image. The discriminator looks at the input/target pair and the input/output pair and produces its guess about how realistic they look. The weights of the discriminator are then adjusted based on the classification error of the input/output pair and the input/target pair.
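Roughly, this amounts to minimizing a loss that rewards predicting 1 for input/target pairs and 0 for input/output pairs (a sketch; EPS avoids log(0)):

# sketch of the discriminator loss; predict_real / predict_fake are the
# discriminator outputs for the input/target and input/output pairs
EPS = 1e-12
discrim_loss = tf.reduce_mean(-(tf.log(predict_real + EPS) + tf.log(1 - predict_fake + EPS)))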
The generator’s weights are then adjusted based on the output of the discriminator as well as the difference between the output and target image.
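Written out (again a sketch, with EPS as above), the generator objective combines fooling the discriminator with an L1 distance between output and target, weighted by a constant:

# sketch of the generator loss: GAN term plus weighted L1 term
gen_loss_GAN = tf.reduce_mean(-tf.log(predict_fake + EPS))
gen_loss_L1 = tf.reduce_mean(tf.abs(targets - outputs))
gen_loss = gen_loss_GAN + 100.0 * gen_loss_L1  # 100 is the L1 weight used in the paper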
Original CycleGAN paper
While PIX2PIX can produce truly magical results, the challenge is in training data. The two image spaces that you wanted to learn to translate between needed to be pre-formatted into a single X/Y image that held both tightly-correlated images. This could be time-consuming, infeasible, or even impossible based on what two image types you were trying to translate between (for instance, if you didn’t have one-to-one matches between the two image profiles). This is where the CycleGAN comes in.
The key idea behind CycleGANs is that they can build upon the power of the PIX2PIX architecture, but allow you to point the model at two discrete, unpaired collections of images. For example, one collection of images, Group X, would be full of sunny beach photos while Group Y would be a collection of overcast beach photos. The CycleGAN model can learn to translate the images between these two aesthetics without the need to merge tightly correlated matches together into a single X/Y training image.
The way CycleGANs are able to learn such great translations without having explicit X/Y training images involves introducing the idea of a full translation cycle to determine how good the entire translation system is, thus improving both generators at the same time.
This approach is the clever power that CycleGANs brings to image-to-image translations and how it enables better translations among non-paired image styles.
The original CycleGANs paper, “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks”, was published by Jun-Yan Zhu, et al.
The power of CycleGANs is in how they set up the loss function, and use the full cycle loss as an additional optimization target.
As a refresher: we’re dealing with 2 generators and 2 discriminators.
Let’s start with the generator’s loss functions, which consist of 2 parts.
Part 1: The generator is successful if fake (generated) images are so good that discriminator can not distinguish those from real images. In other words, the discriminator’s output for fake images should be as close to 1 as possible. In TensorFlow terms, the generator would like to minimize:
g_loss_G_disc = tf.reduce_mean((discY_fake - tf.ones_like(discY_fake)) ** 2)
g_loss_F_disc = tf.reduce_mean((discX_fake - tf.ones_like(discX_fake)) ** 2)
Note: the “**” symbol above is the power operator in Python.
Part 2: We need to capture cyclic loss: as we go from one generator back to the original space of images using another generator, the difference between the original image (where we started the cycle) and the cyclic image should be minimized.
g_loss_G_cycle = tf.reduce_mean(tf.abs(real_X - genF_back)) + tf.reduce_mean(tf.abs(real_Y - genG_back))
g_loss_F_cycle = tf.reduce_mean(tf.abs(real_X - genF_back)) + tf.reduce_mean(tf.abs(real_Y - genG_back))
Finally, the generator loss is the sum of these two terms:
g_loss_G = g_loss_G_disc + g_loss_G_cycle
Because cyclic loss is so important we want to multiply its effect. We used an L1_lambda constant for this multiplier (in the paper the value 10 was used).
Now the grand finale of the generator loss looks like:
g_loss_G = g_loss_G_disc + L1_lambda * g_loss_G_cycle
g_loss_F = g_loss_F_disc + L1_lambda * g_loss_F_cycle
Discriminator Loss
The Discriminator has 2 decisions to make:
Real images should be marked as real (the prediction should be as close to 1 as possible)
The discriminator should be able to recognize generated images and thus predict 0 for fake images.
DY_loss_real = tf.reduce_mean((DY - tf.ones_like(DY)) ** 2)
DY_loss_fake = tf.reduce_mean((DY_fake_sample - tf.zeros_like(DY_fake_sample)) ** 2)
DY_loss = (DY_loss_real + DY_loss_fake) / 2
DX_loss_real = tf.reduce_mean((DX - tf.ones_like(DX)) ** 2)
DX_loss_fake = tf.reduce_mean((DX_fake_sample - tf.zeros_like(DX_fake_sample)) ** 2)
DX_loss = (DX_loss_real + DX_loss_fake) / 2 | [
{
"code": null,
"e": 287,
"s": 172,
"text": "Credits: Presenting abridged version of these blogs to explain the idea and concepts behind pix2pix and cycleGANs."
},
{
"code": null,
"e": 311,
"s": 287,
"text": "Christopher Hesse blog:"
},
{
"code": null,
"e": 327,
"s": 311,
"text": "affinelayer.com"
},
{
"code": null,
"e": 349,
"s": 327,
"text": "Olga Liakhovich blog:"
},
{
"code": null,
"e": 367,
"s": 349,
"text": "www.microsoft.com"
},
{
"code": null,
"e": 410,
"s": 367,
"text": "paper: https://phillipi.github.io/pix2pix/"
},
{
"code": null,
"e": 534,
"s": 410,
"text": "pix2pix uses a conditional generative adversarial network (cGAN) to learn a mapping from an input image to an output image."
},
{
"code": null,
"e": 985,
"s": 534,
"text": "An example of a dataset would be that the input image is a black and white picture and the target image is the color version of the picture. The generator in this case is trying to learn how to colorize a black and white image. The discriminator is looking at the generator’s colorization attempts and trying to learn to tell the difference between the colorizations the generator provides and the true colorized target image provided in the dataset."
},
{
"code": null,
"e": 1112,
"s": 985,
"text": "The structure of the generator is called an “encoder-decoder” and in pix2pix the encoder-decoder looks more or less like this:"
},
{
"code": null,
"e": 1364,
"s": 1112,
"text": "The volumes are there to give you a sense of the shape of the tensor dimensions next to them. The input in this example is a 256x256 image with 3 color channels (red, green, and blue, all equal for a black and white image), and the output is the same."
},
{
"code": null,
"e": 1768,
"s": 1364,
"text": "The generator takes some input and tries to reduce it with a series of encoders (convolution + activation function) into a much smaller representation. The idea is that by compressing it this way we hopefully have a higher level representation of the data after the final encode layer. The decode layers do the opposite (deconvolution + activation function) and reverse the action of the encoder layers."
},
{
"code": null,
"e": 2014,
"s": 1768,
"text": "In order to improve the performance of the image-to-image transform in the paper, the authors used a “U-Net” instead of an encoder-decoder. This is the same thing, but with “skip connections” directly connecting encoder layers to decoder layers:"
},
{
"code": null,
"e": 2136,
"s": 2014,
"text": "The skip connections give the network the option of bypassing the encoding/decoding part if it doesn’t have a use for it."
},
{
"code": null,
"e": 2311,
"s": 2136,
"text": "These diagrams are a slight simplification. For instance, the first and last layers of the network have no batch norm layer and a few layers in the middle have dropout units."
},
{
"code": null,
"e": 2329,
"s": 2311,
"text": "The Discriminator"
},
{
"code": null,
"e": 2556,
"s": 2329,
"text": "The Discriminator has the job of taking two images, an input image and an unknown image (which will be either a target or output image from the generator), and deciding if the second image was produced by the generator or not."
},
{
"code": null,
"e": 3032,
"s": 2556,
"text": "The structure looks a lot like the encoder section of the generator, but works a little differently. The output is a 30x30 image where each pixel value (0 to 1) represents how believable the corresponding section of the unknown image is. In the pix2pix implementation, each pixel from this 30x30 image corresponds to the believability of a 70x70 patch of the input image (the patches overlap a lot since the input images are 256x256). The architecture is called a “PatchGAN”."
},
{
"code": null,
"e": 3041,
"s": 3032,
"text": "Training"
},
{
"code": null,
"e": 3140,
"s": 3041,
"text": "To train this network, there are two steps: training the discriminator and training the generator."
},
{
"code": null,
"e": 3481,
"s": 3140,
"text": "To train the discriminator, first the generator generates an output image. The discriminator looks at the input/target pair and the input/output pair and produces its guess about how realistic they look. The weights of the discriminator are then adjusted based on the classification error of the input/output pair and the input/target pair."
},
{
"code": null,
"e": 3627,
"s": 3481,
"text": "The generator’s weights are then adjusted based on the output of the discriminator as well as the difference between the output and target image."
},
{
"code": null,
"e": 3651,
"s": 3627,
"text": "Original CycleGAN paper"
},
{
"code": null,
"e": 4146,
"s": 3651,
"text": "While PIX2PIX can produce truly magical results, the challenge is in training data. The two image spaces that you wanted to learn to translate between needed to be pre-formatted into a single X/Y image that held both tightly-correlated images. This could be time-consuming, infeasible, or even impossible based on what two image types you were trying to translate between (for instance, if you didn’t have one-to-one matches between the two image profiles). This is where the CycleGAN comes in."
},
{
"code": null,
"e": 4649,
"s": 4146,
"text": "The key idea behind CycleGANs is that they can build upon the power of the PIX2PIX architecture, but allow you to point the model at two discrete, unpaired collections of images. For example, one collection of images, Group X, would be full of sunny beach photos while Group Y would be a collection of overcast beach photos. The CycleGAN model can learn to translate the images between these two aesthetics without the need to merge tightly correlated matches together into a single X/Y training image."
},
{
"code": null,
"e": 4916,
"s": 4649,
"text": "The way CycleGANs are able to learn such great translations without having explicit X/Y training images involves introducing the idea of a full translation cycle to determine how good the entire translation system is, thus improving both generators at the same time."
},
{
"code": null,
"e": 5073,
"s": 4916,
"text": "This approach is the clever power that CycleGANs brings to image-to-image translations and how it enables better translations among non-paired image styles."
},
{
"code": null,
"e": 5223,
"s": 5073,
"text": "The original CycleGANs paper, “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks”, was published by Jun-Yan Zhu, et al."
},
{
"code": null,
"e": 5353,
"s": 5223,
"text": "The power of CycleGANs is in how they set up the loss function, and use the full cycle loss as an additional optimization target."
},
{
"code": null,
"e": 5423,
"s": 5353,
"text": "As a refresher: we’re dealing with 2 generators and 2 discriminators."
},
{
"code": null,
"e": 5498,
"s": 5423,
"text": "Let’s start with the generator’s loss functions, which consist of 2 parts."
},
{
"code": null,
"e": 5791,
"s": 5498,
"text": "Part 1: The generator is successful if fake (generated) images are so good that discriminator can not distinguish those from real images. In other words, the discriminator’s output for fake images should be as close to 1 as possible. In TensorFlow terms, the generator would like to minimize:"
},
{
"code": null,
"e": 5944,
"s": 5791,
"text": "g_loss_G_disc = tf.reduce_mean((discY_fake — tf.ones_like(discY_fake)) ** 2)g_loss_F_dicr = tf.reduce_mean((discX_fake — tf.ones_like(discX_fake)) ** 2)"
},
{
"code": null,
"e": 6005,
"s": 5944,
"text": "Note: the “**” symbol above is the power operator in Python."
},
{
"code": null,
"e": 6248,
"s": 6005,
"text": "Part 2: We need to capture cyclic loss: as we go from one generator back to the original space of images using another generator, the difference between the original image (where we started the cycle) and the cyclic image should be minimized."
},
{
"code": null,
"e": 6457,
"s": 6248,
"text": "g_loss_G_cycle = tf.reduce_mean(tf.abs(real_X — genF_back)) + tf.reduce_mean(tf.abs(real_Y — genG_back))g_loss_F_cycle = tf.reduce_mean(tf.abs(real_X — genF_back)) + tf.reduce_mean(tf.abs(real_Y — genG_back))"
},
{
"code": null,
"e": 6516,
"s": 6457,
"text": "Finally, the generator loss is the sum of these two terms:"
},
{
"code": null,
"e": 6558,
"s": 6516,
"text": "g_loss_G = g_loss_G_disc + g_loss_G_cycle"
},
{
"code": null,
"e": 6714,
"s": 6558,
"text": "Because cyclic loss is so important we want to multiply its effect. We used an L1_lambda constant for this multiplier (in the paper the value 10 was used)."
},
{
"code": null,
"e": 6769,
"s": 6714,
"text": "Now the grand finale of the generator loss looks like:"
},
{
"code": null,
"e": 6823,
"s": 6769,
"text": "g_loss_G = g_loss_G_disc + L1_lambda * g_loss_G_cycle"
},
{
"code": null,
"e": 6877,
"s": 6823,
"text": "g_loss_F = g_loss_F_disc + L1_lambda * g_loss_F_cycle"
},
{
"code": null,
"e": 6896,
"s": 6877,
"text": "Discriminator Loss"
},
{
"code": null,
"e": 6939,
"s": 6896,
"text": "The Discriminator has 2 decisions to make:"
},
{
"code": null,
"e": 7127,
"s": 6939,
"text": "Real images should be marked as real (recommendation should be as close to 1 as possible)The discriminator should be able to recognize generated images and thus predict 0 for fake images."
},
{
"code": null,
"e": 7217,
"s": 7127,
"text": "Real images should be marked as real (recommendation should be as close to 1 as possible)"
},
{
"code": null,
"e": 7316,
"s": 7217,
"text": "The discriminator should be able to recognize generated images and thus predict 0 for fake images."
}
] |
Rail Fence Cipher - Encryption and Decryption - GeeksforGeeks | 22 Mar, 2022
Given a plain-text message and a numeric key, cipher/de-cipher the given text using the Rail Fence algorithm. The rail fence cipher (also called a zigzag cipher) is a form of transposition cipher. It derives its name from the way in which it is encoded. Examples:
Encryption
Input : "GeeksforGeeks "
Key = 3
Output : GsGsekfrek eoe
Decryption
Input : GsGsekfrek eoe
Key = 3
Output : "GeeksforGeeks "
Encryption
Input : "defend the east wall"
Key = 3
Output : dnhaweedtees alf tl
Decryption
Input : dnhaweedtees alf tl
Key = 3
Output : defend the east wall
Encryption
Input : "attack at once"
Key = 2
Output : atc toctaka ne
Decryption
Input : "atc toctaka ne"
Key = 2
Output : attack at once
Encryption
In a transposition cipher, the order of the alphabets is re-arranged to obtain the cipher-text.
In the rail fence cipher, the plain-text is written downwards and diagonally on successive rails of an imaginary fence.
When we reach the bottom rail, we traverse upwards moving diagonally, after reaching the top rail, the direction is changed again. Thus the alphabets of the message are written in a zig-zag manner.
After each alphabet has been written, the individual rows are combined to obtain the cipher-text.
For example, if the message is “GeeksforGeeks” and the number of rails = 3 then cipher is prepared as:
Hence, its encryption will be done row-wise, i.e. GSGSEKFREKEOE
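The zig-zag row pattern itself can be generated in a few lines; a minimal sketch, independent of the full implementation below:

# sketch: row index of each character in the rail fence zig-zag
def rail_pattern(n, key):
    row, step, rows = 0, 1, []
    for _ in range(n):
        rows.append(row)
        if row == 0:
            step = 1
        elif row == key - 1:
            step = -1
        row += step
    return rows

print(rail_pattern(13, 3))  # [0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1, 0]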
Decryption
As we’ve seen earlier, the number of columns in rail fence cipher remains equal to the length of plain-text message. And the key corresponds to the number of rails.
Hence, rail matrix can be constructed accordingly. Once we’ve got the matrix we can figure-out the spots where texts should be placed (using the same way of moving diagonally up and down alternatively ).
Then, we fill the cipher-text row wise. After filling it, we traverse the matrix in zig-zag manner to obtain the original text.
Implementation: Let cipher-text = “GsGsekfrek eoe”, and Key = 3
Number of columns in matrix = len(cipher-text) = 13
Number of rows = key = 3
Hence original matrix will be of 3*13 , now marking places with text as ‘*’ we get
* _ _ _ * _ _ _ * _ _ _ *
_ * _ * _ * _ * _ * _ *
_ _ * _ _ _ * _ _ _ * _
Below is a program to encrypt/decrypt the message using the above algorithm.
C++
Python3
// C++ program to illustrate Rail Fence Cipher
// Encryption and Decryption
#include <bits/stdc++.h>
using namespace std;

// function to encrypt a message
string encryptRailFence(string text, int key)
{
    // create the matrix to cipher plain text
    // key = rows , length(text) = columns
    char rail[key][(text.length())];

    // filling the rail matrix to distinguish filled
    // spaces from blank ones
    for (int i = 0; i < key; i++)
        for (int j = 0; j < text.length(); j++)
            rail[i][j] = '\n';

    // to find the direction
    bool dir_down = false;
    int row = 0, col = 0;

    for (int i = 0; i < text.length(); i++) {
        // check the direction of flow
        // reverse the direction if we've just
        // filled the top or bottom rail
        if (row == 0 || row == key - 1)
            dir_down = !dir_down;

        // fill the corresponding alphabet
        rail[row][col++] = text[i];

        // find the next row using direction flag
        dir_down ? row++ : row--;
    }

    // now we can construct the cipher using the rail matrix
    string result;
    for (int i = 0; i < key; i++)
        for (int j = 0; j < text.length(); j++)
            if (rail[i][j] != '\n')
                result.push_back(rail[i][j]);

    return result;
}

// This function receives cipher-text and key
// and returns the original text after decryption
string decryptRailFence(string cipher, int key)
{
    // create the matrix to cipher plain text
    // key = rows , length(text) = columns
    char rail[key][cipher.length()];

    // filling the rail matrix to distinguish filled
    // spaces from blank ones
    for (int i = 0; i < key; i++)
        for (int j = 0; j < cipher.length(); j++)
            rail[i][j] = '\n';

    // to find the direction
    bool dir_down;
    int row = 0, col = 0;

    // mark the places with '*'
    for (int i = 0; i < cipher.length(); i++) {
        // check the direction of flow
        if (row == 0)
            dir_down = true;
        if (row == key - 1)
            dir_down = false;

        // place the marker
        rail[row][col++] = '*';

        // find the next row using direction flag
        dir_down ? row++ : row--;
    }

    // now we can fill the rail matrix
    int index = 0;
    for (int i = 0; i < key; i++)
        for (int j = 0; j < cipher.length(); j++)
            if (rail[i][j] == '*' && index < cipher.length())
                rail[i][j] = cipher[index++];

    // now read the matrix in zig-zag manner to construct
    // the resultant text
    string result;
    row = 0, col = 0;
    for (int i = 0; i < cipher.length(); i++) {
        // check the direction of flow
        if (row == 0)
            dir_down = true;
        if (row == key - 1)
            dir_down = false;

        // place the marker
        if (rail[row][col] != '*')
            result.push_back(rail[row][col++]);

        // find the next row using direction flag
        dir_down ? row++ : row--;
    }
    return result;
}

// driver program to check the above functions
int main()
{
    cout << encryptRailFence("attack at once", 2) << endl;
    cout << encryptRailFence("GeeksforGeeks ", 3) << endl;
    cout << encryptRailFence("defend the east wall", 3) << endl;

    // Now decryption of the same cipher-text
    cout << decryptRailFence("GsGsekfrek eoe", 3) << endl;
    cout << decryptRailFence("atc toctaka ne", 2) << endl;
    cout << decryptRailFence("dnhaweedtees alf tl", 3) << endl;
    return 0;
}
# Python3 program to illustrate
# Rail Fence Cipher Encryption
# and Decryption

# function to encrypt a message
def encryptRailFence(text, key):

    # create the matrix to cipher
    # plain text key = rows ,
    # length(text) = columns
    # filling the rail matrix
    # to distinguish filled
    # spaces from blank ones
    rail = [['\n' for i in range(len(text))]
            for j in range(key)]

    # to find the direction
    dir_down = False
    row, col = 0, 0

    for i in range(len(text)):

        # check the direction of flow
        # reverse the direction if we've just
        # filled the top or bottom rail
        if (row == 0) or (row == key - 1):
            dir_down = not dir_down

        # fill the corresponding alphabet
        rail[row][col] = text[i]
        col += 1

        # find the next row using
        # direction flag
        if dir_down:
            row += 1
        else:
            row -= 1

    # now we can construct the cipher
    # using the rail matrix
    result = []
    for i in range(key):
        for j in range(len(text)):
            if rail[i][j] != '\n':
                result.append(rail[i][j])
    return("" . join(result))

# This function receives cipher-text
# and key and returns the original
# text after decryption
def decryptRailFence(cipher, key):

    # create the matrix to cipher
    # plain text key = rows ,
    # length(text) = columns
    # filling the rail matrix to
    # distinguish filled spaces
    # from blank ones
    rail = [['\n' for i in range(len(cipher))]
            for j in range(key)]

    # to find the direction
    dir_down = None
    row, col = 0, 0

    # mark the places with '*'
    for i in range(len(cipher)):
        if row == 0:
            dir_down = True
        if row == key - 1:
            dir_down = False

        # place the marker
        rail[row][col] = '*'
        col += 1

        # find the next row
        # using direction flag
        if dir_down:
            row += 1
        else:
            row -= 1

    # now we can fill the rail matrix
    index = 0
    for i in range(key):
        for j in range(len(cipher)):
            if ((rail[i][j] == '*') and
                (index < len(cipher))):
                rail[i][j] = cipher[index]
                index += 1

    # now read the matrix in
    # zig-zag manner to construct
    # the resultant text
    result = []
    row, col = 0, 0
    for i in range(len(cipher)):

        # check the direction of flow
        if row == 0:
            dir_down = True
        if row == key - 1:
            dir_down = False

        # place the marker
        if (rail[row][col] != '*'):
            result.append(rail[row][col])
            col += 1

        # find the next row using
        # direction flag
        if dir_down:
            row += 1
        else:
            row -= 1
    return("".join(result))

# Driver code
if __name__ == "__main__":
    print(encryptRailFence("attack at once", 2))
    print(encryptRailFence("GeeksforGeeks ", 3))
    print(encryptRailFence("defend the east wall", 3))

    # Now decryption of the
    # same cipher-text
    print(decryptRailFence("GsGsekfrek eoe", 3))
    print(decryptRailFence("atc toctaka ne", 2))
    print(decryptRailFence("dnhaweedtees alf tl", 3))

# This code is contributed
# by Pratik Somwanshi
Output:
atc toctaka ne
GsGsekfrek eoe
dnhaweedtees alf tl
GeeksforGeeks
attack at once
delendfthe east wal
Time Complexity: O(row * col). Auxiliary Space: O(row * col). References: https://en.wikipedia.org/wiki/Rail_fence_cipher. This article is contributed by Ashutosh Kumar. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
PratikSomwanshi
nidhi_biet
pankajsharmagfg
bhabeshmali
makhija726
cryptography
Algorithms
cryptography
Algorithms
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
SDE SHEET - A Complete Guide for SDE Preparation
DSA Sheet by Love Babbar
Introduction to Algorithms
Difference between Informed and Uninformed Search in AI
Quick Sort vs Merge Sort
Cyclomatic Complexity
Generate all permutation of a set in Python
Converting Roman Numerals to Decimal lying between 1 to 3999
SCAN (Elevator) Disk Scheduling Algorithms
Difference Between Symmetric and Asymmetric Key Encryption | [
{
"code": null,
"e": 24099,
"s": 24071,
"text": "\n22 Mar, 2022"
},
{
"code": null,
"e": 24361,
"s": 24099,
"text": "Given a plain-text message and a numeric key, cipher/de-cipher the given text using Rail Fence algorithm. The rail fence cipher (also called a zigzag cipher) is a form of transposition cipher. It derives its name from the way in which it is encoded. Examples: "
},
{
"code": null,
"e": 24798,
"s": 24361,
"text": "Encryption\nInput : \"GeeksforGeeks \"\nKey = 3\nOutput : GsGsekfrek eoe\nDecryption\nInput : GsGsekfrek eoe\nKey = 3\nOutput : \"GeeksforGeeks \"\n\nEncryption\nInput : \"defend the east wall\"\nKey = 3\nOutput : dnhaweedtees alf tl\nDecryption\nInput : dnhaweedtees alf tl\nKey = 3\nOutput : defend the east wall\n\nEncryption\nInput : \"attack at once\"\nKey = 2 \nOutput : atc toctaka ne \nDecryption\nInput : \"atc toctaka ne\"\nKey = 2\nOutput : attack at once"
},
{
"code": null,
"e": 24813,
"s": 24802,
"text": "Encryption"
},
{
"code": null,
"e": 24911,
"s": 24813,
"text": "In a transposition cipher, the order of the alphabets is re-arranged to obtain the cipher-text. "
},
{
"code": null,
"e": 25031,
"s": 24911,
"text": "In the rail fence cipher, the plain-text is written downwards and diagonally on successive rails of an imaginary fence."
},
{
"code": null,
"e": 25229,
"s": 25031,
"text": "When we reach the bottom rail, we traverse upwards moving diagonally, after reaching the top rail, the direction is changed again. Thus the alphabets of the message are written in a zig-zag manner."
},
{
"code": null,
"e": 25327,
"s": 25229,
"text": "After each alphabet has been written, the individual rows are combined to obtain the cipher-text."
},
{
"code": null,
"e": 25432,
"s": 25327,
"text": "For example, if the message is “GeeksforGeeks” and the number of rails = 3 then cipher is prepared as: "
},
{
"code": null,
"e": 25491,
"s": 25432,
"text": ".’.Its encryption will be done row wise i.e. GSGSEKFREKEOE"
},
{
"code": null,
"e": 25504,
"s": 25493,
"text": "Decryption"
},
{
"code": null,
"e": 25670,
"s": 25504,
"text": "As we’ve seen earlier, the number of columns in rail fence cipher remains equal to the length of plain-text message. And the key corresponds to the number of rails. "
},
{
"code": null,
"e": 25874,
"s": 25670,
"text": "Hence, rail matrix can be constructed accordingly. Once we’ve got the matrix we can figure-out the spots where texts should be placed (using the same way of moving diagonally up and down alternatively )."
},
{
"code": null,
"e": 26002,
"s": 25874,
"text": "Then, we fill the cipher-text row wise. After filling it, we traverse the matrix in zig-zag manner to obtain the original text."
},
{
"code": null,
"e": 26069,
"s": 26002,
"text": "Implementation: Let cipher-text = “GsGsekfrek eoe” , and Key = 3 "
},
{
"code": null,
"e": 26121,
"s": 26069,
"text": "Number of columns in matrix = len(cipher-text) = 13"
},
{
"code": null,
"e": 26146,
"s": 26121,
"text": "Number of rows = key = 3"
},
{
"code": null,
"e": 26231,
"s": 26146,
"text": "Hence original matrix will be of 3*13 , now marking places with text as ‘*’ we get "
},
{
"code": null,
"e": 26308,
"s": 26231,
"text": "* _ _ _ * _ _ _ * _ _ _ *\n_ * _ * _ * _ * _ * _ * \n_ _ * _ _ _ * _ _ _ * _ "
},
{
"code": null,
"e": 26387,
"s": 26308,
"text": "Below is a program to encrypt/decrypt the message using the above algorithm. "
},
{
"code": null,
"e": 26391,
"s": 26387,
"text": "C++"
},
{
"code": null,
"e": 26399,
"s": 26391,
"text": "Python3"
},
{
"code": "// C++ program to illustrate Rail Fence Cipher// Encryption and Decryption#include <bits/stdc++.h>using namespace std; // function to encrypt a messagestring encryptRailFence(string text, int key){ // create the matrix to cipher plain text // key = rows , length(text) = columns char rail[key][(text.length())]; // filling the rail matrix to distinguish filled // spaces from blank ones for (int i=0; i < key; i++) for (int j = 0; j < text.length(); j++) rail[i][j] = '\\n'; // to find the direction bool dir_down = false; int row = 0, col = 0; for (int i=0; i < text.length(); i++) { // check the direction of flow // reverse the direction if we've just // filled the top or bottom rail if (row == 0 || row == key-1) dir_down = !dir_down; // fill the corresponding alphabet rail[row][col++] = text[i]; // find the next row using direction flag dir_down?row++ : row--; } //now we can construct the cipher using the rail matrix string result; for (int i=0; i < key; i++) for (int j=0; j < text.length(); j++) if (rail[i][j]!='\\n') result.push_back(rail[i][j]); return result;} // This function receives cipher-text and key// and returns the original text after decryptionstring decryptRailFence(string cipher, int key){ // create the matrix to cipher plain text // key = rows , length(text) = columns char rail[key][cipher.length()]; // filling the rail matrix to distinguish filled // spaces from blank ones for (int i=0; i < key; i++) for (int j=0; j < cipher.length(); j++) rail[i][j] = '\\n'; // to find the direction bool dir_down; int row = 0, col = 0; // mark the places with '*' for (int i=0; i < cipher.length(); i++) { // check the direction of flow if (row == 0) dir_down = true; if (row == key-1) dir_down = false; // place the marker rail[row][col++] = '*'; // find the next row using direction flag dir_down?row++ : row--; } // now we can construct the fill the rail matrix int index = 0; for (int i=0; i<key; i++) for (int j=0; j<cipher.length(); j++) if (rail[i][j] == '*' && index<cipher.length()) rail[i][j] = cipher[index++]; // now read the matrix in zig-zag manner to construct // the resultant text string result; row = 0, col = 0; for (int i=0; i< cipher.length(); i++) { // check the direction of flow if (row == 0) dir_down = true; if (row == key-1) dir_down = false; // place the marker if (rail[row][col] != '*') result.push_back(rail[row][col++]); // find the next row using direction flag dir_down?row++: row--; } return result;} //driver program to check the above functionsint main(){ cout << encryptRailFence(\"attack at once\", 2) << endl; cout << encryptRailFence(\"GeeksforGeeks \", 3) << endl; cout << encryptRailFence(\"defend the east wall\", 3) << endl; //Now decryption of the same cipher-text cout << decryptRailFence(\"GsGsekfrek eoe\",3) << endl; cout << decryptRailFence(\"atc toctaka ne\",2) << endl; cout << decryptRailFence(\"dnhaweedtees alf tl\",3) << endl; return 0;}",
"e": 29789,
"s": 26399,
"text": null
},
{
"code": "# Python3 program to illustrate# Rail Fence Cipher Encryption# and Decryption # function to encrypt a messagedef encryptRailFence(text, key): # create the matrix to cipher # plain text key = rows , # length(text) = columns # filling the rail matrix # to distinguish filled # spaces from blank ones rail = [['\\n' for i in range(len(text))] for j in range(key)] # to find the direction dir_down = False row, col = 0, 0 for i in range(len(text)): # check the direction of flow # reverse the direction if we've just # filled the top or bottom rail if (row == 0) or (row == key - 1): dir_down = not dir_down # fill the corresponding alphabet rail[row][col] = text[i] col += 1 # find the next row using # direction flag if dir_down: row += 1 else: row -= 1 # now we can construct the cipher # using the rail matrix result = [] for i in range(key): for j in range(len(text)): if rail[i][j] != '\\n': result.append(rail[i][j]) return(\"\" . join(result)) # This function receives cipher-text# and key and returns the original# text after decryptiondef decryptRailFence(cipher, key): # create the matrix to cipher # plain text key = rows , # length(text) = columns # filling the rail matrix to # distinguish filled spaces # from blank ones rail = [['\\n' for i in range(len(cipher))] for j in range(key)] # to find the direction dir_down = None row, col = 0, 0 # mark the places with '*' for i in range(len(cipher)): if row == 0: dir_down = True if row == key - 1: dir_down = False # place the marker rail[row][col] = '*' col += 1 # find the next row # using direction flag if dir_down: row += 1 else: row -= 1 # now we can construct the # fill the rail matrix index = 0 for i in range(key): for j in range(len(cipher)): if ((rail[i][j] == '*') and (index < len(cipher))): rail[i][j] = cipher[index] index += 1 # now read the matrix in # zig-zag manner to construct # the resultant text result = [] row, col = 0, 0 for i in range(len(cipher)): # check the direction of flow if row == 0: dir_down = True if row == key-1: dir_down = False # place the marker if (rail[row][col] != '*'): result.append(rail[row][col]) col += 1 # find the next row using # direction flag if dir_down: row += 1 else: row -= 1 return(\"\".join(result)) # Driver codeif __name__ == \"__main__\": print(encryptRailFence(\"attack at once\", 2)) print(encryptRailFence(\"GeeksforGeeks \", 3)) print(encryptRailFence(\"defend the east wall\", 3)) # Now decryption of the # same cipher-text print(decryptRailFence(\"GsGsekfrek eoe\", 3)) print(decryptRailFence(\"atc toctaka ne\", 2)) print(decryptRailFence(\"dnhaweedtees alf tl\", 3)) # This code is contributed# by Pratik Somwanshi",
"e": 33164,
"s": 29789,
"text": null
},
{
"code": null,
"e": 33174,
"s": 33164,
"text": "Output: "
},
{
"code": null,
"e": 33275,
"s": 33174,
"text": "atc toctaka ne\nGsGsekfrek eoe\ndnhaweedtees alf tl\nGeeksforGeeks \nattack at once\ndelendfthe east wal"
},
{
"code": null,
"e": 33815,
"s": 33275,
"text": "Time Complexity: O(row * col)Auxiliary Space: O(row * col) References: https://en.wikipedia.org/wiki/Rail_fence_cipherThis article is contributed by Ashutosh Kumar If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 33831,
"s": 33815,
"text": "PratikSomwanshi"
},
{
"code": null,
"e": 33842,
"s": 33831,
"text": "nidhi_biet"
},
{
"code": null,
"e": 33858,
"s": 33842,
"text": "pankajsharmagfg"
},
{
"code": null,
"e": 33870,
"s": 33858,
"text": "bhabeshmali"
},
{
"code": null,
"e": 33881,
"s": 33870,
"text": "makhija726"
},
{
"code": null,
"e": 33894,
"s": 33881,
"text": "cryptography"
},
{
"code": null,
"e": 33905,
"s": 33894,
"text": "Algorithms"
},
{
"code": null,
"e": 33918,
"s": 33905,
"text": "cryptography"
},
{
"code": null,
"e": 33929,
"s": 33918,
"text": "Algorithms"
},
{
"code": null,
"e": 34027,
"s": 33929,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 34036,
"s": 34027,
"text": "Comments"
},
{
"code": null,
"e": 34049,
"s": 34036,
"text": "Old Comments"
},
{
"code": null,
"e": 34098,
"s": 34049,
"text": "SDE SHEET - A Complete Guide for SDE Preparation"
},
{
"code": null,
"e": 34123,
"s": 34098,
"text": "DSA Sheet by Love Babbar"
},
{
"code": null,
"e": 34150,
"s": 34123,
"text": "Introduction to Algorithms"
},
{
"code": null,
"e": 34206,
"s": 34150,
"text": "Difference between Informed and Uninformed Search in AI"
},
{
"code": null,
"e": 34231,
"s": 34206,
"text": "Quick Sort vs Merge Sort"
},
{
"code": null,
"e": 34253,
"s": 34231,
"text": "Cyclomatic Complexity"
},
{
"code": null,
"e": 34297,
"s": 34253,
"text": "Generate all permutation of a set in Python"
},
{
"code": null,
"e": 34358,
"s": 34297,
"text": "Converting Roman Numerals to Decimal lying between 1 to 3999"
},
{
"code": null,
"e": 34401,
"s": 34358,
"text": "SCAN (Elevator) Disk Scheduling Algorithms"
}
] |
Minimum possible sum of array elements after performing the given operation - GeeksforGeeks | 15 Nov, 2021
Given an array arr[] of positive integers and an integer x, the task is to minimize the sum of elements of the array after performing the given operation at most once. In a single operation, any element from the array can be divided by x (if it is divisible by x) and at the same time, any other element from the array must be multiplied by x. Examples:
Input: arr[] = {1, 2, 3, 4, 5}, x = 2 Output: 14 Multiply 1 by x i.e. 1 * 2 = 2 Divide 4 by x i.e. 4 / 2 = 2 And the updated sum will be 2 + 2 + 3 + 2 + 5 = 14Input: arr[] = {5, 5, 5, 5, 6}, x = 3 Output: 26
Approach: For an optimal solution, x must be multiplied by the smallest element from the array and only the largest element divisible by x must be divided by it. Let sumAfterOperation be the sum of the array elements calculated after performing the operation and sum be the sum of all the elements of the original array; then the minimized sum will be min(sum, sumAfterOperation). Below is the implementation of the above approach:
C++
Java
Python3
C#
PHP
Javascript
// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;
#define ll long long int

// Function to return the minimized sum
ll minSum(int arr[], int n, int x)
{
    ll sum = 0;

    // To store the largest element
    // from the array which is
    // divisible by x
    int largestDivisible = -1, minimum = arr[0];
    for (int i = 0; i < n; i++) {

        // Sum of array elements before
        // performing any operation
        sum += arr[i];

        // If current element is divisible by x
        // and it is maximum so far
        if (arr[i] % x == 0 && largestDivisible < arr[i])
            largestDivisible = arr[i];

        // Update the minimum element
        if (arr[i] < minimum)
            minimum = arr[i];
    }

    // If no element can be reduced then there's no point
    // in performing the operation as we will end up
    // increasing the sum when an element is multiplied by x
    if (largestDivisible == -1)
        return sum;

    // Subtract the chosen elements from the sum
    // and then add their updated values
    ll sumAfterOperation = sum - minimum - largestDivisible
                           + (x * minimum)
                           + (largestDivisible / x);

    // Return the minimized sum
    return min(sum, sumAfterOperation);
}

// Driver code
int main()
{
    int arr[] = { 5, 5, 5, 5, 6 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 3;

    cout << minSum(arr, n, x);

    return 0;
}
// Java implementation of the approach
class GFG {

    // Function to return the minimized sum
    static int minSum(int arr[], int n, int x)
    {
        int sum = 0;

        // To store the largest element
        // from the array which is
        // divisible by x
        int largestDivisible = -1, minimum = arr[0];
        for (int i = 0; i < n; i++) {

            // Sum of array elements before
            // performing any operation
            sum += arr[i];

            // If current element is divisible
            // by x and it is maximum so far
            if (arr[i] % x == 0 && largestDivisible < arr[i])
                largestDivisible = arr[i];

            // Update the minimum element
            if (arr[i] < minimum)
                minimum = arr[i];
        }

        // If no element can be reduced then
        // there's no point in performing the
        // operation as we will end up increasing
        // the sum when an element is multiplied by x
        if (largestDivisible == -1)
            return sum;

        // Subtract the chosen elements from the
        // sum and then add their updated values
        int sumAfterOperation = sum - minimum - largestDivisible
                                + (x * minimum)
                                + (largestDivisible / x);

        // Return the minimized sum
        return Math.min(sum, sumAfterOperation);
    }

    // Driver code
    public static void main(String[] args)
    {
        int arr[] = { 5, 5, 5, 5, 6 };
        int n = arr.length;
        int x = 3;

        System.out.println(minSum(arr, n, x));
    }
}

// This code is contributed
// by Code_Mech
# Python3 implementation of the approach

# Function to return the minimized sum
def minSum(arr, n, x):

    Sum = 0

    # To store the largest element
    # from the array which is
    # divisible by x
    largestDivisible, minimum = -1, arr[0]
    for i in range(0, n):

        # Sum of array elements before
        # performing any operation
        Sum += arr[i]

        # If current element is divisible by x
        # and it is maximum so far
        if (arr[i] % x == 0 and
            largestDivisible < arr[i]):
            largestDivisible = arr[i]

        # Update the minimum element
        if arr[i] < minimum:
            minimum = arr[i]

    # If no element can be reduced then there's
    # no point in performing the operation as
    # we will end up increasing the sum when an
    # element is multiplied by x
    if largestDivisible == -1:
        return Sum

    # Subtract the chosen elements from the
    # sum and then add their updated values
    sumAfterOperation = (Sum - minimum - largestDivisible +
                         (x * minimum) + (largestDivisible // x))

    # Return the minimized sum
    return min(Sum, sumAfterOperation)

# Driver code
if __name__ == "__main__":

    arr = [5, 5, 5, 5, 6]
    n = len(arr)
    x = 3

    print(minSum(arr, n, x))

# This code is contributed by Rituraj Jain
// C# implementation of the approach
using System;

class GFG {

    // Function to return the minimized sum
    static int minSum(int[] arr, int n, int x)
    {
        int sum = 0;

        // To store the largest element
        // from the array which is
        // divisible by x
        int largestDivisible = -1, minimum = arr[0];
        for (int i = 0; i < n; i++) {

            // Sum of array elements before
            // performing any operation
            sum += arr[i];

            // If current element is divisible
            // by x and it is maximum so far
            if (arr[i] % x == 0 && largestDivisible < arr[i])
                largestDivisible = arr[i];

            // Update the minimum element
            if (arr[i] < minimum)
                minimum = arr[i];
        }

        // If no element can be reduced then
        // there's no point in performing the
        // operation as we will end up increasing
        // the sum when an element is multiplied by x
        if (largestDivisible == -1)
            return sum;

        // Subtract the chosen elements from the
        // sum and then add their updated values
        int sumAfterOperation = sum - minimum - largestDivisible
                                + (x * minimum)
                                + (largestDivisible / x);

        // Return the minimized sum
        return Math.Min(sum, sumAfterOperation);
    }

    // Driver code
    public static void Main()
    {
        int[] arr = { 5, 5, 5, 5, 6 };
        int n = arr.Length;
        int x = 3;

        Console.WriteLine(minSum(arr, n, x));
    }
}

// This code is contributed
// by Code_Mech
<?php
// PHP implementation of the approach

// Function to return the minimized sum
function minSum($arr, $n, $x)
{
    $sum = 0;

    // To store the largest element
    // from the array which is
    // divisible by x
    $largestDivisible = -1;
    $minimum = $arr[0];
    for ($i = 0; $i < $n; $i++)
    {
        // Sum of array elements before
        // performing any operation
        $sum += $arr[$i];

        // If current element is divisible
        // by x and it is maximum so far
        if ($arr[$i] % $x == 0 &&
            $largestDivisible < $arr[$i])
            $largestDivisible = $arr[$i];

        // Update the minimum element
        if ($arr[$i] < $minimum)
            $minimum = $arr[$i];
    }

    // If no element can be reduced then
    // there's no point in performing the
    // operation as we will end up increasing
    // the sum when an element is multiplied by x
    if ($largestDivisible == -1)
        return $sum;

    // Subtract the chosen elements from the
    // sum and then add their updated values
    $sumAfterOperation = $sum - $minimum - $largestDivisible +
                         ($x * $minimum) +
                         ($largestDivisible / $x);

    // Return the minimized sum
    return min($sum, $sumAfterOperation);
}

// Driver code
$arr = array( 5, 5, 5, 5, 6 );
$n = sizeof($arr);
$x = 3;

print(minSum($arr, $n, $x));

// This code is contributed by Ryuga
?>
<script>
// javascript implementation of the approach

// Function to return the minimized sum
function minSum(arr, n, x)
{
    var sum = 0;

    // To store the largest element
    // from the array which is
    // divisible by x
    var largestDivisible = -1, minimum = arr[0];
    for (var i = 0; i < n; i++)
    {
        // Sum of array elements before
        // performing any operation
        sum += arr[i];

        // If current element is divisible
        // by x and it is maximum so far
        if (arr[i] % x == 0 && largestDivisible < arr[i])
            largestDivisible = arr[i];

        // Update the minimum element
        if (arr[i] < minimum)
            minimum = arr[i];
    }

    // If no element can be reduced then
    // there's no point in performing the
    // operation as we will end up increasing
    // the sum when an element is multiplied by x
    if (largestDivisible == -1)
        return sum;

    // Subtract the chosen elements from the
    // sum and then add their updated values
    var sumAfterOperation = sum - minimum - largestDivisible
                            + (x * minimum)
                            + (largestDivisible / x);

    // Return the minimized sum
    return Math.min(sum, sumAfterOperation);
}

// Driver code
var arr = [ 5, 5, 5, 5, 6 ];
var n = arr.length;
var x = 3;
document.write(minSum(arr, n, x));

// This code contributed by aashish1995
</script>
26
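As a quick check of the formula on this example (a sketch):

# sketch: verifying min(sum, sumAfterOperation) for arr = [5, 5, 5, 5, 6], x = 3
arr, x = [5, 5, 5, 5, 6], 3
s = sum(arr)                                          # 26
minimum = min(arr)                                    # 5
largestDivisible = max(a for a in arr if a % x == 0)  # 6
after = s - minimum - largestDivisible + x * minimum + largestDivisible // x  # 32
print(min(s, after))                                  # 26 -> the operation does not help here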
rituraj_jain
ankthon
Code_Mech
aashish1995
simranarora5sos
Constructive Algorithms
divisibility
Arrays
Mathematical
Arrays
Mathematical
| [
{
"code": null,
"e": 25205,
"s": 25177,
"text": "\n15 Nov, 2021"
},
{
"code": null,
"e": 25560,
"s": 25205,
"text": "Given an array arr[] of positive integers and an integer x, the task is to minimize the sum of elements of the array after performing the given operation at most once. In a single operation, any element from the array can be divided by x (if it is divisible by x) and at the same time, any other element from the array must be multiplied by x.Examples: "
},
{
"code": null,
"e": 25770,
"s": 25560,
"text": "Input: arr[] = {1, 2, 3, 4, 5}, x = 2 Output: 14 Multiply 1 by x i.e. 1 * 2 = 2 Divide 4 by x i.e. 4 / 2 = 2 And the updated sum will be 2 + 2 + 3 + 2 + 5 = 14Input: arr[] = {5, 5, 5, 5, 6}, x = 3 Output: 26 "
},
{
"code": null,
"e": 26205,
"s": 25772,
"text": "Approach: For an optimal solution, x must be multiplied with the smallest element from the array and only the largest element divisible by x must be divided by it. Let sumAfterOperation be the sum of the array elements calculated after performing the operation and sum be the sum of all the elements of the original array then the minimized sum will be min(sum, sumAfterOperation).Below is the implementation of the above approach: "
},
{
"code": null,
"e": 26209,
"s": 26205,
"text": "C++"
},
{
"code": null,
"e": 26214,
"s": 26209,
"text": "Java"
},
{
"code": null,
"e": 26222,
"s": 26214,
"text": "Python3"
},
{
"code": null,
"e": 26225,
"s": 26222,
"text": "C#"
},
{
"code": null,
"e": 26229,
"s": 26225,
"text": "PHP"
},
{
"code": null,
"e": 26240,
"s": 26229,
"text": "Javascript"
},
{
"code": "// C++ implementation of the approach#include <bits/stdc++.h>using namespace std;#define ll long long int // Function to return the minimized sumll minSum(int arr[], int n, int x){ ll sum = 0; // To store the largest element // from the array which is // divisible by x int largestDivisible = -1, minimum = arr[0]; for (int i = 0; i < n; i++) { // Sum of array elements before // performing any operation sum += arr[i]; // If current element is divisible by x // and it is maximum so far if (arr[i] % x == 0 && largestDivisible < arr[i]) largestDivisible = arr[i]; // Update the minimum element if (arr[i] < minimum) minimum = arr[i]; } // If no element can be reduced then there's no point // in performing the operation as we will end up // increasing the sum when an element is multiplied by x if (largestDivisible == -1) return sum; // Subtract the chosen elements from the sum // and then add their updated values ll sumAfterOperation = sum - minimum - largestDivisible + (x * minimum) + (largestDivisible / x); // Return the minimized sum return min(sum, sumAfterOperation);} // Driver codeint main(){ int arr[] = { 5, 5, 5, 5, 6 }; int n = sizeof(arr) / sizeof(arr[0]); int x = 3; cout << minSum(arr, n, x); return 0;}",
"e": 27648,
"s": 26240,
"text": null
},
{
"code": "// Java implementation of the approachclass GFG{ // Function to return the minimized sumstatic int minSum(int arr[], int n, int x){ int sum = 0; // To store the largest element // from the array which is // divisible by x int largestDivisible = -1, minimum = arr[0]; for (int i = 0; i < n; i++) { // Sum of array elements before // performing any operation sum += arr[i]; // If current element is divisible // by x and it is maximum so far if (arr[i] % x == 0 && largestDivisible < arr[i]) largestDivisible = arr[i]; // Update the minimum element if (arr[i] < minimum) minimum = arr[i]; } // If no element can be reduced then // there's no point in performing the // operation as we will end up increasing // the sum when an element is multiplied by x if (largestDivisible == -1) return sum; // Subtract the chosen elements from the // sum and then add their updated values int sumAfterOperation = sum - minimum - largestDivisible + (x * minimum) + (largestDivisible / x); // Return the minimized sum return Math.min(sum, sumAfterOperation);} // Driver codepublic static void main(String[] args){ int arr[] = { 5, 5, 5, 5, 6 }; int n =arr.length; int x = 3; System.out.println(minSum(arr, n, x));}} // This code is contributed// by Code_Mech",
"e": 29097,
"s": 27648,
"text": null
},
{
"code": "# Python3 implementation of the approach # Function to return the minimized sumdef minSum(arr, n, x): Sum = 0 # To store the largest element # from the array which is # divisible by x largestDivisible, minimum = -1, arr[0] for i in range(0, n): # Sum of array elements before # performing any operation Sum += arr[i] # If current element is divisible by x # and it is maximum so far if(arr[i] % x == 0 and largestDivisible < arr[i]): largestDivisible = arr[i] # Update the minimum element if arr[i] < minimum: minimum = arr[i] # If no element can be reduced then there's # no point in performing the operation as # we will end up increasing the sum when an # element is multiplied by x if largestDivisible == -1: return Sum # Subtract the chosen elements from the # sum and then add their updated values sumAfterOperation = (Sum - minimum - largestDivisible + (x * minimum) + (largestDivisible // x)) # Return the minimized sum return min(Sum, sumAfterOperation) # Driver codeif __name__ == \"__main__\": arr = [5, 5, 5, 5, 6] n = len(arr) x = 3 print(minSum(arr, n, x)) # This code is contributed by Rituraj Jain",
"e": 30396,
"s": 29097,
"text": null
},
{
"code": "// C# implementation of the approachusing System; class GFG{ // Function to return the minimized sumstatic int minSum(int[] arr, int n, int x){ int sum = 0; // To store the largest element // from the array which is // divisible by x int largestDivisible = -1, minimum = arr[0]; for (int i = 0; i < n; i++) { // Sum of array elements before // performing any operation sum += arr[i]; // If current element is divisible // by x and it is maximum so far if (arr[i] % x == 0 && largestDivisible < arr[i]) largestDivisible = arr[i]; // Update the minimum element if (arr[i] < minimum) minimum = arr[i]; } // If no element can be reduced then // there's no point in performing the // operation as we will end up increasing // the sum when an element is multiplied by x if (largestDivisible == -1) return sum; // Subtract the chosen elements from the // sum and then add their updated values int sumAfterOperation = sum - minimum - largestDivisible + (x * minimum) + (largestDivisible / x); // Return the minimized sum return Math.Min(sum, sumAfterOperation);} // Driver codepublic static void Main(){ int[] arr = { 5, 5, 5, 5, 6 }; int n = arr.Length; int x = 3; Console.WriteLine(minSum(arr, n, x));}} // This code is contributed// by Code_Mech",
"e": 31844,
"s": 30396,
"text": null
},
{
"code": "<?php// PHP implementation of the approach // Function to return the minimized sumfunction minSum($arr, $n, $x){ $sum = 0; // To store the largest element // from the array which is // divisible by x $largestDivisible = -1; $minimum = $arr[0]; for ($i = 0; $i < $n; $i++) { // Sum of array elements before // performing any operation $sum += $arr[$i]; // If current element is divisible // by x and it is maximum so far if ($arr[$i] % $x == 0 && $largestDivisible < $arr[$i]) $largestDivisible = $arr[$i]; // Update the minimum element if ($arr[$i] < $minimum) $minimum = $arr[$i]; } // If no element can be reduced then // there's no point in performing the // operation as we will end up increasing // the sum when an element is multiplied by x if ($largestDivisible == -1) return $sum; // Subtract the chosen elements from the // sum and then add their updated values $sumAfterOperation = $sum - $minimum - $largestDivisible + ($x * $minimum) + ($largestDivisible / $x); // Return the minimized sum return min($sum, $sumAfterOperation);} // Driver code$arr = array( 5, 5, 5, 5, 6 );$n = sizeof($arr);$x = 3; print(minSum($arr, $n, $x)); // This code is contributed by Ryuga?>",
"e": 33206,
"s": 31844,
"text": null
},
{
"code": "<script>// javascript implementation of the approach // Function to return the minimized sum function minSum(arr , n , x) { var sum = 0; // To store the largest element // from the array which is // divisible by x var largestDivisible = -1, minimum = arr[0]; for (i = 0; i < n; i++) { // Sum of array elements before // performing any operation sum += arr[i]; // If current element is divisible // by x and it is maximum so far if (arr[i] % x == 0 && largestDivisible < arr[i]) largestDivisible = arr[i]; // Update the minimum element if (arr[i] < minimum) minimum = arr[i]; } // If no element can be reduced then // there's no point in performing the // operation as we will end up increasing // the sum when an element is multiplied by x if (largestDivisible == -1) return sum; // Subtract the chosen elements from the // sum and then add their updated values var sumAfterOperation = sum - minimum - largestDivisible + (x * minimum) + (largestDivisible / x); // Return the minimized sum return Math.min(sum, sumAfterOperation); } // Driver code var arr = [ 5, 5, 5, 5, 6 ]; var n = arr.length; var x = 3; document.write(minSum(arr, n, x)); // This code contributed by aashish1995</script>",
"e": 34704,
"s": 33206,
"text": null
},
{
"code": null,
"e": 34707,
"s": 34704,
"text": "26"
},
{
"code": null,
"e": 34722,
"s": 34709,
"text": "rituraj_jain"
},
{
"code": null,
"e": 34730,
"s": 34722,
"text": "ankthon"
},
{
"code": null,
"e": 34740,
"s": 34730,
"text": "Code_Mech"
},
{
"code": null,
"e": 34752,
"s": 34740,
"text": "aashish1995"
},
{
"code": null,
"e": 34768,
"s": 34752,
"text": "simranarora5sos"
},
{
"code": null,
"e": 34792,
"s": 34768,
"text": "Constructive Algorithms"
},
{
"code": null,
"e": 34805,
"s": 34792,
"text": "divisibility"
},
{
"code": null,
"e": 34812,
"s": 34805,
"text": "Arrays"
},
{
"code": null,
"e": 34825,
"s": 34812,
"text": "Mathematical"
},
{
"code": null,
"e": 34832,
"s": 34825,
"text": "Arrays"
},
{
"code": null,
"e": 34845,
"s": 34832,
"text": "Mathematical"
},
{
"code": null,
"e": 34943,
"s": 34845,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 34952,
"s": 34943,
"text": "Comments"
},
{
"code": null,
"e": 34965,
"s": 34952,
"text": "Old Comments"
},
{
"code": null,
"e": 35009,
"s": 34965,
"text": "Top 50 Array Coding Problems for Interviews"
},
{
"code": null,
"e": 35032,
"s": 35009,
"text": "Introduction to Arrays"
},
{
"code": null,
"e": 35046,
"s": 35032,
"text": "Linear Search"
},
{
"code": null,
"e": 35078,
"s": 35046,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 35146,
"s": 35078,
"text": "Maximum and minimum of an array using minimum number of comparisons"
},
{
"code": null,
"e": 35176,
"s": 35146,
"text": "Program for Fibonacci numbers"
},
{
"code": null,
"e": 35191,
"s": 35176,
"text": "C++ Data Types"
},
{
"code": null,
"e": 35251,
"s": 35191,
"text": "Write a program to print all permutations of a given string"
},
{
"code": null,
"e": 35294,
"s": 35251,
"text": "Set in C++ Standard Template Library (STL)"
}
] |
ConcurrentNavigableMap Interface in Java - GeeksforGeeks | 19 Dec, 2021
The ConcurrentNavigableMap interface is a member of the Java Collection Framework. It extends from the NavigableMap interface and ConcurrentMap interface. The ConcurrentNavigableMap provides thread-safe access to map elements along with providing convenient navigation methods. It belongs to java.util.concurrent package.
Declaration:
public interface ConcurrentNavigableMap<K,V> extends ConcurrentMap<K,V>, NavigableMap<K,V>
Here, K is the key Object type and V is the value Object type.
It extends the ConcurrentMap<K, V>, Map<K, V>, NavigableMap<K, V>, and SortedMap<K, V> interfaces. ConcurrentSkipListMap is the class that implements ConcurrentNavigableMap.
Java
// Java Program to demonstrate the
// ConcurrentNavigableMap Interface
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class GFG {
    public static void main(String[] args)
    {
        // Instantiate an object
        // Since ConcurrentNavigableMap
        // is an interface so We use
        // ConcurrentSkipListMap
        ConcurrentNavigableMap<Integer, String> cnmap
            = new ConcurrentSkipListMap<Integer, String>();

        // Add elements using put() method
        cnmap.put(1, "First");
        cnmap.put(2, "Second");
        cnmap.put(3, "Third");
        cnmap.put(4, "Fourth");

        // Print the contents on the console
        System.out.println(
            "Mappings of ConcurrentNavigableMap : " + cnmap);
        System.out.println("HeadMap(3): " + cnmap.headMap(3));
        System.out.println("TailMap(3): " + cnmap.tailMap(3));
        System.out.println("SubMap(1, 3): " + cnmap.subMap(1, 3));
    }
}
Output:
Mappings of ConcurrentNavigableMap : {1=First, 2=Second, 3=Third, 4=Fourth}
HeadMap(3): {1=First, 2=Second}
TailMap(3): {3=Third, 4=Fourth}
SubMap(1, 3): {1=First, 2=Second}
The ConcurrentNavigableMap has one implementing class which is ConcurrentSkipListMap class. The ConcurrentSkipListMap is a scalable implementation of the ConcurrentNavigableMap interface. The keys in ConcurrentSkipListMap are sorted by natural order or by using a Comparator at the time of construction of the object. The ConcurrentSkipListMap has the expected time cost of log(n) for insertion, deletion, and searching operations. It is a thread-safe class, therefore, all basic operations can be accomplished concurrently. Syntax:
ConcurrentSkipListMap< ? , ? > objectName = new ConcurrentSkipListMap< ? , ? >();
Example: In the code given below, we simply instantiate an object of the ConcurrentSkipListMap class named cslmap. The put() method is used to add elements and remove() to delete elements. For the remove() method the syntax is objectname.remove(Object key). The keySet() shows all the keys in the map (description in the method table given later in this article).
Java
// Java Program to demonstrate the ConcurrentSkipListMap
import java.util.concurrent.*;

public class ConcurrentSkipListMapExample {
    public static void main(String[] args)
    {
        // Instantiate an object of
        // ConcurrentSkipListMap named cslmap
        ConcurrentSkipListMap<Integer, String> cslmap
            = new ConcurrentSkipListMap<Integer, String>();

        // Add elements using put()
        cslmap.put(1, "Geeks");
        cslmap.put(2, "For");
        cslmap.put(3, "Geeks");

        // Print the contents on the console
        System.out.println(
            "The ConcurrentSkipListMap contains: " + cslmap);

        // Print the key set using keySet()
        System.out.println(
            "\nThe ConcurrentSkipListMap key set: "
            + cslmap.keySet());

        // Remove elements using remove()
        cslmap.remove(3);

        // Print the contents on the console
        System.out.println(
            "\nThe ConcurrentSkipListMap contains: " + cslmap);
    }
}
Output:
The ConcurrentSkipListMap contains: {1=Geeks, 2=For, 3=Geeks}
The ConcurrentSkipListMap key set: [1, 2, 3]
The ConcurrentSkipListMap contains: {1=Geeks, 2=For}
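As noted above, a Comparator can be supplied when the map is constructed; the following minimal sketch (not part of the original article) sorts the keys in descending order instead:

// A minimal sketch: ConcurrentSkipListMap with a custom Comparator
import java.util.Comparator;
import java.util.concurrent.ConcurrentSkipListMap;

public class ComparatorOrderExample {
    public static void main(String[] args)
    {
        // Supplying Comparator.reverseOrder() sorts the keys
        // in descending order instead of the natural order
        ConcurrentSkipListMap<Integer, String> cslmap
            = new ConcurrentSkipListMap<Integer, String>(
                Comparator.reverseOrder());

        cslmap.put(1, "Geeks");
        cslmap.put(2, "For");
        cslmap.put(3, "Geeks");

        // Prints {3=Geeks, 2=For, 1=Geeks}
        System.out.println(cslmap);
    }
}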
To add elements to a ConcurrentNavigableMap, we can use any of the methods of the Map interface. The code below shows how to use them. You can observe in the code that when no Comparator is provided at the time of construction, the natural ordering of the keys is followed.
Java
// Java Program for adding elements to a
// ConcurrentNavigableMap
import java.util.concurrent.*;

public class AddingElementsExample {
    public static void main(String[] args)
    {
        // Instantiate an object
        // Since ConcurrentNavigableMap is an interface
        // We use ConcurrentSkipListMap
        ConcurrentNavigableMap<Integer, String> cnmap
            = new ConcurrentSkipListMap<Integer, String>();

        // Add elements using put()
        cnmap.put(8, "Third");
        cnmap.put(6, "Second");
        cnmap.put(3, "First");

        // Print the contents on the console
        System.out.println(
            "Mappings of ConcurrentNavigableMap : " + cnmap);
    }
}
Output:
Mappings of ConcurrentNavigableMap : {3=First, 6=Second, 8=Third}
To remove elements as well, we use methods of the Map interface, as ConcurrentNavigableMap is a descendant of Map.
Java
// Java Program for deleting
// elements from ConcurrentNavigableMap

import java.util.concurrent.*;

public class RemovingElementsExample {
    public static void main(String[] args)
    {
        // Instantiate an object
        // Since ConcurrentNavigableMap
        // is an interface
        // We use ConcurrentSkipListMap
        ConcurrentNavigableMap<Integer, String> cnmap
            = new ConcurrentSkipListMap<Integer, String>();

        // Add elements using put()
        cnmap.put(8, "Third");
        cnmap.put(6, "Second");
        cnmap.put(3, "First");
        cnmap.put(11, "Fourth");

        // Print the contents on the console
        System.out.println(
            "Mappings of ConcurrentNavigableMap : " + cnmap);

        // Remove elements using remove()
        cnmap.remove(6);
        cnmap.remove(8);

        // Print the contents on the console
        System.out.println(
            "\nConcurrentNavigableMap, after remove operation : "
            + cnmap);

        // Clear the entire map using clear()
        cnmap.clear();
        System.out.println(
            "\nConcurrentNavigableMap, after clear operation : "
            + cnmap);
    }
}
Output:
Mappings of ConcurrentNavigableMap : {3=First, 6=Second, 8=Third, 11=Fourth}
ConcurrentNavigableMap, after remove operation : {3=First, 11=Fourth}
ConcurrentNavigableMap, after clear operation : {}
We can access the elements of a ConcurrentNavigableMap using the get() method; an example is given below.
Java
// Java Program for accessing
// elements in a ConcurrentNavigableMap

import java.util.concurrent.*;

public class AccessingElementsExample {
    public static void main(String[] args)
    {
        // Instantiate an object
        // Since ConcurrentNavigableMap is an interface
        // We use ConcurrentSkipListMap
        ConcurrentNavigableMap<Integer, String> cnmap
            = new ConcurrentSkipListMap<Integer, String>();

        // Add elements using put()
        cnmap.put(8, "Third");
        cnmap.put(6, "Second");
        cnmap.put(3, "First");
        cnmap.put(11, "Fourth");

        // Accessing the elements using get()
        // with key as a parameter
        System.out.println(cnmap.get(3));
        System.out.println(cnmap.get(6));
        System.out.println(cnmap.get(8));
        System.out.println(cnmap.get(11));

        // Display the set of keys using keySet()
        System.out.println(
            "\nThe ConcurrentNavigableMap key set: "
            + cnmap.keySet());
    }
}
Output:
First
Second
Third
Fourth
The ConcurrentNavigableMap key set: [3, 6, 8, 11]
We can use the Iterator interface to traverse over any structure of the Collection Framework. Since an Iterator works with a single element type, we use Map.Entry<K, V> to combine the separate key and value types into one compatible element type. Then, using the next() method, we print the entries of the ConcurrentNavigableMap.
Java
// Java Program for traversing a ConcurrentNavigableMap

import java.util.concurrent.*;
import java.util.*;

public class TraversalExample {
    public static void main(String[] args)
    {
        // Instantiate an object
        // Since ConcurrentNavigableMap is an interface
        // We use ConcurrentSkipListMap
        ConcurrentNavigableMap<Integer, String> cnmap
            = new ConcurrentSkipListMap<Integer, String>();

        // Add elements using put()
        cnmap.put(8, "Third");
        cnmap.put(6, "Second");
        cnmap.put(3, "First");
        cnmap.put(11, "Fourth");

        // Create an Iterator over the
        // ConcurrentNavigableMap
        Iterator<ConcurrentNavigableMap.Entry<Integer, String> > itr
            = cnmap.entrySet().iterator();

        // The hasNext() method is used to check if there is
        // a next element The next() method is used to
        // retrieve the next element
        while (itr.hasNext()) {
            ConcurrentNavigableMap.Entry<Integer, String> entry
                = itr.next();
            System.out.println("Key = " + entry.getKey()
                               + ", Value = " + entry.getValue());
        }
    }
}
Output:
Key = 3, Value = First
Key = 6, Value = Second
Key = 8, Value = Third
Key = 11, Value = Fourth
Note: Whenever we say ‘elements of a ConcurrentNavigableMap’, the elements are actually stored in an object of an implementing class of ConcurrentNavigableMap, in this case ConcurrentSkipListMap.
ConcurrentNavigableMap inherits methods from the Map interface, SortedMap interface, ConcurrentMap interface, NavigableMap interface. The basic methods for adding elements, removing elements, and traversal are given by the parent interfaces. The methods of the ConcurrentNavigableMap are given in the following table. Here,
K – The type of the keys in the map.
V – The type of values mapped in the map.
Method : Description

compute(K key, BiFunction<? super K,? super V,? extends V> remappingFunction) :
Attempts to compute a mapping for the specified key and its current mapped value (or null if there is no current mapping).

computeIfAbsent(K key, Function<? super K,? extends V> mappingFunction) :
If the specified key is not already associated with a value (or is mapped to null), attempts to compute its value using the given mapping function and enters it into this map unless null.

computeIfPresent(K key, BiFunction<? super K,? super V,? extends V> remappingFunction) :
If the value for the specified key is present and non-null, attempts to compute a new mapping given the key and its current mapped value.

merge(K key, V value, BiFunction<? super V,? super V,? extends V> remappingFunction) :
If the specified key is not already associated with a value or is associated with null, associates it with the given non-null value; otherwise, replaces the associated value with the result of the given remapping function, or removes it if the result is null.

replaceAll(BiFunction<? super K,? super V,? extends V> function) :
Replaces each entry's value with the result of invoking the given function on that entry until all entries have been processed or the function throws an exception.
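These default methods are inherited from the Map interface and work on a ConcurrentSkipListMap as well. The short sketch below is not from the original article; it simply illustrates computeIfAbsent(), merge(), and compute():

// A minimal sketch of the compute-style methods listed above
import java.util.concurrent.ConcurrentSkipListMap;

public class ComputeMethodsExample {
    public static void main(String[] args)
    {
        ConcurrentSkipListMap<String, Integer> wordCounts
            = new ConcurrentSkipListMap<String, Integer>();

        // computeIfAbsent inserts a value only if the key is absent
        wordCounts.computeIfAbsent("geeks", k -> 0);

        // merge adds 1 to the existing count,
        // or stores 1 when the key is absent
        for (String w : new String[] { "geeks", "for", "geeks" })
            wordCounts.merge(w, 1, Integer::sum);

        // compute rewrites the value of a key that is present
        wordCounts.compute("for", (k, v) -> v == null ? 1 : v * 10);

        // Prints {for=10, geeks=2}
        System.out.println(wordCounts);
    }
}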
Ganeshchowdharysadanala
sweetyty
arorakashish0911
Java-Collections
Java-ConcurrentNavigableMap
Java
Java
Java-Collections
| [
{
"code": null,
"e": 23868,
"s": 23840,
"text": "\n19 Dec, 2021"
},
{
"code": null,
"e": 24190,
"s": 23868,
"text": "The ConcurrentNavigableMap interface is a member of the Java Collection Framework. It extends from the NavigableMap interface and ConcurrentMap interface. The ConcurrentNavigableMap provides thread-safe access to map elements along with providing convenient navigation methods. It belongs to java.util.concurrent package."
},
{
"code": null,
"e": 24203,
"s": 24190,
"text": "Declaration:"
},
{
"code": null,
"e": 24294,
"s": 24203,
"text": "public interface ConcurrentNavigableMap<K,V> extends ConcurrentMap<K,V>, NavigableMap<K,V>"
},
{
"code": null,
"e": 24357,
"s": 24294,
"text": "Here, K is the key Object type and V is the value Object type."
},
{
"code": null,
"e": 24512,
"s": 24357,
"text": "It implements ConcurrentMap<K, V>, Map<K, V>, NavigableMap<K, V>, SortedMap<K, V> interfaces. ConcurrentSkipListMap implements ConcurrentNavigableMap."
},
{
"code": null,
"e": 24517,
"s": 24512,
"text": "Java"
},
{
"code": "// Java Program to demonstrate the// ConcurrentNavigableMap Interfaceimport java.util.concurrent.ConcurrentNavigableMap;import java.util.concurrent.ConcurrentSkipListMap; public class GFG { public static void main(String[] args) { // Instantiate an object // Since ConcurrentNavigableMap // is an interface so We use // ConcurrentSkipListMap ConcurrentNavigableMap<Integer, String> cnmap = new ConcurrentSkipListMap<Integer, String>(); // Add elements using put() method cnmap.put(1, \"First\"); cnmap.put(2, \"Second\"); cnmap.put(3, \"Third\"); cnmap.put(4, \"Fourth\"); // Print the contents on the console System.out.println( \"Mappings of ConcurrentNavigableMap : \" + cnmap); System.out.println(\"HeadMap(3): \" + cnmap.headMap(3)); System.out.println(\"TailMap(3): \" + cnmap.tailMap(3)); System.out.println(\"SubMap(1, 3): \" + cnmap.subMap(1, 3)); }}",
"e": 25591,
"s": 24517,
"text": null
},
{
"code": null,
"e": 25600,
"s": 25591,
"text": " Output:"
},
{
"code": null,
"e": 25774,
"s": 25600,
"text": "Mappings of ConcurrentNavigableMap : {1=First, 2=Second, 3=Third, 4=Fourth}\nHeadMap(3): {1=First, 2=Second}\nTailMap(3): {3=Third, 4=Fourth}\nSubMap(1, 3): {1=First, 2=Second}"
},
{
"code": null,
"e": 26307,
"s": 25774,
"text": "The ConcurrentNavigableMap has one implementing class which is ConcurrentSkipListMap class. The ConcurrentSkipListMap is a scalable implementation of the ConcurrentNavigableMap interface. The keys in ConcurrentSkipListMap are sorted by natural order or by using a Comparator at the time of construction of the object. The ConcurrentSkipListMap has the expected time cost of log(n) for insertion, deletion, and searching operations. It is a thread-safe class, therefore, all basic operations can be accomplished concurrently. Syntax:"
},
{
"code": null,
"e": 26389,
"s": 26307,
"text": "ConcurrentSkipListMap< ? , ? > objectName = new ConcurrentSkipListMap< ? , ? >();"
},
{
"code": null,
"e": 26737,
"s": 26389,
"text": "Example: In the code given below, we simply instantiate an object of the ConcurrentSkipListMap class named cslmap. The put() method is used to add elements and remove() to delete elements. For the remove() method the syntax is objectname.remove(Object key). The keySet() shows all the keys in the map (description in the method table given above)."
},
{
"code": null,
"e": 26742,
"s": 26737,
"text": "Java"
},
{
"code": "// Java Program to demonstrate the ConcurrentSkipListMapimport java.util.concurrent.*; public class ConcurrentSkipListMapExample { public static void main(String[] args) { // Instantiate an object of // ConcurrentSkipListMap named cslmap ConcurrentSkipListMap<Integer, String> cslmap = new ConcurrentSkipListMap<Integer, String>(); // Add elements using put() cslmap.put(1, \"Geeks\"); cslmap.put(2, \"For\"); cslmap.put(3, \"Geeks\"); // Print the contents on the console System.out.println( \"The ConcurrentSkipListMap contains: \" + cslmap); // Print the key set using keySet() System.out.println( \"\\nThe ConcurrentSkipListMap key set: \" + cslmap.keySet()); // Remove elements using remove() cslmap.remove(3); // Print the contents on the console System.out.println( \"\\nThe ConcurrentSkipListMap contains: \" + cslmap); }}",
"e": 27757,
"s": 26742,
"text": null
},
{
"code": null,
"e": 27765,
"s": 27757,
"text": "Output:"
},
{
"code": null,
"e": 27927,
"s": 27765,
"text": "The ConcurrentSkipListMap contains: {1=Geeks, 2=For, 3=Geeks}\n\nThe ConcurrentSkipListMap key set: [1, 2, 3]\n\nThe ConcurrentSkipListMap contains: {1=Geeks, 2=For}"
},
{
"code": null,
"e": 28180,
"s": 27929,
"text": "To add elements to a ConcurrentNavigableMap we can use any methods of the Map interface. The code below shows how to use them. You can observe in the code that when no Comparator is provided at the time of construction, the natural order is followed."
},
{
"code": null,
"e": 28185,
"s": 28180,
"text": "Java"
},
{
"code": "// Java Program for adding elements to a// ConcurrentNavigableMapimport java.util.concurrent.*; public class AddingElementsExample { public static void main(String[] args) { // Instantiate an object // Since ConcurrentNavigableMap is an interface // We use ConcurrentSkipListMap ConcurrentNavigableMap<Integer, String> cnmap = new ConcurrentSkipListMap<Integer, String>(); // Add elements using put() cnmap.put(8, \"Third\"); cnmap.put(6, \"Second\"); cnmap.put(3, \"First\"); // Print the contents on the console System.out.println( \"Mappings of ConcurrentNavigableMap : \" + cnmap); }}",
"e": 28882,
"s": 28185,
"text": null
},
{
"code": null,
"e": 28891,
"s": 28882,
"text": " Output:"
},
{
"code": null,
"e": 28957,
"s": 28891,
"text": "Mappings of ConcurrentNavigableMap : {3=First, 6=Second, 8=Third}"
},
{
"code": null,
"e": 29071,
"s": 28957,
"text": "To remove elements as well we use methods of the Map interface, as ConcurrentNavigableMap is a descendant of Map."
},
{
"code": null,
"e": 29076,
"s": 29071,
"text": "Java"
},
{
"code": "// Java Program for deleting// elements from ConcurrentNavigableMap import java.util.concurrent.*; public class RemovingElementsExample { public static void main(String[] args) { // Instantiate an object // Since ConcurrentNavigableMap // is an interface // We use ConcurrentSkipListMap ConcurrentNavigableMap<Integer, String> cnmap = new ConcurrentSkipListMap<Integer, String>(); // Add elements using put() cnmap.put(8, \"Third\"); cnmap.put(6, \"Second\"); cnmap.put(3, \"First\"); cnmap.put(11, \"Fourth\"); // Print the contents on the console System.out.println( \"Mappings of ConcurrentNavigableMap : \" + cnmap); // Remove elements using remove() cnmap.remove(6); cnmap.remove(8); // Print the contents on the console System.out.println( \"\\nConcurrentNavigableMap, after remove operation : \" + cnmap); // Clear the entire map using clear() cnmap.clear(); System.out.println( \"\\nConcurrentNavigableMap, after clear operation : \" + cnmap); }}",
"e": 30248,
"s": 29076,
"text": null
},
{
"code": null,
"e": 30257,
"s": 30248,
"text": " Output:"
},
{
"code": null,
"e": 30457,
"s": 30257,
"text": "Mappings of ConcurrentNavigableMap : {3=First, 6=Second, 8=Third, 11=Fourth}\n\nConcurrentNavigableMap, after remove operation : {3=First, 11=Fourth}\n\nConcurrentNavigableMap, after clear operation : {}"
},
{
"code": null,
"e": 30568,
"s": 30457,
"text": "We can access the elements of a ConcurrentNavigableMap using get() method, the example of this is given below."
},
{
"code": null,
"e": 30573,
"s": 30568,
"text": "Java"
},
{
"code": "// Java Program for accessing// elements in a ConcurrentNavigableMap import java.util.concurrent.*; public class AccessingElementsExample { public static void main(String[] args) { // Instantiate an object // Since ConcurrentNavigableMap is an interface // We use ConcurrentSkipListMap ConcurrentNavigableMap<Integer, String> cnmap = new ConcurrentSkipListMap<Integer, String>(); // Add elements using put() cnmap.put(8, \"Third\"); cnmap.put(6, \"Second\"); cnmap.put(3, \"First\"); cnmap.put(11, \"Fourth\"); // Accessing the elements using get() // with key as a parameter System.out.println(cnmap.get(3)); System.out.println(cnmap.get(6)); System.out.println(cnmap.get(8)); System.out.println(cnmap.get(11)); // Display the set of keys using keySet() System.out.println( \"\\nThe ConcurrentNavigableMap key set: \" + cnmap.keySet()); }}",
"e": 31568,
"s": 30573,
"text": null
},
{
"code": null,
"e": 31577,
"s": 31568,
"text": " Output:"
},
{
"code": null,
"e": 31654,
"s": 31577,
"text": "First\nSecond\nThird\nFourth\n\nThe ConcurrentNavigableMap key set: [3, 6, 8, 11]"
},
{
"code": null,
"e": 31956,
"s": 31654,
"text": "We can use the Iterator interface to traverse over any structure of the Collection Framework. Since Iterators work with one type of data we use .Entry< ? , ? > to resolve the two separate types into a compatible format. Then using the next() method we print the elements of the ConcurrentNavigableMap."
},
{
"code": null,
"e": 31961,
"s": 31956,
"text": "Java"
},
{
"code": "// Java Program for traversing a ConcurrentNavigableMap import java.util.concurrent.*;import java.util.*; public class TraversalExample { public static void main(String[] args) { // Instantiate an object // Since ConcurrentNavigableMap is an interface // We use ConcurrentSkipListMap ConcurrentNavigableMap<Integer, String> cnmap = new ConcurrentSkipListMap<Integer, String>(); // Add elements using put() cnmap.put(8, \"Third\"); cnmap.put(6, \"Second\"); cnmap.put(3, \"First\"); cnmap.put(11, \"Fourth\"); // Create an Iterator over the // ConcurrentNavigableMap Iterator<ConcurrentNavigableMap .Entry<Integer, String> > itr = cnmap.entrySet().iterator(); // The hasNext() method is used to check if there is // a next element The next() method is used to // retrieve the next element while (itr.hasNext()) { ConcurrentNavigableMap .Entry<Integer, String> entry = itr.next(); System.out.println(\"Key = \" + entry.getKey() + \", Value = \" + entry.getValue()); } }}",
"e": 33205,
"s": 31961,
"text": null
},
{
"code": null,
"e": 33214,
"s": 33205,
"text": " Output:"
},
{
"code": null,
"e": 33309,
"s": 33214,
"text": "Key = 3, Value = First\nKey = 6, Value = Second\nKey = 8, Value = Third\nKey = 11, Value = Fourth"
},
{
"code": null,
"e": 33534,
"s": 33309,
"text": "Note: Every time that we say ‘elements of ConcurrentNavigableMap’, it has to be noted that the elements are actually stored in the object of an implementing class of ConcurrentNavigableMap in this case ConcurrentSkipListMap."
},
{
"code": null,
"e": 33858,
"s": 33534,
"text": "ConcurrentNavigableMap inherits methods from the Map interface, SortedMap interface, ConcurrentMap interface, NavigableMap interface. The basic methods for adding elements, removing elements, and traversal are given by the parent interfaces. The methods of the ConcurrentNavigableMap are given in the following table. Here,"
},
{
"code": null,
"e": 33895,
"s": 33858,
"text": "K – The type of the keys in the map."
},
{
"code": null,
"e": 33937,
"s": 33895,
"text": "V – The type of values mapped in the map."
},
{
"code": null,
"e": 33944,
"s": 33937,
"text": "Method"
},
{
"code": null,
"e": 33956,
"s": 33944,
"text": "Description"
},
{
"code": null,
"e": 33963,
"s": 33956,
"text": "METHOD"
},
{
"code": null,
"e": 33975,
"s": 33963,
"text": "DESCRIPTION"
},
{
"code": null,
"e": 34013,
"s": 33975,
"text": "compute(K key, BiFunction<? super K,"
},
{
"code": null,
"e": 34057,
"s": 34013,
"text": "? super V,? extends V> remappingFunction)"
},
{
"code": null,
"e": 34102,
"s": 34057,
"text": "computeIfAbsent(K key, Function<? super K,"
},
{
"code": null,
"e": 34132,
"s": 34102,
"text": "? extends V> mappingFunction)"
},
{
"code": null,
"e": 34247,
"s": 34132,
"text": "If the specified key is not already associated with a value (or is mapped to null), attempts to compute its value "
},
{
"code": null,
"e": 34321,
"s": 34247,
"text": "using the given mapping function and enters it into this map unless null."
},
{
"code": null,
"e": 34371,
"s": 34321,
"text": "computeIfPresent(K key, BiFunction<? super K,? "
},
{
"code": null,
"e": 34412,
"s": 34371,
"text": "super V,? extends V> remappingFunction)"
},
{
"code": null,
"e": 34456,
"s": 34412,
"text": "merge(K key, V value, BiFunction<? super V"
},
{
"code": null,
"e": 34501,
"s": 34456,
"text": ",? super V,? extends V> remappingFunction)"
},
{
"code": null,
"e": 34545,
"s": 34501,
"text": "replaceAll(BiFunction<? super K,? super V"
},
{
"code": null,
"e": 34570,
"s": 34545,
"text": ",? extends V> function)"
},
{
"code": null,
"e": 34577,
"s": 34570,
"text": "METHOD"
},
{
"code": null,
"e": 34589,
"s": 34577,
"text": "DESCRIPTION"
},
{
"code": null,
"e": 34596,
"s": 34589,
"text": "METHOD"
},
{
"code": null,
"e": 34608,
"s": 34596,
"text": "DESCRIPTION"
},
{
"code": null,
"e": 34615,
"s": 34608,
"text": "METHOD"
},
{
"code": null,
"e": 34627,
"s": 34615,
"text": "DESCRIPTION"
},
{
"code": null,
"e": 34651,
"s": 34627,
"text": "Ganeshchowdharysadanala"
},
{
"code": null,
"e": 34660,
"s": 34651,
"text": "sweetyty"
},
{
"code": null,
"e": 34677,
"s": 34660,
"text": "arorakashish0911"
},
{
"code": null,
"e": 34694,
"s": 34677,
"text": "Java-Collections"
},
{
"code": null,
"e": 34722,
"s": 34694,
"text": "Java-ConcurrentNavigableMap"
},
{
"code": null,
"e": 34727,
"s": 34722,
"text": "Java"
},
{
"code": null,
"e": 34732,
"s": 34727,
"text": "Java"
},
{
"code": null,
"e": 34749,
"s": 34732,
"text": "Java-Collections"
},
{
"code": null,
"e": 34847,
"s": 34749,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 34856,
"s": 34847,
"text": "Comments"
},
{
"code": null,
"e": 34869,
"s": 34856,
"text": "Old Comments"
},
{
"code": null,
"e": 34890,
"s": 34869,
"text": "Constructors in Java"
},
{
"code": null,
"e": 34905,
"s": 34890,
"text": "Stream In Java"
},
{
"code": null,
"e": 34924,
"s": 34905,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 34970,
"s": 34924,
"text": "Different ways of Reading a text file in Java"
},
{
"code": null,
"e": 35000,
"s": 34970,
"text": "Functional Interfaces in Java"
},
{
"code": null,
"e": 35026,
"s": 35000,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 35068,
"s": 35026,
"text": "StringBuilder Class in Java with Examples"
},
{
"code": null,
"e": 35108,
"s": 35068,
"text": "Checked vs Unchecked Exceptions in Java"
},
{
"code": null,
"e": 35151,
"s": 35108,
"text": "Comparator Interface in Java with Examples"
}
] |
Logo - Turtle World | Logo has a number of other additional drawing commands; some of these are given below.
home
cleartext or ct
label
setxy
The label command takes a single word as a quoted string (e.g. “a_string”) or a list of words in [ ] brackets without quotation (e.g. [a string of letters]) and prints them on the graphics window at the location of the turtle. Let us consider the following code.
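As a minimal illustration (the exact code from the original lesson is not reproduced here, so this line is an assumed reconstruction of typical label usage):

label [Hello Turtle]

This prints the words at the turtle's current position without moving the turtle.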
The setxy command takes two arguments, treats the first as the value of the abscissa (horizontal axis) and the second as a value of the ordinate (vertical axis). It places the turtle at these coordinates, possibly leaving ink while reaching these coordinates. In the following three figures, we have shown how the setxy command can be used.
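A small command sequence along these lines (a sketch standing in for the original figures; the coordinates are illustrative) demonstrates setxy with the pen up and with it down:

cs              ; clear the screen and send the turtle home
pu              ; lift the pen so that no ink is left
setxy 100 50    ; jump to the point (100, 50) without drawing
pd              ; lower the pen again
setxy -100 50   ; move to (-100, 50), drawing a line on the way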
The cleartext command, abbreviated ct, clears the text region of the command window.
Following is an exercise to check your aptitude on what you have learned so far in this chapter.
What kind of figure does the following command sequence produce?
cs pu setxy -60 60 pd home rt 45 fd 85 lt 135 fd 120
Interpret these commands as you read them from left to right. Try it to find out the result.
Following is a table of command summary.
setx 100 : Sets the turtle's x-coordinate to +100; moves it 100 points to the right of center; no vertical change
setx -200 : Moves the turtle 200 points to the left of center; no vertical change
sety 150 : Sets the turtle's y-coordinate to 150; moves it 150 points above center; no horizontal change
sety -50 : Moves the turtle 50 points below center; no horizontal change
setxy 100 100 : Moves the turtle to xy coordinate 100 100
show xcor : Reports the turtle’s x-coordinate
show ycor : Reports the turtle’s y-coordinate
setheading 0 (or seth 0) : Points the turtle straight up, “high noon”
seth 120 : Moves the turtle 120 degrees to point to the four o’clock position
The following screenshot is a practical demonstration of some of the above commands.
| [
{
"code": null,
"e": 1921,
"s": 1834,
"text": "Logo has a number of other additional drawing commands, some of these are given below."
},
{
"code": null,
"e": 1926,
"s": 1921,
"text": "home"
},
{
"code": null,
"e": 1942,
"s": 1926,
"text": "cleartext or ct"
},
{
"code": null,
"e": 1948,
"s": 1942,
"text": "label"
},
{
"code": null,
"e": 1954,
"s": 1948,
"text": "setxy"
},
{
"code": null,
"e": 2217,
"s": 1954,
"text": "The label command takes a single word as a quoted string (e.g. “a_string”) or a list of words in [ ] brackets without quotation (e.g. [a string of letters]) and prints them on the graphics window at the location of the turtle. Let us consider the following code."
},
{
"code": null,
"e": 2558,
"s": 2217,
"text": "The setxy command takes two arguments, treats the first as the value of the abscissa (horizontal axis) and the second as a value of the ordinate (vertical axis). It places the turtle at these coordinates, possibly leaving ink while reaching these coordinates. In the following three figures, we have shown how the setxy command can be used."
},
{
"code": null,
"e": 2643,
"s": 2558,
"text": "The cleartext command, abbreviated ct, clears the text region of the command window."
},
{
"code": null,
"e": 2740,
"s": 2643,
"text": "Following is an exercise to check your aptitude on what you have learned so far in this chapter."
},
{
"code": null,
"e": 2805,
"s": 2740,
"text": "What kind of figure does the following command sequence produce?"
},
{
"code": null,
"e": 2859,
"s": 2805,
"text": "cs pu setxy -60 60 pd home rt 45 fd 85 lt 135 fd 120\n"
},
{
"code": null,
"e": 2952,
"s": 2859,
"text": "Interpret these commands as you read them from left to right. Try it to find out the result."
},
{
"code": null,
"e": 2993,
"s": 2952,
"text": "Following is a table of command summary."
},
{
"code": null,
"e": 3032,
"s": 2993,
"text": "Sets the turtle's x-coordinate to +100"
},
{
"code": null,
"e": 3075,
"s": 3032,
"text": "Moves it 100 points to the right of center"
},
{
"code": null,
"e": 3094,
"s": 3075,
"text": "No vertical change"
},
{
"code": null,
"e": 3144,
"s": 3094,
"text": "Moves the turtle 200 points to the left of center"
},
{
"code": null,
"e": 3163,
"s": 3144,
"text": "No vertical change"
},
{
"code": null,
"e": 3201,
"s": 3163,
"text": "Sets the turtle's y-coordinate to 150"
},
{
"code": null,
"e": 3234,
"s": 3201,
"text": "Moves it 150 points above center"
},
{
"code": null,
"e": 3255,
"s": 3234,
"text": "No horizontal change"
},
{
"code": null,
"e": 3295,
"s": 3255,
"text": "Moves the turtle 50 points below center"
},
{
"code": null,
"e": 3316,
"s": 3295,
"text": "No horizontal change"
},
{
"code": null,
"e": 3358,
"s": 3316,
"text": "Moves the turtle to xy coordinate 100 100"
},
{
"code": null,
"e": 3368,
"s": 3358,
"text": "show xcor"
},
{
"code": null,
"e": 3378,
"s": 3368,
"text": "show ycor"
},
{
"code": null,
"e": 3412,
"s": 3378,
"text": "Reports the turtle’s x-coordinate"
},
{
"code": null,
"e": 3446,
"s": 3412,
"text": "Reports the turtle’s y-coordinate"
},
{
"code": null,
"e": 3459,
"s": 3446,
"text": "setheading 0"
},
{
"code": null,
"e": 3466,
"s": 3459,
"text": "seth 0"
},
{
"code": null,
"e": 3509,
"s": 3466,
"text": "Points the turtle straight up, “high noon”"
},
{
"code": null,
"e": 3575,
"s": 3509,
"text": "Moves the turtle 120 degree to point to the four o’clock position"
},
{
"code": null,
"e": 3660,
"s": 3575,
"text": "The following screenshot is a practical demonstration of some of the above commands."
},
{
"code": null,
"e": 3693,
"s": 3660,
"text": "\n 48 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 3712,
"s": 3693,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 3747,
"s": 3712,
"text": "\n 38 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 3759,
"s": 3747,
"text": " Rob Cubbon"
},
{
"code": null,
"e": 3794,
"s": 3759,
"text": "\n 81 Lectures \n 7.5 hours \n"
},
{
"code": null,
"e": 3804,
"s": 3794,
"text": " YouAccel"
},
{
"code": null,
"e": 3835,
"s": 3804,
"text": "\n 8 Lectures \n 34 mins\n"
},
{
"code": null,
"e": 3850,
"s": 3835,
"text": " Yash Rajoliya"
},
{
"code": null,
"e": 3857,
"s": 3850,
"text": " Print"
},
{
"code": null,
"e": 3868,
"s": 3857,
"text": " Add Notes"
}
] |
How to add a date and time in HTML5? | Use the <time> tag to add date and time. The HTML <time> tag is used for displaying the human readable date and time.
The HTML <time> tag also supports the following additional attribute − datetime, which specifies the date or time behind the element in a machine-readable format, for example <time datetime="2021-12-19T20:00">8 pm</time>.
You can try to run the following code to learn how to add a date and time using the <time> tag in HTML5 −
<!Doctype html>
<html>
<head>
<title>HTML time Tag</title>
</head>
<body>
<p>The time is <time>08:30 pm</time></p>
</body>
</html> | [
{
"code": null,
"e": 1180,
"s": 1062,
"text": "Use the <time> tag to add date and time. The HTML <time> tag is used for displaying the human readable date and time."
},
{
"code": null,
"e": 1251,
"s": 1180,
"text": "The HTML <time> tag also supports the following additional attribute −"
},
{
"code": null,
"e": 1344,
"s": 1251,
"text": "You can try to run the following code to learn how to add date and time <details> in HTML5 −"
},
{
"code": null,
"e": 1499,
"s": 1344,
"text": "<!Doctype html>\n<html>\n <head>\n <title>HTML time Tag</title>\n </head>\n <body>\n <p>The time is <time>08:30 pm</time></p>\n </body>\n</html>"
}
] |
How to pad a number with leading zeros in JavaScript? | To pad a number with leading zeros, a function is created here, which checks the number with its length and add leading zeros.
You can try to run the following code to pad a number with leading zeros in JavaScript −
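Modern JavaScript (ES2017 and later) also provides the built-in String.prototype.padStart method, which does the same job in one call; for instance, "2".padStart(4, "0") returns "0002". The custom function below remains useful in older environments that predate padStart.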
Live Demo
<!DOCTYPE html>
<html>
<body>
<script>
String.prototype.padFunction = function(padStr, len) {
var str = this;
while (str.length < len)
str = padStr + str;
return str;
}
var str = "2";
document.write(str.padFunction("0", 4));
var str = "8";
document.write("<br>"+str.padFunction("0", 5));
</script>
</body>
</html>
0002
00008 | [
{
"code": null,
"e": 1189,
"s": 1062,
"text": "To pad a number with leading zeros, a function is created here, which checks the number with its length and add leading zeros."
},
{
"code": null,
"e": 1278,
"s": 1189,
"text": "You can try to run the following code to pad a number with leading zeros in JavaScript −"
},
{
"code": null,
"e": 1288,
"s": 1278,
"text": "Live Demo"
},
{
"code": null,
"e": 1726,
"s": 1288,
"text": "<!DOCTYPE html>\n<html>\n <body>\n <script>\n String.prototype.padFunction = function(padStr, len) {\n var str = this;\n while (str.length < len)\n str = padStr + str;\n return str;\n }\n var str = \"2\";\n document.write(str.padFunction(\"0\", 4));\n\n var str = \"8\";\n document.write(\"<br>\"+str.padFunction(\"0\", 5));\n </script>\n </body>\n</html>"
},
{
"code": null,
"e": 1737,
"s": 1726,
"text": "0002\n00008"
}
] |
Pulling Your Data Up By the Bootstraps | by Simon Spichak | Towards Data Science | If you work with any large datasets, you have probably heard of bootstrapping. If you are a burgeoning statistician or bioinformatician, it is part of your computational toolset. What is the point of using this function? More importantly, what is bootstrapping anyway?
Bradley Efron first published the idea of bootstrapping in 1979 [1]. This computer-intensive technique became more popular and useful as computing power became cheaper and more available. Indeed, researchers have cited the bootstrapping method more than 20 000 times.
When working with large datasets, we aim to make inferences about the population from which our data is drawn. While we can calculate a mean or median, we do not know the certainty of this estimate. If we increase our sample size, we can reduce the error and approach the population parameters. However, if we are conducting RNA-sequencing or collecting large swathes of data, it is expensive or even impossible to increase the sample size. Bootstrapping is a resampling method that helps us determine error and confidence intervals. Results from bootstrapping later inform conclusions, whether you are looking at stock market data, phylogenetic trees or gene transcript abundances.
Bootstrapping is a method of resampling with replacement. We will run through an example to explain how this works as well as the assumptions for this method.
Suppose we have a dataset indicating the cost that basketball players charge for making appearances on birthdays. However, it is difficult for you to contact more than 8 players, so your dataset, D, in this example contains 8 values. Since we talked with a wide spectrum of different basketball players, from benchwarmers to stars, we can assume that our sample is similar enough to the entire population of players.
Herein lies our statistical assumption: our data sample approximates the population distribution.
D = {100, 200, 200, 300, 500, 1000, 1000, 750}
Here the average of our sample D is 506.25. If we bootstrap this sample a few times, we will get a better idea of the variance within this dataset. Bootstrapping involves resampling with replacement. Our resampled bootstraps will have 8 values each, however since they are resampled with replacement, the same value (i.e. 100) could appear multiple times. In this way, bootstrapping may generate different estimates each time it is run. However, with enough bootstraps, we generate an approximation of the variance within the data. Notice the following:
1. We are not adding any new points to our dataset.
2. Each resampled bootstrap contains the same amount of values as our original sample.
3. Since we resample with replacement, the probability of resampling any value is the same throughout the bootstrap. Each value is drawn as an independent event. If the first value that we resampled is 200, this does not change the probability that the second value in this bootstrap will also be 200.
D1 = {100, 1000, 500, 300, 200, 200, 200, 100}
D2 = {300, 1000, 1000, 300, 500, 100, 200, 750}
D3 = {750, 300, 200, 200, 100, 300, 750, 1000}
The averages of D1, D2, D3 are 325, 518.75, 450. We can then use these values to generate the standard error, confidence intervals and other measures of interest. Using Python, R or other languages, it's simple to generate 50, 100 or even 1000 bootstrapped samples. Knowing the bias, variance and spread of our sample helps us make better inferences about the population that it's drawn from. It helps you incorporate the robustness of your sample into the rest of your inferences.
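As the paragraph above notes, this is easy to script; below is a minimal Python sketch (not from the original article, standard library only) that draws 1000 bootstrap samples from D and reads off a 95% percentile confidence interval for the mean:

import random

data = [100, 200, 200, 300, 500, 1000, 1000, 750]
n_boot = 1000

# Draw 1000 bootstrap samples (resampling with replacement)
# and record the mean of each one
boot_means = []
for _ in range(n_boot):
    sample = random.choices(data, k=len(data))  # with replacement
    boot_means.append(sum(sample) / len(sample))

boot_means.sort()

# A simple 95% percentile confidence interval for the mean
lower = boot_means[int(0.025 * n_boot)]
upper = boot_means[int(0.975 * n_boot)]
print("95% CI for the mean: (%.2f, %.2f)" % (lower, upper))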
For the sake of this example, we used a small dataset. In general, bootstrapping does not apply to small datasets, datasets with many outliers or datasets involving dependent data measures.
If you are still having trouble visualizing this method, I’ve shown the process of bootstrapping below, on a dataset of jellybeans.
Bootstrapping helps us determine the confidence of specific branches within a phylogenetic tree. We might be looking at an amino acid sequence from a protein or a nucleotide sequence from a gene. Our original sample can quickly be resampled 1000 times, reconstructing 1000 bootstrapped trees. If your original tree shows that a specific protein or gene sequence branches off, you can check your bootstrapped tree to see how often this branch occurs. If it occurs more than 950 times, you can be fairly certain that your data is robust. If it only occurs around 400 times, then it could be resultant from an outlier.
Sleuth [3] software estimates gene transcript abundance using a bootstrap approach. By re-sampling our next-generation sequencing reads, we can calculate a more robust estimate of transcript abundance. Re-sampling gives us an idea of technical variability within our data. The technical variation is used along with biological variation when estimating whether a specific gene or transcript is increased within your dataset.
Other uses of bootstrapping include aggregating for ensemble machine learning. Basically, our dataset is resampled many times. Each bootstrapped sample is then run through our classifier or machine learning model. We can use all of the outputs together to generate a more accurate classifier. This prevents us from overfitting data based on our limited sample.
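As a rough sketch of that idea (assuming scikit-learn is available; the dataset and parameters here are illustrative, not from the original article):

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# A small synthetic classification problem
X, y = make_classification(n_samples=200, random_state=0)

# Each of the 50 trees is fit on a bootstrap resample of the data;
# predictions are aggregated across the ensemble
model = BaggingClassifier(DecisionTreeClassifier(),
                          n_estimators=50, random_state=0)
model.fit(X, y)
print(model.score(X, y))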
1. Efron, B. Bootstrap Methods: Another Look at the Jackknife. Ann. Statist. 7 (1979), no. 1, 1–26. doi:10.1214/aos/1176344552. https://projecteuclid.org/euclid.aos/1176344552
2. Efron, Bradley, Elizabeth Halloran, and Susan Holmes. “Bootstrap confidence levels for phylogenetic trees.” Proceedings of the National Academy of Sciences 93.23 (1996): 13429–13429.
3. https://hbctraining.github.io/DGE_workshop_salmon/lessons/09_sleuth.html | [
{
"code": null,
"e": 441,
"s": 172,
"text": "If you work with any large datasets, you have probably heard of bootstrapping. If you are a burgeoning statistician or bioinformatician, it is part of your computational toolset. What is the point of using this function? More importantly, what is bootstrapping anyway?"
},
{
"code": null,
"e": 706,
"s": 441,
"text": "Bradley Efron first published the idea of bootstrapping in 19791. This computer-intensive technique became more popular and useful as computing power became cheaper and more available. Indeed, researchers have cited the bootstrapping method more than 20 000 times."
},
{
"code": null,
"e": 1389,
"s": 706,
"text": "When working with large datasets, we aim to make inferences about the population from which our data is drawn. While we can calculate a mean or median, we do not know the certainty of this estimate. If we increase our sample size, we can reduce the error and approach the population parameters. However, if we are conducting RNA-sequencing or collecting large swathes of data, it is expensive or even impossible to increase the sample size. Bootstrapping is a resampling method that helps us determine error and confidence intervals. Results from bootstrapping later inform conclusions, whether you are looking at stock market data, phylogenetic trees or gene transcript abundances."
},
{
"code": null,
"e": 1548,
"s": 1389,
"text": "Bootstrapping is a method of resampling with replacement. We will run through an example to explain how this works as well as the assumptions for this method."
},
{
"code": null,
"e": 1954,
"s": 1548,
"text": "Supposed we have a dataset indicating the cost that basketball players charge for making appearances on birthdays. However, it is difficult for you to contact more than 8 players, so your dataset, D, in this example contains 8 values. Since we talked with a wide spectrum of different basketball players, from benchwarmers, to ensure that your sample is similar enough to the entire population of players."
},
{
"code": null,
"e": 2052,
"s": 1954,
"text": "Herein lies our statistical assumption: our data sample approximates the population distribution."
},
{
"code": null,
"e": 2099,
"s": 2052,
"text": "D = {100, 200, 200, 300, 500, 1000, 1000, 750}"
},
{
"code": null,
"e": 2653,
"s": 2099,
"text": "Here the average of our sample D is 506.25. If we bootstrap this sample a few times, we will get a better idea of the variance within this dataset. Bootstrapping involves resampling with replacement. Our resampled bootstraps will have 8 values each, however since they are resampled with replacement, the same value (i.e. 100) could appear multiple times. In this way, bootstrapping may generate different estimates each time it is run. However, with enough bootstraps, we generate an approximation of the variance within the data. Notice the following:"
},
{
"code": null,
"e": 3083,
"s": 2653,
"text": "We are not adding any new points to our dataset.Each resampled bootstrap contains the same amount of values as our original sample.Since we resample with replacement, the probability of resampling any value is the same throughout the bootstrap. Each value is drawn as an independent event. If the first value that we resampled is 200, this does not change the probability that the second value in this bootstrap will also be 200."
},
{
"code": null,
"e": 3132,
"s": 3083,
"text": "We are not adding any new points to our dataset."
},
{
"code": null,
"e": 3216,
"s": 3132,
"text": "Each resampled bootstrap contains the same amount of values as our original sample."
},
{
"code": null,
"e": 3515,
"s": 3216,
"text": "Since we resample with replacement, the probability of resampling any value is the same throughout the bootstrap. Each value is drawn as an independent event. If the first value that we resampled is 200, this does not change the probability that the second value in this bootstrap will also be 200."
},
{
"code": null,
"e": 3655,
"s": 3515,
"text": "D1 = {100, 1000, 500, 300, 200, 200, 200, 100}D2 = {300, 1000, 1000, 300, 500, 100, 200, 750}D3 = {750, 300, 200, 200, 100, 300, 750, 1000}"
},
{
"code": null,
"e": 4131,
"s": 3655,
"text": "The averages of D1, D2, D3 are 325, 518.75, 450. We can then use these values to generate standard error, confidence intervals and other measures of interest. Using Python, R or other languages, its simple to generate 50, 100 or even 1000 bootstrapped samples. Knowing the bias, variance and spread of our sample helps us make better inferences about the population that its drawn from. It helps you incorporate the robustness of your sample into the rest of your inferences."
},
{
"code": null,
"e": 4321,
"s": 4131,
"text": "For the sake of this example, we used a small dataset. In general, bootstrapping does not apply to small datasets, datasets with many outliers or datasets involving dependent data measures."
},
{
"code": null,
"e": 4453,
"s": 4321,
"text": "If you are still having trouble visualizing this method, I’ve shown the process of bootstrapping below, on a dataset of jellybeans."
},
{
"code": null,
"e": 5069,
"s": 4453,
"text": "Bootstrapping helps us determine the confidence of specific branches within a phylogenetic tree. We might be looking at an amino acid sequence from a protein or a nucleotide sequence from a gene. Our original sample can quickly be resampled 1000 times, reconstructing 1000 bootstrapped trees. If your original tree shows that a specific protein or gene sequence branches off, you can check your bootstrapped tree to see how often this branch occurs. If it occurs more than 950 times, you can be fairly certain that your data is robust. If it only occurs around 400 times, then it could be resultant from an outlier."
},
{
"code": null,
"e": 5491,
"s": 5069,
"text": "Sleuth3 software estimates gene transcript abundance using a bootstrap approach. By re-sampling our next-generation sequencing reads, we can calculate a more robust estimate of transcript abundance. Re-sampling gives us an idea of technical variability within our data. The technical variation is used along with biological variation when estimating whether a specific gene or transcript is increased within your dataset."
},
{
"code": null,
"e": 5852,
"s": 5491,
"text": "Other uses of bootstrapping include aggregating for ensemble machine learning. Basically, our dataset is resampled many times. Each bootstrapped sample is then run through our classifier or machine learning model. We can use all of the outputs together to generate a more accurate classifier. This prevents us from overfitting data based on our limited sample."
},
{
"code": null,
"e": 6279,
"s": 5852,
"text": "Efron, B. Bootstrap Methods: Another Look at the Jackknife. Ann. Statist. 7 (1979), no. 1, 1–26. doi:10.1214/aos/1176344552. https://projecteuclid.org/euclid.aos/1176344552Efron, Bradley, Elizabeth Halloran, and Susan Holmes. “Bootstrap confidence levels for phylogenetic trees.” Proceedings of the National Academy of Sciences 93.23 (1996): 13429–13429.https://hbctraining.github.io/DGE_workshop_salmon/lessons/09_sleuth.html"
},
{
"code": null,
"e": 6452,
"s": 6279,
"text": "Efron, B. Bootstrap Methods: Another Look at the Jackknife. Ann. Statist. 7 (1979), no. 1, 1–26. doi:10.1214/aos/1176344552. https://projecteuclid.org/euclid.aos/1176344552"
}
] |
Calling an External Program in Java using Process and Runtime - GeeksforGeeks | 09 Aug, 2019
Java contains the functionality of initiating an external process – an executable file or an existing application on the system, such as Google Chrome or the Media Player – by simple Java code. One way is to use the following two classes for the purpose:
Process classRuntime class
Process class
Runtime class
The Process class present in the java.lang package contains many useful methods such as killing a subprocess, making a thread wait for some time, returning the I/O stream of the subprocess etc. Additionally, the Runtime class provides a portal to interact with the Java runtime environment. It contains methods to execute a process, give the number of available processors, display the free memory in the JVM, among others.
// A sample Java program (written for Windows OS)
// to demonstrate creation of an external process
// using Runtime and Process
import java.io.IOException;

class CoolStuff
{
    public static void main(String[] args)
    {
        try
        {
            // Command to create an external process
            String command = "C:\\Program Files (x86)"
                + "\\Google\\Chrome\\Application\\chrome.exe";

            // Running the above command
            Runtime run = Runtime.getRuntime();
            Process proc = run.exec(command);
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
Runtime.getRuntime() simply returns the Runtime object associated with the current Java application. The executable path is specified in the process exec(String path) method. We also have an IOException try-catch block to handle the case where the file to be executed is not found. On running the code, an instance of Google Chrome opens up on the computer.
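As a quick illustration of the subprocess I/O stream methods mentioned earlier, the sketch below (my own illustrative example, not from the original article) launches a Windows shell command and prints whatever the child process writes to its standard output:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

class ProcessOutputDemo
{
    public static void main(String[] args) throws IOException
    {
        // Launch a shell command as a subprocess
        Process proc = Runtime.getRuntime().exec("cmd /c dir");

        // Read the subprocess's standard output line by line
        BufferedReader reader = new BufferedReader(
            new InputStreamReader(proc.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null)
            System.out.println(line);
    }
}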
Another way to create an external process is using ProcessBuilder which has been discussed in below post.ProcessBuilder in Java to create a basic online Judge
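For comparison, a minimal ProcessBuilder sketch (again illustrative; the linked post covers it in depth) might look like this:

import java.io.IOException;

class ProcessBuilderDemo
{
    public static void main(String[] args)
        throws IOException, InterruptedException
    {
        // Each argument is passed separately, so no manual quoting is needed
        ProcessBuilder pb = new ProcessBuilder(
            "C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe");
        Process proc = pb.start();

        // Block until the launched process terminates
        int exitCode = proc.waitFor();
        System.out.println("Exited with code " + exitCode);
    }
}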
This article is contributed by Anannya Uberoi. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
DevParzival404
Java-Library
Java
Java
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Initialize an ArrayList in Java
HashMap in Java with Examples
Interfaces in Java
Object Oriented Programming (OOPs) Concept in Java
How to iterate any Map in Java
ArrayList in Java
Multidimensional Arrays in Java
Stack Class in Java
Set in Java
LinkedList in Java | [
{
"code": null,
"e": 24462,
"s": 24434,
"text": "\n09 Aug, 2019"
},
{
"code": null,
"e": 24712,
"s": 24462,
"text": "Java contains the functionality of initiating an external process – an executable file or an existing application on the system, such as Google Chrome or the Media Player- by simple Java code. One way is to use following two classes for the purpose:"
},
{
"code": null,
"e": 24739,
"s": 24712,
"text": "Process classRuntime class"
},
{
"code": null,
"e": 24753,
"s": 24739,
"text": "Process class"
},
{
"code": null,
"e": 24767,
"s": 24753,
"text": "Runtime class"
},
{
"code": null,
"e": 25191,
"s": 24767,
"text": "The Process class present in the java.lang package contains many useful methods such as killing a subprocess, making a thread wait for some time, returning the I/O stream of the subprocess etc. Subsequently, the Runtime class provides a portal to interact with the Java runtime environment. It contains methods to execute a process, give the number of available processors, display the free memory in the JVM, among others."
},
{
"code": "// A sample Java program (Written for Windows OS)// to demonstrate creation of external process // using Runtime and Processclass CoolStuff{ public static void main(String[] args) { try { // Command to create an external process String command = \"C:\\Program Files (x86)\"+ \"\\Google\\Chrome\\Application\\chrome.exe\"; // Running the above command Runtime run = Runtime.getRuntime(); Process proc = run.exec(command); } catch (IOException e) { e.printStackTrace(); } }}",
"e": 25793,
"s": 25191,
"text": null
},
{
"code": null,
"e": 26151,
"s": 25793,
"text": "Runtime.getRuntime() simply returns the Runtime object associated with the current Java application. The executable path is specified in the process exec(String path) method. We also have an IOException try-catch block to handle the case where the file to be executed is not found. On running the code, an instance of Google Chrome opens up on the computer."
},
{
"code": null,
"e": 26310,
"s": 26151,
"text": "Another way to create an external process is using ProcessBuilder which has been discussed in below post.ProcessBuilder in Java to create a basic online Judge"
},
{
"code": null,
"e": 26612,
"s": 26310,
"text": "This article is contributed by Anannya Uberoi. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks."
},
{
"code": null,
"e": 26737,
"s": 26612,
"text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above."
},
{
"code": null,
"e": 26752,
"s": 26737,
"text": "DevParzival404"
},
{
"code": null,
"e": 26765,
"s": 26752,
"text": "Java-Library"
},
{
"code": null,
"e": 26770,
"s": 26765,
"text": "Java"
},
{
"code": null,
"e": 26775,
"s": 26770,
"text": "Java"
},
{
"code": null,
"e": 26873,
"s": 26775,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26927,
"s": 26895,
"text": "Initialize an ArrayList in Java"
},
{
"code": null,
"e": 26957,
"s": 26927,
"text": "HashMap in Java with Examples"
},
{
"code": null,
"e": 26976,
"s": 26957,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 27027,
"s": 26976,
"text": "Object Oriented Programming (OOPs) Concept in Java"
},
{
"code": null,
"e": 27058,
"s": 27027,
"text": "How to iterate any Map in Java"
},
{
"code": null,
"e": 27076,
"s": 27058,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 27108,
"s": 27076,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 27128,
"s": 27108,
"text": "Stack Class in Java"
},
{
"code": null,
"e": 27140,
"s": 27128,
"text": "Set in Java"
}
] |
In JavaScript, can we use a new line in console.log? | Yes, we can use a new line using “\n” in console.log(). Following is the code −
const studentDetailsObject = new Object()
studentDetailsObject.name = 'David'
studentDetailsObject.subjectName = 'JavaScript'
studentDetailsObject.countryName = 'US'
studentDetailsObject.print = function(){
console.log('hello David');
}
console.log("studentObject", "\n", studentDetailsObject);
To run the above program, you need to use the following command −
node fileName.js.
Here, my file name is demo170.js.
This will produce the following output −
PS C:\Users\Amit\javascript-code> node demo170.js
studentObject
{
name: 'David',
subjectName: 'JavaScript',
countryName: 'US',
print: [Function]
} | [
{
"code": null,
"e": 1142,
"s": 1062,
"text": "Yes, we can use a new line using “\\n” in console.log(). Following is the code −"
},
{
"code": null,
"e": 1440,
"s": 1142,
"text": "const studentDetailsObject = new Object()\nstudentDetailsObject.name = 'David'\nstudentDetailsObject.subjectName = 'JavaScript'\nstudentDetailsObject.countryName = 'US'\nstudentDetailsObject.print = function(){\n console.log('hello David');\n}\nconsole.log(\"studentObject\", \"\\n\", studentDetailsObject);"
},
{
"code": null,
"e": 1506,
"s": 1440,
"text": "To run the above program, you need to use the following command −"
},
{
"code": null,
"e": 1524,
"s": 1506,
"text": "node fileName.js."
},
{
"code": null,
"e": 1558,
"s": 1524,
"text": "Here, my file name is demo170.js."
},
{
"code": null,
"e": 1599,
"s": 1558,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1758,
"s": 1599,
"text": "PS C:\\Users\\Amit\\javascript-code> node demo170.js\nstudentObject\n{\n name: 'David',\n subjectName: 'JavaScript',\n countryName: 'US',\n print: [Function]\n}"
}
] |
Angular 2 - Handling Events | In Angular 2, events such as button click or any other sort of events can also be handled very easily. The events get triggered from the html page and are sent across to Angular JS class for further processing.
Let’s look at an example of how we can achieve event handling. In our example, we will look at displaying a click button and a status property. Initially, the status property will be true. When the button is clicked, the status property will then become false.
Step 1 − Change the code of the app.component.ts file to the following.
import {
Component
} from '@angular/core';
@Component ({
selector: 'my-app',
templateUrl: 'app/app.component.html'
})
export class AppComponent {
Status: boolean = true;
clicked(event) {
this.Status = false;
}
}
Following points need to be noted about the above code.
We are defining a variable called status of the type Boolean which is initially true.
We are defining a variable called status of the type Boolean which is initially true.
Next, we are defining the clicked function which will be called whenever our button is clicked on our html page. In the function, we change the value of the Status property from true to false.
Next, we are defining the clicked function which will be called whenever our button is clicked on our html page. In the function, we change the value of the Status property from true to false.
Step 2 − Make the following changes to the app/app.component.html file, which is the template file.
<div>
{{Status}}
<button (click) = "clicked()">Click</button>
</div>
Following points need to be noted about the above code.
We are first just displaying the value of the Status property of our class.
We are first just displaying the value of the Status property of our class.
Then are defining the button html tag with the value of Click. We then ensure that the click event of the button gets triggered to the clicked event in our class.
Then are defining the button html tag with the value of Click. We then ensure that the click event of the button gets triggered to the clicked event in our class.
Step 3 − Save all the code changes and refresh the browser, you will get the following output.
Step 4 − Click the Click button, you will get the following output. | [
{
"code": null,
"e": 2508,
"s": 2297,
"text": "In Angular 2, events such as button click or any other sort of events can also be handled very easily. The events get triggered from the html page and are sent across to Angular JS class for further processing."
},
{
"code": null,
"e": 2769,
"s": 2508,
"text": "Let’s look at an example of how we can achieve event handling. In our example, we will look at displaying a click button and a status property. Initially, the status property will be true. When the button is clicked, the status property will then become false."
},
{
"code": null,
"e": 2841,
"s": 2769,
"text": "Step 1 − Change the code of the app.component.ts file to the following."
},
{
"code": null,
"e": 3092,
"s": 2841,
"text": "import { \n Component \n} from '@angular/core'; \n\n@Component ({ \n selector: 'my-app', \n templateUrl: 'app/app.component.html' \n}) \n\nexport class AppComponent { \n Status: boolean = true; \n clicked(event) { \n this.Status = false; \n } \n}"
},
{
"code": null,
"e": 3148,
"s": 3092,
"text": "Following points need to be noted about the above code."
},
{
"code": null,
"e": 3234,
"s": 3148,
"text": "We are defining a variable called status of the type Boolean which is initially true."
},
{
"code": null,
"e": 3320,
"s": 3234,
"text": "We are defining a variable called status of the type Boolean which is initially true."
},
{
"code": null,
"e": 3513,
"s": 3320,
"text": "Next, we are defining the clicked function which will be called whenever our button is clicked on our html page. In the function, we change the value of the Status property from true to false."
},
{
"code": null,
"e": 3706,
"s": 3513,
"text": "Next, we are defining the clicked function which will be called whenever our button is clicked on our html page. In the function, we change the value of the Status property from true to false."
},
{
"code": null,
"e": 3806,
"s": 3706,
"text": "Step 2 − Make the following changes to the app/app.component.html file, which is the template file."
},
{
"code": null,
"e": 3885,
"s": 3806,
"text": "<div> \n {{Status}} \n <button (click) = \"clicked()\">Click</button> \n</div> "
},
{
"code": null,
"e": 3941,
"s": 3885,
"text": "Following points need to be noted about the above code."
},
{
"code": null,
"e": 4017,
"s": 3941,
"text": "We are first just displaying the value of the Status property of our class."
},
{
"code": null,
"e": 4093,
"s": 4017,
"text": "We are first just displaying the value of the Status property of our class."
},
{
"code": null,
"e": 4256,
"s": 4093,
"text": "Then are defining the button html tag with the value of Click. We then ensure that the click event of the button gets triggered to the clicked event in our class."
},
{
"code": null,
"e": 4419,
"s": 4256,
"text": "Then are defining the button html tag with the value of Click. We then ensure that the click event of the button gets triggered to the clicked event in our class."
},
{
"code": null,
"e": 4514,
"s": 4419,
"text": "Step 3 − Save all the code changes and refresh the browser, you will get the following output."
},
{
"code": null,
"e": 4582,
"s": 4514,
"text": "Step 4 − Click the Click button, you will get the following output."
}
] |
Black-Box Attacks on Perceptual Image Hashes with GANs | by Nick Locascio | Towards Data Science | tldr: This post demonstrates that GANs are capable of breaking image hash algorithms in two key ways: (1) Reversal Attack: Synthesizing the original image from the hash (2) Poisoning Attack: synthesizing hash collisions for arbitrary natural image distributions.
A Perceptual image hash (PIH) is a short hexadecimal string (e.g. ‘00081c3c3c181818’ ) based on an image’s appearance. Perceptual image hashes, despite being hashes, are not cryptographically secure hashes. This is by design, because PIHs aim to be smoothly invariant to small changes in the image (rotation, crop, gamma correction, noise addition, adding a border). This is in contrast to cryptographic hash functions that are designed for non-smoothness and to change entirely if any single bit changes.
The perceptual hashes of the below images are only slightly changed by the text modification, but their md5 hashes are completely different.
a_hash(original) = 3c3e0e1a3a1e1e1ea_hash(modified) = 3c3e0e3e3e1e1e1emd5(original) = 8d4e3391a3bca7...md5(modified) = c27baa59355d10...
I won’t delve too much into the details of how these algorithms work: see (here) for more info.
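For a rough feel, though, the simplest variant (average hash) can be sketched in a few lines of Python. This is a simplified illustration of my own; the actual imagehash implementation differs in details such as bit ordering:

from PIL import Image
import numpy as np

def a_hash(path, hash_size=8):
    # Shrink to an 8x8 grayscale thumbnail
    pixels = np.asarray(
        Image.open(path).convert("L").resize((hash_size, hash_size)))
    # One bit per pixel: is it brighter than the mean?
    bits = (pixels > pixels.mean()).flatten()
    # Pack the 64 bits into a 16-character hex string
    value = 0
    for bit in bits:
        value = (value << 1) | int(bit)
    return format(value, "016x")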
Despite not being cryptographically solid, PIHs are still used in a wide-range of privacy-sensitive applications.
Because perceptual image hashes have a smoothness property relating inputs to outputs, we can model this process and its inverse with a neural network. GANs are well suited for this generation task, especially because there are many potential images from a single image hash. GANs allow us to learn the image manifold and enable the model to explore these differing, but valid image distribution outputs.
I train the Pix2Pix network (paper, github) to convert perceptual image hashes computed using the a_hash function from the python perceptual hashing library, imagehash (github). For this demonstration, I train on the celebA faces dataset, though the black-box attack is general and applicable to other datasets, image distributions, and hash functions. I arrange the image hash produced by a_hash into a 2d array to serve as the input image to the pix2pix model.
Below are some of the results of this Hash Reversal Attack. In many cases the attack is able to generate a look-alike face to the original, and even in failure cases often represents the correct gender, hairstyle, and race of the original image. Note: the face textures aren’t perfect as the model is not fully converged. This was trained on limited compute resources using Google’s Colab Tool.
Many applications assume these hashes are privacy-preserving, but these results above show that they can be reversed. Any service that claims security by storing sensitive image hashes is misleading its users and at potential risk for an attack like this.
Hash Poisoning attack is relevant to the following scenario:
A system that allows users to submit photos to a database of images to ban. A human reviews the image to ensure it is an image that deserves banning (and that the image is say, not the Coca-Cola logo). If approved, this hash gets added to the database and is checked against whenever a new image is uploaded. If this new image’s hash collides with the banned hash, the image is prevented from being uploaded.
If a malicious user were to somehow trick the human reviewer into accepting the Coca-Cola logo as a banned image, the database could be ‘poisoned’ by containing hashes of images that should be sharable. In fact we just need the human reviewer to accept an image that has a hash-collision with the Coca-Cola logo! This human-fooling task can be accomplished with our learned generative model.
In the described model, we can reverse hashes into approximates of their original images. However, these generated images do not always hash exactly back to the original hash. To apply this attack successfully, we have to modify the pix2pix objective slightly to ensure the operation is properly invertible and hashes the original image back to the true original hash.
I add an additional hash-cycle loss term to the standard pix2pix loss. This computes the hash on the generated image and computes a pixel-wise cross-entropy loss between the true hash and the generated image’s hash. In limited experiments, this additional loss term brings the generated hash collision rate from ~30% to ~80%.
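One plausible way to make that term differentiable is a sigmoid relaxation of the average hash. The PyTorch sketch below is my assumption of such a formulation, not the author's actual code:

import torch
import torch.nn.functional as F

def soft_average_hash(img, hash_size=8, temperature=50.0):
    # img: (N, 1, H, W) grayscale tensor with values in [0, 1]
    small = F.adaptive_avg_pool2d(img, hash_size)
    mean = small.mean(dim=(2, 3), keepdim=True)
    # Soft version of the hard "pixel > mean" bit comparison
    return torch.sigmoid(temperature * (small - mean)).flatten(1)

def hash_cycle_loss(generated, target_bits):
    # target_bits: (N, 64) tensor of 0/1 bits from the true hash
    return F.binary_cross_entropy(soft_average_hash(generated), target_bits)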
Above is a diagram illustrating our network generating an image of a face that has a hash collision with the Coca-Cola logo. These share the same hashes and would allow a user to poison a hash database and prevent a corporate logo from being uploaded to a platform. Here are some more corporate logos with generated faces that are hash collisions.
Don’t use perceptual image hashing in privacy or content sensitive applications! Or at least don’t store them without some additional security measures.
A scheme that would be more secure is to generate many potential hashes for an image (apply image transformations to create new hashes), put those hashes through a provably secure hash function like md5 and store those hashed hashes in your database to check against when matching future images.
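A minimal sketch of that scheme, assuming the imagehash and hashlib libraries (the particular transforms chosen here are only illustrative):

import hashlib
import imagehash
from PIL import Image, ImageOps

def stored_digests(path):
    img = Image.open(path)
    # Derive several perceptual hashes from simple transformations
    variants = [img, ImageOps.mirror(img), img.rotate(5), img.rotate(-5)]
    hashes = [str(imagehash.average_hash(v)) for v in variants]
    # Store only the md5 of each perceptual hash, never the hash itself
    return {hashlib.md5(h.encode()).hexdigest() for h in hashes}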
A potential solution would be to not disclose which hash function is being used. But this is security by obscurity. It does not prevent someone with access to the database or codebase from reversing the stored hashes or poisoning the hash pool with the detailed attack.
Thanks to Duncan Wilson, Ishaan Gulrajani, Karthik Narasimhan, Harini Suresh, and Eduardo DeLeon for their feedback and suggestions on this work. Thanks to Christopher Hesse for the great pix2pix repository, and Jens Segers for the imagehash python library. | [
{
"code": null,
"e": 434,
"s": 171,
"text": "tldr: This post demonstrates that GANs are capable of breaking image hash algorithms in two key ways: (1) Reversal Attack: Synthesizing the original image from the hash (2) Poisoning Attack: synthesizing hash collisions for arbitrary natural image distributions."
},
{
"code": null,
"e": 940,
"s": 434,
"text": "A Perceptual image hash (PIH) is a short hexadecimal string (e.g. ‘00081c3c3c181818’ ) based on an image’s appearance. Perceptual image hashes, despite being hashes, are not cryptographically secure hashes. This is by design, because PIHs aim to be smoothly invariant to small changes in the image (rotation, crop, gamma correction, noise addition, adding a border). This is in contrast to cryptographic hash functions that are designed for non-smoothness and to change entirely if any single bit changes."
},
{
"code": null,
"e": 1081,
"s": 940,
"text": "The perceptual hashes of the below images are only slightly changed by the text modification, but their md5 hashes are completely different."
},
{
"code": null,
"e": 1218,
"s": 1081,
"text": "a_hash(original) = 3c3e0e1a3a1e1e1ea_hash(modified) = 3c3e0e3e3e1e1e1emd5(original) = 8d4e3391a3bca7...md5(modified) = c27baa59355d10..."
},
{
"code": null,
"e": 1314,
"s": 1218,
"text": "I won’t delve too much into the details of how these algorithms work: see (here) for more info."
},
{
"code": null,
"e": 1428,
"s": 1314,
"text": "Despite not being cryptographically solid, PIHs are still used in a wide-range of privacy-sensitive applications."
},
{
"code": null,
"e": 1833,
"s": 1428,
"text": "Because perceptual image hashes have a smoothness property relating inputs to outputs, we can model this process and its inverse with a neural network. GANs are well suited for this generation task, especially because there are many potential images from a single image hash. GANs allow us to learn the image manifold and enable the model to explore these differing, but valid image distribution outputs."
},
{
"code": null,
"e": 2291,
"s": 1833,
"text": "I train the Pix2Pix network (paper, github) to convert perceptual images hashes computed using the a_hash function from the standard python hashing library, imagehash (github). For this demonstration, I train on celebA faces dataset, though the black-box attack is general and applicable to other datasets, image distributions, and hash functions. I arrange the image hash produced by a_hash into a 2d array to serve as the input image to the pix2pix model."
},
{
"code": null,
"e": 2686,
"s": 2291,
"text": "Below are some of the results of this Hash Reversal Attack. In many cases the attack is able to generate a look-alike face to the original, and even in failure cases often represents the correct gender, hairstyle, and race of the original image. Note: the face textures aren’t perfect as the model is not fully converged. This was trained on limited compute resources using Google’s Colab Tool."
},
{
"code": null,
"e": 2942,
"s": 2686,
"text": "Many applications assume these hashes are privacy-preserving, but these results above show that they can be reversed. Any service that claims security by storing sensitive image hashes is misleading its users and at potential risk for an attack like this."
},
{
"code": null,
"e": 3003,
"s": 2942,
"text": "Hash Poisoning attack is relevant to the following scenario:"
},
{
"code": null,
"e": 3412,
"s": 3003,
"text": "A system that allows users to submit photos to a database of images to ban. A human reviews the image to ensure it is an image that deserves banning (and that the image is say, not the Coca-Cola logo). If approved, this hash gets added to the database and is checked against whenever a new image is uploaded. If this new image’s hash collides with the banned hash, the image is prevented from being uploaded."
},
{
"code": null,
"e": 3804,
"s": 3412,
"text": "If a malicious user were to somehow trick the human reviewer into accepting the Coca-Cola logo as a banned image, the database could be ‘poisoned’ by containing hashes of images that should be sharable. In fact we just need the human reviewer to accept an image that has a hash-collision with the Coca-Cola logo! This human-fooling task can be accomplished with our learned generative model."
},
{
"code": null,
"e": 4173,
"s": 3804,
"text": "In the described model, we can reverse hashes into approximates of their original images. However, these generated images do not always hash exactly back to the original hash. To apply this attack successfully, we have to modify the pix2pix objective slightly to ensure the operation is properly invertible and hashes the original image back to the true original hash."
},
{
"code": null,
"e": 4499,
"s": 4173,
"text": "I add an additional hash-cycle loss term to the standard pix2pix loss. This computes the hash on the generated image and computes a pixel-wise cross-entropy loss between the true hash and the generated image’s hash. In limited experiments, this additional loss term brings the generated hash collision rate from ~30% to ~80%."
},
{
"code": null,
"e": 4847,
"s": 4499,
"text": "Above is a diagram illustrating our network generating an image of a face that has a hash collision with the Coca-Cola logo. These share the same hashes and would allow a user to poison a hash database and prevent a corporate logo from being uploaded to a platform. Here are some more corporate logos with generated faces that are hash collisions."
},
{
"code": null,
"e": 5000,
"s": 4847,
"text": "Don’t use perceptual image hashing in privacy or content sensitive applications! Or at least don’t store them without some additional security measures."
},
{
"code": null,
"e": 5296,
"s": 5000,
"text": "A scheme that would be more secure is to generate many potential hashes for an image (apply image transformations to create new hashes), put those hashes through a provably secure hash function like md5 and store those hashed hashes in your database to check against when matching future images."
},
{
"code": null,
"e": 5566,
"s": 5296,
"text": "A potential solution would be to not disclose which hash function is being used. But this is security by obscurity. It does not prevent someone with access to the database or codebase from reversing the stored hashes or poisoning the hash pool with the detailed attack."
},
{
"code": null,
"e": 5824,
"s": 5566,
"text": "Thanks to Duncan Wilson, Ishaan Gulrajani, Karthik Narasimhan, Harini Suresh, and Eduardo DeLeon for their feedback and suggestions on this work. Thanks to Christopher Hesse for the great pix2pix repository, and Jens Segers for the imagehash python library."
}
] |
Find maximum element along with its index in Julia - findmax() Method - GeeksforGeeks | 26 Mar, 2020
The findmax() is an inbuilt function in Julia which is used to return the maximum element of the specified collection along with its index. If multiple maximal elements are present in the collection, then the first one will be returned. If any data element is NaN, that element is returned.
Syntax: findmax(itr) or findmax(A; dims)
Parameters:
itr: Specified collection of elements.
A: Specified array.
dims: Specified dimension.
Returns: It returns the maximum elements with their corresponding index.
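For example, the returned tuple can be destructured directly (a quick illustrative snippet):

m, i = findmax([10, 30, 20])
# m is 30, i is 2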
Example 1:
# Julia program to illustrate
# the use of findmax() method

# Getting the maximum elements
# with their corresponding index.
println(findmax([1, 2, 3, 4]))
println(findmax([5, 0, false, 6]))
println(findmax([1, 2, 3, true]))
println(findmax([5, 0, NaN, 6]))
println(findmax([1, 2, 3, 3]))
Output:

(4, 4)
(6, 4)
(3, 3)
(NaN, 3)
(3, 3)
Example 2:
# Julia program to illustrate
# the use of findmax() method

# Getting the value and index of
# the maximum over the given dimensions
A = [5 10; 15 20];
println(findmax(A, dims = 1))
println(findmax(A, dims = 2))
Output:
Julia
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Vectors in Julia
Comments in Julia
Storing Output on a File in Julia
while loop in Julia
Getting rounded value of a number in Julia - round() Method
Reshaping array dimensions in Julia | Array reshape() Method
Decision Making in Julia (if, if-else, Nested-if, if-elseif-else ladder)
Functions in Julia
Creating array with repeated elements in Julia - repeat() Method
Get array dimensions and size of a dimension in Julia - size() Method | [
{
"code": null,
"e": 24100,
"s": 24072,
"text": "\n26 Mar, 2020"
},
{
"code": null,
"e": 24410,
"s": 24100,
"text": "The findmax() is an inbuilt function in julia which is used to return the maximum element of the specified collection along with its index. If there are multiple maximal elements are present in the collection, then the first one will be returned. If there is any data element is NaN, this element is returned."
},
{
"code": null,
"e": 24448,
"s": 24410,
"text": "Syntax:findmax(itr)orfindmax(A; dims)"
},
{
"code": null,
"e": 24460,
"s": 24448,
"text": "Parameters:"
},
{
"code": null,
"e": 24499,
"s": 24460,
"text": "itr: Specified collection of elements."
},
{
"code": null,
"e": 24519,
"s": 24499,
"text": "A: Specified array."
},
{
"code": null,
"e": 24546,
"s": 24519,
"text": "dims: Specified dimension."
},
{
"code": null,
"e": 24619,
"s": 24546,
"text": "Returns: It returns the maximum elements with their corresponding index."
},
{
"code": null,
"e": 24630,
"s": 24619,
"text": "Example 1:"
},
{
"code": "# Julia program to illustrate # the use of findmax() method # Getting the maximum elements# with their corresponding index.println(findmax([1, 2, 3, 4]))println(findmax([5, 0, false, 6]))println(findmax([1, 2, 3, true]))println(findmax([5, 0, NaN, 6]))println(findmax([1, 2, 3, 3]))",
"e": 24914,
"s": 24630,
"text": null
},
{
"code": null,
"e": 24922,
"s": 24914,
"text": "Output:"
},
{
"code": null,
"e": 24933,
"s": 24922,
"text": "Example 2:"
},
{
"code": "# Julia program to illustrate # the use of findmax() method # Getting the value and index of# the maximum over the given dimensionsA = [5 10; 15 20];println(findmax(A, dims = 1))println(findmax(A, dims = 2))",
"e": 25142,
"s": 24933,
"text": null
},
{
"code": null,
"e": 25150,
"s": 25142,
"text": "Output:"
},
{
"code": null,
"e": 25156,
"s": 25150,
"text": "Julia"
},
{
"code": null,
"e": 25254,
"s": 25156,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25293,
"s": 25276,
"text": "Vectors in Julia"
},
{
"code": null,
"e": 25311,
"s": 25293,
"text": "Comments in Julia"
},
{
"code": null,
"e": 25345,
"s": 25311,
"text": "Storing Output on a File in Julia"
},
{
"code": null,
"e": 25365,
"s": 25345,
"text": "while loop in Julia"
},
{
"code": null,
"e": 25425,
"s": 25365,
"text": "Getting rounded value of a number in Julia - round() Method"
},
{
"code": null,
"e": 25486,
"s": 25425,
"text": "Reshaping array dimensions in Julia | Array reshape() Method"
},
{
"code": null,
"e": 25559,
"s": 25486,
"text": "Decision Making in Julia (if, if-else, Nested-if, if-elseif-else ladder)"
},
{
"code": null,
"e": 25578,
"s": 25559,
"text": "Functions in Julia"
},
{
"code": null,
"e": 25643,
"s": 25578,
"text": "Creating array with repeated elements in Julia - repeat() Method"
}
] |
Java nested if statement example | It is always legal to nest if-else statements which means you can use one if or else if statement inside another if or else if statement.
The syntax for a nested if...else is as follows −
if(Boolean_expression 1) {
// Executes when the Boolean expression 1 is true
if(Boolean_expression 2) {
// Executes when the Boolean expression 2 is true
}
}
You can nest else if...else in a similar way to the nested if statement.
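For instance, a sketch of nesting with else if might look like this (illustrative only; the complete runnable example follows below):

if( x == 30 ) {
   if( y == 10 ) {
      System.out.print("X = 30 and Y = 10");
   } else if( y == 20 ) {
      System.out.print("X = 30 and Y = 20");
   } else {
      System.out.print("X = 30, Y is something else");
   }
}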
public class Test {
public static void main(String args[]) {
int x = 30;
int y = 10;
if( x == 30 ) {
if( y == 10 ) {
System.out.print("X = 30 and Y = 10");
}
}
}
}
This will produce the following result −
X = 30 and Y = 10 | [
{
"code": null,
"e": 1200,
"s": 1062,
"text": "It is always legal to nest if-else statements which means you can use one if or else if statement inside another if or else if statement."
},
{
"code": null,
"e": 1250,
"s": 1200,
"text": "The syntax for a nested if...else is as follows −"
},
{
"code": null,
"e": 1423,
"s": 1250,
"text": "if(Boolean_expression 1) {\n // Executes when the Boolean expression 1 is true\n if(Boolean_expression 2) {\n // Executes when the Boolean expression 2 is true\n }\n}"
},
{
"code": null,
"e": 1502,
"s": 1423,
"text": "You can nest else if...else in the similar way as we have nested if statement."
},
{
"code": null,
"e": 1738,
"s": 1512,
"text": "public class Test {\n\n public static void main(String args[]) {\n int x = 30;\n int y = 10;\n\n if( x == 30 ) {\n if( y == 10 ) {\n System.out.print(\"X = 30 and Y = 10\");\n }\n }\n }\n}"
},
{
"code": null,
"e": 1779,
"s": 1738,
"text": "This will produce the following result −"
},
{
"code": null,
"e": 1797,
"s": 1779,
"text": "X = 30 and Y = 10"
}
] |
Ridge Regression Python Example. A tutorial on how to implement Ridge... | by Cory Maklin | Towards Data Science | Overfitting, the process by which a model performs well for training samples but fails to generalize, is one of the main challenges in machine learning. In the proceeding article, we’ll cover how we can use regularization to help prevent overfitting. To be specific, we’ll talk about Ridge Regression, a distant cousin of Linear Regression, and how it can be used to determine the best fitting line.
Before we can begin to describe Ridge Regression, it’s important that you understand variance and bias in the context of machine learning.
The term bias is not the y-intercept but the extent to which the model fails to come up with a plot that approximates the samples. For example, the proceeding line has a high bias since it fails to capture the underlying trend in the data.
On the other hand, the proceeding line has a relatively low bias. If we were to measure the mean square error, it would be much lower compared to the previous example.
In contrast to the statistical definition, variance does not refer to the spread of data relative to the mean. Rather, it characterizes the difference in fits between datasets. In other words, it measures how the accuracy of a model changes when presented with a different dataset. For example, the squiggly line in the proceeding image performs radically differently on other datasets. Therefore, we say it has a high variance.
On the other hand, the straight line has relatively low variance because the mean square error is similar for different datasets.
Ridge Regression is almost identical to Linear Regression except that we introduce a small amount of bias. In return for said bias, we get a significant drop in variance. In other words, by starting out with a slightly worse fit, Ridge Regression performs better against data that doesn’t exactly follow the same pattern as the data the model was trained on.
Adding bias is often referred to as regularization. As the name implies, regularization is used to develop a model that excels at predicting targets for data that follows a regular pattern rather than a specific one. Said another way, the purpose of regularization is to prevent overfitting. Overfitting tends to occur when we use a higher degree polynomial than what is needed to model the data.
To get around this problem, we introduce a regularization term to the loss function. In Ridge Regression, the loss function is the linear least squares function and the regularization is given by the l2-norm.
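Written out, the quantity being minimized is the residual sum of squares plus the scaled penalty, with alpha controlling the regularization strength:

J(w) = ||y − Xw||² + α ||w||²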
Since we are trying to minimize the loss function and w is included in the residual sum of squares, the model will be forced into finding a balance between minimizing the residual sum of squares and minimizing the coefficients.
For a high degree polynomial, the coefficients of the higher order variables will tend towards 0 if the underlying data can be approximated just as well with a low degree polynomial.
If we set the hyperparameter alpha to some large number, in trying to find the minimum value for the cost function, the model will set the coefficients to 0. In other words, the regression line will have a slope of 0.
Finding the coefficients given the added regularization term isn’t all that difficult. We take the cost function, perform a bit of algebra, take the partial derivative with respect to w (the vector of coefficients), make it equal to 0 and then solve for w.
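Carrying that derivation through gives the familiar closed-form solution, where I is the identity matrix:

w = (XᵀX + αI)⁻¹ Xᵀ y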
Let’s see how we can go about implementing Ridge Regression from scratch using Python. To begin, we import the following libraries.
from sklearn.datasets import make_regression
from matplotlib import pyplot as plt
import numpy as np
from sklearn.linear_model import Ridge
We can use the scikit-learn library to generate sample data which is well suited for regression.
X, y, coefficients = make_regression(
    n_samples=50,
    n_features=1,
    n_informative=1,
    n_targets=1,
    noise=5,
    coef=True,
    random_state=1)
Next, we define the hyperparameter alpha. Alpha determines the regularization strength. The larger the value of alpha, the stronger the regularization. In other words, when alpha is a very large number, the bias of the model will be high. An alpha of 0 would result in a model that acts identically to Linear Regression.
alpha = 1
We create the identity matrix. In order for the equation we saw previously to respect the rules of matrix operations, the identity matrix has to be the same size as the matrix X transpose dot X.
n, m = X.shape
I = np.identity(m)
Finally, we solve for w using the equation discuss above.
w = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X) + alpha * I), X.T), y)
In comparing w to the actual coefficient(s) used in generating the data, we can see that they’re not exactly equal to one another but close.
w
coefficients
Let’s take a look at how the regression line fits the data.
plt.scatter(X, y)
plt.plot(X, w*X, c='red')
Let’s do the same thing using the scikit-learn implementation of Ridge Regression. First, we create and train an instance of the Ridge class.
rr = Ridge(alpha=1)
rr.fit(X, y)
w = rr.coef_
We get the same value for w where we solved for it using linear algebra.
w
The regression line is identical to the one above.
plt.scatter(X, y)
plt.plot(X, w*X, c='red')
Next, let’s visualize the effect of the regularization parameter alpha. To start, we set it to 10.
rr = Ridge(alpha=10)
rr.fit(X, y)
w = rr.coef_[0]
plt.scatter(X, y)
plt.plot(X, w*X, c='red')
As we can see, the regression line is no longer a perfect fit. In other words, the model has a higher bias compared to the one with an alpha of 1. For emphasis, let’s try an alpha of 100.
rr = Ridge(alpha=100)
rr.fit(X, y)
w = rr.coef_[0]
plt.scatter(X, y)
plt.plot(X, w*X, c='red')
When alpha tends towards positive infinity, the regression line will tend towards a mean of 0 since that would minimize the variance across different datasets. | [
{
"code": null,
"e": 572,
"s": 172,
"text": "Overfitting, the process by which a model performs well for training samples but fails to generalize, is one of the main challenges in machine learning. In the proceeding article, we’ll cover how we can use regularization to help prevent overfitting. To be specific, we’ll talk about Ridge Regression, a distant cousin of Linear Regression, and how it can be used to determine the best fitting line."
},
{
"code": null,
"e": 711,
"s": 572,
"text": "Before we can begin to describe Ridge Regression, it’s important that you understand variance and bias in the context of machine learning."
},
{
"code": null,
"e": 951,
"s": 711,
"text": "The term bias is not the y-intercept but the extent to which the model fails to come up with a plot that approximates the samples. For example, the proceeding line has a high bias since it fails to capture the underlying trend in the data."
},
{
"code": null,
"e": 1119,
"s": 951,
"text": "On the other hand, the proceeding line has a relatively low bias. If we were to measure the mean square error, it would be much lower compared to the previous example."
},
{
"code": null,
"e": 1543,
"s": 1119,
"text": "In contrast to the statistical definition, variance does not refer the spread of data relative to the mean. Rather, it characterizes the difference in fits between datasets. In other words, it measures how the accuracy of a model changes when presented with a different dataset. For example, the squiggly line in the proceeding image performs radically different on other datasets. Therefore, we say it has a high variance."
},
{
"code": null,
"e": 1673,
"s": 1543,
"text": "On the other hand, the straight line has relatively low variance because the mean square error is similar for different datasets."
},
{
"code": null,
"e": 2032,
"s": 1673,
"text": "Ridge Regression is almost identical to Linear Regression except that we introduce a small amount of bias. In return for said bias, we get a significant drop in variance. In other words, by starting out with a slightly worse fit, Ridge Regression performs better against data that doesn’t exactly follow the same pattern as the data the model was trained on."
},
{
"code": null,
"e": 2424,
"s": 2032,
"text": "Adding bias, is often referred to as regularization. As the name implies, regularization is used to develop a model that excels at predicting targets for data that follows a regular pattern rather than specific. Said another way, the purpose of regularization is to prevent overfitting. Overfitting tends to occur when we use a higher degree polynomial than what is needed to model the data."
},
{
"code": null,
"e": 2633,
"s": 2424,
"text": "To get around this problem, we introduce a regularization term to the loss function. In Ridge Regression, the loss function is the linear least squares function and the regularization is given by the l2-norm."
},
{
"code": null,
"e": 2861,
"s": 2633,
"text": "Since we are trying to minimize the loss function and w is included in the residual sum of squares, the model will be forced into finding a balance between minimizing the residual sum of squares and minimizing the coefficients."
},
{
"code": null,
"e": 3044,
"s": 2861,
"text": "For a high degree polynomial, the coefficients of the higher order variables will tend towards 0 if the underlying data can be approximated just as well with a low degree polynomial."
},
{
"code": null,
"e": 3262,
"s": 3044,
"text": "If we set the hyperparameter alpha to some large number, in trying to find the minimum value for the cost function, the model will set the coefficients to 0. In other words, the regression line will have a slope of 0."
},
{
"code": null,
"e": 3519,
"s": 3262,
"text": "Finding the coefficients given the added regularization term isn’t all that difficult. We take the cost function, perform a bit of algebra, take the partial derivative with respect to w (the vector of coefficients), make it equal to 0 and then solve for w."
},
{
"code": null,
"e": 3651,
"s": 3519,
"text": "Let’s see how we can go about implementing Ridge Regression from scratch using Python. To begin, we import the following libraries."
},
{
"code": null,
"e": 3788,
"s": 3651,
"text": "from sklearn.datasets import make_regressionfrom matplotlib import pyplot as pltimport numpy as npfrom sklearn.linear_model import Ridge"
},
{
"code": null,
"e": 3885,
"s": 3788,
"text": "We can use the scikit-learn library to generate sample data which is well suited for regression."
},
{
"code": null,
"e": 4038,
"s": 3885,
"text": "X, y, coefficients = make_regression( n_samples=50, n_features=1, n_informative=1, n_targets=1, noise=5, coef=True, random_state=1)"
},
{
"code": null,
"e": 4355,
"s": 4038,
"text": "Next, we define the hyperparameter alpha. Alpha determines the regularization strength. The larger value for alpha, the stronger the regularization. In other words, when alpha is a very larger number, the bias of the model will be high. An alpha of 1, will result in a model that acts identical to Linear Regression."
},
{
"code": null,
"e": 4365,
"s": 4355,
"text": "alpha = 1"
},
{
"code": null,
"e": 4560,
"s": 4365,
"text": "We create the identity matrix. In order for the equation we saw previously to respect the rules of matrix operations, the identity matrix has to be the same size as the matrix X transpose dot X."
},
{
"code": null,
"e": 4593,
"s": 4560,
"text": "n, m = X.shapeI = np.identity(m)"
},
{
"code": null,
"e": 4651,
"s": 4593,
"text": "Finally, we solve for w using the equation discuss above."
},
{
"code": null,
"e": 4721,
"s": 4651,
"text": "w = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X) + alpha * I), X.T), y)"
},
{
"code": null,
"e": 4862,
"s": 4721,
"text": "In comparing w to the actual coefficient(s) used in generating the data, we can see that they’re not exactly equal to one another but close."
},
{
"code": null,
"e": 4864,
"s": 4862,
"text": "w"
},
{
"code": null,
"e": 4877,
"s": 4864,
"text": "coefficients"
},
{
"code": null,
"e": 4937,
"s": 4877,
"text": "Let’s take a look at how the regression line fits the data."
},
{
"code": null,
"e": 4980,
"s": 4937,
"text": "plt.scatter(X, y)plt.plot(X, w*X, c='red')"
},
{
"code": null,
"e": 5122,
"s": 4980,
"text": "Let’s do the same thing using the scikit-learn implementation of Ridge Regression. First, we create and train an instance of the Ridge class."
},
{
"code": null,
"e": 5166,
"s": 5122,
"text": "rr = Ridge(alpha=1)rr.fit(X, y)w = rr.coef_"
},
{
"code": null,
"e": 5239,
"s": 5166,
"text": "We get the same value for w where we solved for it using linear algebra."
},
{
"code": null,
"e": 5241,
"s": 5239,
"text": "w"
},
{
"code": null,
"e": 5292,
"s": 5241,
"text": "The regression line is identical to the one above."
},
{
"code": null,
"e": 5335,
"s": 5292,
"text": "plt.scatter(X, y)plt.plot(X, w*X, c='red')"
},
{
"code": null,
"e": 5434,
"s": 5335,
"text": "Next, let’s visualize the effect of the regularization parameter alpha. To start, we set it to 10."
},
{
"code": null,
"e": 5524,
"s": 5434,
"text": "rr = Ridge(alpha=10)rr.fit(X, y)w = rr.coef_[0]plt.scatter(X, y)plt.plot(X, w*X, c='red')"
},
{
"code": null,
"e": 5712,
"s": 5524,
"text": "As we can see, the regression line is no longer a perfect fit. In other words, the model has a higher bias compared to the one with an alpha of 1. For emphasis, let’s try an alpha of 100."
},
{
"code": null,
"e": 5803,
"s": 5712,
"text": "rr = Ridge(alpha=100)rr.fit(X, y)w = rr.coef_[0]plt.scatter(X, y)plt.plot(X, w*X, c='red')"
}
] |
Could Pluto Be A Real Jupyter Replacement? | by Emmett Boudreau | Towards Data Science | Are you a Julia developer who is sick and tired of Jupyter Notebook always getting in the way of your programming?
Me neither.
Despite my overall lack of need for a new notebook system, as I don’t even use the one I have that often, I decided to take a look at Pluto.jl because of recommendation from a friend. Pluto.jl is a fully-featured web/markdown/code notebook that comes in the form of a simple Julia package. It features all of the great things that you might expect from your typical Jupyter notebook. For a while, I actually used a notebook written in Scala to write Scala, it made me miss Jupyter, and frankly, its replacement,
the Spylon kernel
put a new tombstone on Scala’s data analysis grave. All of this is to say, while I am a huge proponent of Julia, I am not sure that this is going to change my mind at all — and the theme of this could turn out to be more of a question,
Why not just use Jupyter?
Although I would say that I consider myself a skeptic to these kind of things, I certainly would like to give Pluto.jl a chance. Firstly, I wanted to see what some of the key differences are between Pluto and Jupyter. If there are improvements to be had by using Pluto, then it might end up being a better choice for me as a whole — because I write a lot of Julia.
The first place I decided to check for information was the Pluto.jl Github README markdown file, of course. It was at this moment I realized my criticism of Pluto.jl was completely unfounded. The notebook is actually incredibly smart, and is constantly analyzing the code you write. For example, if I were to have an out-of-state code that has not been ran with a dependency I am trying to use in it, Pluto will automatically run that code for me.
Furthermore, whenever a variable or function is changed, Pluto automatically runs every other code cell associated with it. This is convenient, and completely cuts out the go to the top of the notebook and spam shift+enter activity. It was a great change of pace not having to worry as much about the reproducability of my notebook, as well. Julia has great methods to managing packages, so as a result most of the time dependency issues will never be felt by another programmer. Adding to that, though is the effort that has been made to have code run the state of the kernel.
Taking a quote directly from the Pluto.jl README,
At any instant, the program state is completely described by the code you see.
This solves a big problem that I have with Jupyter. That problem is made even worse with JIT, which works a lot better when you don’t need to restart and run all cells all of the time. If you’d like to learn more about reproducible research and the fundamental problems I have run into with Jupyter, here are two articles I have written on that exact topic!:
towardsdatascience.com
towardsdatascience.com
A great example of what makes this so cool for analytics is that I could be actively altering data in one cell while simultaneously visualizing it in another cell. Just a change to the way Jupyter handles kernels and states is actually incredibly refreshing. Another great thing is that the Pluto.jl interpreter is made to read Julia code — as in real Julia code. There are no .ipynbs, only .jl files. This means anything developed in Pluto can also be used across the entire spectrum of Julia, and worked on in any session by any programmer with the same code very easily.
I figured if I was going to give Pluto.jl a decent try, I might as well dive into a classic Data Science project with it. For my tech-stack I will be using DataFrames.jl data frames, Gadfly.jl for visualization, and Lathe.jl for statistical analysis and machine-learning — exciting! Of course, to try out Pluto, we are going to need to add it first:
julia> ]
pkg> add Pluto
After adding Pluto, it can be run using the Pluto.run() function.
using Pluto
Pluto.run()
Whenever you startup Pluto for the first time, you will be greeted with this page. I think it is a little tedious that you have to manually enter a path into the text-box to open a notebook. The first thing I had to do getting into these notebooks was run pwd() to see where I am on my file-system.
Interestingly, I am in my .julia directory located at ~/. This is unfortunate because I was hoping to load some .CSV data in. Let’s see if that changes when we save the notebook.
Good news!
So here are a few major concerns I have with getting a project done in this notebook. Firstly, whenever I ctrl+enter, it does not create a new cell. Secondly, stdout does not work in these notebooks. I cannot for the life of me understand why that is, but you cannot print() or println(). I digress, let’s actually get some data read in. Another cool package I recently picked up (Emmett went shopping) is PrettyTables.jl, so hopefully my data frames will at least look good.
This is going to take some getting used to. I do like the way the notebook looks, and there are certainly some aspects I am enjoying, such as the beautiful output.
I am hungry for a categorical problem, so among this small list of features the “Precip Type” classification really caught my eye. For all I know, however, this data frame could only be a classification of rainy days. Fortunately, we can check the number of unique values in a given array using the Set type.
length(Set(df[Symbol("Precip Type")]))
Which returned 3. Of course, this is plenty of categories for a categorical problem, however, one of the categories happened to be null — meaning that we should probably be cleaning this data instead of dipping our hands deep into the machine-learning cookie jar. Fortunately, the “Summary” feature had around 27 different classifications on this data-set with around 96,000 observations.
Set(df[Symbol("Summary")])
A great way to check for missing observations in the Julia language is to collect the missing observations and compare it to the original length of the data frame.
testmissings = collect(skipmissing(df[:Summary]))
length(testmissings)
Good news — it’s the same number! I am no astronomer or weather man, but if I had to guess I would say that there is a pretty good chance that these descriptions could correlate with the humidity. Just to make sure, though, let’s filter the data and perform a test.
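As an aside, the same check can be done more directly with a one-liner (standard Julia, though I did not run it in this notebook):

count(ismissing, df[:Summary])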
It was here where I really got annoyed with the notebook if I am being honest. A lot of things about it were just things that I certainly could not get used to. Easily the most aggravating thing was that I kept deleting three cells at a time trying to remove some text. After all of this time of using it, I still haven’t been able to find out how to add markdown.
I also ended up getting formally lost on how to fix a copy mistake I made where I accidentally altered my data frame. The notebook wouldn’t let me redefine a variable, and made it extremely tedious to work with this relatively basic code. Despite this, I did get a model fit — and it got 40 percent accuracy. After that I thought maybe these features weren’t as correlated as I previously thought... Their statistical tests weren’t showing very much significance, anyway.
Another big complaint I have with Pluto is the documentation browser; it might as well not even be there, and it should at the very least allow one to run the regular help function. I think it is a great idea, however, that maybe just needs some ironing out — and possibly some NLP suggestive text.
I digress, overall I would say I am a fan of the project, but certainly not going to be using it anytime soon. I am not a big fan of notebooks in general, but they do have their uses! If I had to choose between Pluto.jl and Jupyter, I would probably be choosing Jupyter with IJulia. I would love to hope that this would change in the future, however. With a few more things here and there I am certain it could be a promising development environment. However, at this point in time I certainly cannot suggest this... It was oddly frustrating, and to be blunt — it felt like the IDE got in my way more than it helped me.
Hopefully Pluto.jl will be better in the future. There is a chance, too, that I am not the target audience, or that others might enjoy it a lot more than I do. There are certainly some great ideas and genuinely cool stuff going on under the hood, though, and I am somewhat excited to see what it will be like in the future. For now though, I will definitely be sticking to Atom and IJulia. | [
{
"code": null,
"e": 287,
"s": 172,
"text": "Are you a Julia developer who is sick and tired of Jupyter Notebook always getting in the way of your programming?"
},
{
"code": null,
"e": 299,
"s": 287,
"text": "Me niether."
},
{
"code": null,
"e": 811,
"s": 299,
"text": "Despite my overall lack of need for a new notebook system, as I don’t even use the one I have that often, I decided to take a look at Pluto.jl because of recommendation from a friend. Pluto.jl is a fully-featured web/markdown/code notebook that comes in the form of a simple Julia package. It features all of the great things that you might expect from your typical Jupyter notebook. For a while, I actually used a notebook written in Scala to write Scala, it made me miss Jupyter, and frankly, its replacement,"
},
{
"code": null,
"e": 829,
"s": 811,
"text": "the Spylon kernel"
},
{
"code": null,
"e": 1065,
"s": 829,
"text": "put a new tombstone on Scala’s data analysis grave. All of this is to say, while I am a huge proponent of Julia, I am not sure that this is going to change my mind at all — and the theme of this could turn out to be more of a question,"
},
{
"code": null,
"e": 1091,
"s": 1065,
"text": "Why not just use Jupyter?"
},
{
"code": null,
"e": 1456,
"s": 1091,
"text": "Although I would say that I consider myself a skeptic to these kind of things, I certainly would like to give Pluto.jl a chance. Firstly, I wanted to see what some of the key differences are between Pluto and Jupyter. If there are improvements to be had by using Pluto, then it might end up being a better choice for me as a whole — because I write a lot of Julia."
},
{
"code": null,
"e": 1904,
"s": 1456,
"text": "The first place I decided to check for information was the Pluto.jl Github README markdown file, of course. It was at this moment I realized my criticism of Pluto.jl was completely unfounded. The notebook is actually incredibly smart, and is constantly analyzing the code you write. For example, if I were to have an out-of-state code that has not been ran with a dependency I am trying to use in it, Pluto will automatically run that code for me."
},
{
"code": null,
"e": 2482,
"s": 1904,
"text": "Furthermore, whenever a variable or function is changed, Pluto automatically runs every other code cell associated with it. This is convenient, and completely cuts out the go to the top of the notebook and spam shift+enter activity. It was a great change of pace not having to worry as much about the reproducability of my notebook, as well. Julia has great methods to managing packages, so as a result most of the time dependency issues will never be felt by another programmer. Adding to that, though is the effort that has been made to have code run the state of the kernel."
},
{
"code": null,
"e": 2532,
"s": 2482,
"text": "Taking a quote directly from the Pluto.jl README,"
},
{
"code": null,
"e": 2611,
"s": 2532,
"text": "At any instant, the program state is completely described by the code you see."
},
{
"code": null,
"e": 2950,
"s": 2611,
"text": "This is a big problem that I have with. This is made even worse with JIT, which works a lot better when you don’t need to restart and run all cells all of the time. If you’d like to learn more about reproducible research and the fundamental problems I have ran into with Jupyter, here are two articles I have written on that exact topic!:"
},
{
"code": null,
"e": 2973,
"s": 2950,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 2996,
"s": 2973,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 3570,
"s": 2996,
"text": "A great example of what makes this so cool for analytics is that I could be actively altering data in one cell while simultaneously visualizing it in another cell. Just a change to the way Jupyter handles kernels and states is actually incredibly refreshing. Another great thing is that the Pluto.jl interpreter is made to read Julia code — as in real Julia code. There are no .ipynbs, only .jl files. This means anything developed in Pluto can also be used across the entire spectrum of Julia, and worked on in any session by any programmer with the same code very easily."
},
{
"code": null,
"e": 3920,
"s": 3570,
"text": "I figured if I was going to give Pluto.jl a decent try, I might as well dive into a classic Data Science project with it. For my tech-stack I will be using DataFrames.jl data frames, Gadfly.jl for visualization, and Lathe.jl for statistical analysis and machine-learning — exciting! Of course, to try out Pluto, we are going to need to add it first:"
},
{
"code": null,
"e": 3942,
"s": 3920,
"text": "julia>]pkg> add Pluto"
},
{
"code": null,
"e": 4008,
"s": 3942,
"text": "After adding Pluto, it can be ran using the Pluto.run() function."
},
{
"code": null,
"e": 4032,
"s": 4008,
"text": "using Pluto;Pluto.run()"
},
{
"code": null,
"e": 4331,
"s": 4032,
"text": "Whenever you startup Pluto for the first time, you will be greeted with this page. I think it is a little tedious that you have to manually enter a path into the text-box to open a notebook. The first thing I had to do getting into these notebooks was run pwd() to see where I am on my file-system."
},
{
"code": null,
"e": 4510,
"s": 4331,
"text": "Interestingly, I am in my .Julia directory located at ~/. This is unfortunate because I was hoping to load some .CSV data in. Let’s see if that changes when we save the notebook."
},
{
"code": null,
"e": 4521,
"s": 4510,
"text": "Good news!"
},
{
"code": null,
"e": 4997,
"s": 4521,
"text": "So here are a few major concerns I have with getting a project done in this notebook. Firstly, whenever I ctrl+enter, it does not create a new cell. Secondly, stdout does not work in these notebooks. I cannot for the life of me understand why that is, but you cannot print() or println(). I digress, let’s actually get some data read in. Another cool package I recently picked up (Emmett went shopping) is PrettyTables.jl, so hopefully my data frames will at least look good."
},
{
"code": null,
"e": 5161,
"s": 4997,
"text": "This is going to take some getting used to. I do like the way the notebook looks, and there are certainly some aspects I am enjoying, such as the beautiful output."
},
{
"code": null,
"e": 5471,
"s": 5161,
"text": "I am hungry for a categorical problem, so among this small list of features the “ Precip Type” classification really caught my eye. For all I know, however, this data frame could only be a classification of rainy days. Fortunately, we can check the number of unique values in a given array using the Set type."
},
{
"code": null,
"e": 5510,
"s": 5471,
"text": "length(Set(df[Symbol(\"Precip Type\")]))"
},
{
"code": null,
"e": 5900,
"s": 5510,
"text": "Which returned 3. Of course, this is plenty of categories for a categorical problem, however, one of the categories happened to be null — meaning that we should probably be cleaning this data instead of dipping our hands deep into the machine-learning cookie jar. Fortunately, the “ Summary” feature had around 27 different classifications on this data-set with around 96,000 observations."
},
{
"code": null,
"e": 5927,
"s": 5900,
"text": "Set(df[Symbol(“Summary”)])"
},
{
"code": null,
"e": 6091,
"s": 5927,
"text": "A great way to check for missing observations in the Julia language is to collect the missing observations and compare it to the original length of the data frame."
},
{
"code": null,
"e": 6161,
"s": 6091,
"text": "testmissings = collect(skipmissing(df[:Summary]))length(testmissings)"
},
{
"code": null,
"e": 6427,
"s": 6161,
"text": "Good news — it’s the same number! I am no astronomer or weather man, but if I had to guess I would say that there is a pretty good chance that these descriptions could correlate with the humidity. Just to make sure, though, let’s filter the data and perform a test."
},
{
"code": null,
"e": 6792,
"s": 6427,
"text": "It was here where I really got annoyed with the notebook if I am being honest. A lot of things about it were just things that I certainly could not get used to. Easily the most aggravating thing was that I kept deleting three cells at a time trying to remove some text. After all of this time of using it, I still haven’t been able to find out how to add markdown."
},
{
"code": null,
"e": 7264,
"s": 6792,
"text": "I also ended up getting formally lost on how to fix a copy mistake I made where I accidentally altered my data frame. The notebook wouldn’t let me redefine a variable, and made it extremely tedious to work with this relatively basic code. Despite this, I did get a model fit — and it got 40 percent accuracy. After that I thought maybe these features weren’t as correlated as I previously thought... Their statistical tests weren’t showing very much significance, anyway."
},
{
"code": null,
"e": 7552,
"s": 7264,
"text": "Another big complaint I have with Pluto is the documentation browser, it might as well not even be there and at the very least allow one to run the regular help function. I think it is a great idea, however, that maybe just needs some ironing out — and possibly some NLP suggestive text."
},
{
"code": null,
"e": 8171,
"s": 7552,
"text": "I digress, overall I would say I am a fan of the project, but certainly not going to be using it anytime soon. I am not a big fan of notebooks in general, but they do have their uses! If I had to chose between Pluto.jl and Jupyter, I would probably be choosing Jupyter with IJulia. I would love to hope that this would change in the future, however. With a few more things here and there I am certain it could be a promising development environment. However, at this point in time I certainly cannot suggest this... It was oddly frustrating, and to be blunt — it felt like the IDE got in my way more than it helped me."
}
] |
Stop Hardcoding Sensitive Data in Your Python Applications | by Ahmed Besbes | Towards Data Science | As a data scientist, I use Python daily to build applications that rely on credentials and sensitive settings.
Here are some examples of those, off the top of my head:
API keys to access third-party services
Passwords and credentials
Email addresses or personal data (name, age, social security number, etc.)
Debug flags
Hosts, URL, URI
and obviously, many other things.
Some of these settings are private by nature. Others are sensitive because they can provide insights into cyber-attacks or threats.
In this quick post, we’ll see how to address this problem using environment variables and a special file called .env .We’ll also learn how to interact with this file using the python-dotenv module and keep your sensitive data out of sight.
Without further ado, let’s have a look 🔍
Environment variables are variables that hold data that you don’t want to hardcode into your programs. They’re abstracted away and taken out of the code.
Their values live inside your operating system; they can either be built-in or set via custom applications.
Environment variables are made up of key/value pairs and you can use them to store different types of data.
Domain names
Execution modes (prod, dev, staging)
Credentials such as authentication keys, logins or passwords
Emails addresses, etc.
To access environment variables, you need to use the os module that provides utilities to read and write these values.
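For instance, here is a short illustrative sketch of reading and writing an environment variable with the os module (the variable names are made-up placeholders):

import os

# Write a variable for the current process only
os.environ["APP_MODE"] = "dev"

# Read it back
print(os.environ["APP_MODE"])            # dev

# os.getenv returns None (or a supplied default) when the key is absent
print(os.getenv("MISSING_KEY", "n/a"))   # n/a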
This seems like a handy way to keep sensitive data invisible, right?
Imagine that you need to use an API key in your code without divulging its value. All you need to do is load it from the environment variables:
api_key = os.getenv("SECRET_API_KEY")
But wait, I didn’t remember setting my API key as an environment variable. How do I do that?
There are two common ways:
From your terminal by typing:
export SECRET_API_KEY=XXXXXXXXXXXX
Or by adding the same line to your .zshrc, .bashrc or .bash_profile and sourcing it
If you need environment variables that’ll only be used in a specific project, I don’t recommend these two methods. In fact,
If you use your terminal, you need to remember to set your environment variables every time before running your program and this doesn’t seem right.
If you modify .zshrc or .bashrc every time you need to add a new environment variable, these files can quickly become cluttered with a lot of unnecessary information. If your environment variables are tied to very specific projects, it doesn’t make much sense to set them in a global scope where they can be accessed everywhere and at any point in time.
Fortunately, .env files tackle this issue.
.env files are, first of all, text files that contain key/value pairs of all the environment variables required by your application.
They enable you to use environment variables without polluting the global environment namespace. In fact, each separate project can have its own .env file.
Here’s an example:
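A hypothetical .env file could look like this (these keys are made-up placeholders):

API_KEY=XXXXXXXXXXXX
DB_PASSWORD=change-me
DEBUG=True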
.env files are usually put at the root of a project. To access the values that are listed in them, you need to install the python-dotenv library.
pip install python-dotenv
Then, you only need to add two lines of code:
one to import the library:
from dotenv import load_dotenv
one to look for a .env file and load environment variables from it
load_dotenv()
👉 When these two lines of code are executed, the environment variables are injected into the project runtime. When the project terminates, they’re flushed and they haven’t been added to the global namespace at any time.
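Putting the pieces together, here is a minimal end-to-end sketch (the key name API_KEY is a made-up placeholder):

import os
from dotenv import load_dotenv

# Look for a .env file and inject its variables into the environment
load_dotenv()

# Read one of the injected variables; returns None if the key is missing
api_key = os.getenv("API_KEY")
print(api_key)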
The whole point of a .env file is to externalize sensitive data out of your codebase
Therefore you should never version this file or push it to Github.
You can keep it in your local development environment for testing and you should never share it with the public.
An easy way to avoid this unwanted situation is to add .env to your .gitignore file:
.env
Python applications are not just a series of instructions and code. They are also coupled with data and configuration settings.
Most of the time, this config data is sensitive and needs to be accessed in a private manner.
Fortunately, environment variables and .env files handle this situation.
To get started with .env files, here’s what you can do (a sketch of the resulting project layout follows the list):
Install the python-dotenv module using pip
Create the .env file with the appropriate environment variables of your project
Add it to the .gitignore file to prevent git from committing it
Load the settings into your Python files using the python-dotenv module
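As a purely illustrative sketch (file names are placeholders), the resulting project could be laid out like this:

project/
├── .env          # KEY=value pairs, never committed
├── .gitignore    # contains the line ".env"
└── app.py        # calls load_dotenv() before reading os.getenv(...)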
Although the topic is not that complex to grasp, there is some great material I went through and learned one thing or two I didn’t know about environment variables and the use of .env files.
Here’s a curated list of some of these resources.
python-dotenv documentation
https://betterprogramming.pub/getting-rid-of-hardcoded-python-variables-with-the-dotenv-module-d0aff8ce0c80
https://dev.to/jakewitcher/using-env-files-for-environment-variables-in-python-applications-55a1
https://www.askpython.com/python/python-dotenv-module
https://www.youtube.com/watch?v=YdgIWTYQ69A&ab_channel=JonathanSoma
https://www.youtube.com/watch?v=rKiLd40HIjc&ab_channel=CodingEntrepreneurs
a nifty website to generate .gitignore files: https://www.toptal.com/developers/gitignore
Maybe you’ll learn something too 😉
If you’ve made it this far, I really thank you for your time.
That’ll be all for me today. Until next time for other programming tips👋 | [
{
"code": null,
"e": 283,
"s": 172,
"text": "As a data scientist, I daily use Python to build applications that rely on credentials and sensitive settings."
},
{
"code": null,
"e": 340,
"s": 283,
"text": "Here are some examples of those, off the top of my head:"
},
{
"code": null,
"e": 380,
"s": 340,
"text": "API keys to access third-party services"
},
{
"code": null,
"e": 406,
"s": 380,
"text": "Passwords and credentials"
},
{
"code": null,
"e": 481,
"s": 406,
"text": "Email addresses or personal data (name, age, social security number, etc.)"
},
{
"code": null,
"e": 493,
"s": 481,
"text": "Debug flags"
},
{
"code": null,
"e": 509,
"s": 493,
"text": "Hosts, URL, URI"
},
{
"code": null,
"e": 542,
"s": 509,
"text": "and obviously, much more things."
},
{
"code": null,
"e": 674,
"s": 542,
"text": "Some of these settings are private by nature. Others are sensitive because they can provide insights into cyber-attacks or threats."
},
{
"code": null,
"e": 914,
"s": 674,
"text": "In this quick post, we’ll see how to address this problem using environment variables and a special file called .env .We’ll also learn how to interact with this file using the python-dotenv module and keep your sensitive data out of sight."
},
{
"code": null,
"e": 955,
"s": 914,
"text": "Without further ado, let’s have a look 🔍"
},
{
"code": null,
"e": 1108,
"s": 955,
"text": "Environment variables are variables that hold data that you don’t want to hardcode into your programs.They’re abstracted away and taken out of the code."
},
{
"code": null,
"e": 1216,
"s": 1108,
"text": "Their values live inside your operating system; they can either be built-in or set via custom applications."
},
{
"code": null,
"e": 1324,
"s": 1216,
"text": "Environment variables are made up of key/value pairs and you can use them to store different types of data."
},
{
"code": null,
"e": 1337,
"s": 1324,
"text": "Domain names"
},
{
"code": null,
"e": 1374,
"s": 1337,
"text": "Execution modes (prod, dev, staging)"
},
{
"code": null,
"e": 1435,
"s": 1374,
"text": "Credentials such as authentication keys, logins or passwords"
},
{
"code": null,
"e": 1458,
"s": 1435,
"text": "Emails addresses, etc."
},
{
"code": null,
"e": 1577,
"s": 1458,
"text": "To access environment variables, you need to use the os module that provides utilities to read and write these values."
},
{
"code": null,
"e": 1646,
"s": 1577,
"text": "This seems like a handy way to keep sensitive data invisible, right?"
},
{
"code": null,
"e": 1790,
"s": 1646,
"text": "Imagine that you need to use an API key in your code without divulging its value. All you need to do is load it from the environment variables:"
},
{
"code": null,
"e": 1831,
"s": 1790,
"text": "api_key = os.gentenv(\"SECRET_API_KEY\") "
},
{
"code": null,
"e": 1924,
"s": 1831,
"text": "But wait, I didn’t remember setting my API key as an environment variable. How do I do that?"
},
{
"code": null,
"e": 1951,
"s": 1924,
"text": "There are two common ways:"
},
{
"code": null,
"e": 1981,
"s": 1951,
"text": "From your terminal by typing:"
},
{
"code": null,
"e": 2016,
"s": 1981,
"text": "export SECRET_API_KEY=XXXXXXXXXXXX"
},
{
"code": null,
"e": 2101,
"s": 2016,
"text": "Or by adding the same line to your .zshrc , .bashrc or .bash_profile and sourcing it"
},
{
"code": null,
"e": 2225,
"s": 2101,
"text": "If you need environment variables that’ll only be used in a specific project, I don’t recommend these two methods. In fact,"
},
{
"code": null,
"e": 2374,
"s": 2225,
"text": "If you use your terminal, you need to remember to set your environment variables every time before running your program and this doesn’t seem right."
},
{
"code": null,
"e": 2728,
"s": 2374,
"text": "If you modify .zshrc or .bashrc every time you need to add a new environment variable, these files can quickly become cluttered with a lot of unnecessary information. If your environment variables are tied to very specific projects, it doesn’t make much sense to set them in a global scope where they can be accessed everywhere and at any point in time."
},
{
"code": null,
"e": 2777,
"s": 2728,
"text": "Hopefully, .env files seem to tackle this issue."
},
{
"code": null,
"e": 2910,
"s": 2777,
"text": ".env files are, first of all, text files that contain key/value pairs of all the environment variables required by your application."
},
{
"code": null,
"e": 3066,
"s": 2910,
"text": "They enable you to use environment variables without polluting the global environment namespace. In fact, each separate project can have its own .env file."
},
{
"code": null,
"e": 3085,
"s": 3066,
"text": "Here’s an example:"
},
{
"code": null,
"e": 3231,
"s": 3085,
"text": ".env files are usually put at the root of a project. To access the values that are listed in them, you need to install the python-dotenv library."
},
{
"code": null,
"e": 3257,
"s": 3231,
"text": "pip install python-dotenv"
},
{
"code": null,
"e": 3303,
"s": 3257,
"text": "Then, you only need to add two lines of code:"
},
{
"code": null,
"e": 3330,
"s": 3303,
"text": "one to import the library:"
},
{
"code": null,
"e": 3361,
"s": 3330,
"text": "from dotenv import load_dotenv"
},
{
"code": null,
"e": 3427,
"s": 3361,
"text": "one to look for a .envfile and load environment variables from it"
},
{
"code": null,
"e": 3441,
"s": 3427,
"text": "load_dotenv()"
},
{
"code": null,
"e": 3660,
"s": 3441,
"text": "👉 When these two lines of code are executed, the environment variables are injected into the project runtime.When the project terminates, they’re flushed and they haven’t been added to the global namespace at any time."
},
{
"code": null,
"e": 3745,
"s": 3660,
"text": "The whole point of a .env file is to externalize sensitive data out of your codebase"
},
{
"code": null,
"e": 3812,
"s": 3745,
"text": "Therefore you should never version this file or push it to Github."
},
{
"code": null,
"e": 3925,
"s": 3812,
"text": "You can keep it in your local development environment for testing and you should never share it with the public."
},
{
"code": null,
"e": 4008,
"s": 3925,
"text": "An easy way to avoid this unwanted situation is to add .envto your .gitignorefile:"
},
{
"code": null,
"e": 4013,
"s": 4008,
"text": ".env"
},
{
"code": null,
"e": 4141,
"s": 4013,
"text": "Python applications are not just a series of instructions and code. They are also coupled with data and configuration settings."
},
{
"code": null,
"e": 4235,
"s": 4141,
"text": "Most of the time, this config data is sensitive and needs to be accessed in a private manner."
},
{
"code": null,
"e": 4305,
"s": 4235,
"text": "Hopefully, environment variables and .envfiles handle this situation."
},
{
"code": null,
"e": 4359,
"s": 4305,
"text": "To get started with .env files here’s what you can do"
},
{
"code": null,
"e": 4402,
"s": 4359,
"text": "Install the python-dotenv module using pip"
},
{
"code": null,
"e": 4482,
"s": 4402,
"text": "Create the .env file with the appropriate environment variables of your project"
},
{
"code": null,
"e": 4546,
"s": 4482,
"text": "Add it to the .gitignore file to prevent git from committing it"
},
{
"code": null,
"e": 4622,
"s": 4546,
"text": "Load the the settings into your Python files using the python-dotenv module"
},
{
"code": null,
"e": 4813,
"s": 4622,
"text": "Although the topic is not that complex to grasp, there is some great material I went through and learned one thing or two I didn’t know about environment variables and the use of .env files."
},
{
"code": null,
"e": 4863,
"s": 4813,
"text": "Here’s a curated list of some of these resources."
},
{
"code": null,
"e": 4891,
"s": 4863,
"text": "python-dotenv documentation"
},
{
"code": null,
"e": 4999,
"s": 4891,
"text": "https://betterprogramming.pub/getting-rid-of-hardcoded-python-variables-with-the-dotenv-module-d0aff8ce0c80"
},
{
"code": null,
"e": 5096,
"s": 4999,
"text": "https://dev.to/jakewitcher/using-env-files-for-environment-variables-in-python-applications-55a1"
},
{
"code": null,
"e": 5150,
"s": 5096,
"text": "https://www.askpython.com/python/python-dotenv-module"
},
{
"code": null,
"e": 5218,
"s": 5150,
"text": "https://www.youtube.com/watch?v=YdgIWTYQ69A&ab_channel=JonathanSoma"
},
{
"code": null,
"e": 5293,
"s": 5218,
"text": "https://www.youtube.com/watch?v=rKiLd40HIjc&ab_channel=CodingEntrepreneurs"
},
{
"code": null,
"e": 5383,
"s": 5293,
"text": "a nifty website to generate .gitignore files: https://www.toptal.com/developers/gitignore"
},
{
"code": null,
"e": 5418,
"s": 5383,
"text": "Maybe you’ll learn something too 😉"
},
{
"code": null,
"e": 5480,
"s": 5418,
"text": "If you’ve made it this far, I really thank you for your time."
}
] |
Select any row from a Dataframe in Pandas | Python - GeeksforGeeks | 24 Oct, 2019
In this article, we will learn how to get the rows from a dataframe as a list, without using functions like iloc[]. There are multiple ways to get the rows as a list from a given dataframe. Let’s see them with the help of examples.
# importing pandas as pd
import pandas as pd

# Create the dataframe
df = pd.DataFrame({'Date': ['10/2/2011', '11/2/2011', '12/2/2011', '13/2/11'],
                   'Event': ['Music', 'Poetry', 'Theatre', 'Comedy'],
                   'Cost': [10000, 5000, 15000, 2000]})

# using iterrows() method

# Create an empty list
Row_list = []

# Iterate over each row
for index, rows in df.iterrows():
    # Create list for the current row
    my_list = [rows.Date, rows.Event, rows.Cost]

    # append the list to the final list
    Row_list.append(my_list)

# Print the list
print(Row_list)
Output:
[['10/2/2011', 'Music', 10000], ['11/2/2011', 'Poetry', 5000],
['12/2/2011', 'Theatre', 15000], ['13/2/11', 'Comedy', 2000]]
# Print the first 2 elements
print(Row_list[:2])
Output:
[['10/2/2011', 'Music', 10000], ['11/2/2011', 'Poetry', 5000]]
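As an alternative sketch (not from the original snippet), the same rows-as-lists result can be produced without an explicit loop by converting the underlying NumPy array of the same df:

# Convert every row of df to a list in one call
Row_list = df.values.tolist()

# Print the first 2 elements
print(Row_list[:2])
# [['10/2/2011', 'Music', 10000], ['11/2/2011', 'Poetry', 5000]]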
[
{
"code": null,
"e": 23901,
"s": 23873,
"text": "\n24 Oct, 2019"
},
{
"code": null,
"e": 24138,
"s": 23901,
"text": "In this article, we will learn how to get the rows from a dataframe as a list, without using the functions like ilic[]. There are multiple ways to do get the rows as a list from given dataframe. Let’s see them will the help of examples."
},
{
"code": "# importing pandas as pd import pandas as pd # Create the dataframe df = pd.DataFrame({'Date':['10/2/2011', '11/2/2011', '12/2/2011', '13/2/11'], 'Event':['Music', 'Poetry', 'Theatre', 'Comedy'], 'Cost':[10000, 5000, 15000, 2000]}) # using interrors() method # Create an empty list Row_list =[] # Iterate over each row for index, rows in df.iterrows(): # Create list for the current row my_list =[rows.Date, rows.Event, rows.Cost] # append the list to the final list Row_list.append(my_list) # Print the list print(Row_list) ",
"e": 24747,
"s": 24138,
"text": null
},
{
"code": null,
"e": 24755,
"s": 24747,
"text": "Output:"
},
{
"code": null,
"e": 24888,
"s": 24755,
"text": "[['10/2/2011', 'Music', 10000], ['11/2/2011', 'Poetry', 5000], \n ['12/2/2011', 'Theatre', 15000], ['13/2/11', 'Comedy', 2000]]\n"
},
{
"code": " # Print the first 2 elements print(Row_list[:2]) ",
"e": 24942,
"s": 24888,
"text": null
},
{
"code": null,
"e": 24950,
"s": 24942,
"text": "Output:"
},
{
"code": null,
"e": 25014,
"s": 24950,
"text": "[['10/2/2011', 'Music', 10000], ['11/2/2011', 'Poetry', 5000]]\n"
},
{
"code": null,
"e": 25039,
"s": 25014,
"text": "pandas-dataframe-program"
},
{
"code": null,
"e": 25053,
"s": 25039,
"text": "Python-pandas"
},
{
"code": null,
"e": 25060,
"s": 25053,
"text": "Python"
},
{
"code": null,
"e": 25158,
"s": 25060,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25167,
"s": 25158,
"text": "Comments"
},
{
"code": null,
"e": 25180,
"s": 25167,
"text": "Old Comments"
},
{
"code": null,
"e": 25216,
"s": 25180,
"text": "Box Plot in Python using Matplotlib"
},
{
"code": null,
"e": 25255,
"s": 25216,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 25278,
"s": 25255,
"text": "Bar Plot in Matplotlib"
},
{
"code": null,
"e": 25329,
"s": 25278,
"text": "Multithreading in Python | Set 2 (Synchronization)"
},
{
"code": null,
"e": 25361,
"s": 25329,
"text": "Python Dictionary keys() method"
},
{
"code": null,
"e": 25377,
"s": 25361,
"text": "loops in python"
},
{
"code": null,
"e": 25418,
"s": 25377,
"text": "Python - Call function from another file"
},
{
"code": null,
"e": 25467,
"s": 25418,
"text": "Ways to filter Pandas DataFrame by column values"
},
{
"code": null,
"e": 25500,
"s": 25467,
"text": "Python | Convert set into a list"
}
] |
Python program to print decimal octal hex and binary of first n numbers | Suppose we have a value n. We have to print the Decimal, Octal, Hexadecimal and Binary equivalents of the first n numbers (1 to n) in four different columns. As we know, we can express the numbers with the format characters d, o, X and b for decimal, octal, hexadecimal and binary respectively.
So, if the input is like n = 10, then the output will be
1 1 1 1
2 2 2 10
3 3 3 11
4 4 4 100
5 5 5 101
6 6 6 110
7 7 7 111
8 10 8 1000
9 11 9 1001
10 12 A 1010
To solve this, we will follow these steps −
l := (length of binary equivalent of n) - 2
for i in range 1 to n, do
   f := blank string
   for each character c in "doXb", do
      if f is not empty, then
         f := f concatenate one blank space
      f := f + right-aligned formatting string built by converting l to a string, then concatenating c
   pass i four times to the formatted string f and print the line
Let us see the following implementation to get a better understanding.
def solve(n):
    l = len(bin(n)) - 2
    for i in range(1, n + 1):
        f = ""
        for c in "doXb":
            if f:
                f += " "
            f += "{:>" + str(l) + c + "}"
        print(f.format(i, i, i, i))

n = 10
solve(n)
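To make the generated format string concrete, here is an illustrative trace (not part of the original program): for n = 10, l = len(bin(10)) - 2 = 4, so every row is printed with the spec shown below.

# The format string built for n = 10 (l = 4), applied to the last row
print("{:>4d} {:>4o} {:>4X} {:>4b}".format(10, 10, 10, 10))
# prints: "  10   12    A 1010"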
Input: 10
Output:
1 1 1 1
2 2 2 10
3 3 3 11
4 4 4 100
5 5 5 101
6 6 6 110
7 7 7 111
8 10 8 1000
9 11 9 1001
10 12 A 1010 | [
{
"code": null,
"e": 1345,
"s": 1062,
"text": "Suppose we have a value n. We have to print Decimal, Octal, Hexadecimal and Binary equivalent of first n numbers (1 to n) in four different columns. As we know, we can express the numbers with prefix characters d, o, X and b for decimal, octal, hexadecimal and decimal respectively."
},
{
"code": null,
"e": 1402,
"s": 1345,
"text": "So, if the input is like n = 10, then the output will be"
},
{
"code": null,
"e": 1572,
"s": 1402,
"text": "1 1 1 1\n2 2 2 10\n3 3 3 11\n4 4 4 100\n5 5 5 101\n6 6 6 110\n7 7 7 111\n8 10 8 1000\n9 11 9 1001\n10 12 A 1010"
},
{
"code": null,
"e": 1616,
"s": 1572,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1660,
"s": 1616,
"text": "l := (length of binary equivalent of n) - 2"
},
{
"code": null,
"e": 1941,
"s": 1660,
"text": "for i in range 1 to n, dof := blank stringfor each character c in \"doXb\", doif f is not empty, thenf := f concatenate one blank spacef := f + right aligned formatting string by converting l as string then concatenate cpass i four times to the formatted string f and print the line"
},
{
"code": null,
"e": 1959,
"s": 1941,
"text": "f := blank string"
},
{
"code": null,
"e": 2136,
"s": 1959,
"text": "for each character c in \"doXb\", doif f is not empty, thenf := f concatenate one blank spacef := f + right aligned formatting string by converting l as string then concatenate c"
},
{
"code": null,
"e": 2194,
"s": 2136,
"text": "if f is not empty, thenf := f concatenate one blank space"
},
{
"code": null,
"e": 2229,
"s": 2194,
"text": "f := f concatenate one blank space"
},
{
"code": null,
"e": 2315,
"s": 2229,
"text": "f := f + right aligned formatting string by converting l as string then concatenate c"
},
{
"code": null,
"e": 2378,
"s": 2315,
"text": "pass i four times to the formatted string f and print the line"
},
{
"code": null,
"e": 2446,
"s": 2378,
"text": "Let us see the following implementation to get better understanding"
},
{
"code": null,
"e": 2676,
"s": 2446,
"text": "def solve(n):\n l = len(bin(n)) - 2\n for i in range(1, n + 1):\n f = \"\"\n for c in \"doXb\":\n if f:\n f += \" \"\n f += \"{:>\" + str(l) + c + \"}\"\n print(f.format(i, i, i, i))\n\nn = 10\nsolve(n)\n\n"
},
{
"code": null,
"e": 2679,
"s": 2676,
"text": "10"
},
{
"code": null,
"e": 2849,
"s": 2679,
"text": "1 1 1 1\n2 2 2 10\n3 3 3 11\n4 4 4 100\n5 5 5 101\n6 6 6 110\n7 7 7 111\n8 10 8 1000\n9 11 9 1001\n10 12 A 1010"
}
] |
Add 1 to a number represented as a linked list? | The linked list representation of a number is provided in such a way that all nodes of the linked list are treated as one digit of the number. The nodes store the number such that the first element of the linked list holds the most significant digit of the number, and the last element of the linked list holds the least significant digit of the number. For example, the number 202345 is represented in the linked list as (2->0->2->3->4->5).
To add one to this linked-list-represented number, we have to check the value of the least significant digit of the list. If it is less than 9, the increment is straightforward; otherwise the code will also change the next digit, and so on.
Now let’s see an example of how to do it: 1999 is represented as (1 -> 9 -> 9 -> 9) and adding 1 to it should change it to (2 -> 0 -> 0 -> 0)
Input: 1999
Output: 2000
To add 1 to a given number represented as a linked list, we need to follow these steps:

Reversing the linked list: reverse the list so that the last digit becomes the first and the first becomes the last. For example, 1 -> 9 -> 9 -> 9 is converted to 9 -> 9 -> 9 -> 1.

Traverse the reversed list and add one to the left-most node. If the sum of a node’s value and the carry reaches 10, store sum % 10 in the node and propagate a carry to the next node. Repeat the procedure while the carry remains.

Reverse the list back to its original form and then return the head so that the number can be printed.
#include <iostream>
using namespace std;
// n = next node; d = data; p = previous node; h = head node; c = current node
class Node {
public:
int d;
Node* n;
};
Node *newNode(int d) {
Node *new_node = new Node;
new_node->d = d;
new_node->n = NULL;
return new_node;
}
Node *reverse(Node *h) {
Node * p = NULL;
Node * c = h;
Node * n;
while (c != NULL) {
n = c->n;
c->n = p;
p = c;
c = n;
}
return p;
}
Node *addOneUtil(Node *h) {
Node* res = h;
    Node *temp = NULL;
int carry = 1, sum;
while (h != NULL) {
sum = carry + h->d;
carry = (sum >= 10)? 1 : 0;
sum = sum % 10;
h->d = sum;
temp = h;
h = h->n;
}
if (carry > 0)
temp->n = newNode(carry);
return res;
}
Node* addOne(Node *h) {
h = reverse(h);
h = addOneUtil(h);
return reverse(h);
}
int main() {
Node *h = newNode(1);
h->n = newNode(9);
h->n->n = newNode(9);
h->n->n->n = newNode(9);
h = addOne(h);
while (h != NULL) {
cout << h->d;
h = h->n;
}
cout<<endl;
return 0;
} | [
{
"code": null,
"e": 1506,
"s": 1062,
"text": "The linked list representation of a number is provided in such a way that the all nodes of the linked list are treated as one digit of the number. The node stores the number such that the first element of the linked list holds the most significant digit of the number, and the last element of the linked list holds the least significant bit of the number. For example, the number 202345 is represented in the linked list as (2->0->2->3->4->5)."
},
{
"code": null,
"e": 1720,
"s": 1506,
"text": "And to add one to this linked list represented number we have to check the value of the least significant bit of the list. If it is less than 9 than it's ok otherwise the code will change the next digit and so on."
},
{
"code": null,
"e": 1858,
"s": 1720,
"text": "Now lets see an example to know how to do it, 1999 is represented as (1-> 9-> 9 -> 9) and adding 1 to it should change it to (2->0->0->0)"
},
{
"code": null,
"e": 1881,
"s": 1858,
"text": "Input:1999\nOutput:2000"
},
{
"code": null,
"e": 1976,
"s": 1881,
"text": "To add 1 to a given number represented as a linked list meaning to follow some steps that are,"
},
{
"code": null,
"e": 2179,
"s": 1976,
"text": "reversing the linked list: you need to reverse the linked list that this means changing the last digit to the first and the first to the last. For example, 1-> 9-> 9 -> 9 is converted to 9-> 9 -> 9 ->1."
},
{
"code": null,
"e": 2386,
"s": 2179,
"text": "for this changed linked list now traverse the list, in the left-most node add one. if this node’s value is equal to 9 then propagate a carry to the next Node. Do the same procedure until the carry is there."
},
{
"code": null,
"e": 2484,
"s": 2386,
"text": "reverse the string back as in original form and then returned the head to get the string printed."
},
{
"code": null,
"e": 3581,
"s": 2484,
"text": "#include <iostream>\nusing namespace std;\n//n=next node ; d=data ; p= previous node; h=head node; c=current node\nclass Node {\n public:\n int d;\n Node* n;\n};\nNode *newNode(int d) {\n Node *new_node = new Node;\n new_node->d = d;\n new_node->n = NULL;\n return new_node;\n}\nNode *reverse(Node *h) {\n Node * p = NULL;\n Node * c = h;\n Node * n;\n while (c != NULL) {\n n = c->n;\n c->n = p;\n p = c;\n c = n;\n }\n return p;\n}\nNode *addOneUtil(Node *h) {\n Node* res = h;\n Node *temp, *p = NULL;\n int carry = 1, sum;\n while (h != NULL) {\n sum = carry + h->d;\n carry = (sum >= 10)? 1 : 0;\n sum = sum % 10;\n h->d = sum;\n temp = h;\n h = h->n;\n }\n if (carry > 0)\n temp->n = newNode(carry);\n return res;\n}\nNode* addOne(Node *h) {\n h = reverse(h);\n h = addOneUtil(h);\n return reverse(h);\n}\nint main() {\n Node *h = newNode(1);\n h->n = newNode(9);\n h->n->n = newNode(9);\n h->n->n->n = newNode(9);\n h = addOne(h);\n while (h != NULL) {\n cout << h->d;\n h = h->n;\n }\n cout<<endl;\n return 0;\n}"
}
] |
Plotting A Square Wave Using Matplotlib, Numpy And Scipy | 20 Apr, 2022
Prerequisites: linspace, Matplotlib, Scipy
A square wave is a non-sinusoidal periodic waveform in which the amplitude alternates at a steady frequency between fixed minimum and maximum values, with the same duration at minimum and maximum. Graphical representations are always easy to understand and are adopted and preferable before any written or verbal communication. In this article, we will try to understand how we can plot square waves using the SciPy Python module.
Approach:
Import required module.
Create a sample rate.
Plot a square wave.
Label the graph.
Display Graph.
Step 1: Import module
Python3
from scipy import signal
import matplotlib.pyplot as plot
import numpy as np
Step 2: The NumPy linspace function is a tool in Python for creating numeric sequences; it returns evenly spaced numbers over a specified interval.
Python3
t = np.linspace(0, 1, 1000, endpoint = True)
Step 3: The scipy.signal.square() function generates the square wave signal, which is then drawn with the plot.plot() function. It accepts the following parameter:
Syntax:
scipy.signal.square(t)
Parameter:
t: The input time array.
Return:
Output array containing the square waveform.
Python3
# Plot the square wave
plot.plot(t, signal.square(2 * np.pi * 5 * t))
Step 4: Give title name, x-axis label name, y-axis label name.
Python3
# Give x, y, title axis labels
plot.xlabel('Time')
plot.ylabel('Amplitude')
plot.title('Square wave - Geeksforgeeks')
Step 5: plot.axhline(): The axhline() function in the pyplot module of the matplotlib library is used to add a horizontal line across the axis.
Python3
# Provide x axis and black line color
plot.axhline(y=0, color='k')
Below is the full implementation:
Python3
from scipy import signal
import matplotlib.pyplot as plot
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=True)

# Plot the square wave
plot.plot(t, signal.square(2 * np.pi * 5 * t))

# Give x, y, title axis labels
plot.xlabel('Time')
plot.ylabel('Amplitude')
plot.title('Square wave - Geeksforgeeks')

plot.axhline(y=0, color='k')

# Display
plot.show()
Output: a plot of a 5 Hz square wave alternating between -1 and +1 over one second.
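Although the article does not use it, signal.square() also accepts a duty argument (default 0.5) that sets the fraction of each period spent at the maximum. A minimal sketch reusing the arrays above:

# A 5 Hz square wave that stays high for only 25% of each period
plot.plot(t, signal.square(2 * np.pi * 5 * t, duty=0.25))
plot.title('Square wave with 25% duty cycle')
plot.show()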
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n20 Apr, 2022"
},
{
"code": null,
"e": 72,
"s": 28,
"text": "Prerequisites: linspace, Mathplotlib, Scipy"
},
{
"code": null,
"e": 508,
"s": 72,
"text": "A square wave is a non-sinusoidal periodic waveform in which the amplitude alternates at a steady frequency between the fixed minimum and maximum values, with the same duration at minimum and maximum. Graphical representations are always easy to understand and are adopted and preferable before any written or verbal communication. In this article, we will try to understand, How can we plot Square waves using the Scipy python module."
},
{
"code": null,
"e": 518,
"s": 508,
"text": "Approach:"
},
{
"code": null,
"e": 542,
"s": 518,
"text": "Import required module."
},
{
"code": null,
"e": 564,
"s": 542,
"text": "Create a sample rate."
},
{
"code": null,
"e": 584,
"s": 564,
"text": "Plot a square wave."
},
{
"code": null,
"e": 601,
"s": 584,
"text": "Label the graph."
},
{
"code": null,
"e": 616,
"s": 601,
"text": "Display Graph."
},
{
"code": null,
"e": 638,
"s": 616,
"text": "Step 1: Import module"
},
{
"code": null,
"e": 646,
"s": 638,
"text": "Python3"
},
{
"code": "from scipy import signalimport matplotlib.pyplot as plotimport numpy as np",
"e": 721,
"s": 646,
"text": null
},
{
"code": null,
"e": 872,
"s": 724,
"text": "Step 2: The NumPy linspace function is a tool in Python for creating numeric sequences that return evenly spaced numbers over a specified interval."
},
{
"code": null,
"e": 882,
"s": 874,
"text": "Python3"
},
{
"code": "t = np.linspace(0, 1, 1000, endpoint = True)",
"e": 927,
"s": 882,
"text": null
},
{
"code": null,
"e": 1033,
"s": 927,
"text": "Step 3: plot.plot function: This method accepts the following parameters and Plot the square wave signal."
},
{
"code": null,
"e": 1041,
"s": 1033,
"text": "Syntax:"
},
{
"code": null,
"e": 1064,
"s": 1041,
"text": "scipy.signal.square(t)"
},
{
"code": null,
"e": 1075,
"s": 1064,
"text": "Parameter:"
},
{
"code": null,
"e": 1100,
"s": 1075,
"text": "t: The input time array."
},
{
"code": null,
"e": 1108,
"s": 1100,
"text": "Return:"
},
{
"code": null,
"e": 1153,
"s": 1108,
"text": "Output array containing the square waveform."
},
{
"code": null,
"e": 1161,
"s": 1153,
"text": "Python3"
},
{
"code": "# Plot the square waveplot.plot(t, signal.square(2 * np.pi * 5 * t))",
"e": 1230,
"s": 1161,
"text": null
},
{
"code": null,
"e": 1297,
"s": 1233,
"text": "Step 4: Give title name, x-axis label name, y-axis label name. "
},
{
"code": null,
"e": 1307,
"s": 1299,
"text": "Python3"
},
{
"code": "# Give x,y, title axis labelplot.xlabel('Time')plot.ylabel('Amplitude')plot.title('Square wave - Geeksforgeeks')",
"e": 1420,
"s": 1307,
"text": null
},
{
"code": null,
"e": 1560,
"s": 1423,
"text": "Step 5: plot.axhline() : The axhline() function in pyplot module of matplotlib library is used to add a horizontal line across the axis."
},
{
"code": null,
"e": 1570,
"s": 1562,
"text": "Python3"
},
{
"code": "# Provide x axis and black line colorplot.axhline(y=0, color='k')",
"e": 1636,
"s": 1570,
"text": null
},
{
"code": null,
"e": 1673,
"s": 1639,
"text": "Below is the full implementation:"
},
{
"code": null,
"e": 1683,
"s": 1675,
"text": "Python3"
},
{
"code": "from scipy import signalimport matplotlib.pyplot as plotimport numpy as np t = np.linspace(0, 1, 1000, endpoint=True) # Plot the square waveplot.plot(t, signal.square(2 * np.pi * 5 * t)) # Give x,y,title axis labelplot.xlabel('Time')plot.ylabel('Amplitude')plot.title('Square wave - Geeksforgeeks') plot.axhline(y = 0, color = 'k') # Displayplot.show()",
"e": 2040,
"s": 1683,
"text": null
},
{
"code": null,
"e": 2051,
"s": 2043,
"text": "Output:"
},
{
"code": null,
"e": 2064,
"s": 2055,
"text": "sooda367"
},
{
"code": null,
"e": 2073,
"s": 2064,
"text": "rkbhola5"
},
{
"code": null,
"e": 2092,
"s": 2073,
"text": "Data Visualization"
},
{
"code": null,
"e": 2110,
"s": 2092,
"text": "Python-matplotlib"
},
{
"code": null,
"e": 2117,
"s": 2110,
"text": "Python"
}
] |
Node.js EventEmitter | 13 Oct, 2021
Node.js uses the events module to create and handle custom events. The EventEmitter class of the events module can be used to create and handle those custom events. The syntax to import the events module is given below:
Syntax:
const EventEmitter = require('events');
All EventEmitters emit the event newListener when new listeners are added and removeListener when existing listeners are removed. The class also provides one more option:
boolean captureRejections
Default Value: false
When enabled, it automatically captures rejections from promise-returning listeners and routes them to the 'error' event.
Listening to events: Before any event is emitted, functions (callbacks) must be registered to listen to the events.
Syntax:
eventEmitter.addListener(event, listener)
eventEmitter.on(event, listener)
eventEmitter.on(event, listener) and eventEmitter.addListener(event, listener) are pretty much similar. It adds the listener at the end of the listener’s array for the specified event. Multiple calls to the same event and listener will add the listener multiple times and correspondingly fire multiple times. Both functions return emitter, so calls can be chained.
Emitting events: Every event is identified by its name in Node.js. We can trigger an event with the emit(event, [arg1], [arg2], [...]) function and pass an arbitrary set of arguments to the listener functions.
Syntax:
eventEmitter.emit(event, [arg1], [arg2], [...])
Example:
// Importing events
const EventEmitter = require('events');

// Initializing event emitter instances
var eventEmitter = new EventEmitter();

// Registering to myEvent
eventEmitter.on('myEvent', (msg) => {
    console.log(msg);
});

// Triggering myEvent
eventEmitter.emit('myEvent', "First event");
Output:
First event
Removing listeners: The eventEmitter.removeListener() method takes two arguments, event and listener, and removes that listener from the listeners array subscribed to that event, while eventEmitter.removeAllListeners() removes all the listeners from the array that are subscribed to the mentioned event.
Syntax:
eventEmitter.removeListener(event, listener)
eventEmitter.removeAllListeners([event])
Example:
// Importing events
const EventEmitter = require('events');

// Initializing event emitter instances
var eventEmitter = new EventEmitter();

var geek1 = (msg) => {
    console.log("Message from geek1: " + msg);
};

var geek2 = (msg) => {
    console.log("Message from geek2: " + msg);
};

// Registering geek1 (twice) and geek2
eventEmitter.on('myEvent', geek1);
eventEmitter.on('myEvent', geek1);
eventEmitter.on('myEvent', geek2);

// Removing one instance of the geek1 listener registered above
eventEmitter.removeListener('myEvent', geek1);

// Triggering myEvent
eventEmitter.emit('myEvent', "Event occurred");

// Removing all the listeners to myEvent
eventEmitter.removeAllListeners('myEvent');

// Triggering myEvent
eventEmitter.emit('myEvent', "Event occurred");
Output:
Message from geek1: Event occurred
Message from geek2: Event occurred
We registered geek1 two times and geek2 one time. Calling eventEmitter.removeListener('myEvent', geek1) removes one instance of geek1. Finally, we remove all remaining listeners of myEvent by using the removeAllListeners() method.
Special Events: All EventEmitter instances emit the event ‘newListener’ when new listeners are added and ‘removeListener’ when existing listeners are removed.
Event: ‘newListener’: The EventEmitter instance will emit its own ‘newListener’ event before a listener is added to its internal array of listeners. Listeners registered for the ‘newListener’ event are passed the event name and a reference to the listener being added, so the event is triggered before the listener is added to the array.
eventEmitter.once('newListener', listener)
eventEmitter.on('newListener', listener)
Event: ‘removeListener’: The ‘removeListener’ event is emitted after a listener is removed.
eventEmitter.once('removeListener', listener)
eventEmitter.on('removeListener', listener)
Event: ‘error’: When an error occurs within an EventEmitter instance, the typical action is for an ‘error’ event to be emitted. If an EventEmitter does not have at least one listener registered for the ‘error’ event and an ‘error’ event is emitted, the error is thrown, a stack trace is printed, and the Node.js process exits.
eventEmitter.on('error', listener)
Example:
// Importing events
const EventEmitter = require('events');

// Initializing event emitter instances
var eventEmitter = new EventEmitter();

// Register to error
eventEmitter.on('error', (err) => {
    console.error('Attention! There was an error');
});

// Register to newListener
eventEmitter.on('newListener', (event, listener) => {
    console.log(`The listener is added to ${event}`);
});

// Register to removeListener
eventEmitter.on('removeListener', (event, listener) => {
    console.log(`The listener is removed from ${event}`);
});

// Declaring listener geek1 to myEvent
var geek1 = (msg) => {
    console.log("Message from geek1: " + msg);
};

// Declaring listener geek2 to myEvent
var geek2 = (msg) => {
    console.log("Message from geek2: " + msg);
};

// Listening to myEvent with geek1 and geek2
eventEmitter.on('myEvent', geek1);
eventEmitter.on('myEvent', geek2);

// Removing listener geek1
eventEmitter.off('myEvent', geek1);

// Triggering myEvent
eventEmitter.emit('myEvent', 'Event occurred');

// Triggering error
eventEmitter.emit('error', new Error('Attention!'));
Output:
The listener is added to removeListener
The listener is added to myEvent
The listener is added to myEvent
The listener is removed from myEvent
Message from geek2: Event occurred
Attention! There was an error
Reference: https://nodejs.org/api/events.html#events_class_eventemitter
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n13 Oct, 2021"
},
{
"code": null,
"e": 224,
"s": 28,
"text": "Node.js uses events module to create and handle custom events. The EventEmitter class can be used to create and handle custom events module.The syntax to Import the events module are given below:"
},
{
"code": null,
"e": 232,
"s": 224,
"text": "Syntax:"
},
{
"code": null,
"e": 272,
"s": 232,
"text": "const EventEmitter = require('events');"
},
{
"code": null,
"e": 435,
"s": 272,
"text": "All EventEmitters emit the event newListener when new listeners are added and removeListener when existing listeners are removed. It also provide one more option:"
},
{
"code": null,
"e": 461,
"s": 435,
"text": "boolean captureRejections"
},
{
"code": null,
"e": 520,
"s": 461,
"text": "Default Value: false\nIt automatically captures rejections."
},
{
"code": null,
"e": 625,
"s": 520,
"text": "Listening events: Before emits any event, it must register functions(callbacks) to listen to the events."
},
{
"code": null,
"e": 633,
"s": 625,
"text": "Syntax:"
},
{
"code": null,
"e": 708,
"s": 633,
"text": "eventEmitter.addListener(event, listener)\neventEmitter.on(event, listener)"
},
{
"code": null,
"e": 1073,
"s": 708,
"text": "eventEmitter.on(event, listener) and eventEmitter.addListener(event, listener) are pretty much similar. It adds the listener at the end of the listener’s array for the specified event. Multiple calls to the same event and listener will add the listener multiple times and correspondingly fire multiple times. Both functions return emitter, so calls can be chained."
},
{
"code": null,
"e": 1269,
"s": 1073,
"text": "Emitting events: Every event is named event in nodejs. We can trigger an event by emit(event, [arg1], [arg2], [...]) function. We can pass an arbitrary set of arguments to the listener functions."
},
{
"code": null,
"e": 1277,
"s": 1269,
"text": "Syntax:"
},
{
"code": null,
"e": 1326,
"s": 1277,
"text": "eventEmitter.emit(event, [arg1], [arg2], [...])\n"
},
{
"code": null,
"e": 1335,
"s": 1326,
"text": "Example:"
},
{
"code": "// Importing eventsconst EventEmitter = require('events'); // Initializing event emitter instances var eventEmitter = new EventEmitter(); // Registering to myEvent eventEmitter.on('myEvent', (msg) => { console.log(msg);}); // Triggering myEventeventEmitter.emit('myEvent', \"First event\");",
"e": 1630,
"s": 1335,
"text": null
},
{
"code": null,
"e": 1638,
"s": 1630,
"text": "Output:"
},
{
"code": null,
"e": 1650,
"s": 1638,
"text": "First event"
},
{
"code": null,
"e": 1952,
"s": 1650,
"text": "Removing Listener: The eventEmitter.removeListener() takes two argument event and listener, and removes that listener from the listeners array that is subscribed to that event. While eventEmitter.removeAllListeners() removes all the listener from the array which are subscribed to the mentioned event."
},
{
"code": null,
"e": 1960,
"s": 1952,
"text": "Syntax:"
},
{
"code": null,
"e": 2046,
"s": 1960,
"text": "eventEmitter.removeListener(event, listener)\neventEmitter.removeAllListeners([event])"
},
{
"code": null,
"e": 2055,
"s": 2046,
"text": "Example:"
},
{
"code": "// Importing eventsconst EventEmitter = require('events'); // Initializing event emitter instances var eventEmitter = new EventEmitter(); var geek1= (msg) => { console.log(\"Message from geek1: \" + msg);}; var geek2 = (msg) => { console.log(\"Message from geek2: \" + msg);}; // Registering geek1 and geek2eventEmitter.on('myEvent', geek1);eventEmitter.on('myEvent', geek1);eventEmitter.on('myEvent', geek2); // Removing listener geek1 that was// registered on the line 13eventEmitter.removeListener('myEvent', geek1); // Triggering myEventeventEmitter.emit('myEvent', \"Event occurred\"); // Removing all the listeners to myEventeventEmitter.removeAllListeners('myEvent'); // Triggering myEventeventEmitter.emit('myEvent', \"Event occurred\");",
"e": 2811,
"s": 2055,
"text": null
},
{
"code": null,
"e": 2819,
"s": 2811,
"text": "Output:"
},
{
"code": null,
"e": 2890,
"s": 2819,
"text": "Message from geek1: Event occurred\nMessage from geek2: Event occurred\n"
},
{
"code": null,
"e": 3148,
"s": 2890,
"text": "We registered two times geek1 and one time geek2. For calling eventEmitter.removeListener(‘myEvent’, geek1) one instance of geek1 will be removed. Finally, removing all listener by using removeAllListeners() method that will remove all listeners to myEvent."
},
{
"code": null,
"e": 3302,
"s": 3148,
"text": "Special Events: All EventEmitter instances emit the event ‘newListener’ when new listeners are added and ‘removeListener’ existing listeners are removed."
},
{
"code": null,
"e": 3738,
"s": 3302,
"text": "Event: ‘newListener’ The EventEmitter instance will emit its own ‘newListener’ event before a listener is added to its internal array of listeners. Listeners registered for the ‘newListener’ event will be passed to the event name and reference to the listener being added. The event ‘newListener’ is triggered before adding the listener to the array.eventEmitter.once( 'newListener', listener)\neventEmitter.on( 'newListener', listener)"
},
{
"code": null,
"e": 3824,
"s": 3738,
"text": "eventEmitter.once( 'newListener', listener)\neventEmitter.on( 'newListener', listener)"
},
{
"code": null,
"e": 4006,
"s": 3824,
"text": "Event: ‘removeListener’ The ‘removeListener’ event is emitted after a listener is removed.eventEmitter.once( ‘removeListener’, listener)\neventEmitter.on( 'removeListener’, listener)"
},
{
"code": null,
"e": 4098,
"s": 4006,
"text": "eventEmitter.once( ‘removeListener’, listener)\neventEmitter.on( 'removeListener’, listener)"
},
{
"code": null,
"e": 4460,
"s": 4098,
"text": "Event: ‘error’ When an error occurs within an EventEmitter instance, the typical action is for an ‘error’ event to be emitted. If an EventEmitter does not have at least one listener registered for the ‘error’ event, and an ‘error’ event is emitted, the error is thrown, a stack trace is printed, and the Node.js process exits.eventEmitter.on('error', listener)\n"
},
{
"code": null,
"e": 4496,
"s": 4460,
"text": "eventEmitter.on('error', listener)\n"
},
{
"code": null,
"e": 4505,
"s": 4496,
"text": "Example:"
},
{
"code": "// Importing eventsconst EventEmitter = require('events'); // Initializing event emitter instances var eventEmitter = new EventEmitter(); // Register to erroreventEmitter.on('error', (err) => { console.error('Attention! There was an error');}); // Register to newListenereventEmitter.on( 'newListener', (event, listener) => { console.log(`The listener is added to ${event}`);}); // Register to removeListenereventEmitter.on( 'removeListener', (event, listener) => { console.log(`The listener is removed from ${event}`);}); // Declaring listener geek1 to myEvent1var geek1 = (msg) => { console.log(\"Message from geek1: \" + msg);}; // Declaring listener geek2 to myEvent2var geek2 = (msg) => { console.log(\"Message from geek2: \" + msg);}; // Listening to myEvent with geek1 and geek2eventEmitter.on('myEvent', geek1);eventEmitter.on('myEvent', geek2); // Removing listenereventEmitter.off('myEvent', geek1); // Triggering myEventeventEmitter.emit('myEvent', 'Event occurred'); // Triggering erroreventEmitter.emit('error', new Error('Attention!'));",
"e": 5578,
"s": 4505,
"text": null
},
{
"code": null,
"e": 5586,
"s": 5578,
"text": "Output:"
},
{
"code": null,
"e": 5795,
"s": 5586,
"text": "The listener is added to removeListener\nThe listener is added to myEvent\nThe listener is added to myEvent\nThe listener is removed from myEvent\nMessage from geek2: Event occurred\nAttention! There was an error\n"
},
{
"code": null,
"e": 5867,
"s": 5795,
"text": "Reference: https://nodejs.org/api/events.html#events_class_eventemitter"
},
{
"code": null,
"e": 5880,
"s": 5867,
"text": "Node.js-Misc"
},
{
"code": null,
"e": 5887,
"s": 5880,
"text": "Picked"
},
{
"code": null,
"e": 5895,
"s": 5887,
"text": "Node.js"
},
{
"code": null,
"e": 5912,
"s": 5895,
"text": "Web Technologies"
}
] |
How to change the color of Action Bar in an Android App? | 23 Feb, 2021
In this article, you will learn how to change the color of the Action Bar in an Android app.
There are two ways to change the color.
1. By changing the styles.xml file:
Just go to res/values/styles.xml file
edit the xml file to change the color of action bar.
Code for styles.xml is given below
styles.xml:
<resources>
    <!-- Base application theme. -->
    <style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
        <!-- Customize your theme here. -->
        <!-- This code is for changing the color of the bar. -->
        <!-- Type the colour code you want to set in the colorPrimary item -->
        <item name="colorPrimary">#0F9D58</item>
        <item name="colorPrimaryDark">@color/colorPrimaryDark</item>
        <item name="colorAccent">@color/colorAccent</item>
    </style>

    <style name="AppTheme.NoActionBar">
        <item name="windowActionBar">false</item>
        <item name="windowNoTitle">true</item>
    </style>

    <!-- Define other styles to fix the theme -->
    <style name="AppTheme.AppBarOverlay" parent="ThemeOverlay.AppCompat.Dark.ActionBar" />
    <style name="AppTheme.PopupOverlay" parent="ThemeOverlay.AppCompat.Light" />
</resources>

activity_main.xml:
<?xml version="1.0" encoding="utf-8"?>

<!-- Relative Layout -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:id="@+id/relativelayout">

    <!-- Text View -->
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:id="@+id/textview"
        android:textColor="#0F9D58"
        android:textSize="32dp"
        android:layout_centerInParent="true"/>
</RelativeLayout>

MainActivity.java:
package com.geeksforgeeks.changecolor;

// android.os.Bundle import added: onCreate(Bundle) needs it
import android.os.Bundle;
import android.widget.TextView;
import android.support.v7.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Define text View
        TextView t = findViewById(R.id.textview);
        t.setText("Geeks for Geeks");
    }
}
2. Through the Java file by defining an ActionBar object:
Define an object for the ActionBar and ColorDrawable classes
Set the color using the setBackgroundDrawable() function with the ColorDrawable object as its parameter.
Here is the complete code for MainActivity.java and activity_main.xml:
MainActivity.java:
package com.geeksforgeeks.changecolor;

// android.os.Bundle import added: onCreate(Bundle) needs it
import android.os.Bundle;
import android.support.v7.app.ActionBar;
import android.graphics.Color;
import android.graphics.drawable.ColorDrawable;
import android.support.v7.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Define ActionBar object
        ActionBar actionBar;
        actionBar = getSupportActionBar();

        // Define ColorDrawable object and parse color
        // using parseColor method
        // with color hash code as its parameter
        ColorDrawable colorDrawable
            = new ColorDrawable(Color.parseColor("#0F9D58"));

        // Set BackgroundDrawable
        actionBar.setBackgroundDrawable(colorDrawable);
    }
}

activity_main.xml:
<?xml version="1.0" encoding="utf-8"?>

<!-- Relative Layout -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:id="@+id/relativelayout">

    <!-- Text View -->
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textColor="#0F9D58"
        android:textSize="30dp"
        android:text="Geeks for Geeks"
        android:layout_centerInParent="true"/>
</RelativeLayout>
Output:
Default color of action Bar:
In MainActivity, the color of the Action Bar is changed to the hash code defined in the above code.
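If you prefer not to hard-code the hex value in Java, the color can live in a resource file instead. A minimal sketch of that variant; the resource name action_bar_color is an assumption for illustration, not part of the project above:

// Hypothetical sketch: assumes <color name="action_bar_color">#0F9D58</color>
// has been added to res/values/colors.xml.
ActionBar actionBar = getSupportActionBar();
int color = getResources().getColor(R.color.action_bar_color);
actionBar.setBackgroundDrawable(new ColorDrawable(color));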
Android-Bars
Android
Java
Java
Android | [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n23 Feb, 2021"
},
{
"code": null,
"e": 148,
"s": 54,
"text": "In this article, you will learn how to change the colour of the Action Bar in an Android App."
},
{
"code": null,
"e": 184,
"s": 148,
"text": "There are two ways to change color."
},
{
"code": null,
"e": 4075,
"s": 184,
"text": "By changing styles.xml file:Just go to res/values/styles.xml fileedit the xml file to change the color of action bar.Code for styles.xml is given belowstyles.xmlactivity_main.xmlMainActivity.javastyles.xml<resources> <!-- Base application theme. --> <style name=\"AppTheme\" parent=\"Theme.AppCompat.Light.DarkActionBar\"> <!-- Customize your theme here. --> <!-- This code is for changing the color of the bar. --> <!-- Type your colour code which you want to set in colorPrimary item --> <item name=\"colorPrimary\">#0F9D58</item> <item name=\"colorPrimaryDark\">@color/colorPrimaryDark</item> <item name=\"colorAccent\">@color/colorAccent</item> </style> <style name=\"AppTheme.NoActionBar\"> <item name=\"windowActionBar\">false</item> <item name=\"windowNoTitle\">true</item> </style> <!-- Define other styles to fix theme --> <style name=\"AppTheme.AppBarOverlay\" parent=\"ThemeOverlay.AppCompat.Dark.ActionBar\" /> <style name=\"AppTheme.PopupOverlay\" parent=\"ThemeOverlay.AppCompat.Light\" /></resources>activity_main.xml<?xml version=\"1.0\" encoding=\"utf-8\"?> <!--Relative Layout--><RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\" android:layout_height=\"match_parent\" android:id=\"@+id/relativelayout\"> <!--Text View--> <TextView android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:id=\"@+id/textview\" android:textColor=\"#0F9D58\" android:textSize=\"32dp\" android:layout_centerInParent=\"true\"/></RelativeLayout>MainActivity.javapackage com.geeksforgeeks.changecolor;import android.widget.TextView;import android.support.v7.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Define text View TextView t = findViewById(R.id.textview); t.setText(\"Geeks for Geeks\"); }}Through Java file by defining ActionBar object:Define object for ActionBar and colorDrawable classset color using setBackgroundDrawable function with colorDrawable object as its parameter.Here is complete code for MainActivity.javaMainActivity.javaactivity_main.xmlMainActivity.javapackage com.geeksforgeeks.changecolor; import android.support.v7.app.ActionBar;import android.graphics.Color;import android.graphics.drawable.ColorDrawable;import android.support.v7.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Define ActionBar object ActionBar actionBar; actionBar = getSupportActionBar(); // Define ColorDrawable object and parse color // using parseColor method // with color hash code as its parameter ColorDrawable colorDrawable = new ColorDrawable(Color.parseColor(\"#0F9D58\")); // Set BackgroundDrawable actionBar.setBackgroundDrawable(colorDrawable); }}activity_main.xml<?xml version=\"1.0\" encoding=\"utf-8\"?> <!--Relative Layout--><RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\" android:layout_height=\"match_parent\" android:id=\"@+id/relativelayout\"> <!--Text View--> <TextView android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:textColor=\"#0F9D58\" android:textSize=\"30dp\" android:text=\"Geeks for 
Geeks\" android:layout_centerInParent=\"true\"/></RelativeLayout>"
},
{
"code": null,
"e": 6219,
"s": 4075,
"text": "By changing styles.xml file:Just go to res/values/styles.xml fileedit the xml file to change the color of action bar.Code for styles.xml is given belowstyles.xmlactivity_main.xmlMainActivity.javastyles.xml<resources> <!-- Base application theme. --> <style name=\"AppTheme\" parent=\"Theme.AppCompat.Light.DarkActionBar\"> <!-- Customize your theme here. --> <!-- This code is for changing the color of the bar. --> <!-- Type your colour code which you want to set in colorPrimary item --> <item name=\"colorPrimary\">#0F9D58</item> <item name=\"colorPrimaryDark\">@color/colorPrimaryDark</item> <item name=\"colorAccent\">@color/colorAccent</item> </style> <style name=\"AppTheme.NoActionBar\"> <item name=\"windowActionBar\">false</item> <item name=\"windowNoTitle\">true</item> </style> <!-- Define other styles to fix theme --> <style name=\"AppTheme.AppBarOverlay\" parent=\"ThemeOverlay.AppCompat.Dark.ActionBar\" /> <style name=\"AppTheme.PopupOverlay\" parent=\"ThemeOverlay.AppCompat.Light\" /></resources>activity_main.xml<?xml version=\"1.0\" encoding=\"utf-8\"?> <!--Relative Layout--><RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\" android:layout_height=\"match_parent\" android:id=\"@+id/relativelayout\"> <!--Text View--> <TextView android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:id=\"@+id/textview\" android:textColor=\"#0F9D58\" android:textSize=\"32dp\" android:layout_centerInParent=\"true\"/></RelativeLayout>MainActivity.javapackage com.geeksforgeeks.changecolor;import android.widget.TextView;import android.support.v7.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Define text View TextView t = findViewById(R.id.textview); t.setText(\"Geeks for Geeks\"); }}"
},
{
"code": null,
"e": 6257,
"s": 6219,
"text": "Just go to res/values/styles.xml file"
},
{
"code": null,
"e": 6310,
"s": 6257,
"text": "edit the xml file to change the color of action bar."
},
{
"code": null,
"e": 6345,
"s": 6310,
"text": "Code for styles.xml is given below"
},
{
"code": null,
"e": 6356,
"s": 6345,
"text": "styles.xml"
},
{
"code": null,
"e": 6374,
"s": 6356,
"text": "activity_main.xml"
},
{
"code": null,
"e": 6392,
"s": 6374,
"text": "MainActivity.java"
},
{
"code": "<resources> <!-- Base application theme. --> <style name=\"AppTheme\" parent=\"Theme.AppCompat.Light.DarkActionBar\"> <!-- Customize your theme here. --> <!-- This code is for changing the color of the bar. --> <!-- Type your colour code which you want to set in colorPrimary item --> <item name=\"colorPrimary\">#0F9D58</item> <item name=\"colorPrimaryDark\">@color/colorPrimaryDark</item> <item name=\"colorAccent\">@color/colorAccent</item> </style> <style name=\"AppTheme.NoActionBar\"> <item name=\"windowActionBar\">false</item> <item name=\"windowNoTitle\">true</item> </style> <!-- Define other styles to fix theme --> <style name=\"AppTheme.AppBarOverlay\" parent=\"ThemeOverlay.AppCompat.Dark.ActionBar\" /> <style name=\"AppTheme.PopupOverlay\" parent=\"ThemeOverlay.AppCompat.Light\" /></resources>",
"e": 7261,
"s": 6392,
"text": null
},
{
"code": "<?xml version=\"1.0\" encoding=\"utf-8\"?> <!--Relative Layout--><RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\" android:layout_height=\"match_parent\" android:id=\"@+id/relativelayout\"> <!--Text View--> <TextView android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:id=\"@+id/textview\" android:textColor=\"#0F9D58\" android:textSize=\"32dp\" android:layout_centerInParent=\"true\"/></RelativeLayout>",
"e": 7845,
"s": 7261,
"text": null
},
{
"code": "package com.geeksforgeeks.changecolor;import android.widget.TextView;import android.support.v7.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Define text View TextView t = findViewById(R.id.textview); t.setText(\"Geeks for Geeks\"); }}",
"e": 8299,
"s": 7845,
"text": null
},
{
"code": null,
"e": 10047,
"s": 8299,
"text": "Through Java file by defining ActionBar object:Define object for ActionBar and colorDrawable classset color using setBackgroundDrawable function with colorDrawable object as its parameter.Here is complete code for MainActivity.javaMainActivity.javaactivity_main.xmlMainActivity.javapackage com.geeksforgeeks.changecolor; import android.support.v7.app.ActionBar;import android.graphics.Color;import android.graphics.drawable.ColorDrawable;import android.support.v7.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Define ActionBar object ActionBar actionBar; actionBar = getSupportActionBar(); // Define ColorDrawable object and parse color // using parseColor method // with color hash code as its parameter ColorDrawable colorDrawable = new ColorDrawable(Color.parseColor(\"#0F9D58\")); // Set BackgroundDrawable actionBar.setBackgroundDrawable(colorDrawable); }}activity_main.xml<?xml version=\"1.0\" encoding=\"utf-8\"?> <!--Relative Layout--><RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\" android:layout_height=\"match_parent\" android:id=\"@+id/relativelayout\"> <!--Text View--> <TextView android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:textColor=\"#0F9D58\" android:textSize=\"30dp\" android:text=\"Geeks for Geeks\" android:layout_centerInParent=\"true\"/></RelativeLayout>"
},
{
"code": null,
"e": 10099,
"s": 10047,
"text": "Define object for ActionBar and colorDrawable class"
},
{
"code": null,
"e": 10190,
"s": 10099,
"text": "set color using setBackgroundDrawable function with colorDrawable object as its parameter."
},
{
"code": null,
"e": 10234,
"s": 10190,
"text": "Here is complete code for MainActivity.java"
},
{
"code": null,
"e": 10252,
"s": 10234,
"text": "MainActivity.java"
},
{
"code": null,
"e": 10270,
"s": 10252,
"text": "activity_main.xml"
},
{
"code": "package com.geeksforgeeks.changecolor; import android.support.v7.app.ActionBar;import android.graphics.Color;import android.graphics.drawable.ColorDrawable;import android.support.v7.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Define ActionBar object ActionBar actionBar; actionBar = getSupportActionBar(); // Define ColorDrawable object and parse color // using parseColor method // with color hash code as its parameter ColorDrawable colorDrawable = new ColorDrawable(Color.parseColor(\"#0F9D58\")); // Set BackgroundDrawable actionBar.setBackgroundDrawable(colorDrawable); }}",
"e": 11131,
"s": 10270,
"text": null
},
{
"code": "<?xml version=\"1.0\" encoding=\"utf-8\"?> <!--Relative Layout--><RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\" android:layout_height=\"match_parent\" android:id=\"@+id/relativelayout\"> <!--Text View--> <TextView android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:textColor=\"#0F9D58\" android:textSize=\"30dp\" android:text=\"Geeks for Geeks\" android:layout_centerInParent=\"true\"/></RelativeLayout>",
"e": 11720,
"s": 11131,
"text": null
},
{
"code": null,
"e": 11728,
"s": 11720,
"text": "Output:"
},
{
"code": null,
"e": 11757,
"s": 11728,
"text": "Default color of action Bar:"
},
{
"code": null,
"e": 11841,
"s": 11757,
"text": "In Main Activity color of Action Bar is changed to hash code defined in above code."
},
{
"code": null,
"e": 11854,
"s": 11841,
"text": "Android-Bars"
},
{
"code": null,
"e": 11862,
"s": 11854,
"text": "Android"
},
{
"code": null,
"e": 11867,
"s": 11862,
"text": "Java"
},
{
"code": null,
"e": 11872,
"s": 11867,
"text": "Java"
},
{
"code": null,
"e": 11880,
"s": 11872,
"text": "Android"
}
] |
Python | Timezone Conversion | 19 Jul, 2019
Most datetime objects returned by the dateutil parser are naive, which means they don't have an explicit tzinfo. The tzinfo determines the timezone and the UTC offset. UTC is Coordinated Universal Time, and is essentially the same as GMT. ISO is the International Organization for Standardization, which, among other things, specifies standard datetime formatting.
Python datetime objects can be either naive or aware. If a datetime object has a tzinfo, then it is aware; otherwise, it is naive. To make a naive datetime object timezone aware, you attach a tzinfo to it. However, the Python datetime library only defines an abstract base class for tzinfo, and leaves it to others to actually implement tzinfo creation. This is where the tz module of dateutil comes in: it provides everything needed to look up timezones from your OS timezone data.
Installation:
Use pip or easy_install dateutil to install. Make sure that the operating system has timezone data.
On Linux, this is usually found in /usr/share/zoneinfo, and the Ubuntu package is called tzdata. If /usr/share/zoneinfo contains a number of files and directories, such as America/ and Europe/, then you are ready to proceed.
Getting a UTC tzinfo object – by calling tz.tzutc()
from dateutil import tz

tz.tzutc()
tzutc()
The offset is 0, which can be verified by calling the utcoffset() method with a UTC datetime object.
import datetime

tz.tzutc().utcoffset(datetime.datetime.utcnow())
datetime.timedelta(0)
Pass in a timezone file path to the gettz() function to get tzinfo objects for other timezones.
tz.gettz('US/Pacific')
tzfile('/usr/share/zoneinfo/US/Pacific')
tz.gettz('Europe/Paris')
tzfile('/usr/share/zoneinfo/Europe/Paris')
tz.gettz('US/Pacific').utcoffset(datetime.datetime.utcnow())
datetime.timedelta(-1, 61200)
(That is -1 day + 61200 seconds, i.e. -7 hours: the US/Pacific daylight-saving offset.)
To convert a non-UTC datetime object to UTC, it must be made timezone aware. If you try to convert a naive datetime to UTC, you'll get a ValueError exception. To make a naive datetime timezone aware, you simply call the replace() method with the correct tzinfo. Once a datetime object has a tzinfo, UTC conversion can be performed by calling the astimezone() method with tz.tzutc().
abc = tz.gettz('US/Pacific')
dat = datetime.datetime(2010, 9, 25, 10, 36)
dat.tzinfo
dat.astimezone(tz.tzutc())
Traceback (most recent call last):
File "/usr/lib/python2.6/doctest.py", line 1228, in __run
compileflags, 1) in test.globs
File "", line 1, in
dat.astimezone(tz.tzutc())
ValueError: astimezone() cannot be applied to a naive datetime
dat.replace(tzinfo = abc)
datetime.datetime(2010, 9, 25, 10, 36, tzinfo=tzfile(
'/usr/share/zoneinfo/US/Pacific'))
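Putting the two steps together, a minimal sketch (reusing the abc and dat variables above): attach the tzinfo with replace(), then convert with astimezone().

# Sketch: naive Pacific time -> aware -> UTC
# (10:36 PDT corresponds to 17:36 UTC).
aware = dat.replace(tzinfo=abc)
aware.astimezone(tz.tzutc())

datetime.datetime(2010, 9, 25, 17, 36, tzinfo=tzutc())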
How it all works:
The tzutc and tzfile objects are both subclasses of tzinfo.
As such, they know the correct UTC offset for a timezone conversion (which is 0 for tzutc).
A tzfile object knows how to read the operating system's zoneinfo files to get the necessary offset data.
The replace() method of a datetime object does what the name suggests: it replaces attribute values.
Once a datetime has a tzinfo, the astimezone() method can convert the time using the UTC offsets, and then replace the current tzinfo with the new tzinfo.
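The same machinery also works for display in the other direction; a minimal sketch that expresses the current UTC time in another timezone:

# Sketch: an aware "now" can be shifted to any tzfile timezone.
now_utc = datetime.datetime.now(tz.tzutc())
now_utc.astimezone(tz.gettz('US/Pacific'))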
Code: Passing a tzinfos keyword argument into the dateutil parser to handle unrecognized timezones
from dateutil import parser

parser.parse('Wednesday, Aug 4, 2010 at 6:30 p.m. (CDT)', fuzzy = True)
datetime.datetime(2010, 8, 4, 18, 30)
tzinfos = {'CDT': tz.gettz('US/Central')}
parser.parse('Wednesday, Aug 4, 2010 at 6:30 p.m. (CDT)',
             fuzzy = True, tzinfos = tzinfos)
datetime.datetime(2010, 8, 4, 18, 30, tzinfo=tzfile('
/usr/share/zoneinfo/US/Central'))
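The tzinfos mapping can carry as many abbreviations as needed; a minimal sketch (the PST entry and the date string are illustrative):

# Sketch: map several ambiguous abbreviations in one dictionary.
tzinfos = {
    'CDT': tz.gettz('US/Central'),
    'PST': tz.gettz('US/Pacific'),
}
parser.parse('Jan 5, 2011 9:00 PST', tzinfos=tzinfos)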
python-modules
Python | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n19 Jul, 2019"
},
{
"code": null,
"e": 452,
"s": 28,
"text": "Most datetime items came back from the dateutil parser are naive, which means they don’t have an explicit tzinfo. tzinfo determines the timezone and UTC offest. It is the standard ISO format for UTC datetime strings. UTC is the coordinated universal time, and is fundamentally the equivalent as GMT. ISO is the International Standards Organization, which in addition to other things, determines standard datetime designing."
},
{
"code": null,
"e": 1007,
"s": 452,
"text": "Python datetime items can either be naive or mindful. In the event that a datetime item has a tzinfo, at that point it knows. Something else, the datetime is naive. To make an naive datetime object timezone aware, define tzinfo abstract baseclass. In any case, the Python datetime library just characterizes a conceptual baseclass for tzinfo, and leaves it over to others to really actualize tzinfo creation. This is the place the tz module of dateutil comes in—it gives all that it is required to turn upward timezones from your OS timezone information."
},
{
"code": null,
"e": 1021,
"s": 1007,
"text": "Installation:"
},
{
"code": null,
"e": 1121,
"s": 1021,
"text": "Use pip or easy_install dateutil to install. Make sure that the operating system has timezone data."
},
{
"code": null,
"e": 1347,
"s": 1121,
"text": "On Linux, this is usually found in /usr/share/zoneinfo, and the Ubuntu package is called tzdata. In case of the number of files and directories in /usr/share/zoneinfo, such as America/ and Europe/, then it’s ready to proceed."
},
{
"code": null,
"e": 1399,
"s": 1347,
"text": "Getting a UTC tzinfo object – by calling tz.tzutc()"
},
{
"code": "from dateutil import tztz.tzutc()",
"e": 1433,
"s": 1399,
"text": null
},
{
"code": null,
"e": 1441,
"s": 1433,
"text": "tzutc()"
},
{
"code": null,
"e": 1519,
"s": 1441,
"text": "The offset is 0 by calling the utcoffset() method with a UTC datetime object."
},
{
"code": "import datetimetz.tzutc().utcoffset(datetime.datetime.utcnow())",
"e": 1583,
"s": 1519,
"text": null
},
{
"code": null,
"e": 1605,
"s": 1583,
"text": "datetime.timedelta(0)"
},
{
"code": null,
"e": 1701,
"s": 1605,
"text": "Pass in a timezone file path to the gettz() function to get tzinfo objects for other timezones."
},
{
"code": "tz.gettz('US/Pacific')",
"e": 1724,
"s": 1701,
"text": null
},
{
"code": null,
"e": 1765,
"s": 1724,
"text": "tzfile('/usr/share/zoneinfo/US/Pacific')"
},
{
"code": "tz.gettz('Europe / Paris')",
"e": 1792,
"s": 1765,
"text": null
},
{
"code": null,
"e": 1835,
"s": 1792,
"text": "tzfile('/usr/share/zoneinfo/Europe/Paris')"
},
{
"code": "tz.gettz('US / Pacific').utcoffset(datetime.datetime.utcnow())",
"e": 1898,
"s": 1835,
"text": null
},
{
"code": null,
"e": 1928,
"s": 1898,
"text": "datetime.timedelta(-1, 61200)"
},
{
"code": null,
"e": 2363,
"s": 1928,
"text": "To change over a non-UTC datetime item to UTC, it must be made timezone mindful. On the off chance that you attempt to change over a credulous datetime to UTC, you’ll get a ValueError exemption. To make a naive datetime timezone mindful, you basically call the replace() strategy with the right tzinfo. Once a datetime item has a tzinfo, at that point UTC change can be performed by calling the astimezone() technique with tz.tzutc()."
},
{
"code": "abc = tz.gettz('US/Pacific')dat = datetime.datetime(2010, 9, 25, 10, 36)dat.tzinfodat.astimezone(tz.tzutc())",
"e": 2472,
"s": 2363,
"text": null
},
{
"code": null,
"e": 2711,
"s": 2472,
"text": "Traceback (most recent call last):\n File \"/usr/lib/python2.6/doctest.py\", line 1228, in __run\n compileflags, 1) in test.globs\n File \"\", line 1, in \n dat.astimezone(tz.tzutc())\nValueError: astimezone() cannot be applied to a naive datetime"
},
{
"code": "dat.replace(tzinfo = abc)",
"e": 2737,
"s": 2711,
"text": null
},
{
"code": null,
"e": 2826,
"s": 2737,
"text": "datetime.datetime(2010, 9, 25, 10, 36, tzinfo=tzfile(\n'/usr/share/zoneinfo/US/Pacific'))"
},
{
"code": null,
"e": 2847,
"s": 2826,
"text": "All behind working –"
},
{
"code": null,
"e": 2908,
"s": 2847,
"text": "The tzutc and tzfile items are the two subclasses of tzinfo."
},
{
"code": null,
"e": 3006,
"s": 2908,
"text": "All things considered, they know the right UTC offset for timezone change (which is 0 for tzutc)."
},
{
"code": null,
"e": 3137,
"s": 3006,
"text": "A tzfile item realizes how to peruse the working framework’s zoneinfo documents to get the fundamental counterbalance information."
},
{
"code": null,
"e": 3231,
"s": 3137,
"text": "The replace() strategy for a datetime item does what the name suggests—it replaces qualities."
},
{
"code": null,
"e": 3419,
"s": 3231,
"text": "Once a datetime has a tzinfo, the astimezone() strategy will most likely believer the time utilizing the UTC counterbalances, and afterward supplant the current tzinfo with the new tzinfo"
},
{
"code": null,
"e": 3523,
"s": 3419,
"text": "Code : Passing a tzinfos keyword argument into the dateutil parser to detect the unrecognized timezones"
},
{
"code": "parser.parse('Wednesday, Aug 4, 2010 at 6:30 p.m. (CDT)', fuzzy = True)",
"e": 3607,
"s": 3523,
"text": null
},
{
"code": null,
"e": 3645,
"s": 3607,
"text": "datetime.datetime(2010, 8, 4, 18, 30)"
},
{
"code": "tzinfos = {'CDT': tz.gettz('US/Central')}parser.parse('Wednesday, Aug 4, 2010 at 6:30 p.m. (CDT)',fuzzy = True, tzinfos = tzinfos)",
"e": 3776,
"s": 3645,
"text": null
},
{
"code": null,
"e": 3864,
"s": 3776,
"text": "datetime.datetime(2010, 8, 4, 18, 30, tzinfo=tzfile('\n/usr/share/zoneinfo/US/Central'))"
},
{
"code": null,
"e": 3879,
"s": 3864,
"text": "python-modules"
},
{
"code": null,
"e": 3886,
"s": 3879,
"text": "Python"
}
] |
Python MongoDB – Update_one() | 13 Jan, 2022
MongoDB is a cross-platform, document-oriented, non-relational (i.e., NoSQL) database program. It is an open-source document database that stores data in the form of key-value pairs. First, create a database on which we will perform the update_one() operation:
Python3
# importing MongoClient from pymongo
from pymongo import MongoClient

try:
    # Making connection
    conn = MongoClient()
except:
    print("Could not connect to MongoDB")

# database
db = conn.database

# Created or Switched to collection
# names: GeeksForGeeks
collection = db.GeeksForGeeks

# Creating Records:
record1 = {
    "appliance": "fan",
    "quantity": 10,
    "rating": "3 stars",
    "company": "havells"
}
record2 = {
    "appliance": "cooler",
    "quantity": 15,
    "rating": "4 stars",
    "company": "symphony"
}
record3 = {
    "appliance": "ac",
    "quantity": 20,
    "rating": "5 stars",
    "company": "voltas"
}
record4 = {
    "appliance": "tv",
    "quantity": 12,
    "rating": "3 stars",
    "company": "samsung"
}

# Inserting the Data
rec_id1 = collection.insert_one(record1)
rec_id2 = collection.insert_one(record2)
rec_id3 = collection.insert_one(record3)
rec_id4 = collection.insert_one(record4)

# Printing the data inserted
print("The data in the database is:")
cursor = collection.find()
for record in cursor:
    print(record)
Output :
MongoDB Shell:
update_one() is a function by which we can update a record in a MongoDB database or collection. This method mainly focuses on two arguments: the first is the query (i.e., filter) object defining which document to update, and the second is an object defining the new values of the document (i.e., new_values); the remaining arguments are optional and are discussed in the syntax section. This function finds the first document that matches the query and updates it with the object defining the new values, i.e., it updates a single document within the collection based on the filter. Syntax:
collection.update_one(filter, new_values, upsert=False, bypass_document_validation=False, collation=None, array_filters=None, session=None)
Parameters:
‘filter’ : A query that matches the document to update.
‘new_values’ : The modifications to apply.
‘upsert’ (optional): If “True”, perform an insert if no documents match the filter.
‘bypass_document_validation’ (optional) : If “True”, allows the write to opt-out of document level validation. Default is “False”.
‘collation’ (optional) : An instance of class: ‘~pymongo.collation.Collation’. This option is only supported on MongoDB 3.4 and above.
‘array_filters’ (optional) : A list of filters specifying which array elements an update should apply. Requires MongoDB 3.6+.
‘session’ (optional) : a class:’~pymongo.client_session.ClientSession’.
Example 1: In this example, we are going to update the fan quantity from 10 to 25.
Python3
# importing MongoClient from pymongo
from pymongo import MongoClient

conn = MongoClient('localhost', 27017)

# database
db = conn.database

# Created or Switched to collection
# names: GeeksForGeeks
collection = db.GeeksForGeeks

# Updating fan quantity from 10 to 25.
filter = { 'appliance': 'fan' }

# Values to be updated.
newvalues = { "$set": { 'quantity': 25 } }

# Using update_one() method for a single update.
collection.update_one(filter, newvalues)

# Printing the updated content of the database
cursor = collection.find()
for record in cursor:
    print(record)
Output :
MongoDB Shell:
Example 2: In this example we are changing the tv company name from ‘samsung’ to ‘sony’ by using update_one():
Python3
# importing MongoClient from pymongo
from pymongo import MongoClient

conn = MongoClient('localhost', 27017)

# database
db = conn.database

# Created or Switched to collection
# names: GeeksForGeeks
collection = db.GeeksForGeeks

# Updating the tv company name from
# 'samsung' to 'sony'.
filter = { 'appliance': 'tv' }

# Values to be updated.
newvalues = { "$set": { 'company': "sony" } }

# Using update_one() method for a single update.
collection.update_one(filter, newvalues)

# Printing the updated content of the database
cursor = collection.find()
for record in cursor:
    print(record)
Output :
MongoDB Shell:
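update_one() also returns an UpdateResult object that reports what happened; a minimal sketch:

# Sketch: matched_count / modified_count show whether the update took effect.
result = collection.update_one({ 'appliance': 'ac' },
                               { "$set": { 'rating': '4 stars' } })
print(result.matched_count, result.modified_count)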
NOTE: The "$set" operator replaces the value of a field with the specified value. If the field does not exist, "$set" will add a new field with the specified value, provided that the new field does not violate a type constraint.
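A minimal sketch of that behaviour; the warranty field is hypothetical and not part of the records created above:

# Sketch: "$set" creates the field because no record has 'warranty' yet.
collection.update_one({ 'appliance': 'cooler' },
                      { "$set": { 'warranty': '2 years' } })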
rkbhola5
Python-mongoDB
Python | [
{
"code": null,
"e": 52,
"s": 24,
"text": "\n13 Jan, 2022"
},
{
"code": null,
"e": 315,
"s": 52,
"text": "MongoDB is a cross-platform document-oriented and a non relational (i.e NoSQL) database program. It is an open-source document database, that stores the data in the form of key-value pairs.First create a database on which we perform the update_one() operation: "
},
{
"code": null,
"e": 323,
"s": 315,
"text": "Python3"
},
{
"code": "# importing Mongoclient from pymongofrom pymongo import MongoClient try: conn = MongoClient() # Making connection except: print(\"Could not connect to MongoDB\") # databasedb = conn.database # Created or Switched to collection# names: GeeksForGeekscollection = db.GeeksForGeeks # Creating Records:record1 = { \"appliance\":\"fan\", \"quantity\":10, \"rating\":\"3 stars\", \"company\":\"havells\"}record2 = { \"appliance\":\"cooler\", \"quantity\":15, \"rating\":\"4 stars\", \"company\":\"symphony\"}record3 = { \"appliance\":\"ac\", \"quantity\":20, \"rating\":\"5 stars\", \"company\":\"voltas\"}record4 = { \"appliance\":\"tv\", \"quantity\":12, \"rating\":\"3 stars\", \"company\":\"samsung\"} # Inserting the Datarec_id1 = collection.insert_one(record1)rec_id2 = collection.insert_one(record2)rec_id3 = collection.insert_one(record3)rec_id4 = collection.insert_one(record4) # Printing the data insertedprint(\"The data in the database is:\")cursor = collection.find()for record in cursor: print(record)",
"e": 1408,
"s": 323,
"text": null
},
{
"code": null,
"e": 1418,
"s": 1408,
"text": "Output : "
},
{
"code": null,
"e": 1434,
"s": 1418,
"text": "MongoDB Shell: "
},
{
"code": null,
"e": 2035,
"s": 1436,
"text": "It is a function by which we can update a record in a MongoDB database or Collection. This method mainly focuses on two arguments that we passed one is the query (i.e filter) object defining which document to update and the second is an object defining the new values of the document(i.e new_values) and the rest arguments are optional that we will discuss in the syntax section. This function finds the first document that matches with the query and update it with an object defining the new values of the document, i.e Updates a single document within the collection based on the filter. Syntax: "
},
{
"code": null,
"e": 2188,
"s": 2035,
"text": "collection.update_one(filter, new_values, upsert=False, bypass_document_validation=False, collation=None, array_filters=None, session=None)Parameters: "
},
{
"code": null,
"e": 2244,
"s": 2188,
"text": "‘filter’ : A query that matches the document to update."
},
{
"code": null,
"e": 2287,
"s": 2244,
"text": "‘new_values’ : The modifications to apply."
},
{
"code": null,
"e": 2371,
"s": 2287,
"text": "‘upsert’ (optional): If “True”, perform an insert if no documents match the filter."
},
{
"code": null,
"e": 2502,
"s": 2371,
"text": "‘bypass_document_validation’ (optional) : If “True”, allows the write to opt-out of document level validation. Default is “False”."
},
{
"code": null,
"e": 2637,
"s": 2502,
"text": "‘collation’ (optional) : An instance of class: ‘~pymongo.collation.Collation’. This option is only supported on MongoDB 3.4 and above."
},
{
"code": null,
"e": 2763,
"s": 2637,
"text": "‘array_filters’ (optional) : A list of filters specifying which array elements an update should apply. Requires MongoDB 3.6+."
},
{
"code": null,
"e": 2835,
"s": 2763,
"text": "‘session’ (optional) : a class:’~pymongo.client_session.ClientSession’."
},
{
"code": null,
"e": 2922,
"s": 2837,
"text": "Example 1: In this example, we are going to update the fan quantity from 10 to 25. "
},
{
"code": null,
"e": 2930,
"s": 2922,
"text": "Python3"
},
{
"code": "# importing Mongoclient from pymongofrom pymongo import MongoClient conn = MongoClient('localhost', 27017)# databasedb = conn.database # Created or Switched to collection# names: GeeksForGeekscollection = db.GeeksForGeeks # Updating fan quantity form 10 to 25.filter = { 'appliance': 'fan' } # Values to be updated.newvalues = { \"$set\": { 'quantity': 25 } } # Using update_one() method for single# updation.collection.update_one(filter, newvalues) # Printing the updated content of the# databasecursor = collection.find()for record in cursor: print(record)",
"e": 3490,
"s": 2930,
"text": null
},
{
"code": null,
"e": 3500,
"s": 3490,
"text": "Output : "
},
{
"code": null,
"e": 3516,
"s": 3500,
"text": "MongoDB Shell: "
},
{
"code": null,
"e": 3629,
"s": 3516,
"text": "Example 2: In this example we are changing the tv company name from ‘samsung’ to ‘sony’ by using update_one(): "
},
{
"code": null,
"e": 3637,
"s": 3629,
"text": "Python3"
},
{
"code": "# importing Mongoclient from pymongofrom pymongo import MongoClient conn = MongoClient('localhost', 27017) # databasedb = conn.database # Created or Switched to collection# names: GeeksForGeekscollection = db.GeeksForGeeks # Updating the tv company name from# 'samsung' to 'sony'.filter = { 'appliance': 'tv' } # Values to be updated.newvalues = { \"$set\": { 'company': \"sony\" } } # Using update_one() method for single updation.collection.update_one(filter, newvalues) # Printing the updated content of the databasecursor = collection.find()for record in cursor: print(record)",
"e": 4218,
"s": 3637,
"text": null
},
{
"code": null,
"e": 4228,
"s": 4218,
"text": "Output : "
},
{
"code": null,
"e": 4245,
"s": 4228,
"text": "MongoDB Shell: "
},
{
"code": null,
"e": 4475,
"s": 4245,
"text": "NOTE :The “$set” operator replaces the value of a field with the specified value. If the field does not exist, “$set” will add a new field with the specified value, provided that the new field does not violate a type constraint. "
},
{
"code": null,
"e": 4484,
"s": 4475,
"text": "rkbhola5"
},
{
"code": null,
"e": 4499,
"s": 4484,
"text": "Python-mongoDB"
},
{
"code": null,
"e": 4506,
"s": 4499,
"text": "Python"
},
{
"code": null,
"e": 4604,
"s": 4506,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 4632,
"s": 4604,
"text": "Read JSON file using Python"
},
{
"code": null,
"e": 4682,
"s": 4632,
"text": "Adding new column to existing DataFrame in Pandas"
},
{
"code": null,
"e": 4704,
"s": 4682,
"text": "Python map() function"
},
{
"code": null,
"e": 4748,
"s": 4704,
"text": "How to get column names in Pandas dataframe"
},
{
"code": null,
"e": 4790,
"s": 4748,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 4812,
"s": 4790,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 4847,
"s": 4812,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 4879,
"s": 4847,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 4905,
"s": 4879,
"text": "Python String | replace()"
}
] |
Express.js req.get() Function | 09 Jul, 2020
The req.get() function returns the specified HTTP request header field. The match is case-insensitive, and the Referrer and Referer fields are interchangeable.
Syntax:
req.get( field )
Parameter: The field parameter specifies the HTTP request header field.
Return Value: String.
Installation of express module:
You can visit the link to Install express module. You can install this package by using this command:
npm install express
After installing the express module, you can check your express version in the command prompt using the command:
npm version express
After that, you can just create a folder and add a file, for example index.js. To run this file you need to run the following command:
node index.js
Example 1: Filename: index.js
var express = require('express');
var app = express();
var PORT = 3000;

app.get('/', function (req, res) {
    console.log(req.get('Content-Type'));
    res.end();
});

app.listen(PORT, function(err) {
    if (err) console.log(err);
    console.log("Server listening on PORT", PORT);
});
Steps to run the program:
The project structure will look like this:
Make sure you have installed the express module using the following command:
npm install express
Run the index.js file using the below command:
node index.js
Output:
Server listening on PORT 3000
Now make a GET request to http://localhost:3000/ with the header set to 'content-type: text/plain'; you will see the following output on your console:
Server listening on PORT 3000
text/plain
Example 2: Filename: index.js
var express = require('express');
var app = express();
var PORT = 3000;

app.get('/', function (req, res) {
    console.log(req.get('Anything-else'));
    res.end();
});

app.listen(PORT, function(err) {
    if (err) console.log(err);
    console.log("Server listening on PORT", PORT);
});
Run the index.js file using the below command:
node index.js
Now make a GET request to http://localhost:3000/, then you will see the following output on your console:
Server listening on PORT 3000
undefined
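Since the lookup is case-insensitive, a minimal sketch inside any route handler (the header names are only examples):

app.get('/', function (req, res) {
    // Case does not matter for the field name:
    console.log(req.get('content-type') === req.get('Content-Type')); // true

    // 'Referrer' matches a 'Referer' request header too:
    console.log(req.get('Referrer'));
    res.end();
});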
Reference: https://expressjs.com/en/4x/api.html#req.get
Express.js
Node.js
Web Technologies | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n09 Jul, 2020"
},
{
"code": null,
"e": 189,
"s": 28,
"text": "The req.get() function returns the specified HTTP request header field which is case-insensitive match and the Referrer and Referrer fields are interchangeable."
},
{
"code": null,
"e": 197,
"s": 189,
"text": "Syntax:"
},
{
"code": null,
"e": 214,
"s": 197,
"text": "req.get( field )"
},
{
"code": null,
"e": 286,
"s": 214,
"text": "Parameter: The field parameter specifies the HTTP request header field."
},
{
"code": null,
"e": 308,
"s": 286,
"text": "Return Value: String."
},
{
"code": null,
"e": 340,
"s": 308,
"text": "Installation of express module:"
},
{
"code": null,
"e": 735,
"s": 340,
"text": "You can visit the link to Install express module. You can install this package by using this command.npm install expressAfter installing the express module, you can check your express version in command prompt using the command.npm version expressAfter that, you can just create a folder and add a file for example, index.js. To run this file you need to run the following command.node index.js"
},
{
"code": null,
"e": 856,
"s": 735,
"text": "You can visit the link to Install express module. You can install this package by using this command.npm install express"
},
{
"code": null,
"e": 876,
"s": 856,
"text": "npm install express"
},
{
"code": null,
"e": 1004,
"s": 876,
"text": "After installing the express module, you can check your express version in command prompt using the command.npm version express"
},
{
"code": null,
"e": 1024,
"s": 1004,
"text": "npm version express"
},
{
"code": null,
"e": 1172,
"s": 1024,
"text": "After that, you can just create a folder and add a file for example, index.js. To run this file you need to run the following command.node index.js"
},
{
"code": null,
"e": 1186,
"s": 1172,
"text": "node index.js"
},
{
"code": null,
"e": 1216,
"s": 1186,
"text": "Example 1: Filename: index.js"
},
{
"code": "var express = require('express');var app = express();var PORT = 3000; app.get('/', function (req, res) { console.log(req.get('Content-Type')); res.end();}); app.listen(PORT, function(err){ if (err) console.log(err); console.log(\"Server listening on PORT\", PORT);});",
"e": 1499,
"s": 1216,
"text": null
},
{
"code": null,
"e": 1525,
"s": 1499,
"text": "Steps to run the program:"
},
{
"code": null,
"e": 1939,
"s": 1525,
"text": "The project structure will look like this:Make sure you have installed express module using the following command:npm install expressRun index.js file using below command:node index.jsOutput:Server listening on PORT 3000\nNow make a GET request to http://localhost:3000/ with header set to ‘content-type: text/plain’, then you will see the following output on your console:Server listening on PORT 3000\ntext/plain\n"
},
{
"code": null,
"e": 1982,
"s": 1939,
"text": "The project structure will look like this:"
},
{
"code": null,
"e": 2074,
"s": 1982,
"text": "Make sure you have installed express module using the following command:npm install express"
},
{
"code": null,
"e": 2094,
"s": 2074,
"text": "npm install express"
},
{
"code": null,
"e": 2183,
"s": 2094,
"text": "Run index.js file using below command:node index.jsOutput:Server listening on PORT 3000\n"
},
{
"code": null,
"e": 2197,
"s": 2183,
"text": "node index.js"
},
{
"code": null,
"e": 2205,
"s": 2197,
"text": "Output:"
},
{
"code": null,
"e": 2236,
"s": 2205,
"text": "Server listening on PORT 3000\n"
},
{
"code": null,
"e": 2429,
"s": 2236,
"text": "Now make a GET request to http://localhost:3000/ with header set to ‘content-type: text/plain’, then you will see the following output on your console:Server listening on PORT 3000\ntext/plain\n"
},
{
"code": null,
"e": 2471,
"s": 2429,
"text": "Server listening on PORT 3000\ntext/plain\n"
},
{
"code": null,
"e": 2501,
"s": 2471,
"text": "Example 2: Filename: index.js"
},
{
"code": "var express = require('express');var app = express();var PORT = 3000; app.get('/', function (req, res) { console.log(req.get('Anything-else')); res.end();}); app.listen(PORT, function(err){ if (err) console.log(err); console.log(\"Server listening on PORT\", PORT);});",
"e": 2785,
"s": 2501,
"text": null
},
{
"code": null,
"e": 2824,
"s": 2785,
"text": "Run index.js file using below command:"
},
{
"code": null,
"e": 2838,
"s": 2824,
"text": "node index.js"
},
{
"code": null,
"e": 2944,
"s": 2838,
"text": "Now make a GET request to http://localhost:3000/, then you will see the following output on your console:"
},
{
"code": null,
"e": 2985,
"s": 2944,
"text": "Server listening on PORT 3000\nundefined\n"
},
{
"code": null,
"e": 3041,
"s": 2985,
"text": "Reference: https://expressjs.com/en/4x/api.html#req.get"
},
{
"code": null,
"e": 3052,
"s": 3041,
"text": "Express.js"
},
{
"code": null,
"e": 3060,
"s": 3052,
"text": "Node.js"
},
{
"code": null,
"e": 3077,
"s": 3060,
"text": "Web Technologies"
}
] |
OpenCV | Saving an Image | 26 Aug, 2019
This article aims to explain how to save an image from one location to any other desired location on your system in C++ using OpenCV. The key step is the imwrite() function, which writes an image to a file on disk.
So, let us dig deep into it and understand the concept with the complete explanation.
// c++ code explaining how to
// save an image to a defined
// location in OpenCV

// loading library files
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    // Reading the image file from a given location in system
    Mat img = imread("..path\\abcd.jpg");

    // if there is no image
    // or in case of error
    if (img.empty()) {
        cout << "Can not open or image is not present" << endl;

        // wait for any key to be pressed
        cin.get();
        return -1;
    }

    // You can make any changes
    // like blurring, transformation

    // writing the image to a defined location as JPEG
    bool check = imwrite("..path\\MyImage.jpg", img);

    // if the image is not saved
    if (check == false) {
        cout << "Mission - Saving the image, FAILED" << endl;

        // wait for any key to be pressed
        cin.get();
        return -1;
    }

    cout << "Successfully saved the image. " << endl;

    // Naming the window
    String geek_window = "MY SAVED IMAGE";

    // Creating a window
    namedWindow(geek_window);

    // Showing the image in the defined window
    imshow(geek_window, img);

    // waiting for any key to be pressed
    waitKey(0);

    // destroying the created window
    destroyWindow(geek_window);
    return 0;
}
Input:
Output:
// Reading the image file from a given location in system
Mat img = imread("..path\\abcd.jpg");

// if there is no image
// or in case of error
if (img.empty()) {
    cout << "Can not open or image is not present" << endl;

    // wait for any key to be pressed
    cin.get();
    return -1;
}
This part of the code reads the image from the path we have given to it, and it takes care of any error that occurs. If there is no image present at this path, the message "Can not open or image is not present" is displayed and, at the press of any key, the window exits.
// writing the image to a defined location as JPEG
bool check = imwrite("..path\\MyImage.jpg", img);

// if the image is not saved
if (check == false) {
    cout << "Mission - Saving the image, FAILED" << endl;

    // wait for any key to be pressed
    cin.get();
    return -1;
}

cout << "Successfully saved the image. " << endl;
This part of the code writes the image to the defined path; if it is not successful, it generates the message "Mission - Saving the image, FAILED" and, at the press of any key, the window exits. The rest of the code creates the window and displays the image in it. It keeps displaying the image in the window until a key is pressed. Finally, the window is destroyed.
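imwrite() also accepts an optional vector of encoder parameters; a minimal sketch that controls JPEG quality when saving (the value 95 is an arbitrary choice, and IMWRITE_JPEG_QUALITY assumes OpenCV 3 or newer):

// Sketch: pass encoder parameters to imwrite to set JPEG quality (0-100).
vector<int> params = { IMWRITE_JPEG_QUALITY, 95 };
bool ok = imwrite("..path\\MyImage.jpg", img, params);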
OpenCV
Advanced Computer Subject
C++
CPP | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n26 Aug, 2019"
},
{
"code": null,
"e": 235,
"s": 28,
"text": "This article aims to learn how to save an image from one location to any other desired location on your system in CPP using OpenCv. Using OpenCV, we can generate a blank image with any colour one wishes to."
},
{
"code": null,
"e": 321,
"s": 235,
"text": "So, let us dig deep into it and understand the concept with the complete explanation."
},
{
"code": "// c++ code explaining how to// save an image to a defined// location in OpenCV // loading library files#include <highlevelmonitorconfigurationapi.h>#include <opencv2\\highgui\\highgui.hpp>#include <opencv2\\opencv.hpp> using namespace cv;using namespace std; int main(int argc, char** argv){ // Reading the image file from a given location in system Mat img = imread(\"..path\\\\abcd.jpg\"); // if there is no image // or in case of error if (img.empty()) { cout << \"Can not open or image is not present\" << endl; // wait for any key to be pressed cin.get(); return -1; } // You can make any changes // like blurring, transformation // writing the image to a defined location as JPEG bool check = imwrite(\"..path\\\\MyImage.jpg\", img); // if the image is not saved if (check == false) { cout << \"Mission - Saving the image, FAILED\" << endl; // wait for any key to be pressed cin.get(); return -1; } cout << \"Successfully saved the image. \" << endl; // Naming the window String geek_window = \"MY SAVED IMAGE\"; // Creating a window namedWindow(geek_window); // Showing the image in the defined window imshow(geek_window, img); // waiting for any key to be pressed waitKey(0); // destroying the creating window destroyWindow(geek_window); return 0;}",
"e": 1722,
"s": 321,
"text": null
},
{
"code": null,
"e": 1738,
"s": 1722,
"text": "Input :Output :"
},
{
"code": "// Reading the image file from a given location in systemMat img = imread(\"..path\\\\abcd.jpg\"); // if there is no image// or in case of errorif (img.empty()) { cout << \"Can not open or image is not present\" << endl; // wait for any key to be pressed cin.get(); return -1;}",
"e": 2025,
"s": 1738,
"text": null
},
{
"code": null,
"e": 2301,
"s": 2025,
"text": "This part of the code reads the image from the path we have given to it. And it takes care of any error (if occurs). If there is no image present at this path, then “Can not open or image is not present” message will display and at the press of any key, the window will exit."
},
{
"code": "// writing the image to a defined location as JPEGbool check = imwrite(\"..path\\\\MyImage.jpg\", img); // if the image is not savedif (check == false) { cout << \"Mission - Saving the image, FAILED\" << endl; // wait for any key to be pressed cin.get(); return -1;} cout << \"Successfully saved the image. \" << endl;",
"e": 2628,
"s": 2301,
"text": null
},
{
"code": null,
"e": 3011,
"s": 2628,
"text": "This part of the code write the image to the defined path and if not successful, it will generate “Mission – Saving the image, FAILED” message and at the press of any key, the window will exit. And rest of the code will create the window and display the image in it. It will keep on displaying the image in the window until the key is pressed. Finally, the window will be destroyed."
},
{
"code": null,
"e": 3018,
"s": 3011,
"text": "OpenCV"
},
{
"code": null,
"e": 3044,
"s": 3018,
"text": "Advanced Computer Subject"
},
{
"code": null,
"e": 3048,
"s": 3044,
"text": "C++"
},
{
"code": null,
"e": 3052,
"s": 3048,
"text": "CPP"
}
] |
ggplot2 – Title and Subtitle with Different Size and Color in R | 16 May, 2021
A Title and a Subtitle give a plot a piece of information about what the graph actually represents. This article describes how to add a Title and Subtitle with different sizes and colors using ggplot2 in R programming.
To add a Title and Subtitle within a plot, first, we have to import the ggplot2 library using the library() function. If you have not installed it yet, you can install it by running the command install.packages(“ggplot2”) in the R console.
library(ggplot2)
Consider the following data for the example:
data <- data.frame(
name=c("A","B","C","D","E") ,
value=c(3,12,5,18,45)
)
Create a plot using the ggplot() function with Name as the X-axis value and Value as the Y-axis value, and make it a barplot using the geom_bar() function of the ggplot2 library. Here we use the fill parameter of geom_bar() to color the bars of the plot.
R
# Load Package
library(ggplot2)

# Create a Data
data <- data.frame(
  Name = c("A", "B", "C", "D", "E"),
  Value = c(3, 12, 5, 18, 45))

# Create a Simple BarPlot with green color.
ggplot(data, aes(x = Name, y = Value)) +
  geom_bar(stat = "identity", fill = "green")
Output:
Method 1. By Using ggtitle() function:
For this, we simply add the ggtitle() function to the geom_bar() function. Inside ggtitle(), we can directly write the title that we want to add to the plot without defining any parameter. To add a subtitle to the plot using ggtitle(), we have to use the subtitle parameter of ggtitle() and assign the subtitle to that parameter.
Syntax : ggtitle(“Title of the Plot”, subtitle = “Subtitle of the Plot”)
Parameter :
the title we want to add is given as the first parameter.
subtitle is used as a second parameter of ggtitle() function to add subtitle of plot.
Below is the implementation:
R
# Load Package
library(ggplot2)

# Create a Data
data <- data.frame(
  Name = c("A", "B", "C", "D", "E"),
  Value = c(3, 12, 5, 18, 45))

# Create a BarPlot and add title
# and subtitle to it using ggtitle() function.
ggplot(data, aes(x = Name, y = Value)) +
  geom_bar(stat = "identity", fill = "green") +
  ggtitle("Title For Barplot",
          subtitle = "This is Subtitle")
Output:
Method 2. By Using labs() Function:
To add a Title and Subtitle to an R plot using the labs() function, things are the same as above; the only difference is that we use the labs() function instead of ggtitle() and assign the title we want to add to the parameter called ‘title’. The subtitle can be added using the same parameter as in the above example. The output is also the same as the above example’s output.
Syntax : labs(title = “Title of the Plot”, subtitle = “Subtitle of the Plot”)
Parameter :
title is used as a first parameter to add the title of Plot.
subtitle is used as a second parameter to add the subtitle of Plot.
Below is the implementation:
R
# Load Package
library(ggplot2)

# Create Data
data <- data.frame(
  Name = c("A", "B", "C", "D", "E"),
  Value = c(3, 12, 5, 18, 45))

# Create BarPlot and add title
# and subtitle to it using labs() function.
ggplot(data, aes(x = Name, y = Value)) +
  geom_bar(stat = "identity", fill = "green") +
  labs(title = "Title For Barplot",
       subtitle = "This is Subtitle")
Output:
To change the size of the title and subtitle, we add the theme() function to labs() or ggtitle(), whichever was used; here we use the labs() function. Inside the theme() function, we use the plot.title parameter to make changes to the title of the plot and plot.subtitle to make changes to the subtitle. We use the element_text() function as the value of the plot.title and plot.subtitle parameters; element_text() lets us change the appearance of text. To change the size of the title and subtitle, we use the size parameter of element_text(). Here we set the size of the title to 30 and the size of the subtitle to 20.
Below is the implementation:
R
library(ggplot2)

data <- data.frame(
  Name = c("A", "B", "C", "D", "E"),
  Value = c(3, 12, 5, 18, 45))

# Create a BarPlot with title
# of size 30 and subtitle of size 20
ggplot(data, aes(x = Name, y = Value)) +
  geom_bar(stat = "identity", fill = "green") +
  labs(title = "Title For Barplot",
       subtitle = "This is Subtitle") +
  theme(plot.title = element_text(size = 30),
        plot.subtitle = element_text(size = 20))
Output:
Title and Subtitle with Different size
To change the color of the Title and Subtitle, we simply add a color parameter to the element_text() function. Everything else is the same as in the above implementation. Here we set the color of the title to green and the color of the subtitle to red.
R
library(ggplot2)

data <- data.frame(
  Name = c("A", "B", "C", "D", "E"),
  Value = c(3, 12, 5, 18, 45))

# Create a BarPlot with title
# and subtitle with different colors.
ggplot(data, aes(x = Name, y = Value)) +
  geom_bar(stat = "identity", fill = "green") +
  labs(title = "Title For Barplot",
       subtitle = "This is Subtitle") +
  theme(plot.title = element_text(size = 30, color = "green"),
        plot.subtitle = element_text(size = 20, color = "red"))
Output:
Title and Subtitle with Different Color
Picked
R-ggplot
R Language
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here. | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n16 May, 2021"
},
{
"code": null,
"e": 275,
"s": 28,
"text": "A Title and the subtitle to a plot give a piece of information about the graph that what the graph actually wants to represent. This article describes how to add a Title and Subtitle with Different Sizes and Colors using ggplot2 in R Programming."
},
{
"code": null,
"e": 505,
"s": 275,
"text": "To add a Title and Subtitle within a plot, first, we have to import ggplot2 library using library() function. If you have not installed yet, you can simply install it by writing a command install.packages(“ggplot2”) in R Console."
},
{
"code": null,
"e": 522,
"s": 505,
"text": "library(ggplot2)"
},
{
"code": null,
"e": 567,
"s": 522,
"text": "Consider the following data for the example:"
},
{
"code": null,
"e": 647,
"s": 567,
"text": "data <- data.frame(\n name=c(\"A\",\"B\",\"C\",\"D\",\"E\") , \n value=c(3,12,5,18,45)\n)"
},
{
"code": null,
"e": 898,
"s": 647,
"text": "Creating a Plot using ggplot() function with the value of X-axis as Name and Y-axis as Value and make it a barplot using geom_bar() function of the ggplot2 library. Here we use the fill parameter to geom_bar() function to color the bars of the plots."
},
{
"code": null,
"e": 900,
"s": 898,
"text": "R"
},
{
"code": "# Load Packagelibrary(ggplot2) # Create a Datadata <- data.frame( Name=c(\"A\", \"B\", \"C\", \"D\", \"E\") , Value=c(3, 12, 5, 18, 45)) # Create a Simple BarPlot with green color.ggplot(data, aes(x = Name, y = Value)) + geom_bar(stat = \"identity\", fill = \"green\")",
"e": 1163,
"s": 900,
"text": null
},
{
"code": null,
"e": 1171,
"s": 1163,
"text": "Output:"
},
{
"code": null,
"e": 1210,
"s": 1171,
"text": "Method 1. By Using ggtitle() function:"
},
{
"code": null,
"e": 1564,
"s": 1210,
"text": "For this, we simply add ggtitle() function to a geom_bar() function. Inside ggtitle() function, we can directly write the title that we want to add to the plot without defining any parameter but for add Subtitle to the plot using ggtitle() function, we have to use subtitle parameter to ggtitle() function and then assign the subtitle to that parameter."
},
{
"code": null,
"e": 1637,
"s": 1564,
"text": "Syntax : ggtitle(“Title of the Plot”, subtitle = “Subtitle of the Plot”)"
},
{
"code": null,
"e": 1649,
"s": 1637,
"text": "Parameter :"
},
{
"code": null,
"e": 1703,
"s": 1649,
"text": "we give title that we want to add, as it’s parameter."
},
{
"code": null,
"e": 1789,
"s": 1703,
"text": "subtitle is used as a second parameter of ggtitle() function to add subtitle of plot."
},
{
"code": null,
"e": 1818,
"s": 1789,
"text": "Below is the implementation:"
},
{
"code": null,
"e": 1820,
"s": 1818,
"text": "R"
},
{
"code": "# Load Packagelibrary(ggplot2) # Create a Datadata <- data.frame( Name=c(\"A\", \"B\", \"C\", \"D\", \"E\"), Value=c(3, 12, 5, 18, 45)) # Create a BarPlot and add title# and subtitle to it using ggtitle() function.ggplot(data, aes(x = Name, y = Value)) + geom_bar(stat = \"identity\", fill = \"green\")+ ggtitle(\"Title For Barplot\", subtitle = \"This is Subtitle\" )",
"e": 2190,
"s": 1820,
"text": null
},
{
"code": null,
"e": 2198,
"s": 2190,
"text": "Output:"
},
{
"code": null,
"e": 2234,
"s": 2198,
"text": "Method 2. By Using labs() Function:"
},
{
"code": null,
"e": 2582,
"s": 2234,
"text": "To add Title and Subtitle to R Plot using labs() function, things are same as above only difference is we use labs() function instead of ggtitle() function and assign title that we want to add to the parameter called ‘title’. Subtitle can be added using the same parameter of the above example. Output is also the same as the above example output."
},
{
"code": null,
"e": 2655,
"s": 2582,
"text": "Syntax : ggtitle(“Title of the Plot”, subtitle = “Subtitle of the Plot”)"
},
{
"code": null,
"e": 2667,
"s": 2655,
"text": "Parameter :"
},
{
"code": null,
"e": 2728,
"s": 2667,
"text": "title is used as a first parameter to add the title of Plot."
},
{
"code": null,
"e": 2796,
"s": 2728,
"text": "subtitle is used as a second parameter to add the subtitle of Plot."
},
{
"code": null,
"e": 2825,
"s": 2796,
"text": "Below is the implementation:"
},
{
"code": null,
"e": 2827,
"s": 2825,
"text": "R"
},
{
"code": "# Load Packagelibrary(ggplot2) # Create Datadata <- data.frame( Name = c(\"A\", \"B\", \"C\", \"D\", \"E\") , Value = c(3, 12, 5, 18, 45)) # Create BarPlot and add title# and subtitle to it using labs() function.ggplot(data, aes(x = Name, y = Value)) + geom_bar(stat = \"identity\", fill = \"green\")+ labs(title = \"Title For Barplot\", subtitle = \"This is Subtitle\" )",
"e": 3202,
"s": 2827,
"text": null
},
{
"code": null,
"e": 3210,
"s": 3202,
"text": "Output:"
},
{
"code": null,
"e": 3844,
"s": 3210,
"text": "To change the size of the title and subtitle, we add the theme() function to labs() or ggtitle() function, whatever you used. Here we use labs() function. Inside theme() function, we use plot.title parameter for doing changes in the title of plot and plot.subtitle for doing changes in Subtitle of Plot. We use element_text() function as a value of plot.title and plot.subtitle parameter. We can change the appearance of texts using element_text() function. To change the size of the title and subtitle, we use the size parameter of element_text() function. Here we set the size of the title as 30 and the size of the subtitle as 20."
},
{
"code": null,
"e": 3873,
"s": 3844,
"text": "Below is the implementation:"
},
{
"code": null,
"e": 3875,
"s": 3873,
"text": "R"
},
{
"code": "library(ggplot2) data <- data.frame( Name = c(\"A\", \"B\", \"C\", \"D\", \"E\") , Value=c(3, 12, 5, 18, 45)) # Create a BarPlot with title# of size 30 and subtitle of size 20ggplot(data, aes(x = Name, y = Value)) + geom_bar(stat = \"identity\", fill = \"green\")+ labs(title = \"Title For Barplot\", subtitle = \"This is Subtitle\" )+ theme(plot.title = element_text(size = 30), plot.subtitle = element_text(size = 20) )",
"e": 4315,
"s": 3875,
"text": null
},
{
"code": null,
"e": 4323,
"s": 4315,
"text": "Output:"
},
{
"code": null,
"e": 4362,
"s": 4323,
"text": "Title and Subtitle with Different size"
},
{
"code": null,
"e": 4605,
"s": 4362,
"text": "To change the color of Title and Subtitle, We simply add a color parameter to element_text() function. All others are the same as the above implementation. Here we set the value of the color of title as green and the value of subtitle as red."
},
{
"code": null,
"e": 4607,
"s": 4605,
"text": "R"
},
{
"code": "library(ggplot2) data <- data.frame( Name = c(\"A\", \"B\", \"C\", \"D\", \"E\") , Value = c(3, 12, 5, 18, 45)) # Create a BarPlot with title# and subtitle with different colors.ggplot(data, aes(x = Name, y = Value)) + geom_bar(stat = \"identity\", fill = \"green\")+ labs(title = \"Title For Barplot\", subtitle = \"This is Subtitle\" )+ theme(plot.title = element_text(size = 30, color = \"green\"), plot.subtitle = element_text(size = 20, color = \"red\") )",
"e": 5082,
"s": 4607,
"text": null
},
{
"code": null,
"e": 5090,
"s": 5082,
"text": "Output:"
},
{
"code": null,
"e": 5130,
"s": 5090,
"text": "Title and Subtitle with Different Color"
},
{
"code": null,
"e": 5137,
"s": 5130,
"text": "Picked"
},
{
"code": null,
"e": 5146,
"s": 5137,
"text": "R-ggplot"
},
{
"code": null,
"e": 5157,
"s": 5146,
"text": "R Language"
}
] |
Structures, Unions and Enumerations in C++ | 18 Apr, 2022
In this article, we will discuss structures, unions, and enumerations and their differences.
The structure is a user-defined data type that is available in C++.
Structures are used to combine different types of data types, just like an array is used to combine the same type of data types.
A structure is declared by using the keyword “struct“. When we declare a variable of the structure, we need to write the keyword “struct” in the C language, but in C++ the keyword is not mandatory.
Syntax:
struct structure_name
{
    // Declaration of the struct members
};
Below is the C++ program to demonstrate the use of struct:
C++
// C++ program to demonstrate the
// making of structure
#include <bits/stdc++.h>
using namespace std;

// Define structure
struct GFG {
    int G1;
    char G2;
    float G3;
};

// Driver Code
int main()
{
    // Declaring a Structure
    struct GFG Geek;

    Geek.G1 = 85;
    Geek.G2 = 'G';
    Geek.G3 = 989.45;

    cout << "The value is : " << Geek.G1 << endl;
    cout << "The value is : " << Geek.G2 << endl;
    cout << "The value is : " << Geek.G3 << endl;

    return 0;
}
The value is : 85
The value is : G
The value is : 989.45
Explanation: In the above code, values are assigned to the (G1, G2, G3) fields of the structure GFG and, at the end, each value is printed.
Structure using typedef: typedef is a keyword that is used to assign a new name to any existing data-type. Below is the C++ program illustrating use of struct using typedef:
C++
// C++ program to demonstrate the use
// of struct using typedef
#include <bits/stdc++.h>
using namespace std;

// Declaration of typedef
typedef struct GeekForGeeks {
    int G1;
    char G2;
    float G3;
} GFG;

// Driver Code
int main()
{
    GFG Geek;

    Geek.G1 = 85;
    Geek.G2 = 'G';
    Geek.G3 = 989.45;

    cout << "The value is : " << Geek.G1 << endl;
    cout << "The value is : " << Geek.G2 << endl;
    cout << "The value is : " << Geek.G3 << endl;

    return 0;
}
The value is : 85
The value is : G
The value is : 989.45
Explanation:
In the above code, the keyword “typedef” is used before struct and after the closing bracket of structure, “GFG” is written.
Now create structure variables without using the keyword “struct” and the name of the struct.
A structure instance has been created named “Geek” by just writing “GFG” before it.
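As a side note, in modern C++ (C++11 and later) a using alias achieves the same effect as typedef. A minimal sketch of this alternative (not part of the original article):

// Minimal sketch: 'using' alias as a C++11 alternative to typedef
struct GeekForGeeks {
    int G1;
    char G2;
    float G3;
};

using GFG = GeekForGeeks; // equivalent to: typedef GeekForGeeks GFG;

int main()
{
    GFG Geek; // no 'struct' keyword or typedef needed
    Geek.G1 = 85;
    return 0;
}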
Unions: A union is a type of structure that can be used where the amount of memory used is a key factor.
Similarly to the structure, the union can contain different types of data types.
Each time a new value is assigned to a member of the union, it overwrites the previous value, because all members use the same memory location. As with structures, the keyword “union” is required when declaring union variables in the C language, but in C++ the keyword is not mandatory.
This is most useful when the type of data being passed through functions is unknown, using a union which contains all possible data types can remedy this problem.
It is declared by using the keyword “union“.
Below is the C++ program illustrating the implementation of union:
C++
// C++ program to illustrate the use
// of the unions
#include <iostream>
using namespace std;

// Defining a Union
union GFG {
    int Geek1;
    char Geek2;
    float Geek3;
};

// Driver Code
int main()
{
    // Initializing Union
    union GFG G1, G2, G3;

    G1.Geek1 = 34;
    G2.Geek2 = 34;
    G3.Geek3 = 34.34;

    // Printing values
    cout << "The first value at "
         << "the allocated memory : "
         << G1.Geek1 << endl;

    cout << "The next value stored "
         << "after removing the "
         << "previous value : "
         << G2.Geek2 << endl;

    cout << "The final value "
         << "at the same allocated "
         << "memory space : "
         << G3.Geek3 << endl;

    return 0;
}
The first value at the allocated memory : 34
The next value stored after removing the previous value : "
The final value at the same allocated memory space : 34.34
Explanation: In the above code, the Geek2 variable is assigned an integer (34). But since it is of char type, the value is interpreted as its character equivalent — the double-quote character ‘"’ (ASCII 34). This result is correctly displayed in the Output section.
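To make the shared-memory behaviour concrete, below is a minimal sketch (not from the original article) showing that the size of a union equals the size of its largest member, and that assigning one member overwrites the others:

// Minimal sketch: union members share one memory block
#include <iostream>
using namespace std;

union GFG {
    int Geek1;   // typically 4 bytes
    char Geek2;  // 1 byte
    float Geek3; // typically 4 bytes
};

int main()
{
    // Size of the union equals the size of its largest member
    cout << "sizeof(GFG) = " << sizeof(GFG) << endl; // usually 4

    GFG u;
    u.Geek1 = 34;     // stored in the shared memory
    u.Geek3 = 34.34f; // overwrites the previous value
    cout << u.Geek3 << endl; // 34.34 - only the last-written member is valid

    return 0;
}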
Enums: Enums are user-defined types that consist of named integral constants.
It assigns names to a set of integral constants, making the program easier to read, maintain, and understand.
An Enumeration is declared by using the keyword “enum“.
Below is the C++ program illustrating the use of enum:
C++
// C++ program to illustrate the use
// of the Enums

#include <bits/stdc++.h>
using namespace std;

// Defining an enum
enum GeeksforGeeks { Geek1, Geek2, Geek3 };

GeeksforGeeks G1 = Geek1;
GeeksforGeeks G2 = Geek2;
GeeksforGeeks G3 = Geek3;

// Driver Code
int main()
{
    cout << "The numerical value "
         << "assigned to Geek1 : "
         << G1 << endl;

    cout << "The numerical value "
         << "assigned to Geek2 : "
         << G2 << endl;

    cout << "The numerical value "
         << "assigned to Geek3 : "
         << G3 << endl;

    return 0;
}
The numerical value assigned to Geek1 : 0
The numerical value assigned to Geek2 : 1
The numerical value assigned to Geek3 : 2
Explanation: In the above code, the named constants Geek1, Geek2, and Geek3 are assigned the integral values 0, 1, and 2 respectively, as shown in the output.
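Enum constants can also be given explicit values; any unspecified constant continues counting from the previous one. A minimal sketch (the Level enum here is a hypothetical example, not from the article):

// Minimal sketch: enum constants with explicit values
#include <iostream>
using namespace std;

enum Level { Low = 1, Medium = 5, High }; // High continues from Medium, so High = 6

int main()
{
    cout << Low << " " << Medium << " " << High << endl; // prints: 1 5 6
    return 0;
}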
eshitadutta
abhishek0719kadiyan
RishabhPrabhu
C-Struct-Union-Enum
cpp-struct
Structure & Union
C++
C++ Programs
cpp-struct
CPP
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Sorting a vector in C++
Polymorphism in C++
std::string class in C++
Pair in C++ Standard Template Library (STL)
Friend class and function in C++
Header files in C/C++ and its uses
Sorting a Map by value in C++ STL
Program to print ASCII Value of a character
How to return multiple values from a function in C or C++?
C++ program for hashing with chaining | [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n18 Apr, 2022"
},
{
"code": null,
"e": 147,
"s": 54,
"text": "In this article, we will discuss structures, unions, and enumerations and their differences."
},
{
"code": null,
"e": 215,
"s": 147,
"text": "The structure is a user-defined data type that is available in C++."
},
{
"code": null,
"e": 344,
"s": 215,
"text": "Structures are used to combine different types of data types, just like an array is used to combine the same type of data types."
},
{
"code": null,
"e": 535,
"s": 344,
"text": "A structure is declared by using the keyword “struct“. When we declare a variable of the structure we need to write the keyword “struct in C language but for C++ the keyword is not mandatory"
},
{
"code": null,
"e": 543,
"s": 535,
"text": "Syntax:"
},
{
"code": null,
"e": 587,
"s": 543,
"text": "struct \n{\n // Declaration of the struct\n}"
},
{
"code": null,
"e": 646,
"s": 587,
"text": "Below is the C++ program to demonstrate the use of struct:"
},
{
"code": null,
"e": 650,
"s": 646,
"text": "C++"
},
{
"code": "// C++ program to demonstrate the// making of structure#include <bits/stdc++.h>using namespace std; // Define structurestruct GFG { int G1; char G2; float G3;}; // Driver Codeint main(){ // Declaring a Structure struct GFG Geek; Geek.G1 = 85; Geek.G2 = 'G'; Geek.G3 = 989.45; cout << \"The value is : \" << Geek.G1 << endl; cout << \"The value is : \" << Geek.G2 << endl; cout << \"The value is : \" << Geek.G3 << endl; return 0;}",
"e": 1136,
"s": 650,
"text": null
},
{
"code": null,
"e": 1194,
"s": 1136,
"text": "The value is : 85\nThe value is : G\nThe value is : 989.45\n"
},
{
"code": null,
"e": 1350,
"s": 1194,
"text": "Explanation: In the above code, that values are assigned to (G1, G2, G3) fields of the structure employee and at the end, the value of “salary” is printed."
},
{
"code": null,
"e": 1524,
"s": 1350,
"text": "Structure using typedef: typedef is a keyword that is used to assign a new name to any existing data-type. Below is the C++ program illustrating use of struct using typedef:"
},
{
"code": null,
"e": 1528,
"s": 1524,
"text": "C++"
},
{
"code": "// C++ program to demonstrate the use// of struct using typedef#include <bits/stdc++.h>using namespace std; // Declaration of typedeftypedef struct GeekForGeeks { int G1; char G2; float G3; } GFG; // Driver Codeint main(){ GFG Geek; Geek.G1 = 85; Geek.G2 = 'G'; Geek.G3 = 989.45; cout << \"The value is : \" << Geek.G1 << endl; cout << \"The value is : \" << Geek.G2 << endl; cout << \"The value is : \" << Geek.G3 << endl; return 0;}",
"e": 2019,
"s": 1528,
"text": null
},
{
"code": null,
"e": 2077,
"s": 2019,
"text": "The value is : 85\nThe value is : G\nThe value is : 989.45\n"
},
{
"code": null,
"e": 2090,
"s": 2077,
"text": "Explanation:"
},
{
"code": null,
"e": 2215,
"s": 2090,
"text": "In the above code, the keyword “typedef” is used before struct and after the closing bracket of structure, “GFG” is written."
},
{
"code": null,
"e": 2309,
"s": 2215,
"text": "Now create structure variables without using the keyword “struct” and the name of the struct."
},
{
"code": null,
"e": 2393,
"s": 2309,
"text": "A structure instance has been created named “Geek” by just writing “GFG” before it."
},
{
"code": null,
"e": 2499,
"s": 2393,
"text": "Unions: A union is a type of structure that can be used where the amount of memory used is a key factor. "
},
{
"code": null,
"e": 2580,
"s": 2499,
"text": "Similarly to the structure, the union can contain different types of data types."
},
{
"code": null,
"e": 2750,
"s": 2580,
"text": "Each time a new variable is initialized from the union it overwrites the previous in C language but in C++ we also don’t need this keyword and uses that memory location."
},
{
"code": null,
"e": 2913,
"s": 2750,
"text": "This is most useful when the type of data being passed through functions is unknown, using a union which contains all possible data types can remedy this problem."
},
{
"code": null,
"e": 2958,
"s": 2913,
"text": "It is declared by using the keyword “union“."
},
{
"code": null,
"e": 3025,
"s": 2958,
"text": "Below is the C++ program illustrating the implementation of union:"
},
{
"code": null,
"e": 3029,
"s": 3025,
"text": "C++"
},
{
"code": "// C++ program to illustrate the use// of the unions#include <iostream>using namespace std; // Defining a Unionunion GFG { int Geek1; char Geek2; float Geek3;}; // Driver Codeint main(){ // Initializing Union union GFG G1, G2, G3; G1.Geek1 = 34; G2.Geek2 = 34; G3.Geek3 = 34.34; // Printing values cout << \"The first value at \" << \"the allocated memory : \" << G1.Geek1 << endl; cout << \"The next value stored \" << \"after removing the \" << \"previous value : \" << G2.Geek2 << endl; cout << \"The Final value value \" << \"at the same allocated \" << \"memory space : \" << G3.Geek3 << endl; return 0;}",
"e": 3730,
"s": 3029,
"text": null
},
{
"code": null,
"e": 3901,
"s": 3730,
"text": "The first value at the allocated memory : 34\nThe next value stored after removing the previous value : \"\nThe Final value value at the same allocated memory space : 34.34\n"
},
{
"code": null,
"e": 4134,
"s": 3901,
"text": "Explanation: In the above code, Geek2 variable is assigned an integer (34). But by being of char type, the value is transformed through coercion into its char equivalent (“). This result is correctly displayed in the Output section."
},
{
"code": null,
"e": 4212,
"s": 4134,
"text": "Enums: Enums are user-defined types that consist of named integral constants."
},
{
"code": null,
"e": 4320,
"s": 4212,
"text": "It helps to assign constants to a set of names to make the program easier to read, maintain and understand."
},
{
"code": null,
"e": 4376,
"s": 4320,
"text": "An Enumeration is declared by using the keyword “enum“."
},
{
"code": null,
"e": 4431,
"s": 4376,
"text": "Below is the C++ program illustrating the use of enum:"
},
{
"code": null,
"e": 4435,
"s": 4431,
"text": "C++"
},
{
"code": "// C++ program to illustrate the use// of the Enums #include <bits/stdc++.h>using namespace std; // Defining an enumenum GeeksforGeeks { Geek1, Geek2, Geek3 }; GeeksforGeeks G1 = Geek1;GeeksforGeeks G2 = Geek2;GeeksforGeeks G3 = Geek3; // Driver Codeint main(){ cout << \"The numerical value \" << \"assigned to Geek1 : \" << G1 << endl; cout << \"The numerical value \" << \"assigned to Geek2 : \" << G2 << endl; cout << \"The numerical value \" << \"assigned to Geek3 : \" << G3 << endl; return 0;}",
"e": 5028,
"s": 4435,
"text": null
},
{
"code": null,
"e": 5155,
"s": 5028,
"text": "The numerical value assigned to Geek1 : 0\nThe numerical value assigned to Geek2 : 1\nThe numerical value assigned to Geek3 : 2\n"
},
{
"code": null,
"e": 5322,
"s": 5155,
"text": "Explanation: In the above code, the named constants like Geek1, Geek2, and Geek3 have assigned integral values such as 0, 1, 2 respectively while the output is given."
},
{
"code": null,
"e": 5334,
"s": 5322,
"text": "eshitadutta"
},
{
"code": null,
"e": 5354,
"s": 5334,
"text": "abhishek0719kadiyan"
},
{
"code": null,
"e": 5368,
"s": 5354,
"text": "RishabhPrabhu"
},
{
"code": null,
"e": 5388,
"s": 5368,
"text": "C-Struct-Union-Enum"
},
{
"code": null,
"e": 5399,
"s": 5388,
"text": "cpp-struct"
},
{
"code": null,
"e": 5417,
"s": 5399,
"text": "Structure & Union"
},
{
"code": null,
"e": 5421,
"s": 5417,
"text": "C++"
},
{
"code": null,
"e": 5434,
"s": 5421,
"text": "C++ Programs"
},
{
"code": null,
"e": 5445,
"s": 5434,
"text": "cpp-struct"
},
{
"code": null,
"e": 5449,
"s": 5445,
"text": "CPP"
},
{
"code": null,
"e": 5547,
"s": 5449,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 5571,
"s": 5547,
"text": "Sorting a vector in C++"
},
{
"code": null,
"e": 5591,
"s": 5571,
"text": "Polymorphism in C++"
},
{
"code": null,
"e": 5616,
"s": 5591,
"text": "std::string class in C++"
},
{
"code": null,
"e": 5660,
"s": 5616,
"text": "Pair in C++ Standard Template Library (STL)"
},
{
"code": null,
"e": 5693,
"s": 5660,
"text": "Friend class and function in C++"
},
{
"code": null,
"e": 5728,
"s": 5693,
"text": "Header files in C/C++ and its uses"
},
{
"code": null,
"e": 5762,
"s": 5728,
"text": "Sorting a Map by value in C++ STL"
},
{
"code": null,
"e": 5806,
"s": 5762,
"text": "Program to print ASCII Value of a character"
},
{
"code": null,
"e": 5865,
"s": 5806,
"text": "How to return multiple values from a function in C or C++?"
}
] |
Python program to print the octal value of the numbers from 1 to N | 24 Jan, 2021
Given a number N, the task is to write a Python program to print the octal value of the numbers from 1 to N.
Examples:
Input: 3
Output: 1
2
3
Input: 11
Output: 1
2
3
4
5
6
7
10
11
12
13
Approach:
We will take the value of N as input.
Then, we will run the for loop from 1 to N+1 and pass each “i” through the oct() function.
Print each octal value.
Note: The oct() function is one of the built-in methods in Python 3. The oct() method takes an integer and returns its octal representation as a string prefixed with “0o”; the program below strips this prefix using the slice [2:].
Below are the implementations based on the above approach:
Python3
# Python program to print the octal value of the
# numbers from 1 to N

# Function to find the octal value of the numbers
# in the range 1 to N
def octal_in_range(n):

    # For loop traversing from 1 to N (Both Inclusive)
    for i in range(1, n + 1):

        # Printing octal value of i
        print(oct(i)[2:])

# Calling the function with input 3
print("Input: 3")
octal_in_range(3)

# Calling the function with input 11
print("Input: 11")
octal_in_range(11)
Output:
Input: 3
1
2
3
Input: 11
1
2
3
4
5
6
7
10
11
12
13
Python-Built-in-functions
Technical Scripter 2020
Python
Python Programs
Technical Scripter
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here. | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n24 Jan, 2021"
},
{
"code": null,
"e": 137,
"s": 28,
"text": "Given a number N, the task is to write a Python program to print the octal value of the numbers from 1 to N."
},
{
"code": null,
"e": 147,
"s": 137,
"text": "Examples:"
},
{
"code": null,
"e": 319,
"s": 147,
"text": "Input: 3\nOutput: 1\n 2\n 3\n \nInput: 11\nOutput: 1\n 2\n 3\n 4\n 5\n 6\n 7\n 10\n 11\n 12\n 13"
},
{
"code": null,
"e": 329,
"s": 319,
"text": "Approach:"
},
{
"code": null,
"e": 367,
"s": 329,
"text": "We will take the value of N as input."
},
{
"code": null,
"e": 458,
"s": 367,
"text": "Then, we will run the for loop from 1 to N+1 and traverse each “i” through oct() function."
},
{
"code": null,
"e": 482,
"s": 458,
"text": "Print each octal value."
},
{
"code": null,
"e": 642,
"s": 482,
"text": "Note: The oct() function is one of the built-in methods in Python3. The oct() method takes an integer and returns its octal representation in a string format. "
},
{
"code": null,
"e": 701,
"s": 642,
"text": "Below are the implementations based on the above approach:"
},
{
"code": null,
"e": 709,
"s": 701,
"text": "Python3"
},
{
"code": "# Python program to print the octal value of the# numbers from 1 to N # Function to find the octal value of the numbers# in the range 1 to Ndef octal_in_range(n): # For loop traversing from 1 to N (Both Inclusive) for i in range(1, n+1): # Printing octal value of i print(oct(i)[2:]) # Calling the function with input 3print(\"Input: 3\")octal_in_range(3) # Calling the function with input 11print(\"Input: 11\")octal_in_range(11)",
"e": 1165,
"s": 709,
"text": null
},
{
"code": null,
"e": 1173,
"s": 1165,
"text": "Output:"
},
{
"code": null,
"e": 1224,
"s": 1173,
"text": "Input: 3\n1\n2\n3\nInput: 11\n1\n2\n3\n4\n5\n6\n7\n10\n11\n12\n13"
},
{
"code": null,
"e": 1250,
"s": 1224,
"text": "Python-Built-in-functions"
},
{
"code": null,
"e": 1274,
"s": 1250,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 1281,
"s": 1274,
"text": "Python"
},
{
"code": null,
"e": 1297,
"s": 1281,
"text": "Python Programs"
},
{
"code": null,
"e": 1316,
"s": 1297,
"text": "Technical Scripter"
}
] |
Graph Theory - Types of Graphs | There are various types of graphs depending upon the number of vertices, number of edges, interconnectivity, and their overall structure. We will discuss only a certain few important types of graphs in this chapter.
A graph having no edges is called a Null Graph.
In the above graph, there are three vertices named ‘a’, ‘b’, and ‘c’, but there are no edges among them. Hence it is a Null Graph.
A graph with only one vertex is called a Trivial Graph.
In the above shown graph, there is only one vertex ‘a’ with no other edges. Hence it is a Trivial graph.
A non-directed graph contains edges but the edges are not directed ones.
In this graph, ‘a’, ‘b’, ‘c’, ‘d’, ‘e’, ‘f’, ‘g’ are the vertices, and ‘ab’, ‘bc’, ‘cd’, ‘da’, ‘ag’, ‘gf’, ‘ef’ are the edges of the graph. Since it is a non-directed graph, the edges ‘ab’ and ‘ba’ are same. Similarly other edges also considered in the same way.
In a directed graph, each edge has a direction.
In the above graph, we have seven vertices ‘a’, ‘b’, ‘c’, ‘d’, ‘e’, ‘f’, and ‘g’, and eight edges ‘ab’, ‘cb’, ‘dc’, ‘ad’, ‘ec’, ‘fe’, ‘gf’, and ‘ga’. As it is a directed graph, each edge bears an arrow mark that shows its direction. Note that in a directed graph, ‘ab’ is different from ‘ba’.
A graph with no loops and no parallel edges is called a simple graph.
The maximum number of edges possible in a single graph with ‘n’ vertices is nC2, where nC2 = n(n – 1)/2.

The number of simple graphs possible with ‘n’ vertices = 2^(nC2) = 2^(n(n-1)/2).
In the following graph, there are 3 vertices with 3 edges which is maximum excluding the parallel edges and loops. This can be proved by using the above formulae.
The maximum number of edges with n=3 vertices −
nC2 = n(n–1)/2
= 3(3–1)/2
= 6/2
= 3 edges
The maximum number of simple graphs with n=3 vertices −
2nC2 = 2n(n-1)/2
= 23(3-1)/2
= 23
= 8
These 8 graphs are as shown below −
A graph G is said to be connected if there exists a path between every pair of vertices. There should be at least one edge incident on every vertex of the graph, so that each vertex is connected to some other vertex at the other side of an edge.
In the following graph, each vertex has its own edge connected to other edge. Hence it is a connected graph.
A graph G is disconnected, if it does not contain at least two connected vertices.
The following graph is an example of a Disconnected Graph, where there are two components, one with ‘a’, ‘b’, ‘c’, ‘d’ vertices and another with ‘e’, ’f’, ‘g’, ‘h’ vertices.
The two components are independent and not connected to each other. Hence it is called disconnected graph.
In this example, there are two independent components, a-b-f-e and c-d, which are not connected to each other. Hence this is a disconnected graph.
A graph G is said to be regular, if all its vertices have the same degree. In a graph, if the degree of each vertex is ‘k’, then the graph is called a ‘k-regular graph’.
In the following graphs, all the vertices have the same degree. So these graphs are called regular graphs.
In both the graphs, all the vertices have degree 2. They are called 2-Regular Graphs.
A simple graph with ‘n’ mutually adjacent vertices is called a complete graph and it is denoted by ‘Kn’. In such a graph, every vertex has edges with all other vertices.
In other words, if a vertex is connected to all other vertices in a graph, then it is called a complete graph.
In the following graphs, each vertex in the graph is connected with all the remaining vertices in the graph except by itself.
In graph I,
In graph II,
A simple graph with ‘n’ vertices (n >= 3) and ‘n’ edges is called a cycle graph if all its edges form a cycle of length ‘n’.
If the degree of each vertex in the graph is two, then it is called a Cycle Graph.
Notation − Cn
Take a look at the following graphs −
Graph I has 3 vertices with 3 edges which is forming a cycle ‘ab-bc-ca’.

Graph II has 4 vertices with 4 edges which is forming a cycle ‘pq-qs-sr-rp’.

Graph III has 5 vertices with 5 edges which is forming a cycle ‘ik-km-ml-lj-ji’.
Hence all the given graphs are cycle graphs.
A wheel graph is obtained from a cycle graph Cn-1 by adding a new vertex. That new vertex is called a Hub which is connected to all the vertices of Cn.
Notation − Wn
No. of edges in Wn = No. of edges from hub to all other vertices +
No. of edges from all other nodes in cycle graph without a hub.
= (n–1) + (n–1)
= 2(n–1)
Take a look at the following graphs. They are all wheel graphs.
In graph I, it is obtained from C3 by adding a vertex at the middle named as ‘d’. It is denoted as W4.
Number of edges in W4 = 2(n-1) = 2(3) = 6
In graph II, it is obtained from C4 by adding a vertex at the middle named as ‘t’. It is denoted as W5.
Number of edges in W5 = 2(n-1) = 2(4) = 8
In graph III, it is obtained from C6 by adding a vertex at the middle named as ‘o’. It is denoted as W7.
Number of edges in W7 = 2(n-1) = 2(6) = 12
A graph with at least one cycle is called a cyclic graph.
In the above example graph, we have two cycles a-b-c-d-a and c-f-g-e-c. Hence it is called a cyclic graph.
A graph with no cycles is called an acyclic graph.
In the above example graph, we do not have any cycles. Hence it is a non-cyclic graph.
A simple graph G = (V, E) with vertex partition V = {V1, V2} is called a bipartite graph if every edge of E joins a vertex in V1 to a vertex in V2.
In general, a Bipartite graph has two sets of vertices, let us say, V1 and V2, and if an edge is drawn, it should connect any vertex in set V1 to any vertex in set V2.
In this graph, you can observe two sets of vertices − V1 and V2. Here, two edges named ‘ae’ and ‘bd’ are connecting the vertices of two sets V1 and V2.
A bipartite graph ‘G’, G = (V, E) with partition V = {V1, V2} is said to be a complete bipartite graph if every vertex in V1 is connected to every vertex of V2.
In general, a complete bipartite graph connects each vertex from set V1 to each vertex from set V2.
The following graph is a complete bipartite graph because it has edges connecting each vertex from set V1 to each vertex from set V2.
If |V1| = m and |V2| = n, then the complete bipartite graph is denoted by Km, n.
Km,n has (m+n) vertices and (mn) edges.

Km,n is a regular graph if m=n.
In general, a complete bipartite graph is not a complete graph.
Km,n is a complete graph if m=n=1.
The maximum number of edges in a bipartite graph with n vertices is ⌊n²/4⌋.

If n = 10, K5,5 = ⌊n²/4⌋ = ⌊10²/4⌋ = 25

Similarly, K6,4 = 24

K7,3 = 21

K8,2 = 16

K9,1 = 9
If n = 9, K5,4 = ⌊n²/4⌋ = ⌊9²/4⌋ = 20

Similarly, K6,3 = 18

K7,2 = 14

K8,1 = 8
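These edge counts can be verified programmatically. Below is a minimal C++ sketch (not part of the original text) that computes ⌊n²/4⌋ and the edge count m(n−m) of each complete bipartite graph Km,n−m:

// Minimal sketch: verifying the bipartite edge-count formulas
#include <iostream>
using namespace std;

int main()
{
    int n = 10;

    // Maximum number of edges in a bipartite graph on n vertices
    cout << "floor(n^2/4) = " << (n * n) / 4 << endl; // 25

    // K(m, n-m) has m * (n - m) edges
    for (int m = 1; m < n; ++m)
        cout << "K(" << m << "," << n - m << ") has "
             << m * (n - m) << " edges" << endl;

    return 0;
}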
‘G’ is a bipartite graph if ‘G’ has no cycles of odd length. A special case of bipartite graph is a star graph.
A complete bipartite graph of the form K1, n-1 is a star graph with n-vertices. A star graph is a complete bipartite graph if a single vertex belongs to one set and all the remaining vertices belong to the other set.
In the above graphs, out of ‘n’ vertices, all the ‘n–1’ vertices are connected to a single vertex. Hence it is in the form of K1, n-1 which are star graphs.
Let 'G−' be a simple graph with the same vertices as ‘G’, where an edge {U, V} is present in 'G−' if that edge is not present in G. It means two vertices are adjacent in 'G−' if the two vertices are not adjacent in G.
If the edges that exist in graph I are absent in another graph II, and if both graph I and graph II are combined together to form a complete graph, then graph I and graph II are called complements of each other.
In the following example, graph-I has two edges ‘cd’ and ‘bd’. Its complement graph-II has four edges.
Note that the edges in graph-I are not present in graph-II and vice versa. Hence, the combination of both the graphs gives a complete graph of ‘n’ vertices.
Note − A combination of two complementary graphs gives a complete graph.
If ‘G’ is any simple graph, then
|E(G)| + |E('G-')| = |E(Kn)|, where n = number of vertices in the graph.
Let ‘G’ be a simple graph with nine vertices and twelve edges, find the number of edges in 'G-'.
You have, |E(G)| + |E('G-')| = |E(Kn)|
12 + |E('G-')| = 9(9–1)/2 = 36
|E('G-')| = 24
‘G’ is a simple graph with 40 edges and its complement 'G−' has 38 edges. Find the number of vertices in the graph G or 'G−'.
Let the number of vertices in the graph be ‘n’.
We have, |E(G)| + |E('G-')| = |E(Kn)|
40 + 38 = n(n–1)/2
156 = n(n-1)
13(12) = n(n-1)
n = 13
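The same answer can be found by a small search over n. Below is a minimal C++ sketch (an assumed helper, not from the original text) that finds the n satisfying n(n–1)/2 = 40 + 38:

// Minimal sketch: recovering n from |E(G)| + |E(G-)| = n(n-1)/2
#include <iostream>
using namespace std;

int main()
{
    int total = 40 + 38;                 // |E(G)| + |E(G-)| = 78
    for (int n = 1; n * (n - 1) / 2 <= total; ++n)
        if (n * (n - 1) / 2 == total)
            cout << "n = " << n << endl; // prints: n = 13
    return 0;
}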
| [
{
"code": null,
"e": 2182,
"s": 1966,
"text": "There are various types of graphs depending upon the number of vertices, number of edges, interconnectivity, and their overall structure. We will discuss only a certain few important types of graphs in this chapter."
},
{
"code": null,
"e": 2230,
"s": 2182,
"text": "A graph having no edges is called a Null Graph."
},
{
"code": null,
"e": 2361,
"s": 2230,
"text": "In the above graph, there are three vertices named ‘a’, ‘b’, and ‘c’, but there are no edges among them. Hence it is a Null Graph."
},
{
"code": null,
"e": 2417,
"s": 2361,
"text": "A graph with only one vertex is called a Trivial Graph."
},
{
"code": null,
"e": 2522,
"s": 2417,
"text": "In the above shown graph, there is only one vertex ‘a’ with no other edges. Hence it is a Trivial graph."
},
{
"code": null,
"e": 2595,
"s": 2522,
"text": "A non-directed graph contains edges but the edges are not directed ones."
},
{
"code": null,
"e": 2858,
"s": 2595,
"text": "In this graph, ‘a’, ‘b’, ‘c’, ‘d’, ‘e’, ‘f’, ‘g’ are the vertices, and ‘ab’, ‘bc’, ‘cd’, ‘da’, ‘ag’, ‘gf’, ‘ef’ are the edges of the graph. Since it is a non-directed graph, the edges ‘ab’ and ‘ba’ are same. Similarly other edges also considered in the same way."
},
{
"code": null,
"e": 2906,
"s": 2858,
"text": "In a directed graph, each edge has a direction."
},
{
"code": null,
"e": 3199,
"s": 2906,
"text": "In the above graph, we have seven vertices ‘a’, ‘b’, ‘c’, ‘d’, ‘e’, ‘f’, and ‘g’, and eight edges ‘ab’, ‘cb’, ‘dc’, ‘ad’, ‘ec’, ‘fe’, ‘gf’, and ‘ga’. As it is a directed graph, each edge bears an arrow mark that shows its direction. Note that in a directed graph, ‘ab’ is different from ‘ba’."
},
{
"code": null,
"e": 3269,
"s": 3199,
"text": "A graph with no loops and no parallel edges is called a simple graph."
},
{
"code": null,
"e": 3373,
"s": 3269,
"text": "The maximum number of edges possible in a single graph with ‘n’ vertices is nC2 where nC2 = n(n – 1)/2."
},
{
"code": null,
"e": 3477,
"s": 3373,
"text": "The maximum number of edges possible in a single graph with ‘n’ vertices is nC2 where nC2 = n(n – 1)/2."
},
{
"code": null,
"e": 3552,
"s": 3477,
"text": "The number of simple graphs possible with ‘n’ vertices = 2nc2 = 2n(n-1)/2."
},
{
"code": null,
"e": 3627,
"s": 3552,
"text": "The number of simple graphs possible with ‘n’ vertices = 2nc2 = 2n(n-1)/2."
},
{
"code": null,
"e": 3790,
"s": 3627,
"text": "In the following graph, there are 3 vertices with 3 edges which is maximum excluding the parallel edges and loops. This can be proved by using the above formulae."
},
{
"code": null,
"e": 3838,
"s": 3790,
"text": "The maximum number of edges with n=3 vertices −"
},
{
"code": null,
"e": 3890,
"s": 3838,
"text": "nC2 = n(n–1)/2\n = 3(3–1)/2\n = 6/2\n = 3 edges\n"
},
{
"code": null,
"e": 3946,
"s": 3890,
"text": "The maximum number of simple graphs with n=3 vertices −"
},
{
"code": null,
"e": 3994,
"s": 3946,
"text": "2nC2 = 2n(n-1)/2\n = 23(3-1)/2\n = 23\n = 8\n"
},
{
"code": null,
"e": 4030,
"s": 3994,
"text": "These 8 graphs are as shown below −"
},
{
"code": null,
"e": 4276,
"s": 4030,
"text": "A graph G is said to be connected if there exists a path between every pair of vertices. There should be at least one edge for every vertex in the graph. So that we can say that it is connected to some other vertex at the other side of the edge."
},
{
"code": null,
"e": 4385,
"s": 4276,
"text": "In the following graph, each vertex has its own edge connected to other edge. Hence it is a connected graph."
},
{
"code": null,
"e": 4468,
"s": 4385,
"text": "A graph G is disconnected, if it does not contain at least two connected vertices."
},
{
"code": null,
"e": 4642,
"s": 4468,
"text": "The following graph is an example of a Disconnected Graph, where there are two components, one with ‘a’, ‘b’, ‘c’, ‘d’ vertices and another with ‘e’, ’f’, ‘g’, ‘h’ vertices."
},
{
"code": null,
"e": 4749,
"s": 4642,
"text": "The two components are independent and not connected to each other. Hence it is called disconnected graph."
},
{
"code": null,
"e": 4896,
"s": 4749,
"text": "In this example, there are two independent components, a-b-f-e and c-d, which are not connected to each other. Hence this is a disconnected graph."
},
{
"code": null,
"e": 5066,
"s": 4896,
"text": "A graph G is said to be regular, if all its vertices have the same degree. In a graph, if the degree of each vertex is ‘k’, then the graph is called a ‘k-regular graph’."
},
{
"code": null,
"e": 5173,
"s": 5066,
"text": "In the following graphs, all the vertices have the same degree. So these graphs are called regular graphs."
},
{
"code": null,
"e": 5259,
"s": 5173,
"text": "In both the graphs, all the vertices have degree 2. They are called 2-Regular Graphs."
},
{
"code": null,
"e": 5452,
"s": 5259,
"text": "A simple graph with ‘n’ mutual vertices is called a complete graph and it is denoted by ‘Kn’. In the graph, a vertex should have edges with all other vertices, then it called a complete graph."
},
{
"code": null,
"e": 5563,
"s": 5452,
"text": "In other words, if a vertex is connected to all other vertices in a graph, then it is called a complete graph."
},
{
"code": null,
"e": 5689,
"s": 5563,
"text": "In the following graphs, each vertex in the graph is connected with all the remaining vertices in the graph except by itself."
},
{
"code": null,
"e": 5701,
"s": 5689,
"text": "In graph I,"
},
{
"code": null,
"e": 5714,
"s": 5701,
"text": "In graph II,"
},
{
"code": null,
"e": 5839,
"s": 5714,
"text": "A simple graph with ‘n’ vertices (n >= 3) and ‘n’ edges is called a cycle graph if all its edges form a cycle of length ‘n’."
},
{
"code": null,
"e": 5922,
"s": 5839,
"text": "If the degree of each vertex in the graph is two, then it is called a Cycle Graph."
},
{
"code": null,
"e": 5936,
"s": 5922,
"text": "Notation − Cn"
},
{
"code": null,
"e": 5974,
"s": 5936,
"text": "Take a look at the following graphs −"
},
{
"code": null,
"e": 6047,
"s": 5974,
"text": "Graph I has 3 vertices with 3 edges which is forming a cycle ‘ab-bc-ca’."
},
{
"code": null,
"e": 6120,
"s": 6047,
"text": "Graph I has 3 vertices with 3 edges which is forming a cycle ‘ab-bc-ca’."
},
{
"code": null,
"e": 6197,
"s": 6120,
"text": "Graph II has 4 vertices with 4 edges which is forming a cycle ‘pq-qs-sr-rp’."
},
{
"code": null,
"e": 6274,
"s": 6197,
"text": "Graph II has 4 vertices with 4 edges which is forming a cycle ‘pq-qs-sr-rp’."
},
{
"code": null,
"e": 6355,
"s": 6274,
"text": "Graph III has 5 vertices with 5 edges which is forming a cycle ‘ik-km-ml-lj-ji’."
},
{
"code": null,
"e": 6436,
"s": 6355,
"text": "Graph III has 5 vertices with 5 edges which is forming a cycle ‘ik-km-ml-lj-ji’."
},
{
"code": null,
"e": 6481,
"s": 6436,
"text": "Hence all the given graphs are cycle graphs."
},
{
"code": null,
"e": 6633,
"s": 6481,
"text": "A wheel graph is obtained from a cycle graph Cn-1 by adding a new vertex. That new vertex is called a Hub which is connected to all the vertices of Cn."
},
{
"code": null,
"e": 6647,
"s": 6633,
"text": "Notation − Wn"
},
{
"code": null,
"e": 6867,
"s": 6647,
"text": "No. of edges in Wn = No. of edges from hub to all other vertices +\n No. of edges from all other nodes in cycle graph without a hub.\n = (n–1) + (n–1)\n = 2(n–1)\n"
},
{
"code": null,
"e": 6931,
"s": 6867,
"text": "Take a look at the following graphs. They are all wheel graphs."
},
{
"code": null,
"e": 7035,
"s": 6931,
"text": "In graph I, it is obtained from C3 by adding an vertex at the middle named as ‘d’. It is denoted as W4."
},
{
"code": null,
"e": 7078,
"s": 7035,
"text": "Number of edges in W4 = 2(n-1) = 2(3) = 6\n"
},
{
"code": null,
"e": 7182,
"s": 7078,
"text": "In graph II, it is obtained from C4 by adding a vertex at the middle named as ‘t’. It is denoted as W5."
},
{
"code": null,
"e": 7225,
"s": 7182,
"text": "Number of edges in W5 = 2(n-1) = 2(4) = 8\n"
},
{
"code": null,
"e": 7330,
"s": 7225,
"text": "In graph III, it is obtained from C6 by adding a vertex at the middle named as ‘o’. It is denoted as W7."
},
{
"code": null,
"e": 7374,
"s": 7330,
"text": "Number of edges in W4 = 2(n-1) = 2(6) = 12\n"
},
{
"code": null,
"e": 7432,
"s": 7374,
"text": "A graph with at least one cycle is called a cyclic graph."
},
{
"code": null,
"e": 7539,
"s": 7432,
"text": "In the above example graph, we have two cycles a-b-c-d-a and c-f-g-e-c. Hence it is called a cyclic graph."
},
{
"code": null,
"e": 7590,
"s": 7539,
"text": "A graph with no cycles is called an acyclic graph."
},
{
"code": null,
"e": 7677,
"s": 7590,
"text": "In the above example graph, we do not have any cycles. Hence it is a non-cyclic graph."
},
{
"code": null,
"e": 7825,
"s": 7677,
"text": "A simple graph G = (V, E) with vertex partition V = {V1, V2} is called a bipartite graph if every edge of E joins a vertex in V1 to a vertex in V2."
},
{
"code": null,
"e": 7993,
"s": 7825,
"text": "In general, a Bipertite graph has two sets of vertices, let us say, V1 and V2, and if an edge is drawn, it should connect any vertex in set V1 to any vertex in set V2."
},
{
"code": null,
"e": 8145,
"s": 7993,
"text": "In this graph, you can observe two sets of vertices − V1 and V2. Here, two edges named ‘ae’ and ‘bd’ are connecting the vertices of two sets V1 and V2."
},
{
"code": null,
"e": 8306,
"s": 8145,
"text": "A bipartite graph ‘G’, G = (V, E) with partition V = {V1, V2} is said to be a complete bipartite graph if every vertex in V1 is connected to every vertex of V2."
},
{
"code": null,
"e": 8406,
"s": 8306,
"text": "In general, a complete bipartite graph connects each vertex from set V1 to each vertex from set V2."
},
{
"code": null,
"e": 8540,
"s": 8406,
"text": "The following graph is a complete bipartite graph because it has edges connecting each vertex from set V1 to each vertex from set V2."
},
{
"code": null,
"e": 8621,
"s": 8540,
"text": "If |V1| = m and |V2| = n, then the complete bipartite graph is denoted by Km, n."
},
{
"code": null,
"e": 8661,
"s": 8621,
"text": "Km,n has (m+n) vertices and (mn) edges."
},
{
"code": null,
"e": 8701,
"s": 8661,
"text": "Km,n has (m+n) vertices and (mn) edges."
},
{
"code": null,
"e": 8733,
"s": 8701,
"text": "Km,n is a regular graph if m=n."
},
{
"code": null,
"e": 8765,
"s": 8733,
"text": "Km,n is a regular graph if m=n."
},
{
"code": null,
"e": 8829,
"s": 8765,
"text": "In general, a complete bipartite graph is not a complete graph."
},
{
"code": null,
"e": 8864,
"s": 8829,
"text": "Km,n is a complete graph if m=n=1."
},
{
"code": null,
"e": 8933,
"s": 8864,
"text": "The maximum number of edges in a bipartite graph with n vertices is "
},
{
"code": null,
"e": 8979,
"s": 8933,
"text": "If n=10, k5, 5= ⌊\nn2\n/\n4\n⌋ = ⌊\n102\n/\n4\n⌋ = 25"
},
{
"code": null,
"e": 8998,
"s": 8979,
"text": "Similarly K6, 4=24"
},
{
"code": null,
"e": 9007,
"s": 8998,
"text": "K7, 3=21"
},
{
"code": null,
"e": 9016,
"s": 9007,
"text": "K8, 2=16"
},
{
"code": null,
"e": 9024,
"s": 9016,
"text": "K9, 1=9"
},
{
"code": null,
"e": 9069,
"s": 9024,
"text": "If n=9, k5, 4 = ⌊\nn2\n/\n4\n⌋ = ⌊\n92\n/\n4\n⌋ = 20"
},
{
"code": null,
"e": 9088,
"s": 9069,
"text": "Similarly K6, 3=18"
},
{
"code": null,
"e": 9097,
"s": 9088,
"text": "K7, 2=14"
},
{
"code": null,
"e": 9105,
"s": 9097,
"text": "K8, 1=8"
},
{
"code": null,
"e": 9217,
"s": 9105,
"text": "‘G’ is a bipartite graph if ‘G’ has no cycles of odd length. A special case of bipartite graph is a star graph."
},
{
"code": null,
"e": 9434,
"s": 9217,
"text": "A complete bipartite graph of the form K1, n-1 is a star graph with n-vertices. A star graph is a complete bipartite graph if a single vertex belongs to one set and all the remaining vertices belong to the other set."
},
{
"code": null,
"e": 9591,
"s": 9434,
"text": "In the above graphs, out of ‘n’ vertices, all the ‘n–1’ vertices are connected to a single vertex. Hence it is in the form of K1, n-1 which are star graphs."
},
{
"code": null,
"e": 9811,
"s": 9591,
"text": "Let 'G−' be a simple graph with some vertices as that of ‘G’ and an edge {U, V} is present in 'G−', if the edge is not present in G. It means, two vertices are adjacent in 'G−' if the two vertices are not adjacent in G."
},
{
"code": null,
"e": 10023,
"s": 9811,
"text": "If the edges that exist in graph I are absent in another graph II, and if both graph I and graph II are combined together to form a complete graph, then graph I and graph II are called complements of each other."
},
{
"code": null,
"e": 10126,
"s": 10023,
"text": "In the following example, graph-I has two edges ‘cd’ and ‘bd’. Its complement graph-II has four edges."
},
{
"code": null,
"e": 10283,
"s": 10126,
"text": "Note that the edges in graph-I are not present in graph-II and vice versa. Hence, the combination of both the graphs gives a complete graph of ‘n’ vertices."
},
{
"code": null,
"e": 10356,
"s": 10283,
"text": "Note − A combination of two complementary graphs gives a complete graph."
},
{
"code": null,
"e": 10389,
"s": 10356,
"text": "If ‘G’ is any simple graph, then"
},
{
"code": null,
"e": 10462,
"s": 10389,
"text": "|E(G)| + |E('G-')| = |E(Kn)|, where n = number of vertices in the graph."
},
{
"code": null,
"e": 10559,
"s": 10462,
"text": "Let ‘G’ be a simple graph with nine vertices and twelve edges, find the number of edges in 'G-'."
},
{
"code": null,
"e": 10598,
"s": 10559,
"text": "You have, |E(G)| + |E('G-')| = |E(Kn)|"
},
{
"code": null,
"e": 10616,
"s": 10598,
"text": "12 + |E('G-')| = "
},
{
"code": null,
"e": 10636,
"s": 10616,
"text": "12 + |E('G-')| = 36"
},
{
"code": null,
"e": 10651,
"s": 10636,
"text": "|E('G-')| = 24"
},
{
"code": null,
"e": 10777,
"s": 10651,
"text": "‘G’ is a simple graph with 40 edges and its complement 'G−' has 38 edges. Find the number of vertices in the graph G or 'G−'."
},
{
"code": null,
"e": 10825,
"s": 10777,
"text": "Let the number of vertices in the graph be ‘n’."
},
{
"code": null,
"e": 10863,
"s": 10825,
"text": "We have, |E(G)| + |E('G-')| = |E(Kn)|"
},
{
"code": null,
"e": 10886,
"s": 10863,
"text": "40 + 38 = \nn(n-1)\n/\n2\n"
},
{
"code": null,
"e": 10899,
"s": 10886,
"text": "156 = n(n-1)"
},
{
"code": null,
"e": 10915,
"s": 10899,
"text": "13(12) = n(n-1)"
},
{
"code": null,
"e": 10922,
"s": 10915,
"text": "n = 13"
},
] |
Data Scientist vs Data Analyst. Here’s the Difference. | by Matt Przybyla | Towards Data Science
Introduction
Data Scientist
Data Analyst
Similarities
Differences
Summary
References
Whereas data science and machine learning fields share confusion between their job descriptions, employers, and the general public, the difference between data science and data analytics is more separable. However, there are still similarities along with the key differences between the two fields and job positions. Some would say to be a data scientist, a data analytics role is a prerequisite to becoming hired as a data scientist.
This article aims to shed light on what it means to be a data scientist and data analyst, from a professional in both fields.
While I was studying to become a data scientist, as a working data analyst, I realized that data science theory is vastly different from that of data analytics. That is not to say that data science does not share the same tools and programming languages as data analytics. One could also argue that data science is a form of data analytics because ultimately, you are working with data — transforming, visualizing, and coming to a conclusion for actionable change. So if they are so similar or one is under the other, why write an article on these two popular fields? The reason is that people who are coming into either field can learn from here — what they will be getting themselves into with either career — or if people are generally curious, and to further the discussion. Below, I will outline the main similarities, differences, and examples of what it means to be either a data scientist or data analyst.
Disclaimer — this DS section only has some information I have gathered from my previous article on data science versus machine learning along with new information as well [3]:
towardsdatascience.com
Data science can be described as a field of automated statistics in the form of models that aid in classifying and predicting outcomes. Here are the top skills that are required to be a data scientist:
Python or R
SQL
Jupyter Notebook
Algorithms/Modeling
Python — in my personal experience, I believe most companies are looking for Python more than R as the main programming language. Job descriptions may list both; however, most people you are working with like machine learning engineers, data engineers, and software engineers will not have familiarity with R. Therefore, I believe, to be a more holistic data scientist, Python will be more beneficial for you.
SQL, at first, can seem more like a data analyst skill — it is, but it should still be a skill you employ for data science. Most datasets are not given to you in the business setting (as opposed to academia), and you will have to make your own — via SQL. Now, there are plenty of subtypes of SQL; like PostgreSQL, MySQL, Microsoft SQL Server T-SQL, and Oracle SQL. They are similar forms of the same querying language, hosted by different platforms. Because these are so similar, having any of these is useful and can be translated easily to a slightly different form of SQL.
Jupyter Notebook, a data scientist’s playground for both coding and modeling. A research environment, if you will, allowing quick and easy Python coding that can incorporate commenting out of code, the code itself, and a platform to build and test models from useful libraries like sklearn, pandas, and numpy.
Algorithms — the main function of a data scientist is to utilize algorithms that quickly and accurately predict, classify, and suggest outputs from data. As you ingest data into the model, a new outcome is created. Examples of key algorithm types are usually bucketed in unsupervised learning (e.g., clustering) and supervised learning (e.g., classification/regression). Some specific key algorithms:
Random Forest (ensemble classification)
Logistic Regression (classification — not regression)
K-Means (clustering)
K-Nearest Neighbor (classification/regression)
Overall, a data scientist can be many things, but the main functions are to:
— meet with stakeholders to define the business problem— pull data (SQL)— Exploratory Data Analysis (EDA), feature engineering, model building, & prediction (Python, Jupyter Notebook, and Algorithms)— depending on the workplace, compile code to .py format and/or pickled model for production
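To make the modeling step concrete, here is a minimal sketch (my illustration, not from the original article) of the fit-and-predict loop; the DataFrame df and its "target" column are assumed placeholders.
# Sketch of the model-building step; `df` with a binary "target"
# column is an assumed placeholder, not from the article
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = df.drop(columns=["target"])  # feature columns
y = df["target"]                 # label to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)                            # train on 80% of the rows
print(accuracy_score(y_test, model.predict(X_test)))   # score on the held-out 20%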
To find out more information on what a data scientist is, how much they make, the outlook of the field, and more useful information, click this link here [4] from UC Berkeley.
A data analyst shares similar titles with business analyst, business intelligence analyst, and even a Tableau developer. The focus of data analytics is to describe and visualize the current landscape of the data — to report and explain it to nontechnical users. A data science crossover position is a data analyst who performs predictive analytics — sharing more similarities of a data scientist without the automated, algorithmic method of outputting those predictions.
Some of the main skills that are required to be a data analyst are:
SQL
Excel
Tableau (or other visualization tools — Google Data Studio, etc)
SQL — just like how a data scientist would use SQL as stated above, so does a data analyst. However, there is a strong focus on SQL in this field. Where some data scientists can get away with simply selecting columns from a table with a few joins, a data analyst can expect to perform much more involved querying (e.g., common table expressions, pivot tables, window functions, subqueries). Sometimes a data analyst can share more similarities between a data engineer over a data scientist depending on the company.
Excel — old school, yes, but still very powerful; even predictive analytics and trend analytics can be performed here. The main pitfall oftentimes is slower performance in Excel compared to other, more robust tools that use Python.
Tableau — I would just say visualization tools, but most companies, in my experience, list this tool as a specific, top skill for data analysts. Dragging and dropping of data into a pre-created chart in Tableau is simple and powerful; there are more difficult and complex functions, too, like calculated fields and connecting to a live SQL database over basing your analysis via a static Excel sheet.
Overall, a data analyst can be many things as well, but the main functions are to:
— meet with stakeholders to define the business problem— pull data (SQL)— EDA, trend analysis, and visualizations (Excel and Tableau)— depending on the workplace, presenting findings and supplying actionable insights those same stakeholders
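As a rough illustration of that pull-and-trend loop, here is a short sketch (not from the article); the SQLAlchemy engine named engine and the "orders" table are assumed placeholders.
# Sketch of a typical analyst step: pull with SQL, aggregate a monthly trend.
# `engine` and the "orders" table are assumed placeholders.
import pandas as pd

df = pd.read_sql("SELECT order_date, amount FROM orders", engine)
monthly = (
    df.assign(month=pd.to_datetime(df["order_date"]).dt.to_period("M"))
      .groupby("month")["amount"]
      .sum()
)
print(monthly.tail())  # recent monthly totals, ready to chart in Tableau or Excel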
To find out more information on what a data analyst is, how much they make, the outlook of the field, and more useful information, click here [6], from Northeastern University.
Some of the similarities have already been outlined in the previous sections, but to summarize, data scientists and data analysts share commonalities in coding languages, platforms/tools, and problem-solving.
Similar tools include, but are not limited to, SQL and Tableau, along with that same concept of defining a problem, analyzing the data, and outputting an analysis.
While there are similarities, there are still differences between the two fields and roles.
Some of the main differences revolve around automation of the analysis — data scientists focus on automating analysis and predictions with algorithms using programming languages like Python, whereas data analysts use static, or past, data, and in some cases, will create predicted scenarios with tools like Tableau and SQL.
Data science and data analytics share more than just the name (data), but they also include some important differences. Whether you want to be a data scientist or data analyst, I hope you found this outline of key differences and similarities useful. If you are already one of these two roles, then I hope I taught you something new, and if you have any questions or comments, please do so below.
[1] Photo by Christina @ wocintechchat.com on Unsplash [1], (2019)
[2] M.Przybyla, Jupyter Notebook screenshot, (2020)
[3] M.Przybyla, Data Science vs Machine Learning. Here’s the Difference., (2020)
[4] UC Berkeley, What is Data Science?, (2020)
[5] Photo by William Iven on Unsplash, (2015)
[6] Northeastern University, What Does a Data Analyst Do?, (2019)
[7] Photo by Filiberto Santillán on Unsplash, (2019) | [
{
"code": null,
"e": 263,
"s": 250,
"text": "Introduction"
},
{
"code": null,
"e": 278,
"s": 263,
"text": "Data Scientist"
},
{
"code": null,
"e": 291,
"s": 278,
"text": "Data Analyst"
},
{
"code": null,
"e": 304,
"s": 291,
"text": "Similarities"
},
{
"code": null,
"e": 316,
"s": 304,
"text": "Differences"
},
{
"code": null,
"e": 324,
"s": 316,
"text": "Summary"
},
{
"code": null,
"e": 335,
"s": 324,
"text": "References"
},
{
"code": null,
"e": 770,
"s": 335,
"text": "Whereas data science and machine learning fields share confusion between their job descriptions, employers, and the general public, the difference between data science and data analytics is more separable. However, there are still similarities along with the key differences between the two fields and job positions. Some would say to be a data scientist, a data analytics role is a prerequisite to becoming hired as a data scientist."
},
{
"code": null,
"e": 896,
"s": 770,
"text": "This article aims to shed light on what it means to be a data scientist and data analyst, from a professional in both fields."
},
{
"code": null,
"e": 1810,
"s": 896,
"text": "While I was studying to become a data scientist, as a working data analyst, I realized that data science theory is vastly different from that of data analytics. That is not to say that data science does not share the same tools and programming languages as data analytics. One could also argue that data science is a form of data analytics because ultimately, you are working with data — transforming, visualizing, and coming to a conclusion for actionable change. So if they are so similar or one is under the other, why write an article on these two popular fields? The reason is that people who are coming into either field can learn from here — what they will be getting themselves into with either career — or if people are generally curious, and to further the discussion. Below, I will outline the main similarities, differences, and examples of what it means to be either a data scientist or data analyst."
},
{
"code": null,
"e": 1985,
"s": 1810,
"text": "Exclaimer — this DS section only has some information I have gathered from my previous article on data science versus machine learning along with new information as well [3]:"
},
{
"code": null,
"e": 2008,
"s": 1985,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 2211,
"s": 2008,
"text": "Data science can be described as a field of automated statistics in the form of models that aide in classifying and predicting outcomes. Here are the top skills that are required to be a data scientist:"
},
{
"code": null,
"e": 2223,
"s": 2211,
"text": "Python or R"
},
{
"code": null,
"e": 2227,
"s": 2223,
"text": "SQL"
},
{
"code": null,
"e": 2244,
"s": 2227,
"text": "Jupyter Notebook"
},
{
"code": null,
"e": 2264,
"s": 2244,
"text": "Algorithms/Modeling"
},
{
"code": null,
"e": 2674,
"s": 2264,
"text": "Python — in my personal experience, I believe most companies are looking for Python more than R as the main programming language. Job descriptions may list both; however, most people you are working with like machine learning engineers, data engineers, and software engineers will not have familiarity with R. Therefore, I believe, to be a more holistic data scientist, Python will be more beneficial for you."
},
{
"code": null,
"e": 3250,
"s": 2674,
"text": "SQL, at first, can seem more like a data analyst skill — it is, but it should still be a skill you employ for data science. Most datasets are not given to you in the business setting (as opposed to academia), and you will have to make your own — via SQL. Now, there are plenty of subtypes of SQL; like PostgreSQL, MySQL, Microsoft SQL Server T-SQL, and Oracle SQL. They are similar forms of the same querying language, hosted by different platforms. Because these are so similar, having any of these is useful and can be translated easily to a slightly different form of SQL."
},
{
"code": null,
"e": 3560,
"s": 3250,
"text": "Jupyter Notebook, a data scientist’s playground for both coding and modeling. A research environment, if you will, allowing quick and easy Python coding that can incorporate commenting out of code, the code itself, and a platform to build and test models from useful libraries like sklearn, pandas, and numpy."
},
{
"code": null,
"e": 3961,
"s": 3560,
"text": "Algorithms — the main function of a data scientist is to utilize algorithms that quickly and accurately predict, classify, and suggest outputs from data. As you ingest data into the model, a new outcome is created. Examples of key algorithm types are usually bucketed in unsupervised learning (e.g., clustering) and supervised learning (e.g., classification/regression). Some specific key algorithms:"
},
{
"code": null,
"e": 4001,
"s": 3961,
"text": "Random Forest (ensemble classification)"
},
{
"code": null,
"e": 4055,
"s": 4001,
"text": "Logistic Regression (classification — not regression)"
},
{
"code": null,
"e": 4076,
"s": 4055,
"text": "K-Means (clustering)"
},
{
"code": null,
"e": 4123,
"s": 4076,
"text": "K-Nearest Neighbor (classification/regression)"
},
{
"code": null,
"e": 4200,
"s": 4123,
"text": "Overall, a data scientist can be many things, but the main functions are to:"
},
{
"code": null,
"e": 4492,
"s": 4200,
"text": "— meet with stakeholders to define the business problem— pull data (SQL)— Exploratory Data Analysis (EDA), feature engineering, model building, & prediction (Python, Jupyter Notebook, and Algorithms)— depending on the workplace, compile code to .py format and/or pickled model for production"
},
{
"code": null,
"e": 4668,
"s": 4492,
"text": "To find out more information on what a data scientist is, how much they make, the outlook of the field, and more useful information, click this link here [4] from UC Berkeley."
},
{
"code": null,
"e": 5139,
"s": 4668,
"text": "A data analyst shares similar titles with business analyst, business intelligence analyst, and even a Tableau developer. The focus of data analytics is to describe and visualize the current landscape of the data — to report and explain it to nontechnical users. A data science crossover position is a data analyst who performs predictive analytics — sharing more similarities of a data scientist without the automated, algorithmic method of outputting those predictions."
},
{
"code": null,
"e": 5207,
"s": 5139,
"text": "Some of the main skills that are required to be a data analyst are:"
},
{
"code": null,
"e": 5211,
"s": 5207,
"text": "SQL"
},
{
"code": null,
"e": 5217,
"s": 5211,
"text": "Excel"
},
{
"code": null,
"e": 5282,
"s": 5217,
"text": "Tableau (or other visualization tools — Google Data Studio, etc)"
},
{
"code": null,
"e": 5798,
"s": 5282,
"text": "SQL — just like how a data scientist would use SQL as stated above, so does a data analyst. However, there is a strong focus on SQL in this field. Where some data scientists can get away with simply selecting columns from a table with a few joins, a data analyst can expect to perform much more involved querying (e.g., common table expressions, pivot tables, window functions, subqueries). Sometimes a data analyst can share more similarities between a data engineer over a data scientist depending on the company."
},
{
"code": null,
"e": 6023,
"s": 5798,
"text": "Excel — old school, yes, but still very powerful, even predictive analytics and trend analytics can be performed here. The main pitful oftentimes is a slower performance in Excel over other more robust tools that use Python."
},
{
"code": null,
"e": 6424,
"s": 6023,
"text": "Tableau — I would just say visualization tools, but most companies, in my experience, list this tool as a specific, top skill for data analysts. Dragging and dropping of data into a pre-created chart in Tableau is simple and powerful; there are more difficult and complex functions, too, like calculated fields and connecting to a live SQL database over basing your analysis via a static Excel sheet."
},
{
"code": null,
"e": 6507,
"s": 6424,
"text": "Overall, a data analyst can be many things as well, but the main functions are to:"
},
{
"code": null,
"e": 6748,
"s": 6507,
"text": "— meet with stakeholders to define the business problem— pull data (SQL)— EDA, trend analysis, and visualizations (Excel and Tableau)— depending on the workplace, presenting findings and supplying actionable insights those same stakeholders"
},
{
"code": null,
"e": 6927,
"s": 6748,
"text": "To find out more information on what a data scientist is, how much they make, the outlook of the field, and more useful information, click here [6], from Northwestern University."
},
{
"code": null,
"e": 7128,
"s": 6927,
"text": "Some of the similarities have already been outlined in the previous sections, but to summarize, data scientists share commonalities between both coding languages, platforms/tools, and problem-solving."
},
{
"code": null,
"e": 7282,
"s": 7128,
"text": "Similar tools include, but are not limited to SQL, Tableau, and that same concept of defining a probelm, analyizing the data, and outputting an analysis."
},
{
"code": null,
"e": 7374,
"s": 7282,
"text": "While there are similarities, there are still differences between the two fields and roles."
},
{
"code": null,
"e": 7701,
"s": 7374,
"text": "Some of the main differences revolve around automation of the analysis — data scientists focus on automating analysis and predictions with algorthims using programming languages like Python, whereas data analysts use stationary, or past data, and in some cases, will create predicted scenarios with tools like Tableau and SQL."
},
{
"code": null,
"e": 8098,
"s": 7701,
"text": "Data science and data analytics share more than just the name (data), but they also include some important differences. Whether you want to be a data scientist or data analyst, I hope you found this outline of key differences and similarities useful. If you are already one of these two roles, then I hope I taught you something new, and if you have any questions or comments, please do so below."
},
{
"code": null,
"e": 8165,
"s": 8098,
"text": "[1] Photo by Christina @ wocintechchat.com on Unsplash [1], (2019)"
},
{
"code": null,
"e": 8217,
"s": 8165,
"text": "[2] M.Przybyla, Jupyter Notebook screenshot, (2020)"
},
{
"code": null,
"e": 8298,
"s": 8217,
"text": "[3] M.Przybyla, Data Science vs Machine Learning. Here’s the Difference., (2020)"
},
{
"code": null,
"e": 8344,
"s": 8298,
"text": "[4] UC Berkely, What is Data Science?, (2020)"
},
{
"code": null,
"e": 8390,
"s": 8344,
"text": "[5] Photo by William Iven on Unsplash, (2015)"
},
{
"code": null,
"e": 8456,
"s": 8390,
"text": "[6] Northeastern University, What Does a Data Analyst Do?, (2019)"
}
] |
Matplotlib - Formatting Axes | Sometimes, one or a few points are much larger than the bulk of data. In such a case, the scale of an axis needs to be set as logarithmic rather than the normal scale. This is the Logarithmic scale. In Matplotlib, it is possible by setting xscale or vscale property of axes object to ‘log’.
It is also required sometimes to show some additional distance between axis numbers and axis label. The labelpad property of either axis (x or y or both) can be set to the desired value.
Both the above features are demonstrated with the help of the following example. The subplot on the right has a logarithmic scale and the one on the left has its x-axis label at a greater distance.
import matplotlib.pyplot as plt
import numpy as np
fig, axes = plt.subplots(1, 2, figsize=(10,4))
x = np.arange(1,5)
axes[0].plot( x, np.exp(x))
axes[0].plot(x,x**2)
axes[0].set_title("Normal scale")
axes[1].plot (x, np.exp(x))
axes[1].plot(x, x**2)
axes[1].set_yscale("log")
axes[1].set_title("Logarithmic scale (y)")
axes[0].set_xlabel("x axis")
axes[0].set_ylabel("y axis")
axes[0].xaxis.labelpad = 10
axes[1].set_xlabel("x axis")
axes[1].set_ylabel("y axis")
plt.show()
Axis spines are the lines connecting axis tick marks demarcating boundaries of plot area. The axes object has spines located at top, bottom, left and right.
Each spine can be formatted by specifying color and width. Any edge can be made invisible if its color is set to none.
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.spines['bottom'].set_color('blue')
ax.spines['left'].set_color('red')
ax.spines['left'].set_linewidth(2)
ax.spines['right'].set_color(None)
ax.spines['top'].set_color(None)
ax.plot([1,2,3,4,5])
plt.show()
{
"code": null,
"e": 2807,
"s": 2516,
"text": "Sometimes, one or a few points are much larger than the bulk of data. In such a case, the scale of an axis needs to be set as logarithmic rather than the normal scale. This is the Logarithmic scale. In Matplotlib, it is possible by setting xscale or vscale property of axes object to ‘log’."
},
{
"code": null,
"e": 2994,
"s": 2807,
"text": "It is also required sometimes to show some additional distance between axis numbers and axis label. The labelpad property of either axis (x or y or both) can be set to the desired value."
},
{
"code": null,
"e": 3186,
"s": 2994,
"text": "Both the above features are demonstrated with the help of the following example. The subplot on the right has a logarithmic scale and one on left has its x axis having label at more distance."
},
{
"code": null,
"e": 3660,
"s": 3186,
"text": "import matplotlib.pyplot as plt\nimport numpy as np\nfig, axes = plt.subplots(1, 2, figsize=(10,4))\nx = np.arange(1,5)\naxes[0].plot( x, np.exp(x))\naxes[0].plot(x,x**2)\naxes[0].set_title(\"Normal scale\")\naxes[1].plot (x, np.exp(x))\naxes[1].plot(x, x**2)\naxes[1].set_yscale(\"log\")\naxes[1].set_title(\"Logarithmic scale (y)\")\naxes[0].set_xlabel(\"x axis\")\naxes[0].set_ylabel(\"y axis\")\naxes[0].xaxis.labelpad = 10\naxes[1].set_xlabel(\"x axis\")\naxes[1].set_ylabel(\"y axis\")\nplt.show()"
},
{
"code": null,
"e": 3817,
"s": 3660,
"text": "Axis spines are the lines connecting axis tick marks demarcating boundaries of plot area. The axes object has spines located at top, bottom, left and right."
},
{
"code": null,
"e": 3936,
"s": 3817,
"text": "Each spine can be formatted by specifying color and width. Any edge can be made invisible if its color is set to none."
},
{
"code": null,
"e": 4224,
"s": 3936,
"text": "import matplotlib.pyplot as plt\nfig = plt.figure()\nax = fig.add_axes([0,0,1,1])\nax.spines['bottom'].set_color('blue')\nax.spines['left'].set_color('red')\nax.spines['left'].set_linewidth(2)\nax.spines['right'].set_color(None)\nax.spines['top'].set_color(None)\nax.plot([1,2,3,4,5])\nplt.show()"
}
] |
Program to display Astrological sign or Zodiac sign for given date of birth - GeeksforGeeks | 17 May, 2021
For a given date of birth, this program displays an astrological sign or Zodiac sign. Examples:
Input : Day = 10, Month = December
Output : Sagittarius
Explanation :
People born on this date have a zodiac Sagittarius.
Input : Day = 7, Month = September
Output : Virgo
Approach: Although the exact dates can shift plus or minus a day, depending on the year, here are the general zodiac sign dates used by Western (or Tropical) astrology:
WESTERN ASTROLOGY STAR SIGN DATES :
Aries (March 21-April 19)
Taurus (April 20-May 20)
Gemini (May 21-June 20)
Cancer (June 21-July 22)
Leo (July 23-August 22)
Virgo (August 23-September 22)
Libra (September 23-October 22)
Scorpio (October 23-November 21)
Sagittarius (November 22-December 21)
Capricorn (December 22-January 19)
Aquarius (January 20-February 18)
Pisces (February 19-March 20)
We need to check our mentioned date and month and thus find the equivalent zodiac, i.e., which zodiac fits that particular date as well as month, and print its corresponding zodiac sign. Below is the implementation of the above approach:
C++
Java
Python
C#
Javascript
// CPP program to display astrological sign// or Zodiac sign for given date of birth#include <bits/stdc++.h>using namespace std; void zodiac_sign(int day, string month){ string astro_sign=""; // checks month and date within the // valid range of a specified zodiac if (month == "december"){ if (day < 22) astro_sign = "Sagittarius"; else astro_sign ="capricorn"; } else if (month == "january"){ if (day < 20) astro_sign = "Capricorn"; else astro_sign = "aquarius"; } else if (month == "february"){ if (day < 19) astro_sign = "Aquarius"; else astro_sign = "pisces"; } else if(month == "march"){ if (day < 21) astro_sign = "Pisces"; else astro_sign = "aries"; } else if (month == "april"){ if (day < 20) astro_sign = "Aries"; else astro_sign = "taurus"; } else if (month == "may"){ if (day < 21) astro_sign = "Taurus"; else astro_sign = "gemini"; } else if( month == "june"){ if (day < 21) astro_sign = "Gemini"; else astro_sign = "cancer"; } else if (month == "july"){ if (day < 23) astro_sign = "Cancer"; else astro_sign = "leo"; } else if( month == "august"){ if (day < 23) astro_sign = "Leo"; else astro_sign = "virgo"; } else if (month == "september"){ if (day < 23) astro_sign = "Virgo"; else astro_sign = "libra"; } else if (month == "october"){ if (day < 23) astro_sign = "Libra"; else astro_sign = "scorpio"; } else if (month == "november"){ if (day < 22) astro_sign = "scorpio"; else astro_sign = "sagittarius"; } cout<<astro_sign;} // Driver codeint main (){ int day = 19; string month = "may"; zodiac_sign(day, month); return 0;} // This code is contributed by Gitanjali.
// Java program to display astrological sign// or Zodiac sign for given date of birthimport java.io.*; class GFG { static void zodiac_sign(int day, String month) { String astro_sign=""; // checks month and date within the // valid range of a specified zodiac if (month == "december"){ if (day < 22) astro_sign = "Sagittarius"; else astro_sign ="capricorn"; } else if (month == "january"){ if (day < 20) astro_sign = "Capricorn"; else astro_sign = "aquarius"; } else if (month == "february"){ if (day < 19) astro_sign = "Aquarius"; else astro_sign = "pisces"; } else if(month == "march"){ if (day < 21) astro_sign = "Pisces"; else astro_sign = "aries"; } else if (month == "april"){ if (day < 20) astro_sign = "Aries"; else astro_sign = "taurus"; } else if (month == "may"){ if (day < 21) astro_sign = "Taurus"; else astro_sign = "gemini"; } else if( month == "june"){ if (day < 21) astro_sign = "Gemini"; else astro_sign = "cancer"; } else if (month == "july"){ if (day < 23) astro_sign = "Cancer"; else astro_sign = "leo"; } else if( month == "august"){ if (day < 23) astro_sign = "Leo"; else astro_sign = "virgo"; } else if (month == "september"){ if (day < 23) astro_sign = "Virgo"; else astro_sign = "libra"; } else if (month == "october"){ if (day < 23) astro_sign = "Libra"; else astro_sign = "scorpio"; } else if (month == "november"){ if (day < 22) astro_sign = "scorpio"; else astro_sign = "sagittarius"; } System.out.println(astro_sign); } // Driver code public static void main (String[] args) { int day = 19; String month = "may"; zodiac_sign(day, month); }} // This code is contributed by Gitanjali.
# Python program to display astrological sign# or Zodiac sign for given date of birth def zodiac_sign(day, month): # checks month and date within the valid range # of a specified zodiac if month == 'december': astro_sign = 'Sagittarius' if (day < 22) else 'capricorn' elif month == 'january': astro_sign = 'Capricorn' if (day < 20) else 'aquarius' elif month == 'february': astro_sign = 'Aquarius' if (day < 19) else 'pisces' elif month == 'march': astro_sign = 'Pisces' if (day < 21) else 'aries' elif month == 'april': astro_sign = 'Aries' if (day < 20) else 'taurus' elif month == 'may': astro_sign = 'Taurus' if (day < 21) else 'gemini' elif month == 'june': astro_sign = 'Gemini' if (day < 21) else 'cancer' elif month == 'july': astro_sign = 'Cancer' if (day < 23) else 'leo' elif month == 'august': astro_sign = 'Leo' if (day < 23) else 'virgo' elif month == 'september': astro_sign = 'Virgo' if (day < 23) else 'libra' elif month == 'october': astro_sign = 'Libra' if (day < 23) else 'scorpio' elif month == 'november': astro_sign = 'scorpio' if (day < 22) else 'sagittarius' print(astro_sign) # Driver codeif __name__ == '__main__': day = 19 month = "may" zodiac_sign(day, month)
// C# program to display astrological sign// or Zodiac sign for given date of birthusing System; class GFG { static void zodiac_sign(int day, string month) { string astro_sign=""; // checks month and date within the // valid range of a specified zodiac if (month == "december"){ if (day < 22) astro_sign = "Sagittarius"; else astro_sign ="capricorn"; } else if (month == "january"){ if (day < 20) astro_sign = "Capricorn"; else astro_sign = "aquarius"; } else if (month == "february"){ if (day < 19) astro_sign = "Aquarius"; else astro_sign = "pisces"; } else if(month == "march"){ if (day < 21) astro_sign = "Pisces"; else astro_sign = "aries"; } else if (month == "april"){ if (day < 20) astro_sign = "Aries"; else astro_sign = "taurus"; } else if (month == "may"){ if (day < 21) astro_sign = "Taurus"; else astro_sign = "gemini"; } else if( month == "june"){ if (day < 21) astro_sign = "Gemini"; else astro_sign = "cancer"; } else if (month == "july"){ if (day < 23) astro_sign = "Cancer"; else astro_sign = "leo"; } else if( month == "august"){ if (day < 23) astro_sign = "Leo"; else astro_sign = "virgo"; } else if (month == "september"){ if (day < 23) astro_sign = "Virgo"; else astro_sign = "libra"; } else if (month == "october"){ if (day < 23) astro_sign = "Libra"; else astro_sign = "scorpio"; } else if (month == "november"){ if (day < 22) astro_sign = "scorpio"; else astro_sign = "sagittarius"; } Console.WriteLine(astro_sign); } // Driver code public static void Main () { int day = 19; string month = "may"; zodiac_sign(day, month); }} // This code is contributed by vt_m.
<script> // JavaScript program to display astrological sign// or Zodiac sign for given date of birth // Function to calculate sum// digits of nfunction zodiac_sign(day, month) { let astro_sign=""; // checks month and date within the // valid range of a specified zodiac if (month == "december"){ if (day < 22) astro_sign = "Sagittarius"; else astro_sign ="capricorn"; } else if (month == "january"){ if (day < 20) astro_sign = "Capricorn"; else astro_sign = "aquarius"; } else if (month == "february"){ if (day < 19) astro_sign = "Aquarius"; else astro_sign = "pisces"; } else if(month == "march"){ if (day < 21) astro_sign = "Pisces"; else astro_sign = "aries"; } else if (month == "april"){ if (day < 20) astro_sign = "Aries"; else astro_sign = "taurus"; } else if (month == "may"){ if (day < 21) astro_sign = "Taurus"; else astro_sign = "gemini"; } else if( month == "june"){ if (day < 21) astro_sign = "Gemini"; else astro_sign = "cancer"; } else if (month == "july"){ if (day < 23) astro_sign = "Cancer"; else astro_sign = "leo"; } else if( month == "august"){ if (day < 23) astro_sign = "Leo"; else astro_sign = "virgo"; } else if (month == "september"){ if (day < 23) astro_sign = "Virgo"; else astro_sign = "libra"; } else if (month == "october"){ if (day < 23) astro_sign = "Libra"; else astro_sign = "scorpio"; } else if (month == "november"){ if (day < 22) astro_sign = "scorpio"; else astro_sign = "sagittarius"; } document.write(astro_sign); } // Driver Code let day = 19; let month = "may"; zodiac_sign(day, month); </script>
Output:
Taurus
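As a design note, the long if/else ladder above can also be written as a data-driven lookup. A possible Python sketch (not part of the original article), using the cutoff dates from the table:
# Alternative sketch: a lookup table of
# (cutoff_day, sign_before_cutoff, sign_from_cutoff_on) per month
def zodiac_sign(day, month):
    cutoffs = {
        'january':   (20, 'Capricorn',   'Aquarius'),
        'february':  (19, 'Aquarius',    'Pisces'),
        'march':     (21, 'Pisces',      'Aries'),
        'april':     (20, 'Aries',       'Taurus'),
        'may':       (21, 'Taurus',      'Gemini'),
        'june':      (21, 'Gemini',      'Cancer'),
        'july':      (23, 'Cancer',      'Leo'),
        'august':    (23, 'Leo',         'Virgo'),
        'september': (23, 'Virgo',       'Libra'),
        'october':   (23, 'Libra',       'Scorpio'),
        'november':  (22, 'Scorpio',     'Sagittarius'),
        'december':  (22, 'Sagittarius', 'Capricorn'),
    }
    cutoff, before, after = cutoffs[month]
    return before if day < cutoff else after

print(zodiac_sign(19, 'may'))  # Taurus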
chinmoy1997pal
Python-projects
Python
School Programming
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Install PIP on Windows ?
How to drop one or multiple columns in Pandas Dataframe
How To Convert Python Dictionary To JSON?
Check if element exists in list in Python
Python | Pandas dataframe.groupby()
Arrays in C/C++
Reverse a string in Java
Inheritance in C++
C++ Classes and Objects
Constructors in C++ | [
{
"code": null,
"e": 23901,
"s": 23873,
"text": "\n17 May, 2021"
},
{
"code": null,
"e": 23997,
"s": 23901,
"text": "For given date of birth, this program displays an astrological sign or Zodiac sign.Examples : "
},
{
"code": null,
"e": 24170,
"s": 23997,
"text": "Input : Day = 10, Month = December\nOutput : Sagittarius\nExplanation :\nPeople born on this date have a zodiac Sagittarius.\n\nInput : Day = 7, Month = September\nOutput : Virgo"
},
{
"code": null,
"e": 24343,
"s": 24172,
"text": "Approach :Although the exact dates can shift plus or minus a day, depending on the year, here are the general zodiac sign dates used by Western (or Tropical) astrology : "
},
{
"code": null,
"e": 24738,
"s": 24343,
"text": "WESTERN ASTROLOGY STAR SIGN DATES :\n\nAries (March 21-April 19)\nTaurus (April 20-May 20)\nGemini (May 21-June 20)\nCancer (June 21-July 22)\nLeo (July 23-August 22)\nVirgo (August 23-September 22)\nLibra (September 23-October 22)\nScorpio (October 23-November 21)\nSagittarius (November 22-December 21)\nCapricorn (December 22-January 19)\nAquarius (January 20-February 18)\nPisces (February 19-March 20) "
},
{
"code": null,
"e": 24974,
"s": 24738,
"text": "We need to check our mentioned date and month and thus find its equivalent zodiac, i.e which zodiac fits in that particular date as well as month and print its corresponding zodiac sign.Below is the implementation of above approach : "
},
{
"code": null,
"e": 24978,
"s": 24974,
"text": "C++"
},
{
"code": null,
"e": 24983,
"s": 24978,
"text": "Java"
},
{
"code": null,
"e": 24990,
"s": 24983,
"text": "Python"
},
{
"code": null,
"e": 24993,
"s": 24990,
"text": "C#"
},
{
"code": null,
"e": 25004,
"s": 24993,
"text": "Javascript"
},
{
"code": "// CPP program to display astrological sign// or Zodiac sign for given date of birth#include <bits/stdc++.h>using namespace std; void zodiac_sign(int day, string month){ string astro_sign=\"\"; // checks month and date within the // valid range of a specified zodiac if (month == \"december\"){ if (day < 22) astro_sign = \"Sagittarius\"; else astro_sign =\"capricorn\"; } else if (month == \"january\"){ if (day < 20) astro_sign = \"Capricorn\"; else astro_sign = \"aquarius\"; } else if (month == \"february\"){ if (day < 19) astro_sign = \"Aquarius\"; else astro_sign = \"pisces\"; } else if(month == \"march\"){ if (day < 21) astro_sign = \"Pisces\"; else astro_sign = \"aries\"; } else if (month == \"april\"){ if (day < 20) astro_sign = \"Aries\"; else astro_sign = \"taurus\"; } else if (month == \"may\"){ if (day < 21) astro_sign = \"Taurus\"; else astro_sign = \"gemini\"; } else if( month == \"june\"){ if (day < 21) astro_sign = \"Gemini\"; else astro_sign = \"cancer\"; } else if (month == \"july\"){ if (day < 23) astro_sign = \"Cancer\"; else astro_sign = \"leo\"; } else if( month == \"august\"){ if (day < 23) astro_sign = \"Leo\"; else astro_sign = \"virgo\"; } else if (month == \"september\"){ if (day < 23) astro_sign = \"Virgo\"; else astro_sign = \"libra\"; } else if (month == \"october\"){ if (day < 23) astro_sign = \"Libra\"; else astro_sign = \"scorpio\"; } else if (month == \"november\"){ if (day < 22) astro_sign = \"scorpio\"; else astro_sign = \"sagittarius\"; } cout<<astro_sign;} // Driver codeint main (){ int day = 19; string month = \"may\"; zodiac_sign(day, month); return 0;} // This code is contributed by Gitanjali.",
"e": 27146,
"s": 25004,
"text": null
},
{
"code": "// Java program to display astrological sign// or Zodiac sign for given date of birthimport java.io.*; class GFG { static void zodiac_sign(int day, String month) { String astro_sign=\"\"; // checks month and date within the // valid range of a specified zodiac if (month == \"december\"){ if (day < 22) astro_sign = \"Sagittarius\"; else astro_sign =\"capricorn\"; } else if (month == \"january\"){ if (day < 20) astro_sign = \"Capricorn\"; else astro_sign = \"aquarius\"; } else if (month == \"february\"){ if (day < 19) astro_sign = \"Aquarius\"; else astro_sign = \"pisces\"; } else if(month == \"march\"){ if (day < 21) astro_sign = \"Pisces\"; else astro_sign = \"aries\"; } else if (month == \"april\"){ if (day < 20) astro_sign = \"Aries\"; else astro_sign = \"taurus\"; } else if (month == \"may\"){ if (day < 21) astro_sign = \"Taurus\"; else astro_sign = \"gemini\"; } else if( month == \"june\"){ if (day < 21) astro_sign = \"Gemini\"; else astro_sign = \"cancer\"; } else if (month == \"july\"){ if (day < 23) astro_sign = \"Cancer\"; else astro_sign = \"leo\"; } else if( month == \"august\"){ if (day < 23) astro_sign = \"Leo\"; else astro_sign = \"virgo\"; } else if (month == \"september\"){ if (day < 23) astro_sign = \"Virgo\"; else astro_sign = \"libra\"; } else if (month == \"october\"){ if (day < 23) astro_sign = \"Libra\"; else astro_sign = \"scorpio\"; } else if (month == \"november\"){ if (day < 22) astro_sign = \"scorpio\"; else astro_sign = \"sagittarius\"; } System.out.println(astro_sign); } // Driver code public static void main (String[] args) { int day = 19; String month = \"may\"; zodiac_sign(day, month); }} // This code is contributed by Gitanjali.",
"e": 29718,
"s": 27146,
"text": null
},
{
"code": "# Python program to display astrological sign# or Zodiac sign for given date of birth def zodiac_sign(day, month): # checks month and date within the valid range # of a specified zodiac if month == 'december': astro_sign = 'Sagittarius' if (day < 22) else 'capricorn' elif month == 'january': astro_sign = 'Capricorn' if (day < 20) else 'aquarius' elif month == 'february': astro_sign = 'Aquarius' if (day < 19) else 'pisces' elif month == 'march': astro_sign = 'Pisces' if (day < 21) else 'aries' elif month == 'april': astro_sign = 'Aries' if (day < 20) else 'taurus' elif month == 'may': astro_sign = 'Taurus' if (day < 21) else 'gemini' elif month == 'june': astro_sign = 'Gemini' if (day < 21) else 'cancer' elif month == 'july': astro_sign = 'Cancer' if (day < 23) else 'leo' elif month == 'august': astro_sign = 'Leo' if (day < 23) else 'virgo' elif month == 'september': astro_sign = 'Virgo' if (day < 23) else 'libra' elif month == 'october': astro_sign = 'Libra' if (day < 23) else 'scorpio' elif month == 'november': astro_sign = 'scorpio' if (day < 22) else 'sagittarius' print(astro_sign) # Driver codeif __name__ == '__main__': day = 19 month = \"may\" zodiac_sign(day, month)",
"e": 31162,
"s": 29718,
"text": null
},
{
"code": "// C# program to display astrological sign// or Zodiac sign for given date of birthusing System; class GFG { static void zodiac_sign(int day, string month) { string astro_sign=\"\"; // checks month and date within the // valid range of a specified zodiac if (month == \"december\"){ if (day < 22) astro_sign = \"Sagittarius\"; else astro_sign =\"capricorn\"; } else if (month == \"january\"){ if (day < 20) astro_sign = \"Capricorn\"; else astro_sign = \"aquarius\"; } else if (month == \"february\"){ if (day < 19) astro_sign = \"Aquarius\"; else astro_sign = \"pisces\"; } else if(month == \"march\"){ if (day < 21) astro_sign = \"Pisces\"; else astro_sign = \"aries\"; } else if (month == \"april\"){ if (day < 20) astro_sign = \"Aries\"; else astro_sign = \"taurus\"; } else if (month == \"may\"){ if (day < 21) astro_sign = \"Taurus\"; else astro_sign = \"gemini\"; } else if( month == \"june\"){ if (day < 21) astro_sign = \"Gemini\"; else astro_sign = \"cancer\"; } else if (month == \"july\"){ if (day < 23) astro_sign = \"Cancer\"; else astro_sign = \"leo\"; } else if( month == \"august\"){ if (day < 23) astro_sign = \"Leo\"; else astro_sign = \"virgo\"; } else if (month == \"september\"){ if (day < 23) astro_sign = \"Virgo\"; else astro_sign = \"libra\"; } else if (month == \"october\"){ if (day < 23) astro_sign = \"Libra\"; else astro_sign = \"scorpio\"; } else if (month == \"november\"){ if (day < 22) astro_sign = \"scorpio\"; else astro_sign = \"sagittarius\"; } Console.WriteLine(astro_sign); } // Driver code public static void Main () { int day = 19; string month = \"may\"; zodiac_sign(day, month); }} // This code is contributed by vt_m.",
"e": 33709,
"s": 31162,
"text": null
},
{
"code": "<script> // JavaScript program to display astrological sign// or Zodiac sign for given date of birth // Function to calculate sum// digits of nfunction zodiac_sign(day, month) { let astro_sign=\"\"; // checks month and date within the // valid range of a specified zodiac if (month == \"december\"){ if (day < 22) astro_sign = \"Sagittarius\"; else astro_sign =\"capricorn\"; } else if (month == \"january\"){ if (day < 20) astro_sign = \"Capricorn\"; else astro_sign = \"aquarius\"; } else if (month == \"february\"){ if (day < 19) astro_sign = \"Aquarius\"; else astro_sign = \"pisces\"; } else if(month == \"march\"){ if (day < 21) astro_sign = \"Pisces\"; else astro_sign = \"aries\"; } else if (month == \"april\"){ if (day < 20) astro_sign = \"Aries\"; else astro_sign = \"taurus\"; } else if (month == \"may\"){ if (day < 21) astro_sign = \"Taurus\"; else astro_sign = \"gemini\"; } else if( month == \"june\"){ if (day < 21) astro_sign = \"Gemini\"; else astro_sign = \"cancer\"; } else if (month == \"july\"){ if (day < 23) astro_sign = \"Cancer\"; else astro_sign = \"leo\"; } else if( month == \"august\"){ if (day < 23) astro_sign = \"Leo\"; else astro_sign = \"virgo\"; } else if (month == \"september\"){ if (day < 23) astro_sign = \"Virgo\"; else astro_sign = \"libra\"; } else if (month == \"october\"){ if (day < 23) astro_sign = \"Libra\"; else astro_sign = \"scorpio\"; } else if (month == \"november\"){ if (day < 22) astro_sign = \"scorpio\"; else astro_sign = \"sagittarius\"; } document.write(astro_sign); } // Driver Code let day = 19; let month = \"may\"; zodiac_sign(day, month); </script>",
"e": 36192,
"s": 33709,
"text": null
},
{
"code": null,
"e": 36202,
"s": 36192,
"text": "Output: "
},
{
"code": null,
"e": 36209,
"s": 36202,
"text": "Taurus"
},
{
"code": null,
"e": 36226,
"s": 36211,
"text": "chinmoy1997pal"
},
{
"code": null,
"e": 36242,
"s": 36226,
"text": "Python-projects"
},
{
"code": null,
"e": 36249,
"s": 36242,
"text": "Python"
},
{
"code": null,
"e": 36268,
"s": 36249,
"text": "School Programming"
},
{
"code": null,
"e": 36388,
"s": 36375,
"text": "Old Comments"
},
{
"code": null,
"e": 36420,
"s": 36388,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 36476,
"s": 36420,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 36518,
"s": 36476,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 36560,
"s": 36518,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 36596,
"s": 36560,
"text": "Python | Pandas dataframe.groupby()"
},
{
"code": null,
"e": 36612,
"s": 36596,
"text": "Arrays in C/C++"
},
{
"code": null,
"e": 36637,
"s": 36612,
"text": "Reverse a string in Java"
},
{
"code": null,
"e": 36656,
"s": 36637,
"text": "Inheritance in C++"
},
{
"code": null,
"e": 36680,
"s": 36656,
"text": "C++ Classes and Objects"
}
] |
Least prime factor of numbers till n - GeeksforGeeks | 08 Dec, 2021
Given a number n, print least prime factors of all numbers from 1 to n. The least prime factor of an integer n is the smallest prime number that divides the number. The least prime factor of all even numbers is 2. A prime number is its own least prime factor (as well as its own greatest prime factor). Note: We need to print 1 for 1. Example:
Input : 6
Output : Least Prime factor of 1: 1
Least Prime factor of 2: 2
Least Prime factor of 3: 3
Least Prime factor of 4: 2
Least Prime factor of 5: 5
Least Prime factor of 6: 2
We can use a variation of sieve of Eratosthenes to solve the above problem.
Create a list of consecutive integers from 2 through n: (2, 3, 4, ..., n).
Initially, let i equal 2, the smallest prime number.
Enumerate the multiples of i by counting to n from 2i in increments of i, and mark them as having least prime factor as i (if not already marked). Also mark i as least prime factor of i (i itself is a prime number).
Find the first number greater than i in the list that is not marked. If there was no such number, stop. Otherwise, let i now equal this new number (which is the next prime), and repeat from step 3.
Below is the implementation of the algorithm, where least_prime[] saves the value of the least prime factor corresponding to the respective index.
C++
Java
Python 3
C#
PHP
Javascript
// C++ program to print the least prime factors// of numbers less than or equal to// n using modified Sieve of Eratosthenes#include<bits/stdc++.h>using namespace std; void leastPrimeFactor(int n){ // Create a vector to store least primes. // Initialize all entries as 0. vector<int> least_prime(n+1, 0); // We need to print 1 for 1. least_prime[1] = 1; for (int i = 2; i <= n; i++) { // least_prime[i] == 0 // means it i is prime if (least_prime[i] == 0) { // marking the prime number // as its own lpf least_prime[i] = i; // mark it as a divisor for all its // multiples if not already marked for (int j = i*i; j <= n; j += i) if (least_prime[j] == 0) least_prime[j] = i; } } // print least prime factor of // of numbers till n for (int i = 1; i <= n; i++) cout << "Least Prime factor of " << i << ": " << least_prime[i] << "\n";} // Driver program to test above functionint main(){ int n = 10; leastPrimeFactor(n); return 0;}
// Java program to print the least prime factors// of numbers less than or equal to// n using modified Sieve of Eratosthenes import java.io.*;import java.util.*; class GFG{ public static void leastPrimeFactor(int n) { // Create a vector to store least primes. // Initialize all entries as 0. int[] least_prime = new int[n+1]; // We need to print 1 for 1. least_prime[1] = 1; for (int i = 2; i <= n; i++) { // least_prime[i] == 0 // means it i is prime if (least_prime[i] == 0) { // marking the prime number // as its own lpf least_prime[i] = i; // mark it as a divisor for all its // multiples if not already marked for (int j = i*i; j <= n; j += i) if (least_prime[j] == 0) least_prime[j] = i; } } // print least prime factor of // of numbers till n for (int i = 1; i <= n; i++) System.out.println("Least Prime factor of " + + i + ": " + least_prime[i]); } public static void main (String[] args) { int n = 10; leastPrimeFactor(n); }} // Code Contributed by Mohit Gupta_OMG <(0_o)>
# Python 3 program to print the# least prime factors of numbers# less than or equal to n using# modified Sieve of Eratosthenes def leastPrimeFactor(n) : # Create a vector to store least primes. # Initialize all entries as 0. least_prime = [0] * (n + 1) # We need to print 1 for 1. least_prime[1] = 1 for i in range(2, n + 1) : # least_prime[i] == 0 # means it i is prime if (least_prime[i] == 0) : # marking the prime number # as its own lpf least_prime[i] = i # mark it as a divisor for all its # multiples if not already marked for j in range(i * i, n + 1, i) : if (least_prime[j] == 0) : least_prime[j] = i # print least prime factor # of numbers till n for i in range(1, n + 1) : print("Least Prime factor of " ,i , ": " , least_prime[i] ) # Driver program n = 10leastPrimeFactor(n) # This code is contributed# by Nikita Tiwari.
// C# program to print the least prime factors// of numbers less than or equal to// n using modified Sieve of Eratosthenesusing System; class GFG{ public static void leastPrimeFactor(int n) { // Create a vector to store least primes. // Initialize all entries as 0. int []least_prime = new int[n+1]; // We need to print 1 for 1. least_prime[1] = 1; for (int i = 2; i <= n; i++) { // least_prime[i] == 0 // means it i is prime if (least_prime[i] == 0) { // marking the prime number // as its own lpf least_prime[i] = i; // mark it as a divisor for all its // multiples if not already marked for (int j = i*i; j <= n; j += i) if (least_prime[j] == 0) least_prime[j] = i; } } // print least prime factor of // of numbers till n for (int i = 1; i <= n; i++) Console.WriteLine("Least Prime factor of " + i + ": " + least_prime[i]); } // Driver code public static void Main () { int n = 10; // Function calling leastPrimeFactor(n); }} // This code is contributed by Nitin Mittal
<?php// PHP program to print the// least prime factors of// numbers less than or equal// to n using modified Sieve// of Eratosthenes function leastPrimeFactor($n){ // Create a vector to // store least primes. // Initialize all entries // as 0. $least_prime = array($n + 1); for ($i = 0; $i <= $n; $i++) $least_prime[$i] = 0; // We need to // print 1 for 1. $least_prime[1] = 1; for ($i = 2; $i <= $n; $i++) { // least_prime[i] == 0 // means it i is prime if ($least_prime[$i] == 0) { // marking the prime // number as its own lpf $least_prime[$i] = $i; // mark it as a divisor // for all its multiples // if not already marked for ($j = $i * $i; $j <= $n; $j += $i) if ($least_prime[$j] == 0) $least_prime[$j] = $i; } } // print least prime // factor of numbers // till n for ($i = 1; $i <= $n; $i++) echo "Least Prime factor of " . $i . ": " . $least_prime[$i] . "\n";} // Driver Code$n = 10;leastPrimeFactor($n); // This code is contributed// by Sam007?>
<script>// javascript program to print the least prime factors// of numbers less than or equal to// n using modified Sieve of Eratosthenesfunction leastPrimeFactor( n){ // Create a vector to store least primes. // Initialize all entries as 0. let least_prime = Array(n+1).fill(0); // We need to print 1 for 1. least_prime[1] = 1; for (let i = 2; i <= n; i++) { // least_prime[i] == 0 // means it i is prime if (least_prime[i] == 0) { // marking the prime number // as its own lpf least_prime[i] = i; // mark it as a divisor for all its // multiples if not already marked for (let j = i*i; j <= n; j += i) if (least_prime[j] == 0) least_prime[j] = i; } } // print least prime factor of // of numbers till n for (let i = 1; i <= n; i++) document.write( "Least Prime factor of " + i + ": " + least_prime[i] + "<br/>");} // Driver program to test above function let n = 10; leastPrimeFactor(n); // This code is contributed by Rajput-Ji </script>
Least Prime factor of 1: 1
Least Prime factor of 2: 2
Least Prime factor of 3: 3
Least Prime factor of 4: 2
Least Prime factor of 5: 5
Least Prime factor of 6: 2
Least Prime factor of 7: 7
Least Prime factor of 8: 2
Least Prime factor of 9: 3
Least Prime factor of 10: 2
Time Complexity: O(n log log n). Auxiliary Space: O(n)
References: 
1. https://www.geeksforgeeks.org/sieve-of-eratosthenes/ 
2. https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes 
3. https://oeis.org/wiki/Least_prime_factor_of_n
Exercise: Can we extend this algorithm or use least_prime[] to find all the prime factors for numbers till n?
This article is contributed by Ayush Khanduri. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
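One possible answer to the exercise (a sketch, not from the article): since least_prime[x] is the smallest prime dividing x, repeatedly dividing x by it yields the complete prime factorization.
# Sketch: full prime factorization using a least_prime[] array
# built the same way as in the implementations above
def build_least_prime(n):
    least_prime = [0] * (n + 1)
    least_prime[1] = 1
    for i in range(2, n + 1):
        if least_prime[i] == 0:           # i is prime
            for j in range(i, n + 1, i):  # mark i as lpf of its multiples
                if least_prime[j] == 0:
                    least_prime[j] = i
    return least_prime

def all_prime_factors(x, least_prime):
    factors = []
    while x > 1:
        factors.append(least_prime[x])    # smallest prime dividing x
        x //= least_prime[x]
    return factors

least_prime = build_least_prime(100)
print(all_prime_factors(84, least_prime))  # [2, 2, 3, 7]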
nitin mittal
Sam007
sidsv21
Rajput-Ji
vishalsharma14
Prime Number
prime-factor
sieve
Mathematical
Technical Scripter
Mathematical
Prime Number
sieve
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Merge two sorted arrays
Prime Numbers
Modulo Operator (%) in C/C++ with Examples
Find all factors of a natural number | Set 1
Modulo 10^9+7 (1000000007)
Program to find sum of elements in a given array
The Knight's tour problem | Backtracking-1
Program for factorial of a number
Minimum number of jumps to reach end
Operators in C / C++ | [
{
"code": null,
"e": 24229,
"s": 24201,
"text": "\n08 Dec, 2021"
},
{
"code": null,
"e": 24573,
"s": 24229,
"text": "Given a number n, print least prime factors of all numbers from 1 to n. The least prime factor of an integer n is the smallest prime number that divides the number. The least prime factor of all even numbers is 2. A prime number is its own least prime factor (as well as its own greatest prime factor).Note: We need to print 1 for 1.Example : "
},
{
"code": null,
"e": 24799,
"s": 24573,
"text": "Input : 6\nOutput : Least Prime factor of 1: 1\n Least Prime factor of 2: 2\n Least Prime factor of 3: 3\n Least Prime factor of 4: 2\n Least Prime factor of 5: 5\n Least Prime factor of 6: 2"
},
{
"code": null,
"e": 24876,
"s": 24799,
"text": "We can use a variation of sieve of Eratosthenes to solve the above problem. "
},
{
"code": null,
"e": 25490,
"s": 25415,
"text": "Create a list of consecutive integers from 2 through n: (2, 3, 4, ..., n)."
},
{
"code": null,
"e": 25543,
"s": 25490,
"text": "Initially, let i equal 2, the smallest prime number."
},
{
"code": null,
"e": 25759,
"s": 25543,
"text": "Enumerate the multiples of i by counting to n from 2i in increments of i, and mark them as having least prime factor as i (if not already marked). Also mark i as least prime factor of i (i itself is a prime number)."
},
{
"code": null,
"e": 25957,
"s": 25759,
"text": "Find the first number greater than i in the list that is not marked. If there was no such number, stop. Otherwise, let i now equal this new number (which is the next prime), and repeat from step 3."
},
{
"code": null,
"e": 26105,
"s": 25957,
"text": "Below is the implementation of the algorithm, where least_prime[] saves the value of the least prime factor corresponding to the respective index. "
},
{
"code": null,
"e": 26111,
"s": 26107,
"text": "C++"
},
{
"code": null,
"e": 26116,
"s": 26111,
"text": "Java"
},
{
"code": null,
"e": 26125,
"s": 26116,
"text": "Python 3"
},
{
"code": null,
"e": 26128,
"s": 26125,
"text": "C#"
},
{
"code": null,
"e": 26132,
"s": 26128,
"text": "PHP"
},
{
"code": null,
"e": 26143,
"s": 26132,
"text": "Javascript"
},
{
"code": "// C++ program to print the least prime factors// of numbers less than or equal to// n using modified Sieve of Eratosthenes#include<bits/stdc++.h>using namespace std; void leastPrimeFactor(int n){ // Create a vector to store least primes. // Initialize all entries as 0. vector<int> least_prime(n+1, 0); // We need to print 1 for 1. least_prime[1] = 1; for (int i = 2; i <= n; i++) { // least_prime[i] == 0 // means it i is prime if (least_prime[i] == 0) { // marking the prime number // as its own lpf least_prime[i] = i; // mark it as a divisor for all its // multiples if not already marked for (int j = i*i; j <= n; j += i) if (least_prime[j] == 0) least_prime[j] = i; } } // print least prime factor of // of numbers till n for (int i = 1; i <= n; i++) cout << \"Least Prime factor of \" << i << \": \" << least_prime[i] << \"\\n\";} // Driver program to test above functionint main(){ int n = 10; leastPrimeFactor(n); return 0;}",
"e": 27270,
"s": 26143,
"text": null
},
{
"code": "// Java program to print the least prime factors// of numbers less than or equal to// n using modified Sieve of Eratosthenes import java.io.*;import java.util.*; class GFG{ public static void leastPrimeFactor(int n) { // Create a vector to store least primes. // Initialize all entries as 0. int[] least_prime = new int[n+1]; // We need to print 1 for 1. least_prime[1] = 1; for (int i = 2; i <= n; i++) { // least_prime[i] == 0 // means it i is prime if (least_prime[i] == 0) { // marking the prime number // as its own lpf least_prime[i] = i; // mark it as a divisor for all its // multiples if not already marked for (int j = i*i; j <= n; j += i) if (least_prime[j] == 0) least_prime[j] = i; } } // print least prime factor of // of numbers till n for (int i = 1; i <= n; i++) System.out.println(\"Least Prime factor of \" + + i + \": \" + least_prime[i]); } public static void main (String[] args) { int n = 10; leastPrimeFactor(n); }} // Code Contributed by Mohit Gupta_OMG <(0_o)>",
"e": 28633,
"s": 27270,
"text": null
},
{
"code": "# Python 3 program to print the# least prime factors of numbers# less than or equal to n using# modified Sieve of Eratosthenes def leastPrimeFactor(n) : # Create a vector to store least primes. # Initialize all entries as 0. least_prime = [0] * (n + 1) # We need to print 1 for 1. least_prime[1] = 1 for i in range(2, n + 1) : # least_prime[i] == 0 # means it i is prime if (least_prime[i] == 0) : # marking the prime number # as its own lpf least_prime[i] = i # mark it as a divisor for all its # multiples if not already marked for j in range(i * i, n + 1, i) : if (least_prime[j] == 0) : least_prime[j] = i # print least prime factor # of numbers till n for i in range(1, n + 1) : print(\"Least Prime factor of \" ,i , \": \" , least_prime[i] ) # Driver program n = 10leastPrimeFactor(n) # This code is contributed# by Nikita Tiwari.",
"e": 29692,
"s": 28633,
"text": null
},
{
"code": "// C# program to print the least prime factors// of numbers less than or equal to// n using modified Sieve of Eratosthenesusing System; class GFG{ public static void leastPrimeFactor(int n) { // Create a vector to store least primes. // Initialize all entries as 0. int []least_prime = new int[n+1]; // We need to print 1 for 1. least_prime[1] = 1; for (int i = 2; i <= n; i++) { // least_prime[i] == 0 // means it i is prime if (least_prime[i] == 0) { // marking the prime number // as its own lpf least_prime[i] = i; // mark it as a divisor for all its // multiples if not already marked for (int j = i*i; j <= n; j += i) if (least_prime[j] == 0) least_prime[j] = i; } } // print least prime factor of // of numbers till n for (int i = 1; i <= n; i++) Console.WriteLine(\"Least Prime factor of \" + i + \": \" + least_prime[i]); } // Driver code public static void Main () { int n = 10; // Function calling leastPrimeFactor(n); }} // This code is contributed by Nitin Mittal",
"e": 31030,
"s": 29692,
"text": null
},
{
"code": "<?php// PHP program to print the// least prime factors of// numbers less than or equal// to n using modified Sieve// of Eratosthenes function leastPrimeFactor($n){ // Create a vector to // store least primes. // Initialize all entries // as 0. $least_prime = array($n + 1); for ($i = 0; $i <= $n; $i++) $least_prime[$i] = 0; // We need to // print 1 for 1. $least_prime[1] = 1; for ($i = 2; $i <= $n; $i++) { // least_prime[i] == 0 // means it i is prime if ($least_prime[$i] == 0) { // marking the prime // number as its own lpf $least_prime[$i] = $i; // mark it as a divisor // for all its multiples // if not already marked for ($j = $i * $i; $j <= $n; $j += $i) if ($least_prime[$j] == 0) $least_prime[$j] = $i; } } // print least prime // factor of numbers // till n for ($i = 1; $i <= $n; $i++) echo \"Least Prime factor of \" . $i . \": \" . $least_prime[$i] . \"\\n\";} // Driver Code$n = 10;leastPrimeFactor($n); // This code is contributed// by Sam007?>",
"e": 32265,
"s": 31030,
"text": null
},
{
"code": "<script>// javascript program to print the least prime factors// of numbers less than or equal to// n using modified Sieve of Eratosthenesfunction leastPrimeFactor( n){ // Create a vector to store least primes. // Initialize all entries as 0. let least_prime = Array(n+1).fill(0); // We need to print 1 for 1. least_prime[1] = 1; for (let i = 2; i <= n; i++) { // least_prime[i] == 0 // means it i is prime if (least_prime[i] == 0) { // marking the prime number // as its own lpf least_prime[i] = i; // mark it as a divisor for all its // multiples if not already marked for (let j = i*i; j <= n; j += i) if (least_prime[j] == 0) least_prime[j] = i; } } // print least prime factor of // of numbers till n for (let i = 1; i <= n; i++) document.write( \"Least Prime factor of \" + i + \": \" + least_prime[i] + \"<br/>\");} // Driver program to test above function let n = 10; leastPrimeFactor(n); // This code is contributed by Rajput-Ji </script>",
"e": 33417,
"s": 32265,
"text": null
},
{
"code": null,
"e": 33688,
"s": 33417,
"text": "Least Prime factor of 1: 1\nLeast Prime factor of 2: 2\nLeast Prime factor of 3: 3\nLeast Prime factor of 4: 2\nLeast Prime factor of 5: 5\nLeast Prime factor of 6: 2\nLeast Prime factor of 7: 7\nLeast Prime factor of 8: 2\nLeast Prime factor of 9: 3\nLeast Prime factor of 10: 2"
},
{
"code": null,
"e": 34443,
"s": 33688,
"text": "Time Complexity: O(nloglog(n)) Auxiliary Space: O(n)References: 1. https://www.geeksforgeeks.org/sieve-of-eratosthenes/ 2. https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes 3. https://oeis.org/wiki/Least_prime_factor_of_nExercise: Can we extend this algorithm or use least_prime[] to find all the prime factors for numbers till n?This article is contributed by Ayush Khanduri. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above."
},
{
"code": null,
"e": 34456,
"s": 34443,
"text": "nitin mittal"
},
{
"code": null,
"e": 34463,
"s": 34456,
"text": "Sam007"
},
{
"code": null,
"e": 34471,
"s": 34463,
"text": "sidsv21"
},
{
"code": null,
"e": 34481,
"s": 34471,
"text": "Rajput-Ji"
},
{
"code": null,
"e": 34496,
"s": 34481,
"text": "vishalsharma14"
},
{
"code": null,
"e": 34509,
"s": 34496,
"text": "Prime Number"
},
{
"code": null,
"e": 34522,
"s": 34509,
"text": "prime-factor"
},
{
"code": null,
"e": 34528,
"s": 34522,
"text": "sieve"
},
{
"code": null,
"e": 34541,
"s": 34528,
"text": "Mathematical"
},
{
"code": null,
"e": 34560,
"s": 34541,
"text": "Technical Scripter"
},
{
"code": null,
"e": 34573,
"s": 34560,
"text": "Mathematical"
},
{
"code": null,
"e": 34586,
"s": 34573,
"text": "Prime Number"
},
{
"code": null,
"e": 34592,
"s": 34586,
"text": "sieve"
},
{
"code": null,
"e": 34690,
"s": 34592,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 34699,
"s": 34690,
"text": "Comments"
},
{
"code": null,
"e": 34712,
"s": 34699,
"text": "Old Comments"
},
{
"code": null,
"e": 34736,
"s": 34712,
"text": "Merge two sorted arrays"
},
{
"code": null,
"e": 34750,
"s": 34736,
"text": "Prime Numbers"
},
{
"code": null,
"e": 34793,
"s": 34750,
"text": "Modulo Operator (%) in C/C++ with Examples"
},
{
"code": null,
"e": 34838,
"s": 34793,
"text": "Find all factors of a natural number | Set 1"
},
{
"code": null,
"e": 34865,
"s": 34838,
"text": "Modulo 10^9+7 (1000000007)"
},
{
"code": null,
"e": 34914,
"s": 34865,
"text": "Program to find sum of elements in a given array"
},
{
"code": null,
"e": 34957,
"s": 34914,
"text": "The Knight's tour problem | Backtracking-1"
},
{
"code": null,
"e": 34991,
"s": 34957,
"text": "Program for factorial of a number"
},
{
"code": null,
"e": 35028,
"s": 34991,
"text": "Minimum number of jumps to reach end"
}
] |
IMS DB - Quick Guide | Database is a collection of correlated data items. These data items are organized and stored in a manner to provide fast and easy access. IMS database is a hierarchical database where data is stored at different levels and each entity is dependent on higher level entities. The physical elements on an application system that use IMS are shown in the following figure.
A Database Management system is a set of application programs used for storing, accessing, and managing data in the database. IMS database management system maintains integrity and allows fast recovery of data by organizing it in such a way that it is easy to retrieve. IMS maintains a large amount of the world's corporate data with the help of its database management system.
The function of the transaction manager is to provide a communication platform between the database and the application programs. IMS acts as a transaction manager. A transaction manager deals with the end-user to store and retrieve data from the database. IMS can use IMS DB or DB2 as its back-end database to store the data.
DL/I comprises application programs that grant access to the data stored in the database. IMS DB uses DL/I, which serves as the interface language that programmers use for accessing the database in an application program. We will discuss this in more detail in the upcoming chapters.
Points to note −
IMS supports applications from different languages such as Java and XML.
IMS applications and data can be accessed over any platform.
IMS DB processing is very fast as compared to DB2.
Points to note −
Implementation of IMS DB is very complex.
IMS predefined tree structure reduces flexibility.
IMS DB is difficult to manage.
An IMS database is a collection of data accommodating physical files. In a hierarchical database, the topmost level contains the general information about the entity. As we proceed from the top level to the bottom levels in the hierarchy, we get more and more information about the entity.
Each level in the hierarchy contains segments. In standard files, it is difficult to implement hierarchies but DL/I supports hierarchies. The following figure depicts the structure of IMS DB.
Points to note −
A segment is created by grouping of similar data together.
It is the smallest unit of information that DL/I transfers to and from an application program during any input-output operation.
A segment can have one or more data fields grouped together.
In the following example, the segment Student has four data fields.
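For illustration, such a segment could be laid out in COBOL as shown below. This is only a sketch; the picture clauses and lengths are assumptions, not taken from the guide.

* Illustrative layout of the Student segment and its four fields
01 STUDENT-SEG.
   05 ROLL-NUMBER    PIC X(5).
   05 STUDENT-NAME   PIC A(25).
   05 COURSE         PIC A(20).
   05 MOBILE-NUMBER  PIC X(10).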
Points to note −
A field is a single piece of data in a segment. For example, Roll Number, Name, Course, and Mobile Number are single fields in the Student segment.
A segment consists of related fields to collect the information of an entity.
Fields can be used as a key for ordering the segments.
Fields can be used as a qualifier for searching information about a particular segment.
Points to note −
Segment Type is a category of data in a segment.
A DL/I database can have 255 different segment types and 15 levels of hierarchy.
In the following figure, there are three segments namely, Library, Books Information, and Student Information.
Points to note −
A segment occurrence is an individual segment of a particular type containing user data. In the above example, Books Information is one segment type and there can be any number of occurrences of it, as it can store the information about any number of books.
Within the IMS database, each segment type is defined only once, but there can be an unlimited number of occurrences of each segment type.
Hierarchical databases work on the relationships between two or more segments. The following example shows how segments are related to each other in the IMS database structure.
Points to note −
The segment that lies at the top of the hierarchy is called the root segment.
The root segment is the only segment through which all dependent segments are accessed.
The root segment is the only segment in the database which is never a child segment.
There can be only one root segment in the IMS database structure.
For example, 'A' is the root segment in the above example.
Points to note −
A parent segment has one or more dependent segments directly below it.
For example, 'A', 'B', and 'E' are the parent segments in the above example.
Points to note −
All segments other than the root segment are known as dependent segments.
Dependent segments depend on one or more segments to present complete meaning.
For example, 'B', 'C1', 'C2', 'D', 'E', 'F1' and 'F2' are dependent segments in our example.
Points to note −
Any segment having a segment directly above it in the hierarchy is known as a child segment.
Each dependent segment in the structure is a child segment.
For example, 'B', 'C1', 'C2', 'D', 'E', 'F1' and 'F2' are child segments.
Points to note −
Two or more segment occurrences of a particular segment type under a single parent segment are called twin segments.
For example, 'C1' and 'C2' are twin segments, as are 'F1' and 'F2'.
Points to note −
Sibling segments are segments of different types that have the same parent.
For example, 'B' and 'E' are sibling segments. Similarly, 'C1', 'C2', and 'D' are sibling segments.
Points to note −
Each occurrence of the root segment, plus all the subordinate segment occurrences make one database record.
Every database record has only one root segment but it may have any number of segment occurrences.
In standard file processing, a record is a unit of data that an application program uses for certain operations. In DL/I, that unit of data is known as a segment. A single database record has many segment occurrences.
Points to note −
A path is the series of segments that starts from the root segment of a database record to any specific segment occurrence.
A path in the hierarchy structure need not be complete to the lowest level. It depends on how much information we require about an entity.
A path must be continuous and we cannot skip intermediate levels in the structure.
In the following figure, the child records in dark grey color show a path which starts from 'A' and goes through 'C2'.
IMS DB stores data at different levels. Data is retrieved and inserted by issuing DL/I calls from an application program. We will discuss about DL/I calls in detail in the upcoming chapters. Data can be processed in the following two ways −
Sequential Processing
Random Processing
When segments are retrieved sequentially from the database, DL/I follows a predefined pattern. Let us understand the sequential processing of IMS DB.
Listed below are the points to note about sequential processing −
Predefined pattern for accessing data in DL/I is first down the hierarchy, then left to right.
The root segment is retrieved first, then DL/I moves to the first left child and it goes down till the lowest level. At the lowest level, it retrieves all the occurrences of twin segments. Then it goes to the right segment.
To understand better, observe the arrows in the above figure that show the flow for accessing the segments. Library is the root segment, and the flow starts from there and goes down to the lowest-level segment to access a single record. The same process is repeated for all occurrences to get all the data records.
While accessing data, the program uses the position in the database which helps to retrieve and insert segments.
Random processing is also known as direct processing of data in IMS DB. Let us take an example to understand random processing in IMS DB −
Listed below are the points to note about random processing −
A segment occurrence that needs to be retrieved randomly requires the key fields of all the segments it depends upon. These key fields are supplied by the application program.
A concatenated key completely identifies the path from the root segment to the segment which you want to retrieve.
Suppose you want to retrieve an occurrence of the Commerce segment, then you need to supply the concatenated key field values of the segments it depends upon, such as Library, Books, and Commerce.
Random processing is faster than sequential processing. In real-world scenarios, applications combine both sequential and random processing methods together to achieve best results.
Points to note −
A key field is also known as a sequence field.
A key field is present within a segment and it is used to retrieve the segment occurrence.
A key field manages the segment occurrence in ascending order.
In each segment, only a single field can be used as a key field or sequence field.
As mentioned, only a single field can be used as a key field. If you want to search for the contents of other segment fields which are not key fields, then the field which is used to retrieve the data is known as a search field.
IMS Control Blocks define the structure of the IMS database and a program's access to them. The following diagram shows the structure of IMS control blocks.
DL/I uses the following three types of Control Blocks −
Database Descriptor (DBD)
Program Specification Block (PSB)
Access Control Block (ACB)
Points to note −
DBD describes the complete physical structure of the database once all the segments have been defined.
While installing a DL/I database, one DBD must be created as it is required to access the IMS database.
Applications can use different views of the DBD. They are called Application Data Structures and they are specified in the Program Specification Block.
The Database Administrator creates a DBD by coding DBDGEN control statements.
DBDGEN is a Database Descriptor Generator. Creating control blocks is the responsibility of the Database Administrator. All the load modules are stored in the IMS library. Assembly Language macro statements are used to create control blocks. Given below is a sample code that shows how to create a DBD using DBDGEN control statements −
PRINT NOGEN
DBD NAME=LIBRARY,ACCESS=HIDAM
DATASET DD1=LIB,DEVICE=3380
SEGM NAME=LIBSEG,PARENT=0,BYTES=10
FIELD NAME=(LIBRARY,SEQ,U),BYTES=10,START=1,TYPE=C
SEGM NAME=BOOKSEG,PARENT=LIBSEG,BYTES=5
FIELD NAME=(BOOKS,SEQ,U),BYTES=10,START=1,TYPE=C
SEGM NAME=MAGSEG,PARENT=LIBSEG,BYTES=9
FIELD NAME=(MAGZINES,SEQ),BYTES=8,START=1,TYPE=C
DBDGEN
FINISH
END
Let us understand the terms used in the above DBDGEN −
When you execute the above control statements in JCL, it creates a physical structure where LIBRARY is the root segment, and BOOKS and MAGZINES are its child segments.
The first DBD macro statement identifies the database. Here, we need to mention the NAME and ACCESS which is used by DL/I to access this database.
The second DATASET macro statement identifies the file that contains the database.
The segment types are defined using the SEGM macro statement. We need to specify the PARENT of that segment. If it is a Root segment, then mention PARENT=0.
The following parameters are used in the FIELD macro statement −
Name − Name of the field, typically 1 to 8 characters long
Bytes − Length of the field
Start − Position of the field within the segment
Type − Data type of the field
The TYPE parameter takes one of the following codes −
Type C − Character data type
Type P − Packed decimal data type
Type Z − Zoned decimal data type
Type X − Hexadecimal data type
Type H − Half word binary data type
Type F − Full word binary data type
The fundamentals of PSB are as given below −
A database has a single physical structure defined by a DBD but the application programs that process it can have different views of the database. These views are called application data structure and are defined in the PSB.
No program can use more than one PSB in a single execution.
Application programs have their own PSB and it is common for application programs that have similar database processing requirements to share a PSB.
PSB consists of one or more control blocks called Program Communication Blocks (PCBs). The PSB contains one PCB for each DL/I database the application program will access. We will discuss more about PCBs in the upcoming modules.
PSBGEN must be performed to create a PSB for the program.
PSBGEN is known as Program Specification Block Generator. The following example creates a PSB using PSBGEN −
PRINT NOGEN
PCB TYPE=DB,DBDNAME=LIBRARY,KEYLEN=10,PROCOPT=LS
SENSEG NAME=LIBSEG
SENSEG NAME=BOOKSEG,PARENT=LIBSEG
SENSEG NAME=MAGSEG,PARENT=LIBSEG
PSBGEN PSBNAME=LIBPSB,LANG=COBOL
END
Let us understand the terms used in the above PSBGEN −
The first macro statement is the Program Communication Block (PCB) that describes the database Type, Name, Key-Length, and Processing Option.
The DBDNAME parameter on the PCB macro specifies the name of the DBD. KEYLEN specifies the length of the longest concatenated key the program can process in the database. The PROCOPT parameter specifies the program's processing options. For example, LS means only LOAD operations.
SENSEG is known as Segment Level Sensitivity. It defines the program's access to parts of the database and it is identified at the segment level. The program has access to all the fields within the segments to which it is sensitive. A program can also have field-level sensitivity. In this, we define a segment name and the parent name of the segment.
The last macro statement is PSBGEN, which tells that there are no more statements to process. PSBNAME defines the name given to the output PSB module. The LANG parameter specifies the language in which the application program is written, e.g., COBOL.
Listed below are the points to note about access control blocks −
Access Control Blocks for an application program combine the Database Descriptor and the Program Specification Block into an executable form.
ACBGEN is known as Access Control Blocks Generator. It is used to generate ACBs.
For online programs, we need to pre-build ACBs. Hence the ACBGEN utility is executed before executing the application program.
For batch programs, ACBs can be generated at execution time too.
An application program which includes DL/I calls cannot execute directly. Instead, a JCL is required to trigger the IMS DL/I batch module. The batch initialization module in IMS is DFSRRC00. The application program and the DL/I module execute together. The following diagram shows the structure of an application program which includes DL/I calls to access a database.
The application program interfaces with IMS DL/I modules via the following program elements −
An ENTRY statement specifies that the PCBs are utilized by the program.
A PCB-mask co-relates with the information preserved in the pre-constructed PCB which receives return information from the IMS.
An Input-Output Area is used for passing data segments to and from the IMS database.
Calls to DL/I specify the processing functions such as fetch, insert, delete, replace, etc.
Check Status Codes is used to check the DL/I status code returned by the call to determine whether the operation was successful or not.
A Terminate statement is used to end the processing of the application program which includes the DL/I.
So far, we have learnt that IMS consists of segments which are used in high-level programming languages to access data. Consider the IMS database structure of a Library which we have seen earlier; here is the layout of its segments in COBOL −
01 LIBRARY-SEGMENT.
05 BOOK-ID PIC X(5).
05 ISSUE-DATE PIC X(10).
05 RETURN-DATE PIC X(10).
05 STUDENT-ID PIC A(25).
01 BOOK-SEGMENT.
05 BOOK-ID PIC X(5).
05 BOOK-NAME PIC A(30).
05 AUTHOR PIC A(25).
01 STUDENT-SEGMENT.
05 STUDENT-ID PIC X(5).
05 STUDENT-NAME PIC A(25).
05 DIVISION PIC X(10).
The structure of an IMS application program is different from that of a Non-IMS application program. An IMS program cannot be executed directly; rather it is always called as a subroutine. An IMS application program consists of Program Specification Blocks to provide a view of the IMS database.
The application program and the PSBs linked to that program are loaded when we execute an application program which includes IMS DL/I modules. Then the CALL requests triggered by the application programs are executed by the IMS module.
The following IMS services are used by the application program −
Accessing database records
Issuing IMS commands
Issuing IMS service calls
Checkpoint calls
Sync calls
Sending or receiving messages from online user terminals
We include DL/I calls inside a COBOL application program to communicate with the IMS database. The following DL/I statements are used in a COBOL program to access the database −
Entry Statement
Goback Statement
Call Statement
It is used to pass the control from the DL/I to the COBOL program. Here is the syntax of the entry statement −
ENTRY 'DLITCBL' USING pcb-name1
[pcb-name2]
The above statement is coded in the Procedure Division of a COBOL program. Let us go into the details of the entry statement in COBOL program −
The batch initialization module triggers the application program and is executed under its control.
The DL/I loads the required control blocks and modules and the application program, and control is given to the application program.
DLITCBL stands for DL/I to COBOL. The entry statement is used to define the entry point in the program.
When we call a sub-program in COBOL, its address is also provided. Likewise, when the DL/I gives the control to the application program, it also provides the address of each PCB defined in the program's PSB.
All the PCBs used in the application program must be defined inside the Linkage Section of the COBOL program because PCB resides outside the application program.
The PCB definition inside the Linkage Section is called the PCB Mask.
The relation between PCB masks and actual PCBs in storage is created by listing the PCBs in the entry statement. The sequence of listing in the entry statement should be the same as they appear in the PSBGEN.
It is used to pass the control back to the IMS control program. Following is the syntax of the Goback statement −
GOBACK
Listed below are the fundamental points to note about the Goback statement −
GOBACK is coded at the end of the application program. It returns the control to DL/I from the program.
We should not use STOP RUN as it returns the control to the operating system. If we use STOP RUN, the DL/I never gets a chance to perform its terminating functions. That is why, in DL/I application programs, Goback statement is used.
Before issuing a Goback statement, all the non-DL/I datasets used in the COBOL application program must be closed, otherwise the program will terminate abnormally.
Call statement is used to request for DL/I services such as executing certain operations on the IMS database. Here is the syntax of the call statement −
CALL 'CBLTDLI' USING DLI Function Code
PCB Mask
Segment I/O Area
[Segment Search Arguments]
The syntax above shows the parameters which you can use with the call statement. Let us discuss each of them −
DLI Function Code − Identifies the DL/I function to be performed. This argument is the name of the four-character field that describes the I/O operation.
PCB Mask − The PCB definition inside the Linkage Section is called the PCB Mask. It is used in the entry statement. No SELECT, ASSIGN, OPEN, or CLOSE statements are required.
Segment I/O Area − Name of an input/output work area. This is an area of the application program into which the DL/I puts a requested segment.
Segment Search Arguments − These are optional parameters depending on the type of the call issued. They are used to search data segments inside the IMS database.
Given below are the points to note about the Call statement −
CBLTDLI stands for COBOL to DL/I. It is the name of an interface module that is link edited with your program's object module.
After each DL/I call, the DL/I stores a status code in the PCB. The program can use this code to determine whether the call succeeded or failed.
The following example shows the structure of a COBOL program that uses IMS database and DL/I calls. We will discuss in detail each of the parameters used in the example in the upcoming chapters.
IDENTIFICATION DIVISION.
PROGRAM-ID. TEST1.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 DLI-FUNCTIONS.
05 DLI-GU PIC X(4) VALUE 'GU '.
05 DLI-GHU PIC X(4) VALUE 'GHU '.
05 DLI-GN PIC X(4) VALUE 'GN '.
05 DLI-GHN PIC X(4) VALUE 'GHN '.
05 DLI-GNP PIC X(4) VALUE 'GNP '.
05 DLI-GHNP PIC X(4) VALUE 'GHNP'.
05 DLI-ISRT PIC X(4) VALUE 'ISRT'.
05 DLI-DLET PIC X(4) VALUE 'DLET'.
05 DLI-REPL PIC X(4) VALUE 'REPL'.
05 DLI-CHKP PIC X(4) VALUE 'CHKP'.
05 DLI-XRST PIC X(4) VALUE 'XRST'.
05 DLI-PCB PIC X(4) VALUE 'PCB '.
01 SEGMENT-I-O-AREA PIC X(150).
LINKAGE SECTION.
01 STUDENT-PCB-MASK.
05 STD-DBD-NAME PIC X(8).
05 STD-SEGMENT-LEVEL PIC XX.
05 STD-STATUS-CODE PIC XX.
05 STD-PROC-OPTIONS PIC X(4).
05 FILLER PIC S9(5) COMP.
05 STD-SEGMENT-NAME PIC X(8).
05 STD-KEY-LENGTH PIC S9(5) COMP.
05 STD-NUMB-SENS-SEGS PIC S9(5) COMP.
05 STD-KEY PIC X(11).
PROCEDURE DIVISION.
ENTRY 'DLITCBL' USING STUDENT-PCB-MASK.
A000-READ-PARA.
110-GET-INVENTORY-SEGMENT.
CALL 'CBLTDLI' USING DLI-GN
STUDENT-PCB-MASK
SEGMENT-I-O-AREA.
GOBACK.
DL/I function is the first parameter that is used in a DL/I call. This function tells which operation is going to be performed on the IMS database by the IMS DL/I call. The syntax of DL/I function is as follows −
01 DLI-FUNCTIONS.
05 DLI-GU PIC X(4) VALUE 'GU '.
05 DLI-GHU PIC X(4) VALUE 'GHU '.
05 DLI-GN PIC X(4) VALUE 'GN '.
05 DLI-GHN PIC X(4) VALUE 'GHN '.
05 DLI-GNP PIC X(4) VALUE 'GNP '.
05 DLI-GHNP PIC X(4) VALUE 'GHNP'.
05 DLI-ISRT PIC X(4) VALUE 'ISRT'.
05 DLI-DLET PIC X(4) VALUE 'DLET'.
05 DLI-REPL PIC X(4) VALUE 'REPL'.
05 DLI-CHKP PIC X(4) VALUE 'CHKP'.
05 DLI-XRST PIC X(4) VALUE 'XRST'.
05 DLI-PCB PIC X(4) VALUE 'PCB '.
This syntax represents the following key points −
For this parameter, we can provide any four-character name as a storage field to store the function code.
DL/I function parameter is coded in the working storage section of the COBOL program.
For specifying the DL/I function, the programmer needs to code one of the 05 level data names such as DLI-GU in a DL/I call, since COBOL does not allow to code literals on a CALL statement.
DL/I functions are divided into three categories: Get, Update, and Other functions. Let us discuss each of them in detail.
Get functions are similar to the read operation supported by any programming language. Get function is used to fetch segments from an IMS DL/I database. The following Get functions are used in IMS DB −
Get Unique
Get Next
Get Next within Parent
Get Hold Unique
Get Hold Next
Get Hold Next within Parent
Let us consider the following IMS database structure to understand the DL/I function calls −
'GU' code is used for the Get Unique function. It works similar to the random read statement in COBOL. It is used to fetch a particular segment occurrence based on the field values. The field values can be provided using segment search arguments. The syntax of a GU call is as follows −
CALL 'CBLTDLI' USING DLI-GU
PCB Mask
Segment I/O Area
[Segment Search Arguments]
If you execute the above call statement by providing appropriate values for all parameters in the COBOL program, you can retrieve the segment in the segment I/O area from the database. In the above example, if you provide the field values of Library, Magazines, and Health, then you get the desired occurrence of the Health segment.
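As a minimal sketch of such a call, the snippet below adapts the Library/Magazines/Health example to the two-level structure of the earlier DBDGEN sample (segments LIBSEG and MAGSEG). The PCB mask, I/O area, and function-code names are reused from the sample COBOL program above; the key values 'CENTRAL01' and 'HEALTH' are assumptions.

* Qualified SSAs naming the full path to the wanted occurrence
01 LIB-SSA.
   05 FILLER  PIC X(8)  VALUE 'LIBSEG  '.
   05 FILLER  PIC X     VALUE '('.
   05 FILLER  PIC X(8)  VALUE 'LIBRARY '.
   05 FILLER  PIC X(2)  VALUE ' ='.
   05 FILLER  PIC X(10) VALUE 'CENTRAL01 '.
   05 FILLER  PIC X     VALUE ')'.
01 MAG-SSA.
   05 FILLER  PIC X(8)  VALUE 'MAGSEG  '.
   05 FILLER  PIC X     VALUE '('.
   05 FILLER  PIC X(8)  VALUE 'MAGZINES'.
   05 FILLER  PIC X(2)  VALUE ' ='.
   05 FILLER  PIC X(8)  VALUE 'HEALTH  '.
   05 FILLER  PIC X     VALUE ')'.

* In the PROCEDURE DIVISION: random retrieval of one occurrence
    CALL 'CBLTDLI' USING DLI-GU
                         STUDENT-PCB-MASK
                         SEGMENT-I-O-AREA
                         LIB-SSA
                         MAG-SSA.
    IF STD-STATUS-CODE NOT = SPACES
        DISPLAY 'GU FAILED, STATUS: ' STD-STATUS-CODE
    END-IF.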
'GN' code is used for the Get Next function. It works similar to the read next statement in COBOL. It is used to fetch segment occurrences in a sequence. The predefined pattern for accessing data segment occurrences is down the hierarchy, then left to right. The syntax of a GN call is as follows −
CALL 'CBLTDLI' USING DLI-GN
PCB Mask
Segment I/O Area
[Segment Search Arguments]
If you execute the above call statement by providing appropriate values for all parameters in the COBOL program, you can retrieve the segment occurrence in the segment I/O area from the database in a sequential order. In the above example, it starts with accessing the Library segment, then Books segment, and so on. We perform the GN call again and again, until we reach the segment occurrence we want.
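A minimal sketch of such a sequential sweep, reusing the names from the sample program above; the loop ends when DL/I returns the end-of-database status 'GB':

* Walk the database top to bottom, left to right
    MOVE SPACES TO STD-STATUS-CODE
    PERFORM UNTIL STD-STATUS-CODE NOT = SPACES
        CALL 'CBLTDLI' USING DLI-GN
                             STUDENT-PCB-MASK
                             SEGMENT-I-O-AREA
        IF STD-STATUS-CODE = SPACES
            DISPLAY SEGMENT-I-O-AREA
        END-IF
    END-PERFORM
    IF STD-STATUS-CODE NOT = 'GB'
        DISPLAY 'UNEXPECTED STATUS: ' STD-STATUS-CODE
    END-IF.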
'GNP' code is used for Get Next within Parent. This function is used to retrieve segment occurrences in sequence subordinate to an established parent segment. The syntax of a GNP call is as follows −
CALL 'CBLTDLI' USING DLI-GNP
PCB Mask
Segment I/O Area
[Segment Search Arguments]
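The usual pattern, sketched below with names reused from the earlier examples: a GU call first establishes parentage on one parent occurrence, and the GNP calls that follow return only segments under that parent (DL/I returns status 'GE' when the parent has no more children):

* Position on one library, then retrieve only its children
    CALL 'CBLTDLI' USING DLI-GU
                         STUDENT-PCB-MASK
                         SEGMENT-I-O-AREA
                         LIB-SSA
    PERFORM UNTIL STD-STATUS-CODE NOT = SPACES
        CALL 'CBLTDLI' USING DLI-GNP
                             STUDENT-PCB-MASK
                             SEGMENT-I-O-AREA
        IF STD-STATUS-CODE = SPACES
            DISPLAY SEGMENT-I-O-AREA
        END-IF
    END-PERFORM.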
'GHU' code is used for Get Hold Unique. Hold function specifies that we are going to update the segment after retrieval. The Get Hold Unique function corresponds to the Get Unique call. Given below is the syntax of a GHU call −
CALL 'CBLTDLI' USING DLI-GHU
PCB Mask
Segment I/O Area
[Segment Search Arguments]
'GHN' code is used for Get Hold Next. Hold function specifies that we are going to update the segment after retrieval. The Get Hold Next function corresponds to the Get Next call. Given below is the syntax of a GHN call −
CALL 'CBLTDLI' USING DLI-GHN
PCB Mask
Segment I/O Area
[Segment Search Arguments]
'GHNP' code is used for Get Hold Next within Parent. Hold function specifies that we are going to update the segment after retrieval. The Get Hold Next within Parent function corresponds to the Get Next within Parent call. Given below is the syntax of a GHNP call −
CALL 'CBLTDLI' USING DLI-GHNP
PCB Mask
Segment I/O Area
[Segment Search Arguments]
Update functions are similar to re-write or insert operations in any other programming language. Update functions are used to update segments in an IMS DL/I database. Before using the Replace or Delete function, there must be a successful Get call with the Hold clause for the segment occurrence. The following Update functions are used in IMS DB −
Insert
Delete
Replace
'ISRT' code is used for the Insert function. The ISRT function is used to add a new segment to the database. It is used to change an existing database or load a new database. Given below is the syntax of an ISRT call −
CALL 'CBLTDLI' USING DLI-ISRT
PCB Mask
Segment I/O Area
[Segment Search Arguments]
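A minimal insert sketch: the parent path is qualified with LIB-SSA from the GU example, and an unqualified SSA names the segment type being added. The new segment contents are an assumption sized to the 9-byte MAGSEG of the DBDGEN sample.

01 MAG-INS-SSA.
   05 FILLER  PIC X(8) VALUE 'MAGSEG  '.
   05 FILLER  PIC X    VALUE SPACE.

* Build the new occurrence, then add it under the chosen parent
    MOVE 'FITNESS N' TO SEGMENT-I-O-AREA
    CALL 'CBLTDLI' USING DLI-ISRT
                         STUDENT-PCB-MASK
                         SEGMENT-I-O-AREA
                         LIB-SSA
                         MAG-INS-SSA.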
'DLET' code is used for the Delete function. It is used to remove a segment from an IMS DL/I database. Given below is the syntax of a DLET call −
CALL 'CBLTDLI' USING DLI-DLET
PCB Mask
Segment I/O Area
[Segment Search Arguments]
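A minimal delete sketch: the occurrence is first retrieved with a Get Hold call, and the DLET that follows takes no SSA because it acts on the segment just retrieved (SSAs reused from the GU example above):

    CALL 'CBLTDLI' USING DLI-GHU
                         STUDENT-PCB-MASK
                         SEGMENT-I-O-AREA
                         LIB-SSA
                         MAG-SSA
    IF STD-STATUS-CODE = SPACES
* Remove the held occurrence from the database
        CALL 'CBLTDLI' USING DLI-DLET
                             STUDENT-PCB-MASK
                             SEGMENT-I-O-AREA
    END-IF.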
'REPL' code is used for the Replace function. It is used to replace a segment in the IMS DL/I database. Given below is the syntax of a REPL call −
CALL 'CBLTDLI' USING DLI-REPL
PCB Mask
Segment I/O Area
[Segment Search Arguments]
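The replace pattern mirrors the delete sketch above: retrieve with hold, change the copy in the I/O area, then issue REPL without an SSA. The sequence (key) field of a segment must not be altered on a REPL; in the 9-byte MAGSEG of the DBDGEN sample only byte 9 is non-key, so that is the byte this sketch updates.

    CALL 'CBLTDLI' USING DLI-GHU
                         STUDENT-PCB-MASK
                         SEGMENT-I-O-AREA
                         LIB-SSA
                         MAG-SSA
    IF STD-STATUS-CODE = SPACES
* Change non-key data only, then write the segment back
        MOVE 'Y' TO SEGMENT-I-O-AREA(9:1)
        CALL 'CBLTDLI' USING DLI-REPL
                             STUDENT-PCB-MASK
                             SEGMENT-I-O-AREA
    END-IF.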
The following other functions are used in IMS DL/I calls −
Checkpoint
Restart
PCB
'CHKP' code is used for the Checkpoint function. It is used in the recovery features of IMS. Given below is the syntax of a CHKP call −
CALL 'CBLTDLI' USING DLI-CHKP
PCB Mask
Segment I/O Area
[Segment Search Arguments]
'XRST' code is used for the Restart function. It is used in the restart features of IMS. Given below is the syntax of an XRST call −
CALL 'CBLTDLI' USING DLI-XRST
PCB Mask
Segment I/O Area
[Segment Search Arguments]
PCB function is used in CICS programs in the IMS DL/I database. Given below is the syntax of a PCB call −
CALL 'CBLTDLI' USING DLI-PCB
PCB Mask
Segment I/O Area
[Segment Search Arguments]
You can find more details about these functions in the recovery chapter.
PCB stands for Program Communication Block. PCB Mask is the second parameter used in the DL/I call. It is declared in the linkage section. Given below is the syntax of a PCB Mask −
01 PCB-NAME.
05 DBD-NAME PIC X(8).
05 SEG-LEVEL PIC XX.
05 STATUS-CODE PIC XX.
05 PROC-OPTIONS PIC X(4).
05 RESERVED-DLI PIC S9(5).
05 SEG-NAME PIC X(8).
05 LENGTH-FB-KEY PIC S9(5).
05 NUMB-SENS-SEGS PIC S9(5).
05 KEY-FB-AREA PIC X(n).
Here are the key points to note −
For each database, the DL/I maintains an area of storage that is known as the program communication block. It stores the information about the database that is accessed inside the application programs.
The ENTRY statement creates a connection between the PCB masks in the Linkage Section and the PCBs within the program's PSB. The PCB masks used in a DL/I call tell which database to use for the operation.
You can assume this is similar to specifying a file name in a COBOL READ statement or a record name in a COBOL write statement. No SELECT, ASSIGN, OPEN, or CLOSE statements are required.
After each DL/I call, the DL/I stores a status code in the PCB and the program can use that code to determine whether the call succeeded or failed.
Points to note −
PCB Name is the name of the area which refers to the entire structure of the PCB fields.
PCB Name is used in program statements.
PCB Name is not a field in the PCB.
Points to note −
DBD name contains the character data. It is eight bytes long.
The first field in the PCB is the name of the database being processed and it provides the DBD name from the library of database descriptions associated with a particular database.
Points to note −
Segment level is known as Segment Hierarchy Level Indicator. It contains character data and is two bytes long.
A segment level field stores the level of the segment that was processed. When a segment is retrieved successfully, the level number of the retrieved segment is stored here.
A segment level field never has a value greater than 15 because that is the maximum number of levels permitted in a DL/I database.
Points to note −
Status code field contains two bytes of character data.
Status code contains the DL/I status code.
Spaces are moved to the status code field when DL/I completes the processing of calls successfully.
Non-space values indicate that the call was not successful.
Status code GB indicates end-of-file and status code GE indicates that the requested segment is not found.
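A small sketch of the check that typically follows every call, using the status codes described above (many other codes exist; these are the ones this guide mentions):

    EVALUATE STD-STATUS-CODE
        WHEN SPACES
            CONTINUE
        WHEN 'GB'
            DISPLAY 'END OF DATABASE REACHED'
        WHEN 'GE'
            DISPLAY 'REQUESTED SEGMENT NOT FOUND'
        WHEN OTHER
            DISPLAY 'UNEXPECTED DL/I STATUS: ' STD-STATUS-CODE
            GOBACK
    END-EVALUATE.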
Points to note −
Proc options are known as processing options which contain four-character data fields.
A Processing Option field indicates what kind of processing the program is authorized to do on the database.
Points to note −
Reserved DL/I is known as the reserved area of the IMS. It stores four bytes of binary data.
IMS uses this area for its own internal linkage related to an application program.
Points to note −
SEG Name is known as segment name feedback area. It contains 8 bytes of character data.
The name of the segment is stored in this field after each DL/I call.
Points to note −
Length FB key is known as the length of the key feedback area. It stores four bytes of binary data.
This field is used to report the length of the concatenated key of the lowest level segment processed during the previous call.
It is used with the key feedback area.
Points to note −
Number of sensitivity segments stores four bytes of binary data.
It defines to which level an application program is sensitive. It represents a count of the number of segments in the logical data structure.
Points to note −
Key feedback area varies in length from one PCB to another.
It contains the longest possible concatenated key that can be used with the program's view of the database.
After a database operation, DL/I returns the concatenated key of the lowest level segment processed in this field, and it returns the length of the key in the key length feedback area.
SSA stands for Segment Search Arguments. SSA is used to identify the segment occurrence being accessed. It is an optional parameter. We can include any number of SSAs depending on the requirement. There are two types of SSAs −
Unqualified SSA
Qualified SSA
An unqualified SSA provides the name of the segment being used inside the call. Given below is the syntax of an unqualified SSA −
01 UNQUALIFIED-SSA.
05 SEGMENT-NAME PIC X(8).
05 FILLER PIC X VALUE SPACE.
The key points of unqualified SSA are as follows −
A basic unqualified SSA is 9 bytes long.
The first 8 bytes hold the segment name which is being used for processing.
The last byte always contains space.
DL/I uses the last byte to determine the type of SSA.
To access a particular segment, move the name of the segment in the SEGMENT-NAME field.
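For example, the sketch below points the SSA at the BOOKSEG segment of the DBDGEN sample and retrieves its next occurrence; the other names are reused from the earlier layouts.

    MOVE 'BOOKSEG ' TO SEGMENT-NAME OF UNQUALIFIED-SSA
* With an unqualified SSA, GN returns the next BOOKSEG occurrence
    CALL 'CBLTDLI' USING DLI-GN
                         STUDENT-PCB-MASK
                         SEGMENT-I-O-AREA
                         UNQUALIFIED-SSA.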
The following images show the structures of unqualified and qualified SSAs −
A Qualified SSA provides the segment type with the specific database occurrence of a segment. Given below is the syntax of a Qualified SSA −
01 QUALIFIED-SSA.
05 SEGMENT-NAME PIC X(8).
05 FILLER PIC X(01) VALUE '('.
05 FIELD-NAME PIC X(8).
05 REL-OPR PIC X(2).
05 SEARCH-VALUE PIC X(n).
05 FILLER PIC X(n+1) VALUE ')'.
The key points of qualified SSA are as follows −
The first 8 bytes of a qualified SSA hold the segment name being used for processing.
The ninth byte is a left parenthesis '('.
The next 8 bytes starting from the tenth position specify the field name which we want to search.
After the field name, in the 18th and 19th positions, we specify the two-character relational operator code.
Then we specify the field value and in the last byte, there is a right parenthesis ')'.
The following table shows the relational operators used in a Qualified SSA.
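EQ or = − Equal to
GT or > − Greater than
LT or < − Less than
GE or >= − Greater than or equal to
LE or <= − Less than or equal to
NE or ¬= − Not equal to
As an illustration, a qualified SSA filled in for a BOOKS segment might look as follows; the segment name, the field name BOOKNAME, and the key length are assumptions −
01 BOOKS-QSSA.
05 FILLER PIC X(8) VALUE 'BOOKS   '.
05 FILLER PIC X VALUE '('.
05 FILLER PIC X(8) VALUE 'BOOKNAME'.
05 FILLER PIC X(2) VALUE ' ='.
05 BOOKS-KEY PIC X(10).
05 FILLER PIC X VALUE ')'.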
Command codes are used to enhance the functionality of DL/I calls. Command codes reduce the number of DL/I calls, making the programs simple. Also, it improves the performance as the number of calls is reduced. The following image shows how command codes are used in unqualified and qualified SSAs −
The key points of command codes are as follows −
To use command codes, specify an asterisk in the 9th position of the SSA as shown in the above image.
Command code is coded at the tenth position.
From 10th position onwards, DL/I considers all characters to be command codes until it encounters a space for an unqualified SSA and a left parenthesis for a qualified SSA.
The following table shows the list of command codes used in SSA −
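C − Supplies the concatenated key of a segment instead of separate qualifications.
D − Path call; retrieves segments at more than one level of the path with a single call.
F − Processes the first occurrence of the segment.
L − Processes the last occurrence of the segment.
P − Establishes parentage at a higher level segment in the hierarchical path.
Q − Enqueues a segment for exclusive use of the application program.
U − Restricts the search by maintaining the position at this segment level.
V − Like U, but maintains the position at this level and all levels above it.
Each of these command codes is described in detail in the sections that follow.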
The fundamental points of multiple qualifications are as follows −
Multiple qualifications are required when we need to use two or more qualifications or fields for comparison.
We use Boolean operators like AND and OR to connect two or more qualifications.
Multiple qualifications can be used when we want to process a segment based on a range of possible values for a single field.
Given below is the syntax of Multiple Qualifications −
01 QUALIFIED-SSA.
05 SEGMENT-NAME PIC X(8).
05 FILLER PIC X(01) VALUE '('.
05 FIELD-NAME1 PIC X(8).
05 REL-OPR PIC X(2).
05 SEARCH-VALUE1 PIC X(m).
05 MUL-QUAL PIC X VALUE '&'.
05 FIELD-NAME2 PIC X(8).
05 REL-OPR PIC X(2).
05 SEARCH-VALUE2 PIC X(n).
05 FILLER PIC X(n+1) VALUE ')'.
MUL-QUAL is a short term for MULtiple QUALification, in which we can provide Boolean operators like AND or OR.
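As an illustration, an SSA that selects BOOKS segments whose price falls within a range could be sketched as follows; the field name BOOKPRIC and the literal values are assumptions −
01 PRICE-RANGE-SSA.
05 FILLER PIC X(8) VALUE 'BOOKS   '.
05 FILLER PIC X VALUE '('.
05 FILLER PIC X(8) VALUE 'BOOKPRIC'.
05 FILLER PIC X(2) VALUE '>='.
05 FILLER PIC X(4) VALUE '0100'.
05 FILLER PIC X VALUE '&'.
05 FILLER PIC X(8) VALUE 'BOOKPRIC'.
05 FILLER PIC X(2) VALUE '<='.
05 FILLER PIC X(4) VALUE '0500'.
05 FILLER PIC X VALUE ')'.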
The various data retrieval methods used in IMS DL/I calls are as follows −
GU Call
GN Call
Using Command Codes
Multiple Processing
Let us consider the following IMS database structure to understand the data retrieval function calls −
The fundamentals of GU call are as follows −
GU call is known as Get Unique call. It is used for random processing.
If an application does not update the database regularly or if the number of database updates is less, then we use random processing.
GU call is used to place the pointer at a particular position for further sequential retrieval.
GU calls are independent of the pointer position established by the previous calls.
GU call processing is based on the unique key fields supplied in the call statement.
If we supply a key field that is not unique, then DL/I returns the first segment occurrence of the key field.
CALL 'CBLTDLI' USING DLI-GU
PCB-NAME
IO-AREA
LIBRARY-SSA
BOOKS-SSA
ENGINEERING-SSA
IT-SSA
The above example shows we issue a GU call by providing a complete set of qualified SSAs. It includes all the key fields starting from the root level to the segment occurrence that we want to retrieve.
If we do not provide the complete set of qualified SSAs in the call, then DL/I works in the following way −
When we use an unqualified SSA in a GU call, DL/I accesses the first segment occurrence in the database that meets the criteria you specify.
When we issue a GU call without any SSAs, DL/I returns the first occurrence of the root segment in the database.
If some SSAs at intermediate levels are not mentioned in the call, then DL/I uses either the established position or the default value of an unqualified SSA for the segment.
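For instance, a GU call issued with no SSAs at all simply returns the first occurrence of the root segment −
CALL 'CBLTDLI' USING DLI-GU
PCB-NAME
IO-AREA.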
The following table shows the relevant status codes after a GU call −
Spaces − Successful call
GE − DL/I could not find a segment that met the criteria specified in the call.
The fundamentals of GN call are as follows −
GN call is known as Get Next call. It is used for basic sequential processing.
The initial position of the pointer in the database is before the root segment of the first database record.
After a successful GN call, the database pointer is positioned before the next segment occurrence in the sequence.
A GN call moves forward through the database from the position established by the previous call.
If a GN call is unqualified, it returns the next segment occurrence in the database, regardless of its type, in hierarchical sequence.
If a GN call includes SSAs, then DL/I retrieves only segments that meet the requirements of all specified SSAs.
CALL 'CBLTDLI' USING DLI-GN
PCB-NAME
IO-AREA
BOOKS-SSA
The above example shows we issue a GN call providing the starting position to read the records sequentially. It fetches the first occurrence of the BOOKS segment.
The following table shows the relevant status codes after a GN call −
Spaces − Successful call
GE − DL/I could not find a segment that met the criteria specified in the call.
GA − An unqualified GN call moves up one level in the database hierarchy to fetch the segment.
GB − End of database is reached and segment not found.
GK − An unqualified GN call tries to fetch a segment of a particular type other than the one just retrieved but stays in the same hierarchical level.
Command codes are used with calls to fetch a segment occurrence. The various command codes used with calls are discussed below.
Points to note −
When an F command code is specified in a call, the call processes the first occurrence of the segment.
F command codes can be used when we want to process sequentially and it can be used with GN calls and GNP calls.
If we specify an F command code with a GU call, it does not have any significance, as GU calls fetch the first segment occurrence by default.
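A minimal sketch of an unqualified SSA carrying the F command code (the segment name is an assumption) −
01 BOOKS-FIRST-SSA.
05 FILLER PIC X(8) VALUE 'BOOKS   '.
05 FILLER PIC X VALUE '*'.
05 FILLER PIC X VALUE 'F'.
05 FILLER PIC X VALUE SPACE.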
Points to note −
When an L command code is specified in a call, the call processes the last occurrence of the segment.
L command codes can be used when we want to process sequentially and it can be used with GN calls and GNP calls.
Points to note −
D command code is used to fetch more than one segment occurrence using just a single call.
Normally DL/I operates on the lowest level segment specified in an SSA, but in many cases we want data from other levels as well. In those cases, we can use the D command code.
The D command code makes it easy to retrieve the entire path of segments.
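For example, a call could return both the BOOKS segment and its dependent segment in one I/O area by placing a D command code on the higher level SSA. This sketch reuses the names from the earlier examples and is an assumption for illustration −
01 BOOKS-D-SSA.
05 FILLER PIC X(10) VALUE 'BOOKS   *D'.
05 FILLER PIC X VALUE SPACE.
CALL 'CBLTDLI' USING DLI-GU
PCB-NAME
IO-AREA
BOOKS-D-SSA
ENGINEERING-SSA.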
Points to note −
C command code is used to supply a concatenated key.
Using relational operators is a bit complex, as we need to specify a field name, a relational operator, and a search value. Instead, we can use a C command code to provide a concatenated key.
The following example shows the use of C command code −
01 LOCATION-SSA.
05 FILLER PIC X(11) VALUE 'INLOCSEG*C('.
05 LIBRARY-SSA PIC X(5).
05 BOOKS-SSA PIC X(4).
05 ENGINEERING-SSA PIC X(6).
05 IT-SSA PIC X(3).
05 FILLER PIC X VALUE ')'.
CALL 'CBLTDLI' USING DLI-GU
PCB-NAME
IO-AREA
LOCATION-SSA
Points to note −
When we issue a GU or GN call, the DL/I establishes its parentage at the lowest level segment that is retrieved.
If we include a P command code, then the DL/I establishes its parentage at a higher level segment in the hierarchical path.
Points to note −
When a U command code is specified in an unqualified SSA in a GN call, the DL/I restricts the search for the segment.
U command code is ignored if it is used with a qualified SSA.
Points to note −
V command code works similarly to the U command code, but it restricts the search of a segment at a particular level and at all levels above it in the hierarchy.
V command code is ignored when used with a qualified SSA.
Points to note −
Q command code is used to enqueue or reserve a segment for exclusive use of your application program.
Q command code is used in an interactive environment where another program might make a change to a segment.
A program can maintain multiple positions in the IMS database, which is known as multiple processing. Multiple processing can be done in two ways −
Multiple PCBs
Multiple Positioning
Multiple PCBs can be defined for a single database. If there are multiple PCBs, then an application program can have different views of it. This method for implementing multiple processing is inefficient because of the overheads imposed by the extra PCBs.
A program can maintain multiple positions in a database using a single PCB. This is achieved by maintaining a distinct position for each hierarchical path. Multiple positioning is used to access segments of two or more types sequentially at the same time.
The different data manipulation methods used in IMS DL/I calls are as follows −
ISRT Call
Get Hold Calls
REPL Call
DLET Call
Let us consider the following IMS database structure to understand the data manipulation function calls −
Points to note −
ISRT call is known as Insert call which is used to add segment occurrences to a database.
ISRT calls are used for loading a new database.
We issue an ISRT call when a segment description field is loaded with data.
An unqualified or qualified SSA must be specified in the call so that the DL/I knows where to place a segment occurrence.
We can use a combination of both unqualified and qualified SSAs in the call. A qualified SSA can be specified for all the above levels. Let us consider the following example −
CALL 'CBLTDLI' USING DLI-ISRT
PCB-NAME
IO-AREA
LIBRARY-SSA
BOOKS-SSA
UNQUALIFIED-ENGINEERING-SSA
The above example shows we are issuing an ISRT call by providing a combination of qualified and unqualified SSAs.
When a new segment that we are inserting has a unique key field, it is added at the proper position. If the key field is not unique, then it is added according to the rules defined by the database administrator.
When we issue an ISRT call without specifying a key field, then the insert rule tells where to place the segments relative to existing twin segments. Given below are the insert rules −
First − If the rule is first, the new segment is added before any existing twins.
Last − If the rule is last, the new segment is added after all existing twins.
Here − If the rule is here, it is added at the current position relative to existing twins, which may be first, last, or anywhere.
The following table shows the relevant status codes after an ISRT call −
Spaces − Successful call
GE − Multiple SSAs are used and the DL/I cannot satisfy the call with the specified path.
II − Try to add a segment occurrence that is already present in the database.
LB / LC / LD / LE − We get these status codes while load processing. In most cases, they indicate that you are not inserting the segments in an exact hierarchical sequence.
Points to note −
There are three types of Get Hold call which we specify in a DL/I call −
Get Hold Unique (GHU)
Get Hold Next (GHN)
Get Hold Next within Parent (GHNP)
Hold function specifies that we are going to update the segment after retrieval. So before an REPL or DLET call, a successful hold call must be issued telling the DL/I an intent to update the database.
Points to note −
After a successful get hold call, we issue an REPL call to update a segment occurrence.
We cannot change the length of a segment using an REPL call.
We cannot change the value of a key field using an REPL call.
We cannot use a qualified SSA with an REPL call. If we specify a qualified SSA, then the call fails.
CALL 'CBLTDLI' USING DLI-GHU
PCB-NAME
IO-AREA
LIBRARY-SSA
BOOKS-SSA
ENGINEERING-SSA
IT-SSA.
*Move the values which you want to update in IT segment occurrence*
CALL 'CBLTDLI' USING DLI-REPL
PCB-NAME
IO-AREA.
The above example updates the IT segment occurrence using an REPL call. First, we issue a GHU call to get the segment occurrence we want to update. Then, we issue an REPL call to update the values of that segment.
Points to note −
DLET call works much in the same way as an REPL call does.
After a successful get hold call, we issue a DLET call to delete a segment occurrence.
We cannot use a qualified SSA with a DLET call. If we specify a qualified SSA, then the call fails.
CALL 'CBLTDLI' USING DLI-GHU
PCB-NAME
IO-AREA
LIBRARY-SSA
BOOKS-SSA
ENGINEERING-SSA
IT-SSA.
CALL 'CBLTDLI' USING DLI-DLET
PCB-NAME
IO-AREA.
The above example deletes the IT segment occurrence using a DLET call. First, we issue a GHU call to get the segment occurrence we want to delete. Then, we issue a DLET call to delete that segment occurrence.
The following table shows the relevant status codes after an REPL or a DLET call −
Spaces − Successful call
AJ − Qualified SSA used on REPL or DLET call.
DJ − Program issues a replace call without an immediately preceding get hold call.
DA − Program makes a change to the segment’s key field before issuing the REPL or DLET call.
Secondary indexing is used when we want to access a database without using the complete concatenated key, or when we do not want to use the primary sequence fields.
DL/I stores the pointer to segments of the indexed database in a separate database. The index pointer segment is the only segment type in a secondary index database. It consists of two parts −
Prefix Element
Data Element
The prefix part of the index pointer segment contains a pointer to the Index Target Segment. Index target segment is the segment that is accessible using the secondary index.
The data element contains the key value from the segment in the indexed database over which the index is built. This is also known as the index source segment.
Here are the key points to note about Secondary Indexing −
The index source segment and the index target segment need not be the same.
When we set up a secondary index, it is automatically maintained by the DL/I.
The DBA defines many secondary indexes as per the multiple access paths. These secondary indexes are stored in a separate index database.
We should not create more secondary indexes than necessary, as they impose additional processing overhead on the DL/I.
Points to note −
The field in the index source segment over which the secondary index is built is called the secondary key.
Any field can be used as a secondary key. It need not be the segment’s sequence field.
Secondary keys can be any combination of single fields within the index source segment.
Secondary key values do not have to be unique.
Points to note −
When we build a secondary index, the apparent hierarchical structure of the database is also changed.
The index target segment becomes the apparent root segment. As shown in the following image, the Engineering segment becomes the root segment, even though it is not the physical root segment.
The rearrangement of the database structure caused by the secondary index is known as the secondary data structure.
Secondary data structures do not make any changes to the main physical database structure present on the disk. They are just a way to alter the database structure presented to the application program.
Points to note −
When an AND (* or &) operator is used with secondary indexes, it is known as a dependent AND operator.
An independent AND (#) allows us to specify qualifications that would be impossible with a dependent AND.
This operator can be used only for secondary indexes where the index source segment is dependent on the index target segment.
We can code an SSA with an independent AND to specify that an occurrence of the target segment be processed based on the fields in two or more dependent source segments.
01 ITEM-SELECTION-SSA.
05 FILLER PIC X(8).
05 FILLER PIC X(1) VALUE '('.
05 FILLER PIC X(10).
05 SSA-KEY-1 PIC X(8).
05 FILLER PIC X VALUE '#'.
05 FILLER PIC X(10).
05 SSA-KEY-2 PIC X(8).
05 FILLER PIC X VALUE ')'.
Points to note −
Sparse sequencing is also known as Sparse Indexing. We can remove some of the index source segments from the index using sparse sequencing with a secondary index database.
Sparse sequencing is used to improve performance. When some occurrences of the index source segment are not used, we can remove them.
DL/I uses a suppression value, a suppression routine, or both to determine whether a segment should be indexed.
If the value of a sequence field in the index source segment matches a suppression value, then no index relationship is established.
The suppression routine is a user-written program that evaluates the segment and determines whether or not it should be indexed.
When sparse indexing is used, its functions are handled by the DL/I. We do not need to make special provisions for it in the application program.
As discussed in earlier modules, DBDGEN is used to create a DBD. When we create secondary indexes, two databases are involved. A DBA needs to create two DBDs using two DBDGENs for creating a relationship between an indexed database and a secondary indexed database.
After creating the secondary index for a database, the DBA needs to create the PSBs. PSBGEN for the program specifies the proper processing sequence for the database on the PROCSEQ parameter of the PSB macro. For the PROCSEQ parameter, the DBA codes the DBD name for the secondary index database.
IMS database has a rule that each segment type can have only one parent. This limits the complexity of the physical database. Many DL/I applications require a complex structure that allows a segment to have two parent segment types. To overcome this limitation, DL/I allows the DBA to implement logical relationships in which a segment can have both physical and logical parents. We can create additional relationships within one physical database. The new data structure after implementing the logical relationship is known as the Logical Database.
A logical relationship has the following properties −
A logical relationship is a path between two segments which are related logically and not physically.
Usually a logical relationship is established between separate databases. But it is possible to have a relationship between the segments of one particular database.
The following image shows two different databases. One is a Student database, and the other is a Library database. We create a logical relationship between the Books Issued segment from the Student database and the Books segment from the Library database.
This is how the logical database looks when you create a logical relationship −
Logical child segment is the basis of a logical relationship. It is a physical data segment, but to DL/I it appears to have two parents. The Books segment in the above example has two parent segments: the Books Issued segment is its logical parent and the Library segment is its physical parent. One logical child segment occurrence has only one logical parent segment occurrence, and one logical parent segment occurrence can have many logical child segment occurrences.
Logical twins are the occurrences of a logical child segment type that are all subordinate to a single occurrence of the logical parent segment type. DL/I makes the logical child segment appear similar to an actual physical child segment. This is also known as a virtual logical child segment.
A DBA creates logical relationships between segments. To implement a logical relationship, the DBA has to specify it in the DBDGENs for the involved physical databases. There are three types of logical relationships −
Unidirectional
Bidirectional Virtual
Bidirectional Physical
Unidirectional − The logical connection goes from the logical child to the logical parent and it cannot go the other way around.
Bidirectional Virtual − It allows access in both the directions. The logical child in its physical structure and the corresponding virtual logical child can be seen as paired segments.
Bidirectional Physical − The logical child is a physically stored subordinate to both its physical and logical parents. To application programs, it appears the same way as a bidirectional virtual logical child.
The programming considerations for using a logical database are as follows −
DL/I calls to access the database remain the same with the logical database too.
Program specification block indicates the structure which we use in our calls. In some cases, we cannot identify that we are using a logical database.
Logical relationships add a new dimension to database programming.
You must be careful while working with logical databases, as two databases are integrated together. If you modify one database, the same modifications must be reflected in the other database.
Program specifications should indicate what processing is allowed on a database. If a processing rule is violated, you get a non-blank status code.
A logical child segment always begins with the complete concatenated key of the destination parent. This is known as the Destination Parent Concatenated Key (DPCK). You always need to code the DPCK at the start of your segment I/O area for a logical child. In a logical database, the concatenated segment makes the connection between segments that are defined in different physical databases. A concatenated segment consists of the following two parts −
Logical child segment
Destination parent segment
A logical child segment consists of the following two parts −
Destination Parent Concatenated Key (DPCK)
Logical child user data
When we work with concatenated segments during update, it may be possible to add or change the data in both the logical child and the destination parent with a single call. This also depends on the rules the DBA specified for the database. For an insert, provide the DPCK in the right position. For a replace or delete, do not change the DPCK or the sequence field data in either part of the concatenated segment.
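A sketch of what a logical child segment I/O area might look like, with the DPCK coded first; the field names and lengths are assumptions −
01 BOOKS-ISSUED-SEGMENT.
05 DPCK PIC X(12).
05 CHILD-USER-DATA PIC X(28).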
The database administrator needs to plan for the database recovery in case of system failures. Failures can be of many types such as application crashes, hardware errors, power failures, etc.
Some simple approaches to database recovery are as follows −
Make periodic backup copies of important datasets so that all transactions posted against the datasets are retained.
If a dataset is damaged due to a system failure, the problem is corrected by restoring the backup copy. Then the accumulated transactions are re-posted to the backup copy to bring it up-to-date.
The disadvantages of simple approach to database recovery are as follows −
Re-posting the accumulated transactions consumes a lot of time.
All other applications need to wait for execution until the recovery is finished.
Database recovery is lengthier than file recovery, if logical and secondary index relationships are involved.
A DL/I program crashes in a way that is different from the way a standard program crashes because a standard program is executed directly by the operating system, while a DL/I program is not. By employing an abnormal termination routine, the system interferes so that recovery can be done after the ABnormal END (ABEND). The abnormal termination routine performs the following actions −
Closes all datasets
Cancels all pending jobs in the queue
Creates a storage dump to find out the root cause of ABEND
The limitation of this routine is that it does not ensure that the data in use is accurate.
When an application program ABENDs, it is necessary to revert the changes done by the application program, correct the error, and re-run the application program. To do this, it is required to have the DL/I log. Here are the key points about DL/I logging −
DL/I records all the changes made by an application program in a file which is known as the log file.
When the application program changes a segment, its before and after images are created by the DL/I.
These segment images can be used to restore the segments in case the application program crashes.
DL/I uses a technique called write-ahead logging to record database changes. With write-ahead logging, a database change is written to the log dataset before it is written to the actual dataset.
As the log is always ahead of the database, the recovery utilities can determine the status of any database change.
When the program executes a call to change a database segment, the DL/I takes care of its logging part.
The two approaches of database recovery are −
Forward Recovery − DL/I uses the log file to store the change data. The accumulated transactions are re-posted using this log file.
Backward Recovery − Backward recovery is also known as backout recovery. The log records for the program are read backwards and their effects are reversed in the database. When the backout is complete, the databases are in the same state as they were in before the failure, assuming that no other application program altered the database in the meantime.
A checkpoint is a stage where the database changes done by the application program are considered complete and accurate. Listed below are the points to note about a checkpoint −
Database changes made before the most recent checkpoint are not reversed by backward recovery.
Database changes logged after the most recent checkpoint are not applied to an image copy of the database during forward recovery.
Using the checkpoint method, the database is restored to its condition at the most recent checkpoint when the recovery process completes.
The default for batch programs is that the checkpoint is the beginning of the program.
A checkpoint can be established using a checkpoint call (CHKP).
A checkpoint call causes a checkpoint record to be written on the DL/I log.
Shown below is the syntax of a CHKP call −
CALL 'CBLTDLI' USING DLI-CHKP
PCB-NAME
CHECKPOINT-ID
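For instance, a checkpoint might be taken as follows; the checkpoint id value 'CHKP0001' is an assumption for illustration −
MOVE 'CHKP0001' TO CHECKPOINT-ID.
CALL 'CBLTDLI' USING DLI-CHKP
PCB-NAME
CHECKPOINT-ID.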
There are two checkpoint methods −
Basic Checkpointing − It allows the programmer to issue checkpoint calls that the DL/I recovery utilities use during recovery processing.
Symbolic Checkpointing − It is an advanced form of checkpointing that is used in combination with the extended restart facility. Symbolic checkpointing and extended restart together let the application programmer code the programs so that they can resume processing at the point just after the checkpoint.
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2315,
"s": 1946,
"text": "Database is a collection of correlated data items. These data items are organized and stored in a manner to provide fast and easy access. IMS database is a hierarchical database where data is stored at different levels and each entity is dependent on higher level entities. The physical elements on an application system that use IMS are shown in the following figure."
},
{
"code": null,
"e": 2689,
"s": 2315,
"text": "A Database Management system is a set of application programs used for storing, accessing, and managing data in the database. IMS database management system maintains integrity and allows fast recovery of data by organizing it in such a way that it is easy to retrieve. IMS maintains a large amount of world's corporate data with the help of its database management system."
},
{
"code": null,
"e": 3012,
"s": 2689,
"text": "The function of transaction manager is to provide a communication platform between the database and the application programs. IMS acts as a transaction manager. A transaction manager deals with the end-user to store and retrieve data from the database. IMS can use IMS DB or DB2 as its back-end database to store the data."
},
{
"code": null,
"e": 3298,
"s": 3012,
"text": "DL/I comprises of application programs that grant access to the data stored in the database. IMS DB uses DL/I which serves as the interface language that programmers use for accessing the database in an application program. We will discuss this in more detail in the upcoming chapters."
},
{
"code": null,
"e": 3315,
"s": 3298,
"text": "Points to note −"
},
{
"code": null,
"e": 3388,
"s": 3315,
"text": "IMS supports applications from different languages such as Java and XML."
},
{
"code": null,
"e": 3449,
"s": 3388,
"text": "IMS applications and data can be accessed over any platform."
},
{
"code": null,
"e": 3500,
"s": 3449,
"text": "IMS DB processing is very fast as compared to DB2."
},
{
"code": null,
"e": 3517,
"s": 3500,
"text": "Points to note −"
},
{
"code": null,
"e": 3559,
"s": 3517,
"text": "Implementation of IMS DB is very complex."
},
{
"code": null,
"e": 3610,
"s": 3559,
"text": "IMS predefined tree structure reduces flexibility."
},
{
"code": null,
"e": 3641,
"s": 3610,
"text": "IMS DB is difficult to manage."
},
{
"code": null,
"e": 3931,
"s": 3641,
"text": "An IMS database is a collection of data accommodating physical files. In a hierarchical database, the topmost level contains the general information about the entity. As we proceed from the top level to the bottom levels in the hierarchy, we get more and more information about the entity."
},
{
"code": null,
"e": 4123,
"s": 3931,
"text": "Each level in the hierarchy contains segments. In standard files, it is difficult to implement hierarchies but DL/I supports hierarchies. The following figure depicts the structure of IMS DB."
},
{
"code": null,
"e": 4140,
"s": 4123,
"text": "Points to note −"
},
{
"code": null,
"e": 4199,
"s": 4140,
"text": "A segment is created by grouping of similar data together."
},
{
"code": null,
"e": 4258,
"s": 4199,
"text": "A segment is created by grouping of similar data together."
},
{
"code": null,
"e": 4387,
"s": 4258,
"text": "It is the smallest unit of information that DL/I transfers to and from an application program during any input-output operation."
},
{
"code": null,
"e": 4516,
"s": 4387,
"text": "It is the smallest unit of information that DL/I transfers to and from an application program during any input-output operation."
},
{
"code": null,
"e": 4577,
"s": 4516,
"text": "A segment can have one or more data fields grouped together."
},
{
"code": null,
"e": 4638,
"s": 4577,
"text": "A segment can have one or more data fields grouped together."
},
{
"code": null,
"e": 4706,
"s": 4638,
"text": "In the following example, the segment Student has four data fields."
},
{
"code": null,
"e": 4722,
"s": 4706,
"text": "Points to note−"
},
{
"code": null,
"e": 4870,
"s": 4722,
"text": "A field is a single piece of data in a segment. For example, Roll Number, Name, Course, and Mobile Number are single fields in the Student segment."
},
{
"code": null,
"e": 5018,
"s": 4870,
"text": "A field is a single piece of data in a segment. For example, Roll Number, Name, Course, and Mobile Number are single fields in the Student segment."
},
{
"code": null,
"e": 5096,
"s": 5018,
"text": "A segment consists of related fields to collect the information of an entity."
},
{
"code": null,
"e": 5174,
"s": 5096,
"text": "A segment consists of related fields to collect the information of an entity."
},
{
"code": null,
"e": 5229,
"s": 5174,
"text": "Fields can be used as a key for ordering the segments."
},
{
"code": null,
"e": 5284,
"s": 5229,
"text": "Fields can be used as a key for ordering the segments."
},
{
"code": null,
"e": 5372,
"s": 5284,
"text": "Fields can be used as a qualifier for searching information about a particular segment."
},
{
"code": null,
"e": 5460,
"s": 5372,
"text": "Fields can be used as a qualifier for searching information about a particular segment."
},
{
"code": null,
"e": 5477,
"s": 5460,
"text": "Points to note −"
},
{
"code": null,
"e": 5526,
"s": 5477,
"text": "Segment Type is a category of data in a segment."
},
{
"code": null,
"e": 5575,
"s": 5526,
"text": "Segment Type is a category of data in a segment."
},
{
"code": null,
"e": 5656,
"s": 5575,
"text": "A DL/I database can have 255 different segment types and 15 levels of hierarchy."
},
{
"code": null,
"e": 5737,
"s": 5656,
"text": "A DL/I database can have 255 different segment types and 15 levels of hierarchy."
},
{
"code": null,
"e": 5848,
"s": 5737,
"text": "In the following figure, there are three segments namely, Library, Books Information, and Student Information."
},
{
"code": null,
"e": 5959,
"s": 5848,
"text": "In the following figure, there are three segments namely, Library, Books Information, and Student Information."
},
{
"code": null,
"e": 5976,
"s": 5959,
"text": "Points to note −"
},
{
"code": null,
"e": 6231,
"s": 5976,
"text": "A segment occurrence is an individual segment of a particular type containing user data. In the above example, Books Information is one segment type and there can any number of occurrences of it, as it can store the information about any number of books."
},
{
"code": null,
"e": 6486,
"s": 6231,
"text": "A segment occurrence is an individual segment of a particular type containing user data. In the above example, Books Information is one segment type and there can any number of occurrences of it, as it can store the information about any number of books."
},
{
"code": null,
"e": 6636,
"s": 6486,
"text": "Within the IMS Database, there is only one occurrence of each segment type, but there can be an unlimited number of occurrences of each segment type."
},
{
"code": null,
"e": 6786,
"s": 6636,
"text": "Within the IMS Database, there is only one occurrence of each segment type, but there can be an unlimited number of occurrences of each segment type."
},
{
"code": null,
"e": 6963,
"s": 6786,
"text": "Hierarchical databases work on the relationships between two or more segments. The following example shows how segments are related to each other in the IMS database structure."
},
{
"code": null,
"e": 6980,
"s": 6963,
"text": "Points to note −"
},
{
"code": null,
"e": 7058,
"s": 6980,
"text": "The segment that lies at the top of the hierarchy is called the root segment."
},
{
"code": null,
"e": 7136,
"s": 7058,
"text": "The segment that lies at the top of the hierarchy is called the root segment."
},
{
"code": null,
"e": 7224,
"s": 7136,
"text": "The root segment is the only segment through which all dependent segments are accessed."
},
{
"code": null,
"e": 7312,
"s": 7224,
"text": "The root segment is the only segment through which all dependent segments are accessed."
},
{
"code": null,
"e": 7397,
"s": 7312,
"text": "The root segment is the only segment in the database which is never a child segment."
},
{
"code": null,
"e": 7482,
"s": 7397,
"text": "The root segment is the only segment in the database which is never a child segment."
},
{
"code": null,
"e": 7548,
"s": 7482,
"text": "There can be only one root segment in the IMS database structure."
},
{
"code": null,
"e": 7614,
"s": 7548,
"text": "There can be only one root segment in the IMS database structure."
},
{
"code": null,
"e": 7673,
"s": 7614,
"text": "For example, 'A' is the root segment in the above example."
},
{
"code": null,
"e": 7732,
"s": 7673,
"text": "For example, 'A' is the root segment in the above example."
},
{
"code": null,
"e": 7749,
"s": 7732,
"text": "Points to note −"
},
{
"code": null,
"e": 7820,
"s": 7749,
"text": "A parent segment has one or more dependent segments directly below it."
},
{
"code": null,
"e": 7891,
"s": 7820,
"text": "A parent segment has one or more dependent segments directly below it."
},
{
"code": null,
"e": 7968,
"s": 7891,
"text": "For example, 'A', 'B', and 'E' are the parent segments in the above example."
},
{
"code": null,
"e": 8045,
"s": 7968,
"text": "For example, 'A', 'B', and 'E' are the parent segments in the above example."
},
{
"code": null,
"e": 8062,
"s": 8045,
"text": "Points to note −"
},
{
"code": null,
"e": 8136,
"s": 8062,
"text": "All segments other than the root segment are known as dependent segments."
},
{
"code": null,
"e": 8210,
"s": 8136,
"text": "All segments other than the root segment are known as dependent segments."
},
{
"code": null,
"e": 8289,
"s": 8210,
"text": "Dependent segments depend on one or more segments to present complete meaning."
},
{
"code": null,
"e": 8368,
"s": 8289,
"text": "Dependent segments depend on one or more segments to present complete meaning."
},
{
"code": null,
"e": 8461,
"s": 8368,
"text": "For example, 'B', 'C1', 'C2', 'D', 'E', 'F1' and 'F2' are dependent segments in our example."
},
{
"code": null,
"e": 8554,
"s": 8461,
"text": "For example, 'B', 'C1', 'C2', 'D', 'E', 'F1' and 'F2' are dependent segments in our example."
},
{
"code": null,
"e": 8571,
"s": 8554,
"text": "Points to note −"
},
{
"code": null,
"e": 8664,
"s": 8571,
"text": "Any segment having a segment directly above it in the hierarchy is known as a child segment."
},
{
"code": null,
"e": 8757,
"s": 8664,
"text": "Any segment having a segment directly above it in the hierarchy is known as a child segment."
},
{
"code": null,
"e": 8817,
"s": 8757,
"text": "Each dependent segment in the structure is a child segment."
},
{
"code": null,
"e": 8877,
"s": 8817,
"text": "Each dependent segment in the structure is a child segment."
},
{
"code": null,
"e": 8951,
"s": 8877,
"text": "For example, 'B', 'C1', 'C2', 'D', 'E', 'F1' and 'F2' are child segments."
},
{
"code": null,
"e": 9025,
"s": 8951,
"text": "For example, 'B', 'C1', 'C2', 'D', 'E', 'F1' and 'F2' are child segments."
},
{
"code": null,
"e": 9042,
"s": 9025,
"text": "Points to note −"
},
{
"code": null,
"e": 9159,
"s": 9042,
"text": "Two or more segment occurrences of a particular segment type under a single parent segment are called twin segments."
},
{
"code": null,
"e": 9276,
"s": 9159,
"text": "Two or more segment occurrences of a particular segment type under a single parent segment are called twin segments."
},
{
"code": null,
"e": 9347,
"s": 9276,
"text": "For example, 'C1' and 'C2' are twin segments, so do 'F1' and 'F2' are."
},
{
"code": null,
"e": 9418,
"s": 9347,
"text": "For example, 'C1' and 'C2' are twin segments, so do 'F1' and 'F2' are."
},
{
"code": null,
"e": 9435,
"s": 9418,
"text": "Points to note −"
},
{
"code": null,
"e": 9509,
"s": 9435,
"text": "Sibling segments are the segments of different types and the same parent."
},
{
"code": null,
"e": 9583,
"s": 9509,
"text": "Sibling segments are the segments of different types and the same parent."
},
{
"code": null,
"e": 9683,
"s": 9583,
"text": "For example, 'B' and 'E' are sibling segments. Similarly, 'C1', 'C2', and 'D' are sibling segments."
},
{
"code": null,
"e": 9783,
"s": 9683,
"text": "For example, 'B' and 'E' are sibling segments. Similarly, 'C1', 'C2', and 'D' are sibling segments."
},
{
"code": null,
"e": 9800,
"s": 9783,
"text": "Points to note −"
},
{
"code": null,
"e": 9908,
"s": 9800,
"text": "Each occurrence of the root segment, plus all the subordinate segment occurrences make one database record."
},
{
"code": null,
"e": 10016,
"s": 9908,
"text": "Each occurrence of the root segment, plus all the subordinate segment occurrences make one database record."
},
{
"code": null,
"e": 10115,
"s": 10016,
"text": "Every database record has only one root segment but it may have any number of segment occurrences."
},
{
"code": null,
"e": 10214,
"s": 10115,
"text": "Every database record has only one root segment but it may have any number of segment occurrences."
},
{
"code": null,
"e": 10432,
"s": 10214,
"text": "In standard file processing, a record is a unit of data that an application program uses for certain operations. In DL/I, that unit of data is known as a segment. A single database record has many segment occurrences."
},
{
"code": null,
"e": 10650,
"s": 10432,
"text": "In standard file processing, a record is a unit of data that an application program uses for certain operations. In DL/I, that unit of data is known as a segment. A single database record has many segment occurrences."
},
{
"code": null,
"e": 10667,
"s": 10650,
"text": "Points to note −"
},
{
"code": null,
"e": 10791,
"s": 10667,
"text": "A path is the series of segments that starts from the root segment of a database record to any specific segment occurrence."
},
{
"code": null,
"e": 10915,
"s": 10791,
"text": "A path is the series of segments that starts from the root segment of a database record to any specific segment occurrence."
},
{
"code": null,
"e": 11054,
"s": 10915,
"text": "A path in the hierarchy structure need not be complete to the lowest level. It depends on how much information we require about an entity."
},
{
"code": null,
"e": 11193,
"s": 11054,
"text": "A path in the hierarchy structure need not be complete to the lowest level. It depends on how much information we require about an entity."
},
{
"code": null,
"e": 11276,
"s": 11193,
"text": "A path must be continuous and we cannot skip intermediate levels in the structure."
},
{
"code": null,
"e": 11359,
"s": 11276,
"text": "A path must be continuous and we cannot skip intermediate levels in the structure."
},
{
"code": null,
"e": 11478,
"s": 11359,
"text": "In the following figure, the child records in dark grey color show a path which starts from 'A' and goes through 'C2'."
},
{
"code": null,
"e": 11597,
"s": 11478,
"text": "In the following figure, the child records in dark grey color show a path which starts from 'A' and goes through 'C2'."
},
{
"code": null,
"e": 11838,
"s": 11597,
"text": "IMS DB stores data at different levels. Data is retrieved and inserted by issuing DL/I calls from an application program. We will discuss about DL/I calls in detail in the upcoming chapters. Data can be processed in the following two ways −"
},
{
"code": null,
"e": 11860,
"s": 11838,
"text": "Sequential Processing"
},
{
"code": null,
"e": 11878,
"s": 11860,
"text": "Random Processing"
},
{
"code": null,
"e": 12028,
"s": 11878,
"text": "When segments are retrieved sequentially from the database, DL/I follows a predefined pattern. Let us understand the sequential processing of IMS DB."
},
{
"code": null,
"e": 12094,
"s": 12028,
"text": "Listed below are the points to note about sequential processing −"
},
{
"code": null,
"e": 12189,
"s": 12094,
"text": "Predefined pattern for accessing data in DL/I is first down the hierarchy, then left to right."
},
{
"code": null,
"e": 12284,
"s": 12189,
"text": "Predefined pattern for accessing data in DL/I is first down the hierarchy, then left to right."
},
{
"code": null,
"e": 12508,
"s": 12284,
"text": "The root segment is retrieved first, then DL/I moves to the first left child and it goes down till the lowest level. At the lowest level, it retrieves all the occurrences of twin segments. Then it goes to the right segment."
},
{
"code": null,
"e": 12732,
"s": 12508,
"text": "The root segment is retrieved first, then DL/I moves to the first left child and it goes down till the lowest level. At the lowest level, it retrieves all the occurrences of twin segments. Then it goes to the right segment."
},
{
"code": null,
"e": 13023,
"s": 12732,
"text": "To understand better, observe the arrows in the above figure that show the flow for accessing the segments. Library is the root segment and the flow starts from there and goes till cars to access a single record. The same process is repeated for all occurrences to get all the data records."
},
{
"code": null,
"e": 13314,
"s": 13023,
"text": "To understand better, observe the arrows in the above figure that show the flow for accessing the segments. Library is the root segment and the flow starts from there and goes till cars to access a single record. The same process is repeated for all occurrences to get all the data records."
},
{
"code": null,
"e": 13427,
"s": 13314,
"text": "While accessing data, the program uses the position in the database which helps to retrieve and insert segments."
},
{
"code": null,
"e": 13540,
"s": 13427,
"text": "While accessing data, the program uses the position in the database which helps to retrieve and insert segments."
},
{
"code": null,
"e": 13679,
"s": 13540,
"text": "Random processing is also known as direct processing of data in IMS DB. Let us take an example to understand random processing in IMS DB −"
},
{
"code": null,
"e": 13741,
"s": 13679,
"text": "Listed below are the points to note about random processing −"
},
{
"code": null,
"e": 13911,
"s": 13741,
"text": "Segment occurrence that needs to be retrieved randomly requires key fields of all the segments it depends upon. These key fields are supplied by the application program."
},
{
"code": null,
"e": 14081,
"s": 13911,
"text": "Segment occurrence that needs to be retrieved randomly requires key fields of all the segments it depends upon. These key fields are supplied by the application program."
},
{
"code": null,
"e": 14196,
"s": 14081,
"text": "A concatenated key completely identifies the path from the root segment to the segment which you want to retrieve."
},
{
"code": null,
"e": 14508,
"s": 14311,
"text": "Suppose you want to retrieve an occurrence of the Commerce segment, then you need to supply the concatenated key field values of the segments it depends upon, such as Library, Books, and Commerce."
},
{
"code": null,
"e": 14890,
"s": 14705,
"text": "Random processing is faster than sequential processing. In real-world scenario, the applications combine both sequential and random processing methods together to achieve best results."
},
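{
"code": null,
"e": null,
"s": null,
"text": "For instance, the random retrieval of the Commerce segment described above can be sketched as a single GU call with one qualified SSA per level. This is an illustrative sketch only − the PCB, I/O area, and SSA names are assumed, not taken from an actual program −"
},
{
"code": null,
"e": null,
"s": null,
"text": "CALL 'CBLTDLI' USING DLI-GU\n                     PCB-NAME\n                     IO-AREA\n                     LIBRARY-SSA\n                     BOOKS-SSA\n                     COMMERCE-SSA.\n"
},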
{
"code": null,
"e": 15092,
"s": 15075,
"text": "Points to note −"
},
{
"code": null,
"e": 15139,
"s": 15092,
"text": "A key field is also known as a sequence field."
},
{
"code": null,
"e": 15277,
"s": 15186,
"text": "A key field is present within a segment and it is used to retrieve the segment occurrence."
},
{
"code": null,
"e": 15431,
"s": 15368,
"text": "A key field manages the segment occurrence in ascending order."
},
{
"code": null,
"e": 15577,
"s": 15494,
"text": "In each segment, only a single field can be used as a key field or sequence field."
},
{
"code": null,
"e": 15889,
"s": 15660,
"text": "As mentioned, only a single field can be used as a key field. If you want to search for the contents of other segment fields which are not key fields, then the field which is used to retrieve the data is known as a search field."
},
{
"code": null,
"e": 16046,
"s": 15889,
"text": "IMS Control Blocks define the structure of the IMS database and a program's access to them. The following diagram shows the structure of IMS control blocks."
},
{
"code": null,
"e": 16102,
"s": 16046,
"text": "DL/I uses the following three types of Control Blocks −"
},
{
"code": null,
"e": 16128,
"s": 16102,
"text": "Database Descriptor (DBD)"
},
{
"code": null,
"e": 16162,
"s": 16128,
"text": "Program Specification Block (PSB)"
},
{
"code": null,
"e": 16189,
"s": 16162,
"text": "Access Control Block (ACB)"
},
{
"code": null,
"e": 16206,
"s": 16189,
"text": "Points to note −"
},
{
"code": null,
"e": 16309,
"s": 16206,
"text": "DBD describes the complete physical structure of the database once all the segments have been defined."
},
{
"code": null,
"e": 16516,
"s": 16412,
"text": "While installing a DL/I database, one DBD must be created as it is required to access the IMS database."
},
{
"code": null,
"e": 16772,
"s": 16620,
"text": "Applications can use different views of the DBD. They are called Application Data Structures and they are specified in the Program Specification Block."
},
{
"code": null,
"e": 17002,
"s": 16924,
"text": "The Database Administrator creates a DBD by coding DBDGEN control statements."
},
{
"code": null,
"e": 17416,
"s": 17080,
"text": "DBDGEN is a Database Descriptor Generator. Creating control blocks is the responsibility of the Database Administrator. All the load modules are stored in the IMS library. Assembly Language macro statements are used to create control blocks. Given below is a sample code that shows how to create a DBD using DBDGEN control statements −"
},
{
"code": null,
"e": 17767,
"s": 17416,
"text": "PRINT\tNOGEN\nDBD\tNAME=LIBRARY,ACCESS=HIDAM\nDATASET\tDD1=LIB,DEVICE=3380\nSEGM\tNAME=LIBSEG,PARENT=0,BYTES=10\nFIELD\tNAME=(LIBRARY,SEQ,U),BYTES=10,START=1,TYPE=C\nSEGM\tNAME=BOOKSEG,PARENT=LIBSEG,BYTES=5\nFIELD\tNAME=(BOOKS,SEQ,U),BYTES=10,START=1,TYPE=C\nSEGM\tNAME=MAGSEG,PARENT=LIBSEG,BYTES=9\nFIELD\tNAME=(MAGZINES,SEQ),BYTES=8,START=1,TYPE=C\nDBDGEN\nFINISH\nEND"
},
{
"code": null,
"e": 17822,
"s": 17767,
"text": "Let us understand the terms used in the above DBDGEN −"
},
{
"code": null,
"e": 17990,
"s": 17822,
"text": "When you execute the above control statements in JCL, it creates a physical structure where LIBRARY is the root segment, and BOOKS and MAGZINES are its child segments."
},
{
"code": null,
"e": 18305,
"s": 18158,
"text": "The first DBD macro statement identifies the database. Here, we need to mention the NAME and ACCESS which is used by DL/I to access this database."
},
{
"code": null,
"e": 18535,
"s": 18452,
"text": "The second DATASET macro statement identifies the file that contains the database."
},
{
"code": null,
"e": 18775,
"s": 18618,
"text": "The segment types are defined using the SEGM macro statement. We need to specify the PARENT of that segment. If it is a Root segment, then mention PARENT=0."
},
{
"code": null,
"e": 19001,
"s": 18932,
"text": "The following table shows parameters used in FIELD macro statement −"
},
{
"code": null,
"e": 19006,
"s": 19001,
"text": "Name"
},
{
"code": null,
"e": 19058,
"s": 19006,
"text": "Name of the field, typically 1 to 8 characters long"
},
{
"code": null,
"e": 19064,
"s": 19058,
"text": "Bytes"
},
{
"code": null,
"e": 19084,
"s": 19064,
"text": "Length of the field"
},
{
"code": null,
"e": 19090,
"s": 19084,
"text": "Start"
},
{
"code": null,
"e": 19123,
"s": 19090,
"text": "Position of field within segment"
},
{
"code": null,
"e": 19128,
"s": 19123,
"text": "Type"
},
{
"code": null,
"e": 19151,
"s": 19128,
"text": "Data type of the field"
},
{
"code": null,
"e": 19158,
"s": 19151,
"text": "Type C"
},
{
"code": null,
"e": 19178,
"s": 19158,
"text": "Character data type"
},
{
"code": null,
"e": 19185,
"s": 19178,
"text": "Type P"
},
{
"code": null,
"e": 19210,
"s": 19185,
"text": "Packed decimal data type"
},
{
"code": null,
"e": 19217,
"s": 19210,
"text": "Type Z"
},
{
"code": null,
"e": 19241,
"s": 19217,
"text": "Zoned decimal data type"
},
{
"code": null,
"e": 19248,
"s": 19241,
"text": "Type X"
},
{
"code": null,
"e": 19270,
"s": 19248,
"text": "Hexadecimal data type"
},
{
"code": null,
"e": 19277,
"s": 19270,
"text": "Type H"
},
{
"code": null,
"e": 19304,
"s": 19277,
"text": "Half word binary data type"
},
{
"code": null,
"e": 19311,
"s": 19304,
"text": "Type F"
},
{
"code": null,
"e": 19338,
"s": 19311,
"text": "Full word binary data type"
},
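{
"code": null,
"e": null,
"s": null,
"text": "Putting these parameters together, a FIELD statement for a hypothetical six-byte packed-decimal price field starting at position 21 of its segment could be coded as shown below. This is only a sketch and is not part of the LIBRARY DBD above −"
},
{
"code": null,
"e": null,
"s": null,
"text": "FIELD NAME=PRICE,BYTES=6,START=21,TYPE=P\n"
},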
{
"code": null,
"e": 19383,
"s": 19338,
"text": "The fundamentals of PSB are as given below −"
},
{
"code": null,
"e": 19608,
"s": 19383,
"text": "A database has a single physical structure defined by a DBD but the application programs that process it can have different views of the database. These views are called application data structure and are defined in the PSB."
},
{
"code": null,
"e": 19893,
"s": 19833,
"text": "No program can use more than one PSB in a single execution."
},
{
"code": null,
"e": 20102,
"s": 19953,
"text": "Application programs have their own PSB and it is common for application programs that have similar database processing requirements to share a PSB."
},
{
"code": null,
"e": 20480,
"s": 20251,
"text": "PSB consists of one or more control blocks called Program Communication Blocks (PCBs). The PSB contains one PCB for each DL/I database the application program will access. We will discuss more about PCBs in the upcoming modules."
},
{
"code": null,
"e": 20767,
"s": 20709,
"text": "PSBGEN must be performed to create a PSB for the program."
},
{
"code": null,
"e": 20934,
"s": 20825,
"text": "PSBGEN is known as Program Specification Block Generator. The following example creates a PSB using PSBGEN −"
},
{
"code": null,
"e": 21128,
"s": 20934,
"text": "PRINT NOGEN\nPCB TYPE=DB,DBDNAME=LIBRARY,KEYLEN=10,PROCOPT=LS\nSENSEG NAME=LIBSEG\nSENSEG NAME=BOOKSEG,PARENT=LIBSEG\nSENSEG NAME=MAGSEG,PARENT=LIBSEG\nPSBGEN PSBNAME=LIBPSB,LANG=COBOL\nEND"
},
{
"code": null,
"e": 21183,
"s": 21128,
"text": "Let us understand the terms used in the above DBDGEN −"
},
{
"code": null,
"e": 21325,
"s": 21183,
"text": "The first macro statement is the Program Communication Block (PCB) that describes the database Type, Name, Key-Length, and Processing Option."
},
{
"code": null,
"e": 21741,
"s": 21467,
"text": "DBDNAME parameter on the PCB macro specifies the name of the DBD. KEYLEN specifies the length of the longest concatenated key. The program can process in the database. PROCOPT parameter specifies the program's processing options. For example, LS means only LOAD Operations."
},
{
"code": null,
"e": 22367,
"s": 22015,
"text": "SENSEG is known as Segment Level Sensitivity. It defines the program's access to parts of the database and it is identified at the segment level. The program has access to all the fields within the segments to which it is sensitive. A program can also have field-level sensitivity. In this, we define a segment name and the parent name of the segment."
},
{
"code": null,
"e": 22990,
"s": 22719,
"text": "The last macro statement is PCBGEN. PSBGEN is the last statement telling there are no more statements to process. PSBNAME defines the name given to the output PSB module. The LANG parameter specifies the language in which the application program is written, e.g., COBOL."
},
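{
"code": null,
"e": null,
"s": null,
"text": "As an assumed variant of the sample above (not part of it), a program that both reads and updates the database could code PROCOPT=A on its PCB macro to request all processing options −"
},
{
"code": null,
"e": null,
"s": null,
"text": "PCB TYPE=DB,DBDNAME=LIBRARY,KEYLEN=10,PROCOPT=A\n"
},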
{
"code": null,
"e": 23327,
"s": 23261,
"text": "Listed below are the points to note about access control blocks −"
},
{
"code": null,
"e": 23470,
"s": 23327,
"text": "Access Control Blocks for an application program combines the Database Descriptor and the Program Specification Block into an executable form."
},
{
"code": null,
"e": 23694,
"s": 23613,
"text": "ACBGEN is known as Access Control Blocks Generator. It is used to generate ACBs."
},
{
"code": null,
"e": 23902,
"s": 23775,
"text": "For online programs, we need to pre-build ACBs. Hence the ACBGEN utility is executed before executing the application program."
},
{
"code": null,
"e": 24094,
"s": 24029,
"text": "For batch programs, ACBs can be generated at execution time too."
},
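{
"code": null,
"e": null,
"s": null,
"text": "As a minimal sketch, assuming the LIBPSB name from the earlier PSBGEN example, the ACBGEN utility is driven by BUILD control statements such as −"
},
{
"code": null,
"e": null,
"s": null,
"text": "BUILD PSB=(LIBPSB)\n"
},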
{
"code": null,
"e": 24528,
"s": 24159,
"text": "An application program which includes DL/I calls cannot execute directly. Instead, a JCL is required to trigger the IMS DL/I batch module. The batch initialization module in IMS is DFSRRC00. The application program and the DL/I module execute together. The following diagram shows the structure of an application program which includes DL/I calls to access a database."
},
{
"code": null,
"e": 24622,
"s": 24528,
"text": "The application program interfaces with IMS DL/I modules via the following program elements −"
},
{
"code": null,
"e": 24694,
"s": 24622,
"text": "An ENTRY statement specifies that the PCBs are utilized by the program."
},
{
"code": null,
"e": 24894,
"s": 24766,
"text": "A PCB-mask co-relates with the information preserved in the pre-constructed PCB which receives return information from the IMS."
},
{
"code": null,
"e": 25107,
"s": 25022,
"text": "An Input-Output Area is used for passing data segments to and from the IMS database."
},
{
"code": null,
"e": 25284,
"s": 25192,
"text": "Calls to DL/I specify the processing functions such as fetch, insert, delete, replace, etc."
},
{
"code": null,
"e": 25522,
"s": 25376,
"text": "Check Status Codes is used to check the SQL return code of the processing option specified to inform whether the operation was successful or not."
},
{
"code": null,
"e": 25772,
"s": 25668,
"text": "A Terminate statement is used to end the processing of the application program which includes the DL/I."
},
{
"code": null,
"e": 26138,
"s": 25876,
"text": "As of now, we learnt that the IMS consists of segments which are used in high-level programming languages to access data. Consider the following IMS database structure of a Library which we have seen earlier and here we see the layout of its segments in COBOL −"
},
{
"code": null,
"e": 26517,
"s": 26138,
"text": "01 LIBRARY-SEGMENT.\n 05 BOOK-ID PIC X(5).\n 05 ISSUE-DATE PIC X(10).\n 05 RETURN-DATE PIC X(10).\n 05 STUDENT-ID PIC A(25).\n\t\n01 BOOK-SEGMENT.\n 05 BOOK-ID PIC X(5).\n 05 BOOK-NAME PIC A(30).\n 05 AUTHOR PIC A(25).\n\t\n01 STUDENT-SEGMENT.\n 05 STUDENT-ID PIC X(5).\n 05 STUDENT-NAME PIC A(25).\n 05 DIVISION PIC X(10).\n"
},
{
"code": null,
"e": 26813,
"s": 26517,
"text": "The structure of an IMS application program is different from that of a Non-IMS application program. An IMS program cannot be executed directly; rather it is always called as a subroutine. An IMS application program consists of Program Specification Blocks to provide a view of the IMS database."
},
{
"code": null,
"e": 27049,
"s": 26813,
"text": "The application program and the PSBs linked to that program are loaded when we execute an application program which includes IMS DL/I modules. Then the CALL requests triggered by the application programs are executed by the IMS module."
},
{
"code": null,
"e": 27114,
"s": 27049,
"text": "The following IMS services are used by the application program −"
},
{
"code": null,
"e": 27141,
"s": 27114,
"text": "Accessing database records"
},
{
"code": null,
"e": 27162,
"s": 27141,
"text": "Issuing IMS commands"
},
{
"code": null,
"e": 27188,
"s": 27162,
"text": "Issuing IMS service calls"
},
{
"code": null,
"e": 27205,
"s": 27188,
"text": "Checkpoint calls"
},
{
"code": null,
"e": 27216,
"s": 27205,
"text": "Sync calls"
},
{
"code": null,
"e": 27273,
"s": 27216,
"text": "Sending or receiving messages from online user terminals"
},
{
"code": null,
"e": 27441,
"s": 27273,
"text": "We include DL/I calls inside COBOL application program to communicate with IMS database. We use the following DL/I statements in COBOL program to access the database −"
},
{
"code": null,
"e": 27457,
"s": 27441,
"text": "Entry Statement"
},
{
"code": null,
"e": 27474,
"s": 27457,
"text": "Goback Statement"
},
{
"code": null,
"e": 27489,
"s": 27474,
"text": "Call Statement"
},
{
"code": null,
"e": 27600,
"s": 27489,
"text": "It is used to pass the control from the DL/I to the COBOL program. Here is the syntax of the entry statement −"
},
{
"code": null,
"e": 27666,
"s": 27600,
"text": "ENTRY 'DLITCBL' USING pcb-name1\n [pcb-name2]\n"
},
{
"code": null,
"e": 27810,
"s": 27666,
"text": "The above statement is coded in the Procedure Division of a COBOL program. Let us go into the details of the entry statement in COBOL program −"
},
{
"code": null,
"e": 27910,
"s": 27810,
"text": "The batch initialization module triggers the application program and is executed under its control."
},
{
"code": null,
"e": 28143,
"s": 28010,
"text": "The DL/I loads the required control blocks and modules and the application program, and control is given to the application program."
},
{
"code": null,
"e": 28380,
"s": 28276,
"text": "DLITCBL stands for DL/I to COBOL. The entry statement is used to define the entry point in the program."
},
{
"code": null,
"e": 28692,
"s": 28484,
"text": "When we call a sub-program in COBOL, its address is also provided. Likewise, when the DL/I gives the control to the application program, it also provides the address of each PCB defined in the program's PSB."
},
{
"code": null,
"e": 29062,
"s": 28900,
"text": "All the PCBs used in the application program must be defined inside the Linkage Section of the COBOL program because PCB resides outside the application program."
},
{
"code": null,
"e": 29293,
"s": 29224,
"text": "The PCB definition inside the Linkage Section is called as PCB Mask."
},
{
"code": null,
"e": 29567,
"s": 29362,
"text": "The relation between PCB masks and actual PCBs in storage is created by listing the PCBs in the entry statement. The sequence of listing in the entry statement should be same as they appear in the PSBGEN."
},
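{
"code": null,
"e": null,
"s": null,
"text": "For example, if a program's PSB contained two PCBs (the mask names below are hypothetical), the entry statement would list the masks in the same order as they appear in the PSBGEN −"
},
{
"code": null,
"e": null,
"s": null,
"text": "ENTRY 'DLITCBL' USING STUDENT-PCB-MASK\n                      BOOK-PCB-MASK.\n"
},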
{
"code": null,
"e": 29886,
"s": 29772,
"text": "It is used to pass the control back to the IMS control program. Following is the syntax of the Goback statement −"
},
{
"code": null,
"e": 29894,
"s": 29886,
"text": "GOBACK\n"
},
{
"code": null,
"e": 29971,
"s": 29894,
"text": "Listed below are the fundamental points to note about the Goback statement −"
},
{
"code": null,
"e": 30075,
"s": 29971,
"text": "GOBACK is coded at the end of the application program. It returns the control to DL/I from the program."
},
{
"code": null,
"e": 30413,
"s": 30179,
"text": "We should not use STOP RUN as it returns the control to the operating system. If we use STOP RUN, the DL/I never gets a chance to perform its terminating functions. That is why, in DL/I application programs, Goback statement is used."
},
{
"code": null,
"e": 30811,
"s": 30647,
"text": "Before issuing a Goback statement, all the non-DL/I datasets used in the COBOL application program must be closed, otherwise the program will terminate abnormally."
},
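{
"code": null,
"e": null,
"s": null,
"text": "A minimal end-of-program sketch, assuming a non-DL/I file named REPORT-FILE is open, would therefore be −"
},
{
"code": null,
"e": null,
"s": null,
"text": "CLOSE REPORT-FILE.\nGOBACK.\n"
},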
{
"code": null,
"e": 31128,
"s": 30975,
"text": "Call statement is used to request for DL/I services such as executing certain operations on the IMS database. Here is the syntax of the call statement −"
},
{
"code": null,
"e": 31293,
"s": 31128,
"text": "CALL 'CBLTDLI' USING DLI Function Code\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 31424,
"s": 31293,
"text": "The syntax above shows parameters which you can use with the call statement. We will discuss each of them in the following table −"
},
{
"code": null,
"e": 31442,
"s": 31424,
"text": "DLI Function Code"
},
{
"code": null,
"e": 31576,
"s": 31442,
"text": "Identifies the DL/I function to be performed. This argument is the name of the four character fields that describe the I/O operation."
},
{
"code": null,
"e": 31585,
"s": 31576,
"text": "PCB Mask"
},
{
"code": null,
"e": 31751,
"s": 31585,
"text": "The PCB definition inside the Linkage Section is called as PCB Mask. They are used in the entry statement. No SELECT, ASSIGN, OPEN, or CLOSE statements are required."
},
{
"code": null,
"e": 31768,
"s": 31751,
"text": "Segment I/O Area"
},
{
"code": null,
"e": 31892,
"s": 31768,
"text": "Name of an input/output work area. This is an area of the application program into which the DL/I puts a requested segment."
},
{
"code": null,
"e": 31917,
"s": 31892,
"text": "Segment Search Arguments"
},
{
"code": null,
"e": 32052,
"s": 31917,
"text": "These are optional parameters depending on the type of the call issued. They are used to search data segments inside the IMS database."
},
{
"code": null,
"e": 32114,
"s": 32052,
"text": "Given below are the points to note about the Call statement −"
},
{
"code": null,
"e": 32241,
"s": 32114,
"text": "CBLTDLI stands for COBOL to DL/I. It is the name of an interface module that is link edited with your program’s object module."
},
{
"code": null,
"e": 32512,
"s": 32368,
"text": "After each DL/I call, the DLI stores a status code in the PCB. The program can use this code to determine whether the call succeeded or failed."
},
{
"code": null,
"e": 32928,
"s": 32656,
"text": "For more understanding of COBOL, you can go through our COBOL tutorial here. The following example shows the structure of a COBOL program that uses IMS database and DL/I calls. We will discuss in detail each of the parameters used in the example in the upcoming chapters."
},
{
"code": null,
"e": 34267,
"s": 32928,
"text": "IDENTIFICATION DIVISION.\nPROGRAM-ID. TEST1.\nDATA DIVISION.\nWORKING-STORAGE SECTION.\n01 DLI-FUNCTIONS.\n 05 DLI-GU PIC X(4) VALUE 'GU '.\n 05 DLI-GHU PIC X(4) VALUE 'GHU '.\n 05 DLI-GN PIC X(4) VALUE 'GN '.\n 05 DLI-GHN PIC X(4) VALUE 'GHN '.\n 05 DLI-GNP PIC X(4) VALUE 'GNP '.\n 05 DLI-GHNP PIC X(4) VALUE 'GHNP'.\n 05 DLI-ISRT PIC X(4) VALUE 'ISRT'.\n 05 DLI-DLET PIC X(4) VALUE 'DLET'.\n 05 DLI-REPL PIC X(4) VALUE 'REPL'.\n 05 DLI-CHKP PIC X(4) VALUE 'CHKP'.\n 05 DLI-XRST PIC X(4) VALUE 'XRST'.\n 05 DLI-PCB PIC X(4) VALUE 'PCB '.\n01 SEGMENT-I-O-AREA PIC X(150).\nLINKAGE SECTION.\n01 STUDENT-PCB-MASK.\n 05 STD-DBD-NAME PIC X(8).\n 05 STD-SEGMENT-LEVEL PIC XX.\n 05 STD-STATUS-CODE PIC XX.\n 05 STD-PROC-OPTIONS PIC X(4).\n 05 FILLER PIC S9(5) COMP.\n 05 STD-SEGMENT-NAME PIC X(8).\n 05 STD-KEY-LENGTH PIC S9(5) COMP.\n 05 STD-NUMB-SENS-SEGS PIC S9(5) COMP.\n 05 STD-KEY PIC X(11).\nPROCEDURE DIVISION.\nENTRY 'DLITCBL' USING STUDENT-PCB-MASK.\nA000-READ-PARA.\n110-GET-INVENTORY-SEGMENT.\n CALL ‘CBLTDLI’ USING DLI-GN\n STUDENT-PCB-MASK\n SEGMENT-I-O-AREA.\nGOBACK."
},
{
"code": null,
"e": 34480,
"s": 34267,
"text": "DL/I function is the first parameter that is used in a DL/I call. This function tells which operation is going to be performed on the IMS database by the IMS DL/I call. The syntax of DL/I function is as follows −"
},
{
"code": null,
"e": 35051,
"s": 34480,
"text": "01 DLI-FUNCTIONS.\n 05 DLI-GU PIC X(4) VALUE 'GU '.\n 05 DLI-GHU PIC X(4) VALUE 'GHU '.\n 05 DLI-GN PIC X(4) VALUE 'GN '.\n 05 DLI-GHN PIC X(4) VALUE 'GHN '.\n 05 DLI-GNP PIC X(4) VALUE 'GNP '.\n 05 DLI-GHNP PIC X(4) VALUE 'GHNP'.\n 05 DLI-ISRT PIC X(4) VALUE 'ISRT'.\n 05 DLI-DLET PIC X(4) VALUE 'DLET'.\n 05 DLI-REPL PIC X(4) VALUE 'REPL'.\n 05 DLI-CHKP PIC X(4) VALUE 'CHKP'.\n 05 DLI-XRST PIC X(4) VALUE 'XRST'.\n 05 DLI-PCB PIC X(4) VALUE 'PCB '.\n"
},
{
"code": null,
"e": 35101,
"s": 35051,
"text": "This syntax represents the following key points −"
},
{
"code": null,
"e": 35207,
"s": 35101,
"text": "For this parameter, we can provide any four-character name as a storage field to store the function code."
},
{
"code": null,
"e": 35399,
"s": 35313,
"text": "DL/I function parameter is coded in the working storage section of the COBOL program."
},
{
"code": null,
"e": 35675,
"s": 35485,
"text": "For specifying the DL/I function, the programmer needs to code one of the 05 level data names such as DLI-GU in a DL/I call, since COBOL does not allow to code literals on a CALL statement."
},
{
"code": null,
"e": 35988,
"s": 35865,
"text": "DL/I functions are divided into three categories: Get, Update, and Other functions. Let us discuss each of them in detail."
},
{
"code": null,
"e": 36313,
"s": 36111,
"text": "Get functions are similar to the read operation supported by any programming language. Get function is used to fetch segments from an IMS DL/I database. The following Get functions are used in IMS DB −"
},
{
"code": null,
"e": 36324,
"s": 36313,
"text": "Get Unique"
},
{
"code": null,
"e": 36333,
"s": 36324,
"text": "Get Next"
},
{
"code": null,
"e": 36356,
"s": 36333,
"text": "Get Next within Parent"
},
{
"code": null,
"e": 36372,
"s": 36356,
"text": "Get Hold Unique"
},
{
"code": null,
"e": 36386,
"s": 36372,
"text": "Get Hold Next"
},
{
"code": null,
"e": 36414,
"s": 36386,
"text": "Get Hold Next within Parent"
},
{
"code": null,
"e": 36507,
"s": 36414,
"text": "Let us consider the following IMS database structure to understand the DL/I function calls −"
},
{
"code": null,
"e": 36794,
"s": 36507,
"text": "'GU' code is used for the Get Unique function. It works similar to the random read statement in COBOL. It is used to fetch a particular segment occurrence based on the field values. The field values can be provided using segment search arguments. The syntax of a GU call is as follows −"
},
{
"code": null,
"e": 36939,
"s": 36794,
"text": "CALL 'CBLTDLI' USING DLI-GU\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 37272,
"s": 36939,
"text": "If you execute the above call statement by providing appropriate values for all parameters in the COBOL program, you can retrieve the segment in the segment I/O area from the database. In the above example, if you provide the field values of Library, Magazines, and Health, then you get the desired occurrence of the Health segment."
},
{
"code": null,
"e": 37571,
"s": 37272,
"text": "'GN' code is used for the Get Next function. It works similar to the read next statement in COBOL. It is used to fetch segment occurrences in a sequence. The predefined pattern for accessing data segment occurrences is down the hierarchy, then left to right. The syntax of a GN call is as follows −"
},
{
"code": null,
"e": 37716,
"s": 37571,
"text": "CALL 'CBLTDLI' USING DLI-GN\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 38120,
"s": 37716,
"text": "If you execute the above call statement by providing appropriate values for all parameters in the COBOL program, you can retrieve the segment occurrence in the segment I/O area from the database in a sequential order. In the above example, it starts with accessing the Library segment, then Books segment, and so on. We perform the GN call again and again, until we reach the segment occurrence we want."
},
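{
"code": null,
"e": null,
"s": null,
"text": "A typical sequential scan therefore repeats the GN call in a loop. The sketch below reuses the PCB mask and I/O area names from the sample program and assumes that status code GB marks the end of the database −"
},
{
"code": null,
"e": null,
"s": null,
"text": "PERFORM UNTIL STD-STATUS-CODE = 'GB'\n    CALL 'CBLTDLI' USING DLI-GN\n                         STUDENT-PCB-MASK\n                         SEGMENT-I-O-AREA\nEND-PERFORM.\n"
},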
{
"code": null,
"e": 38320,
"s": 38120,
"text": "'GNP' code is used for Get Next within Parent. This function is used to retrieve segment occurrences in sequence subordinate to an established parent segment. The syntax of a GNP call is as follows −"
},
{
"code": null,
"e": 38466,
"s": 38320,
"text": "CALL 'CBLTDLI' USING DLI-GNP\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 38694,
"s": 38466,
"text": "'GHU' code is used for Get Hold Unique. Hold function specifies that we are going to update the segment after retrieval. The Get Hold Unique function corresponds to the Get Unique call. Given below is the syntax of a GHU call −"
},
{
"code": null,
"e": 38840,
"s": 38694,
"text": "CALL 'CBLTDLI' USING DLI-GHU\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 39062,
"s": 38840,
"text": "'GHN' code is used for Get Hold Next. Hold function specifies that we are going to update the segment after retrieval. The Get Hold Next function corresponds to the Get Next call. Given below is the syntax of a GHN call −"
},
{
"code": null,
"e": 39208,
"s": 39062,
"text": "CALL 'CBLTDLI' USING DLI-GHN\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 39474,
"s": 39208,
"text": "'GHNP' code is used for Get Hold Next within Parent. Hold function specifies that we are going to update the segment after retrieval. The Get Hold Next within Parent function corresponds to the Get Next within Parent call. Given below is the syntax of a GHNP call −"
},
{
"code": null,
"e": 39621,
"s": 39474,
"text": "CALL 'CBLTDLI' USING DLI-GHNP\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 39951,
"s": 39621,
"text": "Update functions are similar to re-write or insert operations in any other programming language. Update functions are used to update segments in an IMS DL/I database. Before using the update function, there must be a successful call with Hold clause for the segment occurrence. The following Update functions are used in IMS DB −"
},
{
"code": null,
"e": 39958,
"s": 39951,
"text": "Insert"
},
{
"code": null,
"e": 39965,
"s": 39958,
"text": "Delete"
},
{
"code": null,
"e": 39973,
"s": 39965,
"text": "Replace"
},
{
"code": null,
"e": 40192,
"s": 39973,
"text": "'ISRT' code is used for the Insert function. The ISRT function is used to add a new segment to the database. It is used to change an existing database or load a new database. Given below is the syntax of an ISRT call −"
},
{
"code": null,
"e": 40339,
"s": 40192,
"text": "CALL 'CBLTDLI' USING DLI-ISRT\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 40485,
"s": 40339,
"text": "'DLET' code is used for the Delete function. It is used to remove a segment from an IMS DL/I database. Given below is the syntax of a DLET call −"
},
{
"code": null,
"e": 40632,
"s": 40485,
"text": "CALL 'CBLTDLI' USING DLI-DLET\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 40805,
"s": 40632,
"text": "'REPL' code is used for Get Hold Next within Parent. The Replace function is used to replace a segment in the IMS DL/I database. Given below is the syntax of an REPL call −"
},
{
"code": null,
"e": 40952,
"s": 40805,
"text": "CALL 'CBLTDLI' USING DLI-REPL\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
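{
"code": null,
"e": null,
"s": null,
"text": "A replace is always preceded by a successful hold call. The sketch below reuses the names from the sample program; BOOK-SSA is a hypothetical qualified SSA, and the segment's non-key fields are assumed to be modified in the I/O area between the two calls −"
},
{
"code": null,
"e": null,
"s": null,
"text": "CALL 'CBLTDLI' USING DLI-GHU\n                     STUDENT-PCB-MASK\n                     SEGMENT-I-O-AREA\n                     BOOK-SSA.\nCALL 'CBLTDLI' USING DLI-REPL\n                     STUDENT-PCB-MASK\n                     SEGMENT-I-O-AREA.\n"
},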
{
"code": null,
"e": 41011,
"s": 40952,
"text": "The following other functions are used in IMS DL/I calls −"
},
{
"code": null,
"e": 41022,
"s": 41011,
"text": "Checkpoint"
},
{
"code": null,
"e": 41030,
"s": 41022,
"text": "Restart"
},
{
"code": null,
"e": 41034,
"s": 41030,
"text": "PCB"
},
{
"code": null,
"e": 41170,
"s": 41034,
"text": "'CHKP' code is used for the Checkpoint function. It is used in the recovery features of IMS. Given below is the syntax of a CHKP call −"
},
{
"code": null,
"e": 41317,
"s": 41170,
"text": "CALL 'CBLTDLI' USING DLI-CHKP\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 41450,
"s": 41317,
"text": "'XRST' code is used for the Restart function. It is used in the restart features of IMS. Given below is the syntax of an XRST call −"
},
{
"code": null,
"e": 41597,
"s": 41450,
"text": "CALL 'CBLTDLI' USING DLI-XRST\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 41703,
"s": 41597,
"text": "PCB function is used in CICS programs in the IMS DL/I database. Given below is the syntax of a PCB call −"
},
{
"code": null,
"e": 41849,
"s": 41703,
"text": "CALL 'CBLTDLI' USING DLI-PCB\n PCB Mask\n Segment I/O Area\n [Segment Search Arguments]\n"
},
{
"code": null,
"e": 41922,
"s": 41849,
"text": "You can find more details about these functions in the recovery chapter."
},
{
"code": null,
"e": 42103,
"s": 41922,
"text": "PCB stands for Program Communication Block. PCB Mask is the second parameter used in the DL/I call. It is declared in the linkage section. Given below is the syntax of a PCB Mask −"
},
{
"code": null,
"e": 42413,
"s": 42103,
"text": "01 PCB-NAME.\n 05 DBD-NAME PIC X(8).\n 05 SEG-LEVEL PIC XX.\n 05 STATUS-CODE PIC XX.\n 05 PROC-OPTIONS PIC X(4).\n 05 RESERVED-DLI PIC S9(5).\n 05 SEG-NAME PIC X(8).\n 05 LENGTH-FB-KEY PIC S9(5).\n 05 NUMB-SENS-SEGS PIC S9(5).\n 05 KEY-FB-AREA PIC X(n).\n"
},
{
"code": null,
"e": 42447,
"s": 42413,
"text": "Here are the key points to note −"
},
{
"code": null,
"e": 42650,
"s": 42447,
"text": "For each database, the DL/I maintains an area of storage that is known as the program communication block. It stores the information about the database that are accessed inside the application programs."
},
{
"code": null,
"e": 43055,
"s": 42853,
"text": "The ENTRY statement creates a connection between the PCB masks in the Linkage Section and the PCBs within the program’s PSB. The PCB masks used in a DL/I call tells which database to use for operation."
},
{
"code": null,
"e": 43444,
"s": 43257,
"text": "You can assume this is similar to specifying a file name in a COBOL READ statement or a record name in a COBOL write statement. No SELECT, ASSIGN, OPEN, or CLOSE statements are required."
},
{
"code": null,
"e": 43779,
"s": 43631,
"text": "After each DL/I call, the DL/I stores a status code in the PCB and the program can use that code to determine whether the call succeeded or failed."
},
{
"code": null,
"e": 43944,
"s": 43927,
"text": "Points to note −"
},
{
"code": null,
"e": 44033,
"s": 43944,
"text": "PCB Name is the name of the area which refers to the entire structure of the PCB fields."
},
{
"code": null,
"e": 44162,
"s": 44122,
"text": "PCB Name is used in program statements."
},
{
"code": null,
"e": 44238,
"s": 44202,
"text": "PCB Name is not a field in the PCB."
},
{
"code": null,
"e": 44291,
"s": 44274,
"text": "Points to note −"
},
{
"code": null,
"e": 44353,
"s": 44291,
"text": "DBD name contains the character data. It is eight bytes long."
},
{
"code": null,
"e": 44596,
"s": 44415,
"text": "The first field in the PCB is the name of the database being processed and it provides the DBD name from the library of database descriptions associated with a particular database."
},
{
"code": null,
"e": 44794,
"s": 44777,
"text": "Points to note −"
},
{
"code": null,
"e": 44905,
"s": 44794,
"text": "Segment level is known as Segment Hierarchy Level Indicator. It contains character data and is two bytes long."
},
{
"code": null,
"e": 45190,
"s": 45016,
"text": "A segment level field stores the level of the segment that was processed. When a segment is retrieved successfully, the level number of the retrieved segment is stored here."
},
{
"code": null,
"e": 45495,
"s": 45364,
"text": "A segment level field never has a value greater than 15 because that is the maximum number of levels permitted in a DL/I database."
},
{
"code": null,
"e": 45643,
"s": 45626,
"text": "Points to note −"
},
{
"code": null,
"e": 45699,
"s": 45643,
"text": "Status code field contains two bytes of character data."
},
{
"code": null,
"e": 45798,
"s": 45755,
"text": "Status code contains the DL/I status code."
},
{
"code": null,
"e": 45941,
"s": 45841,
"text": "Spaces are moved to the status code field when DL/I completes the processing of calls successfully."
},
{
"code": null,
"e": 46101,
"s": 46041,
"text": "Non-space values indicate that the call was not successful."
},
{
"code": null,
"e": 46268,
"s": 46161,
"text": "Status code GB indicates end-of-file and status code GE indicates that the requested segment is not found."
},
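{
"code": null,
"e": null,
"s": null,
"text": "A common pattern is to test the status code immediately after each call. The sketch below uses the STD-STATUS-CODE field from the sample PCB mask; the paragraph names are hypothetical −"
},
{
"code": null,
"e": null,
"s": null,
"text": "EVALUATE STD-STATUS-CODE\n    WHEN SPACES\n        PERFORM 200-PROCESS-SEGMENT\n    WHEN 'GB'\n        PERFORM 300-END-OF-DATABASE\n    WHEN OTHER\n        PERFORM 999-ERROR-PARA\nEND-EVALUATE.\n"
},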
{
"code": null,
"e": 46392,
"s": 46375,
"text": "Points to note −"
},
{
"code": null,
"e": 46479,
"s": 46392,
"text": "Proc options are known as processing options which contain four-character data fields."
},
{
"code": null,
"e": 46675,
"s": 46566,
"text": "A Processing Option field indicates what kind of processing the program is authorized to do on the database."
},
{
"code": null,
"e": 46801,
"s": 46784,
"text": "Points to note −"
},
{
"code": null,
"e": 46891,
"s": 46801,
"text": "Reserved DL/I is known as the reserved area of the IMS. It stores four bytes binary data."
},
{
"code": null,
"e": 47064,
"s": 46981,
"text": "IMS uses this area for its own internal linkage related to an application program."
},
{
"code": null,
"e": 47164,
"s": 47147,
"text": "Points to note −"
},
{
"code": null,
"e": 47252,
"s": 47164,
"text": "SEG Name is known as segment name feedback area. It contains 8 bytes of character data."
},
{
"code": null,
"e": 47410,
"s": 47340,
"text": "The name of the segment is stored in this field after each DL/I call."
},
{
"code": null,
"e": 47497,
"s": 47480,
"text": "Points to note −"
},
{
"code": null,
"e": 47597,
"s": 47497,
"text": "Length FB key is known as the length of the key feedback area. It stores four bytes of binary data."
},
{
"code": null,
"e": 47825,
"s": 47697,
"text": "This field is used to report the length of the concatenated key of the lowest level segment processed during the previous call."
},
{
"code": null,
"e": 47992,
"s": 47953,
"text": "It is used with the key feedback area."
},
{
"code": null,
"e": 48048,
"s": 48031,
"text": "Points to note −"
},
{
"code": null,
"e": 48109,
"s": 48048,
"text": "Number of sensitivity segments store four bytes binary data."
},
{
"code": null,
"e": 48308,
"s": 48170,
"text": "It defines to which level an application program is sensitive. It represents a count of number of segments in the logical data structure."
},
{
"code": null,
"e": 48463,
"s": 48446,
"text": "Points to note −"
},
{
"code": null,
"e": 48523,
"s": 48463,
"text": "Key feedback area varies in length from one PCB to another."
},
{
"code": null,
"e": 48691,
"s": 48583,
"text": "It contains the longest possible concatenated key that can be used with the program’s view of the database."
},
{
"code": null,
"e": 48984,
"s": 48799,
"text": "After a database operation, DL/I returns the concatenated key of the lowest level segment processed in this field, and it returns the length of the key in the key length feedback area."
},
{
"code": null,
"e": 49396,
"s": 49169,
"text": "SSA stands for Segment Search Arguments. SSA is used to identify the segment occurrence being accessed. It is an optional parameter. We can include any number of SSAs depending on the requirement. There are two types of SSAs −"
},
{
"code": null,
"e": 49412,
"s": 49396,
"text": "Unqualified SSA"
},
{
"code": null,
"e": 49426,
"s": 49412,
"text": "Qualified SSA"
},
{
"code": null,
"e": 49556,
"s": 49426,
"text": "An unqualified SSA provides the name of the segment being used inside the call. Given below is the syntax of an unqualified SSA −"
},
{
"code": null,
"e": 49652,
"s": 49556,
"text": "01 UNQUALIFIED-SSA.\n 05 SEGMENT-NAME PIC X(8).\n 05 FILLER PIC X VALUE SPACE.\n"
},
{
"code": null,
"e": 49703,
"s": 49652,
"text": "The key points of unqualified SSA are as follows −"
},
{
"code": null,
"e": 49744,
"s": 49703,
"text": "A basic unqualified SSA is 9 bytes long."
},
{
"code": null,
"e": 49861,
"s": 49785,
"text": "The first 8 bytes hold the segment name which is being used for processing."
},
{
"code": null,
"e": 49974,
"s": 49937,
"text": "The last byte always contains space."
},
{
"code": null,
"e": 50065,
"s": 50011,
"text": "DL/I uses the last byte to determine the type of SSA."
},
{
"code": null,
"e": 50207,
"s": 50119,
"text": "To access a particular segment, move the name of the segment in the SEGMENT-NAME field."
},
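{
"code": null,
"e": null,
"s": null,
"text": "For example − a sketch assuming the UNQUALIFIED-SSA layout above and the BOOKSEG segment from the earlier DBDGEN − the program moves the segment name into the SSA and passes it on the call −"
},
{
"code": null,
"e": null,
"s": null,
"text": "MOVE 'BOOKSEG' TO SEGMENT-NAME.\nCALL 'CBLTDLI' USING DLI-GN\n                     STUDENT-PCB-MASK\n                     SEGMENT-I-O-AREA\n                     UNQUALIFIED-SSA.\n"
},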
{
"code": null,
"e": 50372,
"s": 50295,
"text": "The following images show the structures of unqualified and qualified SSAs −"
},
{
"code": null,
"e": 50513,
"s": 50372,
"text": "A Qualified SSA provides the segment type with the specific database occurrence of a segment. Given below is the syntax of a Qualified SSA −"
},
{
"code": null,
"e": 50746,
"s": 50513,
"text": "01 QUALIFIED-SSA.\n 05 SEGMENT-NAME PIC X(8).\n 05 FILLER PIC X(01) VALUE '('.\n 05 FIELD-NAME PIC X(8).\n 05 REL-OPR PIC X(2).\n 05 SEARCH-VALUE PIC X(n).\n 05 FILLER PIC X(n+1) VALUE ')'.\n"
},
{
"code": null,
"e": 50795,
"s": 50746,
"text": "The key points of qualified SSA are as follows −"
},
{
"code": null,
"e": 50882,
"s": 50795,
"text": "The first 8 bytes of a qualified SSA holds the segment name being used for processing."
},
{
"code": null,
"e": 51011,
"s": 50969,
"text": "The ninth byte is a left parenthesis '('."
},
{
"code": null,
"e": 51153,
"s": 51053,
"text": "The next 8 bytes starting from the tenth position specifies the field name which we want to search."
},
{
"code": null,
"e": 51358,
"s": 51253,
"text": "After the field name, in the 18th and 19th positions, we specify two-character relational operator code."
},
{
"code": null,
"e": 51551,
"s": 51463,
"text": "Then we specify the field value and in the last byte, there is a right parenthesis ')'."
},
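{
"code": null,
"e": null,
"s": null,
"text": "As a concrete sketch − assuming the BOOKSEG segment and its ten-byte BOOKS key field from the earlier DBDGEN − a qualified SSA can be laid out with literal values, leaving only the search value to be filled in at run time −"
},
{
"code": null,
"e": null,
"s": null,
"text": "01 BOOK-SSA.\n   05 FILLER    PIC X(8)  VALUE 'BOOKSEG '.\n   05 FILLER    PIC X     VALUE '('.\n   05 FILLER    PIC X(8)  VALUE 'BOOKS   '.\n   05 FILLER    PIC XX    VALUE ' ='.\n   05 BOOK-KEY  PIC X(10).\n   05 FILLER    PIC X     VALUE ')'.\n"
},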
{
"code": null,
"e": 51715,
"s": 51639,
"text": "The following table shows the relational operators used in a Qualified SSA."
},
{
"code": null,
"e": 52015,
"s": 51715,
"text": "Command codes are used to enhance the functionality of DL/I calls. Command codes reduce the number of DL/I calls, making the programs simple. Also, it improves the performance as the number of calls is reduced. The following image shows how command codes are used in unqualified and qualified SSAs −"
},
{
"code": null,
"e": 52064,
"s": 52015,
"text": "The key points of command codes are as follows −"
},
{
"code": null,
"e": 52166,
"s": 52064,
"text": "To use command codes, specify an asterisk in the 9th position of the SSA as shown in the above image."
},
{
"code": null,
"e": 52313,
"s": 52268,
"text": "Command code is coded at the tenth position."
},
{
"code": null,
"e": 52531,
"s": 52358,
"text": "From 10th position onwards, DL/I considers all characters to be command codes until it encounters a space for an unqualified SSA and a left parenthesis for a qualified SSA."
},
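{
"code": null,
"e": null,
"s": null,
"text": "For instance, an unqualified SSA carrying the D (path data) command code − a hypothetical sketch following the layout rules above − places the asterisk in the ninth byte, the command code in the tenth, and a terminating space after it −"
},
{
"code": null,
"e": null,
"s": null,
"text": "01 BOOK-SSA-PATH.\n   05 FILLER  PIC X(8) VALUE 'BOOKSEG '.\n   05 FILLER  PIC X    VALUE '*'.\n   05 FILLER  PIC X    VALUE 'D'.\n   05 FILLER  PIC X    VALUE ' '.\n"
},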
{
"code": null,
"e": 52770,
"s": 52704,
"text": "The following table shows the list of command codes used in SSA −"
},
{
"code": null,
"e": 52837,
"s": 52770,
"text": "The fundamental points of multiple qualifications are as follows −"
},
{
"code": null,
"e": 52947,
"s": 52837,
"text": "Multiple qualifications are required when we need to use two or more qualifications or fields for comparison."
},
{
"code": null,
"e": 53137,
"s": 53057,
"text": "We use Boolean operators like AND and OR to connect two or more qualifications."
},
{
"code": null,
"e": 53343,
"s": 53217,
"text": "Multiple qualifications can be used when we want to process a segment based on a range of possible values for a single field."
},
{
"code": null,
"e": 53524,
"s": 53469,
"text": "Given below is the syntax of Multiple Qualifications −"
},
{
"code": null,
"e": 53893,
"s": 53524,
"text": "01 QUALIFIED-SSA.\n 05 SEGMENT-NAME PIC X(8).\n 05 FILLER PIC X(01) VALUE '('.\n 05 FIELD-NAME1 PIC X(8).\n 05 REL-OPR PIC X(2).\n 05 SEARCH-VALUE1 PIC X(m).\n 05 MUL-QUAL PIC X VALUE '&'.\n 05 FIELD-NAME2 PIC X(8).\n 05 REL-OPR PIC X(2).\n 05 SEARCH-VALUE2 PIC X(n).\n 05 FILLER PIC X(n+1) VALUE ')'.\n"
},
{
"code": null,
"e": 54004,
"s": 53893,
"text": "MUL-QUAL is a short term for MULtiple QUALIification in which we can provide boolean operators like AND or OR."
},
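{
"code": null,
"e": null,
"s": null,
"text": "For example − a sketch assuming the BOOKSEG segment, its BOOKS key field, and made-up key values − a range test joins two qualifications on the same field with '&' −"
},
{
"code": null,
"e": null,
"s": null,
"text": "01 RANGE-SSA.\n   05 FILLER  PIC X(8)  VALUE 'BOOKSEG '.\n   05 FILLER  PIC X     VALUE '('.\n   05 FILLER  PIC X(8)  VALUE 'BOOKS   '.\n   05 FILLER  PIC XX    VALUE '>='.\n   05 FILLER  PIC X(10) VALUE 'B100'.\n   05 FILLER  PIC X     VALUE '&'.\n   05 FILLER  PIC X(8)  VALUE 'BOOKS   '.\n   05 FILLER  PIC XX    VALUE '<='.\n   05 FILLER  PIC X(10) VALUE 'B200'.\n   05 FILLER  PIC X     VALUE ')'.\n"
},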
{
"code": null,
"e": 54079,
"s": 54004,
"text": "The various data retrieval methods used in IMS DL/I calls are as follows −"
},
{
"code": null,
"e": 54087,
"s": 54079,
"text": "GU Call"
},
{
"code": null,
"e": 54095,
"s": 54087,
"text": "GN Call"
},
{
"code": null,
"e": 54115,
"s": 54095,
"text": "Using Command Codes"
},
{
"code": null,
"e": 54135,
"s": 54115,
"text": "Multiple Processing"
},
{
"code": null,
"e": 54238,
"s": 54135,
"text": "Let us consider the following IMS database structure to understand the data retrieval function calls −"
},
{
"code": null,
"e": 54283,
"s": 54238,
"text": "The fundamentals of GU call are as follows −"
},
{
"code": null,
"e": 54354,
"s": 54283,
"text": "GU call is known as Get Unique call. It is used for random processing."
},
{
"code": null,
"e": 54559,
"s": 54425,
"text": "If an application does not update the database regularly or if the number of database updates is small, then we use random processing."
},
{
"code": null,
"e": 54789,
"s": 54693,
"text": "GU call is used to place the pointer at a particular position for further sequential retrieval."
},
{
"code": null,
"e": 54969,
"s": 54885,
"text": "GU calls are independent of the pointer position established by the previous calls."
},
{
"code": null,
"e": 55138,
"s": 55053,
"text": "GU call processing is based on the unique key fields supplied in the call statement."
},
{
"code": null,
"e": 55333,
"s": 55223,
"text": "If we supply a key field that is not unique, then DL/I returns the first segment occurrence of the key field."
},
{
"code": null,
"e": 55659,
"s": 55443,
"text": "CALL 'CBLTDLI' USING DLI-GU\n PCB-NAME\n IO-AREA\n LIBRARY-SSA\n BOOKS-SSA\n ENGINEERING-SSA\n IT-SSA"
},
{
"code": null,
"e": 55861,
"s": 55659,
"text": "The above example shows we issue a GU call by providing a complete set of qualified SSAs. It includes all the key fields starting from the root level to the segment occurrence that we want to retrieve."
},
{
"code": null,
"e": 55969,
"s": 55861,
"text": "If we do not provide the complete set of qualified SSAs in the call, then DL/I works in the following way −"
},
{
"code": null,
"e": 56110,
"s": 55969,
"text": "When we use an unqualified SSA in a GU call, DL/I accesses the first segment occurrence in the database that meets the criteria you specify."
},
{
"code": null,
"e": 56364,
"s": 56251,
"text": "When we issue a GU call without any SSAs, DL/I returns the first occurrence of the root segment in the database."
},
{
"code": null,
"e": 56651,
"s": 56477,
"text": "If some SSAs at intermediate levels are not mentioned in the call, then DL/I uses either the established position or the default value of an unqualified SSA for the segment."
},
{
"code": null,
"e": 56895,
"s": 56825,
"text": "The following table shows the relevant status codes after a GU call −"
},
{
"code": null,
"e": 56902,
"s": 56895,
"text": "Spaces"
},
{
"code": null,
"e": 56918,
"s": 56902,
"text": "Successful call"
},
{
"code": null,
"e": 56921,
"s": 56918,
"text": "GE"
},
{
"code": null,
"e": 56995,
"s": 56921,
"text": "DL/I could not find a segment that met the criteria specified in the call."
},
{
"code": null,
"e": 57040,
"s": 56995,
"text": "The fundamentals of GN call are as follows −"
},
{
"code": null,
"e": 57119,
"s": 57040,
"text": "GN call is known as Get Next call. It is used for basic sequential processing."
},
{
"code": null,
"e": 57307,
"s": 57198,
"text": "The initial position of the pointer in the database is before the root segment of the first database record."
},
{
"code": null,
"e": 57529,
"s": 57416,
"text": "The database pointer position is before the next segment occurrence in the sequence, after a successful GN call."
},
{
"code": null,
"e": 57734,
"s": 57642,
"text": "The GN call starts through the database from the position established by the previous call."
},
{
"code": null,
"e": 57960,
"s": 57826,
"text": "If a GN call is unqualified, it returns the next segment occurrence in the database regardless of its type, in hierarchical sequence."
},
{
"code": null,
"e": 58206,
"s": 58094,
"text": "If a GN call includes SSAs, then DL/I retrieves only segments that meet the requirements of all specified SSAs."
},
{
"code": null,
"e": 58436,
"s": 58318,
"text": "CALL 'CBLTDLI' USING DLI-GN\n PCB-NAME\n IO-AREA\n BOOKS-SSA"
},
{
"code": null,
"e": 58599,
"s": 58436,
"text": "The above example shows we issue a GN call providing the starting position to read the records sequentially. It fetches the first occurrence of the BOOKS segment."
},
{
"code": null,
"e": 58669,
"s": 58599,
"text": "The following table shows the relevant status codes after a GN call −"
},
{
"code": null,
"e": 58676,
"s": 58669,
"text": "Spaces"
},
{
"code": null,
"e": 58692,
"s": 58676,
"text": "Successful call"
},
{
"code": null,
"e": 58695,
"s": 58692,
"text": "GE"
},
{
"code": null,
"e": 58770,
"s": 58695,
"text": "DL/I could not find a segment that met the criteria specified in the call."
},
{
"code": null,
"e": 58773,
"s": 58770,
"text": "GA"
},
{
"code": null,
"e": 58863,
"s": 58773,
"text": "An unqualified GN call moves up one level in the database hierarchy to fetch the segment."
},
{
"code": null,
"e": 58866,
"s": 58863,
"text": "GB"
},
{
"code": null,
"e": 58916,
"s": 58866,
"text": "End of database is reached and segment not found."
},
{
"code": null,
"e": 58919,
"s": 58916,
"text": "GK"
},
{
"code": null,
"e": 59064,
"s": 58919,
"text": "An unqualified GN call tries to fetch a segment of a particular type other than the one just retrieved but stays in the same hierarchical level."
},
{
"code": null,
"e": 59192,
"s": 59064,
"text": "Command codes are used with calls to fetch a segment occurrence. The various command codes used with calls are discussed below."
},
{
"code": null,
"e": 59209,
"s": 59192,
"text": "Points to note −"
},
{
"code": null,
"e": 59312,
"s": 59209,
"text": "When an F command code is specified in a call, the call processes the first occurrence of the segment."
},
{
"code": null,
"e": 59528,
"s": 59415,
"text": "The F command code can be used when we want to process sequentially, and it can be used with GN calls and GNP calls."
},
{
"code": null,
"e": 59783,
"s": 59641,
"text": "If we specify an F command code with a GU call, it does not have any significance, as GU calls fetch the first segment occurrence by default."
},
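{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of an unqualified SSA carrying the F command code − the segment name BOOKS is taken from the example database, while the data name and layout are assumptions −\n\n01 BOOKS-FIRST-SSA.\n   05 FILLER PIC X(8) VALUE 'BOOKS   '.\n   05 FILLER PIC X(1) VALUE '*'.\n   05 FILLER PIC X(1) VALUE 'F'.\n   05 FILLER PIC X(1) VALUE ' '.\n\nPassing BOOKS-FIRST-SSA on a GN or GNP call positions retrieval at the first BOOKS occurrence under the current parent."
},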
{
"code": null,
"e": 59942,
"s": 59925,
"text": "Points to note −"
},
{
"code": null,
"e": 60044,
"s": 59942,
"text": "When an L command code is specified in a call, the call processes the last occurrence of the segment."
},
{
"code": null,
"e": 60259,
"s": 60146,
"text": "The L command code can be used when we want to process sequentially, and it can be used with GN calls and GNP calls."
},
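{
"code": null,
"e": null,
"s": null,
"text": "Similarly, a hedged sketch of a GN call whose unqualified SSA carries the L command code ('L' in the tenth position of an SSA built like the one sketched above; BOOKS-LAST-SSA is a hypothetical name) −\n\nCALL 'CBLTDLI' USING DLI-GN\n                     PCB-NAME\n                     IO-AREA\n                     BOOKS-LAST-SSA."
},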
{
"code": null,
"e": 60389,
"s": 60372,
"text": "Points to note −"
},
{
"code": null,
"e": 60481,
"s": 60389,
"text": "The D command code is used to fetch more than one segment occurrence using just a single call."
},
{
"code": null,
"e": 60751,
"s": 60573,
"text": "Normally DL/I operates on the lowest level segment specified in an SSA, but in many cases, we want data from other levels as well. In those cases, we can use the D command code."
},
{
"code": null,
"e": 60997,
"s": 60929,
"text": "The D command code makes it easy to retrieve the entire path of segments."
},
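{
"code": null,
"e": null,
"s": null,
"text": "A hedged sketch of such a path call − the higher-level SSAs are assumed variants of the earlier qualified SSAs with '*D' appended −\n\nCALL 'CBLTDLI' USING DLI-GU\n                     PCB-NAME\n                     IO-AREA\n                     LIBRARY-D-SSA\n                     BOOKS-D-SSA\n                     ENGINEERING-D-SSA\n                     IT-SSA.\n\nAfter the call, the I/O area would hold the LIBRARY, BOOKS, ENGINEERING and IT segments concatenated in hierarchical order."
},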
{
"code": null,
"e": 61082,
"s": 61065,
"text": "Points to note −"
},
{
"code": null,
"e": 61126,
"s": 61082,
"text": "C command code is used to concatenate keys."
},
{
"code": null,
"e": 61362,
"s": 61170,
"text": "Using relational operators is a bit complex, as we need to specify a field name, a relational operator, and a search value. Instead, we can use a C command code to provide a concatenated key."
},
{
"code": null,
"e": 61610,
"s": 61554,
"text": "The following example shows the use of C command code −"
},
{
"code": null,
"e": 61966,
"s": 61610,
"text": "01 LOCATION-SSA.\n   05 FILLER             PIC X(11) VALUE 'INLOCSEG*C('.\n   05 LIBRARY-SSA        PIC X(5).\n   05 BOOKS-SSA          PIC X(4).\n   05 ENGINEERING-SSA    PIC X(6).\n   05 IT-SSA             PIC X(3).\n   05 FILLER             PIC X     VALUE ')'.\n\nCALL 'CBLTDLI' USING DLI-GU\n                     PCB-NAME\n                     IO-AREA\n                     LOCATION-SSA"
},
{
"code": null,
"e": 61983,
"s": 61966,
"text": "Points to note −"
},
{
"code": null,
"e": 62096,
"s": 61983,
"text": "When we issue a GU or GN call, the DL/I establishes its parentage at the lowest level segment that is retrieved."
},
{
"code": null,
"e": 62333,
"s": 62209,
"text": "If we include a P command code, then the DL/I establishes its parentage at a higher level segment in the hierarchical path."
},
{
"code": null,
"e": 62474,
"s": 62457,
"text": "Points to note −"
},
{
"code": null,
"e": 62592,
"s": 62474,
"text": "When a U command code is specified in an unqualified SSA in a GN call, the DL/I restricts the search for the segment."
},
{
"code": null,
"e": 62772,
"s": 62710,
"text": "U command code is ignored if it is used with a qualified SSA."
},
{
"code": null,
"e": 62851,
"s": 62834,
"text": "Points to note −"
},
{
"code": null,
"e": 63002,
"s": 62851,
"text": "V command code works similar to the U command code, but it restricts the search of a segment at a particular level and all levels above the hierarchy."
},
{
"code": null,
"e": 63211,
"s": 63153,
"text": "V command code is ignored when used with a qualified SSA."
},
{
"code": null,
"e": 63286,
"s": 63269,
"text": "Points to note −"
},
{
"code": null,
"e": 63388,
"s": 63286,
"text": "Q command code is used to enqueue or reserve a segment for exclusive use of your application program."
},
{
"code": null,
"e": 63599,
"s": 63490,
"text": "Q command code is used in an interactive environment where another program might make a change to a segment."
},
{
"code": null,
"e": 63851,
"s": 63708,
"text": "A program can have multiple positions in the IMS database which is known as multiple processing. Multiple processing can be done in two ways −"
},
{
"code": null,
"e": 63865,
"s": 63851,
"text": "Multiple PCBs"
},
{
"code": null,
"e": 63886,
"s": 63865,
"text": "Multiple Positioning"
},
{
"code": null,
"e": 64142,
"s": 63886,
"text": "Multiple PCBs can be defined for a single database. If there are multiple PCBs, then an application program can have different views of it. This method for implementing multiple processing is inefficient because of the overheads imposed by the extra PCBs."
},
{
"code": null,
"e": 64398,
"s": 64142,
"text": "A program can maintain multiple positions in a database using a single PCB. This is achieved by maintaining a distinct position for each hierarchical path. Multiple positioning is used to access segments of two or more types sequentially at the same time."
},
{
"code": null,
"e": 64478,
"s": 64398,
"text": "The different data manipulation methods used in IMS DL/I calls are as follows −"
},
{
"code": null,
"e": 64488,
"s": 64478,
"text": "ISRT Call"
},
{
"code": null,
"e": 64503,
"s": 64488,
"text": "Get Hold Calls"
},
{
"code": null,
"e": 64513,
"s": 64503,
"text": "REPL Call"
},
{
"code": null,
"e": 64523,
"s": 64513,
"text": "DLET Call"
},
{
"code": null,
"e": 64629,
"s": 64523,
"text": "Let us consider the following IMS database structure to understand the data manipulation function calls −"
},
{
"code": null,
"e": 64646,
"s": 64629,
"text": "Points to note −"
},
{
"code": null,
"e": 64736,
"s": 64646,
"text": "ISRT call is known as Insert call which is used to add segment occurrences to a database."
},
{
"code": null,
"e": 64874,
"s": 64826,
"text": "ISRT calls are used for loading a new database."
},
{
"code": null,
"e": 64998,
"s": 64922,
"text": "We issue an ISRT call when a segment description field is loaded with data."
},
{
"code": null,
"e": 65196,
"s": 65074,
"text": "An unqualified or qualified SSA must be specified in the call so that the DL/I knows where to place a segment occurrence."
},
{
"code": null,
"e": 65493,
"s": 65318,
"text": "We can use a combination of both unqualified and qualified SSA in the call. A qualified SSA can be specified for all the above levels. Let us consider the following example −"
},
{
"code": null,
"e": 65870,
"s": 65668,
"text": "CALL 'CBLTDLI' USING DLI-ISRT\n PCB-NAME\n IO-AREA\n LIBRARY-SSA\n BOOKS-SSA\n UNQUALIFIED-ENGINEERING-SSA"
},
{
"code": null,
"e": 65984,
"s": 65870,
"text": "The above example shows we are issuing an ISRT call by providing a combination of qualified and unqualified SSAs."
},
{
"code": null,
"e": 66189,
"s": 65984,
"text": "If the new segment we are inserting has a unique key field, it is added at the proper position. If the key field is not unique, it is added according to the rules defined by the database administrator."
},
{
"code": null,
"e": 66374,
"s": 66189,
"text": "When we issue an ISRT call without specifying a key field, then the insert rule tells where to place the segments relative to existing twin segments. Given below are the insert rules −"
},
{
"code": null,
"e": 66456,
"s": 66374,
"text": "First − If the rule is first, the new segment is added before any existing twins."
},
{
"code": null,
"e": 66617,
"s": 66538,
"text": "Last − If the rule is last, the new segment is added after all existing twins."
},
{
"code": null,
"e": 66827,
"s": 66696,
"text": "Here − If the rule is here, it is added at the current position relative to existing twins, which may be first, last, or anywhere."
},
{
"code": null,
"e": 67031,
"s": 66958,
"text": "The following table shows the relevant status codes after an ISRT call −"
},
{
"code": null,
"e": 67038,
"s": 67031,
"text": "Spaces"
},
{
"code": null,
"e": 67054,
"s": 67038,
"text": "Successful call"
},
{
"code": null,
"e": 67057,
"s": 67054,
"text": "GE"
},
{
"code": null,
"e": 67142,
"s": 67057,
"text": "Multiple SSAs are used and the DL/I cannot satisfy the call with the specified path."
},
{
"code": null,
"e": 67145,
"s": 67142,
"text": "II"
},
{
"code": null,
"e": 67218,
"s": 67145,
"text": "The program tried to add a segment occurrence that is already present in the database."
},
{
"code": null,
"e": 67234,
"s": 67218,
"text": "LB / LC / LD / LE"
},
{
"code": null,
"e": 67387,
"s": 67234,
"text": "We get these status codes during load processing. In most cases, they indicate that the segments are not being inserted in exact hierarchical sequence."
},
{
"code": null,
"e": 67404,
"s": 67387,
"text": "Points to note −"
},
{
"code": null,
"e": 67628,
"s": 67556,
"text": "There are three types of Get Hold call which we specify in a DL/I call:"
},
{
"code": null,
"e": 67650,
"s": 67628,
"text": "Get Hold Unique (GHU)"
},
{
"code": null,
"e": 67692,
"s": 67672,
"text": "Get Hold Next (GHN)"
},
{
"code": null,
"e": 67747,
"s": 67712,
"text": "Get Hold Next within Parent (GHNP)"
},
{
"code": null,
"e": 67984,
"s": 67782,
"text": "Hold function specifies that we are going to update the segment after retrieval. So before an REPL or DLET call, a successful hold call must be issued telling the DL/I an intent to update the database."
},
{
"code": null,
"e": 68203,
"s": 68186,
"text": "Points to note −"
},
{
"code": null,
"e": 68291,
"s": 68203,
"text": "After a successful get hold call, we issue an REPL call to update a segment occurrence."
},
{
"code": null,
"e": 68440,
"s": 68379,
"text": "We cannot change the length of a segment using an REPL call."
},
{
"code": null,
"e": 68563,
"s": 68501,
"text": "We cannot change the value of a key field using an REPL call."
},
{
"code": null,
"e": 68726,
"s": 68625,
"text": "We cannot use a qualified SSA with an REPL call. If we specify a qualified SSA, then the call fails."
},
{
"code": null,
"e": 69226,
"s": 68827,
"text": "CALL 'CBLTDLI' USING DLI-GHU\n                     PCB-NAME\n                     IO-AREA\n                     LIBRARY-SSA\n                     BOOKS-SSA\n                     ENGINEERING-SSA\n                     IT-SSA.\n \n*Move the values which you want to update in the IT segment occurrence*\n\nCALL 'CBLTDLI' USING DLI-REPL\n                     PCB-NAME\n                     IO-AREA."
},
{
"code": null,
"e": 69440,
"s": 69226,
"text": "The above example updates the IT segment occurrence using an REPL call. First, we issue a GHU call to get the segment occurrence we want to update. Then, we issue an REPL call to update the values of that segment."
},
{
"code": null,
"e": 69457,
"s": 69440,
"text": "Points to note −"
},
{
"code": null,
"e": 69516,
"s": 69457,
"text": "DLET call works much in the same way as an REPL call does."
},
{
"code": null,
"e": 69662,
"s": 69575,
"text": "After a successful get hold call, we issue a DLET call to delete a segment occurrence."
},
{
"code": null,
"e": 69849,
"s": 69749,
"text": "We cannot use a qualified SSA with a DLET call. If we specify a qualified SSA, then the call fails."
},
{
"code": null,
"e": 70279,
"s": 69949,
"text": "CALL 'CBLTDLI' USING DLI-GHU\n                     PCB-NAME\n                     IO-AREA\n                     LIBRARY-SSA\n                     BOOKS-SSA\n                     ENGINEERING-SSA\n                     IT-SSA.\n \nCALL 'CBLTDLI' USING DLI-DLET\n                     PCB-NAME\n                     IO-AREA."
},
{
"code": null,
"e": 70491,
"s": 70279,
"text": "The above example deletes the IT segment occurrence using a DLET call. First, we issue a GHU call to get the segment occurrence we want to delete. Then, we issue a DLET call to delete that segment occurrence."
},
{
"code": null,
"e": 70574,
"s": 70491,
"text": "The following table shows the relevant status codes after an REPL or a DLET call −"
},
{
"code": null,
"e": 70581,
"s": 70574,
"text": "Spaces"
},
{
"code": null,
"e": 70597,
"s": 70581,
"text": "Successful call"
},
{
"code": null,
"e": 70600,
"s": 70597,
"text": "AJ"
},
{
"code": null,
"e": 70641,
"s": 70600,
"text": "Qualified SSA used on REPL or DLET call."
},
{
"code": null,
"e": 70644,
"s": 70641,
"text": "DJ"
},
{
"code": null,
"e": 70722,
"s": 70644,
"text": "Program issues a replace call without an immediately preceding get hold call."
},
{
"code": null,
"e": 70725,
"s": 70722,
"text": "DA"
},
{
"code": null,
"e": 70812,
"s": 70725,
"text": "Program makes a change to the segment’s key field before issuing the REPL or DLET call."
},
{
"code": null,
"e": 70976,
"s": 70812,
"text": "Secondary Indexing is used when we want to access a database without using the complete concatenated key or when we do not want to use the sequence primary fields."
},
{
"code": null,
"e": 71146,
"s": 70976,
"text": "DL/I stores pointers to the segments of the indexed database in a separate database. The index pointer segment is the only segment type in a secondary index. It consists of two parts −"
},
{
"code": null,
"e": 71161,
"s": 71146,
"text": "Prefix Element"
},
{
"code": null,
"e": 71176,
"s": 71163,
"text": "Data Element"
},
{
"code": null,
"e": 71351,
"s": 71176,
"text": "The prefix part of the index pointer segment contains a pointer to the Index Target Segment. Index target segment is the segment that is accessible using the secondary index."
},
{
"code": null,
"e": 71511,
"s": 71351,
"text": "The data element contains the key value from the segment in the indexed database over which the index is built. This is also known as the index source segment."
},
{
"code": null,
"e": 71570,
"s": 71511,
"text": "Here are the key points to note about Secondary Indexing −"
},
{
"code": null,
"e": 71647,
"s": 71570,
"text": "The index source segment and the target source segment need not be the same."
},
{
"code": null,
"e": 71802,
"s": 71724,
"text": "When we set up a secondary index, it is automatically maintained by the DL/I."
},
{
"code": null,
"e": 72018,
"s": 71880,
"text": "The DBA defines many secondary indexes as per the multiple access paths. These secondary indexes are stored in a separate index database."
},
{
"code": null,
"e": 72260,
"s": 72156,
"text": "We should not create more secondary indexes, as they impose additional processing overhead on the DL/I."
},
{
"code": null,
"e": 72381,
"s": 72364,
"text": "Points to note −"
},
{
"code": null,
"e": 72491,
"s": 72381,
"text": "The field in the index source segment over which the secondary index is built is called the secondary key."
},
{
"code": null,
"e": 72687,
"s": 72601,
"text": "Any field can be used as a secondary key. It need not be the segment's sequence field."
},
{
"code": null,
"e": 72861,
"s": 72773,
"text": "Secondary keys can be any combination of fields within the index source segment."
},
{
"code": null,
"e": 72996,
"s": 72949,
"text": "Secondary key values do not have to be unique."
},
{
"code": null,
"e": 73060,
"s": 73043,
"text": "Points to note −"
},
{
"code": null,
"e": 73162,
"s": 73060,
"text": "When we build a secondary index, the apparent hierarchical structure of the database is also changed."
},
{
"code": null,
"e": 73441,
"s": 73264,
"text": "The index target segment becomes the apparent root segment. As shown in the following image, the Engineering segment becomes the root segment, even though it is not a root segment in the physical database."
},
{
"code": null,
"e": 73734,
"s": 73618,
"text": "The rearrangement of the database structure caused by the secondary index is known as the secondary data structure."
},
{
"code": null,
"e": 74047,
"s": 73850,
"text": "Secondary data structures do not make any changes to the main physical database structure present on the disk. They are just a way to alter the database structure as it appears to the application program."
},
{
"code": null,
"e": 74261,
"s": 74244,
"text": "Points to note −"
},
{
"code": null,
"e": 74364,
"s": 74261,
"text": "When an AND (* or &) operator is used with secondary indexes, it is known as a dependent AND operator."
},
{
"code": null,
"e": 74573,
"s": 74467,
"text": "An independent AND (#) allows us to specify qualifications that would be impossible with a dependent AND."
},
{
"code": null,
"e": 74805,
"s": 74679,
"text": "This operator can be used only for secondary indexes where the index source segment is dependent on the index target segment."
},
{
"code": null,
"e": 75101,
"s": 74931,
"text": "We can code an SSA with an independent AND to specify that an occurrence of the target segment be processed based on the fields in two or more dependent source segments."
},
{
"code": null,
"e": 75589,
"s": 75271,
"text": "01 ITEM-SELECTION-SSA.\n 05 FILLER PIC X(8).\n 05 FILLER PIC X(1) VALUE '('.\n 05 FILLER PIC X(10).\n 05 SSA-KEY-1 PIC X(8).\n 05 FILLER PIC X VALUE '#'.\n 05 FILLER PIC X(10).\n 05 SSA-KEY-2 PIC X(8).\n 05 FILLER PIC X VALUE ')'. "
},
{
"code": null,
"e": 75606,
"s": 75589,
"text": "Points to note −"
},
{
"code": null,
"e": 75776,
"s": 75606,
"text": "Sparse sequencing is also known as Sparse Indexing. We can remove some of the index source segments from the index using sparse sequencing with secondary index database."
},
{
"code": null,
"e": 76084,
"s": 75946,
"text": "Sparse sequencing is used to improve performance. When some occurrences of the index source segment are not used, we can remove them."
},
{
"code": null,
"e": 76335,
"s": 76222,
"text": "DL/I uses a suppression value or a suppression routine or both to determine whether a segment should be indexed."
},
{
"code": null,
"e": 76581,
"s": 76448,
"text": "If the value of a sequence field in the index source segment matches a suppression value, then no index relationship is established."
},
{
"code": null,
"e": 76843,
"s": 76714,
"text": "The suppression routine is a user-written program that evaluates the segment and determines whether or not it should be indexed."
},
{
"code": null,
"e": 77118,
"s": 76972,
"text": "When sparse indexing is used, its functions are handled by the DL/I. We do not need to make special provisions for it in the application program."
},
{
"code": null,
"e": 77530,
"s": 77264,
"text": "As discussed in earlier modules, DBDGEN is used to create a DBD. When we create secondary indexes, two databases are involved. A DBA needs to create two DBDs using two DBDGENs for creating a relationship between an indexed database and a secondary indexed database."
},
{
"code": null,
"e": 77827,
"s": 77530,
"text": "After creating the secondary index for a database, the DBA needs to create the PSBs. PSBGEN for the program specifies the proper processing sequence for the database on the PROCSEQ parameter of the PSB macro. For the PROCSEQ parameter, the DBA codes the DBD name for the secondary index database."
},
{
"code": null,
"e": 78377,
"s": 77827,
"text": "IMS database has a rule that each segment type can have only one parent. This limits the complexity of the physical database. Many DL/I applications require a complex structure that allows a segment to have two parent segment types. To overcome this limitation, DL/I allows the DBA to implement logical relationships in which a segment can have both physical and logical parents. We can create additional relationships within one physical database. The new data structure after implementing the logical relationship is known as the Logical Database."
},
{
"code": null,
"e": 78431,
"s": 78377,
"text": "A logical relationship has the following properties −"
},
{
"code": null,
"e": 78533,
"s": 78431,
"text": "A logical relationship is a path between two segments which are related logically and not physically."
},
{
"code": null,
"e": 78800,
"s": 78635,
"text": "Usually a logical relationship is established between separate databases. But it is possible to have a relationship between the segments of one particular database."
},
{
"code": null,
"e": 79221,
"s": 78965,
"text": "The following image shows two different databases. One is a Student database, and the other is a Library database. We create a logical relationship between the Books Issued segment from the Student database and the Books segment from the Library database."
},
{
"code": null,
"e": 79301,
"s": 79221,
"text": "This is how the logical database looks when you create a logical relationship −"
},
{
"code": null,
"e": 79770,
"s": 79301,
"text": "Logical child segment is the basis of a logical relationship. It is a physical data segment but for DL/I, it appears as if it has two parents. The Books segment in the above example has two parent segments. The Books Issued segment is the logical parent and the Library segment is the physical parent. One logical child segment occurrence has only one logical parent segment occurrence, and one logical parent segment occurrence can have many logical child segment occurrences."
},
{
"code": null,
"e": 80064,
"s": 79770,
"text": "Logical twins are the occurrences of a logical child segment type that are all subordinate to a single occurrence of the logical parent segment type. DL/I makes the logical child segment appear similar to an actual physical child segment. This is also known as a virtual logical child segment."
},
{
"code": null,
"e": 80282,
"s": 80064,
"text": "A DBA creates logical relationships between segments. To implement a logical relationship, the DBA has to specify it in the DBDGENs for the involved physical databases. There are three types of logical relationships −"
},
{
"code": null,
"e": 80297,
"s": 80282,
"text": "Unidirectional"
},
{
"code": null,
"e": 80319,
"s": 80297,
"text": "Bidirectional Virtual"
},
{
"code": null,
"e": 80342,
"s": 80319,
"text": "Bidirectional Physical"
},
{
"code": null,
"e": 80454,
"s": 80342,
"text": "The logical connection goes from the logical child to the logical parent and it cannot go the other way around."
},
{
"code": null,
"e": 80615,
"s": 80454,
"text": "It allows access in both directions. The logical child in its physical structure and the corresponding virtual logical child can be seen as paired segments."
},
{
"code": null,
"e": 80801,
"s": 80615,
"text": "The logical child is a physically stored subordinate to both its physical and logical parents. To application programs, it appears the same way as a bidirectional virtual logical child."
},
{
"code": null,
"e": 80878,
"s": 80801,
"text": "The programming considerations for using a logical database are as follows −"
},
{
"code": null,
"e": 80956,
"s": 80878,
"text": "DL/I calls to access the database remains same with the logical database too."
},
{
"code": null,
"e": 81185,
"s": 81034,
"text": "Program specification block indicates the structure which we use in our calls. In some cases, we cannot identify that we are using a logical database."
},
{
"code": null,
"e": 81403,
"s": 81336,
"text": "Logical relationships add a new dimension to database programming."
},
{
"code": null,
"e": 81662,
"s": 81470,
"text": "You must be careful while working with logical databases, as two databases are integrated together. If you modify one database, the same modifications must be reflected in the other database."
},
{
"code": null,
"e": 82002,
"s": 81854,
"text": "Program specifications should indicate what processing is allowed on a database. If a processing rule is violated, you get a non-blank status code."
},
{
"code": null,
"e": 82604,
"s": 82150,
"text": "A logical child segment always begins with the complete concatenated key of the destination parent. This is known as the Destination Parent Concatenated Key (DPCK). You need to always code the DPCK at the start of your segment I/O area for a logical child. In a logical database, the concatenated segment makes the connection between segments that are defined in different physical databases. A concatenated segment consists of the following two parts −"
},
{
"code": null,
"e": 82626,
"s": 82604,
"text": "Logical child segment"
},
{
"code": null,
"e": 82653,
"s": 82626,
"text": "Destination parent segment"
},
{
"code": null,
"e": 82715,
"s": 82653,
"text": "A logical child segment consists of the following two parts −"
},
{
"code": null,
"e": 82758,
"s": 82715,
"text": "Destination Parent Concatenated Key (DPCK)"
},
{
"code": null,
"e": 82782,
"s": 82758,
"text": "Logical child user data"
},
{
"code": null,
"e": 83196,
"s": 82782,
"text": "When we work with concatenated segments during update, it may be possible to add or change the data in both the logical child and the destination parent with a single call. This also depends on the rules the DBA specified for the database. For an insert, provide the DPCK in the right position. For a replace or delete, do not change the DPCK or the sequence field data in either part of the concatenated segment."
},
{
"code": null,
"e": 83388,
"s": 83196,
"text": "The database administrator needs to plan for the database recovery in case of system failures. Failures can be of many types such as application crashes, hardware errors, power failures, etc."
},
{
"code": null,
"e": 83449,
"s": 83388,
"text": "Some simple approaches to database recovery are as follows −"
},
{
"code": null,
"e": 83568,
"s": 83449,
"text": "Make periodic backup copies of important datasets and retain all transactions posted against the datasets."
},
{
"code": null,
"e": 83885,
"s": 83687,
"text": "If a dataset is damaged due to a system failure, that problem is corrected by restoring the backup copy. Then the accumulated transactions are re-posted to the backup copy to bring them up-to-date."
},
{
"code": null,
"e": 84158,
"s": 84083,
"text": "The disadvantages of simple approach to database recovery are as follows −"
},
{
"code": null,
"e": 84222,
"s": 84158,
"text": "Re-posting the accumulated transactions consumes a lot of time."
},
{
"code": null,
"e": 84368,
"s": 84286,
"text": "All other applications need to wait for execution until the recovery is finished."
},
{
"code": null,
"e": 84560,
"s": 84450,
"text": "Database recovery is lengthier than file recovery, if logical and secondary index relationships are involved."
},
{
"code": null,
"e": 85057,
"s": 84670,
"text": "A DL/I program crashes in a way that is different from the way a standard program crashes, because a standard program is executed directly by the operating system, while a DL/I program is not. By employing an abnormal termination routine, the system intervenes so that recovery can be done after the ABnormal END (ABEND). The abnormal termination routine performs the following actions −"
},
{
"code": null,
"e": 85077,
"s": 85057,
"text": "Closes all datasets"
},
{
"code": null,
"e": 85115,
"s": 85077,
"text": "Cancels all pending jobs in the queue"
},
{
"code": null,
"e": 85174,
"s": 85115,
"text": "Creates a storage dump to find out the root cause of ABEND"
},
{
"code": null,
"e": 85271,
"s": 85174,
"text": "The limitation of this routine is that it does not ensure that the data in use is accurate."
},
{
"code": null,
"e": 85527,
"s": 85271,
"text": "When an application program ABENDs, it is necessary to revert the changes done by the application program, correct the error, and re-run the application program. To do this, it is required to have the DL/I log. Here are the key points about DL/I logging −"
},
{
"code": null,
"e": 85631,
"s": 85527,
"text": "DL/I records all the changes made by an application program in a file which is known as the log file."
},
{
"code": null,
"e": 85842,
"s": 85735,
"text": "When the application program changes a segment, its before image and after images are created by the DL/I."
},
{
"code": null,
"e": 86048,
"s": 85949,
"text": "These segment images can be used to restore the segments, in case the application program crashes."
},
{
"code": null,
"e": 86342,
"s": 86147,
"text": "DL/I uses a technique called write-ahead logging to record database changes. With write-ahead logging, a database change is written to the log dataset before it is written to the actual dataset."
},
{
"code": null,
"e": 86653,
"s": 86537,
"text": "As the log is always ahead of the database, the recovery utilities can determine the status of any database change."
},
{
"code": null,
"e": 86873,
"s": 86769,
"text": "When the program executes a call to change a database segment, the DL/I takes care of its logging part."
},
{
"code": null,
"e": 87023,
"s": 86977,
"text": "The two approaches of database recovery are −"
},
{
"code": null,
"e": 87155,
"s": 87023,
"text": "Forward Recovery − DL/I uses the log file to store the change data. The accumulated transactions are re-posted using this log file."
},
{
"code": null,
"e": 87644,
"s": 87287,
"text": "Backward Recovery − Backward recovery is also known as backout recovery. The log records for the program are read backwards and their effects are reversed in the database. When the backout is complete, the databases are in the same state as they were in before the failure, assuming that no other application program altered the database in the meantime."
},
{
"code": null,
"e": 88179,
"s": 88001,
"text": "A checkpoint is a stage where the database changes done by the application program are considered complete and accurate. Listed below are the points to note about a checkpoint −"
},
{
"code": null,
"e": 88274,
"s": 88179,
"text": "Database changes made before the most recent checkpoint are not reversed by backward recovery."
},
{
"code": null,
"e": 88500,
"s": 88369,
"text": "Database changes logged after the most recent checkpoint are not applied to an image copy of the database during forward recovery."
},
{
"code": null,
"e": 88765,
"s": 88631,
"text": "Using checkpoint method, the database is restored to its condition at the most recent checkpoint when the recovery process completes."
},
{
"code": null,
"e": 88986,
"s": 88899,
"text": "The default for batch programs is that the checkpoint is the beginning of the program."
},
{
"code": null,
"e": 89137,
"s": 89073,
"text": "A checkpoint can be established using a checkpoint call (CHKP)."
},
{
"code": null,
"e": 89277,
"s": 89201,
"text": "A checkpoint call causes a checkpoint record to be written on the DL/I log."
},
{
"code": null,
"e": 89396,
"s": 89353,
"text": "Shown below is the syntax of a CHKP call −"
},
{
"code": null,
"e": 89492,
"s": 89396,
"text": "CALL 'CBLTDLI' USING DLI-CHKP\n PCB-NAME\n CHECKPOINT-ID\n"
},
{
"code": null,
"e": 89527,
"s": 89492,
"text": "There are two checkpoint methods −"
},
{
"code": null,
"e": 89665,
"s": 89527,
"text": "Basic Checkpointing − It allows the programmer to issue checkpoint calls that the DL/I recovery utilities use during recovery processing."
},
{
"code": null,
"e": 90109,
"s": 89803,
"text": "Symbolic Checkpointing − It is an advanced form of checkpointing that is used in combination with the extended restart facility. Symbolic checkpointing and extended restart together let the application programmer code the programs so that they can resume processing at the point just after the checkpoint."
}
] |
Histograms and Density Plots in Python | by Will Koehrsen | Towards Data Science
Visualizing One-Dimensional Data in Python
Plotting a single variable seems like it should be easy. With only one dimension, how hard can it be to effectively display the data? For a long time, I got by using the simple histogram, which shows the location of values, the spread of the data, and the shape of the data (normal, skewed, bimodal, etc.). However, I recently ran into some problems where a histogram failed and I knew it was time to broaden my plotting knowledge. I found an excellent free online book on data visualization, and implemented some of the techniques. Rather than keep everything I learned to myself, I decided it would be helpful (to myself and to others) to write a Python guide to histograms and an alternative that has proven immensely useful, density plots.
This article will take a comprehensive look at using histograms and density plots in Python using the matplotlib and seaborn libraries. Throughout, we will explore a real-world dataset because with the wealth of sources available online, there is no excuse for not using actual data! We will visualize the NYCflights13 data, which contains over 300,000 observations of flights departing NYC in 2013. We will focus on displaying a single variable, the arrival delay of flights in minutes. The full code for this article is available as a Jupyter Notebook on GitHub.
It’s always a good idea to examine our data before we get started plotting. We can read the data into a pandas dataframe and display the first 10 rows:
import pandas as pd

# Read in data and examine first 10 rows
flights = pd.read_csv('data/formatted_flights.csv')
flights.head(10)
The flight arrival delays are in minutes and negative values mean the flight was early (it turns out flights often tend to arrive early, just never when we’re on them!) There are over 300,000 flights with a minimum delay of -60 minutes and a maximum delay of 120 minutes. The other column in the dataframe is the name of the airline which we can use for comparisons.
A great way to get started exploring a single variable is with the histogram. A histogram divides the variable into bins, counts the data points in each bin, and shows the bins on the x-axis and the counts on the y-axis. In our case, the bins will be an interval of time representing the delay of the flights and the count will be the number of flights falling into that interval. The binwidth is the most important parameter for a histogram and we should always try out a few different values of binwidth to select the best one for our data.
To make a basic histogram in Python, we can use either matplotlib or seaborn. The code below shows function calls in both libraries that create equivalent figures. For the plot calls, we specify the binwidth by the number of bins. For this plot, I will use bins that are 5 minutes in length, which means that the number of bins will be the range of the data (from -60 to 120 minutes) divided by the binwidth, 5 minutes (bins = int(180/5)).
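As a rough sketch (the imports below and the arr_delay column name are assumptions based on the dataset described above), the two equivalent calls look like this:

import matplotlib.pyplot as plt
import seaborn as sns

# Matplotlib histogram: bin the arrival delays into 5-minute intervals
plt.hist(flights['arr_delay'], color='blue', edgecolor='black', bins=int(180/5))

# Seaborn histogram: distplot wraps plt.hist, so this draws the same bars
sns.distplot(flights['arr_delay'], hist=True, kde=False, bins=int(180/5), color='blue')

plt.title('Histogram of Arrival Delays')
plt.xlabel('Delay (min)')
plt.ylabel('Flights')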
For most basic histograms, I would go with the matplotlib code because it is simpler, but we will use the seaborn distplot function later on to create different distributions and it’s good to be familiar with the different options.
How did I come up with 5 minutes for the binwidth? The only way to figure out an optimal binwidth is to try out multiple values! Below is code to make the same figure in matplotlib with a range of binwidths. Ultimately, there is no right or wrong answer to the binwidth, but I choose 5 minutes because I think it best represents the distribution.
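A sketch of that experiment, looping over a few candidate binwidths (the values are chosen for illustration):

# Draw one subplot per candidate binwidth
for i, binwidth in enumerate([1, 5, 10, 15]):
    ax = plt.subplot(2, 2, i + 1)
    ax.hist(flights['arr_delay'], bins=int(180/binwidth), color='blue', edgecolor='black')
    ax.set_title('Binwidth = %d minutes' % binwidth)
plt.tight_layout()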
The choice of binwidth significantly affects the resulting plot. Smaller binwidths can make the plot cluttered, but larger binwidths may obscure nuances in the data. Matplotlib will automatically choose a reasonable binwidth for you, but I like to specify the binwidth myself after trying out several values. There is no true right or wrong answer, so try a few options and see which works best for your particular data.
Histograms are a great way to start exploring a single variable drawn from one category. However, when we want to compare the distributions of one variable across multiple categories, histograms have issues with readability. For example, if we want to compare arrival delay distributions between airlines, an approach that doesn’t work well is to create histograms for each airline on the same plot:
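(A sketch of that attempt, since the code is not in the text; it assumes the airline names live in a name column and that these five carriers are the ones compared, both of which are assumptions about the dataset.)

import seaborn as sns
import matplotlib.pyplot as plt

airline_names = ['United Air Lines Inc.', 'JetBlue Airways',
                 'ExpressJet Airlines Inc.', 'Delta Air Lines Inc.',
                 'American Airlines Inc.']

# Overlay one normalized histogram per airline
for airline in airline_names:
    subset = flights[flights['name'] == airline]
    sns.distplot(subset['arr_delay'], hist=True, kde=False,
                 norm_hist=True, bins=int(180 / 5), label=airline)

plt.legend()
plt.xlabel('Delay (min)')
plt.ylabel('Normalized flights')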
(Notice that the y-axis has been normalized to account for the differing number of flights between airlines. To do this, pass in the argument norm_hist = True to the sns.distplot function call.)
This plot is not very helpful! All the overlapping bars make it nearly impossible to make comparisons between the airlines. Let’s look at a few possible solutions to this common problem.
Instead of overlapping the airline histograms, we can place them side-by-side. To do this, we create a list of the arrival delays for each airline, and then pass this into the plt.hist function call as a list of lists. We have to specify different colors to use for each airline and a label so we can tell them apart. The code, including creating the lists for each airline is below:
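(Again the embedded code is missing; a minimal reconstruction, with the same caveat that the column name and carrier labels are assumptions about the dataset.)

import matplotlib.pyplot as plt

# One list of delays per airline
x1 = list(flights[flights['name'] == 'United Air Lines Inc.']['arr_delay'])
x2 = list(flights[flights['name'] == 'JetBlue Airways']['arr_delay'])
x3 = list(flights[flights['name'] == 'ExpressJet Airlines Inc.']['arr_delay'])
x4 = list(flights[flights['name'] == 'Delta Air Lines Inc.']['arr_delay'])
x5 = list(flights[flights['name'] == 'American Airlines Inc.']['arr_delay'])

colors = ['#E69F00', '#56B4E9', '#F0E442', '#009E73', '#D55E00']
names = ['United', 'JetBlue', 'ExpressJet', 'Delta', 'American']

# Passing a list of lists makes matplotlib draw the bars side-by-side
plt.hist([x1, x2, x3, x4, x5], bins=int(180 / 15),
         color=colors, label=names)
plt.legend()
plt.xlabel('Delay (min)')
plt.ylabel('Flights')
plt.title('Side-by-Side Histogram with Multiple Airlines')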
By default, if we pass in a list of lists, matplotlib will put the bars side-by-side. Here, I have changed the binwidth to 15 minutes because otherwise the plot is too cluttered, but even with this modification, this is not an effective figure. There is too much information to process at once, the bars don’t align with the labels, and it’s still hard to compare distributions between airlines. When we make a plot, we want it to be as easy for the viewer to understand as possible, and this figure fails by that criterion! Let’s look at a second potential solution.
Instead of plotting the bars for each airline side-by-side, we can stack them by passing in the parameter stacked = True to the histogram call:
# Stacked histogram with multiple airlines
plt.hist([x1, x2, x3, x4, x5], bins=int(180 / 15), stacked=True,
         normed=True, color=colors, label=names)
Well, that definitely is not any better! Here, each airline is represented as a section of the whole for each bin, but it’s nearly impossible to make comparisons. For example, at a delay of -15 to 0 minutes, does United Air Lines or JetBlue Airways have the larger share of the bar? I can’t tell, and viewers won’t be able to either. I generally am not a proponent of stacked bars because they can be difficult to interpret (although there are use cases such as when visualizing proportions). Neither of the solutions we tried using histograms was successful, and so it’s time to move to the density plot.
First, what is a density plot? A density plot is a smoothed, continuous version of a histogram estimated from the data. The most common form of estimation is known as kernel density estimation. In this method, a continuous curve (the kernel) is drawn at every individual data point and all of these curves are then added together to make a single smooth density estimation. The kernel most often used is a Gaussian (which produces a Gaussian bell curve at each data point). If, like me, you find that description a little confusing, take a look at the following plot:
Here, each small black vertical line on the x-axis represents a data point. The individual kernels (Gaussians in this example) are shown drawn in dashed red lines above each point. The solid blue curve is created by summing the individual Gaussians and forms the overall density plot.
The x-axis is the value of the variable just like in a histogram, but what exactly does the y-axis represent? The y-axis in a density plot is the probability density function for the kernel density estimation. However, we need to be careful to specify this is a probability density and not a probability. The difference is that the probability density is the probability per unit on the x-axis. To convert to an actual probability, we need to find the area under the curve for a specific interval on the x-axis. Somewhat confusingly, because this is a probability density and not a probability, the y-axis can take values greater than one. The only requirement of the density plot is that the total area under the curve integrates to one. I generally tend to think of the y-axis on a density plot as a value only for relative comparisons between different categories.
To make density plots in seaborn, we can use either the distplot or kdeplot function. I will continue to use the distplot function because it lets us make multiple distributions with one function call. For example, we can make a density plot showing all arrival delays on top of the corresponding histogram:
# Density Plot and Histogram of all arrival delays
sns.distplot(flights['arr_delay'], hist=True, kde=True,
             bins=int(180 / 5), color='darkblue',
             hist_kws={'edgecolor': 'black'},
             kde_kws={'linewidth': 4})
The curve shows the density plot which is essentially a smooth version of the histogram. The y-axis is in terms of density, and the histogram is normalized by default so that it has the same y-scale as the density plot.
Analogous to the binwidth of a histogram, a density plot has a parameter called the bandwidth that changes the individual kernels and significantly affects the final result of the plot. The plotting library will choose a reasonable value of the bandwidth for us (by default using the ‘scott’ estimate), and unlike the binwidth of a histogram, I usually use the default bandwidth. However, we can look at using different bandwidths to see if there is a better choice. In the plot, ‘scott’ is the default, which looks like the best option.
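(A sketch of such a comparison; the bw= keyword matches the older seaborn API used throughout this article, while newer versions use bw_method/bw_adjust instead.)

import seaborn as sns
import matplotlib.pyplot as plt

# Compare the default 'scott' estimate with a few fixed bandwidths
for bandwidth in ['scott', 0.1, 0.5, 2]:
    sns.kdeplot(flights['arr_delay'], bw=bandwidth,
                label='bandwidth = %s' % bandwidth, linewidth=3)

plt.legend()
plt.xlabel('Delay (min)')
plt.ylabel('Density')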
Notice that a wider bandwidth results in more smoothing of the distribution. We also see that even though we limited our data to -60 to 120 minutes, the density plot extends beyond these limits. This is one potential issue with a density plot: because it calculates a distribution at each data point, it can generate data that falls outside the bounds of the original data. This might mean that we end up with impossible values on the x-axis that were never present in the original data! As a note, we can also change the kernel, which changes the distribution drawn at each data point and thus the overall distribution. However, for most applications, the default kernel, Gaussian, and the default bandwidth estimation work very well.
Now that we understand how a density plot is made and what it represents, let’s see how it can solve our problem of visualizing the arrival delays of multiple airlines. To show the distributions on the same plot, we can iterate through the airlines, each time calling distplot with the kernel density estimate set to True and the histogram set to False. The code to draw the density plot with multiple airlines is below:
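(The embedded loop is missing from the text; below is a reconstruction consistent with the distplot calls shown elsewhere in the article, reusing the assumed airline_names list from above.)

import seaborn as sns
import matplotlib.pyplot as plt

for airline in airline_names:
    subset = flights[flights['name'] == airline]

    # Draw the density plot: kde on, histogram off
    sns.distplot(subset['arr_delay'], hist=False, kde=True,
                 kde_kws={'linewidth': 3}, label=airline)

plt.legend(prop={'size': 16}, title='Airline')
plt.title('Density Plot with Multiple Airlines')
plt.xlabel('Delay (min)')
plt.ylabel('Density')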
Finally, we have arrived at an effective solution! With the density plot, we can easily make comparisons between airlines because the plot is less cluttered. Now that we finally have the plot we want, we come to the conclusion that all these airlines have nearly identical arrival delay distributions! However, there are other airlines in the dataset, and we can plot one that is a little different to illustrate another optional parameter for density plots, shading the graph.
Filling in the density plot can help us to distinguish between overlapping distributions. Although this is not always a good approach, it can help to emphasize the difference between distributions. To shade the density plots, we pass in shade = True to the kde_kws argument in the distplot call.
sns.distplot(subset['arr_delay'], hist=False, kde=True,
             kde_kws={'shade': True, 'linewidth': 3},
             label=airline)
Whether or not to shade the plot is, like other plotting options, a question that depends on the problem! For this graph, I think it makes sense because the shading helps us distinguish the plots in the regions where they overlap. Now, we finally have some useful information: Alaska Airlines flights tend to be earlier more often than United Airlines. The next time you have the option, you know which airline to choose!
If you want to show every value in a distribution and not just the smoothed density, you can add a rug plot. This shows every single data point on the x-axis, allowing us to visualize all of the actual values. The benefit of using seaborn’s distplot is that we can add the rug plot with a single parameter call of rug = True (with some formatting as well).
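(A sketch, with the Alaska Airlines label being an assumption about how the carrier is named in the data.)

import seaborn as sns
import matplotlib.pyplot as plt

subset = flights[flights['name'] == 'Alaska Airlines Inc.']

# rug=True adds a tick on the x-axis for every observation
sns.distplot(subset['arr_delay'], hist=False, kde=True, rug=True,
             color='darkblue',
             kde_kws={'linewidth': 3},
             rug_kws={'color': 'black'})
plt.title('Density Plot with Rug Plot for Alaska Airlines')
plt.xlabel('Delay (min)')
plt.ylabel('Density')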
With many data points the rug plot can become overcrowded, but for some datasets, it can be helpful to view every data point. The rug plot also lets us see how the density plot “creates” data where none exists because it makes a kernel distribution at each data point. These distributions can leak over the range of the original data and give the impression that Alaska Airlines has delays that are both shorter and longer than actually recorded. We need to be careful about this artifact of density plots and point it out to viewers!
This post has hopefully given you a range of options for visualizing a single variable from one or multiple categories. There are even more univariate (single variable) plots we can make such as empirical cumulative distribution plots and quantile-quantile plots, but for now we will leave it at histograms and density plots (and rug plots too!). Don’t worry if the options seem overwhelming: with practice, making a good choice will become easier, and you can always ask for help if needed. Moreover, often there isn’t an optimal choice and the “right” decision will come down to preference and the objectives of the visualization. The good thing is, no matter what plot you want to make, there is going to be a way to do it in Python! Visualizations are an effective means for communicating results, and knowing all the options available allows us to choose the right figure for our data.
I welcome feedback and constructive criticism and can be reached on Twitter @koehrsen_will.
SVM: Feature Selection and Kernels
A Support Vector Machine (SVM) is a supervised machine learning algorithm that can be employed for both classification and regression purposes.
- Noel Bambrick.
Support Vector Machines (SVM) is a Machine Learning Algorithm which can be used for many different tasks (Figure 1). In this article, I will explain the mathematical basis to demonstrate how this algorithm works for binary classification purposes.
The main objective in SVM is to find the optimal hyperplane to correctly classify between data points of different classes (Figure 2). The hyperplane dimensionality is equal to the number of input features minus one (e.g. when working with three features the hyperplane will be a two-dimensional plane).
Data points on one side of the hyperplane will be classified to a certain class while data points on the other side of the hyperplane will be classified to a different class (e.g. green and red as in Figure 2). The distance between the hyperplane and the first point (for all the different classes) on either side of the hyperplane is a measure of how sure the algorithm is about its classification decision. The bigger the distance, the more confident we can be that SVM is making the right decision.
The data points closest to the hyperplane are called Support Vectors. Support Vectors determine the orientation and position of the hyperplane, in order to maximise the classifier margin (and therefore the classification score). The number of Support Vectors the SVM algorithm should use can be arbitrarily chosen depending on the application.
Basic SVM classification can be easily implemented using the Scikit-Learn Python library in a few lines of code.
from sklearn import svm
from sklearn.metrics import confusion_matrix, classification_report

trainedsvm = svm.SVC().fit(X_Train, Y_Train)
predictionsvm = trainedsvm.predict(X_Test)
print(confusion_matrix(Y_Test, predictionsvm))
print(classification_report(Y_Test, predictionsvm))
There are two main types of classification SVM algorithms, Hard Margin and Soft Margin:
Hard Margin: aims to find the best hyperplane without tolerating any form of misclassification.
Soft Margin: we add a degree of tolerance in SVM. In this way we allow the model to voluntarily misclassify a few data points if that can lead to identifying a hyperplane able to generalise better to unseen data.
Soft Margin SVM can be implemented in Scikit-Learn by adding a C penalty term in svm.SVC. The bigger C, the more the algorithm gets penalised when making a misclassification.
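A minimal sketch of the idea, reusing the X_Train and Y_Train names from the earlier snippet (the C values are purely illustrative):

from sklearn import svm

# Smaller C tolerates more misclassification (softer margin);
# larger C penalises mistakes more heavily (harder margin)
soft_svm = svm.SVC(C=0.1).fit(X_Train, Y_Train)
hard_svm = svm.SVC(C=1000).fit(X_Train, Y_Train)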
If the data we are working with is not linearly separable (therefore leading to poor linear SVM classification results), it is possible to apply a technique known as the Kernel Trick. This method is able to map our non-linear separable data into a higher dimensional space, making our data linearly separable. Using this new dimensional space SVM can then be easily implemented (Figure 3).
There are many different types of Kernels which can be used to create this higher dimensional space, some examples are linear, polynomial, Sigmoid and Radial Basis Function (RBF). In Scikit-Learn a Kernel function can be specified by adding a kernel parameter in svm.SVC. An additional parameter called gamma can be included to specify the influence of the kernel on the model.
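For example (the kernel and gamma values are illustrative, not prescriptions):

from sklearn import svm

# RBF kernel with an explicit gamma
rbf_svm = svm.SVC(kernel='rbf', gamma=0.1).fit(X_Train, Y_Train)

# Linear kernel for comparison
linear_svm = svm.SVC(kernel='linear').fit(X_Train, Y_Train)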
It is usually suggested to use linear kernels if the number of features is larger than the number of observations in the dataset (otherwise RBF might be a better choice).
When working with a large amount of data using RBF, speed might become a constraint to take into account.
Once we have fitted our linear SVM, it is possible to access the classifier coefficients using .coef_ on the trained model. These weights represent the coordinates of a vector orthogonal to the hyperplane; their direction indicates the predicted class.
Feature importance can, therefore, be determined by comparing the size of these coefficients to each other. By looking at the SVM coefficients it is, therefore, possible to identify the main features used in classification and get rid of the not important ones (which hold less variance).
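A minimal sketch of that idea, assuming X_Train is a pandas DataFrame so that column names are available (coef_ is only exposed for linear kernels):

import pandas as pd
from sklearn import svm

linear_svm = svm.SVC(kernel='linear').fit(X_Train, Y_Train)

# Pair each feature with its coefficient and rank the features
coefficients = pd.Series(linear_svm.coef_[0], index=X_Train.columns)
print(coefficients.sort_values())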
Reducing the number of features in Machine Learning plays a really important role especially when working with large datasets. This can in fact: speed up training, avoid overfitting and ultimately lead to better classification results thanks to the reduced noise in the data.
In Figure 4 are shown the main features I identified using SVM on the Pima Indians Diabetes Database. In green are shown all the features corresponding to the negative coefficients and in blue the positive ones. If you want to find out more about it, all my code is freely available on my Kaggle and GitHub profiles.
If you would like to dig into the Mathematics behind SVM, I have left here a lecture from Patrick Winston available on the MIT OpenCourseWare YouTube channel [4]. This lecture illustrates how to derive the SVM decision rules and which mathematical constraints are to apply using Lagrangian Multipliers.
If you want to keep updated with my latest articles and projects follow me on Medium and subscribe to my mailing list. These are some of my contacts details:
Linkedin
Personal Blog
Personal Website
Medium Profile
GitHub
Kaggle
[1] Support Vector Machine without tears, Ankit Sharma. Accessed at: https://www.slideshare.net/ankitksharma/svm-37753690
[2] Support Vector Machine — Introduction to Machine Learning Algorithms, Rohith Gandhi. Accessed at: https://towardsdatascience.com/support-vector-machine-introduction-to-machine-learning-algorithms-934a444fca47
[3] Support vector machines, Jeremy Jordan. Accessed at: https://www.jeremyjordan.me/support-vector-machines/
[4] MIT OpenCourseWare, 16. Learning: Support Vector Machines. Accessed at: https://www.youtube.com/watch?v=_PwhiWxHK8o
How to declare an attribute in Python without a value?
In Python, as well as in several other languages, there is a value that means "no value". In Python, that value with no value is None. So the following shows how None is used −
class Student:
   StudentName = None
   RollNumber = None
Those are class attributes though, and not instance attributes, so to give each instance its own values we may as well write −
class Student(object):
   def __init__(self):
      self.StudentName = None
      self.RollNumber = None
We can see how Python assigns the None value implicitly in the code below −
def h():
   pass

k = h() # k now has the value of None
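As a side note beyond the original snippet, the idiomatic way to test for this "no value" state is an identity check with is None −

# Identity comparison is the standard way to test for None
if k is None:
   print("k has no value yet")
else:
   print("k =", k)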
Android - Data Backup
Android allows you to back up your application data to remote "cloud" storage, in order to provide a restore point for the application data and settings. You can only back up your own application data; in order to access other applications' data, you need to root your phone.
In order to make a data backup application, you need to register your application with the Google backup service. This has been explained in the example. After registering, you have to specify its key in the AndroidManifest.XML
<application
android:allowBackup="true"
android:backupAgent="MyBackupPlace">
<meta-data
android:name="com.google.android.backup.api_key"
android:value="AEdPqrEAAAAIErlxFByGgNz2ywBeQb6TsmLpp5Ksh1PW-ZSexg" />
</application>
Android provides the BackupAgentHelper class to handle all the operations of data backup. In order to use this class, you have to extend your class from it. Its syntax is given below −
public class MyBackUpPlace extends BackupAgentHelper {
}
The persistent data that you want to back up can take either of two forms: it could be SharedPreferences or it could be a File. Android supports both types of backup in the respective classes SharedPreferencesBackupHelper and FileBackupHelper.
In order to use SharedPreferencesBackupHelper, you need to instantiate its object with the name of your SharedPreferences file. Its syntax is given below −
static final String File_Name_Of_Prefrences = "myPrefrences";
SharedPreferencesBackupHelper helper = new SharedPreferencesBackupHelper(this, File_Name_Of_Prefrences);
The last thing you need to do is to call the addHelper method, specifying the backup key string and the helper object. Its syntax is given below −
addHelper(PREFS_BACKUP_KEY, helper);
The addHelper method will automatically add a helper to a given data subset to the agent's configuration.
Apart from these methods, there are other methods defined in the BackupAgentHelper class. They are defined below −
onBackup(ParcelFileDescriptor oldState, BackupDataOutput data, ParcelFileDescriptor newState)
Run the backup process on each of the configured handlers
onRestore(BackupDataInput data, int appVersionCode, ParcelFileDescriptor newState)
Run the restore process on each of the configured handlers
The methods of the SharedPreferencesBackupHelper class are listed below.
performBackup(ParcelFileDescriptor oldState, BackupDataOutput data, ParcelFileDescriptor newState)
Backs up the configured SharedPreferences groups
restoreEntity(BackupDataInputStream data)
Restores one entity from the restore data stream to its proper shared preferences file store
The following example demonstrates the use of the BackupAgentHelper class to create a backup of your application data.
To experiment with this example, you need to run this on an actual device or in an emulator.
Register your Android application with the Google backup service. In order to do that, visit this link. You must agree to the terms of service, and then enter the application package name. It is shown below −
Then click on Register with android backup service. It would give you your key, along with your AndroidManifest code to copy. Just copy the key. It is shown below −
Once you copy the key , you need to write it in your AndroidManifest.XML file. Its code is given below −
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.backup" >
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:backupAgent="MyBackUpPlace"
android:theme="@style/AppTheme" >
<activity
android:name="com.example.backup.MainActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<meta-data
android:name="com.google.android.backup.api_key"
android:value="AEdPqrEAAAAIErlxFByGgNz2ywBeQb6TsmLpp5Ksh1PW-ZSexg" />
</application>
</manifest>
Here is the code of the BackupAgentHelper class. The name of the class should be the same as the one you specified in the backupAgent tag under application in the AndroidManifest.XML
package com.example.backup;
import android.app.backup.BackupAgentHelper;
import android.app.backup.SharedPreferencesBackupHelper;
public class MyBackUpPlace extends BackupAgentHelper {
static final String File_Name_Of_Prefrences = "myPrefrences";
static final String PREFS_BACKUP_KEY = "backup";
@Override
public void onCreate() {
SharedPreferencesBackupHelper helper = new SharedPreferencesBackupHelper(this,
File_Name_Of_Prefrences);
addHelper(PREFS_BACKUP_KEY, helper);
}
}
Once you've implemented your backup agent, you can test the backup and restore functionality with the following procedure, using bmgr.
If using the emulator, create and use an AVD with Android 2.2 (API Level 8).
If using a device, the device must be running Android 2.2 or greater and have Google Play built in.
If using the emulator, you can enable backup with the following command from your SDK tools/ path −
adb shell bmgr enable true
If using a device, open the system Settings, select Privacy, then enable Back up my data and Automatic restore.
For testing purposes, you can also make a request with the following bmgr command −
adb shell bmgr backup your.package.name
Initiate a backup operation by typing the following command.
adb shell bmgr run
This forces the Backup Manager to perform all backup requests that are in its queue.
Uninstall the application with the following command −
adb uninstall your.package.name
Then reinstall the application and verify the results.
How to make an anchor tag refer to nothing?
To make an anchor tag refer to nothing, use "javascript:void(0)". The following link does nothing because the expression "0" has no effect in JavaScript. Here the expression "0" is evaluated, but it is not loaded back into the current document.
<html>
   <head>
      <script>
         <!--
         //-->
      </script>
   </head>
   <body>
      <p>Click the following, This won't react at all...</p>
      <a href="javascript:void(0)">Click me!</a>
   </body>
</html>
Program to find century for a year in C++ | In this tutorial, we will be discussing a program to find the century for a year.
For this we will be provided with a year. Our task is to find the century in which the given year falls.
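As a quick aside, for positive years the whole computation can be reduced to a single integer division; the function below implements the same arithmetic with explicit branching and input checking. A minimal sketch of the formula:
int century = (year + 99) / 100; // e.g. 2001 -> 21, 2000 -> 20, 100 -> 1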
Live Demo
#include <bits/stdc++.h>
using namespace std;
void find_century(int year){
   //year values can only be positive
   if (year <= 0)
      cout << "0 and negative years are not allowed";
   else if (year <= 100)
      cout << "1st century\n";
   else if (year % 100 == 0)
      cout << year / 100 << " century";
   else
      cout << year / 100 + 1 << " century";
}
int main(){
   int year = 2001;
   find_century(year);
   return 0;
}
21 century | [
{
"code": null,
"e": 1144,
"s": 1062,
"text": "In this tutorial, we will be discussing a program to find the century for a year."
},
{
"code": null,
"e": 1249,
"s": 1144,
"text": "For this we will be provided with a year. Our task is to find the century in which the given year falls."
},
{
"code": null,
"e": 1260,
"s": 1249,
"text": " Live Demo"
},
{
"code": null,
"e": 1707,
"s": 1260,
"text": "#include <bits/stdc++.h>\nusing namespace std;\nvoid find_century(int year){\n //year values can only be positive\n if (year <= 0)\n cout << \"0 and negative is not allow\"\n << \"for a year\";\n else if (year <= 100)\n cout << \"1st century\\n\";\n else if (year % 100 == 0)\n cout << year/ 100 <<\" century\";\n else\n cout << year/ 100 + 1 << \" century\";\n}\nint main(){\n int year = 2001;\n find_century(year);\n return 0;\n}"
},
{
"code": null,
"e": 1718,
"s": 1707,
"text": "21 century"
}
] |
Java Program to take Screenshots - GeeksforGeeks | 18 Jan, 2018
In this program, we will see how we can take screenshots using a Java program and save the screenshot in a desired folder. We use the java.awt.Robot class to capture the pixels of the screen. It provides methods like createScreenCapture, which captures the current screen. This method returns the captured image as a BufferedImage object, which can be saved as a file; ImageIO is used to write it to disk (JPEG in this example). Toolkit.getDefaultToolkit().getScreenSize() is used to get the size of the screen. The serialVersionUID is a universal version identifier for a Serializable class. Thread.sleep() is used so that after executing the program we can switch to the screen we want to take a screenshot of; note that it takes milliseconds, so a value of 120000 gives a 2-minute delay.
NOTE : Please keep note of UpperCase and LowerCase in name of methods. A slight change of Case may cause errors.
How to use the program to capture Screenshot :
Write program in Notepad.
Save it as Screenshot.java, then compile and run it from the Command Prompt (see the commands after this list).
Refer to the screenshots at end in case of any problem.
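Assuming the JDK's bin directory is on your PATH (a standard setup, not anything specific to this program), the commands to compile and run it are:
javac Screenshot.java
java Screenshot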
// Java Program to Capture full
// Image of Screen
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Toolkit;
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.File;
import javax.imageio.ImageIO;

public class Screenshot {
    public static final long serialVersionUID = 1L;

    public static void main(String[] args)
    {
        try {
            // Thread.sleep() takes milliseconds, so wait
            // 120000 ms (2 minutes) before capturing
            Thread.sleep(120000);
            Robot r = new Robot();

            // It saves the screenshot to the desired path
            String path = "D://Shot.jpg";

            // Used to get the screen size and capture the image
            Rectangle capture = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage image = r.createScreenCapture(capture);
            ImageIO.write(image, "jpg", new File(path));
            System.out.println("Screenshot saved");
        }
        catch (AWTException | IOException | InterruptedException ex) {
            System.out.println(ex);
        }
    }
}
Output :
References: http://viralpatel.net/blogs/how-to-take-screen-shots-in-java-taking-screenshots-java/ and http://www.javatechblog.com/java/how-to-take-screenshot-programmatically-in-java/
| [
{
"code": null,
"e": 24856,
"s": 24828,
"text": "\n18 Jan, 2018"
},
{
"code": null,
"e": 25560,
"s": 24856,
"text": "In this program we will see how we can take screenshots using a java program and save the screenshot in desired folder.We use java.awt.Robot class to capture pixels of screen. It provides method like createScreenCapture which captures the current screen. This method returns captured image as BufferedImage object which can be saved as a file. It also uses ImageIO to save it as PNG image format. Toolkit.getDefaultToolkit().getSize() method is used to get the size of screen.The serialVersionUID is universal version identifier for Serializable class. Thread is used so that after executing the program we can switch to the screen we want to take screenshot of. 120s is the time in seconds i.e. 2 mins."
},
{
"code": null,
"e": 25673,
"s": 25560,
"text": "NOTE : Please keep note of UpperCase and LowerCase in name of methods. A slight change of Case may cause errors."
},
{
"code": null,
"e": 25720,
"s": 25673,
"text": "How to use the program to capture Screenshot :"
},
{
"code": null,
"e": 25746,
"s": 25720,
"text": "Write program in Notepad."
},
{
"code": null,
"e": 25802,
"s": 25746,
"text": "Save it as Screenshot.java and run it on CommandPrompt."
},
{
"code": null,
"e": 25858,
"s": 25802,
"text": "Refer to the screenshots at end in case of any problem."
},
{
"code": "// Java Program to Capture full// Image of Screenimport java.awt.AWTException;import java.awt.Rectangle;import java.awt.Toolkit;import java.awt.Robot;import java.awt.image.BufferedImage;import java.io.IOException;import java.io.File;import javax.imageio.ImageIO; public class Screenshot { public static final long serialVersionUID = 1L; public static void main(String[] args) { try { Thread.sleep(120); Robot r = new Robot(); // It saves screenshot to desired path String path = \"D:// Shot.jpg\"; // Used to get ScreenSize and capture image Rectangle capture = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()); BufferedImage Image = r.createScreenCapture(capture); ImageIO.write(Image, \"jpg\", new File(path)); System.out.println(\"Screenshot saved\"); } catch (AWTException | IOException | InterruptedException ex) { System.out.println(ex); } }}",
"e": 26878,
"s": 25858,
"text": null
},
{
"code": null,
"e": 26887,
"s": 26878,
"text": "Output :"
}
] |
5 SMOTE Techniques for Oversampling your Imbalance Data | by Cornellius Yudha Wijaya | Towards Data Science |
Imbalanced data is a case where the classification dataset has a skewed class proportion. For example, I would use the churn dataset from Kaggle for this article.
We can see there is a skew in the Yes class compared to the No class. If we calculate the proportion, the Yes class is around 20.4% of the whole dataset. So, how do you classify the degree of imbalance? The table below might help you.
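Degree of imbalance | Proportion of minority class
Mild                | 20-40% of the dataset
Moderate            | 1-20% of the dataset
Extreme             | <1% of the dataset
(These thresholds are the commonly cited ones and only an approximation, but they match our example: 20.4% falls into the Mild range.)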
There are three cases of imbalance — Mild, Moderate, and Extreme; the case depends on the minority class proportion relative to the whole dataset. In our example above, we only have a Mild case of imbalanced data.
Now, why do we need to care about imbalanced data when creating our machine learning model? Well, an imbalanced class distribution creates a bias where the machine learning model tends to predict the majority class. You don’t want the prediction model to ignore the minority class, right?
That is why there are techniques to overcome the imbalance problem — Undersampling and Oversampling. What is the difference between these two techniques?
Undersampling would decrease the number of majority class samples until it is similar to the minority class. In contrast, oversampling would resample the minority class until its proportion follows the majority class.
In this article, I would only write a specific technique for Oversampling called SMOTE and various varieties of the SMOTE.
Just a little note, I am a Data Scientist who believes in leaving the proportion as it is because it represents the data. It is better to try feature engineering before you jump into these techniques.
So, what is SMOTE? SMOTE or Synthetic Minority Oversampling Technique is an oversampling technique, but SMOTE works differently than your typical oversampling.
In a classic oversampling technique, the minority data is duplicated from the minority data population. While it increases the number of data, it does not give any new information or variation to the machine learning model.
For the reason above, Nitesh Chawla, et al. (2002) introduced a new technique to create synthetic data for oversampling purposes in their SMOTE paper.
SMOTE works by utilizing a k-nearest neighbour algorithm to create synthetic data. SMOTE first starts by choosing random data from the minority class, then the k-nearest neighbours of that data point are found. Synthetic data would then be made between the random data and the randomly selected k-nearest neighbour. Let me show you the example below.
The procedure is repeated enough times until the minority class has the same proportion as the majority class.
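To make the interpolation step concrete, here is a minimal NumPy sketch of how one synthetic sample is produced; the sample values are made up for illustration and are not taken from the churn dataset:
import numpy as np

np.random.seed(101)
x = np.array([563.0, 40.0])           # a random minority class sample (CreditScore, Age)
x_neighbor = np.array([620.0, 46.0])  # one of its k-nearest minority class neighbours

# the synthetic sample lies somewhere on the segment between the two points
x_synthetic = x + np.random.rand() * (x_neighbor - x)
print(x_synthetic)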
I omit a more in-depth explanation because the passage above already summarizes how SMOTE works. In this article, I want to focus on SMOTE and its variations, as well as when to use them, without touching much on theory. If you want to know more, let me attach the link to the paper for each variation I mention here.
As preparation, I would use the imblearn package, which includes SMOTE and their variation in the package.
#Installing imblearn
pip install -U imbalanced-learn
We would start by using SMOTE in its default form. We would use the same churn dataset above. Let’s prepare the data first as well to try the SMOTE.
As you may realize from my explanation above, SMOTE is used to synthesize data where the features are continuous and the task is a classification problem. For that reason, in this section, we would only try to use two continuous features with the classification target.
import pandas as pd
import seaborn as sns

#I read the csv churn data into a variable called df. Here I would only use two
#continuous features, CreditScore and Age, with the target Exited
df_example = df[['CreditScore', 'Age', 'Exited']]
sns.scatterplot(data = df, x ='CreditScore', y = 'Age', hue = 'Exited')
As we can see in the above scatter plot of the ‘CreditScore’ and ‘Age’ features, the 0 and 1 classes are mixed up.
Let’s try to oversample the data using the SMOTE technique.
#Importing SMOTE
from imblearn.over_sampling import SMOTE

#Oversampling the data
smote = SMOTE(random_state = 101)
X, y = smote.fit_resample(df[['CreditScore', 'Age']], df['Exited'])

#Creating a new oversampled Data Frame
df_oversampler = pd.DataFrame(X, columns = ['CreditScore', 'Age'])
df_oversampler['Exited'] = y
sns.countplot(df_oversampler['Exited'])
As we can see in the graph above, class 0 and 1 now have a similar proportion. Let’s see how is it goes if we create a similar scatter plot like before.
sns.scatterplot(data = df_oversampler, x ='CreditScore', y = 'Age', hue = 'Exited')
Currently, we have the oversampled data to fill the area that previously was empty with the synthetic data.
The purpose of oversampling is, just as I stated before, to have a better prediction model. This technique was not created for any analysis purposes, as all the created data is synthetic, so that is a reminder.
For the reason above, we need to evaluate whether oversampling data leads to a better model or not. Let’s start by splitting the data to create the prediction model.
# Importing the splitter, classification model, and the metric
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

#Splitting the data with stratification
X_train, X_test, y_train, y_test = train_test_split(df_example[['CreditScore', 'Age']], df['Exited'],
                                                    test_size = 0.2, stratify = df['Exited'],
                                                    random_state = 101)
As an addition, you should only oversample your training data and not the whole data, except if you would use the entire data as your training data. In case you want to split the data, you should split the data first before oversampling the training data.
#Create an oversampled training data
smote = SMOTE(random_state = 101)
X_oversample, y_oversample = smote.fit_resample(X_train, y_train)
Now that we have both the imbalanced data and the oversampled data, let’s try to create a classification model with each of them. First, let’s see the performance of the Logistic Regression model trained with the imbalanced data.
#Training with imbalance data
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
print(classification_report(y_test, classifier.predict(X_test)))
As we can see from the metrics, our Logistic Regression model trained with the imbalanced data tends to predict class 0 rather than class 1. The bias is in our model.
Let’s see how is the result of the model trained with the oversampled data.
#Training with oversampled data
classifier_o = LogisticRegression()
classifier_o.fit(X_oversample, y_oversample)
print(classification_report(y_test, classifier_o.predict(X_test)))
The model is doing better at predicting class 1 in this case. We could say that the oversampled data helps our Logistic Regression model to predict class 1 better.
I could say that the oversampled data improves the Logistic Regression model for prediction purposes, although what counts as an ‘improvement’ is once again up to the user.
I have mentioned that SMOTE only works for continuous features. So, what to do if you have mixed (categorical and continuous) features? In this case, we have another variation of SMOTE called SMOTE-NC (Nominal and Continuous).
You might think, then, to just transform the categorical data into numerical data; that way, we would have a numerical feature for SMOTE to use. The problem is that when we do that, we end up with data that does not make any sense.
For example, in the churn data above, we had ‘IsActiveMember’ categorical feature with the data either 0 or 1. If we oversampled this data with SMOTE, we could end up with oversampled data such as 0.67 or 0.5, which does not make sense at all.
This is why we need to use SMOTE-NC when we have cases of mixed data. The premise is simple, we denote which features are categorical, and SMOTE would resample the categorical data instead of creating synthetic data.
Let’s try applying SMOTE-NC. In this case, I would select another feature as an example (one categorical, one continuous).
df_example = df[['CreditScore', 'IsActiveMember', 'Exited']]
In this case, ‘CreditScore’ is the continuous feature, and ‘IsActiveMember’ is the categorical feature. Then, let’s split the data just like before.
X_train, X_test, y_train, y_test = train_test_split(df_example[['CreditScore', 'IsActiveMember']],df['Exited'], test_size = 0.2,stratify = df['Exited'], random_state = 101)
Then, let’s create two different classification models once more; one trained with the imbalanced data and one with the oversampled data. First, let’s try SMOTE-NC to oversampled the data.
#Import the SMOTE-NC
from imblearn.over_sampling import SMOTENC

#Create the oversampler. For SMOTE-NC we need to pinpoint the column position
#of the categorical features. Here 'IsActiveMember' is the second column, so we
#pass [1] as the parameter. If you have more than one categorical column, just
#input all the column positions.
smotenc = SMOTENC([1], random_state = 101)
X_oversample, y_oversample = smotenc.fit_resample(X_train, y_train)
With the data ready, let’s try to create the classifiers.
#Classifier with imbalance data
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
print(classification_report(y_test, classifier.predict(X_test)))
With the imbalanced data, we can see the classifier favors class 0 and ignores class 1 completely. Then, how about if we train it with the SMOTE-NC oversampled data?
#Classifier with SMOTE-NC
classifier_o = LogisticRegression()
classifier_o.fit(X_oversample, y_oversample)
print(classification_report(y_test, classifier_o.predict(X_test)))
Just like with SMOTE, the classifier with SMOTE-NC oversampled data gives a new perspective to the machine learning model to predict the imbalanced data. It wasn’t necessarily the best, but it was better than the imbalanced data.
Borderline-SMOTE is a variation of the SMOTE. Just like the name implies, it has something to do with the border.
So, unlike with the SMOTE, where the synthetic data are created randomly between the two data, Borderline-SMOTE only makes synthetic data along the decision boundary between the two classes.
Also, there are two kinds of Borderline-SMOTE; there are Borderline-SMOTE1 and Borderline-SMOTE2. The differences are simple; Borderline-SMOTE1 also oversampled the majority class where the majority data are causing misclassification in the decision boundary, while Borderline-SMOTE2 only oversampled the minority classes.
Let’s try the Borderline-SMOTE with our previous data. I would once more use only the numerical features.
df_example = df[['CreditScore', 'Age', 'Exited']]
The above picture shows the difference between oversampling data with SMOTE and Borderline-SMOTE1. They might look slightly similar, but we can see there are differences in where the synthetic data are created.
How about the performance of the machine learning model? Let us try it. First, as usual, we split the data.
X_train, X_test, y_train, y_test = train_test_split(df_example[['CreditScore', 'Age']], df['Exited'],
                                                    test_size = 0.2, stratify = df['Exited'],
                                                    random_state = 101)
Then, we create the oversampled data by using Borderline-SMOTE.
#By default, BorderlineSMOTE would use Borderline-SMOTE1
from imblearn.over_sampling import BorderlineSMOTE

bsmote = BorderlineSMOTE(random_state = 101, kind = 'borderline-1')
X_oversample_borderline, y_oversample_borderline = bsmote.fit_resample(X_train, y_train)
Lastly, let’s check the machine learning performance with the Borderline-SMOTE oversampled data.
classifier_border = LogisticRegression()
classifier_border.fit(X_oversample_borderline, y_oversample_borderline)
print(classification_report(y_test, classifier_border.predict(X_test)))
The performance doesn’t differ much from the model trained with the SMOTE oversampled data. This means that we should focus on the features instead of oversampling the data.
Borderline-SMOTE is used best when we know that the misclassifications often happen near the boundary decision. Otherwise, we could stick with the usual SMOTE. If you want to read more about the Borderline-SMOTE, you could check the paper here.
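If you want to try the second variant, it is selected through the kind parameter. A minimal sketch, reusing the same training data as above:
from imblearn.over_sampling import BorderlineSMOTE

bsmote2 = BorderlineSMOTE(random_state = 101, kind = 'borderline-2')
X_oversample_b2, y_oversample_b2 = bsmote2.fit_resample(X_train, y_train)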
Another variation of Borderline-SMOTE is Borderline-SMOTE SVM, or we could just call it SVM-SMOTE.
The main difference between SVM-SMOTE and the other SMOTE variants is that instead of using k-nearest neighbors to identify the misclassifications as in the Borderline-SMOTE, the technique incorporates the SVM algorithm.
In the SVM-SMOTE, the borderline area is approximated by the support vectors after training SVMs classifier on the original training set. Synthetic data will be randomly created along the lines joining each minority class support vector with a number of its nearest neighbors.
What is special about Borderline-SMOTE SVM compared to the Borderline-SMOTE is that more data are synthesized away from the region of class overlap. It focuses more on where the data is separated.
Just like before, let’s try to use the technique in the model creation. I would still use the same training data in the Borderline-SMOTE example.
from imblearn.over_sampling import SVMSMOTE

svmsmote = SVMSMOTE(random_state = 101)
X_oversample_svm, y_oversample_svm = svmsmote.fit_resample(X_train, y_train)

classifier_svm = LogisticRegression()
classifier_svm.fit(X_oversample_svm, y_oversample_svm)
print(classification_report(y_test, classifier_svm.predict(X_test)))
Once more, the performance does not differ much, although I could say that the model this time slightly favoured class 0 more than when we used the other techniques, but not by much.
It depends on you once again; consider what your prediction model’s targets are and the business affected by it. If you want to read more about the Borderline-SMOTE SVM, you could check the paper here.
ADASYN is another variation of SMOTE. ADASYN takes a different approach compared to Borderline-SMOTE. While Borderline-SMOTE tries to synthesize the data near the decision boundary, ADASYN creates synthetic data according to the data density.
The synthetic data generation would be inversely proportional to the density of the minority class. It means more synthetic data are created in regions of the feature space where the density of minority examples is low, and fewer or none where the density is high.
In simpler terms, in an area where the minority class is less dense, the synthetic data are created more. Otherwise, the synthetic data is not made so much.
Let’s see how the performance by using the ADASYN. I would still use the same training data in the Borderline-SMOTE example.
from imblearn.over_sampling import ADASYN

adasyn = ADASYN(random_state = 101)
X_oversample_ada, y_oversample_ada = adasyn.fit_resample(X_train, y_train)

classifier_ada = LogisticRegression()
classifier_ada.fit(X_oversample_ada, y_oversample_ada)
print(classification_report(y_test, classifier_ada.predict(X_test)))
As we can see from the model performance above, the performance is slightly worse than when we use the other SMOTE method.
The problem might lie in the outliers. Just like I stated before, ADASYN focuses on regions where the density is low. Often, the low-density data are outliers. The ADASYN approach would then put too much attention on these areas of the feature space, which may result in worse model performance. It might be better to remove the outliers before using ADASYN.
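As a rough sketch of that idea (the IQR rule used here is just one possible outlier filter, not something prescribed by the ADASYN paper):
#Filter out rows whose Age falls outside the IQR fences before oversampling
q1, q3 = X_train['Age'].quantile([0.25, 0.75])
iqr = q3 - q1
mask = X_train['Age'].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

adasyn = ADASYN(random_state = 101)
X_clean, y_clean = adasyn.fit_resample(X_train[mask], y_train[mask])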
If you want to read more about ADASYN, you could check the paper here.
Imbalanced data is a problem when creating a predictive machine learning model. One way to alleviate this problem is by oversampling the minority data.
Instead of oversampling by replicating the data, we can oversample the data by creating synthetic data using the SMOTE technique. There are a few variations of SMOTE, including:
SMOTE
SMOTE-NC
Borderline-SMOTE
SVM-SMOTE
ADASYN
I hope it helps!
| [
{
"code": null,
"e": 504,
"s": 342,
"text": "Imbalance data is a case where the classification dataset class has a skewed proportion. For example, I would use the churn dataset from Kaggle for this article."
},
{
"code": null,
"e": 751,
"s": 504,
"text": "We can see there is a skew in the Yes class compared to the No class. If we calculate the proportion, the Yes class proportion is around 20.4% of the whole dataset. Although, how do you classify the imbalance data? The table below might help you."
},
{
"code": null,
"e": 947,
"s": 751,
"text": "There are three cases of Imbalance — Mild, Moderate, and Extreme; depends on the minority class proportion to the whole dataset. In our example above, we only have a Mild case of imbalanced data."
},
{
"code": null,
"e": 1222,
"s": 947,
"text": "Now, why do we need to care about imbalanced data when creating our machine learning model? Well, an imbalance class creates a bias where the machine learning model tends to predict the majority class. You don’t want the prediction model to ignore the minority class, right?"
},
{
"code": null,
"e": 1376,
"s": 1222,
"text": "That is why there are techniques to overcome the imbalance problem — Undersampling and Oversampling. What is the difference between these two techniques?"
},
{
"code": null,
"e": 1611,
"s": 1376,
"text": "Undersampling would decrease the proportion of your majority class until the number is similar to the minority class. At the same time, Oversampling would resample the minority class proportion following the majority class proportion."
},
{
"code": null,
"e": 1734,
"s": 1611,
"text": "In this article, I would only write a specific technique for Oversampling called SMOTE and various varieties of the SMOTE."
},
{
"code": null,
"e": 1940,
"s": 1734,
"text": "Just a little note, I am a Data Scientist who believes in leaving the proportion as it is because it is representing the data. It is better to try feature engineering before you jump into these techniques."
},
{
"code": null,
"e": 2101,
"s": 1940,
"text": "So, what is SMOTE? SMOTE or Synthetic Minority Oversampling Technique is an oversampling technique but SMOTE working differently than your typical oversampling."
},
{
"code": null,
"e": 2325,
"s": 2101,
"text": "In a classic oversampling technique, the minority data is duplicated from the minority data population. While it increases the number of data, it does not give any new information or variation to the machine learning model."
},
{
"code": null,
"e": 2475,
"s": 2325,
"text": "For the reason above, Nitesh Chawla, et al. (2002) introduce a new technique to create synthetic data for oversampling purposes in their SMOTE paper."
},
{
"code": null,
"e": 2814,
"s": 2475,
"text": "SMOTE works by utilizing a k-nearest neighbour algorithm to create synthetic data. SMOTE first start by choosing random data from the minority class, then k-nearest neighbours from the data are set. Synthetic data would then be made between the random data and the randomly selected k-nearest neighbour. Let me show you the example below."
},
{
"code": null,
"e": 2925,
"s": 2814,
"text": "The procedure is repeated enough times until the minority class has the same proportion as the majority class."
},
{
"code": null,
"e": 3238,
"s": 2925,
"text": "I omit a more in-depth explanation because the passage above already summarizes how SMOTE work. In this article, I want to focus on SMOTE and its variation, as well as when to use it without touching much in theory. If you want to know more, let me attach the link to the paper for each variation I mention here."
},
{
"code": null,
"e": 3345,
"s": 3238,
"text": "As preparation, I would use the imblearn package, which includes SMOTE and their variation in the package."
},
{
"code": null,
"e": 3397,
"s": 3345,
"text": "#Installing imblearnpip install -U imbalanced-learn"
},
{
"code": null,
"e": 3552,
"s": 3397,
"text": "We would start by using the SMOTE in their default form. We would use the same churn dataset above. Let’s prepare the data first as well to try the SMOTE."
},
{
"code": null,
"e": 3806,
"s": 3552,
"text": "If you realize from my explanation above, SMOTE is used to synthesize data where the features are continuous and a classification problem. For that reason, in this section, we only would try to use two continuous features with the classification target."
},
{
"code": null,
"e": 4108,
"s": 3806,
"text": "import pandas as pdimport seaborns as sns#I read the csv churn data into variable called df. Here I would only use two continuous features CreditScore and Age with the target Exiteddf_example = df[['CreditScore', 'Age', 'Exited']]sns.scatterplot(data = df, x ='CreditScore', y = 'Age', hue = 'Exited')"
},
{
"code": null,
"e": 4241,
"s": 4108,
"text": "As we can see in the above scatter plot between the ‘CreditScore’ and ‘Age’ feature, there are mixed up between the 0 and 1 classes."
},
{
"code": null,
"e": 4302,
"s": 4241,
"text": "Let’s try to oversampled the data using the SMOTE technique."
},
{
"code": null,
"e": 4649,
"s": 4302,
"text": "#Importing SMOTEfrom imblearn.over_sampling import SMOTE#Oversampling the datasmote = SMOTE(random_state = 101)X, y = smote.fit_resample(df[['CreditScore', 'Age']], df['Exited'])#Creating a new Oversampling Data Framedf_oversampler = pd.DataFrame(X, columns = ['CreditScore', 'Age'])df_oversampler['Exited']sns.countplot(df_oversampler['Exited'])"
},
{
"code": null,
"e": 4802,
"s": 4649,
"text": "As we can see in the graph above, class 0 and 1 now have a similar proportion. Let’s see how is it goes if we create a similar scatter plot like before."
},
{
"code": null,
"e": 4886,
"s": 4802,
"text": "sns.scatterplot(data = df_oversampler, x ='CreditScore', y = 'Age', hue = 'Exited')"
},
{
"code": null,
"e": 4994,
"s": 4886,
"text": "Currently, we have the oversampled data to fill the area that previously was empty with the synthetic data."
},
{
"code": null,
"e": 5202,
"s": 4994,
"text": "The purpose of oversampling is, just as I stated before, to have a better prediction model. This technique was not created for any analysis purposes as every data created is synthetic, so that is a reminder."
},
{
"code": null,
"e": 5368,
"s": 5202,
"text": "For the reason above, we need to evaluate whether oversampling data leads to a better model or not. Let’s start by splitting the data to create the prediction model."
},
{
"code": null,
"e": 5785,
"s": 5368,
"text": "# Importing the splitter, classification model, and the metricfrom sklearn.linear_model import LogisticRegressionfrom sklearn.model_selection import train_test_splitfrom sklearn.metrics import classification_report#Splitting the data with stratificationX_train, X_test, y_train, y_test = train_test_split(df_example[['CreditScore', 'Age']], df['Exited'], test_size = 0.2, stratify = df['Exited'], random_state = 101)"
},
{
"code": null,
"e": 6039,
"s": 5785,
"text": "As an addition, you should only oversample your training data and not the whole data except if you would use the entire data as your training data. In case you want to split the data, you should split the data first before oversampled the training data."
},
{
"code": null,
"e": 6174,
"s": 6039,
"text": "#Create an oversampled training datasmote = SMOTE(random_state = 101)X_oversample, y_oversample = smote.fit_resample(X_train, y_train)"
},
{
"code": null,
"e": 6404,
"s": 6174,
"text": "Now we have both the imbalanced data and oversampled data, let’s try to create the classification model using both of these data. First, let’s see the performance of the Logistic Regression model trained with the imbalanced data."
},
{
"code": null,
"e": 6563,
"s": 6404,
"text": "#Training with imbalance dataclassifier = LogisticRegression()classifier.fit(X_train, y_train)print(classification_report(y_test, classifier.predict(X_test)))"
},
{
"code": null,
"e": 6730,
"s": 6563,
"text": "As we can see from the metrics, our Logistic Regression model trained with the imbalanced data tends to predict class 0 rather than class 1. The bias is in our model."
},
{
"code": null,
"e": 6806,
"s": 6730,
"text": "Let’s see how is the result of the model trained with the oversampled data."
},
{
"code": null,
"e": 6983,
"s": 6806,
"text": "#Training with oversampled dataclassifier_o = LogisticRegression()classifier_o.fit(X_oversample, y_oversample)print(classification_report(y_test, classifier_o.predict(X_test)))"
},
{
"code": null,
"e": 7164,
"s": 6983,
"text": "The model is doing better at predicted class 1 in this case. In this case, we could say that the oversampled data helps our Logistic Regression model to predict the class 1 better."
},
{
"code": null,
"e": 7331,
"s": 7164,
"text": "I could say that the oversampled data improve the Logistic Regression model for prediction purposes, although the context of ‘improve’ is once again back to the user."
},
{
"code": null,
"e": 7556,
"s": 7331,
"text": "I have mention that SMOTE only works for continuous features. So, what to do if you have mixed (categorical and continuous) features? In this case, we have another variation of SMOTE called SMOTE-NC (Nominal and Continuous)."
},
{
"code": null,
"e": 7768,
"s": 7556,
"text": "You might think, then, just transform the categorical data into numerical; therefore, we had a numerical feature for SMOTE to use. The problem is when we did that; we would have data that did not make any sense."
},
{
"code": null,
"e": 8012,
"s": 7768,
"text": "For example, in the churn data above, we had ‘IsActiveMember’ categorical feature with the data either 0 or 1. If we oversampled this data with SMOTE, we could end up with oversampled data such as 0.67 or 0.5, which does not make sense at all."
},
{
"code": null,
"e": 8229,
"s": 8012,
"text": "This is why we need to use SMOTE-NC when we have cases of mixed data. The premise is simple, we denote which features are categorical, and SMOTE would resample the categorical data instead of creating synthetic data."
},
{
"code": null,
"e": 8352,
"s": 8229,
"text": "Let’s try applying SMOTE-NC. In this case, I would select another feature as an example (one categorical, one continuous)."
},
{
"code": null,
"e": 8413,
"s": 8352,
"text": "df_example = df[['CreditScore', 'IsActiveMember', 'Exited']]"
},
{
"code": null,
"e": 8562,
"s": 8413,
"text": "In this case, ‘CreditScore’ is the continuous feature, and ‘IsActiveMember’ is the categorical feature. Then, let’s split the data just like before."
},
{
"code": null,
"e": 8735,
"s": 8562,
"text": "X_train, X_test, y_train, y_test = train_test_split(df_example[['CreditScore', 'IsActiveMember']],df['Exited'], test_size = 0.2,stratify = df['Exited'], random_state = 101)"
},
{
"code": null,
"e": 8924,
"s": 8735,
"text": "Then, let’s create two different classification models once more; one trained with the imbalanced data and one with the oversampled data. First, let’s try SMOTE-NC to oversampled the data."
},
{
"code": null,
"e": 9391,
"s": 8924,
"text": "#Import the SMOTE-NCfrom imblearn.over_sampling import SMOTENC#Create the oversampler. For SMOTE-NC we need to pinpoint the column position where is the categorical features are. In this case, 'IsActiveMember' is positioned in the second column we input [1] as the parameter. If you have more than one categorical columns, just input all the columns positionsmotenc = SMOTENC([1],random_state = 101)X_oversample, y_oversample = smotenc.fit_resample(X_train, y_train)"
},
{
"code": null,
"e": 9449,
"s": 9391,
"text": "With the data ready, let’s try to create the classifiers."
},
{
"code": null,
"e": 9610,
"s": 9449,
"text": "#Classifier with imbalance dataclassifier = LogisticRegression()classifier.fit(X_train, y_train)print(classification_report(y_test, classifier.predict(X_test)))"
},
{
"code": null,
"e": 9783,
"s": 9610,
"text": "With the imbalance data, we can see the classifier favor the class 0 and ignore the class 1 completely. Then, how about if we trained it with the SMOTE-NC oversampled data."
},
{
"code": null,
"e": 9954,
"s": 9783,
"text": "#Classifier with SMOTE-NCclassifier_o = LogisticRegression()classifier_o.fit(X_oversample, y_oversample)print(classification_report(y_test, classifier_o.predict(X_test)))"
},
{
"code": null,
"e": 10182,
"s": 9954,
"text": "Just like with SMOTE, the classifier with SMOTE-NC oversampled data give a new perspective to the machine learning model to predict the imbalanced data. It wasn’t necessarily the best, but it was better than the imbalance data."
},
{
"code": null,
"e": 10296,
"s": 10182,
"text": "Borderline-SMOTE is a variation of the SMOTE. Just like the name implies, it has something to do with the border."
},
{
"code": null,
"e": 10487,
"s": 10296,
"text": "So, unlike with the SMOTE, where the synthetic data are created randomly between the two data, Borderline-SMOTE only makes synthetic data along the decision boundary between the two classes."
},
{
"code": null,
"e": 10810,
"s": 10487,
"text": "Also, there are two kinds of Borderline-SMOTE; there are Borderline-SMOTE1 and Borderline-SMOTE2. The differences are simple; Borderline-SMOTE1 also oversampled the majority class where the majority data are causing misclassification in the decision boundary, while Borderline-SMOTE2 only oversampled the minority classes."
},
{
"code": null,
"e": 10918,
"s": 10810,
"text": "Let’s try the Borderline-SMOTE with our previous data. I would once more only using the numerical features."
},
{
"code": null,
"e": 10968,
"s": 10918,
"text": "df_example = df[['CreditScore', 'Age', 'Exited']]"
},
{
"code": null,
"e": 11173,
"s": 10968,
"text": "The above picture is the difference between oversampling data with SMOTE and Borderline-SMOTE1. It might slightly look similar, but we could see there are differences where the synthetic data are created."
},
{
"code": null,
"e": 11283,
"s": 11173,
"text": "How about the performances for the machine learning model? Let us try it. First, as usual, we split the data."
},
{
"code": null,
"e": 11448,
"s": 11283,
"text": "X_train, X_test, y_train, y_test = train_test_split(df_example[['CreditScore', 'Age']], df['Exited'], test_size = 0.2, stratify = df['Exited'], random_state = 101)"
},
{
"code": null,
"e": 11512,
"s": 11448,
"text": "Then, we create the oversampled data by using Borderline-SMOTE."
},
{
"code": null,
"e": 11782,
"s": 11512,
"text": "#By default, the BorderlineSMOTE would use the Borderline-SMOTE1from imblearn.over_sampling import BorderlineSMOTEbsmote = BorderlineSMOTE(random_state = 101, kind = 'borderline-1')X_oversample_borderline, y_oversample_borderline = bsmote.fit_resample(X_train, y_train)"
},
{
"code": null,
"e": 11879,
"s": 11782,
"text": "Lastly, let’s check the machine learning performance with the Borderline-SMOTE oversampled data."
},
{
"code": null,
"e": 12062,
"s": 11879,
"text": "classifier_border = LogisticRegression()classifier_border.fit(X_oversample_borderline, y_oversample_borderline)print(classification_report(y_test, classifier_border.predict(X_test)))"
},
{
"code": null,
"e": 12236,
"s": 12062,
"text": "The performance doesn’t differ much from the model trained with the SMOTE oversampled data. This means that we should focus on the features instead of oversampling the data."
},
{
"code": null,
"e": 12483,
"s": 12236,
"text": "Borderline-SMOTE is used the best when we know that the misclassification often happens near the boundary decision. Otherwise, we could stay use the usual SMOTE. If you want to read more about the Borderline-SMOTE, you could check the paper here."
},
{
"code": null,
"e": 12582,
"s": 12483,
"text": "Another variation of Borderline-SMOTE is Borderline-SMOTE SVM, or we could just call it SVM-SMOTE."
},
{
"code": null,
"e": 12797,
"s": 12582,
"text": "The main differences between SVM-SMOTE and the other SMOTE are that instead of using K-nearest neighbors to identify the misclassification in the Borderline-SMOTE, the technique would incorporate the SVM algorithm."
},
{
"code": null,
"e": 13074,
"s": 12797,
"text": "In the SVM-SMOTE, the borderline area is approximated by the support vectors after training SVMs classifier on the original training set. Synthetic data will be randomly created along the lines joining each minority class support vector with a number of its nearest neighbors."
},
{
"code": null,
"e": 13268,
"s": 13074,
"text": "What special about Borderline-SMOTE SVM compared to the Borderline-SMOTE is that more data are synthesized away from the region of class overlap. It focuses more on where the data is separated."
},
{
"code": null,
"e": 13414,
"s": 13268,
"text": "Just like before, let’s try to use the technique in the model creation. I would still use the same training data in the Borderline-SMOTE example."
},
{
"code": null,
"e": 13732,
"s": 13414,
"text": "from imblearn.over_sampling import SVMSMOTEsvmsmote = SVMSMOTE(random_state = 101)X_oversample_svm, y_oversample_svm = svmsmote.fit_resample(X_train, y_train)classifier_svm = LogisticRegression()classifier_svm.fit(X_oversample_svm, y_oversample_svm)print(classification_report(y_test, classifier_svm.predict(X_test)))"
},
{
"code": null,
"e": 13917,
"s": 13732,
"text": "The performance is once more not differ much, although I could say that the model in this time slightly favoured the class 0 more than when we use the other technique but not too much."
},
{
"code": null,
"e": 14112,
"s": 13917,
"text": "It depends on you once again, what are your prediction models target are and the business affected by it. If you want to read more about the Borderline-SMOTE SVM, you could check the paper here."
},
{
"code": null,
"e": 14371,
"s": 14112,
"text": "ADASYN is another variation from SMOTE. ADASYN takes a more different approach compared to the Borderline-SMOTE. While Borderline-SMOTE tries to synthesize the data near the data decision boundary, ADASYN creates synthetic data according to the data density."
},
{
"code": null,
"e": 14636,
"s": 14371,
"text": "The synthetic data generation would be inversely proportional to the density of the minority class. It means more synthetic data are created in regions of the feature space where the density of minority examples is low, and fewer or none where the density is high."
},
{
"code": null,
"e": 14793,
"s": 14636,
"text": "In simpler terms, in an area where the minority class is less dense, the synthetic data are created more. Otherwise, the synthetic data is not made so much."
},
{
"code": null,
"e": 14918,
"s": 14793,
"text": "Let’s see how the performance by using the ADASYN. I would still use the same training data in the Borderline-SMOTE example."
},
{
"code": null,
"e": 15228,
"s": 14918,
"text": "from imblearn.over_sampling import ADASYNadasyn = ADASYN(random_state = 101)X_oversample_ada, y_oversample_ada = adasyn.fit_resample(X_train, y_train)classifier_ada = LogisticRegression()classifier_ada.fit(X_oversample_ada, y_oversample_ada)print(classification_report(y_test, classifier_ada.predict(X_test)))"
},
{
"code": null,
"e": 15351,
"s": 15228,
"text": "As we can see from the model performance above, the performance is slightly worse than when we use the other SMOTE method."
},
{
"code": null,
"e": 15733,
"s": 15351,
"text": "The problems might lie in the outliers. Just like I stated before, ADASYN would focus on the density data where the density is low. Often time, the low-density data is an outlier. The ADASYN approach would then put too much attention on these areas of the feature space, which may result in worse model performance. It might be better to remove the outlier before using the ADASYN."
},
{
"code": null,
"e": 15804,
"s": 15733,
"text": "If you want to read more about ADASYN, you could check the paper here."
},
{
"code": null,
"e": 15956,
"s": 15804,
"text": "Imbalanced data is a problem when creating a predictive machine learning model. One way to alleviate this problem is by oversampling the minority data."
},
{
"code": null,
"e": 16132,
"s": 15956,
"text": "Instead of oversampling by replicating the data, we can oversample the data by creating synthetic data using the SMOTE technique. There are few variations of SMOTE, including:"
},
{
"code": null,
"e": 16177,
"s": 16132,
"text": "SMOTESMOTE-NCBorderline-SMOTESVM-SMOTEADASYN"
},
{
"code": null,
"e": 16183,
"s": 16177,
"text": "SMOTE"
},
{
"code": null,
"e": 16192,
"s": 16183,
"text": "SMOTE-NC"
},
{
"code": null,
"e": 16209,
"s": 16192,
"text": "Borderline-SMOTE"
},
{
"code": null,
"e": 16219,
"s": 16209,
"text": "SVM-SMOTE"
},
{
"code": null,
"e": 16226,
"s": 16219,
"text": "ADASYN"
},
{
"code": null,
"e": 16243,
"s": 16226,
"text": "I hope it helps!"
}
] |
Spark SQL 102 — Aggregations and Window Functions | by David Vrba | Towards Data Science | Data aggregation is an important step in many data analyses. It is a way how to reduce the dataset and compute various metrics, statistics, and other characteristics. A related but slightly more advanced topic are window functions that allow computing also other analytical and ranking functions on the data based on a window with a so-called frame.
This is a continuation of a recent article in which we described what is a DataFrame and how transformations work in Spark SQL in general. Here we will dive into aggregations and window functions which are two specific groups of transformations, they are closely related, but as we will see, there is an important difference between them which is good to understand.
For the code, we will use PySpark API, the current version at the time of writing which is 3.1.2 (June 2021).
Let's start with the simplest aggregations, which are computations in which we reduce the entire dataset to a single number. This might be the total count of rows in the DataFrame or the sum/average of values in some specific column. For this purpose, we can use the agg() function directly on the DataFrame and pass the aggregation functions as arguments in a comma-separated way:
from pyspark.sql.functions import count, sum

df.agg(count('*'))

df.agg(count('*'), sum('price'))

df.agg(
    count('*').alias('number_of_rows'),
    sum('price').alias('total_price')
)
Notice that the output of the first example is a DataFrame with a single row and single column — it is just a number represented by a DataFrame. In the second example, the output is a DataFrame with a single row and two columns — one column for each aggregation function. In the last example, we can see that each of the aggregations can be also renamed using the alias() function.
Very often, we need to compute the aggregation, not for the entire DataFrame, but separately for each group of rows where the group is defined as rows that have the same value in a specific column. As an example, imagine a dataset of credit card transactions where each row is a unique transaction but different rows can belong to the same user (cardholder). Here, it might be useful to compute the aggregation separately for each user, and for this kind of aggregation we can use the groupBy transformation:
(
    df
    .groupBy('user_id')
    .agg(count('*').alias('number_of_transactions'))
)
Again, we are using here the agg function and we can pass in any aggregation function such as count, countDistinct, sum, avg/mean, min, max, first, last, collect_list, collect_set, approx_count_distinct, corr, and for the complete list, I recommend to check the documentation. Notice that if we don’t rename the result of the aggregation, it will have a default name, which in the case of the count function is count(1).
Alternatively, we can call an aggregation function directly after the groupBy as follows
(
    df
    .groupBy('user_id')
    .count()
)
Notice that using this syntax has the disadvantage that you can not rename the result of the aggregation directly using the alias() because here the count function returns back a DataFrame, so the alias would be applied on the entire DataFrame. So the renaming would have to be handled by another transformation such as withColumnRenamed(‘count’, ‘new_name’). Also, here you can call only one aggregation function at a time while the syntax with agg allows you to call as many functions as you want at the same time. For other functions that can be called after groupBy see the documentation.
One important property of these groupBy transformations is that the output DataFrame will contain only the columns that were specified as arguments in the groupBy() and the results of the aggregation. So if we call df.groupBy(‘user_id’).count(), no matter how many fields the df has, the output will have only two columns, namely user_id and count. Also, the number of rows represented by the output DataFrame will be smaller, or in a marginal case, the same as in the original df. The marginal case corresponds to a situation in which the grouping column has all values distinct so each group has exactly one row. As we will see, this will be different with window functions.
The window functions are a group of functions that can be called also over a group of rows similarly as we have seen in the previous case. There are a couple of differences, however. Firstly, after calling a window function, the dataset will not be reduced — all rows and all columns will be in the output DataFrame and the calculation will be added in a new column. The group of rows on which the function will be applied is again given by a specific column (or a list of columns) for which the rows have the same value, and the group is referred to as a window. Also, the window functions are more flexible in the sense that sometimes you don’t want to apply the function on the entire window, but rather only on a subset of rows from the window — a so-called frame. Finally, the window can also be sorted because some functions (so-called ranking functions) require it. Let’s see the syntax for the window functions:
from pyspark.sql import Window

w = Window().partitionBy('user_id')
df.withColumn('number_of_transactions', count('*').over(w))
As you can see, we first define the window using the function partitionBy() — this is analogous to the groupBy(): all rows that have the same value in the specified column (here user_id) will form one window. Then we add a new column to the DataFrame and here we call our window function over the specified window.
This is a specific group of window functions that require the window to be sorted. As a specific example, consider the function row_number() that tells you the number of the row within the window:
from pyspark.sql.functions import row_number

w = Window.partitionBy('user_id').orderBy('transaction_date')
df.withColumn('r', row_number().over(w))
Other ranking functions are for example rank() or dense_rank().
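The difference between them is in how ties are handled: after equal values, rank() leaves gaps in the sequence while dense_rank() does not. A minimal sketch reusing the window defined above:
from pyspark.sql.functions import rank, dense_rank

(
    df
    .withColumn('rank', rank().over(w))
    .withColumn('dense_rank', dense_rank().over(w))
)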
As mentioned above some functions can be applied over a subset of rows from the window. A typical use-case is to compute a cumulative sum of values where the frame would specify that we want to apply the function from the beginning of the window until the current row. Also, it is obvious that the order of the rows in the frame will be important because the cumulative sum will have a different shape if the rows go in a different order. The frame can be specified using one of these two functions:
rowsBetween()
rangeBetween()
Both of these two functions take two arguments: start and end of the frame and they can be specified as follows:
Window.unboundedPreceding, Window.unboundedFollowing — the entire window from the beginning to the end
Window.unboundedPreceding, Window.currentRow — from the beginning of the window to the current row, this is used for the cumulative sum
using numerical values, for example, 0 means currentRow, but the meaning of other values can differ based on the framing function rowsBetween/rangeBetween.
To understand the difference between rowsBetween and rangeBetween, let’s see the following example in which we have three columns, user_id, activity, and day and we want to sum the activity for each user:
df.withColumn('activity_sum', sum('activity').over(w))
We will sort the window by day and use the interval (-1, 0) to specify the frame. We will see that the interval has a different meaning for both functions. On the right side of the image you can see what will be the result of the sum in each case (for the sake of simplicity, we display in the image only one window for the user with id 100):
In the case of rowsBetween, on each row, we sum the activities from the current row and the previous one (if it exists), that’s what the interval (-1, 0) means. On the other hand, in the case of rangeBetween, on each row, we first need to compute the range of rows that will be summed by subtracting the value 1 from the value in the day column. For example on the 3rd row, we have 7–1 = 6 leading to the interval (6, 7) and we should sum up all rows where the value in the day column fits into this interval and in our case, it is only the current row, because there is no row with day=6.
As you can see, in the case of rangeBetween, the column by which we sort needs to be of some numerical type because Spark needs to do some arithmetic on the values in this column. This restriction is not present in the case of rowsBetween.
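Coming back to the cumulative sum use case mentioned earlier, a minimal sketch with the same user activity columns looks like this:
from pyspark.sql.functions import sum

w_cumulative = (
    Window
    .partitionBy('user_id')
    .orderBy('day')
    .rowsBetween(Window.unboundedPreceding, Window.currentRow)
)
df.withColumn('cumulative_activity', sum('activity').over(w_cumulative))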
To mention some other window functions, see for example:
lead()
lag()
ntile()
nth_value()
cume_dist()
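For example, lag() pulls the value from a previous row in the window, which makes row-over-row comparisons straightforward. A minimal sketch computing the day-over-day change in activity:
from pyspark.sql.functions import col, lag

w_lag = Window.partitionBy('user_id').orderBy('day')
df.withColumn('activity_change', col('activity') - lag('activity').over(w_lag))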
Sorting the window will change the frame — this might not be intuitive: computing a sum over a window that is sorted will lead to a different result compared to a window that is not sorted. For more details, check my other article where I show an example.
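Concretely, when a window has an orderBy but no explicit frame, Spark uses rangeBetween(Window.unboundedPreceding, Window.currentRow) as the default frame, so an aggregation such as sum silently becomes a running sum:
w_sorted = Window.partitionBy('user_id').orderBy('day')
df.withColumn('activity_sum', sum('activity').over(w_sorted))  # running sum per user, not the window total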
Both transformations, groupBy and window, require a specific partitioning, and if the partitioning is not present, they will induce a shuffle — the data needs to be reorganized in such a way that all rows within one group/window are placed within one partition. The difference, however, is that with groupBy, Spark will partly aggregate the data first and then shuffle the reduced dataset, as compared to window, where the entire dataset will be shuffled.
If we don’t pass any argument to the partitionBy function and specify the window as w = Window().partitionBy(), the entire dataset will become one big window and all data will be shuffled to a single partition which may lead to performance issues because all data will be placed on a single node in the cluster.
In this article, we covered aggregation and window functions which are very frequently used transformations, especially among data analysts. We discussed what is the difference between calling groupBy and Window.partitionBy and we have seen different options for how to specify the frame on a window. | [
Java Examples - Collection Comparison | How to compare elements in a collection?
The following example compares the elements of a collection by converting a string array into a TreeSet, using the Collections.min() and Collections.max() methods of the Collections class.
import java.util.Collections;
import java.util.Set;
import java.util.TreeSet;

public class MainClass {
   public static void main(String[] args) {
      String[] coins = { "Penny", "nickel", "dime", "Quarter", "dollar" };
      Set<String> set = new TreeSet<>();

      for (int i = 0; i < coins.length; i++) {
         set.add(coins[i]);
      }

      // natural (case-sensitive) ordering vs. case-insensitive ordering
      System.out.println(Collections.min(set));
      System.out.println(Collections.min(set, String.CASE_INSENSITIVE_ORDER));

      for (int i = 0; i < 10; i++) {
         System.out.print('-');
      }
      System.out.println();

      System.out.println(Collections.max(set));
      System.out.println(Collections.max(set, String.CASE_INSENSITIVE_ORDER));
   }
}
The above code sample will produce the following result.
Penny
dime
----------
nickel
Quarter
NumPy - Advanced Indexing | It is possible to make a selection from an ndarray using a non-tuple sequence, an ndarray object of integer or Boolean data type, or a tuple with at least one item being a sequence object. Advanced indexing always returns a copy of the data. As against this, slicing only presents a view.
There are two types of advanced indexing − Integer and Boolean.
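The following minimal example illustrates the copy-versus-view distinction mentioned above −

import numpy as np

a = np.arange(6)
view = a[1:4]        # basic slicing returns a view
copy = a[[1, 2, 3]]  # advanced indexing returns a copy

view[0] = 99         # modifies a
copy[0] = -1         # does not modify a
print(a)

Its output would be −

[ 0 99  2  3  4  5]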
This mechanism helps in selecting any arbitrary item in an array based on its N-dimensional index. Each integer array represents the number of indexes into that dimension. When the index consists of as many integer arrays as the dimensions of the target ndarray, it becomes straightforward.
In the following example, one element of a specified column from each row of the ndarray object is selected. Hence, the row index contains all row numbers, and the column index specifies the element to be selected.
import numpy as np

x = np.array([[1, 2], [3, 4], [5, 6]])
y = x[[0, 1, 2], [0, 1, 0]]
print(y)
Its output would be as follows −
[1 4 5]
The selection includes elements at (0,0), (1,1) and (2,0) from the first array.
In the following example, elements placed at corners of a 4X3 array are selected. The row indices of selection are [0, 0] and [3,3] whereas the column indices are [0,2] and [0,2].
import numpy as np

x = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]])

print('Our array is:')
print(x)
print('\n')

rows = np.array([[0, 0], [3, 3]])
cols = np.array([[0, 2], [0, 2]])
y = x[rows, cols]

print('The corner elements of this array are:')
print(y)
The output of this program is as follows −
Our array is:
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]]
The corner elements of this array are:
[[ 0 2]
[ 9 11]]
The resultant selection is an ndarray object containing corner elements.
Advanced and basic indexing can be combined by using one slice (:) or ellipsis (...) with an index array. The following example uses slice for row and advanced index for column. The result is the same when slice is used for both. But advanced index results in copy and may have different memory layout.
import numpy as np

x = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]])

print('Our array is:')
print(x)
print('\n')

# slicing
z = x[1:4, 1:3]

print('After slicing, our array becomes:')
print(z)
print('\n')

# using advanced index for column
y = x[1:4, [1, 2]]

print('Slicing using advanced index for column:')
print(y)
The output of this program would be as follows −
Our array is:
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]]
After slicing, our array becomes:
[[ 4 5]
[ 7 8]
[10 11]]
Slicing using advanced index for column:
[[ 4 5]
[ 7 8]
[10 11]]
This type of advanced indexing is used when the resultant object is meant to be the result of Boolean operations, such as comparison operators.
In this example, items greater than 5 are returned as a result of Boolean indexing.
import numpy as np

x = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]])

print('Our array is:')
print(x)
print('\n')

# Now we will print the items greater than 5
print('The items greater than 5 are:')
print(x[x > 5])
The output of this program would be −
Our array is:
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]]
The items greater than 5 are:
[ 6 7 8 9 10 11]
In this example, NaN (Not a Number) elements are omitted by using ~ (complement operator).
import numpy as np

a = np.array([np.nan, 1, 2, np.nan, 3, 4, 5])
print(a[~np.isnan(a)])
Its output would be −
[ 1. 2. 3. 4. 5.]
The following example shows how to filter out the non-complex elements from an array.
import numpy as np

a = np.array([1, 2+6j, 5, 3.5+5j])
print(a[np.iscomplex(a)])
Here, the output is as follows −
[2.0+6.j 3.5+5.j]
C++ basic_streambuf Library - pbase | It returns a pointer to the beginning of the output sequence; that is, a pointer to the first element of the array holding the portion of the controlled output sequence that is currently buffered.
Following is the declaration for std::basic_streambuf::pbase.
char_type* pbase() const;
Parameters − none
Return Value − It returns a pointer to the beginning of an array with the part of the controlled output sequence that is currently buffered.
Exception Safety − Strong guarantee: if an exception is thrown, there are no changes in the stream buffer.
Data Races − It accesses the stream buffer object.
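Since pbase() is a protected member, it is typically called from within a class derived from std::streambuf. The following is a minimal, illustrative sketch (the names MyBuf and report are our own):

#include <iostream>
#include <streambuf>

struct MyBuf : std::streambuf {
   char buffer[16];
   MyBuf() { setp(buffer, buffer + sizeof(buffer)); }
   void report() {
      // pbase() points at the start of the buffered output area,
      // pptr() at the current put position
      std::cout << "buffered chars: " << (pptr() - pbase()) << std::endl;
   }
};

int main() {
   MyBuf b;
   b.sputc('a');
   b.sputc('b');
   b.report();   // prints: buffered chars: 2
   return 0;
}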
Part 9: Time Series Consensus Motifs | by Sean Law | Towards Data Science | STUMPY is a powerful and scalable Python library for modern time series analysis and, at its core, efficiently computes something called a matrix profile. The goal of this multi-part series is to explain what the matrix profile is and how you can start leveraging STUMPY for all of your modern time series data mining tasks!
Note: These tutorials were originally featured in the STUMPY documentation.
Part 1: The Matrix Profile
Part 2: STUMPY Basics
Part 3: Time Series Chains
Part 4: Semantic Segmentation
Part 5: Fast Approximate Matrix Profiles with STUMPY
Part 6: Matrix Profiles for Streaming Time Series Data
Part 7: Fast Pattern Searching with STUMPY
Part 8: AB-Joins with STUMPY
Part 9: Time Series Consensus Motifs
Part 10: Discovering Multidimensional Time Series Motifs
This tutorial utilizes the main takeaways from the Matrix Profile XV paper.
Matrix profiles can be used to find conserved patterns within a single time series (self-join) and across two time series (AB-join). In both cases, these conserved patterns are often called “motifs”. And, when considering a set of three or more time series, one common trick for identifying a conserved motif across the entire set is to:
Append a np.nan to the end of each time series. This is used to identify the boundary between neighboring time series and ensures that any identified motif will not straddle multiple time series.
Concatenate all of the time series into a single long time series
Compute the matrix profile (self-join) on the aforementioned concatenated time series
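As a rough sketch of this trick (this is not the approach used in the rest of this tutorial), with Ts being a list of 1-D NumPy arrays and m a window size of our choosing:

import numpy as np
import stumpy

T_concat = np.concatenate([np.append(T, np.nan) for T in Ts])
mp = stumpy.stump(T_concat, m)  # stump handles the np.nan separators so no motif straddles two series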
However, this is not guaranteed to find patterns that are conserved across all of the time series within the set. This idea of finding a conserved motif that is common to all of the time series in a set is referred to as a “consensus motif”. In this tutorial, we will introduce the “Ostinato” algorithm, which is an efficient way to find the consensus motif amongst a set of time series.
Let’s import the packages that we’ll need to load, analyze, and plot the data.
%matplotlib inline
import stumpy
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from itertools import cycle, combinations
from matplotlib.patches import Rectangle
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.special import comb

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 20
fig_size[1] = 6
plt.rcParams["figure.figsize"] = fig_size
plt.rcParams['xtick.direction'] = 'out'
In the following dataset, a volunteer was asked to “spell” out different Japanese sentences by performing eye movements that represented writing strokes of individual Japanese characters. Their eye movements were recorded by an electrooculograph (EOG) and they were given one second to “visually trace” each Japanese character. For our purposes we’re only using the vertical eye positions and, conceptually, this basic example reproduced Figure 1 and Figure 2 of the Matrix Profile XV paper.
sentence_idx = [6, 7, 9, 10, 16, 24]
Ts = [None] * len(sentence_idx)
fs = 50  # eog signal was downsampled to 50 Hz
for i, s in enumerate(sentence_idx):
    Ts[i] = pd.read_csv(f'https://zenodo.org/record/4288978/files/EOG_001_01_{s:03d}.csv?download=1').iloc[:, 0].values

# the literal sentences
sentences = pd.read_csv(f'https://zenodo.org/record/4288978/files/test_sent.jp.csv?download=1', index_col=0)
Below, we plotted six time series that each represent the vertical eye position while a person “wrote” Japanese sentences using their eyes. As you can see, some of the Japanese sentences are longer and contain more words while others are shorter. However, there is one common Japanese word (i.e., a “common motif”) that is contained in all six examples. Can you spot the one second long pattern that is common across these six time series?
def plot_vertical_eog():
    fig, ax = plt.subplots(6, sharex=True, sharey=True)
    prop_cycle = plt.rcParams['axes.prop_cycle']
    colors = cycle(prop_cycle.by_key()['color'])
    for i, e in enumerate(Ts):
        ax[i].plot(np.arange(0, len(e)) / fs, e, color=next(colors))
        ax[i].set_ylim((-330, 1900))
    plt.subplots_adjust(hspace=0)
    plt.xlabel('Time (s)')
    return ax

plot_vertical_eog()
plt.suptitle('Vertical Eye Position While Writing Different Japanese Sentences', fontsize=14)
plt.show()
To find out, we can use the stumpy.ostinato function to help us discover the “consensus motif” by passing in the list of time series, Ts, along with the subsequence window size, m:
m = fs
radius, Ts_idx, subseq_idx = stumpy.ostinato(Ts, m)
print(f'Found Best Radius {np.round(radius, 2)} in time series {Ts_idx} starting at subsequence index location {subseq_idx}.')

Found Best Radius 0.87 in time series 4 starting at subsequence index location 1271.
Now, let’s plot the individual subsequences from each time series that correspond to the matching consensus motif:
seed_motif = Ts[Ts_idx][subseq_idx : subseq_idx + m]
x = np.linspace(0, 1, 50)
nn = np.zeros(len(Ts), dtype=np.int64)
nn[Ts_idx] = subseq_idx
for i, e in enumerate(Ts):
    if i != Ts_idx:
        nn[i] = np.argmin(stumpy.core.mass(seed_motif, e))
        lw = 1
        label = None
    else:
        lw = 4
        label = 'Seed Motif'
    plt.plot(x, e[nn[i]:nn[i]+m], lw=lw, label=label)

plt.title('The Consensus Motif')
plt.xlabel('Time (s)')
plt.legend()
plt.show()
There is a striking similarity between the subsequences. The most central “seed motif” is plotted with a thicker purple line.
When we highlight the above subsequences in their original context (light blue boxes below), we can see that they occur at different times:
ax = plot_vertical_eog()
for i in range(len(Ts)):
    y = ax[i].get_ylim()
    r = Rectangle((nn[i] / fs, y[0]), 1, y[1] - y[0], alpha=0.3)
    ax[i].add_patch(r)

plt.suptitle('Vertical Eye Position While Writing Different Japanese Sentences', fontsize=14)
plt.show()
The discovered conserved motif (light blue boxes) corresponds to writing the Japanese character ア, which occurs at different times in the different example sentences.
In this next example, we’ll reproduce Figure 9 from the Matrix Profile XV paper.
Mitochondrial DNA (mtDNA) has been successfully used to determine evolutionary relationships between organisms (phylogeny). Since DNAs are essentially ordered sequences of letters, we can loosely treat them as time series and use all of the available time series tools.
animals = ['python', 'hippo', 'red_flying_fox', 'alpaca']
data = {}
for animal in animals:
    data[animal] = pd.read_csv(f"https://zenodo.org/record/4289120/files/{animal}.csv?download=1").iloc[:, 0].values

colors = {'python': 'tab:blue', 'hippo': 'tab:green', 'red_flying_fox': 'tab:purple', 'alpaca': 'tab:red'}
Naively, using scipy.cluster.hierarchy we can cluster the mtDNAs based on the majority of the sequences. A correct clustering would place the two “artiodactyla”, hippo and alpaca, closest and, together with the red flying fox, we would expect them to form a cluster of “mammals”. Finally, the python, a “reptile”, should be furthest away from all of the “mammals”.
fig, ax = plt.subplots(ncols=2)
truncate = 15000
for k, v in data.items():
    ax[0].plot(v[:truncate], label=k, color=colors[k])
ax[0].legend()
ax[0].set_xlabel('Number of mtDNA Base Pairs')
ax[0].set_title('mtDNA Sequences')

truncate = 16000
dp = np.zeros(int(comb(4, 2)))
for i, a_c in enumerate(combinations(data.keys(), 2)):
    dp[i] = stumpy.core.mass(data[a_c[0]][:truncate], data[a_c[1]][:truncate])
Z = linkage(dp, optimal_ordering=True)
dendrogram(Z, labels=[k for k in data.keys()], ax=ax[1])
ax[1].set_ylabel('Z-Normalized Euclidean Distance')
ax[1].set_title('Clustering')
plt.show()
Uh oh, the clustering is clearly wrong! Amongst other problems, the alpaca (a mammal) should not be most closely related to the python (a reptile).
In order to obtain the correct relationships, we need to identify and then compare the parts of the mtDNA that is the most conserved across the mtDNA sequences. In other words, we need to cluster based on their consensus motif. Let’s limit the subsequence window size to 1,000 base pairs and identify the consensus motif again using the stumpy.ostinato function:
m = 1000
bsf_radius, bsf_Ts_idx, bsf_subseq_idx = stumpy.ostinato(list(data.values()), m)
print(f'Found best radius {np.round(bsf_radius, 2)} in time series {bsf_Ts_idx} starting at subsequence index location {bsf_subseq_idx}.')

Found best radius 2.73 in time series 1 starting at subsequence index location 602.
Now, let’s perform the clustering again but, this time, using the consensus motif:
consensus_motifs = {}
best_motif = list(data.items())[bsf_Ts_idx][1][bsf_subseq_idx : bsf_subseq_idx + m]
for i, (k, v) in enumerate(data.items()):
    if i == bsf_Ts_idx:
        consensus_motifs[k] = best_motif
    else:
        idx = np.argmin(stumpy.core.mass(best_motif, v))
        consensus_motifs[k] = v[idx : idx + m]

fig, ax = plt.subplots(ncols=2)

# plot the consensus motifs
for animal, motif in consensus_motifs.items():
    ax[0].plot(motif, label=animal, color=colors[animal])
ax[0].legend()

# cluster consensus motifs
dp = np.zeros(int(comb(4, 2)))
for i, motif in enumerate(combinations(list(consensus_motifs.values()), 2)):
    dp[i] = stumpy.core.mass(motif[0], motif[1])
Z = linkage(dp, optimal_ordering=True)
dendrogram(Z, labels=[k for k in consensus_motifs.keys()])

ax[0].set_title('Consensus mtDNA Motifs')
ax[0].set_xlabel('Number of mtDNA Base Pairs')
ax[1].set_title('Clustering Using the Consensus Motifs')
ax[1].set_ylabel('Z-normalized Euclidean Distance')
plt.show()
Now this looks much better! Hierarchically, the python is “far away” from the other mammals and, amongst the mammalia, the red flying fox (a bat) is less related to both the alpaca and the hippo which are the closest evolutionary relatives in this set of animals.
And that’s it! You have now learned how to search for a consensus motif amongst a set of time series using the awesome stumpy.ostinato function. You can now import this package and use it in your own projects. Happy coding!
Matrix Profile XV
STUMPY Matrix Profile Documentation
STUMPY Matrix Profile Github Code Repository
How can we Implement a Stack using Queue in Java? | A Stack is a subclass of the Vector class and represents a last-in-first-out (LIFO) stack of objects. The last element added at the top of the stack (In) is the first element to be removed (Out) from the stack.
The Queue interface extends the Collection interface and supports insert and remove operations in first-in-first-out (FIFO) order. We can implement a Stack using a Queue, as shown in the program below.
import java.util.*;

public class StackFromQueueTest {
   Queue<Integer> queue = new LinkedList<>();

   public void push(int value) {
      int queueSize = queue.size();
      queue.add(value);
      // rotate the existing elements behind the new value so that
      // the most recently pushed element is always at the head
      for (int i = 0; i < queueSize; i++) {
         queue.add(queue.remove());
      }
   }

   public void pop() {
      System.out.println("An element removed from a stack is: " + queue.remove());
   }

   public static void main(String[] args) {
      StackFromQueueTest test = new StackFromQueueTest();
      test.push(10);
      test.push(20);
      test.push(30);
      test.push(40);
      System.out.println(test.queue);
      test.pop();
      System.out.println(test.queue);
   }
}
[40, 30, 20, 10]
An element removed from a stack is: 40
[30, 20, 10]
TIKA - Extracting MS-Office Files | Given below is the program to extract content and metadata from a Microsoft Office Document.
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

import org.apache.tika.exception.TikaException;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.microsoft.ooxml.OOXMLParser;
import org.apache.tika.sax.BodyContentHandler;

import org.xml.sax.SAXException;

public class MSExcelParse {

   public static void main(final String[] args) throws IOException, SAXException, TikaException {

      // detecting the file type
      BodyContentHandler handler = new BodyContentHandler();
      Metadata metadata = new Metadata();
      FileInputStream inputstream = new FileInputStream(new File("example_msExcel.xlsx"));
      ParseContext pcontext = new ParseContext();

      // OOXML parser
      OOXMLParser msofficeparser = new OOXMLParser();
      msofficeparser.parse(inputstream, handler, metadata, pcontext);
      System.out.println("Contents of the document: " + handler.toString());
      System.out.println("Metadata of the document:");
      String[] metadataNames = metadata.names();

      for (String name : metadataNames) {
         System.out.println(name + ": " + metadata.get(name));
      }
   }
}
Save the above code as MSExcelParse.java, and compile it from the command prompt by using the following commands −
javac MSExcelParse.java
java MSExcelParse
Here we are passing the following sample Excel file.
The given Excel file has the following properties −
After executing the above program you will get the following output.
Output −
Contents of the document:
Sheet1
Name Age Designation Salary
Ramu 50 Manager 50,000
Raheem 40 Assistant manager 40,000
Robert 30 Superviser 30,000
sita 25 Clerk 25,000
sameer 25 Section in-charge 20,000
Metadata of the document:
meta:creation-date: 2006-09-16T00:00:00Z
dcterms:modified: 2014-09-28T15:18:41Z
meta:save-date: 2014-09-28T15:18:41Z
Application-Name: Microsoft Excel
extended-properties:Company:
dcterms:created: 2006-09-16T00:00:00Z
Last-Modified: 2014-09-28T15:18:41Z
Application-Version: 15.0300
date: 2014-09-28T15:18:41Z
publisher:
modified: 2014-09-28T15:18:41Z
Creation-Date: 2006-09-16T00:00:00Z
extended-properties:AppVersion: 15.0300
protected: false
dc:publisher:
extended-properties:Application: Microsoft Excel
Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
Last-Save-Date: 2014-09-28T15:18:41Z
Spring Boot MongoDB + Spring Data Example - onlinetutorialspoint | In this tutorial, we are going to show how to work with MongoDB in Spring Boot using Spring Data.
Spring Boot 2.0.0.RELEASE
Spring Boot Starter Data Mongo 3.6.3
Maven
Java 8
Install or set up MongoDB if you do not have it, and execute the below commands in the MongoDB shell to create the MongoDB document and insert data.
> use otp
> db.item.insert({
itemId:1,
serialNumber:82,
category:"Books",
name:"MongoDB in Action"
})
> db.item.insert({
itemId:2,
serialNumber:20,
category:"Mobiles",
name:"iPhone6"
})
> db.item.insert({
itemId:3,
serialNumber:84,
category:"Books",
name:"Spring in Action"
})
> db.item.insert({
itemId:4,
serialNumber:24,
category:"Mobiles",
name:"Samsung Galaxy"
})
We created the item document and inserted four items into it.
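To double-check the inserts, you can query the collection in the MongoDB shell:

> db.item.find().pretty()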
Project Structure :
pom.xml :
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.onlinetutorialspoint</groupId>
<artifactId>SpringBoot_MongoDB_Example</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>SpringBoot_MongoDB_Example</name>
<description>Spring Boot MongoDB Example</description>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.0.0.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
application.properties :
#mongodb properties
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=otp
Creating the model class, which represents the MongoDB document.
Item.java
package com.onlinetutorialspoint.docs;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;
@Document(collection = "item")
public class Item {
@Id
private String id;
private long itemId;
private String serialNumber;
private String category;
private String name;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public long getItemId() {
return itemId;
}
public void setItemId(long itemId) {
this.itemId = itemId;
}
public String getSerialNumber() {
return serialNumber;
}
public void setSerialNumber(String serialNumber) {
this.serialNumber = serialNumber;
}
public String getCategory() {
return category;
}
public void setCategory(String category) {
this.category = category;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
Creating a Spring Data + MongoDB repository.
Item Repository :
package com.onlinetutorialspoint.repository;
import com.onlinetutorialspoint.docs.Item;
import org.springframework.data.mongodb.repository.MongoRepository;
import java.util.List;
public interface ItemRepository extends MongoRepository<Item, String> {
List<Item> findByCategory(String category);
Item findByItemId(long itemId);
}
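Spring Data derives the two finder queries above from the method names alone. The same lookup can also be written with an explicit @Query annotation; a minimal sketch (the ItemQueryRepository and findItemsInCategory names are ours, not part of the original project):

package com.onlinetutorialspoint.repository;

import com.onlinetutorialspoint.docs.Item;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.data.mongodb.repository.Query;

import java.util.List;

// Hypothetical variant of ItemRepository, for illustration only
public interface ItemQueryRepository extends MongoRepository<Item, String> {
    // Derived query: Spring Data builds the criteria from the method name
    List<Item> findByCategory(String category);

    // Same lookup, with the MongoDB criteria spelled out explicitly
    @Query("{ 'category' : ?0 }")
    List<Item> findItemsInCategory(String category);
}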
Creating the service class, responsible for all the CRUD operations.
ItemService.java
package com.onlinetutorialspoint.service;
import com.onlinetutorialspoint.docs.Item;
import com.onlinetutorialspoint.repository.ItemRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.List;
@Service
public class ItemService {
@Autowired
ItemRepository itemRepo;
public List<Item> getAllItems(){
return itemRepo.findAll();
}
/*Getting a specific item by category from collection*/
public List<Item> getItemByCategory(String category){
List<Item> item = itemRepo.findByCategory(category);
return item;
}
/*Getting a specific item by item id from collection*/
public Item getItemByItemId(long itemId){
Item item = itemRepo.findByItemId(itemId);
return item;
}
/*Adding/inserting an item into collection*/
public Item addItem(long id,String serialNumber, String name,String category) {
Item item = new Item();
item.setCategory(category);
item.setItemId(id);
item.setName(name);
item.setSerialNumber(serialNumber);
return itemRepo.save(item);
}
/*delete an item from collection*/
public int deleteItem(long itemId){
Item item = itemRepo.findByItemId(itemId);
if(item != null){
itemRepo.delete(item);
return 1;
}
return -1;
}
}
Create a Spring Boot REST Controller :
ItemController.java
package com.onlinetutorialspoint.controller;
import com.onlinetutorialspoint.docs.Item;
import com.onlinetutorialspoint.service.ItemService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;
import java.util.List;
@RestController
public class ItemController {
@Autowired
ItemService itemService;
@RequestMapping("/getAllItems")
@ResponseBody
public List<Item> getItems(){
return itemService.getAllItems();
}
@RequestMapping("/getItem")
@ResponseBody
public List<Item> getItem(@RequestParam("category") String category){
return itemService.getItemByCategory(category);
}
@RequestMapping("/getItemById")
@ResponseBody
public Item getItemById(@RequestParam("item") long item){
return itemService.getItemByItemId(item);
}
@RequestMapping("/addItem")
@ResponseBody
public String addItem(@RequestParam("itemId") long itemId,@RequestParam("serialNumber") String serialNumber,
@RequestParam("name") String name,
@RequestParam("category") String category){
if(itemService.addItem(itemId,serialNumber,name,category) != null){
return "Item Added Successfully";
}else{
return "Something went wrong !";
}
}
@RequestMapping("/deleteItem")
@ResponseBody
public String deleteItem(@RequestParam("itemId") int itemId){
if(itemService.deleteItem(itemId) == 1){
return "Item Deleted Successfully";
}else{
return "Something went wrong !";
}
}
}
Application.java
package com.onlinetutorialspoint;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
mvn clean install
mvn spring-boot:run
[INFO] --- spring-boot-maven-plugin:2.0.0.RELEASE:run (default-cli) @ SpringBoot_MongoDB_Example ---
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.0.RELEASE)
2018-03-09 09:00:09.575 INFO 884 --- [ main] com.onlinetutorialspoint.Application : Starting Application on DESKTOP-RN4SMHT with PID 884 (E:\work\SpringBoot_MongoDB_Ex
ample\target\classes started by Lenovo in E:\work\SpringBoot_MongoDB_Example)
2018-03-09 09:00:09.601 INFO 884 --- [ main] com.onlinetutorialspoint.Application : No active profile set, falling back to default profiles: default
2018-03-09 09:00:10.033 INFO 884 --- [ main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebS
erverApplicationContext@3d574a8d: startup date [Fri Mar 09 09:00:10 IST 2018]; root of context hierarchy
2018-03-09 09:00:13.358 INFO 884 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2018-03-09 09:00:13.465 INFO 884 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2018-03-09 09:00:13.466 INFO 884 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.28
.........
.........
Output :
Getting all Items :
http://localhost:8080/getAllItems
Get Item By a specific item Id :
http://localhost:8080/getItemById?item=2
Get Item By Category:
http://localhost:8080/getItem?category=Books
Insert item :
http://localhost:8080/addItem?itemId=5&serialNumber=28&name=Sony&category=Television
Delete Item :
http://localhost:8080/deleteItem?itemId=3
After Insert and Delete an item :
Spring Boot with Spring Data Reference
MongoDB Official Tutorial
Happy Learning 🙂
Spring Boot MockMvc JUnit Test Example
Spring Boot In Memory Basic Authentication Security
Spring Boot Hazelcast Cache Example
Spring Boot JdbcTemplate CRUD Operations Mysql
Spring Boot Redis Cache Example – Redis Server
Spring boot exception handling rest service (CRUD) operations
Spring Boot Soap WebServices Example
Spring Boot Redis Data Example CRUD Operations
Spring Boot Multiple Data Sources Example
How to setup or install MongoDB on Windows 10
Spring Boot DataRest Example RepositoryRestResource
How To Change Spring Boot Context Path
Spring Boot Validation Login Form Example
How to set Spring Boot Tomcat session timeout
Spring Boot H2 Database + JDBC Template Example
| [
{
"code": null,
"e": 158,
"s": 123,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 172,
"s": 158,
"text": "Java Examples"
},
{
"code": null,
"e": 183,
"s": 172,
"text": "C Examples"
},
{
"code": null,
"e": 195,
"s": 183,
"text": "C Tutorials"
},
{
"code": null,
"e": 199,
"s": 195,
"text": "aws"
},
{
"code": null,
"e": 234,
"s": 199,
"text": "JAVAEXCEPTIONSCOLLECTIONSSWINGJDBC"
},
{
"code": null,
"e": 245,
"s": 234,
"text": "EXCEPTIONS"
},
{
"code": null,
"e": 257,
"s": 245,
"text": "COLLECTIONS"
},
{
"code": null,
"e": 263,
"s": 257,
"text": "SWING"
},
{
"code": null,
"e": 268,
"s": 263,
"text": "JDBC"
},
{
"code": null,
"e": 275,
"s": 268,
"text": "JAVA 8"
},
{
"code": null,
"e": 282,
"s": 275,
"text": "SPRING"
},
{
"code": null,
"e": 294,
"s": 282,
"text": "SPRING BOOT"
},
{
"code": null,
"e": 304,
"s": 294,
"text": "HIBERNATE"
},
{
"code": null,
"e": 311,
"s": 304,
"text": "PYTHON"
},
{
"code": null,
"e": 315,
"s": 311,
"text": "PHP"
},
{
"code": null,
"e": 322,
"s": 315,
"text": "JQUERY"
},
{
"code": null,
"e": 357,
"s": 322,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 371,
"s": 357,
"text": "Java Examples"
},
{
"code": null,
"e": 382,
"s": 371,
"text": "C Examples"
},
{
"code": null,
"e": 394,
"s": 382,
"text": "C Tutorials"
},
{
"code": null,
"e": 398,
"s": 394,
"text": "aws"
},
{
"code": null,
"e": 492,
"s": 398,
"text": "In this tutorial, we are going to show how to work with Spring Boot MongoDB with Spring Data."
},
{
"code": null,
"e": 518,
"s": 492,
"text": "Spring Boot 2.0.0.RELEASE"
},
{
"code": null,
"e": 555,
"s": 518,
"text": "Spring Boot Started Data Mongo 3.6.3"
},
{
"code": null,
"e": 561,
"s": 555,
"text": "Maven"
},
{
"code": null,
"e": 568,
"s": 561,
"text": "Java 8"
},
{
"code": null,
"e": 707,
"s": 568,
"text": "Install or setup MongoDB if you do not have and execute the below commands in MongoDB shell to create MongoDB document and inserting data."
},
{
"code": null,
"e": 1123,
"s": 707,
"text": "> use otp\n> db.item.insert({\n itemId:1,\n serialNumber:82,\n category:\"Books\",\n name:\"MongoDB in Action\"\n})\n> db.item.insert({\n itemId:2,\n serialNumber:20,\n category:\"Mobiles\",\n name:\"iPhone6\"\n})\n> db.item.insert({\n itemId:3,\n serialNumber:84,\n category:\"Books\",\n name:\"Spring in Action\"\n})\n> db.item.insert({\n itemId:4,\n serialNumber:24,\n category:\"Mobiles\",\n name:\"Samsung Galaxy\"\n})"
},
{
"code": null,
"e": 1176,
"s": 1123,
"text": "Created item document and inserted four items in it."
},
{
"code": null,
"e": 1196,
"s": 1176,
"text": "Project Structure :"
},
{
"code": null,
"e": 1206,
"s": 1196,
"text": "pom.xml :"
},
{
"code": null,
"e": 2722,
"s": 1206,
"text": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n <modelVersion>4.0.0</modelVersion>\n\n <groupId>com.onlinetutorialspoint</groupId>\n <artifactId>SpringBoot_MongoDB_Example</artifactId>\n <version>0.0.1-SNAPSHOT</version>\n <packaging>jar</packaging>\n <name>SpringBoot_MongoDB_Example</name>\n <description>Spring Boot MongoDB Example</description>\n <parent>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-parent</artifactId>\n <version>2.0.0.RELEASE</version>\n <relativePath/> <!-- lookup parent from repository -->\n </parent>\n <properties>\n <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>\n <java.version>1.8</java.version>\n </properties>\n <dependencies>\n <dependency>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-data-mongodb</artifactId>\n </dependency>\n <dependency>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-web</artifactId>\n </dependency>\n </dependencies>\n <build>\n <plugins>\n <plugin>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-maven-plugin</artifactId>\n </plugin>\n </plugins>\n </build>\n</project>\n"
},
{
"code": null,
"e": 2747,
"s": 2722,
"text": "application properties :"
},
{
"code": null,
"e": 2867,
"s": 2747,
"text": "#mongodb properties\nspring.data.mongodb.host=localhost\nspring.data.mongodb.port=27017\nspring.data.mongodb.database=otp\n"
},
{
"code": null,
"e": 2922,
"s": 2867,
"text": "Creating Model, which represents the MongoDB document."
},
{
"code": null,
"e": 2932,
"s": 2922,
"text": "Item.java"
},
{
"code": null,
"e": 4062,
"s": 2932,
"text": "package com.onlinetutorialspoint.docs;\n\nimport org.springframework.data.annotation.Id;\nimport org.springframework.data.mongodb.core.index.Indexed;\nimport org.springframework.data.mongodb.core.mapping.Document;\n\n@Document(collection = \"item\")\npublic class Item {\n @Id\n private String id;\n\n private long itemId;\n\n private String serialNumber;\n\n private String category;\n\n private String name;\n\n public String getId() {\n return id;\n }\n\n public void setId(String id) {\n this.id = id;\n }\n\n public long getItemId() {\n return itemId;\n }\n\n public void setItemId(long itemId) {\n this.itemId = itemId;\n }\n\n public String getSerialNumber() {\n return serialNumber;\n }\n\n public void setSerialNumber(String serialNumber) {\n this.serialNumber = serialNumber;\n }\n\n public String getCategory() {\n return category;\n }\n\n public void setCategory(String category) {\n this.category = category;\n }\n\n public String getName() {\n return name;\n }\n\n public void setName(String name) {\n this.name = name;\n }\n}\n"
},
{
"code": null,
"e": 4107,
"s": 4062,
"text": "Creating a Spring Data + MongoDB repository."
},
{
"code": null,
"e": 4125,
"s": 4107,
"text": "Item Repository :"
},
{
"code": null,
"e": 4463,
"s": 4125,
"text": "package com.onlinetutorialspoint.repository;\n\nimport com.onlinetutorialspoint.docs.Item;\nimport org.springframework.data.mongodb.repository.MongoRepository;\n\nimport java.util.List;\n\npublic interface ItemRepository extends MongoRepository<Item,Long> {\n List<Item> findByCategory(String category);\n Item findByItemId(long itemId);\n}\n"
},
{
"code": null,
"e": 4526,
"s": 4463,
"text": "Creating Service class: Responsible to do all CRUD operations."
},
{
"code": null,
"e": 4543,
"s": 4526,
"text": "ItemService.java"
},
{
"code": null,
"e": 5967,
"s": 4543,
"text": "package com.onlinetutorialspoint.service;\n\nimport com.onlinetutorialspoint.docs.Item;\nimport com.onlinetutorialspoint.repository.ItemRepository;\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.stereotype.Service;\n\nimport java.util.List;\n\n@Service\npublic class ItemService {\n @Autowired\n ItemRepository itemRepo;\n public List<Item> getAllItems(){\n return itemRepo.findAll();\n\n }\n\n /*Getting a specific item by category from collection*/\n public List<Item> getItemByCategory(String category){\n List<Item> item = itemRepo.findByCategory(category);\n return item;\n }\n\n /*Getting a specific item by item id from collection*/\n public Item getItemByItemId(long itemId){\n Item item = itemRepo.findByItemId(itemId);\n return item;\n }\n /*Adding/inserting an item into collection*/\n public Item addItem(long id,String serialNumber, String name,String category) {\n Item item = new Item();\n item.setCategory(category);\n item.setItemId(id);\n item.setName(name);\n item.setSerialNumber(serialNumber);\n return itemRepo.save(item);\n }\n /*delete an item from collection*/\n public int deleteItem(long itemId){\n Item item = itemRepo.findByItemId(itemId);\n if(item != null){\n itemRepo.delete(item);\n return 1;\n }\n return -1;\n }\n}\n"
},
{
"code": null,
"e": 6006,
"s": 5967,
"text": "Create a Spring Boot Rest Controller :"
},
{
"code": null,
"e": 6026,
"s": 6006,
"text": "ItemController.java"
},
{
"code": null,
"e": 7879,
"s": 6026,
"text": "package com.onlinetutorialspoint.controller;\n\nimport com.onlinetutorialspoint.docs.Item;\nimport com.onlinetutorialspoint.service.ItemService;\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.web.bind.annotation.RequestMapping;\nimport org.springframework.web.bind.annotation.RequestParam;\nimport org.springframework.web.bind.annotation.ResponseBody;\nimport org.springframework.web.bind.annotation.RestController;\n\nimport java.util.List;\n\n@RestController\npublic class ItemController {\n\n @Autowired\n ItemService itemService;\n @RequestMapping(\"/getAllItems\")\n @ResponseBody\n public List<Item> getItems(){\n return itemService.getAllItems();\n }\n\n @RequestMapping(\"/getItem\")\n @ResponseBody\n public List<Item> getItem(@RequestParam(\"category\") String category){\n return itemService.getItemByCategory(category);\n }\n\n @RequestMapping(\"/getItemById\")\n @ResponseBody\n public Item getItemById(@RequestParam(\"item\") long item){\n return itemService.getItemByItemId(item);\n }\n\n @RequestMapping(\"/addItem\")\n @ResponseBody\n public String addItem(@RequestParam(\"itemId\") long itemId,@RequestParam(\"serialNumber\") String serialNumber,\n @RequestParam(\"name\") String name,\n @RequestParam(\"category\") String category){\n if(itemService.addItem(itemId,serialNumber,name,category) != null){\n return \"Item Added Successfully\";\n }else{\n return \"Something went wrong !\";\n }\n }\n @RequestMapping(\"/deteteItem\")\n @ResponseBody\n public String deteteItem(@RequestParam(\"itemId\") int itemId){\n if(itemService.deleteItem(itemId) == 1){\n return \"Item Deleted Successfully\";\n }else{\n return \"Something went wrong !\";\n }\n }\n}\n"
},
{
"code": null,
"e": 7896,
"s": 7879,
"text": "Application.java"
},
{
"code": null,
"e": 8205,
"s": 7896,
"text": "package com.onlinetutorialspoint;\n\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\n\n@SpringBootApplication\npublic class Application {\n\n public static void main(String[] args) {\n SpringApplication.run(Application.class, args);\n }\n}\n"
},
{
"code": null,
"e": 9786,
"s": 8205,
"text": "mvn clean install\nmvn spring-boot:run\n\n[INFO] --- spring-boot-maven-plugin:2.0.0.RELEASE:run (default-cli) @ SpringBoot_MongoDB_Example ---\n\n . ____ _ __ _ _\n /\\\\ / ___'_ __ _ _(_)_ __ __ _ \\ \\ \\ \\\n( ( )\\___ | '_ | '_| | '_ \\/ _` | \\ \\ \\ \\\n \\\\/ ___)| |_)| | | | | || (_| | ) ) ) )\n ' |____| .__|_| |_|_| |_\\__, | / / / /\n =========|_|==============|___/=/_/_/_/\n :: Spring Boot :: (v2.0.0.RELEASE)\n\n2018-03-09 09:00:09.575 INFO 884 --- [ main] com.onlinetutorialspoint.Application : Starting Application on DESKTOP-RN4SMHT with PID 884 (E:\\work\\SpringBoot_MongoDB_Ex\nample\\target\\classes started by Lenovo in E:\\work\\SpringBoot_MongoDB_Example)\n2018-03-09 09:00:09.601 INFO 884 --- [ main] com.onlinetutorialspoint.Application : No active profile set, falling back to default profiles: default\n2018-03-09 09:00:10.033 INFO 884 --- [ main] ConfigServletWebServerApplicationContext : Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebS\nerverApplicationContext@3d574a8d: startup date [Fri Mar 09 09:00:10 IST 2018]; root of context hierarchy\n2018-03-09 09:00:13.358 INFO 884 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)\n2018-03-09 09:00:13.465 INFO 884 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]\n2018-03-09 09:00:13.466 INFO 884 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.28\n.........\n........."
},
{
"code": null,
"e": 9795,
"s": 9786,
"text": "Output :"
},
{
"code": null,
"e": 9815,
"s": 9795,
"text": "Getting all Items :"
},
{
"code": null,
"e": 9849,
"s": 9815,
"text": "http://localhost:8080/getAllItems"
},
{
"code": null,
"e": 9882,
"s": 9849,
"text": "Get Item By a specific item Id :"
},
{
"code": null,
"e": 9923,
"s": 9882,
"text": "http://localhost:8080/getItemById?item=2"
},
{
"code": null,
"e": 9945,
"s": 9923,
"text": "Get Item By Category:"
},
{
"code": null,
"e": 9990,
"s": 9945,
"text": "http://localhost:8080/getItem?category=Books"
},
{
"code": null,
"e": 10004,
"s": 9990,
"text": "Insert item :"
},
{
"code": null,
"e": 10089,
"s": 10004,
"text": "http://localhost:8080/addItem?itemId=5&serialNumber=28&name=Sony&category=Television"
},
{
"code": null,
"e": 10103,
"s": 10089,
"text": "Delete Item :"
},
{
"code": null,
"e": 10145,
"s": 10103,
"text": "http://localhost:8080/deteteItem?itemId=3"
},
{
"code": null,
"e": 10179,
"s": 10145,
"text": "After Insert and Delete an item :"
},
{
"code": null,
"e": 10220,
"s": 10181,
"text": "Spring Boot with Spring Data Reference"
},
{
"code": null,
"e": 10246,
"s": 10220,
"text": "MongoDB Official Tutorial"
},
{
"code": null,
"e": 10263,
"s": 10246,
"text": "Happy Learning 🙂"
},
{
"code": null,
"e": 10341,
"s": 10263,
"text": "\n\nSpringBoot MongoDB + Spring Data Example\n\nFile size: 93 KB\nDownloads: 1023\n"
},
{
"code": null,
"e": 11025,
"s": 10341,
"text": "\nSpring Boot MockMvc JUnit Test Example\nSpring Boot In Memory Basic Authentication Security\nSpring Boot Hazelcast Cache Example\nSpring Boot JdbcTemplate CRUD Operations Mysql\nSpring Boot Redis Cache Example – Redis Server\nSpring boot exception handling rest service (CRUD) operations\nSpring Boot Soap WebServices Example\nSpring Boot Redis Data Example CRUD Operations\nSpring Boot Multiple Data Sources Example\nHow to setup or install MongoDB on Windows 10\nSpring Boot DataRest Example RepositoryRestResource\nHow To Change Spring Boot Context Path\nSpring Boot Validation Login Form Example\nHow to set Spring Boot Tomcat session timeout\nSpring Boot H2 Database + JDBC Template Example\n"
},
{
"code": null,
"e": 11064,
"s": 11025,
"text": "Spring Boot MockMvc JUnit Test Example"
},
{
"code": null,
"e": 11116,
"s": 11064,
"text": "Spring Boot In Memory Basic Authentication Security"
},
{
"code": null,
"e": 11152,
"s": 11116,
"text": "Spring Boot Hazelcast Cache Example"
},
{
"code": null,
"e": 11199,
"s": 11152,
"text": "Spring Boot JdbcTemplate CRUD Operations Mysql"
},
{
"code": null,
"e": 11246,
"s": 11199,
"text": "Spring Boot Redis Cache Example – Redis Server"
},
{
"code": null,
"e": 11308,
"s": 11246,
"text": "Spring boot exception handling rest service (CRUD) operations"
},
{
"code": null,
"e": 11345,
"s": 11308,
"text": "Spring Boot Soap WebServices Example"
},
{
"code": null,
"e": 11392,
"s": 11345,
"text": "Spring Boot Redis Data Example CRUD Operations"
},
{
"code": null,
"e": 11434,
"s": 11392,
"text": "Spring Boot Multiple Data Sources Example"
},
{
"code": null,
"e": 11480,
"s": 11434,
"text": "How to setup or install MongoDB on Windows 10"
},
{
"code": null,
"e": 11532,
"s": 11480,
"text": "Spring Boot DataRest Example RepositoryRestResource"
},
{
"code": null,
"e": 11571,
"s": 11532,
"text": "How To Change Spring Boot Context Path"
},
{
"code": null,
"e": 11613,
"s": 11571,
"text": "Spring Boot Validation Login Form Example"
},
{
"code": null,
"e": 11659,
"s": 11613,
"text": "How to set Spring Boot Tomcat session timeout"
},
{
"code": null,
"e": 11707,
"s": 11659,
"text": "Spring Boot H2 Database + JDBC Template Example"
},
{
"code": null,
"e": 11842,
"s": 11707,
"text": "\n\n\n\n\n\nSAtria\nOctober 4, 2019 at 12:48 pm - Reply \n\nHow to setup two datasource (mongodb different connection) in spring boot crud\n\n\n\n\n"
},
{
"code": null,
"e": 11975,
"s": 11842,
"text": "\n\n\n\n\nSAtria\nOctober 4, 2019 at 12:48 pm - Reply \n\nHow to setup two datasource (mongodb different connection) in spring boot crud\n\n\n\n"
},
{
"code": null,
"e": 12054,
"s": 11975,
"text": "How to setup two datasource (mongodb different connection) in spring boot crud"
},
{
"code": null,
"e": 12060,
"s": 12058,
"text": "Δ"
},
{
"code": null,
"e": 12087,
"s": 12060,
"text": " Spring Boot – Hello World"
},
{
"code": null,
"e": 12114,
"s": 12087,
"text": " Spring Boot – MVC Example"
},
{
"code": null,
"e": 12148,
"s": 12114,
"text": " Spring Boot- Change Context Path"
},
{
"code": null,
"e": 12189,
"s": 12148,
"text": " Spring Boot – Change Tomcat Port Number"
},
{
"code": null,
"e": 12234,
"s": 12189,
"text": " Spring Boot – Change Tomcat to Jetty Server"
},
{
"code": null,
"e": 12272,
"s": 12234,
"text": " Spring Boot – Tomcat session timeout"
},
{
"code": null,
"e": 12306,
"s": 12272,
"text": " Spring Boot – Enable Random Port"
},
{
"code": null,
"e": 12337,
"s": 12306,
"text": " Spring Boot – Properties File"
},
{
"code": null,
"e": 12371,
"s": 12337,
"text": " Spring Boot – Beans Lazy Loading"
},
{
"code": null,
"e": 12404,
"s": 12371,
"text": " Spring Boot – Set Favicon image"
},
{
"code": null,
"e": 12437,
"s": 12404,
"text": " Spring Boot – Set Custom Banner"
},
{
"code": null,
"e": 12477,
"s": 12437,
"text": " Spring Boot – Set Application TimeZone"
},
{
"code": null,
"e": 12502,
"s": 12477,
"text": " Spring Boot – Send Mail"
},
{
"code": null,
"e": 12533,
"s": 12502,
"text": " Spring Boot – FileUpload Ajax"
},
{
"code": null,
"e": 12557,
"s": 12533,
"text": " Spring Boot – Actuator"
},
{
"code": null,
"e": 12603,
"s": 12557,
"text": " Spring Boot – Actuator Database Health Check"
},
{
"code": null,
"e": 12626,
"s": 12603,
"text": " Spring Boot – Swagger"
},
{
"code": null,
"e": 12653,
"s": 12626,
"text": " Spring Boot – Enable CORS"
},
{
"code": null,
"e": 12699,
"s": 12653,
"text": " Spring Boot – External Apache ActiveMQ Setup"
},
{
"code": null,
"e": 12739,
"s": 12699,
"text": " Spring Boot – Inmemory Apache ActiveMq"
},
{
"code": null,
"e": 12768,
"s": 12739,
"text": " Spring Boot – Scheduler Job"
},
{
"code": null,
"e": 12802,
"s": 12768,
"text": " Spring Boot – Exception Handling"
},
{
"code": null,
"e": 12832,
"s": 12802,
"text": " Spring Boot – Hibernate CRUD"
},
{
"code": null,
"e": 12868,
"s": 12832,
"text": " Spring Boot – JPA Integration CRUD"
},
{
"code": null,
"e": 12901,
"s": 12868,
"text": " Spring Boot – JPA DataRest CRUD"
},
{
"code": null,
"e": 12934,
"s": 12901,
"text": " Spring Boot – JdbcTemplate CRUD"
},
{
"code": null,
"e": 12978,
"s": 12934,
"text": " Spring Boot – Multiple Data Sources Config"
},
{
"code": null,
"e": 13012,
"s": 12978,
"text": " Spring Boot – JNDI Configuration"
},
{
"code": null,
"e": 13044,
"s": 13012,
"text": " Spring Boot – H2 Database CRUD"
},
{
"code": null,
"e": 13072,
"s": 13044,
"text": " Spring Boot – MongoDB CRUD"
},
{
"code": null,
"e": 13103,
"s": 13072,
"text": " Spring Boot – Redis Data CRUD"
},
{
"code": null,
"e": 13144,
"s": 13103,
"text": " Spring Boot – MVC Login Form Validation"
},
{
"code": null,
"e": 13178,
"s": 13144,
"text": " Spring Boot – Custom Error Pages"
},
{
"code": null,
"e": 13203,
"s": 13178,
"text": " Spring Boot – iText PDF"
},
{
"code": null,
"e": 13237,
"s": 13203,
"text": " Spring Boot – Enable SSL (HTTPs)"
},
{
"code": null,
"e": 13273,
"s": 13237,
"text": " Spring Boot – Basic Authentication"
},
{
"code": null,
"e": 13319,
"s": 13273,
"text": " Spring Boot – In Memory Basic Authentication"
},
{
"code": null,
"e": 13370,
"s": 13319,
"text": " Spring Boot – Security MySQL Database Integration"
},
{
"code": null,
"e": 13412,
"s": 13370,
"text": " Spring Boot – Redis Cache – Redis Server"
},
{
"code": null,
"e": 13443,
"s": 13412,
"text": " Spring Boot – Hazelcast Cache"
},
{
"code": null,
"e": 13466,
"s": 13443,
"text": " Spring Boot – EhCache"
},
{
"code": null,
"e": 13496,
"s": 13466,
"text": " Spring Boot – Kafka Producer"
},
{
"code": null,
"e": 13526,
"s": 13496,
"text": " Spring Boot – Kafka Consumer"
},
{
"code": null,
"e": 13575,
"s": 13526,
"text": " Spring Boot – Kafka JSON Message to Kafka Topic"
},
{
"code": null,
"e": 13609,
"s": 13575,
"text": " Spring Boot – RabbitMQ Publisher"
},
{
"code": null,
"e": 13642,
"s": 13609,
"text": " Spring Boot – RabbitMQ Consumer"
},
{
"code": null,
"e": 13671,
"s": 13642,
"text": " Spring Boot – SOAP Consumer"
},
{
"code": null,
"e": 13703,
"s": 13671,
"text": " Spring Boot – Soap WebServices"
},
{
"code": null,
"e": 13740,
"s": 13703,
"text": " Spring Boot – Batch Csv to Database"
},
{
"code": null,
"e": 13769,
"s": 13740,
"text": " Spring Boot – Eureka Server"
},
{
"code": null,
"e": 13798,
"s": 13769,
"text": " Spring Boot – MockMvc JUnit"
}
] |
PHP | Uploading File - GeeksforGeeks | 12 May, 2018
Have you ever wondered how websites build their file-uploading systems in PHP? Here we will come to know about the file-uploading process. A question you may come up with: 'Are we able to upload any kind of file with this system?'. The answer is yes; we can upload files with different types of extensions. Let's make an HTML form for uploading the file to the server.
index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>File Upload Form</title>
</head>
<body>
    <form action="file-upload-manager.php" method="post" enctype="multipart/form-data">
        <!-- multipart/form-data ensures that form data is going to be encoded as MIME data -->
        <h2>Upload File</h2>
        <label for="fileSelect">Filename:</label>
        <input type="file" name="photo" id="fileSelect">
        <input type="submit" name="submit" value="Upload">
        <!-- name of the input fields are going to be used in our php script -->
        <p><strong>Note:</strong> Only .jpg, .jpeg, .png formats allowed to a max size of 2MB.</p>
    </form>
</body>
</html>
Now it is time to write a PHP script which is able to handle the file-uploading system.
file-upload-manager.php
<?php
// Check if the form was submitted
if ($_SERVER["REQUEST_METHOD"] == "POST") {
    // Check if file was uploaded without errors
    if (isset($_FILES["photo"]) && $_FILES["photo"]["error"] == 0) {
        $allowed_ext = array("jpg" => "image/jpg",
                             "jpeg" => "image/jpeg",
                             "gif" => "image/gif",
                             "png" => "image/png");

        $file_name = $_FILES["photo"]["name"];
        $file_type = $_FILES["photo"]["type"];
        $file_size = $_FILES["photo"]["size"];

        // Verify file extension
        $ext = pathinfo($file_name, PATHINFO_EXTENSION);
        if (!array_key_exists($ext, $allowed_ext))
            die("Error: Please select a valid file format.");

        // Verify file size - 2MB max
        $maxsize = 2 * 1024 * 1024;
        if ($file_size > $maxsize)
            die("Error: File size is larger than the allowed limit.");

        // Verify MIME type of the file
        if (in_array($file_type, $allowed_ext)) {
            // Check whether file exists before uploading it
            if (file_exists("uploads/" . $_FILES["photo"]["name"]))
                echo $_FILES["photo"]["name"] . " already exists.";
            else {
                move_uploaded_file($_FILES["photo"]["tmp_name"],
                                   "uploads/" . $_FILES["photo"]["name"]);
                echo "Your file was uploaded successfully.";
            }
        } else {
            echo "Error: Please try again.";
        }
    } else {
        echo "Error: " . $_FILES["photo"]["error"];
    }
}
?>
In the above script, once we submit the form, we can later access the information via the PHP superglobal associative array $_FILES. Apart from using the $_FILES array, many built-in functions play a major role. After uploading a file, the script checks the request method of the server; if it is POST then it proceeds, otherwise the system throws an error. Later on, we access the $_FILES array to get the file name, file size, and type of the file. Once we have those pieces of information, we validate the size and type of the file. In the end, we search the folder where the file is to be uploaded to check if the file already exists or not. If not, we use move_uploaded_file() to move the file from its temporary location to the desired directory on the server, and we are done.
Output
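For reference, each uploaded file arrives in the $_FILES superglobal as an associative array with fixed keys. A quick way to inspect it (our debugging snippet, not part of the upload script):

<?php
// Prints the metadata PHP collects for the upload:
// name, type, tmp_name, error and size
print_r($_FILES["photo"]);
?>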
PHP-file-handling
PHP
Web technologies Questions
PHP
How to Insert Form Data into Database using PHP ?
How to execute PHP code using command line ?
How to pop an alert message box using PHP ?
PHP in_array() Function
How to convert array to string in PHP ?
Installation of Node.js on Linux
How to insert spaces/tabs in text using HTML/CSS?
How to set the default value for an HTML <select> element ?
File uploading in React.js
How to set input type date in dd-mm-yyyy format using HTML ? | [
{
"code": null,
"e": 40527,
"s": 40499,
"text": "\n12 May, 2018"
},
{
"code": null,
"e": 40914,
"s": 40527,
"text": "Have you ever wondered how websites build their system of file uploading in PHP? Here we will come to know about the file uploading process. A question which you can come up with – ‘Are we able to upload any kind of file with this system?’. The answer is yes, we can upload files with different types of extensions.Let’s make an HTML form for uploading the file to the server.index.html"
},
{
"code": "<!DOCTYPE html><html lang=\"en\"><head> <meta charset=\"UTF-8\"> <title>File Upload Form</title></head><body> <form action=\"file-upload-manager.php\" method=\"post\" enctype=\"multipart/form-data\"> <!--multipart/form-data ensures that form data is going to be encoded as MIME data--> <h2>Upload File</h2> <label for=\"fileSelect\">Filename:</label> <input type=\"file\" name=\"photo\" id=\"fileSelect\"> <input type=\"submit\" name=\"submit\" value=\"Upload\"> <!-- name of the input fields are going to be used in our php script--> <p><strong>Note:</strong>Only .jpg, .jpeg, .png formats allowed to a max size of 2MB.</p> </form></body></html>",
"e": 41595,
"s": 40914,
"text": null
},
{
"code": null,
"e": 41701,
"s": 41595,
"text": "Now, time to write a php script which is able to handle the file uploading system.file-upload-manager.php"
},
{
"code": "<?php// Check if the form was submittedif($_SERVER[\"REQUEST_METHOD\"] == \"POST\"){ // Check if file was uploaded without errors if(isset($_FILES[\"photo\"]) && $_FILES[\"photo\"][\"error\"] == 0) { $allowed_ext = array(\"jpg\" => \"image/jpg\", \"jpeg\" => \"image/jpeg\", \"gif\" => \"image/gif\", \"png\" => \"image/png\"); $file_name = $_FILES[\"photo\"][\"name\"]; $file_type = $_FILES[\"photo\"][\"type\"]; $file_size = $_FILES[\"photo\"][\"size\"]; // Verify file extension $ext = pathinfo($filename, PATHINFO_EXTENSION); if (!array_key_exists($ext, $allowed_ext)) die(\"Error: Please select a valid file format.\"); // Verify file size - 2MB max $maxsize = 2 * 1024 * 1024; if ($file_size > $maxsize) die(\"Error: File size is larger than the allowed limit.\"); // Verify MYME type of the file if (in_array($file_type, $allowed_ext)) { // Check whether file exists before uploading it if (file_exists(\"upload/\".$_FILES[\"photo\"][\"name\"])) echo $_FILES[\"photo\"][\"name\"].\" is already exists.\"; else { move_uploaded_file($_FILES[\"photo\"][\"tmp_name\"], \"uploads/\".$_FILES[\"photo\"][\"name\"]); echo \"Your file was uploaded successfully.\"; } } else { echo \"Error: Please try again.\"; } } else { echo \"Error: \". $_FILES[\"photo\"][\"error\"]; }}?>",
"e": 43376,
"s": 41701,
"text": null
},
{
"code": null,
"e": 44226,
"s": 43376,
"text": "In the above script, once we submit the form, later we can access the information via a PHP superglobal associative array $_FILES. Apart form using the $_FILES array, many in-built functions are playing a major role. After we are done with uploading a file, in the script we will check the request method of the server, if it is POST then it will proceed otherwise the system will throw an error. Later on, we have accessed the $_FILES array to get the file name, file size, and type of the file. Once we got those pieces of information, then we validate the size and type of the file. In the end, we search in the folder, where the file is to be uploaded, for checking if the file already exists or not. If not, we have used move_uploaded_file() to move the file from temporary location to the desired directory on the server and we are done.Output"
},
{
"code": null,
"e": 44244,
"s": 44226,
"text": "PHP-file-handling"
},
{
"code": null,
"e": 44248,
"s": 44244,
"text": "PHP"
},
{
"code": null,
"e": 44275,
"s": 44248,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 44279,
"s": 44275,
"text": "PHP"
},
{
"code": null,
"e": 44377,
"s": 44279,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 44386,
"s": 44377,
"text": "Comments"
},
{
"code": null,
"e": 44399,
"s": 44386,
"text": "Old Comments"
},
{
"code": null,
"e": 44449,
"s": 44399,
"text": "How to Insert Form Data into Database using PHP ?"
},
{
"code": null,
"e": 44494,
"s": 44449,
"text": "How to execute PHP code using command line ?"
},
{
"code": null,
"e": 44538,
"s": 44494,
"text": "How to pop an alert message box using PHP ?"
},
{
"code": null,
"e": 44562,
"s": 44538,
"text": "PHP in_array() Function"
},
{
"code": null,
"e": 44602,
"s": 44562,
"text": "How to convert array to string in PHP ?"
},
{
"code": null,
"e": 44635,
"s": 44602,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 44685,
"s": 44635,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
},
{
"code": null,
"e": 44745,
"s": 44685,
"text": "How to set the default value for an HTML <select> element ?"
},
{
"code": null,
"e": 44772,
"s": 44745,
"text": "File uploading in React.js"
}
] |
Node.js crypto.pbkdf2() Method | 11 Oct, 2021
The crypto.pbkdf2() method provides an asynchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A particular HMAC digest algorithm, defined by digest, is applied to derive a key of the required byte length (keylen) from the stated password, salt, and iterations.
Syntax:
crypto.pbkdf2( password, salt, iterations, keylen, digest, callback )
Parameters: This method accepts six parameters as mentioned above and described below:
password: It can hold string, Buffer, TypedArray, or DataView type of data.
salt: It should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. It is of type string, Buffer, TypedArray, or DataView.
iterations: It must be a number and should be set as high as possible. The higher the number of iterations, the more secure the derived key will be, but the longer the call takes to complete.
keylen: It is the required byte length of the derived key and it is of type number.
digest: It is the digest algorithm, of string type.
callback: It is a function with two parameters, namely err and derivedKey.
Return Type: The derived key is not returned directly; it is passed to the callback as derivedKey.
Below examples illustrate the use of the crypto.pbkdf2() method in Node.js:
Example 1:
// Node.js program to demonstrate the
// crypto.pbkdf2() method

// Including crypto module
const crypto = require('crypto');

// Implementing pbkdf2 with all its parameters
crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, derivedKey) => {
    if (err) throw err;

    // Prints derivedKey
    console.log(derivedKey.toString('hex'));
});
Output:
3745e482c6e0ade35da10139e797157f4a5da669dad7d5da88ef87e
47471cc47ed941c7ad618e827304f083f8707f12b7cfdd5f489b782
f10cc269e3c08d59ae
Example 2:
// Node.js program to demonstrate the
// crypto.pbkdf2() method

// Including crypto module
const crypto = require('crypto');

// Implementing pbkdf2 with all its parameters
// but digest is null
crypto.pbkdf2('secret', 'salt', 677, 6, null, (err, derivedKey) => {
    if (err) {
        console.log(err);
    } else {
        // Prints derivedKey without encoding
        console.log(derivedKey);
    }
});
Output: Here, a Buffer is printed, as the derived key is not converted to a string.
<Buffer 71 1e 7b 7b 9b 53>
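Node.js also ships a synchronous variant, crypto.pbkdf2Sync(), which blocks and returns the derived key directly instead of passing it to a callback. A minimal sketch:

// Including crypto module
const crypto = require('crypto');

// Same parameters as Example 1, but the derived key is returned
const derivedKey = crypto.pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');

// Prints derivedKey
console.log(derivedKey.toString('hex'));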
Reference: https://nodejs.org/api/crypto.html#crypto_crypto_pbkdf2_password_salt_iterations_keylen_digest_callback
Node.js-crypto-module
Node.js
Web Technologies
How to update Node.js and NPM to next version ?
Node.js fs.readFileSync() Method
How to update NPM ?
Node.js fs.writeFile() Method
Difference between promise and async await in Node.js
Top 10 Projects For Beginners To Practice HTML and CSS Skills
Difference between var, let and const keywords in JavaScript
How to fetch data from an API in ReactJS ?
How to insert spaces/tabs in text using HTML/CSS?
Roadmap to Learn JavaScript For Beginners | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n11 Oct, 2021"
},
{
"code": null,
"e": 336,
"s": 28,
"text": "The crypto.pbkdf2() method gives an asynchronous Password-Based Key Derivation Function 2 i.e. (PBKDF2) implementation. Moreover, a particular HMAC digest algorithm which is defined by digest is implemented to derive a key of the required byte length (keylen) from the stated password, salt, and iterations."
},
{
"code": null,
"e": 344,
"s": 336,
"text": "Syntax:"
},
{
"code": null,
"e": 414,
"s": 344,
"text": "crypto.pbkdf2( password, salt, iterations, keylen, digest, callback )"
},
{
"code": null,
"e": 501,
"s": 414,
"text": "Parameters: This method accepts six parameters as mentioned above and described below:"
},
{
"code": null,
"e": 578,
"s": 501,
"text": "password: It can holds string, Buffer, TypedArray, or DataView type of data."
},
{
"code": null,
"e": 771,
"s": 578,
"text": "salt: It must be as unique as possible. However, it is recommended that a salt is arbitrary and in any case it is at least 16 bytes long. It is of type string, Buffer, TypedArray, or DataView."
},
{
"code": null,
"e": 1009,
"s": 771,
"text": "iterations: It must be a number and should be set as high as possible. So, the more is the number of iterations, the more secure the derived key will be, but in that case it takes greater amount of time to complete. It is of type number."
},
{
"code": null,
"e": 1085,
"s": 1009,
"text": "keylen: It is the key of the required byte length and it is of type number."
},
{
"code": null,
"e": 1133,
"s": 1085,
"text": "digest: It is digest algorithms of string type."
},
{
"code": null,
"e": 1208,
"s": 1133,
"text": "callback: It is a function with two parameters namely err, and derivedKey."
},
{
"code": null,
"e": 1264,
"s": 1208,
"text": "Return Type: It returns the derived password based key."
},
{
"code": null,
"e": 1335,
"s": 1264,
"text": "Below example illustrate the use of crypto.pbkdf2() method in Node.js:"
},
{
"code": null,
"e": 1346,
"s": 1335,
"text": "Example 1:"
},
{
"code": "// Node.js program to demonstrate the // crypto.pbkdf2() method // Including crypto moduleconst crypto = require('crypto'); // Implementing pbkdf2 with all its parameterscrypto.pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, derivedKey) => { if (err) throw err; // Prints derivedKey console.log(derivedKey.toString('hex'));});",
"e": 1695,
"s": 1346,
"text": null
},
{
"code": null,
"e": 1703,
"s": 1695,
"text": "Output:"
},
{
"code": null,
"e": 1835,
"s": 1703,
"text": "3745e482c6e0ade35da10139e797157f4a5da669dad7d5da88ef87e\n47471cc47ed941c7ad618e827304f083f8707f12b7cfdd5f489b782\nf10cc269e3c08d59ae\n"
},
{
"code": null,
"e": 1846,
"s": 1835,
"text": "Example 2:"
},
{
"code": "// Node.js program to demonstrate the // crypto.pbkdf2() method // Including crypto moduleconst crypto = require('crypto'); // Implementing pbkdf2 with all its parameters// but digest is nullcrypto.pbkdf2('secret', 'salt', 677, 6, null, (err, derivedKey) => { if (err) { console.log(err); } else { // Prints derivedKey without encoding console.log(derivedKey); }}); ",
"e": 2253,
"s": 1846,
"text": null
},
{
"code": null,
"e": 2331,
"s": 2253,
"text": "Output: Here, a buffer is returned as a derived key is not changed to string."
},
{
"code": null,
"e": 2356,
"s": 2331,
"text": "Buffer 71 1e 7b 7b 9b 53"
},
{
"code": null,
"e": 2471,
"s": 2356,
"text": "Reference: https://nodejs.org/api/crypto.html#crypto_crypto_pbkdf2_password_salt_iterations_keylen_digest_callback"
},
{
"code": null,
"e": 2493,
"s": 2471,
"text": "Node.js-crypto-module"
},
{
"code": null,
"e": 2501,
"s": 2493,
"text": "Node.js"
},
{
"code": null,
"e": 2518,
"s": 2501,
"text": "Web Technologies"
},
{
"code": null,
"e": 2616,
"s": 2518,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2664,
"s": 2616,
"text": "How to update Node.js and NPM to next version ?"
},
{
"code": null,
"e": 2697,
"s": 2664,
"text": "Node.js fs.readFileSync() Method"
},
{
"code": null,
"e": 2717,
"s": 2697,
"text": "How to update NPM ?"
},
{
"code": null,
"e": 2747,
"s": 2717,
"text": "Node.js fs.writeFile() Method"
},
{
"code": null,
"e": 2801,
"s": 2747,
"text": "Difference between promise and async await in Node.js"
},
{
"code": null,
"e": 2863,
"s": 2801,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 2924,
"s": 2863,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 2967,
"s": 2924,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 3017,
"s": 2967,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
Oracle Interview Experience | Applications Engineer | 17 Jun, 2020
Round 1:
1) Tell me about yourself
2) Problem Statement:
Reverse the words in the below string, considering all negative test cases (a Java sketch follows the constraints below)
Input: Hello World
Output: olleH dlroW
If the input has more than one space as a separator, then the output should contain as many spaces as the input
If the input has leading and trailing spaces, then the output will also contain those spaces
Input is case sensitive
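A minimal Java sketch (ours, not the code written in the interview) that satisfies all three constraints: it reverses each word in place and copies every space, including leading and trailing ones, straight through:

public class ReverseWords {
    static String reverseEachWord(String s) {
        StringBuilder out = new StringBuilder();
        StringBuilder word = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c == ' ') {
                out.append(word.reverse()); // flush the current word, reversed
                word.setLength(0);
                out.append(c);              // keep the space exactly where it was
            } else {
                word.append(c);
            }
        }
        return out.append(word.reverse()).toString();
    }

    public static void main(String[] args) {
        System.out.println(reverseEachWord("Hello World"));  // olleH dlroW
        System.out.println(reverseEachWord("  Hi  there ")); // "  iH  ereht "
    }
}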
3) SQL queries (answer sketches follow the list below)
There are 2 tables: Employee and Department.
Print the employee name with department name
Print the count of employees having same department with department name
Print the count of employees having the same department
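A sketch of possible answers in SQL; the column names (emp_name, dept_id, dept_name) are assumptions, since the interview did not give the exact schema:

-- 1. Employee name with department name
SELECT e.emp_name, d.dept_name
FROM employee e
JOIN department d ON e.dept_id = d.dept_id;

-- 2. Count of employees per department, with the department name
SELECT d.dept_name, COUNT(*) AS emp_count
FROM employee e
JOIN department d ON e.dept_id = d.dept_id
GROUP BY d.dept_name;

-- 3. Count of employees per department (grouping by the id alone)
SELECT dept_id, COUNT(*) AS emp_count
FROM employee
GROUP BY dept_id;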
Round 2:
1) Problem Statement:
Check if the given string is a palindrome or not. If the string is not a palindrome, then try to make it a palindrome if possible (one interpretation is sketched in Java below)
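A minimal Java sketch (ours), using one common interpretation of "make it a palindrome": the characters can be rearranged into a palindrome only if at most one character has an odd count. ASCII input is assumed:

public class Palindrome {
    static boolean isPalindrome(String s) {
        for (int i = 0, j = s.length() - 1; i < j; i++, j--)
            if (s.charAt(i) != s.charAt(j)) return false;
        return true;
    }

    // True when the characters can be rearranged into a palindrome
    static boolean canBePalindrome(String s) {
        int[] count = new int[256];
        for (char c : s.toCharArray()) count[c]++;
        int odd = 0;
        for (int n : count) if (n % 2 == 1) odd++;
        return odd <= 1;
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("level"));    // true
        System.out.println(canBePalindrome("aabbc")); // true, e.g. "abcba"
        System.out.println(canBePalindrome("abc"));   // false
    }
}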
2) SQL queries (answer sketches follow the list below):
There are 2 tables: Employee and Salary
Print only female employees
Print the female employee who has highest salary
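A sketch of possible answers; the schema (emp_id, emp_name, gender in employee; emp_id, amount in salary) is an assumption:

-- Only female employees
SELECT *
FROM employee
WHERE gender = 'F';

-- Female employee with the highest salary
SELECT e.emp_name, s.amount
FROM employee e
JOIN salary s ON e.emp_id = s.emp_id
WHERE e.gender = 'F'
ORDER BY s.amount DESC
FETCH FIRST 1 ROW ONLY; -- Oracle 12c+; use ROWNUM or LIMIT on other databases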
Round 3:
1) Core java related questions
What is method overloading and overriding, with examples?
How can we achieve inheritance in Java?
What are default methods in Java 8, and when should we use them?
2) Problem Statement (a DP sketch in Java follows the link below)
geeksforgeeks.org/min-cost-path-dp-6/
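A minimal Java sketch (ours) of the classic min-cost-path DP the link describes: reach cell (m, n) from (0, 0) moving right, down, or diagonally, minimising the sum of visited cell costs:

public class MinCostPath {
    static int minCost(int[][] cost, int m, int n) {
        int[][] dp = new int[m + 1][n + 1];
        dp[0][0] = cost[0][0];
        // First column and first row have only one way in
        for (int i = 1; i <= m; i++) dp[i][0] = dp[i - 1][0] + cost[i][0];
        for (int j = 1; j <= n; j++) dp[0][j] = dp[0][j - 1] + cost[0][j];
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++)
                dp[i][j] = cost[i][j] + Math.min(dp[i - 1][j - 1],
                               Math.min(dp[i - 1][j], dp[i][j - 1]));
        return dp[m][n];
    }

    public static void main(String[] args) {
        int[][] cost = {{1, 2, 3}, {4, 8, 2}, {1, 5, 3}};
        System.out.println(minCost(cost, 2, 2)); // 8 (path 1 -> 2 -> 2 -> 3)
    }
}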
3) Glass Puzzle
My friend was throwing a very grand party and wanted to borrow 100 wine glasses from me. I decided to send them through my servant, Harish.
Just to give Harish an incentive to deliver the glasses intact, I offered him 3 paise for every glass delivered safely and threatened to forfeit 9 paise for every glass he broke.
On settlement, Harish received Rs 2.40 from me. How many glasses did Harish safely deliver?
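One way to work it out (our arithmetic, not part of the interview): if s glasses arrived safely, then 3s - 9(100 - s) = 240 paise, so 12s = 1140 and s = 95. Harish delivered 95 glasses safely and broke 5.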
Round 4:
1) Puzzle:
Using only the numbers 1 to 9, arrange them such that
AxBxC = BxGxE = DxExF
A D
B G E
C F
Each letter should be assigned a distinct number
2) Puzzle:
Using the numbers 1 to 16, arrange them so that the sum is 34 for each row, each column, and both diagonals. Write the program as well to solve the puzzle (a verification sketch in Java follows below):
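A minimal Java sketch (ours): it checks one well-known solution, the Duerer magic square, against the 34 constraint for every row, column, and diagonal:

public class MagicSquare {
    public static void main(String[] args) {
        int[][] m = {
            {16,  3,  2, 13},
            { 5, 10, 11,  8},
            { 9,  6,  7, 12},
            { 4, 15, 14,  1}
        };
        int d1 = 0, d2 = 0;
        for (int i = 0; i < 4; i++) {
            int row = 0, col = 0;
            for (int j = 0; j < 4; j++) {
                row += m[i][j];
                col += m[j][i];
            }
            System.out.println("row " + i + " = " + row + ", col " + i + " = " + col); // all 34
            d1 += m[i][i];
            d2 += m[i][3 - i];
        }
        System.out.println("diagonals = " + d1 + ", " + d2); // 34, 34
    }
}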
Round 5:
1) Problem Statement: Merge the sub-arrays if possible (a Java sketch follows below).
Input: List of sub-arrays: [1, 10], [11, 15], [13, 25], [30, 40], [40, 50]
Output: [1, 25], [30, 50]
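A minimal Java sketch (ours). Note that the expected output merges [1, 10] with [11, 15], so ranges that merely touch (end + 1 == start) are treated as one range:

import java.util.*;

public class MergeIntervals {
    public static void main(String[] args) {
        int[][] in = {{1, 10}, {11, 15}, {13, 25}, {30, 40}, {40, 50}};
        // Sort by start so overlapping/touching ranges become adjacent
        Arrays.sort(in, (a, b) -> Integer.compare(a[0], b[0]));
        List<int[]> out = new ArrayList<>();
        for (int[] cur : in) {
            if (!out.isEmpty() && cur[0] <= out.get(out.size() - 1)[1] + 1) {
                // Extend the previous range instead of opening a new one
                int[] last = out.get(out.size() - 1);
                last[1] = Math.max(last[1], cur[1]);
            } else {
                out.add(new int[]{cur[0], cur[1]});
            }
        }
        for (int[] r : out)
            System.out.println("[" + r[0] + ", " + r[1] + "]"); // [1, 25] then [30, 50]
    }
}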
2) Puzzle:
There are 101 coins.
1 coin is a faulty coin.
The faulty coin has a different weight from the other 100 coins.
With a minimum number of comparisons, find whether the faulty coin is heavier or lighter than the other coins (one standard solution is sketched below).
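One standard solution (ours, not from the interview) needs only two comparisons. Set one coin aside and weigh 50 coins against the other 50. If the pans balance, the coin set aside is the faulty one; weigh it against any genuine coin to see whether it is heavier or lighter. If the pans do not balance, split the heavier group of 50 into 25 vs 25: if those balance, the faulty coin is in the lighter group and is lighter; otherwise it is in the heavier group and is heavier.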
Thanks, GeeksforGeeks. It helped me a lot in preparation.
Marketing
Oracle
Interview Experiences
Oracle
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n17 Jun, 2020"
},
{
"code": null,
"e": 41,
"s": 31,
"text": "Round 1: "
},
{
"code": null,
"e": 68,
"s": 41,
"text": "1) Tell me about yourself "
},
{
"code": null,
"e": 92,
"s": 68,
"text": "2) Problem Statement: "
},
{
"code": null,
"e": 163,
"s": 92,
"text": "Reverse the words in below string considering all negative test cases "
},
{
"code": null,
"e": 183,
"s": 163,
"text": "Input: Hello World "
},
{
"code": null,
"e": 205,
"s": 183,
"text": "Output: olleH dlroW "
},
{
"code": null,
"e": 304,
"s": 205,
"text": "If the input have more than 1 space as separator then output should contain as much space as input"
},
{
"code": null,
"e": 395,
"s": 304,
"text": "If the statement have start and trailing spaces then output will also contain those spaces"
},
{
"code": null,
"e": 419,
"s": 395,
"text": "Input is case sensitive"
},
{
"code": null,
"e": 436,
"s": 419,
"text": "3) Sql queries "
},
{
"code": null,
"e": 481,
"s": 436,
"text": "There are 2 tables: Employee and Department."
},
{
"code": null,
"e": 526,
"s": 481,
"text": "Print the employee name with department name"
},
{
"code": null,
"e": 599,
"s": 526,
"text": "Print the count of employees having same department with department name"
},
{
"code": null,
"e": 655,
"s": 599,
"text": "Print the count of employees having the same department"
},
{
"code": null,
"e": 668,
"s": 658,
"text": "Round 2: "
},
{
"code": null,
"e": 691,
"s": 668,
"text": "1) Problem Statement: "
},
{
"code": null,
"e": 815,
"s": 691,
"text": "Check if the given string is palindrome or not. If the string is not palindrome then try to make it palindrome if possible "
},
{
"code": null,
"e": 833,
"s": 815,
"text": "2) SQL queries: "
},
{
"code": null,
"e": 873,
"s": 833,
"text": "There are 2 tables: Employee and Salary"
},
{
"code": null,
"e": 901,
"s": 873,
"text": "Print only female employees"
},
{
"code": null,
"e": 950,
"s": 901,
"text": "Print the female employee who has highest salary"
},
{
"code": null,
"e": 963,
"s": 953,
"text": "Round 3: "
},
{
"code": null,
"e": 996,
"s": 963,
"text": "1) Core java related questions "
},
{
"code": null,
"e": 1051,
"s": 996,
"text": "What is method overloading and overriding with example"
},
{
"code": null,
"e": 1090,
"s": 1051,
"text": "How we can achieve inheritance in java"
},
{
"code": null,
"e": 1149,
"s": 1090,
"text": "What is default methods in java8 and when should we use it"
},
{
"code": null,
"e": 1172,
"s": 1149,
"text": "2) Problem Statement "
},
{
"code": null,
"e": 1210,
"s": 1172,
"text": "geeksforgeeks.org/min-cost-path-dp-6/"
},
{
"code": null,
"e": 1228,
"s": 1210,
"text": "3) Glass Puzzle "
},
{
"code": null,
"e": 1364,
"s": 1228,
"text": "My friend was throwing a very grand party and wanted to borrow from me 100 wine glasses. I decided to send them through my boy servant."
},
{
"code": null,
"e": 1546,
"s": 1364,
"text": "Just to give an incentive to Servant to deliver the glasses intact I offered him 3 paise for every glass delivered safely and threatened to forfeit 9 paise for every glass he broke."
},
{
"code": null,
"e": 1637,
"s": 1546,
"text": "On settlement Harish received Rs 2.40 from me. How many glasses did Harish safely deliver?"
},
{
"code": null,
"e": 1650,
"s": 1640,
"text": "Round 4: "
},
{
"code": null,
"e": 1663,
"s": 1650,
"text": "1) Puzzle: "
},
{
"code": null,
"e": 1720,
"s": 1663,
"text": "Using only 1 to 9 numbers, Arrange the numbers such that"
},
{
"code": null,
"e": 1742,
"s": 1720,
"text": "AxBxC = BxGxE = DxExF"
},
{
"code": null,
"e": 1767,
"s": 1742,
"text": "A D \nB G E\nC F"
},
{
"code": null,
"e": 1813,
"s": 1767,
"text": "Each character should assign distinct numbers"
},
{
"code": null,
"e": 1825,
"s": 1813,
"text": "2) Puzzle: "
},
{
"code": null,
"e": 1957,
"s": 1825,
"text": "Using 1 to 16 numbers, sum would be 34 for each column, rows and for both diagonals. Write the program as well to solve the puzzle:"
},
{
"code": null,
"e": 1967,
"s": 1957,
"text": "Round 5: "
},
{
"code": null,
"e": 2123,
"s": 1967,
"text": "1) Problem Statement: Merge the sub arrays if possible. Input: List of sub arrays: [1, 10], [11, 15], [13, 25], [30, 40], [40, 50]Output: [1, 25], [30, 50]"
},
{
"code": null,
"e": 2134,
"s": 2123,
"text": "2) Puzzle:"
},
{
"code": null,
"e": 2155,
"s": 2134,
"text": "There are 101 coins."
},
{
"code": null,
"e": 2180,
"s": 2155,
"text": "1 coin is a faulty coin."
},
{
"code": null,
"e": 2238,
"s": 2180,
"text": "A Faulty coin has different weight other than 100 coins. "
},
{
"code": null,
"e": 2357,
"s": 2238,
"text": "With a minimum number of comparisons, find that if the faulty coin has higher weight or lower weight than other coins."
},
{
"code": null,
"e": 2415,
"s": 2357,
"text": "Thanks, GeeksforGeeks. It helped me a lot in preparation."
},
{
"code": null,
"e": 2425,
"s": 2415,
"text": "Marketing"
},
{
"code": null,
"e": 2432,
"s": 2425,
"text": "Oracle"
},
{
"code": null,
"e": 2454,
"s": 2432,
"text": "Interview Experiences"
},
{
"code": null,
"e": 2461,
"s": 2454,
"text": "Oracle"
}
] |
Python | Pandas.to_datetime() | 17 Sep, 2018
When a CSV file is imported and a DataFrame is made, the date-time values in the file are read as string objects rather than DateTime objects, and hence it is very tough to perform operations like computing a time difference on a string rather than on a DateTime object. The Pandas to_datetime() method helps to convert a string date-time into a Python DateTime object.
Syntax:
pandas.to_datetime(arg, errors='raise', dayfirst=False, yearfirst=False, utc=None, box=True, format=None, exact=True, unit=None, infer_datetime_format=False, origin='unix', cache=False)
Parameters:
arg: An integer, string, float, list or dict object to convert into a DateTime object.
dayfirst: Boolean value, places day first if True.
yearfirst: Boolean value, places year first if True.
utc: Boolean value, returns time in UTC if True.
format: String input to tell the position of day, month and year.
Return type: Date time object series.
For the link of the CSV file used, click here.
Example #1: String to Date
In the following example, a CSV file is read and the Date column of the DataFrame is converted into a DateTime object from a string object.
# importing pandas package
import pandas as pd

# making data frame from csv file
data = pd.read_csv("todatetime.csv")

# overwriting data after changing format
data["Date"] = pd.to_datetime(data["Date"])

# info of data
data.info()

# display
data
Output: As shown in the image, the data type of the Date column was object, but after using to_datetime() it got converted into a DateTime object.
Before operation-
After Operation-

Example #2: Exception while converting Time
A Time object can also be converted with this method. But since a date isn't specified in the Time column, Pandas will put today's date automatically in that case.
# importing pandas package
import pandas as pd

# making data frame from csv file
data = pd.read_csv("todatetime.csv")

# overwriting data after changing format
data["Time"] = pd.to_datetime(data["Time"])

# info of data
data.info()

# display
data
Output: As shown in the output, a date (2018-07-07), which is today's date, has already been added to the DateTime object.
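As a further illustration (ours, not part of the original article), the format parameter described above removes day/month ambiguity by stating the layout explicitly:

# importing pandas package
import pandas as pd

# parse strings laid out as day-month-year
dates = pd.Series(["07-07-2018", "08-07-2018"])
print(pd.to_datetime(dates, format="%d-%m-%Y"))
# 0   2018-07-07
# 1   2018-07-08
# dtype: datetime64[ns]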
Python pandas-dataFrame
Python pandas-dataFrame-methods
Python-pandas
Python | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n17 Sep, 2018"
},
{
"code": null,
"e": 370,
"s": 28,
"text": "When a csv file is imported and a Data Frame is made, the Date time objects in the file are read as a string object rather a Date Time object and Hence it’s very tough to perform operations like Time difference on a string rather a Date Time object. Pandas to_datetime() method helps to convert string Date time into Python Date time object."
},
{
"code": null,
"e": 378,
"s": 370,
"text": "Syntax:"
},
{
"code": null,
"e": 564,
"s": 378,
"text": "pandas.to_datetime(arg, errors=’raise’, dayfirst=False, yearfirst=False, utc=None, box=True, format=None, exact=True, unit=None, infer_datetime_format=False, origin=’unix’, cache=False)"
},
{
"code": null,
"e": 577,
"s": 564,
"text": " Parameters:"
},
{
"code": null,
"e": 875,
"s": 577,
"text": "arg: An integer, string, float, list or dict object to convert in to Date time object.dayfirst: Boolean value, places day first if True.yearfirst: Boolean value, places year first if True.utc: Boolean value, Returns time in UTC if True.format: String input to tell position of day, month and year."
},
{
"code": null,
"e": 913,
"s": 875,
"text": "Return type: Date time object series."
},
{
"code": null,
"e": 956,
"s": 913,
"text": "For link of the CSV file used, click here."
},
{
"code": null,
"e": 1118,
"s": 956,
"text": "Example #1: String to DateIn the following example, a csv file is read and the date column of Data frame is converted into Date Time object from a string object."
},
{
"code": "# importing pandas packageimport pandas as pd # making data frame from csv filedata = pd.read_csv(\"todatetime.csv\") # overwriting data after changing formatdata[\"Date\"]= pd.to_datetime(data[\"Date\"]) # info of datadata.info() # displaydata",
"e": 1361,
"s": 1118,
"text": null
},
{
"code": null,
"e": 1504,
"s": 1361,
"text": "Output:As shown in the image, the Data Type of Date column was object but after using to_datetime(), it got converted into a date time object."
},
{
"code": null,
"e": 1522,
"s": 1504,
"text": "Before operation-"
},
{
"code": null,
"e": 1754,
"s": 1522,
"text": "After Operation- Example #2: Exception while converting TimeTime object can also be converted with this method. But since in the Time column, a date isn’t specified and hence Pandas will put Today’s date automatically in that case."
},
{
"code": "# importing pandas packageimport pandas as pd # making data frame from csv filedata = pd.read_csv(\"todatetime.csv\") # overwriting data after changing formatdata[\"Time\"]= pd.to_datetime(data[\"Time\"]) # info of datadata.info() # displaydata",
"e": 1997,
"s": 1754,
"text": null
},
{
"code": null,
"e": 2113,
"s": 1997,
"text": "Output:As shown in the output, a date (2018-07-07) that is Today’s date is already added with the Date time object."
},
{
"code": null,
"e": 2137,
"s": 2113,
"text": "Python pandas-dataFrame"
},
{
"code": null,
"e": 2169,
"s": 2137,
"text": "Python pandas-dataFrame-methods"
},
{
"code": null,
"e": 2183,
"s": 2169,
"text": "Python-pandas"
},
{
"code": null,
"e": 2190,
"s": 2183,
"text": "Python"
}
] |
Diameter of a tree using DFS | 10 Jun, 2022
The diameter of a tree (sometimes called the width) is the number of nodes on the longest path between two leaves in the tree. The diagram below shows two trees each with a diameter of five, the leaves that form the ends of the longest path are shaded (note that there is more than one path in each tree of length five, but no path longer than five nodes)
We have discussed a solution in the post Diameter of a binary tree. In this post, a different DFS-based solution is discussed. After observing the above tree, we can see that the longest path will always occur between two leaf nodes. We start DFS from a random node and then see which node is farthest from it. Let the farthest node be X. It is clear that X will always be a leaf node and a corner of the DFS. Now if we start DFS from X and check the farthest node from it, we will get the diameter of the tree. The C++ implementation uses an adjacency-list representation of graphs; STL's list container is used to store lists of adjacent nodes.
C++
Java
Python3
C#
Javascript
// C++ program to find diameter of a binary tree
// using DFS.
#include <iostream>
#include <limits.h>
#include <list>
using namespace std;

// Used to track farthest node.
int x;

// Sets maxCount as maximum distance from node.
void dfsUtil(int node, int count, bool visited[],
             int& maxCount, list<int>* adj)
{
    visited[node] = true;
    count++;
    for (auto i = adj[node].begin(); i != adj[node].end(); ++i) {
        if (!visited[*i]) {
            if (count >= maxCount) {
                maxCount = count;
                x = *i;
            }
            dfsUtil(*i, count, visited, maxCount, adj);
        }
    }
}

// The function to do DFS traversal. It uses recursive
// dfsUtil()
void dfs(int node, int n, list<int>* adj, int& maxCount)
{
    bool visited[n + 1];
    int count = 0;

    // Mark all the vertices as not visited
    for (int i = 1; i <= n; ++i)
        visited[i] = false;

    // Increment count by 1 for visited node
    dfsUtil(node, count + 1, visited, maxCount, adj);
}

// Returns diameter of binary tree represented
// as adjacency list.
int diameter(list<int>* adj, int n)
{
    int maxCount = INT_MIN;

    /* DFS from a random node and then see
       farthest node X from it */
    dfs(1, n, adj, maxCount);

    /* DFS from X and check the farthest node from it */
    dfs(x, n, adj, maxCount);

    return maxCount;
}

/* Driver program to test above functions */
int main()
{
    int n = 5;

    /* Constructed tree is
         1
        / \
       2   3
      / \
     4   5 */
    list<int>* adj = new list<int>[n + 1];

    /* create undirected edges */
    adj[1].push_back(2);
    adj[2].push_back(1);
    adj[1].push_back(3);
    adj[3].push_back(1);
    adj[2].push_back(4);
    adj[4].push_back(2);
    adj[2].push_back(5);
    adj[5].push_back(2);

    /* maxCount will have diameter of tree */
    cout << "Diameter of the given tree is "
         << diameter(adj, n) << endl;
    return 0;
}
// Java program to find diameter of a
// binary tree using DFS.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Diametre_tree {

    // Used to track farthest node.
    static int x;
    static int maxCount;
    static List<Integer> adj[];

    // Sets maxCount as maximum distance
    // from node
    static void dfsUtil(int node, int count, boolean visited[],
                        List<Integer> adj[])
    {
        visited[node] = true;
        count++;
        List<Integer> l = adj[node];
        for (Integer i : l) {
            if (!visited[i]) {
                if (count >= maxCount) {
                    maxCount = count;
                    x = i;
                }
                dfsUtil(i, count, visited, adj);
            }
        }
    }

    // The function to do DFS traversal. It uses
    // recursive dfsUtil()
    static void dfs(int node, int n, List<Integer> adj[])
    {
        boolean[] visited = new boolean[n + 1];
        int count = 0;

        // Mark all the vertices as not visited
        Arrays.fill(visited, false);

        // Increment count by 1 for visited node
        dfsUtil(node, count + 1, visited, adj);
    }

    // Returns diameter of binary tree represented
    // as adjacency list.
    static int diameter(List<Integer> adj[], int n)
    {
        maxCount = Integer.MIN_VALUE;

        /* DFS from a random node and then see
           farthest node X from it */
        dfs(1, n, adj);

        /* DFS from X and check the farthest node from it */
        dfs(x, n, adj);

        return maxCount;
    }

    /* Driver program to test above functions */
    public static void main(String args[])
    {
        int n = 5;

        /* Constructed tree is
             1
            / \
           2   3
          / \
         4   5 */
        adj = new List[n + 1];
        for (int i = 0; i < n + 1; i++)
            adj[i] = new ArrayList<Integer>();

        /* create undirected edges */
        adj[1].add(2);
        adj[2].add(1);
        adj[1].add(3);
        adj[3].add(1);
        adj[2].add(4);
        adj[4].add(2);
        adj[2].add(5);
        adj[5].add(2);

        /* maxCount will have diameter of tree */
        System.out.println("Diameter of the given "
                           + "tree is " + diameter(adj, n));
    }
}
// This code is contributed by Sumit Ghosh
# Python3 program to find diameter of a binary tree
# using DFS.

# Sets maxCount as maximum distance from node.
def dfsUtil(node, count):
    global visited, x, maxCount, adj
    visited[node] = 1
    count += 1
    for i in adj[node]:
        if (visited[i] == 0):
            if (count >= maxCount):
                maxCount = count
                x = i
            dfsUtil(i, count)

# The function to do DFS traversal. It uses recursive
# dfsUtil()
def dfs(node, n):
    count = 0
    for i in range(n + 1):
        visited[i] = 0

    # Increment count by 1 for visited node
    dfsUtil(node, count + 1)

# Returns diameter of binary tree represented
# as adjacency list.
def diameter(n):
    global adj, maxCount

    # DFS from a random node and then see
    # farthest node X from it
    dfs(1, n)

    # DFS from X and check the farthest node
    dfs(x, n)

    return maxCount

# Driver code
if __name__ == '__main__':
    n = 5

    # Constructed tree is
    #      1
    #     / \
    #    2   3
    #   / \
    #  4   5
    adj, visited = [[] for i in range(n + 1)], [0 for i in range(n + 1)]
    maxCount = -10**19
    x = 0

    # create undirected edges
    adj[1].append(2)
    adj[2].append(1)
    adj[1].append(3)
    adj[3].append(1)
    adj[2].append(4)
    adj[4].append(2)
    adj[2].append(5)
    adj[5].append(2)

    # maxCount will have diameter of tree
    print("Diameter of the given tree is ", diameter(n))

# This code is contributed by mohit kumar 29
// C# program to find diameter of a
// binary tree using DFS.
using System;
using System.Collections.Generic;

class GFG
{
    // Used to track farthest node.
    static int x;
    static int maxCount;
    static List<int> []adj;

    // Sets maxCount as maximum distance
    // from node
    static void dfsUtil(int node, int count, bool []visited,
                        List<int> []adj)
    {
        visited[node] = true;
        count++;
        List<int> l = adj[node];
        foreach(int i in l)
        {
            if (!visited[i])
            {
                if (count >= maxCount)
                {
                    maxCount = count;
                    x = i;
                }
                dfsUtil(i, count, visited, adj);
            }
        }
    }

    // The function to do DFS traversal. It uses
    // recursive dfsUtil()
    static void dfs(int node, int n, List<int> []adj)
    {
        bool[] visited = new bool[n + 1];
        int count = 0;

        // Increment count by 1 for visited node
        dfsUtil(node, count + 1, visited, adj);
    }

    // Returns diameter of binary tree represented
    // as adjacency list.
    static int diameter(List<int> []adj, int n)
    {
        maxCount = int.MinValue;

        /* DFS from a random node and then see
           farthest node X from it */
        dfs(1, n, adj);

        /* DFS from X and check the farthest node from it */
        dfs(x, n, adj);

        return maxCount;
    }

    // Driver Code
    public static void Main(String []args)
    {
        int n = 5;

        /* Constructed tree is
             1
            / \
           2   3
          / \
         4   5 */
        adj = new List<int>[n + 1];
        for (int i = 0; i < n + 1; i++)
            adj[i] = new List<int>();

        /* create undirected edges */
        adj[1].Add(2);
        adj[2].Add(1);
        adj[1].Add(3);
        adj[3].Add(1);
        adj[2].Add(4);
        adj[4].Add(2);
        adj[2].Add(5);
        adj[5].Add(2);

        /* maxCount will have diameter of tree */
        Console.WriteLine("Diameter of the given "
                          + "tree is " + diameter(adj, n));
    }
}
// This code is contributed by PrinciRaj1992
<script>
// JavaScript program to find diameter of a
// binary tree using DFS.

// Used to track farthest node.
let x;
let maxCount;
let adj = [];

// Sets maxCount as maximum distance
// from node
function dfsUtil(node, count, visited, adj)
{
    visited[node] = true;
    count++;
    let l = adj[node];
    for (let i = 0; i < l.length; i++)
    {
        if (!visited[l[i]])
        {
            if (count >= maxCount)
            {
                maxCount = count;
                x = l[i];
            }
            dfsUtil(l[i], count, visited, adj);
        }
    }
}

// The function to do DFS traversal. It uses
// recursive dfsUtil()
function dfs(node, n, adj)
{
    let visited = new Array(n + 1);
    let count = 0;

    // Mark all the vertices as not visited
    for (let i = 0; i < visited.length; i++)
    {
        visited[i] = false;
    }

    // Increment count by 1 for visited node
    dfsUtil(node, count + 1, visited, adj);
}

// Returns diameter of binary tree represented
// as adjacency list.
function diameter(adj, n)
{
    maxCount = Number.MIN_VALUE;

    /* DFS from a random node and then see
       farthest node X from it */
    dfs(1, n, adj);

    /* DFS from X and check the farthest node from it */
    dfs(x, n, adj);

    return maxCount;
}

/* Driver program to test above functions */
let n = 5;

/* Constructed tree is
     1
    / \
   2   3
  / \
 4   5 */
adj = new Array(n + 1);
for (let i = 0; i < n + 1; i++)
    adj[i] = [];

/* create undirected edges */
adj[1].push(2);
adj[2].push(1);
adj[1].push(3);
adj[3].push(1);
adj[2].push(4);
adj[4].push(2);
adj[2].push(5);
adj[5].push(2);

/* maxCount will have diameter of tree */
document.write("Diameter of the given "
               + "tree is " + diameter(adj, n));

// This code is contributed by unknown2108
</script>
Output:
Diameter of the given tree is 4
Time Complexity: O(n), where n is the number of nodes
Auxiliary Space: O(n)
This article is contributed by Ankur Singh.
princiraj1992
mohit kumar 29
unknown2108
rohitmishra051000
Amazon
Cadence India
DFS
Directi
MakeMyTrip
Microsoft
Oracle
OYO Rooms
Philips
Salesforce
Snapdeal
VMWare
Tree | [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n10 Jun, 2022"
},
{
"code": null,
"e": 412,
"s": 54,
"text": "The diameter of a tree (sometimes called the width) is the number of nodes on the longest path between two leaves in the tree. The diagram below shows two trees each with a diameter of five, the leaves that form the ends of the longest path are shaded (note that there is more than one path in each tree of length five, but no path longer than five nodes) "
},
{
"code": null,
"e": 1061,
"s": 414,
"text": "We have discussed a solution in the below postDiameter of a binary treeIn this post, a different DFS-based solution is discussed. After observing the above tree we can see that the longest path will always occur between two leaf nodes. We start DFS from a random node and then see which node is farthest from it. Let the node farthest be X. It is clear that X will always be a leaf node and a corner of DFS. Now if we start DFS from X and check the farthest node from it, we will get the diameter of the tree. The C++ implementation uses an adjacency list representation of graphs. STL‘s list container is used to store lists of adjacent nodes. "
},
{
"code": null,
"e": 1065,
"s": 1061,
"text": "C++"
},
{
"code": null,
"e": 1070,
"s": 1065,
"text": "Java"
},
{
"code": null,
"e": 1078,
"s": 1070,
"text": "Python3"
},
{
"code": null,
"e": 1081,
"s": 1078,
"text": "C#"
},
{
"code": null,
"e": 1092,
"s": 1081,
"text": "Javascript"
},
{
"code": "// C++ program to find diameter of a binary tree// using DFS.#include <iostream>#include <limits.h>#include <list>using namespace std; // Used to track farthest node.int x; // Sets maxCount as maximum distance from node.void dfsUtil(int node, int count, bool visited[], int& maxCount, list<int>* adj){ visited[node] = true; count++; for (auto i = adj[node].begin(); i != adj[node].end(); ++i) { if (!visited[*i]) { if (count >= maxCount) { maxCount = count; x = *i; } dfsUtil(*i, count, visited, maxCount, adj); } }} // The function to do DFS traversal. It uses recursive// dfsUtil()void dfs(int node, int n, list<int>* adj, int& maxCount){ bool visited[n + 1]; int count = 0; // Mark all the vertices as not visited for (int i = 1; i <= n; ++i) visited[i] = false; // Increment count by 1 for visited node dfsUtil(node, count + 1, visited, maxCount, adj);} // Returns diameter of binary tree represented// as adjacency list.int diameter(list<int>* adj, int n){ int maxCount = INT_MIN; /* DFS from a random node and then see farthest node X from it*/ dfs(1, n, adj, maxCount); /* DFS from X and check the farthest node from it */ dfs(x, n, adj, maxCount); return maxCount;} /* Driver program to test above functions*/int main(){ int n = 5; /* Constructed tree is 1 / \\ 2 3 / \\ 4 5 */ list<int>* adj = new list<int>[n + 1]; /*create undirected edges */ adj[1].push_back(2); adj[2].push_back(1); adj[1].push_back(3); adj[3].push_back(1); adj[2].push_back(4); adj[4].push_back(2); adj[2].push_back(5); adj[5].push_back(2); /* maxCount will have diameter of tree */ cout << \"Diameter of the given tree is \" << diameter(adj, n) << endl; return 0;}",
"e": 2990,
"s": 1092,
"text": null
},
{
"code": "// Java program to find diameter of a// binary tree using DFS.import java.util.ArrayList;import java.util.Arrays;import java.util.List;public class Diametre_tree { // Used to track farthest node. static int x; static int maxCount; static List<Integer> adj[]; // Sets maxCount as maximum distance // from node static void dfsUtil(int node, int count, boolean visited[], List<Integer> adj[]) { visited[node] = true; count++; List<Integer> l = adj[node]; for(Integer i: l) { if(!visited[i]){ if (count >= maxCount) { maxCount = count; x = i; } dfsUtil(i, count, visited, adj); } } } // The function to do DFS traversal. It uses // recursive dfsUtil() static void dfs(int node, int n, List<Integer> adj[]) { boolean[] visited = new boolean[n + 1]; int count = 0; // Mark all the vertices as not visited Arrays.fill(visited, false); // Increment count by 1 for visited node dfsUtil(node, count + 1, visited, adj); } // Returns diameter of binary tree represented // as adjacency list. static int diameter(List<Integer> adj[], int n) { maxCount = Integer.MIN_VALUE; /* DFS from a random node and then see farthest node X from it*/ dfs(1, n, adj); /* DFS from X and check the farthest node from it */ dfs(x, n, adj); return maxCount; } /* Driver program to test above functions*/ public static void main(String args[]) { int n = 5; /* Constructed tree is 1 / \\ 2 3 / \\ 4 5 */ adj = new List[n + 1]; for(int i = 0; i < n+1 ; i++) adj[i] = new ArrayList<Integer>(); /*create undirected edges */ adj[1].add(2); adj[2].add(1); adj[1].add(3); adj[3].add(1); adj[2].add(4); adj[4].add(2); adj[2].add(5); adj[5].add(2); /* maxCount will have diameter of tree */ System.out.println(\"Diameter of the given \" + \"tree is \" + diameter(adj, n)); }}// This code is contributed by Sumit Ghosh",
"e": 5449,
"s": 2990,
"text": null
},
{
"code": "# Python3 program to find diameter of a binary tree# using DFS. # Sets maxCount as maximum distance from node.def dfsUtil(node, count): global visited, x, maxCount, adj visited[node] = 1 count += 1 for i in adj[node]: if (visited[i] == 0): if (count >= maxCount): maxCount = count x = i dfsUtil(i, count) # The function to do DFS traversal. It uses recursive# dfsUtil()def dfs(node, n): count = 0 for i in range(n + 1): visited[i] = 0 # Increment count by 1 for visited node dfsUtil(node, count + 1) # Returns diameter of binary tree represented# as adjacency list.def diameter(n): global adj, maxCount # DFS from a random node and then see # farthest node X from it*/ dfs(1, n) # DFS from X and check the farthest node dfs(x, n) return maxCount ## Driver code*/if __name__ == '__main__': n = 5 # # Constructed tree is # 1 # / \\ # 2 3 # / \\ # 4 5 */ adj, visited = [[] for i in range(n + 1)], [0 for i in range(n + 1)] maxCount = -10**19 x = 0 # create undirected edges */ adj[1].append(2) adj[2].append(1) adj[1].append(3) adj[3].append(1) adj[2].append(4) adj[4].append(2) adj[2].append(5) adj[5].append(2) # maxCount will have diameter of tree */ print (\"Diameter of the given tree is \", diameter(n)) # This code is contributed by mohit kumar 29",
"e": 6905,
"s": 5449,
"text": null
},
{
"code": "// C# program to find diameter of a// binary tree using DFS.using System;using System.Collections.Generic; class GFG{ // Used to track farthest node. static int x; static int maxCount; static List<int> []adj; // Sets maxCount as maximum distance // from node static void dfsUtil(int node, int count, bool []visited, List<int> []adj) { visited[node] = true; count++; List<int> l = adj[node]; foreach(int i in l) { if(!visited[i]) { if (count >= maxCount) { maxCount = count; x = i; } dfsUtil(i, count, visited, adj); } } } // The function to do DFS traversal. It uses // recursive dfsUtil() static void dfs(int node, int n, List<int> []adj) { bool[] visited = new bool[n + 1]; int count = 0; // Increment count by 1 for visited node dfsUtil(node, count + 1, visited, adj); } // Returns diameter of binary tree represented // as adjacency list. static int diameter(List<int> []adj, int n) { maxCount = int.MinValue; /* DFS from a random node and then see farthest node X from it*/ dfs(1, n, adj); /* DFS from X and check the farthest node from it */ dfs(x, n, adj); return maxCount; } // Driver Code public static void Main(String []args) { int n = 5; /* Constructed tree is 1 / \\ 2 3 / \\ 4 5 */ adj = new List<int>[n + 1]; for(int i = 0; i < n + 1; i++) adj[i] = new List<int>(); /*create undirected edges */ adj[1].Add(2); adj[2].Add(1); adj[1].Add(3); adj[3].Add(1); adj[2].Add(4); adj[4].Add(2); adj[2].Add(5); adj[5].Add(2); /* maxCount will have diameter of tree */ Console.WriteLine(\"Diameter of the given \" + \"tree is \" + diameter(adj, n)); }} // This code is contributed by PrinciRaj1992",
"e": 9153,
"s": 6905,
"text": null
},
{
"code": "<script> // JavaScript program to find diameter of a// binary tree using DFS. // Used to track farthest node. let x; let maxCount; let adj=[]; // Sets maxCount as maximum distance // from node function dfsUtil(node,count,visited,adj) { visited[node] = true; count++; let l = adj[node]; for(let i=0;i<l.length;i++) { if(!visited[l[i]]){ if (count >= maxCount) { maxCount = count; x = l[i]; } dfsUtil(l[i], count, visited, adj); } } } // The function to do DFS traversal. It uses // recursive dfsUtil() function dfs(node,n,adj) { let visited = new Array(n + 1); let count = 0; // Mark all the vertices as not visited for(let i=0;i<visited.length;i++) { visited[i]=false; } // Increment count by 1 for visited node dfsUtil(node, count + 1, visited, adj); } // Returns diameter of binary tree represented // as adjacency list. function diameter(adj,n) { maxCount = Number.MIN_VALUE; /* DFS from a random node and then see farthest node X from it*/ dfs(1, n, adj); /* DFS from X and check the farthest node from it */ dfs(x, n, adj); return maxCount; } /* Driver program to test above functions*/ let n = 5; /* Constructed tree is 1 / \\ 2 3 / \\ 4 5 */ adj = new Array(n + 1); for(let i = 0; i < n+1 ; i++) adj[i] = []; /*create undirected edges */ adj[1].push(2); adj[2].push(1); adj[1].push(3); adj[3].push(1); adj[2].push(4); adj[4].push(2); adj[2].push(5); adj[5].push(2); /* maxCount will have diameter of tree */ document.write(\"Diameter of the given \" + \"tree is \" + diameter(adj, n)); // This code is contributed by unknown2108 </script>",
"e": 11304,
"s": 9153,
"text": null
},
{
"code": null,
"e": 11314,
"s": 11304,
"text": "Output: "
},
{
"code": null,
"e": 11346,
"s": 11314,
"text": "Diameter of the given tree is 4"
},
{
"code": null,
"e": 11400,
"s": 11346,
"text": "Time Complexity: O(n), where n is the number of nodes"
},
{
"code": null,
"e": 11841,
"s": 11400,
"text": "Auxiliary Space: O(n)This article is contributed by Ankur Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 11855,
"s": 11841,
"text": "princiraj1992"
},
{
"code": null,
"e": 11870,
"s": 11855,
"text": "mohit kumar 29"
},
{
"code": null,
"e": 11882,
"s": 11870,
"text": "unknown2108"
},
{
"code": null,
"e": 11900,
"s": 11882,
"text": "rohitmishra051000"
},
{
"code": null,
"e": 11907,
"s": 11900,
"text": "Amazon"
},
{
"code": null,
"e": 11921,
"s": 11907,
"text": "Cadence India"
},
{
"code": null,
"e": 11925,
"s": 11921,
"text": "DFS"
},
{
"code": null,
"e": 11933,
"s": 11925,
"text": "Directi"
},
{
"code": null,
"e": 11944,
"s": 11933,
"text": "MakeMyTrip"
},
{
"code": null,
"e": 11954,
"s": 11944,
"text": "Microsoft"
},
{
"code": null,
"e": 11961,
"s": 11954,
"text": "Oracle"
},
{
"code": null,
"e": 11971,
"s": 11961,
"text": "OYO Rooms"
},
{
"code": null,
"e": 11979,
"s": 11971,
"text": "Philips"
},
{
"code": null,
"e": 11990,
"s": 11979,
"text": "Salesforce"
},
{
"code": null,
"e": 11999,
"s": 11990,
"text": "Snapdeal"
},
{
"code": null,
"e": 12006,
"s": 11999,
"text": "VMWare"
},
{
"code": null,
"e": 12011,
"s": 12006,
"text": "Tree"
},
{
"code": null,
"e": 12018,
"s": 12011,
"text": "VMWare"
},
{
"code": null,
"e": 12025,
"s": 12018,
"text": "Amazon"
},
{
"code": null,
"e": 12035,
"s": 12025,
"text": "Microsoft"
},
{
"code": null,
"e": 12045,
"s": 12035,
"text": "OYO Rooms"
},
{
"code": null,
"e": 12054,
"s": 12045,
"text": "Snapdeal"
},
{
"code": null,
"e": 12065,
"s": 12054,
"text": "MakeMyTrip"
},
{
"code": null,
"e": 12072,
"s": 12065,
"text": "Oracle"
},
{
"code": null,
"e": 12080,
"s": 12072,
"text": "Directi"
},
{
"code": null,
"e": 12088,
"s": 12080,
"text": "Philips"
},
{
"code": null,
"e": 12099,
"s": 12088,
"text": "Salesforce"
},
{
"code": null,
"e": 12113,
"s": 12099,
"text": "Cadence India"
},
{
"code": null,
"e": 12117,
"s": 12113,
"text": "DFS"
},
{
"code": null,
"e": 12122,
"s": 12117,
"text": "Tree"
}
] |
String Manipulation in Shell Scripting | 24 May, 2021
String Manipulation is defined as performing several operations on a string that result in a change to its contents. In Shell Scripting, this can be done in two ways: pure bash string manipulation, and string manipulation via external commands.
Basics of pure bash string manipulation:
1. Assigning content to a variable and printing its content: In bash, ‘$‘ followed by the variable name is used to print the content of the variable. The shell internally expands the variable with its value; this feature of the shell is also known as parameter expansion. The shell does not care about the type of a variable, which can store strings, integers, or real numbers.
Syntax:
VariableName='value'
echo $VariableName
or
VariableName="value"
echo ${VariableName}
or
VariableName=value
echo "$VariableName"
Note: There should not be any space around the “=” sign in the variable assignment. When you use VariableName=value, the shell treats the “=” as an assignment operator and assigns the value to the variable. When you use VariableName = value, the shell assumes that VariableName is the name of a command and tries to execute it.
Example:
2. To print the length of a string inside Bash Shell: the ‘#‘ symbol is used to print the length of a string.
Syntax:
variableName=value
echo ${#variablename}
Example:
3. Concatenate strings inside Bash Shell using variables: In bash, listing the strings together concatenates them. The resulting string is a new string containing all the listed strings.
Syntax:
var=${var1}${var2}${var3}
or
var=$var1$var2$var3
or
var="$var1""$var2""$var3"
To concatenate any character between the strings:
The following will insert "**" between the strings
var=${var1}**${var2}**${var3}
or
var=$var1**$var2**$var3
or
var="$var1"**"$var2"**"$var3"
The following concatenate the strings using space:
var=${var1} ${var2} ${var3}
or
var="$var1" "$var2" "$var3"
or
echo ${var1} ${var2} ${var3}
Note: While concatenating strings via space, avoid using var=$var1 $var2 $var3. Here, the shell assumes $var2 and $var3 as commands and tries to execute them, resulting in an error.
Example:
4. Concatenate strings inside Bash Shell using an array: In bash, arrays can also be used to concatenate strings.
Syntax:
To create an array:
arr=("value1" value2 $value3)
To print an array:
echo ${arr[@]}
To print length of an array:
echo ${#arr[@]}
Using indices (index starts from 0):
echo ${arr[index]}
Note: echo ${arr} is the same as echo ${arr[0]}
Example:
5. Extract a substring from a string: In Bash, a substring of characters can be extracted from a string.
Syntax:
${string:position} --> returns a substring starting from $position till end
${string:position:length} --> returns a substring of $length characters starting from $position.
Note: $length and $position must always be greater than or equal to zero.
If the $position is less than 0, it will print the complete string.
If the $length is less than 0, it will raise an error and will not execute.
Example:
6. Substring matching: In Bash, the shortest and longest possible match of a substring can be found and deleted from either front or back.
Syntax:
To delete the shortest substring match from front of $string:
${string#substring}
To delete the shortest substring match from back of $string:
${string%substring}
To delete the longest substring match from front of $string:
${string##substring}
To delete the longest substring match from back of $string:
${string%%substring}
Example:
In the above example:
The first echo statement substring ‘*.‘ matches the characters ending with a dot, and # deletes the shortest match of the substring from the front of the string, so it strips the substring ‘Welcome.‘.
The second echo statement substring ‘.*‘ matches the substring starting with a dot and ending with characters, and % deletes the shortest match of the substring from the back of the string, so it strips the substring ‘.GeeksForGeeks‘
The third echo statement substring ‘*.‘ matches the characters ending with a dot, and ## deletes the longest match of the substring from the front of the string, so it strips the substring ‘Welcome.to.‘
The fourth echo statement substring ‘.*‘ matches the substring starting with a dot and ending with characters, and %% deletes the longest match of the substring from the back of the string, so it strips the substring ‘.to.GeeksForGeeks‘.
priyanshulgovil
Linux-Unix | [
{
"code": null,
"e": 52,
"s": 24,
"text": "\n24 May, 2021"
},
{
"code": null,
"e": 291,
"s": 52,
"text": "String Manipulation is defined as performing several operations on a string resulting change in its contents. In Shell Scripting, this can be done in two ways: pure bash string manipulation, and string manipulation via external commands. "
},
{
"code": null,
"e": 332,
"s": 291,
"text": "Basics of pure bash string manipulation:"
},
{
"code": null,
"e": 705,
"s": 332,
"text": "1. Assigning content to a variable and printing its content: In bash, ‘$‘ followed by the variable name is used to print the content of the variable. Shell internally expands the variable with its value. This feature of the shell is also known as parameter expansion. Shell does not care about the type of variables and can store strings, integers, or real numbers.Syntax:"
},
{
"code": null,
"e": 833,
"s": 705,
"text": "VariableName='value'\necho $VariableName\nor\nVariableName=\"value\"\necho ${VariableName}\nor\nVariableName=value\necho \"$VariableName\""
},
{
"code": null,
"e": 1161,
"s": 833,
"text": "Note: There should not be any space around the “=” sign in the variable assignment. When you use VariableName=value, the shell treats the “=” as an assignment operator and assigns the value to the variable. When you use VariableName = value, the shell assumes that VariableName is the name of a command and tries to execute it."
},
{
"code": null,
"e": 1170,
"s": 1161,
"text": "Example:"
},
{
"code": null,
"e": 1270,
"s": 1170,
"text": "2. To print length of string inside Bash Shell: ‘#‘ symbol is used to print the length of a string."
},
{
"code": null,
"e": 1278,
"s": 1270,
"text": "Syntax:"
},
{
"code": null,
"e": 1319,
"s": 1278,
"text": "variableName=value\necho ${#variablename}"
},
{
"code": null,
"e": 1328,
"s": 1319,
"text": "Example:"
},
{
"code": null,
"e": 1531,
"s": 1328,
"text": "3. Concatenate strings inside Bash Shell using variables: In bash, listing the strings together concatenates the string. The resulting string so formed is a new string containing all the listed strings."
},
{
"code": null,
"e": 1539,
"s": 1531,
"text": "Syntax:"
},
{
"code": null,
"e": 1617,
"s": 1539,
"text": "var=${var1}${var2}${var3}\nor\nvar=$var1$var2$var3\nor\nvar=\"$var1\"\"$var2\"\"$var3\""
},
{
"code": null,
"e": 1667,
"s": 1617,
"text": "To concatenate any character between the strings:"
},
{
"code": null,
"e": 1951,
"s": 1667,
"text": "The following will insert \"**\" between the strings\nvar=${var1}**${var2}**${var3}\nor\nvar=$var1**$var2**$var3\nor\nvar=\"$var1\"**\"$var2\"**\"$var3\"\n\nThe following concatenate the strings using space:\nvar=${var1} ${var2} ${var3}\nor\nvar=\"$var1\" \"$var2\" \"$var3\"\nor\necho ${var1} ${var2} ${var3}"
},
{
"code": null,
"e": 2133,
"s": 1951,
"text": "Note: While concatenating strings via space, avoid using var=$var1 $var2 $var3. Here, the shell assumes $var2 and $var3 as commands and tries to execute them, resulting in an error."
},
{
"code": null,
"e": 2142,
"s": 2133,
"text": "Example:"
},
{
"code": null,
"e": 2257,
"s": 2142,
"text": "4. Concatenate strings inside Bash Shell using an array: In bash, arrays can also be used to concatenate strings. "
},
{
"code": null,
"e": 2265,
"s": 2257,
"text": "Syntax:"
},
{
"code": null,
"e": 2502,
"s": 2265,
"text": "To create an array:\narr=(\"value1\" value2 $value3)\n\nTo print an array:\necho ${arr[@]}\n\nTo print length of an array:\necho ${#arr[@]}\n\nUsing indices (index starts from 0):\necho ${arr[index]}\n\nNote: echo ${arr} is the same as echo ${arr[0]}"
},
{
"code": null,
"e": 2511,
"s": 2502,
"text": "Example:"
},
{
"code": null,
"e": 2616,
"s": 2511,
"text": "5. Extract a substring from a string: In Bash, a substring of characters can be extracted from a string."
},
{
"code": null,
"e": 2624,
"s": 2616,
"text": "Syntax:"
},
{
"code": null,
"e": 2798,
"s": 2624,
"text": "${string:position} --> returns a substring starting from $position till end\n${string:position:length} --> returns a substring of $length characters starting from $position."
},
{
"code": null,
"e": 2873,
"s": 2798,
"text": "Note: $length and $position must be always greater than or equal to zero. "
},
{
"code": null,
"e": 2941,
"s": 2873,
"text": "If the $position is less than 0, it will print the complete string."
},
{
"code": null,
"e": 3017,
"s": 2941,
"text": "If the $length is less than 0, it will raise an error and will not execute."
},
{
"code": null,
"e": 3026,
"s": 3017,
"text": "Example:"
},
{
"code": null,
"e": 3165,
"s": 3026,
"text": "6. Substring matching: In Bash, the shortest and longest possible match of a substring can be found and deleted from either front or back."
},
{
"code": null,
"e": 3173,
"s": 3165,
"text": "Syntax:"
},
{
"code": null,
"e": 3514,
"s": 3173,
"text": "To delete the shortest substring match from front of $string:\n${string#substring}\n\nTo delete the shortest substring match from back of $string:\n${string%substring}\n\nTo delete the longest substring match from front of $string:\n${string##substring}\n\nTo delete the shortest substring match from back of $string of $string:\n${string%%substring}"
},
{
"code": null,
"e": 3523,
"s": 3514,
"text": "Example:"
},
{
"code": null,
"e": 3546,
"s": 3523,
"text": "In the above example: "
},
{
"code": null,
"e": 3747,
"s": 3546,
"text": "The first echo statement substring ‘*.‘ matches the characters ending with a dot, and # deletes the shortest match of the substring from the front of the string, so it strips the substring ‘Welcome.‘."
},
{
"code": null,
"e": 3981,
"s": 3747,
"text": "The second echo statement substring ‘.*‘ matches the substring starting with a dot and ending with characters, and % deletes the shortest match of the substring from the back of the string, so it strips the substring ‘.GeeksForGeeks‘"
},
{
"code": null,
"e": 4184,
"s": 3981,
"text": "The third echo statement substring ‘*.‘ matches the characters ending with a dot, and ## deletes the longest match of the substring from the front of the string, so it strips the substring ‘Welcome.to.‘"
},
{
"code": null,
"e": 4422,
"s": 4184,
"text": "The fourth echo statement substring ‘.*‘ matches the substring starting with a dot and ending with characters, and %% deletes the longest match of the substring from the back of the string, so it strips the substring ‘.to.GeeksForGeeks‘."
},
{
"code": null,
"e": 4438,
"s": 4422,
"text": "priyanshulgovil"
},
{
"code": null,
"e": 4449,
"s": 4438,
"text": "Linux-Unix"
},
{
"code": null,
"e": 4547,
"s": 4449,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 4582,
"s": 4547,
"text": "tar command in Linux with examples"
},
{
"code": null,
"e": 4618,
"s": 4582,
"text": "curl command in Linux with Examples"
},
{
"code": null,
"e": 4651,
"s": 4618,
"text": "'crontab' in Linux with Examples"
},
{
"code": null,
"e": 4687,
"s": 4651,
"text": "Tail command in Linux with examples"
},
{
"code": null,
"e": 4713,
"s": 4687,
"text": "Docker - COPY Instruction"
},
{
"code": null,
"e": 4748,
"s": 4713,
"text": "scp command in Linux with Examples"
},
{
"code": null,
"e": 4786,
"s": 4748,
"text": "UDP Server-Client implementation in C"
},
{
"code": null,
"e": 4821,
"s": 4786,
"text": "Cat command in Linux with examples"
},
{
"code": null,
"e": 4857,
"s": 4821,
"text": "echo command in Linux with Examples"
}
] |
Python | Unpacking tuple of lists | 04 Apr, 2019
Given a tuple of lists, write a Python program to unpack the elements of the lists that are packed inside the given tuple.
Examples:
Input : (['a', 'apple'], ['b', 'ball'])
Output : ['a', 'apple', 'b', 'ball']
Input : ([1, 'sam', 75], [2, 'bob', 39], [3, 'Kate', 87])
Output : [1, 'sam', 75, 2, 'bob', 39, 3, 'Kate', 87]
Approach #1 : Using reduce()
reduce() is a classic list operation used to apply a particular function, passed as its argument, to all of the list elements. In this case we use the add function of the operator module, which simply concatenates the given list arguments into a single list.
# Python3 program to unpack
# tuple of lists
from functools import reduce
import operator

def unpackTuple(tup):
    return (reduce(operator.add, tup))

# Driver code
tup = (['a', 'apple'], ['b', 'ball'])
print(unpackTuple(tup))
['a', 'apple', 'b', 'ball']
Approach #2 : Using Numpy [Alternative to Approach #1]
# Python3 program to unpack
# tuple of lists
from functools import reduce
import numpy

def unpackTuple(tup):
    print(reduce(numpy.append, tup))

# Driver code
tup = (['a', 'apple'], ['b', 'ball'])
unpackTuple(tup)
['a' 'apple' 'b' 'ball']
Approach #3 : Using itertools.chain(*iterables)
itertools.chain(*iterables) makes an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables are exhausted. This makes our job a lot easier, as we can simply append each element to an empty list and return it.
# Python3 program to unpack
# tuple of lists
from itertools import chain

def unpackTuple(tup):
    res = []
    for i in chain(*tup):
        res.append(i)
    print(res)

# Driver code
tup = (['a', 'apple'], ['b', 'ball'])
unpackTuple(tup)
['a', 'apple', 'b', 'ball']
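As a side note (a shorter variant, not in the original article), list() can consume the chain iterator directly, giving the same result in one line:

from itertools import chain

tup = (['a', 'apple'], ['b', 'ball'])
print(list(chain(*tup)))  # ['a', 'apple', 'b', 'ball']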
Python list-programs
Python tuple-programs
Python
Python Programs | [
{
"code": null,
"e": 53,
"s": 25,
"text": "\n04 Apr, 2019"
},
{
"code": null,
"e": 176,
"s": 53,
"text": "Given a tuple of lists, write a Python program to unpack the elements of the lists that are packed inside the given tuple."
},
{
"code": null,
"e": 186,
"s": 176,
"text": "Examples:"
},
{
"code": null,
"e": 376,
"s": 186,
"text": "Input : (['a', 'apple'], ['b', 'ball'])\nOutput : ['a', 'apple', 'b', 'ball']\n\nInput : ([1, 'sam', 75], [2, 'bob', 39], [3, 'Kate', 87])\nOutput : [1, 'sam', 75, 2, 'bob', 39, 3, 'Kate', 87]\n"
},
{
"code": null,
"e": 406,
"s": 376,
"text": " Approach #1 : Using reduce()"
},
{
"code": null,
"e": 645,
"s": 406,
"text": "reduce() is a classic list operation used to apply a particular function passed in its argument to all of the list elements. In this case we used add function of operator module which simply adds the given list arguments to an empty list."
},
{
"code": "# Python3 program to unpack # tuple of listsfrom functools import reduceimport operator def unpackTuple(tup): return (reduce(operator.add, tup)) # Driver codetup = (['a', 'apple'], ['b', 'ball'])print(unpackTuple(tup))",
"e": 875,
"s": 645,
"text": null
},
{
"code": null,
"e": 904,
"s": 875,
"text": "['a', 'apple', 'b', 'ball']\n"
},
{
"code": null,
"e": 961,
"s": 906,
"text": "Approach #2 : Using Numpy [Alternative to Approach #1]"
},
{
"code": "# Python3 program to unpack # tuple of listsfrom functools import reduceimport numpy def unpackTuple(tup): print (reduce(numpy.append, tup)) # Driver codetup = (['a', 'apple'], ['b', 'ball'])unpackTuple(tup)",
"e": 1186,
"s": 961,
"text": null
},
{
"code": null,
"e": 1212,
"s": 1186,
"text": "['a' 'apple' 'b' 'ball']\n"
},
{
"code": null,
"e": 1261,
"s": 1212,
"text": " Approach #3 : Using itertools.chain(*iterables)"
},
{
"code": null,
"e": 1557,
"s": 1261,
"text": "itertools.chain(*iterables) make an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables are exhausted. This makes our job a lot easier, as we can simply append each iterable to the empty list and return it."
},
{
"code": "# Python3 program to unpack # tuple of listsfrom itertools import chain def unpackTuple(tup): res = [] for i in chain(*tup): res.append(i) print(res) # Driver codetup = (['a', 'apple'], ['b', 'ball'])unpackTuple(tup)",
"e": 1806,
"s": 1557,
"text": null
},
{
"code": null,
"e": 1835,
"s": 1806,
"text": "['a', 'apple', 'b', 'ball']\n"
},
{
"code": null,
"e": 1856,
"s": 1835,
"text": "Python list-programs"
},
{
"code": null,
"e": 1878,
"s": 1856,
"text": "Python tuple-programs"
},
{
"code": null,
"e": 1885,
"s": 1878,
"text": "Python"
},
{
"code": null,
"e": 1901,
"s": 1885,
"text": "Python Programs"
}
] |
What is TEXT data type in MySQL? | TEXT data objects are useful for storing long-form text strings in a MySQL database. Following are some points about the TEXT data type −
TEXT is a family of column types intended for high-capacity character storage.
The actual TEXT column type comes in four variants: TINYTEXT, TEXT, MEDIUMTEXT and LONGTEXT.
The four TEXT types are very similar to each other; the only difference is the maximum amount of data each can store (see the size summary after this list).
The smallest TEXT type, TINYTEXT shares the same character length as VARCHAR.
TEXT values are treated as character strings.
TEXT has a character set other than the binary character set, and a collation.
The comparisons and sorting are based on the collation of its character set.
Truncation of excess trailing spaces from values to be inserted into TEXT columns always generates a warning, regardless of the SQL mode.
A TEXT family column is just like a VARCHAR.
TEXT column cannot have DEFAULT value.
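Size summary referred to above (MySQL's documented maximum lengths, in bytes): TINYTEXT 255; TEXT 65,535; MEDIUMTEXT 16,777,215; LONGTEXT 4,294,967,295.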
The following example shows how to declare a column as TEXT.
mysql> Create table magzine(id INT, title Varchar(25), Introduction TEXT);
Query OK, 0 rows affected (0.16 sec)
mysql> Describe magzine;
+--------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+-------+
| id | int(11) | YES | | NULL | |
| title | varchar(25) | YES | | NULL | |
| Introduction | text | YES | | NULL | |
+--------------+-------------+------+-----+---------+-------+
3 rows in set (0.11 sec) | [
{
"code": null,
"e": 1196,
"s": 1062,
"text": "TEXT data objects are useful for storing long-form text strings in a MySQL database. Followings are some point about TEXT data type −"
},
{
"code": null,
"e": 1275,
"s": 1196,
"text": "TEXT is the family of column type intended as high-capacity character storage."
},
{
"code": null,
"e": 1361,
"s": 1275,
"text": "The actual TEXT column type is of four types-TINYTEXT, TEXT, MEDIUMTEXT and LONGTEXT."
},
{
"code": null,
"e": 1479,
"s": 1361,
"text": "The four TEXT types are very similar to each other; the only difference is the maximum amount of data each can store."
},
{
"code": null,
"e": 1557,
"s": 1479,
"text": "The smallest TEXT type, TINYTEXT shares the same character length as VARCHAR."
},
{
"code": null,
"e": 1603,
"s": 1557,
"text": "TEXT values are treated as character strings."
},
{
"code": null,
"e": 1673,
"s": 1603,
"text": "TEXT has character set other than binary character set and collation."
},
{
"code": null,
"e": 1750,
"s": 1673,
"text": "The comparisons and sorting are based on the collation of its character set."
},
{
"code": null,
"e": 1888,
"s": 1750,
"text": "Truncation of excess trailing spaces from values to be inserted into TEXT columns always generates a warning, regardless of the SQL mode."
},
{
"code": null,
"e": 1933,
"s": 1888,
"text": "A TEXT family column is just like a VARCHAR."
},
{
"code": null,
"e": 1972,
"s": 1933,
"text": "TEXT column cannot have DEFAULT value."
},
{
"code": null,
"e": 2030,
"s": 1972,
"text": " Following example shows how to declare a column as TEXT."
},
{
"code": null,
"e": 2627,
"s": 2030,
"text": "mysql> Create table magzine(id INT, title Varchar(25), Introduction TEXT);\nQuery OK, 0 rows affected (0.16 sec)\n\nmysql> Describe magzine;\n+--------------+-------------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+--------------+-------------+------+-----+---------+-------+\n| id | int(11) | YES | | NULL | |\n| title | varchar(25) | YES | | NULL | |\n| Introduction | text | YES | | NULL | |\n+--------------+-------------+------+-----+---------+-------+\n3 rows in set (0.11 sec)"
}
] |
Python Design Patterns - Builder | Builder Pattern is a creational design pattern that helps in building a complex object using simple objects, following a step-by-step, algorithmic approach. In this design pattern, a builder class builds the final object in a step-by-step procedure, and this builder is independent of the other objects.
It provides clear separation and a unique layer between construction and representation of a specified object created by class.
It provides better control over the construction process of the pattern created.
It gives the perfect scenario to change the internal representation of objects.
In this section, we will learn how to implement the builder pattern.
class Director:
    __builder = None

    def setBuilder(self, builder):
        self.__builder = builder

    def getCar(self):
        car = Car()

        # First goes the body
        body = self.__builder.getBody()
        car.setBody(body)

        # Then engine
        engine = self.__builder.getEngine()
        car.setEngine(engine)

        # And four wheels
        i = 0
        while i < 4:
            wheel = self.__builder.getWheel()
            car.attachWheel(wheel)
            i += 1
        return car

# The whole product
class Car:
    def __init__(self):
        self.__wheels = list()
        self.__engine = None
        self.__body = None

    def setBody(self, body):
        self.__body = body

    def attachWheel(self, wheel):
        self.__wheels.append(wheel)

    def setEngine(self, engine):
        self.__engine = engine

    def specification(self):
        print("body: %s" % self.__body.shape)
        print("engine horsepower: %d" % self.__engine.horsepower)
        print("tire size: %d'" % self.__wheels[0].size)

class Builder:
    def getWheel(self): pass
    def getEngine(self): pass
    def getBody(self): pass

class JeepBuilder(Builder):

    def getWheel(self):
        wheel = Wheel()
        wheel.size = 22
        return wheel

    def getEngine(self):
        engine = Engine()
        engine.horsepower = 400
        return engine

    def getBody(self):
        body = Body()
        body.shape = "SUV"
        return body

# Car parts
class Wheel:
    size = None

class Engine:
    horsepower = None

class Body:
    shape = None

def main():
    jeepBuilder = JeepBuilder()  # initializing the class

    director = Director()

    # Build Jeep
    print("Jeep")
    director.setBuilder(jeepBuilder)
    jeep = director.getCar()
    jeep.specification()
    print("")

if __name__ == "__main__":
    main()
The above program generates the following output −
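Tracing the driver code above, the expected output is:

Jeep
body: SUV
engine horsepower: 400
tire size: 22'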
| [
{
"code": null,
"e": 2821,
"s": 2479,
"text": "Builder Pattern is a unique design pattern which helps in building complex object using simple objects and uses an algorithmic approach. This design pattern comes under the category of creational pattern. In this design pattern, a builder class builds the final object in step-by-step procedure. This builder is independent of other objects."
},
{
"code": null,
"e": 2949,
"s": 2821,
"text": "It provides clear separation and a unique layer between construction and representation of a specified object created by class."
},
{
"code": null,
"e": 3077,
"s": 2949,
"text": "It provides clear separation and a unique layer between construction and representation of a specified object created by class."
},
{
"code": null,
"e": 3154,
"s": 3077,
"text": "It provides better control over construction process of the pattern created."
},
{
"code": null,
"e": 3231,
"s": 3154,
"text": "It provides better control over construction process of the pattern created."
},
{
"code": null,
"e": 3311,
"s": 3231,
"text": "It gives the perfect scenario to change the internal representation of objects."
},
{
"code": null,
"e": 3391,
"s": 3311,
"text": "It gives the perfect scenario to change the internal representation of objects."
},
{
"code": null,
"e": 3460,
"s": 3391,
"text": "In this section, we will learn how to implement the builder pattern."
},
{
"code": null,
"e": 5272,
"s": 3460,
"text": "class Director:\n __builder = None\n \n def setBuilder(self, builder):\n self.__builder = builder\n \n def getCar(self):\n car = Car()\n \n # First goes the body\n body = self.__builder.getBody()\n car.setBody(body)\n \n # Then engine\n engine = self.__builder.getEngine()\n car.setEngine(engine)\n \n # And four wheels\n i = 0\n while i < 4:\n wheel = self.__builder.getWheel()\n\t\t\tcar.attachWheel(wheel)\n i += 1\n return car\n\n# The whole product\nclass Car:\n def __init__(self):\n self.__wheels = list()\n self.__engine = None\n self.__body = None\n\n def setBody(self, body):\n self.__body = body\n\n def attachWheel(self, wheel):\n self.__wheels.append(wheel)\n\n def setEngine(self, engine):\n self.__engine = engine\n\n def specification(self):\n print \"body: %s\" % self.__body.shape\n print \"engine horsepower: %d\" % self.__engine.horsepower\n print \"tire size: %d\\'\" % self.__wheels[0].size\n\nclass Builder:\n def getWheel(self): pass\n def getEngine(self): pass\n def getBody(self): pass\n\nclass JeepBuilder(Builder):\n \n def getWheel(self):\n wheel = Wheel()\n wheel.size = 22\n return wheel\n \n def getEngine(self):\n engine = Engine()\n engine.horsepower = 400\n return engine\n \n def getBody(self):\n body = Body()\n body.shape = \"SUV\"\n return body\n\n# Car parts\nclass Wheel:\n size = None\n\nclass Engine:\n horsepower = None\n\nclass Body:\n shape = None\n\ndef main():\n jeepBuilder = JeepBuilder() # initializing the class\n \n director = Director()\n \n # Build Jeep\n print \"Jeep\"\n director.setBuilder(jeepBuilder)\n jeep = director.getCar()\n jeep.specification()\n print \"\"\n\nif __name__ == \"__main__\":\n main()"
},
{
"code": null,
"e": 5323,
"s": 5272,
"text": "The above program generates the following output −"
},
{
"code": null,
"e": 5360,
"s": 5323,
"text": "\n 187 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 5376,
"s": 5360,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 5409,
"s": 5376,
"text": "\n 55 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 5428,
"s": 5409,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 5463,
"s": 5428,
"text": "\n 136 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 5485,
"s": 5463,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 5519,
"s": 5485,
"text": "\n 75 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 5547,
"s": 5519,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 5582,
"s": 5547,
"text": "\n 70 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 5596,
"s": 5582,
"text": " Lets Kode It"
},
{
"code": null,
"e": 5629,
"s": 5596,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 5646,
"s": 5629,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 5653,
"s": 5646,
"text": " Print"
},
{
"code": null,
"e": 5664,
"s": 5653,
"text": " Add Notes"
}
] |
How String Hashcode value is calculated? | 01 Apr, 2020
The String hashCode() method returns the hashcode value of this String as an Integer.
Syntax:
public int hashCode()
For Example:
import java.io.*;

class GFG {
    public static void main(String[] args)
    {
        String str = "GFG";
        System.out.println(str);

        int hashCode = str.hashCode();
        System.out.println(hashCode);
    }
}
GFG
70472
But the question here is: how is this integer value 70472 computed? If you compute the hashcode value of this string again, the result will be the same. So how is this String hashcode calculated?
How is String hashcode calculated?
The hashcode value of a String is calculated with the help of the formula:
s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
where:
s[i] represents the ith character of the string
^ refers to the exponential operand
n represents the length of the string
Example:
In the above case, the String is “GFG”. Hence:
s[] = {'G', 'F', 'G'}
n = 3
So the hashcode value will be calculated as:
s[0]*31^2 + s[1]*31^1 + s[2]
= G*31^2 + F*31 + G
  (as the ASCII value of G = 71 and F = 70)
= 71*31^2 + 70*31 + 71
= 68231 + 2170 + 71
= 70472
which is the value received as the output.
Hence this is how the String hashcode value is calculated.
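As a quick cross-check, here is a small Python sketch (Python is used only for illustration; the article's method is Java's built-in hashCode()) that evaluates the same formula with a rolling, Horner-form loop:

# Horner form of s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
h = 0
for ch in "GFG":
    h = 31 * h + ord(ch)  # ord() gives the character's ASCII value
print(h)  # prints 70472, matching Java's String.hashCode()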
HashCode value of empty string?
In this case, the String is “”. Hence:
s[] = {}
n = 0
So the hashcode value will be calculated as:
With n = 0, the sum contains no terms, so the hashcode evaluates to:
0
Hence the hashcode value of an empty string is always 0.
pavanhareesh97
Java-Strings
Java | [
{
"code": null,
"e": 24122,
"s": 24094,
"text": "\n01 Apr, 2020"
},
{
"code": null,
"e": 24208,
"s": 24122,
"text": "The String hashCode() method returns the hashcode value of this String as an Integer."
},
{
"code": null,
"e": 24237,
"s": 24208,
"text": "Syntax:public int hashCode()"
},
{
"code": null,
"e": 24250,
"s": 24237,
"text": "For Example:"
},
{
"code": "import java.io.*; class GFG { public static void main(String[] args) { String str = \"GFG\"; System.out.println(str); int hashCode = str.hashCode(); System.out.println(hashCode); }}",
"e": 24470,
"s": 24250,
"text": null
},
{
"code": null,
"e": 24481,
"s": 24470,
"text": "GFG\n70472\n"
},
{
"code": null,
"e": 24686,
"s": 24481,
"text": "But the question here is, how this integer value 70472 is printed. If you will try to find the hashcode value of this string again, the result would be the same. So how is this String hashcode calculated?"
},
{
"code": null,
"e": 24793,
"s": 24686,
"text": "How is String hashcode calculated?The hashcode value of a String is calculated with the help of a formula:"
},
{
"code": null,
"e": 24839,
"s": 24793,
"text": "s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]\n"
},
{
"code": null,
"e": 24846,
"s": 24839,
"text": "where:"
},
{
"code": null,
"e": 24894,
"s": 24846,
"text": "s[i] represents the ith character of the string"
},
{
"code": null,
"e": 24930,
"s": 24894,
"text": "^ refers to the exponential operand"
},
{
"code": null,
"e": 24968,
"s": 24930,
"text": "n represents the length of the string"
},
{
"code": null,
"e": 24977,
"s": 24968,
"text": "Example:"
},
{
"code": null,
"e": 25024,
"s": 24977,
"text": "In the above case, the String is “GFG”. Hence:"
},
{
"code": null,
"e": 25053,
"s": 25024,
"text": "s[] = {'G', 'F', 'G'}\nn = 3\n"
},
{
"code": null,
"e": 25098,
"s": 25053,
"text": "So the hashcode value will be calculated as:"
},
{
"code": null,
"e": 25240,
"s": 25098,
"text": "s[0]*31^(2) + s[1]*31^1 + s[2]\n= G*31^2 + F*31 + G\n= (as ASCII value of G = 71 and F = 70)\n 71*312 + 70*31 + 71 \n= 68231 + 2170 + 71\n= 70472"
},
{
"code": null,
"e": 25283,
"s": 25240,
"text": "which is the value received as the output."
},
{
"code": null,
"e": 25342,
"s": 25283,
"text": "Hence this is how the String hashcode value is calculated."
},
{
"code": null,
"e": 25374,
"s": 25342,
"text": "HashCode value of empty string?"
},
{
"code": null,
"e": 25413,
"s": 25374,
"text": "In this case, the String is “”. Hence:"
},
{
"code": null,
"e": 25429,
"s": 25413,
"text": "s[] = {}\nn = 0\n"
},
{
"code": null,
"e": 25474,
"s": 25429,
"text": "So the hashcode value will be calculated as:"
},
{
"code": null,
"e": 25491,
"s": 25474,
"text": "s[0]*31^(0)\n= 0\n"
},
{
"code": null,
"e": 25548,
"s": 25491,
"text": "Hence the hashcode value of an empty string is always 0."
},
{
"code": null,
"e": 25563,
"s": 25548,
"text": "pavanhareesh97"
},
{
"code": null,
"e": 25576,
"s": 25563,
"text": "Java-Strings"
},
{
"code": null,
"e": 25581,
"s": 25576,
"text": "Java"
},
{
"code": null,
"e": 25594,
"s": 25581,
"text": "Java-Strings"
},
{
"code": null,
"e": 25599,
"s": 25594,
"text": "Java"
},
{
"code": null,
"e": 25697,
"s": 25599,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25706,
"s": 25697,
"text": "Comments"
},
{
"code": null,
"e": 25719,
"s": 25706,
"text": "Old Comments"
},
{
"code": null,
"e": 25738,
"s": 25719,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 25756,
"s": 25738,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 25788,
"s": 25756,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 25807,
"s": 25788,
"text": "Overriding in Java"
},
{
"code": null,
"e": 25827,
"s": 25807,
"text": "Stack Class in Java"
},
{
"code": null,
"e": 25847,
"s": 25827,
"text": "Collections in Java"
},
{
"code": null,
"e": 25871,
"s": 25847,
"text": "Singleton Class in Java"
},
{
"code": null,
"e": 25890,
"s": 25871,
"text": "LinkedList in Java"
},
{
"code": null,
"e": 25913,
"s": 25890,
"text": "Multithreading in Java"
}
] |
Python 3 - Comparison Operators Example | These operators compare the values on either side of them and decide the relation among them. They are also called Relational operators.
Assume variable a holds the value 10 and variable b holds the value 20; each operator then compares the two values. The example below instead initializes a = 21 and b = 10 and exercises every comparison operator in turn:
#!/usr/bin/python3

a = 21
b = 10

if ( a == b ):
   print ("Line 1 - a is equal to b")
else:
   print ("Line 1 - a is not equal to b")

if ( a != b ):
   print ("Line 2 - a is not equal to b")
else:
   print ("Line 2 - a is equal to b")

if ( a < b ):
   print ("Line 3 - a is less than b" )
else:
   print ("Line 3 - a is not less than b")

if ( a > b ):
   print ("Line 4 - a is greater than b")
else:
   print ("Line 4 - a is not greater than b")

a,b = b,a  # values of a and b swapped. a becomes 10, b becomes 21

if ( a <= b ):
   print ("Line 5 - a is either less than or equal to b")
else:
   print ("Line 5 - a is neither less than nor equal to b")

if ( b >= a ):
   print ("Line 6 - b is either greater than or equal to a")
else:
   print ("Line 6 - b is neither greater than nor equal to a")
When you execute the above program, it produces the following result −
Line 1 - a is not equal to b
Line 2 - a is not equal to b
Line 3 - a is not less than b
Line 4 - a is greater than b
Line 5 - a is either less than or equal to b
Line 6 - b is either greater than or equal to a
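A small extra sketch, not part of the original lesson: Python also allows chaining comparison operators, which is equivalent to joining the pairwise checks with and:
a, b = 10, 21  # the values after the swap in the example above

print(1 < a <= b)  # True: evaluated as (1 < a) and (a <= b)
print(a < b < 5)   # False: b < 5 fails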
| [
{
"code": null,
"e": 2477,
"s": 2340,
"text": "These operators compare the values on either side of them and decide the relation among them. They are also called Relational operators."
},
{
"code": null,
"e": 2556,
"s": 2477,
"text": "Assume variable a holds the value 10 and variable b holds the value 20, then −"
},
{
"code": null,
"e": 2635,
"s": 2556,
"text": "Assume variable a holds the value 10 and variable b holds the value 20, then −"
},
{
"code": null,
"e": 3442,
"s": 2635,
"text": "#!/usr/bin/python3\n\na = 21\nb = 10\n\nif ( a == b ):\n print (\"Line 1 - a is equal to b\")\nelse:\n print (\"Line 1 - a is not equal to b\")\n\nif ( a != b ):\n print (\"Line 2 - a is not equal to b\")\nelse:\n print (\"Line 2 - a is equal to b\")\n\nif ( a < b ):\n print (\"Line 3 - a is less than b\" )\nelse:\n print (\"Line 3 - a is not less than b\")\n\nif ( a > b ):\n print (\"Line 4 - a is greater than b\")\nelse:\n print (\"Line 4 - a is not greater than b\")\n\na,b = b,a #values of a and b swapped. a becomes 10, b becomes 21\n\nif ( a <= b ):\n print (\"Line 5 - a is either less than or equal to b\")\nelse:\n print (\"Line 5 - a is neither less than nor equal to b\")\n\nif ( b >= a ):\n print (\"Line 6 - b is either greater than or equal to b\")\nelse:\n print (\"Line 6 - b is neither greater than nor equal to b\")"
},
{
"code": null,
"e": 3513,
"s": 3442,
"text": "When you execute the above program, it produces the following result −"
},
{
"code": null,
"e": 3726,
"s": 3513,
"text": "Line 1 - a is not equal to b\nLine 2 - a is not equal to b\nLine 3 - a is not less than b\nLine 4 - a is greater than b\nLine 5 - a is either less than or equal to b\nLine 6 - b is either greater than or equal to b\n"
},
{
"code": null,
"e": 3763,
"s": 3726,
"text": "\n 187 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 3779,
"s": 3763,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 3812,
"s": 3779,
"text": "\n 55 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 3831,
"s": 3812,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 3866,
"s": 3831,
"text": "\n 136 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 3888,
"s": 3866,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 3922,
"s": 3888,
"text": "\n 75 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 3950,
"s": 3922,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 3985,
"s": 3950,
"text": "\n 70 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 3999,
"s": 3985,
"text": " Lets Kode It"
},
{
"code": null,
"e": 4032,
"s": 3999,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 4049,
"s": 4032,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 4056,
"s": 4049,
"text": " Print"
},
{
"code": null,
"e": 4067,
"s": 4056,
"text": " Add Notes"
}
] |
MapStruct - Using Builder | MapStruct allows the use of Builders. We can use a builder framework or our own custom builder. In the example below, we use a custom builder.
Open project mapping as updated in Mapping Direct Fields chapter in Eclipse.
Update Student.java with following code −
Student.java
package com.tutorialspoint.model;
public class Student {
private final String name;
private final int id;
protected Student(Student.Builder builder) {
this.name = builder.name;
this.id = builder.id;
}
public static Student.Builder builder() {
return new Student.Builder();
}
public static class Builder {
private String name;
private int id;
public Builder name(String name) {
this.name = name;
return this;
}
public Builder id(int id) {
this.id = id;
return this;
}
public Student create() {
return new Student( this );
}
}
public String getName() {
return name;
}
public int getId() {
return id;
}
}
Update StudentMapper.java with following code −
StudentMapper.java
package com.tutorialspoint.mapper;
import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import com.tutorialspoint.entity.StudentEntity;
import com.tutorialspoint.model.Student;
@Mapper
public interface StudentMapper {
Student getModelFromEntity(StudentEntity studentEntity);
@Mapping(target="id", source="id")
@Mapping(target="name", source="name")
StudentEntity getEntityFromModel(Student student);
}
Update StudentMapperTest.java with following code −
StudentMapperTest.java
package com.tutorialspoint.mapping;
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;
import org.mapstruct.factory.Mappers;
import com.tutorialspoint.entity.StudentEntity;
import com.tutorialspoint.entity.SubjectEntity;
import com.tutorialspoint.mapper.StudentMapper;
import com.tutorialspoint.model.Student;
public class StudentMapperTest {
private StudentMapper studentMapper = Mappers.getMapper(StudentMapper.class);
@Test
public void testEntityToModel() {
StudentEntity entity = new StudentEntity();
entity.setName("John");
entity.setId(1);
Student model = studentMapper.getModelFromEntity(entity);
assertEquals(entity.getName(), model.getName());
assertEquals(entity.getId(), model.getId());
}
@Test
public void testModelToEntity() {
Student.Builder builder = Student.builder().id(1).name("John");
Student model = builder.create();
StudentEntity entity = studentMapper.getEntityFromModel(model);
assertEquals(entity.getName(), model.getName());
assertEquals(entity.getId(), model.getId());
}
}
Run the following command to test the mappings.
mvn clean test
Once the command is successful, verify the output.
mvn clean test
[INFO] Scanning for projects...
...
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ mapping ---
[INFO] Surefire report directory: \mvn\mapping\target\surefire-reports
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running com.tutorialspoint.mapping.DeliveryAddressMapperTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 sec
Running com.tutorialspoint.mapping.StudentMapperTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec
Results :
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0
...
| [
{
"code": null,
"e": 2404,
"s": 2260,
"text": "MapStruct allows to use Builders. We can use Builder frameworks or can use our custom builder. In below example, we are using a custom builder."
},
{
"code": null,
"e": 2481,
"s": 2404,
"text": "Open project mapping as updated in Mapping Direct Fields chapter in Eclipse."
},
{
"code": null,
"e": 2523,
"s": 2481,
"text": "Update Student.java with following code −"
},
{
"code": null,
"e": 2536,
"s": 2523,
"text": "Student.java"
},
{
"code": null,
"e": 3299,
"s": 2536,
"text": "package com.tutorialspoint.model;\n\npublic class Student {\n private final String name;\n private final int id;\n\n protected Student(Student.Builder builder) {\n this.name = builder.name;\n this.id = builder.id;\n }\n public static Student.Builder builder() {\n return new Student.Builder();\n }\n public static class Builder {\n private String name;\n private int id;\n public Builder name(String name) {\n this.name = name;\n return this;\n }\n public Builder id(int id) {\n this.id = id;\n return this;\n }\n public Student create() {\n return new Student( this );\n }\n }\n public String getName() {\n return name;\n }\n public int getId() {\n return id;\n }\n}"
},
{
"code": null,
"e": 3347,
"s": 3299,
"text": "Update StudentMapper.java with following code −"
},
{
"code": null,
"e": 3366,
"s": 3347,
"text": "StudentMapper.java"
},
{
"code": null,
"e": 3788,
"s": 3366,
"text": "package com.tutorialspoint.mapper;\n\nimport org.mapstruct.Mapper;\nimport org.mapstruct.Mapping;\nimport com.tutorialspoint.entity.StudentEntity;\nimport com.tutorialspoint.model.Student;\n\n@Mapper\npublic interface StudentMapper {\n Student getModelFromEntity(StudentEntity studentEntity);\n @Mapping(target=\"id\", source=\"id\")\n @Mapping(target=\"name\", source=\"name\")\n StudentEntity getEntityFromModel(Student student);\n}"
},
{
"code": null,
"e": 3840,
"s": 3788,
"text": "Update StudentMapperTest.java with following code −"
},
{
"code": null,
"e": 3863,
"s": 3840,
"text": "StudentMapperTest.java"
},
{
"code": null,
"e": 5001,
"s": 3863,
"text": "package com.tutorialspoint.mapping;\n\nimport static org.junit.jupiter.api.Assertions.assertEquals;\nimport org.junit.jupiter.api.Test;\nimport org.mapstruct.factory.Mappers;\nimport com.tutorialspoint.entity.StudentEntity;\nimport com.tutorialspoint.entity.SubjectEntity;\nimport com.tutorialspoint.mapper.StudentMapper;\nimport com.tutorialspoint.model.Student;\n\npublic class StudentMapperTest {\n private StudentMapper studentMapper = Mappers.getMapper(StudentMapper.class);\n \n @Test\n public void testEntityToModel() {\n StudentEntity entity = new StudentEntity();\n entity.setName(\"John\");\n entity.setId(1);\n Student model = studentMapper.getModelFromEntity(entity);\n assertEquals(entity.getName(), model.getName());\n assertEquals(entity.getId(), model.getId());\n }\n @Test\n public void testModelToEntity() {\n Student.Builder builder = Student.builder().id(1).name(\"John\");\n Student model = builder.create();\n StudentEntity entity = studentMapper.getEntityFromModel(model);\n assertEquals(entity.getName(), model.getName());\n assertEquals(entity.getId(), model.getId());\n }\n}"
},
{
"code": null,
"e": 5049,
"s": 5001,
"text": "Run the following command to test the mappings."
},
{
"code": null,
"e": 5065,
"s": 5049,
"text": "mvn clean test\n"
},
{
"code": null,
"e": 5112,
"s": 5065,
"text": "Once command is successful. Verify the output."
},
{
"code": null,
"e": 5760,
"s": 5112,
"text": "mvn clean test\n[INFO] Scanning for projects...\n...\n[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ mapping ---\n[INFO] Surefire report directory: \\mvn\\mapping\\target\\surefire-reports\n\n-------------------------------------------------------\n T E S T S\n-------------------------------------------------------\nRunning com.tutorialspoint.mapping.DeliveryAddressMapperTest\nTests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 sec\nRunning com.tutorialspoint.mapping.StudentMapperTest\nTests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec\n\nResults :\n\nTests run: 3, Failures: 0, Errors: 0, Skipped: 0\n...\n"
},
{
"code": null,
"e": 5767,
"s": 5760,
"text": " Print"
},
{
"code": null,
"e": 5778,
"s": 5767,
"text": " Add Notes"
}
] |
How to add a day to datetime field in MySQL query? | To add a day to datetime field, use the DATE_ADD() function. The syntax is as follows −
SELECT DATE_ADD(yourColumnName,interval yourIntegerValue day) as anyVariableName from yourTableName;
Let us first create a table −
mysql> create table AddOneDayDemo
   -> (
   -> YourDay datetime
   -> );
Query OK, 0 rows affected (1.37 sec)
Insert the current date with the help of curdate(), and after that use the date_add() function to add a day.
To insert a day into the table, the following is the query −
mysql> insert into AddOneDayDemo values(curdate());
Query OK, 1 row affected (0.17 sec)
Display records with the help of select statement. The query is as follows −
mysql> select *from AddOneDayDemo;
The following is the record with current date −
+---------------------+
| YourDay             |
+---------------------+
| 2018-11-27 00:00:00 |
+---------------------+
1 row in set (0.00 sec)
The query to add a day to current date is as follows −
mysql> select date_add(YourDay,interval 1 day) as yourDayafteraddingoneday from AddOneDayDemo;
The following is the output -
+--------------------------+
| yourDayafteraddingoneday |
+--------------------------+
| 2018-11-28 00:00:00 |
+--------------------------+
1 row in set (0.00 sec)
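As a quick client-side sanity check, the same one-day addition can be reproduced in Python with datetime. This is a hedged analogue of DATE_ADD, not part of the MySQL session:
from datetime import datetime, timedelta

your_day = datetime(2018, 11, 27)    # the value stored in YourDay
print(your_day + timedelta(days=1))  # 2018-11-28 00:00:00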
As both outputs show, DATE_ADD() returns a date one day after the stored current date. | [
{
"code": null,
"e": 1150,
"s": 1062,
"text": "To add a day to datetime field, use the DATE_ADD() function. The syntax is as follows −"
},
{
"code": null,
"e": 1251,
"s": 1150,
"text": "SELECT DATE_ADD(yourColumnName,interval yourIntegerValue day) as anyVariableName from yourTableName;"
},
{
"code": null,
"e": 1281,
"s": 1251,
"text": "Let us first create a table −"
},
{
"code": null,
"e": 1386,
"s": 1281,
"text": "mysql> create table AddOneDayDemo\n−> (\n −> YourDay datetime\n−> );\nQuery OK, 0 rows affected (1.37 sec)"
},
{
"code": null,
"e": 1486,
"s": 1386,
"text": "Insert current date with the help of curdate() and after that use date_add() function to add a day."
},
{
"code": null,
"e": 1547,
"s": 1486,
"text": "To insert a day into the table, the following is the query −"
},
{
"code": null,
"e": 1635,
"s": 1547,
"text": "mysql> insert into AddOneDayDemo values(curdate());\nQuery OK, 1 row affected (0.17 sec)"
},
{
"code": null,
"e": 1712,
"s": 1635,
"text": "Display records with the help of select statement. The query is as follows −"
},
{
"code": null,
"e": 1747,
"s": 1712,
"text": "mysql> select *from AddOneDayDemo;"
},
{
"code": null,
"e": 1795,
"s": 1747,
"text": "The following is the record with current date −"
},
{
"code": null,
"e": 1915,
"s": 1795,
"text": "| YourDay |\n+---------------------+\n| 2018-11-27 00:00:00 |\n+---------------------+\n1 row in set (0.00 sec)"
},
{
"code": null,
"e": 1970,
"s": 1915,
"text": "The query to add a day to current date is as follows −"
},
{
"code": null,
"e": 2065,
"s": 1970,
"text": "mysql> select date_add(YourDay,interval 1 day) as yourDayafteraddingoneday from AddOneDayDemo;"
},
{
"code": null,
"e": 2095,
"s": 2065,
"text": "The following is the output -"
},
{
"code": null,
"e": 2266,
"s": 2095,
"text": "+--------------------------+\n| yourDayafteraddingoneday |\n+--------------------------+\n| 2018-11-28 00:00:00 | \n+--------------------------+\n1 row in set (0.00 sec)"
},
{
"code": null,
"e": 2340,
"s": 2266,
"text": "The above output displays a date that is an addition to the current date."
}
] |
Clear text from textarea with selenium. | We can clear text from a text area with Selenium. We shall use the clear method to remove the content from a text area or an edit box. First we shall identify the text area with the help of any locator.
A text area is identified by the textarea tag name in the HTML code. Let us input some text inside the below text area, then clear the text.
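For readers using Selenium's Python bindings, the same flow looks as follows. This is a minimal sketch, assuming chromedriver is available on the PATH and the demo page is still reachable:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://www.uitestpractice.com/Students/Form")
# identify the text area and enter text
box = driver.find_element(By.ID, "comment")
box.send_keys("Selenium")
print("Value entered:", box.get_attribute("value"))
# clear the text area and read it back
box.clear()
print("Value after clear():", box.get_attribute("value"))
driver.quit()
The equivalent Java implementation: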
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
public class TextAreaClear{
public static void main(String[] args) {
System.setProperty("webdriver.chrome.driver","C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.get("http://www.uitestpractice.com/Students/Form");
// identify element
WebElement m = driver.findElement(By.id("comment"));
// enter text
m.sendKeys("Selenium");
// obtain value entered in text area
System.out.println("Value entered: " + m.getAttribute("value"));
// clear text area
m.clear();
// obtain value entered in text area after clear applied
System.out.println("Value after clear(): " + m.getAttribute("value"));
driver.quit();
}
} | [
{
"code": null,
"e": 1265,
"s": 1062,
"text": "We can clear text from a text area with Selenium. We shall use the clear method to remove the content from a text area or an edit box. First we shall identify the text area with the help of any locator."
},
{
"code": null,
"e": 1403,
"s": 1265,
"text": "A text area is identified with textarea tagname in the html code. Let us input some text inside the below text area, then clear the text."
},
{
"code": null,
"e": 2307,
"s": 1403,
"text": "import org.openqa.selenium.By;\nimport org.openqa.selenium.WebDriver;\nimport org.openqa.selenium.WebElement;\nimport org.openqa.selenium.chrome.ChromeDriver;\npublic class TextAreaClear{\n public static void main(String[] args) {\n System.setProperty(\"webdriver.chrome.driver\",\"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n WebDriver driver = new ChromeDriver();\n driver.get(\"http://www.uitestpractice.com/Students/Form\");\n // identify element\n WebElement m = driver.findElement(By.id(\"comment\"));\n // enter text\n m.sendKeys(\"Selenium\");\n // obtain value entered in text area\n System.out.println(\"Value entered: \" + m.getAttribute(\"value\"));\n // clear text area\n m.clear();\n // obtain value entered in text area after clear applied\n System.out.println(\"Value after clear(): \" + m.getAttribute(\"value\"));\n driver.quit();\n }\n}"
}
] |
How to calculate greatest common divisor of two or more numbers/arrays in JavaScript ? - GeeksforGeeks | 22 Apr, 2021
Given two or more numbers/array of numbers and the task is to find the GCD of the given numbers/array elements in JavaScript.
Examples:
Input : arr[] = {1, 2, 3}
Output : 1
Input : arr[] = {2, 4, 6, 8}
Output : 2
The GCD of three or more numbers equals the product of the prime factors common to all the numbers, but it can also be calculated by repeatedly taking the GCD of pairs of numbers.
gcd(a, b, c) = gcd(a, gcd(b, c))
= gcd(gcd(a, b), c)
= gcd(gcd(a, c), b)
For an array of elements, we do the following. We also keep checking the intermediate result: if it becomes 1 at any step, we can return 1 immediately, since gcd(1, x) = 1.
result = arr[0]
For i = 1 to n-1
result = GCD(result, arr[i])
Below is the implementation of the above approach.
Code Example:
Javascript
<script> // Function to return gcd of a and b function gcd(a, b) { if (a == 0) return b; return gcd(b % a, a); } // Function to find gcd of array // of numbers function findGCD(arr, n) { let result = arr[0]; for (let i = 1; i < n; i++) { result = gcd(arr[i], result); if (result == 1) { return 1; } } return result; } // Driver code let arr = [2, 4, 6, 8, 16]; let n = arr.length; document.write(findGCD(arr, n) + "<br>"); </script>
Output:
2
Time Complexity: O(N * log(M)), where M is the smallest element of the array and N is the length of the array.
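For comparison, the same array GCD can be computed very compactly in Python by folding math.gcd across the array with functools.reduce. This is a sketch of an alternative route, not part of the JavaScript lesson:
from functools import reduce
from math import gcd

arr = [2, 4, 6, 8, 16]
# reduce applies gcd pairwise: gcd(gcd(gcd(2, 4), 6), ...)
print(reduce(gcd, arr))  # 2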
javascript-math
JavaScript-Methods
JavaScript-Questions
Picked
JavaScript
Web Technologies
| [
{
"code": null,
"e": 37995,
"s": 37967,
"text": "\n22 Apr, 2021"
},
{
"code": null,
"e": 38121,
"s": 37995,
"text": "Given two or more numbers/array of numbers and the task is to find the GCD of the given numbers/array elements in JavaScript."
},
{
"code": null,
"e": 38131,
"s": 38121,
"text": "Examples:"
},
{
"code": null,
"e": 38211,
"s": 38131,
"text": "Input : arr[] = {1, 2, 3}\nOutput : 1\n\nInput : arr[] = {2, 4, 6, 8}\nOutput : 2"
},
{
"code": null,
"e": 38391,
"s": 38211,
"text": "The GCD of three or more numbers equals the product of the prime factors common to all the numbers, but it can also be calculated by repeatedly taking the GCD of pairs of numbers."
},
{
"code": null,
"e": 38488,
"s": 38391,
"text": "gcd(a, b, c) = gcd(a, gcd(b, c))\n = gcd(gcd(a, b), c)\n = gcd(gcd(a, c), b)"
},
{
"code": null,
"e": 38645,
"s": 38488,
"text": "For an array of elements, we do the following. We will also check for the result if the result at any step becomes 1 we will just return 1 as gcd(1, x) = 1."
},
{
"code": null,
"e": 38709,
"s": 38645,
"text": "result = arr[0]\nFor i = 1 to n-1\n result = GCD(result, arr[i])"
},
{
"code": null,
"e": 38760,
"s": 38709,
"text": "Below is the implementation of the above approach."
},
{
"code": null,
"e": 38774,
"s": 38760,
"text": "Code Example:"
},
{
"code": null,
"e": 38785,
"s": 38774,
"text": "Javascript"
},
{
"code": "<script> // Function to return gcd of a and b function gcd(a, b) { if (a == 0) return b; return gcd(b % a, a); } // Function to find gcd of array // of numbers function findGCD(arr, n) { let result = arr[0]; for (let i = 1; i < n; i++) { result = gcd(arr[i], result); if (result == 1) { return 1; } } return result; } // Driver code let arr = [2, 4, 6, 8, 16]; let n = arr.length; document.write(findGCD(arr, n) + \"<br>\"); </script>",
"e": 39361,
"s": 38785,
"text": null
},
{
"code": null,
"e": 39370,
"s": 39361,
"text": "Output: "
},
{
"code": null,
"e": 39372,
"s": 39370,
"text": "2"
},
{
"code": null,
"e": 39483,
"s": 39372,
"text": "Time Complexity: O(N * log(M)), where M is the smallest element of the array and N is the length of the array."
},
{
"code": null,
"e": 39499,
"s": 39483,
"text": "javascript-math"
},
{
"code": null,
"e": 39518,
"s": 39499,
"text": "JavaScript-Methods"
},
{
"code": null,
"e": 39539,
"s": 39518,
"text": "JavaScript-Questions"
},
{
"code": null,
"e": 39546,
"s": 39539,
"text": "Picked"
},
{
"code": null,
"e": 39557,
"s": 39546,
"text": "JavaScript"
},
{
"code": null,
"e": 39574,
"s": 39557,
"text": "Web Technologies"
},
{
"code": null,
"e": 39672,
"s": 39574,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 39717,
"s": 39672,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 39778,
"s": 39717,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 39847,
"s": 39778,
"text": "How to calculate the number of days between two dates in javascript?"
},
{
"code": null,
"e": 39919,
"s": 39847,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 39971,
"s": 39919,
"text": "How to append HTML code to a div using JavaScript ?"
},
{
"code": null,
"e": 40013,
"s": 39971,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 40046,
"s": 40013,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 40089,
"s": 40046,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 40151,
"s": 40089,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
}
] |
Java Program for Cutting a Rod | DP-13 - GeeksforGeeks | 25 Jun, 2021
Given a rod of length n inches and an array of prices that contains prices of all pieces of size smaller than n. Determine the maximum value obtainable by cutting up the rod and selling the pieces. For example, if length of the rod is 8 and the values of different pieces are given as following, then the maximum obtainable value is 22 (by cutting in two pieces of lengths 2 and 6)
length | 1 2 3 4 5 6 7 8
--------------------------------------------
price | 1 5 8 9 10 17 17 20
And if the prices are as following, then the maximum obtainable value is 24 (by cutting in eight pieces of length 1)
length | 1 2 3 4 5 6 7 8
--------------------------------------------
price | 3 5 8 9 10 17 17 20
Following is a simple recursive implementation of the Rod Cutting problem. It follows the natural recursive structure of the problem: cutRod(n) = max(price[i] + cutRod(n - i - 1)) over all i in {0, 1, ..., n - 1}.
Java
// // A Naive recursive solution for Rod cutting problemclass RodCutting { /* Returns the best obtainable price for a rod of length n and price[] as prices of different pieces */ static int cutRod(int price[], int n) { if (n <= 0) return 0; int max_val = Integer.MIN_VALUE; // Recursively cut the rod in different pieces and // compare different configurations for (int i = 0; i < n; i++) max_val = Math.max(max_val, price[i] + cutRod(price, n - i - 1)); return max_val; } /* Driver program to test above functions */ public static void main(String args[]) { int arr[] = new int[] { 1, 5, 8, 9, 10, 17, 17, 20 }; int size = arr.length; System.out.println("Maximum Obtainable Value is " + cutRod(arr, size)); }}/* This code is contributed by Rajat Mishra */
Maximum Obtainable Value is 22
Considering the above implementation, following is recursion tree for a Rod of length 4.
cR() ---> cutRod()

                           cR(4)
                /        /      \        \
               /        /        \        \
           cR(3)      cR(2)     cR(1)   cR(0)
          /  |  \     /  \        |
         /   |   \   /    \       |
     cR(2) cR(1) cR(0) cR(1) cR(0) cR(0)
     /  \    |         |
    /    \   |         |
  cR(1) cR(0) cR(0)   cR(0)
   /
  /
cR(0)
In the above partial recursion tree, cR(2) is being solved twice. We can see that there are many subproblems which are solved again and again. Since same subproblems are called again, this problem has Overlapping Subproblems property. So the Rod Cutting problem has both properties (see this and this) of a dynamic programming problem. Like other typical Dynamic Programming(DP) problems, recomputations of same subproblems can be avoided by constructing a temporary array val[] in bottom up manner.
Java
// A Dynamic Programming solution for Rod cutting problemclass RodCutting { /* Returns the best obtainable price for a rod of length n and price[] as prices of different pieces */ static int cutRod(int price[], int n) { int val[] = new int[n + 1]; val[0] = 0; // Build the table val[] in bottom up manner and return // the last entry from the table for (int i = 1; i <= n; i++) { int max_val = Integer.MIN_VALUE; for (int j = 0; j < i; j++) max_val = Math.max(max_val, price[j] + val[i - j - 1]); val[i] = max_val; } return val[n]; } /* Driver program to test above functions */ public static void main(String args[]) { int arr[] = new int[] { 1, 5, 8, 9, 10, 17, 17, 20 }; int size = arr.length; System.out.println("Maximum Obtainable Value is " + cutRod(arr, size)); }}/* This code is contributed by Rajat Mishra */
Maximum Obtainable Value is 22
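The same recurrence can also be memoized top-down. Below is a hedged Python sketch of that alternative to the bottom-up table; it avoids the repeated cR(2)-style subcalls noted above:
from functools import lru_cache

def cut_rod(price):
    @lru_cache(maxsize=None)
    def best(n):
        if n == 0:
            return 0
        # try every first-piece length i + 1, priced at price[i]
        return max(price[i] + best(n - i - 1) for i in range(n))
    return best(len(price))

print(cut_rod((1, 5, 8, 9, 10, 17, 17, 20)))  # Maximum Obtainable Value is 22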
Please refer complete article on Cutting a Rod | DP-13 for more details!
anikakapoor
Java Programs
| [
{
"code": null,
"e": 24824,
"s": 24796,
"text": "\n25 Jun, 2021"
},
{
"code": null,
"e": 25207,
"s": 24824,
"text": "Given a rod of length n inches and an array of prices that contains prices of all pieces of size smaller than n. Determine the maximum value obtainable by cutting up the rod and selling the pieces. For example, if length of the rod is 8 and the values of different pieces are given as following, then the maximum obtainable value is 22 (by cutting in two pieces of lengths 2 and 6) "
},
{
"code": null,
"e": 25336,
"s": 25207,
"text": "length | 1 2 3 4 5 6 7 8 \n--------------------------------------------\nprice | 1 5 8 9 10 17 17 20"
},
{
"code": null,
"e": 25453,
"s": 25336,
"text": "And if the prices are as following, then the maximum obtainable value is 24 (by cutting in eight pieces of length 1)"
},
{
"code": null,
"e": 25582,
"s": 25453,
"text": "length | 1 2 3 4 5 6 7 8 \n--------------------------------------------\nprice | 3 5 8 9 10 17 17 20"
},
{
"code": null,
"e": 25667,
"s": 25582,
"text": "Recommended: Please solve it on “PRACTICE ” first, before moving on to the solution."
},
{
"code": null,
"e": 25817,
"s": 25669,
"text": "Following is simple recursive implementation of the Rod Cutting problem. The implementation simply follows the recursive structure mentioned above."
},
{
"code": null,
"e": 25822,
"s": 25817,
"text": "Java"
},
{
"code": "// // A Naive recursive solution for Rod cutting problemclass RodCutting { /* Returns the best obtainable price for a rod of length n and price[] as prices of different pieces */ static int cutRod(int price[], int n) { if (n <= 0) return 0; int max_val = Integer.MIN_VALUE; // Recursively cut the rod in different pieces and // compare different configurations for (int i = 0; i < n; i++) max_val = Math.max(max_val, price[i] + cutRod(price, n - i - 1)); return max_val; } /* Driver program to test above functions */ public static void main(String args[]) { int arr[] = new int[] { 1, 5, 8, 9, 10, 17, 17, 20 }; int size = arr.length; System.out.println(\"Maximum Obtainable Value is \" + cutRod(arr, size)); }}/* This code is contributed by Rajat Mishra */",
"e": 26727,
"s": 25822,
"text": null
},
{
"code": null,
"e": 26758,
"s": 26727,
"text": "Maximum Obtainable Value is 22"
},
{
"code": null,
"e": 26849,
"s": 26760,
"text": "Considering the above implementation, following is recursion tree for a Rod of length 4."
},
{
"code": null,
"e": 27256,
"s": 26849,
"text": "cR() ---> cutRod() \n\n cR(4)\n / / \n / / \n cR(3) cR(2) cR(1) cR(0)\n / | / |\n / | / | \n cR(2) cR(1) cR(0) cR(1) cR(0) cR(0)\n / | |\n / | | \n cR(1) cR(0) cR(0) cR(0)\n /\n /\nCR(0)"
},
{
"code": null,
"e": 27756,
"s": 27256,
"text": "In the above partial recursion tree, cR(2) is being solved twice. We can see that there are many subproblems which are solved again and again. Since same subproblems are called again, this problem has Overlapping Subproblems property. So the Rod Cutting problem has both properties (see this and this) of a dynamic programming problem. Like other typical Dynamic Programming(DP) problems, recomputations of same subproblems can be avoided by constructing a temporary array val[] in bottom up manner."
},
{
"code": null,
"e": 27761,
"s": 27756,
"text": "Java"
},
{
"code": "// A Dynamic Programming solution for Rod cutting problemclass RodCutting { /* Returns the best obtainable price for a rod of length n and price[] as prices of different pieces */ static int cutRod(int price[], int n) { int val[] = new int[n + 1]; val[0] = 0; // Build the table val[] in bottom up manner and return // the last entry from the table for (int i = 1; i <= n; i++) { int max_val = Integer.MIN_VALUE; for (int j = 0; j < i; j++) max_val = Math.max(max_val, price[j] + val[i - j - 1]); val[i] = max_val; } return val[n]; } /* Driver program to test above functions */ public static void main(String args[]) { int arr[] = new int[] { 1, 5, 8, 9, 10, 17, 17, 20 }; int size = arr.length; System.out.println(\"Maximum Obtainable Value is \" + cutRod(arr, size)); }}/* This code is contributed by Rajat Mishra */",
"e": 28764,
"s": 27761,
"text": null
},
{
"code": null,
"e": 28795,
"s": 28764,
"text": "Maximum Obtainable Value is 22"
},
{
"code": null,
"e": 28871,
"s": 28797,
"text": "Please refer complete article on Cutting a Rod | DP-13 for more details! "
},
{
"code": null,
"e": 28883,
"s": 28871,
"text": "anikakapoor"
},
{
"code": null,
"e": 28897,
"s": 28883,
"text": "Java Programs"
},
{
"code": null,
"e": 28995,
"s": 28897,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29004,
"s": 28995,
"text": "Comments"
},
{
"code": null,
"e": 29017,
"s": 29004,
"text": "Old Comments"
},
{
"code": null,
"e": 29049,
"s": 29017,
"text": "How to Iterate HashMap in Java?"
},
{
"code": null,
"e": 29097,
"s": 29049,
"text": "Iterate Over the Characters of a String in Java"
},
{
"code": null,
"e": 29148,
"s": 29097,
"text": "How to Get Elements By Index from HashSet in Java?"
},
{
"code": null,
"e": 29182,
"s": 29148,
"text": "Java Program to Write into a File"
},
{
"code": null,
"e": 29217,
"s": 29182,
"text": "How to Iterate LinkedList in Java?"
},
{
"code": null,
"e": 29269,
"s": 29217,
"text": "Java Program to Sort Names in an Alphabetical Order"
},
{
"code": null,
"e": 29312,
"s": 29269,
"text": "Create Password Protected Zip File in Java"
},
{
"code": null,
"e": 29356,
"s": 29312,
"text": "How to Replace a Element in Java ArrayList?"
},
{
"code": null,
"e": 29425,
"s": 29356,
"text": "How to Apply Different Styles to a Cell in a Spreadsheet using Java?"
}
] |
How to Use “NOT IN” Filter in Pandas? - GeeksforGeeks | 22 Nov, 2021
In this article, we will discuss the NOT IN filter in Pandas. NOT IN is a membership test used to check whether data is present in a dataframe: the check returns true if the value is not present, otherwise false.
Python3
# import pandas moduleimport pandas as pd # create dataframedata1 = pd.DataFrame({'name': ['sravan', 'harsha', 'jyothika'], 'subject1': ['python', 'R', 'php'], 'marks': [96, 89, 90]}, index=[0, 1, 2]) # displaydata1
Output:
sample dataframe
We use the isin() operator to match the given values in the dataframe (the values are taken from a list), and negate it with ~ so that only the rows whose column value is not present in that list are kept.
Syntax: dataframe[~dataframe[column_name].isin(list)]
where
dataframe is the input dataframe
column_name is the column that is filtered
list is the list of values to be removed in that column
Python3
# import pandas moduleimport pandas as pd # create dataframedata1 = pd.DataFrame({'name': ['sravan', 'harsha', 'jyothika'], 'subject1': ['python', 'R', 'php'], 'marks': [96, 89, 90]}, index=[0, 1, 2]) # consider a listlist1 = ['harsha', 'jyothika'] # filter in name columnprint(data1[~data1['name'].isin(list1)])print("============") # consider a listlist2 = ['R'] # filter in name columnprint(data1[~data1['subject1'].isin(list2)])print("============") # consider a listlist3 = [96, 89] # filter in name columnprint(data1[~data1['marks'].isin(list3)])
Output:
NOT IN Filter with One Column
Now we can filter on more than one column by using the any() function. It checks whether the value exists in any of the given columns; the columns are given inside [[ ]], separated by a comma.
Syntax: dataframe[~dataframe[[columns]].isin(list).any(axis=1)]
Python3
# import pandas moduleimport pandas as pd # create dataframedata1 = pd.DataFrame({'name': ['sravan', 'harsha', 'jyothika'], 'subject1': ['python', 'R', 'php'], 'marks': [96, 89, 90]}, index=[0, 1, 2]) # consider a listlist1 = ['harsha', 'jyothika', 96] # filter in name and marks columnprint(data1[~data1[['name', 'marks']].isin(list1).any(axis=1)])print("============") # consider a listlist2 = ['R', 'sravan'] # filter in name and subject1 columnprint(data1[~data1[['subject1', 'name']].isin(list2).any(axis=1)])
Output:
NOT IN Filter with Multiple Column
This is similar to the above functionality.
Syntax: dataframe[~numpy.isin(dataframe[‘column’], list)]
Python3
# import pandas moduleimport numpy as npimport pandas as pd # create dataframedata1 = pd.DataFrame({'name': ['sravan', 'harsha', 'jyothika'], 'subject1': ['python', 'R', 'php'], 'marks': [96, 89, 90]}, index=[0, 1, 2]) # consider a listlist1 = ['harsha', 'jyothika', 96] # filter in name columndata1[~np.isin(data1['name'], list1)]
Output:
numpy with NOT IN filter
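One more route worth knowing, not covered above: DataFrame.query() also understands not in, referencing a local Python list with the @ prefix:
import pandas as pd

data1 = pd.DataFrame({'name': ['sravan', 'harsha', 'jyothika'],
                      'subject1': ['python', 'R', 'php'],
                      'marks': [96, 89, 90]})

list1 = ['harsha', 'jyothika']
# keeps only the rows whose name is NOT in list1
print(data1.query("name not in @list1"))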
Picked
Python-pandas
Python
| [
{
"code": null,
"e": 25647,
"s": 25619,
"text": "\n22 Nov, 2021"
},
{
"code": null,
"e": 25866,
"s": 25647,
"text": "In this article, we will discuss NOT IN filter in pandas, NOT IN is a membership operator used to check whether the data is present in dataframe or not. It will return true if the value is not present, otherwise false"
},
{
"code": null,
"e": 25874,
"s": 25866,
"text": "Python3"
},
{
"code": "# import pandas moduleimport pandas as pd # create dataframedata1 = pd.DataFrame({'name': ['sravan', 'harsha', 'jyothika'], 'subject1': ['python', 'R', 'php'], 'marks': [96, 89, 90]}, index=[0, 1, 2]) # displaydata1",
"e": 26134,
"s": 25874,
"text": null
},
{
"code": null,
"e": 26142,
"s": 26134,
"text": "Output:"
},
{
"code": null,
"e": 26159,
"s": 26142,
"text": "sample dataframe"
},
{
"code": null,
"e": 26356,
"s": 26159,
"text": "We are using isin() operator to get the given values in the dataframe and those values are taken from the list, so we are filtering the dataframe one column values which are present in that list."
},
{
"code": null,
"e": 26410,
"s": 26356,
"text": "Syntax: dataframe[~dataframe[column_name].isin(list)]"
},
{
"code": null,
"e": 26416,
"s": 26410,
"text": "where"
},
{
"code": null,
"e": 26449,
"s": 26416,
"text": "dataframe is the input dataframe"
},
{
"code": null,
"e": 26492,
"s": 26449,
"text": "column_name is the column that is filtered"
},
{
"code": null,
"e": 26548,
"s": 26492,
"text": "list is the list of values to be removed in that column"
},
{
"code": null,
"e": 26556,
"s": 26548,
"text": "Python3"
},
{
"code": "# import pandas moduleimport pandas as pd # create dataframedata1 = pd.DataFrame({'name': ['sravan', 'harsha', 'jyothika'], 'subject1': ['python', 'R', 'php'], 'marks': [96, 89, 90]}, index=[0, 1, 2]) # consider a listlist1 = ['harsha', 'jyothika'] # filter in name columnprint(data1[~data1['name'].isin(list1)])print(\"============\") # consider a listlist2 = ['R'] # filter in name columnprint(data1[~data1['subject1'].isin(list2)])print(\"============\") # consider a listlist3 = [96, 89] # filter in name columnprint(data1[~data1['marks'].isin(list3)])",
"e": 27160,
"s": 26556,
"text": null
},
{
"code": null,
"e": 27168,
"s": 27160,
"text": "Output:"
},
{
"code": null,
"e": 27198,
"s": 27168,
"text": "NOT IN Filter with One Column"
},
{
"code": null,
"e": 27384,
"s": 27198,
"text": "Now we can filter in more than one column by using any() function. This function will check the value that exists in any given column and columns are given in [[]] separated by a comma."
},
{
"code": null,
"e": 27448,
"s": 27384,
"text": "Syntax: dataframe[~dataframe[[columns]].isin(list).any(axis=1)]"
},
{
"code": null,
"e": 27456,
"s": 27448,
"text": "Python3"
},
{
"code": "# import pandas moduleimport pandas as pd # create dataframedata1 = pd.DataFrame({'name': ['sravan', 'harsha', 'jyothika'], 'subject1': ['python', 'R', 'php'], 'marks': [96, 89, 90]}, index=[0, 1, 2]) # consider a listlist1 = ['harsha', 'jyothika', 96] # filter in name and marks columnprint(data1[~data1[['name', 'marks']].isin(list1).any(axis=1)])print(\"============\") # consider a listlist2 = ['R', 'sravan'] # filter in name and subject1 columnprint(data1[~data1[['subject1', 'name']].isin(list2).any(axis=1)])",
"e": 28018,
"s": 27456,
"text": null
},
{
"code": null,
"e": 28026,
"s": 28018,
"text": "Output:"
},
{
"code": null,
"e": 28062,
"s": 28026,
"text": " NOT IN Filter with Multiple Column"
},
{
"code": null,
"e": 28106,
"s": 28062,
"text": "This is similar to the above functionality."
},
{
"code": null,
"e": 28164,
"s": 28106,
"text": "Syntax: dataframe[~numpy.isin(dataframe[‘column’], list)]"
},
{
"code": null,
"e": 28172,
"s": 28164,
"text": "Python3"
},
{
"code": "# import pandas moduleimport numpy as npimport pandas as pd # create dataframedata1 = pd.DataFrame({'name': ['sravan', 'harsha', 'jyothika'], 'subject1': ['python', 'R', 'php'], 'marks': [96, 89, 90]}, index=[0, 1, 2]) # consider a listlist1 = ['harsha', 'jyothika', 96] # filter in name columndata1[~np.isin(data1['name'], list1)]",
"e": 28549,
"s": 28172,
"text": null
},
{
"code": null,
"e": 28557,
"s": 28549,
"text": "Output:"
},
{
"code": null,
"e": 28582,
"s": 28557,
"text": "numpy with NOT IN filter"
},
{
"code": null,
"e": 28589,
"s": 28582,
"text": "Picked"
},
{
"code": null,
"e": 28603,
"s": 28589,
"text": "Python-pandas"
},
{
"code": null,
"e": 28610,
"s": 28603,
"text": "Python"
},
{
"code": null,
"e": 28708,
"s": 28610,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28740,
"s": 28708,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 28782,
"s": 28740,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 28824,
"s": 28782,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 28880,
"s": 28824,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 28907,
"s": 28880,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 28938,
"s": 28907,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 28967,
"s": 28938,
"text": "Create a directory in Python"
},
{
"code": null,
"e": 29003,
"s": 28967,
"text": "Python | Pandas dataframe.groupby()"
},
{
"code": null,
"e": 29025,
"s": 29003,
"text": "Defaultdict in Python"
}
] |
Bitwise OR of all unordered pairs from a given array - GeeksforGeeks | 23 Apr, 2021
Given an array arr[] of size N, the task is to find the Bitwise OR of all possible unordered pairs from the given array.
Examples:
Input: arr[] = {1, 5, 3, 7} Output: 7 Explanation: All possible unordered pairs are (1, 5), (1, 3), (1, 7), (5, 3), (5, 7), (3, 7) Bitwise OR of all possible pairs are = { ( 1 | 5 ) | ( 1 | 3 ) | ( 1 | 7 ) | ( 5 | 3 ) | ( 5 | 7 ) | ( 3 | 7 ) } Therefore, the required output is 7.
Input: arr[] = {4, 5, 12, 15} Output: 15
Approach: The simplest approach to solve this problem is to traverse the array and generate all possible pairs of the given array. Finally, print the Bitwise OR accumulated over the elements of all possible pairs. Follow the steps below to solve the problem:
Initialize a variable, say totalOR, to store Bit-wise OR of each element of all possible pairs.
Traverse the given array and generate all possible pairs (arr[i], arr[j]) from the given array. For each pair (arr[i], arr[j]), update the value of totalOR = (totalOR | arr[i] | arr[j]).
Finally, print the value of totalOR.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program to implement// the above approach #include <bits/stdc++.h>using namespace std; // Function to find the Bitwise OR of// all possible pairs from the arrayint TotalBitwiseORPair(int arr[], int N){ // Stores bitwise OR of all // possible pairs from arr[] int totalOR = 0; // Traverse the array and calculate // bitwise OR of all possible pairs for (int i = 0; i < N; i++) { for (int j = i + 1; j < N; j++) { // Update totalOR totalOR |= (arr[i] | arr[j]); } } // Return Bitwise OR of all // possible pairs from arr[] return totalOR;} // Driver Codeint main(){ int arr[] = { 4, 5, 12, 15 }; int N = sizeof(arr) / sizeof(arr[0]); cout << TotalBitwiseORPair(arr, N);}
// Java program to implement// the above approachimport java.util.*;class GFG{ // Function to find the Bitwise OR of// all possible pairs from the arraystatic int TotalBitwiseORPair(int arr[], int N){ // Stores bitwise OR of all // possible pairs from arr[] int totalOR = 0; // Traverse the array and // calculate bitwise OR of // all possible pairs for (int i = 0; i < N; i++) { for (int j = i + 1; j < N; j++) { // Update totalOR totalOR |= (arr[i] | arr[j]); } } // Return Bitwise OR of all // possible pairs from arr[] return totalOR;} // Driver Codepublic static void main(String[] args){ int arr[] = {4, 5, 12, 15}; int N = arr.length; System.out.print(TotalBitwiseORPair(arr, N));}} // This code is contributed by sanjoy_62
# Python3 program to implement# the above approach # Function to find the Bitwise# OR of all possible pairs# from the arraydef TotalBitwiseORPair(arr, N): # Stores bitwise OR of all # possible pairs from arr[] totalOR = 0 # Traverse the array and # calculate bitwise OR of # all possible pairs for i in range(N): for j in range(i + 1, N): # Update totalOR totalOR |= (arr[i] | arr[j]) # Return Bitwise OR of all # possible pairs from arr[] return totalOR # Driver Codeif __name__ == '__main__': arr = [4, 5, 12, 15] N = len(arr) print(TotalBitwiseORPair(arr, N)) # This code is contributed by Mohit Kumar 29
// C# program to implement// the above approach using System; class GFG{ // Function to find the Bitwise OR of// all possible pairs from the arraystatic int TotalBitwiseORPair(int[] arr, int N){ // Stores bitwise OR of all // possible pairs from arr[] int totalOR = 0; // Traverse the array and // calculate bitwise OR of // all possible pairs for(int i = 0; i < N; i++) { for(int j = i + 1; j < N; j++) { // Update totalOR totalOR |= (arr[i] | arr[j]); } } // Return Bitwise OR of all // possible pairs from arr[] return totalOR;} // Driver Codepublic static void Main(){ int[] arr = { 4, 5, 12, 15 }; int N = arr.Length; Console.WriteLine(TotalBitwiseORPair(arr, N));}} // This code is contributed by susmitakundugoaldanga
<script> // JavaScript program to implement// the above approach // Function to find the Bitwise OR of// all possible pairs from the arrayfunction TotalBitwiseORPair(arr, N){ // Stores bitwise OR of all // possible pairs from arr[] let totalOR = 0; // Traverse the array and calculate // bitwise OR of all possible pairs for (let i = 0; i < N; i++) { for (let j = i + 1; j < N; j++) { // Update totalOR totalOR |= (arr[i] | arr[j]); } } // Return Bitwise OR of all // possible pairs from arr[] return totalOR;} // Driver Code let arr = [ 4, 5, 12, 15 ]; let N = arr.length; document.write(TotalBitwiseORPair(arr, N)); // This code is contributed by Surbhi Tyagi </script>
15
Time Complexity: O(N2)Auxiliary Space: O(1)
Efficient Approach: To optimize the above approach the idea is based on the following observations:
1 | 1 | 1 | ..... (n times) = 1
0 | 0 | 0 | ..... (n times) = 0
Therefore, (a | a | a | .... (n times)) = a
Since every array element occurs in at least one pair, the Bitwise OR over all pairs reduces to the Bitwise OR of all the array elements.
Follow the steps below to solve the problem:
Initialize a variable, say totalOR to store the bitwise OR of all possible unordered pairs of the array.
Traverse the array and update the value of totalOR = (totalOR | arr[i]).
Finally, print the value of totalOR.
Below is the implementation of the above approach
C++
Java
Python3
C#
Javascript
// C++ program to implement// the above approach #include <bits/stdc++.h>using namespace std; // Function to find the bitwise OR of// all possible pairs of the arrayint TotalBitwiseORPair(int arr[], int N){ // Stores bitwise OR of all // possible pairs of arr[] int totalOR = 0; // Traverse the array arr[] for (int i = 0; i < N; i++) { // Update totalOR totalOR |= arr[i]; } // Return bitwise OR of all // possible pairs of arr[] return totalOR;} // Driver Codeint main(){ int arr[] = { 4, 5, 12, 15 }; int N = sizeof(arr) / sizeof(arr[0]); cout << TotalBitwiseORPair(arr, N);}
// Java program to implement// the above approachimport java.util.*; class GFG{ // Function to find the bitwise OR of// all possible pairs of the arraystatic int TotalBitwiseORPair(int arr[], int N){ // Stores bitwise OR of all // possible pairs of arr[] int totalOR = 0; // Traverse the array arr[] for(int i = 0; i < N; i++) { // Update totalOR totalOR |= arr[i]; } // Return bitwise OR of all // possible pairs of arr[] return totalOR;} // Driver Codepublic static void main(String[] args){ int arr[] = { 4, 5, 12, 15 }; int N = arr.length; System.out.print(TotalBitwiseORPair(arr, N));}} // This code is contributed by gauravrajput1
# Python program to implement# the above approach # Function to find the bitwise OR of# all possible pairs of the arraydef TotalBitwiseORPair(arr, N): # Stores bitwise OR of all # possible pairs of arr totalOR = 0; # Traverse the array arr for i in range(N): # Update totalOR totalOR |= arr[i]; # Return bitwise OR of all # possible pairs of arr return totalOR; # Driver Codeif __name__ == '__main__': arr = [4, 5, 12, 15]; N = len(arr); print(TotalBitwiseORPair(arr, N)); # This code is contributed by shikhasingrajput
// C# program to implement// the above approachusing System; class GFG{ // Function to find the bitwise OR of// all possible pairs of the arraystatic int TotalBitwiseORPair(int []arr, int N){ // Stores bitwise OR of all // possible pairs of []arr int totalOR = 0; // Traverse the array []arr for(int i = 0; i < N; i++) { // Update totalOR totalOR |= arr[i]; } // Return bitwise OR of all // possible pairs of []arr return totalOR;} // Driver Codepublic static void Main(String[] args){ int []arr = { 4, 5, 12, 15 }; int N = arr.Length; Console.Write(TotalBitwiseORPair(arr, N));}} // This code is contributed by Princi Singh
<script> // JavaScript program to implement// the above approach // Function to find the bitwise OR of// all possible pairs of the arrayfunction TotalBitwiseORPair(arr, N){ // Stores bitwise OR of all // possible pairs of arr[] let totalOR = 0; // Traverse the array arr[] for(let i = 0; i < N; i++) { // Update totalOR totalOR |= arr[i]; } // Return bitwise OR of all // possible pairs of arr[] return totalOR;} // Driver Code let arr = [ 4, 5, 12, 15 ]; let N = arr.length; document.write(TotalBitwiseORPair(arr, N)); </script>
15
Time Complexity: O(N)Auxiliary Space: O(1)
mohit kumar 29
sanjoy_62
susmitakundugoaldanga
GauravRajput1
princi singh
shikhasingrajput
surbhityagi15
splevel62
Bitwise-OR
Arrays
Bit Magic
Mathematical
| [
{
"code": null,
"e": 26065,
"s": 26037,
"text": "\n23 Apr, 2021"
},
{
"code": null,
"e": 26187,
"s": 26065,
"text": "Given an array arr[] of size N, the task is to find the Bitwise XOR of all possible unordered pairs from the given array."
},
{
"code": null,
"e": 26197,
"s": 26187,
"text": "Examples:"
},
{
"code": null,
"e": 26478,
"s": 26197,
"text": "Input: arr[] = {1, 5, 3, 7} Output: 7 Explanation: All possible unordered pairs are (1, 5), (1, 3), (1, 7), (5, 3), (5, 7), (3, 7) Bitwise OR of all possible pairs are = { ( 1 | 5 ) | ( 1 | 3 ) | ( 1 | 7 ) | ( 5 | 3 ) | ( 5 | 7 ) | ( 3 | 7 ) } Therefore, the required output is 7."
},
{
"code": null,
"e": 26519,
"s": 26478,
"text": "Input: arr[] = {4, 5, 12, 15} Output: 15"
},
{
"code": null,
"e": 26783,
"s": 26519,
"text": "Approach: The simplest approach to solve this problem is to traverse the array and generate all possible pairs of the given array. Finally, print the Bitwise OR of each element of all possible pairs of the given array. Follow the steps below to solve the problem:"
},
{
"code": null,
"e": 26879,
"s": 26783,
"text": "Initialize a variable, say totalOR, to store Bit-wise OR of each element of all possible pairs."
},
{
"code": null,
"e": 27068,
"s": 26879,
"text": "Traverse the given array and generate all possible pairs(arr[i], arr[j]) from the given array and for each pair (arr[i], arr[j]), update the value of totalOR = (totalOR | arr[i] | arr[j])."
},
{
"code": null,
"e": 27105,
"s": 27068,
"text": "Finally, print the value of totalOR."
},
{
"code": null,
"e": 27156,
"s": 27105,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 27160,
"s": 27156,
"text": "C++"
},
{
"code": null,
"e": 27165,
"s": 27160,
"text": "Java"
},
{
"code": null,
"e": 27173,
"s": 27165,
"text": "Python3"
},
{
"code": null,
"e": 27176,
"s": 27173,
"text": "C#"
},
{
"code": null,
"e": 27187,
"s": 27176,
"text": "Javascript"
},
{
"code": "// C++ program to implement// the above approach #include <bits/stdc++.h>using namespace std; // Function to find the Bitwise OR of// all possible pairs from the arrayint TotalBitwiseORPair(int arr[], int N){ // Stores bitwise OR of all // possible pairs from arr[] int totalOR = 0; // Traverse the array and calculate // bitwise OR of all possible pairs for (int i = 0; i < N; i++) { for (int j = i + 1; j < N; j++) { // Update totalOR totalOR |= (arr[i] | arr[j]); } } // Return Bitwise OR of all // possible pairs from arr[] return totalOR;} // Driver Codeint main(){ int arr[] = { 4, 5, 12, 15 }; int N = sizeof(arr) / sizeof(arr[0]); cout << TotalBitwiseORPair(arr, N);}",
"e": 27954,
"s": 27187,
"text": null
},
{
"code": "// Java program to implement// the above approachimport java.util.*;class GFG{ // Function to find the Bitwise OR of// all possible pairs from the arraystatic int TotalBitwiseORPair(int arr[], int N){ // Stores bitwise OR of all // possible pairs from arr[] int totalOR = 0; // Traverse the array and // calculate bitwise OR of // all possible pairs for (int i = 0; i < N; i++) { for (int j = i + 1; j < N; j++) { // Update totalOR totalOR |= (arr[i] | arr[j]); } } // Return Bitwise OR of all // possible pairs from arr[] return totalOR;} // Driver Codepublic static void main(String[] args){ int arr[] = {4, 5, 12, 15}; int N = arr.length; System.out.print(TotalBitwiseORPair(arr, N));}} // This code is contributed by sanjoy_62",
"e": 28777,
"s": 27954,
"text": null
},
{
"code": "# Python3 program to implement# the above approach # Function to find the Bitwise# OR of all possible pairs# from the arraydef TotalBitwiseORPair(arr, N): # Stores bitwise OR of all # possible pairs from arr[] totalOR = 0 # Traverse the array and # calculate bitwise OR of # all possible pairs for i in range(N): for j in range(i + 1, N): # Update totalOR totalOR |= (arr[i] | arr[j]) # Return Bitwise OR of all # possible pairs from arr[] return totalOR # Driver Codeif __name__ == '__main__': arr = [4, 5, 12, 15] N = len(arr) print(TotalBitwiseORPair(arr, N)) # This code is contributed by Mohit Kumar 29",
"e": 29460,
"s": 28777,
"text": null
},
{
"code": "// C# program to implement// the above approach using System; class GFG{ // Function to find the Bitwise OR of// all possible pairs from the arraystatic int TotalBitwiseORPair(int[] arr, int N){ // Stores bitwise OR of all // possible pairs from arr[] int totalOR = 0; // Traverse the array and // calculate bitwise OR of // all possible pairs for(int i = 0; i < N; i++) { for(int j = i + 1; j < N; j++) { // Update totalOR totalOR |= (arr[i] | arr[j]); } } // Return Bitwise OR of all // possible pairs from arr[] return totalOR;} // Driver Codepublic static void Main(){ int[] arr = { 4, 5, 12, 15 }; int N = arr.Length; Console.WriteLine(TotalBitwiseORPair(arr, N));}} // This code is contributed by susmitakundugoaldanga",
"e": 30282,
"s": 29460,
"text": null
},
{
"code": "<script> // JavaScript program to implement// the above approach // Function to find the Bitwise OR of// all possible pairs from the arrayfunction TotalBitwiseORPair(arr, N){ // Stores bitwise OR of all // possible pairs from arr[] let totalOR = 0; // Traverse the array and calculate // bitwise OR of all possible pairs for (let i = 0; i < N; i++) { for (let j = i + 1; j < N; j++) { // Update totalOR totalOR |= (arr[i] | arr[j]); } } // Return Bitwise OR of all // possible pairs from arr[] return totalOR;} // Driver Code let arr = [ 4, 5, 12, 15 ]; let N = arr.length; document.write(TotalBitwiseORPair(arr, N)); // This code is contributed by Surbhi Tyagi </script>",
"e": 31045,
"s": 30282,
"text": null
},
{
"code": null,
"e": 31048,
"s": 31045,
"text": "15"
},
{
"code": null,
"e": 31094,
"s": 31050,
"text": "Time Complexity: O(N2)Auxiliary Space: O(1)"
},
{
"code": null,
"e": 31194,
"s": 31094,
"text": "Efficient Approach: To optimize the above approach the idea is based on the following observations:"
},
{
"code": null,
"e": 31302,
"s": 31194,
"text": "1 | 1 | 1 | .....(n times) = 1 0 | 0 | 0 | .....(n times) = 0 Therefore, (a | a | a | .... (n times)) = a "
},
{
"code": null,
"e": 31347,
"s": 31302,
"text": "Follow the steps below to solve the problem:"
},
{
"code": null,
"e": 31452,
"s": 31347,
"text": "Initialize a variable, say totalOR to store the bitwise OR of all possible unordered pairs of the array."
},
{
"code": null,
"e": 31525,
"s": 31452,
"text": "Traverse the array and update the value of totalOR = (totalOR | arr[i])."
},
{
"code": null,
"e": 31562,
"s": 31525,
"text": "Finally, print the value of totalOR."
},
{
"code": null,
"e": 31612,
"s": 31562,
"text": "Below is the implementation of the above approach"
},
{
"code": null,
"e": 31616,
"s": 31612,
"text": "C++"
},
{
"code": null,
"e": 31621,
"s": 31616,
"text": "Java"
},
{
"code": null,
"e": 31629,
"s": 31621,
"text": "Python3"
},
{
"code": null,
"e": 31632,
"s": 31629,
"text": "C#"
},
{
"code": null,
"e": 31643,
"s": 31632,
"text": "Javascript"
},
{
"code": "// C++ program to implement// the above approach #include <bits/stdc++.h>using namespace std; // Function to find the bitwise OR of// all possible pairs of the arrayint TotalBitwiseORPair(int arr[], int N){ // Stores bitwise OR of all // possible pairs of arr[] int totalOR = 0; // Traverse the array arr[] for (int i = 0; i < N; i++) { // Update totalOR totalOR |= arr[i]; } // Return bitwise OR of all // possible pairs of arr[] return totalOR;} // Driver Codeint main(){ int arr[] = { 4, 5, 12, 15 }; int N = sizeof(arr) / sizeof(arr[0]); cout << TotalBitwiseORPair(arr, N);}",
"e": 32276,
"s": 31643,
"text": null
},
{
"code": "// Java program to implement// the above approachimport java.util.*; class GFG{ // Function to find the bitwise OR of// all possible pairs of the arraystatic int TotalBitwiseORPair(int arr[], int N){ // Stores bitwise OR of all // possible pairs of arr[] int totalOR = 0; // Traverse the array arr[] for(int i = 0; i < N; i++) { // Update totalOR totalOR |= arr[i]; } // Return bitwise OR of all // possible pairs of arr[] return totalOR;} // Driver Codepublic static void main(String[] args){ int arr[] = { 4, 5, 12, 15 }; int N = arr.length; System.out.print(TotalBitwiseORPair(arr, N));}} // This code is contributed by gauravrajput1",
"e": 33016,
"s": 32276,
"text": null
},
{
"code": "# Python program to implement# the above approach # Function to find the bitwise OR of# all possible pairs of the arraydef TotalBitwiseORPair(arr, N): # Stores bitwise OR of all # possible pairs of arr totalOR = 0; # Traverse the array arr for i in range(N): # Update totalOR totalOR |= arr[i]; # Return bitwise OR of all # possible pairs of arr return totalOR; # Driver Codeif __name__ == '__main__': arr = [4, 5, 12, 15]; N = len(arr); print(TotalBitwiseORPair(arr, N)); # This code is contributed by shikhasingrajput",
"e": 33600,
"s": 33016,
"text": null
},
{
"code": "// C# program to implement// the above approachusing System; class GFG{ // Function to find the bitwise OR of// all possible pairs of the arraystatic int TotalBitwiseORPair(int []arr, int N){ // Stores bitwise OR of all // possible pairs of []arr int totalOR = 0; // Traverse the array []arr for(int i = 0; i < N; i++) { // Update totalOR totalOR |= arr[i]; } // Return bitwise OR of all // possible pairs of []arr return totalOR;} // Driver Codepublic static void Main(String[] args){ int []arr = { 4, 5, 12, 15 }; int N = arr.Length; Console.Write(TotalBitwiseORPair(arr, N));}} // This code is contributed by Princi Singh",
"e": 34328,
"s": 33600,
"text": null
},
{
"code": "<script> // JavaScript program to implement// the above approach // Function to find the bitwise OR of// all possible pairs of the arrayfunction TotalBitwiseORPair(arr, N){ // Stores bitwise OR of all // possible pairs of arr[] let totalOR = 0; // Traverse the array arr[] for(let i = 0; i < N; i++) { // Update totalOR totalOR |= arr[i]; } // Return bitwise OR of all // possible pairs of arr[] return totalOR;} // Driver Code let arr = [ 4, 5, 12, 15 ]; let N = arr.length; document.write(TotalBitwiseORPair(arr, N)); </script>",
"e": 34940,
"s": 34328,
"text": null
},
{
"code": null,
"e": 34943,
"s": 34940,
"text": "15"
},
{
"code": null,
"e": 34988,
"s": 34945,
"text": "Time Complexity: O(N)Auxiliary Space: O(1)"
},
{
"code": null,
"e": 35003,
"s": 34988,
"text": "mohit kumar 29"
},
{
"code": null,
"e": 35013,
"s": 35003,
"text": "sanjoy_62"
},
{
"code": null,
"e": 35035,
"s": 35013,
"text": "susmitakundugoaldanga"
},
{
"code": null,
"e": 35049,
"s": 35035,
"text": "GauravRajput1"
},
{
"code": null,
"e": 35062,
"s": 35049,
"text": "princi singh"
},
{
"code": null,
"e": 35079,
"s": 35062,
"text": "shikhasingrajput"
},
{
"code": null,
"e": 35093,
"s": 35079,
"text": "surbhityagi15"
},
{
"code": null,
"e": 35103,
"s": 35093,
"text": "splevel62"
},
{
"code": null,
"e": 35114,
"s": 35103,
"text": "Bitwise-OR"
},
{
"code": null,
"e": 35121,
"s": 35114,
"text": "Arrays"
},
{
"code": null,
"e": 35131,
"s": 35121,
"text": "Bit Magic"
},
{
"code": null,
"e": 35144,
"s": 35131,
"text": "Mathematical"
},
{
"code": null,
"e": 35151,
"s": 35144,
"text": "Arrays"
},
{
"code": null,
"e": 35164,
"s": 35151,
"text": "Mathematical"
},
{
"code": null,
"e": 35174,
"s": 35164,
"text": "Bit Magic"
},
{
"code": null,
"e": 35272,
"s": 35174,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 35303,
"s": 35272,
"text": "Chocolate Distribution Problem"
},
{
"code": null,
"e": 35328,
"s": 35303,
"text": "Window Sliding Technique"
},
{
"code": null,
"e": 35366,
"s": 35328,
"text": "Reversal algorithm for array rotation"
},
{
"code": null,
"e": 35387,
"s": 35366,
"text": "Next Greater Element"
},
{
"code": null,
"e": 35445,
"s": 35387,
"text": "Find duplicates in O(n) time and O(1) extra space | Set 1"
},
{
"code": null,
"e": 35472,
"s": 35445,
"text": "Bitwise Operators in C/C++"
},
{
"code": null,
"e": 35518,
"s": 35472,
"text": "Left Shift and Right Shift Operators in C/C++"
},
{
"code": null,
"e": 35586,
"s": 35518,
"text": "Travelling Salesman Problem | Set 1 (Naive and Dynamic Programming)"
},
{
"code": null,
"e": 35615,
"s": 35586,
"text": "Count set bits in an integer"
}
] |
<mat-grid-list> in Angular Material - GeeksforGeeks | 24 Sep, 2021
Angular Material is a UI component library developed by the Angular team to build design components for desktop and mobile web applications. To install it, we need Angular already set up in our project; once that is done, you can run the command below to download the library. The mat-grid-list tag is used to lay out content in grid form.
Installation syntax:
ng add @angular/material
Approach:
First, install the angular material using the above-mentioned command.
After completing the installation, Import ‘MatGridListModule’ from ‘@angular/material/grid-list’ in the app.module.ts file.
Then use <mat-grid-list> tag to group all the items inside this group tag.
Inside the <mat-grid-list> tag we need to use the <mat-grid-tile> tag for every item.
We also have properties like cols and rowHeight which we can use for styling. The cols property specifies the number of tiles displayed in a row.
Once done with the above steps then serve or start the project.
Code Implementation:
app.module.ts
import { CommonModule } from '@angular/common';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { MatGridListModule } from '@angular/material/grid-list';
import { AppComponent } from './example.component';

@NgModule({
  declarations: [AppComponent],
  exports: [AppComponent],
  imports: [
    CommonModule,
    FormsModule,
    MatGridListModule
  ],
})
export class AppModule {}
app.component.html
<mat-grid-list cols="3" rowHeight="2:1">
  <mat-grid-tile>First Grid</mat-grid-tile>
  <mat-grid-tile>Second Grid</mat-grid-tile>
  <mat-grid-tile>Third Grid</mat-grid-tile>
  <mat-grid-tile>Fourth Grid</mat-grid-tile>
  <mat-grid-tile>Fifth Grid</mat-grid-tile>
  <mat-grid-tile>Sixth Grid</mat-grid-tile>
</mat-grid-list>
app.component.scss :
mat-grid-tile {
background: lightsalmon;
}
Output:
simmytarika5
Angular-material
Picked
AngularJS
Web Technologies
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Angular PrimeNG Dropdown Component
Angular PrimeNG Calendar Component
Angular PrimeNG Messages Component
Angular 10 (blur) Event
How to make a Bootstrap Modal Popup in Angular 9/8 ?
Remove elements from a JavaScript Array
Installation of Node.js on Linux
Convert a string to an integer in JavaScript
How to fetch data from an API in ReactJS ?
How to insert spaces/tabs in text using HTML/CSS? | [
{
"code": null,
"e": 26464,
"s": 26436,
"text": "\n24 Sep, 2021"
},
{
"code": null,
"e": 26822,
"s": 26464,
"text": "Angular Material is a UI component library that is developed by the Angular team to build design components for desktop and mobile web applications. In order to install it, we need to have angular installed in our project, once you have it you can enter the below command and can download it. mat-grid-list tag is used for styling the content in grids form."
},
{
"code": null,
"e": 26843,
"s": 26822,
"text": "Installation syntax:"
},
{
"code": null,
"e": 26868,
"s": 26843,
"text": "ng add @angular/material"
},
{
"code": null,
"e": 26878,
"s": 26868,
"text": "Approach:"
},
{
"code": null,
"e": 26949,
"s": 26878,
"text": "First, install the angular material using the above-mentioned command."
},
{
"code": null,
"e": 27073,
"s": 26949,
"text": "After completing the installation, Import ‘MatGridListModule’ from ‘@angular/material/grid-list’ in the app.module.ts file."
},
{
"code": null,
"e": 27148,
"s": 27073,
"text": "Then use <mat-grid-list> tag to group all the items inside this group tag."
},
{
"code": null,
"e": 27229,
"s": 27148,
"text": "Inside the <mat-grid-list> tag we need to use <mat-grid-tile tag for every item."
},
{
"code": null,
"e": 27366,
"s": 27229,
"text": "We also have properties like cols and rowHeight which we can use for styling. cols property is used to display number of grids in a row."
},
{
"code": null,
"e": 27430,
"s": 27366,
"text": "Once done with the above steps then serve or start the project."
},
{
"code": null,
"e": 27451,
"s": 27430,
"text": "Code Implementation:"
},
{
"code": null,
"e": 27465,
"s": 27451,
"text": "app.module.ts"
},
{
"code": "import { CommonModule } from '@angular/common'; import { NgModule } from '@angular/core'; import { FormsModule } from '@angular/forms'; import { MatGridListModule } from '@angular/material'; import { AppComponent } from './example.component'; @NgModule({ declarations: [AppComponent], exports: [AppComponent], imports: [ CommonModule, FormsModule, MatGridListModule ], }) export class AppModule {}",
"e": 27892,
"s": 27465,
"text": null
},
{
"code": null,
"e": 27911,
"s": 27892,
"text": "app.component.html"
},
{
"code": "<mat-grid-list cols=\"3\" rowHeight=\"2:1\"> <mat-grid-tile>First Grid</mat-grid-tile> <mat-grid-tile>Second Grid</mat-grid-tile> <mat-grid-tile>Third Grid</mat-grid-tile> <mat-grid-tile>Fourth Grid</mat-grid-tile> <mat-grid-tile>Fifth Grid</mat-grid-tile> <mat-grid-tile>Sixth Frid</mat-grid-tile> </mat-grid-list>",
"e": 28234,
"s": 27911,
"text": null
},
{
"code": null,
"e": 28255,
"s": 28234,
"text": "app.component.scss :"
},
{
"code": null,
"e": 28300,
"s": 28255,
"text": "mat-grid-tile {\n background: lightsalmon;\n}"
},
{
"code": null,
"e": 28308,
"s": 28300,
"text": "Output:"
},
{
"code": null,
"e": 28325,
"s": 28312,
"text": "simmytarika5"
},
{
"code": null,
"e": 28342,
"s": 28325,
"text": "Angular-material"
},
{
"code": null,
"e": 28349,
"s": 28342,
"text": "Picked"
},
{
"code": null,
"e": 28359,
"s": 28349,
"text": "AngularJS"
},
{
"code": null,
"e": 28376,
"s": 28359,
"text": "Web Technologies"
},
{
"code": null,
"e": 28474,
"s": 28376,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28509,
"s": 28474,
"text": "Angular PrimeNG Dropdown Component"
},
{
"code": null,
"e": 28544,
"s": 28509,
"text": "Angular PrimeNG Calendar Component"
},
{
"code": null,
"e": 28579,
"s": 28544,
"text": "Angular PrimeNG Messages Component"
},
{
"code": null,
"e": 28603,
"s": 28579,
"text": "Angular 10 (blur) Event"
},
{
"code": null,
"e": 28656,
"s": 28603,
"text": "How to make a Bootstrap Modal Popup in Angular 9/8 ?"
},
{
"code": null,
"e": 28696,
"s": 28656,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 28729,
"s": 28696,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 28774,
"s": 28729,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 28817,
"s": 28774,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
Python | Convert list of strings to list of tuples - GeeksforGeeks | 29 Apr, 2019
Sometimes we deal with different data types and need to convert from one data type to another, so interconversion is always a useful tool to have knowledge about. The interconversion from tuples to other formats has been discussed earlier. This article deals with the converse case. Let's discuss certain ways in which this can be done.
Method #1 : Using map() + split() + tuple()
This task can be achieved using a combination of these functions. The map function is used to link the logic to each string, the split function is used to split the contents of a string into separate tuple attributes, and the tuple function performs the task of forming a tuple.
# Python3 code to demonstrate# convert list of strings to list of tuples# Using map() + split() + tuple() # initializing listtest_list = ['4, 1', '3, 2', '5, 3'] # printing original listprint("The original list : " + str(test_list)) # using map() + split() + tuple()# convert list of strings to list of tuplesres = [tuple(map(int, sub.split(', '))) for sub in test_list] # print resultprint("The list after conversion to tuple list : " + str(res))
The original list : ['4, 1', '3, 2', '5, 3']
The list after conversion to tuple list : [(4, 1), (3, 2), (5, 3)]
Method #2 : Using map() + eval. This is the most elegant way to perform this particular task: the map function is used to extend the logic to the whole list, while the eval function internally performs the interconversion and splitting.
# Python3 code to demonstrate# convert list of strings to list of tuples# Using map() + eval # initializing listtest_list = ['4, 1', '3, 2', '5, 3'] # printing original listprint("The original list : " + str(test_list)) # using map() + eval# convert list of strings to list of tuplesres = list(map(eval, test_list)) # print resultprint("The list after conversion to tuple list : " + str(res))
The original list : ['4, 1', '3, 2', '5, 3']
The list after conversion to tuple list : [(4, 1), (3, 2), (5, 3)]
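As a side note, eval executes arbitrary Python expressions, so it should not be used on untrusted input. A safer variant (a minimal sketch, not part of the original methods) relies on ast.literal_eval from the standard library, which only parses Python literals:

# Python3 code to demonstrate a safer alternative
# convert list of strings to list of tuples
# Using map() + ast.literal_eval

import ast

# initializing list
test_list = ['4, 1', '3, 2', '5, 3']

# ast.literal_eval parses each string as a literal tuple
res = list(map(ast.literal_eval, test_list))

# print result
print("The list after conversion to tuple list : " + str(res))
# The list after conversion to tuple list : [(4, 1), (3, 2), (5, 3)]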
Python list-programs
Python
Python Programs
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
Python Dictionary
Enumerate() in Python
How to Install PIP on Windows ?
Different ways to create Pandas Dataframe
Python String | replace()
Defaultdict in Python
Python | Get dictionary keys as a list
Python | Convert a list to dictionary
Python program to check whether a number is Prime or not
How to print without newline in Python? | [
{
"code": null,
"e": 24472,
"s": 24444,
"text": "\n29 Apr, 2019"
},
{
"code": null,
"e": 24835,
"s": 24472,
"text": "Sometimes we deal with different types of data types and we require to inter convert from one data type to another and hence inter conversion always a useful tool to have knowledge about. The interconversion from tuples to other formats have been discussed earlier. This article deals with the converse case. Let’s discuss certain ways in which this can be done."
},
{
"code": null,
"e": 24879,
"s": 24835,
"text": "Method #1 : Using map() + split() + tuple()"
},
{
"code": null,
"e": 25156,
"s": 24879,
"text": "This task can be achieved using the combination of these functions. The map function can be used to link the logic to each string, split function is used to split the inner contents of list to different tuple attributes and tuple function performs the task of forming a tuple."
},
{
"code": "# Python3 code to demonstrate# convert list of strings to list of tuples# Using map() + split() + tuple() # initializing listtest_list = ['4, 1', '3, 2', '5, 3'] # printing original listprint(\"The original list : \" + str(test_list)) # using map() + split() + tuple()# convert list of strings to list of tuplesres = [tuple(map(int, sub.split(', '))) for sub in test_list] # print resultprint(\"The list after conversion to tuple list : \" + str(res))",
"e": 25609,
"s": 25156,
"text": null
},
{
"code": null,
"e": 25722,
"s": 25609,
"text": "The original list : ['4, 1', '3, 2', '5, 3']\nThe list after conversion to tuple list : [(4, 1), (3, 2), (5, 3)]\n"
},
{
"code": null,
"e": 25956,
"s": 25724,
"text": "Method #2 : Using map() + evalThis is the most elegant way to perform this particular task. Where map function is used to extend the function logic to the whole list eval function internally performs interconversions and splitting."
},
{
"code": "# Python3 code to demonstrate# convert list of strings to list of tuples# Using map() + eval # initializing listtest_list = ['4, 1', '3, 2', '5, 3'] # printing original listprint(\"The original list : \" + str(test_list)) # using map() + eval# convert list of strings to list of tuplesres = list(map(eval, test_list)) # print resultprint(\"The list after conversion to tuple list : \" + str(res))",
"e": 26353,
"s": 25956,
"text": null
},
{
"code": null,
"e": 26466,
"s": 26353,
"text": "The original list : ['4, 1', '3, 2', '5, 3']\nThe list after conversion to tuple list : [(4, 1), (3, 2), (5, 3)]\n"
},
{
"code": null,
"e": 26487,
"s": 26466,
"text": "Python list-programs"
},
{
"code": null,
"e": 26494,
"s": 26487,
"text": "Python"
},
{
"code": null,
"e": 26510,
"s": 26494,
"text": "Python Programs"
},
{
"code": null,
"e": 26608,
"s": 26510,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26617,
"s": 26608,
"text": "Comments"
},
{
"code": null,
"e": 26630,
"s": 26617,
"text": "Old Comments"
},
{
"code": null,
"e": 26648,
"s": 26630,
"text": "Python Dictionary"
},
{
"code": null,
"e": 26670,
"s": 26648,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 26702,
"s": 26670,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 26744,
"s": 26702,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 26770,
"s": 26744,
"text": "Python String | replace()"
},
{
"code": null,
"e": 26792,
"s": 26770,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 26831,
"s": 26792,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 26869,
"s": 26831,
"text": "Python | Convert a list to dictionary"
},
{
"code": null,
"e": 26926,
"s": 26869,
"text": "Python program to check whether a number is Prime or not"
}
] |
How to show pdf in android webview? | This example demonstrate about How to show pdf in android webview.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version = "1.0" encoding = "utf-8"?>
<LinearLayout xmlns:android = "http://schemas.android.com/apk/res/android"
xmlns:app = "http://schemas.android.com/apk/res-auto"
xmlns:tools = "http://schemas.android.com/tools"
android:layout_width = "match_parent"
android:gravity = "center"
android:layout_height = "match_parent"
tools:context = ".MainActivity"
android:orientation = "vertical">
<WebView
android:id = "@+id/web_view"
android:layout_width = "match_parent"
android:layout_height = "match_parent" />
</LinearLayout>
In the above code, we have taken a WebView to show the PDF.
Step 3 − Add the following code to src/MainActivity.java
package com.example.myapplication;
import android.app.ProgressDialog;
import android.os.Build;
import android.os.Bundle;
import android.support.annotation.RequiresApi;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.webkit.WebChromeClient;
import android.webkit.WebSettings;
import android.webkit.WebView;
import android.webkit.WebViewClient;
import android.widget.EditText;
public class MainActivity extends AppCompatActivity {
@RequiresApi(api = Build.VERSION_CODES.P)
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
final ProgressDialog progressDialog = new ProgressDialog(this);
progressDialog.setMessage("Loading Data...");
progressDialog.setCancelable(false);
WebView web_view = findViewById(R.id.web_view);
web_view.requestFocus();
web_view.getSettings().setJavaScriptEnabled(true);
String myPdfUrl = "gymnasium-wandlitz.de/vplan/vplan.pdf";
String url = "https://docs.google.com/viewer?embedded = true&url = "+myPdfUrl;
web_view.loadUrl(url);
web_view.setWebViewClient(new WebViewClient() {
@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
view.loadUrl(url);
return true;
}
});
web_view.setWebChromeClient(new WebChromeClient() {
public void onProgressChanged(WebView view, int progress) {
if (progress < 100) {
progressDialog.show();
}
            if (progress == 100) {
progressDialog.dismiss();
}
}
});
}
}
Step 4 − Add the following code to AndroidManifest.xml
<?xml version = "1.0" encoding = "utf-8"?>
<manifest xmlns:android = "http://schemas.android.com/apk/res/android"
package = "com.example.myapplication">
<uses-permission android:name = "android.permission.INTERNET"/>
<application
android:allowBackup = "true"
android:icon = "@mipmap/ic_launcher"
android:label = "@string/app_name"
android:roundIcon = "@mipmap/ic_launcher_round"
android:supportsRtl = "true"
android:theme = "@style/AppTheme">
<activity android:name = ".MainActivity">
<intent-filter>
<action android:name = "android.intent.action.MAIN" />
<category android:name = "android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –
Click here to download the project code | [
{
"code": null,
"e": 1129,
"s": 1062,
"text": "This example demonstrate about How to show pdf in android webview."
},
{
"code": null,
"e": 1258,
"s": 1129,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1323,
"s": 1258,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 1890,
"s": 1323,
"text": "<?xml version = \"1.0\" encoding = \"utf-8\"?>\n<LinearLayout xmlns:android = \"http://schemas.android.com/apk/res/android\"\n xmlns:app = \"http://schemas.android.com/apk/res-auto\"\n xmlns:tools = \"http://schemas.android.com/tools\"\n android:layout_width = \"match_parent\"\n android:gravity = \"center\"\n android:layout_height = \"match_parent\"\n tools:context = \".MainActivity\"\n android:orientation = \"vertical\">\n <WebView\n android:id = \"@+id/web_view\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"match_parent\" />\n</LinearLayout>"
},
{
"code": null,
"e": 1945,
"s": 1890,
"text": "In the above code, we have taken web view to show pdf."
},
{
"code": null,
"e": 2002,
"s": 1945,
"text": "Step 3 − Add the following code to src/MainActivity.java"
},
{
"code": null,
"e": 3720,
"s": 2002,
"text": "package com.example.myapplication;\nimport android.app.ProgressDialog;\nimport android.os.Build;\nimport android.os.Bundle;\nimport android.support.annotation.RequiresApi;\nimport android.support.v7.app.AppCompatActivity;\nimport android.view.View;\nimport android.webkit.WebChromeClient;\nimport android.webkit.WebSettings;\nimport android.webkit.WebView;\nimport android.webkit.WebViewClient;\nimport android.widget.EditText;\npublic class MainActivity extends AppCompatActivity {\n @RequiresApi(api = Build.VERSION_CODES.P)\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n final ProgressDialog progressDialog = new ProgressDialog(this);\n progressDialog.setMessage(\"Loading Data...\");\n progressDialog.setCancelable(false);\n WebView web_view = findViewById(R.id.web_view);\n web_view.requestFocus();\n web_view.getSettings().setJavaScriptEnabled(true);\n String myPdfUrl = \"gymnasium-wandlitz.de/vplan/vplan.pdf\";\n String url = \"https://docs.google.com/viewer?embedded = true&url = \"+myPdfUrl;\n web_view.loadUrl(url);\n web_view.setWebViewClient(new WebViewClient() {\n @Override\n public boolean shouldOverrideUrlLoading(WebView view, String url) {\n view.loadUrl(url);\n return true;\n }\n });\n web_view.setWebChromeClient(new WebChromeClient() {\n public void onProgressChanged(WebView view, int progress) {\n if (progress < 100) {\n progressDialog.show();\n }\n if (progress = = 100) {\n progressDialog.dismiss();\n }\n }\n });\n }\n}"
},
{
"code": null,
"e": 3775,
"s": 3720,
"text": "Step 4 − Add the following code to AndroidManifest.xml"
},
{
"code": null,
"e": 4552,
"s": 3775,
"text": "<?xml version = \"1.0\" encoding = \"utf-8\"?>\n<manifest xmlns:android = \"http://schemas.android.com/apk/res/android\"\n package = \"com.example.myapplication\">\n <uses-permission android:name = \"android.permission.INTERNET\"/>\n <application\n android:allowBackup = \"true\"\n android:icon = \"@mipmap/ic_launcher\"\n android:label = \"@string/app_name\"\n android:roundIcon = \"@mipmap/ic_launcher_round\"\n android:supportsRtl = \"true\"\n android:theme = \"@style/AppTheme\">\n <activity android:name = \".MainActivity\">\n <intent-filter>\n <action android:name = \"android.intent.action.MAIN\" />\n <category android:name = \"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 4899,
"s": 4552,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –"
},
{
"code": null,
"e": 4939,
"s": 4899,
"text": "Click here to download the project code"
}
] |
Undecidability - GeeksforGeeks | 21 Jan, 2014
First is Emptiness for CFG; whether a CFG is empty or not, this problem is decidable.
Second is everything for CFG; whether a CFG will generate all possible strings (completeness of CFG), this problem is undecidable.
Third is Regularity for REC; whether the language generated by a TM is regular is undecidable.
Fourth is equivalence for regular; whether the languages generated by a given DFA and NFA are the same is decidable.
Which of the following problems are decidable?
I. Whether the intersection of two regular languages is infinite
II. Whether a given context-free language is regular
III. Whether two push-down automata accept the same language
IV. Whether a given grammar is context-free
1. If A ≤p B and B is decidable then A is also decidable.
This is because if there exists a specific algorithm for solving B and we can
also reduce A to B then we can have a solution of A as well. Hence A is decidable.
However the reverse is not true, i.e. if A ≤p B and A is decidable
it does not follow that B is also decidable: A can have an algorithm
for its correct solution while B might not.

2. If A ≤p B and A is undecidable then B is also undecidable.
This is because if B were decidable, the reduction from A to B would
give an algorithm for deciding A, contradicting the undecidability
of A. So decision problem B is also undecidable.
Option 1: P1 ≤p P3 and given P1 is decidable gives no conclusion for P3.
Option 2: P3 ≤p P2 and given P2 is undecidable gives no conclusion for P3.
Option 3: P2 ≤p P3 and given P2 is undecidable gives conclusion for P3 to be
undecidable.
Option 4: P3 ≤p P2’s complement and given P2 is undecidable therefore P2’s
complement is also undecidable gives no conclusion for P3.
Given a Turing machine M over the input alphabet Σ, any
state q of M and a word w∈Σ*, does the computation of M
on w visit the state q?
(P1) Does a given finite state machine accept a given string
(P2) Does a given context free grammar generate an infinite
    number of strings
A finite state machine always halts in a final or non-final state. Therefore, problem P1 is decidable.
We check if the context-free language generates any string of length between n and (2n – 1), where n is the pumping-lemma constant. If so, the context-free language is infinite; otherwise, it is finite. Therefore, problem P2 is decidable.
Thus, option (A) is correct.
Please comment below if you find anything wrong in the above post.
Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
Python Program for Breadth First Search or BFS for a Graph
Best Time to Buy and Sell Stock
Must Do Coding Questions for Product Based Companies
How to calculate MOVING AVERAGE in a Pandas DataFrame?
What is "network ID" and "host ID" in IP Addresses?
What is Transmission Control Protocol (TCP)?
Converting nested JSON structures to Pandas DataFrames
Bash Scripting - How to check If File Exists
Python Raise Keyword
Python OpenCV - Canny() Function | [
{
"code": null,
"e": 31165,
"s": 31137,
"text": "\n21 Jan, 2014"
},
{
"code": null,
"e": 31251,
"s": 31165,
"text": "First is Emptiness for CFG; whether a CFG is empty or not, this problem is decidable."
},
{
"code": null,
"e": 31382,
"s": 31251,
"text": "Second is everything for CFG; whether a CFG will generate all possible strings (completeness of CFG), this problem is undecidable."
},
{
"code": null,
"e": 31471,
"s": 31382,
"text": "Third is Regularity for REC; whether language generated by TM is regular is undecidable."
},
{
"code": null,
"e": 31571,
"s": 31471,
"text": "Fourth is equivalence for regular; whether language generated by DFA and NFA are same is decidable."
},
{
"code": null,
"e": 31618,
"s": 31571,
"text": "Which of the following problems are decidable?"
},
{
"code": null,
"e": 31841,
"s": 31618,
"text": "I. Whether the intersection of two regular languages is infinite\nII. Whether a given context-free language is regular\nIII. Whether two push-down automata accept the same language\nIV. Whether a given grammar is context-free"
},
{
"code": null,
"e": 32532,
"s": 31841,
"text": "1. If A ≤p B and B is decidable then A is also decidable.\nThis is because if there exists a specific algorithm for solving B and we can \nalso reduce A to B then we can have a solution of A as well. Hence A is decidable.\n\nHowever the reverse is not true i.e. if A ≤p B and A is decidable \nthen B is also decidable because A can have an algorithm existing for its correct \nsolution but might be the case that B does not.\n\n2. If A ≤p B and A is undecidable then B is also undecidable.\nThis is because if A is undecidable even when it can be reduced to B that simply \nreflects even B cannot provide an algorithm by which we can solve B and hence A. \nSo decision problem B is also undecidable.\n\n"
},
{
"code": null,
"e": 32928,
"s": 32532,
"text": "Option 1: P1 ≤p P3 and given P1 is decidable gives no conclusion for P3.\nOption 2: P3 ≤p P2 and given P2 is undecidable gives no conclusion for P3.\nOption 3: P2 ≤p P3 and given P2 is undecidable gives conclusion for P3 to be \n undecidable.\nOption 4: P3 ≤p P2’s complement and given P2 is undecidable therefore P2’s \n complement is also undecidable gives no conclusion for P3.\n"
},
{
"code": null,
"e": 33065,
"s": 32928,
"text": "Given a Turing machine M over the input alphabet Σ, any\nstate q of M And a word w∈Σ*, does the computation of M\non w visit the state q? "
},
{
"code": null,
"e": 33209,
"s": 33065,
"text": "(P1) Does a given finite state machine accept a given string\n(P2) Does a given context free grammar generate an infinite \n number of stings"
},
{
"code": null,
"e": 33309,
"s": 33209,
"text": "A finite state machine always halts in final or non-final state.Therefore, problem P1 is decidable."
},
{
"code": null,
"e": 33594,
"s": 33309,
"text": "We check if the context free language generates any string of length between n and (2n – 1). If so, context free language is infinite else it is finite.Therefore, problem P2 is decidable.\n Thus, option (A) is correct.\nPlease comment below if you find anything wrong in the above post."
},
{
"code": null,
"e": 33692,
"s": 33594,
"text": "Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here."
},
{
"code": null,
"e": 33751,
"s": 33692,
"text": "Python Program for Breadth First Search or BFS for a Graph"
},
{
"code": null,
"e": 33783,
"s": 33751,
"text": "Best Time to Buy and Sell Stock"
},
{
"code": null,
"e": 33836,
"s": 33783,
"text": "Must Do Coding Questions for Product Based Companies"
},
{
"code": null,
"e": 33891,
"s": 33836,
"text": "How to calculate MOVING AVERAGE in a Pandas DataFrame?"
},
{
"code": null,
"e": 33943,
"s": 33891,
"text": "What is \"network ID\" and \"host ID\" in IP Addresses?"
},
{
"code": null,
"e": 33988,
"s": 33943,
"text": "What is Transmission Control Protocol (TCP)?"
},
{
"code": null,
"e": 34043,
"s": 33988,
"text": "Converting nested JSON structures to Pandas DataFrames"
},
{
"code": null,
"e": 34088,
"s": 34043,
"text": "Bash Scripting - How to check If File Exists"
},
{
"code": null,
"e": 34109,
"s": 34088,
"text": "Python Raise Keyword"
}
] |
Bird by Bird using Deep Learning. Advancing CNN model for fine-grained... | by Sofya Lipnitskaya | Towards Data Science | This article demonstrates how deep learning models used for image-related tasks can be advanced in order to address the fine-grained classification problem. For this objective, we will walk through the following two parts. First, you will get familiar with some basic concepts of computer vision and convolutional neural networks, while the second part demonstrates how to apply this knowledge to a real-world problem of bird species classification using PyTorch. Specifically, you will learn how to build your own CNN model – ResNet-50, – to further improve its performance using transfer learning, auxiliary task and attention-enhanced architecture, and even a little more.
Computers perform extremely well when it comes to crunching numbers. Solving tons of equations to get a human to the Moon? No problem. Determining whether a cat or a dog appears in an image? Oops... The task that is inherently easy for any human being seemed impossible for early computers. Over the years, algorithms evolved, as did the hardware (remember Moore's law? R.I.P.). The field of computer vision appeared as an attempt to solve the task of classifying images using computers. After a long period of development, many sophisticated methods were created. However, all of them suffered from a lack of generalizability: a model built to classify cats vs. dogs couldn't distinguish, for example, birds.
In 1989, Yann LeCun and his colleagues proposed [1], and further developed [2], the concept of the convolutional neural network (CNN). The model itself was inspired by the human visual cortex, where a visual neuron is responsible for a small piece of the picture visible to the eye, the neuron's receptive field. Structurally, this was expressed in the way that a single convolutional neuron (filter) scanned an input image step-by-step, being applied to different parts of the image many times, which refers to the concept of weight sharing (Figure 1).
Of course, since LeCun's LeNet-5, the state of the art in CNN models has advanced greatly. The first successful large-scale architecture came with AlexNet [3], which won the ILSVRC 2012 challenge achieving a top-5 error rate of 15.3%. Later advancements produced many powerful models, mainly improved through the use of larger and more complex architectures. The thing is, as a network goes deeper (its depth increases), its performance saturates and starts degrading. To address this problem, the residual neural network (ResNet) was developed [4] to effectively direct the input over some layers (also known as skip or residual connections).
The core idea of the ResNet architecture is to pass a part of the signal to the end of a convolutional block unprocessed (by just copying values) in order to improve gradient flow through the deep layers (Figure 2). Thus, the skip connection guarantees that performance of the model does not decrease, while it could increase slightly.
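To make the idea concrete, here is a minimal sketch of a basic residual block in PyTorch. This is a simplified illustration of the skip-connection mechanism, not the exact bottleneck block used in ResNet-50:

import torch
import torch.nn.functional as F

class BasicResidualBlock(torch.nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = torch.nn.BatchNorm2d(channels)
        self.conv2 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = torch.nn.BatchNorm2d(channels)

    def forward(self, x):
        identity = x  # the skip connection keeps the raw input
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # add the unprocessed signal back before the final activation
        return F.relu(out + identity)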
The next part explains how the discussed theory can actually be applied to solve a real-world problem.
Bird species recognition is a difficult task challenging the visual abilities of both human experts and computers. One of the interesting datasets related to the fine-grained classification problem is Caltech-UCSD Birds-200-2011 (CUB-200-2011) [5], consisting of 11788 images of birds belonging to 200 species. To address this problem, the goals of the current tutorial will be: (a) to build a CNN model to classify bird images w.r.t. their species and (b) to determine how the prediction accuracy of a baseline model can be boosted using CNNs of different architectures. For that, we will use PyTorch, one of the most popular open-source frameworks for deep learning.
By the end of this tutorial, you will be able to:
Understand basics of image classification problem of bird species.
Determine the data-driven image pre-processing strategy.
Create your own deep learning pipeline for image classification.
Build, train and evaluate ResNet-50 model to predict bird species.
Improve the model performance by using different techniques.
First, you need to download an archive containing the dataset and store it in the data directory. This can be done manually from the following link, or using the Python code provided in the following GitHub repository:
github.com
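For reference, a minimal download-and-extract sketch could look like the following. Note that the dataset URL below is a placeholder; substitute it with the actual CUB-200-2011 archive location:

import os
import tarfile
import urllib.request

# hypothetical URL: replace with the actual CUB-200-2011 archive link
DATASET_URL = 'https://example.org/CUB_200_2011.tgz'

os.makedirs('data', exist_ok=True)
archive_path = os.path.join('data', 'CUB_200_2011.tgz')

# download the archive and unpack it into the data directory
urllib.request.urlretrieve(DATASET_URL, archive_path)
with tarfile.open(archive_path) as tar:
    tar.extractall('data')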
Now, let’s import packages that we will use in this tutorial:
# import packages
import os
import csv

import numpy as np
import sklearn.metrics as skmt  # accuracy_score lives in sklearn.metrics, not in model_selection
import sklearn.model_selection as skms

import torch
import torch.utils.data as td
import torch.nn.functional as F

import torchvision as tv
import torchvision.transforms.functional as TF

# define constants
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
RANDOM_SEED = 42
IN_DIR_DATA = 'data/CUB_200_2011'
IN_DIR_IMG = os.path.join(IN_DIR_DATA, 'images')
In this tutorial, we plan to pre-train a baseline model using the ImageNet dataset. As pre-trained models usually expect input images to be normalized in the same way, heights and widths should be at least 224 x 224 pixels. There are many ways to transform the images to fulfill the above specifications, but which one might be optimal?
Exploratory data analysis is an essential starting point of any data science project, laying the foundation for the further analysis. Since we are interested in defining the optimal data transformation strategy, we are going to explore the bird images to see what useful information we can glean. Let's have a look at some bird examples of the sparrow family (Figure 3). It seems there can be a high similarity among birds belonging to different species, which is really hard to spot. Is that a White-throated or a Lincoln Sparrow? Well, even experts can be confused...
Just out of interest, we'll sum up all classes of the Sparrow family to understand how many of them there are in our dataset:
# calculate the number of sparrow species
cls_sparrows = [k for k in os.listdir(IN_DIR_IMG) if 'sparrow' in k.lower()]
print(len(cls_sparrows))
The code above gives us the value of 21, implying that about two dozen different species can be represented by a single family alone. And now we see why CUB-200-2011 is perfectly suited for fine-grained classification. What we have is many similar birds potentially belonging to different classes, and that is exactly the problem we plan to deal with here.
But before getting into real deep learning, we want to determine an appropriate strategy for data pre-processing. For that, we will analyse the marginal distributions of widths and heights by visualizing box plots for the corresponding observations:
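A minimal sketch of how such size statistics could be collected and plotted is shown below (assuming PIL and matplotlib are installed; the traversal details are illustrative):

from PIL import Image
import matplotlib.pyplot as plt

# collect (width, height) for every image in the dataset
sizes = []
for cls_dir in os.listdir(IN_DIR_IMG):
    for fn in os.listdir(os.path.join(IN_DIR_IMG, cls_dir)):
        with Image.open(os.path.join(IN_DIR_IMG, cls_dir, fn)) as img:
            sizes.append(img.size)

widths, heights = zip(*sizes)

# box plots of the marginal width and height distributions
plt.boxplot([widths, heights], labels=['width', 'height'])
plt.ylabel('pixels')
plt.show()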
Indeed, the size of the images varies considerably. We also see that the heights and widths of the majority of images are equal to 375 and 500 pixels, respectively. So, what might be the appropriate transformation strategy for this kind of data?
The CUB-200-2011 dataset contains thousands of images, which might affect computational time. To handle that, we first create the class DatasetBirds to make data loading and pre-processing easy:
class DatasetBirds(tv.datasets.ImageFolder):
    """
    Wrapper for the CUB-200-2011 dataset.
    Method DatasetBirds.__getitem__() returns a tuple of an image and its corresponding label.
    """
    def __init__(self,
                 root,
                 transform=None,
                 target_transform=None,
                 loader=tv.datasets.folder.default_loader,
                 is_valid_file=None,
                 train=True,
                 bboxes=False):

        img_root = os.path.join(root, 'images')

        super(DatasetBirds, self).__init__(
            root=img_root,
            transform=None,
            target_transform=None,
            loader=loader,
            is_valid_file=is_valid_file,
        )

        self.transform_ = transform
        self.target_transform_ = target_transform
        self.train = train

        # obtain sample ids filtered by split
        path_to_splits = os.path.join(root, 'train_test_split.txt')
        indices_to_use = list()
        with open(path_to_splits, 'r') as in_file:
            for line in in_file:
                idx, use_train = line.strip('\n').split(' ', 2)
                if bool(int(use_train)) == self.train:
                    indices_to_use.append(int(idx))

        # obtain filenames of images
        path_to_index = os.path.join(root, 'images.txt')
        filenames_to_use = set()
        with open(path_to_index, 'r') as in_file:
            for line in in_file:
                idx, fn = line.strip('\n').split(' ', 2)
                if int(idx) in indices_to_use:
                    filenames_to_use.add(fn)

        img_paths_cut = {'/'.join(img_path.rsplit('/', 2)[-2:]): idx for idx, (img_path, lb) in enumerate(self.imgs)}
        imgs_to_use = [self.imgs[img_paths_cut[fn]] for fn in filenames_to_use]

        _, targets_to_use = list(zip(*imgs_to_use))

        self.imgs = self.samples = imgs_to_use
        self.targets = targets_to_use

        if bboxes:
            # get coordinates of a bounding box
            path_to_bboxes = os.path.join(root, 'bounding_boxes.txt')
            bounding_boxes = list()
            with open(path_to_bboxes, 'r') as in_file:
                for line in in_file:
                    idx, x, y, w, h = map(lambda x: float(x), line.strip('\n').split(' '))
                    if int(idx) in indices_to_use:
                        bounding_boxes.append((x, y, w, h))

            self.bboxes = bounding_boxes
        else:
            self.bboxes = None

    def __getitem__(self, index):
        # generate one sample
        sample, target = super(DatasetBirds, self).__getitem__(index)

        if self.bboxes is not None:
            # squeeze coordinates of the bounding box to range [0, 1]
            width, height = sample.width, sample.height
            x, y, w, h = self.bboxes[index]

            scale_resize = 500 / width
            scale_resize_crop = scale_resize * (375 / 500)

            x_rel = scale_resize_crop * x / 375
            y_rel = scale_resize_crop * y / 375
            w_rel = scale_resize_crop * w / 375
            h_rel = scale_resize_crop * h / 375

            target = torch.tensor([target, x_rel, y_rel, w_rel, h_rel])

        if self.transform_ is not None:
            sample = self.transform_(sample)
        if self.target_transform_ is not None:
            target = self.target_transform_(target)

        return sample, target
All pre-trained models expect input images to be normalized in the same way, such that height and width are at least 224 pixels. As you might have noticed from our previous analysis, the size of the data varies considerably, many images have a landscape layout rather than a portrait one, and the width is commonly close to the maximum value along both dimensions.
In order to improve the ability of the model to learn a bird representation, we'll use data augmentation. We want to transform images in such a way that the aspect ratio is maintained. One solution is to scale images uniformly, so that both dimensions become equal to the larger side, using the maximum padding strategy. For that, we'll create a pad function to pad images to 500 pixels:
def pad(img, size_max=500):
    """
    Pads images to the specified size (height x width).
    """
    pad_height = max(0, size_max - img.height)
    pad_width = max(0, size_max - img.width)

    pad_top = pad_height // 2
    pad_bottom = pad_height - pad_top
    pad_left = pad_width // 2
    pad_right = pad_width - pad_left

    return TF.pad(
        img,
        (pad_left, pad_top, pad_right, pad_bottom),
        fill=tuple(map(lambda x: int(round(x * 256)), (0.485, 0.456, 0.406))))
Assuming that birds can appear in any part of an image, we make the model able to capture them everywhere by randomly cropping and flipping images along both axes during model training. The images of the test split, in turn, will be center-cropped before being fed into ResNet-50, as we expect the majority of birds to be located in this image region, referring to the previous data exploration.
For that, we are going to crop images to 375 x 375 pixels along both dimensions, as that is the average size of the majority of images. We'll also normalize images by the mean [0.485, 0.456, 0.406] and standard deviation [0.229, 0.224, 0.225] to make the distribution of pixel values closer to a Gaussian one.
# transform images
transforms_train = tv.transforms.Compose([
    tv.transforms.Lambda(pad),
    tv.transforms.RandomOrder([
        tv.transforms.RandomCrop((375, 375)),
        tv.transforms.RandomHorizontalFlip(),
        tv.transforms.RandomVerticalFlip()
    ]),
    tv.transforms.ToTensor(),
    tv.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

transforms_eval = tv.transforms.Compose([
    tv.transforms.Lambda(pad),
    tv.transforms.CenterCrop((375, 375)),
    tv.transforms.ToTensor(),
    tv.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
Then, we'll organize the images of the CUB-200-2011 dataset into three subsets to ensure proper model training and evaluation. As the authors of the dataset suggest a way to assemble the training and test subsets, we split our data accordingly. Additionally, a validation split will be defined to fine-tune the parameters of the model during the model evaluation process. For that, the training subset will be split using the stratified sampling technique, which ensures that each subset has equally balanced classes of different species.
# instantiate dataset objects according to the pre-defined splits
ds_train = DatasetBirds(IN_DIR_DATA, transform=transforms_train, train=True)
ds_val = DatasetBirds(IN_DIR_DATA, transform=transforms_eval, train=True)
ds_test = DatasetBirds(IN_DIR_DATA, transform=transforms_eval, train=False)

splits = skms.StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=RANDOM_SEED)
idx_train, idx_val = next(splits.split(np.zeros(len(ds_train)), ds_train.targets))
We'll set up parameters for data loading and model training. To speed up computation and process the large dataset in parallel, we will collate input samples into several mini-batches and also specify how many sub-processes to use for generating them in order to leverage the training process.
# set hyper-parameters
params = {'batch_size': 24, 'num_workers': 8}
num_epochs = 100
num_classes = 200
After that, we'll create a DataLoader object to yield samples of each data split:
# instantiate data loaders
train_loader = td.DataLoader(
    dataset=ds_train,
    sampler=td.SubsetRandomSampler(idx_train),
    **params
)
val_loader = td.DataLoader(
    dataset=ds_val,
    sampler=td.SubsetRandomSampler(idx_val),
    **params
)
test_loader = td.DataLoader(dataset=ds_test, **params)
We are going to use the ResNet-50 model for the classification of bird species. ResNet (or Residual Network) is a variant of convolutional neural networks that was proposed as a solution to the vanishing gradient problem of large networks.
PyTorch provides the ResNet-50 model in torchvision.models, so we will instantiate the respective class and set the argument num_classes to 200, given that the dataset contains that number of bird species:
# instantiate the model
model = tv.models.resnet50(num_classes=num_classes).to(DEVICE)
More specifically, the chosen architecture is 50 layers deep and composed of 5 stages, 4 of which contain residual blocks, while the first comprises convolution, batch normalization and ReLU operations.
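As a quick sanity check of the model's capacity, one can count its trainable parameters (a small illustrative snippet):

# count trainable parameters of the instantiated ResNet-50
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('Trainable parameters: {:,}'.format(num_params))
# roughly 24M parameters for a 200-class head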
The next point is to set the learning rate of our model, as well as a schedule to adjust it during training for the sake of better performance. Training of the ResNet-50 model will be done using the Adam optimizer with an initial learning rate of 1e-3 and an exponentially decreasing learning rate schedule, such that it drops by a factor of gamma at each epoch.
# instantiate optimizer and scheduler
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
Finally, we are ready to train and validate our model to recognize and learn the difference between bird species. The cross-entropy loss and accuracy metric will be accumulated per epoch in order to inspect the model performance dynamics. Following all of the training experiments, we test the model using the subset of previously unseen data to assess its overall performance in bird classification using the accuracy metric.
# loop over epochs
for epoch in range(num_epochs):
    # train the model
    model.train()
    train_loss = list()
    train_acc = list()
    for batch in train_loader:
        x, y = batch
        x = x.to(DEVICE)
        y = y.to(DEVICE)

        optimizer.zero_grad()
        # predict bird species
        y_pred = model(x)
        # calculate the loss
        loss = F.cross_entropy(y_pred, y)
        # backprop & update weights
        loss.backward()
        optimizer.step()
        # calculate the accuracy (accuracy_score comes from sklearn.metrics)
        acc = skmt.accuracy_score([val.item() for val in y], [val.item() for val in y_pred.argmax(dim=-1)])

        train_loss.append(loss.item())
        train_acc.append(acc)

    # validate the model
    model.eval()
    val_loss = list()
    val_acc = list()
    with torch.no_grad():
        for batch in val_loader:
            x, y = batch
            x = x.to(DEVICE)
            y = y.to(DEVICE)

            # predict bird species
            y_pred = model(x)
            # calculate the loss
            loss = F.cross_entropy(y_pred, y)
            # calculate the accuracy
            acc = skmt.accuracy_score([val.item() for val in y], [val.item() for val in y_pred.argmax(dim=-1)])

            val_loss.append(loss.item())
            val_acc.append(acc)

    # adjust the learning rate
    scheduler.step()

# test the model
true = list()
pred = list()
with torch.no_grad():
    for batch in test_loader:
        x, y = batch
        x = x.to(DEVICE)
        y = y.to(DEVICE)

        y_pred = model(x)

        true.extend([val.item() for val in y])
        pred.extend([val.item() for val in y_pred.argmax(dim=-1)])

# calculate the accuracy
test_accuracy = skmt.accuracy_score(true, pred)
print('Test accuracy: {:.3f}'.format(test_accuracy))
Figure 5 depicts the model performance metrics for ResNet-50:
As we see, the baseline model performs really poorly, as it overfits. One of the main reasons is the lack of diverse training samples. Just a quick note: the CUB-200-2011 dataset has ~30 images per species. Seems like we are stuck... doesn't it? Actually, there are some approaches we can take to overcome these issues.
Well, we ran into a number of challenges in our previous analysis, so we may start thinking about how we can address these follow-up questions:
Question 1: How to deal with overfitting given the limited amount of training samples?
Question 2: How to improve the model performance in bird species recognition?
Let’s figure out how we can advance our baseline model in more detail.
As was said before, deep neural networks require a lot of training samples. Practitioners have noticed that, in order to train a deep neural network from scratch, the amount of data should grow exponentially with the number of trainable parameters. Luckily, the generalization ability of a model trained on a larger dataset can be transferred to another, usually simpler, task.
In order to improve the performance of the baseline model for bird classification, we will use weight initialization obtained from a general-purpose model pre-trained on the ImageNet dataset, and further fine-tune its parameters using the CUB-200-2011 one. The training process remains the same, while the model will rather focus on the fine-tuning of hyper-parameters.
PyTorch provides pre-trained models through torch.utils.model_zoo. Construction of a pre-trained ResNet-50 can be done by passing pretrained=True into the constructor; since the pre-trained checkpoint was fitted for 1000 ImageNet classes, we then replace the final fully-connected layer so that it outputs our 200 bird species. This simple trick provides us with a model that already has well-initialized filters, so there is no need to learn them from scratch.
# instantiate the model with ImageNet weights, then swap the classifier head
# (passing num_classes=200 together with pretrained=True would fail to load
# the checkpoint because of the mismatching fc layer shape)
model = tv.models.resnet50(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
model = model.to(DEVICE)
We will also set a lower learning rate of 1e-4 in the optimizer, as we are going to train a network that was already pre-trained on a large-scale image-classification task. And here are the results:
As we see, the use of the pre-trained model allows us to solve the overfitting problem, giving 80.77% test accuracy. Let's continue experimenting on that!
Now we can extend this approach even more. Why would we have to increase the complexity of a single task if we can add another one? No reason at all. It has been noticed that introducing an additional (auxiliary) task improves the network's performance by forcing it to learn a more general representation of the training data.
As the Caltech-UCSD Birds-200-2011 dataset includes bounding boxes in addition to class labels, we will use this auxiliary target to make the network train in a multi-task fashion. Now, we will predict the 4 coordinates of a bird's bounding box in addition to its species by widening the network output to 204 values:
# instantiate the pre-trained model and widen the head to 204 outputs
# (200 class logits + 4 bounding-box coordinates)
model = tv.models.resnet50(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 204)
model = model.to(DEVICE)
Now we need to slightly modify our training and validation blocks, as we want to make predictions and calculate the loss for two targets, corresponding to the correct bird species and its bounding box coordinates. Here's an example execution:
...
y_pred = model(x)

# predict bird species
y_pred_cls = y_pred[..., :-4]
y_cls = y[..., 0].long()

# predict bounding box coordinates
y_pred_bbox = y_pred[..., -4:]
y_bbox = y[..., 1:]

# calculate the loss
loss_cls = F.cross_entropy(y_pred_cls, y_cls)
loss_bbox = F.mse_loss(torch.sigmoid(y_pred_bbox), y_bbox)
loss = loss_cls + loss_bbox
...
Results are even better: integration of the auxiliary task provides a stable increase in accuracy, giving 81.2% on the test split, as shown in Figure 7.
In the last few paragraphs we focused on the data-driven advancement of our model. However, at some point the complexity of the task can exceed the model's capacity, resulting in lower performance. In order to adjust the model's power to the difficulty of the problem, we can equip the network with additional attention blocks that will help it focus on important parts of the input and ignore irrelevant ones.
class Attention(torch.nn.Module):
    """
    Attention block for CNN model.
    """
    def __init__(self, in_channels, out_channels, kernel_size, padding):
        super(Attention, self).__init__()
        self.conv_depth = torch.nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding, groups=in_channels)
        self.conv_point = torch.nn.Conv2d(out_channels, out_channels, kernel_size=(1, 1))
        self.bn = torch.nn.BatchNorm2d(out_channels, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True)
        self.activation = torch.nn.Tanh()

    def forward(self, inputs):
        x, output_size = inputs
        x = F.adaptive_max_pool2d(x, output_size=output_size)
        x = self.conv_depth(x)
        x = self.conv_point(x)
        x = self.bn(x)
        x = self.activation(x) + 1.0
        return x
The attention module highlights relevant regions of the feature maps and returns values in the range [0.0, 2.0] (the tanh output shifted by +1), where a lower value implies a lower priority of a given pixel for the following layers. Next, we’ll create and instantiate the class ResNet50Attention corresponding to the attention-enhanced ResNet-50 model:
class ResNet50Attention(torch.nn.Module):
    """
    Attention-enhanced ResNet-50 model.
    """
    weights_loader = staticmethod(tv.models.resnet50)

    def __init__(self, num_classes=200, pretrained=True, use_attention=True):
        super(ResNet50Attention, self).__init__()
        net = self.weights_loader(pretrained=pretrained)
        self.num_classes = num_classes
        self.pretrained = pretrained
        self.use_attention = use_attention
        net.fc = torch.nn.Linear(
            in_features=net.fc.in_features,
            out_features=num_classes,
            bias=net.fc.bias is not None
        )
        self.net = net

        if self.use_attention:
            # in/out channel counts follow the ResNet-50 stage outputs
            # (the stem yields 64 channels; layer1..layer4 yield 256, 512, 1024, 2048)
            self.att1 = Attention(in_channels=64, out_channels=256, kernel_size=(3, 5), padding=(1, 2))
            self.att2 = Attention(in_channels=256, out_channels=512, kernel_size=(5, 3), padding=(2, 1))
            self.att3 = Attention(in_channels=512, out_channels=1024, kernel_size=(3, 5), padding=(1, 2))
            self.att4 = Attention(in_channels=1024, out_channels=2048, kernel_size=(5, 3), padding=(2, 1))

            if pretrained:
                # zeroed batch-norm makes each attention map start at 1.0 (identity)
                self.att1.bn.weight.data.zero_()
                self.att1.bn.bias.data.zero_()
                self.att2.bn.weight.data.zero_()
                self.att2.bn.bias.data.zero_()
                self.att3.bn.weight.data.zero_()
                self.att3.bn.bias.data.zero_()
                self.att4.bn.weight.data.zero_()
                self.att4.bn.bias.data.zero_()

    def _forward(self, x):
        return self.net(x)

    def _forward_att(self, x):
        x = self.net.conv1(x)
        x = self.net.bn1(x)
        x = self.net.relu(x)
        x = self.net.maxpool(x)

        x_a = x.clone()
        x = self.net.layer1(x)
        x = x * self.att1((x_a, x.shape[-2:]))

        x_a = x.clone()
        x = self.net.layer2(x)
        x = x * self.att2((x_a, x.shape[-2:]))

        x_a = x.clone()
        x = self.net.layer3(x)
        x = x * self.att3((x_a, x.shape[-2:]))

        x_a = x.clone()
        x = self.net.layer4(x)
        x = x * self.att4((x_a, x.shape[-2:]))

        x = self.net.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.net.fc(x)
        return x

    def forward(self, x):
        return self._forward_att(x) if self.use_attention else self._forward(x)

# instantiate the model
model = ResNet50Attention(num_classes=204, pretrained=True, use_attention=True).to(DEVICE)
After that, we are ready to train and evaluate the performance of the attention-enhanced model pre-trained on the ImageNet dataset and advanced with multi-task learning for bird classification, using the same code we utilized before. The final accuracy score has increased to 82.4%!
Figure 8 shows summary results generated during the analysis:
The results clearly indicate that the final variant of the ResNet-50 model, advanced with transfer and multi-task learning as well as with the attention module, contributes greatly to more accurate bird predictions.
Here, we used different approaches to improve the performance of a baseline ResNet-50 for the classification of bird species from the CUB-200-2011 dataset. What could we learn from that? Here are some take-home messages from our analysis:
Data exploration results indicate that CUB-200-2011 is a high-quality, balanced, although center-biased, dataset without corrupted images.
When the amount of training samples is limited, you can reuse the weights of a model pre-trained on another dataset in your own model.
Learning through an auxiliary task in addition to the primary bird-classification one contributes to better model performance.
Enhancing the network’s architecture by adding new layers (attention module) makes the model more accurate in bird species classification.
Analysis of the different extensions of the basic ResNet-50 indicates that the pre-trained model advanced with the auxiliary task and the attention mechanism is the most promising candidate for further investigation.
In summary, there is still room for improving the model performance. Additional advancements can be achieved by further optimization of the model hyper-parameters and by the use of stronger data augmentation, regularization, or meta-learning techniques.
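As one possible direction – a minimal sketch (an assumed variant, not part of the original analysis) that strengthens the training transform defined earlier with color and rotation jitter:

# stronger augmentation: extend the original training transform
transforms_train_stronger = tv.transforms.Compose([
    tv.transforms.Lambda(pad),
    tv.transforms.RandomOrder([
        tv.transforms.RandomCrop((375, 375)),
        tv.transforms.RandomHorizontalFlip(),
        tv.transforms.RandomVerticalFlip()
    ]),
    tv.transforms.ColorJitter(brightness=0.2, contrast=0.2),  # photometric jitter
    tv.transforms.RandomRotation(15),                         # small random rotations
    tv.transforms.ToTensor(),
    tv.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])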
The focus of the next tutorial will be on the interpretability of deep learning models. Interested in keeping up?
Subscribe and stay updated on more deep learning materials at – https://medium.com/@slipnitskaya.
LeCun, Yann, et al. “Backpropagation applied to handwritten zip code recognition.” Neural computation 1.4 (1989): 541–551.
LeCun, Yann, et al. “Gradient-based learning applied to document recognition.” Proceedings of the IEEE 86.11 (1998): 2278–2324.
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “Imagenet classification with deep convolutional neural networks.” Communications of the ACM 60.6 (2017): 84–90.
He, Kaiming, et al. “Deep residual learning for image recognition.” Proceedings of the IEEE conference on computer vision and pattern recognition (2016): 770–778.
Wah, Catherine, et al. “The Caltech-UCSD Birds 200–2011 dataset.” Computation & Neural Systems Technical Report, CNS-TR-2011-001 (2011). | [
{
"code": null,
"e": 847,
"s": 171,
"text": "This article demonstrates how deep learning models used for image-related tasks can be advanced in order to address the fine-grained classification problem. For this objective, we will walk through the following two parts. First, you will get familiar with some basic concepts of computer vision and convolutional neural networks, while the second part demonstrates how to apply this knowledge to a real-world problem of bird species classification using PyTorch. Specifically, you will learn how to build your own CNN model – ResNet-50, – to further improve its performance using transfer learning, auxiliary task and attention-enhanced architecture, and even a little more."
},
{
"code": null,
"e": 1574,
"s": 847,
"text": "Computers perform extremely well when it comes to crunching numbers. Solving tons of equations to get a human to the Moon? No problem. Determine whether a cat or a dog appears in an image? Oops... The task that is inherently easy for any human being seemed to be impossible for first computers. During the years, algorithms evolved as well as the hardware did (remember the Moor’s law? R.I.P.). The field of computer vision appeared as a trial to solve the task of classifying images using computers. After the long period of development, many sophisticated methods were created. However, all of them suffered from the lack of generalizability: a model built to classify cats vs. dogs couldn’t distinguish, for example, birds."
},
{
"code": null,
"e": 2127,
"s": 1574,
"text": "In 1989, Yann LeCun and his colleagues had proposed [1], and further developed [2] the concept of convolutional neural network (CNN). The model itself was inspired by a human visual cortex, where a visual neuron is responsible for a small piece of a picture that is visible to an eye – the neuron’s receptive field. Structurally, it was expressed in the way that a single convolutional neuron (filter) scanned an input image step-by-step, being applied to different parts of the image many times, which refers to a concept of weight sharing (Figure 1)."
},
{
"code": null,
"e": 2799,
"s": 2127,
"text": "Of course, since LeCun’s LeNet-5, the state-of-the-art of CNN models has been developed greatly. The first successful large-scale architecture came out with AlexNet [3] that won the ILSVRC 2012 challenge achieving the top-5 error rate of 15.3%. Later advancements gave many powerful models that were mainly improved throughout the usage of larger and more complex architectures. The thing is, as the network goes deeper (depth is increasing), its performance gets saturated and starts degrading. To address this problem, the residual neural network (ResNet) was developed [4] to effectively direct the input over some layers (also known as skip- or residual connections)."
},
{
"code": null,
"e": 3130,
"s": 2799,
"text": "The core idea of the ResNet architecture is to pass a part of a signal to the end of a convolutional block unprocessed (by just copying values) in order to enlarge gradient flow through the deep layers (Figure 2). Thus, the skip connection guarantees that performance of the model does not decrease but it could increase slightly."
},
{
"code": null,
"e": 3238,
"s": 3130,
"text": "The next part explains how the discussed theory can be actually applied for solving the real-world problem."
},
{
"code": null,
"e": 3907,
"s": 3238,
"text": "Bird species recognition is a difficult task challenging the visual abilities for both human experts and computers. One of the interesting datasets related to the fine-grained classification problem is Caltech-UCSD Birds-200-2011 (CUB-200-2011) [5] consisting of 11788 images of birds belonging to 200 species. To address this problem, the goals of the current tutorial will be: (a) to build a CNN model to classify bird images w.r.t. their species and (b) to determine how the prediction accuracy of a baseline model can be boosted using CNNs of different architectures. For that, we will use PyTorch, one of the most popular open-source frameworks for deep learning."
},
{
"code": null,
"e": 3957,
"s": 3907,
"text": "By the end of this tutorial, you will be able to:"
},
{
"code": null,
"e": 4024,
"s": 3957,
"text": "Understand basics of image classification problem of bird species."
},
{
"code": null,
"e": 4081,
"s": 4024,
"text": "Determine the data-driven image pre-processing strategy."
},
{
"code": null,
"e": 4146,
"s": 4081,
"text": "Create your own deep learning pipeline for image classification."
},
{
"code": null,
"e": 4213,
"s": 4146,
"text": "Build, train and evaluate ResNet-50 model to predict bird species."
},
{
"code": null,
"e": 4274,
"s": 4213,
"text": "Improve the model performance by using different techniques."
},
{
"code": null,
"e": 4493,
"s": 4274,
"text": "First, you need to download an archive containing the dataset and store it into the data directory. It can be done manually from the following link, or using the Python code provided in the following GitHub repository:"
},
{
"code": null,
"e": 4504,
"s": 4493,
"text": "github.com"
},
{
"code": null,
"e": 4566,
"s": 4504,
"text": "Now, let’s import packages that we will use in this tutorial:"
},
{
"code": null,
"e": 4971,
"s": 4566,
"text": "# import packagesimport osimport csvimport numpy as npimport sklearn.model_selection as skmsimport torchimport torch.utils.data as tdimport torch.nn.functional as Fimport torchvision as tvimport torchvision.transforms.functional as TF# define constantsDEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'RANDOM_SEED = 42IN_DIR_DATA = 'data/CUB_200_2011'IN_DIR_IMG = os.path.join(IN_DIR_DATA, 'images')"
},
{
"code": null,
"e": 5331,
"s": 4971,
"text": "In this tutorial, we plan to pre-train a baseline model using the ImageNet dataset. As pre-trained models usually expect input images to be normalized in the same way, heights and widths should be at least of size 224 x 224 pixels. There might many ways for the image transformation be used to fullfill above specifications, but what might be the optimal one?"
},
{
"code": null,
"e": 5889,
"s": 5331,
"text": "Exploratory data analysis is an essential starting point of any data science project, which lays the foundation for the further analysis. Since we are interested to define the optimal data transformation strategy, we are going to explore bird images to see what useful we can grasp on. Let’s have a look at some bird examples of the sparrow family (Figure 3). Seems like there can be a high similarity among birds related to different species which is really hard to spot. Is that a White-throated or a Lincoln Sparrow? Well, even experts can be confused..."
},
{
"code": null,
"e": 6015,
"s": 5889,
"text": "Just out of interest, we’ll sum up all classes of the Sparrow family to understand how many of them are there in our dataset:"
},
{
"code": null,
"e": 6157,
"s": 6015,
"text": "# calculate the number of sparrow speciescls_sparrows = [k for k in os.listdir(IN_DIR_IMG) if 'sparrow' in k.lower()]print(len(cls_sparrows))"
},
{
"code": null,
"e": 6516,
"s": 6157,
"text": "The code above gives us the value of 21, implying that there are dozen different species can be represented only by a single family. And now we see why CUB-200-2011 is perfectly designed for fine-grained classification. What do we have is the many similar birds potentially related to different classes, and we, actually, plan to deal with that problem here."
},
{
"code": null,
"e": 6766,
"s": 6516,
"text": "But before getting in a real deep learning, we want to determine an appropriate strategy for data pre-processing. For that, we will analyse the marginal distributions of width sand heights by visualizing box plots for the corresponding observations:"
},
{
"code": null,
"e": 7001,
"s": 6766,
"text": "Indeed, the size of images varies considerably. We also see that heights and widths of the majority images are equal to 375 and 500 pixels, respectively. So, what might be the appropriate transformation strategy for this kind of data?"
},
{
"code": null,
"e": 7193,
"s": 7001,
"text": "CUB-200-2011 dataset contains thousands of images, so it might affect the computational time. To overcome that we first create class DatasetBirds to make data loading and pre-processing easy:"
},
{
"code": null,
"e": 10541,
"s": 7193,
"text": "class DatasetBirds(tv.datasets.ImageFolder): \"\"\" Wrapper for the CUB-200-2011 dataset. Method DatasetBirds.__getitem__() returns tuple of image and its corresponding label. \"\"\" def __init__(self, root, transform=None, target_transform=None, loader=tv.datasets.folder.default_loader, is_valid_file=None, train=True, bboxes=False): img_root = os.path.join(root, 'images') super(DatasetBirds, self).__init__( root=img_root, transform=None, target_transform=None, loader=loader, is_valid_file=is_valid_file, ) self.transform_ = transform self.target_transform_ = target_transform self.train = train # obtain sample ids filtered by split path_to_splits = os.path.join(root, 'train_test_split.txt') indices_to_use = list() with open(path_to_splits, 'r') as in_file: for line in in_file: idx, use_train = line.strip('\\n').split(' ', 2) if bool(int(use_train)) == self.train: indices_to_use.append(int(idx)) # obtain filenames of images path_to_index = os.path.join(root, 'images.txt') filenames_to_use = set() with open(path_to_index, 'r') as in_file: for line in in_file: idx, fn = line.strip('\\n').split(' ', 2) if int(idx) in indices_to_use: filenames_to_use.add(fn) img_paths_cut = {'/'.join(img_path.rsplit('/', 2)[-2:]): idx for idx, (img_path, lb) in enumerate(self.imgs)} imgs_to_use = [self.imgs[img_paths_cut[fn]] for fn in filenames_to_use] _, targets_to_use = list(zip(*imgs_to_use)) self.imgs = self.samples = imgs_to_use self.targets = targets_to_use if bboxes: # get coordinates of a bounding box path_to_bboxes = os.path.join(root, 'bounding_boxes.txt') bounding_boxes = list() with open(path_to_bboxes, 'r') as in_file: for line in in_file: idx, x, y, w, h = map(lambda x: float(x), line.strip('\\n').split(' ')) if int(idx) in indices_to_use: bounding_boxes.append((x, y, w, h)) self.bboxes = bounding_boxes else: self.bboxes = None def __getitem__(self, index): # generate one sample sample, target = super(DatasetBirds, self).__getitem__(index) if self.bboxes is not None: # squeeze coordinates of the bounding box to range [0, 1] width, height = sample.width, sample.height x, y, w, h = self.bboxes[index] scale_resize = 500 / width scale_resize_crop = scale_resize * (375 / 500) x_rel = scale_resize_crop * x / 375 y_rel = scale_resize_crop * y / 375 w_rel = scale_resize_crop * w / 375 h_rel = scale_resize_crop * h / 375 target = torch.tensor([target, x_rel, y_rel, w_rel, h_rel]) if self.transform_ is not None: sample = self.transform_(sample) if self.target_transform_ is not None: target = self.target_transform_(target) return sample, target"
},
{
"code": null,
"e": 10899,
"s": 10541,
"text": "All pre-trained models expect input images to be normalized in the same way, such as the height and width are at least 224 pixels. As you might noticed from our previous analysis, the size of the data varies considerably, and many images have landscape layout rather than portrait one, and width is commonly close to the maximum value along both dimensions."
},
{
"code": null,
"e": 11278,
"s": 10899,
"text": "In order to improve the ability of the model to learn bird representation, we’ll use data augmentation. We want to transform images in a such way, so we maintain the aspect ratio. One solution is to scale images uniformly, so that both dimensions are equal to the larger side using the maximum padding strategy. For that, we’ll create a pad function to pad images to 500 pixels:"
},
{
"code": null,
"e": 11764,
"s": 11278,
"text": "def pad(img, size_max=500): \"\"\" Pads images to the specified size (height x width). \"\"\" pad_height = max(0, size_max - img.height) pad_width = max(0, size_max - img.width) pad_top = pad_height // 2 pad_bottom = pad_height - pad_top pad_left = pad_width // 2 pad_right = pad_width - pad_left return TF.pad( img, (pad_left, pad_top, pad_right, pad_bottom), fill=tuple(map(lambda x: int(round(x * 256)), (0.485, 0.456, 0.406))))"
},
{
"code": null,
"e": 12140,
"s": 11764,
"text": "Assuming birds to appear at any image part, we make the model able to capture them everywhere by randomly-cropping and flipping images along both axes during the model training. While the images of the test split will be center-cropped before feeding into ResNet-50, as we expect the majority birds to be located at this image part referring to the previous data exploration."
},
{
"code": null,
"e": 12441,
"s": 12140,
"text": "For that, we are going to crop images by 375 x 375 pixels along both dimensions, as that is the average size of the majority images. We’ll also normalize images by mean [0.485, 0.456, 0.406] and standard deviation [0.229, 0.224, 0.225] to make distribution of pixel values closer to the Gaussian one."
},
{
"code": null,
"e": 13010,
"s": 12441,
"text": "# transform imagestransforms_train = tv.transforms.Compose([ tv.transforms.Lambda(pad), tv.transforms.RandomOrder([ tv.transforms.RandomCrop((375, 375)), tv.transforms.RandomHorizontalFlip(), tv.transforms.RandomVerticalFlip() ]), tv.transforms.ToTensor(), tv.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])transforms_eval = tv.transforms.Compose([ tv.transforms.Lambda(pad), tv.transforms.CenterCrop((375, 375)), tv.transforms.ToTensor(), tv.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])"
},
{
"code": null,
"e": 13552,
"s": 13010,
"text": "Then, we’ll organize images of the CUB-200-2011 dataset into three subsets to insure the proper model training and evaluation. As authors of the dataset suggest the way to assemble the training and test subsets, we split our data accordingly. Additionally, the validation split will be defined to further fine-tune the parameters of the model during the model evaluation process. For that, the training subset will be split using stratified sampling technique that ensures that each subset have equally balanced classes of different species."
},
{
"code": null,
"e": 14013,
"s": 13552,
"text": "# instantiate dataset objects according to the pre-defined splitsds_train = DatasetBirds(IN_DIR_DATA, transform=transforms_train, train=True)ds_val = DatasetBirds(IN_DIR_DATA, transform=transforms_eval, train=True)ds_test = DatasetBirds(IN_DIR_DATA, transform=transforms_eval, train=False)splits = skms.StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=RANDOM_SEED)idx_train, idx_val = next(splits.split(np.zeros(len(ds_train)), ds_train.targets))"
},
{
"code": null,
"e": 14309,
"s": 14013,
"text": "We’ll set up parameters for data loading and model training. To leverage computations and be able to proceed large dataset in parallel, we will collate input samples in several mini-batches and also denote how many sub-processes to use to generate them in order to leverage the training process."
},
{
"code": null,
"e": 14410,
"s": 14309,
"text": "# set hyper-parametersparams = {'batch_size': 24, 'num_workers': 8}num_epochs = 100num_classes = 200"
},
{
"code": null,
"e": 14489,
"s": 14410,
"text": "After we’ll create a DataLoader object to yield samples of an each data split:"
},
{
"code": null,
"e": 14776,
"s": 14489,
"text": "# instantiate data loaderstrain_loader = td.DataLoader( dataset=ds_train, sampler=td.SubsetRandomSampler(idx_train), **params)val_loader = td.DataLoader( dataset=ds_val, sampler=td.SubsetRandomSampler(idx_val), **params)test_loader = td.DataLoader(dataset=ds_test, **params)"
},
{
"code": null,
"e": 15008,
"s": 14776,
"text": "We are going to use ResNet-50 model for classification of bird species. ResNet (or Residual Network) is a variant of convolutional neural networks that was proposed as a solution to the vanishing gradient problem of large networks."
},
{
"code": null,
"e": 15202,
"s": 15008,
"text": "PyTorch provides the ResNet-50 model on torchvision.models, so we will instantiate the respective class and set the argument num_classes to 200 given the dataset of that number of bird species:"
},
{
"code": null,
"e": 15288,
"s": 15202,
"text": "# instantiate the modelmodel = tv.models.resnet50(num_classes=num_classes).to(DEVICE)"
},
{
"code": null,
"e": 15478,
"s": 15288,
"text": "More specifically, the chosen architecture is 50 layers deep and composed of 5 stages, 4 of which with residual blocks and 1 comprise a convolution, batch normalization and ReLU operations."
},
{
"code": null,
"e": 15841,
"s": 15478,
"text": "Next point is to set the learning rate of our model as well as a schedule to adjust it during the training for the sake of the better performance. Training of the ResNet-50 model will be done using the Adam optimizer with an initial learning rate of 1e-3 and an exponentially decreasing learning rate schedule such as it drops by a factor of gamma at each epoch."
},
{
"code": null,
"e": 16009,
"s": 15841,
"text": "# instantiate optimizer and scheduleroptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)"
},
{
"code": null,
"e": 16433,
"s": 16009,
"text": "Finally, we are ready to train and validate our model to recognize and learn the difference between bird species. The cross-entropy loss and accuracy metric will be accumulated per epoch in order to inspect the model performance dynamics. Following all of the training experiments, we test the model using the subset of previously unseen data to assess the overall goodness in bird classification using the accuracy metric."
},
{
"code": null,
"e": 18191,
"s": 16433,
"text": "# loop over epochsfor epoch in range(num_epochs):# train the model model.train() train_loss = list() train_acc = list() for batch in train_loader: x, y = batch x = x.to(DEVICE) y = y.to(DEVICE) optimizer.zero_grad() # predict bird species y_pred = model(x) # calculate the loss loss = F.cross_entropy(y_pred, y) # backprop & update weights loss.backward() optimizer.step() # calculate the accuracy acc = skms.accuracy_score([val.item() for val in y], [val.item() for val in y_pred.argmax(dim=-1)]) train_loss.append(loss.item()) train_acc.append(acc) # validate the model model.eval() val_loss = list() val_acc = list() with torch.no_grad(): for batch in val_loader: x, y = batch x = x.to(DEVICE) y = y.to(DEVICE) # predict bird species y_pred = model(x) # calculate the loss loss = F.cross_entropy(y_pred, y) # calculate the accuracy acc = skms.accuracy_score([val.item() for val in y], [val.item() for val in y_pred.argmax(dim=-1)]) val_loss.append(loss.item()) val_acc.append(acc) # adjust the learning rate scheduler.step()# test the modeltrue = list()pred = list()with torch.no_grad(): for batch in test_loader: x, y = batch x = x.to(DEVICE) y = y.to(DEVICE) y_pred = model(x) true.extend([val.item() for val in y]) pred.extend([val.item() for val in y_pred.argmax(dim=-1)])# calculate the accuracy test_accuracy = skms.accuracy_score(true, pred)print('Test accuracy: {:.3f}'.format(test_accuracy)"
},
{
"code": null,
"e": 18253,
"s": 18191,
"text": "Figure 5 depicts the model performance metrics for ResNet-50:"
},
{
"code": null,
"e": 18559,
"s": 18253,
"text": "As we see, the baseline model performs really poor as it overfits. The one of main reasons is the lack of diverse training samples. Just a quick note: CUB-200-2011 dataset has ~30 images per specie. Seems like we are stuck...isn’t it? Actually, there are some ways we can address to overcome these issues."
},
{
"code": null,
"e": 18703,
"s": 18559,
"text": "Well, we ran into a number of challenges in our previous analysis, so we may start thinking about how we can address these follow-up questions:"
},
{
"code": null,
"e": 18790,
"s": 18703,
"text": "Question 1: How to deal with overfitting given the limited amount of training samples?"
},
{
"code": null,
"e": 18868,
"s": 18790,
"text": "Question 2: How to improve the model performance in bird species recognition?"
},
{
"code": null,
"e": 18939,
"s": 18868,
"text": "Let’s figure out how we can advance our baseline model in more detail."
},
{
"code": null,
"e": 19325,
"s": 18939,
"text": "As it was said before, deep neural networks require a lot of training samples. Practitioners have noticed that, in order to train a deep neural network from scratch, the amount of data should grow exponentially with the number of trainable parameters. Luckily, generalization ability of a model that was trained on a larger dataset can be transferred to another, usually, simpler task."
},
{
"code": null,
"e": 19696,
"s": 19325,
"text": "In order to improve the performance of thebaseline model for bird classification, we will use weight initialization obtained from the general-purpose model pre-trained on the ImageNet dataset, and further fine-tune its parameters using the CUB-200-2011 one. The training process remains the same, while the model will rather focus on the fine-tuning of hyper-parameters."
},
{
"code": null,
"e": 19991,
"s": 19696,
"text": "PyTorch provides pre-trained models in torch.utils.model_zoo. Construction of a pre-trained ResNet-50 can be done by passing pretrained=True into constructor. This simple trick provides us with the model that already has well initialized filters, so there is no need to learn them from scratch."
},
{
"code": null,
"e": 20086,
"s": 19991,
"text": "# instantiate the modelmodel = tv.models.resnet50(num_classes=200, pretrained=True).to(DEVICE)"
},
{
"code": null,
"e": 20277,
"s": 20086,
"text": "We will also set a lower learning rate of 1e-4 in the optimizer, as we are going to train a network that was yet pre-trained on a large-scale image-classification task. And here are results:"
},
{
"code": null,
"e": 20428,
"s": 20277,
"text": "As we see, the use of the pre-trained model allows to solve the overfitting problem giving 80.77% test accuracy. Let’s continue experimenting on that!"
},
{
"code": null,
"e": 20749,
"s": 20428,
"text": "Now we can extend this approach even more. Why do we have to increase the complexity of a single task if we can add another one? No reason at all. It was noticed that introduction of an additional – auxiliary – task improves the network’s performance forcing it to learn more general representation of the training data."
},
{
"code": null,
"e": 21044,
"s": 20749,
"text": "As Caltech-UCSD Birds-200–2011 dataset includes bounding boxes in addition to class labels, we will use this auxiliary target to make the network to train in a multi-task fashion. Now, we will predict 4 coordinates of bird’s bounding box in addition to its specie by setting num_classes to 204:"
},
{
"code": null,
"e": 21151,
"s": 21044,
"text": "# instantiate the pre-trained modelmodel = tv.models.resnet50(num_classes=204, pretrained=True).to(DEVICE)"
},
{
"code": null,
"e": 21390,
"s": 21151,
"text": "Now we need to slightly modify our training and validation blocks, as we want to make predictions and calculate the loss for two targets corresponding to a correct bird specie and its bounding box coordinates. Here’s an example execution:"
},
{
"code": null,
"e": 21722,
"s": 21390,
"text": "...y_pred = model(x)# predict bird speciesy_pred_cls = y_pred[..., :-4]y_cls = y[..., 0].long()# predict bounding box coordinatesy_pred_bbox = y_pred[..., -4:]y_bbox = y[..., 1:]# calculate the lossloss_cls = F.cross_entropy(y_pred_cls, y_cls)loss_bbox = F.mse_loss(torch.sigmoid(y_pred_bbox), y_bbox)loss = loss_cls + loss_bbox..."
},
{
"code": null,
"e": 21885,
"s": 21722,
"text": "Results are even better – integration of the auxiliary task provides the stable increase of accuracy points giving 81.2% on the test split – as shown in Figure 7."
},
{
"code": null,
"e": 22305,
"s": 21885,
"text": "In the last few paragraphs we were focused on the data-driven advancement of our model. However, at some point the complexity of the task can exceed the model’s capacity resulting in a lower performance. In order to adjust the model’s power to the difficulty of the problem, we can equip the network with additional attention blocks that will help it to focus on important parts of the input and ignore irrelevant ones."
},
{
"code": null,
"e": 23111,
"s": 22305,
"text": "class Attention(torch.nn.Module): \"\"\" Attention block for CNN model. \"\"\" def __init__(self, in_channels, out_channels, kernel_size, padding): super(Attention, self).__init__() self.conv_depth = torch.nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding, groups=in_channels) self.conv_point = torch.nn.Conv2d(out_channels, out_channels, kernel_size=(1, 1)) self.bn = torch.nn.BatchNorm2d(out_channels, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True) self.activation = torch.nn.Tanh() def forward(self, inputs): x, output_size = inputs x = F.adaptive_max_pool2d(x, output_size=output_size) x = self.conv_depth(x) x = self.conv_point(x) x = self.bn(x) x = self.activation(x) + 1.0return x"
},
{
"code": null,
"e": 23438,
"s": 23111,
"text": "Attention module allows to highlight relevant regions of feature maps and returns values varying in range [0.0, 2.0], where the lower value implies the lower priority of a given pixel for the following layers. So we’ll create and instantiate the class ResNet50Attention corresponding to the attention-enhanced ResNet-50 model:"
},
{
"code": null,
"e": 25842,
"s": 23438,
"text": "class ResNet50Attention(torch.nn.Module): \"\"\" Attention-enhanced ResNet-50 model. \"\"\" weights_loader = staticmethod(tv.models.resnet50) def __init__(self, num_classes=200, pretrained=True, use_attention=True): super(ResNet50Attention, self).__init__() net = self.weights_loader(pretrained=pretrained) self.num_classes = num_classes self.pretrained = pretrained self.use_attention = use_attention net.fc = torch.nn.Linear( in_features=net.fc.in_features, out_features=num_classes, bias=net.fc.bias is not None ) self.net = net if self.use_attention: self.att1 = Attention(in_channels=64, out_channels=64, kernel_size=(3, 5), padding=(1, 2)) self.att2 = Attention(in_channels=64, out_channels=128, kernel_size=(5, 3), padding=(2, 1)) self.att3 = Attention(in_channels=128, out_channels=256, kernel_size=(3, 5), padding=(1, 2)) self.att4 = Attention(in_channels=256, out_channels=512, kernel_size=(5, 3), padding=(2, 1)) if pretrained: self.att1.bn.weight.data.zero_() self.att1.bn.bias.data.zero_() self.att2.bn.weight.data.zero_() self.att2.bn.bias.data.zero_() self.att3.bn.weight.data.zero_() self.att3.bn.bias.data.zero_() self.att4.bn.weight.data.zero_() self.att4.bn.bias.data.zero_() def _forward(self, x): return self.net(x) def _forward_att(self, x): x = self.net.conv1(x) x = self.net.bn1(x) x = self.net.relu(x) x = self.net.maxpool(x) x_a = x.clone() x = self.net.layer1(x) x = x * self.att1((x_a, x.shape[-2:])) x_a = x.clone() x = self.net.layer2(x) x = x * self.att2((x_a, x.shape[-2:])) x_a = x.clone() x = self.net.layer3(x) x = x * self.att3((x_a, x.shape[-2:])) x_a = x.clone() x = self.net.layer4(x) x = x * self.att4((x_a, x.shape[-2:])) x = self.net.avgpool(x) x = torch.flatten(x, 1) x = self.net.fc(x) return x def forward(self, x): return self._forward_att(x) if self.use_attention else self._forward(x)# instantiate the modelmodel = ResNet50Attention(num_classes=204, pretrained=True, use_attention=True).to(DEVICE)"
},
{
"code": null,
"e": 26129,
"s": 25842,
"text": "After that, we are ready to train and evaluate the performance of the attention-enhanced model pre-trained on the ImageNet dataset and advanced with the multi-task learning for bird classification using the same code we utilized before. Final accuracy score has been increased to 82.4%!"
},
{
"code": null,
"e": 26191,
"s": 26129,
"text": "Figure 8 shows summary results generated during the analysis:"
},
{
"code": null,
"e": 26407,
"s": 26191,
"text": "Results clearly indicate that the final variant of the ResNet-50 model advanced with transfer and multi-task learning, as well as with the attention module, greatly contributes to the more accurate bird predictions."
},
{
"code": null,
"e": 26642,
"s": 26407,
"text": "Here, we used different approaches to improve the performance of a baseline ResNet-50 for the classification of bird species from CUB-200–2011 dataset. What could we learn from that? Here are some take-home messages from our analysis:"
},
{
"code": null,
"e": 26780,
"s": 26642,
"text": "Data exploration results indicate the CUB-200–2011 as the high-quality, balanced although center-biased dataset without corrupted images."
},
{
"code": null,
"e": 26916,
"s": 26780,
"text": "In case of the limited amount of training samples, you can reuse weights of the model pre-trained on another dataset in your own model."
},
{
"code": null,
"e": 27044,
"s": 26916,
"text": "Learning through auxiliary task in addition to the primary bird classification one contributes to the better model performance."
},
{
"code": null,
"e": 27183,
"s": 27044,
"text": "Enhancing the network’s architecture by adding new layers (attention module) makes the model more accurate in bird species classification."
},
{
"code": null,
"e": 27383,
"s": 27183,
"text": "Analysis of different extensions of the basic ResNet-50 indicate the pre-trained model advanced using auxiliary task and attention mechanism as the prominent candidate for the further investigations."
},
{
"code": null,
"e": 27629,
"s": 27383,
"text": "In summary, there is a space for improvements of the model performance. Additional advancements can be achieved by further optimization of model hyper-parameters, the use of a stronger data augmentation, regularization, meta-learning techniques."
},
{
"code": null,
"e": 27743,
"s": 27629,
"text": "The focus of the next tutorial will be on the interpretability of deep learning models. Interested to keep it on?"
},
{
"code": null,
"e": 27841,
"s": 27743,
"text": "Subscribe and stay updated on more deep learning materials at – https://medium.com/@slipnitskaya."
},
{
"code": null,
"e": 28559,
"s": 27841,
"text": "LeCun, Yann, et al. “Backpropagation applied to handwritten zip code recognition.” Neural computation 1.4 (1989): 541–551.LeCun, Yann, et al. “Gradient-based learning applied to document recognition.” Proceedings of the IEEE 86.11 (1998): 2278–2324.Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “Imagenet classification with deep convolutional neural networks.” Communications of the ACM 60.6 (2017): 84–90.He, Kaiming, et al. “Deep residual learning for image recognition.” Proceedings of the IEEE conference on computer vision and pattern recognition (2016): 770–778.Wah, Catherine, et al. “The Caltech-UCSD Birds 200–2011 dataset.” Computation & Neural Systems Technical Report, CNS-TR-2011–001.(2011)."
},
{
"code": null,
"e": 28682,
"s": 28559,
"text": "LeCun, Yann, et al. “Backpropagation applied to handwritten zip code recognition.” Neural computation 1.4 (1989): 541–551."
},
{
"code": null,
"e": 28810,
"s": 28682,
"text": "LeCun, Yann, et al. “Gradient-based learning applied to document recognition.” Proceedings of the IEEE 86.11 (1998): 2278–2324."
},
{
"code": null,
"e": 28981,
"s": 28810,
"text": "Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “Imagenet classification with deep convolutional neural networks.” Communications of the ACM 60.6 (2017): 84–90."
},
{
"code": null,
"e": 29144,
"s": 28981,
"text": "He, Kaiming, et al. “Deep residual learning for image recognition.” Proceedings of the IEEE conference on computer vision and pattern recognition (2016): 770–778."
}
] |
HTML Course | Basics of HTML | 08 Sep, 2021
In this article, we will go through all the basic stuff required to write HTML. There are various tags that we must consider and know about while starting to code in HTML. These tags help in the organization and basic formatting of elements in our scripts or web pages. This step-by-step procedure will guide you through the process of writing HTML.
HTML Paragraph These tags help us to write paragraph statements in a webpage. They start with the <p> tag and end with the </p> tag. Here the <br> tag is used to break a line and acts as a carriage return. <br> is an empty tag. Example:
html
<html>
<head>
    <title>GeeksforGeeks</title>
</head>
<body>
    <h1>Hello GeeksforGeeks</h1>
    <p>
        A Computer Science portal for geeks<br>
        A Computer Science portal for geeks<br>
        A Computer Science portal for geeks<br>
    </p>
</body>
</html>
Output:
HTML Horizontal Lines The <hr> tag is used to break the page into various parts, creating horizontal margins with the help of a horizontal line running from the left to the right-hand side of the page. This is also an empty tag and doesn’t take any additional statements. Example:
html
<html>
<head>
    <title>GeeksforGeeks</title>
</head>
<body>
    <h1>Hello GeeksforGeeks</h1>
    <p>
        A Computer Science portal for geeks<br>
        A Computer Science portal for geeks<br>
        A Computer Science portal for geeks<br>
    </p>
    <hr>
    <p>
        A Computer Science portal for geeks<br>
        A Computer Science portal for geeks<br>
        A Computer Science portal for geeks<br>
    </p>
    <hr>
    <p>
        A Computer Science portal for geeks<br>
        A Computer Science portal for geeks<br>
        A Computer Science portal for geeks<br>
    </p>
    <hr>
</body>
</html>
Output:
HTML Images The image tag is used to insert an image into our web page. The source of the image to be inserted is put inside the <img src="source_of_image"> tag. Example:
html
<html>
<head>
    <title>GeeksforGeeks</title>
</head>
<body>
    <img src="https://media.geeksforgeeks.org/wp-content/cdn-uploads/Geek_logi_-low_res.png">
</body>
</html>
Output:
HTML – Attributes An attribute is used to provide extra information about an element.
All HTML elements can have attributes. Attributes provide additional information about an element.
It takes two parameters: a name and a value. These define the properties of the element and are placed inside the opening tag of the element. The name parameter takes the name of the property we would like to assign to the element, and the value takes the property’s value, i.e., the extent to which the property applies to the element.
Every name has some value that must be written within quotes.
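For illustration, a minimal sketch (src, width, height and alt are all standard HTML attributes; the file name is a placeholder):

<!-- each attribute is a name="value" pair inside the opening tag -->
<img src="image.png" width="300" height="200" alt="A sample image">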
HTML – Comments It is used for inserting comments in the HTML code. Using comments, especially in complex code, is a best practice of coding, so that both the coder and the reader can get help in understanding it.
It is simply a piece of code that is wiped off (i.e., not displayed) by web browsers.
It helps the coder/reader of the code to identify pieces of code, especially in complex source code.
Syntax of HTML Comments :
HTML
<!-- Write your comments here -->
Example:
HTML
<!DOCTYPE html>
<html>
<body>
    <!-- there is a comment -->
    <p>geeksforgeeks.</p>
</body>
</html>
Output:
geeksforgeeks.
HTML – Lists What is a list? A list is a record of short pieces of information, such as people’s names, usually written or printed with a single item on each line and ordered in a way that makes a particular item easy to find. For example:
A shopping list
To-do list
Lists in HTML HTML offers three ways of specifying lists of information. All lists must contain one or more list elements. The types of lists that can be used in HTML are (a combined example follows the list):
ul : An unordered list. This will list items using plain bullets.
ol : An ordered list. This will use different schemes of numbers to list your items.
dl : A definition list. This arranges your items in the same way as they are arranged in a dictionary.
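A minimal sketch showing all three list types side by side (the item texts are placeholders):

<!-- unordered, ordered and definition lists -->
<ul>
  <li>Milk</li>
  <li>Bread</li>
</ul>
<ol>
  <li>Wake up</li>
  <li>Write code</li>
</ol>
<dl>
  <dt>HTML</dt>
  <dd>The standard markup language for web pages.</dd>
</dl>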
For more on HTML, please refer to: https://www.geeksforgeeks.org/html-tutorials/
Supported Browsers:
Google Chrome
Microsoft Edge
Firefox
Opera
Safari
| [
{
"code": null,
"e": 31755,
"s": 31727,
"text": "\n08 Sep, 2021"
},
{
"code": null,
"e": 31775,
"s": 31755,
"text": "Course Navigation "
},
{
"code": null,
"e": 32351,
"s": 31775,
"text": "In this article, we will go through all the basic stuff required to write HTML. There are various tags that we must consider and know about while starting to code in HTML. These tags help in organization and basic formatting of elements in our script or web pages. These step by step procedures will guide you through the process of writing HTML.HTML Paragraph These tags help us to write paragraph statements in a webpage. They start with the <p> tag and ends with </p>. Here the <br> tag is used to break line and acts as a carriage return. <br> is an empty tag. Example: "
},
{
"code": null,
"e": 32356,
"s": 32351,
"text": "html"
},
{
"code": "<html><head> <title>GeeksforGeeks</title></head><body> <h1>Hello GeeksforGeeks</h1> <p> A Computer Science portal for geeks<br> A Computer Science portal for geeks<br> A Computer Science portal for geeks<br> </p> </body></html>",
"e": 32608,
"s": 32356,
"text": null
},
{
"code": null,
"e": 32618,
"s": 32608,
"text": "Output: "
},
{
"code": null,
"e": 32889,
"s": 32618,
"text": "HTML Horizontal Lines The <hr> tag is used to break the page into various parts, creating horizontal margins with help of a horizontal line running from left to right hand side of the page. This is also an empty tag and doesn’t take any additional statements. Example: "
},
{
"code": null,
"e": 32894,
"s": 32889,
"text": "html"
},
{
"code": "<html><head> <title>GeeksforGeeks</title></head><body> <h1>Hello GeeksforGeeks</h1> <p> A Computer Science portal for geeks<br> A Computer Science portal for geeks<br> A Computer Science portal for geeks<br> </p> <hr> <p> A Computer Science portal for geeks<br> A Computer Science portal for geeks<br> A Computer Science portal for geeks<br> </p> <hr> <p> A Computer Science portal for geeks<br> A Computer Science portal for geeks<br> A Computer Science portal for geeks<br> </p> <hr></body></html>",
"e": 33466,
"s": 32894,
"text": null
},
{
"code": null,
"e": 33476,
"s": 33466,
"text": "Output: "
},
{
"code": null,
"e": 33649,
"s": 33476,
"text": "HTML Images The image tag is used to insert an image into our web page. The source of the image to be inserted is put inside the <img src=”source_of_image“> tag. Example: "
},
{
"code": null,
"e": 33654,
"s": 33649,
"text": "html"
},
{
"code": "<html><head> <title>GeeksforGeeks</title></head><body> <img src=\"https://media.geeksforgeeks.org/wp-content/cdn-uploads/Geek_logi_-low_res.png\"></body></html>",
"e": 33819,
"s": 33654,
"text": null
},
{
"code": null,
"e": 33829,
"s": 33819,
"text": "Output: "
},
{
"code": null,
"e": 33931,
"s": 33829,
"text": "HTML – Attributes An attribute is used to provide extra or additional information about an element. "
},
{
"code": null,
"e": 34030,
"s": 33931,
"text": "All HTML elements can have attributes. Attributes provide additional information about an element."
},
{
"code": null,
"e": 34372,
"s": 34030,
"text": "It takes two parameters : a name and a value. These define the properties of the element and is placed inside the opening tag of the element. The name parameter takes the name of the property we would like to assign to the element and the value takes the properties value or extent of the property names that can be aligned over the element."
},
{
"code": null,
"e": 34434,
"s": 34372,
"text": "Every name has some value that must be written within quotes."
},
{
"code": null,
"e": 34633,
"s": 34434,
"text": "HTML – CommentsIt is used for inserting comments in the HTML code. Using comments, specially in complex code, is the best practice of coding so that coder and reader can get help for understanding. "
},
{
"code": null,
"e": 34726,
"s": 34633,
"text": "It is simply piece of code which is wiped off by web browsers i.e, not displayed by browser."
},
{
"code": null,
"e": 34826,
"s": 34726,
"text": "It gives help to coder / reader of code to identify piece of code specially in complex source code."
},
{
"code": null,
"e": 34854,
"s": 34826,
"text": "Syntax of HTML Comments : "
},
{
"code": null,
"e": 34859,
"s": 34854,
"text": "HTML"
},
{
"code": "<!-- Write your comments here -->",
"e": 34893,
"s": 34859,
"text": null
},
{
"code": null,
"e": 34904,
"s": 34893,
"text": "Example: "
},
{
"code": null,
"e": 34909,
"s": 34904,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html> <body> <!-- there is a comment --> <p>geeksforgeeks.</p> </body> </html>",
"e": 35013,
"s": 34909,
"text": null
},
{
"code": null,
"e": 35023,
"s": 35013,
"text": "Output: "
},
{
"code": null,
"e": 35038,
"s": 35023,
"text": "geeksforgeeks."
},
{
"code": null,
"e": 35281,
"s": 35038,
"text": "HTML – Lists What is a list? A list is a record of short pieces of information, such as people’s names, usually written or printed with a single thing on each line and ordered in a way that makes a particular thing easy to find.For example: "
},
{
"code": null,
"e": 35297,
"s": 35281,
"text": "A shopping list"
},
{
"code": null,
"e": 35308,
"s": 35297,
"text": "To-do list"
},
{
"code": null,
"e": 35484,
"s": 35308,
"text": "Lists in HTML HTML offers three ways for specifying lists of information. All lists must contain one or more list elements. The types of lists that can be used in HTML are : "
},
{
"code": null,
"e": 35550,
"s": 35484,
"text": "ul : An unordered list. This will list items using plain bullets."
},
{
"code": null,
"e": 35635,
"s": 35550,
"text": "ol : An ordered list. This will use different schemes of numbers to list your items."
},
{
"code": null,
"e": 35738,
"s": 35635,
"text": "dl : A definition list. This arranges your items in the same way as they are arranged in a dictionary."
},
{
"code": null,
"e": 35817,
"s": 35738,
"text": "For More on HTML, Please refer: https://www.geeksforgeeks.org/html-tutorials/ "
},
{
"code": null,
"e": 35836,
"s": 35817,
"text": "Supported Browser:"
},
{
"code": null,
"e": 35850,
"s": 35836,
"text": "Google Chrome"
},
{
"code": null,
"e": 35865,
"s": 35850,
"text": "Microsoft Edge"
},
{
"code": null,
"e": 35873,
"s": 35865,
"text": "Firefox"
},
{
"code": null,
"e": 35879,
"s": 35873,
"text": "Opera"
},
{
"code": null,
"e": 35886,
"s": 35879,
"text": "Safari"
},
{
"code": null,
"e": 36025,
"s": 35888,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 36037,
"s": 36025,
"text": "ysachin2314"
},
{
"code": null,
"e": 36060,
"s": 36037,
"text": "abhishekundefeated0687"
},
{
"code": null,
"e": 36072,
"s": 36060,
"text": "HTML-Basics"
},
{
"code": null,
"e": 36090,
"s": 36072,
"text": "HTML-course-basic"
},
{
"code": null,
"e": 36095,
"s": 36090,
"text": "HTML"
},
{
"code": null,
"e": 36112,
"s": 36095,
"text": "Web Technologies"
},
{
"code": null,
"e": 36117,
"s": 36112,
"text": "HTML"
},
{
"code": null,
"e": 36215,
"s": 36117,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 36265,
"s": 36215,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
},
{
"code": null,
"e": 36327,
"s": 36265,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 36375,
"s": 36327,
"text": "How to update Node.js and NPM to next version ?"
},
{
"code": null,
"e": 36435,
"s": 36375,
"text": "How to set the default value for an HTML <select> element ?"
},
{
"code": null,
"e": 36488,
"s": 36435,
"text": "Hide or show elements in HTML using display property"
},
{
"code": null,
"e": 36528,
"s": 36488,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 36561,
"s": 36528,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 36606,
"s": 36561,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 36649,
"s": 36606,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
How to switch to frames in Selenium? | We can switch to a frame in Selenium with the switchTo().frame() method, which accepts a frame index, a frame name/id, or a WebElement. Once inside a frame, we can switch back with the help of the following methods −
switchTo().defaultContent()
This method is for switching to and fro between frames and parent frames. The focus is shifted back to the main page.
switchTo().parentFrame()
This method is used to switch the control to the parent frame of the current frame.
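Before switching back, we first have to enter a frame. A minimal sketch of the three standard overloads of switchTo().frame() (the locator values here are placeholders):

// by index (zero-based)
driver.switchTo().frame(0);
// by the frame's name or id attribute
driver.switchTo().frame("frameNameOrId");
// by a located WebElement
WebElement frameElement = driver.findElement(By.tagName("iframe"));
driver.switchTo().frame(frameElement);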
import org.openqa.selenium.By;
import org.openqa.selenium.Keys;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.concurrent.TimeUnit;
public class FrameDefault {
public static void main(String[] args) {
System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
String url = "url with frames";
driver.get(url);
driver.manage().timeouts().implicitlyWait(12, TimeUnit.SECONDS);
//grabbing the first frame with the help of index
driver.switchTo().frame(0);
System.out.println("Getting the page source " + driver.getPageSource());
// switching back to the parent web page
driver.switchTo().defaultContent();
driver.quit();
}
} | [
{
"code": null,
"e": 1139,
"s": 1062,
"text": "We can switch to frames in Selenium with the help of the following methods −"
},
{
"code": null,
"e": 1281,
"s": 1139,
"text": "switchTo()defaultContent()This method is for switching to and fro in between frames and parent frames. The focus is shifted to the main page."
},
{
"code": null,
"e": 1308,
"s": 1281,
"text": "switchTo()defaultContent()"
},
{
"code": null,
"e": 1424,
"s": 1308,
"text": "This method is for switching to and fro in between frames and parent frames. The focus is shifted to the main page."
},
{
"code": null,
"e": 1532,
"s": 1424,
"text": "switchTo().parentFrame()This method is used to switch the control to the parent frame of the current frame."
},
{
"code": null,
"e": 1557,
"s": 1532,
"text": "switchTo().parentFrame()"
},
{
"code": null,
"e": 1641,
"s": 1557,
"text": "This method is used to switch the control to the parent frame of the current frame."
},
{
"code": null,
"e": 2511,
"s": 1641,
"text": "import org.openqa.selenium.By;\nimport org.openqa.selenium.Keys;\nimport org.openqa.selenium.WebDriver;\nimport org.openqa.selenium.WebElement;\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport java.util.concurrent.TimeUnit;\npublic class FrameDefault {\n public static void main(String[] args) {\n System.setProperty(\"webdriver.chrome.driver\", \"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n WebDriver driver = new ChromeDriver();\n String url = \"url with frames\";\n driver.get(url);\n driver.manage().timeouts().implicitlyWait(12, TimeUnit.SECONDS);\n //grabbing the first frame with the help of index\n driver.switchTo().frame(0);\n System.out.println(\"Getting the page source \" + driver.getPageSource());\n // switching back to the parent web page\n driver.switchTo().defaultContent();\n driver.quit();\n }\n}"
}
] |
C# - Jagged Arrays | A Jagged array is an array of arrays. You can declare a jagged array named scores of type int as −
int [][] scores;
Declaring an array does not create the array in memory. To create the above array −
int[][] scores = new int[5][];
for (int i = 0; i < scores.Length; i++) {
scores[i] = new int[4];
}
You can initialize a jagged array as −
int[][] scores = new int[2][]{new int[]{92,93,94},new int[]{85,66,87,88}};
Here, scores is an array of two arrays of integers – scores[0] is an array of 3 integers and scores[1] is an array of 4 integers.
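A minimal sketch of accessing the elements – note that each inner array carries its own Length, which is what makes the array "jagged":

// element access uses two indexers; lengths may differ per row
Console.WriteLine(scores[0][2]);     // 94
Console.WriteLine(scores[1][3]);     // 88
Console.WriteLine(scores.Length);    // 2 (number of inner arrays)
Console.WriteLine(scores[1].Length); // 4 (length of the second inner array)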
The following example illustrates using a jagged array −
using System;
namespace ArrayApplication {
class MyArray {
static void Main(string[] args) {
/* a jagged array of 5 arrays of integers */
int[][] a = new int[][]{new int[]{0,0},new int[]{1,2},
new int[]{2,4},new int[]{ 3, 6 }, new int[]{ 4, 8 } };
int i, j;
/* output each array element's value */
for (i = 0; i < 5; i++) {
for (j = 0; j < 2; j++) {
Console.WriteLine("a[{0}][{1}] = {2}", i, j, a[i][j]);
}
}
Console.ReadKey();
}
}
}
When the above code is compiled and executed, it produces the following result −
a[0][0] = 0
a[0][1] = 0
a[1][0] = 1
a[1][1] = 2
a[2][0] = 2
a[2][1] = 4
a[3][0] = 3
a[3][1] = 6
a[4][0] = 4
a[4][1] = 8
| [
{
"code": null,
"e": 2369,
"s": 2270,
"text": "A Jagged array is an array of arrays. You can declare a jagged array named scores of type int as −"
},
{
"code": null,
"e": 2387,
"s": 2369,
"text": "int [][] scores;\n"
},
{
"code": null,
"e": 2472,
"s": 2387,
"text": "Declaring an array, does not create the array in memory. To create the above array −"
},
{
"code": null,
"e": 2574,
"s": 2472,
"text": "int[][] scores = new int[5][];\nfor (int i = 0; i < scores.Length; i++) {\n scores[i] = new int[4];\n}"
},
{
"code": null,
"e": 2613,
"s": 2574,
"text": "You can initialize a jagged array as −"
},
{
"code": null,
"e": 2688,
"s": 2613,
"text": "int[][] scores = new int[2][]{new int[]{92,93,94},new int[]{85,66,87,88}};"
},
{
"code": null,
"e": 2819,
"s": 2688,
"text": "Where, scores is an array of two arrays of integers - scores[0] is an array of 3 integers and scores[1] is an array of 4 integers."
},
{
"code": null,
"e": 2876,
"s": 2819,
"text": "The following example illustrates using a jagged array −"
},
{
"code": null,
"e": 3461,
"s": 2876,
"text": "using System;\n\nnamespace ArrayApplication {\n class MyArray {\n static void Main(string[] args) {\n \n /* a jagged array of 5 array of integers*/\n int[][] a = new int[][]{new int[]{0,0},new int[]{1,2},\n new int[]{2,4},new int[]{ 3, 6 }, new int[]{ 4, 8 } };\n int i, j;\n \n /* output each array element's value */\n for (i = 0; i < 5; i++) {\n for (j = 0; j < 2; j++) {\n Console.WriteLine(\"a[{0}][{1}] = {2}\", i, j, a[i][j]);\n }\n }\n Console.ReadKey();\n }\n }\n}"
},
{
"code": null,
"e": 3542,
"s": 3461,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 3653,
"s": 3542,
"text": "a[0][0]: 0\na[0][1]: 0\na[1][0]: 1\na[1][1]: 2\na[2][0]: 2\na[2][1]: 4\na[3][0]: 3\na[3][1]: 6\na[4][0]: 4\na[4][1]: 8\n"
},
{
"code": null,
"e": 3690,
"s": 3653,
"text": "\n 119 Lectures \n 23.5 hours \n"
},
{
"code": null,
"e": 3703,
"s": 3690,
"text": " Raja Biswas"
},
{
"code": null,
"e": 3737,
"s": 3703,
"text": "\n 37 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 3755,
"s": 3737,
"text": " Trevoir Williams"
},
{
"code": null,
"e": 3788,
"s": 3755,
"text": "\n 16 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 3802,
"s": 3788,
"text": " Peter Jepson"
},
{
"code": null,
"e": 3839,
"s": 3802,
"text": "\n 159 Lectures \n 21.5 hours \n"
},
{
"code": null,
"e": 3854,
"s": 3839,
"text": " Ebenezer Ogbu"
},
{
"code": null,
"e": 3889,
"s": 3854,
"text": "\n 193 Lectures \n 17 hours \n"
},
{
"code": null,
"e": 3904,
"s": 3889,
"text": " Arnold Higuit"
},
{
"code": null,
"e": 3939,
"s": 3904,
"text": "\n 24 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 3951,
"s": 3939,
"text": " Eric Frick"
},
{
"code": null,
"e": 3958,
"s": 3951,
"text": " Print"
},
{
"code": null,
"e": 3969,
"s": 3958,
"text": " Add Notes"
}
] |
How to add +1 to existing MySQL values? | Let us see an example and create a table first.
mysql> create table Add1ToExistingValue
-> (
-> Value int
-> );
Query OK, 0 rows affected (0.56 sec)
Insert some records into the table using the INSERT command.
The query is as follows
mysql> insert into Add1ToExistingValue values(10);
Query OK, 1 row affected (0.12 sec)
mysql> insert into Add1ToExistingValue values(13);
Query OK, 1 row affected (0.15 sec)
mysql> insert into Add1ToExistingValue values(15);
Query OK, 1 row affected (0.13 sec)
mysql> insert into Add1ToExistingValue values(16);
Query OK, 1 row affected (0.14 sec)
mysql> insert into Add1ToExistingValue values(20);
Query OK, 1 row affected (0.16 sec)
mysql> insert into Add1ToExistingValue values(40);
Query OK, 1 row affected (0.15 sec)
mysql> insert into Add1ToExistingValue values(50);
Query OK, 1 row affected (0.11 sec)
mysql> insert into Add1ToExistingValue values(55);
Query OK, 1 row affected (0.17 sec)
mysql> insert into Add1ToExistingValue values(56);
Query OK, 1 row affected (0.17 sec)
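The nine single-row inserts above could equally be written as one multi-row statement. This alternative is my addition, not part of the original article:

mysql> insert into Add1ToExistingValue values(10),(13),(15),(16),(20),(40),(50),(55),(56);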
Display all the records from the table using the SELECT statement.
The query is as follows
mysql> select *from Add1ToExistingValue;
The following is the output
+-------+
| Value |
+-------+
| 10 |
| 13 |
| 15 |
| 16 |
| 20 |
| 40 |
| 50 |
| 55 |
| 56 |
+-------+
9 rows in set (0.00 sec)
Here is the query to add +1 to existing values
mysql> update Add1ToExistingValue set Value=Value+1 where Value >=20;
Query OK, 5 rows affected (0.08 sec)
Rows matched: 5 Changed: 5 Warnings: 0
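Note that the WHERE clause limits the increment to rows with Value >= 20. To add +1 to every row instead, drop the WHERE clause; a hedged variant not run in the article:

mysql> update Add1ToExistingValue set Value=Value+1;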
Let us check the records of the table again using the SELECT statement.
The query is as follows
mysql> select *from Add1ToExistingValue;
The following is the output
+-------+
| Value |
+-------+
| 10 |
| 13 |
| 15 |
| 16 |
| 21 |
| 41 |
| 51 |
| 56 |
| 57 |
+-------+
9 rows in set (0.00 sec) | [
{
"code": null,
"e": 1110,
"s": 1062,
"text": "Let us see an example and create a table first."
},
{
"code": null,
"e": 1220,
"s": 1110,
"text": "mysql> create table Add1ToExistingValue\n -> (\n -> Value int\n -> );\nQuery OK, 0 rows affected (0.56 sec)"
},
{
"code": null,
"e": 1275,
"s": 1220,
"text": "Insert some records in the table using insert command."
},
{
"code": null,
"e": 1299,
"s": 1275,
"text": "The query is as follows"
},
{
"code": null,
"e": 2082,
"s": 1299,
"text": "mysql> insert into Add1ToExistingValue values(10);\nQuery OK, 1 row affected (0.12 sec)\nmysql> insert into Add1ToExistingValue values(13);\nQuery OK, 1 row affected (0.15 sec)\nmysql> insert into Add1ToExistingValue values(15);\nQuery OK, 1 row affected (0.13 sec)\nmysql> insert into Add1ToExistingValue values(16);\nQuery OK, 1 row affected (0.14 sec)\nmysql> insert into Add1ToExistingValue values(20);\nQuery OK, 1 row affected (0.16 sec)\nmysql> insert into Add1ToExistingValue values(40);\nQuery OK, 1 row affected (0.15 sec)\nmysql> insert into Add1ToExistingValue values(50);\nQuery OK, 1 row affected (0.11 sec)\nmysql> insert into Add1ToExistingValue values(55);\nQuery OK, 1 row affected (0.17 sec)\nmysql> insert into Add1ToExistingValue values(56);\nQuery OK, 1 row affected (0.17 sec)"
},
{
"code": null,
"e": 2141,
"s": 2082,
"text": "Display all records from the table using select statement."
},
{
"code": null,
"e": 2165,
"s": 2141,
"text": "The query is as follows"
},
{
"code": null,
"e": 2206,
"s": 2165,
"text": "mysql> select *from Add1ToExistingValue;"
},
{
"code": null,
"e": 2234,
"s": 2206,
"text": "The following is the output"
},
{
"code": null,
"e": 2389,
"s": 2234,
"text": "+-------+\n| Value |\n+-------+\n| 10 |\n| 13 |\n| 15 |\n| 16 |\n| 20 |\n| 40 |\n| 50 |\n| 55 |\n| 56 |\n+-------+\n9 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2436,
"s": 2389,
"text": "Here is the query to add +1 to existing values"
},
{
"code": null,
"e": 2582,
"s": 2436,
"text": "mysql> update Add1ToExistingValue set Value=Value+1 where Value >=20;\nQuery OK, 5 rows affected (0.08 sec)\nRows matched: 5 Changed: 5 Warnings: 0"
},
{
"code": null,
"e": 2652,
"s": 2582,
"text": "Let us check the table records from the table using select statement."
},
{
"code": null,
"e": 2676,
"s": 2652,
"text": "The query is as follows"
},
{
"code": null,
"e": 2717,
"s": 2676,
"text": "mysql> select *from Add1ToExistingValue;"
},
{
"code": null,
"e": 2745,
"s": 2717,
"text": "The following is the output"
},
{
"code": null,
"e": 2900,
"s": 2745,
"text": "+-------+\n| Value |\n+-------+\n| 10 |\n| 13 |\n| 15 |\n| 16 |\n| 21 |\n| 41 |\n| 51 |\n| 56 |\n| 57 |\n+-------+\n9 rows in set (0.00 sec)"
}
] |
C# | List Class - GeeksforGeeks | 03 Apr, 2019
The List<T> class represents a list of objects which can be accessed by index. It comes under the System.Collections.Generic namespace. The List class can be used to create a collection of different types like integers, strings, etc. The List<T> class also provides methods to search, sort, and manipulate lists.
Characteristics:
It is different from arrays: a List<T> can be resized dynamically, but arrays cannot.
List<T> class can accept null as a valid value for reference types and it also allows duplicate elements.
If Count becomes equal to Capacity, the capacity of the List is increased automatically by reallocating the internal array. The existing elements are copied to the new array before the new element is added.
The List<T> class is the generic equivalent of the ArrayList class and implements the IList<T> generic interface.
This class can use both equality and ordering comparer.
List<T> class is not sorted by default and elements are accessed by zero-based index.
For very large List<T> objects, you can increase the maximum capacity to 2 billion elements on a 64-bit system by setting the enabled attribute of the gcAllowVeryLargeObjects configuration element to true in the run-time environment.
Example:
// C# program to create a List<T>
using System;
using System.Collections.Generic;

class Geeks {

    // Main Method
    public static void Main(String[] args)
    {
        // Creating a List of integers
        List<int> firstlist = new List<int>();

        // displaying the number
        // of elements of List<T>
        Console.WriteLine(firstlist.Count);
    }
}
Output:
0
Example:
// C# program to illustrate the
// Capacity Property of List<T>
using System;
using System.Collections.Generic;

class Geeks {

    // Main Method
    public static void Main(String[] args)
    {
        // Creating a List of integers
        // Here we are not setting
        // Capacity explicitly
        List<int> firstlist = new List<int>();

        // adding elements in firstlist
        firstlist.Add(1);
        firstlist.Add(2);
        firstlist.Add(3);
        firstlist.Add(4);

        // Printing the Capacity of firstlist
        Console.WriteLine("Capacity Is: " + firstlist.Capacity);

        // Printing the Count of firstlist
        Console.WriteLine("Count Is: " + firstlist.Count);

        // Adding some more
        // elements in firstlist
        firstlist.Add(5);
        firstlist.Add(6);

        // Printing the Capacity of firstlist
        // It will give output 8 as internally
        // List is resized
        Console.WriteLine("Capacity Is: " + firstlist.Capacity);

        // Printing the Count of firstlist
        Console.WriteLine("Count Is: " + firstlist.Count);
    }
}
Output:
Capacity Is: 4
Count Is: 4
Capacity Is: 8
Count Is: 6
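If you know roughly how many elements the list will hold, you can avoid the automatic reallocation shown above by passing an initial capacity to the constructor. A small sketch (my own example, not from the original article):

// Creating a List with an explicit initial capacity
List<int> presized = new List<int>(10);

// Capacity starts at 10 even though the list is still empty
Console.WriteLine("Capacity Is: " + presized.Capacity);
Console.WriteLine("Count Is: " + presized.Count);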
Example 1:
// C# Program to check whether the
// element is present in the List
// or not
using System;
using System.Collections.Generic;

class Geeks {

    // Main Method
    public static void Main(String[] args)
    {
        // Creating a List<T> of Integers
        List<int> firstlist = new List<int>();

        // Adding elements to List
        firstlist.Add(1);
        firstlist.Add(2);
        firstlist.Add(3);
        firstlist.Add(4);
        firstlist.Add(5);
        firstlist.Add(6);
        firstlist.Add(7);

        // Checking whether 4 is present
        // in List or not
        Console.Write(firstlist.Contains(4));
    }
}
Output:
True
Example 2:
// C# Program to remove the element at
// the specified index of the List<T>
using System;
using System.Collections.Generic;

class Geeks {

    // Main Method
    public static void Main(String[] args)
    {
        // Creating a List<T> of Integers
        List<int> firstlist = new List<int>();

        // Adding elements to List
        firstlist.Add(17);
        firstlist.Add(19);
        firstlist.Add(21);
        firstlist.Add(9);
        firstlist.Add(75);
        firstlist.Add(19);
        firstlist.Add(73);

        Console.WriteLine("Elements Present in List:\n");

        int p = 0;

        // Displaying the elements of List
        foreach(int k in firstlist)
        {
            Console.Write("At Position {0}: ", p);
            Console.WriteLine(k);
            p++;
        }

        Console.WriteLine(" ");

        // removing the element at index 3
        Console.WriteLine("Removing the element at index 3\n");

        // 9 will be removed from the List
        // and 75 will come to index 3
        firstlist.RemoveAt(3);

        int p1 = 0;

        // Displaying the elements of List
        foreach(int n in firstlist)
        {
            Console.Write("At Position {0}: ", p1);
            Console.WriteLine(n);
            p1++;
        }
    }
}
Output:
Elements Present in List:
At Position 0: 17
At Position 1: 19
At Position 2: 21
At Position 3: 9
At Position 4: 75
At Position 5: 19
At Position 6: 73
Removing the element at index 3
At Position 0: 17
At Position 1: 19
At Position 2: 21
At Position 3: 75
At Position 4: 19
At Position 5: 73
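Related to RemoveAt(), the Remove() method deletes by value rather than by index, and removes only the first occurrence. A brief sketch (my own example):

List<int> nums = new List<int> { 17, 19, 21, 19 };

// removes only the first 19; the list becomes 17, 21, 19
nums.Remove(19);
Console.WriteLine(string.Join(", ", nums));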
Reference:
https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1?view=netframework-4.7.2
| [
{
"code": null,
"e": 24494,
"s": 24466,
"text": "\n03 Apr, 2019"
},
{
"code": null,
"e": 24799,
"s": 24494,
"text": "List<T> class represents the list of objects which can be accessed by index. It comes under the System.Collection.Generic namespace. List class can be used to create a collection of different types like integers, strings etc. List<T> class also provides the methods to search, sort, and manipulate lists."
},
{
"code": null,
"e": 24816,
"s": 24799,
"text": "Characteristics:"
},
{
"code": null,
"e": 24905,
"s": 24816,
"text": "It is different from the arrays. A List<T> can be resized dynamically but arrays cannot."
},
{
"code": null,
"e": 25011,
"s": 24905,
"text": "List<T> class can accept null as a valid value for reference types and it also allows duplicate elements."
},
{
"code": null,
"e": 25236,
"s": 25011,
"text": "If the Count becomes equals to Capacity, then the capacity of the List increased automatically by reallocating the internal array. The existing elements will be copied to the new array before the addition of the new element."
},
{
"code": null,
"e": 25343,
"s": 25236,
"text": "List<T> class is the generic equivalent of ArrayList class by implementing the IList<T> generic interface."
},
{
"code": null,
"e": 25399,
"s": 25343,
"text": "This class can use both equality and ordering comparer."
},
{
"code": null,
"e": 25485,
"s": 25399,
"text": "List<T> class is not sorted by default and elements are accessed by zero-based index."
},
{
"code": null,
"e": 25695,
"s": 25485,
"text": "For very large List<T> objects, you can increase the maximum capacity to 2 billion elements on a 64-bit system by setting the enabled attribute of the configuration element to true in the run-time environment."
},
{
"code": null,
"e": 25704,
"s": 25695,
"text": "Example:"
},
{
"code": "// C# program to create a List<T>using System;using System.Collections.Generic; class Geeks { // Main Method public static void Main(String[] args) { // Creating a List of integers List<int> firstlist = new List<int>(); // displaying the number // of elements of List<T> Console.WriteLine(firstlist.Count); }}",
"e": 26068,
"s": 25704,
"text": null
},
{
"code": null,
"e": 26076,
"s": 26068,
"text": "Output:"
},
{
"code": null,
"e": 26079,
"s": 26076,
"text": "0\n"
},
{
"code": null,
"e": 26088,
"s": 26079,
"text": "Example:"
},
{
"code": "// C# program to illustrate the// Capacity Property of List<T>using System;using System.Collections.Generic; class Geeks { // Main Method public static void Main(String[] args) { // Creating a List of integers // Here we are not setting // Capacity explicitly List<int> firstlist = new List<int>(); // adding elements in firstlist firstlist.Add(1); firstlist.Add(2); firstlist.Add(3); firstlist.Add(4); // Printing the Capacity of firstlist Console.WriteLine(\"Capacity Is: \" + firstlist.Capacity); // Printing the Count of firstlist Console.WriteLine(\"Count Is: \" + firstlist.Count); // Adding some more // elements in firstlist firstlist.Add(5); firstlist.Add(6); // Printing the Capacity of firstlist // It will give output 8 as internally // List is resized Console.WriteLine(\"Capacity Is: \" + firstlist.Capacity); // Printing the Count of firstlist Console.WriteLine(\"Count Is: \" + firstlist.Count); }}",
"e": 27185,
"s": 26088,
"text": null
},
{
"code": null,
"e": 27193,
"s": 27185,
"text": "Output:"
},
{
"code": null,
"e": 27248,
"s": 27193,
"text": "Capacity Is: 4\nCount Is: 4\nCapacity Is: 8\nCount Is: 6\n"
},
{
"code": null,
"e": 27259,
"s": 27248,
"text": "Example 1:"
},
{
"code": "// C# Program to check whether the// element is present in the List// or notusing System;using System.Collections.Generic; class Geeks { // Main Method public static void Main(String[] args) { // Creating an List<T> of Integers List<int> firstlist = new List<int>(); // Adding elements to List firstlist.Add(1); firstlist.Add(2); firstlist.Add(3); firstlist.Add(4); firstlist.Add(5); firstlist.Add(6); firstlist.Add(7); // Checking whether 4 is present // in List or not Console.Write(firstlist.Contains(4)); }}",
"e": 27883,
"s": 27259,
"text": null
},
{
"code": null,
"e": 27891,
"s": 27883,
"text": "Output:"
},
{
"code": null,
"e": 27897,
"s": 27891,
"text": "True\n"
},
{
"code": null,
"e": 27908,
"s": 27897,
"text": "Example 2:"
},
{
"code": "// C# Program to remove the element at// the specified index of the List<T>using System;using System.Collections.Generic; class Geeks { // Main Method public static void Main(String[] args) { // Creating an List<T> of Integers List<int> firstlist = new List<int>(); // Adding elements to List firstlist.Add(17); firstlist.Add(19); firstlist.Add(21); firstlist.Add(9); firstlist.Add(75); firstlist.Add(19); firstlist.Add(73); Console.WriteLine(\"Elements Present in List:\\n\"); int p = 0; // Displaying the elements of List foreach(int k in firstlist) { Console.Write(\"At Position {0}: \", p); Console.WriteLine(k); p++; } Console.WriteLine(\" \"); // removing the element at index 3 Console.WriteLine(\"Removing the element at index 3\\n\"); // 9 will remove from the List // and 75 will come at index 3 firstlist.RemoveAt(3); int p1 = 0; // Displaying the elements of List foreach(int n in firstlist) { Console.Write(\"At Position {0}: \", p1); Console.WriteLine(n); p1++; } }}",
"e": 29167,
"s": 27908,
"text": null
},
{
"code": null,
"e": 29175,
"s": 29167,
"text": "Output:"
},
{
"code": null,
"e": 29471,
"s": 29175,
"text": "Elements Present in List:\n\nAt Position 0: 17\nAt Position 1: 19\nAt Position 2: 21\nAt Position 3: 9\nAt Position 4: 75\nAt Position 5: 19\nAt Position 6: 73\n \nRemoving the element at index 3\n\nAt Position 0: 17\nAt Position 1: 19\nAt Position 2: 21\nAt Position 3: 75\nAt Position 4: 19\nAt Position 5: 73\n"
},
{
"code": null,
"e": 29482,
"s": 29471,
"text": "Reference:"
},
{
"code": null,
"e": 29584,
"s": 29482,
"text": "https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.list-1?view=netframework-4.7.2"
},
{
"code": null,
"e": 29604,
"s": 29584,
"text": "CSharp-Generic-List"
},
{
"code": null,
"e": 29629,
"s": 29604,
"text": "CSharp-Generic-Namespace"
},
{
"code": null,
"e": 29632,
"s": 29629,
"text": "C#"
},
{
"code": null,
"e": 29730,
"s": 29632,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29739,
"s": 29730,
"text": "Comments"
},
{
"code": null,
"e": 29752,
"s": 29739,
"text": "Old Comments"
},
{
"code": null,
"e": 29774,
"s": 29752,
"text": "C# | Class and Object"
},
{
"code": null,
"e": 29792,
"s": 29774,
"text": "C# | Constructors"
},
{
"code": null,
"e": 29823,
"s": 29792,
"text": "Introduction to .NET Framework"
},
{
"code": null,
"e": 29846,
"s": 29823,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 29861,
"s": 29846,
"text": "C# | Delegates"
},
{
"code": null,
"e": 29883,
"s": 29861,
"text": "C# | Abstract Classes"
},
{
"code": null,
"e": 29899,
"s": 29883,
"text": "C# | Data Types"
},
{
"code": null,
"e": 29927,
"s": 29899,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 29967,
"s": 29927,
"text": "Top 50 C# Interview Questions & Answers"
}
] |
Build an end-to-end Machine Learning Model with MLlib in pySpark. | by Nasir Safdari | Towards Data Science | In-memory computation and parallel processing are some of the major reasons that Apache Spark has become very popular in the big data industry for dealing with data products at large scale and performing faster analysis. Built on top of Spark, MLlib is a scalable machine learning library that delivers both high-quality algorithms and blazing speed. With great APIs for Java, Python, and Scala, it is a top choice for data analysts, data engineers, and data scientists. MLlib consists of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering (matrix factorization), dimensionality reduction, and more.
In this article, we are going to build an end-to-end machine learning model using MLlib in pySpark. We are going to use a real-world dataset from the Home Credit Default Risk competition on Kaggle. The objective of this competition was to identify whether loan applicants are capable of repaying their loans, based on the data collected from each applicant. The target variable is either 0 (applicants who were able to pay back their loans) or 1 (applicants who were NOT able to pay back their loans). It is a binary classification problem with a highly imbalanced target label: the distribution ratio is close to 0.91 to 0.09, with 0.91 being the ratio of applicants who were able to pay back their loans and 0.09 the ratio of those who were not.
Let’s start by looking at the structure of our dataset:
#we use the findspark library to locate spark on our local machine
import findspark
findspark.init('home directory of Spark')

from pyspark.sql import SparkSession

# initiate our session and read the main CSV file, then we print the
# dataframe schema
spark = SparkSession.builder.appName('imbalanced_binary_classification').getOrCreate()
new_df = spark.read.csv('application_train.csv', header=True, inferSchema=True)
new_df.printSchema()

root
 |-- SK_ID_CURR: integer (nullable = true)
 |-- TARGET: integer (nullable = true)
 |-- NAME_CONTRACT_TYPE: string (nullable = true)
 |-- CODE_GENDER: string (nullable = true)
 |-- FLAG_OWN_CAR: string (nullable = true)
 |-- FLAG_OWN_REALTY: string (nullable = true)
 |-- CNT_CHILDREN: integer (nullable = true)
 |-- AMT_INCOME_TOTAL: double (nullable = true)
 |-- AMT_CREDIT: double (nullable = true)
 |-- AMT_ANNUITY: double (nullable = true)
 |-- AMT_GOODS_PRICE: double (nullable = true)
 |-- NAME_TYPE_SUITE: string (nullable = true)
 |-- NAME_INCOME_TYPE: string (nullable = true)
 |-- NAME_EDUCATION_TYPE: string (nullable = true)
 |-- NAME_FAMILY_STATUS: string (nullable = true)
 |-- NAME_HOUSING_TYPE: string (nullable = true)
 |-- REGION_POPULATION_RELATIVE: double (nullable = true)
...
printSchema() only shows us the column names and their data types. We are going to drop the SK_ID_CURR column, rename the “TARGET” column to “label”, and look at the distribution of our target variable:
# SK_ID_CURR is the id column which we don't need in the process,
# so we get rid of it, and we rename our target variable to "label"
drop_col = ['SK_ID_CURR']
new_df = new_df.select([column for column in new_df.columns if column not in drop_col])
new_df = new_df.withColumnRenamed('TARGET', 'label')
new_df.groupby('label').count().toPandas()
We can visualize the distribution of the labels with matplotlib:
# let's have a look at the distribution of our target variable;
# to make it look better, we first convert our spark df to Pandas
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

df_pd = new_df.toPandas()
print(len(df_pd))
plt.figure(figsize=(12,10))
sns.countplot(x='label', data=df_pd, order=df_pd['label'].value_counts().index)
And here is how the data looks in a Pandas DataFrame format:
# let's see how everything looks in Pandas
import pandas as pd
pd.DataFrame(new_df.take(10), columns=new_df.columns)
Now that we have some idea about the general structure of our dataset, let's continue with some data wrangling. First, we check how many categorical and numerical features we have. Next, we build a function that outputs essential information about the missing values in our dataset:
# now let's see how many categorical and numerical features we have:
cat_cols = [item[0] for item in new_df.dtypes if item[1].startswith('string')]
print(str(len(cat_cols)) + ' categorical features')
num_cols = [item[0] for item in new_df.dtypes if item[1].startswith('int') | item[1].startswith('double')][1:]
print(str(len(num_cols)) + ' numerical features')

16 categorical features
104 numerical features
Here is how we get the table of missing-value information:
# we use the below function to find more information about the missing values
def info_missing_table(df_pd):
    """Input pandas dataframe and return columns with missing values and percentages"""
    # count total nulls in each column of the dataframe
    mis_val = df_pd.isnull().sum()
    # count percentage of nulls in each column
    mis_val_percent = 100 * df_pd.isnull().sum() / len(df_pd)
    # join mis_val and mis_val_percent side by side (as columns)
    mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1)
    # rename columns in table
    mis_val_table_ren_columns = mis_val_table.rename(
        columns={0: 'Missing Values', 1: '% of Total Values'})
    mis_val_table_ren_columns = mis_val_table_ren_columns[
        mis_val_table_ren_columns.iloc[:, 1] != 0].sort_values(
        '% of Total Values', ascending=False).round(1)
    # .shape[1] is the total number of columns, .shape[0] the total number of rows
    print("Your selected dataframe has " + str(df_pd.shape[1]) + " columns.\n"
          "There are " + str(mis_val_table_ren_columns.shape[0]) +
          " columns that have missing values.")
    return mis_val_table_ren_columns

missings = info_missing_table(df_pd)
missings
There are 67 columns out of 121 that have missing values. The image doesn't show all of them, but overall most of these 67 columns have more than 50 percent missing values, so we are dealing with a lot of missing data. We are going to fill the numerical missing values with the average of each column and the categorical missing values with the most frequent category of each column. But first, let's count the missing values in each column:
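The snippet below calls a count_missings() helper that the article never defines. Here is a minimal sketch of what it might look like; the name and exact behavior are my assumption, and it counts only SQL nulls, not NaN values:

from pyspark.sql.functions import col, sum as spark_sum, when

def count_missings(spark_df):
    # sum a 1/0 null indicator per column, then keep only columns that have nulls
    counts = spark_df.select([
        spark_sum(when(col(c).isNull(), 1).otherwise(0)).alias(c)
        for c in spark_df.columns
    ]).collect()[0].asDict()
    return [(c, n) for c, n in counts.items() if n > 0]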
miss_counts = count_missings(new_df)
miss_counts

[('AMT_ANNUITY', 12), ('AMT_GOODS_PRICE', 278), ('NAME_TYPE_SUITE', 1292), ('OWN_CAR_AGE', 202929), ('OCCUPATION_TYPE', 96391), ('CNT_FAM_MEMBERS', 2), ('EXT_SOURCE_1', 173378), ('EXT_SOURCE_2', 660), ('EXT_SOURCE_3', 60965), ('APARTMENTS_AVG', 156061), ('BASEMENTAREA_AVG', 179943), ('YEARS_BEGINEXPLUATATION_AVG', 150007), ('YEARS_BUILD_AVG', 204488),
...
We separate the categorical and numerical columns with missing values:
# here we separate the missing columns in our new_df based on categorical and numerical types
list_cols_miss = [x[0] for x in miss_counts]
df_miss = new_df.select(*list_cols_miss)

# categorical columns
catcolums_miss = [item[0] for item in df_miss.dtypes if item[1].startswith('string')]  # selects names of columns with string data type
print("categorical columns_miss:", catcolums_miss)

# numerical columns
numcolumns_miss = [item[0] for item in df_miss.dtypes if item[1].startswith('int') | item[1].startswith('double')]  # selects names of columns with integer or double data type
print("numerical columns_miss:", numcolumns_miss)
Next, we fill the missing values:
# now that we have separated the columns based on categorical and numerical types,
# we fill the missing categorical values with the most frequent category
from pyspark.sql.functions import rank, sum, col

df_Nomiss = new_df.na.drop()
for x in catcolums_miss:
    mode = df_Nomiss.groupBy(x).count().sort(col("count").desc()).collect()[0][0]
    print(x, mode)  # print the name of the column and its most frequent category
    new_df = new_df.na.fill({x: mode})

# and we fill the missing numerical values with the average of each column
from pyspark.sql.functions import mean, round

for i in numcolumns_miss:
    meanvalue = new_df.select(round(mean(i))).collect()[0][0]
    print(i, meanvalue)
    new_df = new_df.na.fill({i: meanvalue})
Now that we don't have missing values in our dataset anymore, let's work on how to deal with the imbalanced classes. There are different methods to mitigate this problem. One way is to under-sample the majority class or over-sample the minority class to get a more balanced result. Another way is to assign a weight to each class, penalizing the majority class with a smaller weight and boosting the minority class with a bigger weight. We are going to create a new column in the dataset named “weights” and assign the inverse ratio of each class as its weight. Here is how it's done:
# adding the new column weights and filling it with ratios
from pyspark.sql.functions import when

ratio = 0.91
def weight_balance(labels):
    return when(labels == 1, ratio).otherwise(1 * (1 - ratio))

new_df = new_df.withColumn('weights', weight_balance(col('label')))
And here is how it looks after adding the weights column:
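For comparison, the under-sampling alternative mentioned above could look roughly like this; it is a sketch under my own assumptions, not the approach the article actually uses:

from pyspark.sql.functions import col

# keep all minority rows and a random subset of majority rows of a similar size
major = new_df.filter(col('label') == 0)
minor = new_df.filter(col('label') == 1)
fraction = minor.count() / major.count()
balanced_df = major.sample(withReplacement=False, fraction=fraction, seed=42).union(minor)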
The next step is feature engineering. pySpark makes this easy, so we do not need to do much to extract features. Here are the steps:

We apply StringIndexer() to assign indices to each category in our categorical columns.

We apply OneHotEncoderEstimator() to convert the categorical columns to one-hot encoded vectors.

And we apply VectorAssembler() to create a single feature vector from all categorical and numerical features; we call the final vector “features”.
# we use the OneHotEncoderEstimator from MLlib in spark to convert each categorical feature into one-hot vectors
# next, we use VectorAssembler to combine the resulting one-hot vectors and the rest of the numerical features into a
# single vector column. we append every step of the process to a stages array
from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler

stages = []
for categoricalCol in cat_cols:
    stringIndexer = StringIndexer(inputCol=categoricalCol, outputCol=categoricalCol + 'Index')
    encoder = OneHotEncoderEstimator(inputCols=[stringIndexer.getOutputCol()],
                                     outputCols=[categoricalCol + "classVec"])
    stages += [stringIndexer, encoder]

assemblerInputs = [c + "classVec" for c in cat_cols] + num_cols
assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="features")
stages += [assembler]
Now let's put everything into a pipeline. Since we are performing a sequence of transformations, a pipeline lets us apply all of them at once:
# we use a pipeline to apply all the stages of transformation
from pyspark.ml import Pipeline

cols = new_df.columns
pipeline = Pipeline(stages=stages)
pipelineModel = pipeline.fit(new_df)
new_df = pipelineModel.transform(new_df)
selectedCols = ['features'] + cols
new_df = new_df.select(selectedCols)
pd.DataFrame(new_df.take(5), columns=new_df.columns)
Here is how our new dataset looks after feature engineering:
For training, we first split the dataset into training and testing sets. Then we start training with Logistic Regression, because it performs well for binary classification problems.
# split the data into training and testing sets
train, test = new_df.randomSplit([0.80, 0.20], seed=42)
print(train.count())
print(test.count())

# first we check how LogisticRegression performs
from pyspark.ml.classification import LogisticRegression

LR = LogisticRegression(featuresCol='features', labelCol='label', maxIter=15)
LR_model = LR.fit(train)
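Note that the weights column created earlier only affects training if the estimator is told about it; the code above does not pass it. A hedged variant that actually uses it:

# same model, but weighting mistakes on the minority class via the weights column
LR_weighted = LogisticRegression(featuresCol='features', labelCol='label',
                                 weightCol='weights', maxIter=15)
LR_weighted_model = LR_weighted.fit(train)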
We are going to plot the ROC curve for the training data to see how Logistic Regression performed, and then we will use the area under the ROC curve, a standard metric for evaluating binary classification, to evaluate the models:
# plotting the ROC Curve
trainingSummary = LR_model.summary
roc = trainingSummary.roc.toPandas()
plt.plot(roc['FPR'], roc['TPR'])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.show()
print('Training set ROC: ' + str(trainingSummary.areaUnderROC))
Checking the model's performance on the testing set:
from pyspark.ml.evaluation import BinaryClassificationEvaluator

predictions_LR = LR_model.transform(test)
evaluator = BinaryClassificationEvaluator()
print("Test_SET (Area Under ROC): " + str(evaluator.evaluate(predictions_LR, {evaluator.metricName: "areaUnderROC"})))

Test_SET (Area Under ROC): 0.7111434396856681
0.711 is not a bad result for Logistic Regression. Next, we try another model, Gradient-Boosted Trees (GBT). It's a very popular classification and regression method that uses ensembles of decision trees.
# next we check out gradient-boosted trees
from pyspark.ml.classification import GBTClassifier

gbt = GBTClassifier(maxIter=15)
GBT_Model = gbt.fit(train)
gbt_predictions = GBT_Model.transform(test)
evaluator = BinaryClassificationEvaluator()
print("Test_SET (Area Under ROC): " + str(evaluator.evaluate(gbt_predictions, {evaluator.metricName: "areaUnderROC"})))

Test_SET (Area Under ROC): 0.7322019340889893
We were able to achieve a much better result, 0.732, using GBT. As a final strategy, we will implement hyper-parameter tuning using grid search, and after that run cross-validation to further improve the performance of GBT.
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator

paramGrid = (ParamGridBuilder()
             .addGrid(gbt.maxDepth, [2, 4, 6])
             .addGrid(gbt.maxBins, [20, 30])
             .addGrid(gbt.maxIter, [10, 15])
             .build())
cv = CrossValidator(estimator=gbt, estimatorParamMaps=paramGrid, evaluator=evaluator, numFolds=5)

# Run cross-validation.
cvModel = cv.fit(train)
gbt_cv_predictions = cvModel.transform(test)
evaluator.evaluate(gbt_cv_predictions)

CV_GBT (Area Under ROC) = 0.7368288195372332
The result improved a little bit, which means we can still play around with the hyper-parameter tuning to see if we can improve the result further.
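To see which hyper-parameters the search settled on, you can inspect the best model; a quick hedged one-liner:

# print the parameter map of the model that won the cross-validation
print(cvModel.bestModel.extractParamMap())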
In this project, we built an end-to-end machine learning model (binary classification with imbalanced classes) and showed the power of Apache Spark's MLlib and how it can be applied to end-to-end ML projects.

As always, the code and Jupyter notebook are available on my GitHub.

Questions and comments are highly appreciated. | [
{
"code": null,
"e": 833,
"s": 172,
"text": "In-Memory computation and Parallel-Processing are some of the major reasons that Apache Spark has become very popular in the big data industry to deal with data products at large scale and perform faster analysis. built on top of Spark, MLlib is a scalable Machine Learning library that delivers both high-quality algorithms and blazing speed. having great APIs for Java, Python, and Scala, it makes a top choice for Data Analysts, Data Engineers, and Data Scientists. MLlib consists of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering (matrix factorization), dimensionality reduction and etc."
},
{
"code": null,
"e": 1627,
"s": 833,
"text": "In this article, we are going to build an end-to-end machine learning model using MLlib in pySpark. we are going to use a real world dataset from Home Credit Default Risk competition on kaggle. the objective of this competition was to identify if loan applicants are capable of repaying their loans based on the data that was collected from each applicant. the target variable is either 0 (applicants who were able to pay back their loans)or 1 (applicants who were NOT able to pay back their loans). it is a binary classification problem with a highly imbalanced target label. the distribution ratio is close to 0.91 to 0.09, with 0.91 being the ratio of the applicants who were able to pay back their loans and 0.09 being the ratio of the applicants who were not able to pay back their loans."
},
{
"code": null,
"e": 1683,
"s": 1627,
"text": "Let’s start by looking at the structure of our dataset:"
},
{
"code": null,
"e": 2898,
"s": 1683,
"text": "#we use the findspark library to locate spark on our local machineimport findsparkfindspark.init('home Diredtory of Spark')from pyspark.sql import SparkSession# initiate our session and read the main CSV file, then we print the #dataframe schemaspark = SparkSession.builder.appName('imbalanced_binary_classification').getOrCreate()new_df = spark.read.csv('application_train.csv', header=True, inferSchema=True)new_df.printSchema()root |-- SK_ID_CURR: integer (nullable = true) |-- TARGET: integer (nullable = true) |-- NAME_CONTRACT_TYPE: string (nullable = true) |-- CODE_GENDER: string (nullable = true) |-- FLAG_OWN_CAR: string (nullable = true) |-- FLAG_OWN_REALTY: string (nullable = true) |-- CNT_CHILDREN: integer (nullable = true) |-- AMT_INCOME_TOTAL: double (nullable = true) |-- AMT_CREDIT: double (nullable = true) |-- AMT_ANNUITY: double (nullable = true) |-- AMT_GOODS_PRICE: double (nullable = true) |-- NAME_TYPE_SUITE: string (nullable = true) |-- NAME_INCOME_TYPE: string (nullable = true) |-- NAME_EDUCATION_TYPE: string (nullable = true) |-- NAME_FAMILY_STATUS: string (nullable = true) |-- NAME_HOUSING_TYPE: string (nullable = true) |-- REGION_POPULATION_RELATIVE: double (nullable = true)..."
},
{
"code": null,
"e": 3093,
"s": 2898,
"text": "printSchema() only shows us the column names and its data type. we are going to drop the SK_ID_CURR column, rename the “TARGET” column to “label” and see the distribution of our target variable:"
},
{
"code": null,
"e": 3447,
"s": 3093,
"text": "# Sk_ID_Curr is the id column which we dont need it in the process #so we get rid of it. and we rename the name of our # target variable to \"label\"drop_col = ['SK_ID_CURR']new_df = new_df.select([column for column in new_df.columns if column not in drop_col])new_df = new_df.withColumnRenamed('TARGET', 'label')new_df.groupby('label').count().toPandas()"
},
{
"code": null,
"e": 3512,
"s": 3447,
"text": "we can visualize the distribution of the labels with matplotlib:"
},
{
"code": null,
"e": 3861,
"s": 3512,
"text": "# let's have a look at the distribution of our target variable:# to make it look better, we first convert our spark df to a Pandasimport matplotlib.pyplot as pltimport seaborn as sns%matplotlib inlinedf_pd = new_df.toPandas()print(len(df_pd))plt.figure(figsize=(12,10))sns.countplot(x='label', data=df_pd, order=df_pd['label'].value_counts().index)"
},
{
"code": null,
"e": 3927,
"s": 3861,
"text": "and here is how the data looks like in a Pandas dataframe format:"
},
{
"code": null,
"e": 4042,
"s": 3927,
"text": "# let's see how everything look in Pandasimport pandas as pdpd.DataFrame(new_df.take(10), columns= new_df.columns)"
},
{
"code": null,
"e": 4327,
"s": 4042,
"text": "now that we have some ideas about the general structure of our dataset, let’s continue with some data wrangling. first we check how many Categorical and Numerical features do we have. next we build a function that outputs essential information about the missing values in our dataset:"
},
{
"code": null,
"e": 4734,
"s": 4327,
"text": "# now let's see how many categorical and numerical features we have:cat_cols = [item[0] for item in new_df.dtypes if item[1].startswith('string')] print(str(len(cat_cols)) + ' categorical features')num_cols = [item[0] for item in new_df.dtypes if item[1].startswith('int') | item[1].startswith('double')][1:]print(str(len(num_cols)) + ' numerical features')16 categorical features104 numerical features"
},
{
"code": null,
"e": 4781,
"s": 4734,
"text": "here is how we get the table for missing info:"
},
{
"code": null,
"e": 5977,
"s": 4781,
"text": "# we use the below function to find more information about the #missing valuesdef info_missing_table(df_pd): \"\"\"Input pandas dataframe and Return columns with missing value and percentage\"\"\" mis_val = df_pd.isnull().sum() #count total of null in each columns in dataframe#count percentage of null in each columns mis_val_percent = 100 * df_pd.isnull().sum() / len(df_pd) mis_val_table = pd.concat([mis_val, mis_val_percent], axis=1) #join to left (as column) between mis_val and mis_val_percent mis_val_table_ren_columns = mis_val_table.rename( columns = {0 : 'Missing Values', 1 : '% of Total Values'}) #rename columns in table mis_val_table_ren_columns = mis_val_table_ren_columns[ mis_val_table_ren_columns.iloc[:,1] != 0].sort_values('% of Total Values', ascending=False).round(1) print (\"Your selected dataframe has \" + str(df_pd.shape[1]) + \" columns.\\n\" #.shape[1] : just view total columns in dataframe \"There are \" + str(mis_val_table_ren_columns.shape[0]) + \" columns that have missing values.\") #.shape[0] : just view total rows in dataframe return mis_val_table_ren_columnsmissings = info_missing_table(df_pd)missings"
},
{
"code": null,
"e": 6439,
"s": 5977,
"text": "there are 67 columns out of 121 that has missing values in them. and it doesn't show all of them in the image, but overall, most of these 67 columns have more than 50 percent missing values. so we are dealing with a lot of missing values. we are going to fill the numerical missing values with the average of each column and the categorical missing values with the most frequent category of each column. but first, let’s count the missing values in each column:"
},
{
"code": null,
"e": 6843,
"s": 6439,
"text": "miss_counts = count_missings(new_df)miss_counts[('AMT_ANNUITY', 12), ('AMT_GOODS_PRICE', 278), ('NAME_TYPE_SUITE', 1292), ('OWN_CAR_AGE', 202929), ('OCCUPATION_TYPE', 96391), ('CNT_FAM_MEMBERS', 2), ('EXT_SOURCE_1', 173378), ('EXT_SOURCE_2', 660), ('EXT_SOURCE_3', 60965), ('APARTMENTS_AVG', 156061), ('BASEMENTAREA_AVG', 179943), ('YEARS_BEGINEXPLUATATION_AVG', 150007), ('YEARS_BUILD_AVG', 204488),..."
},
{
"code": null,
"e": 6910,
"s": 6843,
"text": "we separate categorical and numerical columns with missing values:"
},
{
"code": null,
"e": 7531,
"s": 6910,
"text": "# here we seperate missing columns in our new_df based on #categorical and numerical typeslist_cols_miss=[x[0] for x in miss_counts]df_miss= new_df.select(*list_cols_miss)#categorical columnscatcolums_miss=[item[0] for item in df_miss.dtypes if item[1].startswith('string')] #will select name of column with string data typeprint(\"cateogrical columns_miss:\", catcolums_miss)### numerical columnsnumcolumns_miss = [item[0] for item in df_miss.dtypes if item[1].startswith('int') | item[1].startswith('double')] #will select name of column with integer or double data typeprint(\"numerical columns_miss:\", numcolumns_miss)"
},
{
"code": null,
"e": 7564,
"s": 7531,
"text": "next we fill the missing values:"
},
{
"code": null,
"e": 8290,
"s": 7564,
"text": "# now that we have seperated the columns based on categorical and #numerical types, we will fill the missing categiracl # values with the most frequent categoryfrom pyspark.sql.functions import rank,sum,coldf_Nomiss=new_df.na.drop()for x in catcolums_miss: mode=df_Nomiss.groupBy(x).count().sort(col(\"count\").desc()).collect()[0][0] print(x, mode) #print name of columns and it's most categories new_df = new_df.na.fill({x:mode})# and we fill the missing numerical values with the average of each #columnfrom pyspark.sql.functions import mean, roundfor i in numcolumns_miss: meanvalue = new_df.select(round(mean(i))).collect()[0][0] print(i, meanvalue) new_df=new_df.na.fill({i:meanvalue})"
},
{
"code": null,
"e": 8880,
"s": 8290,
"text": "now that we don’t have missing values anymore in our dataset, let’s work on how to deal with the imbalanced classes. there are different methods to mitigate this problem. one way is to under sample the majority class or over sample the minority class to make a more balanced results. another way is to assign weights for each class to penalize the majority class by assigning less weight and boost the minority class by assigning bigger weight. we are going to create a new column in the dataset named “weights” and assign the inverse ratio of each class as weights. here is how it’s done:"
},
{
"code": null,
"e": 9138,
"s": 8880,
"text": "# adding the new column weights and fill it with ratiosfrom pyspark.sql.functions import whenratio = 0.91def weight_balance(labels): return when(labels == 1, ratio).otherwise(1*(1-ratio))new_df = new_df.withColumn('weights', weight_balance(col('label')))"
},
{
"code": null,
"e": 9195,
"s": 9138,
"text": "and here is how it looks after adding the weight column:"
},
{
"code": null,
"e": 9337,
"s": 9195,
"text": "the next step is Feature Engineering. pySpark has made it so easy that we do not need to do much for extracting features. here are the steps:"
},
{
"code": null,
"e": 9661,
"s": 9337,
"text": "we apply StringIndexer() to assign indices to each category in our categorical columns.we apply OneHotEncoderEstimator() to convert categorical columns to onehot encoded vectors.and we apply VectorAssembler() to create a feature vector from all categorical and numerical features and we call the final vector as “features”."
},
{
"code": null,
"e": 9749,
"s": 9661,
"text": "we apply StringIndexer() to assign indices to each category in our categorical columns."
},
{
"code": null,
"e": 9841,
"s": 9749,
"text": "we apply OneHotEncoderEstimator() to convert categorical columns to onehot encoded vectors."
},
{
"code": null,
"e": 9987,
"s": 9841,
"text": "and we apply VectorAssembler() to create a feature vector from all categorical and numerical features and we call the final vector as “features”."
},
{
"code": null,
"e": 10832,
"s": 9987,
"text": "# we use the OneHotEncoderEstimator from MLlib in spark to convert #aech v=categorical feature into one-hot vectors# next, we use VectorAssembler to combine the resulted one-hot ector #and the rest of numerical features into a # single vector column. we append every step of the process in a #stages arrayfrom pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssemblerstages = []for categoricalCol in cat_cols: stringIndexer = StringIndexer(inputCol = categoricalCol, outputCol = categoricalCol + 'Index') encoder = OneHotEncoderEstimator(inputCols=[stringIndexer.getOutputCol()], outputCols=[categoricalCol + \"classVec\"])stages += [stringIndexer, encoder]assemblerInputs = [c + \"classVec\" for c in cat_cols] + num_colsassembler = VectorAssembler(inputCols=assemblerInputs, outputCol=\"features\")stages += [assembler]"
},
{
"code": null,
"e": 10974,
"s": 10832,
"text": "now let’s put everything into a pipeline. here we are performing a sequence of transformations so we do all of them at once using a pipeline:"
},
{
"code": null,
"e": 11320,
"s": 10974,
"text": "# we use a pipeline to apply all the stages of transformationfrom pyspark.ml import Pipelinecols = new_df.columnspipeline = Pipeline(stages = stages)pipelineModel = pipeline.fit(new_df)new_df = pipelineModel.transform(new_df)selectedCols = ['features']+colsnew_df = new_df.select(selectedCols)pd.DataFrame(new_df.take(5), columns=new_df.columns)"
},
{
"code": null,
"e": 11386,
"s": 11320,
"text": "here is how our new dataset looks like after feature engineering:"
},
{
"code": null,
"e": 11569,
"s": 11386,
"text": "for training, we first split the dataset into training and testing sets. then we start training using Logistic Regression because it performs well for binary classification problems."
},
{
"code": null,
"e": 11921,
"s": 11569,
"text": "# split the data into trainign and testin setstrain, test = new_df.randomSplit([0.80, 0.20], seed = 42)print(train.count())print(test.count())# first we check how LogisticRegression perform from pyspark.ml.classification import LogisticRegressionLR = LogisticRegression(featuresCol = 'features', labelCol = 'label', maxIter=15)LR_model = LR.fit(train)"
},
{
"code": null,
"e": 12165,
"s": 11921,
"text": "we are going to plot the ROC curve for the training data to see how Logistic Regression performed and then we will use Area Under ROC curve, which is a standard metric for evaluating binary classification, as the metric to evaluate the models:"
},
{
"code": null,
"e": 12450,
"s": 12165,
"text": "#plotting the ROC CurvetrainingSummary = LR_model.summaryroc = trainingSummary.roc.toPandas()plt.plot(roc['FPR'],roc['TPR'])plt.ylabel('False Positive Rate')plt.xlabel('True Positive Rate')plt.title('ROC Curve')plt.show()print('Training set ROC: ' + str(trainingSummary.areaUnderROC))"
},
{
"code": null,
"e": 12499,
"s": 12450,
"text": "checking model’s performance on the testing set:"
},
{
"code": null,
"e": 12810,
"s": 12499,
"text": "from pyspark.ml.evaluation import BinaryClassificationEvaluatorpredictions_LR = LR_model.transform(test)evaluator = BinaryClassificationEvaluator()print(\"Test_SET (Area Under ROC): \" + str(evaluator.evaluate(predictions_LR, {evaluator.metricName: \"areaUnderROC\"})))Test_SET (Area Under ROC): 0.7111434396856681"
},
{
"code": null,
"e": 13020,
"s": 12810,
"text": "0.711 is not a very bad result for Logistic Regression. next we try another model, Gradient Boosting Trees (GBT). it’s a very popular classification and regression method that uses ensembles of decision trees."
},
{
"code": null,
"e": 13421,
"s": 13020,
"text": "# next we checkout gradient boosting treesfrom pyspark.ml.classification import GBTClassifiergbt = GBTClassifier(maxIter=15)GBT_Model = gbt.fit(train)gbt_predictions = GBT_Model.transform(test)evaluator = BinaryClassificationEvaluator()print(\"Test_SET (Area Under ROC): \" + str(evaluator.evaluate(gbt_predictions, {evaluator.metricName: \"areaUnderROC\"})))Test_SET (Area Under ROC): 0.7322019340889893"
},
{
"code": null,
"e": 13651,
"s": 13421,
"text": "we were able to achieve a much better result, 0.732, using GBT. as a final strategy here, we will implement hyper-parameter tuning using grid search and after that we run cross validation to better improve the performance of GBT."
},
{
"code": null,
"e": 14171,
"s": 13651,
"text": "from pyspark.ml.tuning import ParamGridBuilder, CrossValidatorparamGrid = (ParamGridBuilder() .addGrid(gbt.maxDepth, [2, 4, 6]) .addGrid(gbt.maxBins, [20, 30]) .addGrid(gbt.maxIter, [10, 15]) .build())cv = CrossValidator(estimator=gbt, estimatorParamMaps=paramGrid, evaluator=evaluator, numFolds=5)# Run cross validations.cvModel = cv.fit(train)gbt_cv_predictions = cvModel.transform(test)evaluator.evaluate(gbt_cv_predictions)CV_GBT (Area Under ROC) = 0.7368288195372332"
},
{
"code": null,
"e": 14323,
"s": 14171,
"text": "the result was improved a little bit which means that we still can play around with hyper-parameter tuning to see if we can further improve the result."
},
{
"code": null,
"e": 14537,
"s": 14323,
"text": "in this project, we built an End-to-End Machine Learning model (binary classification with imbalanced classes). and we showed the power of Apache Spark’s MLlib and how it can be applied for end-to-end ML projects."
},
{
"code": null,
"e": 14607,
"s": 14537,
"text": "like always, the code and jupyter notebook is available on my Github."
}
] |
How to get last 4 characters from string in C#? | Firstly, set the string −
string str = "Football and Tennis";
Now, use the Substring() method to get the last 4 characters −
str.Substring(str.Length - 4);
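Note that Substring() throws an ArgumentOutOfRangeException when the string is shorter than 4 characters. A defensive variant (my addition, not part of the original example) could be −

string res = str.Length >= 4 ? str.Substring(str.Length - 4) : str;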
Let us see the complete code −
using System;
public class Demo {
public static void Main() {
string str = "Football and Tennis";
string res = str.Substring(str.Length - 4);
Console.WriteLine(res);
}
}
nnis | [
{
"code": null,
"e": 1088,
"s": 1062,
"text": "Firstly, set the string −"
},
{
"code": null,
"e": 1124,
"s": 1088,
"text": "string str = \"Football and Tennis\";"
},
{
"code": null,
"e": 1187,
"s": 1124,
"text": "Now, use the substring() method to get the last 4 characters −"
},
{
"code": null,
"e": 1218,
"s": 1187,
"text": "str.Substring(str.Length - 4);"
},
{
"code": null,
"e": 1249,
"s": 1218,
"text": "Let us see the complete code −"
},
{
"code": null,
"e": 1260,
"s": 1249,
"text": " Live Demo"
},
{
"code": null,
"e": 1454,
"s": 1260,
"text": "using System;\npublic class Demo {\n public static void Main() {\n string str = \"Football and Tennis\";\n string res = str.Substring(str.Length - 4);\n Console.WriteLine(res);\n }\n}"
},
{
"code": null,
"e": 1459,
"s": 1454,
"text": "nnis"
}
] |