How to form a MySQL Conditional Insert?

For this, you can use an INSERT ... SELECT statement against the MySQL DUAL table. Let us create a table to understand the concept of a conditional insert. The query to create the table is as follows −
mysql> create table ConditionalInsertDemo
-> (
-> UserId int,
-> TotalUser int,
-> NumberOfItems int
-> );
Query OK, 0 rows affected (0.58 sec)
Insert some records into the table using the INSERT command. The query is as follows −
mysql> insert into ConditionalInsertDemo values(101,560,780);
Query OK, 1 row affected (0.19 sec)
mysql> insert into ConditionalInsertDemo values(102,660,890);
Query OK, 1 row affected (0.20 sec)
mysql> insert into ConditionalInsertDemo values(103,450,50);
Query OK, 1 row affected (0.15 sec)
Display all records from the table using the SELECT statement. The query is as follows −
mysql> select * from ConditionalInsertDemo;
+--------+-----------+---------------+
| UserId | TotalUser | NumberOfItems |
+--------+-----------+---------------+
|    101 |       560 |           780 |
|    102 |       660 |           890 |
|    103 |       450 |            50 |
+--------+-----------+---------------+
3 rows in set (0.00 sec)
Now you have 3 records in the table. You can insert the next record conditionally with the help of the dual table. The query below inserts the record only if a row with UserId=104 and NumberOfItems=3500 is not already present in the table. The conditional insert query is as follows −
mysql> insert into ConditionalInsertDemo(UserId,TotalUser,NumberOfItems)
-> select 104,900,3500 from dual
-> WHERE NOT EXISTS (SELECT * FROM ConditionalInsertDemo
-> where UserId=104 and NumberOfItems=3500);
Query OK, 1 row affected (0.18 sec)
Records: 1 Duplicates: 0 Warnings: 0
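Because the NOT EXISTS guard now finds a matching row, running the exact same statement a second time would insert nothing; you would expect output along these lines (timing illustrative) −
mysql> insert into ConditionalInsertDemo(UserId,TotalUser,NumberOfItems)
   -> select 104,900,3500 from dual
   -> WHERE NOT EXISTS (SELECT * FROM ConditionalInsertDemo
   -> where UserId=104 and NumberOfItems=3500);
Query OK, 0 rows affected (0.00 sec)
Records: 0 Duplicates: 0 Warnings: 0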
Now check the table to verify that the record was inserted. The query to display all records is as follows −
mysql> select * from ConditionalInsertDemo;
+--------+-----------+---------------+
| UserId | TotalUser | NumberOfItems |
+--------+-----------+---------------+
|    101 |       560 |           780 |
|    102 |       660 |           890 |
|    103 |       450 |            50 |
|    104 |       900 |          3500 |
+--------+-----------+---------------+
4 rows in set (0.00 sec)
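An alternative worth knowing: if the columns that define a duplicate carried a UNIQUE index, the same guard could be expressed without a subquery using INSERT IGNORE. A sketch, assuming such an index is added first (it is not part of the table created above) −
mysql> alter table ConditionalInsertDemo add unique key(UserId,NumberOfItems);
mysql> insert ignore into ConditionalInsertDemo(UserId,TotalUser,NumberOfItems)
   -> values(104,900,3500);
With the index in place, a duplicate insert is silently skipped and reported as 0 rows affected.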
CSS - Pseudo Classes

CSS pseudo-classes are used to add special effects to some selectors. You do not need JavaScript or any other script to achieve these effects. The basic syntax of a pseudo-class is as follows −
selector:pseudo-class {property: value}
CSS classes can also be used with pseudo-classes −
selector.class:pseudo-class {property: value}
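For example, a rule such as the following (the class name red is illustrative) applies only to visited links that also carry class = "red" −
a.red:visited {color: #FF0000}
<a class = "red" href = "">Click Me</a>
An unvisited link, or a visited link without that class, is unaffected by this rule.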
The most commonly used pseudo-classes are as follows −
:link
Use this class to add special style to an unvisited link.
:visited
Use this class to add special style to a visited link.
:hover
Use this class to add special style to an element when you mouse over it.
:active
Use this class to add special style to an active element.
:focus
Use this class to add special style to an element while the element has focus.
:first-child
Use this class to add special style to an element that is the first child of some other element.
:lang
Use this class to specify a language to use in a specified element.
While defining pseudo-classes in a <style>...</style> block, the following points should be noted −
a:hover MUST come after a:link and a:visited in the CSS definition in order to be effective.
a:active MUST come after a:hover in the CSS definition in order to be effective.
Pseudo-class names are not case-sensitive.
Pseudo-classes are different from CSS classes, but they can be combined.
The following example demonstrates how to use the :link class to set the link color. Possible values could be any color name in any valid format.
<html>
<head>
<style type = "text/css">
a:link {color:#000000}
</style>
</head>
<body>
<a href = "">Black Link</a>
</body>
</html>
It will produce the following black link −
The following example demonstrates how to use the :visited class to set the color of visited links. Possible values could be any color name in any valid format.
<html>
<head>
<style type = "text/css">
a:visited {color: #006600}
</style>
</head>
<body>
<a href = "">Click this link</a>
</body>
</html>
This will produce the following link. Once you click it, its color changes to green.
The following example demonstrates how to use the :hover class to change the color of links when we bring a mouse pointer over that link. Possible values could be any color name in any valid format.
<html>
<head>
<style type = "text/css">
a:hover {color: #FFCC00}
</style>
</head>
<body>
<a href = "">Bring Mouse Here</a>
</body>
</html>
It will produce the following link. Bring your mouse over this link and you will see it change color to yellow.
The following example demonstrates how to use the :active class to change the color of active links. Possible values could be any color name in any valid format.
<html>
<head>
<style type = "text/css">
a:active {color: #FF00CC}
</style>
</head>
<body>
<a href = "">Click This Link</a>
</body>
</html>
It will produce the following link. When a user clicks it, the color changes to pink.
The following example demonstrates how to use the :focus class to change the color of focused links. Possible values could be any color name in any valid format.
<html>
<head>
<style type = "text/css">
a:focus {color: #0000FF}
</style>
</head>
<body>
<a href = "">Click this Link</a>
</body>
</html>
It will produce the following link. When this link gets focus, its color changes to blue. The color changes back when it loses focus.
The :first-child pseudo-class matches an element that is the first child of another element and lets you add special style to that element.
To make :first-child work in older versions of Internet Explorer, a <!DOCTYPE> must be declared at the top of the document.
For example, to indent the first paragraph of all <div> elements, you could use this definition −
<html>
<head>
<style type = "text/css">
div > p:first-child {
text-indent: 25px;
}
</style>
</head>
<body>
<div>
<p>First paragraph in div. This paragraph will be indented</p>
<p>Second paragraph in div. This paragraph will not be indented</p>
</div>
<p>But it will not match the paragraph in this HTML:</p>
<div>
<h3>Heading</h3>
<p>The first paragraph inside the div. This paragraph will not be affected.</p>
</div>
</body>
</html>
It will produce the following result −
First paragraph in div. This paragraph will be indented
Second paragraph in div. This paragraph will not be indented
But it will not match the paragraph in this HTML:
The first paragraph inside the div.
This paragraph will not be affected.
The language pseudo-class :lang allows constructing selectors based on the language setting of specific tags.
This class is useful in documents that must address multiple languages with different conventions for certain language constructs. For example, French typically uses angle quotes (« and », often approximated as << and >>) for quoting, while English uses quotation marks.
In a document that needs to address this difference, you can use the :lang pseudo-class to change the quote marks appropriately. The following code changes the quotes of the <q> tag appropriately for the language being used −
<html>
<head>
<style type = "text/css">
/* Two levels of quotes for two languages*/
:lang(en) { quotes: '"' '"' "'" "'"; }
:lang(fr) { quotes: "<<" ">>" "<" ">"; }
</style>
</head>
<body>
<p>...<q lang = "fr">A quote in a paragraph</q>...</p>
</body>
</html>
The :lang selectors will apply to all the elements in the document. However, not all elements make use of the quotes property, so the effect will be transparent for most elements.
It will produce the following result −
...A quote in a paragraph...
Delete Middle of Linked List | Practice | GeeksforGeeks

Given a singly linked list, delete the middle node of the linked list. For example, if the given linked list is 1->2->3->4->5, then the linked list should be modified to 1->2->4->5.
If there is an even number of nodes, there are two middle nodes, and we need to delete the second middle element. For example, if the given linked list is 1->2->3->4->5->6, then it should be modified to 1->2->3->5->6.
If the input linked list is NULL or has a single node, then the function should return NULL.
Example 1:
Input:
LinkedList: 1->2->3->4->5
Output: 1 2 4 5
Example 2:
Input:
LinkedList: 2->4->6->7->5->1
Output: 2 4 6 5 1
Your Task:
The task is to complete the function deleteMid() which should delete the middle element from the linked list and return the head of the modified linked list. If the linked list is empty then it should return NULL.
Expected Time Complexity: O(N).
Expected Auxiliary Space: O(1).
Constraints:
1 <= N <= 1000
1 <= value <= 1000
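For reference, here is a minimal C++ sketch of the standard slow/fast-pointer approach, one way to meet the O(N) time and O(1) space expectation; the Node layout is an assumption mirroring what the judge usually provides. Several of the community solutions below follow the same idea.
// Minimal sketch: delete the middle node using slow/fast pointers.
// The Node layout here is an assumption, not the judge's exact code.
struct Node {
    int data;
    Node* next;
};

Node* deleteMid(Node* head) {
    if (head == NULL || head->next == NULL)
        return NULL;                 // empty or single-node list
    Node *slow = head, *fast = head, *prev = NULL;
    while (fast != NULL && fast->next != NULL) {
        prev = slow;                 // trails one node behind slow
        slow = slow->next;           // moves one step
        fast = fast->next->next;     // moves two steps
    }
    // For an even-length list, slow lands on the second middle node,
    // which is exactly the one the statement asks us to delete.
    prev->next = slow->next;
    delete slow;
    return head;
}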
prabhakarsati29426, 1 week ago

def deleteMid(head):
    '''
    head: head of given linkedList
    return: head of resultant llist
    '''
    #code here
    slow, fast = head, head
    pre = None
    while fast and fast.next:
        pre = slow
        slow = slow.next
        fast = fast.next.next
    pre.next = pre.next.next
    return head
csedeepak, 2 weeks ago

Node* deleteMid(Node* head) {
    // Your Code Here
    int cnt = 0;
    Node* curr = head;
    if (head == NULL || head->next == NULL) {
        free(head);
        return NULL;
    }
    while (curr) {
        cnt++;
        curr = curr->next;
    }
    int pos = cnt / 2 + 1;
    Node* temp = head;
    Node* back;
    for (int i = 1; i <= pos - 2 && temp != NULL; i++) {
        temp = temp->next;
    }
    back = temp->next;
    temp->next = temp->next->next;
    free(back);
    return head;
}
harshscode, 2 weeks ago

Node *slow = head;
Node *fast = head;
Node *prev = NULL;
while (fast and fast->next) {
    prev = slow;
    slow = slow->next;
    fast = fast->next->next;
}
prev->next = prev->next->next;
return head;
ns284719, 2 weeks ago

Node* deleteMid(Node* head) {
    Node* prev = NULL;
    Node* slow = head;
    Node* fast = head;
    while (fast && fast->next != NULL) {
        fast = fast->next->next;
        prev = slow;
        slow = slow->next;
    }
    prev->next = prev->next->next;
    return head;
}
hharshit8118, 3 weeks ago

Node* deleteMid(Node* head) {
    if (!head || !head->next) {
        return NULL;
    }
    Node *slow = head, *ptr = NULL, *fast = head;
    while (fast != NULL && fast->next != NULL) {
        ptr = slow;
        slow = slow->next;
        fast = fast->next->next;
    }
    Node *temp = ptr->next;
    ptr->next = temp->next;
    delete(temp);
    return head;
}
goelyash, 3 weeks ago
Node* deleteMid(Node* head)
{
if(!head)return NULL;
Node* slow=head,*fast=head,*prev=head;
while(fast && fast->next){
slow= slow->next;
fast=fast->next->next;
}
while(prev->next!=slow)prev=prev->next;
prev->next=prev->next->next;
slow->next=NULL;
return head;
}
user_8m8m, 3 weeks ago

Node deleteMid(Node head) {
    // This is method only submission.
    // You only need to complete the method.
    if (head.next.next == null) {
        head.next = null;
        return head;
    }
    Node slow = head;
    Node fast = head;
    while (fast.next != null) {
        fast = fast.next;
        if (fast.next != null) {
            fast = fast.next;
        }
        slow = slow.next;
    }
    slow.data = slow.next.data;
    slow.next = slow.next.next;
    return head;
}
sangrambachu, 4 weeks ago
class Solution {
Node deleteMid(Node head) {
// This is method only submission.
// You only need to complete the method.
// Find mid
int index = 0;
Node fast = head;
Node slow = head;
while(fast != null && fast.next != null) {
index++;
slow = slow.next;
fast = fast.next.next;
}
Node prev = head;
for(int i=0; i<index-1; i++) {
prev = prev.next;
}
prev.next = prev.next.next;
return head;
}
}
sanusonkar1, 1 month ago

Java solution, time complexity O(n):
Node deleteMid(Node head) {
Node prev=null;
Node fast=head;
Node slow=head;
//first try to find 1 node before mid
//ex-2 4 5 6 7 then we get in prev=>4 value node
while(fast!=null && fast.next!=null){
fast=fast.next.next;
prev=slow;
slow=slow.next;
}
prev.next=prev.next.next;
return head;
}
ajinkyahimanshu007, 1 month ago

Using normal basics:

1.

def deleteMid(head):
    '''
    head: head of given linkedList
    return: head of resultant llist
    '''
    n = length(head)
    return delNode(head, n // 2)

def length(head):
    count = 0
    curr = head
    while curr:
        count += 1
        curr = curr.next
    return count

def delNode(head, k):
    temp = 0
    curr = head
    while temp < k - 1:
        curr = curr.next
        temp += 1
    curr.next = curr.next.next
    return head

2.

if not head:
    return None
dummy = slow = Node(data=None)
dummy.next = head
fast = head
while fast and fast.next:
    slow = slow.next
    fast = fast.next.next
slow.next = slow.next.next
return dummy.next

The 1st one is faster than the 2nd one.
A Utopian Simplex Protocol

The simplex protocol is a data link layer protocol for the transmission of frames over a computer network. It is a hypothetical protocol designed for unidirectional data transmission over an ideal channel, i.e. a channel through which transmission can never go wrong.
It is assumed that both the sender and the receiver are always ready for data processing and that both of them have infinite buffers. The sender simply sends all its data onto the channel as soon as it is available in its buffer. The receiver is assumed to process all incoming data instantly. The protocol handles neither flow control nor error control. Since it is totally unrealistic, it is often called the utopian simplex protocol.
The significance of this protocol lies in the fact that it shows the basic structure on which usable protocols are built.
Sender Site: The data link layer at the sender site waits for the network layer to send a data packet. On receiving the packet, it immediately processes it and sends it to the physical layer for transmission.
Receiver Site: The data link layer at the receiver site waits for a frame to be available. When one is available, it immediately processes it and sends it to the network layer.
Sender site algorithm −

begin
while (true) //check repeatedly
do
Wait_For_Event(); //wait for availability of packet
if ( Event(Frame_Available)) then
Get_Data_From_Network_Layer();
Make_Frame();
Send_Frame_To_Physical_Layer();
end if
end while
end
Receiver site algorithm −

begin
while (true) //check repeatedly
do
Wait_For_Event(); //wait for arrival of frame
if ( Event(Frame_Arrival)) then
Receive_Frame_From_Physical_Layer();
Extract_Data();
Deliver_Data_To_Network_Layer();
end if
end while
end
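To make the pseudocode concrete, the following toy C++ sketch simulates the protocol with a std::queue standing in for the ideal channel; all names are illustrative and do not belong to any real networking API.
// Toy simulation of the utopian simplex protocol over an ideal
// in-memory channel that never loses or corrupts frames.
#include <iostream>
#include <queue>
#include <string>
#include <vector>

int main() {
    std::queue<std::string> channel;                 // the ideal channel
    std::vector<std::string> packets = {"p1", "p2", "p3"};

    // Sender site: frame each packet and transmit it immediately.
    for (const std::string& packet : packets) {
        std::string frame = "[" + packet + "]";      // Make_Frame()
        channel.push(frame);                         // Send_Frame_To_Physical_Layer()
    }

    // Receiver site: process each arriving frame instantly.
    while (!channel.empty()) {
        std::string frame = channel.front();         // Receive_Frame_From_Physical_Layer()
        channel.pop();
        std::string data = frame.substr(1, frame.size() - 2);  // Extract_Data()
        std::cout << "delivered: " << data << '\n';  // Deliver_Data_To_Network_Layer()
    }
    return 0;
}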
The following flow diagram depicts communication via the simplex protocol.
How to Perform Arithmetic Operation using Switch Case in PHP through HTML form ? - GeeksforGeeks

05 Jun, 2021

We are going to perform the basic arithmetic operations (addition, subtraction, multiplication, and division) using PHP. We use an HTML form to take the input values and choose an option, and a switch case to perform the particular operation.
Arithmetic operations are used to perform computations like addition, subtraction, etc. on values. To perform an arithmetic operation on data we need at least two values.
Addition: It performs the sum of given numbers.
Subtraction: It performs the difference of given numbers.
Multiplication: It performs the multiplication of given numbers.
Division: It performs the division of given numbers.
Example:
Addition = val1 + val2 + ... + valn
Example: add = 4 + 4 = 8
Subtraction = val1 - val2 - ... - valn
Example: sub = 4 - 4 = 0
Multiplication = val1 * val2 * ... * valn
Example: mul = 4 * 4 = 16
Division = val1 / val2
Example: div = 4 / 4 = 1
Program 1:
PHP
<?php
$x = 15;
$y = 30;
$add = $x + $y;
$sub = $x - $y;
$mul = $x * $y;
$div = $y / $x;
echo "Sum: " . $add . "\n";
echo "Diff: " . $sub . "\n";
echo "Mul: " . $mul . "\n";
echo "Div: " . $div;
?>
Output:
Sum: 45
Diff: -15
Mul: 450
Div: 2
Using Switch Case: The switch statement is used to perform different actions based on different conditions.
Syntax:
switch (n) {
case label1:
code to be executed if n=label1;
break;
case label2:
code to be executed if n=label2;
break;
case label3:
code to be executed if n=label3;
break;
. . .
case labeln:
code to be executed if n=labeln;
break;
default:
code to be executed if n is different from all labels;
}
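One detail worth knowing before wiring switch to form input: values arriving via $_POST are strings, and PHP's switch compares with loose equality (==), so the string "1" matches case 1. A minimal sketch (the variable name and sample value are illustrative) −
<?php
// switch compares loosely, so a numeric string matches an integer case.
$option = "2";      // form values, e.g. $_POST['option'], always arrive as strings

switch ($option) {
    case 1:
        echo "one";
        break;
    case 2:
        echo "two"; // this branch runs, because "2" == 2 is true
        break;
    default:
        echo "other";
}
?>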
Execution Steps:
Start XAMPP Server
Open Notepad, type the below code, and save the file in the path shown in the image.
Program 2:
PHP
<!DOCTYPE html>
<html>
<head>
<title>GFG</title>
</head>
<body><center>
<h1>
ARITHMETIC OPERATIONS DEMO USING
SWITCH CASE IN PHP
</h1>
<h3>Option-1 = Addition</h3>
<h3>Option-2 = Subtraction</h3>
<h3>Option-3 = Multiplication</h3>
<h3>Option-4 = Division</h3>
<form method="post">
<table border="0">
<tr>
<!-- Taking value 1 in a text box -->
<td> <input type="text" name="num1"
value="" placeholder="Enter value 1"/>
</td>
</tr>
<tr>
<!-- Taking value 2 in a text box -->
<td> <input type="text" name="num2" value=""
placeholder="Enter value 2"/>
</td>
</tr>
<tr>
<!-- Taking the option in a text box -->
<td> <input type="text" name="option" value=""
placeholder="Enter option 1-4 only"/>
</td>
</tr>
<tr>
<td> <input type="submit" name="submit"
value="Submit"/>
</td>
</tr>
</table>
</form>
</center>
<?php
// Checking submit condition
if(isset($_POST['submit'])) {
// Taking first number from the
// form data to variable 'a'
$a = $_POST['num1'];
// Taking second number from the
// form data to a variable 'b'
$b = $_POST['num2'];
// Taking option from the form
// data to a variable 'ch'
$ch = $_POST['option'];
switch($ch) {
case 1:
// Execute addition operation
// when option 1 is given
$r = $a + $b;
echo " Addition of two numbers = " . $r ;
break;
case 2:
// Executing subtraction operation
// when option 2 is given
$r = $a - $b;
echo " Subtraction of two numbers= " . $r ;
break;
case 3:
// Executing multiplication operation
// when option 3 is given
$r = $a * $b;
echo " Multiplication of two numbers = " . $r ;
break;
case 4:
// Executing division operation
// when option 4 is given
$r = $a / $b;
echo " Division of two numbers = " . $r ;
break;
default:
// When 1 to 4 option is not given
// then this condition is executed
echo ("invalid option\n");
}
return 0;
}
?>
</body>
</html>
Output: Open Web Browser and type localhost/gfg/code.php
Addition
Subtraction
Multiplication
Division
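One caveat about Program 2: case 4 computes $a / $b without checking the divisor, so submitting 0 as the second value raises a DivisionByZeroError in PHP 8 (a warning in older PHP versions). A small guarded sketch, reusing the variable names from the example above with sample values standing in for the form input −
<?php
// A guarded version of the division branch from Program 2.
$a = 10; $b = 0; $ch = 4;   // sample values standing in for $_POST data

switch ($ch) {
    case 4:
        // Guard against a zero divisor before dividing
        if ($b == 0) {
            echo " Division by zero is not allowed";
            break;
        }
        $r = $a / $b;
        echo " Division of two numbers = " . $r;
        break;
}
?>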
Longest Sub-Array with Sum K | Practice | GeeksforGeeks

Given an array containing N integers and an integer K, your task is to find the length of the longest sub-array with the sum of its elements equal to the given value K.
Example 1:
Input :
A[] = {10, 5, 2, 7, 1, 9}
K = 15
Output : 4
Explanation:
The sub-array is {5, 2, 7, 1}.
Example 2:
Input :
A[] = {-1, 2, 3}
K = 6
Output : 0
Your Task:
This is a function problem. The input is already taken care of by the driver code. You only need to complete the function lenOfLongSubarr() that takes an array (A), sizeOfArray (n), and sum (K) and returns the required length of the longest sub-array. The driver code takes care of the printing.
Expected Time Complexity: O(N).
Expected Auxiliary Space: O(N).
Constraints:
1 <= N <= 10^5
-10^5 <= A[i], K <= 10^5
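For reference, here is a minimal annotated C++ sketch of the prefix-sum plus hash-map idea that meets the expected O(N) time and O(N) auxiliary space; the signature follows the one used in the community solutions below.
// Sketch: longest subarray with sum K via prefix sums and a hash map.
#include <algorithm>
#include <unordered_map>
using namespace std;

int lenOfLongSubarr(int A[], int N, int K) {
    unordered_map<int, int> firstIndex;   // prefix sum -> earliest index where it occurs
    int prefix = 0, best = 0;
    for (int i = 0; i < N; i++) {
        prefix += A[i];
        if (prefix == K)
            best = i + 1;                 // the whole prefix [0..i] sums to K
        // Store only the first occurrence of each prefix sum, so any
        // later match measures the longest possible subarray.
        if (firstIndex.find(prefix) == firstIndex.end())
            firstIndex[prefix] = i;
        // If an earlier prefix equals prefix - K, the elements after it sum to K.
        auto it = firstIndex.find(prefix - K);
        if (it != firstIndex.end())
            best = max(best, i - it->second);
    }
    return best;
}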
harshpandeyalfa2, 3 weeks ago

C++ (0.36 sec)

int lenOfLongSubarr(int A[], int N, int k) {
    int ps = 0;
    int res = 0;
    unordered_map<int, int> um;
    for (int i = 0; i < N; i++) {
        ps += A[i];
        if (ps == k)
            res = i + 1;
        if (um.find(ps) == um.end())
            um.insert({ps, i});
        if (um.find(ps - k) != um.end())
            res = max(res, i - um[ps - k]);
    }
    return res;
}
hydracody45, 3 weeks ago

I thought there are three solutions:
- brute force O(n^3)
- cumulative sum O(n^2)
- prefix sum O(n log n)
There is one more solution in O(n) with two pointers, but it does not work with negative numbers.
emmanueluluabuike, 1 month ago
Can someone explain what is wrong with my logic?
public static int lenOfLongSubarr (int arr[], int sizeOfArray, int K) {
//Complete the function
int left = 0, right = 0, currentRunningSum = 0, max = 0;
while(right < sizeOfArray){
currentRunningSum += arr[right];
if(currentRunningSum == K)
max = Math.max(max, (right + 1) - left);
if(currentRunningSum >= K){
currentRunningSum -= arr[left];
left++;
}
right++;
}
return max;
}
ravi1010prakash, 2 months ago

class Solution {
  public:
    int lenOfLongSubarr(int A[], int N, int K) {
        // Complete the function
        unordered_map<int, int> mp;
        int presum = 0;
        int res = 0;
        for (int i = 0; i < N; i++) {
            presum += A[i];
            if (presum == K)
                res = i + 1;
            if (mp.find(presum) == mp.end())
                mp.insert({presum, i});
            if (mp.find(presum - K) != mp.end())
                res = max(res, i - mp[presum - K]);
        }
        return res;
    }
};
nirbhayg342, 2 months ago

Most efficient solution (0.2/1.7):

class Solution {
  public:
    int lenOfLongSubarr(int A[], int N, int K) {
        // Complete the function
        int res = 0;
        unordered_map<int, int> m;
        int pre_sum = 0;
        for (int i = 0; i < N; i++) {
            pre_sum += A[i];
            if (pre_sum == K) { res = i + 1; }
            if (m.find(pre_sum) == m.end()) {
                m.insert({pre_sum, i});
            }
            if (m.find(pre_sum - K) != m.end()) {
                res = max(res, i - m[pre_sum - K]);
            }
        }
        return res;
    }
};
gagandeepsingh297, 2 months ago

Test cases passed: 193/193, points: 4/4, time: 0.2/1.7, accuracy: 100%

unordered_map<int, int> m;
int p = 0;
m.insert({0, -1});
int len = 0;
for (int i = 0; i < n; i++) {
    p += A[i];
    if (m.find(p - K) != m.end()) {
        len = max(len, i - m[p - K]);
    }
    if (m.find(p) == m.end()) {
        m[p] = i;
    }
}
return len;
sg621428, 2 months ago
class Solution{
// Function for finding maximum and value pair
public static int lenOfLongSubarr (int A[], int N, int K) {
int cursum=0;
int start=0;
int end=-1;
int max=0;
Map<Integer,Integer>map=new HashMap<>();
for(int i=0;i<A.length;i++)
{
cursum+=A[i];
if(cursum-K==0)
{
start=0;
end=i;
if((end-start+1)>max)//comparing subarray size
max=end-start+1;
}
if(map.containsKey(cursum-K))
{
start=map.get(cursum-K)+1;
end=i;
if((end-start+1)>max)
max=end-start+1;
}
if(map.containsKey(cursum))//if we get two same keys with different values
map.getOrDefault(cursum,0);
else
map.put(cursum,i);
}
return max;
}
}
shreyansh0002, 2 months ago

int lenOfLongSubarr(int A[], int N, int K) {
    unordered_map<int, int> m;
    int pre_sum = 0, res = 0;
    for (int i = 0; i < N; i++) {
        pre_sum += A[i];
        if (pre_sum == K)
            res = i + 1;
        if (m.find(pre_sum) == m.end())
            m.insert({pre_sum, i});
        if (m.find(pre_sum - K) != m.end())
            res = max(res, i - m[pre_sum - K]);
    }
    return res;
}
choudharysdimple12345, 3 months ago

class Solution {
  public:
    int lenOfLongSubarr(int A[], int N, int K) {
        int sum = 0;
        unordered_map<int, int> vec;
        int i = 0, j = 0, mx = 0;
        for (int i = 0; i < N; i++) {
            sum += A[i];
            if (sum == K) mx = max(mx, i + 1);
            if (vec.find(sum) == vec.end()) vec[sum] = i;
            if (vec.find(sum - K) != vec.end()) mx = max(mx, i - vec[sum - K]);
        }
        return mx;
    }
};
aloksinghbais02, 3 months ago

C++ solution having time complexity O(N) and space complexity O(N) is as follows:
Note: there is a similar question in which we find the total number of subarrays with a given sum.
Execution time: 0.4/1.7 sec

int lenOfLongSubarr(int A[], int N, int K) {
    unordered_map<int, int> firstLastInd;
    unordered_map<int, bool> mp;
    int len = 0;
    int sum = 0;
    mp[0] = true, firstLastInd[0] = 0;
    for (int i = 0; i < N; i++) {
        sum += A[i];
        if (mp[sum - K]) {
            len = max(len, i - firstLastInd[sum - K] + 1);
        }
        mp[sum] = true;
        if (firstLastInd[sum] == 0 && sum) firstLastInd[sum] = i + 1;
    }
    return len;
}
"text": "nirbhayg3422 months ago"
},
{
"code": null,
"e": 2888,
"s": 2854,
"text": "Most Efficient solution (0.2/1.7)"
},
{
"code": null,
"e": 3373,
"s": 2890,
"text": "class Solution{ public: int lenOfLongSubarr(int A[], int N, int K) { // Complete the function int res=0; unordered_map<int,int>m; int pre_sum=0; for(int i=0;i<N;i++){ pre_sum+=A[i]; if(pre_sum==K){res=i+1;} if(m.find(pre_sum)==m.end()){ m.insert({pre_sum,i}); } if(m.find(pre_sum-K)!=m.end()){ res=max(res,i-m[pre_sum-K]); } } return res; } "
},
{
"code": null,
"e": 3375,
"s": 3373,
"text": "0"
},
{
"code": null,
"e": 3405,
"s": 3375,
"text": "gagandeepsingh2972 months ago"
},
{
"code": null,
"e": 3424,
"s": 3405,
"text": "Test Cases Passed:"
},
{
"code": null,
"e": 3434,
"s": 3424,
"text": "193 / 193"
},
{
"code": null,
"e": 3455,
"s": 3434,
"text": "Total Points Scored:"
},
{
"code": null,
"e": 3459,
"s": 3455,
"text": "4/4"
},
{
"code": null,
"e": 3477,
"s": 3459,
"text": "Total Time Taken:"
},
{
"code": null,
"e": 3485,
"s": 3477,
"text": "0.2/1.7"
},
{
"code": null,
"e": 3500,
"s": 3485,
"text": "Your Accuracy:"
},
{
"code": null,
"e": 3505,
"s": 3500,
"text": "100%"
},
{
"code": null,
"e": 3837,
"s": 3509,
"text": " unordered_map<int, int> m; int p = 0; m.insert({0, -1}); int len = 0; for(int i=0; i<n; i++){ p += A[i]; if(m.find(p-K) != m.end()){ len = max(len, i-m[p-K]); } if(m.find(p) == m.end()){ m[p] = i; } } return len;"
},
{
"code": null,
"e": 3842,
"s": 3839,
"text": "+1"
},
{
"code": null,
"e": 3863,
"s": 3842,
"text": "sg6214282 months ago"
},
{
"code": null,
"e": 4819,
"s": 3863,
"text": "class Solution{\n // Function for finding maximum and value pair\n public static int lenOfLongSubarr (int A[], int N, int K) {\n int cursum=0;\n int start=0;\n int end=-1;\n int max=0;\n Map<Integer,Integer>map=new HashMap<>();\n for(int i=0;i<A.length;i++)\n {\n cursum+=A[i];\n if(cursum-K==0)\n {\n start=0;\n end=i;\n if((end-start+1)>max)//comparing subarray size \n max=end-start+1;\n }\n if(map.containsKey(cursum-K))\n {\n start=map.get(cursum-K)+1;\n end=i;\n if((end-start+1)>max)\n max=end-start+1;\n }\n if(map.containsKey(cursum))//if we get two same keys with different values\n map.getOrDefault(cursum,0);\n else\n map.put(cursum,i);\n }\n return max;\n }\n}\n"
},
{
"code": null,
"e": 4821,
"s": 4819,
"text": "0"
},
{
"code": null,
"e": 4847,
"s": 4821,
"text": "shreyansh00022 months ago"
},
{
"code": null,
"e": 5274,
"s": 4847,
"text": "int lenOfLongSubarr(int A[], int N, int K) { unordered_map<int,int>m; int pre_sum = 0, res = 0; for(int i = 0; i<N; i++){ pre_sum += A[i]; if(pre_sum == K) res = i+1; if(m.find(pre_sum) == m.end()) m.insert({pre_sum, i}); if(m.find(pre_sum - K) != m.end()) res = max(res, i-m[pre_sum-K]); } return res; } "
},
{
"code": null,
"e": 5276,
"s": 5274,
"text": "0"
},
{
"code": null,
"e": 5310,
"s": 5276,
"text": "choudharysdimple123453 months ago"
},
{
"code": null,
"e": 5682,
"s": 5310,
"text": "class Solution{ public: int lenOfLongSubarr(int A[], int N, int K) { int sum=0; unordered_map<int,int> vec; int i=0,j=0,mx=0; for(int i=0;i<N;i++){ sum += A[i]; if(sum == K) mx = max(mx,i+1); if(vec.find(sum) == vec.end()) vec[sum] = i; if(vec.find(sum - K) != vec.end()) mx = max(mx,i-vec[sum - K]); } return mx; }"
},
{
"code": null,
"e": 5685,
"s": 5682,
"text": "};"
},
{
"code": null,
"e": 5687,
"s": 5685,
"text": "0"
},
{
"code": null,
"e": 5715,
"s": 5687,
"text": "aloksinghbais023 months ago"
},
{
"code": null,
"e": 5805,
"s": 5715,
"text": "C++ solution having time complexity as O(N) and space complexity as O(N) is as follows :-"
},
{
"code": null,
"e": 5889,
"s": 5805,
"text": "Note :- Similar question in which we find total number of subarrays with given sum."
},
{
"code": null,
"e": 5922,
"s": 5891,
"text": "Execution Time :- 0.4 /1.7 sec"
},
{
"code": null,
"e": 6452,
"s": 5924,
"text": "int lenOfLongSubarr(int A[], int N, int K){ unordered_map<int,int> firstLastInd; unordered_map<int,bool> mp; int len = 0; int sum = 0; mp[0] = true, firstLastInd[0] = 0; for(int i=0; i<N; i++){ sum += A[i]; if(mp[sum-K]){ len = max(len,i - firstLastInd[sum-K] + 1); } mp[sum] = true; if(firstLastInd[sum] == 0 && sum) firstLastInd[sum] = i+1; } return (len); } "
},
{
"code": null,
"e": 6598,
"s": 6452,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 6634,
"s": 6598,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 6644,
"s": 6634,
"text": "\nProblem\n"
},
{
"code": null,
"e": 6654,
"s": 6644,
"text": "\nContest\n"
},
{
"code": null,
"e": 6717,
"s": 6654,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 6865,
"s": 6717,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 7073,
"s": 6865,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 7179,
"s": 7073,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
]
|
Remove names or dimnames from an Object in R Programming - unname() Function - GeeksforGeeks | 16 Jun, 2020
The unname() function in R Language is used to remove the names or dimnames attribute from an object.
Syntax: unname(x)
Parameters:x: Object
Example 1:
# R Program to remove names from an object
# Creating a matrix
A = matrix(c(1:9), 3, 3)

# Naming rows
rownames(A) = c("a", "b", "c")

# Naming columns
colnames(A) = c("c", "d", "e")

print(A)

# Removing names using unname() function
unname(A)
Output:
c d e
a 1 4 7
b 2 5 8
c 3 6 9
[, 1] [, 2] [, 3]
[1, ] 1 4 7
[2, ] 2 5 8
[3, ] 3 6 9
Example 2:
# R Program to remove names from an object

# Calling pre-defined data set
BOD

# Removing names using unname() function
unname(BOD)
Output:
Time demand
1 1 8.3
2 2 10.3
3 3 19.0
4 4 16.0
5 5 15.6
6 7 19.8
1 1 8.3
2 2 10.3
3 3 19.0
4 4 16.0
5 5 15.6
6 7 19.8
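unname() works the same way on a named vector. A small illustrative sketch (values chosen arbitrarily):
# Creating a named vector
v = c(a = 10, b = 20, c = 30)
print(v)

# Removing names using unname() function
unname(v)
Here print(v) shows the values under their names a, b and c, while unname(v) returns the bare vector 10 20 30.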
R Object-Function
R Language
| [
{
"code": null,
"e": 24565,
"s": 24537,
"text": "\n16 Jun, 2020"
},
{
"code": null,
"e": 24653,
"s": 24565,
"text": "unname() function in R Language is used to remove the names or dimnames from an Object."
},
{
"code": null,
"e": 24671,
"s": 24653,
"text": "Syntax: unname(x)"
},
{
"code": null,
"e": 24692,
"s": 24671,
"text": "Parameters:x: Object"
},
{
"code": null,
"e": 24703,
"s": 24692,
"text": "Example 1:"
},
{
"code": "# R Program to remove names from an object# Creating a matrixA = matrix(c(1:9), 3, 3) # Naming rows rownames(A) = c(\"a\", \"b\", \"c\") # Naming columns colnames(A) = c(\"c\", \"d\", \"e\") print(A) # Removing names using unname() functionunname(A)",
"e": 24958,
"s": 24703,
"text": null
},
{
"code": null,
"e": 24966,
"s": 24958,
"text": "Output:"
},
{
"code": null,
"e": 25085,
"s": 24966,
"text": " c d e\na 1 4 7\nb 2 5 8\nc 3 6 9\n [, 1] [, 2] [, 3]\n[1, ] 1 4 7\n[2, ] 2 5 8\n[3, ] 3 6 9\n"
},
{
"code": null,
"e": 25096,
"s": 25085,
"text": "Example 2:"
},
{
"code": "# R Program to remove names from an object # Calling pre-defined data setBOD # Removing names using unname() functionunname(BOD)",
"e": 25227,
"s": 25096,
"text": null
},
{
"code": null,
"e": 25235,
"s": 25227,
"text": "Output:"
},
{
"code": null,
"e": 25397,
"s": 25235,
"text": " Time demand\n1 1 8.3\n2 2 10.3\n3 3 19.0\n4 4 16.0\n5 5 15.6\n6 7 19.8\n \n1 1 8.3\n2 2 10.3\n3 3 19.0\n4 4 16.0\n5 5 15.6\n6 7 19.8\n"
},
{
"code": null,
"e": 25415,
"s": 25397,
"text": "R Object-Function"
},
{
"code": null,
"e": 25426,
"s": 25415,
"text": "R Language"
},
{
"code": null,
"e": 25524,
"s": 25426,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25533,
"s": 25524,
"text": "Comments"
},
{
"code": null,
"e": 25546,
"s": 25533,
"text": "Old Comments"
},
{
"code": null,
"e": 25604,
"s": 25546,
"text": "How to Replace specific values in column in R DataFrame ?"
},
{
"code": null,
"e": 25648,
"s": 25604,
"text": "How to change Row Names of DataFrame in R ?"
},
{
"code": null,
"e": 25700,
"s": 25648,
"text": "Filter data by multiple conditions in R using Dplyr"
},
{
"code": null,
"e": 25752,
"s": 25700,
"text": "Change Color of Bars in Barchart using ggplot2 in R"
},
{
"code": null,
"e": 25784,
"s": 25752,
"text": "Loops in R (for, while, repeat)"
},
{
"code": null,
"e": 25822,
"s": 25784,
"text": "How to Change Axis Scales in R Plots?"
},
{
"code": null,
"e": 25857,
"s": 25822,
"text": "Group by function in R using Dplyr"
},
{
"code": null,
"e": 25915,
"s": 25857,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 25951,
"s": 25915,
"text": "K-Means Clustering in R Programming"
}
]
|
Java Program to adjust LocalDate to last Day of Year with TemporalAdjusters class | Let us first set a date:
LocalDate localDate = LocalDate.of(2019, Month.FEBRUARY, 11);
Now, adjust LocalDate to last Day of year:
LocalDate day = localDate.with(TemporalAdjusters.lastDayOfYear());
import java.time.LocalDate;
import java.time.Month;
import java.time.temporal.TemporalAdjusters;
public class Demo {
public static void main(String[] args) {
LocalDate localDate = LocalDate.of(2019, Month.FEBRUARY, 11);
System.out.println("Current Date = "+localDate);
System.out.println("Current Month = "+localDate.getMonth());
LocalDate day = localDate.with(TemporalAdjusters.firstDayOfMonth());
System.out.println("First day of month = "+day);
day = localDate.with(TemporalAdjusters.lastDayOfMonth());
System.out.println("Last day of month = "+day);
day = localDate.with(TemporalAdjusters.lastDayOfYear());
System.out.println("Last day of year = "+day);
day = localDate.with(TemporalAdjusters.firstDayOfNextMonth());
System.out.println("First day of next month = "+day);
}
}
Current Date = 2019-02-11
Current Month = FEBRUARY
First day of month = 2019-02-01
Last day of month = 2019-02-28
Last day of year = 2019-12-31
First day of next month = 2019-03-01 | [
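TemporalAdjusters also provides adjusters keyed to a day of the week, such as next() and nextOrSame(). A minimal sketch (the class name NextWeekdayDemo is ours; 2019-02-11 itself falls on a Monday):
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.Month;
import java.time.temporal.TemporalAdjusters;
public class NextWeekdayDemo {
   public static void main(String[] args) {
      LocalDate localDate = LocalDate.of(2019, Month.FEBRUARY, 11);
      // First Monday strictly after the given date
      System.out.println("Next Monday = " + localDate.with(TemporalAdjusters.next(DayOfWeek.MONDAY)));
      // The date itself when it already falls on a Monday
      System.out.println("Next or same Monday = " + localDate.with(TemporalAdjusters.nextOrSame(DayOfWeek.MONDAY)));
   }
}
This prints 2019-02-18 for next() and 2019-02-11 for nextOrSame().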
{
"code": null,
"e": 1087,
"s": 1062,
"text": "Let us first set a date:"
},
{
"code": null,
"e": 1149,
"s": 1087,
"text": "LocalDate localDate = LocalDate.of(2019, Month.FEBRUARY, 11);"
},
{
"code": null,
"e": 1192,
"s": 1149,
"text": "Now, adjust LocalDate to last Day of year:"
},
{
"code": null,
"e": 1259,
"s": 1192,
"text": "LocalDate day = localDate.with(TemporalAdjusters.lastDayOfYear());"
},
{
"code": null,
"e": 2110,
"s": 1259,
"text": "import java.time.LocalDate;\nimport java.time.Month;\nimport java.time.temporal.TemporalAdjusters;\npublic class Demo {\n public static void main(String[] args) {\n LocalDate localDate = LocalDate.of(2019, Month.FEBRUARY, 11);\n System.out.println(\"Current Date = \"+localDate);\n System.out.println(\"Current Month = \"+localDate.getMonth());\n LocalDate day = localDate.with(TemporalAdjusters.firstDayOfMonth());\n System.out.println(\"First day of month = \"+day);\n day = localDate.with(TemporalAdjusters.lastDayOfMonth());\n System.out.println(\"Last day of month = \"+day);\n day = localDate.with(TemporalAdjusters.lastDayOfYear());\n System.out.println(\"Last day of year = \"+day);\n day = localDate.with(TemporalAdjusters.firstDayOfNextMonth());\n System.out.println(\"First day of next month = \"+day);\n }\n}"
},
{
"code": null,
"e": 2291,
"s": 2110,
"text": "Current Date = 2019-02-11\nCurrent Month = FEBRUARY\nFirst day of month = 2019-02-01\nLast day of month = 2019-02-28\nLast day of year = 2019-12-31\nFirst day of next month = 2019-03-01"
}
]
|
Python - Read data from MongoDB - onlinetutorialspoint |
So far, we have set up everything; now it is time to read data from the MongoDB database. In the previous tutorial, we created a database and inserted documents using different methods.
Insert some data into the database before reading it; the snippet below is copied from the previous tutorial.
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
db = mclient['Newdatabase']
mcol = db["Employers"]
mlist = [
{ "_id": 1, "f_name": "Nil", "l_name": "Mil" },
{ "_id": 2, "f_name": "Andy", "l_name": "Apple" },
{ "_id": 3, "f_name": "AMad", "l_name": "Amhed" },
]
mid = mcol.insert_many(mlist)
print(mid.inserted_ids)
mclient.close()
pymongo provides two methods to read documents from a database: find_one() and find().
find_one() works like SELECT in MySQL but returns only the first matching document.
find_one() is an overloaded function: you can pass an optional query to filter the data and an optional projection to read selected columns.
If no match is found, this function returns None; if the query matches multiple documents, it returns the very first document in the collection.
Run:
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
db = mclient['Newdatabase']
mcol = db["Employers"]
mrd= mcol.find_one()
print(mrd)
mclient.close()
Output:
{'_id': ObjectId('6197f62a5312dd9e82102c6b'), 'f_name': 'Rob', 'l_name': 'Carpenter'}
Check the insert_one() section of the previous tutorial, and you’ll realize the result is correct.
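find_one() also accepts the same filter document that find() does. A minimal sketch against the collection used above (connection details repeated for completeness):
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
mcol = mclient['Newdatabase']["Employers"]

# First document whose f_name is "Andy"; find_one() returns None if nothing matches
print(mcol.find_one({"f_name": "Andy"}))
mclient.close()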
If you want to read all the data from a collection, you can use the find() function. It takes two optional parameters: the first is a query, and the second is a projection (the columns you want to read). With no parameters, find() returns the same result as SELECT * in MySQL without a WHERE condition.
In other words, find() without arguments returns all documents of the collection in the database.
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
db = mclient['Newdatabase']
mcol = db["Employers"]
for mrd in mcol.find():
    print(mrd)
mclient.close()
Output:
{'_id': ObjectId('6197f62a5312dd9e82102c6b'), 'f_name': 'Rob', 'l_name': 'Carpenter'}
{'_id': ObjectId('6197f7eb588c1566078fbff2'), 'f_name': 'Rob', 'l_name': 'Carpenter'}
{'_id': ObjectId('619800106f7cca5b81de7ec7'), 'f_name': 'Nil', 'l_name': 'Mil'}
{'_id': ObjectId('619800106f7cca5b81de7ec8'), 'f_name': 'Andy', 'l_name': 'Apple'}
{'_id': ObjectId('619800106f7cca5b81de7ec9'), 'f_name': 'AMad', 'l_name': 'Amhed'}
{'_id': 1, 'f_name': 'Nil', 'l_name': 'Mil'}
{'_id': 2, 'f_name': 'Andy', 'l_name': 'Apple'}
{'_id': 3, 'f_name': 'AMad', 'l_name': 'Amhed'}
As seen in the output, find() returns all the documents from the collection.
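Because find() returns a cursor, the result set can be capped before iterating. A small sketch using the cursor's limit() method:
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
mcol = mclient['Newdatabase']["Employers"]

# Only the first two documents of the collection
for mrd in mcol.find().limit(2):
    print(mrd)
mclient.close()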
Projections allow us to read specific columns from a collection. As discussed earlier, the find() function is an overloaded function that takes two optional parameters.
To apply a projection to the query, we pass the required columns as the second argument, for example find({}, { "_id": 0, "f_name": 1 }). In the example, we project only the f_name column from the collection. Here 0 means exclude and 1 means include.
Example:
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
db = mclient['Newdatabase']
mcol = db["Employers"]
for mrd in mcol.find({}, { "_id": 0, "f_name": 1 }):
    print(mrd)
mclient.close()
The projection above selects only the first name.
Output:
{'f_name': 'Rob'}
{'f_name': 'Rob'}
{'f_name': 'Nil'}
{'f_name': 'Andy'}
{'f_name': 'AMad'}
{'f_name': 'Nil'}
{'f_name': 'Andy'}
{'f_name': 'AMad'}
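Projections can also work by exclusion: listing only the fields to drop returns everything else. A sketch assuming the same collection:
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
mcol = mclient['Newdatabase']["Employers"]

# Exclude only _id; all remaining fields are returned
for mrd in mcol.find({}, {"_id": 0}):
    print(mrd)
mclient.close()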
A conditional query is similar to WHERE in MySQL.
Let's use a conditional query to find documents whose f_name is 'Nil'.
Run:
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
db = mclient['Newdatabase']
mcol = db["Employers"]
mquery = { "f_name": "Nil" }
nval = mcol.find(mquery)
for mrd in nval:
    print(mrd)
mclient.close()
Output:
{'_id': ObjectId('619d4553c0d6415b4736864d'), 'f_name': 'Nil', 'l_name': 'Mil'}
{'_id': 1, 'f_name': 'Nil', 'l_name': 'Mil'}
As expected, documents with f_name value 'Nil' are returned.
Use the greater-than modifier $gt to find documents where the f_name field begins with the letter N or later (alphabetically).
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
db = mclient['Newdatabase']
mcol = db["Employers"]
mquery = { "f_name": { "$gt": "N" } }
nval = mcol.find(mquery)
for mrd in nval:
    print(mrd)
mclient.close()
Output:
{'_id': ObjectId('619d4553c0d6415b4736864d'), 'f_name': 'Nil', 'l_name': 'Mil'}
{'_id': 1, 'f_name': 'Nil', 'l_name': 'Mil'}
As expected, documents with f_name 'Nil' are obtained.
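The same modifiers apply to numeric fields. For example, $lt selects values below a threshold; with the integer _id values inserted earlier, this sketch should match the documents with _id 1 and 2:
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
mcol = mclient['Newdatabase']["Employers"]

# Documents whose integer _id is less than 3
for mrd in mcol.find({"_id": {"$lt": 3}}):
    print(mrd)
mclient.close()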
We can also use a regex as a query modifier for strings. Let's find documents where the l_name contains the letter 'M'.
Run:
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
db = mclient['Newdatabase']
mcol = db["Employers"]
mquery = { "l_name": { "$regex": "M" } }
nval = mcol.find(mquery)
for mrd in nval:
    print(mrd)
mclient.close()
Output:
{'_id': ObjectId('619d4553c0d6415b4736864d'), 'f_name': 'Nil', 'l_name': 'Mil'}
{'_id': 1, 'f_name': 'Nil', 'l_name': 'Mil'}
As expected, documents whose l_name contains 'M' are obtained. Check out the MongoDB reference to see more modifiers and their usage.
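Regex matching is case-sensitive by default; MongoDB's $options flag with the value "i" makes it case-insensitive. A sketch matching l_name values that start with "m" or "M":
from pymongo import MongoClient
mclient = MongoClient('localhost', 27017)
mcol = mclient['Newdatabase']["Employers"]

for mrd in mcol.find({"l_name": {"$regex": "^m", "$options": "i"}}):
    print(mrd)
mclient.close()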
PyMongo collections
MongoDB Query
Insert data into MongoDB
Happy Learning 🙂
{
"code": null,
"e": 158,
"s": 123,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 172,
"s": 158,
"text": "Java Examples"
},
{
"code": null,
"e": 183,
"s": 172,
"text": "C Examples"
},
{
"code": null,
"e": 195,
"s": 183,
"text": "C Tutorials"
},
{
"code": null,
"e": 199,
"s": 195,
"text": "aws"
},
{
"code": null,
"e": 234,
"s": 199,
"text": "JAVAEXCEPTIONSCOLLECTIONSSWINGJDBC"
},
{
"code": null,
"e": 245,
"s": 234,
"text": "EXCEPTIONS"
},
{
"code": null,
"e": 257,
"s": 245,
"text": "COLLECTIONS"
},
{
"code": null,
"e": 263,
"s": 257,
"text": "SWING"
},
{
"code": null,
"e": 268,
"s": 263,
"text": "JDBC"
},
{
"code": null,
"e": 275,
"s": 268,
"text": "JAVA 8"
},
{
"code": null,
"e": 282,
"s": 275,
"text": "SPRING"
},
{
"code": null,
"e": 294,
"s": 282,
"text": "SPRING BOOT"
},
{
"code": null,
"e": 304,
"s": 294,
"text": "HIBERNATE"
},
{
"code": null,
"e": 311,
"s": 304,
"text": "PYTHON"
},
{
"code": null,
"e": 315,
"s": 311,
"text": "PHP"
},
{
"code": null,
"e": 322,
"s": 315,
"text": "JQUERY"
},
{
"code": null,
"e": 357,
"s": 322,
"text": "PROGRAMMINGJava ExamplesC Examples"
},
{
"code": null,
"e": 371,
"s": 357,
"text": "Java Examples"
},
{
"code": null,
"e": 382,
"s": 371,
"text": "C Examples"
},
{
"code": null,
"e": 394,
"s": 382,
"text": "C Tutorials"
},
{
"code": null,
"e": 398,
"s": 394,
"text": "aws"
},
{
"code": null,
"e": 577,
"s": 398,
"text": "So far, we have set up everything now time to read data from the MongoDB database. In the previous tutorial, we created a database and inserted documents using different methods."
},
{
"code": null,
"e": 695,
"s": 577,
"text": "Insert some data into the database before reading it, and I just copied the below snipped from the previous tutorial."
},
{
"code": null,
"e": 1064,
"s": 695,
"text": "from pymongo import MongoClient\nmclient = MongoClient('localhost', 27017) \ndb = mclient['Newdatabase']\nmcol = db[\"Employers\"]\n\nmlist = [\n { \"_id\": 1, \"f_name\": \"Nil\", \"l_name\": \"Mil\" },\n { \"_id\": 2, \"f_name\": \"Andy\", \"l_name\": \"Apple\" },\n { \"_id\": 3, \"f_name\": \"AMad\", \"l_name\": \"Amhed\" },\n ]\n\nmid = mcol.insert_many(mlist)\nprint(mid.inserted_ids)\n\nmclient.close()"
},
{
"code": null,
"e": 1139,
"s": 1064,
"text": "The pymongo provides two methods to read databases: find_one() and find()."
},
{
"code": null,
"e": 1233,
"s": 1139,
"text": "find_one() works as SELECT from MYSQL but only returns the first occurrence in the selection."
},
{
"code": null,
"e": 1361,
"s": 1233,
"text": "The find_one() is an overloaded function to pass the optional query as an argument to filter the data or read selected columns."
},
{
"code": null,
"e": 1509,
"s": 1361,
"text": "If no matches are found, this function returns blank, and if the query matches multiple data, it returns the very first document in the collection."
},
{
"code": null,
"e": 1514,
"s": 1509,
"text": "Run:"
},
{
"code": null,
"e": 1690,
"s": 1514,
"text": "from pymongo import MongoClient\nmclient = MongoClient('localhost', 27017) \ndb = mclient['Newdatabase']\nmcol = db[\"Employers\"]\n\nmrd= mcol.find_one()\nprint(mrd)\n\nmclient.close()"
},
{
"code": null,
"e": 1698,
"s": 1690,
"text": "Output:"
},
{
"code": null,
"e": 1784,
"s": 1698,
"text": "{'_id': ObjectId('6197f62a5312dd9e82102c6b'), 'f_name': 'Rob', 'l_name': 'Carpenter'}"
},
{
"code": null,
"e": 1883,
"s": 1784,
"text": "Check the insert_one() section of the previous tutorial, and you’ll realize the result is correct."
},
{
"code": null,
"e": 2204,
"s": 1883,
"text": "If you want to read all the data from a collection, you can use the find() function. It takes two optional parameters. The first is a query, and the second is projections (the columns you want to read). If the find() function has no parameters, it returns the same result as SELECT * in MySQL without anywhere condition."
},
{
"code": null,
"e": 2297,
"s": 2204,
"text": "In other words, find() without argument returns all documents of collection in the database."
},
{
"code": null,
"e": 2477,
"s": 2297,
"text": "from pymongo import MongoClient\nmclient = MongoClient('localhost', 27017)\ndb = mclient['Newdatabase']\nmcol = db[\"Employers\"]\n\nfor mrd in mcol.find():\n print(mrd)\n\nmclient.close()"
},
{
"code": null,
"e": 2485,
"s": 2477,
"text": "Output:"
},
{
"code": null,
"e": 3044,
"s": 2485,
"text": "{'_id': ObjectId('6197f62a5312dd9e82102c6b'), 'f_name': 'Rob', 'l_name': 'Carpenter'}\n{'_id': ObjectId('6197f7eb588c1566078fbff2'), 'f_name': 'Rob', 'l_name': 'Carpenter'}\n{'_id': ObjectId('619800106f7cca5b81de7ec7'), 'f_name': 'Nil', 'l_name': 'Mil'}\n{'_id': ObjectId('619800106f7cca5b81de7ec8'), 'f_name': 'Andy', 'l_name': 'Apple'}\n{'_id': ObjectId('619800106f7cca5b81de7ec9'), 'f_name': 'AMad', 'l_name': 'Amhed'}\n{'_id': 1, 'f_name': 'Nil', 'l_name': 'Mil'}\n{'_id': 2, 'f_name': 'Andy', 'l_name': 'Apple'}\n{'_id': 3, 'f_name': 'AMad', 'l_name': 'Amhed'}"
},
{
"code": null,
"e": 3121,
"s": 3044,
"text": "As seen in the output, find() returns all the documents from the collection."
},
{
"code": null,
"e": 3290,
"s": 3121,
"text": "Projections allow us to read specific columns from a collection. As discussed earlier, the find() function is an overloaded function that takes two optional parameters."
},
{
"code": null,
"e": 3558,
"s": 3290,
"text": "To apply the projection to the query, we must pass the required column list as a second parameter like({}, { \"_id\" :0, \"f_name\": 1,}) an argument. In the example, we are only projecting the f_name column from the collection. Here 0 means to ignore and 1 means select."
},
{
"code": null,
"e": 3567,
"s": 3558,
"text": "Example:"
},
{
"code": null,
"e": 3779,
"s": 3567,
"text": "from pymongo import MongoClient\nmclient = MongoClient('localhost', 27017)\ndb = mclient['Newdatabase']\nmcol = db[\"Employers\"]\n\nfor mrd in mcol.find({}, { \"_id\" :0, \"f_name\": 1,}): \n print(mrd) \nmclient.close()"
},
{
"code": null,
"e": 3834,
"s": 3779,
"text": "The above-given parameter selects only the first name."
},
{
"code": null,
"e": 3842,
"s": 3834,
"text": "Output:"
},
{
"code": null,
"e": 3991,
"s": 3842,
"text": "{'f_name': 'Rob'}\n{'f_name': 'Rob'}\n{'f_name': 'Nil'}\n{'f_name': 'Andy'}\n{'f_name': 'AMad'}\n{'f_name': 'Nil'}\n{'f_name': 'Andy'}\n{'f_name': 'AMad'}\n"
},
{
"code": null,
"e": 4043,
"s": 3991,
"text": "The conditional query is similar to where in MYSQL."
},
{
"code": null,
"e": 4111,
"s": 4043,
"text": "Let’s use conditional query to find documents whose f_name is 'Nil."
},
{
"code": null,
"e": 4116,
"s": 4111,
"text": "Run:"
},
{
"code": null,
"e": 4339,
"s": 4116,
"text": "from pymongo import MongoClient\nmclient = MongoClient('localhost', 27017)\ndb = mclient['Newdatabase']\nmcol = db[\"Employers\"]\nmquery = { \"f_name\": \"Nil\" }\nnval = mcol.find(mquery)\nfor mrd in nval:\nprint(mrd)\nmclient.close()"
},
{
"code": null,
"e": 4347,
"s": 4339,
"text": "Output:"
},
{
"code": null,
"e": 4472,
"s": 4347,
"text": "{'_id': ObjectId('619d4553c0d6415b4736864d'), 'f_name': 'Nil', 'l_name': 'Mil'}\n{'_id': 1, 'f_name': 'Nil', 'l_name': 'Mil'}"
},
{
"code": null,
"e": 4536,
"s": 4472,
"text": "As expected, documents with f_namewith value 'Nil' is returned."
},
{
"code": null,
"e": 4692,
"s": 4536,
"text": "Use the greater than modifier to discover documents where the f name field begins with the letter N or above (alphabetically). The $gt used to do the same."
},
{
"code": null,
"e": 4924,
"s": 4692,
"text": "from pymongo import MongoClient\nmclient = MongoClient('localhost', 27017)\ndb = mclient['Newdatabase']\nmcol = db[\"Employers\"]\nmquery = { \"f_name\": { \"$gt\": \"N\" } }\nnval = mcol.find(mquery)\nfor mrd in nval:\nprint(mrd)\nmclient.close()"
},
{
"code": null,
"e": 4932,
"s": 4924,
"text": "Output:"
},
{
"code": null,
"e": 5057,
"s": 4932,
"text": "{'_id': ObjectId('619d4553c0d6415b4736864d'), 'f_name': 'Nil', 'l_name': 'Mil'}\n{'_id': 1, 'f_name': 'Nil', 'l_name': 'Mil'}"
},
{
"code": null,
"e": 5111,
"s": 5057,
"text": "As expected, documents with f_name 'Nil' is obtained."
},
{
"code": null,
"e": 5230,
"s": 5111,
"text": "We can also use regex as a modifier to query string. Let’s find documents where the l_name starts with the letter 'M'."
},
{
"code": null,
"e": 5235,
"s": 5230,
"text": "Run:"
},
{
"code": null,
"e": 5470,
"s": 5235,
"text": "from pymongo import MongoClient\nmclient = MongoClient('localhost', 27017)\ndb = mclient['Newdatabase']\nmcol = db[\"Employers\"]\nmquery = { \"l_name\": { \"$regex\": \"M\" } }\nnval = mcol.find(mquery)\nfor mrd in nval:\nprint(mrd)\nmclient.close()"
},
{
"code": null,
"e": 5478,
"s": 5470,
"text": "Output:"
},
{
"code": null,
"e": 5603,
"s": 5478,
"text": "{'_id': ObjectId('619d4553c0d6415b4736864d'), 'f_name': 'Nil', 'l_name': 'Mil'}\n{'_id': 1, 'f_name': 'Nil', 'l_name': 'Mil'}"
},
{
"code": null,
"e": 5734,
"s": 5603,
"text": "As expected, documents with l_name that starts from 'M' is obtained. Check out the reference to see more modifier and their usage."
},
{
"code": null,
"e": 5754,
"s": 5734,
"text": "PyMongo collections"
},
{
"code": null,
"e": 5768,
"s": 5754,
"text": "MongoDB Query"
},
{
"code": null,
"e": 5793,
"s": 5768,
"text": "Insert data into MongoDB"
},
{
"code": null,
"e": 5810,
"s": 5793,
"text": "Happy Learning 🙂"
},
{
"code": null,
"e": 6401,
"s": 5810,
"text": "\nPython – Update MongoDB documents\nPython – Delete MongoDB Documents\nPython – Create MongoDB Database and Inserting Documents\nHow to Connect MongoDB with Python\nPython read data from MySQL Database\nPython raw_input read input from keyboard\nPython How to read input from keyboard\nWhat are different Python Data Types\nPython List Data Structure In Depth\nPython Tuple Data Structure in Depth\nPython – How to Read Google Search Results in Selenium\nHow to Read CSV File in Python\nHow to read a text file in Python ?\nHow to read JSON file in Python ?\nPython – How to read environment variables ?\n"
},
{
"code": null,
"e": 6435,
"s": 6401,
"text": "Python – Update MongoDB documents"
},
{
"code": null,
"e": 6469,
"s": 6435,
"text": "Python – Delete MongoDB Documents"
},
{
"code": null,
"e": 6526,
"s": 6469,
"text": "Python – Create MongoDB Database and Inserting Documents"
},
{
"code": null,
"e": 6561,
"s": 6526,
"text": "How to Connect MongoDB with Python"
},
{
"code": null,
"e": 6598,
"s": 6561,
"text": "Python read data from MySQL Database"
},
{
"code": null,
"e": 6640,
"s": 6598,
"text": "Python raw_input read input from keyboard"
},
{
"code": null,
"e": 6679,
"s": 6640,
"text": "Python How to read input from keyboard"
},
{
"code": null,
"e": 6716,
"s": 6679,
"text": "What are different Python Data Types"
},
{
"code": null,
"e": 6752,
"s": 6716,
"text": "Python List Data Structure In Depth"
},
{
"code": null,
"e": 6789,
"s": 6752,
"text": "Python Tuple Data Structure in Depth"
},
{
"code": null,
"e": 6844,
"s": 6789,
"text": "Python – How to Read Google Search Results in Selenium"
},
{
"code": null,
"e": 6875,
"s": 6844,
"text": "How to Read CSV File in Python"
},
{
"code": null,
"e": 6911,
"s": 6875,
"text": "How to read a text file in Python ?"
},
{
"code": null,
"e": 6945,
"s": 6911,
"text": "How to read JSON file in Python ?"
},
{
"code": null,
"e": 6990,
"s": 6945,
"text": "Python – How to read environment variables ?"
},
{
"code": null,
"e": 6996,
"s": 6994,
"text": "Δ"
},
{
"code": null,
"e": 7019,
"s": 6996,
"text": " Python – Introduction"
},
{
"code": null,
"e": 7038,
"s": 7019,
"text": " Python – Features"
},
{
"code": null,
"e": 7067,
"s": 7038,
"text": " Python – Install on Windows"
},
{
"code": null,
"e": 7094,
"s": 7067,
"text": " Python – Modes of Program"
},
{
"code": null,
"e": 7118,
"s": 7094,
"text": " Python – Number System"
},
{
"code": null,
"e": 7140,
"s": 7118,
"text": " Python – Identifiers"
},
{
"code": null,
"e": 7160,
"s": 7140,
"text": " Python – Operators"
},
{
"code": null,
"e": 7187,
"s": 7160,
"text": " Python – Ternary Operator"
},
{
"code": null,
"e": 7220,
"s": 7187,
"text": " Python – Command Line Arguments"
},
{
"code": null,
"e": 7239,
"s": 7220,
"text": " Python – Keywords"
},
{
"code": null,
"e": 7260,
"s": 7239,
"text": " Python – Data Types"
},
{
"code": null,
"e": 7289,
"s": 7260,
"text": " Python – Upgrade Python PIP"
},
{
"code": null,
"e": 7319,
"s": 7289,
"text": " Python – Virtual Environment"
},
{
"code": null,
"e": 7342,
"s": 7319,
"text": " Pyhton – Type Casting"
},
{
"code": null,
"e": 7366,
"s": 7342,
"text": " Python – String to Int"
},
{
"code": null,
"e": 7399,
"s": 7366,
"text": " Python – Conditional Statements"
},
{
"code": null,
"e": 7422,
"s": 7399,
"text": " Python – if statement"
},
{
"code": null,
"e": 7451,
"s": 7422,
"text": " Python – *args and **kwargs"
},
{
"code": null,
"e": 7477,
"s": 7451,
"text": " Python – Date Formatting"
},
{
"code": null,
"e": 7512,
"s": 7477,
"text": " Python – Read input from keyboard"
},
{
"code": null,
"e": 7532,
"s": 7512,
"text": " Python – raw_input"
},
{
"code": null,
"e": 7556,
"s": 7532,
"text": " Python – List In Depth"
},
{
"code": null,
"e": 7585,
"s": 7556,
"text": " Python – List Comprehension"
},
{
"code": null,
"e": 7608,
"s": 7585,
"text": " Python – Set in Depth"
},
{
"code": null,
"e": 7638,
"s": 7608,
"text": " Python – Dictionary in Depth"
},
{
"code": null,
"e": 7663,
"s": 7638,
"text": " Python – Tuple in Depth"
},
{
"code": null,
"e": 7693,
"s": 7663,
"text": " Python – Stack Datastructure"
},
{
"code": null,
"e": 7723,
"s": 7693,
"text": " Python – Classes and Objects"
},
{
"code": null,
"e": 7746,
"s": 7723,
"text": " Python – Constructors"
},
{
"code": null,
"e": 7777,
"s": 7746,
"text": " Python – Object Introspection"
},
{
"code": null,
"e": 7799,
"s": 7777,
"text": " Python – Inheritance"
},
{
"code": null,
"e": 7820,
"s": 7799,
"text": " Python – Decorators"
},
{
"code": null,
"e": 7856,
"s": 7820,
"text": " Python – Serialization with Pickle"
},
{
"code": null,
"e": 7886,
"s": 7856,
"text": " Python – Exceptions Handling"
},
{
"code": null,
"e": 7920,
"s": 7886,
"text": " Python – User defined Exceptions"
},
{
"code": null,
"e": 7946,
"s": 7920,
"text": " Python – Multiprocessing"
},
{
"code": null,
"e": 7984,
"s": 7946,
"text": " Python – Default function parameters"
},
{
"code": null,
"e": 8012,
"s": 7984,
"text": " Python – Lambdas Functions"
},
{
"code": null,
"e": 8036,
"s": 8012,
"text": " Python – NumPy Library"
},
{
"code": null,
"e": 8062,
"s": 8036,
"text": " Python – MySQL Connector"
},
{
"code": null,
"e": 8094,
"s": 8062,
"text": " Python – MySQL Create Database"
},
{
"code": null,
"e": 8120,
"s": 8094,
"text": " Python – MySQL Read Data"
},
{
"code": null,
"e": 8148,
"s": 8120,
"text": " Python – MySQL Insert Data"
},
{
"code": null,
"e": 8179,
"s": 8148,
"text": " Python – MySQL Update Records"
},
{
"code": null,
"e": 8210,
"s": 8179,
"text": " Python – MySQL Delete Records"
},
{
"code": null,
"e": 8243,
"s": 8210,
"text": " Python – String Case Conversion"
},
{
"code": null,
"e": 8278,
"s": 8243,
"text": " Howto – Find biggest of 2 numbers"
},
{
"code": null,
"e": 8315,
"s": 8278,
"text": " Howto – Remove duplicates from List"
},
{
"code": null,
"e": 8353,
"s": 8315,
"text": " Howto – Convert any Number to Binary"
},
{
"code": null,
"e": 8379,
"s": 8353,
"text": " Howto – Merge two Lists"
},
{
"code": null,
"e": 8404,
"s": 8379,
"text": " Howto – Merge two dicts"
},
{
"code": null,
"e": 8444,
"s": 8404,
"text": " Howto – Get Characters Count in a File"
},
{
"code": null,
"e": 8479,
"s": 8444,
"text": " Howto – Get Words Count in a File"
},
{
"code": null,
"e": 8514,
"s": 8479,
"text": " Howto – Remove Spaces from String"
},
{
"code": null,
"e": 8543,
"s": 8514,
"text": " Howto – Read Env variables"
},
{
"code": null,
"e": 8569,
"s": 8543,
"text": " Howto – Read a text File"
},
{
"code": null,
"e": 8595,
"s": 8569,
"text": " Howto – Read a JSON File"
},
{
"code": null,
"e": 8627,
"s": 8595,
"text": " Howto – Read Config.ini files"
},
{
"code": null,
"e": 8655,
"s": 8627,
"text": " Howto – Iterate Dictionary"
},
{
"code": null,
"e": 8695,
"s": 8655,
"text": " Howto – Convert List Of Objects to CSV"
},
{
"code": null,
"e": 8729,
"s": 8695,
"text": " Howto – Merge two dict in Python"
},
{
"code": null,
"e": 8754,
"s": 8729,
"text": " Howto – create Zip File"
},
{
"code": null,
"e": 8775,
"s": 8754,
"text": " Howto – Get OS info"
},
{
"code": null,
"e": 8806,
"s": 8775,
"text": " Howto – Get size of Directory"
},
{
"code": null,
"e": 8843,
"s": 8806,
"text": " Howto – Check whether a file exists"
},
{
"code": null,
"e": 8880,
"s": 8843,
"text": " Howto – Remove key from dictionary"
},
{
"code": null,
"e": 8902,
"s": 8880,
"text": " Howto – Sort Objects"
},
{
"code": null,
"e": 8940,
"s": 8902,
"text": " Howto – Create or Delete Directories"
},
{
"code": null,
"e": 8963,
"s": 8940,
"text": " Howto – Read CSV File"
},
{
"code": null,
"e": 9001,
"s": 8963,
"text": " Howto – Create Python Iterable class"
},
{
"code": null,
"e": 9032,
"s": 9001,
"text": " Howto – Access for loop index"
},
{
"code": null,
"e": 9070,
"s": 9032,
"text": " Howto – Clear all elements from List"
},
{
"code": null,
"e": 9110,
"s": 9070,
"text": " Howto – Remove empty lists from a List"
},
{
"code": null,
"e": 9157,
"s": 9110,
"text": " Howto – Remove special characters from String"
},
{
"code": null,
"e": 9189,
"s": 9157,
"text": " Howto – Sort dictionary by key"
}
]
|
Neo4j - Index | Neo4j CQL supports indexes on node or relationship properties to improve the performance of an application. We can create indexes on properties for all nodes which have the same label name.
We can use these indexed columns on MATCH or WHERE or IN operator to improve the execution of CQL command.
In this chapter, we will discuss how to −
Create an Index
Delete an Index
Neo4j CQL provides "CREATE INDEX" command to create indexes on Node or Relationship properties.
Following is the syntax to create an index in Neo4j.
CREATE INDEX ON :label(property)
Before proceeding with the example, create a node Dhawan as shown below.
CREATE (Dhawan:player{name: "shikar Dhawan", YOB: 1995, POB: "Delhi"})
Following is a sample Cypher Query to create an index on the name property of player nodes in Neo4j.
CREATE INDEX ON :player(name)
To execute the above query, carry out the following steps −
Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown below.
Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot.
On executing, you will get the following result.
Neo4j CQL provides a "DROP INDEX" command to drop an existing index on a node or relationship property.
Following is the syntax to drop an index in Neo4j.
DROP INDEX ON :label(property)
Following is a sample Cypher Query to drop the index on the name property of player nodes in Neo4j.
DROP INDEX ON :player(name)
To execute the above query, carry out the following steps −
Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown in the following screenshot.
Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot.
On executing, you will get the following result.
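Once the index on :player(name) exists, equality lookups on that property can use it automatically; the query planner decides, and prefixing a query with EXPLAIN or PROFILE shows whether the index was picked. A minimal illustrative query against the node created earlier:
MATCH (p:player)
WHERE p.name = "shikar Dhawan"
RETURN p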
{
"code": null,
"e": 2531,
"s": 2339,
"text": "Neo4j SQL supports Indexes on node or relationship properties to improve the performance of the application. We can create indexes on properties for all nodes, which have the same label name."
},
{
"code": null,
"e": 2638,
"s": 2531,
"text": "We can use these indexed columns on MATCH or WHERE or IN operator to improve the execution of CQL command."
},
{
"code": null,
"e": 2680,
"s": 2638,
"text": "In this chapter, we will discuss how to −"
},
{
"code": null,
"e": 2696,
"s": 2680,
"text": "Create an Index"
},
{
"code": null,
"e": 2712,
"s": 2696,
"text": "Delete an Index"
},
{
"code": null,
"e": 2808,
"s": 2712,
"text": "Neo4j CQL provides \"CREATE INDEX\" command to create indexes on Node or Relationship properties."
},
{
"code": null,
"e": 2861,
"s": 2808,
"text": "Following is the syntax to create an index in Neo4j."
},
{
"code": null,
"e": 2892,
"s": 2861,
"text": "CREATE INDEX ON:label (node) \n"
},
{
"code": null,
"e": 2965,
"s": 2892,
"text": "Before proceeding with the example, create a node Dhawan as shown below."
},
{
"code": null,
"e": 3036,
"s": 2965,
"text": "CREATE (Dhawan:player{name: \"shikar Dhawan\", YOB: 1995, POB: \"Delhi\"})"
},
{
"code": null,
"e": 3119,
"s": 3036,
"text": "Following is a sample Cypher Query to create an index on the node Dhawan in Neo4j."
},
{
"code": null,
"e": 3151,
"s": 3119,
"text": "CREATE INDEX ON:player(Dhawan) "
},
{
"code": null,
"e": 3211,
"s": 3151,
"text": "To execute the above query, carry out the following steps −"
},
{
"code": null,
"e": 3367,
"s": 3211,
"text": "Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown below."
},
{
"code": null,
"e": 3520,
"s": 3367,
"text": "Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot."
},
{
"code": null,
"e": 3569,
"s": 3520,
"text": "On executing, you will get the following result."
},
{
"code": null,
"e": 3673,
"s": 3569,
"text": "Neo4j CQL provides a \"DROP INDEX\" command to drop an existing index of a Node or Relationshis property."
},
{
"code": null,
"e": 3726,
"s": 3673,
"text": "Following is the syntax to create an index in Neo4j."
},
{
"code": null,
"e": 3754,
"s": 3726,
"text": "DROP INDEX ON:label(node) \n"
},
{
"code": null,
"e": 3845,
"s": 3754,
"text": "Following is a sample Cypher Query to create an index on the node named “Dhawan” in Neo4j."
},
{
"code": null,
"e": 3875,
"s": 3845,
"text": "DROP INDEX ON:player(Dhawan) "
},
{
"code": null,
"e": 3935,
"s": 3875,
"text": "To execute the above query, carry out the following steps −"
},
{
"code": null,
"e": 4113,
"s": 3935,
"text": "Step 1 − Open the Neo4j desktop App and start the Neo4j Server. Open the built-in browser app of Neo4j using the URL http://localhost:7474/ as shown in the following screenshot."
},
{
"code": null,
"e": 4266,
"s": 4113,
"text": "Step 2 − Copy and paste the desired query in the dollar prompt and press the play button (to execute the query) highlighted in the following screenshot."
},
{
"code": null,
"e": 4315,
"s": 4266,
"text": "On executing, you will get the following result."
},
{
"code": null,
"e": 4322,
"s": 4315,
"text": " Print"
},
{
"code": null,
"e": 4333,
"s": 4322,
"text": " Add Notes"
}
]
|
How regular expression alternatives work in Python? | In real world applications, we often use regular expressions that match any one of two or more alternatives. Also, we sometimes use a quantifier to apply to several expressions. All such goals are achieved by grouping with parentheses; and, in the use of alternatives, applying alternation with the vertical bar (|).
Alternation is useful when we need to match any one of several different alternatives. For example, the regex airways|airplane|bomber will match any text that contains airways or airplane or bomber. The same is achieved by using the regex air(ways|plane)|bomber.
If we used the regex (airways|airplane|bomber), it would match any of the three expressions. Consider the regex (air(ways|plane)|bomber), which has two captures if the first expression matches (airways or airplane as the first capture and ways or plane as the second capture), and one capture if the second expression matches (bomber). We can switch off the capturing effect by following an opening parenthesis with ?: like this:
(air(?:ways|plane)|bomber)
This will have only one capture if it matches (airways or airplane or bomber).
The following code illustrates the points discussed above −
import re
s = 'airways aircraft airplane bomber'
result = re.findall(r'(airways|airplane|bomber)', s)
print(result)
result2 = re.findall(r'(air(ways|plane)|bomber)', s)
print(result2)
result3 = re.findall(r'(air(?:ways|plane)|bomber)', s)
print(result3)
This gives the output
['airways', 'airplane', 'bomber']
[('airways', 'ways'), ('airplane', 'plane'), ('bomber', '')]
['airways', 'airplane', 'bomber'] | [
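Grouping also lets a quantifier apply to a whole sub-expression rather than to a single character. A small illustrative sketch (the test string is made up):
import re
# (?:na)+ applies the + quantifier to the whole two-character group
print(re.findall(r'ba(?:na)+', 'ba bana banana banananas'))
This prints ['bana', 'banana', 'bananana'].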
{
"code": null,
"e": 1379,
"s": 1062,
"text": "In real world applications, we often use regular expressions that match any one of two or more alternatives. Also, we sometimes use a quantifier to apply to several expressions. All such goals are achieved by grouping with parentheses; and, in the use of alternatives, applying alternation with the vertical bar (|)."
},
{
"code": null,
"e": 1642,
"s": 1379,
"text": "Alternation is useful when we need to match any one of several different alternatives. For example, the regex airways|airplane|bomber will match any text that contains airways or airplane or bomber. The same is achieved by using the regex air(ways|plane)|bomber."
},
{
"code": null,
"e": 2072,
"s": 1642,
"text": "If we used the regex (airways|airplane|bomber), it would match any of the three expressions. Consider the regex (air(ways|plane)|bomber), which has two captures if the first expression matches (airways or airplane as the first capture and ways or plane as the second capture), and one capture if the second expression matches (bomber). We can switch off the capturing effect by following an opening parenthesis with ?: like this:"
},
{
"code": null,
"e": 2099,
"s": 2072,
"text": "(air(?:ways|plane)|bomber)"
},
{
"code": null,
"e": 2179,
"s": 2099,
"text": " This will have only one capture if it matches (airways or airplane or bomber)."
},
{
"code": null,
"e": 2239,
"s": 2179,
"text": "The following code illustrates the points discussed above −"
},
{
"code": null,
"e": 2490,
"s": 2239,
"text": "import re\ns = 'airways aircraft airplane bomber'\nresult = re.findall(r'(airways|airplane|bomber)', s)\nprint result\nresult2 = re.findall(r'(air(ways|plane)|bomber)', s)\nprint result2\nresult3 = re.findall(r'(air(?:ways|plane)|bomber)', s)\nprint result3"
},
{
"code": null,
"e": 2512,
"s": 2490,
"text": "This gives the output"
},
{
"code": null,
"e": 2641,
"s": 2512,
"text": "['airways', 'airplane', 'bomber']\n[('airways', 'ways'), ('airplane', 'plane'), ('bomber', '')]\n['airways', 'airplane', 'bomber']"
}
]
|
Accessing array out of bounds in C/C++ - GeeksforGeeks | 07 Jul, 2017
Prerequisite: Arrays in C/C++
In high-level languages such as Java, out-of-bounds array access is prevented by throwing an exception such as java.lang.ArrayIndexOutOfBoundsException. C has no such functionality, so the programmer needs to take care of this situation.
What if the programmer accidentally accesses an array index that is out of bounds?
C does not provide any specification that deals with the problem of accessing an invalid index. As per the ISO C standard, it is called undefined behavior. Undefined behavior (UB) is the result of executing code whose behavior is not prescribed by the language specification for the current state of the program (e.g. memory). This generally happens when the translator of the source code makes certain assumptions, but these assumptions are not satisfied during execution.
Examples of Undefined Behavior while accessing array out of bounds
Access a non-allocated location of memory: The program may read a piece of memory just past what it owns, yielding a garbage value.
// Program to demonstrate
// accessing array out of bounds
#include <stdio.h>
int main()
{
    int arr[] = {1, 2, 3, 4, 5};
    printf("arr[0] is %d\n", arr[0]);

    // arr[10] is out of bounds
    printf("arr[10] is %d\n", arr[10]);
    return 0;
}
Output :
arr[0] is 1
arr[10] is -1786647872
It can be observed here that arr[10] accesses a memory location containing a garbage value.
Segmentation fault: The program can access some piece of memory which is not owned by it, which can crash the program with a segmentation fault.
// Program to demonstrate
// accessing array out of bounds
#include <stdio.h>
int main()
{
    int arr[] = {1, 2, 3, 4, 5};
    printf("arr[0] is %d\n", arr[0]);
    printf("arr[10] is %d\n", arr[10]);

    // writing to an out-of-bounds element
    arr[10] = 11;
    printf("arr[10] is %d\n", arr[10]);
    return 0;
}
Output :
Runtime Error : Segmentation Fault (SIGSEGV)
Important Points:
Always stay inside the bounds of the array when using arrays in C to avoid such errors.
C++, however, offers the std::vector class template; its operator[] does not perform bounds checking, but the at() member function does and throws std::out_of_range for an invalid index. A sketch follows below.
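A minimal sketch of that bounds-checked access (the exact message produced by what() is implementation-dependent):
// Program to demonstrate bounds checking with std::vector::at()
#include <iostream>
#include <stdexcept>
#include <vector>
int main()
{
    std::vector<int> v = {1, 2, 3, 4, 5};
    try {
        // at() throws std::out_of_range for an invalid index
        std::cout << v.at(10) << "\n";
    } catch (const std::out_of_range& e) {
        std::cout << "Caught: " << e.what() << "\n";
    }
    return 0;
}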
This article is contributed by Mandeep Singh.
c-array
cpp-array
C Language
C++
CPP
{
"code": null,
"e": 23975,
"s": 23947,
"text": "\n07 Jul, 2017"
},
{
"code": null,
"e": 24004,
"s": 23975,
"text": "Perquisite : Arrays in C/C++"
},
{
"code": null,
"e": 24291,
"s": 24004,
"text": "In high level languages such as Java, there are functions which prevent you from accessing array out of bound by generating a exception such as java.lang.ArrayIndexOutOfBoundsException. But in case of C, there is no such functionality, so programmer need to take care of this situation."
},
{
"code": null,
"e": 24375,
"s": 24291,
"text": "What if programmer accidentally accesses any index of array which is out of bound ?"
},
{
"code": null,
"e": 24876,
"s": 24375,
"text": "C don’t provide any specification which deal with problem of accessing invalid index. As per ISO C standard it is called Undefined Behavior.An undefined behavior (UB) is a result of executing computer code whose behavior is not prescribed by the language specification to which the code can adhere to, for the current state of the program (e.g. memory). This generally happens when the translator of the source code makes certain assumptions, but these assumptions are not satisfied during execution."
},
{
"code": null,
"e": 24943,
"s": 24876,
"text": "Examples of Undefined Behavior while accessing array out of bounds"
},
{
"code": null,
"e": 25957,
"s": 24943,
"text": "Access non allocated location of memory: The program can access some piece of memory which is owned by it.// Program to demonstrate // accessing array out of bounds#include <stdio.h>int main(){ int arr[] = {1,2,3,4,5}; printf(\"arr [0] is %d\\n\", arr[0]); // arr[10] is out of bound printf(\"arr[10] is %d\\n\", arr[10]); return 0;}Output :arr [0] is 1\narr[10] is -1786647872\nIt can be observed here, that arr[10] is accessing a memory location containing a garbage value.Segmentation fault: The program can access some piece of memory which is not owned by it, which can cause crashing of program such as segmentation fault.// Program to demonstrate // accessing array out of bounds#include <stdio.h>int main(){ int arr[] = {1,2,3,4,5}; printf(\"arr [0] is %d\\n\",arr[0]); printf(\"arr[10] is %d\\n\",arr[10]); // allocation memory to out of bound // element arr[10] = 11; printf(\"arr[10] is %d\\n\",arr[10]); return 0;}Output :Runtime Error : Segmentation Fault (SIGSEGV)"
},
{
"code": null,
"e": 26446,
"s": 25957,
"text": "Access non allocated location of memory: The program can access some piece of memory which is owned by it.// Program to demonstrate // accessing array out of bounds#include <stdio.h>int main(){ int arr[] = {1,2,3,4,5}; printf(\"arr [0] is %d\\n\", arr[0]); // arr[10] is out of bound printf(\"arr[10] is %d\\n\", arr[10]); return 0;}Output :arr [0] is 1\narr[10] is -1786647872\nIt can be observed here, that arr[10] is accessing a memory location containing a garbage value."
},
{
"code": "// Program to demonstrate // accessing array out of bounds#include <stdio.h>int main(){ int arr[] = {1,2,3,4,5}; printf(\"arr [0] is %d\\n\", arr[0]); // arr[10] is out of bound printf(\"arr[10] is %d\\n\", arr[10]); return 0;}",
"e": 26689,
"s": 26446,
"text": null
},
{
"code": null,
"e": 26698,
"s": 26689,
"text": "Output :"
},
{
"code": null,
"e": 26735,
"s": 26698,
"text": "arr [0] is 1\narr[10] is -1786647872\n"
},
{
"code": null,
"e": 26832,
"s": 26735,
"text": "It can be observed here, that arr[10] is accessing a memory location containing a garbage value."
},
{
"code": null,
"e": 27358,
"s": 26832,
"text": "Segmentation fault: The program can access some piece of memory which is not owned by it, which can cause crashing of program such as segmentation fault.// Program to demonstrate // accessing array out of bounds#include <stdio.h>int main(){ int arr[] = {1,2,3,4,5}; printf(\"arr [0] is %d\\n\",arr[0]); printf(\"arr[10] is %d\\n\",arr[10]); // allocation memory to out of bound // element arr[10] = 11; printf(\"arr[10] is %d\\n\",arr[10]); return 0;}Output :Runtime Error : Segmentation Fault (SIGSEGV)"
},
{
"code": "// Program to demonstrate // accessing array out of bounds#include <stdio.h>int main(){ int arr[] = {1,2,3,4,5}; printf(\"arr [0] is %d\\n\",arr[0]); printf(\"arr[10] is %d\\n\",arr[10]); // allocation memory to out of bound // element arr[10] = 11; printf(\"arr[10] is %d\\n\",arr[10]); return 0;}",
"e": 27679,
"s": 27358,
"text": null
},
{
"code": null,
"e": 27732,
"s": 27679,
"text": "Output :Runtime Error : Segmentation Fault (SIGSEGV)"
},
{
"code": null,
"e": 27750,
"s": 27732,
"text": "Important Points:"
},
{
"code": null,
"e": 27848,
"s": 27750,
"text": "Stay inside the bounds of the array in C programming while using arrays to avoid any such errors."
},
{
"code": null,
"e": 28033,
"s": 27848,
"text": "C++ however offers the std::vector class template, which does not require to perform bounds checking. A vector also has the std::at() member function which can perform bounds-checking."
},
{
"code": null,
"e": 28334,
"s": 28033,
"text": "This article is contributed by Mandeep Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks."
},
{
"code": null,
"e": 28459,
"s": 28334,
"text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above."
},
{
"code": null,
"e": 28467,
"s": 28459,
"text": "c-array"
},
{
"code": null,
"e": 28477,
"s": 28467,
"text": "cpp-array"
},
{
"code": null,
"e": 28488,
"s": 28477,
"text": "C Language"
},
{
"code": null,
"e": 28492,
"s": 28488,
"text": "C++"
},
{
"code": null,
"e": 28496,
"s": 28492,
"text": "CPP"
},
{
"code": null,
"e": 28594,
"s": 28496,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28603,
"s": 28594,
"text": "Comments"
},
{
"code": null,
"e": 28616,
"s": 28603,
"text": "Old Comments"
},
{
"code": null,
"e": 28644,
"s": 28616,
"text": "rand() and srand() in C/C++"
},
{
"code": null,
"e": 28676,
"s": 28644,
"text": "Command line arguments in C/C++"
},
{
"code": null,
"e": 28716,
"s": 28676,
"text": "Core Dump (Segmentation fault) in C/C++"
},
{
"code": null,
"e": 28762,
"s": 28716,
"text": "INT_MAX and INT_MIN in C/C++ and Applications"
},
{
"code": null,
"e": 28774,
"s": 28762,
"text": "fork() in C"
},
{
"code": null,
"e": 28792,
"s": 28774,
"text": "Vector in C++ STL"
},
{
"code": null,
"e": 28838,
"s": 28792,
"text": "Initialize a vector in C++ (6 different ways)"
},
{
"code": null,
"e": 28859,
"s": 28838,
"text": "Iterators in C++ STL"
},
{
"code": null,
"e": 28878,
"s": 28859,
"text": "Inheritance in C++"
}
]
|
Accelerate Your Exploratory Data Analysis With Pandas-Profiling | by Sukanta Roy | Towards Data Science | When starting a new data science project, the first step after getting your hands on the data set for the first time is to understand it. We achieve this by performing Exploratory Data Analysis (EDA). This includes finding out the data type of each variable, the distribution of the target variable, number of distinct values for each predictor variable, if there is any duplicate or missing values in the data set etc.
If you have ever done EDA on any data set (and I assume you have as you are reading this article), I don’t need to tell you how time consuming this process can be. And if you have been a part of many data science projects (be it in your job or by doing personal projects) you know how repetitive all these process can be. But with the Open source library Pandas-profiling that doesn’t have to be the case anymore.
Pandas-profiling is an open source library that can generate beautiful interactive reports for any data set, with just a single line of code. Sound’s interesting? Let’s take a look at the documentation to get a better understanding of what it does.
Pandas-profiling generates profile reports from a pandas DataFrame. The pandas df.describe() function is great but a little basic for serious exploratory data analysis. pandas_profiling extends the pandas DataFrame with df.profile_report() for quick data analysis.
For each column the following statistics — if relevant for the column type — are presented in an interactive HTML report:
Type inference: detect the types of columns in a data frame.
Essentials: type, unique values, missing values
Quantile statistics like minimum value, Q1, median, Q3, maximum, range, inter-quartile range
Descriptive statistics like mean, mode, standard deviation, sum, median absolute deviation, coefficient of variation, kurtosis, skewness
Most frequent values
Histogram
Correlations highlighting of highly correlated variables(Spearman, Pearson and Kendall matrices)
Missing values matrix, count, heatmap and dendrogram of missing values
Text analysis learn about categories (Uppercase, Space), scripts (Latin, Cyrillic) and blocks (ASCII) of text data.
Now that we know what pandas-profiling is all about, let’s see how to install it and use it in a Jupyter Notebook or in Google Colab in the following section.
You can install pandas-profiling very easily using pip package manager with the following command:
pip install pandas-profiling[notebook,html]
Alternatively, you could install the latest version directly from Github:
pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip
If you are using conda, then you can use the following command to installation
conda install -c conda-forge pandas-profiling
Google Colab comes with Pandas-profiling pre-installed, but unfortunately it is an older version (v1.4). If you are following this article or the GitHub documentation, the code will not run on Google Colab unless you install the latest version of the library (v2.6).
To do that, you need to first uninstall the existing library and install the latest one as follows:
# To uninstall
!pip uninstall pandas_profiling
Now to install, we need to run the pip install command.
!pip install pandas-profiling[notebook,html]
Now that we are done with the prerequisites, let’s get into the fun part of analyzing some data set.
The data set I will be using for this example is the Titanic data set.
import pandas as pd
import pandas_profiling
from pandas_profiling import ProfileReport
from pandas_profiling.utils.cache import cache_file
file = cache_file(
    "titanic.csv",
    "https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv")

data = pd.read_csv(file)
To generate the report, run the following code in the notebook.
profile = ProfileReport(data, title="Titanic Dataset", html={'style': {'full_width': True}}, sort="None")
That’s it. With a single line of code, you have generated a detailed profile report. Now let us see the results by including the report in the notebook.
profile.to_notebook_iframe()
This will include the interactive report as an HTML iframe in the notebook.
Save the report as an HTML file using the following code:
profile.to_file(output_file="your_report.html")
Or obtain the data as JSON using:
# As a string
json_data = profile.to_json()

# As a file
profile.to_file(output_file="your_report.json")
Now that we know how to generate reports using pandas-profiling, let’s look at the result.
Pandas_profiling creates a very descriptive overview of the predictor variables, by calculating the total missing cells, duplicate rows, number of distinct values, missing values, zeros for the predictor variables. It also marks the variables that have high cardinality or have missing values in the warning section, as you can see in the above image.
Besides all these, it generates a detailed analysis for each variable. I will go through some of them in this article; to see the full report with all the code, find the Colab link at the end of the article.
For the numerical features, besides having detailed statistics like mean, standard deviation, min, max, Interquartile range (IQR) etc. it also plots the histogram, gives the list of common and extreme values.
Similar to the numerical features, for categorical features it calculates common values, lengths, characters etc.
One of the most interesting things is the interactions and correlation sections of the report. In the interaction section the pandas_profiling library automatically generates interaction plots for every pair of variables. You can get the interaction plot of any pair by selecting the specific variables from the two headers (Like in this example, I have selected passengerId and Age)
Correlation is a statistical technique that can show whether and how strongly pairs of variables are related. For example, height and weight are related; taller people tend to be heavier than shorter people. The relationship isn’t perfect. People of the same height vary in weight, and you can easily think of two people you know where the shorter one is heavier than the taller one. Nonetheless, the average weight of people 5'5" is less than the average weight of people 5'6", and their average weight is less than that of people 5'7", etc. Correlation can tell you just how much of the variation in peoples’ weights is related to their heights.
The main result of a correlation is called the correlation coefficient (or “r”). It ranges from -1.0 to +1.0. The closer r is to +1 or -1, the more closely the two variables are related.
If r is close to 0, it means there is no relationship between the variables. If r is positive, it means that as one variable gets larger the other gets larger. If r is negative it means that as one gets larger, the other gets smaller (often called an “inverse” correlation).
When it comes to generating correlation matrix for all the numerical features, the pandas_profiling library gives us all the popular options to choose from including Pearson’s r, Spearman’s ρ etc.
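As an aside that is not part of the profiling report workflow, the same matrices can also be computed directly with plain pandas, which is handy if you want the raw numbers outside the HTML report. A minimal sketch, assuming the data variable still holds the Titanic DataFrame loaded earlier:

# Minimal sketch: compute the correlation matrices with plain pandas.
# Assumes `data` is the Titanic DataFrame loaded earlier; only the
# numeric columns are used for the computation.
numeric_data = data.select_dtypes(include='number')

pearson_corr = numeric_data.corr(method='pearson')    # Pearson's r
spearman_corr = numeric_data.corr(method='spearman')  # Spearman's rho
kendall_corr = numeric_data.corr(method='kendall')    # Kendall's tau

print(pearson_corr)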
Now that we know the advantages of using pandas_profiling, it is also useful to note the disadvantage that this library has.
The main disadvantage of pandas profiling is its use with large data sets. With the increase in the size of the data the time to generate the report also increases a lot.
One way to solve this problem is to generate the profile report for a part of the data set. But while doing this, it is very important to make sure that the data is randomly sampled so that it is representative of all the data we have. We can do this by:
from pandas_profiling import ProfileReport

# Generate report for 10000 data points
profile = ProfileReport(data.sample(n = 10000),
                        title="Titanic Data set",
                        html={'style': {'full_width': True}},
                        sort="None")

# save to file
profile.to_file(output_file='10000datapoints.html')
Alternatively, if you are insistent on getting the report on the whole data set, you can do that by using the minimal mode. In the minimal mode a simplified report will be generated with less information than the full one but it can be generated relatively quickly for a large data set. The code for the same is given below:
profile = ProfileReport(large_dataset, minimal=True)
profile.to_file(output_file="output.html")
Now that you know what is pandas-profiling and how to use it, I hope it will save you a ton of time which you can use for more advanced analysis specific to the problem in hand.
If you want to get the full report with working code, you can take a look at the following notebook.
colab.research.google.com
Pandas-Profiling GitHub repo:
github.com
Hi, I am Sukanta Roy: a software developer, an aspiring Machine Learning Engineer, a former Google Summer of Code 2018 student, and a huge psychology buff. If any of these things interest you, you can follow me on Medium or connect with me on LinkedIn.
AWS - SAM Create a generic role and attach to all lambdas
Here, we will create an IAM role in the SAM template and attach it to lambda functions.
The type AWS::IAM::Role is used to define a role in a SAM/CloudFormation template, which typically contains IAM policies. An IAM policy can be an AWS managed policy such as AmazonSQSFullAccess or AmazonS3FullAccess, or a user-defined policy; user-defined policies are mostly used for business-specific permissions.
Resources:
GenericLambdaRole:
Type: AWS::IAM::Role
Properties:
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/service-role/AWSLambdaRole'
- 'arn:aws:iam::aws:policy/AWSLambdaExecute'
- 'arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess'
- 'arn:aws:iam::aws:policy/AmazonSQSFullAccess'
- 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
- 'arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess'
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- 'lambda.amazonaws.com'
Action:
- 'sts:AssumeRole'
Policies:
- PolicyName: 'SecretsManagerParameterAccess'
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- ssm:GetParam*
- ssm:DescribeParam*
Resource:
            - arn:aws:ssm:*:*:parameter/*
The above role attaches six AWS managed policies and one user-defined policy, SecretsManagerParameterAccess. It can now be attached to any number of lambda functions. Let's attach this role to a Lambda function.
As we have seen in the earlier example, a lambda function can be attached with a role using the Role attribute.
Resources:
SampleLambda:
Type: AWS::Serverless::Function
Properties:
FunctionName: sample-lambda-function
Description: sample-lambda- lambda
Role: !GetAtt GenericLambdaRole.Arn
Handler: src.handle
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: >
Sample SAM Template
Resources:
SampleLambda:
Type: AWS::Serverless::Function
Properties:
FunctionName: sample-lambda-function
Description: sample-lambda- lambda
Role: !GetAtt GenericLambdaRole.Arn
Handler: src.handle
SampleLambda2:
Type: AWS::Serverless::Function
Properties:
FunctionName: sample2-lambda-function
Description: sample2-lambda- lambda
Role: !GetAtt GenericLambdaRole.Arn
Handler: src.handle2
GenericLambdaRole:
Type: AWS::IAM::Role
Properties:
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/service-role/AWSLambdaRole'
- 'arn:aws:iam::aws:policy/AWSLambdaExecute'
- 'arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess'
- 'arn:aws:iam::aws:policy/AmazonSQSFullAccess'
- 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
- 'arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess'
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- 'lambda.amazonaws.com'
Action:
- 'sts:AssumeRole'
Policies:
- PolicyName: 'SecretsManagerParameterAccess'
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- ssm:GetParam*
- ssm:DescribeParam*
- kms:GetSecretValue
- kms:Decrypt
Resource:
- arn:aws:ssm:*:*:parameter/*
Done!
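As a side note that is not covered in this post, once the template is saved, it can be built and deployed with the standard SAM CLI workflow, for example:

sam build
sam deploy --guided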
AWS – SAM serverless function
AWS IAM Roles
Happy Learning 🙂
JasperReports - Designs

The JRXML templates (or JRXML files) in JasperReport are standard XML files, having an extension of .jrxml. All the JRXML files contain the tag <jasperReport> as the root element. This in turn contains many sub-elements (all of these are optional). The JasperReport framework can handle different kinds of data sources. In this tutorial, we shall show how to generate a basic report, just by passing a collection of Java data objects (using Java beans) to the JasperReport Engine. The final report shall display a list of people with their names and countries.
The following steps are covered in this chapter to describe how to design a JasperReport −

Creating a JRXML Report Template.
Previewing the XML Report Template.
Create the JRXML file, which is jasper_report_template.jrxml using a text editor and save this file in C:\tools\jasperreports-5.0.1\test as per our environment setup.
<?xml version = "1.0" encoding = "UTF-8"?>
<!DOCTYPE jasperReport PUBLIC "//JasperReports//DTD Report Design//EN"
"http://jasperreports.sourceforge.net/dtds/jasperreport.dtd">
<jasperReport xmlns = "http://jasperreports.sourceforge.net/jasperreports"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://jasperreports.sourceforge.net/jasperreports
http://jasperreports.sourceforge.net/xsd/jasperreport.xsd"
name = "jasper_report_template" language = "groovy" pageWidth = "595"
pageHeight = "842" columnWidth = "555" leftMargin = "20" rightMargin = "20"
topMargin = "20" bottomMargin = "20">
<queryString>
<![CDATA[]]>
</queryString>
<field name = "country" class = "java.lang.String">
<fieldDescription><![CDATA[country]]></fieldDescription>
</field>
<field name = "name" class = "java.lang.String">
<fieldDescription><![CDATA[name]]></fieldDescription>
</field>
<columnHeader>
<band height = "23">
<staticText>
<reportElement mode = "Opaque" x = "0" y = "3" width = "535"
height = "15" backcolor = "#70A9A9" />
<box>
<bottomPen lineWidth = "1.0" lineColor = "#CCCCCC" />
</box>
<textElement />
<text><![CDATA[]]> </text>
</staticText>
<staticText>
<reportElement x = "414" y = "3" width = "121" height = "15" />
<textElement textAlignment = "Center" verticalAlignment = "Middle">
<font isBold = "true" />
</textElement>
<text><![CDATA[Country]]></text>
</staticText>
<staticText>
<reportElement x = "0" y = "3" width = "136" height = "15" />
<textElement textAlignment = "Center" verticalAlignment = "Middle">
<font isBold = "true" />
</textElement>
<text><![CDATA[Name]]></text>
</staticText>
</band>
</columnHeader>
<detail>
<band height = "16">
<staticText>
<reportElement mode = "Opaque" x = "0" y = "0" width = "535"
height = "14" backcolor = "#E5ECF9" />
<box>
<bottomPen lineWidth = "0.25" lineColor = "#CCCCCC" />
</box>
<textElement />
<text><![CDATA[]]> </text>
</staticText>
<textField>
<reportElement x = "414" y = "0" width = "121" height = "15" />
<textElement textAlignment = "Center" verticalAlignment = "Middle">
<font size = "9" />
</textElement>
<textFieldExpression class = "java.lang.String">
<![CDATA[$F{country}]]>
</textFieldExpression>
</textField>
<textField>
<reportElement x = "0" y = "0" width = "136" height = "15" />
<textElement textAlignment = "Center" verticalAlignment = "Middle" />
<textFieldExpression class = "java.lang.String">
<![CDATA[$F{name}]]>
</textFieldExpression>
</textField>
</band>
</detail>
</jasperReport>
Here are the details of main fields in the above report template −
<queryString> − This is empty (as we are passing data through Java Beans). Usually it contains the SQL statement that retrieves the report result.

<field name> − This element is used to map data from data sources or queries into report templates. name is re-used in the report body and is case-sensitive.

<fieldDescription> − This element maps the field name with the appropriate element in the XML file.

<staticText> − This defines static text that does not depend on any datasources, variables, parameters, or report expressions.

<textFieldExpression> − This defines the appearance of the result field.

$F{country} − This is a variable that contains the value of the result, the predefined field in the tag <field name>.

<band> − Bands contain the data, which is displayed in the report.
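The <field name> entries above ("name" and "country") map onto the properties of the Java beans that are passed to the engine when filling the report. Below is a minimal sketch of such a bean; the class name DataBean is an assumption made here for illustration, and the tutorial's actual bean may differ:

// Minimal sketch of a Java bean whose getters match the
// <field name> entries in the template. The class name
// DataBean is an assumption made for illustration.
public class DataBean {
   private String name;
   private String country;

   public String getName() { return name; }
   public void setName(String name) { this.name = name; }

   public String getCountry() { return country; }
   public void setCountry(String country) { this.country = country; }
}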
Once the report design is ready, save it in the C:\tools\jasperreports-5.0.1\test directory (the same directory where the build file will be placed).
There is a utility net.sf.jasperreports.view.JasperDesignViewer available in JasperReports JAR file, which helps in previewing the report design without having to compile or fill it. This utility is a standalone Java application, hence can be executed using ANT.
Let's write an ANT target viewDesignXML to view the JRXML. So, let's create and save build.xml under C:\tools\jasperreports-5.0.1\test directory (should be placed in the same directory where JRXML is placed). Here is the build.xml file −
<?xml version = "1.0" encoding = "UTF-8"?>
<project name = "JasperReportTest" default = "viewDesignXML" basedir = ".">
<import file = "baseBuild.xml" />
<target name = "viewDesignXML" description = "Design viewer is
launched to preview the JXML report design.">
<java classname = "net.sf.jasperreports.view.JasperDesignViewer" fork = "true">
<arg value = "-XML" />
<arg value = "-F${file.name}.jrxml" />
<classpath refid = "classpath" />
</java>
</target>
</project>
Next, let's open a command prompt and go to the directory where build.xml is placed. Execute the command ant (As the viewDesignXML is the default target). Output is follows −
C:\tools\jasperreports-5.0.1\test>ant
Buildfile: C:\tools\jasperreports-5.0.1\test\build.xml
viewDesignXML:
[java] log4j:WARN No appenders could be found for logger
(net.sf.jasperreports.engine.xml.JRXmlDigesterFactory).
[java] log4j:WARN Please initialize the log4j system properly.
The Log4j warning can be ignored. As a result of the above execution, a window labeled "JasperDesignViewer" opens, displaying our report template preview.
As we see, only report expressions for obtaining the data are displayed, as JasperDesignViewer doesn't have access to the actual data source or report parameters. Terminate the JasperDesignViewer by closing the window or by hitting Ctrl-c in the command-line window.
How to group dataframe rows into list in Pandas Groupby? - GeeksforGeeks

02 Feb, 2021
Suppose you have a pandas DataFrame consisting of 2 columns, and you want to group the values of one column into lists based on the other. In this article, we will discuss how to do that. First, let's create the dataframe.
Python3
# importing pandas as pd
import pandas as pd

# Create the data frame
df = pd.DataFrame({'column1': ['A', 'B', 'C', 'A', 'C',
                               'C', 'B', 'D', 'D', 'A'],
                   'column2': [5, 10, 15, 20, 25,
                               30, 35, 40, 45, 50]})

# Print the dataframe
df
Output:
Example #1: We can use the groupby() method on column 1 and the apply() method to apply list to every group of the pandas DataFrame.
Python3
# importing pandas as pd
import pandas as pd

# Create the data frame
df = pd.DataFrame({'column1': ['A', 'B', 'C', 'A', 'C',
                               'C', 'B', 'D', 'D', 'A'],
                   'column2': [5, 10, 15, 20, 25,
                               30, 35, 40, 45, 50]})

# Use groupby method and apply
# method on the dataframe
df = df.groupby('column1')['column2'].apply(list)

# Print the dataframe again
df
Output:
Example #2: We can use the groupby() method on column 1 and the agg() method to apply an aggregation, consisting of a lambda function, to every group of the pandas DataFrame.
Python3
# importing pandas as pd
import pandas as pd

# Create the dataframe
df = pd.DataFrame({'column1': ['A', 'B', 'C', 'A', 'C',
                               'C', 'B', 'D', 'D', 'A'],
                   'column2': [5, 10, 15, 20, 25,
                               30, 35, 40, 45, 50]})

# Use groupby method and agg method
# with lambda function on the dataframe
df = df.groupby('column1').agg({'column2': lambda x: list(x)})

# Print the dataframe again
df
Output:
Example #3: We can use the groupby() method on column 1 and the agg() method to apply list as the aggregation to every group of the pandas DataFrame.
Python3
# importing pandas as pd
import pandas as pd

# Create the data frame
df = pd.DataFrame({'column1': ['A', 'B', 'C', 'A', 'C',
                               'C', 'B', 'D', 'D', 'A'],
                   'column2': [5, 10, 15, 20, 25,
                               30, 35, 40, 45, 50]})

# Use groupby method and agg method
# with list as argument on the dataframe
df = df.groupby('column1').agg(list)

df
Output:
Example #4: We can use the groupby() method on column 1 and the agg() method, passing pd.Series.tolist as the argument.
Python3
# importing pandas as pd
import pandas as pd

# Create the data frame
df = pd.DataFrame({'column1': ['A', 'B', 'C', 'A', 'C',
                               'C', 'B', 'D', 'D', 'A'],
                   'column2': [5, 10, 15, 20, 25,
                               30, 35, 40, 45, 50]})

# Use groupby method and agg method with
# pd.Series.tolist as argument on the dataframe
df = df.groupby('column1').agg(pd.Series.tolist)

df
Output:
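As a small follow-up sketch (not one of the article's numbered examples), the grouped result from Example #1 can be turned back into a flat DataFrame by chaining reset_index():

# Assuming the original df from above; groupby + apply(list)
# returns a Series indexed by column1, and reset_index()
# moves that index back into a regular column
out = df.groupby('column1')['column2'].apply(list).reset_index()
print(out)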
Python pandas-dataFrame
Python pandas-groupby
Python-pandas
Python
| [
{
"code": null,
"e": 23901,
"s": 23873,
"text": "\n02 Feb, 2021"
},
{
"code": null,
"e": 24081,
"s": 23901,
"text": "Suppose you have a pandas DataFrame consisting of 2 columns and we want to group these columns. In this article, we will discuss about the same. First, let;s create the dataframe."
},
{
"code": null,
"e": 24089,
"s": 24081,
"text": "Python3"
},
{
"code": "# importing pandas as pdimport pandas as pd # Create the data framedf = pd.DataFrame({'column1': ['A', 'B', 'C', 'A', 'C', 'C', 'B', 'D', 'D', 'A'], 'column2': [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]}) # Print the dataframedf",
"e": 24396,
"s": 24089,
"text": null
},
{
"code": null,
"e": 24404,
"s": 24396,
"text": "Output:"
},
{
"code": null,
"e": 24529,
"s": 24404,
"text": "Example #1: We can use groupby() method on column 1 and apply the method to apply a list on every group of pandas DataFrame."
},
{
"code": null,
"e": 24537,
"s": 24529,
"text": "Python3"
},
{
"code": "# importing pandas as pdimport pandas as pd # Create the data framedf = pd.DataFrame({'column1': ['A', 'B', 'C', 'A', 'C', 'C', 'B', 'D', 'D', 'A'], 'column2': [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]}) # Use groupby method and apply# method on the dataframedf = df.groupby('column1')['column2'].apply(list) # Print the dataframe againdf",
"e": 24958,
"s": 24537,
"text": null
},
{
"code": null,
"e": 24966,
"s": 24958,
"text": "Output:"
},
{
"code": null,
"e": 25128,
"s": 24966,
"text": "Example #2: We can use groupby() method on column 1 and agg() method to apply aggregation, consisting of the lambda function, on every group of pandas DataFrame."
},
{
"code": null,
"e": 25136,
"s": 25128,
"text": "Python3"
},
{
"code": "# importing pandas as pdimport pandas as pd # Create the dataframedf = pd.DataFrame({'column1': ['A', 'B', 'C', 'A', 'C', 'C', 'B', 'D', 'D', 'A'], 'column2': [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]}) # Use groupby method and agg method # with lambda function on the dataframedf = df.groupby('column1').agg({'column2': lambda x: list(x)}) # Print the dataframe againdf",
"e": 25590,
"s": 25136,
"text": null
},
{
"code": null,
"e": 25598,
"s": 25590,
"text": "Output:"
},
{
"code": null,
"e": 25738,
"s": 25598,
"text": "Example #3: We can use the groupby() method on column 1 and agg() method to apply the aggregation list, on every group of pandas DataFrame."
},
{
"code": null,
"e": 25746,
"s": 25738,
"text": "Python3"
},
{
"code": "# importing pandas as pdimport pandas as pd # Create the data framedf = pd.DataFrame({'column1': ['A', 'B', 'C', 'A', 'C', 'C', 'B', 'D', 'D', 'A'], 'column2': [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]}) # Use groupby method and agg method # with list as argument on the dataframedf = df.groupby('column1').agg(list) df",
"e": 26148,
"s": 25746,
"text": null
},
{
"code": null,
"e": 26156,
"s": 26148,
"text": "Output:"
},
{
"code": null,
"e": 26271,
"s": 26156,
"text": "Example #4: We can use groupby() method on column 1 and agg() method by passing ‘pd.Series.tolist’ as an argument."
},
{
"code": null,
"e": 26279,
"s": 26271,
"text": "Python3"
},
{
"code": "# importing pandas as pdimport pandas as pd # Create the data framedf = pd.DataFrame({'column1': ['A', 'B', 'C', 'A', 'C', 'C', 'B', 'D', 'D', 'A'], 'column2': [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]}) # Use groupby method and agg method with# pd.Series.tolist as argument on the dataframedf = df.groupby('column1').agg(pd.Series.tolist) df",
"e": 26705,
"s": 26279,
"text": null
},
{
"code": null,
"e": 26713,
"s": 26705,
"text": "Output:"
},
{
"code": null,
"e": 26737,
"s": 26713,
"text": "Python pandas-dataFrame"
},
{
"code": null,
"e": 26759,
"s": 26737,
"text": "Python pandas-groupby"
},
{
"code": null,
"e": 26773,
"s": 26759,
"text": "Python-pandas"
},
{
"code": null,
"e": 26780,
"s": 26773,
"text": "Python"
},
{
"code": null,
"e": 26878,
"s": 26780,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26887,
"s": 26878,
"text": "Comments"
},
{
"code": null,
"e": 26900,
"s": 26887,
"text": "Old Comments"
},
{
"code": null,
"e": 26932,
"s": 26900,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 26988,
"s": 26932,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 27030,
"s": 26988,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 27072,
"s": 27030,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 27108,
"s": 27072,
"text": "Python | Pandas dataframe.groupby()"
},
{
"code": null,
"e": 27130,
"s": 27108,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 27169,
"s": 27130,
"text": "Python | Get unique values from a list"
},
{
"code": null,
"e": 27196,
"s": 27169,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 27227,
"s": 27196,
"text": "Python | os.path.join() method"
}
]
|
Angular7 - Http Client | HttpClient will help us fetch external data, post to it, etc. We need to import the http module to make use of the http service. Let us consider an example to understand how to make use of the http service.
To start using the http service, we need to import the module in app.module.ts as shown below −
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppRoutingModule , RoutingComponent} from './app-routing.module';
import { AppComponent } from './app.component';
import { NewCmpComponent } from './new-cmp/new-cmp.component';
import { ChangeTextDirective } from './change-text.directive';
import { SqrtPipe } from './app.sqrt';
import { MyserviceService } from './myservice.service';
import { HttpClientModule } from '@angular/common/http';
@NgModule({
declarations: [
SqrtPipe,
AppComponent,
NewCmpComponent,
ChangeTextDirective,
RoutingComponent
],
imports: [
BrowserModule,
AppRoutingModule,
HttpClientModule
],
providers: [MyserviceService],
bootstrap: [AppComponent]
})
export class AppModule { }
As shown in the code above, we have imported the HttpClientModule from @angular/common/http, and the same is also added to the imports array.
We will fetch the data from the server using httpclient module declared above. We will do that inside a service we created in the previous chapter and use the data inside the components which we want.
myservice.service.ts
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
@Injectable({
providedIn: 'root'
})
export class MyserviceService {
private finaldata = [];
private apiurl = "http://jsonplaceholder.typicode.com/users";
constructor(private http: HttpClient) { }
getData() {
return this.http.get(this.apiurl);
}
}
A method called getData has been added; it returns the data fetched from the given URL.
The method getData is called from app.component.ts as follows −
import { Component } from '@angular/core';
import { MyserviceService } from './myservice.service';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent {
title = 'Angular 7 Project!';
public persondata = [];
constructor(private myservice: MyserviceService) {}
ngOnInit() {
this.myservice.getData().subscribe((data) => {
this.persondata = Array.from(Object.keys(data), k=>data[k]);
console.log(this.persondata);
});
}
}
We are calling the method getData, which returns an Observable. The subscribe method is used on it, with an arrow function that receives the data we need.
When we check in the browser, the console displays the data as shown below −
Let us use the data in app.component.html as follows −
<h3>Users Data</h3>
<ul>
   <li *ngFor="let item of persondata; let i = index">
      {{item.name}}
   </li>
</ul>
Output
| [
{
"code": null,
"e": 2268,
"s": 2061,
"text": "HttpClient will help us fetch external data, post to it, etc. We need to import the http module to make use of the http service. Let us consider an example to understand how to make use of the http service."
},
{
"code": null,
"e": 2364,
"s": 2268,
"text": "To start using the http service, we need to import the module in app.module.ts as shown below −"
},
{
"code": null,
"e": 3194,
"s": 2364,
"text": "import { BrowserModule } from '@angular/platform-browser';\nimport { NgModule } from '@angular/core';\nimport { AppRoutingModule , RoutingComponent} from './app-routing.module';\nimport { AppComponent } from './app.component';\nimport { NewCmpComponent } from './new-cmp/new-cmp.component';\nimport { ChangeTextDirective } from './change-text.directive';\nimport { SqrtPipe } from './app.sqrt';\nimport { MyserviceService } from './myservice.service';\nimport { HttpClientModule } from '@angular/common/http';\n\n@NgModule({\n declarations: [\n SqrtPipe,\n AppComponent,\n NewCmpComponent,\n ChangeTextDirective,\n RoutingComponent\n ],\n imports: [\n BrowserModule,\n AppRoutingModule,\n HttpClientModule\n ],\n providers: [MyserviceService],\n bootstrap: [AppComponent]\n})\nexport class AppModule { }"
},
{
"code": null,
"e": 3340,
"s": 3194,
"text": "If you see the highlighted code, we have imported the HttpClientModule from @angular/common/http and the same is also added in the imports array."
},
{
"code": null,
"e": 3541,
"s": 3340,
"text": "We will fetch the data from the server using httpclient module declared above. We will do that inside a service we created in the previous chapter and use the data inside the components which we want."
},
{
"code": null,
"e": 3562,
"s": 3541,
"text": "myservice.service.ts"
},
{
"code": null,
"e": 3928,
"s": 3562,
"text": "import { Injectable } from '@angular/core';\nimport { HttpClient } from '@angular/common/http';\n@Injectable({\n providedIn: 'root'\n})\nexport class MyserviceService {\n private finaldata = [];\n private apiurl = \"http://jsonplaceholder.typicode.com/users\";\n constructor(private http: HttpClient) { }\n getData() {\n return this.http.get(this.apiurl);\n }\n}"
},
{
"code": null,
"e": 4016,
"s": 3928,
"text": "There is a method added called getData that returns the data fetched for the url given."
},
{
"code": null,
"e": 4080,
"s": 4016,
"text": "The method getData is called from app.component.ts as follows −"
},
{
"code": null,
"e": 4636,
"s": 4080,
"text": "import { Component } from '@angular/core';\nimport { MyserviceService } from './myservice.service';\n@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css']\n})\nexport class AppComponent {\n title = 'Angular 7 Project!';\n public persondata = [];\n constructor(private myservice: MyserviceService) {}\n ngOnInit() {\n this.myservice.getData().subscribe((data) => {\n this.persondata = Array.from(Object.keys(data), k=>data[k]);\n console.log(this.persondata);\n });\n }\n}"
},
{
"code": null,
"e": 4798,
"s": 4636,
"text": "We are calling the method getData which gives back an observable type data. The subscribe method is used on it which has an arrow function with the data we need."
},
{
"code": null,
"e": 4875,
"s": 4798,
"text": "When we check in the browser, the console displays the data as shown below −"
},
{
"code": null,
"e": 4930,
"s": 4875,
"text": "Let us use the data in app.component.html as follows −"
},
{
"code": null,
"e": 5046,
"s": 4930,
"text": "<h3>Users Data</h3>\n<ul>\n <li *ngFor=\"let item of persondata; let i = index\"<\n {{item.name}}\n </li>\n</ul>\n"
},
{
"code": null,
"e": 5053,
"s": 5046,
"text": "Output"
},
{
"code": null,
"e": 5088,
"s": 5053,
"text": "\n 16 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 5102,
"s": 5088,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 5137,
"s": 5102,
"text": "\n 28 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 5151,
"s": 5137,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 5186,
"s": 5151,
"text": "\n 11 Lectures \n 7.5 hours \n"
},
{
"code": null,
"e": 5206,
"s": 5186,
"text": " SHIVPRASAD KOIRALA"
},
{
"code": null,
"e": 5241,
"s": 5206,
"text": "\n 16 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 5258,
"s": 5241,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 5291,
"s": 5258,
"text": "\n 69 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 5303,
"s": 5291,
"text": " Senol Atac"
},
{
"code": null,
"e": 5338,
"s": 5303,
"text": "\n 53 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 5350,
"s": 5338,
"text": " Senol Atac"
},
{
"code": null,
"e": 5357,
"s": 5350,
"text": " Print"
},
{
"code": null,
"e": 5368,
"s": 5357,
"text": " Add Notes"
}
]
|
Biopython - Testing Techniques | Biopython have extensive test script to test the software under different conditions to make sure that the software is bug-free. To run the test script, download the source code of the Biopython and then run the below command −
python run_tests.py
This will run all the test scripts and gives the following output −
Python version: 2.7.12 (v2.7.12:d33e0cf91556, Jun 26 2016, 12:10:39)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
Operating system: posix darwin
test_Ace ... ok
test_Affy ... ok
test_AlignIO ... ok
test_AlignIO_ClustalIO ... ok
test_AlignIO_EmbossIO ... ok
test_AlignIO_FastaIO ... ok
test_AlignIO_MauveIO ... ok
test_AlignIO_PhylipIO ... ok
test_AlignIO_convert ... ok
...........................................
...........................................
We can also run an individual test script as specified below −
python test_AlignIO.py
As we have learned, Biopython is one of the important software packages in the field of bioinformatics. Being written in Python (easy to learn and write), it provides extensive functionality to deal with any computation and operation in the field of bioinformatics. It also provides an easy and flexible interface to almost all the popular bioinformatics software, so that their functionality can be exploited as well.
| [
{
"code": null,
"e": 2334,
"s": 2106,
"text": "Biopython have extensive test script to test the software under different conditions to make sure that the software is bug-free. To run the test script, download the source code of the Biopython and then run the below command −"
},
{
"code": null,
"e": 2355,
"s": 2334,
"text": "python run_tests.py\n"
},
{
"code": null,
"e": 2423,
"s": 2355,
"text": "This will run all the test scripts and gives the following output −"
},
{
"code": null,
"e": 2894,
"s": 2423,
"text": "Python version: 2.7.12 (v2.7.12:d33e0cf91556, Jun 26 2016, 12:10:39) \n[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] \nOperating system: posix darwin \ntest_Ace ... ok \ntest_Affy ... ok \ntest_AlignIO ... ok \ntest_AlignIO_ClustalIO ... ok \ntest_AlignIO_EmbossIO ... ok \ntest_AlignIO_FastaIO ... ok \ntest_AlignIO_MauveIO ... ok \ntest_AlignIO_PhylipIO ... ok \ntest_AlignIO_convert ... ok \n........................................... \n...........................................\n"
},
{
"code": null,
"e": 2954,
"s": 2894,
"text": "We can also run individual test script as specified below −"
},
{
"code": null,
"e": 2978,
"s": 2954,
"text": "python test_AlignIO.py\n"
},
{
"code": null,
"e": 3372,
"s": 2978,
"text": "As we have learned, Biopython is one of the important software in the field of bioinformatics. Being written in python (easy to learn and write), It provides extensive functionality to deal with any computation and operation in the field of bioinformatics. It also provides easy and flexible interface to almost all the popular bioinformatics software to exploit the its functionality as well."
},
{
"code": null,
"e": 3379,
"s": 3372,
"text": " Print"
},
{
"code": null,
"e": 3390,
"s": 3379,
"text": " Add Notes"
}
]
|
CSS - speak | The speak property is used to specify that the text will be used for aural media. It also specifies how it will be spoken, for example normal or spelt out.
normal − Directs the user agent to speak the text using the pronunciation rules for that element and its children.

none − Prevents the element from being spoken.

spell-out − Causes the user agent to speak the text one letter at a time, which is useful for speaking acronyms.
object.style.speak = "spell-out";
Applies to − All the HTML elements
Here is the example −
<style type = "text/css">
<!--
acronym {speak: spell-out;}
*.hidden {speak: none;}
-->
</style>
| [
{
"code": null,
"e": 2782,
"s": 2626,
"text": "The speak property is used to specify that the text will be used for aural media. It also specifies how it will be spoken, for example normal or spelt out."
},
{
"code": null,
"e": 2897,
"s": 2782,
"text": "normal − Directs the user agent to speak the text using the pronunciation rules for that element and its children."
},
{
"code": null,
"e": 3012,
"s": 2897,
"text": "normal − Directs the user agent to speak the text using the pronunciation rules for that element and its children."
},
{
"code": null,
"e": 3059,
"s": 3012,
"text": "none − Prevents the element from being spoken."
},
{
"code": null,
"e": 3106,
"s": 3059,
"text": "none − Prevents the element from being spoken."
},
{
"code": null,
"e": 3219,
"s": 3106,
"text": "spell-out − Causes the user agent to speak the text one letter at a time, which is useful for speaking acronyms."
},
{
"code": null,
"e": 3332,
"s": 3219,
"text": "spell-out − Causes the user agent to speak the text one letter at a time, which is useful for speaking acronyms."
},
{
"code": null,
"e": 3367,
"s": 3332,
"text": "object.style.speak = \"spell-out\";\n"
},
{
"code": null,
"e": 3390,
"s": 3367,
"text": "All the HTML elements "
},
{
"code": null,
"e": 3412,
"s": 3390,
"text": "Here is the example −"
},
{
"code": null,
"e": 3526,
"s": 3412,
"text": "<style type = \"text/css\">\n <!--\n acronym {speak: spell-out;}\n *.hidden {speak: none;}\n -->\n</style>"
},
{
"code": null,
"e": 3561,
"s": 3526,
"text": "\n 33 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 3575,
"s": 3561,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 3610,
"s": 3575,
"text": "\n 26 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 3627,
"s": 3610,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 3662,
"s": 3627,
"text": "\n 44 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 3693,
"s": 3662,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 3728,
"s": 3693,
"text": "\n 21 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 3759,
"s": 3728,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 3794,
"s": 3759,
"text": "\n 51 Lectures \n 7.5 hours \n"
},
{
"code": null,
"e": 3825,
"s": 3794,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 3858,
"s": 3825,
"text": "\n 52 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 3889,
"s": 3858,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 3896,
"s": 3889,
"text": " Print"
},
{
"code": null,
"e": 3907,
"s": 3896,
"text": " Add Notes"
}
]
|
Creating multiple Java objects by one type only | You can create a List of object easily. Consider the following example, where I'll create an array of Employee objects and print their details in a for loop.
import java.util.ArrayList;
import java.util.List;

public class Tester {
private int data;
public int getData() {
return data;
}
public void setData(int data) {
this.data = data;
}
public Tester(int data){
this.data = data;
}
public static void main(String[] args) {
List<Tester> testerList = new ArrayList<Tester>();
testerList.add(new Tester(1));
testerList.add(new Tester(2));
testerList.add(new Tester(3));
testerList.add(new Tester(4));
for(Tester tester : testerList){
System.out.println(tester.getData());
}
}
} | [
{
"code": null,
"e": 1220,
"s": 1062,
"text": "You can create a List of object easily. Consider the following example, where I'll create an array of Employee objects and print their details in a for loop."
},
{
"code": null,
"e": 1913,
"s": 1220,
"text": "import java.lang.reflect.InvocationTargetException;\nimport java.util.ArrayList;\nimport java.util.List;\n\npublic class Tester implements Cloneable {\n private int data;\n\n public int getData() {\n return data;\n }\n public void setData(int data) {\n this.data = data;\n }\n public Tester(int data){\n this.data = data;\n }\n\n public static void main(String[] args) {\n List<Tester> testerList = new ArrayList<Tester>();\n\n testerList.add(new Tester(1));\n testerList.add(new Tester(2));\n testerList.add(new Tester(3));\n testerList.add(new Tester(4));\n\n for(Tester tester : testerList){\n System.out.println(tester.getData());\n }\n }\n}"
}
]
|
Permutation Coefficient - GeeksforGeeks | 22 Feb, 2022
Permutation refers to the process of arranging all the members of a given set to form a sequence. The number of permutations on a set of n elements is given by n!, where "!" represents factorial.
The Permutation Coefficient represented by P(n, k) is used to represent the number of ways to obtain an ordered subset having k elements from a set of n elements. Mathematically it's given as:
P(n, k) = n! / (n - k)!   (Image Source: Wiki)
Examples:
P(10, 2) = 90
P(10, 3) = 720
P(10, 0) = 1
P(10, 1) = 10
The coefficient can also be computed recursively using the below recursive formula:
P(n, k) = P(n-1, k) + k * P(n-1, k-1)
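To make the recurrence concrete, here is a minimal recursive sketch in Python (illustrative only, not one of the solutions below; it recomputes the same subproblems many times):

# Naive recursion for P(n, k) = P(n-1, k) + k * P(n-1, k-1)
def permutation_coeff_rec(n, k):

    # Base cases: one way to arrange zero elements,
    # no way to arrange more elements than we have
    if k == 0:
        return 1
    if k > n:
        return 0

    return (permutation_coeff_rec(n - 1, k) +
            k * permutation_coeff_rec(n - 1, k - 1))

print(permutation_coeff_rec(10, 2))  # 90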
If we observe closely, we can analyze that the problem has overlapping substructure, hence we can apply dynamic programming here. Below is a program implementing the same idea.
C
Java
Python3
C#
PHP
Javascript
// A Dynamic Programming based
// solution that uses table P[][]
// to calculate the Permutation
// Coefficient
#include<bits/stdc++.h>

// Returns value of Permutation
// Coefficient P(n, k)
int permutationCoeff(int n, int k)
{
    int P[n + 1][k + 1];

    // Calculate value of Permutation
    // Coefficient in bottom up manner
    for (int i = 0; i <= n; i++)
    {
        for (int j = 0; j <= std::min(i, k); j++)
        {
            // Base Cases
            if (j == 0)
                P[i][j] = 1;

            // Calculate value using
            // previously stored values
            else
                P[i][j] = P[i - 1][j] +
                          (j * P[i - 1][j - 1]);

            // This step is important
            // as P(i,j)=0 for j>i
            P[i][j + 1] = 0;
        }
    }
    return P[n][k];
}

// Driver Code
int main()
{
    int n = 10, k = 2;
    printf("Value of P(%d, %d) is %d ",
           n, k, permutationCoeff(n, k));
    return 0;
}

// Java code for Dynamic Programming based
// solution that uses table P[][] to
// calculate the Permutation Coefficient
import java.io.*;
import java.math.*;

class GFG
{
    // Returns value of Permutation
    // Coefficient P(n, k)
    static int permutationCoeff(int n, int k)
    {
        int P[][] = new int[n + 2][k + 2];

        // Calculate value of Permutation
        // Coefficient in bottom up manner
        for (int i = 0; i <= n; i++)
        {
            for (int j = 0; j <= Math.min(i, k); j++)
            {
                // Base Cases
                if (j == 0)
                    P[i][j] = 1;

                // Calculate value using previously
                // stored values
                else
                    P[i][j] = P[i - 1][j] +
                              (j * P[i - 1][j - 1]);

                // This step is important
                // as P(i,j)=0 for j>i
                P[i][j + 1] = 0;
            }
        }
        return P[n][k];
    }

    // Driver Code
    public static void main(String args[])
    {
        int n = 10, k = 2;
        System.out.println("Value of P( " + n + "," + k + ")"
                           + " is " + permutationCoeff(n, k));
    }
}
// This code is contributed by Nikita Tiwari.

# A Dynamic Programming based
# solution that uses
# table P[][] to calculate the
# Permutation Coefficient

# Returns value of Permutation
# Coefficient P(n, k)
def permutationCoeff(n, k):

    P = [[0 for i in range(k + 1)]
            for j in range(n + 1)]

    # Calculate value of Permutation
    # Coefficient in
    # bottom up manner
    for i in range(n + 1):
        for j in range(min(i, k) + 1):

            # Base cases
            if (j == 0):
                P[i][j] = 1

            # Calculate value using
            # previously stored values
            else:
                P[i][j] = P[i - 1][j] + (
                          j * P[i - 1][j - 1])

            # This step is important
            # as P(i, j) = 0 for j>i
            if (j < k):
                P[i][j + 1] = 0

    return P[n][k]

# Driver Code
n = 10
k = 2
print("Value of P(", n, ", ", k, ") is ",
      permutationCoeff(n, k), sep = "")

# This code is contributed by Soumen Ghosh.

// C# code for Dynamic Programming based
// solution that uses table P[][] to
// calculate the Permutation Coefficient
using System;

class GFG
{
    // Returns value of Permutation
    // Coefficient P(n, k)
    static int permutationCoeff(int n, int k)
    {
        int [,]P = new int[n + 2, k + 2];

        // Calculate value of Permutation
        // Coefficient in bottom up manner
        for (int i = 0; i <= n; i++)
        {
            for (int j = 0; j <= Math.Min(i, k); j++)
            {
                // Base Cases
                if (j == 0)
                    P[i, j] = 1;

                // Calculate value using previously
                // stored values
                else
                    P[i, j] = P[i - 1, j] +
                              (j * P[i - 1, j - 1]);

                // This step is important
                // as P(i,j)=0 for j>i
                P[i, j + 1] = 0;
            }
        }
        return P[n, k];
    }

    // Driver Code
    public static void Main()
    {
        int n = 10, k = 2;
        Console.WriteLine("Value of P( " + n + "," + k + ")"
                          + " is " + permutationCoeff(n, k));
    }
}
// This code is contributed by anuj_67..

<?php
// A Dynamic Programming based
// solution that uses table P[][]
// to calculate the Permutation
// Coefficient

// Returns value of Permutation
// Coefficient P(n, k)
function permutationCoeff( $n, $k)
{
    $P = array(array());

    // Calculate value of Permutation
    // Coefficient in bottom up manner
    for($i = 0; $i <= $n; $i++)
    {
        for($j = 0; $j <= min($i, $k); $j++)
        {
            // Base Cases
            if ($j == 0)
                $P[$i][$j] = 1;

            // Calculate value using
            // previously stored values
            else
                $P[$i][$j] = $P[$i - 1][$j] +
                             ($j * $P[$i - 1][$j - 1]);

            // This step is important
            // as P(i,j)=0 for j>i
            $P[$i][$j + 1] = 0;
        }
    }
    return $P[$n][$k];
}

// Driver Code
$n = 10;
$k = 2;
echo "Value of P(", $n, " ,", $k, ") is ",
     permutationCoeff($n, $k);

// This code is contributed by anuj_67.
?>

<script>
    // Javascript code for Dynamic Programming based
    // solution that uses table P[][] to
    // calculate the Permutation Coefficient

    // Returns value of Permutation
    // Coefficient P(n, k)
    function permutationCoeff(n, k)
    {
        let P = new Array(n + 2);
        for(let i = 0; i < n + 2; i++)
        {
            P[i] = new Array(k + 2);
        }

        // Calculate value of Permutation
        // Coefficient in bottom up manner
        for (let i = 0; i <= n; i++)
        {
            for (let j = 0; j <= Math.min(i, k); j++)
            {
                // Base Cases
                if (j == 0)
                    P[i][j] = 1;

                // Calculate value using previously
                // stored values
                else
                    P[i][j] = P[i - 1][j] +
                              (j * P[i - 1][j - 1]);

                // This step is important
                // as P(i,j)=0 for j>i
                P[i][j + 1] = 0;
            }
        }
        return P[n][k];
    }

    let n = 10, k = 2;
    document.write("Value of P(" + n + "," + k + ")"
                   + " is " + permutationCoeff(n, k));

    // This code is contributed by decode2207.
</script>
Output :
Value of P(10, 2) is 90
Here, as we can see, the time complexity is O(n*k) and the space complexity is O(n*k), as the program uses an auxiliary matrix to store the result.
Can we do it in O(n) time? Let us suppose we maintain a single 1D array to compute the factorials up to n. We can then use the computed factorial values and apply the formula P(n, k) = n! / (n-k)!. Below is a program illustrating the same concept.
C++
C
Java
Python3
C#
PHP
Javascript
// A O(n) solution that uses
// table fact[] to calculate
// the Permutation Coefficient
#include<bits/stdc++.h>
using namespace std;

// Returns value of Permutation
// Coefficient P(n, k)
int permutationCoeff(int n, int k)
{
    int fact[n + 1];

    // Base case
    fact[0] = 1;

    // Calculate value
    // factorials up to n
    for(int i = 1; i <= n; i++)
        fact[i] = i * fact[i - 1];

    // P(n,k) = n! / (n - k)!
    return fact[n] / fact[n - k];
}

// Driver Code
int main()
{
    int n = 10, k = 2;
    cout << "Value of P(" << n << ", " << k << ") is "
         << permutationCoeff(n, k);
    return 0;
}
// This code is contributed by shubhamsingh10

// A O(n) solution that uses
// table fact[] to calculate
// the Permutation Coefficient
#include<bits/stdc++.h>

// Returns value of Permutation
// Coefficient P(n, k)
int permutationCoeff(int n, int k)
{
    int fact[n + 1];

    // base case
    fact[0] = 1;

    // Calculate value
    // factorials up to n
    for (int i = 1; i <= n; i++)
        fact[i] = i * fact[i - 1];

    // P(n,k) = n! / (n - k)!
    return fact[n] / fact[n - k];
}

// Driver Code
int main()
{
    int n = 10, k = 2;
    printf ("Value of P(%d, %d) is %d ",
            n, k, permutationCoeff(n, k) );
    return 0;
}

// A O(n) solution that uses
// table fact[] to calculate
// the Permutation Coefficient
import java .io.*;

public class GFG {

    // Returns value of Permutation
    // Coefficient P(n, k)
    static int permutationCoeff(int n, int k)
    {
        int []fact = new int[n+1];

        // base case
        fact[0] = 1;

        // Calculate value
        // factorials up to n
        for (int i = 1; i <= n; i++)
            fact[i] = i * fact[i - 1];

        // P(n,k) = n! / (n - k)!
        return fact[n] / fact[n - k];
    }

    // Driver Code
    static public void main (String[] args)
    {
        int n = 10, k = 2;
        System.out.println("Value of" + " P( " + n + ", "
                           + k + ") is "
                           + permutationCoeff(n, k) );
    }
}
// This code is contributed by anuj_67.

# A O(n) solution that uses
# table fact[] to calculate
# the Permutation Coefficient

# Returns value of Permutation
# Coefficient P(n, k)
def permutationCoeff(n, k):

    fact = [0 for i in range(n + 1)]

    # base case
    fact[0] = 1

    # Calculate value
    # factorials up to n
    for i in range(1, n + 1):
        fact[i] = i * fact[i - 1]

    # P(n, k) = n!/(n-k)!
    return int(fact[n] / fact[n - k])

# Driver Code
n = 10
k = 2
print("Value of P(", n, ", ", k, ") is ",
      permutationCoeff(n, k), sep = "")

# This code is contributed
# by Soumen Ghosh

// A O(n) solution that uses
// table fact[] to calculate
// the Permutation Coefficient
using System;

public class GFG {

    // Returns value of Permutation
    // Coefficient P(n, k)
    static int permutationCoeff(int n, int k)
    {
        int []fact = new int[n+1];

        // base case
        fact[0] = 1;

        // Calculate value
        // factorials up to n
        for (int i = 1; i <= n; i++)
            fact[i] = i * fact[i - 1];

        // P(n,k) = n! / (n - k)!
        return fact[n] / fact[n - k];
    }

    // Driver Code
    static public void Main ()
    {
        int n = 10, k = 2;
        Console.WriteLine("Value of" + " P( " + n + ", "
                          + k + ") is "
                          + permutationCoeff(n, k) );
    }
}
// This code is contributed by anuj_67.

<?php
// A O(n) Solution that
// uses table fact[] to
// calculate the Permutation
// Coefficient

// Returns value of Permutation
// Coefficient P(n, k)
function permutationCoeff($n, $k)
{
    $fact = array();

    // base case
    $fact[0] = 1;

    // Calculate value
    // factorials up to n
    for ($i = 1; $i <= $n; $i++)
        $fact[$i] = $i * $fact[$i - 1];

    // P(n,k)= n!/(n-k)!
    return $fact[$n] / $fact[$n - $k];
}

// Driver Code
$n = 10; $k = 2;
echo "Value of P(", $n, " ", $k, ") is ",
     permutationCoeff($n, $k) ;

// This code is contributed by anuj_67.
?>

<script>
    // A O(n) solution that uses
    // table fact[] to calculate
    // the Permutation Coefficient

    // Returns value of Permutation
    // Coefficient P(n, k)
    function permutationCoeff(n, k)
    {
        let fact = new Array(n+1);

        // base case
        fact[0] = 1;

        // Calculate value
        // factorials up to n
        for (let i = 1; i <= n; i++)
            fact[i] = i * fact[i - 1];

        // P(n,k) = n! / (n - k)!
        return parseInt(fact[n] / fact[n - k], 10);
    }

    let n = 10, k = 2;
    document.write("Value of" + " P(" + n + ", " + k
                   + ") is " + permutationCoeff(n, k) );
</script>
Output :
Value of P(10, 2) is 90
An O(n) time and O(1) Extra Space Solution
C++
Java
C#
PHP
Javascript
Python3
// A O(n) time and O(1) extra
// space solution to calculate
// the Permutation Coefficient
#include <iostream>
using namespace std;

int PermutationCoeff(int n, int k)
{
    int P = 1;

    // Compute n*(n-1)*(n-2)....(n-k+1)
    for (int i = 0; i < k; i++)
        P *= (n-i) ;

    return P;
}

// Driver Code
int main()
{
    int n = 10, k = 2;
    cout << "Value of P(" << n << ", " << k << ") is "
         << PermutationCoeff(n, k);
    return 0;
}

// A O(n) time and O(1) extra
// space solution to calculate
// the Permutation Coefficient
import java.io.*;

class GFG
{
    static int PermutationCoeff(int n, int k)
    {
        int Fn = 1, Fk = 1;

        // Compute n! and (n-k)!
        for (int i = 1; i <= n; i++)
        {
            Fn *= i;
            if (i == n - k)
                Fk = Fn;
        }
        int coeff = Fn / Fk;
        return coeff;
    }

    // Driver Code
    public static void main(String args[])
    {
        int n = 10, k = 2;
        System.out.println("Value of P( " + n + "," + k
                           + ") is "
                           + PermutationCoeff(n, k) );
    }
}
// This code is contributed by Nikita Tiwari.

// A O(n) time and O(1) extra
// space solution to calculate
// the Permutation Coefficient
using System;

class GFG {

    static int PermutationCoeff(int n, int k)
    {
        int Fn = 1, Fk = 1;

        // Compute n! and (n-k)!
        for (int i = 1; i <= n; i++)
        {
            Fn *= i;
            if (i == n - k)
                Fk = Fn;
        }
        int coeff = Fn / Fk;
        return coeff;
    }

    // Driver Code
    public static void Main()
    {
        int n = 10, k = 2;
        Console.WriteLine("Value of P( " + n + "," + k
                          + ") is "
                          + PermutationCoeff(n, k) );
    }
}
// This code is contributed by anuj_67.

<?php
// A O(n) time and O(1) extra
// space PHP solution to calculate
// the Permutation Coefficient

function PermutationCoeff( $n, $k)
{
    $Fn = 1; $Fk;

    // Compute n! and (n-k)!
    for ( $i = 1; $i <= $n; $i++)
    {
        $Fn *= $i;
        if ($i == $n - $k)
            $Fk = $Fn;
    }
    $coeff = $Fn / $Fk;
    return $coeff;
}

// Driver Code
$n = 10; $k = 2;
echo "Value of P(" , $n , ", " , $k , ") is " ,
     PermutationCoeff($n, $k);

// This code is contributed by anuj_67.
?>

<script>

// A O(n) time and O(1) extra
// space solution to calculate
// the Permutation Coefficient

function PermutationCoeff(n, k)
{
    let P = 1;

    // Compute n*(n-1)*(n-2)....(n-k+1)
    for(let i = 0; i < k; i++)
        P *= (n - i);

    return P;
}

// Driver code
let n = 10, k = 2;
document.write("Value of P(" + n + ", " + k + ") is " +
               PermutationCoeff(n, k));

// This code is contributed by divyesh072019

</script>

# A O(n) time and O(1) extra space
# solution to calculate the
# Permutation Coefficient

# Returns value of Permutation
# Coefficient P(n, k)
def permutationCoeff(n, k):
    f = 1
    for i in range(k):
        # P(n,k) = n*(n-1)*(n-2)*....(n-k-1)
        f *= (n - i)
    return f

# This code is contributed by Suyash Saxena

# Driver Code
n = 10
k = 2
print("Value of P(", n, ", ", k, ") is ",
      permutationCoeff(n, k))
Output :
Value of P(10, 2) is 90
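As a side note (not part of the original solutions), Python 3.8+ exposes this computation directly in the standard library, which makes a handy cross-check:

import math

# math.perm(n, k) computes n! / (n - k)!
print(math.perm(10, 2))  # 90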
Thanks to Shiva Kumar for suggesting this solution. This article is contributed by Ashutosh Kumar. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
vt_m
isajidiqbal
nidhi_biet
SHUBHAMSINGH10
divyesh072019
suresh07
decode2207
suyashsincever13
simmytarika5
germanshephered48
permutation
Dynamic Programming
Dynamic Programming
permutation
| [
{
"code": null,
"e": 24187,
"s": 24159,
"text": "\n22 Feb, 2022"
},
{
"code": null,
"e": 24385,
"s": 24187,
"text": "Permutation refers to the process of arranging all the members of a given set to form a sequence. The number of permutations on a set of n elements is given by n! , where “!” represents factorial. "
},
{
"code": null,
"e": 24579,
"s": 24385,
"text": "The Permutation Coefficient represented by P(n, k) is used to represent the number of ways to obtain an ordered subset having k elements from a set of n elements.Mathematically it’s given as: "
},
{
"code": null,
"e": 24610,
"s": 24579,
"text": "Image Source : WikiExamples : "
},
{
"code": null,
"e": 24666,
"s": 24610,
"text": "P(10, 2) = 90\nP(10, 3) = 720\nP(10, 0) = 1\nP(10, 1) = 10"
},
{
"code": null,
"e": 24752,
"s": 24666,
"text": "The coefficient can also be computed recursively using the below recursive formula: "
},
{
"code": null,
"e": 24792,
"s": 24752,
"text": "P(n, k) = P(n-1, k) + k* P(n-1, k-1) "
},
{
"code": null,
"e": 24970,
"s": 24792,
"text": "If we observe closely, we can analyze that the problem has overlapping substructure, hence we can apply dynamic programming here. Below is a program implementing the same idea. "
},
{
"code": null,
"e": 24972,
"s": 24970,
"text": "C"
},
{
"code": null,
"e": 24977,
"s": 24972,
"text": "Java"
},
{
"code": null,
"e": 24985,
"s": 24977,
"text": "Python3"
},
{
"code": null,
"e": 24988,
"s": 24985,
"text": "C#"
},
{
"code": null,
"e": 24992,
"s": 24988,
"text": "PHP"
},
{
"code": null,
"e": 25003,
"s": 24992,
"text": "Javascript"
},
{
"code": "// A Dynamic Programming based// solution that uses table P[][]// to calculate the Permutation// Coefficient#include<bits/stdc++.h> // Returns value of Permutation// Coefficient P(n, k)int permutationCoeff(int n, int k){ int P[n + 1][k + 1]; // Calculate value of Permutation // Coefficient in bottom up manner for (int i = 0; i <= n; i++) { for (int j = 0; j <= std::min(i, k); j++) { // Base Cases if (j == 0) P[i][j] = 1; // Calculate value using // previously stored values else P[i][j] = P[i - 1][j] + (j * P[i - 1][j - 1]); // This step is important // as P(i,j)=0 for j>i P[i][j + 1] = 0; } } return P[n][k];} // Driver Codeint main(){ int n = 10, k = 2; printf(\"Value of P(%d, %d) is %d \", n, k, permutationCoeff(n, k)); return 0;}",
"e": 25949,
"s": 25003,
"text": null
},
{
"code": "// Java code for Dynamic Programming based// solution that uses table P[][] to// calculate the Permutation Coefficientimport java.io.*;import java.math.*; class GFG{ // Returns value of Permutation // Coefficient P(n, k) static int permutationCoeff(int n, int k) { int P[][] = new int[n + 2][k + 2]; // Calculate value of Permutation // Coefficient in bottom up manner for (int i = 0; i <= n; i++) { for (int j = 0; j <= Math.min(i, k); j++) { // Base Cases if (j == 0) P[i][j] = 1; // Calculate value using previously // stored values else P[i][j] = P[i - 1][j] + (j * P[i - 1][j - 1]); // This step is important // as P(i,j)=0 for j>i P[i][j + 1] = 0; } } return P[n][k]; } // Driver Code public static void main(String args[]) { int n = 10, k = 2; System.out.println(\"Value of P( \" + n + \",\"+ k +\")\" + \" is \" + permutationCoeff(n, k) ); }} // This code is contributed by Nikita Tiwari.",
"e": 27255,
"s": 25949,
"text": null
},
{
"code": "# A Dynamic Programming based# solution that uses# table P[][] to calculate the# Permutation Coefficient # Returns value of Permutation# Coefficient P(n, k)def permutationCoeff(n, k): P = [[0 for i in range(k + 1)] for j in range(n + 1)] # Calculate value of Permutation # Coefficient in # bottom up manner for i in range(n + 1): for j in range(min(i, k) + 1): # Base cases if (j == 0): P[i][j] = 1 # Calculate value using # previously stored values else: P[i][j] = P[i - 1][j] + ( j * P[i - 1][j - 1]) # This step is important # as P(i, j) = 0 for j>i if (j < k): P[i][j + 1] = 0 return P[n][k] # Driver Coden = 10k = 2print(\"Value of P(\", n, \", \", k, \") is \", permutationCoeff(n, k), sep = \"\") # This code is contributed by Soumen Ghosh.",
"e": 28195,
"s": 27255,
"text": null
},
{
"code": "// C# code for Dynamic Programming based// solution that uses table P[][] to// calculate the Permutation Coefficientusing System; class GFG{ // Returns value of Permutation // Coefficient P(n, k) static int permutationCoeff(int n, int k) { int [,]P = new int[n + 2,k + 2]; // Calculate value of Permutation // Coefficient in bottom up manner for (int i = 0; i <= n; i++) { for (int j = 0; j <= Math.Min(i, k); j++) { // Base Cases if (j == 0) P[i,j] = 1; // Calculate value using previously // stored values else P[i,j] = P[i - 1,j] + (j * P[i - 1,j - 1]); // This step is important // as P(i,j)=0 for j>i P[i,j + 1] = 0; } } return P[n,k]; } // Driver Code public static void Main() { int n = 10, k = 2; Console.WriteLine(\"Value of P( \" + n + \",\"+ k +\")\" + \" is \" + permutationCoeff(n, k) ); }} // This code is contributed by anuj_67..",
"e": 29472,
"s": 28195,
"text": null
},
{
"code": "<?php// A Dynamic Programming based// solution that uses table P[][]// to calculate the Permutation// Coefficient // Returns value of Permutation// Coefficient P(n, k)function permutationCoeff( $n, $k){ $P = array(array()); // Calculate value of Permutation // Coefficient in bottom up manner for($i = 0; $i <= $n; $i++) { for($j = 0; $j <= min($i, $k); $j++) { // Base Cases if ($j == 0) $P[$i][$j] = 1; // Calculate value using // previously stored values else $P[$i][$j] = $P[$i - 1][$j] + ($j * $P[$i - 1][$j - 1]); // This step is important // as P(i,j)=0 for j>i $P[$i][$j + 1] = 0; } } return $P[$n][$k];} // Driver Code $n = 10; $k = 2; echo \"Value of P(\",$n,\" ,\",$k,\") is \", permutationCoeff($n, $k); // This code is contributed by anuj_67.?>",
"e": 30448,
"s": 29472,
"text": null
},
{
"code": "<script> // Javascript code for Dynamic Programming based // solution that uses table P[][] to // calculate the Permutation Coefficient // Returns value of Permutation // Coefficient P(n, k) function permutationCoeff(n, k) { let P = new Array(n + 2); for(let i = 0; i < n + 2; i++) { P[i] = new Array(k + 2); } // Calculate value of Permutation // Coefficient in bottom up manner for (let i = 0; i <= n; i++) { for (let j = 0; j <= Math.min(i, k); j++) { // Base Cases if (j == 0) P[i][j] = 1; // Calculate value using previously // stored values else P[i][j] = P[i - 1][j] + (j * P[i - 1][j - 1]); // This step is important // as P(i,j)=0 for j>i P[i][j + 1] = 0; } } return P[n][k]; } let n = 10, k = 2; document.write(\"Value of P(\" + n + \",\"+ k +\")\" + \" is \" + permutationCoeff(n, k) ); // This code is contributed by decode2207.</script>",
"e": 31637,
"s": 30448,
"text": null
},
{
"code": null,
"e": 31647,
"s": 31637,
"text": "Output : "
},
{
"code": null,
"e": 31672,
"s": 31647,
"text": "Value of P(10, 2) is 90 "
},
{
"code": null,
"e": 31813,
"s": 31672,
"text": "Here as we can see the time complexity is O(n*k) and space complexity is O(n*k) as the program uses an auxiliary matrix to store the result."
},
{
"code": null,
"e": 32051,
"s": 31813,
"text": "Can we do it in O(n) time ?Let us suppose we maintain a single 1D array to compute the factorials up to n. We can use computed factorial value and apply the formula P(n, k) = n! / (n-k)!. Below is a program illustrating the same concept."
},
{
"code": null,
"e": 32055,
"s": 32051,
"text": "C++"
},
{
"code": null,
"e": 32057,
"s": 32055,
"text": "C"
},
{
"code": null,
"e": 32062,
"s": 32057,
"text": "Java"
},
{
"code": null,
"e": 32070,
"s": 32062,
"text": "Python3"
},
{
"code": null,
"e": 32073,
"s": 32070,
"text": "C#"
},
{
"code": null,
"e": 32077,
"s": 32073,
"text": "PHP"
},
{
"code": null,
"e": 32088,
"s": 32077,
"text": "Javascript"
},
{
"code": "// A O(n) solution that uses// table fact[] to calculate// the Permutation Coefficient#include<bits/stdc++.h>using namespace std; // Returns value of Permutation// Coefficient P(n, k)int permutationCoeff(int n, int k){ int fact[n + 1]; // Base case fact[0] = 1; // Calculate value // factorials up to n for(int i = 1; i <= n; i++) fact[i] = i * fact[i - 1]; // P(n,k) = n! / (n - k)! return fact[n] / fact[n - k];} // Driver Codeint main(){ int n = 10, k = 2; cout << \"Value of P(\" << n << \", \" << k << \") is \" << permutationCoeff(n, k); return 0;} // This code is contributed by shubhamsingh10",
"e": 32742,
"s": 32088,
"text": null
},
{
"code": "// A O(n) solution that uses// table fact[] to calculate// the Permutation Coefficient#include<bits/stdc++.h> // Returns value of Permutation// Coefficient P(n, k)int permutationCoeff(int n, int k){ int fact[n + 1]; // base case fact[0] = 1; // Calculate value // factorials up to n for (int i = 1; i <= n; i++) fact[i] = i * fact[i - 1]; // P(n,k) = n! / (n - k)! return fact[n] / fact[n - k];} // Driver Codeint main(){ int n = 10, k = 2; printf (\"Value of P(%d, %d) is %d \", n, k, permutationCoeff(n, k) ); return 0;}",
"e": 33317,
"s": 32742,
"text": null
},
{
"code": "// A O(n) solution that uses// table fact[] to calculate// the Permutation Coefficientimport java .io.*; public class GFG { // Returns value of Permutation // Coefficient P(n, k) static int permutationCoeff(int n, int k) { int []fact = new int[n+1]; // base case fact[0] = 1; // Calculate value // factorials up to n for (int i = 1; i <= n; i++) fact[i] = i * fact[i - 1]; // P(n,k) = n! / (n - k)! return fact[n] / fact[n - k]; } // Driver Code static public void main (String[] args) { int n = 10, k = 2; System.out.println(\"Value of\" + \" P( \" + n + \", \" + k + \") is \" + permutationCoeff(n, k) ); }} // This code is contributed by anuj_67.",
"e": 34137,
"s": 33317,
"text": null
},
{
"code": "# A O(n) solution that uses# table fact[] to calculate# the Permutation Coefficient # Returns value of Permutation# Coefficient P(n, k)def permutationCoeff(n, k): fact = [0 for i in range(n + 1)] # base case fact[0] = 1 # Calculate value # factorials up to n for i in range(1, n + 1): fact[i] = i * fact[i - 1] # P(n, k) = n!/(n-k)! return int(fact[n] / fact[n - k]) # Driver Coden = 10k = 2print(\"Value of P(\", n, \", \", k, \") is \", permutationCoeff(n, k), sep = \"\") # This code is contributed# by Soumen Ghosh",
"e": 34689,
"s": 34137,
"text": null
},
{
"code": "// A O(n) solution that uses// table fact[] to calculate// the Permutation Coefficientusing System; public class GFG { // Returns value of Permutation // Coefficient P(n, k) static int permutationCoeff(int n, int k) { int []fact = new int[n+1]; // base case fact[0] = 1; // Calculate value // factorials up to n for (int i = 1; i <= n; i++) fact[i] = i * fact[i - 1]; // P(n,k) = n! / (n - k)! return fact[n] / fact[n - k]; } // Driver Code static public void Main () { int n = 10, k = 2; Console.WriteLine(\"Value of\" + \" P( \" + n + \", \" + k + \") is \" + permutationCoeff(n, k) ); }} // This code is contributed by anuj_67.",
"e": 35490,
"s": 34689,
"text": null
},
{
"code": "<?php// A O(n) Solution that// uses table fact[] to// calculate the Permutation// Coefficient // Returns value of Permutation// Coefficient P(n, k)function permutationCoeff($n, $k){ $fact = array(); // base case $fact[0] = 1; // Calculate value // factorials up to n for ($i = 1; $i <= $n; $i++) $fact[$i] = $i * $fact[$i - 1]; // P(n,k)= n!/(n-k)! return $fact[$n] / $fact[$n - $k];} // Driver Code $n = 10; $k = 2; echo\"Value of P(\",$n,\" \", $k,\") is \", permutationCoeff($n, $k) ; // This code is contributed by anuj_67. ?>",
"e": 36096,
"s": 35490,
"text": null
},
{
"code": "<script> // A O(n) solution that uses // table fact[] to calculate // the Permutation Coefficient // Returns value of Permutation // Coefficient P(n, k) function permutationCoeff(n, k) { let fact = new Array(n+1); // base case fact[0] = 1; // Calculate value // factorials up to n for (let i = 1; i <= n; i++) fact[i] = i * fact[i - 1]; // P(n,k) = n! / (n - k)! return parseInt(fact[n] / fact[n - k], 10); } let n = 10, k = 2; document.write(\"Value of\" + \" P(\" + n + \", \" + k + \") is \" + permutationCoeff(n, k) ); </script>",
"e": 36801,
"s": 36096,
"text": null
},
{
"code": null,
"e": 36810,
"s": 36801,
"text": "Output :"
},
{
"code": null,
"e": 36835,
"s": 36810,
"text": "Value of P(10, 2) is 90 "
},
{
"code": null,
"e": 36878,
"s": 36835,
"text": "A O(n) time and O(1) Extra Space Solution "
},
{
"code": null,
"e": 36882,
"s": 36878,
"text": "C++"
},
{
"code": null,
"e": 36887,
"s": 36882,
"text": "Java"
},
{
"code": null,
"e": 36890,
"s": 36887,
"text": "C#"
},
{
"code": null,
"e": 36894,
"s": 36890,
"text": "PHP"
},
{
"code": null,
"e": 36905,
"s": 36894,
"text": "Javascript"
},
{
"code": null,
"e": 36913,
"s": 36905,
"text": "Python3"
},
{
"code": "// A O(n) time and O(1) extra// space solution to calculate// the Permutation Coefficient#include <iostream>using namespace std; int PermutationCoeff(int n, int k){ int P = 1; // Compute n*(n-1)*(n-2)....(n-k+1) for (int i = 0; i < k; i++) P *= (n-i) ; return P;} // Driver Codeint main(){ int n = 10, k = 2; cout << \"Value of P(\" << n << \", \" << k << \") is \" << PermutationCoeff(n, k); return 0;}",
"e": 37350,
"s": 36913,
"text": null
},
{
"code": "// A O(n) time and O(1) extra// space solution to calculate// the Permutation Coefficientimport java.io.*; class GFG{ static int PermutationCoeff(int n, int k) { int Fn = 1, Fk = 1; // Compute n! and (n-k)! for (int i = 1; i <= n; i++) { Fn *= i; if (i == n - k) Fk = Fn; } int coeff = Fn / Fk; return coeff; } // Driver Code public static void main(String args[]) { int n = 10, k = 2; System.out.println(\"Value of P( \" + n + \",\" + k +\") is \" + PermutationCoeff(n, k) ); }} // This code is contributed by Nikita Tiwari.",
"e": 38099,
"s": 37350,
"text": null
},
{
"code": "// A O(n) time and O(1) extra// space solution to calculate// the Permutation Coefficientusing System; class GFG { static int PermutationCoeff(int n, int k) { int Fn = 1, Fk = 1; // Compute n! and (n-k)! for (int i = 1; i <= n; i++) { Fn *= i; if (i == n - k) Fk = Fn; } int coeff = Fn / Fk; return coeff; } // Driver Code public static void Main() { int n = 10, k = 2; Console.WriteLine(\"Value of P( \" + n + \",\" + k +\") is \" + PermutationCoeff(n, k) ); }} // This code is contributed by anuj_67.",
"e": 38807,
"s": 38099,
"text": null
},
{
"code": "<?php// A O(n) time and O(1) extra// space PHP solution to calculate// the Permutation Coefficient function PermutationCoeff( $n, $k){ $Fn = 1; $Fk; // Compute n! and (n-k)! for ( $i = 1; $i <= $n; $i++) { $Fn *= $i; if ($i == $n - $k) $Fk = $Fn; } $coeff = $Fn / $Fk; return $coeff;} // Driver Code$n = 10; $k = 2;echo \"Value of P(\" , $n , \", \" , $k , \") is \" , PermutationCoeff($n, $k); // This code is contributed by anuj_67.?>",
"e": 39288,
"s": 38807,
"text": null
},
{
"code": "<script> // A O(n) time and O(1) extra// space solution to calculate// the Permutation Coefficient function PermutationCoeff(n, k){ let P = 1; // Compute n*(n-1)*(n-2)....(n-k+1) for(let i = 0; i < k; i++) P *= (n - i); return P;} // Driver codelet n = 10, k = 2;document.write(\"Value of P(\" + n + \", \" + k + \") is \" + PermutationCoeff(n, k)); // This code is contributed by divyesh072019 </script>",
"e": 39780,
"s": 39288,
"text": null
},
{
"code": "# A O(n) solution that uses# table fact[] to calculate# the Permutation Coefficient # Returns value of Permutation# Coefficient P(n, k)def permutationCoeff(n, k): f=1 for i in range(k): #P(n,k)=n*(n-1)*(n-2)*....(n-k-1) f*=(n-i) return f #This code is contributed by Suyash Saxena # Driver Coden = 10k = 2print(\"Value of P(\", n, \", \", k, \") is \", permutationCoeff(n, k))",
"e": 40182,
"s": 39780,
"text": null
},
{
"code": null,
"e": 40192,
"s": 40182,
"text": "Output : "
},
{
"code": null,
"e": 40217,
"s": 40192,
"text": "Value of P(10, 2) is 90 "
},
{
"code": null,
"e": 40440,
"s": 40217,
"text": "Thanks to Shiva Kumar for suggesting this solution.This article is contributed by Ashutosh Kumar. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above "
},
{
"code": null,
"e": 40445,
"s": 40440,
"text": "vt_m"
},
{
"code": null,
"e": 40457,
"s": 40445,
"text": "isajidiqbal"
},
{
"code": null,
"e": 40468,
"s": 40457,
"text": "nidhi_biet"
},
{
"code": null,
"e": 40483,
"s": 40468,
"text": "SHUBHAMSINGH10"
},
{
"code": null,
"e": 40497,
"s": 40483,
"text": "divyesh072019"
},
{
"code": null,
"e": 40506,
"s": 40497,
"text": "suresh07"
},
{
"code": null,
"e": 40517,
"s": 40506,
"text": "decode2207"
},
{
"code": null,
"e": 40534,
"s": 40517,
"text": "suyashsincever13"
},
{
"code": null,
"e": 40547,
"s": 40534,
"text": "simmytarika5"
},
{
"code": null,
"e": 40565,
"s": 40547,
"text": "germanshephered48"
},
{
"code": null,
"e": 40577,
"s": 40565,
"text": "permutation"
},
{
"code": null,
"e": 40597,
"s": 40577,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 40617,
"s": 40597,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 40629,
"s": 40617,
"text": "permutation"
},
{
"code": null,
"e": 40727,
"s": 40629,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 40736,
"s": 40727,
"text": "Comments"
},
{
"code": null,
"e": 40749,
"s": 40736,
"text": "Old Comments"
},
{
"code": null,
"e": 40780,
"s": 40749,
"text": "Bellman–Ford Algorithm | DP-23"
},
{
"code": null,
"e": 40813,
"s": 40780,
"text": "Floyd Warshall Algorithm | DP-16"
},
{
"code": null,
"e": 40881,
"s": 40813,
"text": "Travelling Salesman Problem | Set 1 (Naive and Dynamic Programming)"
},
{
"code": null,
"e": 40944,
"s": 40881,
"text": "Overlapping Subproblems Property in Dynamic Programming | DP-1"
},
{
"code": null,
"e": 40965,
"s": 40944,
"text": "Edit Distance | DP-5"
},
{
"code": null,
"e": 41002,
"s": 40965,
"text": "Minimum number of jumps to reach end"
},
{
"code": null,
"e": 41065,
"s": 41002,
"text": "Efficient program to print all prime factors of a given number"
},
{
"code": null,
"e": 41087,
"s": 41065,
"text": "Cutting a Rod | DP-13"
},
{
"code": null,
"e": 41120,
"s": 41087,
"text": "Longest Common Substring | DP-29"
}
]
|
Cluster Analysis: Create, Visualize and Interpret Customer Segments | by Maarten Grootendorst | Towards Data Science | Although we have seen a large influx of supervised machine learning techniques being used in organizations these methods suffer from, typically, one large issue; a need for labeled data. Fortunately, many unsupervised methods exist for clustering data into previously unseen groups, thereby extracting new insights from your clientele.
This article will guide you through the ins and outs of clustering customers. Note that I will not only show you which sklearn package you can use but more importantly, how they can be used and what to look out for.
As always, the data is relatively straightforward and you can follow along with the notebook here. It contains customer information from a Telecom company and is typically used to predict churn:
There are many unsupervised clustering algorithms out there and although each of them has significant strengths in certain situations, I will discuss two that are commonly used.
In my experience, this is by far the most frequently used algorithm for clustering data. k-Means starts by choosing k random centers, which you can set yourself. Then, all data points are assigned to the closest center based on their Euclidean distance. Next, new centers are calculated and the data points are updated (see gif below). This process continues until the clusters do not change between iterations.
Now in the example above the three cluster centers start very close to each other. This typically does not work well as it will have a harder time finding clusters. Instead, you can use k-means++ to improve the initialization of the centers. It starts with an initial center and makes sure that all subsequent centers are sufficiently far away. This optimizes the selection and creation of centers.
You can then determine the optimal k clusters by using something called the elbow method. You want to find the point of diminishing returns when selecting a range of clusters. You can do this by plotting the number of clusters on the X-axis and the inertia (within-cluster sum-of-squares criterion) on the Y-axis. You then select k for which you find a bend:
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

scores = [KMeans(n_clusters=i+2).fit(df).inertia_ for i in range(10)]
sns.lineplot(np.arange(2, 12), scores)
plt.xlabel('Number of clusters')
plt.ylabel("Inertia")
plt.title("Inertia of k-Means versus number of clusters")
You can see the bend at the orange square. Thus, we selected k=4 clusters to be generated using k-Means.
One thing to note, since k-Means typically uses Euclidean distance to calculate the distances it does not work well with high dimensional data sets due to the curse of dimensionality. This curse, in part, states that Euclidean distances at high dimensionality have very little meaning since they are often very close together.
The data that we use is somewhat high dimensional since we have 27 features.
A solution would be to use the Cosine distance which works better in high dimensional space. Since Cosine distance and Euclidean distance are connected linearly for normalized vectors we can simply normalize our data.
from sklearn import preprocessing

normalized_vectors = preprocessing.normalize(df)
scores = [KMeans(n_clusters=i+2).fit(normalized_vectors).inertia_ for i in range(10)]
sns.lineplot(np.arange(2, 12), scores)
plt.xlabel('Number of clusters')
plt.ylabel("Inertia")
plt.title("Inertia of Cosine k-Means versus number of clusters")
plt.savefig("intertia_cosine_kmeans.jpg", dpi=300)
k-Means can be computationally quite expensive. Faster alternatives to this method are MiniBatchKMeans and BIRCH. Both methods are quicker to generate clusters, but the quality of those clusters are typically less than those generated by k-Means.
Clustering can also be done based on the density of data points. One example is Density-Based Spatial Clustering of Applications with Noise (DBSCAN) which clusters data points if they are sufficiently dense. DBSCAN identifies clusters and expands them by scanning neighborhoods. If it cannot find any points to add it simply moves on to a new point hoping it will find a new cluster. Any points that lack enough neighbors to be clustered are classified as noise:
The difference with k-means is that DBSCAN does not require you to specify the number of clusters. The two main parameters for DBSCAN are the minimum number of points that constitute a cluster (minPts) and the size of the neighborhood (eps).
You typically do not want minPts to be very small as clusters from noise will be generated. As a rule of thumb, it is best to set minPts to at least the number of features in your data. eps is a bit more difficult to optimize and could require a k-distance graph to find the right value. Using small values is often preferred.
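Since finding a good eps via a k-distance graph comes up often, here is a rough sketch of how one might build such a graph with scikit-learn (reusing the imports from the snippets above, and assuming df is the same feature DataFrame used throughout; the min_samples choice follows the rule of thumb just mentioned):

from sklearn.neighbors import NearestNeighbors

min_samples = df.shape[1] + 1
nn = NearestNeighbors(n_neighbors=min_samples).fit(df)
distances, _ = nn.kneighbors(df)

# sort the distance to each point's k-th nearest neighbor; the "elbow"
# of this curve is a reasonable candidate for eps
sns.lineplot(np.arange(len(df)), np.sort(distances[:, -1]))
plt.xlabel('Points sorted by distance')
plt.ylabel('Distance to {}-th nearest neighbor'.format(min_samples))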
An alternative to DBSCAN is OPTICS, which has a similar performance to DBSCAN but does not explicitly need to set eps.
The next step is to perform the actual clustering and try to interpret both the quality of the clusters as well as their content.
To start evaluating clusters you first need to understand the things that make a good cluster. Although many definitions and methods exist for evaluating clusters, one of the most frequently used methods is calculating something called the Silhouette score.
The Silhouette score measures the separability between clusters based on the distances between and within clusters. It calculates the mean intra-cluster distance (a), which is the mean distance within a cluster, and the mean nearest-cluster distance (b), which is the distance between a sample and the nearest cluster it is not a part of, for each sample. Then, the Silhouette coefficient for a sample is (b - a) / max(a, b).
Let’s calculate the Silhouette Score for all previously mentioned methods:
from sklearn.metrics import silhouette_score
from sklearn.cluster import DBSCAN

# Prepare models
kmeans = KMeans(n_clusters=4).fit(df)

normalized_vectors = preprocessing.normalize(df)
normalized_kmeans = KMeans(n_clusters=4).fit(normalized_vectors)

min_samples = df.shape[1] + 1
dbscan = DBSCAN(eps=3.5, min_samples=min_samples).fit(df)

# Print results
print('kmeans: {}'.format(silhouette_score(df, kmeans.labels_, metric='euclidean')))
print('Cosine kmeans: {}'.format(silhouette_score(normalized_vectors, normalized_kmeans.labels_, metric='cosine')))
print('DBSCAN: {}'.format(silhouette_score(df, dbscan.labels_, metric='cosine')))
It is not a surprise to see that the cosine-based k-Means outperforms k-Means due to the number of features (27) that we have in the data. It is interesting to see that DBSCAN performs similarly well.
However, although objective measures are preferred, I believe that when it comes to unsupervised clustering, visually examining the clusters is one of the best ways to evaluate them. Never blindly follow objective measures. Make sure that you always inspect what exactly is happening!
Thus, next up are methods for visualizing clusters in 2d and 3d.
To visualize the clusters you can use one of the most popular methods for dimensionality reduction, namely PCA and t-SNE.
PCA works by using orthogonal transformations to convert correlated features into a set of linearly uncorrelated features. What is left are features that contain the largest possible variance. For an in-depth overview of PCA see this article.
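Note that the prepare_pca helper used below is defined in the accompanying notebook rather than in the article. A minimal sketch of what it might look like (the name and signature are taken from the call below; the body is an assumption):

import pandas as pd
from sklearn.decomposition import PCA

def prepare_pca(n_components, data, labels):
    # reduce the data and attach the cluster labels for plotting
    names = ['x', 'y', 'z']
    matrix = PCA(n_components=n_components).fit_transform(data)
    df_matrix = pd.DataFrame(matrix, columns=names[:n_components])
    df_matrix['labels'] = labels
    return df_matrix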
We can then visualize our data in 2d:
pca_df = prepare_pca(2, df, normalized_kmeans.labels_)
sns.scatterplot(x=pca_df.x, y=pca_df.y, hue=pca_df.labels, palette="Set2")
Although PCA might have been successful in reducing the dimensionality of the data, it does not seem to visualize the clusters very intuitively. This happens often with high dimensional data: the points are typically clustered around the same point, and PCA extracts that information.
Instead, we can use an algorithm called t-SNE which is specifically made to create an intuitive representation/visualization of the data.
t-SNE is an algorithm for visualizing high dimensional data. It uses local relationships between points to create a low-dimensional mapping which results in capturing non-linear structures.
It starts by creating a probability distribution (i.e., Gaussian) which dictates the relationships between neighboring points. Then, it constructs a low dimensional space that follows that distribution as closely as possible using the Student t-distribution. Now you may wonder why it uses a Student t-distribution at this step. Well, a Gaussian distribution has a short tail which squashes nearby points together. If you use a Student t-distribution then the tail is longer and points are more likely to be separated.
Let’s implement t-SNE in 3d and see if we can better visualize the clusters:
tsne_3d_df = prepare_tsne(3, df, kmeans.labels_)
tsne_3d_df['normalized_kmeans'] = normalized_kmeans.labels_
tsne_3d_df['dbscan'] = dbscan.labels_

plot_animation(tsne_3d_df, 'kmeans', 'kmeans')
plot_animation(tsne_3d_df, 'normalized_kmeans', 'normalized_kmeans')
plot_animation(tsne_3d_df, 'dbscan', 'dbscan')
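As with prepare_pca, the prepare_tsne and plot_animation helpers live in the notebook. Assuming prepare_tsne mirrors prepare_pca, a sketch could be (plot_animation, which builds a rotating 3d scatter plot, is omitted here):

import pandas as pd
from sklearn.manifold import TSNE

def prepare_tsne(n_components, data, labels):
    # hypothetical reconstruction, mirroring prepare_pca above
    names = ['x', 'y', 'z']
    matrix = TSNE(n_components=n_components).fit_transform(data)
    df_matrix = pd.DataFrame(matrix, columns=names[:n_components])
    df_matrix['labels'] = labels
    return df_matrix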
t-SNE gives a much more intuitive visual representation of the data. As can be seen in the animations, both cosine k-Means and DBSCAN seem to create logical clusters.
Now that we have segmented our customers it would be nice if we would know what makes each cluster unique. This will help us understand which types of customers we have.
One approach is to simply plot all variables and see where the differences are between clusters. This approach, however, fails when dealing with more than 10 variables as it would be difficult to visualize and interpret:
The solution would be to select a subset of variables that, to a certain extent, are important when defining clusters. There are two methods that I want to demonstrate here, namely variance between averaged groups and extracting feature importance through predictive modeling.
One assumption of variable importance in clustering tasks is that if the average value of a variable differs significantly across clusters, that variable is likely important in creating the clusters.
We start by simply aggregating the data based on the generated clusters and retrieving the mean value per variable:
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
# keep the original column names so the features can be selected by name later on
df_scaled = pd.DataFrame(scaler.fit_transform(df), columns=df.columns)
df_scaled['dbscan'] = dbscan.labels_

df_mean = (df_scaled.loc[df_scaled.dbscan != -1, :]
           .groupby('dbscan').mean())
I ignored the -1 cluster since that is defined as noise by DBSCAN. The data were scaled between 0 and 1 for easier visualization.
Next, I simply calculate the variance of means between clusters within each variable and select the top 7 variables with the highest variance:
results = pd.DataFrame(columns=['Variable', 'Var'])
for column in df_mean.columns[1:]:
    results.loc[len(results), :] = [column, np.var(df_mean[column])]

selected_columns = list(results.sort_values(
    'Var', ascending=False).head(7).Variable.values) + ['dbscan']

tidy = df_scaled[selected_columns].melt(id_vars='dbscan')
sns.barplot(x='dbscan', y='value', hue='variable', data=tidy)
You can now more clearly see differences between clusters. For example, in cluster 0 you can see that every single person has no Internet service while most other clusters contain those with Internet service. Moreover, we can see that cluster 2 contains only people with both Fiber optic and Phone services, which implies that those are either bought together or are part of the same package.
NOTE: I did not take standard deviation, skewness, and kurtosis into account, which are important when comparing variables. The method above is simply the first step in selecting variables.
Lastly, we can use the clusters as a target variable and then apply Random Forest to understand which features are important in the generation of the clusters. This method requires a bit more work since you will have to check the accuracy of your model to accurately extract important features.
In this example I am going to skip that step since we are dealing with imbalanced targets and multiple classes:
from sklearn.ensemble import RandomForestClassifier

# here the last column of df is assumed to hold the cluster labels (the target)
X, y = df.iloc[:, :-1], df.iloc[:, -1]
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
data = np.array([clf.feature_importances_, X.columns]).T

columns = list(pd.DataFrame(data, columns=['Importance', 'Feature'])
               .sort_values("Importance", ascending=False)
               .head(7).Feature.values)

tidy = df_scaled[columns + ['dbscan']].melt(id_vars='dbscan')
sns.barplot(x='dbscan', y='value', hue='variable', data=tidy)
We can see that similar features are selected when comparing to the variance analysis that we did before. Since this method requires a bit more work in the form of validation I would suggest using the variance method described before.
Hopefully, this article helps you start with understanding the principles behind clustering algorithms and most importantly how to apply them.
If you are, like me, passionate about AI, Data Science, or Psychology, please feel free to add me on LinkedIn or follow me on Twitter.
Notebook with code can be found here. | [
{
"code": null,
"e": 508,
"s": 172,
"text": "Although we have seen a large influx of supervised machine learning techniques being used in organizations these methods suffer from, typically, one large issue; a need for labeled data. Fortunately, many unsupervised methods exist for clustering data into previously unseen groups, thereby extracting new insights from your clientele."
},
{
"code": null,
"e": 724,
"s": 508,
"text": "This article will guide you through the ins and outs of clustering customers. Note that I will not only show you which sklearn package you can use but more importantly, how they can be used and what to look out for."
},
{
"code": null,
"e": 919,
"s": 724,
"text": "As always, the data is relatively straightforward and you can follow along with the notebook here. It contains customer information from a Telecom company and is typically used to predict churn:"
},
{
"code": null,
"e": 1097,
"s": 919,
"text": "There are many unsupervised clustering algorithms out there and although each of them has significant strengths in certain situations, I will discuss two that are commonly used."
},
{
"code": null,
"e": 1505,
"s": 1097,
"text": "In my experience, this is by far the most frequently used algorithm for clustering data. k-Means starts by choosing k random centers which you can set yourself. Then, all data points are assigned to the closest center based on their Euclidean distance. Next, new centers are calculated and the data points are updated (see gif below). This process continuous until clusters do not change between iterations."
},
{
"code": null,
"e": 1904,
"s": 1505,
"text": "Now in the example above the three cluster centers start very close to each other. This typically does not work well as it will have a harder time finding clusters. Instead, you can use k-means++ to improve the initialization of the centers. It starts with an initial center and makes sure that all subsequent centers are sufficiently far away. This optimizes the selection and creation of centers."
},
{
"code": null,
"e": 2263,
"s": 1904,
"text": "You can then determine the optimal k clusters by using something called the elbow method. You want to find the point of diminishing returns when selecting a range of clusters. You can do this by plotting the number of clusters on the X-axis and the inertia (within-cluster sum-of-squares criterion) on the Y-axis. You then select k for which you find a bend:"
},
{
"code": null,
"e": 2577,
"s": 2263,
"text": "import seaborn as snsimport matplotlib.pyplot as pltfrom sklearn.cluster import KMeansscores = [KMeans(n_clusters=i+2).fit(df).inertia_ for i in range(10)]sns.lineplot(np.arange(2, 12), scores)plt.xlabel('Number of clusters')plt.ylabel(\"Inertia\")plt.title(\"Inertia of k-Means versus number of clusters\")"
},
{
"code": null,
"e": 2682,
"s": 2577,
"text": "You can see the bend at the orange square. Thus, we selected k=4 clusters to be generated using k-Means."
},
{
"code": null,
"e": 3009,
"s": 2682,
"text": "One thing to note, since k-Means typically uses Euclidean distance to calculate the distances it does not work well with high dimensional data sets due to the curse of dimensionality. This curse, in part, states that Euclidean distances at high dimensionality have very little meaning since they are often very close together."
},
{
"code": null,
"e": 3086,
"s": 3009,
"text": "The data that we use is somewhat high dimensional since we have 27 features."
},
{
"code": null,
"e": 3304,
"s": 3086,
"text": "A solution would be to use the Cosine distance which works better in high dimensional space. Since Cosine distance and Euclidean distance are connected linearly for normalized vectors we can simply normalize our data."
},
{
"code": null,
"e": 3687,
"s": 3304,
"text": "from sklearn import preprocessingnormalized_vectors = preprocessing.normalize(df)scores = [KMeans(n_clusters=i+2).fit(normalized_vectors).inertia_ for i in range(10)]sns.lineplot(np.arange(2, 12), scores)plt.xlabel('Number of clusters')plt.ylabel(\"Inertia\")plt.title(\"Inertia of Cosine k-Means versus number of clusters\")plt.savefig(\"intertia_cosine_kmeans.jpg\", dpi=300)"
},
{
"code": null,
"e": 3934,
"s": 3687,
"text": "k-Means can be computationally quite expensive. Faster alternatives to this method are MiniBatchKMeans and BIRCH. Both methods are quicker to generate clusters, but the quality of those clusters are typically less than those generated by k-Means."
},
{
"code": null,
"e": 4397,
"s": 3934,
"text": "Clustering can also be done based on the density of data points. One example is Density-Based Spatial Clustering of Applications with Noise (DBSCAN) which clusters data points if they are sufficiently dense. DBSCAN identifies clusters and expands them by scanning neighborhoods. If it cannot find any points to add it simply moves on to a new point hoping it will find a new cluster. Any points that lack enough neighbors to be clustered are classified as noise:"
},
{
"code": null,
"e": 4639,
"s": 4397,
"text": "The difference with k-means is that DBSCAN does not require you to specify the number of clusters. The two main parameters for DBSCAN are the minimum number of points that constitute a cluster (minPts) and the size of the neighborhood (eps)."
},
{
"code": null,
"e": 4966,
"s": 4639,
"text": "You typically do not want minPts to be very small as clusters from noise will be generated. As a rule of thumb, it is best to set minPts to at least the number of features in your data. eps is a bit more difficult to optimize and could require a k-distance graph to find the right value. Using small values is often preferred."
},
{
"code": null,
"e": 5085,
"s": 4966,
"text": "An alternative to DBSCAN is OPTICS, which has a similar performance to DBSCAN but does not explicitly need to set eps."
},
{
"code": null,
"e": 5209,
"s": 5085,
"text": "Next step is to perform the actual clustering and try to interpret both the quality of the clusters as well as its content."
},
{
"code": null,
"e": 5467,
"s": 5209,
"text": "To start evaluating clusters you first need to understand the things that make a good cluster. Although many definitions and methods exist for evaluating clusters, one of the most frequently used methods is calculating something called the Silhouette score."
},
{
"code": null,
"e": 5893,
"s": 5467,
"text": "The Silhouette score measures the separability between clusters based on the distances between and within clusters. It calculates the mean intra-cluster distance (a), which is the mean distance within a cluster, and the mean nearest-cluster distance (b), which is the distance between a sample and the nearest cluster it is not a part of, for each sample. Then, the Silhouette coefficient for a sample is (b - a) / max(a, b)."
},
{
"code": null,
"e": 5968,
"s": 5893,
"text": "Let’s calculate the Silhouette Score for all previously mentioned methods:"
},
{
"code": null,
"e": 6725,
"s": 5968,
"text": "from sklearn.metrics import silhouette_score# Prepare modelskmeans = KMeans(n_clusters=4).fit(df)normalized_vectors = preprocessing.normalize(df)normalized_kmeans = KMeans(n_clusters=4).fit(normalized_vectors)min_samples = df.shape[1]+1 dbscan = DBSCAN(eps=3.5, min_samples=min_samples).fit(df)# Print resultsprint('kmeans: {}'.format(silhouette_score(df, kmeans.labels_, metric='euclidean')))print('Cosine kmeans:{}'.format(silhouette_score(normalized_vectors, normalized_kmeans.labels_, metric='cosine')))print('DBSCAN: {}'.format(silhouette_score(df, dbscan.labels_, metric='cosine')))"
},
{
"code": null,
"e": 6925,
"s": 6725,
"text": "It is not a surprise to see that the cosine-based k-Means outperforms k-Means due to the amount of feature (27) that we have in the data. It is interesting to see that DBSCAN similarly performs well."
},
{
"code": null,
"e": 7208,
"s": 6925,
"text": "However, although objective measures are preferred I believe that when it comes to unsupervised clustering visually examining the clusters is one of the best ways to evaluate them. Never blindly follow objective measures. Make sure that you always inspect what exactly is happening!"
},
{
"code": null,
"e": 7273,
"s": 7208,
"text": "Thus, next up are methods for visualizing clusters in 2d and 3d."
},
{
"code": null,
"e": 7395,
"s": 7273,
"text": "To visualize the clusters you can use one of the most popular methods for dimensionality reduction, namely PCA and t-SNE."
},
{
"code": null,
"e": 7648,
"s": 7395,
"text": "PCA works by using orthogonal transformations to convert correlates features into a set of values of linearly uncorrelated features. What is left are features that contain the largest possible variance. For an in-depth overview of PCA see this article."
},
{
"code": null,
"e": 7686,
"s": 7648,
"text": "We can then visualize our data in 3d:"
},
{
"code": null,
"e": 7831,
"s": 7686,
"text": "pca_df = prepare_pca(2, df, normalized_kmeans.labels_)sns.scatterplot(x=pca_df.x, y=pca_df.y, hue=pca_df.labels, palette=\"Set2\")"
},
{
"code": null,
"e": 8109,
"s": 7831,
"text": "Although PCA might have been successful in reducing the dimensionality of the data, it does not seem to visualize the clusters very intuitively. This happens often with high dimensional data, they are typically clustered around the same point and PCA extracts that information."
},
{
"code": null,
"e": 8247,
"s": 8109,
"text": "Instead, we can use an algorithm called t-SNE which is specifically made to create an intuitive representation/visualization of the data."
},
{
"code": null,
"e": 8437,
"s": 8247,
"text": "t-SNE is an algorithm for visualizing high dimensional data. It uses local relationships between points to create a low-dimensional mapping which results in capturing non-linear structures."
},
{
"code": null,
"e": 8956,
"s": 8437,
"text": "It starts by creating a probability distribution (i.e., Gaussian) which dictates the relationships between neighboring points. Then, it constructs a low dimensional space that follows that distribution as closely as possible using the Student t-distribution. Now you may wonder why it uses a Student t-distribution at this step. Well, a Gaussian distribution has a short tail which squashes nearby points together. If you use a Student t-distribution than the tail is longer and points are more likely to be separated."
},
{
"code": null,
"e": 9033,
"s": 8956,
"text": "Let’s implement t-SNE in 3d and see if we can better visualize the clusters:"
},
{
"code": null,
"e": 9338,
"s": 9033,
"text": "tsne_3d_df = prepare_tsne(3, df, kmeans.labels_)tsne_3d_df['normalized_kmeans'] = normalized_kmeans.labels_tsne_3d_df['dbscan'] = dbscan.labels_plot_animation(tsne_3d_df, 'kmeans', 'kmeans')plot_animation(tsne_3d_df, 'normalized_kmeans', 'normalized_kmeans')plot_animation(tsne_3d_df, 'dbscan', 'dbscan')"
},
{
"code": null,
"e": 9505,
"s": 9338,
"text": "t-SNE gives a much more intuitive visual representation of the data. As can be seen in the animations, both cosine k-Means and DBSCAN seem to create logical clusters."
},
{
"code": null,
"e": 9675,
"s": 9505,
"text": "Now that we have segmented our customers it would be nice if we would know what makes each cluster unique. This will help us understand which types of customers we have."
},
{
"code": null,
"e": 9896,
"s": 9675,
"text": "One approach is to simply plot all variables and see where the differences are between clusters. This approach, however, fails when dealing with more than 10 variables as it would be difficult to visualize and interpret:"
},
{
"code": null,
"e": 10173,
"s": 9896,
"text": "The solution would be to select a subset of variables that, to a certain extent, are important when defining clusters. There are two methods that I want to demonstrate here, namely variance between averaged groups and extracting feature importance through predictive modeling."
},
{
"code": null,
"e": 10391,
"s": 10173,
"text": "One assumption of variable importance in cluster tasks is that if the average value of a variable ordered by clusters differs significantly among each other, that variable is likely important in creating the clusters."
},
{
"code": null,
"e": 10507,
"s": 10391,
"text": "We start by simply aggregating the data based on the generated clusters and retrieving the mean value per variable:"
},
{
"code": null,
"e": 10758,
"s": 10507,
"text": "from sklearn.preprocessing import MinMaxScalerscaler = MinMaxScaler()df_scaled = pd.DataFrame(scaler.fit_transform(df))df_scaled['dbscan'] = dbscan.labels_df_mean = (df_scaled.loc[df_scaled.dbscan!=-1, :] .groupby('dbscan').mean())"
},
{
"code": null,
"e": 10888,
"s": 10758,
"text": "I ignored the -1 cluster since that is defined as noise by DBSCAN. The data were scaled between 0 and 1 for easier visualization."
},
{
"code": null,
"e": 11031,
"s": 10888,
"text": "Next, I simply calculate the variance of means between clusters within each variable and select the top 7 variables with the highest variance:"
},
{
"code": null,
"e": 11421,
"s": 11031,
"text": "results = pd.DataFrame(columns=['Variable', 'Var'])for column in df_mean.columns[1:]: results.loc[len(results), :] = [column, np.var(df_mean[column])]selected_columns = list(results.sort_values( 'Var', ascending=False, ).head(7).Variable.values) + ['dbscan']tidy = df_scaled[selected_columns].melt(id_vars='dbscan')sns.barplot(x='dbscan', y='value', hue='variable', data=tidy)"
},
{
"code": null,
"e": 11805,
"s": 11421,
"text": "You can now more clearly see differences between clusters. For example, in cluster 0 you can see that every single person has no Internet service while most other clusters contain those with Internet service. Moreover, we can see that cluster 2 contains only people with both Fiber optic and Phone services which implies that those are either bought together are of the same package."
},
{
"code": null,
"e": 11991,
"s": 11805,
"text": "NOTE: I did not take standard deviation, skewness, and kurtosis into account which is important in comparing variables. The method above is simply the first step in selecting variables."
},
{
"code": null,
"e": 12286,
"s": 11991,
"text": "Lastly, we can use the clusters as a target variable and then apply Random Forest to understand which features are important in the generation of the clusters. This method requires a bit more work since you will have to check the accuracy of your model to accurately extract important features."
},
{
"code": null,
"e": 12398,
"s": 12286,
"text": "In this example I am going to skip that step since we are dealing with imbalanced targets and multiple classes:"
},
{
"code": null,
"e": 12875,
"s": 12398,
"text": "from sklearn.ensemble import RandomForestClassifierX, y = df.iloc[:,:-1], df.iloc[:,-1]clf = RandomForestClassifier(n_estimators=100).fit(X, y)data = np.array([clf.feature_importances_, X.columns]).Tcolumns = list(pd.DataFrame(data, columns=['Importance', 'Feature']) .sort_values(\"Importance\", ascending=False) .head(7).Feature.values)tidy = df_scaled[columns+['dbscan']].melt(id_vars='dbscan')sns.barplot(x='dbscan', y='value', hue='variable', data=tidy)"
},
{
"code": null,
"e": 13110,
"s": 12875,
"text": "We can see that similar features are selected when comparing to the variance analysis that we did before. Since this method requires a bit more work in the form of validation I would suggest using the variance method described before."
},
{
"code": null,
"e": 13253,
"s": 13110,
"text": "Hopefully, this article helps you start with understanding the principles behind clustering algorithms and most importantly how to apply them."
},
{
"code": null,
"e": 13388,
"s": 13253,
"text": "If you are, like me, passionate about AI, Data Science, or Psychology, please feel free to add me on LinkedIn or follow me on Twitter."
}
]
|
DAX Text - REPLACE function | Replaces part of a text string, based on the number of characters you specify, with a different text string.
REPLACE (<old_text>, <start_num>, <num_chars>, <new_text>)
old_text
The string of text that contains the characters you want to replace, or a reference to a column that contains text.
start_num
The starting position in the old_text that you want to replace with new_text.
num_chars
The number of characters that you want to replace.
new_text
The replacement text for the specified characters in old_text.
A text string.
DAX uses Unicode and therefore stores all characters as the same length.
Note − If the argument, num_chars, is a blank or is a reference to a column that evaluates to a blank, then new_text is inserted at the position start_num, without replacing any characters. This is the same behavior as in Excel.
DAX REPLACE function is similar to DAX SUBSTITUTE function.
You can use REPLACE function, if you want to replace any text of variable length that occurs at a specific position in a text string.

You can use SUBSTITUTE function, if you want to replace specific text in a text string.
= REPLACE([Product],1,2, [No. of Units])
This returns a calculated column with the first two characters of the Product in a row replaced with the value No. of Units in the same row.
{
"code": null,
"e": 2110,
"s": 2001,
"text": "Replaces part of a text string, based on the number of characters you specify, with a different text string."
},
{
"code": null,
"e": 2171,
"s": 2110,
"text": "REPLACE (<old_text>, <start_num>, <num_chars>, <new_text>) \n"
},
{
"code": null,
"e": 2180,
"s": 2171,
"text": "old_text"
},
{
"code": null,
"e": 2296,
"s": 2180,
"text": "The string of text that contains the characters you want to replace, or a reference to a column that contains text."
},
{
"code": null,
"e": 2306,
"s": 2296,
"text": "start_num"
},
{
"code": null,
"e": 2384,
"s": 2306,
"text": "The starting position in the old_text that you want to replace with new_text."
},
{
"code": null,
"e": 2394,
"s": 2384,
"text": "num_chars"
},
{
"code": null,
"e": 2445,
"s": 2394,
"text": "The number of characters that you want to replace."
},
{
"code": null,
"e": 2454,
"s": 2445,
"text": "new_text"
},
{
"code": null,
"e": 2517,
"s": 2454,
"text": "The replacement text for the specified characters in old_text."
},
{
"code": null,
"e": 2532,
"s": 2517,
"text": "A text string."
},
{
"code": null,
"e": 2605,
"s": 2532,
"text": "DAX uses Unicode and therefore stores all characters as the same length."
},
{
"code": null,
"e": 2834,
"s": 2605,
"text": "Note − If the argument, num_chars, is a blank or is a reference to a column that evaluates to a blank, then new_text is inserted at the position start_num, without replacing any characters. This is the same behavior as in Excel."
},
{
"code": null,
"e": 2894,
"s": 2834,
"text": "DAX REPLACE function is similar to DAX SUBSTITUTE function."
},
{
"code": null,
"e": 3028,
"s": 2894,
"text": "You can use REPLACE function, if you want to replace any text of variable length that occurs at a specific position in a text string."
},
{
"code": null,
"e": 3162,
"s": 3028,
"text": "You can use REPLACE function, if you want to replace any text of variable length that occurs at a specific position in a text string."
},
{
"code": null,
"e": 3250,
"s": 3162,
"text": "You can use SUBSTITUTE function, if you want to replace specific text in a text string."
},
{
"code": null,
"e": 3338,
"s": 3250,
"text": "You can use SUBSTITUTE function, if you want to replace specific text in a text string."
},
{
"code": null,
"e": 3380,
"s": 3338,
"text": "= REPLACE([Product],1,2, [No. of Units]) "
},
{
"code": null,
"e": 3521,
"s": 3380,
"text": "This returns a calculated column with the first two characters of the Product in a row replaced with the value No. of Units in the same row."
},
{
"code": null,
"e": 3556,
"s": 3521,
"text": "\n 53 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 3570,
"s": 3556,
"text": " Abhay Gadiya"
},
{
"code": null,
"e": 3603,
"s": 3570,
"text": "\n 24 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 3617,
"s": 3603,
"text": " Randy Minder"
},
{
"code": null,
"e": 3652,
"s": 3617,
"text": "\n 26 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 3666,
"s": 3652,
"text": " Randy Minder"
},
{
"code": null,
"e": 3673,
"s": 3666,
"text": " Print"
},
{
"code": null,
"e": 3684,
"s": 3673,
"text": " Add Notes"
}
]
|
Program to find length of longest contiguously strictly increasing sublist after removal in Python | Suppose we have a list of numbers called nums; we have to find the maximum length of a contiguous strictly increasing sublist. We are allowed to remove at most a single element from the list.
So, if the input is like nums = [35, 5, 6, 7, 8, 9, 12, 11, 26], then the output will be 7, because if we remove 12 from nums, the sublist [5, 6, 7, 8, 9, 11, 26] becomes contiguous and strictly increasing; its length, 7, is the maximum we can achieve.
To solve this, we will follow these steps −
if nums is empty, then return 0
end := a list of the same size as nums, filled with 1
start := a list of the same size as nums, filled with 1
for i in range 1 to size of nums - 1, do
   if nums[i] > nums[i - 1], then
      end[i] := end[i - 1] + 1
for j in range size of nums - 2 down to 0, do
   if nums[j + 1] > nums[j], then
      start[j] := start[j + 1] + 1
res := maximum of the elements of end and the elements of start
for k in range 1 to size of nums - 2, do
   if nums[k - 1] < nums[k + 1], then
      res := maximum of res and (end[k - 1] + start[k + 1])
return res
Let us see the following implementation to get a better understanding −
def solve(nums):
   if not nums:
      return 0
   # end[i] holds the length of the increasing run ending at index i
   end = [1 for i in nums]
   # start[j] holds the length of the increasing run starting at index j
   start = [1 for i in nums]

   for i in range(1, len(nums)):
      if nums[i] > nums[i - 1]:
         end[i] = end[i - 1] + 1

   for j in range(len(nums) - 2, -1, -1):
      if nums[j + 1] > nums[j]:
         start[j] = start[j + 1] + 1

   # best length without removing any element
   res = max(max(end), max(start))

   # try removing element k and joining the runs on either side of it
   for k in range(1, len(nums) - 1):
      if nums[k - 1] < nums[k + 1]:
         res = max(res, end[k - 1] + start[k + 1])

   return res

nums = [35, 5, 6, 7, 8, 9, 12, 11, 26]
print(solve(nums))
Input:
[35, 5, 6, 7, 8, 9, 12, 11, 26]

Output:
7
{
"code": null,
"e": 1252,
"s": 1062,
"text": "Suppose we have a list of numbers called nums, we have to find the maximum length of a contiguous strictly increasing sublist. We are allowed to remove at most single element from the list."
},
{
"code": null,
"e": 1501,
"s": 1252,
"text": "So, if the input is like nums = [35, 5, 6, 7, 8, 9, 12, 11, 26], then the output will be 7, because if we remove 12 from nums, the list will be [5, 6, 7, 8, 9, 11, 26], the length is 7, this is the longest, contiguous, strictly increasing sub-list."
},
{
"code": null,
"e": 1545,
"s": 1501,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1576,
"s": 1545,
"text": "if nums is empty, thenreturn 0"
},
{
"code": null,
"e": 1585,
"s": 1576,
"text": "return 0"
},
{
"code": null,
"e": 1636,
"s": 1585,
"text": "end := a list of size same as nums and fill with 1"
},
{
"code": null,
"e": 1689,
"s": 1636,
"text": "start := a list of size same as nums and fill with 1"
},
{
"code": null,
"e": 1784,
"s": 1689,
"text": "for i in range 1 to size of nums - 1, doif nums[i] > nums[i - 1], thenend[i] := end[i - 1] + 1"
},
{
"code": null,
"e": 1839,
"s": 1784,
"text": "if nums[i] > nums[i - 1], thenend[i] := end[i - 1] + 1"
},
{
"code": null,
"e": 1864,
"s": 1839,
"text": "end[i] := end[i - 1] + 1"
},
{
"code": null,
"e": 1978,
"s": 1864,
"text": "for j in range size of nums - 2 to 0, decrease by 1, doif nums[j + 1] > nums[j], thenstart[j] := start[j + 1] + 1"
},
{
"code": null,
"e": 2037,
"s": 1978,
"text": "if nums[j + 1] > nums[j], thenstart[j] := start[j + 1] + 1"
},
{
"code": null,
"e": 2066,
"s": 2037,
"text": "start[j] := start[j + 1] + 1"
},
{
"code": null,
"e": 2125,
"s": 2066,
"text": "res := maximum of the elements of end and element of start"
},
{
"code": null,
"e": 2253,
"s": 2125,
"text": "for k in range 1 to size of nums - 2, doif nums[k - 1] < nums[k + 1], thenres := maximum of res and (end[k - 1] + start[k + 1])"
},
{
"code": null,
"e": 2341,
"s": 2253,
"text": "if nums[k - 1] < nums[k + 1], thenres := maximum of res and (end[k - 1] + start[k + 1])"
},
{
"code": null,
"e": 2395,
"s": 2341,
"text": "res := maximum of res and (end[k - 1] + start[k + 1])"
},
{
"code": null,
"e": 2406,
"s": 2395,
"text": "return res"
},
{
"code": null,
"e": 2476,
"s": 2406,
"text": "Let us see the following implementation to get better understanding −"
},
{
"code": null,
"e": 3026,
"s": 2476,
"text": "def solve(nums):\n if not nums:\n return 0\n end = [1 for i in nums]\n start = [1 for i in nums]\n\n for i in range(1, len(nums)):\n if nums[i] > nums[i - 1]:\n end[i] = end[i - 1] + 1\n\n for j in range(len(nums) - 2, -1, -1):\n if nums[j + 1] > nums[j]:\n start[j] = start[j + 1] + 1\n\n res = max(max(end), max(start))\n\n for k in range(1, len(nums) - 1):\n if nums[k - 1] < nums[k + 1]:\n res = max(res, end[k - 1] + start[k + 1])\n\n return res\n\nnums = [35, 5, 6, 7, 8, 9, 12, 11, 26]\nprint(solve(nums))"
},
{
"code": null,
"e": 3058,
"s": 3026,
"text": "[35, 5, 6, 7, 8, 9, 12, 11, 26]"
},
{
"code": null,
"e": 3060,
"s": 3058,
"text": "7"
}
]
|
Putting ML in production II: logging and monitoring | by Javier Rodriguez Zaurin | Towards Data Science | In our previous post we showed how one could use the Apache Kafka’s Python API (Kafka-Python) to productionise an algorithm in real time. In this post we will focus more on the ML aspects, more specifically on how to log information during the (re)training process and monitor the results from the experiments. To that aim we will use MLflow along with Hyperopt or HyperparameterHunter.
A detailed description of both the scenario and the solution can be found in the post mentioned before.
In summary, we would like to run an algorithm in real time, and some immediate action needs to be taken based on the algorithm’s outputs (or predictions). In addition, after N interactions (or observations) the algorithm needs to be retrained without stopping the prediction service.
Our solution relies mostly on Kafka-Python distributing information among the different components of the process (see Figure 1 in our first post for more details):
1. A Service/App generates a message (JSON) with the required information for the algorithm (i.e. the features).

2. The “Predictor” component receives the message, processes the information and runs the algorithm, sending the prediction back to the Service/App.

3. After N processed messages (or observations) the Predictor sends a message to the “Trainer” component, which starts a new training experiment. This new experiment will include the original dataset plus all the new observations collected. As we described in the first post, in the real world one would have to wait until as many real outcomes (i.e. true labels or numerical results) as observations have been received before retraining the algorithm.

4. Once the algorithm has been retrained, the Trainer sends the corresponding message and the Predictor will load the new model without stopping the service.
For the sake of brevity, we prefer to focus here on the processes rather than the algorithms themselves. However, let me at least give some direction in case you want to explore further.
The core algorithm we use is LightGBM. LightGBM has become my go-to algorithm for almost every project that involves classification or regression (it can also do ranking). There is an overwhelming amount of information online about the package, and one can learn how to use it in a variety of scenarios in their fantastic example section at the github repo. However, I always recommend reading the corresponding paper for a given algorithm. In this case, Guolin Ke et al 2017 did a fantastic job. The paper is very well written and, in general, quite accessible.
The optimisation packages used will be HyperparameterHunter (hereafter HH) and Hyperopt, both with Bayesian optimisation methods. HH uses Skopt as a backend, and its BayesianOptimization method is based on Gaussian Processes. On the other hand Hyperopt is, to my knowledge, the only Python package that implements the TPE (tree of Parzen estimators) algorithm. I have found some other libraries that use that algorithm, but are all dependent on Hyperopt (e.g. Optunity or Project Ray’s tune).
If you want to learn about Bayesian optimization methods I recommend doing the following. Read first the Bayesian Optimization section in the Skopt site. There you can find the description of the problem statement and the bayesian process (or loop), which I’d say is fairly common for bayesian optimisation approaches. Then go to the Hyperopt paper (Bergstra et al., 2011). Again, this is a “must-read” paper if you want to be familiar with Bayesian approaches. In particular, there you will learn about Gaussian Processes (GP) and TPE in the context of Sequential Model-based Global Optimization (SMBO) algorithms (Sections 2–4).
The remaining ML “ingredient” is MLflow, which will be used here to help track and monitor the training process (although you will see that HH already does a good job saving all the important data).
Following a similar approach to the one used in our first post, we will use the code as a guideline, commenting on the most important parts. All the code in this section can be found within the train module in our repo. We will start with Hyperopt and then move to HH, where we will illustrate what makes the latter unique.
Hyperopt
The code in this section can be found in the script train_hyperopt_mlflow.py at the train module.
Remember, the goal is to minimise an objective function. Our Hyperopt objective function looks like this:
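The snippets in the original post are embedded as gists and did not survive the copy. What follows is a minimal sketch of such an objective, assuming a lgb.Dataset called train_set, a binary classification setting, and a LightGBM 2.x-era API; the line numbers quoted below refer to the original Snippet 1:

import lightgbm as lgb

early_stop_dict = {}  # records the actual number of boosting rounds per iteration

def objective(params):
    # hyperopt's quniform returns floats, so cast the integer parameters
    num_boost_round = int(params.pop('num_boost_round'))
    params['num_leaves'] = int(params['num_leaves'])
    params['objective'] = 'binary'
    params['verbose'] = -1

    cv_result = lgb.cv(
        params,
        train_set,
        num_boost_round=num_boost_round,
        metrics='binary_logloss',
        nfold=3,
        stratified=True,
        early_stopping_rounds=20)

    # save the number of rounds cross validation actually used
    early_stop_dict[objective.i] = len(cv_result['binary_logloss-mean'])
    score = round(cv_result['binary_logloss-mean'][-1], 4)
    objective.i += 1
    return score

objective.i = 0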
Where params could be, for example:
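(Again a sketch; the ranges below are illustrative and the exact values used in the original are not shown here:)

from hyperopt import hp

space = {
    'learning_rate': hp.uniform('learning_rate', 0.01, 0.3),
    'num_boost_round': hp.quniform('num_boost_round', 50, 500, 10),
    'num_leaves': hp.quniform('num_leaves', 31, 255, 4),
    'min_child_weight': hp.uniform('min_child_weight', 0.1, 10),
    'colsample_bytree': hp.uniform('colsample_bytree', 0.5, 1.),
    'subsample': hp.uniform('subsample', 0.5, 1.),
}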
Let’s discuss the code within the function. The function must depend only on params. Within the function we use cross validation and we output the best metric, which is binary_logloss in this case. Note that we use LightGBM (imported as lgb) methods (lgb.cv in line 14, Snippet 1) as opposed to the corresponding sklearn wrappers. This is because, in my experience, LightGBM’s own methods are normally a bit faster. Also note that LightGBM does not implement metrics such as f1_score. Nonetheless, we have included a LightGBM f1 customised metric function in the train_hyperopt.py and train_hyperopt_mlflow.py scripts, just in case.
It is worth stopping for a second at line 22 in Snippet 1, where we record the number of boosting rounds used for that particular iteration. This is because when using Hyperopt (or Skopt or HH) the algorithm will optimise based on the input value of the parameters, one being num_boost_round. Within the objective function we do cross validation with early stopping to avoid overfitting. This means that the final number of boosting rounds might differ from the input value, and this information would get “lost” within the optimisation process. To overcome this limitation we simply save the final num_boost_round to the dictionary early_stop_dict. However, it is not always clear that this is the best solution. For a full discussion on this and other issues regarding the optimisation of GBMs, please have a look at this notebook in my github.
Finally, remember that we need to minimise the output value. Therefore, if the output is a score, the objective function must output its negative value, while if it is an error (rmse) or loss (binary_logloss), the function must output the value itself.
The code that runs the optimisation process is simply:
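(A sketch, assuming the objective and space defined above:)

from hyperopt import fmin, tpe, Trials

trials = Trials()
best = fmin(fn=objective,
            space=space,
            algo=tpe.suggest,
            max_evals=100,
            trials=trials)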
Every set of parameters tried will be recorded in the trials object. Once the optimisation is done, we could code our own functionalities to record the results. Alternatively, we could use tools such as MLflow to help us with that task and visually monitor the different experiments. As you can imagine, one could write a number of posts only on MLflow. Here, we will simply illustrate how we have used it to record the best performing parameters, model and metric, and to monitor the algorithm’s performance.
MLflow
The MLflow block that tracks the results per experiment remains almost identical for both Hyperopt and HH, and is described in the snippet below.
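(A sketch of that block; best_params, best_metric, X_train and y_train are placeholders for the outputs of the optimisation step, and the experiment-counting logic follows the description below:)

import lightgbm as lgb
import mlflow
import mlflow.sklearn
from mlflow.tracking import MlflowClient

client = MlflowClient()
# name the new experiment after the number of existing ones, since
# client.list_experiments() does not preserve order (see below)
n_experiments = len(client.list_experiments())
experiment_name = 'experiment_' + str(n_experiments)
client.create_experiment(name=experiment_name)

with mlflow.start_run(experiment_id=n_experiments):
    model = lgb.LGBMClassifier(**best_params)
    model.fit(X_train, y_train)
    for name, value in best_params.items():
        mlflow.log_param(name, value)
    mlflow.log_metric('binary_logloss', best_metric)
    mlflow.sklearn.log_model(model, 'model')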
Lines 1–8: one “annoying” behaviour we found in the newest MLflow version (0.8.2) is that the first time you instantiate the class MLflowClient() or create an experiment (mlflow.create_experiment('test')) it will create two directories, mlruns/0 and mlruns/1. The former is called Default and it will remain empty as you run experiments. Here we show this behaviour in an empty directory called test_mlflow:
infinito:test_mlflow javier$ ls
infinito:test_mlflow javier$ ipython
Python 3.6.5 (default, Apr 15 2018, 21:22:22)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.2.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import mlflow

In [2]: mlflow.__version__
Out[2]: '0.8.2'

In [3]: mlflow.create_experiment('test')
Out[3]: 1

In [4]: ls mlruns/
0/ 1/
Therefore, when you open the UI the first screen will be an empty screen with an experiment named Default. If you can live with that (I can’t), then there are easier ways to code lines 1–8 in the MLflow block, such as for example:
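(Presumably something as simple as letting MLflow resolve the experiment by name; this is an assumption of what the original snippet showed:)

mlflow.set_experiment('experiment_1')
with mlflow.start_run():
    ...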
In the current set up (Snippet 4), our first initialisation of the process (python initialize.py) will be referred to as Default and stored in the directory mlruns/0.
On the other hand, a more elegant way of defining the experiment_id per run would be to list the existing experiments and get the last element’s id:
experiments = client.list_experiments()

with mlflow.start_run(experiment_id=experiments[-1].experiment_id):
However, another “inconvenient” behaviour I found is that client.list_experiments() does not preserve the order. This is why we use the “slightly-less-elegant” solution n_experiments.
Line 9 onwards: we just run the experiment and record the parameters, the metrics and the model as an MLflow artifact.
At this stage it is worth mentioning that we are completely aware we are “under-using” MLflow. As well as tracking the results of the algorithms, MLflow can package and deploy projects. In other words, with MLflow one can manage almost the entire machine learning cycle. However, it is my impression that to do such a thing one needs to start a project with MLflow in mind. Moreover, it is not straightforward for me to see how one could package the entire project described in this and the previous posts using MLflow without adding unnecessary complexity. Nonetheless, I see lots of potential in this library and I clearly see myself using it in future projects.
Hyperparameter Hunter (HH)
After having a look at the code in snippets 1–4, one might wonder: “what if I want to record every hyper-optimisation run and keep track in a leaderboard?”. Well, there are two possibilities: i) one could simply move the MLflow block to the body of the objective function and adapt the code accordingly, or ii) simply use HH.
As I write this post I see two drawbacks in using HH. In the first place, there is no need for you to code your own objective function. While this is mostly a positive aspect, it also means that it is less flexible. However, I have been using HH for some time now and unless you need to design a complex objective function (e.g. some unusual data manipulation within the objective, or internal updates of parameters) HH will do the job. If you do need to design a highly customised objective function, you could always code one using sklearn’s syntax and pass it to HH’s optimizer object as the model_initializer.
The second drawback, and perhaps the more important one, is not directly related to HH, but to Skopt. HH is built on top of Skopt, which is notably slower than Hyperopt (see this notebook in my repo). However, I am aware that there are current efforts to add Hyperopt as an alternative backend (along with other upcoming features such as feature engineering, so stay tuned).
In conclusion, if you do not need to design a particularly complex objective function, and you can afford “Skopt-speed”, HH offers a number of functionalities that make it unique. To start with, HH records and organises all the experiments for you. Moreover, it learns as you run additional tests, as none of the past tests go to waste. In other words:
“HyperparameterHunter is already aware of all that you’ve done, and that’s when HyperparameterHunter does something remarkable. It doesn’t start optimization from scratch like other libraries. It starts from all of the Experiments and previous optimization rounds you’ve already run through it.” Hunter McGushion.
Let’s have a look at the code. The following 3 snippets are all you need when using HH (see the documentation for more details). The full code in this section can be found in the script train_hyperparameterhunter_mlflow.py at the train module.
As you will see, the syntax is very concise. We first set up an Environment, which is simply a class to organise the parameters that allow experiments to be fairly compared.
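(A sketch of that snippet; the parameter names follow recent versions of the HH documentation and may differ across releases, and the dataset, metric and cross-validation choices are assumptions:)

from hyperparameter_hunter import Environment

env = Environment(
    train_dataset=train_df,  # a pandas DataFrame that includes the target column
    results_path='HyperparameterHunterAssets',
    target_column='target',
    metrics=['log_loss'],
    cv_type='StratifiedKFold',
    cv_params=dict(n_splits=3, shuffle=True, random_state=0),
)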
We then run the experiment:
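(Again a sketch, using the class and method names from recent HH releases; older versions exposed BayesianOptimization and set_experiment_guidelines instead:)

from hyperparameter_hunter import BayesianOptPro
from lightgbm import LGBMClassifier

opt = BayesianOptPro(iterations=10, random_state=0)
opt.forge_experiment(
    model_initializer=LGBMClassifier,
    model_init_params=model_init_params,
    model_extra_params=model_extra_params,
)
opt.go()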
Where model_init_params and model_extra_params are:
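(Illustrative ranges only; the actual values in the original Snippet 7 are not reproduced here:)

from hyperparameter_hunter import Real, Integer, Categorical

model_init_params = dict(
    learning_rate=Real(0.01, 0.3),
    num_leaves=Integer(31, 255),
    n_estimators=Integer(50, 500),
    subsample=Real(0.5, 1.0),
    colsample_bytree=Real(0.5, 1.0),
    boosting_type=Categorical(['gbdt', 'dart']),
)
model_extra_params = dict(fit=dict(eval_metric='logloss'))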
When looking carefully at Snippet 7, we can find a further difference between HH and Hyperopt, again purely related to the Skopt backend. You will see that while Hyperopt lets you use quantile uniform distributions (hp.quniform(low, high, step)), there is no such option in Skopt. This means that for parameters like num_boost_round or num_leaves the search is less efficient. For example, a priori, one would not expect two experiments with 100 and 101 boosting rounds to yield different results. This is why when using Hyperopt we set num_boost_round to hp.quniform(50, 500, 10), for example.
A potential way around is to use Skopt’s Categorical variables:
num_leaves=Categorical(np.arange(31, 256, 4)),
However, this is not the best solution given the fact that in reality, num_boost_round or num_leaves are not categorical variables, but they are going to be treated as such. For example, by default, Skopt will build a one-hot encoded representation of the input space for categorical features. As categories have no intrinsic ordering, the distance between two points in that space is one if cat_a != cat_b and zero otherwise. This behaviour is not what we want during the search process for parameters like num_leaves, since 32 leaves would be as distant from 31 as from 255. Skopt offers the possibility of not transforming the space, which is still not ideal, but better:
num_leaves=Categorical(np.arange(31, 256, 4), transform='identity'),
But for one reason or another this throws an error every time I have tried to use it. Nevertheless, we will use Integer and live with it while HH implements Hyperopt as a backend.
In this section we will run a minimal example to illustrate how we use HH and MLflow to track every detail of the training process. Here we will keep things simple, but one could use the Callbacks functionality in HH to seamlessly integrate the two packages.
The minimal example consists of processing 200 messages and retraining the algorithm every 50 processed messages. In every retraining experiment HH will run just 10 iterations. In the real world, there would be no limit on the incoming messages (i.e. Kafka consumers are always listening), the retraining might happen after thousands of new observations have been processed, or perhaps after some time step (e.g. every week), and HH should run for hundreds of iterations.
In addition, for the purpose of illustration, here the process is “over-logged” using our own logging methods (mainly pickle), HH and MLflow. In production, I would recommend using a customised solution combining MLflow and HH.
Let’s have a look.
Figure 1 shows a screen shot after processing the 200 messages and having retrained the model 4 times (once every 50 messages). In the upper-left terminal we run the Predictor (predictor.py), the middle terminal runs the Trainer (trainer.py), where we can see the output from the last HH run, and the lower terminal runs the App/Service (sample_app.py), where we can see the requests received and the output prediction.
The reader might notice the alternation in the new model loaded in the upper-left terminal (NEW MODEL RELOADED 0 -> NEW MODEL RELOADED 1 -> NEW MODEL RELOADED 0 -> NEW MODEL RELOADED 1). This is because when using our own logging methods we use a parameter called EXTRA_MODELS_TO_KEEP that sets how many past models we keep. It is currently set to one, and the current loading process points towards our output directory. This can be easily changed in the code to save M past models or to point towards the HH or MLflow corresponding output directories, where all the past best performing models are stored.
The upper-right terminal in Figure 1 starts the MLflow Tracking Server. A screen capture of the MLflow UI is shown below in Figure 2.
The figure shows the information that MLflow saved for the particular case of what we call “experiment_2” (i.e. retraining the models with 100 accumulated new observations/messages). For consistency with HH, we have saved every retraining process as a different experiment. If you’d prefer to save all retraining processes as one experiment simply go to the optimize method in the LGBOptimizer class at train_hyperparameter_hunter.py and change the reuse_experiment parameter to True.
With the current set up, there will be one subdirectory per experiment within the mlruns directory. For example, the structure of the mlruns/1 directory is:
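(The listing itself did not survive the copy; for reference, an MLflow 0.8-era experiment directory typically looks along these lines:)

mlruns/1/
    meta.yaml
    <run_id>/
        meta.yaml
        artifacts/
            model/
        metrics/
        params/
        tags/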
As you can see, all the information one would need is there, very well organised. Let me insist, the current structure of our MLflow-code saves only the best performing set of parameters and model. As mentioned before, one could move the MLflow block in Snippet 4 to the objective function and adapt the code so that MLflow records every single iteration. Alternatively, one could use HH.
HH logs absolutely everything. Let’s have a look at the HyperparameterHunterAssets directory structure. A detailed explanation of what is in each sub-directory can be found here. As one can see, plenty of information is saved about every single iteration, including the training datasets per experiment (5 datasets including the original/default one, plus 4 new datasets that include 50 additional observations per retraining cycle) and a leaderboard with the current winning iteration.
Hopefully, at this stage the reader has a good idea of how one could combine MLflow and HH to track and monitor an algorithm’s performance in production.
However, while all elements described here need to be considered when productionising ML, not all that needs to be considered is discussed here. Let me at least mention the missing pieces in the next section.
Unit Test
One aspect we have (intentionally) ignored in this and the previous post is unit-test. This is because unit-test normally depends on the application of your algorithm. For example, at your company, there might be some parts of the pipeline that have been widely tested in the past while some new pieces of code might require thorough unit-test. If I have the time I will include some code related to this task in the repo.
Concept Drift
Often ignored in production is “Concept Drift”. Concept Drift refers to the fact that the statistical properties of your dataset can change over time, and that can have a notable impact on the quality of your predictions. For example, let’s say your company has an app that targets mostly people under 30. One week you run an advertising campaign that broadens the age range of your users. Chances are that your existing models will not perform well for your new audience.
There are a number of options to detect and address Concept Drift. One could code a customised class to make sure that the distributions of the features in the training and testing datasets remain stable within certain limits. Alternatively, one could use libraries such as MLBox. MLBox fits within the new wave of automated ML libraries and comes with a series of nice functionalities, including an optimisation method that relies on, guess what... Hyperopt. Another functionality within the package is the Drift_thresholder() class. This class will automatically handle Concept Drift for you.
The MLBox implementation uses a classifier (a Random Forest by default) and tries to predict whether an observation belongs to the training or the testing dataset. The estimated drift is then measured as:
drift = (max(np.mean(S), 1-np.mean(S))-0.5)*2
Where S is a list containing the roc_auc_score for each of the n folds used during the process. If there is no Concept Drift, the algorithm should not be able to distinguish observations from the two datasets, the roc_auc_score per fold should be close to a random guess (0.5), and the drift between the datasets should be close to zero. Alternatively, if one or more of the features has changed over time, then the algorithm will easily distinguish between the training and testing datasets, the roc_auc_score per fold will be close to 1 and, as a consequence, the drift will also be close to one. Note that if the datasets are highly imbalanced and/or very sparse, you might want to use a more suitable metric and algorithm (random forests are known to not perform well on very sparse datasets).
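To make this concrete, here is a minimal sketch of such a drift estimator. It follows the same idea but is not MLBox's actual implementation, and it assumes that both dataframes contain the same, numeric-only features:

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def estimate_drift(train_df, test_df, n_folds=5):
    # label every observation with its dataset of origin: 0=train, 1=test
    X = pd.concat([train_df, test_df], ignore_index=True)
    y = np.concatenate([np.zeros(len(train_df)), np.ones(len(test_df))])
    # if a classifier can tell the two datasets apart, the features have drifted
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    S = cross_val_score(clf, X, y, cv=n_folds, scoring="roc_auc")
    # roc_auc ~ 0.5 per fold -> drift ~ 0; roc_auc ~ 1 -> drift ~ 1
    return (max(np.mean(S), 1 - np.mean(S)) - 0.5) * 2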
Overall, if you want to learn more about the package and the concept of Concept Drift you can find more information here. The source code for their DriftEstimator() is here. Again, I will include some code related to Concept Drift in our repo if I find the time.
Finally, so far we have used Kafka locally and we did not need to scale or maintain it. However, in the real world one would have to scale according to traffic and maintain the algorithm’s components. Our initial thought was to use Sagemaker for that task, and write a 3rd post. However, after a good, long dive into the tool (that you can find in the branch sagemaker in the repo) we found that using Sagemaker brought a lot of unnecessary complexity. In other words, using it turned out to be more complex than simply moving the code to the cloud, adapting it to use AWS tools (mostly EC2s and S3) and running it automatically using simple, customised scripts.
Let’s summarise the concepts and components one would have to consider when putting ML in production.
1. Well written code and properly structured project (big thanks to Jordi)
2. Logging and monitoring the algorithm’s performance through time
3. Unit-test
4. Concept Drift
5. Algorithm/Service scaling and maintenance.
In this and our previous posts we have used Kafka-Python, MLflow and HyperparameterHunter or Hyperopt to illustrate Points 1 and 2. Points 3, 4 and 5 will not be covered here. Regarding Points 3 and 4, I believe that there is not much point in writing a further detailed explanation in this or another post (although if I have the time I will add some code to the repo). Regarding Point 5, as we mentioned before, we think that simply moving the code and the structure described here to the cloud, and adapting it to the available tools there (e.g. EC2s, S3, ...), would be sufficient.
As always, comments/suggestions: [email protected]
[1] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye and Tie-Yan Liu, LightGBM: A highly efficient gradient boosting decision tree, Advances in Neural Information Processing Systems, 3149–3157, 2017.
[2] James Bergstra, Rémi Bardenet, Yoshua Bengio and Balázs Kégl, Algorithms for hyper-parameter optimization, Advances in Neural Information Processing Systems, 2011.
{
"code": null,
"e": 559,
"s": 172,
"text": "In our previous post we showed how one could use the Apache Kafka’s Python API (Kafka-Python) to productionise an algorithm in real time. In this post we will focus more on the ML aspects, more specifically on how to log information during the (re)training process and monitor the results from the experiments. To that aim we will use MLflow along with Hyperopt or HyperparameterHunter."
},
{
"code": null,
"e": 663,
"s": 559,
"text": "A detailed description of both the scenario and the solution can be found in the post mentioned before."
},
{
"code": null,
"e": 947,
"s": 663,
"text": "In summary, we would like to run an algorithm in real time, and some immediate action needs to be taken based on the algorithm’s outputs (or predictions). In addition, after N interactions (or observations) the algorithm needs to be retrained without stopping the prediction service."
},
{
"code": null,
"e": 1112,
"s": 947,
"text": "Our solution relies mostly on Kafka-Python distributing information among the different components of the process (see Figure 1 in our first post for more details):"
},
{
"code": null,
"e": 1971,
"s": 1112,
"text": "A Service/App generates a message (JSON) with the required information for the algorithm (i.e. the features).The “Predictor” component receives the message, processes the information and runs the algorithm, sending the prediction back to the Service/App.After N number of processed messages (or observations) the Predictor sends a message to the “Trainer” component that starts a new training experiment. This new experiment will include the original dataset plus all the new observations collected. As we described in the first post, in the real world one would have to wait until it receives as many real outcomes (i.e. true labels or numerical results) as observations before retraining the algorithm.Once the algorithm has been retrained, the Trainer sends the corresponding message and the Predictor will load the new model without stopping the service."
},
{
"code": null,
"e": 2081,
"s": 1971,
"text": "A Service/App generates a message (JSON) with the required information for the algorithm (i.e. the features)."
},
{
"code": null,
"e": 2227,
"s": 2081,
"text": "The “Predictor” component receives the message, processes the information and runs the algorithm, sending the prediction back to the Service/App."
},
{
"code": null,
"e": 2678,
"s": 2227,
"text": "After N number of processed messages (or observations) the Predictor sends a message to the “Trainer” component that starts a new training experiment. This new experiment will include the original dataset plus all the new observations collected. As we described in the first post, in the real world one would have to wait until it receives as many real outcomes (i.e. true labels or numerical results) as observations before retraining the algorithm."
},
{
"code": null,
"e": 2833,
"s": 2678,
"text": "Once the algorithm has been retrained, the Trainer sends the corresponding message and the Predictor will load the new model without stopping the service."
},
{
"code": null,
"e": 3020,
"s": 2833,
"text": "For the sake of brevity, we prefer to focus here on the processes rather than the algorithms themselves. However, let me at least give some direction in case you want to explore further."
},
{
"code": null,
"e": 3583,
"s": 3020,
"text": "The core algorithm we use is LightGBM. LightGBM has become my go-to algorithm for almost every project that involves classification or regression (it can also do ranking). There is an overwhelming amount of information online about the package, and one can learn how to use it in a variety of scenarios in their fantastic example section at the github repo. However, I always recommend reading the corresponding paper for a given algorithm. In this case, Guolin Ke et al 2017 did a fantastic job. The paper is very well written and, in general, quite accessible."
},
{
"code": null,
"e": 4076,
"s": 3583,
"text": "The optimisation packages used will be HyperparameterHunter (hereafter HH) and Hyperopt, both with Bayesian optimisation methods. HH uses Skopt as a backend, and its BayesianOptimization method is based on Gaussian Processes. On the other hand Hyperopt is, to my knowledge, the only Python package that implements the TPE (tree of Parzen estimators) algorithm. I have found some other libraries that use that algorithm, but are all dependent on Hyperopt (e.g. Optunity or Project Ray’s tune)."
},
{
"code": null,
"e": 4707,
"s": 4076,
"text": "If you want to learn about Bayesian optimization methods I recommend doing the following. Read first the Bayesian Optimization section in the Skopt site. There you can find the description of the problem statement and the bayesian process (or loop), which I’d say is fairly common for bayesian optimisation approaches. Then go to the Hyperopt paper (Bergstra et al., 2011). Again, this is a “must-read” paper if you want to be familiar with Bayesian approaches. In particular, there you will learn about Gaussian Processes (GP) and TPE in the context of Sequential Model-based Global Optimization (SMBO) algorithms (Sections 2–4)."
},
{
"code": null,
"e": 4912,
"s": 4707,
"text": "The remaining ML “ingredient” is MLflow, which will be used here to help tracking and monitoring the training process (although you will see that HH already does a good job saving all the important data)."
},
{
"code": null,
"e": 5232,
"s": 4912,
"text": "Following a similar approach to the one used in our fist post, we will use the code as a guideline commenting on the most important parts. All the code in this section can be found within the trainmodule in our repo. We will start with Hyperopt and then move to HH, where we will illustrate what makes the later unique."
},
{
"code": null,
"e": 5241,
"s": 5232,
"text": "Hyperopt"
},
{
"code": null,
"e": 5350,
"s": 5241,
"text": "The code in this section can be found within the in the script train_hyperopt_mlflow.py at the train module."
},
{
"code": null,
"e": 5456,
"s": 5350,
"text": "Remember, the goal is to minimise an objective function. Our Hyperopt objective function looks like this:"
},
{
"code": null,
"e": 5492,
"s": 5456,
"text": "Where params could be, for example:"
},
{
"code": null,
"e": 6124,
"s": 5492,
"text": "Let’s discuss the code within the function. The function must depend only on params. Within the function we use cross validation and we output the best metric, which is binary_logloss in this case. Note that we use LightGBM (imported as lgb) methods ( lgb.cv in line 14, Snippet 1) as opposed as the corresponding sklearn wrap up. This is because to my experience, LightGBM’s own methods are normally a bit faster. Also note that LightGBM does not implement metrics such as f1_score . Nonetheless, we have included a LightGBM f1 customised metric function in the train_hyperopt.py and train_hyperopt_mlfow.py scripts, just in case."
},
{
"code": null,
"e": 6967,
"s": 6124,
"text": "It is worth stopping for a second at line 22 in Snippet 1, where we record the number of boosting rounds used for that particular iteration. This is because when using Hyperopt (or Skopt or HH) the algorithm will optimise based on the input value of the parameters, one being num_boost_round. Within the objective function we do cross validation with early stopping to avoid overfitting. This means that the final number of boosting rounds might differ from the input value. This information will get \"lost\" within the optimisation process. To overcome this limitation we simply save the final num_boost_round to the dictionary early_stop_dict. However, it is not always clear that this is the best solution. For a full discussion on this and other issues regarding the optimisation of GBMs, please, have a look to this notebook in my github."
},
{
"code": null,
"e": 7219,
"s": 6967,
"text": "Finally, remember that we need to minimise the output value. Therefore, if the output is a score, the objective function must output its negative value, while if is an error ( rmse) or loss ( binary_logloss), the function must output the value itself."
},
{
"code": null,
"e": 7274,
"s": 7219,
"text": "The code that runs the optimisation process is simply:"
},
{
"code": null,
"e": 7781,
"s": 7274,
"text": "Every set of parameter tried will be recorded in the trials object. Once the optimisation is done, we could code our own functionalities to record the results. Alternatively, we could use tools such as MLflow to help us with that task, and visually monitor the different experiments. As you can imagine, one could write a number of posts only on MLflow. Here, we will simply illustrate how we have used it to record the best performing parameters, model and metric, and monitor the algorithm’s performance."
},
{
"code": null,
"e": 7788,
"s": 7781,
"text": "MLflow"
},
{
"code": null,
"e": 7934,
"s": 7788,
"text": "The MLflow block that tracks the results per experiment remains almost identical for both Hyperopt and HH, and is described in the snippet below."
},
{
"code": null,
"e": 8346,
"s": 7934,
"text": "Lines 1–8: one “annoying” behaviour we found in the newest MLflow version ( 0.8.2) is that the first time you instantiate the class MLflowClient() or create an experiment ( mlflow.create_experiment('test')) it will create two directories, mlruns/0 and mlruns/1 . The former is called Default and it will remain empty as you run experiments. Here we show this behaviour in an empty directory called test_mlflow :"
},
{
"code": null,
"e": 8720,
"s": 8346,
"text": "infinito:test_mlflow javier$ lsinfinito:test_mlflow javier$ ipythonPython 3.6.5 (default, Apr 15 2018, 21:22:22)Type ‘copyright’, ‘credits’ or ‘license’ for more informationIPython 7.2.0 — An enhanced Interactive Python. Type ‘?’ for help.In [1]: import mlflowIn [2]: mlflow.__version__Out[2]: ‘0.8.2’In [3]: mlflow.create_experiment(‘test’)Out[3]: 1In [4]: ls mlruns/0/ 1/"
},
{
"code": null,
"e": 8951,
"s": 8720,
"text": "Therefore, when you open the UI the first screen will be an empty screen with a experiment named Default . If you can live with that (I can’t), then there are easier ways to code lines 1–8 in the MLflow block, such as for example:"
},
{
"code": null,
"e": 9114,
"s": 8951,
"text": "In the current set up (Snippet 4) , our first initialisation of the process ( python initialize.py) will be referred as Default and stored in directory mlruns/0 ."
},
{
"code": null,
"e": 9263,
"s": 9114,
"text": "On the other hand, a more elegant way of defining the experiment_id per run would be to list the existing experiments and get the last elements’ id:"
},
{
"code": null,
"e": 9370,
"s": 9263,
"text": "experiments = client.list_experiments()with mlflow.start_run(experiment_id=experiments[-1].experiment_id):"
},
{
"code": null,
"e": 9555,
"s": 9370,
"text": "However, another “inconvenient” behaviour I found is that client.list_experiments() does not preserve the order. This is why we use the “slightly-less-elegant” solution n_experiments ."
},
{
"code": null,
"e": 9674,
"s": 9555,
"text": "Line 9 in advance: we just run the experiment and record all, parameters, metrics and the model as an MLflow artifact."
},
{
"code": null,
"e": 10339,
"s": 9674,
"text": "At this stage it is worth mentioning that we are completely aware we are “under-using” MLflow. As well as tracking the results of the algorithms, MLflow can package and deploy projects. In other words, with MLflow one can manage almost the entire machine learning cycle. However, it is my impression that to do such a thing one needs to start a project with MLflow in mind. Moreover, it is not straightforward for me to see how one could package the entire project described in this and the previous posts using MLflow without adding unnecessary complexity. Nonetheless, I see lots of potential in this library and I clearly see myself using it in future projects."
},
{
"code": null,
"e": 10366,
"s": 10339,
"text": "Hyperparameter Hunter (HH)"
},
{
"code": null,
"e": 10690,
"s": 10366,
"text": "After having a look to the code in snippets 1–4 one might wonder: “what if I want to record every hyper-optimisation run and keep track in a leaderboard?”. Well, there are two possibilities: i) one could simply move the MLflow block to the body of the objective function and adapt the code accordingly or ii) simply use HH."
},
{
"code": null,
"e": 11308,
"s": 10690,
"text": "As I write this post I see two drawbacks in using HH. In the first place, there is no need for you to code your own objective function. While this is mostly a positive aspect, it also means that it is less flexible. However, I have been using HH for some time now and unless you need to design a complex objective function (e.g. some unusual data manipulation within the objective, or internal update of parameters) HH will do the job. If you do need to design a highly customised objective function, you could always code one using sklearn ‘s syntax as pass it to the HH’s optimizer object as the model_initializer ."
},
{
"code": null,
"e": 11672,
"s": 11308,
"text": "The second drawback, and perhaps more important, is not directly related to HH, but Skopt. HH is built on top of Skopt, which is notably slower that Hyperopt (see this notebook in my repo). However, I am aware that there are current efforts to add Hyperopt as an alternative backend (along with other upcoming features such as feature engineering, so stay tuned)."
},
{
"code": null,
"e": 12029,
"s": 11672,
"text": "In conclusion, if you do not need to design a particularly complex objective function, and you can afford “Skopt-speed”, HH offers a number of functionalities that make it unique. To start with, HH records and organises all the experiments for you. Moreover, it learns as you run additional tests, as none of the past tests go to waste and. In other words:"
},
{
"code": null,
"e": 12343,
"s": 12029,
"text": "“HyperparameterHunter is already aware of all that you’ve done, and that’s when HyperparameterHunter does something remarkable. It doesn’t start optimization from scratch like other libraries. It starts from all of the Experiments and previous optimization rounds you’ve already run through it.” Hunter McGushion."
},
{
"code": null,
"e": 12587,
"s": 12343,
"text": "Let’s have a look to the code. The following 3 snippets are all you need when using HH (see the documentation for more details). The full code in this section can be found in the script train_hyperparameterhunter_mlflow.py at the train module."
},
{
"code": null,
"e": 12755,
"s": 12587,
"text": "As you will see, the syntax very concise. We first set an Environment, which is simply a class to organise the parameters that allow experiments to be fairly compared."
},
{
"code": null,
"e": 12782,
"s": 12755,
"text": "We then run the experiment"
},
{
"code": null,
"e": 12834,
"s": 12782,
"text": "Where model_init_params and model_extra_params are:"
},
{
"code": null,
"e": 13442,
"s": 12834,
"text": "When looking carefully to Snippet 7, we can find a further difference between HH and Hyperopt, again purely related to the Skopt backend. You will see that while when using Hyperopt one can use quantile uniform distributions (hp.quniform(low, high, step)). There is no such option in Skopt. This means that for parameters like num_boost_rounds or num_leaves the search is less efficient. For example, a priori, one would not expect two experiments with 100 and 101 boosting rounds to yield different results. This is why when using Hyperopt we set num_boost_rounds to hp.quniform(50, 500, 10) , for example."
},
{
"code": null,
"e": 13506,
"s": 13442,
"text": "A potential way around is to use Skopt’s Categorical variables:"
},
{
"code": null,
"e": 13553,
"s": 13506,
"text": "num_leaves=Categorical(np.arange(31, 256, 4)),"
},
{
"code": null,
"e": 14217,
"s": 13553,
"text": "However, this is not the best solution given the fact that in reality, num_boost_roundsor num_leaves are not categorical variables, but are going to be treated at such. For example, by default, Skopt will build a one-hot encoded representation of the input space for categorical features. As categories have no intrinsic ordering, the distance between two points in that space is one if cat_a != cat_band zero otherwise. This behaviour is not what we want during the search process for parameters like num_leaves, since 32 leaves would be as distant from 31 as 255. Skopt offers the possibility of not transforming the space, which is still not ideal, but better:"
},
{
"code": null,
"e": 14286,
"s": 14217,
"text": "num_leaves=Categorical(np.arange(31, 256, 4), transform=’identity’),"
},
{
"code": null,
"e": 14465,
"s": 14286,
"text": "But for one reason or another this throws an error every time I have tried to use it. Nevertheless, we will use Integerand live with it while HH implements Hyperopt as a backend."
},
{
"code": null,
"e": 14724,
"s": 14465,
"text": "In this section we will run a minimal example to illustrate how we use HH and MLflow to track every detail of the training process. Here we will keep things simple, but one could use the Callbacks functionality in HH to seamlessly integrate the two packages."
},
{
"code": null,
"e": 15193,
"s": 14724,
"text": "The minimal example consists in processing 200 messages and retraining the algorithm every 50 processed messages. In every retraining experiment HH will run just 10 iteration. In the real world, there would be no limit for the incoming messages (i.e Kafka consumers are always listening), the retraining might happen after thousands of new observations have been processed or perhaps after some time step (i.e. every week) and HH should run for hundreds of iterations."
},
{
"code": null,
"e": 15421,
"s": 15193,
"text": "In addition, for the purpose of illustration, here the process is “over-logged” using our own logging methods (mainly pickle), HH and MLflow. In production, I would recommend using a customised solution combining MLflow and HH."
},
{
"code": null,
"e": 15439,
"s": 15421,
"text": "let’s have a look"
},
{
"code": null,
"e": 15859,
"s": 15439,
"text": "Figure 1 shows a screen shot after processing the 200 messages and having retrained the model 4 times (once every 50 messages). In the upper-left terminal we run the Predictor (predictor.py ), the middle terminal runs the Trainer(trainer.py), where we can see the output from the last HH run, and the lower-terminal runs the App/Service(sample_app.py ), where we can see the requests received and the output prediction."
},
{
"code": null,
"e": 16469,
"s": 15859,
"text": "The reader might notice the alternation in the new model loaded in the upper-left terminal ( NEW MODEL RELOADED 0 -> NEW MODEL RELOADED 1 -> NEW MODEL RELOADED 0 -> NEW MODEL RELOADED 1 ). This is because when using our own logging methods we use a parameter called EXTRA_MODELS_TO_KEEP that sets how many past models we keep. It is currently set to one, and the current loading process points towards our output directory. This can be easily changed in the code to save M past models or to point towards the HH or MLflow corresponding output directories, where all the past best performing models are stored."
},
{
"code": null,
"e": 16603,
"s": 16469,
"text": "The upper-right terminal in Figure 1 starts the MLflow Tracking Server. A screen capture of the MLflow UI is shown below in Figure 2."
},
{
"code": null,
"e": 17088,
"s": 16603,
"text": "The figure shows the information that MLflow saved for the particular case of what we call “experiment_2” (i.e. retraining the models with 100 accumulated new observations/messages). For consistency with HH, we have saved every retraining process as a different experiment. If you’d prefer to save all retraining processes as one experiment simply go to the optimize method in the LGBOptimizer class at train_hyperparameter_hunter.py and change the reuse_experiment parameter to True."
},
{
"code": null,
"e": 17245,
"s": 17088,
"text": "With the current set up, there will be one subdirectory per experiment within the mlruns directory. For example, the structure of the mlruns/1 directory is:"
},
{
"code": null,
"e": 17634,
"s": 17245,
"text": "As you can see, all the information one would need is there, very well organised. Let me insist, the current structure of our MLflow-code saves only the best performing set of parameters and model. As mentioned before, one could move the MLflow block in Snippet 4 to the objective function and adapt the code so that MLflow records every single iteration. Alternatively, one could use HH."
},
{
"code": null,
"e": 18119,
"s": 17634,
"text": "HH logs absolutely everything. Let’s have a look to the HyperparameterHunterAssets directory structure. A detail explanation of what is in each sub-directory can be found here. As one can see, plenty of information is saved about every single iteration, including the training datasets per experiment (5 datasets including the original/default one, plus 4 new datasets that include 50 additional observations per retraining cycle) and a leaderboard with the current winning iteration."
},
{
"code": null,
"e": 18273,
"s": 18119,
"text": "Hopefully, at this stage the reader has a good idea on how one could combine MLflow and HH to track and monitor an algorithm’s performance in production."
},
{
"code": null,
"e": 18482,
"s": 18273,
"text": "However, while all elements described here need to be considered when productionising ML, not all that needs to be considered is discussed here. Let me at least mention the missing pieces in the next section."
},
{
"code": null,
"e": 18492,
"s": 18482,
"text": "Unit Test"
},
{
"code": null,
"e": 18915,
"s": 18492,
"text": "One aspect we have (intentionally) ignored in this and the previous post is unit-test. This is because unit-test normally depends on the application of your algorithm. For example, at your company, there might be some parts of the pipeline that have been widely tested in the past while some new pieces of code might require thorough unit-test. If I have the time I will include some code related to this task in the repo."
},
{
"code": null,
"e": 18929,
"s": 18915,
"text": "Concept Drift"
},
{
"code": null,
"e": 19402,
"s": 18929,
"text": "Often ignored in production is “Concept Drift”. Concept Drift refers to the fact that the statistical properties of your dataset can change over time, and that can have a notable impact on the quality of your predictions. For example, let’s say your company has an app that targets mostly people under 30. One week you run an advertising campaign that broadens the age range of your users. Chances are that your existing models will not perform well for your new audience."
},
{
"code": null,
"e": 19997,
"s": 19402,
"text": "There are a number of options to detect and address Concept Drift. One could code a customised class to make sure that the distributions of the features in the training and testing datasets remains stable within certain limits. Alternatively, one could use libraries such as MLBox. MLBox fits within the new wave of automated ML libraries and comes with a series of nice functionalities, including an optimisation method that relies on, guess what...Hyperopt. Another functionality within the package is the Drift_thresholder() class. This class will automatically handle Concept Drift for you."
},
{
"code": null,
"e": 20198,
"s": 19997,
"text": "The MLBox implementations uses a classifier (a Random Forest by default) and tries to predict whether an observation belongs to the training or testing dataset. Then the estimated drift is measure as:"
},
{
"code": null,
"e": 20244,
"s": 20198,
"text": "drift = (max(np.mean(S), 1-np.mean(S))-0.5)*2"
},
{
"code": null,
"e": 21042,
"s": 20244,
"text": "Where S is a list containing the roc_auc_score corresponding to the n folds used during the process. If there is not Concept Drift, the algorithm should not be able to distinguish observations from the two datasets, the roc_auc_score per fold should be close to a random guess (0.5) and the drift between the datasets should be close to zero. Alternatively, if one or more of the features has changed over time then the algorithm will easily distinguish between the training and testing datasets, the roc_auc_score per fold would be close to 1 and in consequence, the drift will also be close to one. Note that if the datasets are highly imbalanced and/or very sparse, you might want to use a more suitable metric and algorithm (random forest is known to not perform well in very sparse datasets)."
},
{
"code": null,
"e": 21305,
"s": 21042,
"text": "Overall, if you want to learn more about the package and the concept of Concept Drift you can find more information here. The source code for their DriftEstimator() is here. Again, I will include some code related to Concept Drift in our repo if I find the time."
},
{
"code": null,
"e": 21960,
"s": 21305,
"text": "Finally, so far we have used Kafka locally and we did not need to scale or maintain it. However, in the real world one would have to scale according to traffic and maintain the algorithm’s components. Our initial thought was to use Sagemaker for that task, and write a 3rd post. However, after a good, long dive into the tool (that you can find in the branch sagemakerin the repo) we found that using Sagemaker brought a lot of unnecessary complexity. In other words, using it turned out to be more complex than simply moving the code to the cloud, adapt it to use AWS tools (mostly EC2s and S3) and run it automatically using simple, customised scripts."
},
{
"code": null,
"e": 22062,
"s": 21960,
"text": "Let’s summarise the concepts and components one would have to consider when putting ML in production."
},
{
"code": null,
"e": 22261,
"s": 22062,
"text": "Well written code and properly structured project (big thanks to Jordi)Logging and monitoring the algorithm’s performance through timeUnit-testConcept DriftAlgorithm/Service scaling and maintenance."
},
{
"code": null,
"e": 22333,
"s": 22261,
"text": "Well written code and properly structured project (big thanks to Jordi)"
},
{
"code": null,
"e": 22397,
"s": 22333,
"text": "Logging and monitoring the algorithm’s performance through time"
},
{
"code": null,
"e": 22407,
"s": 22397,
"text": "Unit-test"
},
{
"code": null,
"e": 22421,
"s": 22407,
"text": "Concept Drift"
},
{
"code": null,
"e": 22464,
"s": 22421,
"text": "Algorithm/Service scaling and maintenance."
},
{
"code": null,
"e": 23051,
"s": 22464,
"text": "In this and our previous posts we have use Kafka-Python, MLflow and HyperparameterHunter or Hyperopt to illustrate points 1 and 2. Points 3, 4 and 5 will not be covered here. Regarding to Points 3 and 4 I believe that there is not much point in writing a further detailed explanation in this or another post (although if I have the time I will add some code to the repo). Regarding to Point 5, as we mentioned before, we think that simply moving the code and the structure described here to the cloud, and adapt it to the available tools there (e.g. EC2s, S3, ...), would be sufficient."
},
{
"code": null,
"e": 23103,
"s": 23051,
"text": "As always, comments/suggestions: [email protected]"
},
{
"code": null,
"e": 23334,
"s": 23103,
"text": "[1] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye and Tie-Yan Liu, LightGBM: A highly efficient gradient boosting decision tree, Advances in Neural Information Processing Systems, 3149–3157, 2017."
}
]
|
Behavior Driven Development - Quick Guide

Behavior Driven Development (BDD) is a software development process that originally emerged from Test Driven Development (TDD).
According to Dan North, who is responsible for the evolution of BDD, “BDD is using examples at multiple levels to create a shared understanding and surface uncertainty to deliver software that matters.”
BDD uses examples to illustrate the behavior of the system, written in a language that is readable and understandable for everyone involved in the development. These examples are −
Converted into executable specifications.
Used as the acceptance tests.
Behavior Driven Development focuses on −
Providing a shared process and shared tools promoting communication to the software developers, business analysts and stakeholders to collaborate on software development, with the aim of delivering product with business value.
What a system should do and not on how it should be implemented.
Providing better readability and visibility.
Verifying not only the working of the software but also that it meets the customer’s expectations.
The cost to fix a defect increases multifold if the defect is not detected at the right time and fixed as and when it is detected. Consider the following example.
This shows that unless requirements are obtained correctly, it would be expensive to fix the defects resulting from misunderstanding the requirements at a later stage. Further, the end product may not meet the customer’s expectations.
The need of the hour is a development approach that −
Is based on the requirements.
Focuses on requirements throughout the development.
Ensures that the requirements are met.
A development approach that can take care of the above-mentioned requirements is BDD. Thus, Behavior Driven Development −
Derives examples of different expected behaviors of the system.
Enables writing the examples in a language using the business domain terms to ensure easy understanding by everyone involved in the development including the customers.
Gets the examples ratified with customer from time to time by means of conversations.
Focuses on the customer requirements (examples) throughout the development.
Uses examples as acceptance tests.
The two main practices of BDD are −
Specification by Example (SbE)
Test Driven Development (TDD)
Specification by Example (SbE) uses examples in conversations to illustrate the business rules and the behavior of the software to be built.
Specification by Example enables the product owners, business analysts, testers and the developers to eliminate common misunderstandings about the business requirements.
Test Driven Development, in the context of BDD, turns examples into human readable, executable specifications.
The developers use these specifications as a guide to implement increments of new functionality. This, results in a lean codebase and a suite of automated regression tests that keep the maintenance costs low throughout the lifetime of the software.
In Agile software development, the BDD method is used to come to a common understanding on the pending specifications.
The following steps are executed in Agile BDD −
The developers and the product owner collaboratively write pending specifications in a plain text editor.
The product owner specifies the behaviors they expect from the system.
The developers
Fill the specifications with these behavior details.
Ask questions based on their understanding of the system.
The current system behaviors are considered to see if the new feature will break any of the existing features.
The Agile Manifesto states the following −
We are uncovering better ways of developing software by doing it and helping others do it. Through this work, we have come to value −
Individuals and interactions − over Processes and tools
Working software − over Comprehensive documentation
Customer collaboration − over Contract negotiation
Responding to change − over Following a plan
That is, while there is value in the items on the right, we value the items on the left more.
BDD aligns itself to the Agile manifesto as follows −
When you look at any reference on Behavior Driven Development, you will find the usage of phrases such as “BDD is derived from TDD”, “BDD and TDD”. To know how BDD came into existence, why it is said to be derived from TDD and what is BDD and TDD, you have to have an understanding of TDD.
To start, let us get into the fundamentals of testing. The purpose of testing is to ensure that the system that is built is working as expected. Consider the following example.
Hence, by experience we have learnt that uncovering a defect as and when it is introduced and fixing it immediately would be cost effective. Therefore, there is a necessity of writing test cases at every stage of development and testing. This is what our traditional testing practices have taught us; it is often termed Test-early.

This testing approach is termed the Test-Last approach, as testing is done after the completion of a stage.
The Test-Last approach was followed for quite some time in the software development projects. However, in reality, with this approach, as testing has to wait till the particular stage is completed, often it is overlooked because of −
The delays in the completion of the stage.
Tight time schedules.
Focus on delivery on time, skipping testing.
Further, in the Test-Last approach, Unit testing, that is supposed to be done by the developers is often skipped. The various reasons found are based on the mind-set of the developers −
They are developers and not testers.
Testing is the responsibility of the testers.
They are efficient in coding and their code would not have defects.
This results in −
Compromising on the quality of the product delivered.
Having the accountability for quality on testers only.
High-costs in fixing the defects, post delivery.
Inability to obtain customer satisfaction, which would also mean loss of repeat business, thus effecting credibility.
These factors called for a shift in paradigm, to focus on testing. The result was the Test-First approach.
The Test-First approach replaces the inside-out (write code and then test) to outside-in (write test and then code) way of development.
This approach is incorporated into the following software development methodologies (that are Agile also) −
eXtreme Programming (XP).
Test Driven Development (TDD).
In these methodologies, the developer designs and writes the Unit tests for a code module before writing a single line of the code module. The developer then creates the code module with the goal of passing the Unit test. Thus, these methodologies use Unit testing to drive the development.
The fundamental point to note is that the goal is development based on testing.
Test Driven Development is used to develop the code guided by Unit tests.
Step 1 − Consider a code module that is to be written.
Step 2 − Write a test
Step 3 − Run the test.
The test fails, as the code is still not written. Hence, Step 2 is usually referred to as write a test to fail.
Step 4 − Write minimum code possible to pass the test.
Step 5 − Run all the tests to ensure that they all still pass. Unit tests are automated to facilitate this step.
Step 6 − Refactor.
Step 7 − Repeat Step 1 to Step 6 for the next code module.
Each cycle should be very short, and a typical hour should contain many cycles.
This is also popularly known as the Red-Green-Refactor cycle, where −
Red − Writing a test that fails.
Green − Writing code to pass the test.
Refactor − Remove duplication and improve the code to the acceptable standards.
The steps of a TDD process are illustrated below.
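To make the cycle concrete, here is one Red-Green pass sketched in Python with the unittest module; the function under test is purely illustrative.

import unittest

# Step 4 − the minimum code needed to make the test pass; before this
# function existed, the test below ran Red
def add(a, b):
    return a + b

# Step 2 − the test, written before the implementation
class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()  # Steps 3 and 5 − run all the tests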
The benefits or advantages of Test Driven Development are −
The developer needs to first understand what the desired result should be and how to test it before creating the code.
The code for a component is finished only when the test passes and the code is refactored. This ensures testing and refactoring before the developer moves on to the next test.
As the suite of Unit tests is run after each refactoring, feedback that each component is still working is constant.
The Unit tests act as living documentation that is always up to date.
If a defect is found, the developer creates a test to reveal that defect and then modifies the code so that the test passes and the defect is fixed. This reduces the debugging time. All the other tests are also run and when they pass, it ensures that the existing functionality is not broken.
The developer can make design decisions and refactor at any time and the running of the tests ensures that the system is still working. This makes the software maintainable.
The developer has the confidence to make any change since if the change impacts any existing functionality, the same is revealed by running the tests and the defects can be fixed immediately.
On each successive test run, all the previous defect fixes are also verified and the repetition of the same defect is reduced.
As most of the testing is done during the development itself, the testing before delivery is shortened.
The starting point is User Stories, describing the behavior of the system. Hence, the developers often face the following questions −
When to test?
What to test?
How to know if a specification is met?
Does the code deliver business value?
The following misconceptions exist in the industry and need clarifications.
TDD is a development methodology, and after every new Unit Test passes, it is added to the Automation Test Suite, as all the tests need to be run whenever new code is added or existing code is modified, and also after every refactoring.
Thus, Test Automation Tools supporting TDD facilitate this process.
Acceptance Test Driven Development (ATDD) defines Acceptance Criteria and Acceptance Tests during the creation of User Stories, early in development. ATDD focuses on the communication and common understanding among the customers, developers and the testers.
The Key practices in ATDD are as follows −
Discuss real-world scenarios to build a shared understanding of the domain.
Use those scenarios to arrive at acceptance criteria.
Automate Acceptance tests.
Focus the development on those tests.
Use the tests as a live specification to facilitate change.
The benefits of using ATDD are as follows −
Requirements are unambiguous and without functional gaps.
Others understand the special cases that the developers foresee.
The Acceptance tests guide the development.
According to Dan North, programmers normally face the following problems while performing Test Driven Development −
Where to start
What to test and what not to test
How much to test in one go
What to call their tests
How to understand why a test fails
The solution to all these problems is Behavior Driven Development. It has evolved out of the established agile practices and is designed to make them more accessible and effective for teams, new to agile software delivery. Over time, BDD has grown to encompass the wider picture of agile analysis and automated acceptance testing.
The main difference between TDD and BDD is that −
TDD describes how the software works.
On the other hand, BDD −
Describes how the end user uses the software.
Fosters collaboration and communication.
Emphasizes examples of the behavior of the system.
Aims at the executable specifications derived from the examples.
In TDD, the term “Acceptance Tests” is misleading. Acceptance tests actually represent the expected behavior of the system. In Agile practices, collaboration of the whole team and interactions with the customer and other stakeholders is emphasized. This has given rise to the necessity of usage of terms that are easily understood by everyone involved in the project.
TDD makes you think about the required Behavior and hence the term ‘Behavior’ is more useful than the term ‘Test’. BDD is Test Driven Development with a vocabulary that focuses on behavior and not tests.
In the words of Dan North, “I found the shift from thinking in tests to thinking in behavior so profound that I started to refer to TDD as BDD, or Behavior Driven Development.” TDD focuses on how something will work, BDD focuses on why we build it at all.
BDD answers the following questions often faced by the developers −
These answers result in the story framework as follows −
Story Framework
As a [Role]
I want [Feature]
so that [Benefit]
This means, ‘When a Feature is executed, the resulting Benefit is to the Person playing the Role.’
BDD further answers the following questions −
These answers result in the Example framework as follows −
Example Framework
Given some initial context,
When an event occurs,
Then ensure some outcomes.
This means, ‘Starting with the initial context, when a particular event happens, we know what the outcomes should be.’
Thus, the example shows the expected behavior of the system. The examples are used to illustrate different scenarios of the system.
Let us consider the following illustration by Dan North about an ATM system.
As a customer,
I want to withdraw cash from an ATM,
so that I do not have to wait in line at the bank.
There are two possible scenarios for this story.
Scenario 1 − Account is in credit
Given the account is in credit
And the card is valid
And the dispenser contains cash
When the customer requests cash
Then ensure the account is debited
And ensure cash is dispensed
And ensure the card is returned
Scenario 2 − Account is overdrawn past the overdraft limit
Given the account is overdrawn
And the card is valid
When the customer requests cash
Then ensure a rejection message is displayed
And ensure cash is not dispensed
And ensure the card is returned
The event is the same in both scenarios, but the context is different. Hence, the outcomes are different.
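To illustrate how such scenarios become executable specifications, below is a minimal sketch of step definitions for Scenario 1 using behave, a Python BDD framework. The Account class and the amounts are hypothetical.

from behave import given, when, then

class Account:
    def __init__(self, balance):
        self.balance = balance

@given("the account is in credit")
def step_account_in_credit(context):
    context.account = Account(balance=100)

@given("the card is valid")
def step_card_is_valid(context):
    context.card_valid = True

@given("the dispenser contains cash")
def step_dispenser_contains_cash(context):
    context.dispenser_cash = 500

@when("the customer requests cash")
def step_customer_requests_cash(context):
    # a toy withdrawal of a fixed amount, for illustration only
    context.requested = 50
    context.account.balance -= context.requested
    context.dispensed = context.requested
    context.card_returned = True

@then("ensure the account is debited")
def step_account_is_debited(context):
    assert context.account.balance == 50

@then("ensure cash is dispensed")
def step_cash_is_dispensed(context):
    assert context.dispensed > 0

@then("ensure the card is returned")
def step_card_is_returned(context):
    assert context.card_returned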
The Development Cycle for BDD is an outside-in approach.
Step 1 − Write a high-level (outside) business value example (using Cucumber or RSpec/Capybara) that goes red. (RSpec provides a BDD framework in the Ruby language)
Step 2 − Write a lower-level (inside) RSpec example for the first step of implementation that goes red.
Step 3 − Implement the minimum code to pass that lower-level example, see it go green.
Step 4 − Write the next lower-level RSpec example pushing towards passing Step 1 that goes red.
Step 5 − Repeat steps Step 3 and Step 4 until the high-level example in Step 1 goes green.
Note − The following points should be kept in mind −
Red/green state is a permission status.
When your low-level tests are green, you have the permission to write new examples or refactor existing implementation. You must not, in the context of refactoring, add new functionality/flexibility.
When your low-level tests are red, you have permission to write or change implementation code only for making the existing tests go green. You must resist the urge to write the code to pass your next test, which does not exist, or implement features you may think are good (customer would not have asked).
According to Gojko Adzic, the author of ‘Specification by Example’, “Specification by Example is a set of process patterns that facilitate change in software products to ensure that the right product is delivered efficiently.”
Specification by Example is a collaborative approach to define the requirements and business-oriented functional tests for software products based on capturing and illustrating requirements using realistic examples instead of abstract statements.
The objective of Specification by Example is to focus on development and delivery of prioritized, verifiable, business requirements. While the concept of Specification by Example in itself is relatively new, it is simply a rephrasing of existing practices.
It supports a very specific, concise vocabulary known as ubiquitous language that −
Enables executable requirements.
Is used by everyone in the team.
Is created by a cross-functional team.
Captures everyone's understanding.
Specification by Example can be used as a direct input into building Automated tests that reflect the business domain. Thus, the focus of Specification by Example is on building the right product and building the product right.
The primary aim of Specification by Example is to build the right product. It focuses on shared understanding, thus establishing a single source of truth. It enables automation of acceptance criteria so that focus is on defect prevention rather than defect detection. It also promotes test early to find the defects early.
Specification by Example is used to illustrate the expected system behavior that describes business value. The illustration is by means of concrete and real life examples. These examples are used to create executable requirements that are −
Testable without translation.
Captured in live documentation.
Following are the reasons why we use examples to describe particular specifications −
They are easier to understand.
They are harder to misinterpret.
The advantages of using Specification by Example are −
Increased quality
Reduced waste
Reduced risk of production defects
Focused effort
Changes can be made more safely
Improved business involvement
Specification by Example finds applications in −
Either complex business or complex organization.
Does not work well for purely technical problems.
Does not work well for UI focused software products.
Can be applied to legacy systems as well.
The advantages of Specification by Example in terms of Acceptance testing are −
One single illustration is used for both detailed requirements and testing.
Progress of the project is in terms of Acceptance tests −
Each test is to test a behavior.
A test is either passing for a behavior or it is not.
A passing test represents that the particular behavior is completed.
If a project that requires 100 behaviors to be completed has 60 behaviors completed, then it is 60% finished.
Testers switch from defect fixing to defect prevention, and they contribute to the design of the solution.
Automation allows instant understanding of the impact of a requirement change on the solution.
The objective of Specification by Example is to promote collaboration of everyone in the team, including the customer, throughout the project to deliver business value. For better understandability, everyone uses the same vocabulary.
Requirements are unambiguous and without functional gaps.
Developers actually read the specifications.
Developers understand better what is being developed.
Development progress is tracked better by counting the specifications that have been developed correctly.
Testers understand better what is being tested.
Testers are involved from the beginning and have a role in the design.
Testers work toward defect prevention rather than defect detection.
Time is saved by identifying errors from the beginning.
A quality product is produced from the beginning.
As we have seen in the beginning of this chapter, Specification by Example is defined as a set of process patterns that facilitate change in software products to ensure that the right product is delivered efficiently.
The process patterns are −
Collaborative specification
Illustrating specifications using examples
Refining the specification
Automating examples
Validating frequently
Living documentation
The objectives of collaborative specification are to −
Get the various roles in a team to have a common understanding and a shared vocabulary.
Get everyone involved in the project so that they can contribute their different perspectives about a feature.
Ensure shared communication and ownership of the features.
These objectives are met in a specification workshop also known as the Three Amigos meeting. The Three Amigos are BA, QA and the developer. Though there are other roles in the project, these three would be responsible and accountable from definition to the delivery of the features.
During the meeting −
The Business Analyst (BA) presents the requirements and tests for a new feature.
The three Amigos (BA, Developer, and QA) discuss the new feature and review the specifications.
The QA and developer also identify the missing requirements.
The three Amigos
Utilize a shared model using a ubiquitous language.
Use domain vocabulary (a glossary is maintained if required).
Look for differences and conflicts.
Do not jump to implementation details at this point.
Reach a consensus about whether a feature was specified sufficiently.
A shared sense of requirements and test ownership facilitates quality specifications.
The requirements are presented as scenarios, which provide explicit, unambiguous requirements. A scenario is an example of the system’s behavior from the users’ perspective.
Scenarios are specified using the Given-When-Then structure to create a testable specification −
Given <some precondition>
And <additional preconditions> (optional)
When <an action/trigger occurs>
Then <some post condition>
And <additional post conditions> (optional)
This specification is an example of a behavior of the system. It also represents an Acceptance criterion of the system.
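For instance, a hypothetical discount rule could be captured as follows (the feature and figures here are illustrative, not from a real product) −
Scenario − Loyalty discount at checkout
Given a customer has a loyalty card
And the cart total is $100
When the customer checks out
Then a 10% discount should be applied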
The team discusses the examples and the feedback is incorporated until there is agreement that the examples cover the feature's expected behavior. This ensures good test coverage.
To refine a specification,
Be precise in writing the examples. If an example turns to be complex, split it into simpler examples.
Focus on business perspective and avoid technical details.
Consider both positive and negative conditions.
Adhere to the domain specific vocabulary.
Discuss the examples with the customer.
Choose conversations to accomplish this.
Consider only those examples that the customer is interested in. This enables production of the required code only and avoids covering every possible combination that may not be required.
To ensure that the scenario passes, all the test cases for that scenario must pass. Hence, enhance the specifications to make them testable. The test cases can include various ranges and data values (boundary and corner cases) as well as different business rules resulting in changes in data.
Specify additional business rules such as complex calculations, data manipulation / transformation, etc.
Include non-functional scenarios (e.g. performance, load, usability, etc.) as Specification by Example.
The automation layer needs to be kept very simple, just wiring the specification to the system under test. You can use a tool for this purpose.
Perform testing automation using a Domain Specific Language (DSL) and show a clear connection between inputs and outputs. Focus on the specification, not the script. Ensure that the tests are precise, easy to understand and testable.
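As a minimal sketch of such a thin automation layer, the following behave (Python) step definitions wire the loyalty-discount scenario above to the system under test; the discount_for function is a hypothetical stand-in for real domain code −
from behave import given, when, then

# Hypothetical domain code; in a real project this would be imported
# from the application under test.
def discount_for(total, has_loyalty_card):
    return total * 0.10 if has_loyalty_card else 0.0

@given("a customer has a loyalty card")
def step_loyalty_card(context):
    context.has_card = True

@given("the cart total is $100")
def step_cart_total(context):
    context.total = 100.0

@when("the customer checks out")
def step_checkout(context):
    # The step only delegates; no business logic lives in the binding.
    context.discount = discount_for(context.total, context.has_card)

@then("a 10% discount should be applied")
def step_discount_applied(context):
    assert context.discount == 10.0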
Include example validation in your development pipeline with every change (addition/modification). There are many techniques and tools that can (and should) be adopted to help ensure the quality of a product. They revolve around three key principles − Test Early, Test Well and Test Often.
Execute the tests frequently so that you can identify the weak links. The examples representing the behaviors help track the progress and a behavior is said to be complete only after the corresponding test passes.
Keep the specifications as simple and short as possible. Organize the specifications and evolve them as work progresses. Make the documentation accessible for all in the team.
The illustration shows the process steps in Specification by Example.
Anti-patterns are certain patterns in software development that are considered bad programming practice. As opposed to design patterns, which are common approaches to common problems that have been formalized and are generally considered good development practice, anti-patterns are the opposite and are undesirable.
Anti-patterns give rise to various problems.
Many assumptions
Building the wrong thing
Testing the wrong thing
Unaware when code is finished
Hard to maintain tests
Hard to understand specifications
Loss of interest from business representatives
Teams think they have failed and get disappointed early
Quality can be ensured by keeping a watch on the anti-patterns. To minimize the problems created by anti-patterns, you should −
Get together to specify using examples.
Clean up and improve the examples.
Write code that satisfies the examples.
Automate the examples and deploy.
Repeat the approach for every user story.
Solving the problems due to anti-patterns means adhering to −
Collaboration.
Focusing on what.
Focusing on Business.
Be prepared.
Let us understand what each of the above means.
In collaboration −
Business people, developers and testers give input from their own perspectives.
Automated examples prove that the team has built the correct thing.
The process is more valuable than the tests themselves.
You must focus on the question ‘what’. While focusing on ‘what’ −
Do not try to cover all the possible cases.
Do not forget to use different kinds of tests.
Keep examples as simple as possible.
Examples should be easily understandable by the users of the system.
Tools should not play an important part in the workshops.
To focus on the business −
Keep specification at business intent.
Include business in creating and reviewing specs.
Hide all the details in the automation layer.
Be prepared for the following −
Benefits are not immediately apparent, even while the team practices are changed.
Introducing SbE is challenging.
Requires time and investments.
Automated testing does not come free.
Use of tools is not mandatory for Specification by Example, though in practice several tools are available. There are successful cases of following Specification by Example even without using a tool.
The following tools support Specification by Example −
Cucumber
SpecFlow
FitNesse
JBehave
Concordion
Behat
Jasmine
Relish
Speclog
The development teams often have a misconception that BDD is a tool framework. In reality, BDD is a development approach rather than a tool framework. However, as in the case of other development approaches, there are tools for BDD also.
Several BDD Tools are in use for different platforms and programming languages. They are −
Cucumber (Ruby framework)
SpecFlow (.NET framework)
Behave (Python framework)
JBehave (Java framework)
JBehave Web (Java framework with Selenium integration)
Lettuce (Python framework)
Concordion (Java framework)
Behat (PHP framework)
Kahlan (PHP framework)
DaSpec (JavaScript framework)
Jasmine (JavaScript framework)
Cucumber-js (JavaScript framework)
Squish GUI Tester (BDD GUI Testing Tool for JavaScript, Python, Perl, Ruby and Tcl)
Spock (Groovy framework)
Yadda (Gherkin language support for frameworks such as Jasmine (JavaScript framework))
Cucumber is a free tool for executable specifications used globally. Cucumber lets software development teams describe how software should behave in plain text. The text is written in a business-readable, domain-specific language and serves as documentation, automated tests and development aid, all rolled into one format. You can use over forty different spoken languages (English, Chinese, etc.) with Cucumber.
The key features of Cucumber are as follows −
Cucumber can be used for Executable Specifications, Test Automation and Living Documentation.
Cucumber works with Ruby, Java, .NET, Flex or web applications written in any language.
Cucumber supports more succinct tests in tables, similar to what FIT does.
Cucumber has revolutionized the Software Development Life Cycle by melding requirements, automated testing and documentation into a cohesive whole: plain text executable specifications that validate the software.
SpecFlow is a BDD Tool for .NET Platform. SpecFlow is an open-source project. The source code is hosted on GitHub.
SpecFlow uses Gherkin Syntax for Features. The Gherkin format was introduced by Cucumber and is also used by other tools. The Gherkin language is maintained as a project on GitHub − https://github.com/cucumber/gherkin
Behave is a BDD framework for Python.
Behave works with three types of files stored in a directory called “features” −
Feature files with your behavior scenarios in them.
A “steps” directory with Python step implementations for the scenarios.
Optionally, some environmental controls (code to run before and after steps, scenarios, features or the whole shooting match).
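A typical layout for such a “features” directory (a sketch; the file names are hypothetical) looks like this −
features/
    signup.feature         # behavior scenarios written in Gherkin
    steps/
        signup_steps.py    # Python step implementations
    environment.py         # optional environmental controls (hooks)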
Behave features are written using Gherkin (with some modifications) and are named “name.feature”.
The tags attached to a feature and scenario are available in the environment functions via the “feature” or “scenario” object passed to them. On those objects there is an attribute called “tags” which is a list of the tag names attached, in the order they are found in the features file.
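For example, a minimal environment.py could act on those tags (a sketch assuming a reasonably recent behave version; the @skip tag convention is hypothetical) −
# environment.py -- minimal behave environmental controls.

def before_scenario(context, scenario):
    # scenario.tags holds the tag names from the feature file,
    # in the order they appear.
    if "skip" in scenario.tags:
        scenario.skip("Marked with @skip in the feature file")

def after_feature(context, feature):
    print("Finished feature:", feature.name, "tags:", list(feature.tags))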
Modifications to the Gherkin Standard −
Behave can parse standard Gherkin files and extends Gherkin to allow lowercase step keywords, because these can sometimes allow more readable feature specifications.
Lettuce is a very simple BDD tool based on Cucumber. It can execute plain-text functional descriptions as automated tests for Python projects. Lettuce aims at the most common tasks in BDD.
Concordion is an open source tool for automating Specification by Example for Java Framework.
While the core features are simple, the powerful extension framework API allows you to add functionality, such as using Excel spreadsheets as specifications, adding screenshots to the output, displaying logging information, etc.
Concordion lets you write the specifications in normal language using paragraphs, tables and proper punctuation; the structured language using Given/When/Then is not necessary.
Concordion has been ported to other languages including −
C# (Concordion.NET)
Python (PyConcordion)
Ruby (Ruby-Concordion)
Cucumber is a tool that supports Executable specifications, Test automation, and Living documentation.
Behavior Driven Development expands on Specification by Example. It also formalizes the Test-Driven Development best practices, in particular, the perspective of working from the outside-in. The development work is based on executable specifications.
The key features of executable specifications are as follows −
Executable Specifications are −
Derived from examples that represent the behaviors of the system.
Written with collaboration of all involved in the development, including business and stakeholders.
Based on acceptance criteria.
Acceptance tests that are based on the executable specifications are automated.
A shared, ubiquitous language is used to write the executable specifications and the automated tests such that −
Domain-specific terminology is used throughout the development.
Everyone, including the customers and the stakeholders, speaks about the system, its requirements and its implementation in the same way.
The same terms are used to discuss the system in the requirements, design documents, code, tests, etc.
Anyone can read and understand a requirement and how to generate more requirements.
Changes can be easily accommodated.
Live documentation is maintained.
Cucumber helps with this process since it ties together the executable specifications with the actual code of the system and the automated acceptance tests.
The way it does this is actually designed to get the customers and developers working together. When an acceptance test passes, it means that the specification of the behavior of the system that it represents has been implemented correctly.
Consider the following example.
Feature − Sign up
Sign up should be quick and friendly.
Scenario − Successful sign up
New users should get a confirmation e-mail and be greeted personally.
Given I have chosen to sign up.
When I sign up with valid details.
Then I should receive a confirmation email.
And I should see a personalized greeting message.
From this example, we can see that −
Acceptance tests refer to Features.
Features are explained by Scenarios.
Scenarios consist of Steps.
The specification is written in a natural language in a plain text file, but it is executable.
Cucumber is a command line tool that processes text files containing the features looking for scenarios that can be executed against your system. Let us understand how Cucumber works.
It makes use of a bunch of conventions about how the files are named and where they are located (the respective folders) to make it easy to get started.
Cucumber lets you keep specifications, automated tests and documentation in the same place.
Each scenario is a list of steps that describe the pre-conditions, actions, and post-conditions of the scenario; if each step executes without any error, the scenario is marked as passed.
At the end of a run, Cucumber will report how many scenarios passed.
If something fails, it provides information about what failed so that the developer can progress.
In Cucumber, Features, Scenarios, and Steps are written in a Language called Gherkin.
Gherkin is plain-text English (or one of 60+ other languages) with a structure. Gherkin is easy to learn and its structure allows you to write examples in a concise manner.
Cucumber executes your files that contain executable specifications written in Gherkin.
Cucumber needs Step Definitions to translate plain-text Gherkin Steps into actions that will interact with the system.
When Cucumber executes a step in a scenario, it will look for a matching step definition to execute.
A Step Definition is a small piece of code with a pattern attached to it.
The pattern is used to link the Step Definition to all the matching steps, and the code is what Cucumber will execute when it sees a Gherkin step.
Each step is accompanied by a Step Definition.
Most steps will gather input and then delegate to a framework that is specific to your application domain in order to make calls on your framework.
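Although Cucumber's reference implementation is in Ruby, the Step Definition idea is the same across platforms. Here is a sketch in Python with behave for the sign-up scenario above; the sign_up helper is a hypothetical stand-in for real application code −
from behave import given, when, then

# Hypothetical application code standing in for the real domain layer.
def sign_up(email):
    return {"confirmation_sent": True, "greeting": "Welcome, " + email + "!"}

@given("I have chosen to sign up.")
def step_chosen_to_sign_up(context):
    context.email = "alice@example.com"

@when("I sign up with valid details.")
def step_sign_up(context):
    # Gather input, then delegate to the application code.
    context.result = sign_up(context.email)

@then("I should receive a confirmation email.")
def step_confirmation_email(context):
    assert context.result["confirmation_sent"]

@then("I should see a personalized greeting message.")
def step_greeting(context):
    assert "Welcome" in context.result["greeting"]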
Cucumber supports over a dozen different software platforms. You can choose the Cucumber implementation that works for you. Every Cucumber implementation provides the same overall functionality and they also have their own installation procedure and platform-specific functionality.
The key to Cucumber is the mapping between Steps and Step Definitions.
Gherkin is a language that is used to write Features, Scenarios, and Steps. The purpose of Gherkin is to help us write concrete requirements.
To understand what we mean by concrete requirements, consider the following example −
Customers should be prevented from entering invalid credit card details.
If a customer enters a credit card number that is not exactly 16 digits long, when they try to submit the form, it should be redisplayed with an error message advising them of the correct number of digits.
The latter has no ambiguity and avoids errors and is much more testable.
Gherkin is designed to create requirements that are more concrete. In Gherkin, the above example looks like −
Feature − Feedback when entering invalid credit card details
In user testing, we have seen many people who make mistakes.
Background (true for all the scenarios below) −
Given I have chosen an item to buy,
And I am about to enter my credit card number
Scenario − Credit card number too short
When I enter a card number that is less than 16 digits long
And all the other details are correct
And I submit the form
Then the form should be redisplayed
And I should see a message advising me of the correct number of digits
Gherkin files are plain text files and have the extension .feature. Each line that is not blank has to start with a Gherkin keyword, followed by any text you like. The keywords are −
Feature
Scenario
Given, When, Then, And, But (Steps)
Background
Scenario Outline
Examples
""" (Doc Strings)
| (Data Tables)
@ (Tags)
# (Comments)
*
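As an illustration of the Doc String and Data Table keywords, consider this hypothetical scenario; the table passes structured rows to a step, and the triple-quoted block passes a multi-line string −
Scenario − List registered users
Given the following users exist:
| name  | email             |
| Alice | alice@example.com |
| Bob   | bob@example.com   |
When I request the user list
Then the response should be:
"""
Alice <alice@example.com>
Bob <bob@example.com>
"""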
The Feature keyword is used to describe a software feature, and to group the related scenarios. A Feature has three basic elements −
The keyword – Feature.
The name of the feature, provided on the same line as the Feature keyword.
An optional (but highly recommended) description that can span multiple lines i.e. all the text between the line containing the keyword Feature, and a line that starts with Scenario, Background, or Scenario Outline.
In addition to a name and a description, Features contain a list of scenarios or scenario outlines, and an optional background.
It is conventional to name a .feature file by taking the name of the Feature, converting it to lowercase and replacing the spaces with underlines. For example,
feedback_when_entering_invalid_credit_card_details.feature
In order to identify Features in your system, you can use what is known as a “feature injection template”.
Some parts of Gherkin documents do not have to start with a keyword.
In the lines following a Feature, scenario, scenario outline or examples, you can write anything you like, as long as no line starts with a keyword. This is the way to include Descriptions.
To express the behavior of your system, you attach one or more scenarios with each Feature. It is typical to see 5 to 20 scenarios per Feature to completely specify all the behaviors around that Feature.
Scenarios follow this pattern −
Describe an initial context
Describe an event
Describe an expected outcome
We start with a context, describe an action, and check the outcome. This is done with steps. Gherkin provides three keywords to describe each of the contexts, actions, and outcomes as steps.
Given − Establish context
When − Perform action
Then − Check outcome
These keywords provide readability of the scenario.
Example
Scenario − Withdraw money from account.
Given I have $100 in my account.
When I request $20.
Then $20 should be dispensed.
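Matching step definitions can capture the dollar amounts as parameters. A sketch in Python with behave (the default parse matcher turns {amount:d} into an integer parameter; the dispensing rule is hypothetical) −
from behave import given, when, then

@given("I have ${balance:d} in my account.")
def step_balance(context, balance):
    context.balance = balance

@when("I request ${amount:d}.")
def step_request(context, amount):
    # Hypothetical rule: dispense only when the balance suffices.
    context.dispensed = amount if amount <= context.balance else 0

@then("${amount:d} should be dispensed.")
def step_dispensed(context, amount):
    assert context.dispensed == amount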
If there are multiple Given or When steps underneath each other, you can use And or But. They allow you to specify scenarios in detail.
Example
Scenario − Attempt withdrawal using stolen card.
Given I have $100 in my account.
But my card is invalid.
When I request $50.
Then my card should not be returned.
And I should be told to contact the bank.
While creating scenarios, remember that each scenario must make sense and be able to be executed independently of any other scenario. This means −
You cannot have the success condition of one scenario depend on the fact that some other scenario was executed before it.
Each scenario creates its particular context, executes one thing, and tests the result.
Such scenarios provide the following benefits −
Tests will be simpler and easier to understand.
You can run just a subset of your scenarios and you do not have to worry about the breaking of your test set.
Depending on your system, you might be able to run the tests in parallel, reducing the amount of time taken to execute all of your tests.
If you have to write scenarios with several inputs or outputs, you might end up creating several scenarios that only differ by their values. The solution is to use a scenario outline. To write a scenario outline −
Variables in the scenario outline steps are marked up with < and >.
The various values for the variables are given as examples in a table.
Example
Suppose you are writing a Feature for adding two numbers on a calculator.
Feature − Add.
Scenario Outline: Add two numbers.
Given the input "<input>"
When the calculator is run
Then the output should be "<output>"
Examples
| input | output |
| 2+2 | 4 |
| 98+1 | 99 |
| 255+390 | 645 |
A scenario outline section is always followed by one or more sections of examples, which are a container for a table. The table must have a header row corresponding to the variables in the scenario outline steps. Each of the rows below will create a new scenario, filling in the variable values.
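Step definitions for a scenario outline receive the substituted values like ordinary step parameters. A behave (Python) sketch for the calculator outline above, with eval standing in for a real calculator −
from behave import given, when, then

@given('the input "{expression}"')
def step_input(context, expression):
    context.expression = expression

@when("the calculator is run")
def step_run(context):
    # eval() is a stand-in for invoking the real calculator under test.
    context.output = eval(context.expression)

@then('the output should be "{expected:d}"')
def step_output(context, expected):
    assert context.output == expected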
SpecFlow is an open-source project. The source code is hosted on GitHub. The feature files used by SpecFlow to store acceptance criteria for features (use cases, user stories) in your application are defined using the Gherkin syntax.
The Gherkin format was introduced by Cucumber and is also used by other tools. The Gherkin language is maintained as a project on GitHub − https://github.com/cucumber/gherkin
The key features of Feature elements are −
The feature element provides a header for the feature file. The feature element includes the name and a high-level description of the corresponding feature in your application.
SpecFlow generates a unit test class for the feature element, with the class name derived from the name of the feature.
SpecFlow generates executable unit tests from the scenarios that represent acceptance criteria.
A feature file may contain multiple scenarios used to describe the feature's acceptance tests.
Scenarios have a name and can consist of multiple scenario steps.
SpecFlow generates a unit test method for each scenario, with the method name derived from the name of the scenario.
The scenarios can have multiple scenario steps. There are three types of steps that define the preconditions, actions or verification steps, which make up the acceptance test.
The different types of steps begin with either the Given, When or Then keywords respectively, and subsequent steps of the same type can be linked using the And and But keywords.
The Gherkin syntax allows any combination of these three types of steps, but a scenario usually has distinct blocks of Given, When and Then statements.
Scenario steps are defined using text and can have an additional table (called DataTable) or multi-line text (called DocString) argument.
The scenario steps are a primary way to execute any custom code to automate the application.
SpecFlow generates a call inside the unit test method for each scenario step. The call is performed by the SpecFlow runtime that will execute the step definition matching the scenario step.
The matching is done at runtime, so the generated tests can be compiled and executed even if the binding is not yet implemented.
You can include tables and multi-line arguments in scenario steps. These are used by the step definitions and are either passed as additional table or string arguments.
Tags are markers that can be assigned to features and scenarios. Assigning a tag to a feature is equivalent to assigning the tag to all scenarios in the feature file. A tag name with a leading @ denotes a tag.
If supported by the unit test framework, SpecFlow generates categories from the tags.
The generated category name is the same as the tag's name, but without the leading @.
You can filter and group the tests to be executed using these unit test categories. For example, you can tag crucial tests with @important, and then execute these tests more frequently.
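For example, assuming an NUnit-based SpecFlow project, a tagged subset can be run with the standard test-case filter (the exact syntax depends on your runner) −
dotnet test --filter "TestCategory=important"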
The background language element allows specifying a common precondition for all scenarios in a feature file.
The background part of the file can contain one or more scenario steps that are executed before any other steps of the scenarios.
SpecFlow generates a method from the background elements that is invoked from all unit tests generated for the scenarios.
Scenario outlines can be used to define data-driven acceptance tests. The scenario outline always consists of a scenario template specification (a scenario with data placeholders using the <placeholder> syntax) and a set of examples that provide values for the placeholders.
If the unit test framework supports it, SpecFlow generates row-based tests from scenario outlines.
Otherwise, it generates a parameterized unit-test logic method for a scenario outline and an individual unit test method for each example set.
For better traceability, the generated unit-test method names are derived from the scenario outline title and the first value of the examples (first column of the examples table).
It is therefore good practice to choose a unique and descriptive parameter as the first column in the example set.
As the Gherkin syntax does not require all example columns to have matching placeholders in the scenario outline, you can even introduce an arbitrary column in the example sets used to name the tests with more readability.
SpecFlow performs the placeholder substitution as a separate phase before matching the step bindings.
The implementation and the parameters in the step bindings are thus independent of whether they are executed through a direct scenario or a scenario outline.
This allows you to later specify further examples in the acceptance tests without changing the step bindings.
You can add comment lines to the feature files at any place by starting the line with #. Be careful however, as comments in your specification can be a sign that acceptance criteria have been specified wrongly. SpecFlow ignores comment lines.
17 Lectures
52 mins
Shruti Mantri
8 Lectures
23 mins
Ken Burke
8 Lectures
1 hours
Matej Sucha
5 Lectures
1 hours
Matej Sucha
5 Lectures
1 hours
Matej Sucha
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 1860,
"s": 1732,
"text": "Behavior Driven Development (BDD) is a software development process that originally emerged from Test Driven Development (TDD)."
},
{
"code": null,
"e": 2062,
"s": 1860,
"text": "According to Dan North, who is responsible for the evolution of BDD, “BDD is using examples at multiple levels to create a shared understanding and surface uncertainty to deliver software that matter.”"
},
{
"code": null,
"e": 2247,
"s": 2062,
"text": "BDD uses examples to illustrate the behavior of the system that are written in a readable and understandable language for everyone involved in the development. These examples include −"
},
{
"code": null,
"e": 2289,
"s": 2247,
"text": "Converted into executable specifications."
},
{
"code": null,
"e": 2331,
"s": 2289,
"text": "Converted into executable specifications."
},
{
"code": null,
"e": 2361,
"s": 2331,
"text": "Used as the acceptance tests."
},
{
"code": null,
"e": 2391,
"s": 2361,
"text": "Used as the acceptance tests."
},
{
"code": null,
"e": 2432,
"s": 2391,
"text": "Behavior Driven Development focuses on −"
},
{
"code": null,
"e": 2659,
"s": 2432,
"text": "Providing a shared process and shared tools promoting communication to the software developers, business analysts and stakeholders to collaborate on software development, with the aim of delivering product with business value."
},
{
"code": null,
"e": 2886,
"s": 2659,
"text": "Providing a shared process and shared tools promoting communication to the software developers, business analysts and stakeholders to collaborate on software development, with the aim of delivering product with business value."
},
{
"code": null,
"e": 2951,
"s": 2886,
"text": "What a system should do and not on how it should be implemented."
},
{
"code": null,
"e": 3016,
"s": 2951,
"text": "What a system should do and not on how it should be implemented."
},
{
"code": null,
"e": 3061,
"s": 3016,
"text": "Providing better readability and visibility."
},
{
"code": null,
"e": 3106,
"s": 3061,
"text": "Providing better readability and visibility."
},
{
"code": null,
"e": 3205,
"s": 3106,
"text": "Verifying not only the working of the software but also that it meets the customer’s expectations."
},
{
"code": null,
"e": 3304,
"s": 3205,
"text": "Verifying not only the working of the software but also that it meets the customer’s expectations."
},
{
"code": null,
"e": 3467,
"s": 3304,
"text": "The cost to fix a defect increases multifold if the defect is not detected at the right time and fixed as and when it is detected. Consider the following example."
},
{
"code": null,
"e": 3702,
"s": 3467,
"text": "This shows that unless requirements are obtained correctly, it would be expensive to fix the defects resulting from misunderstanding the requirements at a later stage. Further, the end product may not meet the customer’s expectations."
},
{
"code": null,
"e": 3756,
"s": 3702,
"text": "The need of the hour is a development approach that −"
},
{
"code": null,
"e": 3786,
"s": 3756,
"text": "Is based on the requirements."
},
{
"code": null,
"e": 3816,
"s": 3786,
"text": "Is based on the requirements."
},
{
"code": null,
"e": 3868,
"s": 3816,
"text": "Focuses on requirements throughout the development."
},
{
"code": null,
"e": 3920,
"s": 3868,
"text": "Focuses on requirements throughout the development."
},
{
"code": null,
"e": 3959,
"s": 3920,
"text": "Ensures that the requirements are met."
},
{
"code": null,
"e": 3998,
"s": 3959,
"text": "Ensures that the requirements are met."
},
{
"code": null,
"e": 4120,
"s": 3998,
"text": "A development approach that can take care of the above-mentioned requirements is BDD. Thus, Behavior Driven Development −"
},
{
"code": null,
"e": 4184,
"s": 4120,
"text": "Derives examples of different expected behaviors of the system."
},
{
"code": null,
"e": 4248,
"s": 4184,
"text": "Derives examples of different expected behaviors of the system."
},
{
"code": null,
"e": 4417,
"s": 4248,
"text": "Enables writing the examples in a language using the business domain terms to ensure easy understanding by everyone involved in the development including the customers."
},
{
"code": null,
"e": 4586,
"s": 4417,
"text": "Enables writing the examples in a language using the business domain terms to ensure easy understanding by everyone involved in the development including the customers."
},
{
"code": null,
"e": 4672,
"s": 4586,
"text": "Gets the examples ratified with customer from time to time by means of conversations."
},
{
"code": null,
"e": 4758,
"s": 4672,
"text": "Gets the examples ratified with customer from time to time by means of conversations."
},
{
"code": null,
"e": 4834,
"s": 4758,
"text": "Focuses on the customer requirements (examples) throughout the development."
},
{
"code": null,
"e": 4910,
"s": 4834,
"text": "Focuses on the customer requirements (examples) throughout the development."
},
{
"code": null,
"e": 4945,
"s": 4910,
"text": "Uses examples as acceptance tests."
},
{
"code": null,
"e": 4980,
"s": 4945,
"text": "Uses examples as acceptance tests."
},
{
"code": null,
"e": 5016,
"s": 4980,
"text": "The two main practices of BDD are −"
},
{
"code": null,
"e": 5047,
"s": 5016,
"text": "Specification by Example (SbE)"
},
{
"code": null,
"e": 5078,
"s": 5047,
"text": "Specification by Example (SbE)"
},
{
"code": null,
"e": 5108,
"s": 5078,
"text": "Test Driven Development (TDD)"
},
{
"code": null,
"e": 5138,
"s": 5108,
"text": "Test Driven Development (TDD)"
},
{
"code": null,
"e": 5279,
"s": 5138,
"text": "Specification by Example (SbE) uses examples in conversations to illustrate the business rules and the behavior of the software to be built."
},
{
"code": null,
"e": 5449,
"s": 5279,
"text": "Specification by Example enables the product owners, business analysts, testers and the developers to eliminate common misunderstandings about the business requirements."
},
{
"code": null,
"e": 5560,
"s": 5449,
"text": "Test Driven Development, in the context of BDD, turns examples into human readable, executable specifications."
},
{
"code": null,
"e": 5809,
"s": 5560,
"text": "The developers use these specifications as a guide to implement increments of new functionality. This, results in a lean codebase and a suite of automated regression tests that keep the maintenance costs low throughout the lifetime of the software."
},
{
"code": null,
"e": 5924,
"s": 5809,
"text": "In Agile software development, BDD method is used to come to a common understanding on the pending specifications."
},
{
"code": null,
"e": 5972,
"s": 5924,
"text": "The following steps are executed in Agile BDD −"
},
{
"code": null,
"e": 6078,
"s": 5972,
"text": "The developers and the product owner collaboratively write pending specifications in a plain text editor."
},
{
"code": null,
"e": 6184,
"s": 6078,
"text": "The developers and the product owner collaboratively write pending specifications in a plain text editor."
},
{
"code": null,
"e": 6255,
"s": 6184,
"text": "The product owner specifies the behaviors they expect from the system."
},
{
"code": null,
"e": 6326,
"s": 6255,
"text": "The product owner specifies the behaviors they expect from the system."
},
{
"code": null,
"e": 6455,
"s": 6326,
"text": "The developers\n\nFill the specifications with these behavior details.\nAsk questions based on their understanding of the system.\n\n"
},
{
"code": null,
"e": 6470,
"s": 6455,
"text": "The developers"
},
{
"code": null,
"e": 6523,
"s": 6470,
"text": "Fill the specifications with these behavior details."
},
{
"code": null,
"e": 6576,
"s": 6523,
"text": "Fill the specifications with these behavior details."
},
{
"code": null,
"e": 6634,
"s": 6576,
"text": "Ask questions based on their understanding of the system."
},
{
"code": null,
"e": 6692,
"s": 6634,
"text": "Ask questions based on their understanding of the system."
},
{
"code": null,
"e": 6803,
"s": 6692,
"text": "The current system behaviors are considered to see if the new feature will break any of the existing features."
},
{
"code": null,
"e": 6914,
"s": 6803,
"text": "The current system behaviors are considered to see if the new feature will break any of the existing features."
},
{
"code": null,
"e": 6957,
"s": 6914,
"text": "The Agile Manifesto states the following −"
},
{
"code": null,
"e": 7091,
"s": 6957,
"text": "We are uncovering better ways of developing software by doing it and helping others do it. Through this work, we have come to value −"
},
{
"code": null,
"e": 7147,
"s": 7091,
"text": "Individuals and interactions − over Processes and tools"
},
{
"code": null,
"e": 7203,
"s": 7147,
"text": "Individuals and interactions − over Processes and tools"
},
{
"code": null,
"e": 7255,
"s": 7203,
"text": "Working software − over Comprehensive documentation"
},
{
"code": null,
"e": 7307,
"s": 7255,
"text": "Working software − over Comprehensive documentation"
},
{
"code": null,
"e": 7358,
"s": 7307,
"text": "Customer collaboration − over Contract negotiation"
},
{
"code": null,
"e": 7409,
"s": 7358,
"text": "Customer collaboration − over Contract negotiation"
},
{
"code": null,
"e": 7454,
"s": 7409,
"text": "Responding to change − over Following a plan"
},
{
"code": null,
"e": 7499,
"s": 7454,
"text": "Responding to change − over Following a plan"
},
{
"code": null,
"e": 7593,
"s": 7499,
"text": "That is, while there is value in the items on the right, we value the items on the left more."
},
{
"code": null,
"e": 7647,
"s": 7593,
"text": "BDD aligns itself to the Agile manifesto as follows −"
},
{
"code": null,
"e": 7937,
"s": 7647,
"text": "When you look at any reference on Behavior Driven Development, you will find the usage of phrases such as “BDD is derived from TDD”, “BDD and TDD”. To know how BDD came into existence, why it is said to be derived from TDD and what is BDD and TDD, you have to have an understanding of TDD."
},
{
"code": null,
"e": 8114,
"s": 7937,
"text": "To start, let us get into the fundamentals of testing. The purpose of testing is to ensure that the system that is built is working as expected. Consider the following example."
},
{
"code": null,
"e": 8452,
"s": 8114,
"text": "Hence, by experience we have learnt that uncovering a defect as and when it is introduced and fixing it immediately would be cost effective. Therefore, there is a necessity of writing test cases at every stage of development and testing. This is what our traditional testing practices have taught us, which is often termed as Test-early."
},
{
"code": null,
"e": 8562,
"s": 8452,
"text": "This testing approach is termed as the Test-Last approach as testing is done after the completion of a stage."
},
{
"code": null,
"e": 8796,
"s": 8562,
"text": "The Test-Last approach was followed for quite some time in the software development projects. However, in reality, with this approach, as testing has to wait till the particular stage is completed, often it is overlooked because of −"
},
{
"code": null,
"e": 8839,
"s": 8796,
"text": "The delays in the completion of the stage."
},
{
"code": null,
"e": 8882,
"s": 8839,
"text": "The delays in the completion of the stage."
},
{
"code": null,
"e": 8904,
"s": 8882,
"text": "Tight time schedules."
},
{
"code": null,
"e": 8926,
"s": 8904,
"text": "Tight time schedules."
},
{
"code": null,
"e": 8971,
"s": 8926,
"text": "Focus on delivery on time, skipping testing."
},
{
"code": null,
"e": 9016,
"s": 8971,
"text": "Focus on delivery on time, skipping testing."
},
{
"code": null,
"e": 9202,
"s": 9016,
"text": "Further, in the Test-Last approach, Unit testing, that is supposed to be done by the developers is often skipped. The various reasons found are based on the mind-set of the developers −"
},
{
"code": null,
"e": 9239,
"s": 9202,
"text": "They are developers and not testers."
},
{
"code": null,
"e": 9276,
"s": 9239,
"text": "They are developers and not testers."
},
{
"code": null,
"e": 9322,
"s": 9276,
"text": "Testing is the responsibility of the testers."
},
{
"code": null,
"e": 9368,
"s": 9322,
"text": "Testing is the responsibility of the testers."
},
{
"code": null,
"e": 9436,
"s": 9368,
"text": "They are efficient in coding and their code would not have defects."
},
{
"code": null,
"e": 9504,
"s": 9436,
"text": "They are efficient in coding and their code would not have defects."
},
{
"code": null,
"e": 9522,
"s": 9504,
"text": "This results in −"
},
{
"code": null,
"e": 9576,
"s": 9522,
"text": "Compromising on the quality of the product delivered."
},
{
"code": null,
"e": 9630,
"s": 9576,
"text": "Compromising on the quality of the product delivered."
},
{
"code": null,
"e": 9685,
"s": 9630,
"text": "Having the accountability for quality on testers only."
},
{
"code": null,
"e": 9740,
"s": 9685,
"text": "Having the accountability for quality on testers only."
},
{
"code": null,
"e": 9789,
"s": 9740,
"text": "High-costs in fixing the defects, post delivery."
},
{
"code": null,
"e": 9838,
"s": 9789,
"text": "High-costs in fixing the defects, post delivery."
},
{
"code": null,
"e": 9956,
"s": 9838,
"text": "Inability to obtain customer satisfaction, which would also mean loss of repeat business, thus effecting credibility."
},
{
"code": null,
"e": 10074,
"s": 9956,
"text": "Inability to obtain customer satisfaction, which would also mean loss of repeat business, thus effecting credibility."
},
{
"code": null,
"e": 10181,
"s": 10074,
"text": "These factors called for a shift in paradigm, to focus on testing. The result was the Test-First approach."
},
{
"code": null,
"e": 10317,
"s": 10181,
"text": "The Test-First approach replaces the inside-out (write code and then test) to outside-in (write test and then code) way of development."
},
{
"code": null,
"e": 10425,
"s": 10317,
"text": "This approach is incorporated into the following software development methodologies (that are Agile also) −"
},
{
"code": null,
"e": 10451,
"s": 10425,
"text": "eXtreme Programming (XP)."
},
{
"code": null,
"e": 10477,
"s": 10451,
"text": "eXtreme Programming (XP)."
},
{
"code": null,
"e": 10508,
"s": 10477,
"text": "Test Driven Development (TDD)."
},
{
"code": null,
"e": 10539,
"s": 10508,
"text": "Test Driven Development (TDD)."
},
{
"code": null,
"e": 10830,
"s": 10539,
"text": "In these methodologies, the developer designs and writes the Unit tests for a code module before writing a single line of the code module. The developer then creates the code module with the goal of passing the Unit test. Thus, these methodologies use Unit testing to drive the development."
},
{
"code": null,
"e": 10907,
"s": 10830,
"text": "The fundamental point to note that the goal is development based on testing."
},
{
"code": null,
"e": 10981,
"s": 10907,
"text": "Test Driven Development is used to develop the code guided by Unit tests."
},
{
"code": null,
"e": 11036,
"s": 10981,
"text": "Step 1 − Consider a code module that is to be written."
},
{
"code": null,
"e": 11058,
"s": 11036,
"text": "Step 2 − Write a test"
},
{
"code": null,
"e": 11081,
"s": 11058,
"text": "Step 3 − Run the test."
},
{
"code": null,
"e": 11193,
"s": 11081,
"text": "The test fails, as the code is still not written. Hence, Step 2 is usually referred to as write a test to fail."
},
{
"code": null,
"e": 11248,
"s": 11193,
"text": "Step 4 − Write minimum code possible to pass the test."
},
{
"code": null,
"e": 11361,
"s": 11248,
"text": "Step 5 − Run all the tests to ensure that they all still pass. Unit tests are automated to facilitate this step."
},
{
"code": null,
"e": 11380,
"s": 11361,
"text": "Step 6 − Refactor."
},
{
"code": null,
"e": 11439,
"s": 11380,
"text": "Step 7 − Repeat Step 1 to Step 6 for the next code module."
},
{
"code": null,
"e": 11519,
"s": 11439,
"text": "Each cycle should be very short, and a typical hour should contain many cycles."
},
{
"code": null,
"e": 11589,
"s": 11519,
"text": "This is also popularly known as the Red-Green-Refactor cycle, where −"
},
{
"code": null,
"e": 11622,
"s": 11589,
"text": "Red − Writing a test that fails."
},
{
"code": null,
"e": 11694,
"s": 11655,
"text": "Green − Writing code to pass the test."
},
{
"code": null,
"e": 11813,
"s": 11733,
"text": "Refactor − Remove duplication and improve the code to the acceptable standards."
},
{
"code": null,
"e": 11943,
"s": 11893,
"text": "The steps of a TDD process are illustrated below."
},
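{
"code": null,
"e": null,
"s": null,
"text": "As an illustration only − a minimal sketch of one such cycle in Python, built around a hypothetical slugify function that is not part of the text above −"
},
{
"code": null,
"e": null,
"s": null,
"text": "# Step 2/3 - Write a test to fail: slugify() does not exist yet, so this test goes red.\ndef test_slugify_replaces_spaces_with_hyphens():\n    assert slugify('test driven development') == 'test-driven-development'\n\n# Step 4/5 - Write the minimum code possible to pass the test, then rerun all tests (green).\ndef slugify(text):\n    return text.replace(' ', '-')\n\n# Step 6 - Refactor: tidy the implementation; rerunning the tests confirms behavior is unchanged.\ndef slugify(text):\n    return '-'.join(text.split())"
},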
{
"code": null,
"e": 12003,
"s": 11943,
"text": "The benefits or advantages of Test Driven Development are −"
},
{
"code": null,
"e": 12123,
"s": 12003,
"text": "The developer needs to understand first, what the desired result should be and how to test it before creating the code."
},
{
"code": null,
"e": 12419,
"s": 12243,
"text": "The code for a component is finished only when the test passes and the code is refactored. This ensures testing and refactoring before the developer moves on to the next test."
},
{
"code": null,
"e": 12712,
"s": 12595,
"text": "As the suite of Unit tests is run after each refactoring, feedback that each component is still working is constant."
},
{
"code": null,
"e": 12903,
"s": 12829,
"text": "The Unit tests act as living documentation that is always up to date."
},
{
"code": null,
"e": 13267,
"s": 12977,
"text": "If a defect is found, the developer creates a test to reveal that defect and then modifies the code so that the test passes and the defect is fixed. This reduces the debugging time. All the other tests are also run and when they pass, it ensures that the existing functionality is not broken."
},
{
"code": null,
"e": 13731,
"s": 13557,
"text": "The developer can make design decisions and refactor at any time and the running of the tests ensures that the system is still working. This makes the software maintainable."
},
{
"code": null,
"e": 14097,
"s": 13905,
"text": "The developer has the confidence to make any change since if the change impacts any existing functionality, the same is revealed by running the tests and the defects can be fixed immediately."
},
{
"code": null,
"e": 14412,
"s": 14289,
"text": "On each successive test run, all the previous defect fixes are also verified and the repetition of the same defect is reduced."
},
{
"code": null,
"e": 14639,
"s": 14535,
"text": "As most of the testing is done during the development itself, the testing before delivery is shortened."
},
{
"code": null,
"e": 14877,
"s": 14743,
"text": "The starting point is User Stories, describing the behavior of the system. Hence, the developers often face the following questions −"
},
{
"code": null,
"e": 14891,
"s": 14877,
"text": "When to test?"
},
{
"code": null,
"e": 14919,
"s": 14905,
"text": "What to test?"
},
{
"code": null,
"e": 14972,
"s": 14933,
"text": "How to know if a specification is met?"
},
{
"code": null,
"e": 15049,
"s": 15011,
"text": "Does the code deliver business value?"
},
{
"code": null,
"e": 15163,
"s": 15087,
"text": "The following misconceptions exist in the industry and need clarifications."
},
{
"code": null,
"e": 15400,
"s": 15163,
"text": "TDD is a development methodology, and after every new Unit Test passes, it is added to the Automation Test Suite as all the tests need to be run whenever a new code is added or existing code is modified and also after every refactoring."
},
{
"code": null,
"e": 15468,
"s": 15400,
"text": "Thus, Test Automation Tools supporting TDD facilitate this process."
},
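{
"code": null,
"e": null,
"s": null,
"text": "For instance, Python's standard unittest module can rediscover and rerun the entire suite after every change (the tests directory name below is illustrative) −"
},
{
"code": null,
"e": null,
"s": null,
"text": "import unittest\n\n# Re-run every registered test after each code change or refactoring.\nsuite = unittest.defaultTestLoader.discover('tests')\nunittest.TextTestRunner(verbosity=2).run(suite)"
},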
{
"code": null,
"e": 15726,
"s": 15468,
"text": "Acceptance Test Driven Development (ATDD) defines Acceptance Criteria and Acceptance Tests during the creation of User Stories, early in development. ATDD focuses on the communication and common understanding among the customers, developers and the testers."
},
{
"code": null,
"e": 15769,
"s": 15726,
"text": "The Key practices in ATDD are as follows −"
},
{
"code": null,
"e": 15845,
"s": 15769,
"text": "Discuss real-world scenarios to build a shared understanding of the domain."
},
{
"code": null,
"e": 15975,
"s": 15921,
"text": "Use those scenarios to arrive at acceptance criteria."
},
{
"code": null,
"e": 16056,
"s": 16029,
"text": "Automate Acceptance tests."
},
{
"code": null,
"e": 16121,
"s": 16083,
"text": "Focus the development on those tests."
},
{
"code": null,
"e": 16219,
"s": 16159,
"text": "Use the tests as a live specification to facilitate change."
},
{
"code": null,
"e": 16323,
"s": 16279,
"text": "The benefits of using ATDD are as follows −"
},
{
"code": null,
"e": 16381,
"s": 16323,
"text": "Requirements are unambiguous and without functional gaps."
},
{
"code": null,
"e": 16504,
"s": 16439,
"text": "Others understand the special cases that the developers foresee."
},
{
"code": null,
"e": 16613,
"s": 16569,
"text": "The Acceptance tests guide the development."
},
{
"code": null,
"e": 16773,
"s": 16657,
"text": "According to Dan North, programmers normally face the following problems while performing Test Driven Development −"
},
{
"code": null,
"e": 16788,
"s": 16773,
"text": "Where to start"
},
{
"code": null,
"e": 16837,
"s": 16803,
"text": "What to test and what not to test"
},
{
"code": null,
"e": 16898,
"s": 16871,
"text": "How much to test in one go"
},
{
"code": null,
"e": 16950,
"s": 16925,
"text": "What to call their tests"
},
{
"code": null,
"e": 17010,
"s": 16975,
"text": "How to understand why a test fails"
},
{
"code": null,
"e": 17376,
"s": 17045,
"text": "The solution to all these problems is Behavior Driven Development. It has evolved out of the established agile practices and is designed to make them more accessible and effective for teams, new to agile software delivery. Over time, BDD has grown to encompass the wider picture of agile analysis and automated acceptance testing."
},
{
"code": null,
"e": 17426,
"s": 17376,
"text": "The main difference between TDD and BDD is that −"
},
{
"code": null,
"e": 17464,
"s": 17426,
"text": "TDD describes how the software works."
},
{
"code": null,
"e": 17756,
"s": 17731,
"text": "On the other hand, BDD −"
},
{
"code": null,
"e": 17802,
"s": 17756,
"text": "Describes how the end user uses the software."
},
{
"code": null,
"e": 17889,
"s": 17848,
"text": "Fosters collaboration and communication."
},
{
"code": null,
"e": 17980,
"s": 17930,
"text": "Emphasizes examples of the behavior of the system."
},
{
"code": null,
"e": 18094,
"s": 18030,
"text": "Aims at the executable specifications derived from the examples."
},
{
"code": null,
"e": 18526,
"s": 18158,
"text": "In TDD, the term “Acceptance Tests” is misleading. Acceptance tests actually represent the expected behavior of the system. In Agile practices, collaboration of the whole team and interactions with the customer and other stakeholders is emphasized. This has given rise to the necessity of usage of terms that are easily understood by everyone involved in the project."
},
{
"code": null,
"e": 18730,
"s": 18526,
"text": "TDD makes you think about the required Behavior and hence the term ‘Behavior’ is more useful than the term ‘Test’. BDD is Test Driven Development with a vocabulary that focuses on behavior and not tests."
},
{
"code": null,
"e": 18986,
"s": 18730,
"text": "In the words of Dan North, “I found the shift from thinking in tests to thinking in behavior so profound that I started to refer to TDD as BDD, or Behavior Driven Development.” TDD focuses on how something will work, BDD focuses on why we build it at all."
},
{
"code": null,
"e": 19054,
"s": 18986,
"text": "BDD answers the following questions often faced by the developers −"
},
{
"code": null,
"e": 19111,
"s": 19054,
"text": "These answers result in the story framework as follows −"
},
{
"code": null,
"e": 19127,
"s": 19111,
"text": "Story Framework"
},
{
"code": null,
"e": 19139,
"s": 19127,
"text": "As a [Role]"
},
{
"code": null,
"e": 19156,
"s": 19139,
"text": "I want [Feature]"
},
{
"code": null,
"e": 19174,
"s": 19156,
"text": "so that [Benefit]"
},
{
"code": null,
"e": 19273,
"s": 19174,
"text": "This means, ‘When a Feature is executed, the resulting Benefit is to the Person playing the Role.’"
},
{
"code": null,
"e": 19319,
"s": 19273,
"text": "BDD further answers the following questions −"
},
{
"code": null,
"e": 19378,
"s": 19319,
"text": "These answers result in the Example framework as follows −"
},
{
"code": null,
"e": 19396,
"s": 19378,
"text": "Example Framework"
},
{
"code": null,
"e": 19424,
"s": 19396,
"text": "Given some initial context,"
},
{
"code": null,
"e": 19446,
"s": 19424,
"text": "When an event occurs,"
},
{
"code": null,
"e": 19473,
"s": 19446,
"text": "Then ensure some outcomes."
},
{
"code": null,
"e": 19592,
"s": 19473,
"text": "This means, ‘Starting with the initial context, when a particular event happens, we know what the outcomes should be.’"
},
{
"code": null,
"e": 19724,
"s": 19592,
"text": "Thus, the example shows the expected behavior of the system. The examples are used to illustrate different scenarios of the system."
},
{
"code": null,
"e": 19801,
"s": 19724,
"text": "Let us consider the following illustration by Dan North about an ATM system."
},
{
"code": null,
"e": 19816,
"s": 19801,
"text": "As a customer,"
},
{
"code": null,
"e": 19853,
"s": 19816,
"text": "I want to withdraw cash from an ATM,"
},
{
"code": null,
"e": 19904,
"s": 19853,
"text": "so that I do not have to wait in line at the bank."
},
{
"code": null,
"e": 19953,
"s": 19904,
"text": "There are two possible scenarios for this story."
},
{
"code": null,
"e": 19988,
"s": 19953,
"text": "Scenario 1 − Account is in credit"
},
{
"code": null,
"e": 20019,
"s": 19988,
"text": "Given the account is in credit"
},
{
"code": null,
"e": 20041,
"s": 20019,
"text": "And the card is valid"
},
{
"code": null,
"e": 20073,
"s": 20041,
"text": "And the dispenser contains cash"
},
{
"code": null,
"e": 20105,
"s": 20073,
"text": "When the customer requests cash"
},
{
"code": null,
"e": 20140,
"s": 20105,
"text": "Then ensure the account is debited"
},
{
"code": null,
"e": 20169,
"s": 20140,
"text": "And ensure cash is dispensed"
},
{
"code": null,
"e": 20201,
"s": 20169,
"text": "And ensure the card is returned"
},
{
"code": null,
"e": 20260,
"s": 20201,
"text": "Scenario 2 − Account is overdrawn past the overdraft limit"
},
{
"code": null,
"e": 20291,
"s": 20260,
"text": "Given the account is overdrawn"
},
{
"code": null,
"e": 20313,
"s": 20291,
"text": "And the card is valid"
},
{
"code": null,
"e": 20345,
"s": 20313,
"text": "When the customer requests cash"
},
{
"code": null,
"e": 20390,
"s": 20345,
"text": "Then ensure a rejection message is displayed"
},
{
"code": null,
"e": 20423,
"s": 20390,
"text": "And ensure cash is not dispensed"
},
{
"code": null,
"e": 20455,
"s": 20423,
"text": "And ensure the card is returned"
},
{
"code": null,
"e": 20561,
"s": 20455,
"text": "The event is same in both the scenarios, but the context is different. Hence, the outcomes are different."
},
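{
"code": null,
"e": null,
"s": null,
"text": "A sketch of how these two scenarios could drive automated tests in Python − the Account model and the request_cash function are hypothetical stand-ins for the system under test −"
},
{
"code": null,
"e": null,
"s": null,
"text": "class Account:\n    def __init__(self, balance):\n        self.balance = balance\n\ndef request_cash(account, amount):\n    # The event is identical in both scenarios; the context decides the outcome.\n    if account.balance >= amount:\n        account.balance -= amount\n        return 'cash dispensed'\n    return 'rejection message'\n\ndef test_scenario_1_account_in_credit():\n    account = Account(balance=100)         # Given the account is in credit\n    outcome = request_cash(account, 40)    # When the customer requests cash\n    assert outcome == 'cash dispensed'     # Then ensure cash is dispensed\n    assert account.balance == 60           # And ensure the account is debited\n\ndef test_scenario_2_account_overdrawn():\n    account = Account(balance=-10)         # Given the account is overdrawn\n    outcome = request_cash(account, 40)    # When the customer requests cash\n    assert outcome == 'rejection message'  # Then ensure a rejection message is displayed\n    assert account.balance == -10          # And ensure cash is not dispensed"
},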
{
"code": null,
"e": 20618,
"s": 20561,
"text": "The Development Cycle for BDD is an outside-in approach."
},
{
"code": null,
"e": 20783,
"s": 20618,
"text": "Step 1 − Write a high-level (outside) business value example (using Cucumber or RSpec/Capybara) that goes red. (RSpec produces a BDD framework in the Ruby language)"
},
{
"code": null,
"e": 20887,
"s": 20783,
"text": "Step 2 − Write a lower-level (inside) RSpec example for the first step of implementation that goes red."
},
{
"code": null,
"e": 20974,
"s": 20887,
"text": "Step 3 − Implement the minimum code to pass that lower-level example, see it go green."
},
{
"code": null,
"e": 21070,
"s": 20974,
"text": "Step 4 − Write the next lower-level RSpec example pushing towards passing Step 1 that goes red."
},
{
"code": null,
"e": 21161,
"s": 21070,
"text": "Step 5 − Repeat steps Step 3 and Step 4 until the high-level example in Step 1 goes green."
},
{
"code": null,
"e": 21214,
"s": 21161,
"text": "Note − The following points should be kept in mind −"
},
{
"code": null,
"e": 21254,
"s": 21214,
"text": "Red/green state is a permission status."
},
{
"code": null,
"e": 21494,
"s": 21294,
"text": "When your low-level tests are green, you have the permission to write new examples or refactor existing implementation. You must not, in the context of refactoring, add new functionality/flexibility."
},
{
"code": null,
"e": 22000,
"s": 21694,
"text": "When your low-level tests are red, you have permission to write or change implementation code only for making the existing tests go green. You must resist the urge to write the code to pass your next test, which does not exist, or implement features you may think are good (customer would not have asked)."
},
{
"code": null,
"e": 22532,
"s": 22306,
"text": "According to Gojko Adzic, the author of ‘Specification by Example’, Specification by Example is a set of process patterns that facilitate change in software products to ensure that the right product is delivered efficiently.”"
},
{
"code": null,
"e": 22779,
"s": 22532,
"text": "Specification by Example is a collaborative approach to define the requirements and business-oriented functional tests for software products based on capturing and illustrating requirements using realistic examples instead of abstract statements."
},
{
"code": null,
"e": 23036,
"s": 22779,
"text": "The objective of Specification by Example is to focus on development and delivery of prioritized, verifiable, business requirements. While the concept of Specification by Example in itself is relatively new, it is simply a rephrasing of existing practices."
},
{
"code": null,
"e": 23120,
"s": 23036,
"text": "It supports a very specific, concise vocabulary known as ubiquitous language that −"
},
{
"code": null,
"e": 23153,
"s": 23120,
"text": "Enables executable requirements."
},
{
"code": null,
"e": 23219,
"s": 23186,
"text": "Is used by everyone in the team."
},
{
"code": null,
"e": 23291,
"s": 23252,
"text": "Is created by a cross-functional team."
},
{
"code": null,
"e": 23365,
"s": 23330,
"text": "Captures everyone's understanding."
},
{
"code": null,
"e": 23628,
"s": 23400,
"text": "Specification by Example can be used as a direct input into building Automated tests that reflect the business domain. Thus, the focus of Specification by Example is on building the right product and building the product right."
},
{
"code": null,
"e": 23951,
"s": 23628,
"text": "The primary aim of Specification by Example is to build the right product. It focuses on shared understanding, thus establishing a single source of truth. It enables automation of acceptance criteria so that focus is on defect prevention rather than defect detection. It also promotes test early to find the defects early."
},
{
"code": null,
"e": 24192,
"s": 23951,
"text": "Specification by Example is used to illustrate the expected system behavior that describes business value. The illustration is by means of concrete and real life examples. These examples are used to create executable requirements that are −"
},
{
"code": null,
"e": 24222,
"s": 24192,
"text": "Testable without translation."
},
{
"code": null,
"e": 24284,
"s": 24252,
"text": "Captured in live documentation."
},
{
"code": null,
"e": 24402,
"s": 24316,
"text": "Following are the reasons why we use examples to describe particular specifications −"
},
{
"code": null,
"e": 24433,
"s": 24402,
"text": "They are easier to understand."
},
{
"code": null,
"e": 24497,
"s": 24464,
"text": "They are harder to misinterpret."
},
{
"code": null,
"e": 24585,
"s": 24530,
"text": "The advantages of using Specification by Example are −"
},
{
"code": null,
"e": 24603,
"s": 24585,
"text": "Increased quality"
},
{
"code": null,
"e": 24635,
"s": 24621,
"text": "Reduced waste"
},
{
"code": null,
"e": 24684,
"s": 24649,
"text": "Reduced risk of production defects"
},
{
"code": null,
"e": 24734,
"s": 24719,
"text": "Focused effort"
},
{
"code": null,
"e": 24781,
"s": 24749,
"text": "Changes can be made more safely"
},
{
"code": null,
"e": 24843,
"s": 24813,
"text": "Improved business involvement"
},
{
"code": null,
"e": 24921,
"s": 24873,
"text": "Specification by Example find applications in −"
},
{
"code": null,
"e": 25019,
"s": 24970,
"text": "Works well for either a complex business or a complex organization."
},
{
"code": null,
"e": 25069,
"s": 25019,
"text": "Does not work well for purely technical problems."
},
{
"code": null,
"e": 25172,
"s": 25119,
"text": "Does not work well for UI-focused software products."
},
{
"code": null,
"e": 25267,
"s": 25225,
"text": "Can be applied to legacy systems as well."
},
{
"code": null,
"e": 25389,
"s": 25309,
"text": "The advantages of Specification by Example in terms of Acceptance testing are −"
},
{
"code": null,
"e": 25465,
"s": 25389,
"text": "A single illustration is used for both detailed requirements and testing."
},
{
"code": null,
"e": 25925,
"s": 25867,
"text": "Progress of the project is in terms of Acceptance tests −"
},
{
"code": null,
"e": 25958,
"s": 25925,
"text": "Each test is to test a behavior."
},
{
"code": null,
"e": 26045,
"s": 25991,
"text": "A test is either passing for a behavior or it is not."
},
{
"code": null,
"e": 26168,
"s": 26099,
"text": "A passing test represents that the particular behavior is completed."
},
{
"code": null,
"e": 26347,
"s": 26237,
"text": "If a project that requires 100 behaviors to be completed has 60 behaviors completed, then it is 60% finished."
},
{
"code": null,
"e": 26564,
"s": 26457,
"text": "Testers switch from defect fixing to defect prevention, and they contribute to the design of the solution."
},
{
"code": null,
"e": 26766,
"s": 26671,
"text": "Automation allows instant understanding of the impact of a requirement change on the solution."
},
{
"code": null,
"e": 27089,
"s": 26861,
"text": "The objective of Specification by Example is to promote collaboration of everyone in the team, including the customer throughout the project to deliver business value. Everyone for better understandability uses same Vocabulary."
},
{
"code": null,
"e": 27147,
"s": 27089,
"text": "Requirements are unambiguous and without functional gaps."
},
{
"code": null,
"e": 27251,
"s": 27205,
"text": "Developers actually read the specifications."
},
{
"code": null,
"e": 27352,
"s": 27297,
"text": "Developers understand better what is being developed."
},
{
"code": null,
"e": 27513,
"s": 27407,
"text": "Development progress is tracked better by counting the specifications that have been developed correctly."
},
{
"code": null,
"e": 27668,
"s": 27619,
"text": "Testers understand better what is being tested."
},
{
"code": null,
"e": 27788,
"s": 27717,
"text": "Testers are involved from the beginning and have a role in the design."
},
{
"code": null,
"e": 27927,
"s": 27859,
"text": "Testers work toward defect prevention rather than defect detection."
},
{
"code": null,
"e": 28051,
"s": 27995,
"text": "Time is saved by identifying errors from the beginning."
},
{
"code": null,
"e": 28157,
"s": 28107,
"text": "A quality product is produced from the beginning."
},
{
"code": null,
"e": 28425,
"s": 28207,
"text": "As we have seen in the beginning of this chapter, Specification by Example is defined as a set of process patterns that facilitate change in software products to ensure that the right product is delivered efficiently."
},
{
"code": null,
"e": 28452,
"s": 28425,
"text": "The process patterns are −"
},
{
"code": null,
"e": 28480,
"s": 28452,
"text": "Collaborative specification"
},
{
"code": null,
"e": 28551,
"s": 28508,
"text": "Illustrating specifications using examples"
},
{
"code": null,
"e": 28621,
"s": 28594,
"text": "Refining the specification"
},
{
"code": null,
"e": 28668,
"s": 28648,
"text": "Automating examples"
},
{
"code": null,
"e": 28710,
"s": 28688,
"text": "Validating frequently"
},
{
"code": null,
"e": 28753,
"s": 28732,
"text": "Living documentation"
},
{
"code": null,
"e": 28829,
"s": 28774,
"text": "The objectives of collaborative specification are to −"
},
{
"code": null,
"e": 28917,
"s": 28829,
"text": "Get the various roles in a team to have a common understanding and a shared vocabulary."
},
{
"code": null,
"e": 29116,
"s": 29005,
"text": "Get everyone involved in the project so that they can contribute their different perspectives about a feature."
},
{
"code": null,
"e": 29286,
"s": 29227,
"text": "Ensure shared communication and ownership of the features."
},
{
"code": null,
"e": 29628,
"s": 29345,
"text": "These objectives are met in a specification workshop also known as the Three Amigos meeting. The Three Amigos are BA, QA and the developer. Though there are other roles in the project, these three would be responsible and accountable from definition to the delivery of the features."
},
{
"code": null,
"e": 29649,
"s": 29628,
"text": "During the meeting −"
},
{
"code": null,
"e": 29730,
"s": 29649,
"text": "The Business Analyst (BA) presents the requirements and tests for a new feature."
},
{
"code": null,
"e": 29907,
"s": 29811,
"text": "The three Amigos (BA, Developer, and QA) discuss the new feature and review the specifications."
},
{
"code": null,
"e": 30064,
"s": 30003,
"text": "The QA and developer also identify the missing requirements."
},
{
"code": null,
"e": 30312,
"s": 30295,
"text": "The three Amigos"
},
{
"code": null,
"e": 30364,
"s": 30312,
"text": "Utilize a shared model using a ubiquitous language."
},
{
"code": null,
"e": 30478,
"s": 30416,
"text": "Use domain vocabulary (A glossary is maintained if required)."
},
{
"code": null,
"e": 30576,
"s": 30540,
"text": "Look for differences and conflicts."
},
{
"code": null,
"e": 30665,
"s": 30612,
"text": "Do not jump to implementation details at this point."
},
{
"code": null,
"e": 30788,
"s": 30718,
"text": "Reach a consensus about whether a feature was specified sufficiently."
},
{
"code": null,
"e": 30943,
"s": 30858,
"text": "A shared sense of requirements and test ownership facilitates quality specifications."
},
{
"code": null,
"e": 31202,
"s": 31028,
"text": "The requirements are presented as scenarios, which provide explicit, unambiguous requirements. A scenario is an example of the system’s behavior from the users’ perspective."
},
{
"code": null,
"e": 31473,
"s": 31376,
"text": "Scenarios are specified using the Given-When-Then structure to create a testable specification −"
},
{
"code": null,
"e": 31499,
"s": 31473,
"text": "Given <some precondition>"
},
{
"code": null,
"e": 31539,
"s": 31499,
"text": "And <additional preconditions> Optional"
},
{
"code": null,
"e": 31571,
"s": 31539,
"text": "When <an action/trigger occurs>"
},
{
"code": null,
"e": 31598,
"s": 31571,
"text": "Then <some post condition>"
},
{
"code": null,
"e": 31640,
"s": 31598,
"text": "And <additional post conditions> Optional"
},
{
"code": null,
"e": 31760,
"s": 31640,
"text": "This specification is an example of a behavior of the system. It also represents an Acceptance criterion of the system."
},
{
"code": null,
"e": 31940,
"s": 31760,
"text": "The team discusses the examples and the feedback is incorporated until there is agreement that the examples cover the feature's expected behavior. This ensures good test coverage."
},
{
"code": null,
"e": 31967,
"s": 31940,
"text": "To refine a specification,"
},
{
"code": null,
"e": 32070,
"s": 31967,
"text": "Be precise in writing the examples. If an example turns out to be complex, split it into simpler examples."
},
{
"code": null,
"e": 32232,
"s": 32173,
"text": "Focus on business perspective and avoid technical details."
},
{
"code": null,
"e": 32339,
"s": 32291,
"text": "Consider both positive and negative conditions."
},
{
"code": null,
"e": 32429,
"s": 32387,
"text": "Adhere to the domain-specific vocabulary."
},
{
"code": null,
"e": 32783,
"s": 32743,
"text": "Discuss the examples with the customer."
},
{
"code": null,
"e": 32824,
"s": 32783,
"text": "Choose conversations to accomplish this."
},
{
"code": null,
"e": 33053,
"s": 32865,
"text": "Consider only those examples that the customer is interested in. This enables production of the required code only and avoids covering every possible combination, which may not be required."
},
{
"code": null,
"e": 33534,
"s": 33241,
"text": "To ensure that the scenario passes, all the test cases for that scenario must pass. Hence, enhance the specifications to make them testable. The test cases can include various ranges and data values (boundary and corner cases) as well as different business rules resulting in changes in data, as sketched after this list."
},
{
"code": null,
"e": 33932,
"s": 33827,
"text": "Specify additional business rules such as complex calculations, data manipulation / transformation, etc."
},
{
"code": null,
"e": 34140,
"s": 34037,
"text": "Include non-functional scenarios (e.g. performance, load, usability, etc.) as Specification by Example."
},
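{
"code": null,
"e": null,
"s": null,
"text": "The sketch referred to above − the same scenario exercised with boundary and corner-case data values, assuming a hypothetical withdrawal rule and the pytest library −"
},
{
"code": null,
"e": null,
"s": null,
"text": "import pytest\n\n# Hypothetical business rule: a withdrawal is allowed only up to the balance.\ndef can_withdraw(balance, amount):\n    return amount <= balance\n\n# The same scenario exercised with boundary and corner-case data values.\n@pytest.mark.parametrize('balance, amount, allowed', [\n    (100, 99, True),    # just below the boundary\n    (100, 100, True),   # exactly on the boundary\n    (100, 101, False),  # just above the boundary\n    (0, 1, False),      # corner case: empty account\n])\ndef test_withdrawal_rule(balance, amount, allowed):\n    assert can_withdraw(balance, amount) is allowed"
},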
{
"code": null,
"e": 34387,
"s": 34243,
"text": "The automation layer needs to be kept very simple – just wiring of the specification to the system under test. You can use a tool for the same."
},
{
"code": null,
"e": 34615,
"s": 34387,
"text": "Perform testing automation using Domain Specific Language (DSL) and show a clear connection between inputs and outputs. Focus on specification, and not script. Ensure that the tests are precise, easy to understand and testable."
},
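{
"code": null,
"e": null,
"s": null,
"text": "A toy sketch of such a thin automation layer in Python − all names are hypothetical; each specification phrase is wired to one small function that exercises the system under test −"
},
{
"code": null,
"e": null,
"s": null,
"text": "steps = {}\n\ndef step(phrase):\n    # Register one thin wiring function per specification phrase.\n    def register(fn):\n        steps[phrase] = fn\n        return fn\n    return register\n\n@step('Given the account is in credit')\ndef given_credit(ctx):\n    ctx['balance'] = 100\n\n@step('When the customer requests cash')\ndef when_request(ctx):\n    ctx['outcome'] = 'cash dispensed' if ctx['balance'] > 0 else 'rejection message'\n\n@step('Then ensure cash is dispensed')\ndef then_dispensed(ctx):\n    assert ctx['outcome'] == 'cash dispensed'\n\n# Executing a scenario is just replaying its phrases against the wiring.\nctx = {}\nfor phrase in ('Given the account is in credit',\n               'When the customer requests cash',\n               'Then ensure cash is dispensed'):\n    steps[phrase](ctx)"
},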
{
"code": null,
"e": 34904,
"s": 34615,
"text": "Include example validation in your development pipeline with every change (addition/modification). There are many techniques and tools that can (and should) be adopted to help ensure the quality of a product. They revolve around three key principles- Test Early, Test Well and Test Often."
},
{
"code": null,
"e": 35118,
"s": 34904,
"text": "Execute the tests frequently so that you can identify the weak links. The examples representing the behaviors help track the progress and a behavior is said to be complete only after the corresponding test passes."
},
{
"code": null,
"e": 35294,
"s": 35118,
"text": "Keep the specifications as simple and short as possible. Organize the specifications and evolve them as work progresses. Make the documentation accessible for all in the team."
},
{
"code": null,
"e": 35364,
"s": 35294,
"text": "The illustration shows the process steps in Specification by Example."
},
{
"code": null,
"e": 35685,
"s": 35364,
"text": "Anti-patterns are certain patterns in software development that is considered a bad programming practice. As opposed to design patterns, which are common approaches to common problems, which have been formalized and are generally considered a good development practice, anti-patterns are the opposite and are undesirable"
},
{
"code": null,
"e": 35730,
"s": 35685,
"text": "Anti-patterns give rise to various problems."
},
{
"code": null,
"e": 35747,
"s": 35730,
"text": "Many assumptions"
},
{
"code": null,
"e": 35806,
"s": 35785,
"text": "Building the wrong thing"
},
{
"code": null,
"e": 35846,
"s": 35826,
"text": "Testing the wrong thing"
},
{
"code": null,
"e": 35876,
"s": 35846,
"text": "Unaware when code is finished"
},
{
"code": null,
"e": 35929,
"s": 35906,
"text": "Hard to maintain tests"
},
{
"code": null,
"e": 35976,
"s": 35952,
"text": "Hard to understand spec"
},
{
"code": null,
"e": 36047,
"s": 36000,
"text": "Loss of interest from business representatives"
},
{
"code": null,
"e": 36117,
"s": 36094,
"text": "Hard to maintain tests"
},
{
"code": null,
"e": 36174,
"s": 36140,
"text": "Hard to understand specifications"
},
{
"code": null,
"e": 36255,
"s": 36208,
"text": "Loss of interest from business representatives"
},
{
"code": null,
"e": 36358,
"s": 36302,
"text": "Teams think they have failed and get disappointed early"
},
{
"code": null,
"e": 36542,
"s": 36414,
"text": "Quality can be ensured by keeping a watch on the anti-patterns. To minimize the problems created by anti-patterns, you should −"
},
{
"code": null,
"e": 36582,
"s": 36542,
"text": "Get together to specify using examples."
},
{
"code": null,
"e": 36657,
"s": 36622,
"text": "Clean up and improve the examples."
},
{
"code": null,
"e": 36778,
"s": 36735,
"text": "Write code that satisfies the examples."
},
{
"code": null,
"e": 36812,
"s": 36778,
"text": "Automate the examples and deploy."
},
{
"code": null,
"e": 36888,
"s": 36846,
"text": "Repeat the approach for every user story."
},
{
"code": null,
"e": 36994,
"s": 36930,
"text": "To solve the problems due to anti-patterns means adherence to −"
},
{
"code": null,
"e": 37009,
"s": 36994,
"text": "Collaboration."
},
{
"code": null,
"e": 37042,
"s": 37024,
"text": "Focusing on what."
},
{
"code": null,
"e": 37082,
"s": 37060,
"text": "Focusing on Business."
},
{
"code": null,
"e": 37117,
"s": 37104,
"text": "Be prepared."
},
{
"code": null,
"e": 37177,
"s": 37130,
"text": "Let us understand what each of the above mean."
},
{
"code": null,
"e": 37196,
"s": 37177,
"text": "In collaboration −"
},
{
"code": null,
"e": 37276,
"s": 37196,
"text": "Business people, developers and testers give input from their own perspectives."
},
{
"code": null,
"e": 37424,
"s": 37356,
"text": "Automated examples prove that the team has built the correct thing."
},
{
"code": null,
"e": 37548,
"s": 37492,
"text": "The process is more valuable than the tests themselves."
},
{
"code": null,
"e": 37672,
"s": 37604,
"text": "You must focus on the question - ‘what.’ While focusing on ‘what’ −"
},
{
"code": null,
"e": 37716,
"s": 37672,
"text": "Do not try to cover all the possible cases."
},
{
"code": null,
"e": 37806,
"s": 37760,
"text": "Do not forget to use different kinds of tests."
},
{
"code": null,
"e": 37889,
"s": 37852,
"text": "Keep examples as simple as possible."
},
{
"code": null,
"e": 37995,
"s": 37926,
"text": "Examples should be easily understandable by the users of the system."
},
{
"code": null,
"e": 38122,
"s": 38064,
"text": "Tools should not play an important part in the workshops."
},
{
"code": null,
"e": 38207,
"s": 38180,
"text": "To focus on the business −"
},
{
"code": null,
"e": 38246,
"s": 38207,
"text": "Keep the specification at the level of business intent."
},
{
"code": null,
"e": 38335,
"s": 38285,
"text": "Include business in creating and reviewing specs."
},
{
"code": null,
"e": 38431,
"s": 38385,
"text": "Hide all the details in the automation layer."
},
{
"code": null,
"e": 38509,
"s": 38477,
"text": "Be prepared for the following −"
},
{
"code": null,
"e": 38591,
"s": 38509,
"text": "Benefits are not immediately apparent, even after the team's practices have changed."
},
{
"code": null,
"e": 38705,
"s": 38673,
"text": "Introducing SbE is challenging."
},
{
"code": null,
"e": 38768,
"s": 38737,
"text": "Requires time and investments."
},
{
"code": null,
"e": 38837,
"s": 38799,
"text": "Automated testing does not come free."
},
{
"code": null,
"e": 39081,
"s": 38875,
"text": "Use of tools is not mandatory for Specification by Example, though in practice several tools are available. There are cases that are successful following Specification by Example even without using a tool."
},
{
"code": null,
"e": 39136,
"s": 39081,
"text": "The following tools support Specification by Example −"
},
{
"code": null,
"e": 39145,
"s": 39136,
"text": "Cucumber"
},
{
"code": null,
"e": 39163,
"s": 39154,
"text": "SpecFlow"
},
{
"code": null,
"e": 39181,
"s": 39172,
"text": "Fitnesse"
},
{
"code": null,
"e": 39198,
"s": 39190,
"text": "Jbehave"
},
{
"code": null,
"e": 39217,
"s": 39206,
"text": "Concordion"
},
{
"code": null,
"e": 39234,
"s": 39228,
"text": "Behat"
},
{
"code": null,
"e": 39248,
"s": 39240,
"text": "Jasmine"
},
{
"code": null,
"e": 39263,
"s": 39256,
"text": "Relish"
},
{
"code": null,
"e": 39278,
"s": 39270,
"text": "Speclog"
},
{
"code": null,
"e": 39524,
"s": 39286,
"text": "The development teams often have a misconception that BDD is a tool framework. In reality, BDD is a development approach rather than a tool framework. However, as in the case of other development approaches, there are tools for BDD also."
},
{
"code": null,
"e": 39615,
"s": 39524,
"text": "Several BDD Tools are in use for different platforms and programming languages. They are −"
},
{
"code": null,
"e": 39641,
"s": 39615,
"text": "Cucumber (Ruby framework)"
},
{
"code": null,
"e": 39693,
"s": 39667,
"text": "SpecFlow (.NET framework)"
},
{
"code": null,
"e": 39745,
"s": 39719,
"text": "Behave (Python framework)"
},
{
"code": null,
"e": 39796,
"s": 39771,
"text": "JBehave (Java framework)"
},
{
"code": null,
"e": 39876,
"s": 39821,
"text": "JBehave Web (Java framework with Selenium integration)"
},
{
"code": null,
"e": 39958,
"s": 39931,
"text": "Lettuce (Python framework)"
},
{
"code": null,
"e": 40013,
"s": 39985,
"text": "Concordion (Java framework)"
},
{
"code": null,
"e": 40063,
"s": 40041,
"text": "Behat (PHP framework)"
},
{
"code": null,
"e": 40108,
"s": 40085,
"text": "Kahlan (PHP framework)"
},
{
"code": null,
"e": 40161,
"s": 40131,
"text": "DaSpec (JavaScript framework)"
},
{
"code": null,
"e": 40222,
"s": 40191,
"text": "Jasmine (JavaScript framework)"
},
{
"code": null,
"e": 40288,
"s": 40253,
"text": "Cucumber-js (JavaScript framework)"
},
{
"code": null,
"e": 40407,
"s": 40323,
"text": "Squish GUI Tester (BDD GUI Testing Tool for JavaScript, Python, Perl, Ruby and Tcl)"
},
{
"code": null,
"e": 40516,
"s": 40491,
"text": "Spock (Groovy framework)"
},
{
"code": null,
"e": 40628,
"s": 40541,
"text": "Yadda (Gherkin language support for frameworks such as Jasmine (JavaScript framework))"
},
{
"code": null,
"e": 41133,
"s": 40715,
"text": "Cucumber is a free tool for executable specifications used globally. Cucumber lets the software development teams describe how software should behave in plain text. The text is written in a business-readable, domain-specific language and serves as documentation, automated tests and development-aid, all rolled into one format. You can use over forty different spoken languages (English, Chinese, etc.) with Cucumber."
},
{
"code": null,
"e": 41179,
"s": 41133,
"text": "The key features of Cucumber are as follows −"
},
{
"code": null,
"e": 41273,
"s": 41179,
"text": "Cucumber can be used for Executable Specifications, Test Automation and Living Documentation."
},
{
"code": null,
"e": 41454,
"s": 41367,
"text": "Cucumber works with Ruby, Java, .NET, Flex or web applications written in any language."
},
{
"code": null,
"e": 41617,
"s": 41541,
"text": "Cucumber supports more succinct Tests in Tables - similar to what FIT does."
},
{
"code": null,
"e": 41904,
"s": 41693,
"text": "Cucumber has revolutionized the Software Development Life Cycle by melding requirements, automated testing and documentation into a cohesive one: plain text executable specifications that validate the software."
},
{
"code": null,
"e": 42230,
"s": 42115,
"text": "SpecFlow is a BDD Tool for .NET Platform. SpecFlow is an open-source project. The source code is hosted on GitHub."
},
{
"code": null,
"e": 42448,
"s": 42230,
"text": "SpecFlow uses Gherkin Syntax for Features. The Gherkin format was introduced by Cucumber and is also used by other tools. The Gherkin language is maintained as a project on GitHub − https://github.com/cucumber/gherkin"
},
{
"code": null,
"e": 42485,
"s": 42448,
"text": "Behave is used for Python framework."
},
{
"code": null,
"e": 42897,
"s": 42816,
"text": "Behave works with three types of files stored in a directory called “features” −"
},
{
"code": null,
"e": 42947,
"s": 42897,
"text": "feature files with your behavior scenarios in them."
},
{
"code": null,
"e": 43067,
"s": 42997,
"text": "“steps” directory with Python step implementations for the scenarios."
},
{
"code": null,
"e": 43264,
"s": 43137,
"text": "Optionally, some environmental controls (code to run before and after steps, scenarios, features or the whole shooting match)."
},
{
"code": null,
"e": 43391,
"s": 43264,
"text": "Optionally, some environmental controls (code to run before and after steps, scenarios, features or the whole shooting match)."
},
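{
"code": null,
"e": null,
"s": null,
"text": "A typical layout looks like this (the feature and step file names are illustrative; environment.py is Behave's conventional name for the environmental controls) −\n\nfeatures/\n   signup.feature\n   steps/\n      signup_steps.py\n   environment.py"
},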
{
"code": null,
"e": 43489,
"s": 43391,
"text": "Behave features are written using Gherkin (with some modifications) and are named “name.feature”."
},
{
"code": null,
"e": 43587,
"s": 43489,
"text": "Behave features are written using Gherkin (with some modifications) and are named “name.feature”."
},
{
"code": null,
"e": 43875,
"s": 43587,
"text": "The tags attached to a feature and scenario are available in the environment functions via the “feature” or “scenario” object passed to them. On those objects there is an attribute called “tags” which is a list of the tag names attached, in the order they are found in the features file."
},
{
"code": null,
"e": 44163,
"s": 43875,
"text": "The tags attached to a feature and scenario are available in the environment functions via the “feature” or “scenario” object passed to them. On those objects there is an attribute called “tags” which is a list of the tag names attached, in the order they are found in the features file."
},
{
"code": null,
"e": 44370,
"s": 44163,
"text": "Modifications to the Gherkin Standard −\n\nBehave can parse standard Gherkin files and extends Gherkin to allow lowercase step keywords because these can sometimes allow more readable feature specifications\n\n"
},
{
"code": null,
"e": 44410,
"s": 44370,
"text": "Modifications to the Gherkin Standard −"
},
{
"code": null,
"e": 44574,
"s": 44410,
"text": "Behave can parse standard Gherkin files and extends Gherkin to allow lowercase step keywords because these can sometimes allow more readable feature specifications"
},
{
"code": null,
"e": 44738,
"s": 44574,
"text": "Behave can parse standard Gherkin files and extends Gherkin to allow lowercase step keywords because these can sometimes allow more readable feature specifications"
},
{
"code": null,
"e": 44924,
"s": 44738,
"text": "Lettuce is a very simple BDD tool based on Cucumber. It can execute plain-text functional descriptions as automated tests for Python projects. Lettuce aims the most common tasks on BDD."
},
{
"code": null,
"e": 45018,
"s": 44924,
"text": "Concordion is an open source tool for automating Specification by Example for Java Framework."
},
{
"code": null,
"e": 45248,
"s": 45018,
"text": "While the core features are simple, the Powerful extension framework API allows you to add functionality, such as using Excel spreadsheets as specifications, adding screenshots to the output, displaying logging information, etc."
},
{
"code": null,
"e": 45428,
"s": 45248,
"text": "Concordion lets you write the specifications in normal language using paragraphs, tables and proper punctuation and the structured Language using Given/When/Then is not necessary."
},
{
"code": null,
"e": 45486,
"s": 45428,
"text": "Concordion has been ported to other languages including −"
},
{
"code": null,
"e": 45506,
"s": 45486,
"text": "C# (Concordion.NET)"
},
{
"code": null,
"e": 45526,
"s": 45506,
"text": "C# (Concordion.NET)"
},
{
"code": null,
"e": 45548,
"s": 45526,
"text": "Python (PyConcordion)"
},
{
"code": null,
"e": 45570,
"s": 45548,
"text": "Python (PyConcordion)"
},
{
"code": null,
"e": 45593,
"s": 45570,
"text": "Ruby (Ruby-Concordion)"
},
{
"code": null,
"e": 45616,
"s": 45593,
"text": "Ruby (Ruby-Concordion)"
},
{
"code": null,
"e": 45719,
"s": 45616,
"text": "Cucumber is a tool that supports Executable specifications, Test automation, and Living documentation."
},
{
"code": null,
"e": 45970,
"s": 45719,
"text": "Behavior Driven Development expands on Specification by Example. It also formalizes the Test-Driven Development best practices, in particular, the perspective of working from the outside-in. The development work is based on executable specifications."
},
{
"code": null,
"e": 46033,
"s": 45970,
"text": "The key features of executable specifications are as follows −"
},
{
"code": null,
"e": 46266,
"s": 46033,
"text": "Executable Specifications are −\n\nDerived from examples, that represent the behaviors of the system.\nWritten with collaboration of all involved in the development, including business and stakeholders.\nBased on acceptance criterion.\n\n"
},
{
"code": null,
"e": 46298,
"s": 46266,
"text": "Executable Specifications are −"
},
{
"code": null,
"e": 46365,
"s": 46298,
"text": "Derived from examples, that represent the behaviors of the system."
},
{
"code": null,
"e": 46432,
"s": 46365,
"text": "Derived from examples, that represent the behaviors of the system."
},
{
"code": null,
"e": 46532,
"s": 46432,
"text": "Written with collaboration of all involved in the development, including business and stakeholders."
},
{
"code": null,
"e": 46632,
"s": 46532,
"text": "Written with collaboration of all involved in the development, including business and stakeholders."
},
{
"code": null,
"e": 46663,
"s": 46632,
"text": "Based on acceptance criterion."
},
{
"code": null,
"e": 46694,
"s": 46663,
"text": "Based on acceptance criterion."
},
{
"code": null,
"e": 46774,
"s": 46694,
"text": "Acceptance tests that are based on the executable specifications are automated."
},
{
"code": null,
"e": 46854,
"s": 46774,
"text": "Acceptance tests that are based on the executable specifications are automated."
},
{
"code": null,
"e": 47436,
"s": 46854,
"text": "A shared, ubiquitous language is used to write the executable specifications and the automated tests such that −\n\nDomain specific terminology is used throughout the development.\nEveryone, including the customers and the stakeholders speak about the system, its requirements and its implementation, in the same way.\nThe same terms are used to discuss the system present in the requirements, design documents, code, tests, etc.\nAnyone can read and understand a requirement and how to generate more requirements.\nChanges can be easily accommodated.\nLive documentation is maintained.\n\n"
},
{
"code": null,
"e": 47549,
"s": 47436,
"text": "A shared, ubiquitous language is used to write the executable specifications and the automated tests such that −"
},
{
"code": null,
"e": 47613,
"s": 47549,
"text": "Domain specific terminology is used throughout the development."
},
{
"code": null,
"e": 47677,
"s": 47613,
"text": "Domain specific terminology is used throughout the development."
},
{
"code": null,
"e": 47814,
"s": 47677,
"text": "Everyone, including the customers and the stakeholders speak about the system, its requirements and its implementation, in the same way."
},
{
"code": null,
"e": 47951,
"s": 47814,
"text": "Everyone, including the customers and the stakeholders speak about the system, its requirements and its implementation, in the same way."
},
{
"code": null,
"e": 48062,
"s": 47951,
"text": "The same terms are used to discuss the system present in the requirements, design documents, code, tests, etc."
},
{
"code": null,
"e": 48173,
"s": 48062,
"text": "The same terms are used to discuss the system present in the requirements, design documents, code, tests, etc."
},
{
"code": null,
"e": 48257,
"s": 48173,
"text": "Anyone can read and understand a requirement and how to generate more requirements."
},
{
"code": null,
"e": 48341,
"s": 48257,
"text": "Anyone can read and understand a requirement and how to generate more requirements."
},
{
"code": null,
"e": 48377,
"s": 48341,
"text": "Changes can be easily accommodated."
},
{
"code": null,
"e": 48413,
"s": 48377,
"text": "Changes can be easily accommodated."
},
{
"code": null,
"e": 48447,
"s": 48413,
"text": "Live documentation is maintained."
},
{
"code": null,
"e": 48481,
"s": 48447,
"text": "Live documentation is maintained."
},
{
"code": null,
"e": 48638,
"s": 48481,
"text": "Cucumber helps with this process since it ties together the executable specifications with the actual code of the system and the automated acceptance tests."
},
{
"code": null,
"e": 48879,
"s": 48638,
"text": "The way it does this is actually designed to get the customers and developers working together. When an acceptance test passes, it means that the specification of the behavior of the system that it represents has been implemented correctly."
},
{
"code": null,
"e": 48911,
"s": 48879,
"text": "Consider the following example."
},
{
"code": null,
"e": 48929,
"s": 48911,
"text": "Feature − Sign up"
},
{
"code": null,
"e": 48967,
"s": 48929,
"text": "Sign up should be quick and friendly."
},
{
"code": null,
"e": 49005,
"s": 48967,
"text": "Sign up should be quick and friendly."
},
{
"code": null,
"e": 49269,
"s": 49005,
"text": "Scenario − Successful sign up\n\nNew users should get a confirmation e-mail and be greeted personally.\nGiven I have chosen to sign up.\nWhen I sign up with valid details.\nThen I should receive a confirmation email.\nAnd I should see a personalized greeting message.\n\n"
},
{
"code": null,
"e": 49299,
"s": 49269,
"text": "Scenario − Successful sign up"
},
{
"code": null,
"e": 49369,
"s": 49299,
"text": "New users should get a confirmation e-mail and be greeted personally."
},
{
"code": null,
"e": 49439,
"s": 49369,
"text": "New users should get a confirmation e-mail and be greeted personally."
},
{
"code": null,
"e": 49471,
"s": 49439,
"text": "Given I have chosen to sign up."
},
{
"code": null,
"e": 49503,
"s": 49471,
"text": "Given I have chosen to sign up."
},
{
"code": null,
"e": 49538,
"s": 49503,
"text": "When I sign up with valid details."
},
{
"code": null,
"e": 49573,
"s": 49538,
"text": "When I sign up with valid details."
},
{
"code": null,
"e": 49617,
"s": 49573,
"text": "Then I should receive a confirmation email."
},
{
"code": null,
"e": 49661,
"s": 49617,
"text": "Then I should receive a confirmation email."
},
{
"code": null,
"e": 49711,
"s": 49661,
"text": "And I should see a personalized greeting message."
},
{
"code": null,
"e": 49761,
"s": 49711,
"text": "And I should see a personalized greeting message."
},
{
"code": null,
"e": 49798,
"s": 49761,
"text": "From this example, we can see that −"
},
{
"code": null,
"e": 49834,
"s": 49798,
"text": "Acceptance tests refer to Features."
},
{
"code": null,
"e": 49870,
"s": 49834,
"text": "Acceptance tests refer to Features."
},
{
"code": null,
"e": 49907,
"s": 49870,
"text": "Features are explained by Scenarios."
},
{
"code": null,
"e": 49944,
"s": 49907,
"text": "Features are explained by Scenarios."
},
{
"code": null,
"e": 49972,
"s": 49944,
"text": "Scenarios consist of Steps."
},
{
"code": null,
"e": 50000,
"s": 49972,
"text": "Scenarios consist of Steps."
},
{
"code": null,
"e": 50095,
"s": 50000,
"text": "The specification is written in a natural language in a plain text file, but it is executable."
},
{
"code": null,
"e": 50279,
"s": 50095,
"text": "Cucumber is a command line tool that processes text files containing the features looking for scenarios that can be executed against your system. Let us understand how Cucumber works."
},
{
"code": null,
"e": 50432,
"s": 50279,
"text": "It makes use of a bunch of conventions about how the files are named and where they are located (the respective folders) to make it easy to get started."
},
{
"code": null,
"e": 50585,
"s": 50432,
"text": "It makes use of a bunch of conventions about how the files are named and where they are located (the respective folders) to make it easy to get started."
},
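{
"code": null,
"e": null,
"s": null,
"text": "For example (assuming the Ruby implementation and the default layout), running the tool with no arguments executes every feature file under the features directory −\n\ncucumber\n\nand a single feature file can be run by naming it −\n\ncucumber features/sign_up.feature"
},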
{
"code": null,
"e": 50677,
"s": 50585,
"text": "Cucumber lets you keep specifications, automated tests and documentation in the same place."
},
{
"code": null,
"e": 50769,
"s": 50677,
"text": "Cucumber lets you keep specifications, automated tests and documentation in the same place."
},
{
"code": null,
"e": 50957,
"s": 50769,
"text": "Each scenario is a list of steps that describe the pre-conditions, actions, and post-conditions of the scenario; if each step executes without anyberror, the scenario is marked as passed."
},
{
"code": null,
"e": 51145,
"s": 50957,
"text": "Each scenario is a list of steps that describe the pre-conditions, actions, and post-conditions of the scenario; if each step executes without anyberror, the scenario is marked as passed."
},
{
"code": null,
"e": 51214,
"s": 51145,
"text": "At the end of a run, Cucumber will report how many scenarios passed."
},
{
"code": null,
"e": 51283,
"s": 51214,
"text": "At the end of a run, Cucumber will report how many scenarios passed."
},
{
"code": null,
"e": 51381,
"s": 51283,
"text": "If something fails, it provides information about what failed so that the developer can progress."
},
{
"code": null,
"e": 51479,
"s": 51381,
"text": "If something fails, it provides information about what failed so that the developer can progress."
},
{
"code": null,
"e": 51565,
"s": 51479,
"text": "In Cucumber, Features, Scenarios, and Steps are written in a Language called Gherkin."
},
{
"code": null,
"e": 51738,
"s": 51565,
"text": "Gherkin is plain-text English (or one of 60+ other languages) with a structure. Gherkin is easy to learn and its structure allows you to write examples in a concise manner."
},
{
"code": null,
"e": 51826,
"s": 51738,
"text": "Cucumber executes your files that contain executable specifications written in Gherkin."
},
{
"code": null,
"e": 51914,
"s": 51826,
"text": "Cucumber executes your files that contain executable specifications written in Gherkin."
},
{
"code": null,
"e": 52033,
"s": 51914,
"text": "Cucumber needs Step Definitions to translate plain-text Gherkin Steps into actions that will interact with the system."
},
{
"code": null,
"e": 52152,
"s": 52033,
"text": "Cucumber needs Step Definitions to translate plain-text Gherkin Steps into actions that will interact with the system."
},
{
"code": null,
"e": 52253,
"s": 52152,
"text": "When Cucumber executes a step in a scenario, it will look for a matching step definition to execute."
},
{
"code": null,
"e": 52354,
"s": 52253,
"text": "When Cucumber executes a step in a scenario, it will look for a matching step definition to execute."
},
{
"code": null,
"e": 52428,
"s": 52354,
"text": "A Step Definition is a small piece of code with a pattern attached to it."
},
{
"code": null,
"e": 52502,
"s": 52428,
"text": "A Step Definition is a small piece of code with a pattern attached to it."
},
{
"code": null,
"e": 52649,
"s": 52502,
"text": "The pattern is used to link the Step Definition to all the matching steps, and the code is what Cucumber will execute when it sees a Gherkin step."
},
{
"code": null,
"e": 52796,
"s": 52649,
"text": "The pattern is used to link the Step Definition to all the matching steps, and the code is what Cucumber will execute when it sees a Gherkin step."
},
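{
"code": null,
"e": null,
"s": null,
"text": "As a minimal sketch (assuming the Cucumber-JVM implementation; the class and method names are made up), a step definition attaches a pattern to code using an annotation −\n\npackage steps;\n\nimport io.cucumber.java.en.Given;\n\npublic class SignUpSteps {\n   //Matched against the Gherkin step \"Given I have chosen to sign up\"\n   @Given(\"I have chosen to sign up\")\n   public void iHaveChosenToSignUp() {\n      //Open the sign-up page of the application under test\n   }\n}"
},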
{
"code": null,
"e": 52843,
"s": 52796,
"text": "Each step is accompanied by a Step Definition."
},
{
"code": null,
"e": 52890,
"s": 52843,
"text": "Each step is accompanied by a Step Definition."
},
{
"code": null,
"e": 53038,
"s": 52890,
"text": "Most steps will gather input and then delegate to a framework that is specific to your application domain in order to make calls on your framework."
},
{
"code": null,
"e": 53186,
"s": 53038,
"text": "Most steps will gather input and then delegate to a framework that is specific to your application domain in order to make calls on your framework."
},
{
"code": null,
"e": 53469,
"s": 53186,
"text": "Cucumber supports over a dozen different software platforms. You can choose the Cucumber implementation that works for you. Every Cucumber implementation provides the same overall functionality and they also have their own installation procedure and platform-specific functionality."
},
{
"code": null,
"e": 53540,
"s": 53469,
"text": "The key to Cucumber is the mapping between Steps and Step Definitions."
},
{
"code": null,
"e": 53582,
"s": 53540,
"text": "Given below are Cucumber implementations."
},
{
"code": null,
"e": 53625,
"s": 53582,
"text": "Given below are Framework implementations."
},
{
"code": null,
"e": 53769,
"s": 53625,
"text": "Gherkin is a language, which is used to write Features, Scenarios, and Steps. The purpose of Gherkin is to help us write concrete requirements."
},
{
"code": null,
"e": 53855,
"s": 53769,
"text": "To understand what we mean by concrete requirements, consider the following example −"
},
{
"code": null,
"e": 53928,
"s": 53855,
"text": "Customers should be prevented from entering invalid credit card details."
},
{
"code": null,
"e": 54134,
"s": 53928,
"text": "If a customer enters a credit card number that is not exactly 16 digits long, when they try to submit the form, it should be redisplayed with an error message advising them of the correct number of digits."
},
{
"code": null,
"e": 54207,
"s": 54134,
"text": "The latter has no ambiguity and avoids errors and is much more testable."
},
{
"code": null,
"e": 54317,
"s": 54207,
"text": "Gherkin is designed to create requirements that are more concrete. In Gherkin, the above example looks like −"
},
{
"code": null,
"e": 54325,
"s": 54317,
"text": "Feature"
},
{
"code": null,
"e": 54395,
"s": 54325,
"text": "Feedback when entering invalid credit card details Feature Definition"
},
{
"code": null,
"e": 54469,
"s": 54395,
"text": "In user testing, we have seen many people who make mistakes Documentation"
},
{
"code": null,
"e": 54509,
"s": 54469,
"text": "Background True for all Scenarios Below"
},
{
"code": null,
"e": 54545,
"s": 54509,
"text": "Given I have chosen an item to buy,"
},
{
"code": null,
"e": 54591,
"s": 54545,
"text": "And I am about to enter my credit card number"
},
{
"code": null,
"e": 54650,
"s": 54591,
"text": "Scenario − Credit card number too shortScenario Definition"
},
{
"code": null,
"e": 54710,
"s": 54650,
"text": "When I enter a card number that is less than 16 digits long"
},
{
"code": null,
"e": 54748,
"s": 54710,
"text": "And all the other details are correct"
},
{
"code": null,
"e": 54775,
"s": 54748,
"text": "And I submit the formSteps"
},
{
"code": null,
"e": 54811,
"s": 54775,
"text": "Then the form should be redisplayed"
},
{
"code": null,
"e": 54882,
"s": 54811,
"text": "And I should see a message advising me of the correct number of digits"
},
{
"code": null,
"e": 55065,
"s": 54882,
"text": "Gherkin files are plain text Files and have the extension .feature. Each line that is not blank has to start with a Gherkin keyword, followed by any text you like. The keywords are −"
},
{
"code": null,
"e": 55073,
"s": 55065,
"text": "Feature"
},
{
"code": null,
"e": 55081,
"s": 55073,
"text": "Feature"
},
{
"code": null,
"e": 55090,
"s": 55081,
"text": "Scenario"
},
{
"code": null,
"e": 55099,
"s": 55090,
"text": "Scenario"
},
{
"code": null,
"e": 55135,
"s": 55099,
"text": "Given, When, Then, And, But (Steps)"
},
{
"code": null,
"e": 55171,
"s": 55135,
"text": "Given, When, Then, And, But (Steps)"
},
{
"code": null,
"e": 55182,
"s": 55171,
"text": "Background"
},
{
"code": null,
"e": 55193,
"s": 55182,
"text": "Background"
},
{
"code": null,
"e": 55210,
"s": 55193,
"text": "Scenario Outline"
},
{
"code": null,
"e": 55227,
"s": 55210,
"text": "Scenario Outline"
},
{
"code": null,
"e": 55236,
"s": 55227,
"text": "Examples"
},
{
"code": null,
"e": 55245,
"s": 55236,
"text": "Examples"
},
{
"code": null,
"e": 55263,
"s": 55245,
"text": "\"\"\" (Doc Strings)"
},
{
"code": null,
"e": 55281,
"s": 55263,
"text": "\"\"\" (Doc Strings)"
},
{
"code": null,
"e": 55297,
"s": 55281,
"text": "| (Data Tables)"
},
{
"code": null,
"e": 55313,
"s": 55297,
"text": "| (Data Tables)"
},
{
"code": null,
"e": 55322,
"s": 55313,
"text": "@ (Tags)"
},
{
"code": null,
"e": 55331,
"s": 55322,
"text": "@ (Tags)"
},
{
"code": null,
"e": 55344,
"s": 55331,
"text": "# (Comments)"
},
{
"code": null,
"e": 55357,
"s": 55344,
"text": "# (Comments)"
},
{
"code": null,
"e": 55359,
"s": 55357,
"text": "*"
},
{
"code": null,
"e": 55361,
"s": 55359,
"text": "*"
},
{
"code": null,
"e": 55494,
"s": 55361,
"text": "The Feature keyword is used to describe a software feature, and to group the related scenarios. A Feature has three basic elements −"
},
{
"code": null,
"e": 55517,
"s": 55494,
"text": "The keyword – Feature."
},
{
"code": null,
"e": 55540,
"s": 55517,
"text": "The keyword – Feature."
},
{
"code": null,
"e": 55615,
"s": 55540,
"text": "The name of the feature, provided on the same line as the Feature keyword."
},
{
"code": null,
"e": 55690,
"s": 55615,
"text": "The name of the feature, provided on the same line as the Feature keyword."
},
{
"code": null,
"e": 55906,
"s": 55690,
"text": "An optional (but highly recommended) description that can span multiple lines i.e. all the text between the line containing the keyword Feature, and a line that starts with Scenario, Background, or Scenario Outline."
},
{
"code": null,
"e": 56122,
"s": 55906,
"text": "An optional (but highly recommended) description that can span multiple lines i.e. all the text between the line containing the keyword Feature, and a line that starts with Scenario, Background, or Scenario Outline."
},
{
"code": null,
"e": 56250,
"s": 56122,
"text": "In addition to a name and a description, Features contain a list of scenarios or scenario outlines, and an optional background."
},
{
"code": null,
"e": 56410,
"s": 56250,
"text": "It is conventional to name a .feature file by taking the name of the Feature, converting it to lowercase and replacing the spaces with underlines. For example,"
},
{
"code": null,
"e": 56470,
"s": 56410,
"text": "feedback_when_entering_invalid_credit_card_details.feature\n"
},
{
"code": null,
"e": 56577,
"s": 56470,
"text": "In order to identify Features in your system, you can use what is known as a “feature injection template”."
},
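{
"code": null,
"e": null,
"s": null,
"text": "The feature injection template commonly takes the following form, with the placeholders filled in for each feature −\n\nIn order to <deliver some business benefit>\nAs a <type of user>\nI want <some feature>"
},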
{
"code": null,
"e": 56646,
"s": 56577,
"text": "Some parts of Gherkin documents do not have to start with a keyword."
},
{
"code": null,
"e": 56836,
"s": 56646,
"text": "In the lines following a Feature, scenario, scenario outline or examples, you can write anything you like, as long as no line starts with a keyword. This is the way to include Descriptions."
},
{
"code": null,
"e": 57040,
"s": 56836,
"text": "To express the behavior of your system, you attach one or more scenarios with each Feature. It is typical to see 5 to 20 scenarios per Feature to completely specify all the behaviors around that Feature."
},
{
"code": null,
"e": 57082,
"s": 57040,
"text": "Scenarios follows the following pattern −"
},
{
"code": null,
"e": 57110,
"s": 57082,
"text": "Describe an initial context"
},
{
"code": null,
"e": 57138,
"s": 57110,
"text": "Describe an initial context"
},
{
"code": null,
"e": 57156,
"s": 57138,
"text": "Describe an event"
},
{
"code": null,
"e": 57174,
"s": 57156,
"text": "Describe an event"
},
{
"code": null,
"e": 57203,
"s": 57174,
"text": "Describe an expected outcome"
},
{
"code": null,
"e": 57232,
"s": 57203,
"text": "Describe an expected outcome"
},
{
"code": null,
"e": 57423,
"s": 57232,
"text": "We start with a context, describe an action, and check the outcome. This is done with steps. Gherkin provides three keywords to describe each of the contexts, actions, and outcomes as steps."
},
{
"code": null,
"e": 57449,
"s": 57423,
"text": "Given − Establish context"
},
{
"code": null,
"e": 57475,
"s": 57449,
"text": "Given − Establish context"
},
{
"code": null,
"e": 57497,
"s": 57475,
"text": "When − Perform action"
},
{
"code": null,
"e": 57519,
"s": 57497,
"text": "When − Perform action"
},
{
"code": null,
"e": 57540,
"s": 57519,
"text": "Then − Check outcome"
},
{
"code": null,
"e": 57561,
"s": 57540,
"text": "Then − Check outcome"
},
{
"code": null,
"e": 57613,
"s": 57561,
"text": "These keywords provide readability of the scenario."
},
{
"code": null,
"e": 57621,
"s": 57613,
"text": "Example"
},
{
"code": null,
"e": 57661,
"s": 57621,
"text": "Scenario − Withdraw money from account."
},
{
"code": null,
"e": 57694,
"s": 57661,
"text": "Given I have $100 in my account."
},
{
"code": null,
"e": 57727,
"s": 57694,
"text": "Given I have $100 in my account."
},
{
"code": null,
"e": 57747,
"s": 57727,
"text": "When I request $20."
},
{
"code": null,
"e": 57767,
"s": 57747,
"text": "When I request $20."
},
{
"code": null,
"e": 57797,
"s": 57767,
"text": "Then $20 should be dispensed."
},
{
"code": null,
"e": 57827,
"s": 57797,
"text": "Then $20 should be dispensed."
},
{
"code": null,
"e": 57963,
"s": 57827,
"text": "If there are multiple Given or When steps underneath each other, you can use And or But. They allow you to specify scenarios in detail."
},
{
"code": null,
"e": 57971,
"s": 57963,
"text": "Example"
},
{
"code": null,
"e": 58020,
"s": 57971,
"text": "Scenario − Attempt withdrawal using stolen card."
},
{
"code": null,
"e": 58053,
"s": 58020,
"text": "Given I have $100 in my account."
},
{
"code": null,
"e": 58086,
"s": 58053,
"text": "Given I have $100 in my account."
},
{
"code": null,
"e": 58110,
"s": 58086,
"text": "But my card is invalid."
},
{
"code": null,
"e": 58134,
"s": 58110,
"text": "But my card is invalid."
},
{
"code": null,
"e": 58154,
"s": 58134,
"text": "When I request $50."
},
{
"code": null,
"e": 58174,
"s": 58154,
"text": "When I request $50."
},
{
"code": null,
"e": 58211,
"s": 58174,
"text": "Then my card should not be returned."
},
{
"code": null,
"e": 58248,
"s": 58211,
"text": "Then my card should not be returned."
},
{
"code": null,
"e": 58290,
"s": 58248,
"text": "And I should be told to contact the bank."
},
{
"code": null,
"e": 58332,
"s": 58290,
"text": "And I should be told to contact the bank."
},
{
"code": null,
"e": 58477,
"s": 58332,
"text": "While creating scenarios, remember ‘each scenario must make sense and be able to be executed independently of any other scenario’’. This means −"
},
{
"code": null,
"e": 58599,
"s": 58477,
"text": "You cannot have the success condition of one scenario depend on the fact that some other scenario was executed before it."
},
{
"code": null,
"e": 58721,
"s": 58599,
"text": "You cannot have the success condition of one scenario depend on the fact that some other scenario was executed before it."
},
{
"code": null,
"e": 58809,
"s": 58721,
"text": "Each scenario creates its particular context, executes one thing, and tests the result."
},
{
"code": null,
"e": 58897,
"s": 58809,
"text": "Each scenario creates its particular context, executes one thing, and tests the result."
},
{
"code": null,
"e": 58945,
"s": 58897,
"text": "Such scenarios provide the following benefits −"
},
{
"code": null,
"e": 58993,
"s": 58945,
"text": "Tests will be simpler and easier to understand."
},
{
"code": null,
"e": 59041,
"s": 58993,
"text": "Tests will be simpler and easier to understand."
},
{
"code": null,
"e": 59151,
"s": 59041,
"text": "You can run just a subset of your scenarios and you do not have to worry about the breaking of your test set."
},
{
"code": null,
"e": 59261,
"s": 59151,
"text": "You can run just a subset of your scenarios and you do not have to worry about the breaking of your test set."
},
{
"code": null,
"e": 59399,
"s": 59261,
"text": "Depending on your system, you might be able to run the tests in parallel, reducing the amount of time taken to execute all of your tests."
},
{
"code": null,
"e": 59537,
"s": 59399,
"text": "Depending on your system, you might be able to run the tests in parallel, reducing the amount of time taken to execute all of your tests."
},
{
"code": null,
"e": 59748,
"s": 59537,
"text": "If you have to write scenarios with several inputs or outputs, you might end up creating several scenarios that only differ by their values. The solution is to use scenario outline. To write a scenario outline,"
},
{
"code": null,
"e": 59816,
"s": 59748,
"text": "Variables in the scenario outline steps are marked up with < and >."
},
{
"code": null,
"e": 59884,
"s": 59816,
"text": "Variables in the scenario outline steps are marked up with < and >."
},
{
"code": null,
"e": 59955,
"s": 59884,
"text": "The various values for the variables are given as examples in a table."
},
{
"code": null,
"e": 60026,
"s": 59955,
"text": "The various values for the variables are given as examples in a table."
},
{
"code": null,
"e": 60034,
"s": 60026,
"text": "Example"
},
{
"code": null,
"e": 60108,
"s": 60034,
"text": "Suppose you are writing a Feature for adding two numbers on a calculator."
},
{
"code": null,
"e": 60123,
"s": 60108,
"text": "Feature − Add."
},
{
"code": null,
"e": 60346,
"s": 60123,
"text": "Scenario Outline: Add two numbers.\nGiven the input \"<input>\"\nWhen the calculator is run\nThen the output should be <output>\"\nExamples\n| input | output |\n| 2+2 | 4 | \n| 98+1 | 99 |\n| 255+390 | 645 |\n"
},
{
"code": null,
"e": 60641,
"s": 60346,
"text": "A scenario outline section is always followed by one or more sections of examples, which are a container for a table. The table must have a header row corresponding to the variables in the scenario outline steps. Each of the rows below will create a new scenario, filling in the variable values"
},
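{
"code": null,
"e": null,
"s": null,
"text": "For instance, the first example row above expands into the following scenario −\n\nGiven the input \"2+2\"\nWhen the calculator is run\nThen the output should be \"4\""
},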
{
"code": null,
"e": 60879,
"s": 60641,
"text": "SpecFlow is an open-source project. The source code is hosted on GitHub. The feature files used by SpecFlow to store an acceptance criterion for features (use cases, user stories) in your application are defined using the Gherkin syntax."
},
{
"code": null,
"e": 61055,
"s": 60879,
"text": "The Gherkin format was introduced by Cucumber and is also used by other tools. The Gherkin language is maintained as a project on GitHub − https://github.com/cucumber/gherkin"
},
{
"code": null,
"e": 61098,
"s": 61055,
"text": "The key features of Feature elements are −"
},
{
"code": null,
"e": 61494,
"s": 61098,
"text": "The feature element provides a header for the feature file. The feature element includes the name and a high-level description of the corresponding feature in your application.\n\nSpecFlow generates a unit test class for the feature element, with the class name derived from the name of the feature.\nSpecFlow generates executable unit tests from the scenarios that represent acceptance criteria.\n\n"
},
{
"code": null,
"e": 61671,
"s": 61494,
"text": "The feature element provides a header for the feature file. The feature element includes the name and a high-level description of the corresponding feature in your application."
},
{
"code": null,
"e": 61791,
"s": 61671,
"text": "SpecFlow generates a unit test class for the feature element, with the class name derived from the name of the feature."
},
{
"code": null,
"e": 61911,
"s": 61791,
"text": "SpecFlow generates a unit test class for the feature element, with the class name derived from the name of the feature."
},
{
"code": null,
"e": 62007,
"s": 61911,
"text": "SpecFlow generates executable unit tests from the scenarios that represent acceptance criteria."
},
{
"code": null,
"e": 62103,
"s": 62007,
"text": "SpecFlow generates executable unit tests from the scenarios that represent acceptance criteria."
},
{
"code": null,
"e": 62384,
"s": 62103,
"text": "A feature file may contain multiple scenarios used to describe the feature's acceptance tests.\n\nScenarios have a name and can consist of multiple scenario steps.\nSpecFlow generates a unit test method for each scenario, with the method name derived from the name of the scenario.\n\n"
},
{
"code": null,
"e": 62479,
"s": 62384,
"text": "A feature file may contain multiple scenarios used to describe the feature's acceptance tests."
},
{
"code": null,
"e": 62545,
"s": 62479,
"text": "Scenarios have a name and can consist of multiple scenario steps."
},
{
"code": null,
"e": 62611,
"s": 62545,
"text": "Scenarios have a name and can consist of multiple scenario steps."
},
{
"code": null,
"e": 62728,
"s": 62611,
"text": "SpecFlow generates a unit test method for each scenario, with the method name derived from the name of the scenario."
},
{
"code": null,
"e": 62845,
"s": 62728,
"text": "SpecFlow generates a unit test method for each scenario, with the method name derived from the name of the scenario."
},
{
"code": null,
"e": 63021,
"s": 62845,
"text": "The scenarios can have multiple scenario steps. There are three types of steps that define the preconditions, actions or verification steps, which make up the acceptance test."
},
{
"code": null,
"e": 63198,
"s": 63021,
"text": "The different types of steps begin with either the Given, When or Then keywords respectively and subsequent steps of the same type can be linked using the And and But keywords."
},
{
"code": null,
"e": 63375,
"s": 63198,
"text": "The different types of steps begin with either the Given, When or Then keywords respectively and subsequent steps of the same type can be linked using the And and But keywords."
},
{
"code": null,
"e": 63527,
"s": 63375,
"text": "The Gherkin syntax allows any combination of these three types of steps, but a scenario usually has distinct blocks of Given, When and Then statements."
},
{
"code": null,
"e": 63679,
"s": 63527,
"text": "The Gherkin syntax allows any combination of these three types of steps, but a scenario usually has distinct blocks of Given, When and Then statements."
},
{
"code": null,
"e": 63811,
"s": 63679,
"text": "Scenario steps are defined using text and can have additional table called DataTable or multi-line text called DocString arguments."
},
{
"code": null,
"e": 63943,
"s": 63811,
"text": "Scenario steps are defined using text and can have additional table called DataTable or multi-line text called DocString arguments."
},
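{
"code": null,
"e": null,
"s": null,
"text": "For illustration (the step text and values are made up), a step with a DataTable argument looks like this −\n\nGiven the following users exist\n| name | email |\n| Aslak | aslak@example.com |\n| Matt | matt@example.com |\n\nand a step with a DocString argument wraps the text in triple quotes −\n\nGiven a blog post titled \"Random\" with the content\n\"\"\"\nHere is the first paragraph of my blog post.\n\"\"\""
},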
{
"code": null,
"e": 64036,
"s": 63943,
"text": "The scenario steps are a primary way to execute any custom code to automate the application."
},
{
"code": null,
"e": 64129,
"s": 64036,
"text": "The scenario steps are a primary way to execute any custom code to automate the application."
},
{
"code": null,
"e": 64322,
"s": 64129,
"text": "SpecFlow generates a call inside the unit test method for each scenario step. The call is performed by the SpecFlow runtime that will execute the step definition matching to the scenario step."
},
{
"code": null,
"e": 64515,
"s": 64322,
"text": "SpecFlow generates a call inside the unit test method for each scenario step. The call is performed by the SpecFlow runtime that will execute the step definition matching to the scenario step."
},
{
"code": null,
"e": 64644,
"s": 64515,
"text": "The matching is done at runtime, so the generated tests can be compiled and executed even if the binding is not yet implemented."
},
{
"code": null,
"e": 64773,
"s": 64644,
"text": "The matching is done at runtime, so the generated tests can be compiled and executed even if the binding is not yet implemented."
},
{
"code": null,
"e": 64942,
"s": 64773,
"text": "You can include tables and multi-line arguments in scenario steps. These are used by the step definitions and are either passed as additional table or string arguments."
},
{
"code": null,
"e": 65111,
"s": 64942,
"text": "You can include tables and multi-line arguments in scenario steps. These are used by the step definitions and are either passed as additional table or string arguments."
},
{
"code": null,
"e": 65319,
"s": 65111,
"text": "Tags are markers that can be assigned to features and scenarios. Assigning a tag to a feature is equivalent to assigning the tag to all scenarios in the feature file. A Tag Name with a leading @ denotes tag."
},
{
"code": null,
"e": 65405,
"s": 65319,
"text": "If supported by the unit test framework, SpecFlow generates categories from the tags."
},
{
"code": null,
"e": 65491,
"s": 65405,
"text": "If supported by the unit test framework, SpecFlow generates categories from the tags."
},
{
"code": null,
"e": 65577,
"s": 65491,
"text": "The generated category name is the same as the tag's name, but without the leading @."
},
{
"code": null,
"e": 65663,
"s": 65577,
"text": "The generated category name is the same as the tag's name, but without the leading @."
},
{
"code": null,
"e": 65849,
"s": 65663,
"text": "You can filter and group the tests to be executed using these unit test categories. For example, you can tag crucial tests with @important, and then execute these tests more frequently."
},
{
"code": null,
"e": 66035,
"s": 65849,
"text": "You can filter and group the tests to be executed using these unit test categories. For example, you can tag crucial tests with @important, and then execute these tests more frequently."
},
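{
"code": null,
"e": null,
"s": null,
"text": "As an illustrative snippet (the tag names are made up), tags are written on the line above the feature or scenario they mark −\n\n@billing\nFeature − Verify billing\n\n@important\nScenario − Missing product description"
},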
{
"code": null,
"e": 66143,
"s": 66035,
"text": "The background language element allows specifying a common precondition for all scenarios in a feature file"
},
{
"code": null,
"e": 66273,
"s": 66143,
"text": "The background part of the file can contain one or more scenario steps that are executed before any other steps of the scenarios."
},
{
"code": null,
"e": 66403,
"s": 66273,
"text": "The background part of the file can contain one or more scenario steps that are executed before any other steps of the scenarios."
},
{
"code": null,
"e": 66525,
"s": 66403,
"text": "SpecFlow generates a method from the background elements that is invoked from all unit tests generated for the scenarios."
},
{
"code": null,
"e": 66647,
"s": 66525,
"text": "SpecFlow generates a method from the background elements that is invoked from all unit tests generated for the scenarios."
},
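{
"code": null,
"e": null,
"s": null,
"text": "A minimal sketch of a background section (the steps are illustrative, echoing the withdrawal example above) −\n\nBackground\nGiven I have a valid account\nAnd I have $100 in my account\n\nScenario − Withdraw within balance\nWhen I request $20\nThen $20 should be dispensed"
},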
{
"code": null,
"e": 66921,
"s": 66647,
"text": "Scenario outlines can be used to define data-driven acceptance tests. The scenario outline always consists of a scenario template specification (a scenario with data placeholders using the <placeholder> syntax) and a set of examples that provide values for the placeholders"
},
{
"code": null,
"e": 67020,
"s": 66921,
"text": "If the unit test framework supports it, SpecFlow generates row-based tests from scenario outlines."
},
{
"code": null,
"e": 67119,
"s": 67020,
"text": "If the unit test framework supports it, SpecFlow generates row-based tests from scenario outlines."
},
{
"code": null,
"e": 67262,
"s": 67119,
"text": "Otherwise, it generates a parameterized unit-test logic method for a scenario outline and an individual unit test method for each example set."
},
{
"code": null,
"e": 67405,
"s": 67262,
"text": "Otherwise, it generates a parameterized unit-test logic method for a scenario outline and an individual unit test method for each example set."
},
{
"code": null,
"e": 67585,
"s": 67405,
"text": "For better traceability, the generated unit-test method names are derived from the scenario outline title and the first value of the examples (first column of the examples table)."
},
{
"code": null,
"e": 67765,
"s": 67585,
"text": "For better traceability, the generated unit-test method names are derived from the scenario outline title and the first value of the examples (first column of the examples table)."
},
{
"code": null,
"e": 67880,
"s": 67765,
"text": "It is therefore good practice to choose a unique and descriptive parameter as the first column in the example set."
},
{
"code": null,
"e": 67995,
"s": 67880,
"text": "It is therefore good practice to choose a unique and descriptive parameter as the first column in the example set."
},
{
"code": null,
"e": 68214,
"s": 67995,
"text": "As the Gherkin syntax does require all example columns to have matching placeholders in the scenario outline, you can even introduce an arbitrary column in the example sets used to name the tests with more readability."
},
{
"code": null,
"e": 68433,
"s": 68214,
"text": "As the Gherkin syntax does require all example columns to have matching placeholders in the scenario outline, you can even introduce an arbitrary column in the example sets used to name the tests with more readability."
},
{
"code": null,
"e": 68535,
"s": 68433,
"text": "SpecFlow performs the placeholder substitution as a separate phase before matching the step bindings."
},
{
"code": null,
"e": 68637,
"s": 68535,
"text": "SpecFlow performs the placeholder substitution as a separate phase before matching the step bindings."
},
{
"code": null,
"e": 68795,
"s": 68637,
"text": "The implementation and the parameters in the step bindings are thus independent of whether they are executed through a direct scenario or a scenario outline."
},
{
"code": null,
"e": 68953,
"s": 68795,
"text": "The implementation and the parameters in the step bindings are thus independent of whether they are executed through a direct scenario or a scenario outline."
},
{
"code": null,
"e": 69063,
"s": 68953,
"text": "This allows you to later specify further examples in the acceptance tests without changing the step bindings."
},
{
"code": null,
"e": 69173,
"s": 69063,
"text": "This allows you to later specify further examples in the acceptance tests without changing the step bindings."
},
{
"code": null,
"e": 69416,
"s": 69173,
"text": "You can add comment lines to the feature files at any place by starting the line with #. Be careful however, as comments in your specification can be a sign that acceptance criteria have been specified wrongly. SpecFlow ignores comment lines."
},
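{
"code": null,
"e": null,
"s": null,
"text": "For instance (an illustrative line), a comment simply starts the line with # −\n\n# This scenario covers the happy path only\nScenario − Successful sign up"
}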
]
|
How to de-register a driver from driver manager’s drivers list using JDBC? | The java.sql.DriverManager class manages JDBC drivers in your application. This class maintains a list of required drivers and loads them whenever it is initialized.
Therefore, you need to register the driver class before using it. However, you need to do it only once per application.
You can register a new Driver class in two ways −
Using the registerDriver() method of the DriverManager class. To this method you need to pass the Driver object as a parameter.
//Instantiating a driver class
Driver driver = new com.mysql.jdbc.Driver();
//Registering the Driver
DriverManager.registerDriver(driver);
Using the forName() method of the class named Class. To this method you need to pass the name of the Driver as a String parameter.
Class.forName("com.mysql.jdbc.Driver");
You can remove a particular Driver from the DriverManager’s list using its deregisterDriver() method.
If you invoke this method by passing the object of the required Driver class, the DriverManager simply drops the specified driver from its list.
DriverManager.deregisterDriver(mySQLDriver);
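If you no longer hold a reference to the Driver object, you can look it up in the DriverManager’s list before de-registering it. The following is a minimal sketch (the class and method names here are hypothetical):
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
public class DeregisterByName {
   public static void deregister(String driverClassName) throws SQLException {
      //Retrieving the list of all the registered Drivers
      Enumeration<Driver> drivers = DriverManager.getDrivers();
      while(drivers.hasMoreElements()) {
         Driver driver = drivers.nextElement();
         //De-registering the driver whose class name matches
         if(driver.getClass().getName().equals(driverClassName)) {
            DriverManager.deregisterDriver(driver);
         }
      }
   }
}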
The following JDBC program establishes a connection with MySQL database, displays all the drivers registered with the DriverManager class, de-registers the MySQL Driver and displays the list again.
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Enumeration;
public class DeRegistering_Driver {
public static void main(String args[])throws Exception {
//Instantiating a Driver class
Driver mySQLDriver = new com.mysql.jdbc.Driver();
//Registering the Driver
DriverManager.registerDriver(mySQLDriver);
//Getting the connection
String mysqlUrl = "jdbc:mysql://localhost/sampledatabase";
Connection con = DriverManager.getConnection(mysqlUrl, "root", "password");
System.out.println("Connection established....... ");
System.out.println();
System.out.println("List of all the Drivers registered with the DriverManager: ");
//Retrieving the list of all the Drivers
Enumeration<Driver> e = DriverManager.getDrivers();
//Printing the list
while(e.hasMoreElements()) {
System.out.println(e.nextElement().getClass());
}
System.out.println();
//De-registering the MySQL Driver
DriverManager.deregisterDriver(mySQLDriver);
System.out.println("List of all the Drivers after de-registration:");
e = DriverManager.getDrivers();
//Printing the list
while(e.hasMoreElements()) {
System.out.println(e.nextElement().getClass());
}
System.out.println();
}
}
Since we have removed the driver from the DriverManager’s list, you will not find the name of the MySQL driver in the list the second time.
Connection established.......
List of all the Drivers registered with the DriverManager:
class oracle.jdbc.OracleDriver
class org.sqlite.JDBC
class org.apache.derby.jdbc.AutoloadedDriver
class org.apache.derby.jdbc.ClientDriver
class org.hsqldb.jdbc.JDBCDriver
class net.ucanaccess.jdbc.UcanaccessDriver
class com.mysql.jdbc.Driver
List of all the Drivers after de-registration:
class oracle.jdbc.OracleDriver
class org.sqlite.JDBC
class org.apache.derby.jdbc.AutoloadedDriver
class org.apache.derby.jdbc.ClientDriver
class org.hsqldb.jdbc.JDBCDriver
class net.ucanaccess.jdbc.UcanaccessDriver | [
{
"code": null,
"e": 1227,
"s": 1062,
"text": "The java.sql.DriverManager class manages JDBC drivers in your application. This class maintains a list of required drivers and load them whenever it is initialized."
},
{
"code": null,
"e": 1347,
"s": 1227,
"text": "Therefore, you need to register the driver class before using it. However, you need to do it only once per application."
},
{
"code": null,
"e": 1397,
"s": 1347,
"text": "You can register a new Driver class in two ways −"
},
{
"code": null,
"e": 1525,
"s": 1397,
"text": "Using the registerDriver() method of the DriverManager class. To this method you need to pass the Driver object as a parameter."
},
{
"code": null,
"e": 1664,
"s": 1525,
"text": "//Instantiating a driver class Driver driver = new com.mysql.jdbc.Driver();\n//Registering the Driver DriverManager.registerDriver(driver);"
},
{
"code": null,
"e": 1795,
"s": 1664,
"text": "Using the forName() method of the class named Class. To this method you need to pass the name of the Driver as a String parameter."
},
{
"code": null,
"e": 1835,
"s": 1795,
"text": "Class.forName(\"com.mysql.jdbc.Driver\");"
},
{
"code": null,
"e": 1937,
"s": 1835,
"text": "You can remove a particular Driver from the DriverManager’s list using its deregisterDriver() method."
},
{
"code": null,
"e": 2082,
"s": 1937,
"text": "If you invoke this method by passing the object of the required Driver class, the DriverManager simply drops the specified driver from its list."
},
{
"code": null,
"e": 2127,
"s": 2082,
"text": "DriverManager.deregisterDriver(mySQLDriver);"
},
{
"code": null,
"e": 2318,
"s": 2127,
"text": "Following JDBC program establishes a connection with MySQL database, displays all the drivers registered with the DriverManager class, de-registers MySQL Driver and, displays the list again."
},
{
"code": null,
"e": 3680,
"s": 2318,
"text": "import java.sql.Connection;\nimport java.sql.Driver;\nimport java.sql.DriverManager;\nimport java.util.Enumeration;\npublic class DeRegistering_Driver {\n public static void main(String args[])throws Exception {\n //Instantiating a Driver class\n Driver mySQLDriver = new com.mysql.jdbc.Driver();\n //Registering the Driver\n DriverManager.registerDriver(mySQLDriver);\n //Getting the connection\n String mysqlUrl = \"jdbc:mysql://localhost/sampledatabase\";\n Connection con = DriverManager.getConnection(mysqlUrl, \"root\", \"password\");\n System.out.println(\"Connection established....... \");\n System.out.println();\n System.out.println(\"List of all the Drivers registered with the DriverManager: \");\n //Retrieving the list of all the Drivers\n Enumeration<Driver> e = DriverManager.getDrivers();\n //Printing the list\n while(e.hasMoreElements()) {\n System.out.println(e.nextElement().getClass());\n }\n System.out.println();\n //De-registering the MySQL Driver\n DriverManager.deregisterDriver(mySQLDriver);\n System.out.println(\"List of all the Drivers after de-registration:\");\n e = DriverManager.getDrivers();\n //Printing the list\n while(e.hasMoreElements()) {\n System.out.println(e.nextElement().getClass());\n }\n System.out.println();\n }\n}"
},
{
"code": null,
"e": 3815,
"s": 3680,
"text": "Since we have removed the driver from the DriverManager’s list you will not find the name of the MySQL driver in the list second time."
},
{
"code": null,
"e": 4409,
"s": 3815,
"text": "Connection established.......\nList of all the Drivers registered with the DriverManager:\nclass oracle.jdbc.OracleDriver\nclass org.sqlite.JDBC\nclass org.apache.derby.jdbc.AutoloadedDriver\nclass org.apache.derby.jdbc.ClientDriver\nclass org.hsqldb.jdbc.JDBCDriver\nclass net.ucanaccess.jdbc.UcanaccessDriver\nclass com.mysql.jdbc.Driver\nList of all the Drivers after de-registration:\nclass oracle.jdbc.OracleDriver\nclass org.sqlite.JDBC\nclass org.apache.derby.jdbc.AutoloadedDriver\nclass org.apache.derby.jdbc.ClientDriver\nclass org.hsqldb.jdbc.JDBCDriver\nclass net.ucanaccess.jdbc.UcanaccessDriver"
}
]
|
Find the maximum occurring character after performing the given operations - GeeksforGeeks | 12 Nov, 2021
Given a string str consisting of 0, 1, and *, the task is to find the maximum occurring character out of 0 and 1 after performing the given operations:
Replace * with 0 where * appears on the left side of the existing 0s in the string.
Replace * with 1 where * appears on the right side of the existing 1s in the string.
If any * can be replaced by both 0 and 1, then it remains unchanged.
Note: If the frequency of 0 and 1 is same after performing the given operations then print -1.
Examples:
Input: str = “**0**1***0” Output: 0 Explanation: The *s to the left of the first 0 are replaced by 0, so the string becomes 000**1***0. After the remaining replacements, 0 occurs more often than 1, so the answer is 0.
Input: str = “0*1” Output: -1 Explanation: Both 0 and 1 have the same frequency, hence the output is -1.
Approach: The idea is to generate the final resultant string and then compare the frequencies of 0 and 1. Below are the steps:
Count the initial frequencies of 0 and 1 in the string and store them in variables, say count_0 and count_1.
Initialize a variable, say prev, as -1. Iterate over the string and check if the current character is *. If so, then continue.
If it is the first character encountered and is 0, then add all preceding * to count_0 and change prev to the current index.
Otherwise, if the first character is 1 then change prev to current index.
If the previous character is 1 and the current character is 0, then add half of the * in between the characters to count_0 and half to count_1.
If the previous character is 0 and the current character is 1 then no * character in between them can be replaced.
If the previous and current both characters are of the same type then add the count of * to the frequencies.
Compare the frequencies of 0 and 1 and print the maximum occurring character.
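For instance, tracing this approach on the string “**0**1***0”: the initial counts are count_0 = 2 and count_1 = 1. The first non-* character is the 0 at index 2, so the two leading * are added to count_0 (making it 4). The next character, the 1 at index 5, follows a 0, so the * in between stay unchanged. Finally, the 0 at index 9 follows a 1 with three * in between, so one * goes to each count (integer division of 3 by 2), giving count_0 = 5 and count_1 = 2, and the answer is 0.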
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to find the
// maximum occurring character
void solve(string S)
{
    // Initialize count of
    // zero and one
    int count_0 = 0, count_1 = 0;
    int prev = -1;

    // Iterate over the given string
    for (int i = 0; i < S.length(); i++) {

        // Count the zeros
        if (S[i] == '0')
            count_0++;

        // Count the ones
        else if (S[i] == '1')
            count_1++;
    }

    // Iterate over the given string
    for (int i = 0; i < S.length(); i++) {

        // Check if character
        // is * then continue
        if (S[i] == '*')
            continue;

        // Check if first character
        // after * is X
        else if (S[i] == '0' && prev == -1) {

            // Add all * to
            // the frequency of X
            count_0 = count_0 + i;

            // Set prev to the
            // i-th character
            prev = i;
        }

        // Check if first character
        // after * is Y
        else if (S[i] == '1' && prev == -1) {

            // Set prev to the
            // i-th character
            prev = i;
        }

        // Check if prev character is 1
        // and current character is 0
        else if (S[prev] == '1' && S[i] == '0') {

            // Half of the * will be
            // converted to 0
            count_0 = count_0 + (i - prev - 1) / 2;

            // Half of the * will be
            // converted to 1
            count_1 = count_1 + (i - prev - 1) / 2;
            prev = i;
        }

        // Check if prev and current are 1
        else if (S[prev] == '1' && S[i] == '1') {

            // All * will get converted to 1
            count_1 = count_1 + (i - prev - 1);
            prev = i;
        }

        // No * can be replaced
        // by either 0 or 1
        else if (S[prev] == '0' && S[i] == '1')

            // Prev becomes the ith character
            prev = i;

        // Check if prev and current are 0
        else if (S[prev] == '0' && S[i] == '0') {

            // All * will get converted to 0
            count_0 = count_0 + (i - prev - 1);
            prev = i;
        }
    }

    // If frequency of 0
    // is more
    if (count_0 > count_1)
        cout << "0";

    // If frequency of 1
    // is more
    else if (count_1 > count_0)
        cout << "1";
    else {
        cout << -1;
    }
}

// Driver code
int main()
{
    // Given string
    string str = "**0**1***0";

    // Function Call
    solve(str);

    return 0;
}
// Java program for the above approach
import java.io.*;

class GFG{

// Function to find the
// maximum occurring character
static void solve(String S)
{
    // Initialize count of
    // zero and one
    int count_0 = 0, count_1 = 0;
    int prev = -1;

    // Iterate over the given string
    for(int i = 0; i < S.length(); i++)
    {
        // Count the zeros
        if (S.charAt(i) == '0')
            count_0++;

        // Count the ones
        else if (S.charAt(i) == '1')
            count_1++;
    }

    // Iterate over the given string
    for(int i = 0; i < S.length(); i++)
    {
        // Check if character
        // is * then continue
        if (S.charAt(i) == '*')
            continue;

        // Check if first character
        // after * is X
        else if (S.charAt(i) == '0' && prev == -1)
        {
            // Add all * to
            // the frequency of X
            count_0 = count_0 + i;

            // Set prev to the
            // i-th character
            prev = i;
        }

        // Check if first character
        // after * is Y
        else if (S.charAt(i) == '1' && prev == -1)
        {
            // Set prev to the
            // i-th character
            prev = i;
        }

        // Check if prev character is 1
        // and current character is 0
        else if (S.charAt(prev) == '1' && S.charAt(i) == '0')
        {
            // Half of the * will be
            // converted to 0
            count_0 = count_0 + (i - prev - 1) / 2;

            // Half of the * will be
            // converted to 1
            count_1 = count_1 + (i - prev - 1) / 2;
            prev = i;
        }

        // Check if prev and current are 1
        else if (S.charAt(prev) == '1' && S.charAt(i) == '1')
        {
            // All * will get converted to 1
            count_1 = count_1 + (i - prev - 1);
            prev = i;
        }

        // No * can be replaced
        // by either 0 or 1
        else if (S.charAt(prev) == '0' && S.charAt(i) == '1')

            // Prev becomes the ith character
            prev = i;

        // Check if prev and current are 0
        else if (S.charAt(prev) == '0' && S.charAt(i) == '0')
        {
            // All * will get converted to 0
            count_0 = count_0 + (i - prev - 1);
            prev = i;
        }
    }

    // If frequency of 0
    // is more
    if (count_0 > count_1)
        System.out.print("0");

    // If frequency of 1
    // is more
    else if (count_1 > count_0)
        System.out.print("1");
    else
    {
        System.out.print("-1");
    }
}

// Driver code
public static void main (String[] args)
{
    // Given string
    String str = "**0**1***0";

    // Function call
    solve(str);
}
}

// This code is contributed by code_hunt
# Python3 program for the above approach

# Function to find the
# maximum occurring character
def solve(S):

    # Initialize count of
    # zero and one
    count_0 = 0
    count_1 = 0
    prev = -1

    # Iterate over the given string
    for i in range(len(S)):

        # Count the zeros
        if (S[i] == '0'):
            count_0 += 1

        # Count the ones
        elif (S[i] == '1'):
            count_1 += 1

    # Iterate over the given string
    for i in range(len(S)):

        # Check if character
        # is * then continue
        if (S[i] == '*'):
            continue

        # Check if first character
        # after * is X
        elif (S[i] == '0' and prev == -1):

            # Add all * to
            # the frequency of X
            count_0 = count_0 + i

            # Set prev to the
            # i-th character
            prev = i

        # Check if first character
        # after * is Y
        elif (S[i] == '1' and prev == -1):

            # Set prev to the
            # i-th character
            prev = i

        # Check if prev character is 1
        # and current character is 0
        elif (S[prev] == '1' and S[i] == '0'):

            # Half of the * will be
            # converted to 0
            count_0 = count_0 + (i - prev - 1) // 2

            # Half of the * will be
            # converted to 1
            count_1 = count_1 + (i - prev - 1) // 2
            prev = i

        # Check if prev and current are 1
        elif (S[prev] == '1' and S[i] == '1'):

            # All * will get converted to 1
            count_1 = count_1 + (i - prev - 1)
            prev = i

        # No * can be replaced
        # by either 0 or 1
        elif (S[prev] == '0' and S[i] == '1'):

            # Prev becomes the ith character
            prev = i

        # Check if prev and current are 0
        elif (S[prev] == '0' and S[i] == '0'):

            # All * will get converted to 0
            count_0 = count_0 + (i - prev - 1)
            prev = i

    # If frequency of 0
    # is more
    if (count_0 > count_1):
        print("0")

    # If frequency of 1
    # is more
    elif (count_1 > count_0):
        print("1")
    else:
        print("-1")

# Driver code

# Given string
str = "**0**1***0"

# Function call
solve(str)

# This code is contributed by code_hunt
// C# program for the above approach
using System;

class GFG{

// Function to find the
// maximum occurring character
static void solve(string S)
{
    // Initialize count of
    // zero and one
    int count_0 = 0, count_1 = 0;
    int prev = -1;

    // Iterate over the given string
    for(int i = 0; i < S.Length; i++)
    {
        // Count the zeros
        if (S[i] == '0')
            count_0++;

        // Count the ones
        else if (S[i] == '1')
            count_1++;
    }

    // Iterate over the given string
    for(int i = 0; i < S.Length; i++)
    {
        // Check if character
        // is * then continue
        if (S[i] == '*')
            continue;

        // Check if first character
        // after * is X
        else if (S[i] == '0' && prev == -1)
        {
            // Add all * to
            // the frequency of X
            count_0 = count_0 + i;

            // Set prev to the
            // i-th character
            prev = i;
        }

        // Check if first character
        // after * is Y
        else if (S[i] == '1' && prev == -1)
        {
            // Set prev to the
            // i-th character
            prev = i;
        }

        // Check if prev character is 1
        // and current character is 0
        else if (S[prev] == '1' && S[i] == '0')
        {
            // Half of the * will be
            // converted to 0
            count_0 = count_0 + (i - prev - 1) / 2;

            // Half of the * will be
            // converted to 1
            count_1 = count_1 + (i - prev - 1) / 2;
            prev = i;
        }

        // Check if prev and current are 1
        else if (S[prev] == '1' && S[i] == '1')
        {
            // All * will get converted to 1
            count_1 = count_1 + (i - prev - 1);
            prev = i;
        }

        // No * can be replaced
        // by either 0 or 1
        else if (S[prev] == '0' && S[i] == '1')

            // Prev becomes the ith character
            prev = i;

        // Check if prev and current are 0
        else if (S[prev] == '0' && S[i] == '0')
        {
            // All * will get converted to 0
            count_0 = count_0 + (i - prev - 1);
            prev = i;
        }
    }

    // If frequency of 0
    // is more
    if (count_0 > count_1)
        Console.Write("0");

    // If frequency of 1
    // is more
    else if (count_1 > count_0)
        Console.Write("1");
    else
    {
        Console.Write("-1");
    }
}

// Driver code
public static void Main ()
{
    // Given string
    string str = "**0**1***0";

    // Function call
    solve(str);
}
}

// This code is contributed by code_hunt
<script>
// JavaScript program for the above approach

// Function to find the
// maximum occurring character
function solve(S)
{

    // Initialize count of
    // zero and one
    var count_0 = 0, count_1 = 0;
    var prev = -1;

    // Iterate over the given string
    for (var i = 0; i < S.length; i++)
    {

        // Count the zeros
        if (S.charAt(i) == '0')
            count_0++;

        // Count the ones
        else if (S.charAt(i) == '1')
            count_1++;
    }

    // Iterate over the given string
    for (var i = 0; i < S.length; i++)
    {

        // Check if character
        // is * then continue
        if (S.charAt(i) == '*')
            continue;

        // Check if first character
        // after * is X
        else if (S.charAt(i) == '0' && prev == -1)
        {

            // Add all * to
            // the frequency of X
            count_0 = count_0 + i;

            // Set prev to the
            // i-th character
            prev = i;
        }

        // Check if first character
        // after * is Y
        else if (S.charAt(i) == '1' && prev == -1)
        {

            // Set prev to the
            // i-th character
            prev = i;
        }

        // Check if prev character is 1
        // and current character is 0
        else if (S.charAt(prev) == '1' && S.charAt(i) == '0')
        {

            // Half of the * will be
            // converted to 0 (Math.floor
            // keeps the counts integral)
            count_0 = count_0 + Math.floor((i - prev - 1) / 2);

            // Half of the * will be
            // converted to 1
            count_1 = count_1 + Math.floor((i - prev - 1) / 2);
            prev = i;
        }

        // Check if prev and current are 1
        else if (S.charAt(prev) == '1' && S.charAt(i) == '1')
        {

            // All * will get converted to 1
            count_1 = count_1 + (i - prev - 1);
            prev = i;
        }

        // No * can be replaced
        // by either 0 or 1
        else if (S.charAt(prev) == '0' && S.charAt(i) == '1')

            // Prev becomes the ith character
            prev = i;

        // Check if prev and current are 0
        else if (S.charAt(prev) == '0' && S.charAt(i) == '0')
        {

            // All * will get converted to 0
            count_0 = count_0 + (i - prev - 1);
            prev = i;
        }
    }

    // If frequency of 0
    // is more
    if (count_0 > count_1)
        document.write("0");

    // If frequency of 1
    // is more
    else if (count_1 > count_0)
        document.write("1");
    else
    {
        document.write("-1");
    }
}

// Driver code

// Given string
var str = "**0**1***0";

// Function call
solve(str);

// This code is contributed by umadevi9616
</script>
0
Time Complexity: O(N)
Auxiliary Space: O(1)
| [
{
"code": null,
"e": 24951,
"s": 24923,
"text": "\n12 Nov, 2021"
},
{
"code": null,
"e": 25105,
"s": 24951,
"text": "Given a string str consisting of 0, 1, and *, the task is to find the maximum occurring character out of 0 and 1 after performing the given operations: "
},
{
"code": null,
"e": 25189,
"s": 25105,
"text": "Replace * with 0 where * appears on the left side of the existing 0s in the string."
},
{
"code": null,
"e": 25274,
"s": 25189,
"text": "Replace * with 1 where * appears on the right side of the existing 1s in the string."
},
{
"code": null,
"e": 25343,
"s": 25274,
"text": "If any * can be replaced by both 0 and 1, then it remains unchanged."
},
{
"code": null,
"e": 25438,
"s": 25343,
"text": "Note: If the frequency of 0 and 1 is same after performing the given operations then print -1."
},
{
"code": null,
"e": 25448,
"s": 25438,
"text": "Examples:"
},
{
"code": null,
"e": 25603,
"s": 25448,
"text": "Input: str = “**0**1***0” Output: 0 Explanation: Since 0 can replace the * to its left and 1 can replace the * to its right thus string becomes 000**1***0"
},
{
"code": null,
"e": 25707,
"s": 25603,
"text": "Input: str = “0*1” Output: -1 Explanation: Both 0 and 1 have the same frequency hence the output is -1."
},
{
"code": null,
"e": 25829,
"s": 25707,
"text": "Approach: The idea to generate the final resultant string and then compare the frequency of 0 and 1. Below are the steps:"
},
{
"code": null,
"e": 25937,
"s": 25829,
"text": "Count the initial frequencies of 0 and 1 in the string and store them in variables say count_0 and count_1."
},
{
"code": null,
"e": 26064,
"s": 25937,
"text": "Initialize a variable, say prev, as -1. Iterate over the string and check if the current character is *. If so, then continue."
},
{
"code": null,
"e": 26174,
"s": 26064,
"text": "If it is the first character encountered and is 0 then add all * to count_0 and change prev to current index."
},
{
"code": null,
"e": 26248,
"s": 26174,
"text": "Otherwise, if the first character is 1 then change prev to current index."
},
{
"code": null,
"e": 26375,
"s": 26248,
"text": "If the previous character is 1 and the current character is 0 then add half of * in between the characters to 0 and half to 1."
},
{
"code": null,
"e": 26490,
"s": 26375,
"text": "If the previous character is 0 and the current character is 1 then no * character in between them can be replaced."
},
{
"code": null,
"e": 26599,
"s": 26490,
"text": "If the previous and current both characters are of the same type then add the count of * to the frequencies."
},
{
"code": null,
"e": 26677,
"s": 26599,
"text": "Compare the frequencies of 0 and 1 and print the maximum occurring character."
},
{
"code": null,
"e": 26728,
"s": 26677,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 26732,
"s": 26728,
"text": "C++"
},
{
"code": null,
"e": 26737,
"s": 26732,
"text": "Java"
},
{
"code": null,
"e": 26745,
"s": 26737,
"text": "Python3"
},
{
"code": null,
"e": 26748,
"s": 26745,
"text": "C#"
},
{
"code": null,
"e": 26759,
"s": 26748,
"text": "Javascript"
},
{
"code": "// C++ program for the above approach#include <bits/stdc++.h>using namespace std; // Function to find the// maximum occurring charactervoid solve(string S){ // Initialize count of // zero and one int count_0 = 0, count_1 = 0; int prev = -1; // Iterate over the given string for (int i = 0; i < S.length(); i++) { // Count the zeros if (S[i] == '0') count_0++; // Count the ones else if (S[i] == '1') count_1++; } // Iterate over the given string for (int i = 0; i < S.length(); i++) { // Check if character // is * then continue if (S[i] == '*') continue; // Check if first character // after * is X else if (S[i] == '0' && prev == -1) { // Add all * to // the frequency of X count_0 = count_0 + i; // Set prev to the // i-th character prev = i; } // Check if first character // after * is Y else if (S[i] == '1' && prev == -1) { // Set prev to the // i-th character prev = i; } // Check if prev character is 1 // and current character is 0 else if (S[prev] == '1' && S[i] == '0') { // Half of the * will be // converted to 0 count_0 = count_0 + (i - prev - 1) / 2; // Half of the * will be // converted to 1 count_1 = count_1 + (i - prev - 1) / 2; prev = i; } // Check if prev and current are 1 else if (S[prev] == '1' && S[i] == '1') { // All * will get converted to 1 count_1 = count_1 + (i - prev - 1); prev = i; } // No * can be replaced // by either 0 or 1 else if (S[prev] == '0' && S[i] == '1') // Prev becomes the ith character prev = i; // Check if prev and current are 0 else if (S[prev] == '0' && S[i] == '0') { // All * will get converted to 0 count_0 = count_0 + (i - prev - 1); prev = i; } } // If frequency of 0 // is more if (count_0 > count_1) cout << \"0\"; // If frequency of 1 // is more else if (count_1 > count_0) cout << \"1\"; else { cout << -1; }} // Driver codeint main(){ // Given string string str = \"**0**1***0\"; // Function Call solve(str); return 0;}",
"e": 29246,
"s": 26759,
"text": null
},
{
"code": "// Java program for the above approachimport java.io.*; class GFG{ // Function to find the// maximum occurring characterstatic void solve(String S){ // Initialize count of // zero and one int count_0 = 0, count_1 = 0; int prev = -1; // Iterate over the given string for(int i = 0; i < S.length(); i++) { // Count the zeros if (S.charAt(i) == '0') count_0++; // Count the ones else if (S.charAt(i) == '1') count_1++; } // Iterate over the given string for(int i = 0; i < S.length(); i++) { // Check if character // is * then continue if (S.charAt(i) == '*') continue; // Check if first character // after * is X else if (S.charAt(i) == '0' && prev == -1) { // Add all * to // the frequency of X count_0 = count_0 + i; // Set prev to the // i-th character prev = i; } // Check if first character // after * is Y else if (S.charAt(i) == '1' && prev == -1) { // Set prev to the // i-th character prev = i; } // Check if prev character is 1 // and current character is 0 else if (S.charAt(prev) == '1' && S.charAt(i) == '0') { // Half of the * will be // converted to 0 count_0 = count_0 + (i - prev - 1) / 2; // Half of the * will be // converted to 1 count_1 = count_1 + (i - prev - 1) / 2; prev = i; } // Check if prev and current are 1 else if (S.charAt(prev) == '1' && S.charAt(i) == '1') { // All * will get converted to 1 count_1 = count_1 + (i - prev - 1); prev = i; } // No * can be replaced // by either 0 or 1 else if (S.charAt(prev) == '0' && S.charAt(i) == '1') // Prev becomes the ith character prev = i; // Check if prev and current are 0 else if (S.charAt(prev) == '0' && S.charAt(i) == '0') { // All * will get converted to 0 count_0 = count_0 + (i - prev - 1); prev = i; } } // If frequency of 0 // is more if (count_0 > count_1) System.out.print(\"0\"); // If frequency of 1 // is more else if (count_1 > count_0) System.out.print(\"1\"); else { System.out.print(\"-1\"); }} // Driver codepublic static void main (String[] args){ // Given string String str = \"**0**1***0\"; // Function call solve(str);}} // This code is contributed by code_hunt",
"e": 32045,
"s": 29246,
"text": null
},
{
"code": "# Python3 program for the above approach # Function to find the# maximum occurring characterdef solve(S): # Initialize count of # zero and one count_0 = 0 count_1 = 0 prev = -1 # Iterate over the given string for i in range(len(S)) : # Count the zeros if (S[i] == '0'): count_0 += 1 # Count the ones elif (S[i] == '1'): count_1 += 1 # Iterate over the given string for i in range(len(S)): # Check if character # is * then continue if (S[i] == '*'): continue # Check if first character # after * is X elif (S[i] == '0' and prev == -1): # Add all * to # the frequency of X count_0 = count_0 + i # Set prev to the # i-th character prev = i # Check if first character # after * is Y elif (S[i] == '1' and prev == -1): # Set prev to the # i-th character prev = i # Check if prev character is 1 # and current character is 0 elif (S[prev] == '1' and S[i] == '0'): # Half of the * will be # converted to 0 count_0 = count_0 + (i - prev - 1) / 2 # Half of the * will be # converted to 1 count_1 = count_1 + (i - prev - 1) // 2 prev = i # Check if prev and current are 1 elif (S[prev] == '1' and S[i] == '1'): # All * will get converted to 1 count_1 = count_1 + (i - prev - 1) prev = i # No * can be replaced # by either 0 or 1 elif (S[prev] == '0' and S[i] == '1'): # Prev becomes the ith character prev = i # Check if prev and current are 0 elif (S[prev] == '0' and S[i] == '0'): # All * will get converted to 0 count_0 = count_0 + (i - prev - 1) prev = i # If frequency of 0 # is more if (count_0 > count_1): print(\"0\") # If frequency of 1 # is more elif (count_1 > count_0): print(\"1\") else: print(\"-1\") # Driver code # Given stringstr = \"**0**1***0\" # Function callsolve(str) # This code is contributed by code_hunt",
"e": 34358,
"s": 32045,
"text": null
},
{
"code": "// C# program for the above approachusing System; class GFG{ // Function to find the// maximum occurring characterstatic void solve(string S){ // Initialize count of // zero and one int count_0 = 0, count_1 = 0; int prev = -1; // Iterate over the given string for(int i = 0; i < S.Length; i++) { // Count the zeros if (S[i] == '0') count_0++; // Count the ones else if (S[i] == '1') count_1++; } // Iterate over the given string for(int i = 0; i < S.Length; i++) { // Check if character // is * then continue if (S[i] == '*') continue; // Check if first character // after * is X else if (S[i] == '0' && prev == -1) { // Add all * to // the frequency of X count_0 = count_0 + i; // Set prev to the // i-th character prev = i; } // Check if first character // after * is Y else if (S[i] == '1' && prev == -1) { // Set prev to the // i-th character prev = i; } // Check if prev character is 1 // and current character is 0 else if (S[prev] == '1' && S[i] == '0') { // Half of the * will be // converted to 0 count_0 = count_0 + (i - prev - 1) / 2; // Half of the * will be // converted to 1 count_1 = count_1 + (i - prev - 1) / 2; prev = i; } // Check if prev and current are 1 else if (S[prev] == '1' && S[i] == '1') { // All * will get converted to 1 count_1 = count_1 + (i - prev - 1); prev = i; } // No * can be replaced // by either 0 or 1 else if (S[prev] == '0' && S[i] == '1') // Prev becomes the ith character prev = i; // Check if prev and current are 0 else if (S[prev] == '0' && S[i] == '0') { // All * will get converted to 0 count_0 = count_0 + (i - prev - 1); prev = i; } } // If frequency of 0 // is more if (count_0 > count_1) Console.Write(\"0\"); // If frequency of 1 // is more else if (count_1 > count_0) Console.Write(\"1\"); else { Console.Write(\"-1\"); }} // Driver codepublic static void Main (){ // Given string string str = \"**0**1***0\"; // Function call solve(str);}} // This code is contributed by code_hunt",
"e": 37010,
"s": 34358,
"text": null
},
{
"code": "<script>// javascript program for the above approach // Function to find the // maximum occurring character function solve( S) { // Initialize count of // zero and one var count_0 = 0, count_1 = 0; var prev = -1; // Iterate over the given string for (i = 0; i < S.length; i++) { // Count the zeros if (S.charAt(i) == '0') count_0++; // Count the ones else if (S.charAt(i) == '1') count_1++; } // Iterate over the given string for (i = 0; i < S.length; i++) { // Check if character // is * then continue if (S.charAt(i) == '*') continue; // Check if first character // after * is X else if (S.charAt(i) == '0' && prev == -1) { // Add all * to // the frequency of X count_0 = count_0 + i; // Set prev to the // i-th character prev = i; } // Check if first character // after * is Y else if (S.charAt(i) == '1' && prev == -1) { // Set prev to the // i-th character prev = i; } // Check if prev character is 1 // and current character is 0 else if (S.charAt(prev) == '1' && S.charAt(i) == '0') { // Half of the * will be // converted to 0 count_0 = count_0 + (i - prev - 1) / 2; // Half of the * will be // converted to 1 count_1 = count_1 + (i - prev - 1) / 2; prev = i; } // Check if prev and current are 1 else if (S.charAt(prev) == '1' && S.charAt(i) == '1') { // All * will get converted to 1 count_1 = count_1 + (i - prev - 1); prev = i; } // No * can be replaced // by either 0 or 1 else if (S.charAt(prev) == '0' && S.charAt(i) == '1') // Prev becomes the ith character prev = i; // Check if prev and current are 0 else if (S.charAt(prev) == '0' && S.charAt(i) == '0') { // All * will get converted to 0 count_0 = count_0 + (i - prev - 1); prev = i; } } // If frequency of 0 // is more if (count_0 > count_1) document.write(\"0\"); // If frequency of 1 // is more else if (count_1 > count_0) document.write(\"1\"); else { document.write(\"-1\"); } } // Driver code // Given string var str = \"**0**1***0\"; // Function call solve(str); // This code IS contributed by umadevi9616</script>",
"e": 39936,
"s": 37010,
"text": null
},
{
"code": null,
"e": 39938,
"s": 39936,
"text": "0"
},
{
"code": null,
"e": 39983,
"s": 39938,
"text": "Time Complexity: O(N) Auxiliary Space: O(1) "
},
{
"code": null,
"e": 39993,
"s": 39983,
"text": "code_hunt"
},
{
"code": null,
"e": 40005,
"s": 39993,
"text": "umadevi9616"
},
{
"code": null,
"e": 40019,
"s": 40005,
"text": "binary-string"
},
{
"code": null,
"e": 40038,
"s": 40019,
"text": "frequency-counting"
},
{
"code": null,
"e": 40048,
"s": 40038,
"text": "Searching"
},
{
"code": null,
"e": 40056,
"s": 40048,
"text": "Strings"
},
{
"code": null,
"e": 40066,
"s": 40056,
"text": "Searching"
},
{
"code": null,
"e": 40074,
"s": 40066,
"text": "Strings"
},
{
"code": null,
"e": 40172,
"s": 40074,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 40181,
"s": 40172,
"text": "Comments"
},
{
"code": null,
"e": 40194,
"s": 40181,
"text": "Old Comments"
},
{
"code": null,
"e": 40241,
"s": 40194,
"text": "Median of two sorted arrays of different sizes"
},
{
"code": null,
"e": 40275,
"s": 40241,
"text": "Most frequent element in an array"
},
{
"code": null,
"e": 40318,
"s": 40275,
"text": "Find the index of an array element in Java"
},
{
"code": null,
"e": 40379,
"s": 40318,
"text": "Count number of occurrences (or frequency) in a sorted array"
},
{
"code": null,
"e": 40402,
"s": 40379,
"text": "Two Pointers Technique"
},
{
"code": null,
"e": 40427,
"s": 40402,
"text": "Reverse a string in Java"
},
{
"code": null,
"e": 40473,
"s": 40427,
"text": "Write a program to reverse an array or string"
},
{
"code": null,
"e": 40507,
"s": 40473,
"text": "Longest Common Subsequence | DP-4"
},
{
"code": null,
"e": 40567,
"s": 40507,
"text": "Write a program to print all permutations of a given string"
}
]
|
Pandas concat() tricks you should know to speed up your data analysis | by B. Chen | Towards Data Science | Pandas provides various built-in functions for easily combining DataFrames. Among them, the concat() function seems fairly straightforward to use, but there are still many tricks you should know to speed up your data analysis.
In this article, you’ll learn Pandas concat() tricks to deal with the following common problems:
Dealing with index and axis
Avoiding duplicate indices
Adding a hierarchical index with keys and names options
Column matching and sorting
Loading and concatenating datasets from a bunch of CSV files
Please check out my Github repo for the source code.
Suppose we have 2 datasets about exam grades.
df1 = pd.DataFrame({
    'name': ['A', 'B', 'C', 'D'],
    'math': [60,89,82,70],
    'physics': [66,95,83,66],
    'chemistry': [61,91,77,70]
})
df2 = pd.DataFrame({
    'name': ['E', 'F', 'G', 'H'],
    'math': [66,95,83,66],
    'physics': [60,89,82,70],
    'chemistry': [90,81,78,90]
})
The simplest concatenation with concat() is by passing a list of DataFrames, for example [df1, df2]. By default, it concatenates vertically along axis 0 and preserves all existing indices.
pd.concat([df1, df2])
If you want the concatenation to ignore existing indices, you can set the argument ignore_index=True. Then, the resulting DataFrame index will be labeled with 0, ..., n-1.
pd.concat([df1, df2], ignore_index=True)
To concatenate DataFrames horizontally along axis 1, you can set the argument axis=1.
pd.concat([df1, df2], axis=1)
Now, we know that the concat() function preserves indices. If you’d like to verify that the indices in the result of pd.concat() do not overlap, you can set the argument verify_integrity=True. With this set to True, it will raise an exception if there are duplicate indices.
try:
    pd.concat([df1, df2], verify_integrity=True)
except ValueError as e:
    print('ValueError', e)

ValueError: Indexes have overlapping values: Int64Index([0, 1, 2, 3], dtype='int64')
It is quite useful to add a hierarchical index (also known as a multi-level index) for more sophisticated data analysis. In this case, let’s add the indices Year 1 and Year 2 for df1 and df2, respectively. To do that, we can simply specify the keys argument.
res = pd.concat([df1, df2], keys=['Year 1','Year 2'])
res
And to access a specific group of values, for example, Year 1:
res.loc['Year 1']
In addition, the argument names can be used to add names for the resulting hierarchical index. For example: add name Class to the outermost index we just created.
pd.concat(
    [df1, df2],
    keys=['Year 1', 'Year 2'],
    names=['Class', None],
)
To reset an index and turn it into a data column, you can use reset_index()
pd.concat(
    [df1, df2],
    keys=['Year 1', 'Year 2'],
    names=['Class', None],
).reset_index(level=0)   # reset_index(level='Class')
The concat() function is able to concatenate DataFrames with the columns in a different order. By default, the resulting DataFrame would have the same sorting as the first DataFrame. For example, in the following example, it’s the same order as df1.
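A minimal illustration (df3 is a hypothetical frame that just reuses df2 with its columns shuffled):

# Hypothetical frame: same columns as df1, in a different order
df3 = df2[['chemistry', 'physics', 'math', 'name']]

# With pandas' default sort=False, the result keeps df1's column order
pd.concat([df1, df3])

Passing the frames the other way around, pd.concat([df3, df1]), would keep df3's order instead.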
If you prefer the resulting DataFrame to be sorted alphabetically, you can set the argument sort=True.
pd.concat([df1, df2], sort=True)
If you prefer a custom sort, here is how to do it:
custom_sort = ['math', 'chemistry', 'physics', 'name']
res = pd.concat([df1, df2])
res[custom_sort]
Suppose we need to load and concatenate datasets from a bunch of CSV files. Here is one solution using a for loop.
# Bad
import pathlib2 as pl2

ps = pl2.Path('data/sp3')
res = None
for p in ps.glob('*.csv'):
    if res is None:
        res = pd.read_csv(p)
    else:
        res = pd.concat([res, pd.read_csv(p)])
This certainly does the job, but pd.concat() gets called in every iteration of the loop. We can solve this more efficiently using a list comprehension.
import pathlib2 as pl2

ps = pl2.Path('data/sp3')
dfs = (
    pd.read_csv(p, encoding='utf8')
    for p in ps.glob('*.csv')
)
res = pd.concat(dfs)
res
A single line of code reads all the CSV files and generates the DataFrames dfs (strictly speaking, this is a generator expression, so nothing is loaded until pd.concat consumes it). Then, we just need to call pd.concat(dfs) once to get the same result.
If you time both executions using %%timeit, you will probably find that the list comprehension solution cuts the running time roughly in half.
# for-loop solution
298 ms ± 11.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# list comprehension solution
153 ms ± 6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
List comprehension saves time and code. It is a simpler way to build a list than an explicit loop.
Thanks for reading.
Please check out the notebook for the source code.
Stay tuned if you are interested in the practical aspect of machine learning.
How to do a Custom Sort on Pandas DataFrame
When to use Pandas transform() function
Using Pandas method chaining to improve code readability
Working with datetime in Pandas DataFrame
Working with missing values in Pandas
Pandas read_csv() tricks you should know
4 tricks you should know to parse date columns with Pandas read_csv()
More can be found from my Github | [
{
"code": null,
"e": 399,
"s": 172,
"text": "Pandas provides various built-in functions for easily combining DataFrames. Among them, the concat() function seems fairly straightforward to use, but there are still many tricks you should know to speed up your data analysis."
},
{
"code": null,
"e": 496,
"s": 399,
"text": "In this article, you’ll learn Pandas concat() tricks to deal with the following common problems:"
},
{
"code": null,
"e": 692,
"s": 496,
"text": "Dealing with index and axisAvoiding duplicate indicesAdding a hierarchical index with keys and names optionsColumn matching and sortingLoading and concatenating datasets from a bunch of CSV files"
},
{
"code": null,
"e": 720,
"s": 692,
"text": "Dealing with index and axis"
},
{
"code": null,
"e": 747,
"s": 720,
"text": "Avoiding duplicate indices"
},
{
"code": null,
"e": 803,
"s": 747,
"text": "Adding a hierarchical index with keys and names options"
},
{
"code": null,
"e": 831,
"s": 803,
"text": "Column matching and sorting"
},
{
"code": null,
"e": 892,
"s": 831,
"text": "Loading and concatenating datasets from a bunch of CSV files"
},
{
"code": null,
"e": 945,
"s": 892,
"text": "Please check out my Github repo for the source code."
},
{
"code": null,
"e": 991,
"s": 945,
"text": "Suppose we have 2 datasets about exam grades."
},
{
"code": null,
"e": 1272,
"s": 991,
"text": "df1 = pd.DataFrame({ 'name': ['A', 'B', 'C', 'D'], 'math': [60,89,82,70], 'physics': [66,95,83,66], 'chemistry': [61,91,77,70]})df2 = pd.DataFrame({ 'name': ['E', 'F', 'G', 'H'], 'math': [66,95,83,66], 'physics': [60,89,82,70], 'chemistry': [90,81,78,90]})"
},
{
"code": null,
"e": 1473,
"s": 1272,
"text": "The simplest concatenation with concat() is by passing a list of DataFrames, for example[df1, df2]. And by default, it is concatenating vertically along the axis 0 and preserving all existing indices."
},
{
"code": null,
"e": 1495,
"s": 1473,
"text": "pd.concat([df1, df2])"
},
{
"code": null,
"e": 1667,
"s": 1495,
"text": "If you want the concatenation to ignore existing indices, you can set the argument ignore_index=True. Then, the resulting DataFrame index will be labeled with 0, ..., n-1."
},
{
"code": null,
"e": 1708,
"s": 1667,
"text": "pd.concat([df1, df2], ignore_index=True)"
},
{
"code": null,
"e": 1800,
"s": 1708,
"text": "To concatenate DataFrames horizontally along the axis 1 , you can set the argument axis=1 ."
},
{
"code": null,
"e": 1830,
"s": 1800,
"text": "pd.concat([df1, df2], axis=1)"
},
{
"code": null,
"e": 2105,
"s": 1830,
"text": "Now, we know that the concat() function preserves indices. If you’d like to verify that the indices in the result of pd.concat() do not overlap, you can set the argument verify_integrity=True. With this set to True, it will raise an exception if there are duplicate indices."
},
{
"code": null,
"e": 2290,
"s": 2105,
"text": "try: pd.concat([df1,df2], verify_integrity=True)except ValueError as e: print('ValueError', e)ValueError: Indexes have overlapping values: Int64Index([0, 1, 2, 3], dtype='int64')"
},
{
"code": null,
"e": 2540,
"s": 2290,
"text": "It is quite useful to add a hierarchical index (Also known as multi-level index) for more sophisticated data analysis. In this case, let’s add index Year 1 and Year 2 for df1 and df2 respectively. To do that, we can simply specify the keys argument."
},
{
"code": null,
"e": 2597,
"s": 2540,
"text": "res = pd.concat([df1, df2], keys=['Year 1','Year 2'])res"
},
{
"code": null,
"e": 2660,
"s": 2597,
"text": "And to access a specific group of values, for example, Year 1:"
},
{
"code": null,
"e": 2678,
"s": 2660,
"text": "res.loc['Year 1']"
},
{
"code": null,
"e": 2841,
"s": 2678,
"text": "In addition, the argument names can be used to add names for the resulting hierarchical index. For example: add name Class to the outermost index we just created."
},
{
"code": null,
"e": 2925,
"s": 2841,
"text": "pd.concat( [df1, df2], keys=['Year 1', 'Year 2'], names=['Class', None],)"
},
{
"code": null,
"e": 3001,
"s": 2925,
"text": "To reset an index and turn it into a data column, you can use reset_index()"
},
{
"code": null,
"e": 3137,
"s": 3001,
"text": "pd.concat( [df1, df2], keys=['Year 1', 'Year 2'], names=['Class', None],).reset_index(level=0) # reset_index(level='Class')"
},
{
"code": null,
"e": 3387,
"s": 3137,
"text": "The concat() function is able to concatenate DataFrames with the columns in a different order. By default, the resulting DataFrame would have the same sorting as the first DataFrame. For example, in the following example, it’s the same order as df1."
},
{
"code": null,
"e": 3490,
"s": 3387,
"text": "If you prefer the resulting DataFrame to be sorted alphabetically, you can set the argument sort=True."
},
{
"code": null,
"e": 3523,
"s": 3490,
"text": "pd.concat([df1, df2], sort=True)"
},
{
"code": null,
"e": 3574,
"s": 3523,
"text": "If you prefer a custom sort, here is how to do it:"
},
{
"code": null,
"e": 3672,
"s": 3574,
"text": "custom_sort = ['math', 'chemistry', 'physics', 'name']res = pd.concat([df1, df2])res[custom_sort]"
},
{
"code": null,
"e": 3785,
"s": 3672,
"text": "Suppose we need to load and concatenate datasets from a bunch of CSV files. Here is one solution using for loop."
},
{
"code": null,
"e": 3976,
"s": 3785,
"text": "# Badimport pathlib2 as pl2ps = pl2.Path('data/sp3')res = Nonefor p in ps.glob('*.csv'): if res is None: res = pd.read_csv(p) else: res = pd.concat([res, pd.read_csv(p)])"
},
{
"code": null,
"e": 4133,
"s": 3976,
"text": "This certainly does the work. But the pd.concat() gets called every time in each for loop iteration. We can solve this effectively using list comprehension."
},
{
"code": null,
"e": 4273,
"s": 4133,
"text": "import pathlib2 as pl2ps = pl2.Path('data/sp3')dfs = ( pd.read_csv(p, encoding='utf8') for p in ps.glob('*.csv'))res = pd.concat(dfs)res"
},
{
"code": null,
"e": 4428,
"s": 4273,
"text": "A single line of code read all the CSV files and generate a list of DataFrames dfs. Then, we just need to call pd.concat(dfs) once to get the same result."
},
{
"code": null,
"e": 4551,
"s": 4428,
"text": "If you time both executions using %%timeit, you probably find that the list comprehension solution saves half of the time."
},
{
"code": null,
"e": 4731,
"s": 4551,
"text": "# for-loop solution298 ms ± 11.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)# list comprehension solution153 ms ± 6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)"
},
{
"code": null,
"e": 4836,
"s": 4731,
"text": "List comprehension saves time and codes. It is a simple way to generate a list comparing to using loops."
},
{
"code": null,
"e": 4856,
"s": 4836,
"text": "Thanks for reading."
},
{
"code": null,
"e": 4907,
"s": 4856,
"text": "Please check out the notebook for the source code."
},
{
"code": null,
"e": 4985,
"s": 4907,
"text": "Stay tuned if you are interested in the practical aspect of machine learning."
},
{
"code": null,
"e": 5029,
"s": 4985,
"text": "How to do a Custom Sort on Pandas DataFrame"
},
{
"code": null,
"e": 5069,
"s": 5029,
"text": "When to use Pandas transform() function"
},
{
"code": null,
"e": 5126,
"s": 5069,
"text": "Using Pandas method chaining to improve code readability"
},
{
"code": null,
"e": 5168,
"s": 5126,
"text": "Working with datetime in Pandas DataFrame"
},
{
"code": null,
"e": 5206,
"s": 5168,
"text": "Working with missing values in Pandas"
},
{
"code": null,
"e": 5247,
"s": 5206,
"text": "Pandas read_csv() tricks you should know"
},
{
"code": null,
"e": 5317,
"s": 5247,
"text": "4 tricks you should know to parse date columns with Pandas read_csv()"
}
]
|
Find LCM of two numbers | In mathematics, the Least Common Multiple (LCM) of two numbers is the smallest positive integer that is divisible by both of them.
The LCM can be calculated by many methods (factorization, using the GCD, etc.), but in this algorithm we multiply the bigger number by 1, 2, 3, ..., n until we find a multiple that is also divisible by the second number.
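As an aside, a common faster method computes the LCM from the GCD, using the identity lcm(a, b) = a * b / gcd(a, b). A short Python sketch for reference (not part of the C++ program below):

def gcd(a, b):
    # Euclid's algorithm
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    # Divide before multiplying to keep intermediate values small
    return a // gcd(a, b) * b

print(lcm(6, 9))   # 18

The rest of this article implements the repeated-multiplication approach in C++.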
Input:
Two numbers: 6 and 9
Output:
The LCM is: 18
LCMofTwo(a, b)
Input: Two numbers a and b, considered a > b.
Output: LCM of a and b.
Begin
lcm := a
i := 2
while lcm mod b ≠ 0, do
lcm := a * i
i := i + 1
done
return lcm
End
#include<iostream>
using namespace std;
int findLCM(int a, int b) { //assume a is greater than b
int lcm = a, i = 2;
while(lcm % b != 0) { //try to find number which is multiple of b
lcm = a*i;
i++;
}
return lcm; //the lcm of a and b
}
int lcmOfTwo(int a, int b) {
int lcm;
if(a>b) //to send as first argument is greater than second
lcm = findLCM(a,b);
else
lcm = findLCM(b,a);
return lcm;
}
int main() {
int a, b;
cout << "Enter Two numbers to find LCM: "; cin >> a >> b;
cout << "The LCM is: " << lcmOfTwo(a,b);
}
Enter Two numbers to find LCM: 6 9
The LCM is: 18 | [
{
"code": null,
"e": 1174,
"s": 1062,
"text": "In mathematics Least Common Multiple (LCM) is the smallest possible integer, that is divisible by both numbers."
},
{
"code": null,
"e": 1381,
"s": 1174,
"text": "LCM can be calculated by many methods, like factorization, etc. but in this algorithm, we have multiplied the bigger number with 1, 2, 3.... n until we find a number which is divisible by the second number."
},
{
"code": null,
"e": 1432,
"s": 1381,
"text": "Input:\nTwo numbers: 6 and 9\nOutput:\nThe LCM is: 18"
},
{
"code": null,
"e": 1447,
"s": 1432,
"text": "LCMofTwo(a, b)"
},
{
"code": null,
"e": 1493,
"s": 1447,
"text": "Input: Two numbers a and b, considered a > b."
},
{
"code": null,
"e": 1517,
"s": 1493,
"text": "Output: LCM of a and b."
},
{
"code": null,
"e": 1636,
"s": 1517,
"text": "Begin\n lcm := a\n i := 2\n while lcm mod b ≠ 0, do\n lcm := a * i\n i := i + 1\n done\n\n return lcm\nEnd"
},
{
"code": null,
"e": 2225,
"s": 1636,
"text": "#include<iostream>\nusing namespace std;\n\nint findLCM(int a, int b) { //assume a is greater than b\n int lcm = a, i = 2;\n\n while(lcm % b != 0) { //try to find number which is multiple of b\n lcm = a*i;\n i++;\n }\n return lcm; //the lcm of a and b\n}\n\nint lcmOfTwo(int a, int b) {\n int lcm;\n if(a>b) //to send as first argument is greater than second\n lcm = findLCM(a,b);\n else\n lcm = findLCM(b,a);\n return lcm;\n}\n\nint main() {\n int a, b;\n cout << \"Enter Two numbers to find LCM: \"; cin >> a >> b;\n cout << \"The LCM is: \" << lcmOfTwo(a,b);\n}"
},
{
"code": null,
"e": 2275,
"s": 2225,
"text": "Enter Two numbers to find LCM: 6 9\nThe LCM is: 18"
}
]
|
How to restart an Activity in Android? | This example demonstrates how to restart an Activity in Android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<TextView
android:id="@+id/textView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
android:textSize="24sp"
android:textStyle="bold"/>
<Button
android:id="@+id/button"
android:layout_below="@id/textView"
android:layout_marginTop="16dp"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerInParent="true"
android:text="Restart Activity"/>
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.java
import android.content.Intent;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;
import java.util.Random;
public class MainActivity extends AppCompatActivity {
TextView textView;
Button button;
Random random = new Random();
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
textView = findViewById(R.id.textView);
button = findViewById(R.id.button);
textView.setText("Random Number: " + random.nextInt(100));
button.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent intent = getIntent();
finish();
startActivity(intent);
}
});
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −
| [
{
"code": null,
"e": 1129,
"s": 1062,
"text": "This example demonstrates how do I restart an Activity in android."
},
{
"code": null,
"e": 1258,
"s": 1129,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1323,
"s": 1258,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 2148,
"s": 1323,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout\n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n tools:context=\".MainActivity\">\n <TextView\n android:id=\"@+id/textView\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_centerInParent=\"true\"\n android:textSize=\"24sp\"\n android:textStyle=\"bold\"/>\n <Button\n android:id=\"@+id/button\"\n android:layout_below=\"@id/textView\"\n android:layout_marginTop=\"16sp\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_centerInParent=\"true\"\n android:text=\"Restart Activity\"/>\n</RelativeLayout>"
},
{
"code": null,
"e": 2205,
"s": 2148,
"text": "Step 3 − Add the following code to src/MainActivity.java"
},
{
"code": null,
"e": 3106,
"s": 2205,
"text": "import android.content.Intent;\nimport android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.view.View;\nimport android.widget.Button;\nimport android.widget.TextView;\nimport java.util.Random;\npublic class MainActivity extends AppCompatActivity {\n TextView textView;\n Button button;\n Random random = new Random();\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n textView = findViewById(R.id.textView);\n button = findViewById(R.id.button);\n textView.setText(\"Random Number: \" + random.nextInt(100));\n button.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n Intent intent = getIntent();\n finish();\n startActivity(intent);\n }\n });\n }\n}"
},
{
"code": null,
"e": 3161,
"s": 3106,
"text": "Step 4 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 3831,
"s": 3161,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.sample\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 4178,
"s": 3831,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −"
},
{
"code": null,
"e": 4219,
"s": 4178,
"text": "Click here to download the project code."
}
]
|
Scheduling with ease: Cost optimization tutorial for Python | by Eric Stoltz | Towards Data Science | Congratulations! You’re the proud new owner of the coolest store in town. To keep the operation running, you need to ensure that you have the correct number of workers scheduled for each shift. In this tutorial, we’ll design the lowest cost schedule for the upcoming week.
For the coming week, each day has two shifts of 8 hours. You currently have ten employees, four of which are considered managers. For any shifts beyond 40 hours in a given week (5 total shifts), you pay your employees overtime. To be fair to your employees, you decide that everyone has to work at least 3 shifts, but no more than 7 shifts. And to ensure that the shop runs smoothly, each shift requires at least one manager.
Before diving into the code, let’s add structure to our task by defining our objective, variables, and constraints.
In simple words, we want to design the lowest cost schedule, accounting for both regular time and overtime. We can define this mathematically as:
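\[
\text{minimize} \sum_{w \in W} \left( RegCost_w \cdot RegShifts_w + OTCost_w \cdot OTShifts_w \right)
\]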
Where w is our list of workers, RegCost and OTCost are the dollar costs of a regular and overtime shift for each worker, respectively, and RegShifts and OTShifts are the total number of regular and overtime shifts for each worker, respectively.
We will create a list of variables for every worker/shift combination (e.g. [‘Employee1’,‘Monday1'], [‘Employee2’,‘Monday1’], etc.). Each of these variables will be a binary value to signify if a worker is scheduled (1) or not (0). We’ll also need to deal with the split of regular time and overtime, which we’ll handle as a hybrid of variable and constraint.
From the problem statement above, we know that there are a number of special considerations that we need to follow. To make sure that our optimized schedule is acceptable, we’ll create specific constraints:
Total number of workers staffed equals total number of workers required for each shift
Workers must stay between global minimum and maximum number of shifts
Workers can only be scheduled when they are available (handled in decision variable ‘x’)
At least one manager staffed per shift
Before diving into the optimization model, we need some (illustrative) data to work with. Since loading data into Python is outside the scope of this tutorial, we’ll move through this part quickly.
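A minimal sketch of such a setup is below; every name and value in it (shift labels, requirements, availability, costs) is an illustrative placeholder rather than the author's actual data:

from gurobipy import Model, GRB, quicksum

# Illustrative placeholders only
days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
shifts = [d + str(i) for d in days for i in (1, 2)]     # 14 shifts
workers = ['Employee' + str(i) for i in range(1, 11)]   # 10 employees

shiftRequirements = {s: 3 for s in shifts}              # workers needed per shift
avail = {(w, s): 1 for w in workers for s in shifts}    # 1 = available, 0 = not

managers = workers[:4]                                  # four of the ten are managers

regCost = {w: 80 for w in workers}                      # $ per regular shift
otCost = {w: 120 for w in workers}                      # $ per overtime shift

minShifts = 3                                           # global minimum shifts
maxShifts = 7                                           # global maximum shifts
OTTrigger = 5                                           # shifts before overtime starts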
Here’s a recap of what we now have:

A list of our 14 shifts (two shifts per day for one week) and our 10 employees

The number of workers needed for each shift

The availability of each worker for each shift

A list of who are managers and a list of who are not managers

The cost of a shift for each worker, both regular and overtime

A few global assumptions for min and max shifts and how many shifts are allowed before triggering overtime
Note: This tutorial uses a package called Gurobi. Gurobi is an optimization solver that is available for a number of programming languages. While the full version of Gurobi requires a commercial license, you can get an academic or online course license to run a limited version for free.
We first need to create the shell of our model. We do this with the following code:
model = Model("Workers Scheduling")
Let’s turn our structured variables into code:
First, we need to create binary variables for each worker/shift combination. We can do this with Gurobi’s addVars function (Note: if only adding one variable, use addVar instead). We specify that the variable is binary and we also read in the avail dictionary that we created before as a ub (“upper bound”). Each Gurobi variable has an upper and lower bound. Since we are using binary variables, naturally our variables must equal 0 or 1. By setting the upper bound equal to the values in avail , we are able to embed the constraint that certain worker/shift combinations must equal 0 (i.e. when that worker is unavailable).
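A sketch of that call; the name x (like the other names reused in the sketches below) is an assumption, not the author's original code:

x = model.addVars(workers, shifts, vtype=GRB.BINARY, ub=avail, name='x')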
Next, we have to create variables to handle the regular and overtime hours. As was mentioned before, we’ll handle this split as a combination of variable and constraint. For now, we simply create the variables for each worker without further specification. The one exception is that we set overtimeTrigger to be a binary variable (0 when there is no overtime for a given worker this week and 1 when there is overtime).
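A sketch with hypothetical names:

# Per-worker counters for regular and overtime shifts,
# plus the binary overtime trigger
regShifts = model.addVars(workers, name='regShifts')
otShifts = model.addVars(workers, name='otShifts')
otFlag = model.addVars(workers, vtype=GRB.BINARY, name='otFlag')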
Similarly, let’s turn each constraint outlined above into code, using the addConstrs (adding multiple constraints at a time) and addConstr (adding one constraint at a time) functions.
First, we specify that the sum of assigned workers (1 for each scheduled worker, 0 for each non-scheduled worker) for each shift equals the total shift requirement:
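A sketch of this constraint:

# Staffing on each shift must equal that shift's requirement
model.addConstrs(
    (x.sum('*', s) == shiftRequirements[s] for s in shifts),
    name='shiftRequirement')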
Next, we deal with the split between regular time and overtime. To capture this correctly, we take a conservative approach. First, we specify that the number of regular shifts plus the number of overtime shifts is equal to the total number of shifts for each worker. Then, we ensure that the number of regular shifts is less than or equal to the number of shifts specified as our overtime trigger. We do this to ensure that regular shifts are accounted for before overtime shifts. To double down on this, we add the final constraint that says that if the number of regular shifts for a worker is less than 5 (OTTrigger), then the binary trigger for overtime is set to 0.
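One possible encoding of these rules (a sketch, not necessarily the author's exact formulation):

# Regular plus overtime shifts account for every assigned shift
model.addConstrs(
    (regShifts[w] + otShifts[w] == x.sum(w, '*') for w in workers),
    name='totalShifts')

# Regular shifts are capped at the overtime trigger (5)
model.addConstrs(
    (regShifts[w] <= OTTrigger for w in workers), name='regCap')

# Overtime is only possible when the trigger is on...
model.addConstrs(
    (otShifts[w] <= maxShifts * otFlag[w] for w in workers), name='otLink')

# ...and the trigger may only be on once regular time is exhausted
model.addConstrs(
    ((otFlag[w] == 1) >> (regShifts[w] == OTTrigger) for w in workers),
    name='otTrigger')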
With this in place, we can finish our final constraints. Similar to above, we calculate the total number of assigned shifts for each worker. We specify that this must be greater than or equal to the global input for minimum number of shifts and less than or equal to the global maximum number of shifts. Finally, we handle the requirement for each shift needs at least one manager to be staffed.
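A sketch of these constraints:

# Every worker stays within the global shift bounds
model.addConstrs(
    (x.sum(w, '*') >= minShifts for w in workers), name='minShifts')
model.addConstrs(
    (x.sum(w, '*') <= maxShifts for w in workers), name='maxShifts')

# At least one manager on every shift
model.addConstrs(
    (quicksum(x[m, s] for m in managers) >= 1 for s in shifts),
    name='managerOnDuty')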
Our objective is to minimize the total cost of the workers scheduled. We can handle this quite simply by defining a cost function that sums the total number of regular shifts times the cost of a regular shift for each worker and the total number of overtime shifts times the cost of an overtime shift for each worker. We tell Gurobi that the goal is to minimize this using ModelSense. Finally, we use setObjective to specify that Cost is the objective function.
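A sketch of the objective:

# Total cost of the schedule: regular-rate plus overtime-rate shifts
Cost = quicksum(regCost[w] * regShifts[w] + otCost[w] * otShifts[w]
                for w in workers)
model.ModelSense = GRB.MINIMIZE
model.setObjective(Cost)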
Before running the optimization, it can be helpful to inspect the model. A great way to do this is:
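Two common options (shown as a sketch; either displays the model as formulas):

model.update()                       # make pending changes visible
model.write('WorkersScheduling.lp')  # human-readable formulas on disk
model.display()                      # or print them straight to the console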
With this code, you’ll be able to see the objective function, variables, constraints, etc. listed as formulas, which can be particularly helpful to ensure that the code is producing the functions that you intended.
After you are happy with the model, we can solve the optimization with a simple line:
model.optimize()
The optimize function produces an output that is fairly helpful, but doesn’t give us a ton to work with. In the following steps, we’ll extract more meaningful information from the model.
First, we want to know the total cost of the proposed schedule. We find out that the cost is $7535 by running the following:
print('Total cost = $' + str(model.ObjVal))
Now, let’s see a dashboard of the schedule using the following:
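One way to sketch such a dashboard with pandas, reusing the hypothetical names from above (the attribute .X holds each variable's solved value):

import pandas as pd

# 1 where a worker is scheduled for a shift, 0 otherwise
dashboard = pd.DataFrame(
    [[int(round(x[w, s].X)) for s in shifts] for w in workers],
    index=workers, columns=shifts)
print(dashboard)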
Finally, let’s create an alternative view of the dashboard by simply printing out the names of each employee assigned to each shift:
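A sketch:

for s in shifts:
    assigned = [w for w in workers if x[w, s].X > 0.5]
    print(s + ': ' + ', '.join(assigned))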
Through this tutorial, we produced an end-to-end solution to an optimization problem using Python. If this piqued your interest, play around with an example of your own. Try handling continuous decision variables, multi-objective problems, quadratic optimization, infeasible models; the possibilities are endless. And if you’re curious about creating your own optimization algorithms, check out my tutorial on building a genetic algorithm using Python!
You can find a consolidated notebook here. | [
{
"code": null,
"e": 444,
"s": 171,
"text": "Congratulations! You’re the proud new owner of the coolest store in town. To keep the operation running, you need to ensure that you have the correct number of workers scheduled for each shift. In this tutorial, we’ll design the lowest cost schedule for the upcoming week."
},
{
"code": null,
"e": 870,
"s": 444,
"text": "For the coming week, each day has two shifts of 8 hours. You currently have ten employees, four of which are considered managers. For any shifts beyond 40 hours in a given week (5 total shifts), you pay your employees overtime. To be fair to your employees, you decide that everyone has to work at least 3 shifts, but no more than 7 shifts. And to ensure that the shop runs smoothly, each shift requires at least one manager."
},
{
"code": null,
"e": 986,
"s": 870,
"text": "Before diving into the code, let’s add structure to our task by defining our objective, variables, and constraints."
},
{
"code": null,
"e": 1132,
"s": 986,
"text": "In simple words, we want to design the lowest cost schedule, accounting for both regular time and overtime. We can define this mathematically as:"
},
{
"code": null,
"e": 1377,
"s": 1132,
"text": "Where w is our list of workers, RegCost and OTCost are the dollar costs of a regular and overtime shift for each worker, respectively, and RegShifts and OTShifts are the total number of regular and overtime shifts for each worker, respectively."
},
{
"code": null,
"e": 1737,
"s": 1377,
"text": "We will create a list of variables for every worker/shift combination (e.g. [‘Employee1’,‘Monday1'], [‘Employee2’,‘Monday1’], etc.). Each of these variables will be a binary value to signify if a worker is scheduled (1) or not (0). We’ll also need to deal with the split of regular time and overtime, which we’ll handle as a hybrid of variable and constraint."
},
{
"code": null,
"e": 1944,
"s": 1737,
"text": "From the problem statement above, we know that there are a number of special considerations that we need to follow. To make sure that our optimized schedule is acceptable, we’ll create specific constraints:"
},
{
"code": null,
"e": 2031,
"s": 1944,
"text": "Total number of workers staffed equals total number of workers required for each shift"
},
{
"code": null,
"e": 2101,
"s": 2031,
"text": "Workers must stay between global minimum and maximum number of shifts"
},
{
"code": null,
"e": 2190,
"s": 2101,
"text": "Workers can only be scheduled when they are available (handled in decision variable ‘x’)"
},
{
"code": null,
"e": 2229,
"s": 2190,
"text": "At least one manager staffed per shift"
},
{
"code": null,
"e": 2430,
"s": 2229,
"text": "Before diving into the the optimization model, we need some (illustrative) data to work with. Since loading data into Python is out of the scope of this tutorial, we’ll move through this part quickly."
},
{
"code": null,
"e": 2466,
"s": 2430,
"text": "Here’s a recap of what we now have:"
},
{
"code": null,
"e": 2557,
"s": 2466,
"text": "A list of our 14 shifts (two shifts per day for one week) and our 10 employees (lines 7–9)"
},
{
"code": null,
"e": 2615,
"s": 2557,
"text": "The number of workers needed for each shift (lines 12–13)"
},
{
"code": null,
"e": 2676,
"s": 2615,
"text": "The availability of each worker for each shift (lines 17–23)"
},
{
"code": null,
"e": 2752,
"s": 2676,
"text": "A list of who are managers and a list of who are not managers (lines 26–27)"
},
{
"code": null,
"e": 2829,
"s": 2752,
"text": "The cost of a shift for each worker, both regular and overtime (lines 31–36)"
},
{
"code": null,
"e": 2950,
"s": 2829,
"text": "A few global assumptions for min and max shifts and how many shifts are allowed before triggering overtime (lines 40–43)"
},
{
"code": null,
"e": 3266,
"s": 2950,
"text": "Note: As can be seen in the above code, we are using a package called Gurobi. Gurobi is an optimization solver that is available for a number of programming languages. While the full version of Gurobi requires a commercial license, you can get an academic or online course license to run a limited version for free."
},
{
"code": null,
"e": 3350,
"s": 3266,
"text": "We first need to create the shell of our model. We do this with the following code:"
},
{
"code": null,
"e": 3386,
"s": 3350,
"text": "model = Model(“Workers Scheduling”)"
},
{
"code": null,
"e": 3433,
"s": 3386,
"text": "Let’s turn our structured variables into code:"
},
{
"code": null,
"e": 4058,
"s": 3433,
"text": "First, we need to create binary variables for each worker/shift combination. We can do this with Gurobi’s addVars function (Note: if only adding one variable, use addVar instead). We specify that the variable is binary and we also read in the avail dictionary that we created before as a ub (“upper bound”). Each Gurobi variable has an upper and lower bound. Since we are using binary variables, naturally our variables must equal 0 or 1. By setting the upper bound equal to the values in avail , we are able to embed the constraint that certain worker/shift combinations must equal 0 (i.e. when that worker is unavailable)."
},
{
"code": null,
"e": 4477,
"s": 4058,
"text": "Next, we have to create variables to handle the regular and overtime hours. As was mentioned before, we’ll handle this split as a combination of variable and constraint. For now, we simply create the variables for each worker without further specification. The one exception is that we set overtimeTrigger to be a binary variable (0 when there is no overtime for a given worker this week and 1 when there is overtime)."
},
{
"code": null,
"e": 4661,
"s": 4477,
"text": "Similarly, let’s turn each constraint outlined above into code, using the addConstrs (adding multiple constraints at a time) and addConstr (adding one constraint at a time) functions."
},
{
"code": null,
"e": 4826,
"s": 4661,
"text": "First, we specify that the sum of assigned workers (1 for each scheduled worker, 0 for each non-scheduled worker) for each shift equals the total shift requirement:"
},
{
"code": null,
"e": 5497,
"s": 4826,
"text": "Next, we deal with the split between regular time and overtime. To capture this correctly, we take a conservative approach. First, we specify that the number of regular shifts plus the number of overtime shifts is equal to the total number of shifts for each worker. Then, we ensure that the number of regular shifts is less than or equal to the number of shifts specified as our overtime trigger. We do this to ensure that regular shifts are accounted for before overtime shifts. To double down on this, we add the final constraint that says that if the number of regular shifts for a worker is less than 5 (OTTrigger), then the binary trigger for overtime is set to 0."
},
{
"code": null,
"e": 5893,
"s": 5497,
"text": "With this in place, we can finish our final constraints. Similar to above, we calculate the total number of assigned shifts for each worker. We specify that this must be greater than or equal to the global input for minimum number of shifts and less than or equal to the global maximum number of shifts. Finally, we handle the requirement for each shift needs at least one manager to be staffed."
},
{
"code": null,
"e": 6355,
"s": 5893,
"text": "Our objective is to minimize the total cost of the workers scheduled. We can handle this quite simply by defining a cost function that sums the total number of regular shifts times the cost of a regular shift for each worker and the total number of overtime shifts times the cost of an overtime shift for each worker. We tell Gurobi that the goal is to minimize this using ModelSense. Finally, we use setObjective to specify that Cost is the objective function."
},
{
"code": null,
"e": 6455,
"s": 6355,
"text": "Before running the optimization, it can be helpful to inspect the model. A great way to do this is:"
},
{
"code": null,
"e": 6670,
"s": 6455,
"text": "With this code, you’ll be able to see the objective function, variables, constraints, etc. listed as formulas, which can be particularly helpful to ensure that the code is producing the functions that you intended."
},
{
"code": null,
"e": 6756,
"s": 6670,
"text": "After you are happy with the model, we can solve the optimization with a simple line:"
},
{
"code": null,
"e": 6773,
"s": 6756,
"text": "model.optimize()"
},
{
"code": null,
"e": 6960,
"s": 6773,
"text": "The optimize function produces an output that is fairly helpful, but doesn’t give us a ton to work with. In the following steps, we’ll extract more meaningful information from the model."
},
{
"code": null,
"e": 7085,
"s": 6960,
"text": "First, we want to know the total cost of the proposed schedule. We find out that the cost is $7535 by running the following:"
},
{
"code": null,
"e": 7129,
"s": 7085,
"text": "print('Total cost = $' + str(model.ObjVal))"
},
{
"code": null,
"e": 7193,
"s": 7129,
"text": "Now, let’s see a dashboard of the schedule using the following:"
},
{
"code": null,
"e": 7326,
"s": 7193,
"text": "Finally, let’s create an alternative view of the dashboard by simply printing out the names of each employee assigned to each shift:"
},
{
"code": null,
"e": 7779,
"s": 7326,
"text": "Through this tutorial, we produced an end-to-end solution to an optimization problem using Python. If this piqued your interest, play around with an example of your own. Try handling continuous decision variables, multi-objective problems, quadratic optimization, infeasible models- the possibilities are endless. And if you’re curious about creating your own optimization algorithms, check out my tutorial on building a genetic algorithm using Python!"
}
]
|
C library function - realloc() | The C library function void *realloc(void *ptr, size_t size) attempts to resize the memory block pointed to by ptr that was previously allocated with a call to malloc or calloc.
Following is the declaration for realloc() function.
void *realloc(void *ptr, size_t size)
ptr − This is the pointer to a memory block previously allocated with malloc, calloc or realloc to be reallocated. If this is NULL, a new block is allocated and a pointer to it is returned by the function.
size − This is the new size for the memory block, in bytes. If it is 0 and ptr points to an existing block of memory, the memory block pointed by ptr is deallocated and a NULL pointer is returned.
This function returns a pointer to the newly allocated memory, or NULL if the request fails.
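Since realloc can fail and return NULL, assigning its result directly to the only pointer to a block risks leaking that block. Below is a minimal sketch of the defensive pattern; the variable names and sizes are illustrative, not part of the library.
#include <stdio.h>
#include <stdlib.h>

int main () {
   int *data = (int *) malloc(5 * sizeof(int));
   if (data == NULL) return 1;

   /* Keep the old pointer until realloc succeeds */
   int *tmp = (int *) realloc(data, 10 * sizeof(int));
   if (tmp == NULL) {
      /* data is still valid here and must still be freed */
      free(data);
      return 1;
   }
   data = tmp;

   free(data);
   return 0;
}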
The following example shows the usage of realloc() function.
#include <stdio.h>
#include <stdlib.h>
#include <string.h> /* for strcpy and strcat */
int main () {
char *str;
/* Initial memory allocation */
str = (char *) malloc(15);
strcpy(str, "tutorialspoint");
printf("String = %s, Address = %u\n", str, str);
/* Reallocating memory */
str = (char *) realloc(str, 25);
strcat(str, ".com");
printf("String = %s, Address = %u\n", str, str);
free(str);
return(0);
}
Let us compile and run the above program that will produce the following result −
String = tutorialspoint, Address = 355090448
String = tutorialspoint.com, Address = 355090448
[
{
"code": null,
"e": 2185,
"s": 2007,
"text": "The C library function void *realloc(void *ptr, size_t size) attempts to resize the memory block pointed to by ptr that was previously allocated with a call to malloc or calloc."
},
{
"code": null,
"e": 2238,
"s": 2185,
"text": "Following is the declaration for realloc() function."
},
{
"code": null,
"e": 2276,
"s": 2238,
"text": "void *realloc(void *ptr, size_t size)"
},
{
"code": null,
"e": 2482,
"s": 2276,
"text": "ptr − This is the pointer to a memory block previously allocated with malloc, calloc or realloc to be reallocated. If this is NULL, a new block is allocated and a pointer to it is returned by the function."
},
{
"code": null,
"e": 2688,
"s": 2482,
"text": "ptr − This is the pointer to a memory block previously allocated with malloc, calloc or realloc to be reallocated. If this is NULL, a new block is allocated and a pointer to it is returned by the function."
},
{
"code": null,
"e": 2885,
"s": 2688,
"text": "size − This is the new size for the memory block, in bytes. If it is 0 and ptr points to an existing block of memory, the memory block pointed by ptr is deallocated and a NULL pointer is returned."
},
{
"code": null,
"e": 3082,
"s": 2885,
"text": "size − This is the new size for the memory block, in bytes. If it is 0 and ptr points to an existing block of memory, the memory block pointed by ptr is deallocated and a NULL pointer is returned."
},
{
"code": null,
"e": 3175,
"s": 3082,
"text": "This function returns a pointer to the newly allocated memory, or NULL if the request fails."
},
{
"code": null,
"e": 3236,
"s": 3175,
"text": "The following example shows the usage of realloc() function."
},
{
"code": null,
"e": 3635,
"s": 3236,
"text": "#include <stdio.h>\n#include <stdlib.h>\n\nint main () {\n char *str;\n\n /* Initial memory allocation */\n str = (char *) malloc(15);\n strcpy(str, \"tutorialspoint\");\n printf(\"String = %s, Address = %u\\n\", str, str);\n\n /* Reallocating memory */\n str = (char *) realloc(str, 25);\n strcat(str, \".com\");\n printf(\"String = %s, Address = %u\\n\", str, str);\n\n free(str);\n \n return(0);\n}"
},
{
"code": null,
"e": 3717,
"s": 3635,
"text": "Let us compile and run the above program that will produce the following result −"
},
{
"code": null,
"e": 3812,
"s": 3717,
"text": "String = tutorialspoint, Address = 355090448\nString = tutorialspoint.com, Address = 355090448\n"
}
]
|
Behave - Reports | Report generation is one of the most important steps in a test automation framework. At the end of the execution, we cannot rely on the console output alone; rather, we should have a detailed report.
It should have information on the count of tests that passed, failed, or were skipped, along with a feature and scenario breakdown. Behave does not produce an in-built report, but it can output in multiple formats, and we can utilize third-party tools to generate a report.
All the available formatters in Behave are displayed with the command −
behave --format help
When you use the command, the following screen will appear on your computer −
Some of the common Behave reports are −
Allure Report
Output JSON Report
JUnit Report
Let us execute a test having two feature files with the below test results −
Project folder structure for the above test will be as follows −
Step 1 − Execute the command
To create a JUnit report, run the command given below −
behave --junit
Step 2 − Report folder generation
A folder called reports gets generated within the project, containing files with the name TESTS-<feature file name>.xml.
Here, Payment and Payment1 are the names of the feature files.
Step 3 − Report generation to a specific folder
To generate the reports to a specific folder, say my_reports, we have to run the below-mentioned command −
behave --junit --junit-directory my_reports
A folder called my_reports gets generated within the project, which contains the reports.
We can create the Behave JSON report. The JSON is actually a formatter.
Let us execute a test having two feature files with the below test results −
Project folder structure for the above test is as follows −
Step 1 − Execute the command
To create a JSON output in console, run the command −
behave -f json
The following screen will appear −
Step 2 − Output in readable format
To create a JSON output in a more readable format, run the following command −
behave -f json.pretty
Some portion of the output captured in the below image −
Step 3 − Report generation to a specific folder
To generate the reports to a specific folder, say my_reports.json, we have to run the following command −
behave -f json.pretty -o my_reports.json
The following image represents the screen that will appear on your computer.
A folder called my_reports.json gets generated within the project, having details of all the features which were executed.
To generate Allure reports in Behave, first we have to install Allure in the system. For installation from the command line in Linux, run the following commands one after the other −
sudo apt-add-repository ppa:qameta/allure
sudo apt-get update
sudo apt-get install allure
For Mac users, installation is done with the Homebrew with the following command −
brew install allure
For Windows, Allure is installed from the Scoop installer. Run the below command to download and install Scoop and finally, execute it in the PowerShell −
scoop install allure
To update Allure distribution installations from Scoop, run the below command from the installation directory of Scoop −
\bin\checkver.ps1 allure -u
Finally, run the command given below −
scoop update allure
After Allure has been installed, we have to get the Allure-Behave integration plugin for Python. For this, run the following command −
pip install allure-behave
To verify if Allure has been installed successfully, run the command stated below −
allure
Let us execute a test having two feature files with the below test results −
Project folder structure for the above test is as follows −
Step 1 − Report generation to a specific folder
To generate the reports to a specific folder, say my_allure, we have to run the following command −
behave -f allure_behave.formatter:AllureFormatter -o my_allure
You will get the screen as shown below −
A folder called my_allure gets generated within the project, having files with the .json extension.
Step 2 − Start the web server
To start the web server, run the command given below −
allure serve my_allure
Here, my_allure is the directory which contains the allure JSON files.
Simultaneously, a browser gets opened, with the Allure report as shown below −
We can also click on individual features and find their breakdowns, as shown below −
[
{
"code": null,
"e": 2495,
"s": 2296,
"text": "Report generation is one of the most important steps towards the test automation framework. At the end of the execution, we cannot rely on the console output rather we should have a detailed report."
},
{
"code": null,
"e": 2754,
"s": 2495,
"text": "It should have the information on the count of tests that passed, failed, skipped, feature and scenario breakdown. Behave does not produce an in-built report but it can output in multiple formats and we can utilize the third-party tools to generate a report."
},
{
"code": null,
"e": 2826,
"s": 2754,
"text": "All the available formatters in Behave are displayed with the command −"
},
{
"code": null,
"e": 2848,
"s": 2826,
"text": "behave --format help\n"
},
{
"code": null,
"e": 2926,
"s": 2848,
"text": "When you use the command, the following screen will appear on your computer −"
},
{
"code": null,
"e": 2966,
"s": 2926,
"text": "Some of the common Behave reports are −"
},
{
"code": null,
"e": 2981,
"s": 2966,
"text": "Allure Report."
},
{
"code": null,
"e": 2996,
"s": 2981,
"text": "Allure Report."
},
{
"code": null,
"e": 3016,
"s": 2996,
"text": "Output JSON Report."
},
{
"code": null,
"e": 3036,
"s": 3016,
"text": "Output JSON Report."
},
{
"code": null,
"e": 3049,
"s": 3036,
"text": "JUnit Report"
},
{
"code": null,
"e": 3062,
"s": 3049,
"text": "JUnit Report"
},
{
"code": null,
"e": 3139,
"s": 3062,
"text": "Let us execute a test having two feature files with the below test results −"
},
{
"code": null,
"e": 3204,
"s": 3139,
"text": "Project folder structure for the above test will be as follows −"
},
{
"code": null,
"e": 3233,
"s": 3204,
"text": "Step 1 − Execute the command"
},
{
"code": null,
"e": 3289,
"s": 3233,
"text": "To create a JUnit report, run the command given below −"
},
{
"code": null,
"e": 3306,
"s": 3289,
"text": "behave --junit \n"
},
{
"code": null,
"e": 3340,
"s": 3306,
"text": "Step 2 − Report folder generation"
},
{
"code": null,
"e": 3453,
"s": 3340,
"text": "A folder called as the reports gets generated within the project, having the name TESTS-<feature file name>.xml."
},
{
"code": null,
"e": 3516,
"s": 3453,
"text": "Here, Payment and Payment1 are the names of the feature files."
},
{
"code": null,
"e": 3564,
"s": 3516,
"text": "Step 3 − Report generation to a specific folder"
},
{
"code": null,
"e": 3671,
"s": 3564,
"text": "To generate the reports to a specific folder, say my_reports. We have to run the below mentioned command −"
},
{
"code": null,
"e": 3716,
"s": 3671,
"text": "behave --junit --junit-directory my_reports\n"
},
{
"code": null,
"e": 3809,
"s": 3716,
"text": "A folder called the my_reports gets generated within the project which contains the reports."
},
{
"code": null,
"e": 3881,
"s": 3809,
"text": "We can create the Behave JSON report. The JSON is actually a formatter."
},
{
"code": null,
"e": 3958,
"s": 3881,
"text": "Let us execute a test having two feature files with the below test results −"
},
{
"code": null,
"e": 4018,
"s": 3958,
"text": "Project folder structure for the above test is as follows −"
},
{
"code": null,
"e": 4047,
"s": 4018,
"text": "Step 1 − Execute the command"
},
{
"code": null,
"e": 4101,
"s": 4047,
"text": "To create a JSON output in console, run the command −"
},
{
"code": null,
"e": 4117,
"s": 4101,
"text": "behave -f json\n"
},
{
"code": null,
"e": 4152,
"s": 4117,
"text": "The following screen will appear −"
},
{
"code": null,
"e": 4187,
"s": 4152,
"text": "Step 2 − Output in readable format"
},
{
"code": null,
"e": 4266,
"s": 4187,
"text": "To create a JSON output in a more readable format, run the following command −"
},
{
"code": null,
"e": 4289,
"s": 4266,
"text": "behave -f json.pretty\n"
},
{
"code": null,
"e": 4346,
"s": 4289,
"text": "Some portion of the output captured in the below image −"
},
{
"code": null,
"e": 4394,
"s": 4346,
"text": "Step 3 − Report generation to a specific folder"
},
{
"code": null,
"e": 4500,
"s": 4394,
"text": "To generate the reports to a specific folder say, my_reports.json, we have to run the following command −"
},
{
"code": null,
"e": 4542,
"s": 4500,
"text": "behave –f json.pretty –o my_reports.json\n"
},
{
"code": null,
"e": 4619,
"s": 4542,
"text": "The following image represents the screen that will appear on your computer."
},
{
"code": null,
"e": 4745,
"s": 4619,
"text": "A folder called the my_reports.json gets generated within the project, having details of all the features which are executed."
},
{
"code": null,
"e": 4928,
"s": 4745,
"text": "To generate Allure reports in Behave, first we have to install Allure in the system. For installation from the command line in Linux, run the following commands one after the other −"
},
{
"code": null,
"e": 5019,
"s": 4928,
"text": "sudo apt-add-repository ppa:qameta/allure\nsudo apt-get update\nsudo apt-get install allure\n"
},
{
"code": null,
"e": 5102,
"s": 5019,
"text": "For Mac users, installation is done with the Homebrew with the following command −"
},
{
"code": null,
"e": 5123,
"s": 5102,
"text": "brew install allure\n"
},
{
"code": null,
"e": 5278,
"s": 5123,
"text": "For Windows, Allure is installed from the Scoop installer. Run the below command to download and install Scoop and finally, execute it in the PowerShell −"
},
{
"code": null,
"e": 5300,
"s": 5278,
"text": "scoop install allure\n"
},
{
"code": null,
"e": 5421,
"s": 5300,
"text": "To update Allure distribution installations from Scoop, run the below command from the installation directory of Scoop −"
},
{
"code": null,
"e": 5450,
"s": 5421,
"text": "\\bin\\checkver.ps1 allure -u\n"
},
{
"code": null,
"e": 5489,
"s": 5450,
"text": "Finally, run the command given below −"
},
{
"code": null,
"e": 5510,
"s": 5489,
"text": "scoop update allure\n"
},
{
"code": null,
"e": 5645,
"s": 5510,
"text": "After Allure has been installed, we have to get the Allure-Behave integration plugin for Python. For this, run the following command −"
},
{
"code": null,
"e": 5672,
"s": 5645,
"text": "pip install allure-behave\n"
},
{
"code": null,
"e": 5756,
"s": 5672,
"text": "To verify if Allure has been installed successfully, run the command stated below −"
},
{
"code": null,
"e": 5764,
"s": 5756,
"text": "allure\n"
},
{
"code": null,
"e": 5841,
"s": 5764,
"text": "Let us execute a test having two feature files with the below test results −"
},
{
"code": null,
"e": 5901,
"s": 5841,
"text": "Project folder structure for the above test is as follows −"
},
{
"code": null,
"e": 5949,
"s": 5901,
"text": "Step 1 − Report generation to a specific folder"
},
{
"code": null,
"e": 6049,
"s": 5949,
"text": "To generate the reports to a specific folder, say my_allure, we have to run the following command −"
},
{
"code": null,
"e": 6113,
"s": 6049,
"text": "behave -f allure_behave.formatter:AllureFormatter –o my_allure\n"
},
{
"code": null,
"e": 6154,
"s": 6113,
"text": "You will get the screen as shown below −"
},
{
"code": null,
"e": 6254,
"s": 6154,
"text": "A folder called the my_allure gets generated within the project, having files with .json extension."
},
{
"code": null,
"e": 6284,
"s": 6254,
"text": "Step 2 − Start the web server"
},
{
"code": null,
"e": 6339,
"s": 6284,
"text": "To start the web server, run the command given below −"
},
{
"code": null,
"e": 6363,
"s": 6339,
"text": "allure serve my_allure\n"
},
{
"code": null,
"e": 6438,
"s": 6363,
"text": "Here, the my_allure is the directory which contains the allure json files."
},
{
"code": null,
"e": 6517,
"s": 6438,
"text": "Simultaneously, a browser gets opened, with the Allure report as shown below −"
},
{
"code": null,
"e": 6602,
"s": 6517,
"text": "We can also click on individual features and find their breakdowns, as shown below −"
}
]
|
Important functions of STL Components in C++ - GeeksforGeeks | 04 Feb, 2022
STL provides a range of data structures that are very useful in various scenarios. A lot of data structures are based on real-life applications. It is a library of container classes, algorithms, and iterators. It is a generalized library and so, its components are parameterized.
Vector
Stack
Queue
Priority queue
Set
List
Ordered Maps
Unordered Maps
Containers or container classes store objects and data. There are in total seven standard “first-class” container classes, three container adaptor classes, and only seven header files that provide access to these containers or container adaptors.
Note: We can include just one library, i.e., #include <bits/stdc++.h>, that includes all the STL libraries, but in certain competitions, including this library can make the code slow. To overcome this problem, we can add specific libraries to access particular data structures of STL. Also, while removing elements, take care to check whether the data structure is empty; calling a remove function on an empty data structure leads to an error. Below are some of these data structures with their illustrations.
Vector: The major problem while using arrays was that we had to specify size. This drawback was overcome by vectors. Vectors internally work as dynamically allocated arrays, which is the main reason as to how we can add elements without specifying the size of the vector. When the size of the vector becomes equal to capacity, the capacity of vector increases and thus we can add further elements.
Header file:
#include <vector>
Syntax:
vector<data type> variable_name;
Most common functions for vector:
push_back(): Used to push the element at the end of the vector. For faster method, use emplace_back().
pop_back(): Used to remove the last element from the vector.
size(): Returns the size of the vector.
clear(): Deletes all the content of the vector.
erase(): Deletes the specified index or data.
empty(): Returns boolean value True if vector is empty, else returns False.
Iterator lower_bound(Iterator first, Iterator last, const val): lower_bound returns an iterator pointing to the first element in the range [first, last) which has a value not less than ‘val’.
Iterator upper_bound(Iterator first, Iterator last, const val): upper_bound returns an iterator pointing to the first element in the range [first, last) which has a value greater than ‘val’.
C++
// C++ program to illustrate the
// function of vector in C++
#include <iostream>

// Header file for vector if
// <bits/stdc++.h> not included
#include <vector>
using namespace std;

// Function to print the vector
void print(vector<int> vec)
{
    // vec.size() gives the size
    // of the vector
    for (int i = 0; i < vec.size(); i++) {
        cout << vec[i] << " ";
    }
    cout << endl;
}

// Driver Code
int main()
{
    // Defining a vector
    vector<int> vec;

    // Put all natural numbers
    // from 1 to 10 in vector
    for (int i = 1; i <= 10; i++) {
        vec.push_back(i);
    }

    cout << "Initial vector: ";

    // Print the vector
    print(vec);

    // Size of vector
    cout << "Vector size: " << vec.size() << "\n";

    // Check if vector is empty
    if (vec.empty() == false)
        cout << "Is vector empty: False\n";

    // Popping out 10 from the vector
    vec.pop_back();
    cout << "Vector after popping: ";
    print(vec);

    // Deleting the first element
    // from the vector using erase()
    vec.erase(vec.begin());
    cout << "Vector after erase first element: ";
    print(vec);

    // Clear the vector
    vec.clear();
    cout << "Vector after clearing: None ";
    print(vec);

    // Check if vector is empty
    if (vec.empty() == true)
        cout << "Is vector empty: True\n";
}
Initial vector: 1 2 3 4 5 6 7 8 9 10 
Vector size: 10
Is vector empty: False
Vector after popping: 1 2 3 4 5 6 7 8 9 
Vector after erase first element: 2 3 4 5 6 7 8 9 
Vector after clearing: None 
Is vector empty: True
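The example above does not exercise lower_bound and upper_bound. Here is a minimal sketch of both on a sorted vector; the values are illustrative.
C++
// C++ sketch of lower_bound / upper_bound on a sorted vector
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main()
{
    vector<int> v = { 10, 20, 20, 30, 40 };

    // First element not less than 20 -> index 1
    auto lo = lower_bound(v.begin(), v.end(), 20);

    // First element greater than 20 -> index 3
    auto hi = upper_bound(v.begin(), v.end(), 20);

    cout << "lower_bound index: " << (lo - v.begin()) << "\n"; // 1
    cout << "upper_bound index: " << (hi - v.begin()) << "\n"; // 3

    // Number of occurrences of 20
    cout << "Count of 20: " << (hi - lo) << "\n"; // 2
    return 0;
}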
2. Stack: It is a Last In First Out (LIFO) data structure. It can be implemented using arrays, linked lists, and vectors. Some problems like reversing an element or string, parenthesis check, printing the next greater element, postfix expression evaluation, etc., can be solved using the stack class; rather than writing all the functions ourselves, we can use its inbuilt functions. Header file:
#include <stack>
Syntax:
stack<data_type> variable_name;
Most common functions for stack:
push(): Used to push the element at top of the stack.
pop(): Deletes the top element of the stack but do not returns it.
top(): Returns the top element of the stack.
empty(): Return boolean value, ie, True if stack is empty, else returns false.
size(): Returns the size of the stack.
C++
// C++ program to illustrate the
// function of stack in C++
#include <iostream>

// Header file for stack
#include <stack>
using namespace std;

// Function to print the stack
void print(stack<char> s)
{
    // Loop runs till stack
    // becomes empty
    while (s.empty() == false) {
        // Prints the top element
        cout << s.top() << " ";

        // Now pops the same top element
        s.pop();
    }
    cout << "\n";
}

// Driver Code
int main()
{
    // Given char array
    char array[] = { 'G', 'E', 'E', 'K', 'S', 'F', 'O',
                     'R', 'G', 'E', 'E', 'K', 'S' };

    // Defining stack
    stack<char> s;

    // Check if stack is empty
    if (s.empty() == true) {
        cout << "Stack is currently Empty" << "\n";
    }
    else {
        cout << "Stack is not empty" << "\n";
    }

    // Push elements in stack
    for (int i = sizeof(array) / sizeof(array[0]) - 1;
         i >= 0; i--) {
        s.push(array[i]);
    }

    // Size of stack
    cout << "Size of stack: " << s.size() << "\n";

    // Content of stack
    cout << "Stack initially: ";
    print(s);

    // Returning the top
    // element of the stack
    cout << "Top element: " << s.top() << "\n";

    // Popping the top
    // element in stack
    s.pop();
    cout << "Stack after 1 pop operation: ";
    print(s);

    // Now checking the top element
    cout << "Top element after popping: " << s.top() << "\n";

    // Size of stack
    cout << "Size of stack after popping: " << s.size() << "\n";

    // Again checking if the
    // stack is empty
    if (s.empty() == true) {
        cout << "Stack is currently Empty" << "\n";
    }
    else {
        cout << "Stack is not empty" << "\n";
    }
    return 0;
}
Stack is currently Empty
Size of stack: 13
Stack initially: G E E K S F O R G E E K S 
Top element: G
Stack after 1 pop operation: E E K S F O R G E E K S 
Top element after popping: E
Size of stack after popping: 12
Stack is not empty
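One of the applications mentioned above is the parenthesis check. Here is a minimal sketch of it using the same stack functions; the expressions are illustrative.
C++
// C++ sketch of a parenthesis check using a stack
#include <iostream>
#include <stack>
#include <string>
using namespace std;

// Returns true if every '(' has a matching ')'
bool balanced(const string& expr)
{
    stack<char> st;
    for (char c : expr) {
        if (c == '(')
            st.push(c);
        else if (c == ')') {
            if (st.empty())
                return false; // unmatched ')'
            st.pop();
        }
    }
    return st.empty(); // no unmatched '(' left
}

int main()
{
    cout << balanced("(a+b)*(c-d)") << "\n"; // 1
    cout << balanced("((a+b)") << "\n"; // 0
    return 0;
}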
3. Queue: It is a First In First Out (FIFO) data structure. Queues are needed in many practical applications that process data in first-in, first-out order, when data does not need to be processed early. For example, in a queue for buying tickets for a show, the one who enters the queue first gets the ticket first. It can be implemented using arrays, linked lists, and vectors, just like stacks. Some applications of queue include level-order traversal in trees and graphs, resource sharing, etc. Header file:
#include <queue>
Syntax:
queue<Data Type> variable_name;
Most common functions for Queue:
push(): Used to push the element at back of the queue
pop(): Deletes the front element of the queue but does not return it.
front(): Returns the front element of the queue, or the element that is first in the line.
empty(): Return boolean value, ie, True if queue is empty, else returns false
back(): Returns the last element of queue.
size(): Returns the size of the queue.
C++
// C++ program to illustrate the
// function of queue in C++
#include <iostream>

// Header file for queue
#include <queue>
using namespace std;

// Function to print the queue
void print(queue<char> q)
{
    // q.size() shrinks as we pop, so
    // loop until the copy becomes empty
    while (q.empty() == false) {
        // Printing the front element
        cout << q.front() << " ";

        // Popping the front element
        q.pop();
    }
    cout << "\n";
}

// Driver Code
int main()
{
    // Given array
    char array[] = { 'G', 'E', 'E', 'K', 'S' };

    // Defining queue
    queue<char> q;

    if (q.empty() == true) {
        cout << "Queue is empty\n";
    }

    for (int i = 0; i < 5; i++) {
        q.push(array[i]);
    }

    cout << "Queue Initially: ";
    print(q);

    // Front element
    cout << "Front element: " << q.front() << "\n";

    // Back element
    cout << "Back Element: " << q.back() << "\n";

    // Size of queue
    cout << "Size of queue: " << q.size() << "\n";

    // Empty
    if (q.empty() == false) {
        cout << "Queue is not empty\n";
    }
    return 0;
}
Queue is empty
Queue Initially: G E E K S 
Front element: G
Back Element: S
Size of queue: 5
Queue is not empty
4. Priority Queue: This data structure is similar to queues, but the order of first out is decided by a priority set by the user. The main functions include getting the top-priority element, insertion, deleting the top-priority element, or decreasing the priority. The heap data structure is used and not a BST, as creating a BST is costlier than a heap and the complexity of heaps is better. Also, heaps give a complete binary tree and the heap-order property, satisfying all the properties of a priority queue. A priority queue has 2 variations: min heap and max heap. Complex problems like finding the k largest or smallest elements, merging k unsorted arrays, Dijkstra’s Shortest Path Algorithm, Huffman coding for compression, Prim’s Algorithm, etc. can be implemented easily. Header file:
#include <queue>
Syntax:
Max Priority Queue (default): priority_queue<data_type> variable_name;
Min Priority Queue: priority_queue<data_type, vector<data_type>, greater<data_type> > variable_name;
Most common functions for priority queue:
push(): Used to push the element in the queue
pop(): Deletes the top priority element of the queue but does not return it. Deletes element with max priority in max heap else deletes min element
size(): Returns the size of the queue.
empty(): Returns boolean value, ie, true if queue is empty, else return false.
top(): Returns the top element of the queue. In max priority queue, it returns the maximum, while in min priority queue, it returns the minimum value.
C++
// C++ program to illustrate the
// function of priority queue in C++
#include <iostream>

// Header file for priority
// queue, both MIN and MAX
#include <queue>

// Header file for the vector used
// as the underlying container
#include <vector>
using namespace std;

// Function to print the
// min priority queue
void print_min(
    priority_queue<int, vector<int>, greater<int> > q)
{
    while (q.empty() == false) {
        // Print the front
        // element (MINIMUM)
        cout << q.top() << " ";

        // Pop the minimum
        q.pop();
    }
    cout << "\n";
}

// Function to print the
// max priority queue
void print_max(priority_queue<int> q)
{
    while (q.empty() == false) {
        // Print the front
        // element (MAXIMUM)
        cout << q.top() << " ";

        // Pop the maximum
        q.pop();
    }
    cout << "\n";
}

// Driver Code
int main()
{
    // MAX priority queue (default)
    priority_queue<int> max_pq;

    // MIN priority queue
    priority_queue<int, vector<int>, greater<int> > min_pq;

    // Is queue empty
    if (min_pq.empty() == true)
        cout << "MIN priority queue is empty\n";

    if (max_pq.empty() == true)
        cout << "MAX priority queue is empty\n";

    cout << "\n";

    for (int i = 1; i <= 10; i++) {
        min_pq.push(i);
        max_pq.push(i);
    }

    cout << "MIN priority queue: ";
    print_min(min_pq);

    cout << "MAX priority queue: ";
    print_max(max_pq);

    cout << "\n";

    // Size
    cout << "Size of min pq: " << min_pq.size() << "\n";
    cout << "Size of max pq: " << max_pq.size() << "\n";
    cout << "\n";

    // Top element
    cout << "Top of min pq: " << min_pq.top() << "\n";
    cout << "Top of max pq: " << max_pq.top() << "\n";
    cout << "\n";

    // Pop the front element
    min_pq.pop();
    max_pq.pop();

    // Queues after popping
    cout << "MIN priority queue after pop: ";
    print_min(min_pq);

    cout << "MAX priority queue after pop: ";
    print_max(max_pq);

    cout << "\n";

    // Size after popping
    cout << "Size of min pq: " << min_pq.size() << "\n";
    cout << "Size of max pq: " << max_pq.size() << "\n";
    cout << "\n";

    // Is queue empty
    if (min_pq.empty() == false)
        cout << "MIN priority queue is not empty\n";

    if (max_pq.empty() == false)
        cout << "MAX priority queue is not empty\n";
}
MIN priority queue is empty
MAX priority queue is empty
MIN priority queue: 1 2 3 4 5 6 7 8 9 10
MAX priority queue: 10 9 8 7 6 5 4 3 2 1
Size of min pq: 10
Size of max pq: 10
Top of min pq: 1
Top of max pq: 10
MIN priority queue after pop: 2 3 4 5 6 7 8 9 10
MAX priority queue after pop: 9 8 7 6 5 4 3 2 1
Size of min pq: 9
Size of max pq: 9
MIN priority queue is not empty
MAX priority queue is not empty
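One of the applications mentioned above is finding the k largest elements. Here is a minimal sketch using a min priority queue of size k; the data and k are illustrative.
C++
// C++ sketch: k largest elements using a min priority queue
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

int main()
{
    vector<int> data = { 7, 2, 9, 4, 11, 5 };
    int k = 3;

    // A min heap of size at most k keeps
    // the k largest elements seen so far
    priority_queue<int, vector<int>, greater<int> > pq;
    for (int x : data) {
        pq.push(x);
        if ((int)pq.size() > k)
            pq.pop(); // discard the smallest
    }

    cout << "The " << k << " largest elements: ";
    while (!pq.empty()) {
        cout << pq.top() << " "; // 7 9 11 (ascending)
        pq.pop();
    }
    cout << "\n";
    return 0;
}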
5. Set: Sets are associative containers in which each element is unique. The elements cannot be modified once inserted in the set. A set ignores duplicate values, and all the elements are stored in sorted order. This data structure is particularly useful when incoming elements need to be sorted and modification is not required. A set can store elements in two orders: increasing or decreasing. Header file:
#include <set>
Syntax:
Increasing order: set<data_type> variable_name;
Decreasing order: set<data_type, greater<data_type> > variable_name;
Most common functions for Set:
insert(): This function is used to insert a new element in the Set.
begin(): This function returns an iterator to the first element in the set.
end(): It returns an iterator to the theoretical element that follows the last element in the set.
size(): Returns the total size of the set.
find(): It returns an iterator to the searched element if present. If not, it gives an iterator to the end.
count(): Returns the count of occurrences in a set. 1 if present, else 0.
empty(): Returns boolean value, ie, true if empty else false.
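Unlike the containers above, no example accompanies this section, so here is a minimal sketch exercising the set functions listed above; the values are illustrative.
C++
// C++ sketch to illustrate common set functions
#include <iostream>
#include <set>
using namespace std;

int main()
{
    // Elements are stored in sorted order
    // and duplicates are ignored
    set<int> s;
    s.insert(5);
    s.insert(1);
    s.insert(5); // duplicate, ignored
    s.insert(3);

    cout << "Set contents: ";
    for (int x : s)
        cout << x << " "; // 1 3 5
    cout << "\n";

    cout << "Size: " << s.size() << "\n"; // 3
    cout << "Count of 5: " << s.count(5) << "\n"; // 1

    // find() returns an iterator to the
    // element if present, else s.end()
    if (s.find(3) != s.end())
        cout << "3 is present\n";

    if (s.empty() == false)
        cout << "Set is not empty\n";
    return 0;
}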
6. List: Lists store data in a non-contiguous manner. Elements of the list can be scattered in different chunks of memory. Access to any particular index becomes costly, as traversal from a known index to that particular index has to be done; hence it is slower than a vector. Zero-sized lists are also valid. Header file:
#include <list>
Syntax:
list<Data Type> variable_name;
Most common functions for list:
push_front(element): Inserts a new element ‘element’ at the beginning of the list .
push_back(element) : Inserts a new element ‘element’ at the end of the list.
pop_front(): Removes the first element of the list.
pop_back(): Removes the last element of the list.
front() : Returns the value of the first element in the list.
back() : Returns the value of the last element in the list .
empty(): Returns boolean value, ie, true if empty else false.
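A minimal sketch exercising the list functions listed above; the values are illustrative.
C++
// C++ sketch to illustrate common list functions
#include <iostream>
#include <list>
using namespace std;

int main()
{
    list<int> l;

    // Insert at both ends
    l.push_back(2);
    l.push_back(3);
    l.push_front(1); // list: 1 2 3

    cout << "Front: " << l.front() << "\n"; // 1
    cout << "Back: " << l.back() << "\n"; // 3

    // Remove from both ends
    l.pop_front(); // list: 2 3
    l.pop_back(); // list: 2

    cout << "After popping: ";
    for (int x : l)
        cout << x << " "; // 2
    cout << "\n";

    if (l.empty() == false)
        cout << "List is not empty\n";
    return 0;
}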
7. Unordered Maps: Suppose you have a pencil box and some pens. You put some pens in the box; they are stored randomly and not ordered, unlike ordered maps, but you can access any pen and work with it. The same holds for unordered maps: the elements are stored randomly, but you can access any element anytime. The main difference between ordered maps and unordered maps is the way the compiler stores them. It has two arguments: the first is called KEY, and the other is called VALUE. The key maps to the value and is always unique.
Header file:
#include <unordered_map>
Syntax:
unordered_map<key_type, value_type> variable_name;
Most common functions for unordered map:
count(): Returns a boolean value, i.e., 1 if the passed key exists, else 0.
erase(key): Removes the element with the passed key.
clear() : Deletes the entire map.
size() : Returns the size of the map.
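A minimal sketch exercising the unordered_map functions listed above; the keys and values are illustrative.
C++
// C++ sketch to illustrate common unordered_map functions
#include <iostream>
#include <unordered_map>
#include <string>
using namespace std;

int main()
{
    // Key is a string, value is an int;
    // storage order is unspecified
    unordered_map<string, int> um;
    um["apple"] = 3;
    um["banana"] = 5;

    cout << "Size: " << um.size() << "\n"; // 2
    cout << "Count of apple: " << um.count("apple") << "\n"; // 1

    // erase(key) removes the element with that key
    um.erase("banana");
    cout << "Size after erase: " << um.size() << "\n"; // 1

    um.clear();
    cout << "Size after clear: " << um.size() << "\n"; // 0
    return 0;
}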
8. Ordered Maps: Suppose you have a new shelf in your room and some books. You arrange the books one after another, but after arranging any number of books, you can access the arranged books in any order and keep a book in the same place when you’ve read it. This is an example of a map. You fill in the values one after another in order, and the order is always maintained, but you can access any element anytime. This data structure is typically implemented as a self-balancing binary search tree (a red-black tree), not a dynamic array. Like unordered maps, it has two arguments: the first is called KEY, and the other is called VALUE. The key maps to the value and is always unique.
Header file:
#include <map>
Syntax:
map<key_type, value_type> variable_name;
Most common functions for ordered map:
count(): Returns a boolean value, i.e., 1 if the passed key exists, else 0.
erase(key): Removes the element with the passed key.
clear(): Deletes the entire map.
size(): Returns the size of the map.
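A minimal sketch exercising the map functions listed above; note that iteration visits the keys in sorted order. The keys and values are illustrative.
C++
// C++ sketch to illustrate common map functions
#include <iostream>
#include <map>
#include <string>
using namespace std;

int main()
{
    // Keys are kept in sorted order
    map<string, int> m;
    m["banana"] = 5;
    m["apple"] = 3;
    m["cherry"] = 7;

    // Iteration visits keys in sorted order:
    // apple, banana, cherry
    for (auto& kv : m)
        cout << kv.first << " -> " << kv.second << "\n";

    cout << "Count of apple: " << m.count("apple") << "\n"; // 1

    m.erase("apple");
    cout << "Size after erase: " << m.size() << "\n"; // 2

    m.clear();
    cout << "Size after clear: " << m.size() << "\n"; // 0
    return 0;
}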
Pair: The pair container is a simple container defined in the <utility> header consisting of two data elements or objects. The first element is referenced as ‘first’ and the second element as ‘second’, and the order is fixed (first, second). Pair is used to combine two values which may differ in type. Pair provides a way to store two heterogeneous objects as a single unit. Pairs can be assigned, copied and compared. The array of objects allocated in a map or hash_map is of type ‘pair’ by default, in which all the ‘first’ elements are unique keys associated with their ‘second’ value objects. To access the elements, we use the variable name followed by the dot operator followed by the keyword first or second.
Header File :
#include <utility>
Syntax :
pair<data_type1, data_type2> Pair_name;
Most common functions for Pair:
make_pair(): This template function allows creating a value pair without writing the types explicitly.
Below is the code illustrating the above functions:
C++
// C++ code
#include <iostream>
#include <utility>
using namespace std;

int main()
{
    // Declaring the PAIR1 of int and char.
    // If a pair is not initialized, the
    // default value of int/double is 0
    // and for string/char it is NULL
    pair<int, char> PAIR1;

    cout << PAIR1.first << " ";

    // NULL value therefore, not displayed
    cout << PAIR1.second << endl;

    // Initializing the pair during its declaration
    pair<string, double> PAIR2("GeeksForGeeks", 1.23);

    cout << PAIR2.first << " ";
    cout << PAIR2.second << endl;

    pair<string, double> PAIR3;

    // Inserting value in pair using make_pair function
    PAIR3 = make_pair("GeeksForGeeks is Best", 4.56);

    cout << PAIR3.first << " ";
    cout << PAIR3.second << endl;

    pair<int, int> PAIR4;

    // Inserting value in pair using {} (curly brackets)
    PAIR4 = { 4, 8 };

    cout << PAIR4.first << " ";
    cout << PAIR4.second << endl;

    return 0;
}
If you have an array of pairs and you use the inbuilt sort function, then by default it sorts the array on the basis of the first value (i.e., obj.first) of each element. To sort in descending order, or by the second value, pass a comparator function according to which you want the array sorted.
The code below shows how to sort an array of pairs:
C++
// C++ Code
#include <algorithm>
#include <iostream>
#include <utility>
using namespace std;

bool ascending_secondValue(pair<int, int> a, pair<int, int> b)
{
    return a.second < b.second;
}

bool descending_firstValue(pair<int, int> a, pair<int, int> b)
{
    return a.first > b.first;
}

bool descending_secondValue(pair<int, int> a, pair<int, int> b)
{
    return a.second > b.second;
}

// Driver Code
int main()
{
    pair<int, int> PAIR1[5];
    PAIR1[0] = make_pair(1, 3);
    PAIR1[1] = make_pair(13, 4);
    PAIR1[2] = make_pair(5, 12);
    PAIR1[3] = make_pair(7, 9);

    // Using {} to insert an element instead
    // of make_pair; you can use either
    PAIR1[4] = { 11, 2 };

    cout << "Sorting array in Ascending order "
            "on the basis of First value - " << endl;
    sort(PAIR1, PAIR1 + 5);
    for (int i = 0; i < 5; i++) {
        cout << PAIR1[i].first << " "
             << PAIR1[i].second << endl;
    }

    pair<int, int> PAIR2[5];
    PAIR2[0] = make_pair(1, 3);
    PAIR2[1] = make_pair(13, 4);
    PAIR2[2] = make_pair(5, 12);
    PAIR2[3] = make_pair(7, 9);
    PAIR2[4] = make_pair(11, 2);

    cout << "Sorting array in Ascending order "
            "on the basis of Second value - " << endl;
    sort(PAIR2, PAIR2 + 5, ascending_secondValue);
    for (int i = 0; i < 5; i++) {
        cout << PAIR2[i].first << " "
             << PAIR2[i].second << endl;
    }

    pair<int, int> PAIR3[5];
    PAIR3[0] = make_pair(1, 3);
    PAIR3[1] = make_pair(13, 4);
    PAIR3[2] = make_pair(5, 12);
    PAIR3[3] = make_pair(7, 9);
    PAIR3[4] = make_pair(11, 2);

    cout << "Sorting array in Descending order on the "
            "basis of First value - " << endl;
    sort(PAIR3, PAIR3 + 5, descending_firstValue);
    for (int i = 0; i < 5; i++) {
        cout << PAIR3[i].first << " "
             << PAIR3[i].second << endl;
    }

    pair<int, int> PAIR4[5];
    PAIR4[0] = make_pair(1, 3);
    PAIR4[1] = make_pair(13, 4);
    PAIR4[2] = make_pair(5, 12);
    PAIR4[3] = make_pair(7, 9);
    PAIR4[4] = make_pair(11, 2);

    cout << "Sorting array in Descending order on the "
            "basis of Second value - " << endl;
    sort(PAIR4, PAIR4 + 5, descending_secondValue);
    for (int i = 0; i < 5; i++) {
        cout << PAIR4[i].first << " "
             << PAIR4[i].second << endl;
    }
    return 0;
}
Ordered Set: An ordered set is a policy-based data structure in g++ that keeps the unique elements in sorted order. It performs all the operations of the STL set in O(log n) complexity and performs two additional operations, also in O(log n) complexity:
order_of_key(k): Number of items strictly smaller than k.
find_by_order(k): K-th element in the set (counting from zero).
Header File and namespace :-
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;
The necessary structure required for the ordered set implementation is :
tree<int, null_type, less<int>, rb_tree_tag, tree_order_statistics_node_update>
You can read about them in detail here .
Additional functions in the ordered set, beyond those of set, are:
1. find_by_order(k): It returns an iterator to the kth element (counting from zero) in the set in O(log n) time. To find the first element, k must be zero.
Let us assume we have a set s : {1, 5, 6, 17, 88}, then :
*(s.find_by_order(2)) : 3rd element in the set i.e. 6
*(s.find_by_order(4)) : 5th element in the set i.e. 88
2. order_of_key(k): It returns the number of items that are strictly smaller than k in O(log n) time.
Let us assume we have a set s : {1, 5, 6, 17, 88}, then :
s.order_of_key(6) : Count of elements strictly smaller than 6 is 2.
s.order_of_key(25) : Count of elements strictly smaller than 25 is 4.
NOTE: ordered_set is used here as a macro for tree<int, null_type, less<int>, rb_tree_tag, tree_order_statistics_node_update>. Therefore it can be given any macro name other than ordered_set, but in the world of competitive programming it is commonly referred to as an ordered set, as it is a set with additional operations.
C++
// C++ program
#include <iostream>
using namespace std;

// Header files, namespaces,
// macros as defined above
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;

// ordered_set is just a macro; you can give any
// other name also
#define ordered_set                                   \
    tree<int, null_type, less<int>, rb_tree_tag,      \
         tree_order_statistics_node_update>

// Driver program to test above functions
int main()
{
    // Ordered set declared with name o_set
    ordered_set o_set;

    // insert function to insert in
    // ordered set same as SET STL
    o_set.insert(5);
    o_set.insert(1);
    o_set.insert(2);

    // Finding the second smallest element
    // in the set using * because
    // find_by_order returns an iterator
    cout << *(o_set.find_by_order(1)) << endl;

    // Finding the number of elements
    // strictly less than k=4
    cout << o_set.order_of_key(4) << endl;

    // Finding the count of elements less
    // than or equal to 4 i.e. strictly less
    // than 5 if integers are present
    cout << o_set.order_of_key(5) << endl;

    // Deleting 2 from the set if it exists
    if (o_set.find(2) != o_set.end()) {
        o_set.erase(o_set.find(2));
    }

    // Now after deleting 2 from the set
    // Finding the second smallest element in the set
    cout << *(o_set.find_by_order(1)) << endl;

    // Finding the number of
    // elements strictly less than k=4
    cout << o_set.order_of_key(4) << endl;

    return 0;
}
2
2
2
5
1
[
{
"code": null,
"e": 24107,
"s": 24079,
"text": "\n04 Feb, 2022"
},
{
"code": null,
"e": 24111,
"s": 24107,
"text": "C++"
},
{
"code": "// C++ code#include <iostream>#include <utility>using namespace std; int main(){ // Declaring the PAIR1 of int and char // IF pair is not initialized then , // default value of int/double is 0 and // for string/char it is NULL pair<int, char> PAIR1; cout << PAIR1.first << \" \"; // NULL value therefore, not displayed cout << PAIR1.second << endl; // Initializing the pair during it's Declaration pair<string, double> PAIR2(\"GeeksForGeeks\", 1.23); cout << PAIR2.first << \" \"; cout << PAIR2.second << endl; pair<string, double> PAIR3; // Inserting Value in pair using make_pair function PAIR3 = make_pair(\"GeeksForGeeks is Best\", 4.56); cout << PAIR3.first << \" \"; cout << PAIR3.second << endl; pair<int, int> PAIR4; // Inserting Value in pair using {}(curly brackets) PAIR4 = { 4, 8 }; cout << PAIR4.first << \" \"; cout << PAIR4.second << endl; return 0;}",
"e": 25059,
"s": 24111,
"text": null
},
{
"code": null,
"e": 25340,
"s": 25059,
"text": " STL provides a range of data structures that are very useful in various scenarios. A lot of data structures are based on real-life applications. It is a library of container classes, algorithms, and iterators. It is a generalized library and so, its components are parameterized."
},
{
"code": null,
"e": 25404,
"s": 25340,
"text": "VectorStackQueuePriority queueSetListOrdered MapsUnordered Maps"
},
{
"code": null,
"e": 25411,
"s": 25404,
"text": "Vector"
},
{
"code": null,
"e": 25417,
"s": 25411,
"text": "Stack"
},
{
"code": null,
"e": 25423,
"s": 25417,
"text": "Queue"
},
{
"code": null,
"e": 25438,
"s": 25423,
"text": "Priority queue"
},
{
"code": null,
"e": 25442,
"s": 25438,
"text": "Set"
},
{
"code": null,
"e": 25447,
"s": 25442,
"text": "List"
},
{
"code": null,
"e": 25460,
"s": 25447,
"text": "Ordered Maps"
},
{
"code": null,
"e": 25475,
"s": 25460,
"text": "Unordered Maps"
},
{
"code": null,
"e": 25725,
"s": 25475,
"text": "Containers or container classes store objects and data. There are in total seven standards “first-class” container classes and three container adaptor classes and only seven header files that provide access to these containers or container adaptors."
},
{
"code": null,
"e": 26242,
"s": 25725,
"text": "Note: We can include just one library, i.e., #include <bits/stdc++.h> that includes all the STL libraries but in certain competitions, including this library can make the code slow. To overcome this problem, we can add specific libraries to access particular data structures of STL. Also while removing the elements, it is required to take care if the data structure is empty or not. Calling remove function on an empty data structure leads to error. Below are the some Data Structures with their illustration shown "
},
{
"code": null,
"e": 26640,
"s": 26242,
"text": "Vector: The major problem while using arrays was that we had to specify size. This drawback was overcome by vectors. Vectors internally work as dynamically allocated arrays, which is the main reason as to how we can add elements without specifying the size of the vector. When the size of the vector becomes equal to capacity, the capacity of vector increases and thus we can add further elements."
},
{
"code": null,
"e": 27038,
"s": 26640,
"text": "Vector: The major problem while using arrays was that we had to specify size. This drawback was overcome by vectors. Vectors internally work as dynamically allocated arrays, which is the main reason as to how we can add elements without specifying the size of the vector. When the size of the vector becomes equal to capacity, the capacity of vector increases and thus we can add further elements."
},
{
"code": null,
"e": 27051,
"s": 27038,
"text": "Header file:"
},
{
"code": null,
"e": 27069,
"s": 27051,
"text": "#include <vector>"
},
{
"code": null,
"e": 27078,
"s": 27069,
"text": "Syntax: "
},
{
"code": null,
"e": 27111,
"s": 27078,
"text": "vector<data type> variable_name;"
},
{
"code": null,
"e": 27145,
"s": 27111,
"text": "Most common function for vector: "
},
{
"code": null,
"e": 27895,
"s": 27145,
"text": "push_back(): Used to push the element at the end of the vector. For faster method, use emplace_back().pop_back(): Used to remove the last element from the vector.size(): Returns the size of the vector.clear(): Deletes all the content of the vector.erase(): Deletes the specified index or data.empty(): Returns boolean value True if vector is empty, else returns False.Iterator lower_bound(Iterator first, Iterator last, const val): lower_bound returns an iterator pointing to the first element in the range [first, last) which has a value not less than ‘val’.Iterator upper_bound(Iterator first, Iterator last, const val): upper_bound returns an iterator pointing to the first element in the range [first, last) which has a value greater than ‘val’."
},
{
"code": null,
"e": 27998,
"s": 27895,
"text": "push_back(): Used to push the element at the end of the vector. For faster method, use emplace_back()."
},
{
"code": null,
"e": 28059,
"s": 27998,
"text": "pop_back(): Used to remove the last element from the vector."
},
{
"code": null,
"e": 28099,
"s": 28059,
"text": "size(): Returns the size of the vector."
},
{
"code": null,
"e": 28147,
"s": 28099,
"text": "clear(): Deletes all the content of the vector."
},
{
"code": null,
"e": 28193,
"s": 28147,
"text": "erase(): Deletes the specified index or data."
},
{
"code": null,
"e": 28269,
"s": 28193,
"text": "empty(): Returns boolean value True if vector is empty, else returns False."
},
{
"code": null,
"e": 28461,
"s": 28269,
"text": "Iterator lower_bound(Iterator first, Iterator last, const val): lower_bound returns an iterator pointing to the first element in the range [first, last) which has a value not less than ‘val’."
},
{
"code": null,
"e": 28652,
"s": 28461,
"text": "Iterator upper_bound(Iterator first, Iterator last, const val): upper_bound returns an iterator pointing to the first element in the range [first, last) which has a value greater than ‘val’."
},
{
"code": null,
"e": 28656,
"s": 28652,
"text": "C++"
},
{
"code": "// C++ program to illustrate the// function of vector in C++#include <iostream> // Header file for vector if// <bits/stdc++.h> not included#include <vector>using namespace std; // Function to print the vectorvoid print(vector<int> vec){ // vec.size() gives the size // of the vector for (int i = 0; i < vec.size(); i++) { cout << vec[i] << \" \"; } cout << endl;} // Driver Codeint main(){ // Defining a vector vector<int> vec; // Put all natural numbers // from 1 to 10 in vector for (int i = 1; i <= 10; i++) { vec.push_back(i); } cout << \"Initial vector: \"; // print the vector print(vec); // Size of vector cout << \"Vector size: \" << vec.size() << \"\\n\"; // Check of vector is empty if (vec.empty() == false) cout << \"Is vector is\" << \" empty: False\\n\"; // Popping out 10 form the vector vec.pop_back(); cout << \"Vector after popping: \"; print(vec); // Deleting the first element // from the vector using erase() vec.erase(vec.begin()); cout << \"Vector after erase\" << \" first element: \"; print(vec); // Clear the vector vec.clear(); cout << \"Vector after \" << \"clearing: None \"; print(vec); // Check if vector is empty if (vec.empty() == true) cout << \"Is vector is\" << \" empty: True\\n\";}",
"e": 30025,
"s": 28656,
"text": null
},
{
"code": null,
"e": 30251,
"s": 30025,
"text": "Initial vector: 1 2 3 4 5 6 7 8 9 10 \nVector size: 10\nIs vector is empty: False\nVector after popping: 1 2 3 4 5 6 7 8 9 \nVector after erase first element: 2 3 4 5 6 7 8 9 \nVector after clearing: None \nIs vector is empty: True"
},
{
"code": null,
"e": 30607,
"s": 30251,
"text": "2. Stack: It is Last In First Out (LIFO) data structure. It can be implemented using arrays, linked lists, and vectors. Some problems like reversing an element or string, parenthesis check, print next greater element, postfix expression, etc, can be done using stack class rather than making all the functions we can use its inbuilt functions.Header file:"
},
{
"code": null,
"e": 30624,
"s": 30607,
"text": "#include <stack>"
},
{
"code": null,
"e": 30633,
"s": 30624,
"text": "Syntax: "
},
{
"code": null,
"e": 30665,
"s": 30633,
"text": "stack<data_type> variable_name;"
},
{
"code": null,
"e": 30699,
"s": 30665,
"text": "Most common function for stack: "
},
{
"code": null,
"e": 30979,
"s": 30699,
"text": "push(): Used to push the element at top of the stack.pop(): Deletes the top element of the stack but do not returns it.top(): Returns the top element of the stack.empty(): Return boolean value, ie, True if stack is empty, else returns false.size(): Returns the size of the stack."
},
{
"code": null,
"e": 31033,
"s": 30979,
"text": "push(): Used to push the element at top of the stack."
},
{
"code": null,
"e": 31100,
"s": 31033,
"text": "pop(): Deletes the top element of the stack but do not returns it."
},
{
"code": null,
"e": 31145,
"s": 31100,
"text": "top(): Returns the top element of the stack."
},
{
"code": null,
"e": 31224,
"s": 31145,
"text": "empty(): Return boolean value, ie, True if stack is empty, else returns false."
},
{
"code": null,
"e": 31263,
"s": 31224,
"text": "size(): Returns the size of the stack."
},
{
"code": null,
"e": 31267,
"s": 31263,
"text": "C++"
},
{
"code": "// C++ program to illustrate the// function of stack in C++#include <iostream> // Header file for stack#include <stack>using namespace std; // Function to print the stackvoid print(stack<char> s){ // Loops runs till stack // becomes empty while (s.empty() == false) { // Prints the top element cout << s.top() << \" \"; // Now pops the same top element s.pop(); } cout << \"\\n\";} // Driver Codeint main(){ // Given char array char array[] = { 'G', 'E', 'E', 'K', 'S', 'F', 'O', 'R', 'G', 'E', 'E', 'K', 'S' }; // Defining stack stack<char> s; // Check if stack is empty if (s.empty() == true) { cout << \"Stack is currently Empty\" << \"\\n\"; } else { cout << \"Stack is not empty\" << \"\\n\"; } // Push elements in stack for (int i = sizeof(array) / sizeof(array[0]) - 1; i >= 0; i--) { s.push(array[i]); } // Size of stack cout << \"Size of stack: \" << s.size() << \"\\n\"; // Content of stack cout << \"Stack initially: \"; print(s); // Returning the top // element of the stack cout << \"Top element: \" << s.top() << \"\\n\"; // Popping the top // element in stack s.pop(); cout << \"Stack after 1\" << \"pop operation: \"; print(s); // Now checking the top element cout << \"Top element after popping: \" << s.top() << \"\\n\"; // Size of stack cout << \"Size of stack\" << \"after popping: \" << s.size() << \"\\n\"; // Again checking if the // stack is empty if (s.empty() == true) { cout << \"Stack is currently Empty\" << \"\\n\"; } else { cout << \"Stack is not empty\" << \"\\n\"; } return 0;}",
"e": 33048,
"s": 31267,
"text": null
},
{
"code": null,
"e": 33282,
"s": 33048,
"text": "Stack is currently Empty\nSize of stack: 13\nStack initially: G E E K S F O R G E E K S \nTop element: G\nStack after 1pop operation: E E K S F O R G E E K S \nTop element after popping: E\nSize of stackafter popping: 12\nStack is not empty"
},
{
"code": null,
"e": 33789,
"s": 33282,
"text": "3.Queue: It is First In First Out (FIFO) data structure.The reason why we require queues use a lot of practical application of first in first out and when data doesn’t need to be processed early. For example, in a queue for buying tickets for a show, the one who enters the queue first, get the ticket first. It can be implemented using arrays, linked lists, and vectors just like stacks. Some applications of queue include level-order traversal in trees and graphs, in resource sharing, etc. Header file: "
},
{
"code": null,
"e": 33806,
"s": 33789,
"text": "#include <queue>"
},
{
"code": null,
"e": 33815,
"s": 33806,
"text": "Syntax: "
},
{
"code": null,
"e": 33847,
"s": 33815,
"text": "queue<Data Type> variable_name;"
},
{
"code": null,
"e": 33879,
"s": 33847,
"text": "Most common function for Queue:"
},
{
"code": null,
"e": 34249,
"s": 33879,
"text": "push(): Used to push the element at back of the queuepop(): Deletes the front element of the queue but does not return it.front(): Returns the front element of the queue, or the element that is first in the line.empty(): Return boolean value, ie, True if queue is empty, else returns falseback(): Returns the last element of queue.size(): Returns the size of the queue."
},
{
"code": null,
"e": 34303,
"s": 34249,
"text": "push(): Used to push the element at back of the queue"
},
{
"code": null,
"e": 34373,
"s": 34303,
"text": "pop(): Deletes the front element of the queue but does not return it."
},
{
"code": null,
"e": 34464,
"s": 34373,
"text": "front(): Returns the front element of the queue, or the element that is first in the line."
},
{
"code": null,
"e": 34542,
"s": 34464,
"text": "empty(): Return boolean value, ie, True if queue is empty, else returns false"
},
{
"code": null,
"e": 34585,
"s": 34542,
"text": "back(): Returns the last element of queue."
},
{
"code": null,
"e": 34624,
"s": 34585,
"text": "size(): Returns the size of the queue."
},
{
"code": null,
"e": 34628,
"s": 34624,
"text": "C++"
},
{
"code": "// C++ program to illustrate the// function of vector in C++#include <iostream> // Header file for queue#include <queue>using namespace std; // Function to print the queuevoid print(queue<char> q){ for (int i = 0; i < q.size(); i++) { // Printing the front element cout << q.front() << \" \"; // Popping the front element q.pop(); } cout << \"\\n\";} // Driver Codeint main(){ // Given array char array[] = { 'G', 'E', 'E', 'K', 'S' }; // Defining queue queue<char> q; if (q.empty() == true) { cout << \"Queue is empty\\n\"; } for (int i = 0; i < 5; i++) { q.push(array[i]); } cout << \"Queue Initially: \"; print(q); // Front element cout << \"Front element: \" << q.front() << \"\\n\"; // Back element cout << \"Back Element: \" << q.back() << \"\\n\"; // Size of queue cout << \"Size of queue: \" << q.size() << \"\\n\"; // Empty if (q.empty() == false) { cout << \"Queue is not empty\\n\"; } return 0;}",
"e": 35664,
"s": 34628,
"text": null
},
{
"code": null,
"e": 35772,
"s": 35664,
"text": "Queue is empty\nQueue Initially: G E E \nFront element: G\nBack Element: S\nSize of queue: 5\nQueue is not empty"
},
{
"code": null,
"e": 36539,
"s": 35772,
"text": "4.Priority Queue: This data structure is similar to queues, but the order of first out is decided by a priority set by the user. The main functions include getting top priority element, insert, delete the top priority element or decrease the priority. Data structure Heaps is used and not BST, as in BST, creating trees is costly than heaps, and the complexity of heaps is better. Also, heaps provide Complete Binary Tree and Heap order property satisfying all the properties of Priority Queue. Priority Queue has 2 variations, Min Heap and Max Heap. Complex problems like finding k largest or smallest elements, merge k unsorted arrays, Dijkstra’s Shortest Path Algorithm, Huffman code for compression, Prim’s Algorithm, etc. can be implemented easily.Header file: "
},
{
"code": null,
"e": 36556,
"s": 36539,
"text": "#include <queue>"
},
{
"code": null,
"e": 36564,
"s": 36556,
"text": "Syntax:"
},
{
"code": null,
"e": 36704,
"s": 36564,
"text": "Min Priority Queue: priority_queue<Data Type> variable_name; Max Priority Queue: priority_queue <Data Type, vector, greater> variable_name;"
},
{
"code": null,
"e": 36746,
"s": 36704,
"text": "Most common function for priority queue: "
},
{
"code": null,
"e": 37205,
"s": 36746,
"text": "push(): Used to push the element in the queuepop(): Deletes the top priority element of the queue but does not return it. Deletes element with max priority in max heap else deletes min elementsize(): Returns the size of the queue.empty(): Returns boolean value, ie, true if queue is empty, else return false.top(): Returns the top element of the queue. In max priority queue, it returns the maximum, while in min priority queue, it returns the minimum value."
},
{
"code": null,
"e": 37251,
"s": 37205,
"text": "push(): Used to push the element in the queue"
},
{
"code": null,
"e": 37399,
"s": 37251,
"text": "pop(): Deletes the top priority element of the queue but does not return it. Deletes element with max priority in max heap else deletes min element"
},
{
"code": null,
"e": 37438,
"s": 37399,
"text": "size(): Returns the size of the queue."
},
{
"code": null,
"e": 37517,
"s": 37438,
"text": "empty(): Returns boolean value, ie, true if queue is empty, else return false."
},
{
"code": null,
"e": 37668,
"s": 37517,
"text": "top(): Returns the top element of the queue. In max priority queue, it returns the maximum, while in min priority queue, it returns the minimum value."
},
{
"code": null,
"e": 37672,
"s": 37668,
"text": "C++"
},
{
"code": "// C++ program to illustrate the// function of priority queue in C++#include <iostream> // Header file for priority// queue, both MIN and MAX#include <queue>using namespace std; // Function to print the// min priority queuevoid print_min( priority_queue<int, vector<int>, greater<int> > q){ while (q.empty() == false) { // Print the front // element(MINIMUM) cout << q.top() << \" \"; // Pop the minimum q.pop(); } cout << \"\\n\";} // Function to print the// min priority queuevoid print_max(priority_queue<int> q){ while (q.empty() == false) { // Print the front // element(MAXIMUM) cout << q.top() << \" \"; // Pop the maximum q.pop(); } cout << \"\\n\";} // Driver Codeint main(){ // MIN priority queue priority_queue<int> max_pq; // MAX priority_queue priority_queue<int, vector<int>, greater<int> > min_pq; // Is queue empty if (min_pq.empty() == true) cout << \"MIN priority \" << \"queue is empty\\n\"; if (max_pq.empty() == true) cout << \"MAX priority\" << \" queue is empty\\n\"; cout << \"\\n\"; for (int i = 1; i <= 10; i++) { min_pq.push(i); max_pq.push(i); } cout << \"MIN priority queue: \"; print_min(min_pq); cout << \"MAX priority queue: \"; print_max(max_pq); cout << \"\\n\"; // Size cout << \"Size of min pq: \" << min_pq.size() << \"\\n\"; cout << \"Size of max pq: \" << max_pq.size() << \"\\n\"; cout << \"\\n\"; // Top element cout << \"Top of min pq: \" << min_pq.top() << \"\\n\"; cout << \"Top of max pq: \" << max_pq.top() << \"\\n\"; cout << \"\\n\"; // Pop the front element min_pq.pop(); max_pq.pop(); // Queus after popping cout << \"MIN priority \" << \"queue after pop: \"; print_min(min_pq); cout << \"MAX priority \" << \"queue after pop: \"; print_max(max_pq); cout << \"\\n\"; // Size after popping cout << \"Size of min pq: \" << min_pq.size() << \"\\n\"; cout << \"Size of max pq: \" << max_pq.size() << \"\\n\"; cout << \"\\n\"; // Is queue empty if (min_pq.empty() == false) cout << \"MIN priority \" << \" queue is not empty\\n\"; if (max_pq.empty() == false) cout << \"MAX priority queue\" << \" is not empty\\n\";}",
"e": 39983,
"s": 37672,
"text": null
},
{
"code": null,
"e": 40402,
"s": 39983,
"text": "MIN priority queue is empty\nMAX priority queue is empty\n\nMIN priority queue: 1 2 3 4 5 6 7 8 9 10 \nMAX priority queue: 10 9 8 7 6 5 4 3 2 1 \n\nSize of min pq: 10\nSize of max pq: 10\n\nTop of min pq: 1\nTop of max pq: 10\n\nMIN priority queue after pop: 2 3 4 5 6 7 8 9 10 \nMAX priority queue after pop: 9 8 7 6 5 4 3 2 1 \n\nSize of min pq: 9\nSize of max pq: 9\n\nMIN priority queue is not empty\nMAX priority queue is not empty"
},
{
"code": null,
"e": 40828,
"s": 40402,
"text": "4.Set: Sets are associative containers in which each element is unique. The elements cannot be modified once inserted in the set. A set ignores the duplicate values and all the elements are stored in sorted order. This data structure is particularly useful when incoming elements need to be sorted and modification is not required. The sets can store elements in two orders, increasing order or decreasing order. Header file:"
},
{
"code": null,
"e": 40844,
"s": 40828,
"text": "#include <set> "
},
{
"code": null,
"e": 40852,
"s": 40844,
"text": "Syntax:"
},
{
"code": null,
"e": 40969,
"s": 40852,
"text": "Increasing order: set<Data type> variable_name;Decreasing order :set<Data type, greater<Data type> > variable_name; "
},
{
"code": null,
"e": 41000,
"s": 40969,
"text": "Most common function for Set: "
},
{
"code": null,
"e": 41524,
"s": 41000,
"text": "insert(): This function is used to insert a new element in the Set.begin(): This function returns an iterator to the first element in the set.end(): It returns an iterator to the theoretical element that follows the last element in the set.size(): Returns the total size of the set.find(): It returns an iterator to the searched element if present. If not, it gives an iterator to the end.count(): Returns the count of occurrences in a set. 1 if present, else 0.empty(): Returns boolean value, ie, true if empty else false."
},
{
"code": null,
"e": 41592,
"s": 41524,
"text": "insert(): This function is used to insert a new element in the Set."
},
{
"code": null,
"e": 41668,
"s": 41592,
"text": "begin(): This function returns an iterator to the first element in the set."
},
{
"code": null,
"e": 41767,
"s": 41668,
"text": "end(): It returns an iterator to the theoretical element that follows the last element in the set."
},
{
"code": null,
"e": 41810,
"s": 41767,
"text": "size(): Returns the total size of the set."
},
{
"code": null,
"e": 41918,
"s": 41810,
"text": "find(): It returns an iterator to the searched element if present. If not, it gives an iterator to the end."
},
{
"code": null,
"e": 41992,
"s": 41918,
"text": "count(): Returns the count of occurrences in a set. 1 if present, else 0."
},
{
"code": null,
"e": 42054,
"s": 41992,
"text": "empty(): Returns boolean value, ie, true if empty else false."
},
{
"code": null,
"e": 42368,
"s": 42054,
"text": "5.List: Lists stores data in non-contiguous manner.Elements of the list can be scattered in the different chunks of memory.Access to any particular index becomes costly as traversal from know index to that particular index has to be done, hence it is slow than vector.Zero-sized lists are also valid. Header file:"
},
{
"code": null,
"e": 42385,
"s": 42368,
"text": "#include <list> "
},
{
"code": null,
"e": 42394,
"s": 42385,
"text": "Syntax: "
},
{
"code": null,
"e": 42425,
"s": 42394,
"text": "list<Data Type> variable_name;"
},
{
"code": null,
"e": 42456,
"s": 42425,
"text": "Most common function for list:"
},
{
"code": null,
"e": 42898,
"s": 42456,
"text": "push_front(element): Inserts a new element ‘element’ at the beginning of the list .push_back(element) : Inserts a new element ‘element’ at the end of the list.pop_front(): Removes the first element of the list.pop_back(): Removes the last element of the list.front() : Returns the value of the first element in the list.back() : Returns the value of the last element in the list .empty(): Returns boolean value, ie, true if empty else false."
},
{
"code": null,
"e": 42982,
"s": 42898,
"text": "push_front(element): Inserts a new element ‘element’ at the beginning of the list ."
},
{
"code": null,
"e": 43059,
"s": 42982,
"text": "push_back(element) : Inserts a new element ‘element’ at the end of the list."
},
{
"code": null,
"e": 43111,
"s": 43059,
"text": "pop_front(): Removes the first element of the list."
},
{
"code": null,
"e": 43161,
"s": 43111,
"text": "pop_back(): Removes the last element of the list."
},
{
"code": null,
"e": 43223,
"s": 43161,
"text": "front() : Returns the value of the first element in the list."
},
{
"code": null,
"e": 43284,
"s": 43223,
"text": "back() : Returns the value of the last element in the list ."
},
{
"code": null,
"e": 43346,
"s": 43284,
"text": "empty(): Returns boolean value, ie, true if empty else false."
},
{
"code": null,
"e": 43866,
"s": 43346,
"text": "6.Unordered Maps: Suppose you have a pencil box and some pens. You put some pens in the box, that would be random and not ordered unlike ordered maps but you can access any pen and work with it. The same is unordered maps, the elements are stored randomly but you can access any order anytime. The main difference is between ordered maps and unordered maps are the way the compiler stores them. It has two arguments, the first is called KEY, and the other is called Value. The keymaps to the value and is always unique."
},
{
"code": null,
"e": 43880,
"s": 43866,
"text": "Header file: "
},
{
"code": null,
"e": 43906,
"s": 43880,
"text": "#include <unordered_map> "
},
{
"code": null,
"e": 43915,
"s": 43906,
"text": "Syntax: "
},
{
"code": null,
"e": 43955,
"s": 43915,
"text": "unordered_map<Data Type> variable name;"
},
{
"code": null,
"e": 43996,
"s": 43955,
"text": "Most common function for unordered map: "
},
{
"code": null,
"e": 44179,
"s": 43996,
"text": "count(): Returns boolean values, ie, 1 if the key passes exists, else false.erase(key) : Returns the passed key.clear() : Deletes the entire map.size() : Returns the size of the map."
},
{
"code": null,
"e": 44256,
"s": 44179,
"text": "count(): Returns boolean values, ie, 1 if the key passes exists, else false."
},
{
"code": null,
"e": 44293,
"s": 44256,
"text": "erase(key) : Returns the passed key."
},
{
"code": null,
"e": 44327,
"s": 44293,
"text": "clear() : Deletes the entire map."
},
{
"code": null,
"e": 44365,
"s": 44327,
"text": "size() : Returns the size of the map."
},
{
"code": null,
"e": 44980,
"s": 44365,
"text": "7.Ordered Maps: Suppose you have a new shelf in your room and some books. You arrange the books one after another but after arranging any number of books, you can access arranged books in any order and keep the book in the same place when you’ve read it. This is an example of a map. You fill the values one after another in order and the order is maintained always, but you can access any element anytime. This data structure is also implemented using dynamic arrays. Like unordered maps, it has two arguments, the first is called KEY, and the other is called Value. The keymaps to the value and is always unique."
},
{
"code": null,
"e": 44993,
"s": 44980,
"text": "Header file:"
},
{
"code": null,
"e": 45016,
"s": 44993,
"text": "#include <ordered_map>"
},
{
"code": null,
"e": 45025,
"s": 45016,
"text": "Syntax: "
},
{
"code": null,
"e": 45063,
"s": 45025,
"text": "ordered_map<Data Type> variable name;"
},
{
"code": null,
"e": 45101,
"s": 45063,
"text": "Most common function for ordered map:"
},
{
"code": null,
"e": 45282,
"s": 45101,
"text": "count() : Returns boolean values, ie, 1 if the key passes exists, else false.erase(key): Returns the passed key.clear(): Deletes the entire map.size(): Returns the size of the map."
},
{
"code": null,
"e": 45360,
"s": 45282,
"text": "count() : Returns boolean values, ie, 1 if the key passes exists, else false."
},
{
"code": null,
"e": 45396,
"s": 45360,
"text": "erase(key): Returns the passed key."
},
{
"code": null,
"e": 45429,
"s": 45396,
"text": "clear(): Deletes the entire map."
},
{
"code": null,
"e": 45466,
"s": 45429,
"text": "size(): Returns the size of the map."
},
{
"code": null,
"e": 46188,
"s": 45466,
"text": "1.) Pair : The pair container is a simple container defined in <utility> header consisting of two data elements or objects. The first element is referenced as ‘first’ and the second element as ‘second’ and the order is fixed (first, second). Pair is used to combine together two values which may be different in type. Pair provides a way to store two heterogeneous objects as a single unit. Pair can be assigned, copied and compared. The array of objects allocated in a map or hash_map are of type ‘pair’ by default in which all the ‘first’ elements are unique keys associated with their ‘second’ value objects.To access the elements, we use variable name followed by dot operator followed by the keyword first or second."
},
{
"code": null,
"e": 46203,
"s": 46188,
"text": "Header File : "
},
{
"code": null,
"e": 46222,
"s": 46203,
"text": "#include <utility>"
},
{
"code": null,
"e": 46232,
"s": 46222,
"text": "Syntax : "
},
{
"code": null,
"e": 46273,
"s": 46232,
"text": "pair (data_type1, data_type2) Pair_name;"
},
{
"code": null,
"e": 46305,
"s": 46273,
"text": "Most common function for Pair :"
},
{
"code": null,
"e": 46410,
"s": 46305,
"text": "make_pair() : This template function allows to create a value pair without writing the types explicitly."
},
{
"code": null,
"e": 46515,
"s": 46410,
"text": "make_pair() : This template function allows to create a value pair without writing the types explicitly."
},
{
"code": null,
"e": 46570,
"s": 46515,
"text": "Below is the code to implement the above function : –"
},
{
"code": null,
"e": 46574,
"s": 46570,
"text": "C++"
},
{
"code": "// C++ code#include <iostream>#include <utility>using namespace std; int main(){ // Declaring the PAIR1 of int and char // IF pair is not initialized then , // default value of int/double is 0 // and for string/char it is NULL pair<int, char> PAIR1; cout << PAIR1.first << \" \"; // NULL value therefore, not displayed cout << PAIR1.second << endl; // Initializing the pair during it's Declaration pair<string, double> PAIR2(\"GeeksForGeeks\", 1.23); cout << PAIR2.first << \" \"; cout << PAIR2.second << endl; pair<string, double> PAIR3; // Inserting Value in pair using make_pair function PAIR3 = make_pair(\"GeeksForGeeks is Best\", 4.56); cout << PAIR3.first << \" \"; cout << PAIR3.second << endl; pair<int, int> PAIR4; // Inserting Value in pair using {}(curly brackets) PAIR4 = { 4, 8 }; cout << PAIR4.first << \" \"; cout << PAIR4.second << endl; return 0;}",
"e": 47522,
"s": 46574,
"text": null
},
{
"code": null,
"e": 47790,
"s": 47522,
"text": "If you have an array of pair and you use inbuilt sort function then , by default it sort the array on the basis of first value(i.e. obj.first) of the array of each element .To sort in Descending order pass the function according to which you want to sort the array. "
},
{
"code": null,
"e": 47845,
"s": 47790,
"text": "Below Code will show how to sort the Array of pair : –"
},
{
"code": null,
"e": 47849,
"s": 47845,
"text": "C++"
},
{
"code": "// C++ Code#include <algorithm>#include <iostream>#include <utility>using namespace std; bool ascending_secondValue(pair<int, int> a, pair<int, int> b){ return a.second < b.second;} bool descending_firstValue(pair<int, int> a, pair<int, int> b){ return a.first > b.first;} bool descending_secondValue(pair<int, int> a, pair<int, int> b){ return a.second > b.second;} // Driver Codeint main(){ pair<int, int> PAIR1[5]; PAIR1[0] = make_pair(1, 3); PAIR1[1] = make_pair(13, 4); PAIR1[2] = make_pair(5, 12); PAIR1[3] = make_pair(7, 9); // Using {} to insert element instead // of make_pair you can use any PAIR1[4] = { 11, 2 }; cout << \"Sorting array in Ascending \" \"order on the basis of First value - \" << endl; sort(PAIR1, PAIR1 + 5); for (int i = 0; i < 5; i++) { cout << PAIR1[i].first << \" \" << PAIR1[i].second << endl; } pair<int, int> PAIR2[5]; PAIR2[0] = make_pair(1, 3); PAIR2[1] = make_pair(13, 4); PAIR2[2] = make_pair(5, 12); PAIR2[3] = make_pair(7, 9); PAIR2[4] = make_pair(11, 2); cout << \"Sorting array in Ascending \" \" order on the basis of Second value - \" << endl; sort(PAIR2, PAIR2 + 5, ascending_secondValue); for (int i = 0; i < 5; i++) { cout << PAIR2[i].first << \" \" << PAIR2[i].second << endl; } pair<int, int> PAIR3[5]; PAIR3[0] = make_pair(1, 3); PAIR3[1] = make_pair(13, 4); PAIR3[2] = make_pair(5, 12); PAIR3[3] = make_pair(7, 9); PAIR3[4] = make_pair(11, 2); cout << \"Sorting array in Descending order on the \" \"basis of First value - \" << endl; sort(PAIR3, PAIR3 + 5, descending_firstValue); for (int i = 0; i < 5; i++) { cout << PAIR3[i].first << \" \" << PAIR3[i].second << endl; } pair<int, int> PAIR4[5]; PAIR4[0] = make_pair(1, 3); PAIR4[1] = make_pair(13, 4); PAIR4[2] = make_pair(5, 12); PAIR4[3] = make_pair(7, 9); PAIR4[4] = make_pair(11, 2); cout << \"Sorting array in Descending order on the \" \"basis of Second value - \" << endl; sort(PAIR4, PAIR4 + 5, descending_secondValue); for (int i = 0; i < 5; i++) { cout << PAIR4[i].first << \" \" << PAIR4[i].second << endl; } return 0;}",
"e": 50299,
"s": 47849,
"text": null
},
{
"code": null,
"e": 50578,
"s": 50299,
"text": "Ordered Set : Ordered set is a policy based data structure in g++ that keeps the unique elements in sorted order. It performs all the operations as performed by the set data structure in STL in log(n) complexity and performs two additional operations also in log(n) complexity ."
},
{
"code": null,
"e": 50701,
"s": 50578,
"text": "order_of_key (k) : Number of items strictly smaller than k .find_by_order(k) : K-th element in a set (counting from zero)."
},
{
"code": null,
"e": 50762,
"s": 50701,
"text": "order_of_key (k) : Number of items strictly smaller than k ."
},
{
"code": null,
"e": 50825,
"s": 50762,
"text": "find_by_order(k) : K-th element in a set (counting from zero)."
},
{
"code": null,
"e": 50855,
"s": 50825,
"text": "Header File and namespace :- "
},
{
"code": null,
"e": 50964,
"s": 50855,
"text": "#include <ext/pb_ds/assoc_container.hpp> \n#include <ext/pb_ds/tree_policy.hpp> \nusing namespace __gnu_pbds; "
},
{
"code": null,
"e": 51037,
"s": 50964,
"text": "The necessary structure required for the ordered set implementation is :"
},
{
"code": null,
"e": 51123,
"s": 51037,
"text": "tree < int , null_type , less , rb_tree_tag , tree_order_statistics_node_update >"
},
{
"code": null,
"e": 51164,
"s": 51123,
"text": "You can read about them in detail here ."
},
{
"code": null,
"e": 51230,
"s": 51164,
"text": "Additional functions in the ordered set other than the set are – "
},
{
"code": null,
"e": 51384,
"s": 51230,
"text": "find_by_order(k): It returns to an iterator to the kth element (counting from zero) in the set in O(log n) time.To find the first element k must be zero."
},
{
"code": null,
"e": 51538,
"s": 51384,
"text": "find_by_order(k): It returns to an iterator to the kth element (counting from zero) in the set in O(log n) time.To find the first element k must be zero."
},
{
"code": null,
"e": 51606,
"s": 51538,
"text": " Let us assume we have a set s : {1, 5, 6, 17, 88}, then :"
},
{
"code": null,
"e": 51670,
"s": 51606,
"text": " *(s.find_by_order(2)) : 3rd element in the set i.e. 6"
},
{
"code": null,
"e": 51735,
"s": 51670,
"text": " *(s.find_by_order(4)) : 5th element in the set i.e. 88 "
},
{
"code": null,
"e": 51856,
"s": 51735,
"text": " 2. order_of_key(k) : It returns to the number of items that are strictly smaller than our item k in O(log n) time."
},
{
"code": null,
"e": 51924,
"s": 51856,
"text": " Let us assume we have a set s : {1, 5, 6, 17, 88}, then :"
},
{
"code": null,
"e": 52002,
"s": 51924,
"text": " s.order_of_key(6) : Count of elements strictly smaller than 6 is 2."
},
{
"code": null,
"e": 52083,
"s": 52002,
"text": " s.order_of_key(25) : Count of elements strictly smaller than 25 is 4. "
},
{
"code": null,
"e": 52436,
"s": 52083,
"text": "NOTE : ordered_set is used here as a macro given to \n tree<int, null_type, less, rb_tree_tag, tree_order_statistics_node_update>. \n Therefore it can be given any name as macro other than ordered_set\n but generally in the world of competitive programming it is commonly referred as ordered set \n as it is a set with additional operations."
},
{
"code": null,
"e": 52440,
"s": 52436,
"text": "C++"
},
{
"code": "// C++ program#include <iostream>using namespace std; // Header files, namespaces,// macros as defined above#include <ext/pb_ds/assoc_container.hpp>#include <ext/pb_ds/tree_policy.hpp>using namespace __gnu_pbds; // ordered_set is just macro you can give any// other name also#define ordered_set \\ tree<int, null_type, less<int>, rb_tree_tag, \\ tree_order_statistics_node_update> // Driver program to test above functionsint main(){ // Ordered set declared with name o_set ordered_set o_set; // insert function to insert in // ordered set same as SET STL o_set.insert(5); o_set.insert(1); o_set.insert(2); // Finding the second smallest element // in the set using * because // find_by_order returns an iterator cout << *(o_set.find_by_order(1)) << endl; // Finding the number of elements // strictly less than k=4 cout << o_set.order_of_key(4) << endl; // Finding the count of elements less // than or equal to 4 i.e. strictly less // than 5 if integers are present cout << o_set.order_of_key(5) << endl; // Deleting 2 from the set if it exists if (o_set.find(2) != o_set.end()) { o_set.erase(o_set.find(2)); } // Now after deleting 2 from the set // Finding the second smallest element in the set cout << *(o_set.find_by_order(1)) << endl; // Finding the number of // elements strictly less than k=4 cout << o_set.order_of_key(4) << endl; return 0;}",
"e": 53930,
"s": 52440,
"text": null
},
{
"code": null,
"e": 53940,
"s": 53930,
"text": "2\n2\n2\n5\n1"
}
]
|
8086 program to find sum of odd numbers in a given series - GeeksforGeeks | 01 Jun, 2021
Problem – Write an Assembly Language Program to find the sum of the odd numbers in a given series of 8-bit numbers stored in continuous memory locations, and store the result in another memory location.
Example –
Example Explanation –
Offset 500 stores the counter value of the series, and the elements of the series occupy offsets 501 to 504.
In this example there are only 4 terms. The odd terms (fetched into the BL register), 15 + 07, are added in the AL register, and the result (1C) is kept in AL.
The result from the AL register is stored at offset 600.
Assumptions –
The counter value which tells that how many numbers are there in the series is stored at memory location 500.
The elements of the series are stored in a continuous memory location starting from 501.
The result is stored at memory location 600.
The starting address of the program is 400.
Test –
Syntax: TEST d, s
It performs an AND operation on the destination (d) and the source (s); the result is not stored, only the flags are modified.
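For instance (a worked illustration, not from the original article): TEST BL, 01 with BL = 15H computes 15H AND 01H = 01H, which is non-zero, so the zero flag is cleared and the number is odd; with BL = 08H, 08H AND 01H = 00H, so the zero flag is set and the number is even.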
Algorithm –
Load data from offset 500 to register CL.
Increment the value of offset.
Load 00H into CH register.
Load 00H into AL register.
Load data from offset to register BL.
Use the TEST instruction to check whether the data in BL is even or odd: if the zero flag is set, the data is even, so go to step 7; otherwise the data is odd, so go to step 8.
Jump to memory location 413H.
Add the data of AL and BL registers and store the result in AL register.
Increment the value of offset.
Jump to memory location 40AH if content of CX is not equal to zero otherwise go to step 11.
Load the data from AL register to memory location 600.
End
Program –
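The original listing was published as an image; the following is a sketch reconstructed from the explanation below, with offsets derived from the stated start address 400 and the jump targets 40A and 413 (instruction sizes assume standard 8086 encodings):
400: MOV SI, 500
403: MOV CL, [SI]
405: INC SI
406: MOV CH, 00
408: MOV AL, 00
40A: MOV BL, [SI]
40C: TEST BL, 01
40F: JZ 413
411: ADD AL, BL
413: INC SI
414: LOOP 40A
416: MOV [600], AL
419: HLT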
Explanation –
MOV SI, 500 loads the value 500 into SI.
MOV CL, [SI] loads the data of offset SI into CL register.
INC SI increases the value of SI by one.
MOV CH, 00 loads the value 00 into CH register.
MOV AL, 00 loads the value 00 into AL register.
MOV BL, [SI] loads the data of offset SI into BL register.
TEST BL, 01 performs an AND of the contents of BL with the value 01; only the flags are modified.
JZ 413 jump to 413 memory location if zero flag is set.
ADD AL, BL add the contents of AL and BL registers and store the result in AL register.
INC SI increases the value of SI by one.
LOOP 40A jump to 40A memory location if value of CX is not equal to zero.
MOV [600], AL loads the content of AL register to memory location 600.
HLT stops the execution of the program.
{
"code": null,
"e": 24707,
"s": 24679,
"text": "\n01 Jun, 2021"
},
{
"code": null,
"e": 24911,
"s": 24707,
"text": "Problem – Write an Assembly Language Program to find sum of odd numbers in a given series containing 8 bit numbers stored in a continuous memory location and store the result in another memory location. "
},
{
"code": null,
"e": 24922,
"s": 24911,
"text": "Example – "
},
{
"code": null,
"e": 24947,
"s": 24924,
"text": "Example Explanation – "
},
{
"code": null,
"e": 25264,
"s": 24947,
"text": "500 offset stores the counter value of the series and the elements of the series starts from 501 to 504 offset. In this example, we have 4 terms only. Adding the odd terms (found in BL register) 15+07 in AL register and result gets stored (1C) in AL register. The result from AL register gets stored in 600 offset. "
},
{
"code": null,
"e": 25377,
"s": 25264,
"text": "500 offset stores the counter value of the series and the elements of the series starts from 501 to 504 offset. "
},
{
"code": null,
"e": 25526,
"s": 25377,
"text": "In this example, we have 4 terms only. Adding the odd terms (found in BL register) 15+07 in AL register and result gets stored (1C) in AL register. "
},
{
"code": null,
"e": 25583,
"s": 25526,
"text": "The result from AL register gets stored in 600 offset. "
},
{
"code": null,
"e": 25598,
"s": 25583,
"text": "Assumptions – "
},
{
"code": null,
"e": 25890,
"s": 25598,
"text": "The counter value which tells that how many numbers are there in the series is stored at memory location 500. The elements of the series are stored in a continuous memory location starting from 501. The result is stored at a memory location 600. The starting address of the program is 400. "
},
{
"code": null,
"e": 26001,
"s": 25890,
"text": "The counter value which tells that how many numbers are there in the series is stored at memory location 500. "
},
{
"code": null,
"e": 26091,
"s": 26001,
"text": "The elements of the series are stored in a continuous memory location starting from 501. "
},
{
"code": null,
"e": 26139,
"s": 26091,
"text": "The result is stored at a memory location 600. "
},
{
"code": null,
"e": 26185,
"s": 26139,
"text": "The starting address of the program is 400. "
},
{
"code": null,
"e": 26193,
"s": 26185,
"text": "Test – "
},
{
"code": null,
"e": 26212,
"s": 26193,
"text": "Syntax: TEST d, s "
},
{
"code": null,
"e": 26321,
"s": 26212,
"text": "It performs AND operation of destination(d) and source(s) but result is not stored only flags ate modified. "
},
{
"code": null,
"e": 26334,
"s": 26321,
"text": "Algorithm – "
},
{
"code": null,
"e": 26949,
"s": 26334,
"text": "Load data from offset 500 to register CL. Increment the value of offset. Load 00H into CH register. Load 00H into AL register. Load data from offset to register BL. Use TEST instruction to check whether data in BL is even or odd, if zero flag is set means data is even then go to step 7 otherwise data is odd then go to step 8. Jump to memory location 413H. Add the data of AL and BL registers and store the result in AL register. Increment the value of offset. Jump to memory location 40AH if content of CX is not equal to zero otherwise go to step 11. Load the data from AL register to memory location 600. End "
},
{
"code": null,
"e": 26992,
"s": 26949,
"text": "Load data from offset 500 to register CL. "
},
{
"code": null,
"e": 27024,
"s": 26992,
"text": "Increment the value of offset. "
},
{
"code": null,
"e": 27052,
"s": 27024,
"text": "Load 00H into CH register. "
},
{
"code": null,
"e": 27080,
"s": 27052,
"text": "Load 00H into AL register. "
},
{
"code": null,
"e": 27119,
"s": 27080,
"text": "Load data from offset to register BL. "
},
{
"code": null,
"e": 27283,
"s": 27119,
"text": "Use TEST instruction to check whether data in BL is even or odd, if zero flag is set means data is even then go to step 7 otherwise data is odd then go to step 8. "
},
{
"code": null,
"e": 27314,
"s": 27283,
"text": "Jump to memory location 413H. "
},
{
"code": null,
"e": 27388,
"s": 27314,
"text": "Add the data of AL and BL registers and store the result in AL register. "
},
{
"code": null,
"e": 27420,
"s": 27388,
"text": "Increment the value of offset. "
},
{
"code": null,
"e": 27513,
"s": 27420,
"text": "Jump to memory location 40AH if content of CX is not equal to zero otherwise go to step 11. "
},
{
"code": null,
"e": 27569,
"s": 27513,
"text": "Load the data from AL register to memory location 600. "
},
{
"code": null,
"e": 27575,
"s": 27569,
"text": "End "
},
{
"code": null,
"e": 27586,
"s": 27575,
"text": "Program – "
},
{
"code": null,
"e": 27603,
"s": 27588,
"text": "Explanation – "
},
{
"code": null,
"e": 28356,
"s": 27603,
"text": "MOV SI, 500 load the value 500 to SI. MOV CL, [SI] loads the data of offset SI into CL register. INC SI increases the value of SI by one. MOV CH, 00 loads the value 00 into CH register. MOV AL, 00 loads the value 00 into AL register. MOV BL, [SI] loads the data of offset SI into BL register. TEST BL, 01 AND operation of content of BL and value 01 and flag registers are modified. JZ 413 jump to 413 memory location if zero flag is set. ADD AL, BL add the contents of AL and BL registers and store the result in AL register. INC SI increases the value of SI by one. LOOP 40A jump to 40A memory location if value of CX is not equal to zero. MOV [600], AL loads the content of AL register to memory location 600. HLT stops the execution of the program. "
},
{
"code": null,
"e": 28395,
"s": 28356,
"text": "MOV SI, 500 load the value 500 to SI. "
},
{
"code": null,
"e": 28455,
"s": 28395,
"text": "MOV CL, [SI] loads the data of offset SI into CL register. "
},
{
"code": null,
"e": 28497,
"s": 28455,
"text": "INC SI increases the value of SI by one. "
},
{
"code": null,
"e": 28546,
"s": 28497,
"text": "MOV CH, 00 loads the value 00 into CH register. "
},
{
"code": null,
"e": 28595,
"s": 28546,
"text": "MOV AL, 00 loads the value 00 into AL register. "
},
{
"code": null,
"e": 28655,
"s": 28595,
"text": "MOV BL, [SI] loads the data of offset SI into BL register. "
},
{
"code": null,
"e": 28745,
"s": 28655,
"text": "TEST BL, 01 AND operation of content of BL and value 01 and flag registers are modified. "
},
{
"code": null,
"e": 28802,
"s": 28745,
"text": "JZ 413 jump to 413 memory location if zero flag is set. "
},
{
"code": null,
"e": 28891,
"s": 28802,
"text": "ADD AL, BL add the contents of AL and BL registers and store the result in AL register. "
},
{
"code": null,
"e": 28933,
"s": 28891,
"text": "INC SI increases the value of SI by one. "
},
{
"code": null,
"e": 29008,
"s": 28933,
"text": "LOOP 40A jump to 40A memory location if value of CX is not equal to zero. "
},
{
"code": null,
"e": 29080,
"s": 29008,
"text": "MOV [600], AL loads the content of AL register to memory location 600. "
}
]
|
An Introduction to Geographical Data Visualization | by Ryan Gotesman | Towards Data Science | I have always been very impressed with figures that manage to combine data with geographical information. They are some of the most beautiful examples of data visualization I can think of and the information you can gleam with a single glance is astounding.
These kinds of figures are referred to as choropleth maps and according to Wiki are defined as:
Thematic map(s) in which areas are shaded or patterned in proportion to the measurement of the statistical variable being displayed on the map, such as population density or per-capita income
Think of them like heat maps that take geographic boundaries into account.
We can make choropleth maps using the plotly library.
We will begin with a simple example. We import as follows:
import plotly as pyimport plotly.graph_objs as go
We want to create a ‘data’ variable which we will pass into the go.Figure() method to generate the choropleth map.
Let’s start with a map of the big wide world!
We define the ‘data’ variable as follows:
data = dict ( type = 'choropleth', locations = ['China','Canada','Brazil'], locationmode='country names', colorscale = ['Viridis'], z=[10,20,30])
The ‘data’ variable is a dictionary. We define its ‘type’ as choropleth. In ‘locations’ we give the names of various countries, and in ‘locationmode’ we tell the method we plan on specifying location by country name. Finally we specify the color scale and the values we want to assign to each country.
You can run plotly either online or offline and to avoid having to make an account I’ll be running it offline.
We generate the image with:
map = go.Figure(data=[data])py.offline.plot(map)
and get a map of the world. Note when we hover our mouse over a country we get its name and assigned value.
Rather than assigning arbitrary values to a few countries we can make this map more interesting by plotting the happiness of each nation.
Using data from the 2017 World Happiness Report, we can create this figure with the code below
import plotly as pyimport plotly.graph_objs as goimport pandas as pddf = pd.read_csv("2017.csv")data = dict ( type = 'choropleth', locations = df['Country'], locationmode='country names', colorscale = ['Viridis'], z=df['Happiness.Score'])map = go.Figure(data=[data])py.offline.plot(map)
to generate the following figure. Without any need for statistical analysis we can confidently say some of the happiest countries are found in Europe and North America while some of the least happy countries are concentrated in Africa. That is the power of geographical visualization.
Now let’s do some in-depth visualization of the United States.
We first change the ‘data’ variable so that locationmode specifies American states. Next we add a ‘layout’ variable which allows us to customize certain aspects of the map. In this case we are telling the map to focus in on the United States rather than show the entire world which is the default. If you wanted you could also focus in on Asia, Europe, Africa, North America or South America.
Putting it all together we assign values to Arizona, California and Vermont with the following code:
data = dict ( type = 'choropleth', locations = ['AZ','CA','VT'], locationmode='USA-states', colorscale = ['Viridis'], z=[10,20,30])lyt = dict(geo=dict(scope='usa'))map = go.Figure(data=[data], layout = lyt)py.offline.plot(map)
to get this map:
We can now start to do some pretty fun things.
It’s well known that the world is experiencing a rapid decline in bee populations. This is awful because it will spell the end to lovely honey nut cheerio breakfasts. We can get our hands on the number of bee colonies per state and use our plotting skills to see which states have the most bee colonies.
df = pd.read_csv("honeyproduction.csv")data = dict ( type = 'choropleth', locations = df['state'], locationmode='USA-states', colorscale = ['Viridis'], z=df['numcol'])lyt = dict(geo=dict(scope='usa'))map = go.Figure(data=[data], layout = lyt)py.offline.plot(map)
It looks like North Dakota has the most colonies, followed by California, with a lot of room for improvement in the rest of the country. We are also missing data for Alaska and a couple of states in the top right, but that’s a limitation of this dataset.
We’ve touched upon happiness and bee colonies. Now all that’s left is unemployment.
Let’s go deeper: instead of looking at countries or states, we can graph the counties in each state. We will get data on the 2016 percent unemployment for each county from here.
Every single county can be uniquely identified with a 5-digit FIPS number.
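As a quick illustration (the specific state and county codes here are assumed for the example), a county FIPS code is just the zero-padded 2-digit state code concatenated with the zero-padded 3-digit county code:
state_fips, county_fips = '6', '37' # e.g. California, Los Angeles County (assumed)
fips = state_fips.zfill(2) + county_fips.zfill(3)
print(fips) # 06037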
We begin with our imports and notice this time we import the plotly.figure_factory
import plotly as pyimport plotly.figure_factory as ffimport numpy as npimport pandas as pd
Next we read in the FIPS numbers and also the percent unemployment
df=pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/laucnty16.csv")df ['FIPS'] = df['State FIPS Code'].apply(lambda x: str(x).zfill(2)) + df['County FIPS Code'].apply(lambda x: str(x).zfill(3))fips = df['FIPS'].tolist()values = df['Unemployment Rate (%)'].tolist()
Since there are many different values for unemployment, and we don’t want our color scale to try to represent 1000 different values, we bin them into a fixed palette with binning endpoints spanning 1 to 12 (percent) using the following
colorscale=["#f7fbff","#ebf3fb","#deebf7","#d2e3f3","#c6dbef","#b3d2e9","#9ecae1", "#85bcdb","#6baed6","#57a0ce","#4292c6","#3082be","#2171b5","#1361a9","#08519c","#0b4083","#08306b"]endpts = list(np.linspace(1, 12, len(colorscale) - 1))
and pass all these arguments into our choropleth method
fig = ff.create_choropleth(fips=fips, values=values, colorscale= colorscale, binning_endpoints=endpts)py.offline.plot(fig)
to get
Wow what a pretty map!
Using such a graph we can quickly spot areas with elevated rates of unemployment (encircled in red).
For future analysis it would be interesting to figure out why these areas are experiencing more unemployment and what can be done about it.
I hope you’ve enjoyed this quick introduction to geographical data visualization. Let me know if you have any questions and if you have any examples of cool geographical data visualizations you’d like to share. | [
{
"code": null,
"e": 430,
"s": 172,
"text": "I have always been very impressed with figures that manage to combine data with geographical information. They are some of the most beautiful examples of data visualization I can think of and the information you can gleam with a single glance is astounding."
},
{
"code": null,
"e": 526,
"s": 430,
"text": "These kinds of figures are referred to as choropleth maps and according to Wiki are defined as:"
},
{
"code": null,
"e": 718,
"s": 526,
"text": "Thematic map(s) in which areas are shaded or patterned in proportion to the measurement of the statistical variable being displayed on the map, such as population density or per-capita income"
},
{
"code": null,
"e": 793,
"s": 718,
"text": "Think of them like heat maps that take geographic boundaries into account."
},
{
"code": null,
"e": 847,
"s": 793,
"text": "We can make choropleth maps using the plotly library."
},
{
"code": null,
"e": 906,
"s": 847,
"text": "We will begin with a simple example. We import as follows:"
},
{
"code": null,
"e": 956,
"s": 906,
"text": "import plotly as pyimport plotly.graph_objs as go"
},
{
"code": null,
"e": 1071,
"s": 956,
"text": "We want to create a ‘data’ variable which we will pass into the go.Figure() method to generate the choropleth map."
},
{
"code": null,
"e": 1117,
"s": 1071,
"text": "Let’s start with a map of the big wide world!"
},
{
"code": null,
"e": 1159,
"s": 1117,
"text": "We define the ‘data’ variable as follows:"
},
{
"code": null,
"e": 1320,
"s": 1159,
"text": "data = dict ( type = 'choropleth', locations = ['China','Canada','Brazil'], locationmode='country names', colorscale = ['Viridis'], z=[10,20,30])"
},
{
"code": null,
"e": 1620,
"s": 1320,
"text": "The ‘data’ variable is a dictionary. We define it’s ‘type’ as choropleth. In ‘locations’ we give the names of various countries and in locationmode we tell the method we plan on specifying location by country name. Finally we specify the color scale and the values we want to assign to each country."
},
{
"code": null,
"e": 1731,
"s": 1620,
"text": "You can run plotly either online or offline and to avoid having to make an account I’ll be running it offline."
},
{
"code": null,
"e": 1759,
"s": 1731,
"text": "We generate the image with:"
},
{
"code": null,
"e": 1808,
"s": 1759,
"text": "map = go.Figure(data=[data])py.offline.plot(map)"
},
{
"code": null,
"e": 1916,
"s": 1808,
"text": "and get a map of the world. Note when we hover our mouse over a country we get its name and assigned value."
},
{
"code": null,
"e": 2054,
"s": 1916,
"text": "Rather than assigning arbitrary values to a few countries we can make this map more interesting by plotting the happiness of each nation."
},
{
"code": null,
"e": 2149,
"s": 2054,
"text": "Using data from the 2017 World Happiness Report we can create this figure using the code below"
},
{
"code": null,
"e": 2451,
"s": 2149,
"text": "import plotly as pyimport plotly.graph_objs as goimport pandas as pddf = pd.read_csv(\"2017.csv\")data = dict ( type = 'choropleth', locations = df['Country'], locationmode='country names', colorscale = ['Viridis'], z=df['Happiness.Score'])map = go.Figure(data=[data])py.offline.plot(map)"
},
{
"code": null,
"e": 2736,
"s": 2451,
"text": "to generate the following figure. Without any need for statistical analysis we can confidently say some of the happiest countries are found in Europe and North America while some of the least happy countries are concentrated in Africa. That is the power of geographical visualization."
},
{
"code": null,
"e": 2799,
"s": 2736,
"text": "Now let’s do some in-depth visualization of the United States."
},
{
"code": null,
"e": 3192,
"s": 2799,
"text": "We first change the ‘data’ variable so that locationmode specifies American states. Next we add a ‘layout’ variable which allows us to customize certain aspects of the map. In this case we are telling the map to focus in on the United States rather than show the entire world which is the default. If you wanted you could also focus in on Asia, Europe, Africa, North America or South America."
},
{
"code": null,
"e": 3293,
"s": 3192,
"text": "Putting it all together we assign values to Arizona, California and Vermont with the following code:"
},
{
"code": null,
"e": 3535,
"s": 3293,
"text": "data = dict ( type = 'choropleth', locations = ['AZ','CA','VT'], locationmode='USA-states', colorscale = ['Viridis'], z=[10,20,30])lyt = dict(geo=dict(scope='usa'))map = go.Figure(data=[data], layout = lyt)py.offline.plot(map)"
},
{
"code": null,
"e": 3552,
"s": 3535,
"text": "to get this map:"
},
{
"code": null,
"e": 3599,
"s": 3552,
"text": "We can now start to do some pretty fun things."
},
{
"code": null,
"e": 3906,
"s": 3599,
"text": "It’s well known that the world is experiencing a rapid decline in bee populations. This is awful because it will spell the end to lovely honey nut cheerio breakfasts. We can get our hands on the number of bee colonies per state and use our plotting to skills to see which states have the most bee colonies."
},
{
"code": null,
"e": 4184,
"s": 3906,
"text": "df = pd.read_csv(\"honeyproduction.csv\")data = dict ( type = 'choropleth', locations = df['state'], locationmode='USA-states', colorscale = ['Viridis'], z=df['numcol'])lyt = dict(geo=dict(scope='usa'))map = go.Figure(data=[data], layout = lyt)py.offline.plot(map)"
},
{
"code": null,
"e": 4438,
"s": 4184,
"text": "It looks like North Dakota has the most colonies following by California with alot of room for improvement in the rest of the country. We are also missing some data for Alaska and a couple states in the top right but that’s a limitation of this dataset."
},
{
"code": null,
"e": 4522,
"s": 4438,
"text": "We’ve touched upon happiness and bee colonies. Now all that’s left is unemployment."
},
{
"code": null,
"e": 4704,
"s": 4522,
"text": "Let’s go deeper and instead of looking at countries or states, and we can graph counties in each states. We will get data of the 2016 percent unemployment for each county from here."
},
{
"code": null,
"e": 4778,
"s": 4704,
"text": "Every single county can be uniquely identified with a 5 digit FIP number."
},
{
"code": null,
"e": 4861,
"s": 4778,
"text": "We begin with our imports and notice this time we import the plotly.figure_factory"
},
{
"code": null,
"e": 4952,
"s": 4861,
"text": "import plotly as pyimport plotly.figure_factory as ffimport numpy as npimport pandas as pd"
},
{
"code": null,
"e": 5014,
"s": 4952,
"text": "Next we read in the FIP numbers and also percent unemployment"
},
{
"code": null,
"e": 5300,
"s": 5014,
"text": "df=pd.read_csv(\"https://raw.githubusercontent.com/plotly/datasets/master/laucnty16.csv\")df ['FIPS'] = df['State FIPS Code'].apply(lambda x: str(x).zfill(2)) + df['County FIPS Code'].apply(lambda x: str(x).zfill(3))fips = df['FIPS'].tolist()values = df['Unemployment Rate (%)'].tolist()"
},
{
"code": null,
"e": 5483,
"s": 5300,
"text": "Since there are many different values for unemployment and we don’t want our color scale to try to represent 1000 different values we restrict it to just 12 colors with the following"
},
{
"code": null,
"e": 5733,
"s": 5483,
"text": "colorscale=[\"#f7fbff\",\"#ebf3fb\",\"#deebf7\",\"#d2e3f3\",\"#c6dbef\",\"#b3d2e9\",\"#9ecae1\", \"#85bcdb\",\"#6baed6\",\"#57a0ce\",\"#4292c6\",\"#3082be\",\"#2171b5\",\"#1361a9\",\"#08519c\",\"#0b4083\",\"#08306b\"]endpts = list(np.linspace(1, 12, len(colorscale) - 1))"
},
{
"code": null,
"e": 5789,
"s": 5733,
"text": "and pass all these arguments into our choropleth method"
},
{
"code": null,
"e": 5912,
"s": 5789,
"text": "fig = ff.create_choropleth(fips=fips, values=values, colorscale= colorscale, binning_endpoints=endpts)py.offline.plot(fig)"
},
{
"code": null,
"e": 5919,
"s": 5912,
"text": "to get"
},
{
"code": null,
"e": 5942,
"s": 5919,
"text": "Wow what a pretty map!"
},
{
"code": null,
"e": 6043,
"s": 5942,
"text": "Using such a graph we can quickly spot areas with elevated rates of unemployment (encircled in red)."
},
{
"code": null,
"e": 6183,
"s": 6043,
"text": "For future analysis it would be interesting to figure out why these areas are experiencing more unemployment and what can be done about it."
}
]
|
Python Number shuffle() Method | Python number method shuffle() randomizes the items of a list in place.
Following is the syntax for shuffle() method −
shuffle (lst )
Note − This function is not accessible directly, so we need to import the random module and then call this function using the random object.
lst − This must be a mutable sequence such as a list; tuples are immutable and cannot be shuffled in place.
This method does not return any value.
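Because shuffle() works in place and returns None, assigning its result back to a variable is a common mistake. Here is a minimal sketch (the variable names are illustrative), which also shows that seeding the generator makes the shuffle reproducible −
import random

lst = [20, 16, 10, 5]

random.seed(7)          # optional: seed for a reproducible shuffle
random.shuffle(lst)     # shuffles lst in place
print(lst)              # the reordered list; the exact order depends on the seed

result = random.shuffle(lst)
print(result)           # prints None -- shuffle() returns no value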
The following example shows the usage of shuffle() method.
#!/usr/bin/python
import random

lst = [20, 16, 10, 5]   # a mutable list to shuffle (avoid shadowing the built-in name 'list')

random.shuffle(lst)
print("Reshuffled list : ", lst)

random.shuffle(lst)
print("Reshuffled list : ", lst)
When we run the above program, it produces a result like the following −
Reshuffled list : [16, 5, 10, 20]
Reshuffled list : [16, 5, 20, 10]
| [
{
"code": null,
"e": 2316,
"s": 2244,
"text": "Python number method shuffle() randomizes the items of a list in place."
},
{
"code": null,
"e": 2363,
"s": 2316,
"text": "Following is the syntax for shuffle() method −"
},
{
"code": null,
"e": 2379,
"s": 2363,
"text": "shuffle (lst )\n"
},
{
"code": null,
"e": 2531,
"s": 2379,
"text": "Note − This function is not accessible directly, so we need to import shuffle module and then we need to call this function using random static object."
},
{
"code": null,
"e": 2568,
"s": 2531,
"text": "lst − This could be a list or tuple."
},
{
"code": null,
"e": 2605,
"s": 2568,
"text": "lst − This could be a list or tuple."
},
{
"code": null,
"e": 2644,
"s": 2605,
"text": "This method does not return any value."
},
{
"code": null,
"e": 2703,
"s": 2644,
"text": "The following example shows the usage of shuffle() method."
},
{
"code": null,
"e": 2871,
"s": 2703,
"text": "#!/usr/bin/python\nimport random\n\nlist = [20, 16, 10, 5];\nrandom.shuffle(list)\nprint \"Reshuffled list : \", list\n\nrandom.shuffle(list)\nprint \"Reshuffled list : \", list"
},
{
"code": null,
"e": 2929,
"s": 2871,
"text": "When we run above program, it produces following result −"
},
{
"code": null,
"e": 3000,
"s": 2929,
"text": "Reshuffled list : [16, 5, 10, 20]\nReshuffled list : [16, 5, 20, 10]\n"
}
]
|
Keras - Lambda Layers | Lambda is used to transform the input data using an expression or function. For example, if Lambda with expression lambda x: x ** 2 is applied to a layer, then its input data will be squared before processing.
Lambda has four arguments and they are as follows −
keras.layers.Lambda(function, output_shape = None, mask = None, arguments = None)
function represents the lambda function to apply.
output_shape represents the shape of the transformed input.
mask represents the mask to be applied, if any.
arguments represents the optional arguments for the lambda function, as a dictionary.
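A minimal sketch of a Lambda layer in use (the model and input shape here are illustrative assumptions, not from the original text) −
from keras.models import Sequential
from keras.layers import Lambda

model = Sequential()

# Square every element of the input before further processing
model.add(Lambda(lambda x: x ** 2, input_shape = (4,)))

model.summary()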
| [
{
"code": null,
"e": 2261,
"s": 2051,
"text": "Lambda is used to transform the input data using an expression or function. For example, if Lambda with expression lambda x: x ** 2 is applied to a layer, then its input data will be squared before processing."
},
{
"code": null,
"e": 2316,
"s": 2261,
"text": "RepeatVector has four arguments and it is as follows −"
},
{
"code": null,
"e": 2399,
"s": 2316,
"text": "keras.layers.Lambda(function, output_shape = None, mask = None, arguments = None)\n"
},
{
"code": null,
"e": 2439,
"s": 2399,
"text": "function represent the lambda function."
},
{
"code": null,
"e": 2479,
"s": 2439,
"text": "function represent the lambda function."
},
{
"code": null,
"e": 2538,
"s": 2479,
"text": "output_shape represent the shape of the transformed input."
},
{
"code": null,
"e": 2597,
"s": 2538,
"text": "output_shape represent the shape of the transformed input."
},
{
"code": null,
"e": 2644,
"s": 2597,
"text": "mask represent the mask to be applied, if any."
},
{
"code": null,
"e": 2691,
"s": 2644,
"text": "mask represent the mask to be applied, if any."
},
{
"code": null,
"e": 2771,
"s": 2691,
"text": "arguments represent the optional argument for the lamda function as dictionary."
},
{
"code": null,
"e": 2851,
"s": 2771,
"text": "arguments represent the optional argument for the lamda function as dictionary."
}
]
|
Sum of the alphabetical values of the characters of a string in C++ | In this problem, we are given an array of string str[]. Our task is to find the score of all strings in the array. The score is defined as the product of the position of the string with the sum of the alphabetical values of the characters of the string.
Let’s take an example to understand the problem,
Input
str[] = {“Learn”, “programming”, “tutorials”, “point” }
Explanation
Position of “Learn” − 1 →
sum = 12 + 5 + 1 + 18 + 14 = 50. Score = 50
Position of “programming” − 2 →
sum = 16 + 18 + 15 + 7 + 18 + 1 + 13 + 13 + 9 + 14 + 7 = 131
Score = 262
Position of “tutorials” − 3 →
sum = 20 + 21 + 20 + 15 + 18 + 9 + 1 + 12 +
19 = 135
Score = 405
Position of “point” − 4 →
sum = 16 + 15 + 9 + 14 + 20 = 74
Score = 296
To solve this problem, a simple approach is to iterate over all strings of the array. For each string, store its position and find the sum of the alphabetical values of its characters. Then multiply the position by the sum and return the product.
Step 1 − Iterate over the string and store the position and for each string follow step 2 and 3 −
Step 2 − Calculate the sum of the alphabets of the string.
Step 3 − Print the product of the position and the sum.
Program to illustrate the working of the above solution,
#include <iostream>
using namespace std;
// Score of a string = (sum of alphabetical values of its characters) * (its position)
int strScore(string s, int index){
   int score = 0;
   for (int j = 0; j < (int)s.length(); j++)
      score += s[j] - 'a' + 1;   // 'a' -> 1, 'b' -> 2, ...
   score *= index;
   return score;
}
int main(){
   string str[] = { "learn", "programming", "tutorials", "point" };
   int n = sizeof(str) / sizeof(str[0]);
   for(int i = 0; i < n; i++){
      cout << "The score of string ' " << str[i] << " ' is " << strScore(str[i], i + 1) << endl;
   }
   return 0;
}
The score of string ' learn ' is 50
The score of string ' programming ' is 262
The score of string ' tutorials ' is 405
The score of string ' point ' is 296 | [
{
"code": null,
"e": 1316,
"s": 1062,
"text": "In this problem, we are given an array of string str[]. Our task is to find the score of all strings in the array. The score is defined as the product of the position of the string with the sum of the alphabetical values of the characters of the string."
},
{
"code": null,
"e": 1365,
"s": 1316,
"text": "Let’s take an example to understand the problem,"
},
{
"code": null,
"e": 1372,
"s": 1365,
"text": "Input "
},
{
"code": null,
"e": 1428,
"s": 1372,
"text": "str[] = {“Learn”, “programming”, “tutorials”, “point” }"
},
{
"code": null,
"e": 1441,
"s": 1428,
"text": "Explanation "
},
{
"code": null,
"e": 1467,
"s": 1441,
"text": "Position of “Learn” − 1 →"
},
{
"code": null,
"e": 1511,
"s": 1467,
"text": "sum = 12 + 5 + 1 + 18 + 14 = 50. Score = 50"
},
{
"code": null,
"e": 1543,
"s": 1511,
"text": "Position of “programming” − 2 →"
},
{
"code": null,
"e": 1616,
"s": 1543,
"text": "sum = 16 + 18 + 15 + 7 + 18 + 1 + 13 + 13 + 9 + 14 + 7 = 131\nScore = 262"
},
{
"code": null,
"e": 1646,
"s": 1616,
"text": "Position of “tutorials” − 1 →"
},
{
"code": null,
"e": 1711,
"s": 1646,
"text": "sum = 20 + 21 + 20 + 15 + 18 + 9 + 1 + 12 +\n19 = 135\nScore = 405"
},
{
"code": null,
"e": 1737,
"s": 1711,
"text": "Position of “point” − 1 →"
},
{
"code": null,
"e": 1782,
"s": 1737,
"text": "sum = 16 + 15 + 9 + 14 + 20 = 74\nScore = 296"
},
{
"code": null,
"e": 2017,
"s": 1782,
"text": "To solve this problem, a simple approach will be iterating over all strings of the array. For each string, store the position and find the sum of alphabetical values of the string. The multiple position and sum and return the product."
},
{
"code": null,
"e": 2115,
"s": 2017,
"text": "Step 1 − Iterate over the string and store the position and for each string follow step 2 and 3 −"
},
{
"code": null,
"e": 2174,
"s": 2115,
"text": "Step 2 − Calculate the sum of the alphabets of the string."
},
{
"code": null,
"e": 2222,
"s": 2174,
"text": "Step 3 − print the product of position and sum."
},
{
"code": null,
"e": 2279,
"s": 2222,
"text": "Program to illustrate the working of the above solution,"
},
{
"code": null,
"e": 2290,
"s": 2279,
"text": " Live Demo"
},
{
"code": null,
"e": 2812,
"s": 2290,
"text": "#include <iostream>\nusing namespace std;\nint strScore(string str[], string s, int n, int index){\n int score = 0;\n for (int j = 0; j < s.length(); j++)\n score += s[j] - 'a' + 1;\n score *= index;\n return score;\n}\nint main(){\n string str[] = { \"learn\", \"programming\", \"tutorials\", \"point\" };\n int n = sizeof(str) / sizeof(str[0]);\n string s = str[0];\n for(int i = 0; i<n; i++){\n s = str[i];\n cout<<\"The score of string ' \"<<str[i]<<\" ' is \"<<strScore(str, s, n, i+1)<<endl;\n }\n return 0;\n}"
},
{
"code": null,
"e": 2969,
"s": 2812,
"text": "The score of string ' learn ' is 50\nThe score of string ' programming ' is 262\nThe score of string ' tutorials ' is 405\nThe score of string ' point ' is 296"
}
]
|
ObjectInputStream defaultReadObject() method in Java with examples - GeeksforGeeks | 05 Jun, 2020
The defaultReadObject() method of the ObjectInputStream class in Java is used to read the non-static and non-transient fields of the current class from this stream.
Syntax:
public void defaultReadObject()
Parameters: This method does not accept any parameter.
Return Value: This method returns the value that has been read.
Errors and Exceptions: The function throws three exceptions, which are described below:
ClassNotFoundException: The exception is thrown if the class of a serialized object could not be found.
IOException: The exception is thrown if an I/O error has occurred.
NotActiveException: The exception is thrown if the stream is not currently reading objects.
The programs below illustrate the above method:
Program 1:
Java
// Java program to illustrate
// the above method

import java.io.*;

public class GFG {
    public static void main(String[] args)
    {
        try {
            // create a new file
            // with an ObjectOutputStream
            FileOutputStream out
                = new FileOutputStream("Shubham.txt");
            ObjectOutputStream out1
                = new ObjectOutputStream(out);

            // write
            out1.writeObject(new solve());

            // Flushes the stream
            out1.flush();

            // create an ObjectInputStream
            // for the file
            ObjectInputStream example
                = new ObjectInputStream(
                    new FileInputStream("Shubham.txt"));

            // Read from the stream
            solve ans = (solve)example.readObject();
            System.out.println(ans.str);
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    static class solve implements Serializable {
        String str = "Geeksforgeeks";

        private void readObject(ObjectInputStream res)
            throws IOException, ClassNotFoundException
        {
            // Use defaultReadObject() to read the
            // non-static fields of the present class
            // from the ObjectInputStream
            res.defaultReadObject();
        }
    }
}
Output:
Geeksforgeeks
Program 2:
Java
// Java program to illustrate
// the above method

import java.io.*;

public class GFG {
    public static void main(String[] args)
    {
        try {
            // create a new file
            // with an ObjectOutputStream
            FileOutputStream out
                = new FileOutputStream("Shubham.txt");
            ObjectOutputStream out1
                = new ObjectOutputStream(out);

            // write
            out1.writeObject(new solve());

            // Flushes the stream
            out1.flush();

            // create an ObjectInputStream
            // for the file
            ObjectInputStream example
                = new ObjectInputStream(
                    new FileInputStream("Shubham.txt"));

            // Read from the stream
            solve ans = (solve)example.readObject();

            // System.out.println(ans.str);
            System.out.println(ans.in);
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    static class solve implements Serializable {
        // String str = "Geeksforgeeks";
        Integer in = new Integer(112414);

        private void readObject(ObjectInputStream res)
            throws IOException, ClassNotFoundException
        {
            // Use defaultReadObject() to read the
            // non-static fields of the present class
            // from the ObjectInputStream
            res.defaultReadObject();
        }
    }
}
Output:
112414
Reference:
https://docs.oracle.com/javase/10/docs/api/java/io/ObjectInputStream.html#defaultReadObject()
Java-Functions
Java-IO package
Java
| [
{
"code": null,
"e": 23948,
"s": 23920,
"text": "\n05 Jun, 2020"
},
{
"code": null,
"e": 24113,
"s": 23948,
"text": "The defaultReadObject() method of the ObjectInputStream class in Java is used to read the non-static and non-transient fields of the current class from this stream."
},
{
"code": null,
"e": 24121,
"s": 24113,
"text": "Syntax:"
},
{
"code": null,
"e": 24154,
"s": 24121,
"text": "public void defaultReadObject()\n"
},
{
"code": null,
"e": 24209,
"s": 24154,
"text": "Parameters: This method does not accept any parameter."
},
{
"code": null,
"e": 24273,
"s": 24209,
"text": "Return Value: This method returns the value that has been read."
},
{
"code": null,
"e": 24359,
"s": 24273,
"text": "Errors and Exceptions: The function throws three exceptions which is described below:"
},
{
"code": null,
"e": 24463,
"s": 24359,
"text": "ClassNotFoundException: The exception is thrown if the class of a serialized object could not be found."
},
{
"code": null,
"e": 24530,
"s": 24463,
"text": "IOException: The exception is thrown if an I/O error has occurred."
},
{
"code": null,
"e": 24622,
"s": 24530,
"text": "NotActiveException: The exception is thrown if the stream is not currently reading objects."
},
{
"code": null,
"e": 24665,
"s": 24622,
"text": "Below program illustrate the above method:"
},
{
"code": null,
"e": 24676,
"s": 24665,
"text": "Program 1:"
},
{
"code": null,
"e": 24681,
"s": 24676,
"text": "Java"
},
{
"code": "// Java program to illustrate// the above method import java.io.*; public class GFG { public static void main(String[] args) { try { // create a new file // with an ObjectOutputStream FileOutputStream out = new FileOutputStream(\"Shubham.txt\"); ObjectOutputStream out1 = new ObjectOutputStream(out); // write out1.writeObject(new solve()); // Flushes the stream out1.flush(); // create an ObjectInputStream // for the file ObjectInputStream example = new ObjectInputStream( new FileInputStream(\"Shubham.txt\")); // Read from the stream solve ans = (solve)example.readObject(); System.out.println(ans.str); } catch (Exception ex) { ex.printStackTrace(); } } static class solve implements Serializable { String str = \"Geeksforgeeks\"; private void readObject(ObjectInputStream res) throws IOException, ClassNotFoundException { // By using defaultReadObject() method is // to read non-static fields of the present // class from the ObjectInputStream res.defaultReadObject(); } }}",
"e": 26046,
"s": 24681,
"text": null
},
{
"code": null,
"e": 26054,
"s": 26046,
"text": "Output:"
},
{
"code": null,
"e": 26065,
"s": 26054,
"text": "Program 2:"
},
{
"code": null,
"e": 26070,
"s": 26065,
"text": "Java"
},
{
"code": "// Java program to illustrate// the above method import java.io.*; public class GFG { public static void main(String[] args) { try { // create a new file // with an ObjectOutputStream FileOutputStream out = new FileOutputStream(\"Shubham.txt\"); ObjectOutputStream out1 = new ObjectOutputStream(out); // write out1.writeObject(new solve()); // Flushes the stream out1.flush(); // create an ObjectInputStream // for the file ObjectInputStream example = new ObjectInputStream( new FileInputStream(\"Shubham.txt\")); // Read from the stream solve ans = (solve)example.readObject(); // System.out.println(ans.str); System.out.println(ans.in); } catch (Exception ex) { ex.printStackTrace(); } } static class solve implements Serializable { // String str = \"Geeksforgeeks\"; Integer in = new Integer(112414); private void readObject(ObjectInputStream res) throws IOException, ClassNotFoundException { // By using defaultReadObject() method is // to read non-static fields of the present // class from the ObjectInputStream res.defaultReadObject(); } }}",
"e": 27521,
"s": 26070,
"text": null
},
{
"code": null,
"e": 27529,
"s": 27521,
"text": "Output:"
},
{
"code": null,
"e": 27540,
"s": 27529,
"text": "Reference:"
},
{
"code": null,
"e": 27634,
"s": 27540,
"text": "https://docs.oracle.com/javase/10/docs/api/java/io/ObjectInputStream.html#defaultReadObject()"
},
{
"code": null,
"e": 27649,
"s": 27634,
"text": "Java-Functions"
},
{
"code": null,
"e": 27665,
"s": 27649,
"text": "Java-IO package"
},
{
"code": null,
"e": 27670,
"s": 27665,
"text": "Java"
},
{
"code": null,
"e": 27675,
"s": 27670,
"text": "Java"
}
]
|
How to use Pipenv with Jupyter and VSCode | by Daniel Deutsch | Towards Data Science | This article was written in 2019. During the years of development, I have encountered various issues with the pipenv setup. I changed the environment and dependency management to using a conda environment and a requirements file with pip-compile. My latest setup is described in my new article “How to start a data science project” https://towardsdatascience.com/how-to-start-a-data-science-project-boilerplate-in-2021-33d81393e50
If you still want to use pipenv then the article should still be valuable. Enjoy.
Currently, I study Artificial Intelligence at the JKU university, and for some exercises we need to use jupyter notebooks. Having worked a little bit with Python, the package manager pipenv proved to be valuable. Now, I encountered some problems using it with Jupyter notebooks and within VSCode. Therefore, here is a short guide on how I solved them.
The Issue
Developing with Jupyter Notebook in the browser
Develop with Jupyter Notebook in VSCode
About
As I described in my last article, Working with Jupyter and VSCode, I use pyenv and pipenv for managing all packages in my python development. I also referenced some articles on why this approach is helpful and easy to use. Now, it is necessary to dive a little deeper into it. There are two ways you would want to develop with jupyter notebook: either you work with it directly in the browser, or inside VSCode. In both use cases, problems can emerge.
Let’s say you already have the proper python environment on your system and now you want to create a specific one for a project.
First, create a Pipenv environment.
Make sure to navigate into the correct directory.
Use pipenv install <packages> to install all your packages.
Then use pipenv shell to activate your shell.
Then use pipenv install jupyter and afterward pipenv run jupyter notebook.
Now the jupyter server is started and your notebook will have access to the correct environment.
Now for the workflow within VSCode. Here, it is important to be aware of different shells. I often use a separate terminal (iterm2), and sometimes an activated shell is not recognized by VSCode, or you are in the wrong directory, or the shell is not activated at all. All of this causes problems. Therefore my workflow is as follows:
First, create a Pipenv environment.
Make sure to navigate into the correct directory.
Use pipenv install <packages> to install all your packages.
Then, be sure to have a proper settings file in your vscode folder with content like this:
{ "python.venvPath": "${workspaceFolder}/.venv/bin/python", "python.pythonPath": ".venv/bin/python",}
Afterward, you can choose the proper python environment within VSCode. (It should be the one created with Pipenv!) Now the correct environment is recognized for python files.
Normally this should be sufficient for VSCode and you can start the Jupyter server in it.
But sometimes you change the environment, or there is an issue in the settings file. If this is the case, you need to open the VSCode terminal and run pipenv shell to activate the shell. (Check if the correct environment is still selected in VSCode):
Now, after opening the .ipynb file, you will be able to run the cells and not get the error "... was not able to start jupyter server in environment xxx"
Let me know if it helps or if you have other issues or solutions working with Pipenv in VSCode and Jupyter Notebook.
I consider myself a problem solver. My strengths are to navigate in complex environments, provide solutions and breaking them down. My knowledge and interests evolve around business law and programming machine learning applications. I provide services in building data analysis and evaluating business-related concepts. | [
{
"code": null,
"e": 603,
"s": 172,
"text": "This article was written in 2019. During the years of development, I have encountered various issues with the pipenv setup. I changed the environment and dependency management to using a conda environment and a requirements file with pip-compile. My latest setup is described in my new article “How to start a data science project” https://towardsdatascience.com/how-to-start-a-data-science-project-boilerplate-in-2021-33d81393e50"
},
{
"code": null,
"e": 685,
"s": 603,
"text": "If you still want to use pipenv then the article should still be valuable. Enjoy."
},
{
"code": null,
"e": 1027,
"s": 685,
"text": "Currently, I study Artificial Intelligence at the JKU university and for some exercises, we need to use jupyter notebooks. Having worked a little bit with Python the package manager pipenv proofed to be valuable. Now, I encountered some problems using it with Jupyter notebooks and within VSCode. Therefore, a short guide on how I solved it."
},
{
"code": null,
"e": 1037,
"s": 1027,
"text": "The Issue"
},
{
"code": null,
"e": 1085,
"s": 1037,
"text": "Developing with Jupyter Notebook in the browser"
},
{
"code": null,
"e": 1125,
"s": 1085,
"text": "Develop with Jupyter Notebook in VSCode"
},
{
"code": null,
"e": 1131,
"s": 1125,
"text": "About"
},
{
"code": null,
"e": 1577,
"s": 1131,
"text": "As I described in my last article Working with Jupyter and VSCode I use pyenv and pipenv for managing all packages in my python development. I also referenced some articles why this way is helpful and easy to use. Now, it is necessary to dive a little more into it. There are two ways you would want to develop with jupyter notebook. Either you work with it directly in the browser or inside VSCode. In both use cases, there can emerge problems."
},
{
"code": null,
"e": 1706,
"s": 1577,
"text": "Let’s say you already have the proper python environment on your system and now you want to create a specific one for a project."
},
{
"code": null,
"e": 1969,
"s": 1706,
"text": "First, create a Pipenv environment.Make sure to navigate into the correct directory.Use pipenv install <packages> to install all your packages.Then use pipenv shell to activate your shell.Then use pipenv install jupyter and afterward pipenv run jupyter notebook."
},
{
"code": null,
"e": 2005,
"s": 1969,
"text": "First, create a Pipenv environment."
},
{
"code": null,
"e": 2055,
"s": 2005,
"text": "Make sure to navigate into the correct directory."
},
{
"code": null,
"e": 2115,
"s": 2055,
"text": "Use pipenv install <packages> to install all your packages."
},
{
"code": null,
"e": 2161,
"s": 2115,
"text": "Then use pipenv shell to activate your shell."
},
{
"code": null,
"e": 2236,
"s": 2161,
"text": "Then use pipenv install jupyter and afterward pipenv run jupyter notebook."
},
{
"code": null,
"e": 2333,
"s": 2236,
"text": "Now the jupyter server is started and your notebook will have access to the correct environment."
},
{
"code": null,
"e": 2645,
"s": 2333,
"text": "Now for the workflow within VSCode. Here, it is import to be aware of different shells. I often use a separate terminal (iterm2) and sometimes an activated shell is not recognized by VSCode or you are in a wrong directory or it is not activated. All of this causes problems. Therefore my workflow is as follows:"
},
{
"code": null,
"e": 2879,
"s": 2645,
"text": "First, create a Pipenv environment.Make sure to navigate into the correct directory.Use pipenv install <packages> to install all your packages.Then, be sure to have a proper settings file in your vscode folder with content like this:"
},
{
"code": null,
"e": 2915,
"s": 2879,
"text": "First, create a Pipenv environment."
},
{
"code": null,
"e": 2965,
"s": 2915,
"text": "Make sure to navigate into the correct directory."
},
{
"code": null,
"e": 3025,
"s": 2965,
"text": "Use pipenv install <packages> to install all your packages."
},
{
"code": null,
"e": 3116,
"s": 3025,
"text": "Then, be sure to have a proper settings file in your vscode folder with content like this:"
},
{
"code": null,
"e": 3224,
"s": 3116,
"text": "{ \"python.venvPath\": \"${workspaceFolder}/.venv/bin/python\", \"python.pythonPath\": \".venv/bin/python\",}"
},
{
"code": null,
"e": 3400,
"s": 3224,
"text": "Afterward, you can choose the proper python environment within VSCode. ( It should be the one created with Pipenv!) Now the correct environment is recognized for python files."
},
{
"code": null,
"e": 3490,
"s": 3400,
"text": "Normally this should be sufficient for VSCode and you can start the Jupyter server in it."
},
{
"code": null,
"e": 3740,
"s": 3490,
"text": "But sometimes you change the environment, or there is an issue in the settings file. If this is the case you need to open the VSCode terminal and run pipenv shell to activate the shell. (Check if the correct environment is still selected in VSCode):"
},
{
"code": null,
"e": 3894,
"s": 3740,
"text": "Now, after opening the .ipynb file, you will be able to run the cells and not get the error \"... was not able to start jupyter server in environment xxx\""
},
{
"code": null,
"e": 4011,
"s": 3894,
"text": "Let me know if it helps or if you have other issues or solutions working with Pipenv in VSCode and Jupyter Notebook."
}
]
|
How to add data to a TableView in JavaFX? | TableView is a component that is used to create a table, populate it, and remove items from it. You can create a table view by instantiating the javafx.scene.control.TableView class.
The following Example demonstrates how to create a TableView and add data to it.
import javafx.application.Application;
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.TableColumn;
import javafx.scene.control.TableView;
import javafx.scene.control.SelectionMode;
import javafx.scene.control.cell.PropertyValueFactory;
import javafx.scene.layout.VBox;
import javafx.scene.text.Font;
import javafx.scene.text.FontPosture;
import javafx.scene.text.FontWeight;
import javafx.stage.Stage;
public class SettingData extends Application {
public void start(Stage stage) {
//Label for education
Label label = new Label("File Data:");
Font font = Font.font("verdana", FontWeight.BOLD, FontPosture.REGULAR, 12);
label.setFont(font);
//Creating a table view
TableView<FileData> table = new TableView<FileData>();
final ObservableList<FileData> data = FXCollections.observableArrayList(
new FileData("file1", "D:\\myFiles\\file1.txt", "25 MB", "12/01/2017"),
new FileData("file2", "D:\\myFiles\\file2.txt", "30 MB", "01/11/2019"),
new FileData("file3", "D:\\myFiles\\file3.txt", "50 MB", "12/04/2017"),
new FileData("file4", "D:\\myFiles\\file4.txt", "75 MB", "25/09/2018")
);
//Creating columns
TableColumn fileNameCol = new TableColumn("File Name");
fileNameCol.setCellValueFactory(new PropertyValueFactory<>("fileName"));
TableColumn pathCol = new TableColumn("Path");
pathCol.setCellValueFactory(new PropertyValueFactory("path"));
TableColumn sizeCol = new TableColumn("Size");
sizeCol.setCellValueFactory(new PropertyValueFactory("size"));
TableColumn dateCol = new TableColumn("Date Modified");
dateCol.setCellValueFactory(new PropertyValueFactory("dateModified"));
dateCol.setPrefWidth(100);
//Adding data to the table
table.setItems(data);
table.getSelectionModel().setSelectionMode(SelectionMode.MULTIPLE);
table.getColumns().addAll(fileNameCol, pathCol, sizeCol, dateCol);
//Setting the size of the table
table.setMaxSize(350, 200);
VBox vbox = new VBox();
vbox.setSpacing(5);
vbox.setPadding(new Insets(10, 50, 50, 60));
vbox.getChildren().addAll(label, table);
//Setting the scene
Scene scene = new Scene(vbox, 595, 230);
stage.setTitle("Table View Exmple");
stage.setScene(scene);
stage.show();
}
public static void main(String args[]){
launch(args);
}
}
FileData class −
import javafx.beans.property.SimpleStringProperty;
public class FileData {
SimpleStringProperty fileName;
SimpleStringProperty path;
SimpleStringProperty size;
SimpleStringProperty dateModified;
FileData(String fileName, String path, String size, String dateModified) {
this.fileName = new SimpleStringProperty(fileName);
this.path = new SimpleStringProperty(path);
this.size = new SimpleStringProperty(size);
this.dateModified = new SimpleStringProperty(dateModified);
}
public String getFileName(){
return fileName.get();
}
public void setFileName(String fname){
fileName.set(fname);
}
public String getPath(){
return path.get();
}
public void setPath(String fpath){
path.set(fpath);
}
public String getSize(){
return size.get();
}
public void setSize(String fsize){
size.set(fsize);
}
public String getDateModified(){
return dateModified.get();
}
public void setModified(String fmodified){
dateModified.set(fmodified);
}
} | [
{
"code": null,
"e": 1243,
"s": 1062,
"text": "TableView is a component that is used to create a table populate it, and remove items from it. You can create a table view by instantiating thejavafx.scene.control.TableView class."
},
{
"code": null,
"e": 1324,
"s": 1243,
"text": "The following Example demonstrates how to create a TableView and add data to it."
},
{
"code": null,
"e": 3936,
"s": 1324,
"text": "import javafx.application.Application;\nimport javafx.collections.FXCollections;\nimport javafx.collections.ObservableList;\nimport javafx.geometry.Insets;\nimport javafx.scene.Scene;\nimport javafx.scene.control.Label;\nimport javafx.scene.control.TableColumn;\nimport javafx.scene.control.TableView;\nimport javafx.scene.control.cell.PropertyValueFactory;\nimport javafx.scene.layout.VBox;\nimport javafx.scene.text.Font;\nimport javafx.scene.text.FontPosture;\nimport javafx.scene.text.FontWeight;\nimport javafx.stage.Stage;\npublic class SettingData extends Application {\n public void start(Stage stage) {\n //Label for education\n Label label = new Label(\"File Data:\");\n Font font = Font.font(\"verdana\", FontWeight.BOLD, FontPosture.REGULAR, 12);\n label.setFont(font);\n //Creating a table view\n TableView<FileData> table = new TableView<FileData>();\n final ObservableList<FileData> data = FXCollections.observableArrayList(\n new FileData(\"file1\", \"D:\\\\myFiles\\\\file1.txt\", \"25 MB\", \"12/01/2017\"),\n new FileData(\"file2\", \"D:\\\\myFiles\\\\file2.txt\", \"30 MB\", \"01/11/2019\"),\n new FileData(\"file3\", \"D:\\\\myFiles\\\\file3.txt\", \"50 MB\", \"12/04/2017\"),\n new FileData(\"file4\", \"D:\\\\myFiles\\\\file4.txt\", \"75 MB\", \"25/09/2018\")\n );\n //Creating columns\n TableColumn fileNameCol = new TableColumn(\"File Name\");\n fileNameCol.setCellValueFactory(new PropertyValueFactory<>(\"fileName\"));\n TableColumn pathCol = new TableColumn(\"Path\");\n pathCol.setCellValueFactory(new PropertyValueFactory(\"path\"));\n TableColumn sizeCol = new TableColumn(\"Size\");\n sizeCol.setCellValueFactory(new PropertyValueFactory(\"size\"));\n TableColumn dateCol = new TableColumn(\"Date Modified\");\n dateCol.setCellValueFactory(new PropertyValueFactory(\"dateModified\"));\n dateCol.setPrefWidth(100);\n //Adding data to the table\n ObservableList<String> list = FXCollections.observableArrayList();\n table.setItems(data);\n table.getSelectionModel().setSelectionMode(SelectionMode.MULTIPLE);\n table.getColumns().addAll(fileNameCol, pathCol, sizeCol, dateCol);\n //Setting the size of the table\n table.setMaxSize(350, 200);\n VBox vbox = new VBox();\n vbox.setSpacing(5);\n vbox.setPadding(new Insets(10, 50, 50, 60));\n vbox.getChildren().addAll(label, table);\n //Setting the scene\n Scene scene = new Scene(vbox, 595, 230);\n stage.setTitle(\"Table View Exmple\");\n stage.setScene(scene);\n stage.show();\n }\n public static void main(String args[]){\n launch(args);\n }\n}"
},
{
"code": null,
"e": 3953,
"s": 3936,
"text": "FileData class −"
},
{
"code": null,
"e": 5017,
"s": 3953,
"text": "import javafx.beans.property.SimpleStringProperty;\npublic class FileData {\n SimpleStringProperty fileName;\n SimpleStringProperty path;\n SimpleStringProperty size;\n SimpleStringProperty dateModified;\n FileData(String fileName, String path, String size, String dateModified) {\n this.fileName = new SimpleStringProperty(fileName);\n this.path = new SimpleStringProperty(path);\n this.size = new SimpleStringProperty(size);\n this.dateModified = new SimpleStringProperty(dateModified);\n }\n public String getFileName(){\n return fileName.get();\n }\n public void setFileName(String fname){\n fileName.set(fname);\n }\n public String getPath(){\n return path.get();\n }\n public void setPath(String fpath){\n path.set(fpath);\n }\n public String getSize(){\n return size.get();\n }\n public void setSize(String fsize){\n size.set(fsize);\n }\n public String getDateModified(){\n return dateModified.get();\n }\n public void setModified(String fmodified){\n dateModified.set(fmodified);\n }\n}"
}
]
|
Enhancing Optimized PySpark Queries | by Ed Turner | Towards Data Science | The tale of a dream materializing
As we continue increasing the volume of data we are processing and storing, and as the velocity of technological advances transforms from linear to logarithmic and from logarithmic to horizontally asymptotic, innovative approaches to improving the run-time of our software and analysis are necessary.
These innovative approaches include utilizing two very popular frameworks: Apache Spark and Apache Arrow. These two frameworks enable users to process large volumes of data in a distributed fashion. They also enable users to process large volumes of data more quickly by using vectorized approaches. These two frameworks can easily facilitate big-data analysis. However, despite these two frameworks and their ability to empower users, there is still room for improvement, specifically within the python ecosystem. Why can we confidently identify pockets of improvement in utilizing these frameworks within python? Let’s examine some features python has.
As a programming language, python enables pure flexibility. A developer does not need to specify the type of a variable before instantiating it or defining it. A developer does not need to specify the return type of a function. Python interprets the type of each object at runtime, which allows for these restrictions (or guardrails) to be removed. These features give python lower development times, with perceived productivity increases. However, it is known that these same features negatively impact program runtime. Since python interprets the type of each object at runtime, it takes a long time for some software to run, especially software that requires excessive looping. Even within programs that utilize vectorized operations, given the complexity of some programmatic logic, there can still be some impact on runtime performance. Is there anything we can do to mitigate these performance-related impacts? Let us turn our attention to my solution of choice: Numba
Numba performs Just-In-Time compilations on python functions, very similar to how C/C++ and Java compilations are performed. Python functions that only contain standard builtin functions, or a set of NumPy functions, can be improved using Numba. Here is an example:
from time import time
from numba import jit
import numpy as np

@jit(nopython=True, fastmath=True)
def numba_sum(x):
    return np.sum(x)

# this returns the median time of execution
def profileFunct(funct, arraySize, nTimes):
    _times = []
    for _ in range(nTimes):
        start = time()
        funct(np.random.random((arraySize,)))
        end = time()
        _times.append(end - start)
    return np.median(_times)

# this is the numba time
numba_times = [profileFunct(numba_sum, i, 1000) for i in range(100, 1001, 100)]

# this is the standard numpy timing
numpy_times = [profileFunct(np.sum, i, 1000) for i in range(100, 1001, 100)]

speed_up_lst = list(map(lambda x: x[1] / x[0], zip(numba_times, numpy_times)))
In the above example, we are only calculating the sum of a numpy array and then comparing the performance of the two implementations. In this simple example, we see a moderate performance boost. Depending on your local machine, you can see a performance boost of 10% to 150%. Generally, you will see similar performance boosts, or even larger ones if your function involves iteration.
If we can speed up a simple example of addition, let’s see an example where we are using Numba + Apache Arrow + Apache Spark.
The following is just a brief example, demonstrating the possibility of creating Just-In-Time compiled functions, and then using them with a pandas_udf.
from numba import jit
import numpy as np
import pandas as pd
import pyspark.sql.functions as F
from pyspark.sql import SparkSession
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.appName("test").getOrCreate()
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

df = pd.DataFrame(data=np.random.random((100,)), columns=["c1"])
sdf = spark.createDataFrame(df)

# JIT compiled function
@jit(nopython=True, fastmath=True)
def numba_add_one(x):
    return x + np.ones(x.shape)

# this is needed to use apache arrow
@F.pandas_udf(DoubleType())
def add_one(x):
    return pd.Series(numba_add_one(x.values))

sdf = sdf.withColumn("c1_add_one", add_one(F.col("c1")))
sdf.toPandas()
As stated earlier, this is a simple example. However, for more complicated applications, this approach is extremely valuable and will speed up even the most fundamental Apache Spark SQL queries.
I hope you enjoyed your read! If this interests you, then you will be interested in the following article:
towardsdatascience.com
If you want to reach out and learn more, please follow me on LinkedIn or go to my home page and reach out to me there
www.linkedin.com
ed-turner.github.io
Thank you again! As always #happycoding | [
{
"code": null,
"e": 206,
"s": 172,
"text": "The tale of a dream materializing"
},
{
"code": null,
"e": 507,
"s": 206,
"text": "As we continue increasing the volume of data we are processing and storing, and as the velocity of technological advances transforms from linear to logarithmic and from logarithmic to horizontally asymptotic, innovative approaches to improving the run-time of our software and analysis are necessary."
},
{
"code": null,
"e": 1197,
"s": 507,
"text": "These necessitating innovative approaches include utilizing two very popular frameworks: Apache Spark and Apache Arrow. These two frameworks enable users to process large volumes of data in a distributive fashion. These two frameworks, also, enables users to process larger volumes of data more quickly by using vectorized approaches. These two frameworks can easily facilitate big-data analysis. However, despite these two frameworks and their ability to empower users, there is still room for improvement, specifically within the python-ecosystem. Why can we confidently identify pockets of improvement in utilizing these frameworks within python? Let’s examine some features python has."
},
{
"code": null,
"e": 2163,
"s": 1197,
"text": "As a programming language, python enables pure flexibility. A developer does not need to specify the type of a variable before instantiating it or defining it. A developer does not need to specify a return type of a function. Python interprets the type of each object at runtime, which allows for these restrictions (or guardrails) to be removed. These features python offers lower development times with perceived productivity increases. However, it is known, these same features negatively impact program runtime. Since python interprets the type of each object at runtime, it takes a long time for some software to run, especially those that require excessive looping. Even within programs that utilize vectorized operations, given the complexity of some programmatic logic, there still can be some impact on runtime performance. Is there anything we can do to mitigate these performance-related impacts? Let us turn our attention to my solution of choice: Numba"
},
{
"code": null,
"e": 2429,
"s": 2163,
"text": "Numba performs Just-In-Time compilations on python functions, very similar to how C/C++ and Java compilations are performed. Python functions that only contain standard builtin functions, or a set of NumPy functions, can be improved using Numba. Here is an example:"
},
{
"code": null,
"e": 3124,
"s": 2429,
"text": "from time import timefrom numba import jitimport numpy as np@jit(nopython=True, fastmath=True)def numba_sum(x): return np.sum(x)# this returns the median time of executiondef profileFunct(funct, arraySize, nTimes):_times = [] for _ in range(nTimes): start = time() funct(np.random.random((arraySize,))) end = time() _times.append(end - start)return np.median(_times)# this is the numba timenumba_times = [profileFunct(numba_sum, i, 1000) for i in range(100, 1001, 100)]# this is the standard numpy timingnumpy_times = [profileFunct(np.sum, i, 1000) for i in range(100, 1001, 100)]speed_up_lst = list(map(lambda x: x[1] / x[0], zip(numba_times, numpy_times)))"
},
{
"code": null,
"e": 3501,
"s": 3124,
"text": "In the above example, we are only calculating the sum of a numpy.array, and then comparing the performances to each other. In this simple example, we see a moderate performance boost. Depending on your local machine, you can see a performance boost of 10% to 150% percent. Generally, you will see similar performance boosts, or even more if you are iterating in your function."
},
{
"code": null,
"e": 3627,
"s": 3501,
"text": "If we can speed up a simple example of addition, let’s see an example where we are using Numba + Apache Arrow + Apache Spark."
},
{
"code": null,
"e": 3780,
"s": 3627,
"text": "The following is just a brief example, demonstrating the possibility of creating Just-In-Time compiled functions, and then using them with a pandas_udf."
},
{
"code": null,
"e": 4462,
"s": 3780,
"text": "from numba import jitimport numpy as npimport pandas as pdimport pyspark.sql.functions as Ffrom pyspark.sql import SparkSessionfrom pyspark.sql.types import DoubleTypespark = SparkSession.builder.appName(\"test\").getOrCreate()spark.conf.set(\"spark.sql.execution.arrow.enabled\", \"true\")df = pd.DataFrame(data=np.random.random((100,)), columns=[\"c1\"])sdf = spark.createDataFrame(df)# JIT compiled function@jit(nopython=True, fastmath=True)def numba_add_one(x): return x + np.ones(x.shape)# this is needed to use apache [email protected]_udf(DoubleType())def add_one(x): return pd.Series(numba_add_one(x.values))sdf = sdf.withColumn(\"c1_add_one\", add_one(F.col(\"c1\")))sdf.toPandas()"
},
{
"code": null,
"e": 4658,
"s": 4462,
"text": "As stated earlier, this is a simple example. However, for more complicated applications, this is extremely valuable and will make it speed up, even the most fundamental, Apache Spark SQL queries."
},
{
"code": null,
"e": 4765,
"s": 4658,
"text": "I hope you enjoyed your read! If this interests you, then you will be interested in the following article:"
},
{
"code": null,
"e": 4788,
"s": 4765,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 4906,
"s": 4788,
"text": "If you want to reach out and learn more, please follow me on LinkedIn or go to my home page and reach out to me there"
},
{
"code": null,
"e": 4923,
"s": 4906,
"text": "www.linkedin.com"
},
{
"code": null,
"e": 4943,
"s": 4923,
"text": "ed-turner.github.io"
}
]
|
numpy.transpose | This function permutes the dimensions of the given array. It returns a view wherever possible. The function takes the following parameters.
numpy.transpose(arr, axes)
Where,
arr − The array to be transposed.
axes − List of ints, corresponding to the dimensions. By default, the dimensions are reversed.
import numpy as np
a = np.arange(12).reshape(3,4)

print('The original array is:')
print(a)
print('\n')

print('The transposed array is:')
print(np.transpose(a))
Its output would be as follows −
The original array is:
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
The transposed array is:
[[ 0 4 8]
[ 1 5 9]
[ 2 6 10]
[ 3 7 11]]
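As a further sketch (not part of the original example), the axes parameter can reorder the dimensions explicitly; for a 2-D array, axes = (1, 0) is equivalent to the default reversal −
import numpy as np

a = np.arange(12).reshape(3,4)

# Explicitly swap the two axes; for a 2-D array this equals np.transpose(a)
print(np.transpose(a, axes = (1, 0)).shape)   # prints (4, 3)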
| [
{
"code": null,
"e": 2382,
"s": 2243,
"text": "This function permutes the dimension of the given array. It returns a view wherever possible. The function takes the following parameters."
},
{
"code": null,
"e": 2410,
"s": 2382,
"text": "numpy.transpose(arr, axes)\n"
},
{
"code": null,
"e": 2417,
"s": 2410,
"text": "Where,"
},
{
"code": null,
"e": 2421,
"s": 2417,
"text": "arr"
},
{
"code": null,
"e": 2448,
"s": 2421,
"text": "The array to be transposed"
},
{
"code": null,
"e": 2453,
"s": 2448,
"text": "axes"
},
{
"code": null,
"e": 2540,
"s": 2453,
"text": "List of ints, corresponding to the dimensions. By default, the dimensions are reversed"
},
{
"code": null,
"e": 2704,
"s": 2540,
"text": "import numpy as np \na = np.arange(12).reshape(3,4) \n\nprint 'The original array is:' \nprint a \nprint '\\n' \n\nprint 'The transposed array is:' \nprint np.transpose(a)"
},
{
"code": null,
"e": 2737,
"s": 2704,
"text": "Its output would be as follows −"
},
{
"code": null,
"e": 2869,
"s": 2737,
"text": "The original array is:\n[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]]\n\nThe transposed array is:\n[[ 0 4 8]\n [ 1 5 9]\n [ 2 6 10]\n [ 3 7 11]]\n"
}
]
|
Getting Started with Text Vectorization | by Shirley Chen | Towards Data Science | Feel free to follow me on Medium :)
Recently, I completed “Book Recommendations from Charles Darwin” case study on DataCamp. In this project, we will learn how to implement text preprocessing and text vectorization, how to build a book recommendation system using Natural Language Processing(NLP) and detect how closely related Charles Darwin’s books are to each other.
Text Vectorization is the process of converting text into numerical representation. Here is some popular methods to accomplish text vectorization:
Binary Term Frequency
Bag of Words (BoW) Term Frequency
(L1) Normalized Term Frequency
(L2) Normalized TF-IDF
Word2Vec
In this section, we will use the corpus below to introduce the 5 popular methods in text vectorization.
corpus = ["This is a brown house. This house is big. The street number is 1.", "This is a small house. This house has 1 bedroom. The street number is 12.", "This dog is brown. This dog likes to play.", "The dog is in the bedroom."]
Binary Term Frequency captures the presence (1) or absence (0) of a term in a document. For this part, under TfidfVectorizer, we set the binary parameter to True so that it records just presence (1) or absence (0), with use_idf set to False and the norm parameter set to None.
Bag of Words (BoW) Term Frequency captures the frequency of a term in a document. Under TfidfVectorizer, we set the binary parameter to False so that it records the actual frequency of the term, with use_idf set to False and the norm parameter set to None.
(L1) Normalized Term Frequency captures the L1-normalized BoW term frequency in a document. Under TfidfVectorizer, we set the binary parameter to False, use_idf to False, and the norm parameter to 'l1'.
(L2) Normalized TFIDF (Term Frequency–Inverse Document Frequency) captures normalized TFIDF in document. The below is the formula for how to compute the TFIDF.
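A common smoothed form, which is the variant scikit-learn uses by default and is stated here for reference, is tfidf(t, d) = tf(t, d) * (ln((1 + n) / (1 + df(t))) + 1), where n is the number of documents and df(t) is the number of documents containing term t; the resulting vectors are then L2-normalized.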
Under TfidfVectorizer, we set the binary parameter to False so that it uses the actual frequency of the term, keep use_idf at its default of True, and set the norm parameter to 'l2'.
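As a minimal sketch of the four settings above with scikit-learn (the parameter combinations just described, applied to our corpus; output omitted) −
from sklearn.feature_extraction.text import TfidfVectorizer

# Binary term frequency: presence/absence of each term
binary_vectorizer = TfidfVectorizer(binary=True, use_idf=False, norm=None)

# Bag-of-words term frequency: raw counts
bow_vectorizer = TfidfVectorizer(binary=False, use_idf=False, norm=None)

# L1-normalized term frequency
l1_vectorizer = TfidfVectorizer(binary=False, use_idf=False, norm='l1')

# L2-normalized TF-IDF
tfidf_vectorizer = TfidfVectorizer(binary=False, use_idf=True, norm='l2')

X = tfidf_vectorizer.fit_transform(corpus)
print(X.shape)   # (number of documents, vocabulary size)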
Word2Vec provides an embedded representation of words. Word2Vec starts with one representation for each word in the corpus and trains a NN (with 1 hidden layer) on a very large corpus of data. Here are the two methods typically used for training the NN:
Continuous Bag of Words (CBOW) — Predict vector representation of center/target word based on window of context words
Skip-Gram (SG) — Predict vector representation of window of context words based on center/target word
Once we have the embedded vectors for each word, we can use them for NLP:
Compute cosine similarity between word vectors, create higher order representations using weighted average of the word vectors and feed to the classification task
Python’s spacy package provides pre-trained models we can use to see how w2v works.
import spacy

nlp = spacy.load("en_core_web_md", disable=['parser', 'ner'])

# Get w2v representation of the word 'breakfast'
print(nlp('breakfast').vector.size)
nlp('breakfast').vector[:10]

# Find cosine similarity between w2v representations of breakfast and universe
nlp('breakfast').similarity(nlp('universe'))  # 0.044292555

doc1 = nlp("I like oranges that are sweet.")
doc2 = nlp("I like apples that are sour.")
doc1.similarity(doc2)  # 0.962154245
Charles Darwin is the most famous scientist in the world. He wrote many other books on a wide range of topics, including geology, plants or his personal life. In this project, we will develop a content-based book recommendation system, which will determine which books are close to each other based on how similar the discussed topics are. Let’s take a look at the books we will use later.
# glob is a general term used to define techniques to match specified patterns
# according to rules related to the Unix shell
import glob

folder = "datasets/"
files = glob.glob(folder + "*.txt")
files.sort()
As the first step, we need to load the content of each book and use a regular expression to facilitate the process by removing all non-alphanumeric characters. We call such a collection of texts a corpus.
import re, os

txts = []
titles = []

for n in files:
    f = open(n, encoding='utf-8-sig')
    data = re.sub('[\W_]+', ' ', f.read())
    txts.append(data)
    titles.append(os.path.basename(n).replace('.txt', ''))

[len(t) for t in txts]
And then, for consistency, we will refer to Darwin’s most famous book, “On the Origin of Species,” when checking the results for any given book.
for i in range(len(titles)):
    if titles[i] == 'OriginofSpecies':
        ori = i

print(ori)  # Index = 15
As the next step, we transform the corpus into a usable format by tokenizing it.
stoplist = set('for a of the and to in to be which some is at that we i who whom show via may my our might as well'.split())

txts_lower_case = [i.lower() for i in txts]
txts_split = [i.split() for i in txts_lower_case]   # split the lowercased texts
texts = [[word for word in txt if word not in stoplist] for txt in txts_split]

texts[15][0:20]
For the next parts of text preprocessing, we use a stemming process, which will group together the inflected forms of a word so they can be analyzed as a single item: the stem. In order to make the process faster, we will directly load the final results from a pickle file and review the method used to generate it.
import pickle

texts_stem = pickle.load(open('datasets/texts_stem.p', 'rb'))
texts_stem[15][0:20]
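For reference, here is a sketch of how such a pickle file could have been generated (this assumes nltk’s PorterStemmer; the original notebook may have used a different stemmer) −
import pickle
from nltk.stem import PorterStemmer

porter = PorterStemmer()

# Stem every token of every book, then cache the result
texts_stem = [[porter.stem(token) for token in text] for text in texts]
pickle.dump(texts_stem, open('datasets/texts_stem.p', 'wb'))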
Bag-of-Words Models (BoW)
First, we need to create a universe of all the words contained in our corpus of Charles Darwin’s books, which we call a dictionary. Then, using the stemmed tokens and the dictionary, we will create bag-of-words models (BoW) to represent our books as a list of all the unique tokens they contain, associated with their respective number of occurrences.
from gensim import corpora

dictionary = corpora.Dictionary(texts_stem)
bows = [dictionary.doc2bow(text) for text in texts_stem]
print(bows[15][:5])
In order to better understand the model, we will transform it into a DataFrame and display the 10 most common stems for the book “On the Origin of Species”.
import pandas as pd

df_bow_origin = pd.DataFrame()
df_bow_origin['index'] = [i[0] for i in bows[15] if i]
df_bow_origin['occurrences'] = [i[1] for i in bows[15] if i]
df_bow_origin['token'] = [dictionary[index] for index in df_bow_origin['index']]

df_bow_origin.occurrences.sort_values(ascending=False).head(10)
TF-IDF Model
Next, we will use a TF-IDF model to define the importance of each word depending on how frequent it is in the text. As a result, a high TF-IDF score for a word will indicate that this word is specific to this text.
from gensim.models import TfidfModel

model = TfidfModel(bows)
model[bows[15]]
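To build intuition for the scores gensim returns, here is a hand-rolled sketch of the textbook TF-IDF weighting. Note that gensim’s TfidfModel also normalizes each document vector by default, so the absolute numbers will not match exactly:

import math

def tf_idf(term_count, n_docs, doc_freq):
    # Term frequency: raw count of the term in the document
    # Inverse document frequency: terms that are rare across the corpus score higher
    return term_count * math.log2(n_docs / doc_freq)

# e.g. a stem appearing 50 times in one book but in only 2 of 20 books
print(tf_idf(50, 20, 2))  # 50 * log2(10) ≈ 166.1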
Once again, in order to better understand the model, we will transform it into a DataFrame and display the 10 most specific words for the “On the Origin of Species” book.
df_tfidf = pd.DataFrame()
df_tfidf['id'] = [i[0] for i in model[bows[15]]]
df_tfidf['score'] = [i[1] for i in model[bows[15]]]
df_tfidf['token'] = [dictionary[index] for index in df_tfidf['id']]

df_tfidf.score.sort_values(ascending=False).head(10)
Now that we have a TF-IDF model scoring how specific each word is to each book, we can measure how related the books are to one another. Therefore, we will use cosine similarity and visualize the results as a distance matrix.
from gensim import similarities

sims = similarities.MatrixSimilarity(model[bows])
sim_df = pd.DataFrame(list(sims))
sim_df.columns = titles
sim_df.index = titles

sim_df
We now have a matrix containing all the similarity measures between any pair of books from Charles Darwin! We can use barh() to display a horizontal bar plot showing which books are most similar to “On the Origin of Species.”
%matplotlib inline
import matplotlib.pyplot as plt

v = sim_df['OriginofSpecies']
v_sorted = v.sort_values()

v_sorted.plot.barh()
plt.xlabel('Similarity')
However, we want to have a better understanding of the big picture and see how Darwin’s books are generally related to each other. To this purpose, we will represent the whole similarity matrix as a dendrogram, which is a standard tool to display such data.
from scipy.cluster import hierarchy

Z = hierarchy.linkage(sim_df, 'ward')
chart = hierarchy.dendrogram(Z, leaf_font_size=8,
                             labels=sim_df.index,
                             orientation="left")
Finally, based on the chart we created before, we can conclude that “the variation of animals and plants under domestication” is most related to “On the Origin of Species.”
The source code behind this article can be found on my GitHub.
Thank you so much for reading my article! Hi, I’m Shirley, currently studying for a Master’s degree in Business Analytics at ASU. If you have questions, please don’t hesitate to contact me!
Email me at [email protected] and feel free to connect with me on LinkedIn! | [
{
"code": null,
"e": 207,
"s": 171,
"text": "Feel free to follow me on Medium :)"
},
{
"code": null,
"e": 541,
"s": 207,
"text": "Recently, I completed “Book Recommendations from Charles Darwin” case study on DataCamp. In this project, we will learn how to implement text preprocessing and text vectorization, how to build a book recommendation system using Natural Language Processing(NLP) and detect how closely related Charles Darwin’s books are to each other."
},
{
"code": null,
"e": 688,
"s": 541,
"text": "Text Vectorization is the process of converting text into numerical representation. Here is some popular methods to accomplish text vectorization:"
},
{
"code": null,
"e": 710,
"s": 688,
"text": "Binary Term Frequency"
},
{
"code": null,
"e": 744,
"s": 710,
"text": "Bag of Words (BoW) Term Frequency"
},
{
"code": null,
"e": 775,
"s": 744,
"text": "(L1) Normalized Term Frequency"
},
{
"code": null,
"e": 798,
"s": 775,
"text": "(L2) Normalized TF-IDF"
},
{
"code": null,
"e": 807,
"s": 798,
"text": "Word2Vec"
},
{
"code": null,
"e": 911,
"s": 807,
"text": "In this section, we will use the corpus below to introduce the 5 popular methods in text vectorization."
},
{
"code": null,
"e": 1170,
"s": 911,
"text": "corpus = [\"This is a brown house. This house is big. The street number is 1.\", \"This is a small house. This house has 1 bedroom. The street number is 12.\", \"This dog is brown. This dog likes to play.\", \"The dog is in the bedroom.\"]"
},
{
"code": null,
"e": 1414,
"s": 1170,
"text": "Binary Term Frequency captures presence (1) or absence (0) of term in document. For this part, under TfidfVectorizer, we set binary parameter equal to true so that it can show just presence (1) or absence (0) and norm parameter equal to false."
},
{
"code": null,
"e": 1637,
"s": 1414,
"text": "Bag of Words (BoW) Term Frequency captures frequency of term in document. Under TfidfVectorizer, we set binary parameter equal to false so that it can show the actual frequency of the term and norm parameter equal to none."
},
{
"code": null,
"e": 1867,
"s": 1637,
"text": "(L1) Normalized Term Frequency captures normalized BoW term frequency in document. Under TfidfVectorizer, we set binary parameter equal to false so that it can show the actual frequency of the term and norm parameter equal to l1."
},
{
"code": null,
"e": 2027,
"s": 1867,
"text": "(L2) Normalized TFIDF (Term Frequency–Inverse Document Frequency) captures normalized TFIDF in document. The below is the formula for how to compute the TFIDF."
},
{
"code": null,
"e": 2174,
"s": 2027,
"text": "Under TfidfVectorizer, we set binary parameter equal to false so that it can show the actual frequency of the term and norm parameter equal to l2."
},
{
"code": null,
"e": 2430,
"s": 2174,
"text": "Word2Vec provides embedded representation of words. Word2Vec starts with one representation of all words in the corpus and train a NN (with 1 hidden layer) on a very large corpus of data. Here is the two methods that is typically used for training the NN:"
},
{
"code": null,
"e": 2548,
"s": 2430,
"text": "Continuous Bag of Words (CBOW) — Predict vector representation of center/target word based on window of context words"
},
{
"code": null,
"e": 2650,
"s": 2548,
"text": "Skip-Gram (SG) — Predict vector representation of window of context words based on center/target word"
},
{
"code": null,
"e": 2720,
"s": 2650,
"text": "Once we have the embedded vectors for each word and use them for NLP:"
},
{
"code": null,
"e": 2883,
"s": 2720,
"text": "Compute cosine similarity between word vectors, create higher order representations using weighted average of the word vectors and feed to the classification task"
},
{
"code": null,
"e": 2967,
"s": 2883,
"text": "Python’s spacy package provides pre-trained models we can use to see how w2v works."
},
{
"code": null,
"e": 3416,
"s": 2967,
"text": "import spacynlp = spacy.load(\"en_core_web_md\", disable=['parser', 'ner'])# Get w2v representation of the word 'breakfast'print (nlp('breakfast').vector.size)nlp('breakfast').vector[:10]# Find cosine similarity between w2v representations of breakfast and universenlp('breakfast').similarity(nlp('universe')) # 0.044292555doc1 = nlp(\"I like oranges that are sweet.\")doc2 = nlp(\"I like apples that are sour.\")doc1.similarity(doc2) # 0.962154245"
},
{
"code": null,
"e": 3806,
"s": 3416,
"text": "Charles Darwin is the most famous scientist in the world. He wrote many other books on a wide range of topics, including geology, plants or his personal life. In this project, we will develop a content-based book recommendation system, which will determine which books are close to each other based on how similar the discussed topics are. Let’s take a look at the books we will use later."
},
{
"code": null,
"e": 4007,
"s": 3806,
"text": "import glob # glob is a general term used to define techniques to match specified patterns according to rules related to Unix shell.folder = \"datasets/\"files = glob.glob(folder + \"*.txt\")files.sort()"
},
{
"code": null,
"e": 4221,
"s": 4007,
"text": "As the first step, we need to load the content of each book and check the regular expression to facilitate the process by removing the all non-alpha-numeric characters. We call such a collection of texts a corpus."
},
{
"code": null,
"e": 4451,
"s": 4221,
"text": "import re, ostxts = []titles = []for n in files: f = open(n, encoding='utf-8-sig') data = re.sub('[\\W_]+', ' ', f.read()) txts.append(data) titles.append(os.path.basename(n).replace('.txt', ''))[len(t) for t in txts]"
},
{
"code": null,
"e": 4591,
"s": 4451,
"text": "And then, for consistency, we will refer to Darwin’s most famous book “On the Origin of Species” to check the results for other given book."
},
{
"code": null,
"e": 4701,
"s": 4591,
"text": "for i in range(len(titles)): if titles[i] == 'OriginofSpecies': ori = iprint(ori) # Index = 15"
},
{
"code": null,
"e": 4773,
"s": 4701,
"text": "Next step, we transform the corpus into a format by doing tokenization."
},
{
"code": null,
"e": 5072,
"s": 4773,
"text": "stoplist = set('for a of the and to in to be which some is at that we i who whom show via may my our might as well'.split())txts_lower_case = [i.lower() for i in txts]txts_split = [i.split() for i in txts]texts = [[word for word in txt if word not in stoplist] for txt in txts_split]texts[15][0:20]"
},
{
"code": null,
"e": 5388,
"s": 5072,
"text": "For the next parts of text preprocessing, we use a stemming process, which will group together the inflected forms of a word so they can be analyzed as a single item: the stem. In order to make the process faster, we will directly load the final results from a pickle file and review the method used to generate it."
},
{
"code": null,
"e": 5483,
"s": 5388,
"text": "import pickletexts_stem = pickle.load(open('datasets/texts_stem.p', 'rb'))texts_stem[15][0:20]"
},
{
"code": null,
"e": 5509,
"s": 5483,
"text": "Bag-of-Words Models (BoW)"
},
{
"code": null,
"e": 5853,
"s": 5509,
"text": "First, we need to create a universe of all words contained in our corpus of Charles Darwin’s books, which we call a dictionary. Then, using the stemmed tokens and the dictionary, we will create bag-of-words models (BoW) to represent our books as a list of all uniques tokens they contain associated with their respective number of occurrences."
},
{
"code": null,
"e": 5998,
"s": 5853,
"text": "from gensim import corporadictionary = corpora.Dictionary(texts_stem)bows = [dictionary.doc2bow(text) for text in texts_stem]print(bows[15][:5])"
},
{
"code": null,
"e": 6155,
"s": 5998,
"text": "In order to better understand the model, we will transform it into a DataFrame and display the 10 most common stems for the book “On the Origin of Species”."
},
{
"code": null,
"e": 6443,
"s": 6155,
"text": "df_bow_origin = pd.DataFrame()df_bow_origin['index'] = [i[0] for i in bows[15] if i]df_bow_origin['occurrences'] = [i[1] for i in bows[15] if i]df_bow_origin['token'] = [dictionary[index] for index in df_bow_origin['index']]df_bow_origin.occurrences.sort_values(ascending=False).head(10)"
},
{
"code": null,
"e": 6456,
"s": 6443,
"text": "TF-IDF Model"
},
{
"code": null,
"e": 6671,
"s": 6456,
"text": "Next, we will use a TF-IDF model to define the importance of each word depending on how frequent it is in the text. As a result, a high TF-IDF score for a word will indicate that this word is specific to this text."
},
{
"code": null,
"e": 6747,
"s": 6671,
"text": "from gensim.models import TfidfModelmodel = TfidfModel(bows)model[bows[15]]"
},
{
"code": null,
"e": 6918,
"s": 6747,
"text": "Once again, in order to better understand the model, we will transform it into a DataFrame and display the 10 most specific words for the “On the Origin of Species” book."
},
{
"code": null,
"e": 7162,
"s": 6918,
"text": "df_tfidf = pd.DataFrame()df_tfidf['id'] = [i[0] for i in model[bows[15]]]df_tfidf['score'] = [i[1] for i in model[bows[15]]]df_tfidf['token'] = [dictionary[index] for index in df_tfidf['id']]df_tfidf.score.sort_values(ascending=False).head(10)"
},
{
"code": null,
"e": 7382,
"s": 7162,
"text": "Now that we have a TF-IDF model on how specific they are to each book, we can measure how related to books are between each other. Therefore, we will use Cosine Similarity and visualize the results as a distance matrix."
},
{
"code": null,
"e": 7547,
"s": 7382,
"text": "from gensim import similaritiessims = similarities.MatrixSimilarity(model[bows])sim_df = pd.DataFrame(list(sims))sim_df.columns = titles sim_df.index = titlessim_df"
},
{
"code": null,
"e": 7773,
"s": 7547,
"text": "We now have a matrix containing all the similarity measures between any pair of books from Charles Darwin! We can use barh() to display a horizontal bar plot for which books are the most similar to “On the Origin of Species.”"
},
{
"code": null,
"e": 7922,
"s": 7773,
"text": "%matplotlib inlineimport matplotlib.pyplot as pltv = sim_df['OriginofSpecies']v_sorted = v.sort_values()v_sorted.plot.barh()plt.xlabel('Similarity')"
},
{
"code": null,
"e": 8180,
"s": 7922,
"text": "However, we want to have a better understanding of the big picture and see how Darwin’s books are generally related to each other. To this purpose, we will represent the whole similarity matrix as a dendrogram, which is a standard tool to display such data."
},
{
"code": null,
"e": 8343,
"s": 8180,
"text": "from scipy.cluster import hierarchyZ = hierarchy.linkage(sim_df, 'ward')chart = hierarchy.dendrogram(Z, leaf_font_size=8, labels=sim_df.index, orientation=\"left\")"
},
{
"code": null,
"e": 8516,
"s": 8343,
"text": "Finally, based on the chart we created before, we can conclude that “the variation of animals and plants under domestication” is most related to “On the Origin of Species.”"
},
{
"code": null,
"e": 8581,
"s": 8516,
"text": "Source code that created this article can be found in my Github."
},
{
"code": null,
"e": 8772,
"s": 8581,
"text": "Thank you so much for reading my article! Hi, I’m Shirley, currently studying for a Master Degree in MS-Business Analytics at ASU. If you have questions, please don’t hesitate to contact me!"
}
]
|
How to create a weather bot in 5 minutes | by Andrew Rudchuk | Towards Data Science | Create a weather bot to get information about the weather in the Hala.ai chat. In this tutorial, we use the service https://weatherstack.com to get information about the current weather.
Step 1. Go to the https://weatherstack.com and sign up for a free account
Step 2. After registration, go to https://weatherstack.com/quickstart and copy the API Access Key that was generated for your account, as well as the Base URL.
Step 3. Go to the Integration section on the Hala Platform, add a new REST API integration, and paste the values that you copied in the previous step. Save your results.
Step 4. Go to the Actions section on the Hala Platform and create a new action. Provide a name for the action and select the integration.
Step 5. Configure your API service. You can find more information about building the request in the API provider’s documentation; in our case, you can read the API documentation of this service here. When you finish, save your changes.
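For intuition, the action is just issuing a plain HTTP request on your behalf. A rough Python sketch of the equivalent call, with the endpoint and parameter names taken from the weatherstack quickstart (replace the key with your own; the city is an example):

import requests

BASE_URL = "http://api.weatherstack.com/current"
params = {
    "access_key": "YOUR_API_ACCESS_KEY",  # the key copied in Step 2
    "query": "Berlin",                    # the city to look up
}

# Fetch and parse the current-weather JSON
response = requests.get(BASE_URL, params=params).json()
print(response["current"]["temperature"])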
Step 6. You have set up the integration and action. Now, you need to create an intent to recognize user input about getting the weather. Go to the Intents section on the Hala Platform and press New Utterance.
Step 7. To change the default name of the intent, click on the pencil icon next to the default value, or click directly on the default value; you will then be able to modify it.
Step 8. Specify the name of the intent and press Enter
Step 9. Now you need to provide examples of how end-users might ask for the weather forecast. You need to provide at least five examples.
Step 10. Save your changes and train the model.
Step 11. Now, you need to create an entity to recognize the city for which we need to provide the weather. Go to the Entities section on the Platform and press New Entity.
Save your changes and train the model.
Step 12. Go to the Skill Kit section on the Hala Platform and create a skill by pressing the Create skill button.
Step 13. Provide the name of the skill, a description, and tags (the last two are optional). Press next to create the dialog flow.
Step 14. You will be prompted with the interface for creating the dialog flow. Create the first root node by pressing the Add new button.
Step 15. Open the created dialog node and fill it.
Field Name — you can enter any value, for example “User asks for a weather forecast”
Conditions — here you need to specify the created intent, intent.get_weather
Actions — skip this section
Context — skip this section
Output — here we need to ask the user for which city we need to provide the weather forecast
Step 16. Now you need to create a child node to recognize the input from the user about the city.
Step 17. Open the child node and fill it.
Field Name — you can enter any value, for example “Form Success”
Conditions — here you need to specify the entity entity.city
Actions — once we have the information about the city, we can send the API request to the weather service. We need to add the action that we created previously.
Find and add the action
When you add the action, you will need to specify the context variable where the response from the weather service will be stored, for example, context.weatherResponse
Context — Here we need to save the City name into the context variable that will be sent to the weather service.
Output — In output you can write something like this “Please wait, I am checking the information.”
Step 18. Create a new child node to process the API response from the weather service.
Open the created node and fill it:
Field Name — you can enter any value, for example “Provide the results”
Conditions — Specify the context variable that was created for the Action Response on previous steps: context.weatherResponse
Actions — skip this section
Context — skip this section
Output — in the Output, we need to provide information about the weather. To access data from the API response, use the prefix context.weatherResponse. and then specify the path to the required data. All API providers document their response format; read that information to build the correct path for extracting the data.
You can use the following text:
Weather forecast for {{context.weatherResponse.request.query}}
Local time: {{context.weatherResponse.location.localtime}}
Temperature: {{context.weatherResponse.current.temperature}}
Wind speed: {{context.weatherResponse.current.wind_speed}}
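These placeholders are ordinary paths into the JSON returned by the weather service. In plain Python terms (the field names come from the template above; the real response contains many more fields and the values here are made up):

# context.weatherResponse is the parsed JSON from the weather service
weather_response = {
    "request": {"query": "Berlin, Germany"},
    "location": {"localtime": "2020-05-01 12:00"},
    "current": {"temperature": 18, "wind_speed": 9},
}

print("City:", weather_response["request"]["query"])
print("Local time:", weather_response["location"]["localtime"])
print("Temperature:", weather_response["current"]["temperature"])
print("Wind speed:", weather_response["current"]["wind_speed"])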
Step 19. Save the changes by pressing the “Save Changes” button. You have now created a simple skill with one dialog node.
Remember to save your changes.
Step 20. Test the results. Go to the Hala Web Chat using the link https://chat.hala.ai/ and type the first message in the chat. You can type one of the trained phrases, or use new phrases that weren’t specified in the intent training.
Congratulations! You have created the skill (bot) and tested it. | [
{
"code": null,
"e": 369,
"s": 171,
"text": "Create weather bot to get the information about the weather in the Hala.ai chat. In this tutorial we are using the service https://weatherstack.com to get the information about the current weather."
},
{
"code": null,
"e": 443,
"s": 369,
"text": "Step 1. Go to the https://weatherstack.com and sign up for a free account"
},
{
"code": null,
"e": 602,
"s": 443,
"text": "Step 2.After the registration, go to the https://weatherstack.com/quickstart and copy the API Access Key that was generated for your account and also Base URL"
},
{
"code": null,
"e": 776,
"s": 602,
"text": "Step 3.Go to the Integrationsection on the Hala Platform, add the new integration REST API, and past the values that you have copied in the previous step. Save your results."
},
{
"code": null,
"e": 915,
"s": 776,
"text": "Step 4.Go to the Actionssection on the Hala Platform, and create a new action. Provide the name for the action and select the integration."
},
{
"code": null,
"e": 1169,
"s": 915,
"text": "Step 5. Make the configuration of your API service. You can find more information about building the API in the documentation of the API provider. In our case, you can read API documentation of this service here. When you will finish, save your changes."
},
{
"code": null,
"e": 1375,
"s": 1169,
"text": "Step 6. You have set up integration and action. Now, you need to create an intent to recognize the user input about getting the weather. Go to the Intentssection on the Hala Platformand press New Utterance"
},
{
"code": null,
"e": 1552,
"s": 1375,
"text": "Step 7. To change the default name of the intent, click on icon pencil next to the default value, or you can click on the default value, and then you will be able to modify it."
},
{
"code": null,
"e": 1607,
"s": 1552,
"text": "Step 8. Specify the name of the intent and press Enter"
},
{
"code": null,
"e": 1763,
"s": 1607,
"text": "Step 9. Now you need to provide examples of how the end-users can write about the getting the weather forecast. You need to provide at least five examples."
},
{
"code": null,
"e": 1805,
"s": 1763,
"text": "Step 10. Saveyour changes and train model"
},
{
"code": null,
"e": 1978,
"s": 1805,
"text": "Step 11. Now, you need to create an entity to recognize the city in for which we need to provide the weather. Go to the Entitiessection on the Platform and press New Entity"
},
{
"code": null,
"e": 2011,
"s": 1978,
"text": "Saveyour changes and train model"
},
{
"code": null,
"e": 2122,
"s": 2011,
"text": "Step 12. Go to the Skill Kitsection on the Hala Platformand create a skill by pressing the button Create skill"
},
{
"code": null,
"e": 2249,
"s": 2122,
"text": "Step 13. Provide the name of the skill, description, and tags (last two is optional). Press next for creating the dialog flow."
},
{
"code": null,
"e": 2380,
"s": 2249,
"text": "Step 14. You will be promoted to the interface for creation dialog flow. Create the first root node by pressing the button Add new"
},
{
"code": null,
"e": 2431,
"s": 2380,
"text": "Step 15. Open the created dialog node and fill it."
},
{
"code": null,
"e": 2516,
"s": 2431,
"text": "Field Name — you can enter any value, for example “User asks for a weather forecast”"
},
{
"code": null,
"e": 2588,
"s": 2516,
"text": "Conditions — here you need to specify created intent intent.get_weather"
},
{
"code": null,
"e": 2616,
"s": 2588,
"text": "Actions — skip this section"
},
{
"code": null,
"e": 2644,
"s": 2616,
"text": "Context — skip this section"
},
{
"code": null,
"e": 2737,
"s": 2644,
"text": "Output — here we need to ask the user for which city we need to provide the weather forecast"
},
{
"code": null,
"e": 2835,
"s": 2737,
"text": "Step 16. Now you need to create a child node to recognize the input from the user about the city."
},
{
"code": null,
"e": 2877,
"s": 2835,
"text": "Step 17. Open the child node and fill it."
},
{
"code": null,
"e": 2942,
"s": 2877,
"text": "Field Name — you can enter any value, for example “Form Success”"
},
{
"code": null,
"e": 3003,
"s": 2942,
"text": "Conditions — here you need to specify the entity entity.city"
},
{
"code": null,
"e": 3168,
"s": 3003,
"text": "Actions — When we got the information about the city, we can send the API request to the weather service. We need to add the Action that we have created previously."
},
{
"code": null,
"e": 3192,
"s": 3168,
"text": "Find and add the action"
},
{
"code": null,
"e": 3361,
"s": 3192,
"text": "When you add the action, you would need to specify the context variable where the response from the weather service will be stored, for example, context.weatherResponse"
},
{
"code": null,
"e": 3474,
"s": 3361,
"text": "Context — Here we need to save the City name into the context variable that will be sent to the weather service."
},
{
"code": null,
"e": 3573,
"s": 3474,
"text": "Output — In output you can write something like this “Please wait, I am checking the information.”"
},
{
"code": null,
"e": 3660,
"s": 3573,
"text": "Step 18. Create a new child node to process the API response from the weather service."
},
{
"code": null,
"e": 3695,
"s": 3660,
"text": "Open the created node and fill it:"
},
{
"code": null,
"e": 3767,
"s": 3695,
"text": "Field Name — you can enter any value, for example “Provide the results”"
},
{
"code": null,
"e": 3893,
"s": 3767,
"text": "Conditions — Specify the context variable that was created for the Action Response on previous steps: context.weatherResponse"
},
{
"code": null,
"e": 3921,
"s": 3893,
"text": "Actions — skip this section"
},
{
"code": null,
"e": 3949,
"s": 3921,
"text": "Context — skip this section"
},
{
"code": null,
"e": 4322,
"s": 3949,
"text": "Output — In the Output, we would need to provide information about the weather. To access the data from the API response, you would need to use the next prefix context.weatherResponse.and then specify the path to the required data. All API providers have information about using their API. You could read this information to create the correct path to the data extracting."
},
{
"code": null,
"e": 4349,
"s": 4322,
"text": "You can use the next text:"
},
{
"code": null,
"e": 4587,
"s": 4349,
"text": "Weather forecast for {{context.weatherResponse.request.query}}Local time: {{context.weatherResponse.location.localtime}}Temprature: {{context.weatherResponse.current.temperature}}Wind speed: {{context.weatherResponse.current.wind_speed}}"
},
{
"code": null,
"e": 4708,
"s": 4587,
"text": "Step 19. Save the changes by pressing button “Save Changes”. Now you have created the simple skill with one dialog node."
},
{
"code": null,
"e": 4739,
"s": 4708,
"text": "Remember to save your changes."
},
{
"code": null,
"e": 4985,
"s": 4739,
"text": "Step 20. Test the results. Go to the Hala Web Chat by using the next link https://chat.hala.ai/and type the first message in the chat. You can type one of the trained phrases, or you can use new phrases that weren’t specified in intent training."
}
]
|
Maximum number of strings that can be formed with given zeros and ones - GeeksforGeeks | 09 Aug, 2021
Given a list of strings arr[] of zeros and ones only and two integers N and M, where N is the number of 1’s and M is the number of 0’s. The task is to find the maximum number of strings from the given list that can be constructed with the given number of 0’s and 1’s.
Examples:
Input: arr[] = {“10”, “0001”, “11100”, “1”, “0”}, M = 5, N = 3 Output: 4 Explanation: The 4 strings which can be formed using five 0’s and three 1’s are: “10”, “0001”, “1”, “0”
Input: arr[] = {“10”, “00”, “000” “0001”, “111001”, “1”, “0”}, M = 3, N = 1 Output: 3 Explanation: The 3 strings which can be formed using three 0’s and one 1’s are: “00”, “1”, “0”
Naive Approach: The idea is to generate all the combinations of the given list of strings and, for each, check whether the counts of zeros and ones satisfy the given condition. But the time complexity of this solution is exponential.
Time Complexity: O(2^N), where N is the number of strings in the list.
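For concreteness, a sketch of this exponential brute force in Python; it simply tries every subset of strings, largest first, and returns the first feasible size:

from itertools import combinations

def count_strings_brute_force(arr, M, N):
    # M: budget of 0's, N: budget of 1's (matching the article's convention)
    for size in range(len(arr), 0, -1):
        for subset in combinations(arr, size):
            joined = "".join(subset)
            # Feasible if the subset stays within both budgets
            if joined.count('0') <= M and joined.count('1') <= N:
                return size
    return 0

print(count_strings_brute_force(["10", "0001", "1", "111001", "0"], 5, 3))  # 4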
Efficient Approach: An efficient solution is given by using Dynamic Programming. The idea is to use recursion for generating all possible combinations and store the results for Overlapping Subproblems during recursion.
Below are the steps:
The idea is to use a 3D dp array (dp[M][N][i]), where N and M are the numbers of 1’s and 0’s respectively and i is the index of the string in the list.
Find the number of 1’s and 0’s in the current string and check if the counts of zeros and ones are less than or equal to the given counts M and N respectively.
If the above condition is true, then check whether the current state value is stored in the dp table or not. If yes, then return this value.
Else recursively move for the next iteration by including and excluding the current string as:
// By including the current string
x = 1 + recursive_function(M - zero, N - ones, arr, i + 1)
// By excluding the current string
y = recursive_function(M, N, arr, i + 1)
// and update the dp table as:
dp[M][N][i] = max(x, y)
The maximum value of the above two recursive calls will give the maximum number with N 1’s and M 0’s for the current state.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// 3D dp table to store the state value
int dp[100][100][100];

// Function that counts the combination
// of 0's and 1's from the given list
// of strings
int countString(int m, int n,
                vector<string>& arr,
                int i)
{
    // Base Case if count of 0's or 1's
    // becomes negative
    if (m < 0 || n < 0) {
        return INT_MIN;
    }

    // If index reaches out of bound
    if (i >= arr.size()) {
        return 0;
    }

    // Return the prestored result
    if (dp[m][n][i] != -1) {
        return dp[m][n][i];
    }

    // Initialize count of 0's and 1's
    // to 0 for the current state
    int zero = 0, one = 0;

    // Calculate the number of 1's and
    // 0's in current string
    for (char c : arr[i]) {
        if (c == '0') {
            zero++;
        }
        else {
            one++;
        }
    }

    // Include the current string and
    // recur for the next iteration
    int x = 1 + countString(m - zero, n - one, arr, i + 1);

    // Exclude the current string and
    // recur for the next iteration
    int y = countString(m, n, arr, i + 1);

    // Update the maximum of the above
    // two states to the current dp state
    return dp[m][n][i] = max(x, y);
}

// Driver Code
int main()
{
    vector<string> arr = { "10", "0001", "1", "111001", "0" };

    // M 0's and N 1's
    int N = 3, M = 5;

    // Initialize dp array to -1
    memset(dp, -1, sizeof(dp));

    // Function call
    cout << countString(M, N, arr, 0);
}
// Java program for the above approach
class GFG{

// 3D dp table to store the state value
static int [][][]dp = new int[100][100][100];

// Function that counts the combination
// of 0's and 1's from the given list
// of Strings
static int countString(int m, int n,
                       String []arr,
                       int i)
{
    // Base Case if count of 0's or 1's
    // becomes negative
    if (m < 0 || n < 0) {
        return Integer.MIN_VALUE;
    }

    // If index reaches out of bound
    if (i >= arr.length) {
        return 0;
    }

    // Return the prestored result
    if (dp[m][n][i] != -1) {
        return dp[m][n][i];
    }

    // Initialize count of 0's and 1's
    // to 0 for the current state
    int zero = 0, one = 0;

    // Calculate the number of 1's and
    // 0's in current String
    for (char c : arr[i].toCharArray()) {
        if (c == '0') {
            zero++;
        }
        else {
            one++;
        }
    }

    // Include the current String and
    // recur for the next iteration
    int x = 1 + countString(m - zero, n - one, arr, i + 1);

    // Exclude the current String and
    // recur for the next iteration
    int y = countString(m, n, arr, i + 1);

    // Update the maximum of the above
    // two states to the current dp state
    return dp[m][n][i] = Math.max(x, y);
}

// Driver Code
public static void main(String[] args)
{
    String []arr = { "10", "0001", "1", "111001", "0" };

    // M 0's and N 1's
    int N = 3, M = 5;

    // Initialize dp array to -1
    for(int i = 0; i < 100; i++){
        for(int j = 0; j < 100; j++){
            for(int l = 0; l < 100; l++)
                dp[i][j][l] = -1;
        }
    }

    // Function call
    System.out.print(countString(M, N, arr, 0));
}
}

// This code is contributed by 29AjayKumar
# Python 3 program for the above approach
import sys

# 3D dp table to store the state value
dp = [[[-1 for i in range(100)]
          for j in range(100)]
          for k in range(100)]

# Function that counts the combination
# of 0's and 1's from the given list
# of strings
def countString(m, n, arr, i):

    # Base Case if count of 0's or 1's
    # becomes negative
    if (m < 0 or n < 0):
        return -sys.maxsize - 1

    # If index reaches out of bound
    if (i >= len(arr)):
        return 0

    # Return the prestored result
    if (dp[m][n][i] != -1):
        return dp[m][n][i]

    # Initialize count of 0's and 1's
    # to 0 for the current state
    zero = 0
    one = 0

    # Calculate the number of 1's and
    # 0's in current string
    for c in arr[i]:
        if (c == '0'):
            zero += 1
        else:
            one += 1

    # Include the current string and
    # recur for the next iteration
    x = 1 + countString(m - zero, n - one, arr, i + 1)

    # Exclude the current string and
    # recur for the next iteration
    y = countString(m, n, arr, i + 1)

    # Update the maximum of the above
    # two states to the current dp state
    dp[m][n][i] = max(x, y)
    return dp[m][n][i]

# Driver Code
if __name__ == '__main__':
    arr = ["10", "0001", "1", "111001", "0"]

    # M 0's and N 1's
    N = 3
    M = 5

    # Function call
    print(countString(M, N, arr, 0))

# This code is contributed by Surendra_Gangwar
// C# program for the above approach
using System;

class GFG{

// 3D dp table to store the state value
static int [,,]dp = new int[100, 100, 100];

// Function that counts the combination
// of 0's and 1's from the given list
// of Strings
static int countString(int m, int n,
                       String []arr,
                       int i)
{
    // Base Case if count of 0's or 1's
    // becomes negative
    if (m < 0 || n < 0) {
        return int.MinValue;
    }

    // If index reaches out of bound
    if (i >= arr.Length) {
        return 0;
    }

    // Return the prestored result
    if (dp[m, n, i] != -1) {
        return dp[m, n, i];
    }

    // Initialize count of 0's and 1's
    // to 0 for the current state
    int zero = 0, one = 0;

    // Calculate the number of 1's and
    // 0's in current String
    foreach (char c in arr[i].ToCharArray()) {
        if (c == '0') {
            zero++;
        }
        else {
            one++;
        }
    }

    // Include the current String and
    // recur for the next iteration
    int x = 1 + countString(m - zero, n - one, arr, i + 1);

    // Exclude the current String and
    // recur for the next iteration
    int y = countString(m, n, arr, i + 1);

    // Update the maximum of the above
    // two states to the current dp state
    return dp[m, n, i] = Math.Max(x, y);
}

// Driver Code
public static void Main(String[] args)
{
    String []arr = { "10", "0001", "1", "111001", "0" };

    // M 0's and N 1's
    int N = 3, M = 5;

    // Initialize dp array to -1
    for(int i = 0; i < 100; i++){
        for(int j = 0; j < 100; j++){
            for(int l = 0; l < 100; l++)
                dp[i, j, l] = -1;
        }
    }

    // Function call
    Console.Write(countString(M, N, arr, 0));
}
}

// This code is contributed by Rajput-Ji
<script>

// Javascript program for the above approach

// 3D dp table to store the state value
let dp = new Array();

// Initialize dp array to -1
for(let i = 0; i < 100; i++)
{
    dp[i] = new Array();
    for(let j = 0; j < 100; j++)
    {
        dp[i][j] = new Array();
        for(let l = 0; l < 100; l++)
        {
            dp[i][j][l] = -1;
        }
    }
}

// Function that counts the combination
// of 0's and 1's from the given list
// of Strings
function countString(m, n, arr, i)
{
    // Base Case if count of 0's or 1's
    // becomes negative
    if (m < 0 || n < 0)
    {
        return Number.MIN_VALUE;
    }

    // If index reaches out of bound
    if (i >= arr.length)
    {
        return 0;
    }

    // Return the prestored result
    if (dp[m][n][i] != -1)
    {
        return dp[m][n][i];
    }

    // Initialize count of 0's and 1's
    // to 0 for the current state
    let zero = 0, one = 0;

    // Calculate the number of 1's and
    // 0's in current String
    for(let c = 0; c < arr[i].length; c++)
    {
        if (arr[i][c] == '0')
        {
            zero++;
        }
        else
        {
            one++;
        }
    }

    // Include the current String and
    // recur for the next iteration
    let x = 1 + countString(m - zero, n - one, arr, i + 1);

    // Exclude the current String and
    // recur for the next iteration
    let y = countString(m, n, arr, i + 1);

    // Update the maximum of the above
    // two states to the current dp state
    return dp[m][n][i] = Math.max(x, y);
}

// Driver Code
let arr = [ "10", "0001", "1", "111001", "0" ];

// M 0's and N 1's
let N = 3, M = 5;

// Function call
document.write(countString(M, N, arr, 0));

// This code is contributed by Dharanendra L V.

</script>
4
Time Complexity: O(N*M*len), where N and M are the numbers of 1’s and 0’s respectively and len is the length of the list.
Auxiliary Space: O(N*M*len)
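As a side note, the same problem can also be solved bottom-up as a 2-dimensional 0/1 knapsack, which removes the recursion and the per-string index dimension; this is a sketch, not part of the article’s code:

def count_strings_bottom_up(arr, M, N):
    # dp[m][n]: max strings buildable with at most m zeros and n ones
    dp = [[0] * (N + 1) for _ in range(M + 1)]
    for s in arr:
        zeros, ones = s.count('0'), s.count('1')
        # Iterate budgets backwards so each string is used at most once
        for m in range(M, zeros - 1, -1):
            for n in range(N, ones - 1, -1):
                dp[m][n] = max(dp[m][n], 1 + dp[m - zeros][n - ones])
    return dp[M][N]

print(count_strings_bottom_up(["10", "0001", "1", "111001", "0"], 5, 3))  # 4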
[
{
"code": null,
"e": 24700,
"s": 24672,
"text": "\n09 Aug, 2021"
},
{
"code": null,
"e": 24976,
"s": 24700,
"text": "Given a list of strings arr[] of zeros and ones only and two integer N and M, where N is the number of 1’s and M is the number of 0’s. The task is to find the maximum number of strings from the given list of strings that can be constructured with given number of 0’s and 1’s."
},
{
"code": null,
"e": 24987,
"s": 24976,
"text": "Examples: "
},
{
"code": null,
"e": 25164,
"s": 24987,
"text": "Input: arr[] = {“10”, “0001”, “11100”, “1”, “0”}, M = 5, N = 3 Output: 4 Explanation: The 4 strings which can be formed using five 0’s and three 1’s are: “10”, “0001”, “1”, “0”"
},
{
"code": null,
"e": 25347,
"s": 25164,
"text": "Input: arr[] = {“10”, “00”, “000” “0001”, “111001”, “1”, “0”}, M = 3, N = 1 Output: 3 Explanation: The 3 strings which can be formed using three 0’s and one 1’s are: “00”, “1”, “0” "
},
{
"code": null,
"e": 25564,
"s": 25347,
"text": "Naive Approach: The idea is to generate all the combination of the given list of strings and check the count of zeros and ones satisfying the given condition. But the time complexity of this solution is exponential. "
},
{
"code": null,
"e": 25634,
"s": 25564,
"text": "Time Complexity: O(2N), where N is the number of strings in the list."
},
{
"code": null,
"e": 25854,
"s": 25634,
"text": "Efficient Approach: An efficient solution is given by using Dynamic Programming. The idea is to use recursion for generating all possible combinations and store the results for Overlapping Subproblems during recursion. "
},
{
"code": null,
"e": 25877,
"s": 25854,
"text": "Below are the steps: "
},
{
"code": null,
"e": 26024,
"s": 25877,
"text": "The idea is to use 3D dp array(dp[M][N][i]) where N and M are the number of 1’s and 0’s respectively and i is the index of the string in the list."
},
{
"code": null,
"e": 26186,
"s": 26024,
"text": "Find the number of 1’s and 0’s in the current string and check if the count of the zeros and ones is less than or equals to the given count N and M respectively."
},
{
"code": null,
"e": 26318,
"s": 26186,
"text": "If above condition is true, then check whether current state value is stored in the dp table or not. If yes then return this value."
},
{
"code": null,
"e": 26413,
"s": 26318,
"text": "Else recursively move for the next iteration by including and excluding the current string as:"
},
{
"code": null,
"e": 26641,
"s": 26413,
"text": "// By including the current string\nx = 1 + recursive_function(M - zero, N - ones, arr, i + 1)\n\n// By excluding the current string \ny = recursive_function(M, N, arr, i + 1)\n\n// and update the dp table as:\ndp[M][N][i] = max(x, y)"
},
{
"code": null,
"e": 26765,
"s": 26641,
"text": "The maximum value of the above two recursive calls will give the maximum number with N 1’s and M 0’s for the current state."
},
{
"code": null,
"e": 26818,
"s": 26765,
"text": "Below is the implementation of the above approach: "
},
{
"code": null,
"e": 26822,
"s": 26818,
"text": "C++"
},
{
"code": null,
"e": 26827,
"s": 26822,
"text": "Java"
},
{
"code": null,
"e": 26835,
"s": 26827,
"text": "Python3"
},
{
"code": null,
"e": 26838,
"s": 26835,
"text": "C#"
},
{
"code": null,
"e": 26849,
"s": 26838,
"text": "Javascript"
},
{
"code": "// C++ program for the above approach#include <bits/stdc++.h>using namespace std; // 3D dp table to store the state valueint dp[100][100][100]; // Function that count the combination// of 0's and 1's from the given list// of stringint countString(int m, int n, vector<string>& arr, int i){ // Base Case if count of 0's or 1's // becomes negative if (m < 0 || n < 0) { return INT_MIN; } // If index reaches out of bound if (i >= arr.size()) { return 0; } // Return the prestored result if (dp[m][n][i] != -1) { return dp[m][n][i]; } // Initialize count of 0's and 1's // to 0 for the current state int zero = 0, one = 0; // Calculate the number of 1's and // 0's in current string for (char c : arr[i]) { if (c == '0') { zero++; } else { one++; } } // Include the current string and // recurr for the next iteration int x = 1 + countString(m - zero, n - one, arr, i + 1); // Exclude the current string and // recurr for the next iteration int y = countString(m, n, arr, i + 1); // Update the maximum of the above // two states to the current dp state return dp[m][n][i] = max(x, y);} // Driver Codeint main(){ vector<string> arr = { \"10\", \"0001\", \"1\", \"111001\", \"0\" }; // N 0's and M 1's int N = 3, M = 5; // Initialize dp array to -1 memset(dp, -1, sizeof(dp)); // Function call cout << countString(M, N, arr, 0);}",
"e": 28439,
"s": 26849,
"text": null
},
{
"code": "// Java program for the above approachclass GFG{ // 3D dp table to store the state valuestatic int [][][]dp = new int[100][100][100]; // Function that count the combination// of 0's and 1's from the given list// of Stringstatic int countString(int m, int n, String []arr, int i){ // Base Case if count of 0's or 1's // becomes negative if (m < 0 || n < 0) { return Integer.MIN_VALUE; } // If index reaches out of bound if (i >= arr.length) { return 0; } // Return the prestored result if (dp[m][n][i] != -1) { return dp[m][n][i]; } // Initialize count of 0's and 1's // to 0 for the current state int zero = 0, one = 0; // Calculate the number of 1's and // 0's in current String for (char c : arr[i].toCharArray()) { if (c == '0') { zero++; } else { one++; } } // Include the current String and // recurr for the next iteration int x = 1 + countString(m - zero, n - one, arr, i + 1); // Exclude the current String and // recurr for the next iteration int y = countString(m, n, arr, i + 1); // Update the maximum of the above // two states to the current dp state return dp[m][n][i] = Math.max(x, y);} // Driver Codepublic static void main(String[] args){ String []arr = { \"10\", \"0001\", \"1\", \"111001\", \"0\" }; // N 0's and M 1's int N = 3, M = 5; // Initialize dp array to -1 for(int i = 0;i<100;i++){ for(int j = 0;j<100;j++){ for(int l=0;l<100;l++) dp[i][j][l]=-1; } } // Function call System.out.print(countString(M, N, arr, 0));}} // This code is contributed by 29AjayKumar",
"e": 30243,
"s": 28439,
"text": null
},
{
"code": "# Python 3 program for the above approachimport sys # 3D dp table to store the state valuedp = [[[-1 for i in range(100)]for j in range(100)] for k in range(100)] # Function that count the combination# of 0's and 1's from the given list# of stringdef countString(m, n, arr, i): # Base Case if count of 0's or 1's # becomes negative if (m < 0 or n < 0): return -sys.maxsize - 1 # If index reaches out of bound if (i >= len(arr)): return 0 # Return the prestored result if (dp[m][n][i] != -1): return dp[m][n][i] # Initialize count of 0's and 1's # to 0 for the current state zero = 0 one = 0 # Calculate the number of 1's and # 0's in current string for c in arr[i]: if (c == '0'): zero += 1 else: one += 1 # Include the current string and # recurr for the next iteration x = 1 + countString(m - zero, n - one, arr, i + 1) # Exclude the current string and # recurr for the next iteration y = countString(m, n, arr, i + 1) dp[m][n][i] = max(x, y) # Update the maximum of the above # two states to the current dp state return dp[m][n][i] # Driver Codeif __name__ == '__main__': arr = [\"10\", \"0001\", \"1\",\"111001\", \"0\"] # N 0's and M 1's N = 3 M = 5 # Function call print(countString(M, N, arr, 0)) # This code is contributed by Surendra_Gangwar",
"e": 31654,
"s": 30243,
"text": null
},
{
"code": "// C# program for the above approachusing System; class GFG{ // 3D dp table to store the state valuestatic int [,,]dp = new int[100, 100, 100]; // Function that count the combination// of 0's and 1's from the given list// of Stringstatic int countString(int m, int n, String []arr, int i){ // Base Case if count of 0's or 1's // becomes negative if (m < 0 || n < 0) { return int.MinValue; } // If index reaches out of bound if (i >= arr.Length) { return 0; } // Return the prestored result if (dp[m, n, i] != -1) { return dp[m, n, i]; } // Initialize count of 0's and 1's // to 0 for the current state int zero = 0, one = 0; // Calculate the number of 1's and // 0's in current String foreach (char c in arr[i].ToCharArray()) { if (c == '0') { zero++; } else { one++; } } // Include the current String and // recurr for the next iteration int x = 1 + countString(m - zero, n - one, arr, i + 1); // Exclude the current String and // recurr for the next iteration int y = countString(m, n, arr, i + 1); // Update the maximum of the above // two states to the current dp state return dp[m, n, i] = Math.Max(x, y);} // Driver Codepublic static void Main(String[] args){ String []arr = { \"10\", \"0001\", \"1\", \"111001\", \"0\" }; // N 0's and M 1's int N = 3, M = 5; // Initialize dp array to -1 for(int i = 0; i < 100; i++){ for(int j = 0; j < 100; j++){ for(int l = 0; l < 100; l++) dp[i, j, l] = -1; } } // Function call Console.Write(countString(M, N, arr, 0));}} // This code is contributed by Rajput-Ji",
"e": 33492,
"s": 31654,
"text": null
},
{
"code": "<script> // Javascript program for the above approach // 3D dp table to store the state valuelet dp = new Array(); // Initialize dp array to -1for(let i = 0; i < 100; i++){ dp[i] = new Array(); for(let j = 0; j < 100; j++) { dp[i][j] = new Array(); for(let l = 0; l < 100; l++) { dp[i][j][l] = -1; } }} // Function that count the combination// of 0's and 1's from the given list// of Stringfunction countString(m, n, arr, i){ // Base Case if count of 0's or 1's // becomes negative if (m < 0 || n < 0) { return Number.MIN_VALUE; } // If index reaches out of bound if (i >= arr.length) { return 0; } // Return the prestored result if (dp[m][n][i] != -1) { return dp[m][n][i]; } // Initialize count of 0's and 1's // to 0 for the current state let zero = 0, one = 0; // Calculate the number of 1's and // 0's in current String for(let c = 0; c < arr[i].length; c++) { if (arr[i] == '0') { zero++; } else { one++; } } // Include the current String and // recurr for the next iteration let x = 1 + countString(m - zero, n - one, arr, i + 1); // Exclude the current String and // recurr for the next iteration let y = countString(m, n, arr, i + 1); // Update the maximum of the above // two states to the current dp state return dp[m][n][i] = Math.max(x, y);} // Driver Codelet arr = [ \"10\", \"0001\", \"1\", \"111001\", \"0\" ]; // N 0's and M 1'slet N = 3, M = 5; // Function calldocument.write(countString(M, N, arr, 0)); // This code is contributed by Dharanendra L V. </script>",
"e": 35265,
"s": 33492,
"text": null
},
{
"code": null,
"e": 35267,
"s": 35265,
"text": "4"
},
{
"code": null,
"e": 35418,
"s": 35269,
"text": "Time Complexity: O(N*M*len), where N and M are the numbers of 1’s and 0’s respectively and len is the length of the list.Auxiliary Space: O(N*M*len)"
},
{
"code": null,
"e": 35435,
"s": 35418,
"text": "SURENDRA_GANGWAR"
},
{
"code": null,
"e": 35447,
"s": 35435,
"text": "29AjayKumar"
},
{
"code": null,
"e": 35457,
"s": 35447,
"text": "Rajput-Ji"
},
{
"code": null,
"e": 35473,
"s": 35457,
"text": "dharanendralv23"
},
{
"code": null,
"e": 35482,
"s": 35473,
"text": "sweetyty"
},
{
"code": null,
"e": 35498,
"s": 35482,
"text": "pankajsharmagfg"
},
{
"code": null,
"e": 35509,
"s": 35498,
"text": "Algorithms"
},
{
"code": null,
"e": 35522,
"s": 35509,
"text": "Backtracking"
},
{
"code": null,
"e": 35536,
"s": 35522,
"text": "Combinatorial"
},
{
"code": null,
"e": 35556,
"s": 35536,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 35566,
"s": 35556,
"text": "Recursion"
},
{
"code": null,
"e": 35574,
"s": 35566,
"text": "Strings"
},
{
"code": null,
"e": 35582,
"s": 35574,
"text": "Strings"
},
{
"code": null,
"e": 35602,
"s": 35582,
"text": "Dynamic Programming"
},
{
"code": null,
"e": 35612,
"s": 35602,
"text": "Recursion"
},
{
"code": null,
"e": 35626,
"s": 35612,
"text": "Combinatorial"
},
{
"code": null,
"e": 35639,
"s": 35626,
"text": "Backtracking"
},
{
"code": null,
"e": 35650,
"s": 35639,
"text": "Algorithms"
},
{
"code": null,
"e": 35748,
"s": 35650,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 35773,
"s": 35748,
"text": "DSA Sheet by Love Babbar"
},
{
"code": null,
"e": 35816,
"s": 35773,
"text": "SCAN (Elevator) Disk Scheduling Algorithms"
},
{
"code": null,
"e": 35859,
"s": 35816,
"text": "Program for SSTF disk scheduling algorithm"
},
{
"code": null,
"e": 35905,
"s": 35859,
"text": "Rail Fence Cipher - Encryption and Decryption"
},
{
"code": null,
"e": 35934,
"s": 35905,
"text": "Quadratic Probing in Hashing"
},
{
"code": null,
"e": 35967,
"s": 35934,
"text": "N Queen Problem | Backtracking-3"
},
{
"code": null,
"e": 36027,
"s": 35967,
"text": "Write a program to print all permutations of a given string"
},
{
"code": null,
"e": 36112,
"s": 36027,
"text": "Given an array A[] and a number x, check for pair in A[] with sum as x (aka Two Sum)"
},
{
"code": null,
"e": 36143,
"s": 36112,
"text": "Rat in a Maze | Backtracking-2"
}
]
|
Remove First Node of the Linked List using C++ | Given a linked list, we need to remove its first element and return the pointer to the head of the new list.
Input : 1 -> 2 -> 3 -> 4 -> 5 -> NULL
Output : 2 -> 3 -> 4 -> 5 -> NULL
Input : 2 -> 4 -> 6 -> 8 -> 33 -> 67 -> NULL
Output : 4 -> 6 -> 8 -> 33 -> 67 -> NULL
In the given problem, we need to remove the first node of the list, move our head to the second element, and return the new head.
In this problem, we can move the head to the next location and then free the previous node.
#include <iostream>
using namespace std;
/* Link list node */
struct Node {
int data;
struct Node* next;
};
void push(struct Node** head_ref, int new_data) { // pushing the data into the list
struct Node* new_node = new Node;
new_node->data = new_data;
new_node->next = (*head_ref);
(*head_ref) = new_node;
}
int main() {
Node* head = NULL;
push(&head, 12);
push(&head, 29);
push(&head, 11);
push(&head, 23);
push(&head, 8);
auto temp = head; // temp becomes head
head = head -> next; // our head becomes the next element
delete temp; // we delete temp i.e. the first element
for (temp = head; temp != NULL; temp = temp->next) // printing the list
cout << temp->data << " ";
return 0;
}
23 11 29 12
In this program, we just shift the head to its next element, delete the previous first element, and then print the new list. The removal itself runs in O(1) time, meaning it doesn’t depend on the size of the input list, which is the best complexity we can achieve; printing the list afterwards naturally takes O(n).
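For comparison, here is the same constant-time head removal sketched in Python with a minimal node class (not part of the original program; Python’s garbage collector plays the role of delete):

class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def remove_first(head):
    # Moving head one node forward is O(1); the old node is reclaimed automatically
    return head.next if head else None

# Build 8 -> 23 -> 11 -> 29 -> 12 and drop the first node
head = Node(8, Node(23, Node(11, Node(29, Node(12)))))
head = remove_first(head)
while head:
    print(head.data, end=" ")  # 23 11 29 12
    head = head.next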
In this article, we solved the problem of removing the first node of a linked list. We also walked through the C++ program for this problem and the complete approach behind it. We can write the same program in other languages such as C, Java, Python, and others. We hope you find this article helpful. | [
{
"code": null,
"e": 1171,
"s": 1062,
"text": "Given a linked list, we need to remove its first element and return the pointer to the head of the new list."
},
{
"code": null,
"e": 1330,
"s": 1171,
"text": "Input : 1 -> 2 -> 3 -> 4 -> 5 -> NULL\nOutput : 2 -> 3 -> 4 -> 5 -> NULL\n\nInput : 2 -> 4 -> 6 -> 8 -> 33 -> 67 -> NULL\nOutput : 4 -> 6 -> 8 -> 33 -> 67 -> NULL"
},
{
"code": null,
"e": 1458,
"s": 1330,
"text": "In the given problem, we need to remove the first node of the list and move our head to the second element and return the head."
},
{
"code": null,
"e": 1550,
"s": 1458,
"text": "In this problem, we can move the head to the next location and then free the previous node."
},
{
"code": null,
"e": 2294,
"s": 1550,
"text": "#include <iostream>\nusing namespace std;\n/* Link list node */\nstruct Node {\n int data;\n struct Node* next;\n};\nvoid push(struct Node** head_ref, int new_data) { // pushing the data into the list\n struct Node* new_node = new Node;\n new_node->data = new_data;\n new_node->next = (*head_ref);\n (*head_ref) = new_node;\n}\nint main() {\n Node* head = NULL;\n push(&head, 12);\n push(&head, 29);\n push(&head, 11);\n push(&head, 23);\n push(&head, 8);\n auto temp = head; // temp becomes head\n head = head -> next; // our head becomes the next element\n delete temp; // we delete temp i.e. the first element\n for (temp = head; temp != NULL; temp = temp->next) // printing the list\n cout << temp->data << \" \";\n return 0;\n}"
},
{
"code": null,
"e": 2306,
"s": 2294,
"text": "23 11 29 12"
},
{
"code": null,
"e": 2606,
"s": 2306,
"text": "We just need to shift the head to its next element in this program and then delete the previous element and then print the new list. The overall time complexity of the given program is O(1) which means that our program doesn’t depend on the given input, and it is the best complexity we can achieve."
},
{
"code": null,
"e": 2907,
"s": 2606,
"text": "In this article, we solve a problem to Remove the first node of the linked list. We also learned the C++ program for this problem and the complete approach we solved. We can write the same program in other languages such as C, java, python, and other languages. We hope you find this article helpful."
}
]
|
PHP - Function MySQLi Num Rows | int mysqli_num_rows ( mysqli_result $result );
It returns the number of rows in a result set
It returns the number of rows in a result set.
result
It is a result set identifier returned by mysqli_query(), mysqli_store_result() or mysqli_use_result()
Try out the following example
<?php
$connection_mysql = mysqli_connect("localhost","user","password","db");
if (mysqli_connect_errno($connection_mysql)){
echo "Failed to connect to MySQL: " . mysqli_connect_error();
}
$sql = "SELECT Lastname FROM Persons";
if ($result = mysqli_query($connection_mysql,$sql)){
$rowcount = mysqli_num_rows($result);
printf("Result set has %d rows.\n",$rowcount);
mysqli_free_result($result);
}
mysqli_close($connection_mysql);
?>
[
{
"code": null,
"e": 2805,
"s": 2757,
"text": "int mysqli_num_rows ( mysqli_result $result );\n"
},
{
"code": null,
"e": 2851,
"s": 2805,
"text": "It returns the number of rows in a result set"
},
{
"code": null,
"e": 2898,
"s": 2851,
"text": "It returns the number of rows in a result set."
},
{
"code": null,
"e": 2905,
"s": 2898,
"text": "result"
},
{
"code": null,
"e": 3008,
"s": 2905,
"text": "It is a result set identifier returned by mysqli_query(), mysqli_store_result() or mysqli_use_result()"
},
{
"code": null,
"e": 3038,
"s": 3008,
"text": "Try out the following example"
},
{
"code": null,
"e": 3530,
"s": 3038,
"text": "<?php\n $connection_mysql = mysqli_connect(\"localhost\",\"user\",\"password\",\"db\");\n \n if (mysqli_connect_errno($connection_mysql)){\n echo \"Failed to connect to MySQL: \" . mysqli_connect_error();\n }\n $sql = \"SELECT Lastname FROM Persons\";\n \n if ($result = mysqli_query($connection_mysql,$sql)){\n $rowcount = mysqli_num_rows($result);\n \n printf(\"Result set has %d rows.\\n\",$rowcount);\n mysqli_free_result($result);\n }\n mysqli_close($connection_mysql);\n?>"
},
{
"code": null,
"e": 3563,
"s": 3530,
"text": "\n 45 Lectures \n 9 hours \n"
},
{
"code": null,
"e": 3579,
"s": 3563,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 3612,
"s": 3579,
"text": "\n 34 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 3623,
"s": 3612,
"text": " Syed Raza"
},
{
"code": null,
"e": 3658,
"s": 3623,
"text": "\n 84 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 3675,
"s": 3658,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 3708,
"s": 3675,
"text": "\n 17 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 3723,
"s": 3708,
"text": " Nivedita Jain"
},
{
"code": null,
"e": 3758,
"s": 3723,
"text": "\n 100 Lectures \n 34 hours \n"
},
{
"code": null,
"e": 3770,
"s": 3758,
"text": " Azaz Patel"
},
{
"code": null,
"e": 3805,
"s": 3770,
"text": "\n 43 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 3833,
"s": 3805,
"text": " Vijay Kumar Parvatha Reddy"
},
{
"code": null,
"e": 3840,
"s": 3833,
"text": " Print"
},
{
"code": null,
"e": 3851,
"s": 3840,
"text": " Add Notes"
}
]
|
Exploring the Python Pandas Library | by Sadrach Pierre, Ph.D. | Towards Data Science | Pandas is a Python library used for analyzing, transforming, and generating statistics from data. In this post, we will discuss several useful methods in Pandas for data wrangling and exploration. For our purposes, we will be using the Medical Cost Personal Datasets data from Kaggle.
Let’s get started!
To begin, let’s import the pandas library:
import pandas as pd
Next, let’s read the data into a Pandas data frame. A Pandas data frame is basically a tabular, array-like data structure with rows and columns. To read in the data we can use the ‘.read_csv()’ method:
df = pd.read_csv("insurance.csv")
Next, we can use the ‘.head()’ and ‘.tail()’ methods to look at the first and last five rows of data respectively:
print(df.head())
print(df.tail())
We can also look at the column names:
print(df.columns)
This is particularly useful for data sets with a significant number of columns.
Oftentimes, when using real data, we have to deal with missing values in the data columns. Using pandas, we can quickly get an idea about how sparse the data is with the ‘.isnull()’ method:
print(df.isnull().sum())
We see that this data doesn’t contain any missing values. Let’s artificially add missing values for the ‘children’ and ‘region’ columns, to demonstrate how we go about removing these values:
import numpy as np  # needed for np.nan

df.loc[df.region == 'southwest', 'region'] = np.nan
df.loc[df.children == 1, 'children'] = np.nan
Now let’s count the number of missing values we’ve added:
print(df.isnull().sum())
We can use the ‘.dropna()’ method to remove these missing values. This can either be done in place or we can return a value which we can store in a new variable. To drop missing values in place we do the following:
df.dropna(inplace=True)
print(df.isnull().sum())
Here, the df variable has been modified in place. Alternatively we can do the following:
df = df.dropna()
Let’s print the missing values once more:
print(df.isnull().sum())
We can check the length of the data frame before and after dropping the missing values:
print("Length Before:", len(df))df.dropna(inplace=True)print("Length After:", len(df))
Additionally, if you would like to fill missing values in a data frame you can use the ‘.fillna()’ method. To do so in place:
df.fillna(0, inplace=True)
and alternatively:
df = df.fillna(0)
Since we are imputing missing values, the length of the data frame should not change:
print("Length Before:", len(df))df.fillna(0, inplace=True)print("Length After:", len(df))
We can easily filter data frames based on column values. For example, if we want records corresponding to patients younger than 30 years old we can write:
df = df[df['age'] < 30]
print(df.head())
If we want records corresponding to charges greater than $10,000 we can write:
df = df[df['charges'] > 10000]
print(df.head())
We can also filter the data frame to only include smokers:
df = df[df['smoker'] == 'yes']
Let’s print the first five rows:
print(df.head())
Notice that the index has been modified since data was removed. We can fix this by using the ‘reset_index’ method:
df.reset_index(inplace=True)
del df['index']
print(df.head())
We can also filter with multiple conditions. Suppose we want to pull data corresponding to female smokers. We can use the ‘.loc[]’ method in the following way:
df = df.loc[(df.sex == 'female') & (df.smoker == 'yes')]
df.reset_index(inplace=True)
del df['index']
print(df.head())
We can even add more than two conditions. Let’s filter for female smokers over 50:
df = df.loc[(df.sex == 'female') & (df.smoker == 'yes') & (df.age >= 50)]
df.reset_index(inplace=True)
del df['index']
print(df.head())
Now we will discuss how to use the ‘.iloc[]’ method to select indices. To select the first, second and last indices in the dataset we do the following:
print(df.head())
print("---------------------First---------------------")
print(df.iloc[0])
print("---------------------Second---------------------")
print(df.iloc[1])
print("---------------------Last---------------------")
print(df.iloc[-1])
You can do something similar with ‘.loc[]’ for specific columns. To select the first and second rows we do the following:
print("---------------------First---------------------")print(df.loc[0, 'sex'])print("---------------------Second---------------------") print(df.loc[1, 'sex'])
We can also select multiple rows within a column:
print("---------------------First---------------------")print(df.loc[0:3, 'sex'])print("---------------------Second---------------------") print(df.loc[3:6, 'sex'])
Now we will discuss how to generate statistics from the data in our data frame. We can create separate data frames for specific categories and generate statistics from the resulting data frames. Let’s create separate data frames for male and female records:
df_female = df[df['sex'] == 'female']
df_male = df[df['sex'] == 'male']
Let’s get the average charge for females:
print("Female charges: ", df_female['charges'].mean())
Let’s also get the average charge for males:
print("Male charges: ", df_male['charges'].mean())
We can also find the maximum value for any numerical column using the ‘.max()’ method. Let’s do this for the full data set:
print("Maximum Value: ", df['charges'].max())
We can also find the minimum value:
print("Minimum Value: ", df['charges'].min())
We can apply these methods to other numerical columns. Let’s do so for the ‘age’ column:
print("Maximum Value: ", df['age'].max())print("Minimum Value: ", df['age'].min())
Let’s do the same for the ‘bmi’ column:
print("Maximum Value: ", df['bmi'].max())print("Minimum Value: ", df['bmi'].min())
Another useful method is the ‘.groupby()’ method which can be used for aggregating data. Let’s say we want to know the number of male and female smokers:
df_yes = df[df['smoker'] == 'yes']
df_yes = df_yes.groupby(['sex'])['smoker'].count()
print(df_yes.head())
We see that the number of male smokers is greater than the number of female smokers. We can also look at the grouped statistics for non-smokers:
df_no = df[df['smoker'] == 'no']
df_no = df_no.groupby(['sex'])['smoker'].count()
print(df_no.head())
We can also use the ‘.groupby()’ method to pull the average medical cost across category types. Earlier we looked at the average medical cost for males and females. We can generate those statistics again with ‘.groupby()’:
df = df.groupby(['sex'])['charges'].mean()
print(df.head())
We can also generate these statistics for each region group:
df = df.groupby(['region'])['charges'].mean()
print(df.head())
It would also be interesting to look at the average medical cost for each smoker group:
df = df.groupby(['smoker'])['charges'].mean()
print(df.head())
As we’d expect, smokers have significantly higher medical costs than non-smokers. We can also group by smoker and sex:
df = df.groupby(['smoker', 'sex'])['charges'].mean()
print(df.head())
We can also look at other statistics like standard deviation in charges across categories. The standard deviation measures the amount of dispersion in a set of values. In this case we will be considering standard deviation in charges, which corresponds to the dispersion in the charges data. Let’s look at the standard deviation in charges across sexes:
df = df.groupby(['sex'])['charges'].std()
print(df.head())
We can also look at the standard deviation in charges across regions:
df = df.groupby(['region'])['charges'].std()
print(df.head())
Or we can apply the ‘.groupby()’ across multiple columns. Let’s calculate the standard deviation in charges across region/sex groups:
df = df.groupby(['region', 'sex'])['charges'].std()
print(df.head())
Next, let’s calculate the standard deviation in charges across smoker/sex groups:
df = df.groupby(['smoker', 'sex'])['charges'].std()
print(df.head())
Next, we will discuss how to iterate over data frame rows. We can use a method called ‘.iterrows()’ which will allow us to iterate over row and index values:
for index, rows in df.iterrows():
    print(index, rows)
Below is a screenshot of a few of the output values:
We can also select specific rows for our iterations. Let’s do so for the ‘sex’, ‘charges’ and ‘smoker’ columns:
for index, rows in df.iterrows():
    print('sex:', rows['sex'], 'charges:', rows['charges'], 'smoker:', rows['smoker'])
We can also create new columns conditioned on the values of other columns. Suppose we want to create a new column that specifies whether or not a record corresponds to a female smoker. We can use the ‘.iterrows()’ and ‘.at[]’ methods to label female smokers with boolean values:
for index, rows in df.iterrows():
    if (rows.sex == 'female') and (rows.smoker == 'yes'):
        df.at[index, 'female_smoker'] = True
    else:
        df.at[index, 'female_smoker'] = False
Let’s print the first five rows of our modified data frame:
print(df.head())
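The same labeling can also be done without an explicit loop. Here is a minimal vectorized sketch, assuming the same ‘sex’ and ‘smoker’ columns as above (an alternative, not part of the original walkthrough):

df['female_smoker'] = (df['sex'] == 'female') & (df['smoker'] == 'yes')

This fills the column in one pass and is typically much faster than row-wise iteration on large data frames.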
We can perform even more complicated labeling with ‘.iterrows()’ and ‘.at[]’. Suppose we want to create a column of boolean values corresponding to male smokers over the age of 50 with children:
for index, rows in df.iterrows():
    if (rows.sex == 'male') and (rows.smoker == 'yes') and (rows.age > 50) and (rows.children > 0):
        df.at[index, 'male_smoker_with_children'] = True
    else:
        df.at[index, 'male_smoker_with_children'] = False
Let’s print the first five rows of the resulting data frame:
print(df.head())
Another thing we can do is use the ‘Counter’ method from the collections module to get an idea of what the distributions in boolean values are for our new columns. Let’s apply the ‘Counter’ method to the ‘female_smoker’ column:
from collections import Counter

print(Counter(df['female_smoker']))
This corresponds to 115 records of female smokers. Let’s apply ‘Counter’ to the male smokers over 50 with children column:
print(Counter(df['male_smoker_with_children']))
This corresponds to 21 records of male smokers over 50 with children.
Finally, if we have altered our data frame enough that we would want to save it to a separate file, we use the ‘.to_csv()’ method:
df.to_csv("insurance_edit.csv")
To summarize, in this post we discussed several methods in Pandas. We discussed how to read, clean, and filter data using Pandas methods. We also discussed how to generate aggregate statistics, iterate over data frames and write data to a new file. I hope this was helpful. The code in this post is available on GitHub. Thank you for reading! | [
{
"code": null,
"e": 457,
"s": 172,
"text": "Pandas is a python library used for analyzing, transforming, and generating statistics from data. In this post, we will discuss several useful methods in Pandas for data wrangling and exploration. For our purposes, we will be using the Medical Cost Personal Datasets data from Kaggle."
},
{
"code": null,
"e": 476,
"s": 457,
"text": "Let’s get started!"
},
{
"code": null,
"e": 519,
"s": 476,
"text": "To begin, let’s import the pandas library:"
},
{
"code": null,
"e": 539,
"s": 519,
"text": "import pandas as pd"
},
{
"code": null,
"e": 729,
"s": 539,
"text": "Next, let’s read into Pandas data frame. A Pandas data frame is basically a tabular array-like data structure with rows and columns. To read in the data we can use the ‘.read_csv()’ method:"
},
{
"code": null,
"e": 763,
"s": 729,
"text": "df = pd.read_csv(\"insurance.csv\")"
},
{
"code": null,
"e": 878,
"s": 763,
"text": "Next, we can use the ‘.head()’ and ‘.tail()’ methods to look at the first and last five rows of data respectively:"
},
{
"code": null,
"e": 895,
"s": 878,
"text": "print(df.head())"
},
{
"code": null,
"e": 912,
"s": 895,
"text": "print(df.tail())"
},
{
"code": null,
"e": 951,
"s": 912,
"text": "We can also look at the column names :"
},
{
"code": null,
"e": 969,
"s": 951,
"text": "print(df.columns)"
},
{
"code": null,
"e": 1049,
"s": 969,
"text": "This is particularly useful for data sets with a significant number of columns."
},
{
"code": null,
"e": 1240,
"s": 1049,
"text": "Often times, when using real data, we have to deal with missing values in the data columns. Using pandas, we can quickly get an idea about how sparse the data is with the ‘.isnull()’ method:"
},
{
"code": null,
"e": 1265,
"s": 1240,
"text": "print(df.isnull().sum())"
},
{
"code": null,
"e": 1456,
"s": 1265,
"text": "We see that this data doesn’t contain any missing values. Let’s artificially add missing values for the ‘children’ and ‘region’ columns, to demonstrate how we go about removing these values:"
},
{
"code": null,
"e": 1553,
"s": 1456,
"text": "df.loc[df.region == 'southwest', 'region'] = np.nandf.loc[df.children == 1, 'children'] = np.nan"
},
{
"code": null,
"e": 1611,
"s": 1553,
"text": "Now let’s count the number of missing values we’ve added:"
},
{
"code": null,
"e": 1636,
"s": 1611,
"text": "print(df.isnull().sum())"
},
{
"code": null,
"e": 1851,
"s": 1636,
"text": "We can use the ‘.dropna()’ method to remove these missing values. This can either be done in place or we can return a value which we can store in a new variable. To drop missing values in place we do the following:"
},
{
"code": null,
"e": 1899,
"s": 1851,
"text": "df.dropna(inplace=True)print(df.isnull().sum())"
},
{
"code": null,
"e": 1988,
"s": 1899,
"text": "Here, the df variable has been modified in place. Alternatively we can do the following:"
},
{
"code": null,
"e": 2005,
"s": 1988,
"text": "df = df.dropna()"
},
{
"code": null,
"e": 2047,
"s": 2005,
"text": "Let’s print the missing values once more:"
},
{
"code": null,
"e": 2072,
"s": 2047,
"text": "print(df.isnull().sum())"
},
{
"code": null,
"e": 2160,
"s": 2072,
"text": "We can check the length of the data frame before and after dropping the missing values:"
},
{
"code": null,
"e": 2247,
"s": 2160,
"text": "print(\"Length Before:\", len(df))df.dropna(inplace=True)print(\"Length After:\", len(df))"
},
{
"code": null,
"e": 2373,
"s": 2247,
"text": "Additionally, if you would like to fill missing values in a data frame you can use the ‘.fillna()’ method. To do so in place:"
},
{
"code": null,
"e": 2400,
"s": 2373,
"text": "df.fillna(0, inplace=True)"
},
{
"code": null,
"e": 2419,
"s": 2400,
"text": "and alternatively:"
},
{
"code": null,
"e": 2437,
"s": 2419,
"text": "df = df.fillna(0)"
},
{
"code": null,
"e": 2523,
"s": 2437,
"text": "Since we are imputing missing values, the length of the data frame should not change:"
},
{
"code": null,
"e": 2613,
"s": 2523,
"text": "print(\"Length Before:\", len(df))df.fillna(0, inplace=True)print(\"Length After:\", len(df))"
},
{
"code": null,
"e": 2768,
"s": 2613,
"text": "We can easily filter data frames based on column values. For example, if we want records corresponding to patients younger than 30 years old we can write:"
},
{
"code": null,
"e": 2808,
"s": 2768,
"text": "df = df[df['age'] < 30]print(df.head())"
},
{
"code": null,
"e": 2887,
"s": 2808,
"text": "If we want records corresponding to charges greater than $10,000 we can write:"
},
{
"code": null,
"e": 2934,
"s": 2887,
"text": "df = df[df['charges'] > 10000]print(df.head())"
},
{
"code": null,
"e": 2993,
"s": 2934,
"text": "We can also filter the data frame to only include smokers:"
},
{
"code": null,
"e": 3024,
"s": 2993,
"text": "df = df[df['smoker'] == 'yes']"
},
{
"code": null,
"e": 3057,
"s": 3024,
"text": "Let’s print the first five rows:"
},
{
"code": null,
"e": 3074,
"s": 3057,
"text": "print(df.head())"
},
{
"code": null,
"e": 3189,
"s": 3074,
"text": "Notice that the index has been modified since data was removed. We can fix this by using the ‘reset_index’ method:"
},
{
"code": null,
"e": 3249,
"s": 3189,
"text": "df.reset_index(inplace=True)del df['index']print(df.head())"
},
{
"code": null,
"e": 3409,
"s": 3249,
"text": "We can also filter with multiple conditions. Suppose we want to pull data corresponding to female smokers. We can use the ‘.loc[]’ method in the following way:"
},
{
"code": null,
"e": 3525,
"s": 3409,
"text": "df = df.loc[(df.sex == 'female') & (df.smoker == 'yes')]df.reset_index(inplace=True)del df['index']print(df.head())"
},
{
"code": null,
"e": 3608,
"s": 3525,
"text": "We can even add more than two conditions. Let’s filter for female smokers over 50:"
},
{
"code": null,
"e": 3741,
"s": 3608,
"text": "df = df.loc[(df.sex == 'female') & (df.smoker == 'yes') & (df.age >= 50)]df.reset_index(inplace=True)del df['index']print(df.head())"
},
{
"code": null,
"e": 3889,
"s": 3741,
"text": "Now we will discuss how to use the ‘.iloc[]’ method to select indices. To select the first, second and last indices in dataset we do the following:"
},
{
"code": null,
"e": 4127,
"s": 3889,
"text": "print(df.head())print(\"---------------------First---------------------\")print(df.iloc[0])print(\"---------------------Second---------------------\") print(df.iloc[1])print(\"---------------------Last---------------------\")print(df.iloc[-1])"
},
{
"code": null,
"e": 4249,
"s": 4127,
"text": "You can do something similar with ‘.loc[]’ for specific columns. To select the first and second rows we do the following:"
},
{
"code": null,
"e": 4410,
"s": 4249,
"text": "print(\"---------------------First---------------------\")print(df.loc[0, 'sex'])print(\"---------------------Second---------------------\") print(df.loc[1, 'sex'])"
},
{
"code": null,
"e": 4460,
"s": 4410,
"text": "We can also select multiple rows within a column:"
},
{
"code": null,
"e": 4625,
"s": 4460,
"text": "print(\"---------------------First---------------------\")print(df.loc[0:3, 'sex'])print(\"---------------------Second---------------------\") print(df.loc[3:6, 'sex'])"
},
{
"code": null,
"e": 4883,
"s": 4625,
"text": "Now we will discuss how to generate statistics from the data in our data frame. We can create separate data frames for specific categories and generate statistics from the resulting data frames. Let’s create separate data frames for male and female records:"
},
{
"code": null,
"e": 4954,
"s": 4883,
"text": "df_female = df[df['sex'] == 'female']df_male = df[df['sex'] == 'male']"
},
{
"code": null,
"e": 4996,
"s": 4954,
"text": "Let’s get the average charge for females:"
},
{
"code": null,
"e": 5051,
"s": 4996,
"text": "print(\"Female charges: \", df_female['charges'].mean())"
},
{
"code": null,
"e": 5096,
"s": 5051,
"text": "Let’s also get the average charge for males:"
},
{
"code": null,
"e": 5147,
"s": 5096,
"text": "print(\"Male charges: \", df_male['charges'].mean())"
},
{
"code": null,
"e": 5271,
"s": 5147,
"text": "We can also find the maximum value for any numerical column using the ‘.max()’ method. Let’s do this for the full data set:"
},
{
"code": null,
"e": 5317,
"s": 5271,
"text": "print(\"Maximum Value: \", df['charges'].max())"
},
{
"code": null,
"e": 5353,
"s": 5317,
"text": "We can also find the minimum value:"
},
{
"code": null,
"e": 5399,
"s": 5353,
"text": "print(\"Minimum Value: \", df['charges'].min())"
},
{
"code": null,
"e": 5488,
"s": 5399,
"text": "We can apply these methods to other numerical columns. Let’s do so for the ‘age’ column:"
},
{
"code": null,
"e": 5571,
"s": 5488,
"text": "print(\"Maximum Value: \", df['age'].max())print(\"Minimum Value: \", df['age'].min())"
},
{
"code": null,
"e": 5611,
"s": 5571,
"text": "Let’s do the same for the ‘bmi’ column:"
},
{
"code": null,
"e": 5694,
"s": 5611,
"text": "print(\"Maximum Value: \", df['bmi'].max())print(\"Minimum Value: \", df['bmi'].min())"
},
{
"code": null,
"e": 5848,
"s": 5694,
"text": "Another useful method is the ‘.groupby()’ method which can be used for aggregating data. Let’s say we want to know the number of male and female smokers:"
},
{
"code": null,
"e": 5954,
"s": 5848,
"text": "df_yes = df[df['smoker'] == 'yes']df_yes = df_yes.groupby(['sex'])['smoker'].count()print(df_yes.head())"
},
{
"code": null,
"e": 6099,
"s": 5954,
"text": "We see that the number of male smokers is greater than the number of female smokers. We can also look at the grouped statistics for non-smokers:"
},
{
"code": null,
"e": 6200,
"s": 6099,
"text": "df_no = df[df['smoker'] == 'no']df_no = df_no.groupby(['sex'])['smoker'].count()print(df_no.head())"
},
{
"code": null,
"e": 6423,
"s": 6200,
"text": "We can also use the ‘.groupby()’ method to pull the average medical cost across category types. Earlier we looked at the average medical cost for males and females. We can generate those statistics again with ‘.groupby()’:"
},
{
"code": null,
"e": 6483,
"s": 6423,
"text": "df = df.groupby(['sex'])['charges'].mean()print(df.head())"
},
{
"code": null,
"e": 6544,
"s": 6483,
"text": "We can also generate these statistics for each region group:"
},
{
"code": null,
"e": 6607,
"s": 6544,
"text": "df = df.groupby(['region'])['charges'].mean()print(df.head())"
},
{
"code": null,
"e": 6695,
"s": 6607,
"text": "It would also be interesting to look at the average medical cost for each smoker group:"
},
{
"code": null,
"e": 6758,
"s": 6695,
"text": "df = df.groupby(['smoker'])['charges'].mean()print(df.head())"
},
{
"code": null,
"e": 6877,
"s": 6758,
"text": "As we’d expect, smokers have significantly higher medical costs than non-smokers. We can also group by smoker and sex:"
},
{
"code": null,
"e": 6947,
"s": 6877,
"text": "df = df.groupby(['smoker', 'sex'])['charges'].mean()print(df.head())"
},
{
"code": null,
"e": 7301,
"s": 6947,
"text": "We can also look at other statistics like standard deviation in charges across categories. The standard deviation measures the amount of dispersion in a set of values. In this case we will be considering standard deviation in charges, which corresponds to the dispersion in the charges data. Let’s look at the standard deviation in charges across sexes:"
},
{
"code": null,
"e": 7360,
"s": 7301,
"text": "df = df.groupby(['sex'])['charges'].std()print(df.head())"
},
{
"code": null,
"e": 7430,
"s": 7360,
"text": "We can also look at the standard deviation in charges across regions:"
},
{
"code": null,
"e": 7492,
"s": 7430,
"text": "df = df.groupby(['region'])['charges'].std()print(df.head())"
},
{
"code": null,
"e": 7626,
"s": 7492,
"text": "Or we can apply the ‘.groupby()’ across multiple columns. Let’s calculate the standard deviation in charges across region/sex groups:"
},
{
"code": null,
"e": 7695,
"s": 7626,
"text": "df = df.groupby(['region', 'sex'])['charges'].std()print(df.head())"
},
{
"code": null,
"e": 7777,
"s": 7695,
"text": "Next, let’s calculate the standard deviation in charges across smoker/sex groups:"
},
{
"code": null,
"e": 7846,
"s": 7777,
"text": "df = df.groupby(['smoker', 'sex'])['charges'].std()print(df.head())"
},
{
"code": null,
"e": 8004,
"s": 7846,
"text": "Next, we will discuss how to iterate over data frame rows. We can use a method called ‘.iterrows()’ which will allow us to iterate over row and index values:"
},
{
"code": null,
"e": 8060,
"s": 8004,
"text": "for index, rows in df.iterrows(): print(index, rows)"
},
{
"code": null,
"e": 8113,
"s": 8060,
"text": "Below is a screenshot of a few of the output values:"
},
{
"code": null,
"e": 8225,
"s": 8113,
"text": "We can also select specific rows for our iterations. Let’s do so for the ‘sex’, ‘charges’ and ‘smoker’ columns:"
},
{
"code": null,
"e": 8345,
"s": 8225,
"text": "for index, rows in df.iterrows(): print('sex:', rows['sex'], 'charges:', rows['charges'], 'smoker:', rows['smoker'])"
},
{
"code": null,
"e": 8624,
"s": 8345,
"text": "We can also create new columns conditioned on the values of other columns. Suppose we want to create a new column that specifies whether or not a record corresponds to a female smoker. We can use the ‘.iterrows()’ and ‘.at[]’ methods to label female smokers with boolean values:"
},
{
"code": null,
"e": 8814,
"s": 8624,
"text": "for index, rows in df.iterrows(): if (rows.sex == 'female') and (rows.smoker == 'yes'): df.at[index, 'female_smoker'] = True else: df.at[index, 'female_smoker'] = False"
},
{
"code": null,
"e": 8874,
"s": 8814,
"text": "Let’s print the first five rows of our modified data frame:"
},
{
"code": null,
"e": 8891,
"s": 8874,
"text": "print(df.head())"
},
{
"code": null,
"e": 9086,
"s": 8891,
"text": "We can perform even more complicated labeling with ‘.iterrows()’ and ‘.at[]’. Suppose we want to create a column of boolean values corresponding to male smokers over the age of 50 with children:"
},
{
"code": null,
"e": 9342,
"s": 9086,
"text": "for index, rows in df.iterrows(): if (rows.sex == 'male') and (rows.smoker == 'yes') and (rows.age > 50) and (rows.children > 0): df.at[index, 'male_smoker_with_children'] = True else: df.at[index, 'male_smoker_with_children'] = False"
},
{
"code": null,
"e": 9403,
"s": 9342,
"text": "Let’s print the first five rows of the resulting data frame:"
},
{
"code": null,
"e": 9420,
"s": 9403,
"text": "print(df.head())"
},
{
"code": null,
"e": 9648,
"s": 9420,
"text": "Another thing we can do is use the ‘Counter’ method from the collections module to get an idea of what the distributions in boolean values are for our new columns. Let’s apply the ‘Counter’ method to the ‘female_smoker’ column:"
},
{
"code": null,
"e": 9715,
"s": 9648,
"text": "from collections import Counterprint(Counter(df['female_smoker']))"
},
{
"code": null,
"e": 9838,
"s": 9715,
"text": "This corresponds to 115 records of female smokers. Let’s apply ‘Counter’ to the male smokers over 50 with children column:"
},
{
"code": null,
"e": 9886,
"s": 9838,
"text": "print(Counter(df['male_smoker_with_children']))"
},
{
"code": null,
"e": 9956,
"s": 9886,
"text": "This corresponds to 21 records of male smokers over 50 with children."
},
{
"code": null,
"e": 10086,
"s": 9956,
"text": "Finally, if we have altered our data frame enough that we would want to save it to a separate file we use the ‘.to_csv()’ method:"
},
{
"code": null,
"e": 10118,
"s": 10086,
"text": "df.to_csv(\"insurance_edit.csv\")"
}
]
|
Python | Assign multiple variables with list values - GeeksforGeeks | 02 Jan, 2019
We often come across the task of picking values at certain indices of a list and assigning them to variables. The general approach is to extract each list element by its index and then assign it to a variable, which takes more lines of code. Let’s discuss certain ways to do this task in a compact manner to improve readability.
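For reference, here is a minimal sketch of the verbose index-by-index approach described above (the variable names are only illustrative):

# naive approach : one assignment per index
test_list = [1, 4, 5, 6, 7, 3]
var1 = test_list[1]
var2 = test_list[3]
var3 = test_list[5]
print("The variables are : " + str(var1) + " " + str(var2) + " " + str(var3))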
Method #1 : Using list comprehension
By using list comprehension one can achieve this task with ease and in one line. We run a loop over specific indices on the RHS and assign the results to the required variables.
# Python3 code to demonstrate
# to assign variables from list element
# using list comprehension

# initializing list
test_list = [1, 4, 5, 6, 7, 3]

# printing original list
print("The original list is : " + str(test_list))

# using list comprehension
# to assign variables from list element
var1, var2, var3 = [test_list[i] for i in (1, 3, 5)]

# printing result
print("The variables are : " + str(var1) +
      " " + str(var2) + " " + str(var3))
The original list is : [1, 4, 5, 6, 7, 3]
The variables are : 4 6 3
Method #2 : Using itemgetter()
The itemgetter function can also be used to perform this particular task. It accepts the index values and, applied to the container, returns the elements to be assigned to the variables.
# Python3 code to demonstrate
# to assign variables from list element
# using itemgetter()
from operator import itemgetter

# initializing list
test_list = [1, 4, 5, 6, 7, 3]

# printing original list
print("The original list is : " + str(test_list))

# using itemgetter()
# to assign variables from list element
var1, var2, var3 = itemgetter(1, 3, 5)(test_list)

# printing result
print("The variables are : " + str(var1) +
      " " + str(var2) + " " + str(var3))
The original list is : [1, 4, 5, 6, 7, 3]
The variables are : 4 6 3
Method #3 : Using itertools.compress()
The compress function accepts a boolean selector for each index: True if the element at that index has to be assigned to a variable, and False if it is not to be used in the variable assignment.
# Python3 code to demonstrate
# to assign variables from list element
# using itertools.compress()
from itertools import compress

# initializing list
test_list = [1, 4, 5, 6, 7, 3]

# printing original list
print("The original list is : " + str(test_list))

# using itertools.compress()
# to assign variables from list element
var1, var2, var3 = compress(test_list, (0, 1, 0, 1, 0, 1, 0))

# printing result
print("The variables are : " + str(var1) +
      " " + str(var2) + " " + str(var3))
The original list is : [1, 4, 5, 6, 7, 3]
The variables are : 4 6 3
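As a side note, for the evenly spaced indices used in these examples (1, 3, 5), plain slicing with tuple unpacking is an even shorter sketch. It only works when the wanted indices form a regular stride:

# slicing-based variant : every second element starting at index 1
test_list = [1, 4, 5, 6, 7, 3]
var1, var2, var3 = test_list[1::2]
print("The variables are : " + str(var1) + " " + str(var2) + " " + str(var3))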
| [
{
"code": null,
"e": 24442,
"s": 24414,
"text": "\n02 Jan, 2019"
},
{
"code": null,
"e": 24782,
"s": 24442,
"text": "We generally come through the task of getting certain index values and assigning variables out of them. The general approach we follow is to extract each list element by its index and then assign it to variables. This approach requires more line of code. Let’s discuss certain ways to do this task in compact manner to improve readability."
},
{
"code": null,
"e": 24984,
"s": 24782,
"text": "Method #1 : Using list comprehensionBy using list comprehension one can achieve this task with ease and in one line. We run a loop for specific indices in RHS and assign them to the required variables."
},
{
"code": "# Python3 code to demonstrate # to assign variables from list element# using list comprehension # initializing list test_list = [1, 4, 5, 6, 7, 3] # printing original listprint (\"The original list is : \" + str(test_list)) # using list comprehension# to assign variables from list elementvar1, var2, var3 = [test_list[i] for i in (1, 3, 5)] # printing resultprint (\"The variables are : \" + str(var1) + \" \" + str(var2) + \" \" + str(var3))",
"e": 25481,
"s": 24984,
"text": null
},
{
"code": null,
"e": 25550,
"s": 25481,
"text": "The original list is : [1, 4, 5, 6, 7, 3]\nThe variables are : 4 6 3\n"
},
{
"code": null,
"e": 25755,
"s": 25550,
"text": " Method #2 : Using itemgetter()itemgetter function can also be used to perform this particular task. This function accepts the index values and the container it is working on and assigns to the variables."
},
{
"code": "# Python3 code to demonstrate # to assign variables from list element# using itemgetter()from operator import itemgetter # initializing list test_list = [1, 4, 5, 6, 7, 3] # printing original listprint (\"The original list is : \" + str(test_list)) # using using itemgetter()# to assign variables from list elementvar1, var2, var3 = itemgetter(1, 3, 5)(test_list) # printing resultprint (\"The variables are : \" + str(var1) + \" \" + str(var2) + \" \" + str(var3))",
"e": 26272,
"s": 25755,
"text": null
},
{
"code": null,
"e": 26341,
"s": 26272,
"text": "The original list is : [1, 4, 5, 6, 7, 3]\nThe variables are : 4 6 3\n"
},
{
"code": null,
"e": 26557,
"s": 26341,
"text": " Method #3 : Using itertools.compress()compress function accepts boolean values corresponding to each index as True if it has to be assigned to the variable and False it is not to be used in the variable assignment."
},
{
"code": "# Python3 code to demonstrate # to assign variables from list element# using itertools.compress()from itertools import compress # initializing list test_list = [1, 4, 5, 6, 7, 3] # printing original listprint (\"The original list is : \" + str(test_list)) # using using itertools.compress()# to assign variables from list elementvar1, var2, var3 = compress(test_list, (0, 1, 0, 1, 0, 1, 0)) # printing resultprint (\"The variables are : \" + str(var1) + \" \" + str(var2) + \" \" + str(var3))",
"e": 27102,
"s": 26557,
"text": null
},
{
"code": null,
"e": 27171,
"s": 27102,
"text": "The original list is : [1, 4, 5, 6, 7, 3]\nThe variables are : 4 6 3\n"
},
{
"code": null,
"e": 27192,
"s": 27171,
"text": "Python list-programs"
},
{
"code": null,
"e": 27199,
"s": 27192,
"text": "Python"
},
{
"code": null,
"e": 27215,
"s": 27199,
"text": "Python Programs"
},
{
"code": null,
"e": 27313,
"s": 27215,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27322,
"s": 27313,
"text": "Comments"
},
{
"code": null,
"e": 27335,
"s": 27322,
"text": "Old Comments"
},
{
"code": null,
"e": 27353,
"s": 27335,
"text": "Python Dictionary"
},
{
"code": null,
"e": 27388,
"s": 27353,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 27420,
"s": 27388,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 27462,
"s": 27420,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 27488,
"s": 27462,
"text": "Python String | replace()"
},
{
"code": null,
"e": 27531,
"s": 27488,
"text": "Python program to convert a list to string"
},
{
"code": null,
"e": 27553,
"s": 27531,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 27599,
"s": 27553,
"text": "Python | Split string into list of characters"
},
{
"code": null,
"e": 27638,
"s": 27599,
"text": "Python | Get dictionary keys as a list"
}
]
|
C library function - modf() | The C library function double modf(double x, double *integer) returns the fraction component (part after the decimal), and sets integer to the integer component.
Following is the declaration for modf() function.
double modf(double x, double *integer)
x − This is the floating point value.
integer − This is the pointer to an object where the integral part is to be stored.
This function returns the fractional part of x, with the same sign.
The following example shows the usage of modf() function.
#include<stdio.h>
#include<math.h>

int main () {
   double x, fractpart, intpart;

   x = 8.123456;
   fractpart = modf(x, &intpart);

   printf("Integral part = %lf\n", intpart);
   printf("Fraction Part = %lf \n", fractpart);

   return(0);
}
Let us compile and run the above program that will produce the following result −
Integral part = 8.000000
Fraction Part = 0.123456
| [
{
"code": null,
"e": 2169,
"s": 2007,
"text": "The C library function double modf(double x, double *integer) returns the fraction component (part after the decimal), and sets integer to the integer component."
},
{
"code": null,
"e": 2219,
"s": 2169,
"text": "Following is the declaration for modf() function."
},
{
"code": null,
"e": 2258,
"s": 2219,
"text": "double modf(double x, double *integer)"
},
{
"code": null,
"e": 2296,
"s": 2258,
"text": "x − This is the floating point value."
},
{
"code": null,
"e": 2334,
"s": 2296,
"text": "x − This is the floating point value."
},
{
"code": null,
"e": 2418,
"s": 2334,
"text": "integer − This is the pointer to an object where the integral part is to be stored."
},
{
"code": null,
"e": 2502,
"s": 2418,
"text": "integer − This is the pointer to an object where the integral part is to be stored."
},
{
"code": null,
"e": 2570,
"s": 2502,
"text": "This function returns the fractional part of x, with the same sign."
},
{
"code": null,
"e": 2628,
"s": 2570,
"text": "The following example shows the usage of modf() function."
},
{
"code": null,
"e": 2877,
"s": 2628,
"text": "#include<stdio.h>\n#include<math.h>\n\nint main () {\n double x, fractpart, intpart;\n\n x = 8.123456;\n fractpart = modf(x, &intpart);\n\n printf(\"Integral part = %lf\\n\", intpart);\n printf(\"Fraction Part = %lf \\n\", fractpart);\n \n return(0);\n}"
},
{
"code": null,
"e": 2959,
"s": 2877,
"text": "Let us compile and run the above program that will produce the following result −"
},
{
"code": null,
"e": 3011,
"s": 2959,
"text": "Integral part = 8.000000\nFraction Part = 0.123456 \n"
},
{
"code": null,
"e": 3044,
"s": 3011,
"text": "\n 12 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 3059,
"s": 3044,
"text": " Nishant Malik"
},
{
"code": null,
"e": 3094,
"s": 3059,
"text": "\n 12 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 3109,
"s": 3094,
"text": " Nishant Malik"
},
{
"code": null,
"e": 3144,
"s": 3109,
"text": "\n 48 Lectures \n 6.5 hours \n"
},
{
"code": null,
"e": 3158,
"s": 3144,
"text": " Asif Hussain"
},
{
"code": null,
"e": 3191,
"s": 3158,
"text": "\n 12 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 3209,
"s": 3191,
"text": " Richa Maheshwari"
},
{
"code": null,
"e": 3244,
"s": 3209,
"text": "\n 20 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 3263,
"s": 3244,
"text": " Vandana Annavaram"
},
{
"code": null,
"e": 3296,
"s": 3263,
"text": "\n 44 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 3308,
"s": 3296,
"text": " Amit Diwan"
},
{
"code": null,
"e": 3315,
"s": 3308,
"text": " Print"
},
{
"code": null,
"e": 3326,
"s": 3315,
"text": " Add Notes"
}
]
|
Chef - Files & Packages | In Chef, creating configuration files and moving packages are the key components. There are multiple ways how Chef manages the same. There are multiple ways how Chef supports in dealing with the files and software packages.
Step 1 − Edit the default recipe of the cookbook.
vipin@laptop:~/chef-repo $ subl cookbooks/my_cookbook/recipes/default.rb
include_recipe "apt"
apt_repository "s3tools" do
   uri "http://s3tools.org/repo/deb-all"
   components ["stable/"]
   key "http://s3tools.org/repo/deb-all/stable/s3tools.key"
   action :add
end
package "s3cmd"
Step 2 − Edit the metadata to add dependency on the apt cookbook.
vipin@laptop:~/chef-repo $ subl cookbooks/my_cookbook/metadata.rb
...
depends "apt"
Step 3 − Upload the modified cookbook to the Chef server.
Step 4 − Validate that the package you are trying to install, is not yet installed.
Step 5 − Validate the default repo.
Step 6 − Run Chef-Client on the node.
Step 7 − Validate that the required package is installed.
If one needs to install a piece of software that is not available as a package for a given platform, one needs to compile it oneself. In Chef, we can do this by using the script resource.
Step 1 − Edit the default recipe.
vipin@laptop:~/chef-repo $ subl cookbooks/my_cookbook/recipes/default.rb
version = "1.3.9"
bash "install_nginx_from_source" do
   cwd Chef::Config['file_cache_path']
   code <<-EOH
      wget http://nginx.org/download/nginx-#{version}.tar.gz &&
      tar zxf nginx-#{version}.tar.gz &&
      cd nginx-#{version} &&
      ./configure && make && make install
   EOH
end
Step 2 − Upload the modified cookbook to the Chef server.
Step 3 − Run the Chef-Client on the node.
Step 4 − Validate that the nginx is installed.
| [
{
"code": null,
"e": 2604,
"s": 2380,
"text": "In Chef, creating configuration files and moving packages are the key components. There are multiple ways how Chef manages the same. There are multiple ways how Chef supports in dealing with the files and software packages."
},
{
"code": null,
"e": 2654,
"s": 2604,
"text": "Step 1 − Edit the default recipe of the cookbook."
},
{
"code": null,
"e": 2948,
"s": 2654,
"text": "vipin@laptop:~/chef-repo $ subl cookbooks/test_cookbook/recipes/default.rb \ninclude_recipe \"apt\" \napt_repository \"s3tools\" do \n uri \"http://s3tools.org/repo/deb-all\" \n components [\"stable/\"] \n key \"http://s3tools.org/repo/deb-all/stable/s3tools.key\" \n action :add \nend \npackage \"s3cmd\""
},
{
"code": null,
"e": 3014,
"s": 2948,
"text": "Step 2 − Edit the metadata to add dependency on the apt cookbook."
},
{
"code": null,
"e": 3101,
"s": 3014,
"text": "vipin@laptop:~/chef-repo $ subl cookbooks/my_cookbook/metadata.rb \n... \ndepends \"apt\"\n"
},
{
"code": null,
"e": 3159,
"s": 3101,
"text": "Step 3 − Upload the modified cookbook to the Chef server."
},
{
"code": null,
"e": 3243,
"s": 3159,
"text": "Step 4 − Validate that the package you are trying to install, is not yet installed."
},
{
"code": null,
"e": 3279,
"s": 3243,
"text": "Step 5 − Validate the default repo."
},
{
"code": null,
"e": 3317,
"s": 3279,
"text": "Step 6 − Run Chef-Client on the node."
},
{
"code": null,
"e": 3375,
"s": 3317,
"text": "Step 7 − Validate that the required package is installed."
},
{
"code": null,
"e": 3563,
"s": 3375,
"text": "If one needs to install a piece of software that is not available as a package for a given platform, one needs to compile it oneself. In Chef, we can do this by using the script resource."
},
{
"code": null,
"e": 3597,
"s": 3563,
"text": "Step 1 − Edit the default recipe."
},
{
"code": null,
"e": 3969,
"s": 3597,
"text": "vipin@laptop:~/chef-repo $ subl cookbooks/my_cookbook/recipes/ \ndefault.rb \nversion = \"1.3.9\" \nbash \"install_nginx_from_source\" do \n cwd Chef::Config['file_cache_path'] \n code ≪-EOH \n wget http://nginx.org/download/nginx-#{version}.tar.gz \n tar zxf nginx-#{version}.tar.gz && \n cd nginx-#{version} && \n ./configure && make && make install \n EOH "
},
{
"code": null,
"e": 4027,
"s": 3969,
"text": "Step 2 − Upload the modified cookbook to the Chef server."
},
{
"code": null,
"e": 4069,
"s": 4027,
"text": "Step 3 − Run the Chef-Client on the node."
},
{
"code": null,
"e": 4116,
"s": 4069,
"text": "Step 4 − Validate that the nginx is installed."
},
{
"code": null,
"e": 4123,
"s": 4116,
"text": " Print"
},
{
"code": null,
"e": 4134,
"s": 4123,
"text": " Add Notes"
}
]
|
Check if array is sorted | Practice | GeeksforGeeks | Given an array arr[] of size N, check if it is sorted in non-decreasing order or not.
Example 1:
Input:
N = 5
arr[] = {10, 20, 30, 40, 50}
Output: 1
Explanation: The given array is sorted.
Example 2:
Input:
N = 6
arr[] = {90, 80, 100, 70, 40, 30}
Output: 0
Explanation: The given array is not sorted.
Your Task:
You don't need to read input or print anything. Your task is to complete the function arraySortedOrNot() which takes the arr[] and N as input parameters and returns a boolean value (true if it is sorted otherwise false).
Expected Time Complexity: O(N)
Expected Auxiliary Space: O(1)
Constraints:
1 ≤ N ≤ 105
1 ≤ Arr[i] ≤ 106
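For reference before the discussion below, here is a minimal sketch of the expected O(N) time, O(1) auxiliary space logic in Python (the judge supplies its own function signature, so the name here is only illustrative):

def array_sorted_or_not(arr, n):
    # a single descent breaks non-decreasing order
    for i in range(n - 1):
        if arr[i] > arr[i + 1]:
            return False
    return True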
0
dell9459, 3 weeks ago
java:

class Solution {
    boolean arraySortedOrNot(int[] arr, int n) {
        // code here
        for (int i = 0; i < arr.length - 1; i++) {
            if (arr[i] > arr[i + 1]) {
                return false;
            }
        }
        return true;
    }
}
+1
princejee2019, 1 month ago
C++ CODE || EASY TO UNDERSTAND
bool arraySortedOrNot(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        if (arr[i] > arr[i + 1]) {
            return false;
        }
    }
    return true;
}
0
codewithaddy, 2 months ago
SIMPLE C+++ SOLUTION
bool arraySortedOrNot(int a[], int n) {
for(int i=0; i<n-1; i++){
if(a[i]>a[i+1]){
return false;
}
}
return true;
}
0
deepanshubhalla2000, 2 months ago
class Solution {
    boolean arraySortedOrNot(int[] arr, int n) {
        // code here
        return helper(arr, n, 0);
    }

    static boolean helper(int[] arr, int n, int index) {
        if (index == n - 1) return true;
        return arr[index] <= arr[index + 1] && helper(arr, n, index + 1);
    }
}
0
godvishalkumar786, 2 months ago
if (arr.length == 1) {
    return true;
}
boolean flag = false;
for (int i = 0; i < n - 1; i++) {
    if (arr[i] <= arr[i + 1]) {
        flag = true;
    } else {
        flag = false;
        break;
    }
}
return flag;
0
mohdabidalistates, 2 months ago
Hi, I have a doubt. Can anyone tell me why my code doesn't work?
bool flag = false;
for (int i = 0; i < n; i++) {
    if (arr[i] <= arr[i + 1]) {
        flag = true;
    } else {
        flag = false;
        break;
    }
}
return flag;
0
shulabhgill, 2 months ago
class Solution {
  public:
    bool arraySortedOrNot(int arr[], int n) {
        bool res = true;
        if (n == 0 || n == 1)
            return true;
        if (arr[0] > arr[1])
            return false;
        else {
            res = arraySortedOrNot(arr + 1, n - 1);
        }
        return res;
    }
};
0
shivamrajpoot2410, 2 months ago
for (int i = 1; i < n; i++) {
    if (arr[i] < arr[i - 1]) {
        return false;
    }
}
return true;
+1
shivamray, 3 months ago
bool arraySortedOrNot(int arr[], int n) {
    // code here
    for (int i = 0; i < n - 1; i++) {
        if (arr[i] > arr[i + 1]) {
            return false;
        }
    }
    return true;
}
+1
niks04, 3 months ago
bool arraySortedOrNot(int arr[], int n) {
for(int i=1;i<n;i++){
if(arr[i]<arr[i-1])
return false;
}
return true;
}
| [
{
"code": null,
"e": 325,
"s": 238,
"text": "Given an array arr[] of size N, check if it is sorted in non-decreasing order or not. "
},
{
"code": null,
"e": 336,
"s": 325,
"text": "Example 1:"
},
{
"code": null,
"e": 429,
"s": 336,
"text": "Input:\nN = 5\narr[] = {10, 20, 30, 40, 50}\nOutput: 1\nExplanation: The given array is sorted.\n"
},
{
"code": null,
"e": 440,
"s": 429,
"text": "Example 2:"
},
{
"code": null,
"e": 541,
"s": 440,
"text": "Input:\nN = 6\narr[] = {90, 80, 100, 70, 40, 30}\nOutput: 0\nExplanation: The given array is not sorted."
},
{
"code": null,
"e": 774,
"s": 541,
"text": "\nYour Task:\nYou don't need to read input or print anything. Your task is to complete the function arraySortedOrNot() which takes the arr[] and N as input parameters and returns a boolean value (true if it is sorted otherwise false)."
},
{
"code": null,
"e": 837,
"s": 774,
"text": "\nExpected Time Complexity: O(N)\nExpected Auxiliary Space: O(1)"
},
{
"code": null,
"e": 880,
"s": 837,
"text": "\nConstraints:\n1 ≤ N ≤ 105\n1 ≤ Arr[i] ≤ 106"
},
{
"code": null,
"e": 884,
"s": 882,
"text": "0"
},
{
"code": null,
"e": 904,
"s": 884,
"text": "dell94593 weeks ago"
},
{
"code": null,
"e": 1137,
"s": 904,
"text": "java:class Solution { boolean arraySortedOrNot(int[] arr, int n) { // code here for (int i =0; i<arr.length-1;i++){ if(arr[i]>arr[i+1]){ return false; } } return true; }}"
},
{
"code": null,
"e": 1140,
"s": 1137,
"text": "+1"
},
{
"code": null,
"e": 1165,
"s": 1140,
"text": "princejee20191 month ago"
},
{
"code": null,
"e": 1196,
"s": 1165,
"text": "C++ CODE || EASY TO UNDERSTAND"
},
{
"code": null,
"e": 1344,
"s": 1196,
"text": "bool arraySortedOrNot(int arr[], int n) { for(int i =0;i<n-1;i++){ if(arr[i]>arr[i+1]){ return false; } } return true;}"
},
{
"code": null,
"e": 1346,
"s": 1344,
"text": "0"
},
{
"code": null,
"e": 1371,
"s": 1346,
"text": "codewithaddy2 months ago"
},
{
"code": null,
"e": 1392,
"s": 1371,
"text": "SIMPLE C+++ SOLUTION"
},
{
"code": null,
"e": 1544,
"s": 1392,
"text": "bool arraySortedOrNot(int a[], int n) {\n\n for(int i=0; i<n-1; i++){\n if(a[i]>a[i+1]){\n return false;\n }\n }\n return true;\n}"
},
{
"code": null,
"e": 1546,
"s": 1544,
"text": "0"
},
{
"code": null,
"e": 1578,
"s": 1546,
"text": "deepanshubhalla20002 months ago"
},
{
"code": null,
"e": 1862,
"s": 1578,
"text": "class Solution { boolean arraySortedOrNot(int[] arr, int n) { return helper(arr,n,0); // code here } static boolean helper(int[] arr, int n, int index){ if(index==n-1) return true; return arr[index] <= arr[index+1] && helper(arr,n,index+1); }}"
},
{
"code": null,
"e": 1864,
"s": 1862,
"text": "0"
},
{
"code": null,
"e": 1894,
"s": 1864,
"text": "godvishalkumar7862 months ago"
},
{
"code": null,
"e": 2128,
"s": 1894,
"text": " if(arr.length==1){ return true; } boolean flag=false; for(int i=0; i<n-1; i++){ if(arr[i]<=arr[i+1]){ flag=true; } else{ flag=false; break; }}return flag;"
},
{
"code": null,
"e": 2130,
"s": 2128,
"text": "0"
},
{
"code": null,
"e": 2160,
"s": 2130,
"text": "mohdabidalistates2 months ago"
},
{
"code": null,
"e": 2221,
"s": 2160,
"text": "Hi I have a doubt. Can anyone tell why my code doesn't work."
},
{
"code": null,
"e": 2410,
"s": 2225,
"text": " bool flag=false; for(int i=0; i<n; i++){ if(arr[i]<=arr[i+1]){ flag=true; } else{ flag=false; break; } } return flag;"
},
{
"code": null,
"e": 2412,
"s": 2410,
"text": "0"
},
{
"code": null,
"e": 2436,
"s": 2412,
"text": "shulabhgill2 months ago"
},
{
"code": null,
"e": 2723,
"s": 2436,
"text": "class Solution{ public: bool arraySortedOrNot(int arr[], int n) { bool res=true; if(n==0 || n==1) return true; if(arr[0]>arr[1]) return false; else { res=arraySortedOrNot(arr+1,n-1); } return res; }};"
},
{
"code": null,
"e": 2725,
"s": 2723,
"text": "0"
},
{
"code": null,
"e": 2755,
"s": 2725,
"text": "shivamrajpoot24102 months ago"
},
{
"code": null,
"e": 2913,
"s": 2755,
"text": " for(int i=1;i<n;i++){ if(arr[i]<arr[i-1]){ return false; } } return true; }"
},
{
"code": null,
"e": 2916,
"s": 2913,
"text": "+1"
},
{
"code": null,
"e": 2938,
"s": 2916,
"text": "shivamray3 months ago"
},
{
"code": null,
"e": 3129,
"s": 2938,
"text": " bool arraySortedOrNot(int arr[], int n) { for (int i=0;i<n-1;i++)// code here { if (arr[i]>arr[i+1]) { return false; } } return true; } "
},
{
"code": null,
"e": 3132,
"s": 3129,
"text": "+1"
},
{
"code": null,
"e": 3151,
"s": 3132,
"text": "niks043 months ago"
},
{
"code": null,
"e": 3297,
"s": 3151,
"text": "bool arraySortedOrNot(int arr[], int n) {\n for(int i=1;i<n;i++){\n if(arr[i]<arr[i-1])\n return false;\n }\n return true;\n}"
},
{
"code": null,
"e": 3443,
"s": 3297,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 3479,
"s": 3443,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 3489,
"s": 3479,
"text": "\nProblem\n"
},
{
"code": null,
"e": 3499,
"s": 3489,
"text": "\nContest\n"
},
{
"code": null,
"e": 3562,
"s": 3499,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 3710,
"s": 3562,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 3918,
"s": 3710,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 4024,
"s": 3918,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
]
|
How to get all the available azure VM images using Azure CLI in PowerShell? | To get all the available Azure VM images using Azure CLI, you can use the az vm image list command.
The below command will retrieve all the available Azure images in the marketplace.
PS C:\> az vm image list --all
The above command will take some time to retrieve the output. To get the output into the table format, use the below command.
PS C:\> az vm image list --all -otable
To retrieve the images from a particular location, use the -l or --location parameter.
PS C:\> az vm image list -l eastus -otable
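If you need the image list programmatically rather than as a table, below is a minimal Python sketch (it assumes the az CLI is installed and on the PATH; the location used is only an example):

import json
import subprocess

# run the same CLI command and parse its JSON output
raw = subprocess.run(
    ["az", "vm", "image", "list", "-l", "eastus", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
images = json.loads(raw)
print(len(images), "images found in eastus")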
To get the VM images from a specific publisher, use the -p or --publisher parameter:
PS C:\> az vm image list -l eastus -p MicrosoftWindowsServer --all -otable | [
{
"code": null,
"e": 1157,
"s": 1062,
"text": "To get all the available azure VM images using Azure CLI, you can use the command az vm image."
},
{
"code": null,
"e": 1240,
"s": 1157,
"text": "The below command will retrieve all the available azure images in the marketplace."
},
{
"code": null,
"e": 1271,
"s": 1240,
"text": "PS C:\\> az vm image list --all"
},
{
"code": null,
"e": 1397,
"s": 1271,
"text": "The above command will take some time to retrieve the output. To get the output into the table format, use the below command."
},
{
"code": null,
"e": 1436,
"s": 1397,
"text": "PS C:\\> az vm image list --all -otable"
},
{
"code": null,
"e": 1525,
"s": 1436,
"text": "To retrieve the images from the particular location, use the -l or --location parameter."
},
{
"code": null,
"e": 1568,
"s": 1525,
"text": "PS C:\\> az vm image list -l eastus -otable"
},
{
"code": null,
"e": 1618,
"s": 1568,
"text": "To get the VM images from the specific publisher,"
},
{
"code": null,
"e": 1693,
"s": 1618,
"text": "PS C:\\> az vm image list -l eastus -p MicrosoftWindowsServer --all -otable"
}
]
|
Find All Numbers Disappeared in an Array in C++ | Suppose we have an array of n elements. Some elements appear twice and others appear once. Elements are in the range 1 <= A[i] <= n. We have to find those elements in the range that are not present in the array. The constraint is that we have to solve this problem without using extra space, and the time will be O(n).
So if the array is [4, 3, 2, 7, 8, 2, 3, 1], then the result will be [5, 6].
To solve this, we will follow these steps −
let n be the size of the array
for i in range 0 to n – 1
   x := |A[i]| - 1
   if A[x] > 0, then A[x] := - A[x]
define answer as an array
for i in range 0 to n – 1
   if A[i] > 0, then add i + 1 into the answer
return the answer
Let us see the following implementation to get a better understanding −
#include <bits/stdc++.h>
using namespace std;

void print_vector(vector<int> v){
   cout << "[";
   for(int i = 0; i < v.size(); i++){
      cout << v[i] << ", ";
   }
   cout << "]";
}
class Solution {
public:
   vector<int> findDisappearedNumbers(vector<int>& v) {
      int n = v.size();
      for(int i = 0; i < n; i++){
         int x = abs(v[i]) - 1;
         if(v[x] > 0) v[x] = -v[x];
      }
      vector<int> ans;
      for(int i = 0; i < n; i++){
         if(v[i] > 0) ans.push_back(i + 1);
      }
      return ans;
   }
};
int main(){
   Solution ob;
   vector<int> v{4,3,2,7,8,2,3,5};
   print_vector(ob.findDisappearedNumbers(v));
}
[4,3,2,7,8,2,3,5]
[1, 6, ] | [
{
"code": null,
"e": 1359,
"s": 1062,
"text": "Suppose we have an array of n elements. Some elements appear twice and other appear once. Elements are in range 1 <= A[i] <= n. We have to find those elements that are not present in the array. The constraint is that we have to solve this problem without using extra space, and time will be O(n)."
},
{
"code": null,
"e": 1431,
"s": 1359,
"text": "So if the array is [4, 3, 2, 7, 8, 2, 3, 1], then result will be [5, 6]"
},
{
"code": null,
"e": 1475,
"s": 1431,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1506,
"s": 1475,
"text": "let n is the size of the array"
},
{
"code": null,
"e": 1579,
"s": 1506,
"text": "for i in range 0 to n – 1x := |A[i]| - 1if A[x] > 0, then A[x] := - A[x]"
},
{
"code": null,
"e": 1595,
"s": 1579,
"text": "x := |A[i]| - 1"
},
{
"code": null,
"e": 1628,
"s": 1595,
"text": "if A[x] > 0, then A[x] := - A[x]"
},
{
"code": null,
"e": 1654,
"s": 1628,
"text": "define answer as an array"
},
{
"code": null,
"e": 1723,
"s": 1654,
"text": "for i in range 0 to n – 1if A[i] > 0, then add i + 1 into the answer"
},
{
"code": null,
"e": 1767,
"s": 1723,
"text": "if A[i] > 0, then add i + 1 into the answer"
},
{
"code": null,
"e": 1785,
"s": 1767,
"text": "return the answer"
},
{
"code": null,
"e": 1855,
"s": 1785,
"text": "Let us see the following implementation to get better understanding −"
},
{
"code": null,
"e": 1866,
"s": 1855,
"text": " Live Demo"
},
{
"code": null,
"e": 2505,
"s": 1866,
"text": "#include <bits/stdc++.h>\nusing namespace std;\nvoid print_vector(vector<int> v){\n cout << \"[\";\n for(int i = 0; i<v.size(); i++){\n cout << v[i] << \", \";\n }\n cout << \"]\";\n}\nclass Solution {\n public:\n vector<int> findDisappearedNumbers(vector<int>& v) {\n int n = v.size();\n for(int i = 0;i < n; i++){\n int x = abs(v[i]) - 1;\n if(v[x] > 0) v[x] = -v[x];\n }\n vector <int> ans;\n for(int i = 0; i < n; i++){\n if(v[i]>0)ans.push_back(i+1);\n }\n return ans;\n }\n};\nmain(){\n Solution ob;\n vector<int> v{4,3,2,7,8,2,3,5};\n print_vector(ob.findDisappearedNumbers(v));\n}"
},
{
"code": null,
"e": 2523,
"s": 2505,
"text": "[4,3,2,7,8,2,3,5]"
},
{
"code": null,
"e": 2532,
"s": 2523,
"text": "[1, 6, ]"
}
]
|
Dimensional Data Modeling - GeeksforGeeks | 24 Sep, 2021
Popular Schema – Star Schema, Snowflake Schema
Dimensional Data Modelling is one of the data modelling techniques used in data warehouse design.
Goal: Improve the data retrieval.
The concept of Dimensional Modelling was developed by Ralph Kimball and is comprised of fact and dimension tables. Since the main goal of this modelling is to improve data retrieval, it is optimized for SELECT operations. The advantage of using this model is that we can store data in such a way that it is easier to store and retrieve the data once stored in a data warehouse. The dimensional model is the data model used by many OLAP systems.
Figure – Steps for Dimensional Model
Steps to Create Dimensional Data Modelling:
Step-1: Identifying the business objective – The first step is to identify the business objective. Sales, HR, Marketing, etc. are some examples as per the need of the organization. Since it is the most important step of Data Modelling the selection of business objective also depends on the quality of data available for that process.
Step-2: Identifying Granularity – Granularity is the lowest level of information stored in the table. The level of detail for the business problem and its solution is described by the grain.
Step-3: Identifying Dimensions and its Attributes – Dimensions are objects or things. Dimensions categorize and describe data warehouse facts and measures in a way that supports meaningful answers to business questions. A data warehouse organizes descriptive attributes as columns in dimension tables. For example, the date dimension may contain data like a year, month and weekday.
Step-4: Identifying the Fact – The measurable data is held by the fact table. Most fact table rows are numerical values such as price or cost per unit.
Step-5: Building the Schema – We implement the dimension model in this step. A schema is a database structure. There are two popular schemas: Star Schema and Snowflake Schema; a minimal star-schema example follows.
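To make Step-5 concrete, here is a minimal star-schema sketch in Python with sqlite3; every table and column name here is hypothetical and chosen only for illustration, not taken from the article.

import sqlite3

# Minimal star schema for a hypothetical Sales business objective:
# one fact table in the middle, dimension tables around it.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER, weekday TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
-- Grain: one fact row per product per day; measures are numeric
CREATE TABLE fact_sales (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    units_sold  INTEGER,
    revenue     REAL
);
""")
con.execute("INSERT INTO dim_date VALUES (20210924, 2021, 9, 'Friday')")
con.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
con.execute("INSERT INTO fact_sales VALUES (20210924, 1, 10, 99.90)")

# Dimensional queries join the fact table to its dimensions (optimized for SELECT)
for row in con.execute("""
        SELECT d.year, p.category, SUM(f.revenue)
        FROM fact_sales f
        JOIN dim_date d    ON d.date_key = f.date_key
        JOIN dim_product p ON p.product_key = f.product_key
        GROUP BY d.year, p.category"""):
    print(row)   # (2021, 'Hardware', 99.9)

A Snowflake Schema would further normalize the dimension tables, for example splitting category out of dim_product into its own table.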
[
{
"code": null,
"e": 24023,
"s": 23995,
"text": "\n24 Sep, 2021"
},
{
"code": null,
"e": 24073,
"s": 24023,
"text": "Popular Schema – Star Schema, Snow Flake Schema "
},
{
"code": null,
"e": 24172,
"s": 24073,
"text": "Dimensional Data Modelling is one of the data modelling techniques used in data warehouse design. "
},
{
"code": null,
"e": 24206,
"s": 24172,
"text": "Goal: Improve the data retrieval."
},
{
"code": null,
"e": 24656,
"s": 24206,
"text": "The concept of Dimensional Modelling was developed by Ralph Kimball which is comprised of facts and dimension tables. Since the main goal of this modelling is to improve the data retrieval so it is optimized for SELECT OPERATION. The advantage of using this model is that we can store data in such a way that it is easier to store and retrieve the data once stored in a data warehouse. Dimensional model is the data model used by many OLAP systems. "
},
{
"code": null,
"e": 24696,
"s": 24658,
"text": "Figure – Steps for Dimensional Model "
},
{
"code": null,
"e": 24741,
"s": 24696,
"text": "Steps to Create Dimensional Data Modelling: "
},
{
"code": null,
"e": 25078,
"s": 24741,
"text": "Step-1: Identifying the business objective – The first step is to identify the business objective. Sales, HR, Marketing, etc. are some examples as per the need of the organization. Since it is the most important step of Data Modelling the selection of business objective also depends on the quality of data available for that process. "
},
{
"code": null,
"e": 25263,
"s": 25078,
"text": "Step-2: Identifying Granularity – Granularity is the lowest level of information stored in the table. The level of detail for business problem and its solution is described by Grain. "
},
{
"code": null,
"e": 25648,
"s": 25263,
"text": "Step-3: Identifying Dimensions and its Attributes – Dimensions are objects or things. Dimensions categorize and describe data warehouse facts and measures in a way that supports meaningful answers to business questions. A data warehouse organizes descriptive attributes as columns in dimension tables. For Example, the data dimension may contain data like a year, month and weekday. "
},
{
"code": null,
"e": 25811,
"s": 25648,
"text": "Step-4: Identifying the Fact – The measurable data is held by the fact table. Most of the fact table rows are numerical values like price or cost per unit, etc. "
},
{
"code": null,
"e": 25987,
"s": 25811,
"text": "Step-5: Building of Schema – We implement the Dimension Model in this step. A schema is a database structure. There are two popular schemes: Star Schema and Snowflake Schema. "
}
]
|
Prim’s MST for Adjacency List Representation | It is similar to the previous algorithm. Here, the only difference is that the graph G(V, E) is represented by an adjacency list.
The time complexity of Prim's algorithm with an adjacency list representation is O(E log V).
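The O(E log V) bound assumes the candidate edges are kept in a binary heap; the C++ program below instead rescans all adjacency lists on every iteration. A minimal heap-based sketch (Python; the adjacency encoding is an assumption for illustration, not from the article):

import heapq

def prim_mst(adj, start=0):
    """adj: list of lists of (neighbor, cost) pairs; returns (total_cost, edges)."""
    n = len(adj)
    visited = [False] * n
    heap = [(0, start, -1)]              # (edge cost, vertex, parent)
    total, edges = 0, []
    while heap:
        cost, u, par = heapq.heappop(heap)
        if visited[u]:
            continue                      # stale entry, vertex already in the tree
        visited[u] = True
        total += cost
        if par != -1:
            edges.append((par, u, cost))
        for v, w in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (w, v, u))  # O(log V) per edge -> O(E log V)
    return total, edges

# Same graph as the C++ example below (A=0 ... G=6)
adj = [[(1,1),(2,3),(3,4),(5,5)], [(0,1),(3,7),(4,2)], [(0,3),(4,8)],
       [(0,4),(1,7)], [(1,2),(2,8),(5,2),(6,4)], [(0,5),(4,2),(6,3)],
       [(4,4),(5,3)]]
print(prim_mst(adj))

Running it on the same graph as the C++ example prints a total cost of 15, matching the output shown below.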
Input:
The cost matrix:
Output:
Edge: A--B And Cost: 1
Edge: B--E And Cost: 2
Edge: A--C And Cost: 3
Edge: A--D And Cost: 4
Edge: E--F And Cost: 2
Edge: F--G And Cost: 3
Total Cost: 15
prims(g: Graph, start)
Input − The graph g and the seed vertex named ‘start’
Output − The Tree after adding edges.
Begin
create two set B, N
add the start node in B set.
for all vertices u in graph g do
add u in the set N
done
while B ≠ N do
min := ∞
for all vertices u in graph g do
if u is in the set B then
for all vertices v which are adjacent with u do
if v is in (N – B) then
if min > cost of uv edge then
min := cost of uv edge
parent := u
node := v
done
done
insert node in the B set
add the edge starting from parent to node in the tree
done
return the tree
End
#include<iostream>
#include<list>
#include<set>
using namespace std;
typedef struct nodes {
int dest;
int cost;
}node;
class Graph {
int n;
list<node> *adjList;
private:
void showList(int src, list<node> lt) {
list<node> :: iterator i;
node tempNode;
for(i = lt.begin(); i != lt.end(); i++) {
tempNode = *i;
cout << "(" << src << ")---("<<tempNode.dest << "|"<<tempNode.cost<<") ";
}
cout << endl;
}
public:
Graph() {
n = 0;
}
Graph(int nodeCount) {
n = nodeCount;
adjList = new list<node>[n];
}
void addEdge(int source, int dest, int cost) {
node newNode;
newNode.dest = dest;
newNode.cost = cost;
adjList[source].push_back(newNode);
}
void displayEdges() {
for(int i = 0; i<n; i++) {
list<node> tempList = adjList[i];
showList(i, tempList);
}
}
friend Graph primsMST(Graph g, int start);
};
set<int> difference(set<int> first, set<int> second) {
set<int> :: iterator it;
set<int> res;
for(it = first.begin(); it != first.end(); it++) {
if(second.find(*it) == second.end())
res.insert(*it); //add those item which are not in the second list
}
return res; //the set (first-second)
}
Graph primsMST(Graph g, int start) {
int n = g.n;
set<int> B, N, diff;
Graph tree(n); //make tree with same node as graph
B.insert(start); //insert start node in the B set
for(int u = 0; u<n; u++) {
N.insert(u); //add all vertices in the N set
}
while(B != N) {
int min = 9999; //set as infinity
int v, par;
diff = difference(N, B); //find the set N - B
for(int u = 0; u < n; u++) {
if(B.find(u) != B.end()) {
list<node>::iterator it;
for(it = g.adjList[u].begin(); it != g.adjList[u].end(); it++) {
if(diff.find(it->dest) != diff.end()) {
if(min > it->cost) {
min = it->cost; //update cost
par = u;
v = it->dest;
}
}
}
}
}
B.insert(v);
tree.addEdge(par, v, min);
tree.addEdge(v, par, min);
}
return tree;
}
int main() {
Graph g(7), tree(7);
g.addEdge(0, 1, 1);
g.addEdge(0, 2, 3);
g.addEdge(0, 3, 4);
g.addEdge(0, 5, 5);
g.addEdge(1, 0, 1);
g.addEdge(1, 3, 7);
g.addEdge(1, 4, 2);
g.addEdge(2, 0, 3);
g.addEdge(2, 4, 8);
g.addEdge(3, 0, 4);
g.addEdge(3, 1, 7);
g.addEdge(4, 1, 2);
g.addEdge(4, 2, 8);
g.addEdge(4, 5, 2);
g.addEdge(4, 6, 4);
g.addEdge(5, 0, 5);
g.addEdge(5, 4, 2);
g.addEdge(5, 6, 3);
g.addEdge(6, 4, 4);
g.addEdge(6, 5, 3);
tree = primsMST(g, 0);
tree.displayEdges();
}
Edge: A--B And Cost: 1
Edge: B--E And Cost: 2
Edge: A--C And Cost: 3
Edge: A--D And Cost: 4
Edge: E--F And Cost: 2
Edge: F--G And Cost: 3
Total Cost: 15 | [
{
"code": null,
"e": 1187,
"s": 1062,
"text": "It is similar to the previous algorithm. Here the only difference is, the Graph G(V, E) is represented by an adjacency list."
},
{
"code": null,
"e": 1248,
"s": 1187,
"text": "Time complexity adjacency list representation is O(E log V)."
},
{
"code": null,
"e": 1434,
"s": 1248,
"text": "Input:\nThe cost matrix:\n\nOutput:\nEdge: A--B And Cost: 1\nEdge: B--E And Cost: 2\nEdge: A--C And Cost: 3\nEdge: A--D And Cost: 4\nEdge: E--F And Cost: 2\nEdge: F--G And Cost: 3\nTotal Cost: 15"
},
{
"code": null,
"e": 1457,
"s": 1434,
"text": "prims(g: Graph, start)"
},
{
"code": null,
"e": 1512,
"s": 1457,
"text": "Input − The graph g and the seed vertex named ‘start’"
},
{
"code": null,
"e": 1550,
"s": 1512,
"text": "Output − The Tree after adding edges."
},
{
"code": null,
"e": 2197,
"s": 1550,
"text": "Begin\n create two set B, N\n add the start node in B set.\n\n for all vertices u in graph g do\n add u in the set N\n done\n\n while B ≠ N do\n min := ∞\n for all vertices u in graph g do\n if u is in the set B then\n for all vertices v which are adjacent with u do\n if v is in (N – B) then\n if min > cost of uv edge then\n min := cost of uv edge\n parent := u\n node := v\n done\n done\n\n insert node in the B set\n add the edge starting from parent to node in the tree\n done\n\n return the tree\nEnd"
},
{
"code": null,
"e": 5138,
"s": 2197,
"text": "#include<iostream>\n#include<list>\n#include<set>\nusing namespace std;\n\ntypedef struct nodes {\n int dest;\n int cost;\n}node;\n\nclass Graph {\n int n;\n list<node> *adjList;\n private:\n void showList(int src, list<node> lt) {\n list<node> :: iterator i;\n node tempNode;\n\n for(i = lt.begin(); i != lt.end(); i++) {\n tempNode = *i;\n cout << \"(\" << src << \")---(\"<<tempNode.dest << \"|\"<<tempNode.cost<<\") \";\n }\n cout << endl;\n }\n\n public:\n Graph() {\n n = 0;\n }\n\n Graph(int nodeCount) {\n n = nodeCount;\n adjList = new list<node>[n];\n }\n\n void addEdge(int source, int dest, int cost) {\n node newNode;\n newNode.dest = dest;\n newNode.cost = cost;\n adjList[source].push_back(newNode);\n }\n\n void displayEdges() {\n for(int i = 0; i<n; i++) {\n list<node> tempList = adjList[i];\n showList(i, tempList);\n }\n }\n\n friend Graph primsMST(Graph g, int start);\n};\n\nset<int> difference(set<int> first, set<int> second) {\n set<int> :: iterator it;\n set<int> res;\n\n for(it = first.begin(); it != first.end(); it++) {\n if(second.find(*it) == second.end())\n res.insert(*it); //add those item which are not in the second list\n }\n\n return res; //the set (first-second)\n}\n\nGraph primsMST(Graph g, int start) {\n int n = g.n;\n set<int> B, N, diff;\n Graph tree(n); //make tree with same node as graph\n B.insert(start); //insert start node in the B set\n\n for(int u = 0; u<n; u++) {\n N.insert(u); //add all vertices in the N set\n }\n\n while(B != N) {\n int min = 9999; //set as infinity\n int v, par;\n diff = difference(N, B); //find the set N - B\n\n for(int u = 0; u < n; u++) {\n if(B.find(u) != B.end()) {\n list<node>::iterator it;\n for(it = g.adjList[u].begin(); it != g.adjList[u].end(); it++) {\n if(diff.find(it->dest) != diff.end()) {\n if(min > it->cost) {\n min = it->cost; //update cost\n par = u;\n v = it->dest;\n }\n }\n }\n }\n }\n\n B.insert(v);\n tree.addEdge(par, v, min);\n tree.addEdge(v, par, min);\n }\n return tree;\n}\n\nmain() {\n Graph g(7), tree(7);\n g.addEdge(0, 1, 1);\n g.addEdge(0, 2, 3);\n g.addEdge(0, 3, 4);\n g.addEdge(0, 5, 5);\n g.addEdge(1, 0, 1);\n g.addEdge(1, 3, 7);\n g.addEdge(1, 4, 2);\n g.addEdge(2, 0, 3);\n g.addEdge(2, 4, 8);\n g.addEdge(3, 0, 4);\n g.addEdge(3, 1, 7);\n g.addEdge(4, 1, 2);\n g.addEdge(4, 2, 8);\n g.addEdge(4, 5, 2);\n g.addEdge(4, 6, 4);\n g.addEdge(5, 0, 5);\n g.addEdge(5, 4, 2);\n g.addEdge(5, 6, 3);\n g.addEdge(6, 4, 4);\n g.addEdge(6, 5, 3);\n\n tree = primsMST(g, 0);\n tree.displayEdges();\n}"
},
{
"code": null,
"e": 5291,
"s": 5138,
"text": "Edge: A--B And Cost: 1\nEdge: B--E And Cost: 2\nEdge: A--C And Cost: 3\nEdge: A--D And Cost: 4\nEdge: E--F And Cost: 2\nEdge: F--G And Cost: 3\nTotal Cost: 15"
}
]
|
Difference Between Iterator and Spliterator in Java | 15 Oct, 2020
The Java Iterator interface represents an object capable of iterating through a collection of Java objects, one object at a time. The Iterator interface is one of the oldest mechanisms in Java for iterating collections of objects (although not the oldest; Enumeration predated Iterator).
Moreover, an iterator differs from an enumeration in two ways:
1. Iterator permits the caller to remove elements from the underlying collection during the iteration.
2. Method names have been improved; for example, Enumeration's hasMoreElements() and nextElement() become Iterator's hasNext() and next().
Java
// Java program to illustrate Iterator interface

import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class JavaIteratorExample1 {
    public static void main(String[] args)
    {
        // create a linkedlist
        List<String> list = new LinkedList<>();

        // Add elements
        list.add("Welcome");
        list.add("to");
        list.add("our");
        list.add("website");

        // print the list to the console
        System.out.println("The list is given as : " + list);

        // call iterator on the list
        Iterator<String> itr = list.iterator();

        // itr.hasNext() returns true if there
        // is still an element next to the current
        // element pointed by iterator
        while (itr.hasNext()) {
            // Returns the next element.
            System.out.println(itr.next());
        }

        // Removes the last element.
        itr.remove();

        // print the list after removing an element
        System.out.println("After the remove() method is called : " + list);
    }
}
The list is given as : [Welcome, to, our, website]
Welcome
to
our
website
After the remove() method is called : [Welcome, to, our]
Like Iterator and ListIterator, Spliterator is a Java iterator, used to traverse elements one by one from an object that implements List.
The main functionalities of Spliterator are:
Splitting the source data
Processing the source data
The Interface Spliterator is included in JDK 8 for taking the advantages of parallelism in addition to sequential traversal. It is designed as a parallel analogue of an iterator.
Java
// Java program to illustrate a Spliterator

import java.util.*;
import java.util.stream.Stream;

public class InterfaceSpliteratorExample {
    public static void main(String args[])
    {
        // Create an object of array list
        ArrayList<Integer> list = new ArrayList<>();

        // Add elements to the array list
        list.add(101);
        list.add(201);
        list.add(301);
        list.add(401);
        list.add(501);

        // create a stream on the list
        Stream<Integer> str = list.stream();

        // Get Spliterator object on stream
        Spliterator<Integer> splitr = str.spliterator();

        // Get size of the list
        // encountered by the
        // forEachRemaining method
        System.out.println("Estimate size: " + splitr.estimateSize());

        // Print getExactSizeIfKnown
        // returns exact size if finite
        // or return -1
        System.out.println("Exact size: " + splitr.getExactSizeIfKnown());

        // Check if the Spliterator has all
        // the characteristics
        System.out.println("Boolean Result: "
                           + splitr.hasCharacteristics(splitr.characteristics()));

        System.out.println("Elements of ArrayList :");

        // print elements using forEachRemaining
        splitr.forEachRemaining((n) -> System.out.println(n));

        // Obtaining another Stream to the array list.
        Stream<Integer> str1 = list.stream();
        splitr = str1.spliterator();

        // Obtain spliterator using trySplit() method
        Spliterator<Integer> splitr2 = splitr.trySplit();

        // If splitr can be partitioned, use splitr2 first.
        if (splitr2 != null) {
            System.out.println("Output from splitr2: ");
            splitr2.forEachRemaining((n) -> System.out.println(n));
        }

        // Now, use the splitr
        System.out.println("Output from splitr1: ");
        splitr.forEachRemaining((n) -> System.out.println(n));
    }
}
Estimate size: 5
Exact size: 5
Boolean Result: true
Elements of ArrayList :
101
201
301
401
501
Output from splitr2:
101
201
Output from splitr1:
301
401
501
Difference between Iterator and Spliterator in Java:

Iterator                                          Spliterator
Introduced in Java 1.2                            Introduced in Java 8
Supports only sequential iteration                Supports both sequential and parallel iteration
Traverses Collection API sources                  Traverses both Collection and Stream sources
Cannot be partitioned                             Can be partitioned with trySplit()
Pulls elements with hasNext()/next()              Pushes elements with tryAdvance()/forEachRemaining()
[
{
"code": null,
"e": 24858,
"s": 24830,
"text": "\n15 Oct, 2020"
},
{
"code": null,
"e": 25146,
"s": 24858,
"text": "The Java Iterator interface represents an object capable of iterating through a collection of Java objects, one object at a time. The Iterator interface is one of the oldest mechanisms in Java for iterating collections of objects (although not the oldest — Enumerator predated Iterator)."
},
{
"code": null,
"e": 25211,
"s": 25146,
"text": "Moreover, an iterator differs from the enumerations in two ways:"
},
{
"code": null,
"e": 25339,
"s": 25211,
"text": "1. Iterator permits the caller to remove the given elements from the specified collection during the iteration of the elements."
},
{
"code": null,
"e": 25375,
"s": 25339,
"text": "2. Method names have been enhanced."
},
{
"code": null,
"e": 25380,
"s": 25375,
"text": "Java"
},
{
"code": "// Java program to illustrate Iterator interface import java.util.Iterator;import java.util.LinkedList;import java.util.List;public class JavaIteratorExample1 { public static void main(String[] args) { // create a linkedlist List<String> list = new LinkedList<>(); // Add elements list.add(\"Welcome\"); list.add(\"to\"); list.add(\"our\"); list.add(\"website\"); // print the list to the console System.out.println(\"The list is given as : \" + list); // call iterator on the list Iterator<String> itr = list.iterator(); // itr.hasNext() returns true if there // is still an element next to the current // element pointed by iterator while (itr.hasNext()) { // Returns the next element. System.out.println(itr.next()); } // Removes the last element. itr.remove(); // print the list after removing an // element System.out.println( \"After the remove() method is called : \" + list); }}",
"e": 26498,
"s": 25380,
"text": null
},
{
"code": null,
"e": 26631,
"s": 26498,
"text": "The list is given as : [Welcome, to, our, website]\nWelcome\nto\nour\nwebsite\nAfter the remove() method is called : [Welcome, to, our]\n\n"
},
{
"code": null,
"e": 26772,
"s": 26631,
"text": "Like Iterator and ListIterator, Spliterator is a Java Iterator, which is used to iterate elements one-by-one from a List implemented object."
},
{
"code": null,
"e": 26818,
"s": 26772,
"text": "The main functionalities of Spliterator are: "
},
{
"code": null,
"e": 26844,
"s": 26818,
"text": "Splitting the source data"
},
{
"code": null,
"e": 26871,
"s": 26844,
"text": "Processing the source data"
},
{
"code": null,
"e": 27050,
"s": 26871,
"text": "The Interface Spliterator is included in JDK 8 for taking the advantages of parallelism in addition to sequential traversal. It is designed as a parallel analogue of an iterator."
},
{
"code": null,
"e": 27055,
"s": 27050,
"text": "Java"
},
{
"code": "// Java program to illustrate a Spliterator import java.util.*;import java.util.stream.Stream; public class InterfaceSpliteratorExample { public static void main(String args[]) { // Create an object of array list ArrayList<Integer> list = new ArrayList<>(); // Add elements to the array list list.add(101); list.add(201); list.add(301); list.add(401); list.add(501); // create a stream on the list Stream<Integer> str = list.stream(); // Get Spliterator object on stream Spliterator<Integer> splitr = str.spliterator(); // Get size of the list // encountered by the // forEachRemaining method System.out.println(\"Estimate size: \" + splitr.estimateSize()); // Print getExactSizeIfKnown // returns exact size if finite // or return -1 System.out.println(\"Exact size: \" + splitr.getExactSizeIfKnown()); // Check if the Spliterator has all // the characteristics System.out.println(\"Boolean Result: \" + splitr.hasCharacteristics( splitr.characteristics())); System.out.println(\"Elements of ArrayList :\"); // print elements using forEachRemaining splitr.forEachRemaining( (n) -> System.out.println(n)); // Obtaining another Stream to the array list. Stream<Integer> str1 = list.stream(); splitr = str1.spliterator(); // Obtain spliterator using trySplit() method Spliterator<Integer> splitr2 = splitr.trySplit(); // If splitr can be partitioned use splitr2 first. if (splitr2 != null) { System.out.println(\"Output from splitr2: \"); splitr2.forEachRemaining( (n) -> System.out.println(n)); } // Now, use the splitr System.out.println(\"Output from splitr1: \"); splitr.forEachRemaining( (n) -> System.out.println(n)); }}",
"e": 29143,
"s": 27055,
"text": null
},
{
"code": null,
"e": 29305,
"s": 29143,
"text": "Estimate size: 5\nExact size: 5\nBoolean Result: true\nElements of ArrayList :\n101\n201\n301\n401\n501\nOutput from splitr2: \n101\n201\nOutput from splitr1: \n301\n401\n501\n\n"
},
{
"code": null,
"e": 29359,
"s": 29305,
"text": "Difference between Iterator and Spliterator in java :"
},
{
"code": null,
"e": 29369,
"s": 29359,
"text": "Iterator "
},
{
"code": null,
"e": 29381,
"s": 29369,
"text": "Spliterator"
}
]
|
Databricks Delta Lake — Database on top of a Data Lake — Part 1 | by Manoj Kukreja | Going back 8 years, I still remember the days when I was adopting Big Data frameworks like Hadoop and Spark. Coming from a database background, this adoption was challenging for many reasons. The most challenging was the lack of database-like transactions in Big Data frameworks. To cover for this missing functionality, we had to develop several routines that performed the necessary checks and measures. However, the process was cumbersome, time-consuming and frankly error-prone.
Another issue that used to keep me awake at night was the dreaded Change Data Capture (CDC). Databases have a convenient way of updating records and showing the latest state of the record to the user. On the other hand, in Big Data we ingest data and store it as files. Therefore, the daily delta ingestion may contain a combination of newly inserted, updated or deleted data. This means we end up storing the same row multiple times in the Data Lake. This creates two problems:
Duplicate Data — In some cases the same row exists more than once (updated and deleted data)
Data Analytics — Unless data is already de-duplicated, users get confused when they see multiple instances of the same row
So how did we manage to deal with this situation until now:
Day 1 — Ingest the full data set — complete tables
Day 2-Day n — Make sure the incremental data (delta) is delivered with a Record Update Timestamp and the Mode (Insert/Update/Delete)
After data has been ingested in the raw zone, run a Hive/MapReduce/Spark curation job that merges the incremental data with the full data.
At this stage, the job removes duplicates using window functions such as RANK() OVER (PARTITION BY the primary key ORDER BY the record update timestamp DESC), as shown in the sketch after this list.
Filtering rows with RANK=1 gives us the most recently updated version of each row
Drop rows with Mode=Delete
Save the above data set to the curation zone; this data is then shared with the user community for further analysis downstream
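Here is a minimal PySpark sketch of that rank-and-filter curation step. The column names (product_id, updated_at, mode) and the HDFS paths are hypothetical, chosen only for illustration:

from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("cdc_curation").getOrCreate()

# Hypothetical inputs: the existing full snapshot plus today's delta, both with
# a primary key (product_id), a record update timestamp and a change mode.
full_df  = spark.read.parquet("hdfs:///raw/products_full")
delta_df = spark.read.parquet("hdfs:///raw/products_delta")
merged   = full_df.unionByName(delta_df)

# Rank each row within its primary key, most recent update first
w = Window.partitionBy("product_id").orderBy(F.col("updated_at").desc())
curated = (merged.withColumn("rnk", F.rank().over(w))
                 .filter("rnk = 1")            # keep the most recent version
                 .filter("mode <> 'Delete'")   # drop rows flagged as deleted
                 .drop("rnk"))

curated.write.mode("overwrite").parquet("hdfs:///curated/products")

Delta Lake's MERGE, shown later in this article, replaces this whole pattern.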
As you can see, the above process is pretty involved. A better method is warranted.
Developed by Databricks, Delta Lake brings ACID transaction support for your data lakes for both batch and streaming operations. Delta Lake is an open-source storage layer for big data workloads over HDFS, AWS S3, Azure Data Lake Storage or Google Cloud Storage.
Delta Lake packs in a lot of cool features useful for Data Engineers. Let's explore a few of these features in a two-part series:
Part 1: ACID Transactions, Checkpoints, Transaction Log & Time Travel
Part 2: Vacuum, Schema Evolution, History
Use Case: An eCommerce website sells products from several vendors. Each vendor sends the latest prices for its products on a daily basis.
The eCommerce company wants to adjust the pricing information on their website based on the latest prices sent by each vendor every day. Additionally, they want to track prices over time for their ML models. A sample of the data files received each day is included in the repository cloned below.
Let's start by downloading the data:
$ git clone https://github.com/mkukreja1/blogs.git
Assume today is Aug 20 and you received the file — products_aug20.csv
Save the data file to HDFS
$ hadoop fs -mkdir -p /delta_lake/raw
$ hadoop fs -put blogs/delta-lake/products_aug20.csv /delta_lake/raw
The complete notebook is available at /delta_lake/delta_lake-demo-1.ipynb. Let me run through each step below with explanations:
Start the Spark session first with the Delta Lake package and then import the Python APIs
from pyspark.sql import SparkSession
import pyspark
from pyspark.sql.functions import *

spark = pyspark.sql.SparkSession.builder.appName("Product_Price_Tracking") \
  .config("spark.jars.packages", "io.delta:delta-core_2.12:0.7.0") \
  .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
  .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog") \
  .getOrCreate()

from delta.tables import *
Create a Spark DataFrame using the recently received data — products_aug20.csv
df_productsaug20 = spark.read.csv('hdfs:///delta_lake/raw/products_aug20.csv', header=True, inferSchema=True)
df_productsaug20.show()

+---------+----------+-----+
|ProductID|      Date|Price|
+---------+----------+-----+
|      200|2020-08-20| 20.5|
|      210|2020-08-20| 45.0|
|      220|2020-08-20|34.56|
|      230|2020-08-20|23.67|
|      240|2020-08-20|89.76|
+---------+----------+-----+
Now let's store the data in the Delta Lake. Delta Lake uses versioned Parquet files to store your data in your cloud storage. Additionally, a transaction log is maintained to keep track of data changes over time.
df_productsaug20.write.format("delta").option("path", "hdfs:///delta_lake/products").saveAsTable("products")
Let's see how the data was stored in HDFS. Take note of the _delta_log directory that stores the changes over time.
Each commit = 1 JSON file starting with 00000000000000000000.json
Every 10 commits, a checkpoint is performed that combines previous JSON files into a parquet file.
$ hadoop fs -ls /delta_lake/products
Found 2 items
drwxr-xr-x   - mkukreja supergroup          0 2020-08-26 20:43 /delta_lake/products/_delta_log
-rw-r--r--   2 mkukreja supergroup       1027 2020-08-26 20:43 /delta_lake/products/part-00000-37f5ec8d-5e21-4a01-9f19-7e9942196ef6-c000.snappy.parquet

$ hadoop fs -cat /delta_lake/products/_delta_log/00000000000000000000.json
2020-08-26 20:44:42,159 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
{"commitInfo":{"timestamp":1598489011963,"operation":"CREATE TABLE AS SELECT","operationParameters":{"isManaged":"false","description":null,"partitionBy":"[]","properties":"{}"},"isBlindAppend":true,"operationMetrics":{"numFiles":"1","numOutputBytes":"1027","numOutputRows":"5"}}}
{"protocol":{"minReaderVersion":1,"minWriterVersion":2}}
{"metaData":{"id":"7788c86b-ae7e-47be-ac43-76c1f3f0506f","format":{"provider":"parquet","options":{}},"schemaString":"{\"type\":\"struct\",\"fields\":[{\"name\":\"ProductID\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Date\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Price\",\"type\":\"double\",\"nullable\":true,\"metadata\":{}}]}","partitionColumns":[],"configuration":{},"createdTime":1598489010883}}
{"add":{"path":"part-00000-37f5ec8d-5e21-4a01-9f19-7e9942196ef6-c000.snappy.parquet","partitionValues":{},"size":1027,"modificationTime":1598489011874,"dataChange":true}}
This is how we can query recently saved data in Delta Lake using Spark SQL
spark.sql('SELECT * FROM products').show()

+---------+----------+-----+
|ProductID|      Date|Price|
+---------+----------+-----+
|      200|2020-08-20| 20.5|
|      210|2020-08-20| 45.0|
|      220|2020-08-20|34.56|
|      230|2020-08-20|23.67|
|      240|2020-08-20|89.76|
+---------+----------+-----+
You can query previous snapshots of your Delta table by using time travel. If you want to access the data before it was overwritten, you can query a snapshot of the table using the versionAsOf option.
deltaTable.update("ProductID = '200'", { "Price": "'48.00'" } )df = spark.read.format("delta").option("versionAsOf", 1).load("hdfs:///delta_lake/products")df.show()# Notice the value of Price for ProductID=200 has changed in version 1 of the table+---------+----------+-----+|ProductID| Date|Price|+---------+----------+-----+| 200|2020-08-20| 48.0| | 210|2020-08-20| 45.0|| 220|2020-08-20|34.56|| 230|2020-08-20|23.67|| 240|2020-08-20|89.76|+---------+----------+-----+df = spark.read.format("delta").option("versionAsOf", 0).load("hdfs:///delta_lake/products")df.show()# Notice the value of Price for ProductID=200 is the older snapshot in version 0+---------+----------+-----+|ProductID| Date|Price|+---------+----------+-----+| 200|2020-08-20| 20.5|| 210|2020-08-20| 45.0|| 220|2020-08-20|34.56|| 230|2020-08-20|23.67|| 240|2020-08-20|89.76|+---------+----------+-----+
Let's perform another DML operation, this time deleting ProductID=210.
deltaTable.delete("ProductID = 210") df = spark.read.format("delta").option("versionAsOf", 2).load("hdfs:///delta_lake/products")df.show()# Notice the value of Price for ProductID=210 is missing in Version 2+---------+----------+-----+|ProductID| Date|Price|+---------+----------+-----+| 200|2020-08-20| 48.0|| 220|2020-08-20|34.56|| 230|2020-08-20|23.67|| 240|2020-08-20|89.76|+---------+----------+-----+
Notice that the transaction log has progressed, with one log file per transaction.
$ hadoop fs -ls /delta_lake/products/_delta_log
Found 3 items
-rw-r--r--   2 mkukreja supergroup     912 2020-08-21 13:14 /delta_lake/products/_delta_log/00000000000000000000.json
-rw-r--r--   2 mkukreja supergroup     579 2020-08-24 11:03 /delta_lake/products/_delta_log/00000000000000000001.json
-rw-r--r--   2 mkukreja supergroup     592 2020-08-24 11:14 /delta_lake/products/_delta_log/00000000000000000002.json
Pay special attention to the operation attribute in the JSON of the transaction logs
$ hadoop fs -cat /delta_lake/products/_delta_log/*.json
{"commitInfo":{"timestamp":1598489978902,"operation":"CREATE TABLE AS SELECT","operationParameters":{"isManaged":"false","description":null,"partitionBy":"[]","properties":"{}"},"isBlindAppend":true,"operationMetrics":{"numFiles":"1","numOutputBytes":"1027","numOutputRows":"5"}}}
{"protocol":{"minReaderVersion":1,"minWriterVersion":2}}
{"metaData":{"id":"47d211fc-7148-4c85-aa80-7d9aa8f0b7a2","format":{"provider":"parquet","options":{}},"schemaString":"{\"type\":\"struct\",\"fields\":[{\"name\":\"ProductID\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Date\",\"type\":\"string\",\"nullable\":true,\"metadata\":{}},{\"name\":\"Price\",\"type\":\"double\",\"nullable\":true,\"metadata\":{}}]}","partitionColumns":[],"configuration":{},"createdTime":1598489977907}}
{"add":{"path":"part-00000-8c43a47a-02bf-4bc2-a3be-aaabe9c409bd-c000.snappy.parquet","partitionValues":{},"size":1027,"modificationTime":1598489978816,"dataChange":true}}
2020-08-26 21:07:41,120 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
{"commitInfo":{"timestamp":1598490235896,"operation":"UPDATE","operationParameters":{"predicate":"(ProductID#609 = 200)"},"readVersion":0,"isBlindAppend":false,"operationMetrics":{"numRemovedFiles":"1","numAddedFiles":"1","numUpdatedRows":"1","numCopiedRows":"4"}}}
{"remove":{"path":"part-00000-8c43a47a-02bf-4bc2-a3be-aaabe9c409bd-c000.snappy.parquet","deletionTimestamp":1598490235237,"dataChange":true}}
{"add":{"path":"part-00000-272c0f65-433e-4901-83fd-70b78667ede0-c000.snappy.parquet","partitionValues":{},"size":1025,"modificationTime":1598490235886,"dataChange":true}}
2020-08-26 21:07:41,123 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
{"commitInfo":{"timestamp":1598490393953,"operation":"DELETE","operationParameters":{"predicate":"[\"(`ProductID` = 210)\"]"},"readVersion":1,"isBlindAppend":false,"operationMetrics":{"numRemovedFiles":"1","numDeletedRows":"1","numAddedFiles":"1","numCopiedRows":"4"}}}
{"remove":{"path":"part-00000-272c0f65-433e-4901-83fd-70b78667ede0-c000.snappy.parquet","deletionTimestamp":1598490393950,"dataChange":true}}
{"add":{"path":"part-00000-73a381e1-fa68-4323-81b9-c42dea484542-c000.snappy.parquet","partitionValues":{},"size":1015,"modificationTime":1598490393946,"dataChange":true}}
It is the next day now and you have received a new data file. Let's merge the new data set from the next day (products_aug21.csv) into the Delta Lake.
$ hadoop fs -put csv/products_aug21.csv /delta_lake/raw

$ hadoop fs -ls /delta_lake/raw
Found 2 items
-rw-r--r--   2 mkukreja supergroup     132 2020-08-21 13:00 /delta_lake/raw/products_aug20.csv
-rw-r--r--   2 mkukreja supergroup     220 2020-08-24 11:33 /delta_lake/raw/products_aug21.csv
Perform an Upsert operation. This means that if the data in the new file (products_aug21.csv) matches (based on the join condition on ProductID) any existing data in the Delta Lake, the price is updated; otherwise the new row is inserted.
df_productsaug21 = spark.read.csv('hdfs:///delta_lake/raw/products_aug21.csv', header=True, inferSchema=True)
df_productsaug21.show()

deltaTable.alias("products").merge(
    df_productsaug21.alias("products_new"),
    "products.ProductID = products_new.ProductID") \
  .whenMatchedUpdate(set = { "Price" : "products_new.Price" } ) \
  .whenNotMatchedInsert(values = {
      "ProductID": "products_new.ProductID",
      "Date": "products_new.Date",
      "Price": "products_new.Price" } ) \
  .execute()
Check the latest version of the table after the upsert. You may notice that existing products such as ProductID=200 and 240 went through the whenMatchedUpdate operation, whereas the new ProductID=250 to 280 rows (and ProductID=210, which was deleted earlier) went through the whenNotMatchedInsert operation.
spark.table("products").show()+---------+----------+-----+|ProductID| Date|Price|+---------+----------+-----+| 230|2020-08-20|23.67|| 210|2020-08-21| 46.0|| 250|2020-08-21|89.76|| 220|2020-08-20|34.56|| 240|2020-08-20|90.82|| 200|2020-08-20| 25.5|| 260|2020-08-21|54.55|| 280|2020-08-21|44.78|| 270|2020-08-21|96.32|+---------+----------+-----+
Every 10 commits, Delta Lake saves a checkpoint file in Parquet format in the _delta_log sub-directory. Since the checkpoint file is in Parquet format, it allows Spark to perform a faster read compared to the inefficient JSON files.
Let's see how this happens by updating a few more rows:
deltaTable.update("ProductID = '230'", { "Price": "'33.67'" } )deltaTable.update("ProductID = '210'", { "Price": "'56.00'" } )deltaTable.update("ProductID = '250'", { "Price": "'99.76'" } )deltaTable.update("ProductID = '220'", { "Price": "'44.56'" } )deltaTable.update("ProductID = '240'", { "Price": "'100.82'" } )deltaTable.update("ProductID = '200'", { "Price": "'35.5'" } )deltaTable.update("ProductID = '260'", { "Price": "'64.55'" } )deltaTable.update("ProductID = '280'", { "Price": "'54.78'" } )deltaTable.update("ProductID = '270'", { "Price": "'106.32'" } )
Check the _delta_log sub-directory. After 10 commits, a 00000000000000000010.checkpoint.parquet file has been created. Checkpoint files save the state of the table at a point in time in Parquet format so that Spark can retrieve the history very efficiently.
$ hadoop fs -ls /delta_lake/products/_delta_log
Found 15 items
-rw-r--r--   2 mkukreja supergroup     912 2020-08-21 13:14 /delta_lake/products/_delta_log/00000000000000000000.json
-rw-r--r--   2 mkukreja supergroup     579 2020-08-24 11:03 /delta_lake/products/_delta_log/00000000000000000001.json
-rw-r--r--   2 mkukreja supergroup     592 2020-08-24 11:14 /delta_lake/products/_delta_log/00000000000000000002.json
-rw-r--r--   2 mkukreja supergroup    2255 2020-08-24 11:39 /delta_lake/products/_delta_log/00000000000000000003.json
-rw-r--r--   2 mkukreja supergroup     578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000004.json
-rw-r--r--   2 mkukreja supergroup     578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000005.json
-rw-r--r--   2 mkukreja supergroup     578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000006.json
-rw-r--r--   2 mkukreja supergroup     578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000007.json
-rw-r--r--   2 mkukreja supergroup     578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000008.json
-rw-r--r--   2 mkukreja supergroup     578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000009.json
-rw-r--r--   2 mkukreja supergroup   14756 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000010.checkpoint.parquet
-rw-r--r--   2 mkukreja supergroup     578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000010.json
-rw-r--r--   2 mkukreja supergroup     579 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000011.json
-rw-r--r--   2 mkukreja supergroup     579 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000012.json
-rw-r--r--   2 mkukreja supergroup      25 2020-08-24 12:05 /delta_lake/products/_delta_log/_last_checkpoint
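Since a checkpoint is ordinary Parquet, you can peek inside it with a plain Spark read. A quick hedged sketch: the path comes from the listing above, and the column names reflect the Delta log schema.

# The checkpoint is ordinary Parquet, so Spark can read it directly
chk = spark.read.parquet("hdfs:///delta_lake/products/_delta_log/00000000000000000010.checkpoint.parquet")
chk.printSchema()   # top-level columns such as txn, add, remove, metaData, protocol, commitInfo
chk.select("add.path", "add.size").where("add is not null").show(truncate=False)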
Lots of good stuff. Having worked in the Data Engineering and Data Science world for several years, I can safely tell you that these features are nothing short of a life-saver. No more jumping through hoops to store and display the latest change data sets from the databases.
In the next part, we will deep dive into some advanced topics of Delta Lake including Partitioning, Schema Evolution, Data Lineage & Vacuum.
I hope this article was helpful. Delta Lake is covered as part of the Big Data Hadoop, Spark & Kafka course offered by Datafence Cloud Academy. The course is taught online by me on weekends.
{
"code": null,
"e": 654,
"s": 172,
"text": "Going back 8 years, I still remember the days when I was adopting Big Data frameworks like Hadoop and Spark. Coming from a database background this adaptation was challenging for many reasons. The most challenging was the lack of database like transactions in Big Data frameworks. To cover for this missing functionality we had to develop several routines the performed the necessary checks and measures. However, the process was cumbersome, time-consuming and frankly error-prone."
},
{
"code": null,
"e": 1133,
"s": 654,
"text": "Another issue that use to keep me awake at night was the dreaded Change Data Capture (CDC). Databases have a convenient way of updating records and showing the latest state of the record to the user. On the other hand in Big Data we ingest data and store them as files. Therefore, the daily delta ingestion may contain a combination of newly inserted, updated or deleted data. This means we end up storing the same row multiple times in the Data Lake. This creates two problems:"
},
{
"code": null,
"e": 1348,
"s": 1133,
"text": "Duplicate Data — In some cases the same row exists more than once (updated and deleted data)Data Analytics — Unless data is already de-duplicated, users get confused when they see multiple instances of the same row"
},
{
"code": null,
"e": 1441,
"s": 1348,
"text": "Duplicate Data — In some cases the same row exists more than once (updated and deleted data)"
},
{
"code": null,
"e": 1564,
"s": 1441,
"text": "Data Analytics — Unless data is already de-duplicated, users get confused when they see multiple instances of the same row"
},
{
"code": null,
"e": 1626,
"s": 1564,
"text": "So how did we manage to deal with this situation up till now:"
},
{
"code": null,
"e": 1675,
"s": 1626,
"text": "Day — Ingest the full data set — complete tables"
},
{
"code": null,
"e": 1808,
"s": 1675,
"text": "Day 2-Day n — Make sure the incremental data (delta) is delivered with a Record Update Timestamp and the Mode (Insert/Update/Delete)"
},
{
"code": null,
"e": 1942,
"s": 1808,
"text": "After data has been ingested in the raw zone run a Hive/Mapreduce/Spark curation job that merges incremental data with the full data."
},
{
"code": null,
"e": 2092,
"s": 1942,
"text": "However, at this stage, it cleverly uses remove duplicates using functions like RANK() OVER PARTITION of PRIMARY KEY and Record Update Timestamp DESC"
},
{
"code": null,
"e": 2155,
"s": 2092,
"text": "Filter rows with RANK=1 gives us the most recently updated row"
},
{
"code": null,
"e": 2182,
"s": 2155,
"text": "Drop rows with Mode=Delete"
},
{
"code": null,
"e": 2305,
"s": 2182,
"text": "Save above data set to the curation zone — This data is then shared with the user community for further analysis down line"
},
{
"code": null,
"e": 2385,
"s": 2305,
"text": "As you see the above process is pretty involving. A better method is warranted."
},
{
"code": null,
"e": 2648,
"s": 2385,
"text": "Developed by Databricks, Delta Lake brings ACID transaction support for your data lakes for both batch and streaming operations. Delta Lake is an open-source storage layer for big data workloads over HDFS, AWS S3, Azure Data Lake Storage or Google Cloud Storage."
},
{
"code": null,
"e": 2777,
"s": 2648,
"text": "Delta Lake packs in a lot of cool features useful for Data Engineers. Lets explore a few of these features in a two part series:"
},
{
"code": null,
"e": 2847,
"s": 2777,
"text": "Part 1: ACID Transactions, Checkpoints, Transaction Log & Time Travel"
},
{
"code": null,
"e": 2890,
"s": 2847,
"text": "Part 2 : Vaccum, Schema Evolution, History"
},
{
"code": null,
"e": 3025,
"s": 2890,
"text": "Use Case: An eCommerce website sells products from several vendors. Each vendor sends latest prices for its products on a daily basis."
},
{
"code": null,
"e": 3288,
"s": 3025,
"text": "The eCommerce company wants to adjust the pricing information on their website based on latest prices sent by the vendor each day. Additionally, they want to track prices over time for their ML models. On the left is a sample of the data files received each day."
},
{
"code": null,
"e": 3320,
"s": 3288,
"text": "Lets start by downloading data:"
},
{
"code": null,
"e": 3371,
"s": 3320,
"text": "$ git clone https://github.com/mkukreja1/blogs.git"
},
{
"code": null,
"e": 3440,
"s": 3371,
"text": "Assume today is Aug20 and you received the file — products_aug20.csv"
},
{
"code": null,
"e": 3467,
"s": 3440,
"text": "Save the data file to HDFS"
},
{
"code": null,
"e": 3573,
"s": 3467,
"text": "$ hadoop fs -mkdir -p /delta_lake/raw$ hadoop fs -put blogs/delta-lake/products_aug20.csv /delta_lake/raw"
},
{
"code": null,
"e": 3703,
"s": 3573,
"text": "The complete notebook is available at /delta_lake/delta_lake-demo-1.ipynb . Let me run through each step below with explanations:"
},
{
"code": null,
"e": 3793,
"s": 3703,
"text": "Start the Spark session first with the Delta Lake package and then import the Python APIs"
},
{
"code": null,
"e": 4244,
"s": 3793,
"text": "from pyspark.sql import SparkSessionimport pysparkfrom pyspark.sql.functions import *spark = pyspark.sql.SparkSession.builder.appName(\"Product_Price_Tracking\") \\ .config(\"spark.jars.packages\", \"io.delta:delta-core_2.12:0.7.0\") \\ .config(\"spark.sql.extensions\", \"io.delta.sql.DeltaSparkSessionExtension\") \\ .config(\"spark.sql.catalog.spark_catalog\", \"org.apache.spark.sql.delta.catalog.DeltaCatalog\") \\ .getOrCreate()from delta.tables import *"
},
{
"code": null,
"e": 4323,
"s": 4244,
"text": "Create a Spark DataFrame using the recently received data — products_aug20.csv"
},
{
"code": null,
"e": 4708,
"s": 4323,
"text": "df_productsaug20 = spark.read.csv('hdfs:///delta_lake/raw/products_aug20.csv', header=True, inferSchema=True)df_productsaug20.show()+---------+----------+-----+|ProductID| Date|Price|+---------+----------+-----+| 200|2020-08-20| 20.5|| 210|2020-08-20| 45.0|| 220|2020-08-20|34.56|| 230|2020-08-20|23.67|| 240|2020-08-20|89.76|+---------+----------+-----+"
},
{
"code": null,
"e": 4912,
"s": 4708,
"text": "Now lets store data in the Delta Lake. Delta Lake uses versioned Parquet files to store your data in your cloud storage. Additionally, a transaction log is stored to keep track of data changes over time."
},
{
"code": null,
"e": 5021,
"s": 4912,
"text": "df_productsaug20.write.format(\"delta\").option(\"path\", \"hdfs:///delta_lake/products\").saveAsTable(\"products\")"
},
{
"code": null,
"e": 5136,
"s": 5021,
"text": "Lets see how the date was stored in HDFS. Take note of the _delta_log directory that stores the changes over time."
},
{
"code": null,
"e": 5202,
"s": 5136,
"text": "Each commit = 1 JSON file starting with 00000000000000000000.json"
},
{
"code": null,
"e": 5301,
"s": 5202,
"text": "Every 10 commits, a checkpoint is performed that combines previous JSON files into a parquet file."
},
{
"code": null,
"e": 7938,
"s": 5301,
"text": "$ hadoop fs -ls /delta_lake/productsFound 2 itemsdrwxr-xr-x - mkukreja supergroup 0 2020-08-26 20:43 /delta_lake/products/_delta_log-rw-r--r-- 2 mkukreja supergroup 1027 2020-08-26 20:43 /delta_lake/products/part-00000-37f5ec8d-5e21-4a01-9f19-7e9942196ef6-c000.snappy.parquet$ hadoop fs -cat /delta_lake/products/_delta_log/00000000000000000000.json2020-08-26 20:44:42,159 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false{\"commitInfo\":{\"timestamp\":1598489011963,\"operation\":\"CREATE TABLE AS SELECT\",\"operationParameters\":{\"isManaged\":\"false\",\"description\":null,\"partitionBy\":\"[]\",\"properties\":\"{}\"},\"isBlindAppend\":true,\"operationMetrics\":{\"numFiles\":\"1\",\"numOutputBytes\":\"1027\",\"numOutputRows\":\"5\"}}}{\"protocol\":{\"minReaderVersion\":1,\"minWriterVersion\":2}}{\"metaData\":{\"id\":\"7788c86b-ae7e-47be-ac43-76c1f3f0506f\",\"format\":{\"provider\":\"parquet\",\"options\":{}},\"schemaString\":\"{\\\"type\\\":\\\"struct\\\",\\\"fields\\\":[{\\\"name\\\":\\\"ProductID\\\",\\\"type\\\":\\\"integer\\\",\\\"nullable\\\":true,\\\"metadata\\\":{}},{\\\"name\\\":\\\"Date\\\",\\\"type\\\":\\\"string\\\",\\\"nullable\\\":true,\\\"metadata\\\":{}},{\\\"name\\\":\\\"Price\\\",\\\"type\\\":\\\"double\\\",\\\"nullable\\\":true,\\\"metadata\\\":{}}]}\",\"partitionColumns\":[],\"configuration\":{},\"createdTime\":1598489010883}}{\"add\":{\"path\":\"part-00000-37f5ec8d-5e21-4a01-9f19-7e9942196ef6-c000.snappy.parquet\",\"partitionValues\":{},\"size\":1027,\"modificationTime\":1598489011874,\"dataChange\":true}}$ hadoop fs -cat /delta_lake/products/_delta_log/00000000000000000000.json2020-08-26 20:44:42,159 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false{\"commitInfo\":{\"timestamp\":1598489011963,\"operation\":\"CREATE TABLE AS SELECT\",\"operationParameters\":{\"isManaged\":\"false\",\"description\":null,\"partitionBy\":\"[]\",\"properties\":\"{}\"},\"isBlindAppend\":true,\"operationMetrics\":{\"numFiles\":\"1\",\"numOutputBytes\":\"1027\",\"numOutputRows\":\"5\"}}}{\"protocol\":{\"minReaderVersion\":1,\"minWriterVersion\":2}}{\"metaData\":{\"id\":\"7788c86b-ae7e-47be-ac43-76c1f3f0506f\",\"format\":{\"provider\":\"parquet\",\"options\":{}},\"schemaString\":\"{\\\"type\\\":\\\"struct\\\",\\\"fields\\\":[{\\\"name\\\":\\\"ProductID\\\",\\\"type\\\":\\\"integer\\\",\\\"nullable\\\":true,\\\"metadata\\\":{}},{\\\"name\\\":\\\"Date\\\",\\\"type\\\":\\\"string\\\",\\\"nullable\\\":true,\\\"metadata\\\":{}},{\\\"name\\\":\\\"Price\\\",\\\"type\\\":\\\"double\\\",\\\"nullable\\\":true,\\\"metadata\\\":{}}]}\",\"partitionColumns\":[],\"configuration\":{},\"createdTime\":1598489010883}}{\"add\":{\"path\":\"part-00000-37f5ec8d-5e21-4a01-9f19-7e9942196ef6-c000.snappy.parquet\",\"partitionValues\":{},\"size\":1027,\"modificationTime\":1598489011874,\"dataChange\":true}}"
},
{
"code": null,
"e": 8013,
"s": 7938,
"text": "This is how we can query recently saved data in Delta Lake using Spark SQL"
},
{
"code": null,
"e": 8308,
"s": 8013,
"text": "spark.sql('SELECT * FROM products').show()+---------+----------+-----+|ProductID| Date|Price|+---------+----------+-----+| 200|2020-08-20| 20.5|| 210|2020-08-20| 45.0|| 220|2020-08-20|34.56|| 230|2020-08-20|23.67|| 240|2020-08-20|89.76|+---------+----------+-----+"
},
{
"code": null,
"e": 8509,
"s": 8308,
"text": "You can query previous snapshots of your Delta table by using time travel. If you want to access the data before it was overwritten, you can query a snapshot of the table using the versionAsOf option."
},
{
"code": null,
"e": 9444,
"s": 8509,
"text": "deltaTable.update(\"ProductID = '200'\", { \"Price\": \"'48.00'\" } )df = spark.read.format(\"delta\").option(\"versionAsOf\", 1).load(\"hdfs:///delta_lake/products\")df.show()# Notice the value of Price for ProductID=200 has changed in version 1 of the table+---------+----------+-----+|ProductID| Date|Price|+---------+----------+-----+| 200|2020-08-20| 48.0| | 210|2020-08-20| 45.0|| 220|2020-08-20|34.56|| 230|2020-08-20|23.67|| 240|2020-08-20|89.76|+---------+----------+-----+df = spark.read.format(\"delta\").option(\"versionAsOf\", 0).load(\"hdfs:///delta_lake/products\")df.show()# Notice the value of Price for ProductID=200 is the older snapshot in version 0+---------+----------+-----+|ProductID| Date|Price|+---------+----------+-----+| 200|2020-08-20| 20.5|| 210|2020-08-20| 45.0|| 220|2020-08-20|34.56|| 230|2020-08-20|23.67|| 240|2020-08-20|89.76|+---------+----------+-----+"
},
{
"code": null,
"e": 9512,
"s": 9444,
"text": "Lets perform another DML operation, this time delete ProductID=210."
},
{
"code": null,
"e": 9944,
"s": 9512,
"text": "deltaTable.delete(\"ProductID = 210\") df = spark.read.format(\"delta\").option(\"versionAsOf\", 2).load(\"hdfs:///delta_lake/products\")df.show()# Notice the value of Price for ProductID=210 is missing in Version 2+---------+----------+-----+|ProductID| Date|Price|+---------+----------+-----+| 200|2020-08-20| 48.0|| 220|2020-08-20|34.56|| 230|2020-08-20|23.67|| 240|2020-08-20|89.76|+---------+----------+-----+"
},
{
"code": null,
"e": 10016,
"s": 9944,
"text": "Notice that the Transaction Log has progressed, one log per transaction"
},
{
"code": null,
"e": 10437,
"s": 10016,
"text": "$ hadoop fs -ls /delta_lake/products/_delta_logFound 3 items-rw-r--r-- 2 mkukreja supergroup 912 2020-08-21 13:14 /delta_lake/products/_delta_log/00000000000000000000.json-rw-r--r-- 2 mkukreja supergroup 579 2020-08-24 11:03 /delta_lake/products/_delta_log/00000000000000000001.json-rw-r--r-- 2 mkukreja supergroup 592 2020-08-24 11:14 /delta_lake/products/_delta_log/00000000000000000002.json"
},
{
"code": null,
"e": 10518,
"s": 10437,
"text": "Pay special attention to the operation attribute in JSON of the transaction logs"
},
{
"code": null,
"e": 12965,
"s": 10518,
"text": "$ hadoop fs -cat /delta_lake/products/_delta_log/*.json{\"commitInfo\":{\"timestamp\":1598489978902,\"operation\":\"CREATE TABLE AS SELECT\",\"operationParameters\":{\"isManaged\":\"false\",\"description\":null,\"partitionBy\":\"[]\",\"properties\":\"{}\"},\"isBlindAppend\":true,\"operationMetrics\":{\"numFiles\":\"1\",\"numOutputBytes\":\"1027\",\"numOutputRows\":\"5\"}}}{\"protocol\":{\"minReaderVersion\":1,\"minWriterVersion\":2}}{\"metaData\":{\"id\":\"47d211fc-7148-4c85-aa80-7d9aa8f0b7a2\",\"format\":{\"provider\":\"parquet\",\"options\":{}},\"schemaString\":\"{\\\"type\\\":\\\"struct\\\",\\\"fields\\\":[{\\\"name\\\":\\\"ProductID\\\",\\\"type\\\":\\\"integer\\\",\\\"nullable\\\":true,\\\"metadata\\\":{}},{\\\"name\\\":\\\"Date\\\",\\\"type\\\":\\\"string\\\",\\\"nullable\\\":true,\\\"metadata\\\":{}},{\\\"name\\\":\\\"Price\\\",\\\"type\\\":\\\"double\\\",\\\"nullable\\\":true,\\\"metadata\\\":{}}]}\",\"partitionColumns\":[],\"configuration\":{},\"createdTime\":1598489977907}}{\"add\":{\"path\":\"part-00000-8c43a47a-02bf-4bc2-a3be-aaabe9c409bd-c000.snappy.parquet\",\"partitionValues\":{},\"size\":1027,\"modificationTime\":1598489978816,\"dataChange\":true}}2020-08-26 21:07:41,120 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false{\"commitInfo\":{\"timestamp\":1598490235896,\"operation\":\"UPDATE\",\"operationParameters\":{\"predicate\":\"(ProductID#609 = 200)\"},\"readVersion\":0,\"isBlindAppend\":false,\"operationMetrics\":{\"numRemovedFiles\":\"1\",\"numAddedFiles\":\"1\",\"numUpdatedRows\":\"1\",\"numCopiedRows\":\"4\"}}}{\"remove\":{\"path\":\"part-00000-8c43a47a-02bf-4bc2-a3be-aaabe9c409bd-c000.snappy.parquet\",\"deletionTimestamp\":1598490235237,\"dataChange\":true}}{\"add\":{\"path\":\"part-00000-272c0f65-433e-4901-83fd-70b78667ede0-c000.snappy.parquet\",\"partitionValues\":{},\"size\":1025,\"modificationTime\":1598490235886,\"dataChange\":true}}2020-08-26 21:07:41,123 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false{\"commitInfo\":{\"timestamp\":1598490393953,\"operation\":\"DELETE\",\"operationParameters\":{\"predicate\":\"[\\\"(`ProductID` = 210)\\\"]\"},\"readVersion\":1,\"isBlindAppend\":false,\"operationMetrics\":{\"numRemovedFiles\":\"1\",\"numDeletedRows\":\"1\",\"numAddedFiles\":\"1\",\"numCopiedRows\":\"4\"}}}{\"remove\":{\"path\":\"part-00000-272c0f65-433e-4901-83fd-70b78667ede0-c000.snappy.parquet\",\"deletionTimestamp\":1598490393950,\"dataChange\":true}}{\"add\":{\"path\":\"part-00000-73a381e1-fa68-4323-81b9-c42dea484542-c000.snappy.parquet\",\"partitionValues\":{},\"size\":1015,\"modificationTime\":1598490393946,\"dataChange\":true}}"
},
{
"code": null,
"e": 13109,
"s": 12965,
"text": "It is next day now and you received a new data file. Now lets merge the new data set from the next day (products_aug21.csv) into the Delta Lake"
},
{
"code": null,
"e": 13403,
"s": 13109,
"text": "$ hadoop fs -put csv/products_aug21.csv /delta_lake/raw$ hadoop fs -ls /delta_lake/rawFound 2 items-rw-r--r-- 2 mkukreja supergroup 132 2020-08-21 13:00 /delta_lake/raw/products_aug20.csv-rw-r--r-- 2 mkukreja supergroup 220 2020-08-24 11:33 /delta_lake/raw/products_aug21.csv"
},
{
"code": null,
"e": 13631,
"s": 13403,
"text": "Perform an Upsert Operation. This means that if the data in the new file (products_aug21.csv) matches (based on the join condition on ProductID) any existing data in the Delta Lake the update the price, else insert the new row."
},
{
"code": null,
"e": 14290,
"s": 13631,
"text": "df_productsaug21 = spark.read.csv('hdfs:///delta_lake/raw/products_aug21.csv', header=True, inferSchema=True)df_productsaug21.show()deltaTable.alias(\"products\").merge( df_productsaug21.alias(\"products_new\"), \"products.ProductID = products_new.ProductID\") \\ .whenMatchedUpdate(set = { \"Price\" : \"products_new.Price\" } ) \\ .whenNotMatchedInsert(values = { \"ProductID\": \"products_new.ProductID\", \"Date\": \"products_new.Date\", \"Price\": \"products_new.Price\" } ).execute()"
},
{
"code": null,
"e": 14504,
"s": 14290,
"text": "Check the latest version of the table after the Upsert. You may notice that ProductID=240 went though the whenMatchedUpdate operation whereas ProductID=240 till 280 went through the whenNotMatchedInsert operation."
},
{
"code": null,
"e": 14899,
"s": 14504,
"text": "spark.table(\"products\").show()+---------+----------+-----+|ProductID| Date|Price|+---------+----------+-----+| 230|2020-08-20|23.67|| 210|2020-08-21| 46.0|| 250|2020-08-21|89.76|| 220|2020-08-20|34.56|| 240|2020-08-20|90.82|| 200|2020-08-20| 25.5|| 260|2020-08-21|54.55|| 280|2020-08-21|44.78|| 270|2020-08-21|96.32|+---------+----------+-----+"
},
{
"code": null,
"e": 15130,
"s": 14899,
"text": "For every 10 commits Delta Lake saves a checkpoint file in Parquet format in the _delta_log sub-directory. Since the checkpoint file is in Parquet format it allows Spark to perform a faster read compared to inefficient JSON files."
},
{
"code": null,
"e": 15185,
"s": 15130,
"text": "Lets see how this happens by updating a few more rows:"
},
{
"code": null,
"e": 15754,
"s": 15185,
"text": "deltaTable.update(\"ProductID = '230'\", { \"Price\": \"'33.67'\" } )deltaTable.update(\"ProductID = '210'\", { \"Price\": \"'56.00'\" } )deltaTable.update(\"ProductID = '250'\", { \"Price\": \"'99.76'\" } )deltaTable.update(\"ProductID = '220'\", { \"Price\": \"'44.56'\" } )deltaTable.update(\"ProductID = '240'\", { \"Price\": \"'100.82'\" } )deltaTable.update(\"ProductID = '200'\", { \"Price\": \"'35.5'\" } )deltaTable.update(\"ProductID = '260'\", { \"Price\": \"'64.55'\" } )deltaTable.update(\"ProductID = '280'\", { \"Price\": \"'54.78'\" } )deltaTable.update(\"ProductID = '270'\", { \"Price\": \"'106.32'\" } )"
},
{
"code": null,
"e": 16009,
"s": 15754,
"text": "Check the _delta_log sub-directory. After 10 commits a 00000000000000000010.checkpoint.parquet file has been created. Checkpoint files saves the state of the table at a point in time in Parquet format so that it can retrieve the history very efficiently."
},
{
"code": null,
"e": 17876,
"s": 16009,
"text": "$ hadoop fs -ls /delta_lake/products/_delta_logFound 15 items-rw-r--r-- 2 mkukreja supergroup 912 2020-08-21 13:14 /delta_lake/products/_delta_log/00000000000000000000.json-rw-r--r-- 2 mkukreja supergroup 579 2020-08-24 11:03 /delta_lake/products/_delta_log/00000000000000000001.json-rw-r--r-- 2 mkukreja supergroup 592 2020-08-24 11:14 /delta_lake/products/_delta_log/00000000000000000002.json-rw-r--r-- 2 mkukreja supergroup 2255 2020-08-24 11:39 /delta_lake/products/_delta_log/00000000000000000003.json-rw-r--r-- 2 mkukreja supergroup 578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000004.json-rw-r--r-- 2 mkukreja supergroup 578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000005.json-rw-r--r-- 2 mkukreja supergroup 578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000006.json-rw-r--r-- 2 mkukreja supergroup 578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000007.json-rw-r--r-- 2 mkukreja supergroup 578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000008.json-rw-r--r-- 2 mkukreja supergroup 578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000009.json-rw-r--r-- 2 mkukreja supergroup 14756 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000010.checkpoint.parquet-rw-r--r-- 2 mkukreja supergroup 578 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000010.json-rw-r--r-- 2 mkukreja supergroup 579 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000011.json-rw-r--r-- 2 mkukreja supergroup 579 2020-08-24 12:05 /delta_lake/products/_delta_log/00000000000000000012.json-rw-r--r-- 2 mkukreja supergroup 25 2020-08-24 12:05 /delta_lake/products/_delta_log/_last_checkpoint"
},
{
"code": null,
"e": 18147,
"s": 17876,
"text": "Lots of good stuff. Having played in the Data Engineering and Data Science world for several years, I can safely tell you that these features are no short of a life-savor. No more jumping through hoops to store and display the latest change data sets from the databases."
},
{
"code": null,
"e": 18288,
"s": 18147,
"text": "In the next part, we will deep dive into some advanced topics of Delta Lake including Partitioning, Schema Evolution, Data Lineage & Vacuum."
}
]
|
Real-Time Head Pose Estimation in Python | Towards Data Science | Head pose estimation is a challenging problem in computer vision because of the various steps required to solve it. Firstly, we need to locate the face in the frame and then the various facial landmarks. Now, recognizing the face seems like a trivial task these days, and that is true for faces facing the camera. The problem arises when the face is at an angle. Add to that, some facial landmarks are not visible due to the movement of the head. After this, we need to convert the points to 3D coordinates to find the inclination. Sounds like a lot of work? Don’t worry, we will go step by step and refer to two great resources that will make our work a lot easier.
Requirements
Face Detection
Facial Landmark Detection
Pose Estimation
For this project, we need OpenCV and Tensorflow so let’s install them.
#Using pip
pip install opencv-python
pip install tensorflow

#Using conda
conda install -c conda-forge opencv
conda install -c conda-forge tensorflow
Our first step is to find the faces in the images on which we can find facial landmarks. For this task, we will be using a Caffe model of OpenCV’s DNN module. If you are wondering how it fares against other models like Haar Cascades or Dlib’s frontal face detector, or you want to know more about it in depth, then you can refer to this article:
towardsdatascience.com
You can download the required models from my GitHub repository.
import cv2
import numpy as np

modelFile = "models/res10_300x300_ssd_iter_140000.caffemodel"
configFile = "models/deploy.prototxt.txt"
net = cv2.dnn.readNetFromCaffe(configFile, modelFile)

img = cv2.imread('test.jpg')
h, w = img.shape[:2]

blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 1.0, (300, 300), (104.0, 117.0, 123.0))
net.setInput(blob)
faces = net.forward()

# to draw faces on image
for i in range(faces.shape[2]):
    confidence = faces[0, 0, i, 2]
    if confidence > 0.5:
        box = faces[0, 0, i, 3:7] * np.array([w, h, w, h])
        (x, y, x1, y1) = box.astype("int")
        cv2.rectangle(img, (x, y), (x1, y1), (0, 0, 255), 2)
Load the network using cv2.dnn.readNetFromCaffe and pass the model’s config (which defines the layers) and weights files as its arguments. It performs best on images resized to 300x300.
The most commonly used one is Dlib’s facial landmark detection, which gives us 68 landmarks; however, it does not give good accuracy. Instead, we will be using a facial landmark detector provided by Yin Guobing in this Github repo. It also gives 68 landmarks and it is a Tensorflow CNN trained on 5 datasets! The pre-trained model can be found here. The author has also written a series of posts explaining it, covering the background, dataset, preprocessing, model architecture, training, and deployment; they can be found here. I have provided a very brief summary here, but I would strongly encourage you to read them.
In the first of those series, he describes the problem of stability of facial landmarks in videos, then lays out the existing solutions, like OpenFace and Dlib’s facial landmark detection, along with the datasets available. The third article is all about data preprocessing and making it ready to use. In the next two articles, the work is to extract the faces and apply facial landmarks on them to make them ready to train a CNN, storing them as TFRecord files. In the sixth article, a model is trained using Tensorflow. In that article, we can see how important loss functions are in training: first he used tf.losses.mean_pairwise_squared_error, which uses the relationships between points as the basis for optimization when minimizing loss, and it could not generalize well. In contrast, when tf.losses.mean_squared_error was used, it worked well. In the final article, the model is exported as an API, and it is shown how to use it in Python.
The model takes square boxes of size 128x128 which contain faces and returns 68 facial landmarks. The code provided below is taken from here, and it can also be used to draw 3D annotation boxes on the image. The code is modified to draw facial landmarks on all the faces, unlike the original code which would draw on only one.
This code will draw facial landmarks on the faces.
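The gist itself is not reproduced in this text, so here is a minimal sketch of the idea; it assumes a marks array of 68 (x, y) landmark coordinates returned by the detector for one face:

import cv2

def draw_marks(image, marks, color=(0, 255, 0)):
    # Draw a small filled circle at every detected landmark
    for (x, y) in marks:
        cv2.circle(image, (int(x), int(y)), 2, color, -1)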
Using the draw_annotation_box() we can also draw the annotation box as shown below.
This is a great article on Learn OpenCV which explains head pose detection on images, with a lot of the maths about converting the points to 3D space, and it uses cv2.solvePnP to find the rotational and translational vectors. A quick read-through of that article will help you understand the inner workings, so I will cover it only briefly here.
We need six points of the face, i.e. the nose tip, chin, extreme left and right points of the lips, the left corner of the left eye, and the right corner of the right eye. We take standard 3D coordinates of these facial landmarks and try to estimate the rotational and translational vectors at the nose tip. Now, for an accurate estimate, we need the intrinsic parameters of the camera, like focal length, optical center, and radial distortion parameters. We can estimate the former two and assume the last one is not present to make our work easier. After obtaining the required vectors, we can project those 3D points onto a 2D surface, that is, our image.
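A minimal sketch of that estimation step is shown below. The 3D model points and the approximated camera matrix follow the conventions of the Learn OpenCV article; image_points (the six detected 2D landmarks, in the same order) and img (the current frame) are assumed to exist already:

import cv2
import numpy as np

# Generic 3D reference points of a head model, in the order:
# nose tip, chin, left eye left corner, right eye right corner,
# left mouth corner, right mouth corner
model_points = np.array([
    (0.0, 0.0, 0.0),
    (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0),
    (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0),
    (150.0, -150.0, -125.0)
])

# Approximate the camera internals: focal length ~ image width,
# optical center at the image center, and no radial distortion
size = img.shape
focal_length = size[1]
center = (size[1] / 2, size[0] / 2)
camera_matrix = np.array([
    [focal_length, 0, center[0]],
    [0, focal_length, center[1]],
    [0, 0, 1]
], dtype="double")
dist_coeffs = np.zeros((4, 1))

success, rotation_vector, translation_vector = cv2.solvePnP(
    model_points, image_points, camera_matrix, dist_coeffs)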
If we only use the code available and find the angle with the x-axis we can obtain the result shown below.
It works great for detecting the head moving up and down, but not moving left or right. So how do we do that? Well, above we had seen an annotation box on the face. What if we could utilize it somehow to measure the left and right movements?
We can find the line in the middle of the two dark blue lines to act as our pointer and find the angle with the y-axis to find the angle of movement.
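As a rough sketch of that step (p1 and p2, the two endpoints of the pointer line, are assumed), the deviation from the y-axis can be computed like this:

import math

def angle_with_y_axis(p1, p2):
    # atan2 of the horizontal offset over the vertical offset gives
    # the line's deviation from the vertical axis, in degrees
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dx, dy))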
Combining both of them, we can detect head movement in whichever direction we want. The complete code can also be found here at my GitHub repository, along with various other sub-models for an online proctoring solution.
On testing it on an i5 processor, even with displaying the image, I was able to get a healthy 6.76 frames per second, whereas the facial landmark detection model takes only 0.05 seconds to find the landmarks.
Now that we have created a head-pose detector you might want to make an eye gaze tracker then you can have a look at this article: | [
{
"code": null,
"e": 705,
"s": 47,
"text": "Head pose estimation is a challenging problem in computer vision because of the various steps required to solve it. Firstly, we need to locate the face in the frame and then the various facial landmarks. Now, recognizing the face seems a trivial task in this day and that is true with faces facing the camera. The problem arises when the face is at an angle. Add to that some facial landmarks are not visible due to the movement of the head. After this, we need to convert the points to 3D coordinates to find the inclination. Sounds like a lot of work? Don’t worry we will go step by step and refer two great resources that will make our work a lot easier."
},
{
"code": null,
"e": 718,
"s": 705,
"text": "Requirements"
},
{
"code": null,
"e": 733,
"s": 718,
"text": "Face Detection"
},
{
"code": null,
"e": 759,
"s": 733,
"text": "Facial Landmark Detection"
},
{
"code": null,
"e": 775,
"s": 759,
"text": "Pose Estimation"
},
{
"code": null,
"e": 846,
"s": 775,
"text": "For this project, we need OpenCV and Tensorflow so let’s install them."
},
{
"code": null,
"e": 990,
"s": 846,
"text": "#Using pippip install opencv-pythonpip install tensorflow#Using condaconda install -c conda-forge opencvconda install -c conda-forge tensorflow"
},
{
"code": null,
"e": 1334,
"s": 990,
"text": "Our first step is to find the faces in the images on which we can find facial landmarks. For this task, we will be using a Caffe model of OpenCV’s DNN module. If you are wondering how it fares against other models like Haar Cascades or Dlib’s frontal face detector or you want to know more about it in-depth then you can refer to this article:"
},
{
"code": null,
"e": 1357,
"s": 1334,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 1421,
"s": 1357,
"text": "You can download the required models from my GitHub repository."
},
{
"code": null,
"e": 2080,
"s": 1421,
"text": "import cv2import numpy as npmodelFile = \"models/res10_300x300_ssd_iter_140000.caffemodel\"configFile = \"models/deploy.prototxt.txt\"net = cv2.dnn.readNetFromCaffe(configFile, modelFile)img = cv2.imread('test.jpg')h, w = img.shape[:2]blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 1.0,(300, 300), (104.0, 117.0, 123.0))net.setInput(blob)faces = net.forward()#to draw faces on imagefor i in range(faces.shape[2]): confidence = faces[0, 0, i, 2] if confidence > 0.5: box = faces[0, 0, i, 3:7] * np.array([w, h, w, h]) (x, y, x1, y1) = box.astype(\"int\") cv2.rectangle(img, (x, y), (x1, y1), (0, 0, 255), 2)"
},
{
"code": null,
"e": 2233,
"s": 2080,
"text": "Load the network using cv2.dnn.readNetFromCaffe and pass the model's layers and weights as its arguments. It performs best on images resized to 300x300."
},
{
"code": null,
"e": 2849,
"s": 2233,
"text": "The most commonly used one is Dlib’s facial landmark detection which gives us 68 landmarks, however, it does not give good accuracy. Instead, we will be using a facial landmark detector provided by Yin Guobing in this Github repo. It also gives 68 landmarks and it is a Tensorflow CNN trained on 5 datasets! The pre-trained model can be found here. The author has only written a series of posts explaining the includes background, dataset, preprocessing, model architecture, training, and deployment that can be found here. I have provided a very brief summary here, but I would strongly encourage you to read them."
},
{
"code": null,
"e": 3791,
"s": 2849,
"text": "In the first of those series, he describes the problem of stability of facial landmarks in videos followed by labeling out the existing solutions like OpenFace and Dlib’s facial landmark detection along with the datasets available. The third article is all about data preprocessing and making it ready to use. In the next two articles, the work is to extract the faces and apply facial landmarks on it to make it ready to train a CNN and store them as TFRecord files. In the sixth article, a model is trained using Tensorflow. In this article, we can see how important loss functions are in training as first he used tf.losses.mean_pairwise_squared_error which uses the relationships between points as the basis for optimization when minimizing loss and could not generalize well. In contrast, when tf.losses.mean_squared_error was used it worked well. In the final article, the model is exported as an API and shown how to use it in Python."
},
{
"code": null,
"e": 4109,
"s": 3791,
"text": "The model takes square boxes of size 128x128 which contain faces and return 68 facial landmarks. The code provided below is taken from here and it can also be used to draw 3D annotation boxes on it. The code is modified to draw facial landmarks on all the faces, unlike the original code which would draw on only one."
},
{
"code": null,
"e": 4160,
"s": 4109,
"text": "This code will draw facial landmarks on the faces."
},
{
"code": null,
"e": 4244,
"s": 4160,
"text": "Using the draw_annotation_box() we can also draw the annotation box as shown below."
},
{
"code": null,
"e": 4597,
"s": 4244,
"text": "This is a great article on Learn OpenCV which explains head pose detection on images with a lot of Maths about converting the points to 3D space and using cv2.solvePnP to find rotational and translational vectors. A quick read-through of that article will be great to understand the intrinsic working and hence I will write about it only in brief here."
},
{
"code": null,
"e": 5240,
"s": 4597,
"text": "We need six points of the face i.e. is nose tip, chin, extreme left and right points of lips, and the left corner of the left eye and right corner of the right eye. We take standard 3D coordinates of these facial landmarks and try to estimate the rational and translational vectors at the nose tip. Now, for an accurate estimate, we need to intrinsic parameters of the camera like focal length, optical center, and radial distortion parameters. We can estimate the former two and assume the last one is not present to make our work easier. After obtaining the required vectors we can project those 3D points on a 2D surface that is our image."
},
{
"code": null,
"e": 5347,
"s": 5240,
"text": "If we only use the code available and find the angle with the x-axis we can obtain the result shown below."
},
{
"code": null,
"e": 5580,
"s": 5347,
"text": "It works great for recording the head moving up and down but not moving left or right. So how to do that? Well, above we had seen an annotation box on the face. If we could utilize it somehow to measure the left and right movements."
},
{
"code": null,
"e": 5730,
"s": 5580,
"text": "We can find the line in the middle of the two dark blue lines to act as our pointer and find the angle with the y-axis to find the angle of movement."
},
{
"code": null,
"e": 5939,
"s": 5730,
"text": "Combining both of them we can get the result in which direction we want. The complete code can also be found here at my GitHub repository along with various other sub-models for an online proctoring solution."
},
{
"code": null,
"e": 6137,
"s": 5939,
"text": "On testing it on an i5 processor, even with displaying the image I was able to get a healthy 6.76 frame per seconds whereas the facial landmark detection model takes only 0.05 seconds to find them."
}
]
|
How to create a qqplot with confidence interval in R? | A qqplot is a plot of quantiles that helps us understand whether the supplied data comes from a specified distribution; mostly it is used to check whether the data follows a normal distribution or not. If we want to create the qqplot with a confidence interval, then the qqPlot function of the car package can be used, as shown in the below example.
Consider the below data frame −
x<-rnorm(20,74,3.5)
y<-rnorm(20,50,2.25)
df<-data.frame(x,y)
df
x y
1 73.30956 51.31650
2 78.67091 51.01323
3 71.34887 46.93155
4 81.89449 51.54427
5 70.74800 55.81567
6 71.84334 50.74052
7 70.50627 50.35543
8 75.99494 48.10296
9 72.22310 50.42166
10 76.17660 47.82578
11 70.23273 45.80355
12 77.38643 47.50654
13 76.91476 51.83949
14 69.65716 48.11115
15 74.00487 52.76651
16 74.25146 53.72022
17 73.83530 50.83250
18 68.10708 50.85800
19 73.75495 50.37560
20 73.99065 52.60846
Loading the car package and creating the qqplot with confidence interval −
library(car)
qqPlot(df$x)
[1] 4 18 | [
{
"code": null,
"e": 1402,
"s": 1062,
"text": "A qqplot is the plot of quantiles that helps to understand whether the supplied data comes from the specified distribution, mostly it is used to check whether the data follows normal distribution or not. If we want to create the qqplot with confidence interval then qqPlot function of car package can be used as shown in the below example."
},
{
"code": null,
"e": 1434,
"s": 1402,
"text": "Consider the below data frame −"
},
{
"code": null,
"e": 1445,
"s": 1434,
"text": " Live Demo"
},
{
"code": null,
"e": 1509,
"s": 1445,
"text": "x<-rnorm(20,74,3.5)\ny<-rnorm(20,50,2.25)\ndf<-data.frame(x,y)\ndf"
},
{
"code": null,
"e": 1965,
"s": 1509,
"text": " x y\n1 73.30956 51.31650\n2 78.67091 51.01323\n3 71.34887 46.93155\n4 81.89449 51.54427\n5 70.74800 55.81567\n6 71.84334 50.74052\n7 70.50627 50.35543\n8 75.99494 48.10296\n9 72.22310 50.42166\n10 76.17660 47.82578\n11 70.23273 45.80355\n12 77.38643 47.50654\n13 76.91476 51.83949\n14 69.65716 48.11115\n15 74.00487 52.76651\n16 74.25146 53.72022\n17 73.83530 50.83250\n18 68.10708 50.85800\n19 73.75495 50.37560\n20 73.99065 52.60846"
},
{
"code": null,
"e": 2036,
"s": 1965,
"text": "Loading car package and creating the qqplot with confidence interval −"
},
{
"code": null,
"e": 2071,
"s": 2036,
"text": "library(car)\nqqPlot(df$x)\n[1] 4 18"
}
]
|
Tutorial: Using Deep Learning and CNNs to make a Hand Gesture recognition model | by Filipe Borba | Towards Data Science | First, here’s the Github repository with the code. The project is in the format of a Jupyter Notebook, which can be uploaded to Google Colaboratory to work without environment issues.
Machine Learning is very useful for a variety of real-life problems. It is commonly used for tasks such as classification, recognition, detection and predictions. Moreover, it is very efficient to automate processes that use data. The basic idea is to use data to produce a model capable of returning an output. This output may give a right answer with a new input or produce predictions towards the known data.
The goal of this project is to train a Machine Learning algorithm capable of classifying images of different hand gestures, such as a fist, palm, showing the thumb, and others. This particular classification problem can be useful for Gesture Navigation, for example. The method I’ll be using is Deep Learning with the help of Convolutional Neural Networks based on Tensorflow and Keras.
Deep Learning is part of a broader family of machine learning methods. It is based on the use of layers that process the input data, extracting features from them and producing a mathematical model. The creation of this said ‘model’ will be more clear in the next section. In this specific project, we’ll be aiming to classify different images of hand gestures, which means that the computer will have to “learn” the features of each gesture and classify them correctly. For example, if it is given an image of a hand doing a thumbs up gesture, the output of the model needs to be “the hand is doing a thumbs up gesture”. Let’s begin.
This project uses the Hand Gesture Recognition Database (citation below) available on Kaggle. It contains 20000 images with different hands and hand gestures. There is a total of 10 hand gestures of 10 different people presented in the data set. There are 5 female subjects and 5 male subjects. The images were captured using the Leap Motion hand tracking device.
With that, we have to prepare the images to train the algorithm. We have to load all the images into an array that we will call X and all the labels into another array called y.
X = [] # Image data
y = [] # Labels

# Loops through imagepaths to load images and labels into arrays
for path in imagepaths:
    img = cv2.imread(path) # Reads image and returns np.array
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Converts into the correct colorspace (GRAY)
    img = cv2.resize(img, (320, 120)) # Reduce image size so training can be faster
    X.append(img)

    # Processing label in image path
    category = path.split("/")[3]
    label = int(category.split("_")[0][1]) # We need to convert 10_down to 00_down, or else it crashes
    y.append(label)

# Turn X and y into np.array to speed up train_test_split
X = np.array(X, dtype="uint8")
X = X.reshape(len(imagepaths), 120, 320, 1) # Needed to reshape so CNN knows it's different images
y = np.array(y)

print("Images loaded: ", len(X))
print("Labels loaded: ", len(y))
Scikit-learn’s train_test_split allows us to split our data into a training set and a test set. The training set will be used to build our model. Then, the test data will be used to check if our predictions are correct. A random_state seed is used so the randomness of our results can be reproduced. The function will also shuffle the images it uses, which helps training.
# Percentage of images that we want to use for testing.
# The rest is used for training.
ts = 0.3

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=ts, random_state=42)
To simplify the idea of the model being constructed here, we’re going to use the concept of Linear Regression. By using linear regression, we can create a simple model and represent it using the equation y = ax + b. a and b (slope and intercept, respectively) are the parameters that we’re trying to find. By finding the best parameters, for any given value of x, we can predict y. This is the same idea here, but much more complex, with the use of Convolutional Neural Networks.
A Convolutional Neural Network (ConvNet/CNN) is a Deep Learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image and be able to differentiate one from the other. The pre-processing required in a ConvNet is much lower as compared to other classification algorithms. While in primitive methods filters are hand-engineered, with enough training, CNNs have the ability to learn these filters/characteristics.
From Figure 1 and imagining the Linear Regression model equation that we talked about, we can imagine that the input layer is x and the output layer is y. The hidden layers vary from model to model, but they are used to “learn” the parameters for our model. Each one has a different function, but they work towards getting the best “slope and intercept”.
# Construction of model
model = Sequential()
model.add(Conv2D(32, (5, 5), activation='relu', input_shape=(120, 320, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))

# Configures the model for training
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Trains the model for a given number of epochs (iterations on a dataset) and validates it.
model.fit(X_train, y_train, epochs=5, batch_size=64, verbose=2, validation_data=(X_test, y_test))
CNNs apply a series of filters to the raw pixel data of an image to extract and learn higher-level features, which the model can then use for classification. CNNs contain three components:
Convolutional layers, which apply a specified number of convolution filters to the image. For each subregion, the layer performs a set of mathematical operations to produce a single value in the output feature map. Convolutional layers then typically apply a ReLU activation function to the output to introduce nonlinearities into the model.
Pooling layers, which downsample the image data extracted by the convolutional layers to reduce the dimensionality of the feature map in order to decrease processing time. A commonly used pooling algorithm is max pooling, which extracts subregions of the feature map (e.g., 2x2-pixel tiles), keeps their maximum value, and discards all other values.
Dense (fully connected) layers, which perform classification on the features extracted by the convolutional layers and downsampled by the pooling layers. In a dense layer, every node in the layer is connected to every node in the preceding layer.
Now that we have the model compiled and trained, we need to check if it’s good. First, we run ‘model.evaluate’ to test the accuracy. Then, we make predictions and plot the images along with the predicted labels and true labels to check everything. With that, we can see how our algorithm is working. Later, we produce a confusion matrix, which is a specific table layout that allows visualization of the performance of an algorithm.
test_loss, test_acc = model.evaluate(X_test, y_test)
print('Test accuracy: {:2.2f}%'.format(test_acc*100))
6000/6000 [=====================] — 39s 6ms/step
Test accuracy: 99.98%
predictions = model.predict(X_test) # Make predictions towards the test set
y_pred = np.argmax(predictions, axis=1) # Transform predictions into 1-D array with label number

# H = Horizontal
# V = Vertical
pd.DataFrame(confusion_matrix(y_test, y_pred),
             columns=["Predicted Thumb Down", "Predicted Palm (H)", "Predicted L", "Predicted Fist (H)", "Predicted Fist (V)", "Predicted Thumbs up", "Predicted Index", "Predicted OK", "Predicted Palm (V)", "Predicted C"],
             index=["Actual Thumb Down", "Actual Palm (H)", "Actual L", "Actual Fist (H)", "Actual Fist (V)", "Actual Thumbs up", "Actual Index", "Actual OK", "Actual Palm (V)", "Actual C"])
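The image-plotting step mentioned earlier is not reproduced in this text; a minimal sketch of it with matplotlib, reusing the arrays defined above, could look like this:

import matplotlib.pyplot as plt

def show_predictions(images, true_labels, pred_labels, n=6):
    # Display the first n test images with predicted vs. true labels
    plt.figure(figsize=(12, 4))
    for i in range(n):
        plt.subplot(1, n, i + 1)
        plt.imshow(images[i].reshape(120, 320), cmap='gray')
        plt.title("pred: {} / true: {}".format(pred_labels[i], true_labels[i]))
        plt.axis('off')
    plt.show()

show_predictions(X_test, y_test, y_pred)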
Based on the results presented in the previous section, we can conclude that our algorithm successfully classifies different hand gestures images with enough confidence (>95%) based on a Deep Learning model.
The accuracy of our model is directly influenced by a few aspects of our problem. The gestures presented are reasonably distinct, and the images are clear and without background. Also, there is a reasonable quantity of images, which makes our model more robust. The drawback is that for different problems, we would probably need more data to steer the parameters of our model in a better direction. Moreover, a deep learning model is very hard to interpret, given its abstractions. However, by using this approach it becomes much easier to start working on the actual problem, since we don’t have to account for feature engineering. This means that we don’t need to pre-process the images with edge or blob detectors to extract the important features; the CNN does it for us. Also, it can be adapted to new problems relatively easily, with generally good performance.
As mentioned, another approach to this problem would be to use feature engineering, such as binary thresholding (check area of the hand), circle detection and others to detect unique characteristics on the images. However, with our CNN approach, we don’t have to worry about any of these.
Any doubts? Feel free to send questions/issues on the Github repository!
T. Mantecón, C.R. del Blanco, F. Jaureguizar, N. García, “Hand Gesture Recognition using Infrared Imagery Provided by Leap Motion Controller”, Int. Conf. on Advanced Concepts for Intelligent Vision Systems, ACIVS 2016, Lecce, Italy, pp. 47–57, 24–27 Oct. 2016. (doi: 10.1007/978–3–319–48680–2_5) | [
{
"code": null,
"e": 230,
"s": 46,
"text": "First, here’s the Github repository with the code. The project is in the format of a Jupyter Notebook, which can be uploaded to Google Colaboratory to work without environment issues."
},
{
"code": null,
"e": 642,
"s": 230,
"text": "Machine Learning is very useful for a variety of real-life problems. It is commonly used for tasks such as classification, recognition, detection and predictions. Moreover, it is very efficient to automate processes that use data. The basic idea is to use data to produce a model capable of returning an output. This output may give a right answer with a new input or produce predictions towards the known data."
},
{
"code": null,
"e": 1029,
"s": 642,
"text": "The goal of this project is to train a Machine Learning algorithm capable of classifying images of different hand gestures, such as a fist, palm, showing the thumb, and others. This particular classification problem can be useful for Gesture Navigation, for example. The method I’ll be using is Deep Learning with the help of Convolutional Neural Networks based on Tensorflow and Keras."
},
{
"code": null,
"e": 1664,
"s": 1029,
"text": "Deep Learning is part of a broader family of machine learning methods. It is based on the use of layers that process the input data, extracting features from them and producing a mathematical model. The creation of this said ‘model’ will be more clear in the next session. In this specific project, we’ll be aiming to classify different images of hand gestures, which means that the computer will have to “learn” the features of each gesture and classify them correctly. For example, if it is given an image of a hand doing a thumbs up gesture, the output of the model needs to be “the hand is doing a thumbs up gesture”. Let’s begin."
},
{
"code": null,
"e": 2027,
"s": 1664,
"text": "This project uses the Hand Gesture Recognition Database (citation below) available on Kaggle. It contains 20000 images with different hands and hand gestures. There is a total of 10 hand gestures of 10 different people presented in the data set. There are 5 female subjects and 5 male subjects.The images were captured using the Leap Motion hand tracking device."
},
{
"code": null,
"e": 2205,
"s": 2027,
"text": "With that, we have to prepare the images to train the algorithm. We have to load all the images into an array that we will call X and all the labels into another array called y."
},
{
"code": null,
"e": 3020,
"s": 2205,
"text": "X = [] # Image datay = [] # Labels# Loops through imagepaths to load images and labels into arraysfor path in imagepaths: img = cv2.imread(path) # Reads image and returns np.array img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Converts into the corret colorspace (GRAY) img = cv2.resize(img, (320, 120)) # Reduce image size so training can be faster X.append(img) # Processing label in image path category = path.split(\"/\")[3] label = int(category.split(\"_\")[0][1]) # We need to convert 10_down to 00_down, or else it crashes y.append(label)# Turn X and y into np.array to speed up train_test_splitX = np.array(X, dtype=\"uint8\")X = X.reshape(len(imagepaths), 120, 320, 1) # Needed to reshape so CNN knows it's different imagesy = np.array(y)print(\"Images loaded: \", len(X))print(\"Labels loaded: \", len(y))"
},
{
"code": null,
"e": 3388,
"s": 3020,
"text": "Scipy’s train_test_split allows us to split our data into a training set and a test set. The training set will be used to build our model. Then, the test data will be used to check if our predictions are correct. A random_state seed is used so the randomness of our results can be reproduced. The function will shuffle the images it’s using to minimize training loss."
},
{
"code": null,
"e": 3573,
"s": 3388,
"text": "# Percentage of images that we want to use for testing. # The rest is used for training.ts = 0.3X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=ts, random_state=42)"
},
{
"code": null,
"e": 4053,
"s": 3573,
"text": "To simplify the idea of the model being constructed here, we’re going to use the concept of Linear Regression. By using linear regression, we can create a simple model and represent it using the equation y = ax + b. a and b (slope and intercept, respectively) are the parameters that we’re trying to find. By finding the best parameters, for any given value of x, we can predict y. This is the same idea here, but much more complex, with the use of Convolutional Neural Networks."
},
{
"code": null,
"e": 4542,
"s": 4053,
"text": "A Convolutional Neural Network (ConvNet/CNN) is a Deep Learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image and be able to differentiate one from the other. The pre-processing required in a ConvNet is much lower as compared to other classification algorithms. While in primitive methods filters are hand-engineered, with enough training, CNNs have the ability to learn these filters/characteristics."
},
{
"code": null,
"e": 4897,
"s": 4542,
"text": "From Figure 1 and imagining the Linear Regression model equation that we talked about, we can imagine that the input layer is x and the output layer is y. The hidden layers vary from model to model, but they are used to “learn” the parameters for our model. Each one has a different function, but they work towards getting the best “slope and intercept”."
},
{
"code": null,
"e": 5625,
"s": 4897,
"text": "# Construction of modelmodel = Sequential()model.add(Conv2D(32, (5, 5), activation='relu', input_shape=(120, 320, 1))) model.add(MaxPooling2D((2, 2)))model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D((2, 2)))model.add(Conv2D(64, (3, 3), activation='relu'))model.add(MaxPooling2D((2, 2)))model.add(Flatten())model.add(Dense(128, activation='relu'))model.add(Dense(10, activation='softmax'))# Configures the model for trainingmodel.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])# Trains the model for a given number of epochs (iterations on a dataset) and validates it.model.fit(X_train, y_train, epochs=5, batch_size=64, verbose=2, validation_data=(X_test, y_test))"
},
{
"code": null,
"e": 5815,
"s": 5625,
"text": "CNNs apply a series of filters to the raw pixel data of an image to extract and learn higher-level features, which the model can then use for classification. CNNs contains three components:"
},
{
"code": null,
"e": 6157,
"s": 5815,
"text": "Convolutional layers, which apply a specified number of convolution filters to the image. For each subregion, the layer performs a set of mathematical operations to produce a single value in the output feature map. Convolutional layers then typically apply a ReLU activation function to the output to introduce nonlinearities into the model."
},
{
"code": null,
"e": 6507,
"s": 6157,
"text": "Pooling layers, which downsample the image data extracted by the convolutional layers to reduce the dimensionality of the feature map in order to decrease processing time. A commonly used pooling algorithm is max pooling, which extracts subregions of the feature map (e.g., 2x2-pixel tiles), keeps their maximum value, and discards all other values."
},
{
"code": null,
"e": 6754,
"s": 6507,
"text": "Dense (fully connected) layers, which perform classification on the features extracted by the convolutional layers and downsampled by the pooling layers. In a dense layer, every node in the layer is connected to every node in the preceding layer."
},
{
"code": null,
"e": 7189,
"s": 6754,
"text": "Now that we have the model compiled and trained, we need to check if it’s good. First, we run ‘model.evaluate’ to test the accuracy. Then, we make predictions and plot the images as long with the predicted labels and true labels to check everything. With that, we can see how our algorithm is working. Later, we produce a confusion matrix, which is a specific table layout that allows visualization of the performance of an algorithm."
},
{
"code": null,
"e": 7295,
"s": 7189,
"text": "test_loss, test_acc = model.evaluate(X_test, y_test)print('Test accuracy: {:2.2f}%'.format(test_acc*100))"
},
{
"code": null,
"e": 7344,
"s": 7295,
"text": "6000/6000 [=====================] — 39s 6ms/step"
},
{
"code": null,
"e": 7366,
"s": 7344,
"text": "Test accuracy: 99.98%"
},
{
"code": null,
"e": 8027,
"s": 7366,
"text": "predictions = model.predict(X_test) # Make predictions towards the test sety_pred = np.argmax(predictions, axis=1) # Transform predictions into 1-D array with label number# H = Horizontal# V = Verticalpd.DataFrame(confusion_matrix(y_test, y_pred), columns=[\"Predicted Thumb Down\", \"Predicted Palm (H)\", \"Predicted L\", \"Predicted Fist (H)\", \"Predicted Fist (V)\", \"Predicted Thumbs up\", \"Predicted Index\", \"Predicted OK\", \"Predicted Palm (V)\", \"Predicted C\"], index=[\"Actual Thumb Down\", \"Actual Palm (H)\", \"Actual L\", \"Actual Fist (H)\", \"Actual Fist (V)\", \"Actual Thumbs up\", \"Actual Index\", \"Actual OK\", \"Actual Palm (V)\", \"Actual C\"])"
},
{
"code": null,
"e": 8235,
"s": 8027,
"text": "Based on the results presented in the previous section, we can conclude that our algorithm successfully classifies different hand gestures images with enough confidence (>95%) based on a Deep Learning model."
},
{
"code": null,
"e": 9107,
"s": 8235,
"text": "The accuracy of our model is directly influenced by a few aspects of our problem. The gestures presented are reasonably distinct, the images are clear and without background. Also, there is a reasonable quantity of images, which makes our model more robust. The drawback is that for different problems, we would probably need more data to stir the parameters of our model into a better direction. Moreover, a deep learning model is very hard to interpret, given it’s abstractions. However, by using this approach it becomes much more easier to start working on the actual problem, since we don’t have to account for feature engineering. This means that we don’t need to pre-process the images with edge or blob detectors to extract the important features; the CNN does it for us. Also, it can be adapted to new problems relatively easily, with generally good performance."
},
{
"code": null,
"e": 9396,
"s": 9107,
"text": "As mentioned, another approach to this problem would be to use feature engineering, such as binary thresholding (check area of the hand), circle detection and others to detect unique characteristics on the images. However, with our CNN approach, we don’t have to worry about any of these."
},
{
"code": null,
"e": 9469,
"s": 9396,
"text": "Any doubts? Feel free to send questions/issues on the Github repository!"
}
]
|
How to deploy Airflow on AWS: best practices | by André Sionek | Towards Data Science | I created a repo to deploy Airflow on AWS following software engineering best practices. You can go straight there if you don’t feel like reading this post. But I do describe some things you might find useful here.
Run a docker-compose command and voilà, you have Airflow running on your local environment, and you are ready to develop some DAGs. After some time you have your DAGs (and Airflow) prepared for deployment on a production environment. Then you start searching for instructions on how to deploy Airflow on AWS. Here’s what you’ll probably find:
No instructions on Airflow documentation.
Some posts, like this one, teach you how to deploy on AWS ECS. Quite an interesting approach. The problem is that the whole tutorial is based on creating resources by point-and-click on the AWS console. Trust me; you don’t want to go that route for deploying in production. Just imagine the nightmare for creating three different environments (dev, staging and production) and having to repeat the process three times. Now imagine updating the environments and keeping them in sync. Picture how you could easily spend a whole week fixing a bug caused by a resource that was deleted by mistake.
This other post is relatively recent and has a superior approach, but it still creates resources (like the ECS cluster) using AWS CLI. A little better than console point-and-click, but still not production-ready.
Other articles, might mention using Infrastructure as Code, which would solve the problems mentioned above. However, they are very shallow in technical details and implementation. So, despite offering a good overview and best practices, they are not practical for someone without DevOps experience.
This repo on GitHub is probably the closest you’ll get to a proper implementation of Airflow on AWS following software engineering best practices. But it still lacks some basic stuff, like autoscaling of webservers and workers, or a way to configure settings such as the RDS instance type without having to dig through Terraform code.
You might have a look on page 2 of your Google search, but if a good result doesn’t show up on the first page, then it probably means that it doesn’t exist.
Just clone the repo, follow the few instructions and install the requirements described on the README file and then run:
make airflow-deploy
This will deploy an Airflow instance to your AWS account!
This repo implements all infrastructure using AWS Cloudformation. Inside the /cloudformation directory you’ll find all templates to create the infrastructure needed to run Airflow. The good thing is you don’t need to worry about learning Cloudformation to do a simple deploy because in the root there is a service.yml to help you.
Let’s say you want to whitelist specific IPs (such as your office IP) to have access to Airflow UI. The only thing you need to do is change a few lines in service.yml:
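A hypothetical sketch of what such a change could look like; the key names below are illustrative assumptions, so check the repo’s actual service.yml for the real schema:

# Illustrative only: key names are assumptions, not the repo's real schema
webserver:
  whitelistedIps:
    - "203.0.113.10/32"  # e.g. your office IP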
If you want to change the Airflow database instance type, you also go to service.yml:
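Again, a purely illustrative sketch with an assumed key name:

# Illustrative only: check service.yml for the actual key
metadataDb:
  instanceType: db.t3.medium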
Want to adjust the Airflow workers’ CPU and memory? Fine-tune autoscaling? service.yml is the one-stop-shop.
“Don’t hardcode your passwords” is Software Engineering 101. But sometimes it is hard to automate deployments and create passwords at runtime without hardcoding them somewhere. In this Airflow repo, we use AWS Secrets Manager to help us solve that.
Airflow Metadata DB
Our Postgres database that will hold Airflow metadata is one of the resources that require an admin username and password. Using AWS Secrets Manager, a strong random password is created at deploy time and attached to the cluster. To get the password value, you have to log in to your AWS account and go to Secrets Manager.
It is also possible to implement automatic password rotation with Secrets Manager, but it was not implemented for this project.
Fernet Key
Airflow uses a Fernet Key to encrypt passwords (such as connection credentials) saved to the Metadata DB. This deployment generates a random Fernet Key at deployment time and adds it to Secrets Manager. It is then referenced in the Airflow containers as an environment variable.
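For reference, here is a minimal sketch of how such a key can be generated with Python’s cryptography package; the repo’s deployment scripts may generate it differently:

from cryptography.fernet import Fernet

# Generates a URL-safe, base64-encoded 32-byte key suitable for Airflow
fernet_key = Fernet.generate_key()
print(fernet_key.decode())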
One of the biggest challenges of putting Airflow in production is dealing with resource management. How do you avoid crashing the webserver if there is a usage peak? Or what do you do if a particular daily job requires more CPU/memory?
Autoscaling solves those issues for you. In this repository, you can easily configure thresholds and rest assured that your infrastructure will scale up and down to meet demand.
In a production setup, you will want to deploy your code to different environments. Let’s say you’ll need: prod, stage and dev.
This repo allows you to deploy the same code to different environments by just changing one environment variable, which could be automatically inferred in your CI/CD pipeline. To change the environment, do:
export ENVIRONMENT=dev;  # this will deploy airflow to dev environment
make airflow-deploy;
The beauty of Airflow is the ability to write workflows as code. It means that you will change DAGs code much more often than you change infrastructure. With this deployment of Airflow, you will submit changes to your DAGs, and it won’t try to redeploy the infrastructure for you.
The only thing you want to do is build a new Airflow image, push it to ECR and then update your ECS service to load the latest image. To achieve that, just run:
make airflow-push-image;
It is not the case here, but you could even have your DAGs sitting on a separate repository. It would separate infrastructure from software even more.
Tagging resources will allow us to create automated alerts, identify ownership and track infrastructure costs easily. That’s why this Airflow repository tags all resources.
You should add the deployment process to your CI/CD pipeline. To run some automated tests, I’m using GitHub Actions (but your company might be using other tools such as CircleCI or Jenkins).
You can follow a similar process to automate your deploy and tests. Have a look at the tests workflow to get some inspiration.
Running Airflow on AWS with the default configurations (also considering the cluster is not scaling up) should cost from 5 to 7 US dollars per day, depending on the region you are deploying to.
This cost can be further reduced by lowering CPU and memory in service.yml and also changing the minimum number of workers. The default settings would allow Airflow to run quite a few DAGs before needing to increase resources. Fine-tune and find what works best for your use case.
I hope this post (and the repository) helps you to easily productionize Airflow. If you have any questions, suggestions or requests, please reach out to me on LinkedIn, or open an issue on the repo. Also, you are welcome to open PRs and collaborate with the project! | [
{
"code": null,
"e": 387,
"s": 172,
"text": "I created a repo to deploy Airflow on AWS following software engineering best practices. You can go straight there if you don’t feel like reading this post. But I do describe some things you might find useful here."
},
{
"code": null,
"e": 731,
"s": 387,
"text": "Run a docker-compose command and voíla, you have Airflow running on your local environment, and you are ready to develop some DAGs. After some time you have your DAGs (and Airflow) prepared for deployment on a production environment. Then you start searching for instructions on how to deploy Airflow on AWS. Here’s what you’ll probably find:"
},
{
"code": null,
"e": 773,
"s": 731,
"text": "No instructions on Airflow documentation."
},
{
"code": null,
"e": 1367,
"s": 773,
"text": "Some posts, like this one, teach you how to deploy on AWS ECS. Quite an interesting approach. The problem is that the whole tutorial is based on creating resources by point-and-click on the AWS console. Trust me; you don’t want to go that route for deploying in production. Just imagine the nightmare for creating three different environments (dev, staging and production) and having to repeat the process three times. Now imagine updating the environments and keeping them in sync. Picture how you could easily spend a whole week fixing a bug caused by a resource that was deleted by mistake."
},
{
"code": null,
"e": 1580,
"s": 1367,
"text": "This other post is relatively recent and has a superior approach, but it still creates resources (like the ECS cluster) using AWS CLI. A little better than console point-and-click, but still not production-ready."
},
{
"code": null,
"e": 1879,
"s": 1580,
"text": "Other articles, might mention using Infrastructure as Code, which would solve the problems mentioned above. However, they are very shallow in technical details and implementation. So, despite offering a good overview and best practices, they are not practical for someone without DevOps experience."
},
{
"code": null,
"e": 2210,
"s": 1879,
"text": "This repo on GitHub is probably the closest you’ll get from a proper implementation of Airflow on AWS following software engineering best practices. But it still lacks some basic stuff like autoscaling of webservers and workers or a way to configure settings such as RDS instance type without having to dig through Terraform code."
},
{
"code": null,
"e": 2367,
"s": 2210,
"text": "You might have a look on page 2 of your Google search, but if a good result doesn’t show up on the first page, then it probably means that it doesn’t exist."
},
{
"code": null,
"e": 2488,
"s": 2367,
"text": "Just clone the repo, follow the few instructions and install the requirements described on the README file and then run:"
},
{
"code": null,
"e": 2508,
"s": 2488,
"text": "make airflow-deploy"
},
{
"code": null,
"e": 2566,
"s": 2508,
"text": "This will deploy an Airflow instance to your AWS account!"
},
{
"code": null,
"e": 2897,
"s": 2566,
"text": "This repo implements all infrastructure using AWS Cloudformation. Inside the /cloudformation directory you’ll find all templates to create the infrastructure needed to run Airflow. The good thing is you don’t need to worry about learning Cloudformation to do a simple deploy because in the root there is a service.yml to help you."
},
{
"code": null,
"e": 3065,
"s": 2897,
"text": "Let’s say you want to whitelist specific IPs (such as your office IP) to have access to Airflow UI. The only thing you need to do is change a few lines in service.yml:"
},
{
"code": null,
"e": 3151,
"s": 3065,
"text": "If you want to change the Airflow database instance type, you also go to service.yml:"
},
{
"code": null,
"e": 3261,
"s": 3151,
"text": "You want to just the Airflow workers CPU and memory? Fine-tune autoscaling? service.yml is the one-stop-shop."
},
{
"code": null,
"e": 3508,
"s": 3261,
"text": "Don’t hardcode your passwords is Software Engineering 101. But sometimes it is hard to automate deployments and create passwords at runtime without hardcoding then somewhere. In this Airflow repo, we use AWS Secrets Manager to help us solve that."
},
{
"code": null,
"e": 3528,
"s": 3508,
"text": "Airflow Metadata DB"
},
{
"code": null,
"e": 3851,
"s": 3528,
"text": "Our Postgres database that will hold Airflow metadata is one of the resources that require an admin username and password. Using AWS Secrets Manager, a strong random password is created at deploy time and attached to the cluster. To get the password value, you have to log in to your AWS account and go to Secrets Manager."
},
{
"code": null,
"e": 3979,
"s": 3851,
"text": "It is also possible to implement automatic password rotation with Secrets Manager, but it was not implemented for this project."
},
{
"code": null,
"e": 3990,
"s": 3979,
"text": "Fernet Key"
},
{
"code": null,
"e": 4269,
"s": 3990,
"text": "Airflow uses a Fernet Key to encrypt passwords (such as connection credentials) saved to the Metadata DB. This deployment generates a random Fernet Key at deployment time and adds it to Secrets Manager. It is then referenced in the Airflow containers as an environment variable."
},
{
"code": null,
"e": 4498,
"s": 4269,
"text": "One of the biggest challenges of putting Airflow in production is dealing with resources management. How to avoid crashing the webserver if there is a usage peak? Or what to do if a particular daily job requires more CPU/memory?"
},
{
"code": null,
"e": 4676,
"s": 4498,
"text": "Autoscaling solves those issues for you. In this repository, you can easily configure thresholds and rest assured that your infrastructure will scale up and down to meet demand."
},
{
"code": null,
"e": 4804,
"s": 4676,
"text": "In a production setup, you will want to deploy your code to different environments. Let’s say you’ll need: prod, stage and dev."
},
{
"code": null,
"e": 5009,
"s": 4804,
"text": "This repo allows you to deploy the same code to different environments by just changing one environment variable, that could be automatically inferred on you CI/CD pipeline. To change the environment, do:"
},
{
"code": null,
"e": 5099,
"s": 5009,
"text": "export ENVIRONMENT=dev; # this will deploy airflow to dev environmentmake airflow-deploy;"
},
{
"code": null,
"e": 5380,
"s": 5099,
"text": "The beauty of Airflow is the ability to write workflows as code. It means that you will change DAGs code much more often than you change infrastructure. With this deployment of Airflow, you will submit changes to your DAGs, and it won’t try to redeploy the infrastructure for you."
},
{
"code": null,
"e": 5541,
"s": 5380,
"text": "The only thing you want to do is build a new Airflow image, push it to ECR and then update your ECS service to load the latest image. To achieve that, just run:"
},
{
"code": null,
"e": 5566,
"s": 5541,
"text": "make airflow-push-image;"
},
{
"code": null,
"e": 5717,
"s": 5566,
"text": "It is not the case here, but you could even have your DAGs sitting on a separate repository. It would separate infrastructure from software even more."
},
{
"code": null,
"e": 5890,
"s": 5717,
"text": "Tagging resources will allow us to create automated alerts, identify ownership and track infrastructure costs easily. That’s why this Airflow repository tags all resources."
},
{
"code": null,
"e": 6081,
"s": 5890,
"text": "You should add the deployment process to your CI/CD pipeline. To run some automated tests, I’m using GitHub Actions (but your company might be using other tools such as CircleCI or Jenkins)."
},
{
"code": null,
"e": 6208,
"s": 6081,
"text": "You can follow a similar process to automate your deploy and tests. Have a look at the tests workflow to get some inspiration."
},
{
"code": null,
"e": 6395,
"s": 6208,
"text": "Running Airflow on AWS with the default configurations (also considering cluster is not scaling up) should cost from 5 to 7 US dollars per day. Depending on the region you are deploying."
},
{
"code": null,
"e": 6676,
"s": 6395,
"text": "This cost can be further reduced by lowering CPU and memory on service.yml and also changing the minimum number of workers. The default settings would allow Airflow to run quite a few dags before needing to increase resources. Fine-tune and find what works best for your use case."
}
]
|
Electron - File Handling | File handling is a very important part of building a desktop application. Almost all desktop apps interact with files.
We will create a form in our app that takes a Name and an Email address as input. The form data will be saved to a file, and a list will be created that shows this data as output.
Set up your main process using the following code in the main.js file −
const {app, BrowserWindow} = require('electron')
const url = require('url')
const path = require('path')
let win
function createWindow() {
win = new BrowserWindow({width: 800, height: 600})
win.loadURL(url.format ({
pathname: path.join(__dirname, 'index.html'),
protocol: 'file:',
slashes: true
}))
}
app.on('ready', createWindow)
Now open the index.html file and enter the following code in it −
<!DOCTYPE html>
<html>
<head>
<meta charset = "UTF-8">
<title>File System</title>
<link rel = "stylesheet"
href = "./bower_components/bootstrap/dist/css/bootstrap.min.css" />
<style type = "text/css">
#contact-list {
height: 150px;
overflow-y: auto;
}
</style>
</head>
<body>
<div class = "container">
<h1>Enter Names and Email addresses of your contacts</h1>
<div class = "form-group">
<label for = "Name">Name</label>
<input type = "text" name = "Name" value = "" id = "Name"
placeholder = "Name" class = "form-control" required>
</div>
<div class = "form-group">
<label for = "Email">Email</label>
<input type = "email" name = "Email" value = "" id = "Email"
placeholder = "Email" class = "form-control" required>
</div>
<div class = "form-group">
<button class = "btn btn-primary" id = "add-to-list">Add to list!</button>
</div>
<div id = "contact-list">
<table class = "table-striped" id = "contact-table">
<tr>
<th class = "col-xs-2">S. No.</th>
<th class = "col-xs-4">Name</th>
<th class = "col-xs-6">Email</th>
</tr>
</table>
</div>
<script src = "./view.js" ></script>
</div>
</body>
</html>
Now we need to handle the addition event. We will do this in our view.js file.
We will create a function loadAndDisplayContacts() that will initially load contacts from the file. After creating the loadAndDisplayContacts() function, we will create a click handler on our add to list button. This will add the entry to both the file and the table.
In your view.js file, enter the following code −
let $ = require('jquery')
let fs = require('fs')
let filename = 'contacts'
let sno = 0
$('#add-to-list').on('click', () => {
let name = $('#Name').val()
let email = $('#Email').val()
   fs.appendFile(filename, name + ',' + email + '\n', (err) => {
      // Newer Node versions require an error callback for fs.appendFile
      if(err)
         console.log(err)
   })
addEntry(name, email)
})
function addEntry(name, email) {
if(name && email) {
sno++
let updateString = '<tr><td>'+ sno + '</td><td>'+ name +'</td><td>'
+ email +'</td></tr>'
$('#contact-table').append(updateString)
}
}
function loadAndDisplayContacts() {
//Check if file exists
if(fs.existsSync(filename)) {
let data = fs.readFileSync(filename, 'utf8').split('\n')
data.forEach((contact, index) => {
let [ name, email ] = contact.split(',')
addEntry(name, email)
})
} else {
console.log("File Doesn\'t Exist. Creating new file.")
fs.writeFile(filename, '', (err) => {
if(err)
console.log(err)
})
}
}
loadAndDisplayContacts()
Now run the application, using the following command −
$ electron ./main.js
Once you add some contacts to it, the application will look like −
For more fs module API calls, please refer to Node File System tutorial.
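As a quick illustration of one more call from the same module (this snippet is an addition for illustration and is not part of the tutorial's app), fs.stat can report metadata about the contacts file created above −

const fs = require('fs')

// Print the size of the contacts file created by the app above
fs.stat('contacts', (err, stats) => {
   if(err)
      console.log(err)
   else
      console.log('contacts file size in bytes: ' + stats.size)
})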
Now we can handle files using Electron. We will look at how to call the native save and open dialog boxes for files in the dialogs chapter.
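As a small preview (a sketch only; dialog APIs differ between Electron versions, and in recent versions showSaveDialog returns a promise and is called from the main process), asking the user where to save the contacts file could look like this −

const {dialog} = require('electron')

// Ask the user for a save location; result.filePath holds the chosen path
dialog.showSaveDialog({title: 'Save contacts'}).then((result) => {
   if(!result.canceled)
      console.log(result.filePath)
})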
| [
{
"code": null,
"e": 2184,
"s": 2065,
"text": "File handling is a very important part of building a desktop application. Almost all desktop apps interact with files."
},
{
"code": null,
"e": 2363,
"s": 2184,
"text": "We will create a form in our app that will take as input, a Name and an Email address. This form will be saved to a file and a list will be created that will show this as output."
},
{
"code": null,
"e": 2435,
"s": 2363,
"text": "Set up your main process using the following code in the main.js file −"
},
{
"code": null,
"e": 2796,
"s": 2435,
"text": "const {app, BrowserWindow} = require('electron')\nconst url = require('url')\nconst path = require('path')\n\nlet win\n\nfunction createWindow() {\n win = new BrowserWindow({width: 800, height: 600})\n win.loadURL(url.format ({\n pathname: path.join(__dirname, 'index.html'),\n protocol: 'file:',\n slashes: true\n }))\n}\n\napp.on('ready', createWindow)"
},
{
"code": null,
"e": 2862,
"s": 2796,
"text": "Now open the index.html file and enter the following code in it −"
},
{
"code": null,
"e": 4410,
"s": 2862,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <meta charset = \"UTF-8\">\n <title>File System</title>\n <link rel = \"stylesheet\" \n href = \"./bower_components/bootstrap/dist/css/bootstrap.min.css\" />\n \n <style type = \"text/css\">\n #contact-list {\n height: 150px;\n overflow-y: auto;\n }\n </style>\n </head>\n \n <body>\n <div class = \"container\">\n <h1>Enter Names and Email addresses of your contacts</h1>\n <div class = \"form-group\">\n <label for = \"Name\">Name</label>\n <input type = \"text\" name = \"Name\" value = \"\" id = \"Name\" \n placeholder = \"Name\" class = \"form-control\" required>\n </div>\n \n <div class = \"form-group\">\n <label for = \"Email\">Email</label>\n <input type = \"email\" name = \"Email\" value = \"\" id = \"Email\" \n placeholder = \"Email\" class = \"form-control\" required>\n </div>\n \n <div class = \"form-group\">\n <button class = \"btn btn-primary\" id = \"add-to-list\">Add to list!</button>\n </div>\n \n <div id = \"contact-list\">\n <table class = \"table-striped\" id = \"contact-table\">\n <tr>\n <th class = \"col-xs-2\">S. No.</th>\n <th class = \"col-xs-4\">Name</th>\n <th class = \"col-xs-6\">Email</th>\n </tr>\n </table>\n </div>\n \n <script src = \"./view.js\" ></script>\n </div>\n </body>\n</html>"
},
{
"code": null,
"e": 4489,
"s": 4410,
"text": "Now we need to handle the addition event. We will do this in our view.js file."
},
{
"code": null,
"e": 4757,
"s": 4489,
"text": "We will create a function loadAndDisplayContacts() that will initially load contacts from the file. After creating the loadAndDisplayContacts() function, we will create a click handler on our add to list button. This will add the entry to both the file and the table."
},
{
"code": null,
"e": 4806,
"s": 4757,
"text": "In your view.js file, enter the following code −"
},
{
"code": null,
"e": 5823,
"s": 4806,
"text": "let $ = require('jquery')\nlet fs = require('fs')\nlet filename = 'contacts'\nlet sno = 0\n\n$('#add-to-list').on('click', () => {\n let name = $('#Name').val()\n let email = $('#Email').val()\n\n fs.appendFile('contacts', name + ',' + email + '\\n')\n\n addEntry(name, email)\n})\n\nfunction addEntry(name, email) {\n if(name && email) {\n sno++\n let updateString = '<tr><td>'+ sno + '</td><td>'+ name +'</td><td>' \n + email +'</td></tr>'\n $('#contact-table').append(updateString)\n }\n}\n\nfunction loadAndDisplayContacts() { \n \n //Check if file exists\n if(fs.existsSync(filename)) {\n let data = fs.readFileSync(filename, 'utf8').split('\\n')\n \n data.forEach((contact, index) => {\n let [ name, email ] = contact.split(',')\n addEntry(name, email)\n })\n \n } else {\n console.log(\"File Doesn\\'t Exist. Creating new file.\")\n fs.writeFile(filename, '', (err) => {\n if(err)\n console.log(err)\n })\n }\n}\n\nloadAndDisplayContacts()"
},
{
"code": null,
"e": 5878,
"s": 5823,
"text": "Now run the application, using the following command −"
},
{
"code": null,
"e": 5900,
"s": 5878,
"text": "$ electron ./main.js\n"
},
{
"code": null,
"e": 5967,
"s": 5900,
"text": "Once you add some contacts to it, the application will look like −"
},
{
"code": null,
"e": 6040,
"s": 5967,
"text": "For more fs module API calls, please refer to Node File System tutorial."
},
{
"code": null,
"e": 6181,
"s": 6040,
"text": "Now we can handle files using Electron. We will look at how to call the save and open dialog boxes(native) for files in the dialogs chapter."
}
]
|
Find element at given index after a number of rotations - GeeksforGeeks | 26 Oct, 2021
Given an array consisting of N integers, we perform several right circular rotations of ranges [L..R]. After performing these rotations, we need to find the element at a given index. Examples :
Input : arr[] : {1, 2, 3, 4, 5}
ranges[] = { {0, 2}, {0, 3} }
index : 1
Output : 3
Explanation : After first given rotation {0, 2}
arr[] = {3, 1, 2, 4, 5}
After second rotation {0, 3}
arr[] = {4, 3, 1, 2, 5}
After all rotations we have element 3 at given
index 1.
Method : Brute-force. The brute-force approach is to actually rotate the array for all given ranges and finally return the element at the given index in the modified array.
Method : Efficient. We can do offline processing after saving all ranges. Suppose our rotation ranges are [0..2] and [0..3]. We run through these ranges in reverse. After range [0..3], index 0 will have the element which was at index 3, so we can change 0 to 3, i.e. if index = left, index will be changed to right. After range [0..2], index 3 will remain unaffected. So, we can make 3 cases :
If index = left, index will be changed to right.
If index is not bounded by the range, the rotation has no effect.
If index is in bounds, index will have the element at index-1.
Below is the implementation :
For better explanation:-
10 20 30 40 50
Index: 1
Rotations: {0,2} {1,4} {0,3}
Answer: Index 1 will have 30 after all the 3 rotations in the order {0,2} {1,4} {0,3}.
We performed {0,2} on A and now we have a new array A1.
We performed {1,4} on A1 and now we have a new array A2.
We performed {0,3} on A2 and now we have a new array A3.
Now we are looking for the value at index 1 in A3.
But A3 is {0,3} done on A2.
So index 1 in A3 is index 0 in A2.
But A2 is {1,4} done on A1.
So index 0 in A2 is also index 0 in A1 as it does not lie in the range {1,4}.
But A1 is {0,2} done on A.
So index 0 in A1 is index 2 in A.
On observing this, we see that we keep going deeper into the previous rotations, starting from the latest rotation.
{0,3}
|
{1,4}
|
{0,2}
This is the reason we are processing the rotations in reverse order.
Please note that we are not rotating the elements in the reverse order, just processing the index from reverse.
Because if we actually rotate in reverse order, we might get a completely different answer as in case of rotations the order matters.
C++
Java
Python3
C#
PHP
Javascript
// CPP code to rotate an array
// and answer the index query
#include <bits/stdc++.h>
using namespace std;

// Function to compute the element at
// given index
int findElement(int arr[], int ranges[][2],
                int rotations, int index)
{
    for (int i = rotations - 1; i >= 0; i--) {

        // Range[left...right]
        int left = ranges[i][0];
        int right = ranges[i][1];

        // Rotation will not have any effect
        if (left <= index && right >= index) {
            if (index == left)
                index = right;
            else
                index--;
        }
    }

    // Returning new element
    return arr[index];
}

// Driver
int main()
{
    int arr[] = { 1, 2, 3, 4, 5 };

    // No. of rotations
    int rotations = 2;

    // Ranges according to 0-based indexing
    int ranges[rotations][2] = { { 0, 2 }, { 0, 3 } };

    int index = 1;
    cout << findElement(arr, ranges, rotations, index);
    return 0;
}
// Java code to rotate an array
// and answer the index query
import java.util.*;

class GFG {

    // Function to compute the element at
    // given index
    static int findElement(int[] arr, int[][] ranges,
                           int rotations, int index)
    {
        for (int i = rotations - 1; i >= 0; i--) {

            // Range[left...right]
            int left = ranges[i][0];
            int right = ranges[i][1];

            // Rotation will not have any effect
            if (left <= index && right >= index) {
                if (index == left)
                    index = right;
                else
                    index--;
            }
        }

        // Returning new element
        return arr[index];
    }

    // Driver
    public static void main(String[] args)
    {
        int[] arr = { 1, 2, 3, 4, 5 };

        // No. of rotations
        int rotations = 2;

        // Ranges according to 0-based indexing
        int[][] ranges = { { 0, 2 }, { 0, 3 } };

        int index = 1;
        System.out.println(findElement(arr, ranges,
                                       rotations, index));
    }
}

/* This code is contributed by Mr. Somesh Awasthi */
# Python 3 code to rotate an array
# and answer the index query

# Function to compute the element
# at given index
def findElement(arr, ranges, rotations, index):

    for i in range(rotations - 1, -1, -1):

        # Range[left...right]
        left = ranges[i][0]
        right = ranges[i][1]

        # Rotation will not have
        # any effect
        if (left <= index and right >= index):
            if (index == left):
                index = right
            else:
                index = index - 1

    # Returning new element
    return arr[index]

# Driver Code
arr = [1, 2, 3, 4, 5]

# No. of rotations
rotations = 2

# Ranges according to
# 0-based indexing
ranges = [[0, 2], [0, 3]]

index = 1

print(findElement(arr, ranges, rotations, index))

# This code is contributed by Nikita Tiwari.
// C# code to rotate an array
// and answer the index query
using System;

class GFG {

    // Function to compute the
    // element at given index
    static int findElement(int[] arr, int[,] ranges,
                           int rotations, int index)
    {
        for (int i = rotations - 1; i >= 0; i--) {

            // Range[left...right]
            int left = ranges[i, 0];
            int right = ranges[i, 1];

            // Rotation will not
            // have any effect
            if (left <= index && right >= index) {
                if (index == left)
                    index = right;
                else
                    index--;
            }
        }

        // Returning new element
        return arr[index];
    }

    // Driver Code
    public static void Main()
    {
        int[] arr = { 1, 2, 3, 4, 5 };

        // No. of rotations
        int rotations = 2;

        // Ranges according
        // to 0-based indexing
        int[,] ranges = { { 0, 2 }, { 0, 3 } };

        int index = 1;
        Console.Write(findElement(arr, ranges,
                                  rotations, index));
    }
}

// This code is contributed
// by nitin mittal.
<?php
// PHP code to rotate an array
// and answer the index query

// Function to compute the
// element at given index
function findElement($arr, $ranges,
                     $rotations, $index)
{
    for ($i = $rotations - 1; $i >= 0; $i--) {

        // Range[left...right]
        $left = $ranges[$i][0];
        $right = $ranges[$i][1];

        // Rotation will not
        // have any effect
        if ($left <= $index && $right >= $index) {
            if ($index == $left)
                $index = $right;
            else
                $index--;
        }
    }

    // Returning new element
    return $arr[$index];
}

// Driver Code
$arr = array(1, 2, 3, 4, 5);

// No. of rotations
$rotations = 2;

// Ranges according
// to 0-based indexing
$ranges = array(array(0, 2), array(0, 3));

$index = 1;

echo findElement($arr, $ranges,
                 $rotations, $index);

// This code is contributed by ajit
?>
<script>

// JavaScript code to rotate an array
// and answer the index query

// Function to compute the element at
// given index
let findElement = (arr, ranges, rotations, index) => {
    for (let i = rotations - 1; i >= 0; i--) {

        // Range[left...right]
        let left = ranges[i][0];
        let right = ranges[i][1];

        // Rotation will not have any effect
        if (left <= index && right >= index) {
            if (index == left)
                index = right;
            else
                index--;
        }
    }

    // Returning new element
    return arr[index];
}

// Driver Code
let arr = [ 1, 2, 3, 4, 5 ];

// No. of rotations
let rotations = 2;

// Ranges according to 0-based indexing
let ranges = [ [ 0, 2 ], [ 0, 3 ] ];

let index = 1;

document.write(findElement(arr, ranges, rotations, index));

</script>
Output :
3
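As a quick sanity check (an illustrative addition that reuses the Python findElement from the listing above), the walkthrough example from earlier can be verified programmatically :

# Verifying the walkthrough example: the expected answer is 30
arr = [10, 20, 30, 40, 50]
ranges = [[0, 2], [1, 4], [0, 3]]
print(findElement(arr, ranges, 3, 1))  # prints 30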
This article is contributed by Rohit Thapliyal.
| [
{
"code": null,
"e": 40885,
"s": 40857,
"text": "\n26 Oct, 2021"
},
{
"code": null,
"e": 41092,
"s": 40885,
"text": "An array consisting of N integers is given. There are several Right Circular Rotations of range[L..R] that we perform. After performing these rotations, we need to find element at a given index.Examples : "
},
{
"code": null,
"e": 41420,
"s": 41092,
"text": "Input : arr[] : {1, 2, 3, 4, 5}\n ranges[] = { {0, 2}, {0, 3} }\n index : 1\nOutput : 3\nExplanation : After first given rotation {0, 2}\n arr[] = {3, 1, 2, 4, 5}\n After second rotation {0, 3} \n arr[] = {4, 3, 1, 2, 5}\nAfter all rotations we have element 3 at given\nindex 1. "
},
{
"code": null,
"e": 42186,
"s": 41422,
"text": "Method : Brute-force The brute force approach is to actually rotate the array for all given ranges, finally return the element in at given index in the modified array.Method : Efficient We can do offline processing after saving all ranges. Suppose, our rotate ranges are : [0..2] and [0..3] We run through these ranges from reverse.After range [0..3], index 0 will have the element which was on index 3. So, we can change 0 to 3, i.e. if index = left, index will be changed to right. After range [0..2], index 3 will remain unaffected.So, we can make 3 cases : If index = left, index will be changed to right. If index is not bounds by the range, no effect of rotation. If index is in bounds, index will have the element at index-1.Below is the implementation : "
},
{
"code": null,
"e": 42211,
"s": 42186,
"text": "For better explanation:-"
},
{
"code": null,
"e": 42226,
"s": 42211,
"text": "10 20 30 40 50"
},
{
"code": null,
"e": 42235,
"s": 42226,
"text": "Index: 1"
},
{
"code": null,
"e": 42264,
"s": 42235,
"text": "Rotations: {0,2} {1,4} {0,3}"
},
{
"code": null,
"e": 42351,
"s": 42264,
"text": "Answer: Index 1 will have 30 after all the 3 rotations in the order {0,2} {1,4} {0,3}."
},
{
"code": null,
"e": 42407,
"s": 42351,
"text": "We performed {0,2} on A and now we have a new array A1."
},
{
"code": null,
"e": 42464,
"s": 42407,
"text": "We performed {1,4} on A1 and now we have a new array A2."
},
{
"code": null,
"e": 42521,
"s": 42464,
"text": "We performed {0,3} on A2 and now we have a new array A3."
},
{
"code": null,
"e": 42572,
"s": 42521,
"text": "Now we are looking for the value at index 1 in A3."
},
{
"code": null,
"e": 42600,
"s": 42572,
"text": "But A3 is {0,3} done on A2."
},
{
"code": null,
"e": 42635,
"s": 42600,
"text": "So index 1 in A3 is index 0 in A2."
},
{
"code": null,
"e": 42663,
"s": 42635,
"text": "But A2 is {1,4} done on A1."
},
{
"code": null,
"e": 42741,
"s": 42663,
"text": "So index 0 in A2 is also index 0 in A1 as it does not lie in the range {1,4}."
},
{
"code": null,
"e": 42768,
"s": 42741,
"text": "But A1 is {0,2} done on A."
},
{
"code": null,
"e": 42802,
"s": 42768,
"text": "So index 0 in A1 is index 2 in A."
},
{
"code": null,
"e": 42902,
"s": 42802,
"text": "On observing it, we are going deeper into the previous rotations starting from the latest rotation."
},
{
"code": null,
"e": 42908,
"s": 42902,
"text": "{0,3}"
},
{
"code": null,
"e": 42910,
"s": 42908,
"text": "|"
},
{
"code": null,
"e": 42916,
"s": 42910,
"text": "{1,4}"
},
{
"code": null,
"e": 42918,
"s": 42916,
"text": "|"
},
{
"code": null,
"e": 42924,
"s": 42918,
"text": "{0,2}"
},
{
"code": null,
"e": 42993,
"s": 42924,
"text": "This is the reason we are processing the rotations in reverse order."
},
{
"code": null,
"e": 43105,
"s": 42993,
"text": "Please note that we are not rotating the elements in the reverse order, just processing the index from reverse."
},
{
"code": null,
"e": 43240,
"s": 43105,
"text": "Because if we actually rotate in reverse order, we might get a completely different answer as in case of rotations the order matters. "
},
{
"code": null,
"e": 43244,
"s": 43240,
"text": "C++"
},
{
"code": null,
"e": 43249,
"s": 43244,
"text": "Java"
},
{
"code": null,
"e": 43257,
"s": 43249,
"text": "Python3"
},
{
"code": null,
"e": 43260,
"s": 43257,
"text": "C#"
},
{
"code": null,
"e": 43264,
"s": 43260,
"text": "PHP"
},
{
"code": null,
"e": 43275,
"s": 43264,
"text": "Javascript"
},
{
"code": "// CPP code to rotate an array// and answer the index query#include <bits/stdc++.h>using namespace std; // Function to compute the element at// given indexint findElement(int arr[], int ranges[][2], int rotations, int index){ for (int i = rotations - 1; i >= 0; i--) { // Range[left...right] int left = ranges[i][0]; int right = ranges[i][1]; // Rotation will not have any effect if (left <= index && right >= index) { if (index == left) index = right; else index--; } } // Returning new element return arr[index];} // Driverint main(){ int arr[] = { 1, 2, 3, 4, 5 }; // No. of rotations int rotations = 2; // Ranges according to 0-based indexing int ranges[rotations][2] = { { 0, 2 }, { 0, 3 } }; int index = 1; cout << findElement(arr, ranges, rotations, index); return 0; }",
"e": 44204,
"s": 43275,
"text": null
},
{
"code": "// Java code to rotate an array// and answer the index queryimport java.util.*; class GFG{ // Function to compute the element at // given index static int findElement(int[] arr, int[][] ranges, int rotations, int index) { for (int i = rotations - 1; i >= 0; i--) { // Range[left...right] int left = ranges[i][0]; int right = ranges[i][1]; // Rotation will not have any effect if (left <= index && right >= index) { if (index == left) index = right; else index--; } } // Returning new element return arr[index]; } // Driver public static void main (String[] args) { int[] arr = { 1, 2, 3, 4, 5 }; // No. of rotations int rotations = 2; // Ranges according to 0-based indexing int[][] ranges = { { 0, 2 }, { 0, 3 } }; int index = 1; System.out.println(findElement(arr, ranges, rotations, index)); }} /* This code is contributed by Mr. Somesh Awasthi */",
"e": 45360,
"s": 44204,
"text": null
},
{
"code": "# Python 3 code to rotate an array# and answer the index query # Function to compute the element# at given indexdef findElement(arr, ranges, rotations, index) : for i in range(rotations - 1, -1, -1 ) : # Range[left...right] left = ranges[i][0] right = ranges[i][1] # Rotation will not have # any effect if (left <= index and right >= index) : if (index == left) : index = right else : index = index - 1 # Returning new element return arr[index] # Driver Codearr = [ 1, 2, 3, 4, 5 ] # No. of rotationsrotations = 2 # Ranges according to# 0-based indexingranges = [ [ 0, 2 ], [ 0, 3 ] ] index = 1 print(findElement(arr, ranges, rotations, index)) # This code is contributed by Nikita Tiwari.",
"e": 46177,
"s": 45360,
"text": null
},
{
"code": "// C# code to rotate an array// and answer the index queryusing System; class GFG{ // Function to compute the // element at given index static int findElement(int []arr, int [,]ranges, int rotations, int index) { for (int i = rotations - 1; i >= 0; i--) { // Range[left...right] int left = ranges[i, 0]; int right = ranges[i, 1]; // Rotation will not // have any effect if (left <= index && right >= index) { if (index == left) index = right; else index--; } } // Returning new element return arr[index]; } // Driver Code public static void Main () { int []arr = { 1, 2, 3, 4, 5 }; // No. of rotations int rotations = 2; // Ranges according // to 0-based indexing int [,]ranges = { { 0, 2 }, { 0, 3 } }; int index = 1; Console.Write(findElement(arr, ranges, rotations, index)); }} // This code is contributed// by nitin mittal.",
"e": 47428,
"s": 46177,
"text": null
},
{
"code": "<?php// PHP code to rotate an array// and answer the index query // Function to compute the// element at given indexfunction findElement($arr, $ranges, $rotations, $index){ for ($i = $rotations - 1; $i >= 0; $i--) { // Range[left...right] $left = $ranges[$i][0]; $right = $ranges[$i][1]; // Rotation will not // have any effect if ($left <= $index && $right >= $index) { if ($index == $left) $index = $right; else $index--; } } // Returning new element return $arr[$index];} // Driver Code$arr = array(1, 2, 3, 4, 5); // No. of rotations$rotations = 2; // Ranges according// to 0-based indexing$ranges = array(array(0, 2), array(0, 3)); $index = 1; echo findElement($arr, $ranges, $rotations, $index); // This code is contributed by ajit?>",
"e": 48364,
"s": 47428,
"text": null
},
{
"code": "<script> // JavaScript code to rotate an array// and answer the index query // Function to compute the element at// given indexlet findElement = (arr, ranges,rotations,index)=>{ for (let i = rotations - 1; i >= 0; i--) { // Range[left...right] let left = ranges[i][0]; let right = ranges[i][1]; // Rotation will not have any effect if (left <= index && right >= index) { if (index == left) index = right; else index--; } } // Returning new element return arr[index];} // Driver Codelet arr = [ 1, 2, 3, 4, 5 ]; // No. of rotationslet rotations = 2; // Ranges according to 0-based indexinglet ranges = [ [ 0, 2 ], [ 0, 3] ]; let index = 1; document.write(findElement(arr, ranges, rotations, index)); </script>",
"e": 49181,
"s": 48364,
"text": null
},
{
"code": null,
"e": 49191,
"s": 49181,
"text": "Output : "
},
{
"code": null,
"e": 49193,
"s": 49191,
"text": "3"
},
{
"code": null,
"e": 49617,
"s": 49193,
"text": "This article is contributed by Rohit Thapliyal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
}
]
|
Android - Clipboard | Android provides the clipboard framework for copying and pasting different types of data. The data could be text, images, binary stream data or other complex data types.
Android provides the ClipboardManager, ClipData, and ClipData.Item classes to use the copy and paste framework. In order to use the clipboard framework, you need to put data into a clip object, and then put that object into the system-wide clipboard.
In order to use the clipboard, you need to instantiate an object of ClipboardManager by calling the getSystemService() method. Its syntax is given below −
ClipboardManager myClipboard;
myClipboard = (ClipboardManager)getSystemService(CLIPBOARD_SERVICE);
The next thing you need to do is to instantiate the ClipData object by calling the respective type of data method of the ClipData class. In the case of text data, the newPlainText method will be called. After that, you have to set that data as the clip of the ClipboardManager object. Its syntax is given below −
ClipData myClip;
String text = "hello world";
myClip = ClipData.newPlainText("text", text);
myClipboard.setPrimaryClip(myClip);
The ClipData object can take these three forms, and the following functions are used to create those forms.
Text
newPlainText(label, text)
Returns a ClipData object whose single ClipData.Item object contains a text string.
URI
newUri(resolver, label, URI)
Returns a ClipData object whose single ClipData.Item object contains a URI.
Intent
newIntent(label, intent)
Returns a ClipData object whose single ClipData.Item object contains an Intent.
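For illustration (this snippet is an addition, with placeholder URI and Intent values, assuming it runs inside an Activity with android.net.Uri and android.content.Intent imported), clips of the other two forms are created in much the same way −

// Creating a clip from a URI (the content URI here is a placeholder)
Uri copyUri = Uri.parse("content://com.example.provider/items/1");
ClipData uriClip = ClipData.newUri(getContentResolver(), "URI", copyUri);

// Creating a clip from an Intent
Intent copyIntent = new Intent(this, MainActivity.class);
ClipData intentClip = ClipData.newIntent("Intent", copyIntent);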
In order to paste the data, we will first get the clip by calling the getPrimaryClip() method. From that clip, we will get the item as a ClipData.Item object, and from the item we will get the data. Its syntax is given below −
ClipData abc = myClipboard.getPrimaryClip();
ClipData.Item item = abc.getItemAt(0);
String text = item.getText().toString();
Apart from these methods, there are other methods provided by the ClipboardManager class for managing the clipboard framework. These methods are listed below −
getPrimaryClip()
This method just returns the current primary clip on the clipboard
getPrimaryClipDescription()
This method returns a description of the current primary clip on the clipboard but not a copy of its data.
hasPrimaryClip()
This method returns true if there is currently a primary clip on the clipboard
setPrimaryClip(ClipData clip)
This method sets the current primary clip on the clipboard
setText(CharSequence text)
This method can be directly used to copy text into the clipboard
getText()
This method can be directly used to get the copied text from the clipboard
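For example (an illustrative addition, assuming myClipboard is initialized as shown earlier and android.content.ClipDescription is imported), it is safer to confirm that the clipboard actually holds plain text before pasting −

// Paste only if the clipboard currently holds a plain-text clip
if (myClipboard.hasPrimaryClip() && myClipboard.getPrimaryClipDescription()
      .hasMimeType(ClipDescription.MIMETYPE_TEXT_PLAIN)) {
   ClipData.Item item = myClipboard.getPrimaryClip().getItemAt(0);
   ed2.setText(item.getText().toString());
}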
Here is an example demonstrating the use of the ClipboardManager class. It creates a basic copy-paste application that allows you to copy the text and then paste it via the clipboard.
To experiment with this example, you can run it on an actual device or in an emulator.
Following is the content of the modified main activity file src/MainActivity.java.
package com.example.sairamkrishna.myapplication;
import android.content.ClipData;
import android.content.ClipboardManager;
import android.os.Bundle;
import android.support.v7.app.ActionBarActivity;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;
public class MainActivity extends ActionBarActivity {
EditText ed1, ed2;
Button b1, b2;
private ClipboardManager myClipboard;
private ClipData myClip;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
ed1 = (EditText) findViewById(R.id.editText);
ed2 = (EditText) findViewById(R.id.editText2);
b1 = (Button) findViewById(R.id.button);
b2 = (Button) findViewById(R.id.button2);
myClipboard = (ClipboardManager) getSystemService(CLIPBOARD_SERVICE);
b1.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
String text;
text = ed1.getText().toString();
myClip = ClipData.newPlainText("text", text);
myClipboard.setPrimaryClip(myClip);
Toast.makeText(getApplicationContext(), "Text Copied",
Toast.LENGTH_SHORT).show();
}
});
b2.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
ClipData abc = myClipboard.getPrimaryClip();
ClipData.Item item = abc.getItemAt(0);
String text = item.getText().toString();
ed2.setText(text);
Toast.makeText(getApplicationContext(), "Text Pasted",
Toast.LENGTH_SHORT).show();
}
});
}
}
Following is the modified content of the xml res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
android:paddingBottom="@dimen/activity_vertical_margin"
tools:context=".MainActivity">
<TextView android:text="Example" android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/textview"
android:textSize="35dp"
android:layout_alignParentTop="true"
android:layout_centerHorizontal="true" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Tutorials point"
android:id="@+id/textView"
android:layout_below="@+id/textview"
android:layout_centerHorizontal="true"
android:textColor="#ff7aff24"
android:textSize="35dp" />
<ImageView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/imageView"
android:src="@drawable/abc"
android:layout_below="@+id/textView"
android:layout_centerHorizontal="true" />
<EditText
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/editText"
android:layout_alignParentRight="true"
android:layout_alignParentEnd="true"
android:hint="Copy text"
android:layout_below="@+id/imageView"
android:layout_alignLeft="@+id/imageView"
android:layout_alignStart="@+id/imageView" />
<EditText
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/editText2"
android:layout_alignLeft="@+id/editText"
android:layout_alignStart="@+id/editText"
android:hint="paste text"
android:layout_below="@+id/editText"
android:layout_alignRight="@+id/editText"
android:layout_alignEnd="@+id/editText" />
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Copy text"
android:id="@+id/button"
android:layout_below="@+id/editText2"
android:layout_alignLeft="@+id/editText2"
android:layout_alignStart="@+id/editText2" />
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Paste text"
android:id="@+id/button2"
android:layout_below="@+id/editText2"
android:layout_alignRight="@+id/editText2"
android:layout_alignEnd="@+id/editText2" />
</RelativeLayout>
Following is the content of the res/values/strings.xml.
<resources>
<string name="app_name">My Application</string>
</resources>
Following is the content of AndroidManifest.xml file.
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.sairamkrishna.myapplication" >
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme" >
<activity
android:name="com.example.sairamkrishna.myapplication.MainActivity"
android:label="@string/app_name" >
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run the application we just modified. I assume you had created your AVD while doing environment setup. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Android Studio will display the following images −
Now just enter any text in the Text to copy field and then select the Copy text button. The following notification will be displayed −
Now just press the paste button, and you will see the text which is copied is now pasted in the field of Copied Text. It is shown below −
| [
{
"code": null,
"e": 3777,
"s": 3607,
"text": "Android provides the clipboard framework for copying and pasting different types of data. The data could be text, images, binary stream data or other complex data types."
},
{
"code": null,
"e": 4027,
"s": 3777,
"text": "Android provides the library of ClipboardManager and ClipData and ClipData.item to use the copying and pasting framework.In order to use clipboard framework, you need to put data into clip object, and then put that object into system wide clipboard."
},
{
"code": null,
"e": 4179,
"s": 4027,
"text": "In order to use clipboard , you need to instantiate an object of ClipboardManager by calling the getSystemService() method. Its syntax is given below −"
},
{
"code": null,
"e": 4278,
"s": 4179,
"text": "ClipboardManager myClipboard;\nmyClipboard = (ClipboardManager)getSystemService(CLIPBOARD_SERVICE);"
},
{
"code": null,
"e": 4587,
"s": 4278,
"text": "The next thing you need to do is to instantiate the ClipData object by calling the respective type of data method of the ClipData class. In case of text data , the newPlainText method will be called. After that you have to set that data as the clip of the Clipboard Manager object.Its syntax is given below −"
},
{
"code": null,
"e": 4715,
"s": 4587,
"text": "ClipData myClip;\nString text = \"hello world\";\nmyClip = ClipData.newPlainText(\"text\", text);\nmyClipboard.setPrimaryClip(myClip);"
},
{
"code": null,
"e": 4817,
"s": 4715,
"text": "The ClipData object can take these three form and following functions are used to create those forms."
},
{
"code": null,
"e": 4822,
"s": 4817,
"text": "Text"
},
{
"code": null,
"e": 4848,
"s": 4822,
"text": "newPlainText(label, text)"
},
{
"code": null,
"e": 4932,
"s": 4848,
"text": "Returns a ClipData object whose single ClipData.Item object contains a text string."
},
{
"code": null,
"e": 4936,
"s": 4932,
"text": "URI"
},
{
"code": null,
"e": 4965,
"s": 4936,
"text": "newUri(resolver, label, URI)"
},
{
"code": null,
"e": 5041,
"s": 4965,
"text": "Returns a ClipData object whose single ClipData.Item object contains a URI."
},
{
"code": null,
"e": 5048,
"s": 5041,
"text": "Intent"
},
{
"code": null,
"e": 5073,
"s": 5048,
"text": "newIntent(label, intent)"
},
{
"code": null,
"e": 5153,
"s": 5073,
"text": "Returns a ClipData object whose single ClipData.Item object contains an Intent."
},
{
"code": null,
"e": 5384,
"s": 5153,
"text": "In order to paste the data, we will first get the clip by calling the getPrimaryClip() method. And from that click we will get the item in ClipData.Item object. And from the object we will get the data. Its syntax is given below −"
},
{
"code": null,
"e": 5509,
"s": 5384,
"text": "ClipData abc = myClipboard.getPrimaryClip();\nClipData.Item item = abc.getItemAt(0);\nString text = item.getText().toString();"
},
{
"code": null,
"e": 5670,
"s": 5509,
"text": "Apart from the these methods , there are other methods provided by the ClipboardManager class for managing clipboard framework. These methods are listed below −"
},
{
"code": null,
"e": 5687,
"s": 5670,
"text": "getPrimaryClip()"
},
{
"code": null,
"e": 5754,
"s": 5687,
"text": "This method just returns the current primary clip on the clipboard"
},
{
"code": null,
"e": 5782,
"s": 5754,
"text": "getPrimaryClipDescription()"
},
{
"code": null,
"e": 5889,
"s": 5782,
"text": "This method returns a description of the current primary clip on the clipboard but not a copy of its data."
},
{
"code": null,
"e": 5906,
"s": 5889,
"text": "hasPrimaryClip()"
},
{
"code": null,
"e": 5985,
"s": 5906,
"text": "This method returns true if there is currently a primary clip on the clipboard"
},
{
"code": null,
"e": 6015,
"s": 5985,
"text": "setPrimaryClip(ClipData clip)"
},
{
"code": null,
"e": 6074,
"s": 6015,
"text": "This method sets the current primary clip on the clipboard"
},
{
"code": null,
"e": 6101,
"s": 6074,
"text": "setText(CharSequence text)"
},
{
"code": null,
"e": 6166,
"s": 6101,
"text": "This method can be directly used to copy text into the clipboard"
},
{
"code": null,
"e": 6176,
"s": 6166,
"text": "getText()"
},
{
"code": null,
"e": 6251,
"s": 6176,
"text": "This method can be directly used to get the copied text from the clipboard"
},
{
"code": null,
"e": 6427,
"s": 6251,
"text": "Here is an example demonstrating the use of ClipboardManager class. It creates a basic copy paste application that allows you to copy the text and then paste it via clipboard."
},
{
"code": null,
"e": 6517,
"s": 6427,
"text": "To experiment with this example , you can run this on an actual device or in an emulator."
},
{
"code": null,
"e": 6600,
"s": 6517,
"text": "Following is the content of the modified main activity file src/MainActivity.java."
},
{
"code": null,
"e": 8417,
"s": 6600,
"text": "package com.example.sairamkrishna.myapplication;\n\nimport android.content.ClipData;\nimport android.content.ClipboardManager;\nimport android.os.Bundle;\n\nimport android.support.v7.app.ActionBarActivity;\nimport android.view.View;\n\nimport android.widget.Button;\nimport android.widget.EditText;\nimport android.widget.Toast;\n\n\npublic class MainActivity extends ActionBarActivity {\n EditText ed1, ed2;\n Button b1, b2;\n\n private ClipboardManager myClipboard;\n private ClipData myClip;\n\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n\n ed1 = (EditText) findViewById(R.id.editText);\n ed2 = (EditText) findViewById(R.id.editText2);\n\n b1 = (Button) findViewById(R.id.button);\n b2 = (Button) findViewById(R.id.button2);\n\n myClipboard = (ClipboardManager) getSystemService(CLIPBOARD_SERVICE);\n\n b1.setOnClickListener(new View.OnClickListener() {\n \n @Override\n public void onClick(View v) {\n String text;\n text = ed1.getText().toString();\n\n myClip = ClipData.newPlainText(\"text\", text);\n myClipboard.setPrimaryClip(myClip);\n\n Toast.makeText(getApplicationContext(), \"Text Copied\", \n Toast.LENGTH_SHORT).show();\n }\n });\n\n b2.setOnClickListener(new View.OnClickListener() {\n \n @Override\n public void onClick(View v) {\n ClipData abc = myClipboard.getPrimaryClip();\n ClipData.Item item = abc.getItemAt(0);\n\n String text = item.getText().toString();\n ed2.setText(text);\n\n Toast.makeText(getApplicationContext(), \"Text Pasted\", \n Toast.LENGTH_SHORT).show();\n }\n });\n }\n\n}"
},
{
"code": null,
"e": 8492,
"s": 8417,
"text": "Following is the modified content of the xml res/layout/activity_main.xml."
},
{
"code": null,
"e": 11331,
"s": 8492,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout \n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\" \n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\" \n android:paddingLeft=\"@dimen/activity_horizontal_margin\"\n android:paddingRight=\"@dimen/activity_horizontal_margin\"\n android:paddingTop=\"@dimen/activity_vertical_margin\"\n android:paddingBottom=\"@dimen/activity_vertical_margin\" \n tools:context=\".MainActivity\">\n \n <TextView android:text=\"Example\" android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/textview\"\n android:textSize=\"35dp\"\n android:layout_alignParentTop=\"true\"\n android:layout_centerHorizontal=\"true\" />\n \n <TextView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Tutorials point\"\n android:id=\"@+id/textView\"\n android:layout_below=\"@+id/textview\"\n android:layout_centerHorizontal=\"true\"\n android:textColor=\"#ff7aff24\"\n android:textSize=\"35dp\" />\n \n <ImageView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/imageView\"\n android:src=\"@drawable/abc\"\n android:layout_below=\"@+id/textView\"\n android:layout_centerHorizontal=\"true\" />\n \n <EditText\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/editText\"\n android:layout_alignParentRight=\"true\"\n android:layout_alignParentEnd=\"true\"\n android:hint=\"Copy text\"\n android:layout_below=\"@+id/imageView\"\n android:layout_alignLeft=\"@+id/imageView\"\n android:layout_alignStart=\"@+id/imageView\" />\n \n <EditText\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/editText2\"\n android:layout_alignLeft=\"@+id/editText\"\n android:layout_alignStart=\"@+id/editText\"\n android:hint=\"paste text\"\n android:layout_below=\"@+id/editText\"\n android:layout_alignRight=\"@+id/editText\"\n android:layout_alignEnd=\"@+id/editText\" />\n \n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Copy text\"\n android:id=\"@+id/button\"\n android:layout_below=\"@+id/editText2\"\n android:layout_alignLeft=\"@+id/editText2\"\n android:layout_alignStart=\"@+id/editText2\" />\n \n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Paste text\"\n android:id=\"@+id/button2\"\n android:layout_below=\"@+id/editText2\"\n android:layout_alignRight=\"@+id/editText2\"\n android:layout_alignEnd=\"@+id/editText2\" />\n \n</RelativeLayout>"
},
{
"code": null,
"e": 11386,
"s": 11331,
"text": "Following is the content of the res/values/string.xml."
},
{
"code": null,
"e": 11462,
"s": 11386,
"text": "<resources>\n <string name=\"app_name\">My Application</string>\n</resources>"
},
{
"code": null,
"e": 11516,
"s": 11462,
"text": "Following is the content of AndroidManifest.xml file."
},
{
"code": null,
"e": 12250,
"s": 11516,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"com.example.sairamkrishna.myapplication\" >\n\n <application\n android:allowBackup=\"true\"\n android:icon=\"@drawable/ic_launcher\"\n android:label=\"@string/app_name\"\n android:theme=\"@style/AppTheme\" >\n \n <activity\n android:name=\"com.example.sairamkrishna.myapplication.MainActivity\"\n android:label=\"@string/app_name\" >\n \n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n \n </activity>\n </application>\n\n</manifest>"
},
{
"code": null,
"e": 12543,
"s": 12250,
"text": "Let's try to run our an application we just modified. I assume you had created your AVD while doing environment setup. To run the app from Android studio, open one of your project's activity files and click Run icon from the tool bar. Android studio installer will display following images −"
},
{
"code": null,
"e": 12699,
"s": 12543,
"text": "Now just enter any text in the Text to copy field and then select the copy text button. The following notification will be displayed which is shown below −"
},
{
"code": null,
"e": 12837,
"s": 12699,
"text": "Now just press the paste button, and you will see the text which is copied is now pasted in the field of Copied Text. It is shown below −"
}
|
Python Debugger – Python pdb - GeeksforGeeks | 03 Jan, 2021
Debugging in Python is facilitated by the pdb module (python debugger), which comes built into the Python standard library. It is actually defined as the class Pdb, which internally makes use of the bdb (basic debugger functions) and cmd (support for line-oriented command interpreters) modules. The major advantage of pdb is that it runs purely in the command line, thereby making it great for debugging code on remote servers when we don't have the privilege of a GUI-based debugger.
pdb supports-
Setting breakpoints
Stepping through code
Source code listing
Viewing stack traces
There are several ways to invoke a debugger
To start debugging within the program, just insert the import pdb and pdb.set_trace() commands. Run your script normally and execution will stop where we have introduced a breakpoint. So basically we are hard-coding a breakpoint on a line below where we call set_trace(). With Python 3.7 and later versions, there is a built-in function called breakpoint() which works in the same manner. Refer to the following example on how to insert the set_trace() function.
Example 1: Addition of two numbers
Intentional error: As input() returns a string, the program concatenates those strings instead of adding the input numbers.
Python3
import pdb

def addition(a, b):
    answer = a + b
    return answer

pdb.set_trace()
x = input("Enter first number : ")
y = input("Enter second number : ")
sum = addition(x, y)
print(sum)
Output :
[Screenshot: set_trace output]
In the output, on the first line after the angle bracket, we have the directory path of our file, the line number where our breakpoint is located, and <module>. It's basically saying that we have a breakpoint in exppdb.py on line number 10 at the module level. If you introduce the breakpoint inside a function, then its name will appear inside <>. The next line shows the code line where our execution is stopped. That line is not executed yet. Then we have the pdb prompt. Now to navigate the code we can use the following commands: n (next) executes the current line and moves to the next one, s (step) steps into a function call, c (continue) resumes execution until the next breakpoint, l (list) shows the source code around the current line, and q (quit) exits the debugger.
Now to check the type of a variable, just write whatis and the variable name. In the example given below, the type of x is returned as <class 'str'>. Thus, typecasting the string to int in our program will resolve the error.
Example 2: [Screenshot: using whatis to inspect the type of x]
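For reference, a minimal corrected version of the program (with the set_trace() call removed) simply casts both inputs to int :

def addition(a, b):
    answer = a + b
    return answer

x = int(input("Enter first number : "))
y = int(input("Enter second number : "))
sum = addition(x, y)
print(sum)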
From the Command Line: This is the easiest way of using the debugger. You just have to run the following command in the terminal.
python -m pdb exppdb.py (put your file name instead of exppdb.py)
This statement loads your source code and stops execution on the first line of code.
Example 3:
Python3
def addition(a, b):
    answer = a + b
    return answer

x = input("Enter first number : ")
y = input("Enter second number : ")
sum = addition(x, y)
print(sum)
Output :
[Screenshot: command-line debugging session]
Post-mortem debugging means entering debug mode after the program has finished executing (the failure has already occurred). pdb supports post-mortem debugging through the pm() and post_mortem() functions. These functions look for the active traceback and start the debugger at the line in the call stack where the exception occurred. In the output of the given example, you can notice pdb appear when the exception is encountered in the program.
Example 4:
Python3
def multiply(a, b):
    answer = a * b
    return answer

x = input("Enter first number : ")
y = input("Enter second number : ")
result = multiply(x, y)
print(result)
Output :
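The screenshot of the resulting session is not reproduced here. If the script is not launched under python -m pdb, an equivalent post-mortem session can be entered explicitly; in this minimal sketch, the try/except wrapper is an illustrative addition and not part of the original example :

import pdb

def multiply(a, b):
    answer = a * b
    return answer

try:
    result = multiply(input("Enter first number : "),
                      input("Enter second number : "))
except TypeError:
    # Start the debugger at the frame where the exception occurred
    pdb.post_mortem()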
All the variables, including variables local to the function being executed in the program as well as globals, are maintained on the stack. We can use args (or just a) to print all the arguments of the function which is currently active. The p command evaluates an expression given as an argument and prints the result.
Here, example 4 of this article is executed in debugging mode to show you how to check for variables :
[Screenshot: checking variable values]
While working with large programs, we often want to add a number of breakpoints where we know errors might occur. To do this, you just have to use the break command. When you insert a breakpoint, the debugger assigns a number to it starting from 1. Use the break command without arguments to display all the breakpoints in the program.
Syntax:
break filename:lineno, condition
Given below is the implementation to add breakpoints in the program used for Example 4.
[Screenshot: adding breakpoints]
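In prompt form (the condition shown is illustrative), breakpoints for Example 4's multiply function can be set and then listed with a bare break :

(Pdb) break multiply
(Pdb) break multiply, a == "3"
(Pdb) break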
After adding breakpoints with the help of the numbers assigned to them, we can manage the breakpoints using the enable, disable, and clear commands. disable tells the debugger not to stop when that breakpoint is reached, while enable turns the disabled breakpoints back on.
Given below is the implementation to manage breakpoints using Example 4.
[Screenshot: managing breakpoints]
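In prompt form (the breakpoint number is illustrative), managing a breakpoint looks like this :

(Pdb) disable 1
(Pdb) enable 1
(Pdb) clear 1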
| [
{
"code": null,
"e": 23808,
"s": 23780,
"text": "\n03 Jan, 2021"
},
{
"code": null,
"e": 24277,
"s": 23808,
"text": "Debugging in Python is facilitated by pdb module(python debugger) which comes built-in to the Python standard library. It is actually defined as the class Pdb which internally makes use of bdb(basic debugger functions) and cmd(support for line-oriented command interpreters) modules. The major advantage of pdb is it runs purely in the command line thereby making it great for debugging code on remote servers when we don’t have the privilege of a GUI-based debugger. "
},
{
"code": null,
"e": 24292,
"s": 24277,
"text": "pdb supports- "
},
{
"code": null,
"e": 24312,
"s": 24292,
"text": "Setting breakpoints"
},
{
"code": null,
"e": 24334,
"s": 24312,
"text": "Stepping through code"
},
{
"code": null,
"e": 24354,
"s": 24334,
"text": "Source code listing"
},
{
"code": null,
"e": 24375,
"s": 24354,
"text": "Viewing stack traces"
},
{
"code": null,
"e": 24420,
"s": 24375,
"text": "There are several ways to invoke a debugger "
},
{
"code": null,
"e": 24866,
"s": 24420,
"text": "To start debugging within the program just insert import pdb, pdb.set_trace() commands. Run your script normally and execution will stop where we have introduced a breakpoint. So basically we are hard coding a breakpoint on a line below where we call set_trace(). With python 3.7 and later versions, there is a built-in function called breakpoint() which works in the same manner. Refer following example on how to insert set_trace() function."
},
{
"code": null,
"e": 24900,
"s": 24866,
"text": "Example1: Addition of two numbers"
},
{
"code": null,
"e": 25016,
"s": 24900,
"text": "Intentional error: As input() returns string the program concatenates those strings instead of adding input numbers"
},
{
"code": null,
"e": 25024,
"s": 25016,
"text": "Python3"
},
{
"code": "import pdb def addition(a, b): answer = a + b return answer pdb.set_trace()x = input(\"Enter first number : \")y = input(\"Enter second number : \")sum = addition(x, y)print(sum)",
"e": 25211,
"s": 25024,
"text": null
},
{
"code": null,
"e": 25220,
"s": 25211,
"text": "Output :"
},
{
"code": null,
"e": 25230,
"s": 25220,
"text": "set_trace"
},
{
"code": null,
"e": 25768,
"s": 25230,
"text": "In the output on the first line after the angle bracket, we have the directory path of our file, line number where our breakpoint is located, and <module>. It’s basically saying that we have a breakpoint in exppdb.py on line number 10 at the module level. If you introduce the breakpoint inside the function then its name will appear inside <>. The next line is showing the code line where our execution is stopped. That line is not executed yet. Then we have the pdb prompt. Now to navigate the code we can use the following commands :"
},
{
"code": null,
"e": 25991,
"s": 25768,
"text": "Now to check the type of variable just write whatis and variable name. In the example given below the output of type of x is returned as <class string>. Thus typecasting string to int in our program will resolve the error."
},
{
"code": null,
"e": 26002,
"s": 25991,
"text": "Example 2:"
},
{
"code": null,
"e": 26123,
"s": 26002,
"text": "From the Command Line: It is the easiest way of using a debugger. You just have to run the following command in terminal"
},
{
"code": null,
"e": 26189,
"s": 26123,
"text": "python -m pdb exppdb.py (put your file name instead of exppdb.py)"
},
{
"code": null,
"e": 26274,
"s": 26189,
"text": "This statement loads your source code and stops execution on the first line of code."
},
{
"code": null,
"e": 26285,
"s": 26274,
"text": "Example 3:"
},
{
"code": null,
"e": 26293,
"s": 26285,
"text": "Python3"
},
{
"code": "def addition(a, b): answer = a + b return answer x = input(\"Enter first number : \")y = input(\"Enter second number : \")sum = addition(x, y)print(sum)",
"e": 26451,
"s": 26293,
"text": null
},
{
"code": null,
"e": 26460,
"s": 26451,
"text": "Output :"
},
{
"code": null,
"e": 26473,
"s": 26460,
"text": "command_line"
},
{
"code": null,
"e": 26925,
"s": 26473,
"text": "Post-mortem debugging means entering debug mode after the program is finished with the execution process (failure has already occurred). pdb supports post-mortem debugging through the pm() and post_mortem() functions. These functions look for active trace back and start the debugger at the line in the call stack where the exception occurred. In the output of the given example you can notice pdb appear when exception is encountered in the program."
},
{
"code": null,
"e": 26937,
"s": 26925,
"text": "Example 4: "
},
{
"code": null,
"e": 26945,
"s": 26937,
"text": "Python3"
},
{
"code": "def multiply(a, b): answer = a * b return answer x = input(\"Enter first number : \")y = input(\"Enter second number : \")result = multiply(x, y)print(result)",
"e": 27109,
"s": 26945,
"text": null
},
{
"code": null,
"e": 27120,
"s": 27109,
"text": "Output : "
},
{
"code": null,
"e": 27436,
"s": 27128,
"text": "All the variables including variables local to the function being executed in the program as well as global are maintained on the stack. We can use args( or use a) to print all the arguments of function which is currently active. p command evaluates an expression given as an argument and prints the result."
},
{
"code": null,
"e": 27540,
"s": 27436,
"text": "Here, example 4 of this article is executed in debugging mode to show you how to check for variables :"
},
{
"code": null,
"e": 27564,
"s": 27540,
"text": "cheking_variable_values"
},
{
"code": null,
"e": 27886,
"s": 27576,
"text": "While working with large programs we often want to add a number of breakpoints where we know errors might occur. To do this you just have to use the break command. When you insert a breakpoint, the debugger assigns a number to it starting from 1. Use the break to display all the breakpoints in the program. "
},
{
"code": null,
"e": 27894,
"s": 27886,
"text": "Syntax:"
},
{
"code": null,
"e": 27928,
"s": 27894,
"text": "break filename: lineno, condition"
},
{
"code": null,
"e": 28014,
"s": 27928,
"text": "Given below is the implementation to add breakpoints in a program used for example 4."
},
{
"code": null,
"e": 28033,
"s": 28014,
"text": "Adding_breakpoints"
},
{
"code": null,
"e": 28299,
"s": 28033,
"text": " After adding breakpoints with the help of numbers assigned to them we can manage the breakpoints using the enable and disable and remove command. disable tells the debugger not to stop when that breakpoint is reached while enable turns on the disabled breakpoints."
},
{
"code": null,
"e": 28373,
"s": 28299,
"text": "Given below is the implementation to manage breakpoints using Example 4. "
},
{
"code": null,
"e": 28392,
"s": 28373,
"text": "Manage_breakpoints"
},
{
"code": null,
"e": 28399,
"s": 28392,
"text": "Picked"
},
{
"code": null,
"e": 28414,
"s": 28399,
"text": "python-modules"
},
{
"code": null,
"e": 28421,
"s": 28414,
"text": "Python"
},
{
"code": null,
"e": 28519,
"s": 28421,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28528,
"s": 28519,
"text": "Comments"
},
{
"code": null,
"e": 28541,
"s": 28528,
"text": "Old Comments"
},
{
"code": null,
"e": 28559,
"s": 28541,
"text": "Python Dictionary"
},
{
"code": null,
"e": 28594,
"s": 28559,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 28616,
"s": 28594,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 28648,
"s": 28616,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 28690,
"s": 28648,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 28716,
"s": 28690,
"text": "Python String | replace()"
},
{
"code": null,
"e": 28753,
"s": 28716,
"text": "Create a Pandas DataFrame from Lists"
},
{
"code": null,
"e": 28797,
"s": 28753,
"text": "Reading and Writing to text files in Python"
},
{
"code": null,
"e": 28852,
"s": 28797,
"text": "Selecting rows in pandas DataFrame based on conditions"
}
]
|
PouchDB - Replication | One of the most important features of PouchDB is replication, i.e. you can make a copy of a database. You can replicate either a PouchDB instance stored locally or a CouchDB instance stored remotely.
Following is the syntax of replicating a database in PouchDB. Here, the contents of the source database are copied to the target. To this method, you can directly pass the location of source and destination databases in String format, or you can pass objects representing them.
PouchDB.replicate(source, target, [options])
Both the source and the target can be either PouchDB instances or CouchDB instances.
Suppose there is a database with the name sample_database in PouchDB, and it contains 3 documents doc1, doc2, and doc3, having contents as shown below.
doc1 = {_id: '001', name: 'Ram', age: 23, Designation: 'Programmer'}
doc2 = {_id: '002', name: 'Robert', age: 24, Designation: 'Programmer'}
doc3 = {_id: '003', name: 'Rahim', age: 25, Designation: 'Programmer'}
Following is an example which makes a copy of the database named sample_database that is stored locally in CouchDB.
//Requiring the package
var PouchDB = require('PouchDB');
var localdb = 'sample_database';
//Creating remote database object
var remotedb = 'http://localhost:5984/sample_database';
//Replicating a local database to Remote
PouchDB.replicate(localdb, remotedb);
console.log ("Database replicated successfully");
Save the above code in a file with name Replication_example.js. Open the command prompt and execute the JavaScript file using node as shown below.
C:\PouchDB_Examples >node Replication_example.js
This makes a copy of the database named sample_database in CouchDB instance and displays a message on the console as shown below.
Database replicated successfully
You can verify whether the database is replicated in your CouchDB instance by clicking the following link http://127.0.0.1:5984/_utils/index.html.
On clicking, you can see the list of databases in your CouchDB. You can also observe that a copy of the database sample_database is created here.
If you select the replicated database, you can view its contents as shown below.
Suppose there is a database with the name remote_database in CouchDB (CouchDB database names must be lowercase) and it contains 3 documents, doc1, doc2, and doc3, having contents as shown below.
doc1 = {_id: '001', name: 'Geeta', age: 25, Designation: 'Programmer'}
doc2 = {_id: '002', name: 'Zara Ali', age: 24, Designation: 'Manager'}
doc3 = {_id: '003', name: 'Mary', age: 23, Designation: 'Admin'}
Following is an example which makes a local copy of the database named remote_database that is stored in CouchDB.
//Requiring the package
var PouchDB = require('PouchDB');
var localdb = 'remote_database';
var remotedb = 'http://localhost:5984/remote_database';
//Replicating a remote database to local
PouchDB.replicate(remotedb, localdb);
console.log("Database replicated successfully");
Save the above code in a file with the name Replication_example2.js. Open the command prompt and execute the JavaScript file using node as shown below.
C:\PouchDB_Examples >node Replication_example2.js
This makes a copy of the database named remote_database in PouchDB instance and displays a message on the console as shown below.
Database replicated successfully
You can verify whether the database is replicated in your PouchDB instance by executing the following code.
//Requiring the package
var PouchDB = require('PouchDB');
//Creating the database object
var db = new PouchDB('remote_database');
//Retrieving all the documents in PouchDB
db.allDocs({include_docs: true, attachments: true}, function(err, docs) {
if (err) {
return console.log(err);
} else {
console.log(docs.rows);
}
});
If the database is replicated on executing the above code, you will get the contents of the replicated database as shown below.
[
{
id: '001',
key: '001',
value: { rev: '1-23cf3767e32a682c247053b16caecedb' },
doc: {
name: 'Geeta',
age: 25,
Designation: 'Programmer',
_id: '001',
_rev: '1-23cf3767e32a682c247053b16caecedb'
}
},
{
id: '002',
key: '002',
value: { rev: '1-d5bcfafbd4d4fae92fd7fc4fdcaa3a79' },
doc: {
name: 'Zara Ali',
age: 24,
Designation: 'Manager',
_id: '002',
_rev: '1-d5bcfafbd4d4fae92fd7fc4fdcaa3a79'
}
},
{
id: '003',
key: '003',
value: { rev: '1-c4cce025dbd30d21e40882d41842d5a4' },
doc: {
name: 'Mary',
age: 23,
Designation: 'Admin',
_id: '003',
_rev: '1-c4cce025dbd30d21e40882d41842d5a4'
}
}
]
| [
{
"code": null,
"e": 2253,
"s": 2053,
"text": "One of the most important features of PouchDB is replication, i.e. you can make a copy of a database. You can replicate either a PouchDB instance stored locally or a CouchDB instance stored remotely."
},
{
"code": null,
"e": 2514,
"s": 2253,
"text": "Following is the syntax of replicating a database in PouchDB. Here, a copy of the source database is the target. To this method, you can directly pass the location of source and destination databases in String format, or you can pass objects representing them."
},
{
"code": null,
"e": 2560,
"s": 2514,
"text": "PouchDB.replicate(source, target, [options])\n"
},
{
"code": null,
"e": 2642,
"s": 2560,
"text": "Both the source and targets can be either PouchDB instances or CouchDB instances."
},
{
"code": null,
"e": 2794,
"s": 2642,
"text": "Suppose there is a database with the name sample_database in PouchDB, and it contains 3 documents doc1, doc2, and doc3, having contents as shown below."
},
{
"code": null,
"e": 3008,
"s": 2794,
"text": "doc1 = {_id: '001', name: 'Ram', age: 23, Designation: 'Programmer'} \ndoc2 = {_id: '002', name: 'Robert', age: 24, Designation: 'Programmer'} \ndoc3 = {_id: '003', name: 'Rahim', age: 25, Designation: 'Programmer'}"
},
{
"code": null,
"e": 3124,
"s": 3008,
"text": "Following is an example which makes a copy of the database named sample_database that is stored locally in CouchDB."
},
{
"code": null,
"e": 3441,
"s": 3124,
"text": "//Requiring the package \nvar PouchDB = require('PouchDB');\n\nvar localdb = 'sample_database';\n\n//Creating remote database object \nvar remotedb = 'http://localhost:5984/sample_database';\n\n//Replicating a local database to Remote \nPouchDB.replicate(localDB, remoteDB); \nconsole.log (\"Database replicated successfully\");"
},
{
"code": null,
"e": 3588,
"s": 3441,
"text": "Save the above code in a file with name Replication_example.js. Open the command prompt and execute the JavaScript file using node as shown below."
},
{
"code": null,
"e": 3638,
"s": 3588,
"text": "C:\\PouchDB_Examples >node Replication_example.js\n"
},
{
"code": null,
"e": 3768,
"s": 3638,
"text": "This makes a copy of the database named sample_database in CouchDB instance and displays a message on the console as shown below."
},
{
"code": null,
"e": 3802,
"s": 3768,
"text": "Database replicated successfully\n"
},
{
"code": null,
"e": 3949,
"s": 3802,
"text": "You can verify whether the database is replicated in your CouchDB instance by clicking the following link http://127.0.0.1:5984/_utils/index.html."
},
{
"code": null,
"e": 4095,
"s": 3949,
"text": "On clicking, you can see the list of databases in your CouchDB. You can also observe that a copy of the database sample_database is created here."
},
{
"code": null,
"e": 4176,
"s": 4095,
"text": "If you select the replicated database, you can view its contents as shown below."
},
{
"code": null,
"e": 4328,
"s": 4176,
"text": "Suppose there is a database with the name Remote_Database in CouchDB and it contains 3 documents, doc1, doc2, and doc3, having contents as shown below."
},
{
"code": null,
"e": 4535,
"s": 4328,
"text": "doc1 = {_id: '001', name: 'Geeta', age: 25, Designation: 'Programmer'}\ndoc2 = {_id: '002', name: 'Zara Ali', age: 24, Designation: 'Manager'}\ndoc3 = {_id: '003', name: 'Mary', age: 23, Designation: 'Admin'}"
},
{
"code": null,
"e": 4664,
"s": 4535,
"text": "Following is an example which makes a copy of the database named Remote_Database that is stored in CouchDB in the local storage."
},
{
"code": null,
"e": 4943,
"s": 4664,
"text": "//Requiring the package\nvar PouchDB = require('PouchDB');\n\nvar localdb = 'sample_database';\n\nvar remotedb = 'http://localhost:5984/sample_database1';\n\n//Replicating a local database to Remote\nPouchDB.replicate(remotedb, localdb);\nconsole.log(\"Database replicated successfully\");"
},
{
"code": null,
"e": 5095,
"s": 4943,
"text": "Save the above code in a file with the name Replication_example2.js. Open the command prompt and execute the JavaScript file using node as shown below."
},
{
"code": null,
"e": 5146,
"s": 5095,
"text": "C:\\PouchDB_Examples >node Replication_example2.js\n"
},
{
"code": null,
"e": 5276,
"s": 5146,
"text": "This makes a copy of the database named remote_database in PouchDB instance and displays a message on the console as shown below."
},
{
"code": null,
"e": 5309,
"s": 5276,
"text": "Database replicated successfully"
},
{
"code": null,
"e": 5415,
"s": 5309,
"text": "You can verify whether the database is replicated in your Pouch instance by executing the following code."
},
{
"code": null,
"e": 5759,
"s": 5415,
"text": "//Requiring the package\nvar PouchDB = require('PouchDB');\n\n//Creating the database object\nvar db = new PouchDB('remote_database');\n\n//Retrieving all the documents in PouchDB\ndb.allDocs({include_docs: true, attachments: true}, function(err, docs) {\n if (err) {\n return console.log(err);\n } else {\n console.log(docs.rows);\n }\n});"
},
{
"code": null,
"e": 5887,
"s": 5759,
"text": "If the database is replicated on executing the above code, you will get the contents of the replicated database as shown below."
},
{
"code": null,
"e": 6754,
"s": 5887,
"text": "[ \n { \n id: '001', \n key: '001', \n value: { rev: '1-23cf3767e32a682c247053b16caecedb' }, \n doc: { \n name: 'Geeta', \n age: 25, \n Designation: 'Programmer', \n _id: '001',\n _rev: '1-23cf3767e32a682c247053b16caecedb' \n } \n }, \n { \n id: '002', \n key: '002', \n value: { rev: '1-d5bcfafbd4d4fae92fd7fc4fdcaa3a79' }, \n doc: { \n name: 'Zara Ali', \n age: 24, \n Designation: 'Manager', \n _id: '002',\n _rev: '1-d5bcfafbd4d4fae92fd7fc4fdcaa3a79' \n } \n }, \n { \n id: '003', \n key: '003', \n value: { rev: '1-c4cce025dbd30d21e40882d41842d5a4' }, \n doc: { \n name: 'Mary', \n age: 23, \n Designation: 'Admin', \n _id: '003', \n _rev: '1-c4cce025dbd30d21e40882d41842d5a4' \n } \n } \n]\n"
},
{
"code": null,
"e": 6761,
"s": 6754,
"text": " Print"
},
{
"code": null,
"e": 6772,
"s": 6761,
"text": " Add Notes"
}
]
|
Sort the words in lexicographical order in C# | Firstly, set a string array −
string[] arr = new string[] {
"Indian",
"Moroccon",
"American",
};
To sort the words in lexicographical order −
var sort = from a in arr
orderby a
select a;
Let us see the complete code −
using System;
using System.Linq;
class Program {
static void Main() {
string[] arr = new string[] {
"Indian",
"Moroccon",
"American",
};
var sort = from a in arr
orderby a
select a;
foreach(string res in sort) {
Console.WriteLine(res);
}
}
}
American
Indian
Moroccon | [
{
"code": null,
"e": 1092,
"s": 1062,
"text": "Firstly, set a string array −"
},
{
"code": null,
"e": 1168,
"s": 1092,
"text": "string[] arr = new string[] {\n \"Indian\",\n \"Moroccon\",\n \"American\",\n};"
},
{
"code": null,
"e": 1213,
"s": 1168,
"text": "To sort the words in lexicographical order −"
},
{
"code": null,
"e": 1258,
"s": 1213,
"text": "var sort = from a in arr\norderby a\nselect a;"
},
{
"code": null,
"e": 1269,
"s": 1258,
"text": " Live Demo"
},
{
"code": null,
"e": 1300,
"s": 1269,
"text": "Let us see the complete code −"
},
{
"code": null,
"e": 1628,
"s": 1300,
"text": "using System;\nusing System.Linq;\nclass Program {\n static void Main() {\n\n string[] arr = new string[] {\n \"Indian\",\n \"Moroccon\",\n \"American\",\n };\n var sort = from a in arr\n orderby a\n select a;\n\n foreach(string res in sort) {\n Console.WriteLine(res);\n }\n }\n}"
},
{
"code": null,
"e": 1653,
"s": 1628,
"text": "American\nIndian\nMoroccon"
}
]
|
Efficiently Streaming a Large AWS S3 File via S3 Select | by Idris Rampurawala | Towards Data Science | AWS S3 is an industry-leading object storage service. We tend to store lots of data files on S3 and at times require processing these files. If the size of the file that we are processing is small, we can basically go with the traditional file processing flow, where we fetch the file from S3 and then process it row by row. But the question arises: what if the file is larger, say > 1GB? 😓
Importing (reading) a large file leads to an Out of Memory error. It can also lead to a system crash. Libraries such as Pandas and Dask are very good at processing large files, but again the file has to be present locally, i.e. we would have to import it from S3 to our local machine. But what if we do not want to fetch and store the whole S3 file locally? 🤔
📜 Let’s consider some of the use-cases:
We want to process a large CSV S3 file (~2GB) every day. It must be processed within a certain time frame (e.g. in 4 hours)
We are required to process large S3 files regularly from the FTP server. New files come in certain time intervals and to be processed sequentially i.e. the old file has to be processed before starting to process the newer files.
These are some very good scenarios where local processing may impact the overall flow of the system. Also, if we are running these file processing units in containers, then we have got limited disk space to work with. Hence, a cloud streaming flow is needed (which can also parallelize the processing of multiple chunks of the same file by streaming different chunks of the same file in parallel threads/processes). This is where I came across the AWS S3 Select feature. 😎
📝 This post focuses on the streaming of a large file into smaller manageable chunks (sequentially). This approach can then be used to parallelize the processing by running in concurrent threads/processes. Check my next post on this.
With Amazon S3 Select, you can use simple structured query language (SQL) statements to filter the contents of Amazon S3 objects and retrieve just the subset of data that you need. Using Amazon S3 Select to filter this data, you can reduce the amount of data that Amazon S3 transfers, reducing the cost and latency to retrieve this data.
Amazon S3 Select works on objects stored in CSV, JSON, or Apache Parquet format. It also works with objects that are compressed with GZIP or BZIP2 (for CSV and JSON objects only) and server-side encrypted objects. You can specify the format of the results as either CSV or JSON, and you can determine how the records in the result are delimited.
📝 We will be using Python boto3 to accomplish our end goal.
To work with S3 Select, boto3 provides select_object_content() function to query S3. You pass SQL expressions to Amazon S3 in the request. Amazon S3 Select supports a subset of SQL. Check this link for more information on this.
response = s3_client.select_object_content(
    Bucket=bucket,
    Key=key,
    ExpressionType='SQL',
    Expression='SELECT * FROM S3Object',
    InputSerialization={
        'CSV': {
            'FileHeaderInfo': 'USE',
            'FieldDelimiter': ',',
            'RecordDelimiter': '\n'
        }
    },
    OutputSerialization={
        'JSON': {
            'RecordDelimiter': ','
        }
    }
)
In the above request, InputSerialization determines the S3 file type and its related properties, while OutputSerialization determines the format of the response we get out of select_object_content().
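To see why the code later in this post can parse the concatenated response with ast.literal_eval, consider this small illustration (the payload string here is a made-up example, not actual S3 output): with JSON output and ',' as the record delimiter, the joined payload is a comma-separated run of dicts ending in a comma, which happens to be a valid Python tuple literal.
import ast

# Made-up concatenated payload, shaped like OutputSerialization JSON with ',' delimiter
payload = '{"id": "1", "name": "a"},{"id": "2", "name": "b"},'

# The trailing comma makes this string a valid Python tuple literal of dicts
rows = ast.literal_eval(payload)
print(rows)  # ({'id': '1', 'name': 'a'}, {'id': '2', 'name': 'b'})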
Now that we have some idea of how S3 Select works, let's try to accomplish our use-case of streaming chunks (subsets) of a large file, much like how a paginated API works. 😋
S3 Select supports the ScanRange parameter, which helps us stream a subset of an object by specifying a range of bytes to query. We issue a series of requests with non-overlapping scan ranges. Scan ranges don't need to be aligned with record boundaries: a record that starts within the specified scan range but extends beyond it will be processed in full by the query. In other words, each row is either fetched whole or skipped (to be fetched in another scan range); we never receive a partial row.
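To make the range arithmetic concrete, here is a tiny dry run of the chunking loop used later, with made-up sizes (a 1200-byte file and a 500-byte chunk):
# Dry run of the scan-range computation with hypothetical numbers
file_size, chunk = 1200, 500
start, end = 0, min(chunk, file_size)
while start < file_size:
    print({'Start': start, 'End': end})  # non-overlapping ranges covering the whole file
    start = end
    end = end + min(chunk, file_size - end)
# Prints: {'Start': 0, 'End': 500}, {'Start': 500, 'End': 1000}, {'Start': 1000, 'End': 1200}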
Let’s try to achieve this in 2 simple steps:
The following code snippet showcases the function that will perform a HEAD request on our S3 file and determines the file size in bytes.
import logging

import boto3
from botocore.exceptions import ClientError
from flask import current_app  # assumed: the original project runs inside a Flask app

logger = logging.getLogger(__name__)

def get_s3_file_size(bucket: str, key: str) -> int:
    """Gets the file size of S3 object by a HEAD request

    Args:
        bucket (str): S3 bucket
        key (str): S3 object path

    Returns:
        int: File size in bytes. Defaults to 0 if any error.
    """
    aws_profile = current_app.config.get('AWS_PROFILE_NAME')
    s3_client = boto3.session.Session(profile_name=aws_profile).client('s3')
    file_size = 0
    try:
        response = s3_client.head_object(Bucket=bucket, Key=key)
        if response:
            file_size = int(response.get('ResponseMetadata').get('HTTPHeaders').get('content-length'))
    except ClientError:
        logger.exception(f'Client error reading S3 file {bucket} : {key}')
    return file_size
Now, the logic is to keep yielding chunks of the S3 file's byte stream until we reach the file size. Rest assured, these continuous scan ranges won't result in overlapping rows in the response 😉 (check the output image / GitHub repo). Simple enough, eh? 😝
import ast
import boto3
from botocore.exceptions import ClientError

def stream_s3_file(bucket: str, key: str, file_size: int, chunk_bytes=5000) -> tuple[dict]:
    """Streams a S3 file via a generator.

    Args:
        bucket (str): S3 bucket
        key (str): S3 object path
        chunk_bytes (int): Chunk size in bytes. Defaults to 5000

    Returns:
        tuple[dict]: Returns a tuple of dictionary containing rows of file content
    """
    aws_profile = current_app.config.get('AWS_PROFILE_NAME')
    s3_client = boto3.session.Session(profile_name=aws_profile).client('s3')
    expression = 'SELECT * FROM S3Object'
    start_range = 0
    end_range = min(chunk_bytes, file_size)
    while start_range < file_size:
        response = s3_client.select_object_content(
            Bucket=bucket,
            Key=key,
            ExpressionType='SQL',
            Expression=expression,
            InputSerialization={
                'CSV': {
                    'FileHeaderInfo': 'USE',
                    'FieldDelimiter': ',',
                    'RecordDelimiter': '\n'
                }
            },
            OutputSerialization={
                'JSON': {
                    'RecordDelimiter': ','
                }
            },
            ScanRange={
                'Start': start_range,
                'End': end_range
            },
        )
        """
        select_object_content() response is an event stream that can be looped
        to concatenate the overall result set. Hence, we are joining the results
        of the stream in a string before converting it to a tuple of dict
        """
        result_stream = []
        for event in response['Payload']:
            if records := event.get('Records'):
                result_stream.append(records['Payload'].decode('utf-8'))
        yield ast.literal_eval(''.join(result_stream))
        start_range = end_range
        end_range = end_range + min(chunk_bytes, file_size - end_range)

def s3_file_processing():
    bucket = '<s3-bucket>'
    key = '<s3-key>'
    file_size = get_s3_file_size(bucket=bucket, key=key)
    logger.debug(f'Initiating streaming file of {file_size} bytes')
    chunk_size = 524288  # 512KB or 0.5MB
    for file_chunk in stream_s3_file(bucket=bucket, key=key,
                                     file_size=file_size, chunk_bytes=chunk_size):
        logger.info(f'\n{30 * "*"} New chunk {30 * "*"}')
        id_set = set()
        for row in file_chunk:
            # perform any other processing here
            id_set.add(int(row.get('id')))
        logger.info(f'{min(id_set)} --> {max(id_set)}')
Congratulations! 👏 We have successfully managed to solve one of the key challenges of processing a large S3 file without crashing our system. 🤘
📌 You can check out my GitHub repository for a complete working example of this approach.
🔖 The next step to achieve more concurrency is to process the file in parallel. Check out a sequel of this post here.
Reduced IO — thus better performance
Reduced costs due to smaller data transfer fees
Multiple chunks can be run in parallel to expedite the file processing using ScanRange in multiple threads/processes
The maximum length of a record in the input or result is 1 MB
Amazon S3 Select can only emit nested data using the JSON output format
S3 select returns a stream of encoded bytes, so we have to loop over the returned stream and decode the output records['Payload'].decode('utf-8')
Only works on objects stored in CSV, JSON, or Apache Parquet format. For more flexibility/features, you can go for AWS Athena
My GitHub repository demonstrating the above approach
AWS S3 Select boto3 reference
AWS S3 Select the user guide
AWS S3 Select Example
Sequel to this post showcasing parallel file processing
Originally published at https://dev.to on April 6, 2021. | [
{
"code": null,
"e": 572,
"s": 171,
"text": "AWS S3 is an industry-leading object storage service. We tend to store lots of data files on S3 and at times require processing these files. If the size of the file that we are processing is small, we can basically go with traditional file processing flow, wherein we fetch the file from S3 and then process it row by row level. But the question arises, what if the file is size is more viz. > 1GB? 😓"
},
{
"code": null,
"e": 938,
"s": 572,
"text": "Importing (reading) a large file leads Out of Memory error. It can also lead to a system crash event. There are libraries viz. Pandas, Dask, etc. are very good at processing large files but again the file is to be present locally i.e. we will have to import it from S3 to our local machine. But what if we do not want to fetch and store the whole S3 file locally? 🤔"
},
{
"code": null,
"e": 978,
"s": 938,
"text": "📜 Let’s consider some of the use-cases:"
},
{
"code": null,
"e": 1102,
"s": 978,
"text": "We want to process a large CSV S3 file (~2GB) every day. It must be processed within a certain time frame (e.g. in 4 hours)"
},
{
"code": null,
"e": 1331,
"s": 1102,
"text": "We are required to process large S3 files regularly from the FTP server. New files come in certain time intervals and to be processed sequentially i.e. the old file has to be processed before starting to process the newer files."
},
{
"code": null,
"e": 1804,
"s": 1331,
"text": "These are some very good scenarios where local processing may impact the overall flow of the system. Also, if we are running these file processing units in containers, then we have got limited disk space to work with. Hence, a cloud streaming flow is needed (which can also parallelize the processing of multiple chunks of the same file by streaming different chunks of the same file in parallel threads/processes). This is where I came across the AWS S3 Select feature. 😎"
},
{
"code": null,
"e": 2037,
"s": 1804,
"text": "📝 This post focuses on the streaming of a large file into smaller manageable chunks (sequentially). This approach can then be used to parallelize the processing by running in concurrent threads/processes. Check my next post on this."
},
{
"code": null,
"e": 2375,
"s": 2037,
"text": "With Amazon S3 Select, you can use simple structured query language (SQL) statements to filter the contents of Amazon S3 objects and retrieve just the subset of data that you need. Using Amazon S3 Select to filter this data, you can reduce the amount of data that Amazon S3 transfers, reducing the cost and latency to retrieve this data."
},
{
"code": null,
"e": 2721,
"s": 2375,
"text": "Amazon S3 Select works on objects stored in CSV, JSON, or Apache Parquet format. It also works with objects that are compressed with GZIP or BZIP2 (for CSV and JSON objects only) and server-side encrypted objects. You can specify the format of the results as either CSV or JSON, and you can determine how the records in the result are delimited."
},
{
"code": null,
"e": 2781,
"s": 2721,
"text": "📝 We will be using Python boto3 to accomplish our end goal."
},
{
"code": null,
"e": 3009,
"s": 2781,
"text": "To work with S3 Select, boto3 provides select_object_content() function to query S3. You pass SQL expressions to Amazon S3 in the request. Amazon S3 Select supports a subset of SQL. Check this link for more information on this."
},
{
"code": null,
"e": 3399,
"s": 3009,
"text": "response = s3_client.select_object_content( Bucket=bucket, Key=key, ExpressionType='SQL', Expression='SELECT * FROM S3Object', InputSerialization={ 'CSV': { 'FileHeaderInfo': 'USE', 'FieldDelimiter': ',', 'RecordDelimiter': '\\n' } }, OutputSerialization={ 'JSON': { 'RecordDelimiter': ',' } })"
},
{
"code": null,
"e": 3587,
"s": 3399,
"text": "In above request, InputSerialization determines the S3 file type and related properties, while OutputSerialization determines the response that we get out of this select_object_content()."
},
{
"code": null,
"e": 3764,
"s": 3587,
"text": "Now, as we have got some idea about how the S3 Select works, let's try to accomplish our use-case of streaming chunks (subset) of a large file like how a paginated API works. 😋"
},
{
"code": null,
"e": 4364,
"s": 3764,
"text": "S3 Select supports ScanRange parameter which helps us to stream a subset of an object by specifying a range of bytes to query. S3 Select requests for a series of non-overlapping scan ranges. Scan ranges don't need to be aligned with record boundaries. A record that starts within the scan range specified but extends beyond the scan range will be processed by the query. It means that the row would be fetched within the scan range and it might extend to fetch the whole row. It doesn't fetch a subset of a row, either the whole row is fetched or it is skipped (to be fetched in another scan range)."
},
{
"code": null,
"e": 4409,
"s": 4364,
"text": "Let’s try to achieve this in 2 simple steps:"
},
{
"code": null,
"e": 4546,
"s": 4409,
"text": "The following code snippet showcases the function that will perform a HEAD request on our S3 file and determines the file size in bytes."
},
{
"code": null,
"e": 5270,
"s": 4546,
"text": "def get_s3_file_size(bucket: str, key: str) -> int: \"\"\"Gets the file size of S3 object by a HEAD request Args: bucket (str): S3 bucket key (str): S3 object path Returns: int: File size in bytes. Defaults to 0 if any error. \"\"\" aws_profile = current_app.config.get('AWS_PROFILE_NAME') s3_client = boto3.session.Session(profile_name=aws_profile).client('s3') file_size = 0 try: response = s3_client.head_object(Bucket=bucket, Key=key) if response: file_size = int(response.get('ResponseMetadata').get('HTTPHeaders').get('content-length')) except ClientError: logger.exception(f'Client error reading S3 file {bucket} : {key}') return file_size"
},
{
"code": null,
"e": 5529,
"s": 5270,
"text": "Now, the logic is to yield the chunks of byte stream of the S3 file until we reach the file size. Rest assured, this continuous scan range won’t result in the over-lapping of rows in the response 😉 (check the output image / GitHub repo). Simple enough, eh? 😝"
},
{
"code": null,
"e": 8081,
"s": 5529,
"text": "import astimport boto3from botocore.exceptions import ClientErrordef stream_s3_file(bucket: str, key: str, file_size: int, chunk_bytes=5000) -> tuple[dict]: \"\"\"Streams a S3 file via a generator. Args: bucket (str): S3 bucket key (str): S3 object path chunk_bytes (int): Chunk size in bytes. Defaults to 5000 Returns: tuple[dict]: Returns a tuple of dictionary containing rows of file content \"\"\" aws_profile = current_app.config.get('AWS_PROFILE_NAME') s3_client = boto3.session.Session(profile_name=aws_profile).client('s3') expression = 'SELECT * FROM S3Object' start_range = 0 end_range = min(chunk_bytes, file_size) while start_range < file_size: response = s3_client.select_object_content( Bucket=bucket, Key=key, ExpressionType='SQL', Expression=expression, InputSerialization={ 'CSV': { 'FileHeaderInfo': 'USE', 'FieldDelimiter': ',', 'RecordDelimiter': '\\n' } }, OutputSerialization={ 'JSON': { 'RecordDelimiter': ',' } }, ScanRange={ 'Start': start_range, 'End': end_range }, ) \"\"\" select_object_content() response is an event stream that can be looped to concatenate the overall result set Hence, we are joining the results of the stream in a string before converting it to a tuple of dict \"\"\" result_stream = [] for event in response['Payload']: if records := event.get('Records'): result_stream.append(records['Payload'].decode('utf-8')) yield ast.literal_eval(''.join(result_stream)) start_range = end_range end_range = end_range + min(chunk_bytes, file_size - end_range)def s3_file_processing(): bucket = '<s3-bucket>' key = '<s3-key>' file_size = get_s3_file_size(bucket=bucket, key=key) logger.debug(f'Initiating streaming file of {file_size} bytes') chunk_size = 524288 # 512KB or 0.5MB for file_chunk in stream_s3_file(bucket=bucket, key=key, file_size=file_size, chunk_bytes=chunk_size): logger.info(f'\\n{30 * \"*\"} New chunk {30 * \"*\"}') id_set = set() for row in file_chunk: # perform any other processing here id_set.add(int(row.get('id'))) logger.info(f'{min(id_set)} --> {max(id_set)}')"
},
{
"code": null,
"e": 8225,
"s": 8081,
"text": "Congratulations! 👏 We have successfully managed to solve one of the key challenges of processing a large S3 file without crashing our system. 🤘"
},
{
"code": null,
"e": 8315,
"s": 8225,
"text": "📌 You can check out my GitHub repository for a complete working example of this approach."
},
{
"code": null,
"e": 8433,
"s": 8315,
"text": "🔖 The next step to achieve more concurrency is to process the file in parallel. Check out a sequel of this post here."
},
{
"code": null,
"e": 8470,
"s": 8433,
"text": "Reduced IO — thus better performance"
},
{
"code": null,
"e": 8518,
"s": 8470,
"text": "Reduced costs due to smaller data transfer fees"
},
{
"code": null,
"e": 8635,
"s": 8518,
"text": "Multiple chunks can be run in parallel to expedite the file processing using ScanRange in multiple threads/processes"
},
{
"code": null,
"e": 8697,
"s": 8635,
"text": "The maximum length of a record in the input or result is 1 MB"
},
{
"code": null,
"e": 8769,
"s": 8697,
"text": "Amazon S3 Select can only emit nested data using the JSON output format"
},
{
"code": null,
"e": 8915,
"s": 8769,
"text": "S3 select returns a stream of encoded bytes, so we have to loop over the returned stream and decode the output records['Payload'].decode('utf-8')"
},
{
"code": null,
"e": 9041,
"s": 8915,
"text": "Only works on objects stored in CSV, JSON, or Apache Parquet format. For more flexibility/features, you can go for AWS Athena"
},
{
"code": null,
"e": 9095,
"s": 9041,
"text": "My GitHub repository demonstrating the above approach"
},
{
"code": null,
"e": 9125,
"s": 9095,
"text": "AWS S3 Select boto3 reference"
},
{
"code": null,
"e": 9154,
"s": 9125,
"text": "AWS S3 Select the user guide"
},
{
"code": null,
"e": 9176,
"s": 9154,
"text": "AWS S3 Select Example"
},
{
"code": null,
"e": 9232,
"s": 9176,
"text": "Sequel to this post showcasing parallel file processing"
}
]
|
Yet Another Twitter Sentiment Analysis Part 1 — tackling class imbalance | by Ricky Kim | Towards Data Science | I finished an 11-part blog series on Twitter sentiment analysis not long ago. Why would I want to do sentiment analysis again? I wanted to extend it further and run sentiment analysis on real retrieved tweets. There are also other limits to my previous sentiment analysis project:
The project stopped at the final trained model and lacks application of the model to retrieved tweets
The model was trained on only positive and negative class, so it lacks the ability to predict a neutral class
Regarding the neutral class, it might be possible to set threshold values for the negative, neutral, and positive classes and map the final output probability to one of the three classes, but I wanted to train a model on training data that actually has three sentiment classes: negative, neutral, positive.
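For illustration, mapping a binary model's positive-class probability to three labels with thresholds could look like the sketch below (the threshold values are arbitrary assumptions, which is part of why I prefer training on three classes directly):
def map_sentiment(prob_positive, neg_thresh=0.4, pos_thresh=0.6):
    # Map the probability of the positive class to one of three labels
    if prob_positive < neg_thresh:
        return 'negative'
    elif prob_positive > pos_thresh:
        return 'positive'
    return 'neutral'

print(map_sentiment(0.15), map_sentiment(0.5), map_sentiment(0.9))
# negative neutral positive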
Since I already wrote quite a lengthy series on NLP, sentiment analysis, if a concept was already covered in my previous posts, I won’t go into the detailed explanation. And also the main data visualisation will be with retrieved tweets, and I won’t go through extensive data visualisation with the data I use for training and testing a model.
*In addition to short code blocks I will attach, you can find the link for the whole Jupyter Notebook at the end of this post.
In order to train my sentiment classifier, I need a dataset which meets the conditions below.
Preferably tweets text data with annotated sentiment label
with 3 sentiment classes: negative, neutral, positive
big enough to train a model
While googling to find a good data source, I learned about a renowned NLP competition called SemEval. "SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics."
You might have already heard of this if you’re interested in NLP. Highly-skilled teams from all around the world compete on a couple of tasks such as “semantic textual similarity”, “multilingual semantic word similarity”, etc. One of the competition tasks is the Twitter sentiment analysis. It also has a couple of subtasks, but what I would want to focus on is “Subtask A. : Message Polarity Classification: Given a message, classify whether the message is of positive, negative, or neutral sentiment”.
Luckily the dataset they provide for the competition is available to download. The training data consists of SemEval’s previous training and test data. What’s even better is they provide test data, and all the teams who participated in the competition are scored with the same test data. This means I can compare my model performance with 2017 participants in SemEval.
I first downloaded full training data for SemEval 2017 Task 4.
There are 11 txt files in total, spanning from SemEval 2013 to SemEval 2016. While trying to read the files into a Pandas dataframe, I found that two files cannot be properly loaded as tsv files. It seems like some entries are not properly tab-separated, so they end up as a chunk of 10 or more tweets stuck together. I could have tried retrieving them with the tweet IDs provided, but I decided to ignore these two files for now and build a training set from the remaining 9 txt files.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

plt.style.use('fivethirtyeight')
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
Once I import basic dependencies, I’ll read the data to a Pandas dataframe.
import glob

path = 'Subtask_A/'
all_files = glob.glob(path + "/twitter*.txt")
frame = pd.DataFrame()
list_ = []
for file_ in all_files:
    df = pd.read_csv(file_, index_col=None, sep='\t', header=None,
                     names=['id', 'sentiment', 'text', 'to_delete'])
    list_.append(df.iloc[:, :-1])
df = pd.concat(list_)
df = df.drop_duplicates()
df = df.reset_index(drop=True)
df.tail()
The dataset looks fairly simple with individual tweet ID, sentiment label, and tweet text.
df.info()
There are 41,705 tweets in total. As another sanity check, let's take a look at how many words there are in each tweet.
df['token_length'] = [len(x.split(" ")) for x in df.text]
max(df.token_length)
df.loc[df.token_length.idxmax(),'text']
OK, the token length looks fine, and the tweet for maximum token length seems like a properly parsed tweet. Let’s take a look at the class distribution of the data.
df.sentiment.value_counts()
The data is not well balanced: the negative class has the least number of entries with 6,485, and the neutral class has the most with 19,466 entries. I want to rebalance the data so that I have a balanced dataset at least for training. I will deal with this after I define the cleaning function.
The data cleaning process is similar to my previous project, but this time I added a long list of contractions to expand most contracted forms to their original form, such as "don't" to "do not". And this time, instead of Regex, I used Spacy to parse the documents, and filtered numbers, URL, punctuation, etc. Below are the steps I took to clean the tweets (a sketch of the resulting cleaning function follows the list).
Decoding: unicode_escape for extra “\” before unicode character, then unidecode
Apostrophe handled: there are two characters people use for contraction. “’”(apostrophe) and “‘“(single quote). If these two symbols are both used for contraction, it will be difficult to detect and properly map the right expanded form. So any “’”(apostrophe) is changed to “‘“(single quote)
Contraction check: check if there’s any contracted form, and replace it with its original form
Parsing: done with Spacy
Filtering punctuation, white space, numbers, URL using Spacy methods while keeping the text content of hashtag intact
Removed @mention
Lemmatize: lemmatized each token using Spacy method ‘.lemma_’. Pronouns are kept as they are since Spacy lemmatizer transforms every pronoun to “-PRON-”
Special character removal
Single syllable token removal
Spell correction: it is a simple spell correction dealing with repeated characters such as “sooooo goooood”. If the same character is repeated more than two times, it shortens the repetition to two. For example “sooooo goooood” will be transformed as “soo good”. This is not a perfect solution since even after correction, in case of “soo”, it is not a correct spelling. But at least it will help to reduce feature space by making “sooo”, “soooo”, “sooooo” to the same word “soo”
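The cleaning function itself was embedded as a gist in the original post; below is a minimal sketch of what a spacy_cleaner implementing the steps above could look like (the abbreviated contraction map and the exact regexes are my assumptions, the real version uses a much longer contraction list, and the small English model is assumed to be installed):
import re
import spacy
from unidecode import unidecode

nlp = spacy.load('en_core_web_sm')

# Abbreviated, hypothetical contraction map; the original uses a much longer list
contractions = {"don't": "do not", "can't": "cannot", "won't": "will not"}

def spacy_cleaner(text):
    # Decoding: transliterate unicode characters to their closest ASCII form
    text = unidecode(text)
    # Apostrophe handling: normalise typographic apostrophes before expansion
    text = text.replace("\u2019", "'")
    # Contraction check: replace contracted forms with their original forms
    for contracted, expanded in contractions.items():
        text = re.sub(contracted, expanded, text, flags=re.IGNORECASE)
    tokens = []
    for token in nlp(text):
        # Filter punctuation, white space, numbers and URLs via Spacy attributes
        if token.is_punct or token.is_space or token.like_num or token.like_url:
            continue
        # Remove @mentions while keeping hashtag text intact
        if token.text.startswith('@'):
            continue
        # Lemmatize; keep pronouns as they are instead of Spacy's '-PRON-'
        lemma = token.text if token.lemma_ == '-PRON-' else token.lemma_
        # Special character removal (this also drops the '#' of hashtags)
        lemma = re.sub(r'[^a-zA-Z]', '', lemma)
        # Spell correction: shorten runs of 3+ repeated characters ('sooooo' -> 'soo')
        lemma = re.sub(r'(.)\1{2,}', r'\1\1', lemma)
        # Single-character tokens are dropped (a rough stand-in for single syllables)
        if len(lemma) < 2:
            continue
        tokens.append(lemma.lower())
    return ' '.join(tokens)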
OK now let’s see how this custom cleaner works with tweets.
pd.set_option('display.max_colwidth', -1)
df.text[:10]
[spacy_cleaner(t) for t in df.text[:10]]
It looks like it’s doing what I intended it to do. I’ll clean the “text” column and create a new column called “clean_text”.
df['clean_text'] = [spacy_cleaner(t) for t in df.text]
By running the cleaning function I can see it encountered some “invalid escape sequence”. Let’s see what these are.
for i, t in enumerate(df.text):
    if '\m' in t:
        print(i, t)
The tweets that contain '\m' actually contain the emoticon '\m/'. I didn't know about this until I googled it. Apparently '\m/' stands for the horn sign you make with your hand. This hand sign is popular in metal music. Anyway, this is just a warning and it is not an error. Let's see how the cleaner deals with this.
df.text[2064]
spacy_cleaner(df.text[2064])
Again, it seems to be doing what I intended it to do. So far so good.
“The class imbalance problem typically occurs when, in a classification problem, there are many more instances of some classes than others. In such cases, standard classifiers tend to be overwhelmed by the large classes and ignore the small ones.”
As I have already realised, the training data is not perfectly balanced, ‘neutral’ class has 3 times more data than ‘negative’ class, and ‘positive’ class has around 2.4 times more data than ‘negative’ class. I will try fitting a model with three different data; oversampled, downsampled, original, to see how different sampling techniques affect the learning of a classifier.
The simple default classifier I’ll use to compare performances of different datasets will be the logistic regression. From my previous sentiment analysis project, I learned that Tf-Idf with Logistic Regression is a pretty powerful combination. Before I apply any other more complex models such as ANN, CNN, RNN etc, the performances with logistic regression will hopefully give me a good idea of which data sampling methods I should choose. If you want to know more about Tf-Idf, and how it extracts features from text, you can check my old post, “Another Twitter Sentiment Analysis with Python-Part5”.
In terms of validation, I will use K-Fold Cross Validation. In my previous project, I split the data into three; training, validation, test, and all the parameter tuning was done with reserved validation set and finally applied the model to the test set. Considering that I had more than 1 million data for training, this kind of validation set approach was acceptable. But this time, the data I have is much smaller (around 40,000 tweets), and by leaving out validation set from the data we might leave out interesting information about data.
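The cross-validation helper itself was embedded as a gist in the original post; here is a minimal sketch of an lr_cv() along these lines (the tvec and lr objects are assumed to be the TfidfVectorizer and LogisticRegression configured earlier in the notebook, and the per-class per-fold reporting of the real version is abbreviated to fold averages):
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tvec = TfidfVectorizer()  # assumed defaults; the notebook sets max_features etc.
lr = LogisticRegression()

def lr_cv(splits, X, Y, pipeline, average_method):
    # Stratified K-Fold keeps the class proportions of Y in every fold
    kfold = StratifiedKFold(n_splits=splits, shuffle=True, random_state=777)
    accuracy, precision, recall, f1 = [], [], [], []
    for train, test in kfold.split(X, Y):
        # The pipeline (vectorizer, optional sampler, classifier) is fitted
        # on the training split only, so nothing leaks from the test split
        model = pipeline.fit(X[train], Y[train])
        prediction = model.predict(X[test])
        accuracy.append(accuracy_score(Y[test], prediction))
        p, r, f, _ = precision_recall_fscore_support(
            Y[test], prediction, average=average_method)
        precision.append(p)
        recall.append(r)
        f1.append(f)
    print('accuracy: {:.2%}'.format(np.mean(accuracy)))
    print('precision: {:.2%} recall: {:.2%} f1: {:.2%}'.format(
        np.mean(precision), np.mean(recall), np.mean(f1)))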
from sklearn.pipeline import Pipeline

original_pipeline = Pipeline([
    ('vectorizer', tvec),
    ('classifier', lr)
])
lr_cv(5, df.clean_text, df.sentiment, original_pipeline, 'macro')
With data as it is without any resampling, we can see that the precision is higher than the recall. If you want to know more about precision and recall, you can check my old post, “Another Twitter sentiment analysis with Python — Part4”.
If we take a closer look at the result from each fold, we can also see that the recall for the negative class is quite low around 28~30%, while the precisions for the negative class are high as 61~65%. This means the classifier is very picky and does not think many things are negative. All the text it classifies as negative is 61~65% of the time really negative. However, it also misses a lot of actual negative class, because it is so very picky. We have a low recall, but a very high precision. The intuition behind this precision and recall has been taken from a Medium blog post by Andreas Klintberg.
There is a very useful Python package called “imbalanced-learn”, which helps you deal with class imbalance issues, it is compatible with Scikit Learn, and easy to implement.
Within imbalanced-learn, there are different techniques you can use for oversampling. I will use below two.
RandomOverSampler
SMOTE (Synthetic Minority Over-Sampling Technique)
There is one more point to consider if you are cross-validating with oversampled data. Oversampling the minority class can result in overfitting problems if we oversample before cross-validating. Why is that so? Because by oversampling before cross validation split, you are leaking the information of validation data already to your training set. As they say “What has been seen, cannot be unseen.”
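As a self-contained sketch of the ordering issue (with made-up data, not the project's tweets):
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from sklearn.model_selection import StratifiedKFold

X = np.arange(20).reshape(10, 2)               # made-up features
y = np.array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1])   # imbalanced labels

# Leaky order: oversample first, then split. Repeated copies of the two
# minority rows can now land on both sides of a split, inflating the scores.
X_over, y_over = RandomOverSampler(random_state=777).fit_sample(X, y)
leaky_folds = StratifiedKFold(n_splits=2).split(X_over, y_over)

# Safe order: split first, then oversample inside each training fold only.
# This is what an imblearn pipeline passed to lr_cv() does automatically.
for train, test in StratifiedKFold(n_splits=2).split(X, y):
    X_tr, y_tr = RandomOverSampler(random_state=777).fit_sample(X[train], y[train])
    # ...fit on (X_tr, y_tr) and evaluate on the untouched (X[test], y[test])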
If you want more detailed explanation, I recommend this Youtube video “Machine Learning — Over-& Undersampling — Python/ Scikit/ Scikit-Imblearn”
Luckily cross-validation function I defined above as “lr_cv()” will fit the pipeline only with the training set split after cross-validation split, thus it is not leaking any information of validation set to the model.
Random over-sampling is simply a process of repeating some samples of the minority class and balance the number of samples between classes in the dataset.
from imblearn.pipeline import make_pipeline
from imblearn.over_sampling import ADASYN, SMOTE, RandomOverSampler

ROS_pipeline = make_pipeline(tvec, RandomOverSampler(random_state=777), lr)
SMOTE_pipeline = make_pipeline(tvec, SMOTE(random_state=777), lr)
Before we fit each pipeline, let’s see what the RadomOverSampler does. In order to make it easier to see I defined some toy text data below, and the target sentiment value for each text.
sent1 = "I love dogs"
sent2 = "I don't like dogs"
sent3 = "I adore cats"
sent4 = "I hate spiders"
sent5 = "I like dogs"

testing_text = pd.Series([sent1, sent2, sent3, sent4, sent5])
testing_target = pd.Series([1, 0, 1, 0, 1])
My toy data has 5 entries in total, and the target sentiments are three positives and two negatives. In order to be balanced, this toy data needs one more entry of negative class.
One thing to note: the over sampler can't handle raw text data. The text has to be transformed into a feature space for the over sampler to work. I'll first fit TfidfVectorizer, and oversample using the Tf-Idf representation of the texts.
tv = TfidfVectorizer(stop_words=None, max_features=100000)
testing_tfidf = tv.fit_transform(testing_text)

ros = RandomOverSampler(random_state=777)
X_ROS, y_ROS = ros.fit_sample(testing_tfidf, testing_target)

pd.DataFrame(testing_tfidf.todense(), columns=tv.get_feature_names())
pd.DataFrame(X_ROS.todense(), columns=tv.get_feature_names())
By running RandomOverSampler, now we have one more entry at the end. The last entry added by RandomOverSampler is exactly same as the fourth one (index number 3) from the top. RandomOverSampler simply repeats some entries of the minority class to balance the data. If we look at the target sentiments after RandomOverSampler, we can see that it has now a perfect balance between classes by adding on more entry of negative class.
y_ROS
lr_cv(5, df.clean_text, df.sentiment, ROS_pipeline, 'macro')
Compared to the model built with original imbalanced data, now the model behaves in opposite way. The precisions for the negative class are around 47~49%, but the recalls are way higher at 64~67%. Now we have a situation of high recall, low precision. What this means is that the classifier thinks a lot of things are negative. However, it also thinks a lot of non-negative texts are negative. So from our set of data we got a lot of texts classified as negative, many of them were in the set of actual negative, however, a lot of them were also non-negative.
For comparison: without resampling, the weakest number was the negative-class recall at 28~30%; with oversampling, the weakest number is the negative-class precision, which at 47~49% is considerably more robust.
Another way to look at it is the F1 score, which is the harmonic mean of precision and recall. The original imbalanced data had 66.51% accuracy and a 60.01% F1 score. With oversampling, we get a slightly lower accuracy of 65.95%, but a much higher F1 score of 64.18%.
SMOTE is an over-sampling approach in which the minority class is over-sampled by creating “synthetic” examples rather than by over-sampling with replacement.
According to the original research paper “SMOTE: Synthetic Minority Over-sampling Technique” (Chawla et al., 2002), “synthetic samples are generated in the following way: Take the difference between the feature vector (sample) under consideration and its nearest neighbour. Multiply this difference by a random number between 0 and 1, and add it to the feature vector under consideration. This causes the selection of a random point along the line segment between two specific features. This approach effectively forces the decision region of the minority class to become more general.” What this means is that when SMOTE creates a new synthetic data, it will choose one data to copy, and look at its k nearest neighbours. Then, on feature space, it will create random values in feature space that is between the original sample and its neighbours.
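The interpolation step from the paper can be expressed in a few lines (a toy numeric illustration with made-up vectors, not the library's internals):
import numpy as np

np.random.seed(777)

# Two minority-class samples in a made-up 3-dimensional feature space
sample = np.array([0.2, 0.0, 0.5])
neighbour = np.array([0.6, 0.0, 0.1])

# SMOTE-style synthesis: take the difference to the neighbour, scale it by a
# random number between 0 and 1, and add it back to the sample in question
gap = np.random.uniform(0, 1)
synthetic = sample + gap * (neighbour - sample)
# The synthetic point lies on the line segment between the two originals,
# and stays 0 wherever both originals are 0 (here, the middle feature)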
Once you see the example with the toy data, it will become clearer.
smt = SMOTE(random_state=777, k_neighbors=1)
X_SMOTE, y_SMOTE = smt.fit_sample(testing_tfidf, testing_target)
pd.DataFrame(X_SMOTE.todense(), columns=tv.get_feature_names())
The last entry is the data created by SMOTE. To make it easier to see, let’s see only the negative class.
pd.DataFrame(X_SMOTE.todense()[y_SMOTE == 0], columns=tv.get_feature_names())
The top two entries are original data, and the one on the bottom is the synthetic data. You can see it didn't just repeat the original data. Instead, the Tf-Idf values are created by taking random values between the top two original entries. As you can see, if the Tf-Idf values of both original entries are 0, then the synthetic data also has 0 for those features, such as "adore", "cats", "love", because if two values are the same there are no random values between them. I specifically set k_neighbors to 1 for this toy data, since there are only two entries of the negative class: if SMOTE chooses one to copy, then only one other negative entry is left as a neighbour.
Now let’s fit the SMOTE pipeline to see how it affects performance.
lr_cv(5, df.clean_text, df.sentiment, SMOTE_pipeline, 'macro')
SMOTE sampling seems to have a slightly higher accuracy and F1 score compared to random oversampling. With the results so far, it seems like choosing SMOTE oversampling is preferable over original or random oversampling.
How about downsampling? Where oversampling replicates the minority class, downsampling tries to reduce the data of the majority class so that the classes are balanced.
from imblearn.under_sampling import NearMiss, RandomUnderSampler

RUS_pipeline = make_pipeline(tvec, RandomUnderSampler(random_state=777), lr)
NM1_pipeline = make_pipeline(tvec, NearMiss(ratio='not minority', random_state=777, version=1), lr)
NM2_pipeline = make_pipeline(tvec, NearMiss(ratio='not minority', random_state=777, version=2), lr)
NM3_pipeline = make_pipeline(tvec, NearMiss(ratio=nm3_dict, random_state=777, version=3, n_neighbors_ver3=4), lr)
Again, before we run the pipeline, let’s apply this to the toy data to see what it does.
rus = RandomUnderSampler(random_state=777)
X_RUS, y_RUS = rus.fit_sample(testing_tfidf, testing_target)
pd.DataFrame(X_RUS.todense(), columns=tv.get_feature_names())
pd.DataFrame(testing_tfidf.todense(), columns=tv.get_feature_names())
Compared with the original imbalanced data, we can see that downsampled data has one less entry, which is the last entry of the original data belonging to the positive class. RandomUnderSampler reduces the majority class by randomly removing data from the majority class.
lr_cv(5, df.clean_text, df.sentiment, RUS_pipeline, 'macro')
Now the accuracy and the F1 score have dropped significantly. But the characteristic of low precision and high recall is the same as with the oversampled data; only the overall performance dropped.
According to the documentation of “imbalanced-learn”, “NearMiss adds some heuristic rules to select samples. NearMiss implements 3 different types of heuristic which can be selected with the parameter version. NearMiss heuristic rules are based on nearest neighbors algorithm.”
There is also a good paper on resampling techniques. “Survey of resampling techniques for improving classification performance in unbalanced datasets” (Ajinkya More, 2016)
I borrowed the explanation of three different versions of NearMiss from More’s paper.
In NearMiss-1, those points from the majority class are retained whose mean distance to the k nearest points in the minority class is lowest. In other words, it keeps the points of the majority class that are most similar to the minority class.
nm = NearMiss(ratio='not minority', random_state=777, version=1, n_neighbors=1)
X_nm, y_nm = nm.fit_sample(testing_tfidf, testing_target)
pd.DataFrame(X_nm.todense(), columns=tv.get_feature_names())
pd.DataFrame(testing_tfidf.todense(), columns=tv.get_feature_names())
We can see that NearMiss-1 has eliminated the entry for the text “I adore cats”, which makes sense because both words “adore” and “cats” are only appeared in this entry, so makes it the most different from minority class in terms of Tf-Idf representation in feature space.
lr_cv(5, df.clean_text, df.sentiment, NM1_pipeline, 'macro')
It seems like both the accuracy and F1 score got worse than with random undersampling.
In contrast to NearMiss-1, NearMiss-2 keeps those points from the majority class whose mean distance to the k farthest points in minority class is lowest. In other words, it will keep the points of majority class that’s most different to the minority class.
nm = NearMiss(ratio='not minority', random_state=777, version=2, n_neighbors=1)
X_nm, y_nm = nm.fit_sample(testing_tfidf, testing_target)
pd.DataFrame(X_nm.todense(), columns=tv.get_feature_names())
pd.DataFrame(testing_tfidf.todense(), columns=tv.get_feature_names())
Now we can see that NearMiss-2 has eliminated the entry for the text “I like dogs”, which again makes sense because we also have a negative entry “I don’t like dogs”. The two entries are in different classes but share the same two tokens, “like” and “dogs”.
lr_cv(5, df.clean_text, df.sentiment, NM2_pipeline, 'macro')
Both accuracy and F1 score got even lower compared to NearMiss-1. We can also see that all the metrics fluctuate quite a lot from fold to fold.
The final NearMiss variant, NearMiss-3 selects k nearest neighbours in majority class for every point in the minority class. In this case, the undersampling ratio is directly controlled by k. For example, if we set k to be 4, then NearMiss-3 will choose 4 nearest neighbours of every minority class entry.
Then we’ll end up with either more or fewer samples of the majority class than the minority class, depending on the number of neighbours we set. For example, with my dataset, if I run NearMiss-3 with the default n_neighbors_ver3 of 3, it will complain, and the number of the neutral class (the majority class in my dataset) will be smaller than the negative class (the minority class in my dataset). So I explicitly set n_neighbors_ver3 to 4, so that I’ll have at least as much majority class data as minority class data.
One thing I’m not completely sure about is what kind of filtering it applies when the data selected with the n_neighbors_ver3 parameter amounts to more than the minority class. As you will see below, after applying NearMiss-3 the dataset is perfectly balanced. However, if the algorithm simply chose the nearest neighbours according to the n_neighbors_ver3 parameter, I doubt it would end up with the exact same number of entries for each class.
lr_cv(5, df.clean_text, df.sentiment, NM3_pipeline, 'macro')
NearMiss-3 produced the most robust result within the NearMiss family, though slightly lower than random undersampling.
from collections import Counter

nm3 = NearMiss(ratio='not minority', random_state=777, version=3, n_neighbors_ver3=4)
tvec = TfidfVectorizer(stop_words=None, max_features=100000, ngram_range=(1, 3))
df_tfidf = tvec.fit_transform(df.clean_text)
X_res, y_res = nm3.fit_sample(df_tfidf, df.sentiment)
print('Distribution before NearMiss-3: {}'.format(Counter(df.sentiment)))
print('Distribution after NearMiss-3: {}'.format(Counter(y_res)))
5-fold cross validation result (classifier used for validation: logistic regression with default setting)
Based on the above result, the sampling technique I’ll be using for the next post will be SMOTE. In the next post, I will try different classifiers with SMOTE oversampled data.
Thank you for reading and you can find the Jupyter Notebook from the below link: | [
{
"code": null,
"e": 456,
"s": 172,
"text": "I finished an 11-part series blog posts on Twitter sentiment analysis not long ago. Why do I want to do the sentiment analysis again? I wanted to extend further and run sentiment analysis on real retrieved tweets. And there are other limits to my previous sentiment analysis project."
},
{
"code": null,
"e": 667,
"s": 456,
"text": "The project stopped at the final trained model and lacks application of the model to retrieved tweetsThe model was trained on only positive and negative class, so it lacks the ability to predict a neutral class"
},
{
"code": null,
"e": 769,
"s": 667,
"text": "The project stopped at the final trained model and lacks application of the model to retrieved tweets"
},
{
"code": null,
"e": 879,
"s": 769,
"text": "The model was trained on only positive and negative class, so it lacks the ability to predict a neutral class"
},
{
"code": null,
"e": 1171,
"s": 879,
"text": "Regarding neutral class, it might be possible to set a threshold value for negative, neutral, positive class, and map the final output probability value to one of three classes, but I wanted to train a model with training data, which has three sentiment classes: negative, neutral, positive."
},
{
"code": null,
"e": 1515,
"s": 1171,
"text": "Since I already wrote quite a lengthy series on NLP, sentiment analysis, if a concept was already covered in my previous posts, I won’t go into the detailed explanation. And also the main data visualisation will be with retrieved tweets, and I won’t go through extensive data visualisation with the data I use for training and testing a model."
},
{
"code": null,
"e": 1642,
"s": 1515,
"text": "*In addition to short code blocks I will attach, you can find the link for the whole Jupyter Notebook at the end of this post."
},
{
"code": null,
"e": 1732,
"s": 1642,
"text": "In order to train my sentiment classifier, I need a dataset which meets conditions below."
},
{
"code": null,
"e": 1791,
"s": 1732,
"text": "Preferably tweets text data with annotated sentiment label"
},
{
"code": null,
"e": 1845,
"s": 1791,
"text": "with 3 sentiment classes: negative, neutral, positive"
},
{
"code": null,
"e": 1873,
"s": 1845,
"text": "big enough to train a model"
},
{
"code": null,
"e": 2217,
"s": 1873,
"text": "While googling to find a good data source, I learned about renowned NLP competition called SemEval. “SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics.”"
},
{
"code": null,
"e": 2721,
"s": 2217,
"text": "You might have already heard of this if you’re interested in NLP. Highly-skilled teams from all around the world compete on a couple of tasks such as “semantic textual similarity”, “multilingual semantic word similarity”, etc. One of the competition tasks is the Twitter sentiment analysis. It also has a couple of subtasks, but what I would want to focus on is “Subtask A. : Message Polarity Classification: Given a message, classify whether the message is of positive, negative, or neutral sentiment”."
},
{
"code": null,
"e": 3090,
"s": 2721,
"text": "Luckily the dataset they provide for the competition is available to download. The training data consists of SemEval’s previous training and test data. What’s even better is they provide test data, and all the teams who participated in the competition are scored with the same test data. This means I can compare my model performance with 2017 participants in SemEval."
},
{
"code": null,
"e": 3153,
"s": 3090,
"text": "I first downloaded full training data for SemEval 2017 Task 4."
},
{
"code": null,
"e": 3622,
"s": 3153,
"text": "There are 11 txt files in total, spanning from SemEval 2013 to SemEval 2016. While trying to read the files into a Pandas dataframe, I found two files cannot be properly loaded as tsv file. It seems like there are some entries not properly tab-separated, so end up as a chunk of 10 or more tweets stuck together. I could have tried retrieving them with tweet ID provided, but I decided to first ignore these two files, and make up a training set with only 9 txt files."
},
{
"code": null,
"e": 3789,
"s": 3622,
"text": "import pandas as pd import numpy as npimport matplotlib.pyplot as pltplt.style.use('fivethirtyeight')%matplotlib inline%config InlineBackend.figure_format = 'retina'"
},
{
"code": null,
"e": 3865,
"s": 3789,
"text": "Once I import basic dependencies, I’ll read the data to a Pandas dataframe."
},
{
"code": null,
"e": 4222,
"s": 3865,
"text": "import globpath ='Subtask_A/'all_files = glob.glob(path + \"/twitter*.txt\")frame = pd.DataFrame()list_ = []for file_ in all_files: df = pd.read_csv(file_,index_col=None, sep='\\t', header=None, names=['id','sentiment','text','to_delete']) list_.append(df.iloc[:,:-1])df = pd.concat(list_)df = df.drop_duplicates()df = df.reset_index(drop=True)df.tail()"
},
{
"code": null,
"e": 4313,
"s": 4222,
"text": "The dataset looks fairly simple with individual tweet ID, sentiment label, and tweet text."
},
{
"code": null,
"e": 4323,
"s": 4313,
"text": "df.info()"
},
{
"code": null,
"e": 4440,
"s": 4323,
"text": "There are total 41,705 tweets. As another sanity check, let’s take a look at how many words are there in each tweet."
},
{
"code": null,
"e": 4518,
"s": 4440,
"text": "df['token_length'] = [len(x.split(\" \")) for x in df.text]max(df.token_length)"
},
{
"code": null,
"e": 4558,
"s": 4518,
"text": "df.loc[df.token_length.idxmax(),'text']"
},
{
"code": null,
"e": 4723,
"s": 4558,
"text": "OK, the token length looks fine, and the tweet for maximum token length seems like a properly parsed tweet. Let’s take a look at the class distribution of the data."
},
{
"code": null,
"e": 4751,
"s": 4723,
"text": "df.sentiment.value_counts()"
},
{
"code": null,
"e": 5062,
"s": 4751,
"text": "The data is not well balanced, and negative class has the least number of data entries with 6,485, and the neutral class has the most data with 19,466 entries. I want to rebalance the data so that I will have a balanced dataset at least for training. I will deal with this after I define the cleaning function."
},
{
"code": null,
"e": 5419,
"s": 5062,
"text": "Data cleaning process is similar to my previous project, but this time I added a long list of contraction to expand most of the contracted form to its original form such as “don’t” to “do not”. And this time, instead of Regex, I used Spacy to parse the documents, and filtered numbers, URL, punctuation, etc. Below are the steps I took to clean the tweets."
},
{
"code": null,
"e": 6726,
"s": 5419,
"text": "Decoding: unicode_escape for extra “\\” before unicode character, then unidecodeApostrophe handled: there are two characters people use for contraction. “’”(apostrophe) and “‘“(single quote). If these two symbols are both used for contraction, it will be difficult to detect and properly map the right expanded form. So any “’”(apostrophe) is changed to “‘“(single quote)Contraction check: check if there’s any contracted form, and replace it with its original formParsing: done with SpacyFiltering punctuation, white space, numbers, URL using Spacy methods while keeping the text content of hashtag intactRemoved @mentionLemmatize: lemmatized each token using Spacy method ‘.lemma_’. Pronouns are kept as they are since Spacy lemmatizer transforms every pronoun to “-PRON-”Special character removalSingle syllable token removalSpell correction: it is a simple spell correction dealing with repeated characters such as “sooooo goooood”. If the same character is repeated more than two times, it shortens the repetition to two. For example “sooooo goooood” will be transformed as “soo good”. This is not a perfect solution since even after correction, in case of “soo”, it is not a correct spelling. But at least it will help to reduce feature space by making “sooo”, “soooo”, “sooooo” to the same word “soo”"
},
{
"code": null,
"e": 6806,
"s": 6726,
"text": "Decoding: unicode_escape for extra “\\” before unicode character, then unidecode"
},
{
"code": null,
"e": 7098,
"s": 6806,
"text": "Apostrophe handled: there are two characters people use for contraction. “’”(apostrophe) and “‘“(single quote). If these two symbols are both used for contraction, it will be difficult to detect and properly map the right expanded form. So any “’”(apostrophe) is changed to “‘“(single quote)"
},
{
"code": null,
"e": 7193,
"s": 7098,
"text": "Contraction check: check if there’s any contracted form, and replace it with its original form"
},
{
"code": null,
"e": 7218,
"s": 7193,
"text": "Parsing: done with Spacy"
},
{
"code": null,
"e": 7336,
"s": 7218,
"text": "Filtering punctuation, white space, numbers, URL using Spacy methods while keeping the text content of hashtag intact"
},
{
"code": null,
"e": 7353,
"s": 7336,
"text": "Removed @mention"
},
{
"code": null,
"e": 7506,
"s": 7353,
"text": "Lemmatize: lemmatized each token using Spacy method ‘.lemma_’. Pronouns are kept as they are since Spacy lemmatizer transforms every pronoun to “-PRON-”"
},
{
"code": null,
"e": 7532,
"s": 7506,
"text": "Special character removal"
},
{
"code": null,
"e": 7562,
"s": 7532,
"text": "Single syllable token removal"
},
{
"code": null,
"e": 8042,
"s": 7562,
"text": "Spell correction: it is a simple spell correction dealing with repeated characters such as “sooooo goooood”. If the same character is repeated more than two times, it shortens the repetition to two. For example “sooooo goooood” will be transformed as “soo good”. This is not a perfect solution since even after correction, in case of “soo”, it is not a correct spelling. But at least it will help to reduce feature space by making “sooo”, “soooo”, “sooooo” to the same word “soo”"
},
{
"code": null,
"e": 8102,
"s": 8042,
"text": "OK now let’s see how this custom cleaner works with tweets."
},
{
"code": null,
"e": 8156,
"s": 8102,
"text": "pd.set_option('display.max_colwidth', -1)df.text[:10]"
},
{
"code": null,
"e": 8197,
"s": 8156,
"text": "[spacy_cleaner(t) for t in df.text[:10]]"
},
{
"code": null,
"e": 8322,
"s": 8197,
"text": "It looks like it’s doing what I intended it to do. I’ll clean the “text” column and create a new column called “clean_text”."
},
{
"code": null,
"e": 8377,
"s": 8322,
"text": "df['clean_text'] = [spacy_cleaner(t) for t in df.text]"
},
{
"code": null,
"e": 8493,
"s": 8377,
"text": "By running the cleaning function I can see it encountered some “invalid escape sequence”. Let’s see what these are."
},
{
"code": null,
"e": 8559,
"s": 8493,
"text": "for i,t in enumerate(df.text): if '\\m' in t: print(i,t)"
},
{
"code": null,
"e": 8883,
"s": 8559,
"text": "The tweets that contain ‘\\m’ were actually containing an emoticon ‘\\m/’ I didn’t know about this until I googled it. Apparently ‘\\m/’ stands for the horn sign you make with your hand. This hand sign is popular in metal music. Anyway, this is just a warning and it is not an error. Let’s see how the cleaner deals with this."
},
{
"code": null,
"e": 8897,
"s": 8883,
"text": "df.text[2064]"
},
{
"code": null,
"e": 8926,
"s": 8897,
"text": "spacy_cleaner(df.text[2064])"
},
{
"code": null,
"e": 9000,
"s": 8926,
"text": "Again it seems like to be doing what I intended it to do. So far so good."
},
{
"code": null,
"e": 9248,
"s": 9000,
"text": "“The class imbalance problem typically occurs when, in a classification problem, there are many more instances of some classes than others. In such cases, standard classifiers tend to be overwhelmed by the large classes and ignore the small ones.”"
},
{
"code": null,
"e": 9625,
"s": 9248,
"text": "As I have already realised, the training data is not perfectly balanced, ‘neutral’ class has 3 times more data than ‘negative’ class, and ‘positive’ class has around 2.4 times more data than ‘negative’ class. I will try fitting a model with three different data; oversampled, downsampled, original, to see how different sampling techniques affect the learning of a classifier."
},
{
"code": null,
"e": 10228,
"s": 9625,
"text": "The simple default classifier I’ll use to compare performances of different datasets will be the logistic regression. From my previous sentiment analysis project, I learned that Tf-Idf with Logistic Regression is a pretty powerful combination. Before I apply any other more complex models such as ANN, CNN, RNN etc, the performances with logistic regression will hopefully give me a good idea of which data sampling methods I should choose. If you want to know more about Tf-Idf, and how it extracts features from text, you can check my old post, “Another Twitter Sentiment Analysis with Python-Part5”."
},
{
"code": null,
"e": 10772,
"s": 10228,
"text": "In terms of validation, I will use K-Fold Cross Validation. In my previous project, I split the data into three; training, validation, test, and all the parameter tuning was done with reserved validation set and finally applied the model to the test set. Considering that I had more than 1 million data for training, this kind of validation set approach was acceptable. But this time, the data I have is much smaller (around 40,000 tweets), and by leaving out validation set from the data we might leave out interesting information about data."
},
{
"code": null,
"e": 10954,
"s": 10772,
"text": "from sklearn.pipeline import Pipelineoriginal_pipeline = Pipeline([ ('vectorizer', tvec), ('classifier', lr)])lr_cv(5, df.clean_text, df.sentiment, original_pipeline, 'macro')"
},
{
"code": null,
"e": 11192,
"s": 10954,
"text": "With data as it is without any resampling, we can see that the precision is higher than the recall. If you want to know more about precision and recall, you can check my old post, “Another Twitter sentiment analysis with Python — Part4”."
},
{
"code": null,
"e": 11799,
"s": 11192,
"text": "If we take a closer look at the result from each fold, we can also see that the recall for the negative class is quite low around 28~30%, while the precisions for the negative class are high as 61~65%. This means the classifier is very picky and does not think many things are negative. All the text it classifies as negative is 61~65% of the time really negative. However, it also misses a lot of actual negative class, because it is so very picky. We have a low recall, but a very high precision. The intuition behind this precision and recall has been taken from a Medium blog post by Andreas Klintberg."
},
{
"code": null,
"e": 11973,
"s": 11799,
"text": "There is a very useful Python package called “imbalanced-learn”, which helps you deal with class imbalance issues, it is compatible with Scikit Learn, and easy to implement."
},
{
"code": null,
"e": 12081,
"s": 11973,
"text": "Within imbalanced-learn, there are different techniques you can use for oversampling. I will use below two."
},
{
"code": null,
"e": 12149,
"s": 12081,
"text": "RandomOverSamplerSMOTE (Synthetic Minority Over-Sampling Technique)"
},
{
"code": null,
"e": 12167,
"s": 12149,
"text": "RandomOverSampler"
},
{
"code": null,
"e": 12218,
"s": 12167,
"text": "SMOTE (Synthetic Minority Over-Sampling Technique)"
},
{
"code": null,
"e": 12618,
"s": 12218,
"text": "There is one more point to consider if you are cross-validating with oversampled data. Oversampling the minority class can result in overfitting problems if we oversample before cross-validating. Why is that so? Because by oversampling before cross validation split, you are leaking the information of validation data already to your training set. As they say “What has been seen, cannot be unseen.”"
},
{
"code": null,
"e": 12764,
"s": 12618,
"text": "If you want more detailed explanation, I recommend this Youtube video “Machine Learning — Over-& Undersampling — Python/ Scikit/ Scikit-Imblearn”"
},
{
"code": null,
"e": 12983,
"s": 12764,
"text": "Luckily cross-validation function I defined above as “lr_cv()” will fit the pipeline only with the training set split after cross-validation split, thus it is not leaking any information of validation set to the model."
},
{
"code": null,
"e": 13138,
"s": 12983,
"text": "Random over-sampling is simply a process of repeating some samples of the minority class and balance the number of samples between classes in the dataset."
},
{
"code": null,
"e": 13387,
"s": 13138,
"text": "from imblearn.pipeline import make_pipelinefrom imblearn.over_sampling import ADASYN, SMOTE, RandomOverSamplerROS_pipeline = make_pipeline(tvec, RandomOverSampler(random_state=777),lr)SMOTE_pipeline = make_pipeline(tvec, SMOTE(random_state=777),lr)"
},
{
"code": null,
"e": 13574,
"s": 13387,
"text": "Before we fit each pipeline, let’s see what the RadomOverSampler does. In order to make it easier to see I defined some toy text data below, and the target sentiment value for each text."
},
{
"code": null,
"e": 13790,
"s": 13574,
"text": "sent1 = \"I love dogs\"sent2 = \"I don't like dogs\"sent3 = \"I adore cats\"sent4 = \"I hate spiders\"sent5 = \"I like dogs\"testing_text = pd.Series([sent1, sent2, sent3, sent4, sent5])testing_target = pd.Series([1,0,1,0,1])"
},
{
"code": null,
"e": 13970,
"s": 13790,
"text": "My toy data has 5 entries in total, and the target sentiments are three positives and two negatives. In order to be balanced, this toy data needs one more entry of negative class."
},
{
"code": null,
"e": 14192,
"s": 13970,
"text": "One thing is over sampler won’t be able to handle raw text data. It has to be transformed into a feature space for over sampler to work. I’ll first fit TfidfVectorizer, and oversample using Tf-Idf representation of texts."
},
{
"code": null,
"e": 14467,
"s": 14192,
"text": "tv = TfidfVectorizer(stop_words=None, max_features=100000)testing_tfidf = tv.fit_transform(testing_text)ros = RandomOverSampler(random_state=777)X_ROS, y_ROS = ros.fit_sample(testing_tfidf, testing_target)pd.DataFrame(testing_tfidf.todense(), columns=tv.get_feature_names())"
},
{
"code": null,
"e": 14529,
"s": 14467,
"text": "pd.DataFrame(X_ROS.todense(), columns=tv.get_feature_names())"
},
{
"code": null,
"e": 14959,
"s": 14529,
"text": "By running RandomOverSampler, now we have one more entry at the end. The last entry added by RandomOverSampler is exactly same as the fourth one (index number 3) from the top. RandomOverSampler simply repeats some entries of the minority class to balance the data. If we look at the target sentiments after RandomOverSampler, we can see that it has now a perfect balance between classes by adding on more entry of negative class."
},
{
"code": null,
"e": 14965,
"s": 14959,
"text": "y_ROS"
},
{
"code": null,
"e": 15026,
"s": 14965,
"text": "lr_cv(5, df.clean_text, df.sentiment, ROS_pipeline, 'macro')"
},
{
"code": null,
"e": 15586,
"s": 15026,
"text": "Compared to the model built with original imbalanced data, now the model behaves in opposite way. The precisions for the negative class are around 47~49%, but the recalls are way higher at 64~67%. Now we have a situation of high recall, low precision. What this means is that the classifier thinks a lot of things are negative. However, it also thinks a lot of non-negative texts are negative. So from our set of data we got a lot of texts classified as negative, many of them were in the set of actual negative, however, a lot of them were also non-negative."
},
{
"code": null,
"e": 15766,
"s": 15586,
"text": "But without resampling, the recall rate was as low as 28~30% for negative class, the precision rate for the negative class I get from oversampling is more robust at around 47~49%."
},
{
"code": null,
"e": 16053,
"s": 15766,
"text": "Another way to look at it is to look at the f1 score, which is the harmonic average of precision and recall. The original imbalanced data had 66.51% accuracy and 60.01% F1 score. However with oversampling, we get a slightly lower accuracy of 65.95%, but a much higher F1 score of 64.18%"
},
{
"code": null,
"e": 16212,
"s": 16053,
"text": "SMOTE is an over-sampling approach in which the minority class is over-sampled by creating “synthetic” examples rather than by over-sampling with replacement."
},
{
"code": null,
"e": 17061,
"s": 16212,
"text": "According to the original research paper “SMOTE: Synthetic Minority Over-sampling Technique” (Chawla et al., 2002), “synthetic samples are generated in the following way: Take the difference between the feature vector (sample) under consideration and its nearest neighbour. Multiply this difference by a random number between 0 and 1, and add it to the feature vector under consideration. This causes the selection of a random point along the line segment between two specific features. This approach effectively forces the decision region of the minority class to become more general.” What this means is that when SMOTE creates a new synthetic data, it will choose one data to copy, and look at its k nearest neighbours. Then, on feature space, it will create random values in feature space that is between the original sample and its neighbours."
},
{
"code": null,
"e": 17129,
"s": 17061,
"text": "Once you see the example with the toy data, it will become clearer."
},
{
"code": null,
"e": 17301,
"s": 17129,
"text": "smt = SMOTE(random_state=777, k_neighbors=1)X_SMOTE, y_SMOTE = smt.fit_sample(testing_tfidf, testing_target)pd.DataFrame(X_SMOTE.todense(), columns=tv.get_feature_names())"
},
{
"code": null,
"e": 17407,
"s": 17301,
"text": "The last entry is the data created by SMOTE. To make it easier to see, let’s see only the negative class."
},
{
"code": null,
"e": 17485,
"s": 17407,
"text": "pd.DataFrame(X_SMOTE.todense()[y_SMOTE == 0], columns=tv.get_feature_names())"
},
{
"code": null,
"e": 18142,
"s": 17485,
"text": "The top two entries are original data, and the one on the bottom is synthetic data. You can see it didn’t just repeat original data. Instead, the Tf-Idf values are created by taking random values between the top two original data. As you can see, if the Tf-Idf values for both original data are 0, then synthetic data also has 0 for those features, such as “adore”, “cactus”, “cats”, because if two values are the same there are no random values between them. I specifically defined k_neighbors as 1 for this toy data, since there are only two entries of negative class, if SMOTE chooses one to copy, then only one other negative entry left as a neighbour."
},
{
"code": null,
"e": 18210,
"s": 18142,
"text": "Now let’s fit the SMOTE pipeline to see how it affects performance."
},
{
"code": null,
"e": 18273,
"s": 18210,
"text": "lr_cv(5, df.clean_text, df.sentiment, SMOTE_pipeline, 'macro')"
},
{
"code": null,
"e": 18494,
"s": 18273,
"text": "SMOTE sampling seems to have a slightly higher accuracy and F1 score compared to random oversampling. With the results so far, it seems like choosing SMOTE oversampling is preferable over original or random oversampling."
},
{
"code": null,
"e": 18684,
"s": 18494,
"text": "How about downsampling. If we oversample the minority class in the above oversampling, with downsampling, we try to reduce the data of majority class, so that the data classes are balanced."
},
{
"code": null,
"e": 19135,
"s": 18684,
"text": "from imblearn.under_sampling import NearMiss, RandomUnderSamplerRUS_pipeline = make_pipeline(tvec, RandomUnderSampler(random_state=777),lr)NM1_pipeline = make_pipeline(tvec, NearMiss(ratio='not minority',random_state=777, version = 1),lr)NM2_pipeline = make_pipeline(tvec, NearMiss(ratio='not minority',random_state=777, version = 2),lr)NM3_pipeline = make_pipeline(tvec, NearMiss(ratio=nm3_dict,random_state=777, version = 3, n_neighbors_ver3=4),lr)"
},
{
"code": null,
"e": 19224,
"s": 19135,
"text": "Again, before we run the pipeline, let’s apply this to the toy data to see what it does."
},
{
"code": null,
"e": 19388,
"s": 19224,
"text": "rus = RandomUnderSampler(random_state=777)X_RUS, y_RUS = rus.fit_sample(testing_tfidf, testing_target)pd.DataFrame(X_RUS.todense(), columns=tv.get_feature_names())"
},
{
"code": null,
"e": 19458,
"s": 19388,
"text": "pd.DataFrame(testing_tfidf.todense(), columns=tv.get_feature_names())"
},
{
"code": null,
"e": 19730,
"s": 19458,
"text": "Compared with the original imbalanced data, we can see that downsampled data has one less entry, which is the last entry of the original data belonging to the positive class. RandomUnderSampler reduces the majority class by randomly removing data from the majority class."
},
{
"code": null,
"e": 19791,
"s": 19730,
"text": "lr_cv(5, df.clean_text, df.sentiment, RUS_pipeline, 'macro')"
},
{
"code": null,
"e": 19978,
"s": 19791,
"text": "Now the accuracy and the F1 score has significantly dropped. But the characteristic of low precision and high recall is as same as oversampled data. Only its overall performance dropped."
},
{
"code": null,
"e": 20256,
"s": 19978,
"text": "According to the documentation of “imbalanced-learn”, “NearMiss adds some heuristic rules to select samples. NearMiss implements 3 different types of heuristic which can be selected with the parameter version. NearMiss heuristic rules are based on nearest neighbors algorithm.”"
},
{
"code": null,
"e": 20428,
"s": 20256,
"text": "There is also a good paper on resampling techniques. “Survey of resampling techniques for improving classification performance in unbalanced datasets” (Ajinkya More, 2016)"
},
{
"code": null,
"e": 20514,
"s": 20428,
"text": "I borrowed the explanation of three different versions of NearMiss from More’s paper."
},
{
"code": null,
"e": 20740,
"s": 20514,
"text": "In NearMiss-1, those points from majority class are retained whose mean distance to the k nearest points in minority class is lowest. Which means it will keep the points of majority class that’s similar to the minority class."
},
{
"code": null,
"e": 20936,
"s": 20740,
"text": "nm = NearMiss(ratio='not minority',random_state=777, version=1, n_neighbors=1)X_nm, y_nm = nm.fit_sample(testing_tfidf, testing_target)pd.DataFrame(X_nm.todense(), columns=tv.get_feature_names())"
},
{
"code": null,
"e": 21006,
"s": 20936,
"text": "pd.DataFrame(testing_tfidf.todense(), columns=tv.get_feature_names())"
},
{
"code": null,
"e": 21279,
"s": 21006,
"text": "We can see that NearMiss-1 has eliminated the entry for the text “I adore cats”, which makes sense because both words “adore” and “cats” are only appeared in this entry, so makes it the most different from minority class in terms of Tf-Idf representation in feature space."
},
{
"code": null,
"e": 21340,
"s": 21279,
"text": "lr_cv(5, df.clean_text, df.sentiment, NM1_pipeline, 'macro')"
},
{
"code": null,
"e": 21422,
"s": 21340,
"text": "It seems like both the accuracy and F1 score got worse than random undersampling."
},
{
"code": null,
"e": 21680,
"s": 21422,
"text": "In contrast to NearMiss-1, NearMiss-2 keeps those points from the majority class whose mean distance to the k farthest points in minority class is lowest. In other words, it will keep the points of majority class that’s most different to the minority class."
},
{
"code": null,
"e": 21876,
"s": 21680,
"text": "nm = NearMiss(ratio='not minority',random_state=777, version=2, n_neighbors=1)X_nm, y_nm = nm.fit_sample(testing_tfidf, testing_target)pd.DataFrame(X_nm.todense(), columns=tv.get_feature_names())"
},
{
"code": null,
"e": 21946,
"s": 21876,
"text": "pd.DataFrame(testing_tfidf.todense(), columns=tv.get_feature_names())"
},
{
"code": null,
"e": 22200,
"s": 21946,
"text": "Now we can see that NearMiss-2 has eliminated the entry for the text “I like dogs”, which again makes sense because we also have a negative entry “I don’t like dogs”. Two entries are in different classes but they share two same tokens “like” and “dogs”."
},
{
"code": null,
"e": 22261,
"s": 22200,
"text": "lr_cv(5, df.clean_text, df.sentiment, NM2_pipeline, 'macro')"
},
{
"code": null,
"e": 22409,
"s": 22261,
"text": "Both accuracy and F1 score got even lower compared to NearMiss-1. And we can also see that all the metrics fluctuate from fold to fold quite a lot."
},
{
"code": null,
"e": 22715,
"s": 22409,
"text": "The final NearMiss variant, NearMiss-3 selects k nearest neighbours in majority class for every point in the minority class. In this case, the undersampling ratio is directly controlled by k. For example, if we set k to be 4, then NearMiss-3 will choose 4 nearest neighbours of every minority class entry."
},
{
"code": null,
"e": 23228,
"s": 22715,
"text": "Then we’ll end up with either more or fewer samples of majority class than minority class depending on n neighbours we set. For example, with my dataset, if I run NearMiss-3 with default n_neighbors_ver3 of 3, it will complain and the number of neutral class(which is majority class in my dataset) will be smaller than negative class(which is minority class in my dataset). So I explicitly set n_neighbors_ver3 to be 4, so that I’ll have enough majority class data at least the same number as the minority class."
},
{
"code": null,
"e": 23671,
"s": 23228,
"text": "One thing I’m not completely sure is that what kind of filtering it applies when all the data selected with n_neighbors_ver3 parameter is more than the minority class. As you will see below, after applying NearMiss-3, the dataset is perfectly balanced. However, if the algorithm simply chooses the nearest neighbour according to the n_neighbors_ver3 parameter, I doubt that it will end up with the exact same number of entries for each class."
},
{
"code": null,
"e": 23732,
"s": 23671,
"text": "lr_cv(5, df.clean_text, df.sentiment, NM3_pipeline, 'macro')"
},
{
"code": null,
"e": 23844,
"s": 23732,
"text": "NearMiss-3 produced the most robust result within NearMiss family, but slightly lower than RandomUnderSampling."
},
{
"code": null,
"e": 24275,
"s": 23844,
"text": "from collections import Counternm3 = NearMiss(ratio='not minority',random_state=777, version=3, n_neighbors_ver3=4)tvec = TfidfVectorizer(stop_words=None, max_features=100000, ngram_range=(1, 3))df_tfidf = tvec.fit_transform(df.clean_text)X_res, y_res = nm3.fit_sample(df_tfidf, df.sentiment)print('Distribution before NearMiss-3: {}'.format(Counter(df.sentiment)))print('Distribution after NearMiss-3: {}'.format(Counter(y_res)))"
},
{
"code": null,
"e": 24381,
"s": 24275,
"text": "5-fold cross validation result (classifier used for validation: logistic regression with default setting)"
},
{
"code": null,
"e": 24558,
"s": 24381,
"text": "Based on the above result, the sampling technique I’ll be using for the next post will be SMOTE. In the next post, I will try different classifiers with SMOTE oversampled data."
}
]
|
Intuitive Guide to Convolution Neural Networks | by Thushan Ganegedara | Towards Data Science | This is the second article on my series introducing machine learning concepts with while stepping very lightly on mathematics. If you missed previous article you can find in here (on KL divergence). Fun fact, I’m going to make this an interesting adventure by introducing some machine learning concept for every letter in the alphabet (This would be for the letter C).
A B C D* E F G* H I J K L* M N O P Q R S T U V W X Y Z
* denotes articles behind the Medium paywall
Convolution neural networks (CNNs) are a family of deep networks that can exploit the spatial structure of data (e.g. images) to learn about the data, so that the algorithm can output something useful. Think of a problem where we want to identify if there is a person in a given image. For example, if I give the CNN an image of a person, this deep neural network first needs to learn some local features (e.g. eyes, nose, mouth, etc.). These local features are learnt in convolution layers.
Then the CNN will look at what local features are present in a given image and produce specific activation patterns (or an activation vector) which globally represent the existence of those local feature maps. These activation patterns are produced by the fully connected layers of the CNN. For example, if the image is not of a person, the activation pattern will be different from what it gives for an image of a person.
Now let’s look at what sort of sub-modules are present in a CNN. There are three different components in a typical CNN: convolution layers, pooling layers and fully-connected layers. We already have a general idea of what a convolution layer and a fully connected layer are. One thing we did not discuss is the pooling layer, which we will discuss soon.
First we discuss what a convolution layer does in depth. A convolution layer consists of many kernels. These kernels (sometimes called convolution filters) learn local features present in an image (e.g. what the eye of a person looks like). Such a local feature that a convolution layer learns is called a feature map. These features are then convolved over the image. This convolution operation results in a matrix (sometimes called an activation map). The activation map produces a high value at a given location if the feature represented by the convolution filter is present at that location of the input.
The pooling layer makes the features learnt by the CNN translation invariant (e.g. no matter whether the person’s eye is at position [x=10, y=10] or [x=12, y=11], the output of the pooling layer will be the same). Note that we are talking about slight translation variations per layer; however, aggregating several such layers allows us to achieve higher translation invariance.
Finally we have the fully connected layer. Fully connected layers are responsible for producing different activation patterns based on the set of activated feature maps and the locations in the image at which those feature maps are activated. This is what a CNN looks like visually.
With a good understanding of what the overall structure of a CNN looks like, let us move on to understanding each of the sub-components that make up a CNN.
What does the convolution operation exactly do? The convolution operation outputs a high value for a given position if the convolution feature is present at that location, and a low value otherwise. More concretely, at a given position of the convolution kernel, we take the element-wise multiplication of each kernel cell value and the corresponding image pixel value that overlaps the kernel cell, and then take the sum of those. The exact value is decided according to the following formula (m — kernel width and height, h — convolution output, x — input, w — convolution kernel).
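With the variables defined above, the operation can be written as

h_{i,j} = \sum_{k=1}^{m} \sum_{l=1}^{m} w_{k,l} \, x_{i+k-1,\, j+l-1}

A minimal NumPy sketch of this operation on made-up values (the double loop is inefficient, but it makes the formula explicit):

import numpy as np

def convolve2d(x, w):
    # Valid 2D convolution (no padding, stride 1): h[i, j] is the sum over
    # the m x m window of x starting at (i, j), weighted element-wise by w
    m = w.shape[0]
    out_h, out_w = x.shape[0] - m + 1, x.shape[1] - m + 1
    h = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            h[i, j] = np.sum(x[i:i + m, j:j + m] * w)
    return h

x = np.random.rand(5, 5)    # a tiny "image"
w = np.random.rand(3, 3)    # a 3x3 kernel
print(convolve2d(x, w))     # 3x3 activation map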
The convolution process on an image can be visualised as follows.
It is not enough to know what the convolution operation does; we also need to understand what the convolution output represents. Just imagine colours for the values in the convolution output (0 — black, 100 — white). If you visualise this image, it will represent a binary image that lights up at the locations where the eyes are.
The convolution operation can also be thought of as performing some transformation on a given image. This transformation can result in various effects (e.g. extracting edges, blurring, etc.). Let us understand more concretely what the convolution operation does to an image. Consider the following image and the convolution kernel. You can find more about this in this Wikipedia article.
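For instance, a classic edge-detection kernel (one of those listed in the Wikipedia article) produces strong responses wherever a pixel differs sharply from its neighbours. A minimal sketch using SciPy; the image here is a made-up array, just for illustration:

import numpy as np
from scipy.signal import correlate2d

# A classic edge-detection kernel
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])

# Toy 6x6 grayscale "image": a bright square on a dark background
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0

# Strong responses appear along the borders of the bright square
edges = correlate2d(image, edge_kernel, mode='valid')
print(edges)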
Let us now learn what the pooling operation does. The pooling (sometimes called subsampling) layer makes the CNN a little bit translation invariant in terms of the convolution output. There are two different pooling mechanisms used in practice (max-pooling and average-pooling). We will refer to max-pooling simply as pooling, since max-pooling is much more widely used than average pooling. More precisely, the pooling operation, at a given position, outputs the maximum value of the input that falls within the kernel. So mathematically,
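h_{i,j} = \max_{1 \le k,\, l \le m} \; x_{i+k-1,\, j+l-1}

That is, each output cell is simply the largest input value under the kernel. A minimal NumPy sketch of max-pooling with a 2x2 kernel and a stride of 2 (again an illustration of the idea, not a library implementation):

import numpy as np

def max_pool2d(x, m=2, stride=2):
    # Max pooling: each output cell is the maximum of an m x m window,
    # moving the window `stride` pixels at a time
    out_h = (x.shape[0] - m) // stride + 1
    out_w = (x.shape[1] - m) // stride + 1
    h = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            r, c = i * stride, j * stride
            h[i, j] = x[r:r + m, c:c + m].max()
    return h

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(x))    # 2x2 output, pooled with a stride of 2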
Let us understand how pooling works, by applying the pooling operation on the convolution output we saw earlier.
As you can see, we use two variants of the same image: one original image and another translated slightly along the x-axis. However, the pooling operation outputs the exact same feature map for both images (black — 0, white — 100). Therefore we say the pooling operation makes the knowledge in the CNN translation invariant. One thing to note is that we are not moving 1 pixel at a time, but 2 pixels at a time. This is known as strided pooling, meaning that we are performing pooling with a stride of 2.
Fully connected layers will combine the features learnt by different convolution kernels so that the network can build a global representation of the whole image. We can understand the fully connected layer as below.
The neurons in the fully connected layer will get activated based on whether the various entities represented by the convolution features are actually present in the input. As the fully connected neurons get activated, they will produce different activation patterns based on which features are present in the input images. This provides the output layer with a compact representation of what exists in the image, which the output layer can easily use to correctly classify it.
Now all we have to do is put all of these together to form an end-to-end model, from raw images to decisions. And once connected, the CNN will look like this. To summarise, the convolution layers will learn various local features in the data (e.g. what an eye looks like), then the pooling layer will make the CNN invariant to translations of these features (e.g. if the eye appears slightly translated in two images, the CNN will still recognise it as an eye). Finally we have the fully connected layers, which say, “we found two eyes, a nose and a mouth, so this must be a person”, and activate the correct output.
Adding more layers obviously boosts the performance of deep neural networks. In fact, much of the notable ground-breaking research in deep learning had to do with solving the problem of how do we add more layers? without disrupting the training of the model, because the deeper the model is, the more difficult it is to train.
But having more layers helps the CNN to learn features in a hierarchical manner. For example, the first layer learns various edge orientations in the image, the second layer learns basic shapes (circles, triangles, etc.), the third layer learns more advanced shapes (e.g. the shape of an eye, the shape of a nose), and so on. This delivers better performance compared to a CNN that has to learn all of this with a single layer.
Now, one thing to keep in mind is that these convolution features (eyes, nose, mouth) don’t magically appear when you implement a CNN. The objective is to learn these features from data. To do this, we define a cost function that rewards correctly identified data and penalises misclassified data. Example cost functions would be the root mean squared error or the binary cross-entropy loss.
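For a single example with true label y \in \{0, 1\} and predicted probability \hat{y}, the binary cross-entropy loss takes the standard form

L(y, \hat{y}) = -\big(y \log \hat{y} + (1 - y) \log(1 - \hat{y})\big)

which is small when the prediction agrees with the label and grows quickly as they diverge.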
After we define the loss, we can optimise the weights of the features (that is, each cell value of the kernels) so that they reflect useful features that lead the CNN to correctly identify a person. More concretely, we optimise each convolution kernel and each fully-connected neuron by taking a small step in the opposite direction of the gradient of each parameter with respect to the loss. However, to implement a CNN you don’t need to know the exact details of how to implement gradient propagation. This is because most deep learning libraries (e.g. TensorFlow, PyTorch) implement these differentiation operations internally and automatically, when you define the forward computations.
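Concretely, each parameter w (a kernel cell or a fully-connected weight) is updated at every step with the familiar gradient-descent rule, where \eta is the learning rate and L is the loss:

w \leftarrow w - \eta \, \frac{\partial L}{\partial w}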
Here we will briefly discuss how to implement a CNN. Knowing the basics is not enough; we should also understand how to implement a model using a standard deep learning library like Keras. Keras is a wonderful tool, especially for quickly prototyping models and seeing them in action! The exercise is available here.
First we define the Keras API we want to use. We will go with the sequential API. You can read more about it here:
# Define a sequential model
model = Sequential()
Then we define a convolution layer as follows:
# Add a convolution layer
model.add(Conv2D(32, (3,3), activation='relu', input_shape=[28, 28, 1]))
Here, 32 is the number of kernels in the layer, and (3,3) is the kernel size (height and width) of the convolution layer. We use the non-linear activation ReLU, and the input shape is [28, 28, 1], which is [image height, image width, color channels]. Note that the input shape should be the shape of the output produced by the previous layer. So for the first convolution layer we pass the shape of the actual data input; for the rest of the layers, it will be the shape of the output produced by the previous layer. Next we discuss how to implement a max pooling layer:
# Add a max pool layer
model.add(MaxPool2D())
Here we don’t provide any arguments, as we’re going to use the default values provided by Keras. If you do not specify the arguments, Keras will use a kernel size of (2,2) and a stride of (2,2). Next we define the fully-connected layers. However, before that we need to flatten our output, as the fully connected layers process 1D data:
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(10, activation='softmax'))
Here we define two fully-connected or dense layers. The first fully connected layer has 256 neurons and uses ReLU activation. Then we define a dense layer with 10 output nodes and softmax activation. This acts as the output layer, which will activate a particular neuron for images containing the same object. Finally we compile our model with,
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
Here we say: use the Adam optimiser (to train the model) with the cross-entropy loss, and use the accuracy of the model for evaluation. Finally we can train and test our model on data. We are going to use the MNIST dataset, which we download and read into memory using the maybe_download and read_mnist functions defined in the exercise. The MNIST dataset contains images of handwritten digits (0–9), and the objective is to classify each image correctly by assigning the digit that the image represents.
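Before calling fit, the data needs to match the shapes the model above expects. A minimal sketch of the typical preparation, using Keras’s built-in MNIST loader here for illustration instead of the exercise’s maybe_download and read_mnist helpers:

from keras.datasets import mnist
from keras.utils import to_categorical

# Load MNIST with the built-in Keras loader (illustration only;
# the exercise uses its own maybe_download / read_mnist helpers)
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Add the channel dimension expected by Conv2D and scale pixels to [0, 1]
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# One-hot labels to match the 10-node softmax output and
# the categorical_crossentropy loss
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)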
Next we train our model by calling the following function:
model.fit(x_train, y_train, batch_size = batch_size)
And we can test our model with some test data as below:
test_acc = model.evaluate(x_test, y_test, batch_size=batch_size)
We will run this for several epochs, which will allow you to increase your model’s performance.
This wraps up our discussion of the convolution neural network. We first discussed what takes place within a CNN from a higher vantage point, and then closed in on it section by section. Next we discussed the major components of a typical CNN, namely convolution layers, pooling layers and fully connected layers, and walked through each of these components in much more detail. We then briefly discussed how training happens in a CNN. Finally, we discussed how we can implement a standard CNN with Keras, a high-level TensorFlow library. You can find the exercise for this tutorial here.
Cheers!
If you enjoy the stories I share about data science and machine learning, consider becoming a member!
thushv89.medium.com
Check out my work on the subject.
[1] (Book) TensorFlow 2 in Action — Manning
[2] (Video Course) Machine Translation in Python — DataCamp
[3] (Book) Natural Language processing in TensorFlow 1 — Packt | [
{
"code": null,
"e": 541,
"s": 172,
"text": "This is the second article on my series introducing machine learning concepts with while stepping very lightly on mathematics. If you missed previous article you can find in here (on KL divergence). Fun fact, I’m going to make this an interesting adventure by introducing some machine learning concept for every letter in the alphabet (This would be for the letter C)."
},
{
"code": null,
"e": 596,
"s": 541,
"text": "A B C D* E F G* H I J K L* M N O P Q R S T U V W X Y Z"
},
{
"code": null,
"e": 641,
"s": 596,
"text": "* denotes articles behind the Medium paywall"
},
{
"code": null,
"e": 1137,
"s": 641,
"text": "Convolution neural networks (CNNs) are a family of deep networks that can exploit the spatial structure of data (e.g. images) to learn about the data, so that the algorithm can output something useful. Think of a problem where we want to identify if there is a person in a given image. For example, if I give the the CNN an image of a person, this deep neural network first needs to learn some local features (e.g. eyes, nose, mouth, etc.). These local features are learnt in convolution layers."
},
{
"code": null,
"e": 1560,
"s": 1137,
"text": "Then the CNN will look at what local features are present in a given image and then produce specific activation patterns (or an activation vector) which globally represents the existence of those local features maps. These activation patterns are produced by fully connected layers in the CNN. For example, if the image is a non-person, the activation pattern will be different from what it gives for an image of a person."
},
{
"code": null,
"e": 1926,
"s": 1560,
"text": "Now let’s look at what sort of sub modules are present in a CNN. There are three different components in a typical CNN. They are, convolution layers, pooling layers and fully-connected layers. We already have a general idea about what a convolution layer and a fully connected layer is. One thing we did not discuss is the pooling layer, which we will discuss soon."
},
{
"code": null,
"e": 2582,
"s": 1926,
"text": "First we discuss what a convolution layer does in depth. A convolution layer consists of many kernels. These kernels (sometimes called convolution filters) present in the convolution layer, learn local features present in an image (e.g. how the eye of a person looks like). Such a local feature that a convolution layer learns is called a feature map. Then these features are convolved over the image. This convolution operation will result in a matrix (that is sometimes called an activation map). The activation map produces a high value at a given location, if the feature represented in the convolution filter is present at that location of the input."
},
{
"code": null,
"e": 2940,
"s": 2582,
"text": "The pooling layer make these features learnt by the CNN translation invariant (e.g. no matter the person’s eye is at [x=10, y=10] or [x=12,y=11] positions, the output of the pooling layer will be same). Note that we talk about slight translation variations per layer. However aggregating several such layers, allows us to have higher translation invariance."
},
{
"code": null,
"e": 3215,
"s": 2940,
"text": "Finally we have the fully connected layer. Fully connected layers are responsible for producing different activation patterns based on the set of activated feature maps and the locations in the image, the feature maps are activated for. This is what CNN looks like visually."
},
{
"code": null,
"e": 3377,
"s": 3215,
"text": "With a good understanding about what the overall structure of a CNN looks like, let us move on to understanding each of these sub components, that make up a CNN."
},
{
"code": null,
"e": 3955,
"s": 3377,
"text": "What does the convolution operation exactly do? Convolution operation outputs a high value for a given position if the convolution feature is present in that location, else outputs a low value. More concretely, at a given position of the convolution kernel, we take the element-wise multiplication of each kernel cell value and the corresponding image pixel value that overlaps the kernel cell, and then take the sum of that. The exact value is decided according to the following formula (m — kernel width and height, h — convolution output, x — input, w — convolution kernel)."
},
{
"code": null,
"e": 4022,
"s": 3955,
"text": "The convolution process on an image, can be visualised as follows."
},
{
"code": null,
"e": 4349,
"s": 4022,
"text": "It is not enough to know what the convolution operation does, we also need to understand what the convolution output represents. Just imagine colours for the values in the convolution output (0 — black, 100 — white). If you visualise this image, it will represent a binary image that lights up at the location the eyes are at."
},
{
"code": null,
"e": 4734,
"s": 4349,
"text": "The convolution operation also can be thought as performing some transformation on a given image. This transformation can result in various effects (e.g. extracting edges, blurring, etc.). Let us more concretely understand what the convolution operation does to an image. Consider the following image and the convolution kernel. You can find more about this in this Wikipedia article."
},
{
"code": null,
"e": 5260,
"s": 4734,
"text": "Let us now learn what the pooling operation does. Pooling (or sometimes called subsampling) layer make the CNN a little bit translation invariant in terms of the convolution output. There are two different pooling mechanisms used in practice (max-pooling and average-pooling). We will refer to max-pooling as pooling as, max-pooling is widely used compared to average pooling. More precisely, the pooling operation, at a given position, outputs the maximum value of the input, that falls within the kernel. So mathematically,"
},
{
"code": null,
"e": 5373,
"s": 5260,
"text": "Let us understand how pooling works, by applying the pooling operation on the convolution output we saw earlier."
},
{
"code": null,
"e": 5884,
"s": 5373,
"text": "As you can see, we use two variants of the same image; one original image and another image translated slightly on the x-axis. However, the pooling operation outputs the exact same feature map for both images (black — 0, white — 100). Therefore we say the pooling operation make the knowledge in the CNN translation invariant. One thing to note is that we are not moving 1 pixel at a time, but 2 pixels at a time. This is known as the strided-pooling, meaning that we are performing pooling with a stride of 2."
},
{
"code": null,
"e": 6103,
"s": 5884,
"text": "Fully connected layers will combine features learnt by different convolution kernels so that the network can build a global representation about the holistic image. We can understand the fully connected layer as below."
},
{
"code": null,
"e": 6585,
"s": 6103,
"text": "The neurons in the fully connected layer will get activated based on whether various entities represented by convolution features is actually present in the inputs. As the fully connected neurons get activated for this, it will produced different activation patterns based on what features are present in the input images. This provides a compact representation of what exists in the image, to the output layer, that the output layer can easily use to correctly classify the image."
},
{
"code": null,
"e": 7194,
"s": 6585,
"text": "Now all we have to do is put all these together, to form an end-to-end model, from raw images, to decisions. And once connected the CNN will look like this. To summarise, the convolution layers will learn various local features in the data (e.g. what an eye looks like), then the pooling layer will make the CNN invariant to translations of these features (e.g. if the eye appear slightly translated in two images, the CNN will still recognise it as an eye). Finally we have fully connected layers, that says, “we found two eyes, a nose and a mouth, so this must be a person, and activate the correct output."
},
{
"code": null,
"e": 7524,
"s": 7194,
"text": "Adding more layers, obviously boosts up the performance of deep neural networks. In fact, the most of notable ground breaking research in deep learning had to do with solving the problem of how do we add more layers?, while not disrupting the training of the model. Because deeper the model is, the more difficult it is to train."
},
{
"code": null,
"e": 7969,
"s": 7524,
"text": "But having more layers helps the CNN to learn features in a hierarchical manner. For example the first layer learns various edge orientations in the image, the second layer learns basic shapes (circles, triangles, etc.) and the third layer learns more advance shapes (e.g shape of an eye, shape of a nose), and so on. This delivers better performance, compared to what you would learn with a CNN that has to learn all these with a single layer."
},
{
"code": null,
"e": 8366,
"s": 7969,
"text": "Now, one thing to keep in mind is that, these convolution features (eyes, nose, mouth) don’t magically appear when you implement a CNN. The objective is to learn these features given data. To do this, we define a cost function, that rewards the correctly identified data and penalise misclassified data. Example cost function would be the root mean squared error or the binary cross entropy loss."
},
{
"code": null,
"e": 9051,
"s": 8366,
"text": "After we define the loss, we can optimise the weights of the features (that is each cell value of the features) to reflect useful features that lead the CNN to correctly identify a person. More concretely, we optimise each convolution kernel and fully-connected neurons, by taking a small step in the opposite direction shown by the gradient of each parameter with respect to the loss. However, to implement a CNN you don’t need to know the exact details of how to implement gradient propagation. This is because, most of deep learning libraries (e.g. TensorFlow, PyTorch) implement these differentiation operations internally, when you define the forward computations, automatically."
},
{
"code": null,
"e": 9362,
"s": 9051,
"text": "Here we will briefly discuss how to implement a CNN. Knowing the basics is not enough, we also should understand how to implement a model using a standard deep learning library like Keras. Keras is a wonderful tool, especially to quickly prototype models, to see them in action! The exercise is available here."
},
{
"code": null,
"e": 9477,
"s": 9362,
"text": "First we define the Keras API we want to use. We will go with the sequential API. You can read more about it here:"
},
{
"code": null,
"e": 9525,
"s": 9477,
"text": "# Define a sequential modelmodel = Sequential()"
},
{
"code": null,
"e": 9572,
"s": 9525,
"text": "Then we define a convolution layer as follows:"
},
{
"code": null,
"e": 9672,
"s": 9572,
"text": "# Added a convolution layermodel.add(Conv2D(32, (3,3), activation=’relu’, input_shape=[28, 28, 1]))"
},
{
"code": null,
"e": 10199,
"s": 9672,
"text": "Here, 32 is the number of kernels in the layer, (3,3) is the kernel size (height and width) of the convolution layer. We use the non-linear activation Relu and the input shape [28, 28, 1] which is [image height, image width, color channels]. Note that the input shape should be the shape of the output produced by the previous layer. So for the first convolution layer we have the actual data input. For the rest of layer, it will be the output produced by previous layer. Next we discuss how to implement a max pooling layer:"
},
{
"code": null,
"e": 10243,
"s": 10199,
"text": "# Add a max pool lyermodel.add(MaxPool2D())"
},
{
"code": null,
"e": 10574,
"s": 10243,
"text": "Here we don’t provide any parameters as we’re going to use default values provided in Keras. If you do not specify the argument, Keras will use a kernel size of (2,2) and a stride of (2,2). Next we define the fully-connected layers. However before that we need to flatten our output, as the fully connected layers process 1D data:"
},
{
"code": null,
"e": 10677,
"s": 10574,
"text": "model.add(Flatten())model.add(Dense(256, activation=’relu’))model.add(Dense(10, activation=’softmax’))"
},
{
"code": null,
"e": 11021,
"s": 10677,
"text": "Here we define two fully-connected or dense layers. The first fully connected layer has 256 neurons and uses Relu activation. Finally we define a dense layer with 10 output nodes with softmax activation. This acts as the output layer, that will activate a particular neuron for images having the same object. Finally we compile our model with,"
},
{
"code": null,
"e": 11109,
"s": 11021,
"text": "model.compile( optimizer=’adam’, loss=’categorical_crossentropy’, metrics=[‘accuracy’])"
},
{
"code": null,
"e": 11625,
"s": 11109,
"text": "Here we say use the Adam optimiser (to train the model), with the cross entropy loss and use the accuracy of the model to evaluate the model. Finally we can train and test our model using data. We are going to use MNIST dataset, which we will download and read into the memory using maybe_download and read_mnist functions defined in the exercise. The MNIST dataset contains images of hand written digits (0–9) and the objective is to classify the images correctly by assigning the digit, that the image represents."
},
{
"code": null,
"e": 11684,
"s": 11625,
"text": "Next we train our model by calling the following function:"
},
{
"code": null,
"e": 11737,
"s": 11684,
"text": "model.fit(x_train, y_train, batch_size = batch_size)"
},
{
"code": null,
"e": 11793,
"s": 11737,
"text": "And we can test our model with some test data as below:"
},
{
"code": null,
"e": 11859,
"s": 11793,
"text": "test_acc = model.evaluate(x_test, y_test, batch_size=batch_size) "
},
{
"code": null,
"e": 11957,
"s": 11859,
"text": "We will run this for several epochs, and this will allow you to increase your models performance."
},
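{
    "code": null,
    "e": null,
    "s": null,
    "text": "For example, a minimal sketch of such a training loop (the epoch count of 10 here is an illustrative choice, not a value from the exercise):"
},
{
    "code": "# Train for several epochs; 10 is an illustrative choice\nmodel.fit(x_train, y_train, batch_size=batch_size, epochs=10)",
    "e": null,
    "s": null,
    "text": null
},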
{
"code": null,
"e": 12565,
"s": 11957,
"text": "We wrap our discussion about the convolution neural network here. We first discussed what takes place within a CNN from a higher vantage point and kept in closing in section by section. Then we discussed the major components within a typical CNN such as, convolution layers, pooling layers and fully connected layers. Finally we walked through each of these components in much more detail. Then we discussed how the training happens in a CNN very briefly. Finally we discussed you we can implement a standard CNN with Keras: a high-level TensorFlow library. You can find the exercise for this tutorial here."
},
{
"code": null,
"e": 12573,
"s": 12565,
"text": "Cheers!"
},
{
"code": null,
"e": 12675,
"s": 12573,
"text": "If you enjoy the stories I share about data science and machine learning, consider becoming a member!"
},
{
"code": null,
"e": 12695,
"s": 12675,
"text": "thushv89.medium.com"
},
{
"code": null,
"e": 12728,
"s": 12695,
"text": "Checkout my work on the subject."
},
{
"code": null,
"e": 12772,
"s": 12728,
"text": "[1] (Book) TensorFlow 2 in Action — Manning"
},
{
"code": null,
"e": 12832,
"s": 12772,
"text": "[2] (Video Course) Machine Translation in Python — DataCamp"
}
]
|
Python - MCQ Quiz Game using Tkinter - GeeksforGeeks | 20 Apr, 2022
Prerequisite: Python GUI – tkinter
Python provides a standard GUI framework, Tkinter, which is used to develop fast and easy GUI applications. Here we will be developing a simple multiple-choice quiz in Python with a GUI, built with Tkinter. First, we will create a directory named Quiz in the location of your preference.
Overview
Steps Needed
1. We will create data.json for storing the data
The data for the quiz is defined in data.json with JSON data, which are name/value pairs and contain an array of values. We have defined sample data for the quiz as shown below:
{
"question": [
"Q1. What Indian city is the capital of two states?",
"Q2. Which city is the capital of India?",
"Q3. Smallest State of India?",
"Q4. Where is Taj Mahal Located?"
],
"answer": [
1,
2,
3,
2
],
"options": [
["Chandigarh",
"Kolkata",
"Delhi",
"Bangalore"
],
["Jaipur",
"Delhi",
"Chennai",
"Mumbai"
],
["Rajasthan",
"Punjab",
"Goa",
"Bihar"
],
["Lucknow",
"Agra",
"Bhopal",
"Delhi"
]
]
}
2. Creating the GUI using Tkinter in quiz.py
Importing the module: tkinter and json
Create the main window (container) of the app
Add widgets to display data
Add the functionalities to the button
Using the data in the Quiz
Note: Both the data.json and the quiz.py will be created in the same directory that we have defined above.
Now that we have created the data.json file for storing the data we are going to create quiz.py file which will contain the program for the quiz.
Python3
# Python program to create a simple GUI
# Simple Quiz using Tkinter

# import everything from tkinter
from tkinter import *

# and import messagebox as mb from tkinter
from tkinter import messagebox as mb

# import json to use a json file for data
import json

# class to define the components of the GUI
class Quiz:
    # This is the first method which is called when a
    # new object of the class is initialized. This method
    # sets the question count to 0 and initializes all the
    # other methods to display the content and make all the
    # functionalities available
    def __init__(self):
        # set question number to 0
        self.q_no = 0

        # assigns ques to the display_question function to update later.
        self.display_title()
        self.display_question()

        # opt_selected holds an integer value which is used for
        # the selected option in a question.
        self.opt_selected = IntVar()

        # displaying radio buttons for the current question and used to
        # display options for the current question
        self.opts = self.radio_buttons()

        # display options for the current question
        self.display_options()

        # displays the buttons for next and exit.
        self.buttons()

        # no of questions
        self.data_size = len(question)

        # keep a counter of correct answers
        self.correct = 0

    # This method is used to display the result.
    # It counts the number of correct and wrong answers
    # and then displays them at the end as a message box
    def display_result(self):
        # calculates the wrong count
        wrong_count = self.data_size - self.correct
        correct = f"Correct: {self.correct}"
        wrong = f"Wrong: {wrong_count}"

        # calculates the percentage of correct answers
        score = int(self.correct / self.data_size * 100)
        result = f"Score: {score}%"

        # Shows a message box to display the result
        mb.showinfo("Result", f"{result}\n{correct}\n{wrong}")

    # This method checks the answer after we click on Next.
    def check_ans(self, q_no):
        # checks if the selected option is correct
        if self.opt_selected.get() == answer[q_no]:
            # if the option is correct it returns true
            return True

    # This method checks the answer of the current question by
    # calling check_ans with the question number. If the answer is
    # correct it increases the correct count by 1, then it increases
    # the question number by 1. If it is the last question it calls
    # display_result to show the message box, otherwise it shows
    # the next question.
    def next_btn(self):
        # Check if the answer is correct
        if self.check_ans(self.q_no):
            # if the answer is correct it increments correct by 1
            self.correct += 1

        # Moves to the next question by incrementing the q_no counter
        self.q_no += 1

        # checks if the q_no counter is equal to the data size
        if self.q_no == self.data_size:
            # if so it displays the score
            self.display_result()

            # destroys the GUI
            gui.destroy()
        else:
            # shows the next question
            self.display_question()
            self.display_options()

    # This method shows the two buttons on the screen.
    # The first one is the next_button which moves to the next question.
    # It has properties like the text it shows, its functionality, size,
    # color, and properties of the text displayed on the button, as well
    # as where to place the button on the screen. The second button is
    # the exit button which is used to close the GUI without completing
    # the quiz.
    def buttons(self):
        # The first button is the Next button to move to the
        # next question
        next_button = Button(gui, text="Next", command=self.next_btn,
                             width=10, bg="blue", fg="white", font=("ariel", 16, "bold"))

        # placing the button on the screen
        next_button.place(x=350, y=380)

        # This is the second button which is used to quit the GUI
        quit_button = Button(gui, text="Quit", command=gui.destroy,
                             width=5, bg="black", fg="white", font=("ariel", 16, "bold"))

        # placing the Quit button on the screen
        quit_button.place(x=700, y=50)

    # This method deselects the radio buttons on the screen.
    # It then displays the options available for the current question,
    # which we obtain through the question number, and updates the text
    # of each radio button with the options of the current question.
    def display_options(self):
        val = 0

        # deselecting the options
        self.opt_selected.set(0)

        # looping over the options to be displayed for the
        # text of the radio buttons.
        for option in options[self.q_no]:
            self.opts[val]['text'] = option
            val += 1

    # This method shows the current question on the screen
    def display_question(self):
        # setting the question properties
        q_no = Label(gui, text=question[self.q_no], width=60,
                     font=('ariel', 16, 'bold'), anchor='w')

        # placing the question on the screen
        q_no.place(x=70, y=100)

    # This method is used to display the title
    def display_title(self):
        # The title to be shown
        title = Label(gui, text="GeeksforGeeks QUIZ", width=50,
                      bg="green", fg="white", font=("ariel", 20, "bold"))

        # place of the title
        title.place(x=0, y=2)

    # This method shows the radio buttons used to select an answer
    # on the screen at the specified position. It also returns a list
    # of radio buttons which are later used to add the options to them.
    def radio_buttons(self):
        # initialize the list with an empty list of options
        q_list = []

        # position of the first option
        y_pos = 150

        # adding the options to the list
        while len(q_list) < 4:
            # setting the radio button properties
            radio_btn = Radiobutton(gui, text=" ", variable=self.opt_selected,
                                    value=len(q_list) + 1, font=("ariel", 14))

            # adding the button to the list
            q_list.append(radio_btn)

            # placing the button
            radio_btn.place(x=100, y=y_pos)

            # incrementing the y-axis position by 40
            y_pos += 40

        # return the radio buttons
        return q_list

# Create a GUI Window
gui = Tk()

# set the size of the GUI Window
gui.geometry("800x450")

# set the title of the Window
gui.title("GeeksforGeeks Quiz")

# get the data from the json file
with open('data.json') as f:
    data = json.load(f)

# set the question, options, and answer
question = (data['question'])
options = (data['options'])
answer = (data['answer'])

# create an object of the Quiz Class.
quiz = Quiz()

# Start the GUI
gui.mainloop()

# END OF THE PROGRAM
Output:
sweetyty
surinderdawra388
Picked
Python Tkinter-projects
Python-tkinter
Python
{
"code": null,
"e": 23925,
"s": 23897,
"text": "\n20 Apr, 2022"
},
{
"code": null,
"e": 23960,
"s": 23925,
"text": "Prerequisite: Python GUI – tkinter"
},
{
"code": null,
"e": 24290,
"s": 23960,
"text": "Python provides a standard GUI framework Tkinter which is used to develop fast and easy GUI applications. Here we will be developing a simple multiple-choice quiz in python with GUI. We will be creating a multiple choice quiz in Python with Tkinter. First, we will create a library named Quiz in the directory of your preference."
},
{
"code": null,
"e": 24299,
"s": 24290,
"text": "Overview"
},
{
"code": null,
"e": 24312,
"s": 24299,
"text": "Steps Needed"
},
{
"code": null,
"e": 24361,
"s": 24312,
"text": "1. We will create data.json for storing the data"
},
{
"code": null,
"e": 25391,
"s": 25213,
"text": "The data for the quiz is defined in data.json with JSON data which are name/value pairs and contain an array of values. We have defined sample data for the quiz as shown below :"
},
{
"code": null,
"e": 25940,
"s": 25391,
"text": "{\n \"question\": [\n \"Q1. What Indian city is the capital of two states?\",\n \"Q2. Which city is the capital of India?\",\n \"Q3. Smallest State of India?\",\n \"Q4. Where is Taj Mahal Located?\"\n ],\n \"answer\": [\n 1,\n 2,\n 3,\n 2\n ],\n \"options\": [\n\n [\"Chandigarh\",\n \"Kolkata\",\n \"Delhi\",\n \"Bangalore\"\n ],\n [\"Jaipur\",\n \"Delhi\",\n \"Chennai\",\n \"Mumbai\"\n ],\n [\"Rajasthan\",\n \"Punjab\",\n \"Goa\",\n \"Bihar\"\n ],\n [\"Lucknow\",\n \"Agra\",\n \"Bhopal\",\n \"Delhi\"\n ]\n ]\n}"
},
{
"code": null,
"e": 25985,
"s": 25940,
"text": "2. Creating the GUI using Tkinter in quiz.py"
},
{
"code": null,
"e": 26198,
"s": 26159,
"text": "Importing the module: tkinter and json"
},
{
"code": null,
"e": 26244,
"s": 26198,
"text": "Create the main window (container) of the app"
},
{
"code": null,
"e": 26272,
"s": 26244,
"text": "Add widgets to display data"
},
{
"code": null,
"e": 26310,
"s": 26272,
"text": "Add the functionalities to the button"
},
{
"code": null,
"e": 26337,
"s": 26310,
"text": "Using the data in the Quiz"
},
{
"code": null,
"e": 26444,
"s": 26337,
"text": "Note: Both the data.json and the quiz.py will be created in the same directory that we have defined above."
},
{
"code": null,
"e": 26590,
"s": 26444,
"text": "Now that we have created the data.json file for storing the data we are going to create quiz.py file which will contain the program for the quiz."
},
{
"code": null,
"e": 26598,
"s": 26590,
"text": "Python3"
},
{
"code": "# Python program to create a simple GUI# Simple Quiz using Tkinter #import everything from tkinterfrom tkinter import * # and import messagebox as mb from tkinterfrom tkinter import messagebox as mb #import json to use json file for dataimport json #class to define the components of the GUIclass Quiz: # This is the first method which is called when a # new object of the class is initialized. This method # sets the question count to 0. and initialize all the # other methoods to display the content and make all the # functionalities available def __init__(self): # set question number to 0 self.q_no=0 # assigns ques to the display_question function to update later. self.display_title() self.display_question() # opt_selected holds an integer value which is used for # selected option in a question. self.opt_selected=IntVar() # displaying radio button for the current question and used to # display options for the current question self.opts=self.radio_buttons() # display options for the current question self.display_options() # displays the button for next and exit. self.buttons() # no of questions self.data_size=len(question) # keep a counter of correct answers self.correct=0 # This method is used to display the result # It counts the number of correct and wrong answers # and then display them at the end as a message Box def display_result(self): # calculates the wrong count wrong_count = self.data_size - self.correct correct = f\"Correct: {self.correct}\" wrong = f\"Wrong: {wrong_count}\" # calcultaes the percentage of correct answers score = int(self.correct / self.data_size * 100) result = f\"Score: {score}%\" # Shows a message box to display the result mb.showinfo(\"Result\", f\"{result}\\n{correct}\\n{wrong}\") # This method checks the Answer after we click on Next. def check_ans(self, q_no): # checks for if the selected option is correct if self.opt_selected.get() == answer[q_no]: # if the option is correct it return true return True # This method is used to check the answer of the # current question by calling the check_ans and question no. # if the question is correct it increases the count by 1 # and then increase the question number by 1. If it is last # question then it calls display result to show the message box. # otherwise shows next question. def next_btn(self): # Check if the answer is correct if self.check_ans(self.q_no): # if the answer is correct it increments the correct by 1 self.correct += 1 # Moves to next Question by incrementing the q_no counter self.q_no += 1 # checks if the q_no size is equal to the data size if self.q_no==self.data_size: # if it is correct then it displays the score self.display_result() # destroys the GUI gui.destroy() else: # shows the next question self.display_question() self.display_options() # This method shows the two buttons on the screen. # The first one is the next_button which moves to next question # It has properties like what text it shows the functionality, # size, color, and property of text displayed on button. Then it # mentions where to place the button on the screen. The second # button is the exit button which is used to close the GUI without # completing the quiz. 
def buttons(self): # The first button is the Next button to move to the # next Question next_button = Button(gui, text=\"Next\",command=self.next_btn, width=10,bg=\"blue\",fg=\"white\",font=(\"ariel\",16,\"bold\")) # placing the button on the screen next_button.place(x=350,y=380) # This is the second button which is used to Quit the GUI quit_button = Button(gui, text=\"Quit\", command=gui.destroy, width=5,bg=\"black\", fg=\"white\",font=(\"ariel\",16,\" bold\")) # placing the Quit button on the screen quit_button.place(x=700,y=50) # This method deselect the radio button on the screen # Then it is used to display the options available for the current # question which we obtain through the question number and Updates # each of the options for the current question of the radio button. def display_options(self): val=0 # deselecting the options self.opt_selected.set(0) # looping over the options to be displayed for the # text of the radio buttons. for option in options[self.q_no]: self.opts[val]['text']=option val+=1 # This method shows the current Question on the screen def display_question(self): # setting the Question properties q_no = Label(gui, text=question[self.q_no], width=60, font=( 'ariel' ,16, 'bold' ), anchor= 'w' ) #placing the option on the screen q_no.place(x=70, y=100) # This method is used to Display Title def display_title(self): # The title to be shown title = Label(gui, text=\"GeeksforGeeks QUIZ\", width=50, bg=\"green\",fg=\"white\", font=(\"ariel\", 20, \"bold\")) # place of the title title.place(x=0, y=2) # This method shows the radio buttons to select the Question # on the screen at the specified position. It also returns a # list of radio button which are later used to add the options to # them. def radio_buttons(self): # initialize the list with an empty list of options q_list = [] # position of the first option y_pos = 150 # adding the options to the list while len(q_list) < 4: # setting the radio button properties radio_btn = Radiobutton(gui,text=\" \",variable=self.opt_selected, value = len(q_list)+1,font = (\"ariel\",14)) # adding the button to the list q_list.append(radio_btn) # placing the button radio_btn.place(x = 100, y = y_pos) # incrementing the y-axis position by 40 y_pos += 40 # return the radio buttons return q_list # Create a GUI Windowgui = Tk() # set the size of the GUI Windowgui.geometry(\"800x450\") # set the title of the Windowgui.title(\"GeeksforGeeks Quiz\") # get the data from the json filewith open('data.json') as f: data = json.load(f) # set the question, options, and answerquestion = (data['question'])options = (data['options'])answer = (data[ 'answer']) # create an object of the Quiz Class.quiz = Quiz() # Start the GUIgui.mainloop() # END OF THE PROGRAM",
"e": 33721,
"s": 26598,
"text": null
},
{
"code": null,
"e": 33730,
"s": 33721,
"text": "Output: "
},
{
"code": null,
"e": 33741,
"s": 33732,
"text": "sweetyty"
},
{
"code": null,
"e": 33758,
"s": 33741,
"text": "surinderdawra388"
},
{
"code": null,
"e": 33765,
"s": 33758,
"text": "Picked"
},
{
"code": null,
"e": 33789,
"s": 33765,
"text": "Python Tkinter-projects"
},
{
"code": null,
"e": 33804,
"s": 33789,
"text": "Python-tkinter"
},
{
"code": null,
"e": 33811,
"s": 33804,
"text": "Python"
}
]
|
C++ Program To Find Decimal Equivalent Of Binary Linked List - GeeksforGeeks | 03 Jan, 2022
Given a singly linked list of 0s and 1s find its decimal equivalent.
Input: 0->0->0->1->1->0->0->1->0
Output: 50
Input: 1->0->0
Output: 4
The decimal value of an empty linked list is considered as 0.
Initialize the result as 0. Traverse the linked list and for each node, multiply the result by 2 and add the node’s data to it.
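For example, for the list 1 -> 0 -> 1 -> 1, the result evolves as 0, then (0 * 2) + 1 = 1, (1 * 2) + 0 = 2, (2 * 2) + 1 = 5 and finally (5 * 2) + 1 = 11, which is the decimal value of binary 1011.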
C++
// C++ Program to find decimal value
// of binary linked list
#include <bits/stdc++.h>
using namespace std;

// Link list Node
class Node {
public:
    bool data;
    Node* next;
};

/* Returns decimal value of binary linked list */
int decimalValue(Node *head)
{
    // Initialized result
    int res = 0;

    // Traverse linked list
    while (head != NULL) {
        // Multiply result by 2 and
        // add head's data
        res = (res << 1) + head->data;

        // Move next
        head = head->next;
    }
    return res;
}

// Utility function to create a
// new node.
Node *newNode(bool data)
{
    Node *temp = new Node;
    temp->data = data;
    temp->next = NULL;
    return temp;
}

// Driver code
int main()
{
    // Start with the empty list
    Node* head = newNode(1);
    head->next = newNode(0);
    head->next->next = newNode(1);
    head->next->next->next = newNode(1);
    cout << "Decimal value is " << decimalValue(head);
    return 0;
}

// This code is contributed by rathbhupendra
Output :
Decimal value is 11
Please refer to the complete article on Decimal Equivalent of Binary Linked List for more details!
Juniper Networks
C++ Programs
Linked List
Juniper Networks
Linked List
{
"code": null,
"e": 24215,
"s": 24187,
"text": "\n03 Jan, 2022"
},
{
"code": null,
"e": 24284,
"s": 24215,
"text": "Given a singly linked list of 0s and 1s find its decimal equivalent."
},
{
"code": null,
"e": 24357,
"s": 24284,
"text": "Input: 0->0->0->1->1->0->0->1->0\nOutput: 50 \n\nInput: 1->0->0\nOutput: 4"
},
{
"code": null,
"e": 24419,
"s": 24357,
"text": "The decimal value of an empty linked list is considered as 0."
},
{
"code": null,
"e": 24547,
"s": 24419,
"text": "Initialize the result as 0. Traverse the linked list and for each node, multiply the result by 2 and add the node’s data to it."
},
{
"code": null,
"e": 24551,
"s": 24547,
"text": "C++"
},
{
"code": "// C++ Program to find decimal value // of binary linked list #include <bits/stdc++.h>using namespace std; // Link list Node class Node { public: bool data; Node* next; }; /* Returns decimal value of binary linked list */int decimalValue(Node *head) { // Initialized result int res = 0; // Traverse linked list while (head != NULL) { // Multiply result by 2 and // add head's data res = (res << 1) + head->data; // Move next head = head->next; } return res; } // Utility function to create a // new node. Node *newNode(bool data) { Node *temp = new Node; temp->data = data; temp->next = NULL; return temp; } // Driver codeint main() { // Start with the empty list Node* head = newNode(1); head->next = newNode(0); head->next->next = newNode(1); head->next->next->next = newNode(1); cout << \"Decimal value is \" << decimalValue(head); return 0; } // This is code is contributed by rathbhupendra",
"e": 25593,
"s": 24551,
"text": null
},
{
"code": null,
"e": 25603,
"s": 25593,
"text": "Output : "
},
{
"code": null,
"e": 25623,
"s": 25603,
"text": "Decimal value is 11"
},
{
"code": null,
"e": 25715,
"s": 25623,
"text": "Please refer complete article on Decimal Equivalent of Binary Linked List for more details!"
},
{
"code": null,
"e": 25732,
"s": 25715,
"text": "Juniper Networks"
},
{
"code": null,
"e": 25745,
"s": 25732,
"text": "C++ Programs"
},
{
"code": null,
"e": 25757,
"s": 25745,
"text": "Linked List"
},
{
"code": null,
"e": 25774,
"s": 25757,
"text": "Juniper Networks"
},
{
"code": null,
"e": 25786,
"s": 25774,
"text": "Linked List"
}
]
|
What is SimpleDateFormat in Java? | The java.text.SimpleDateFormat class is used to format and parse a string to date and date to string.
One of the constructors of this class accepts a String value representing the desired date format and creates a SimpleDateFormat object. To parse/convert a string into a Date object:
Instantiate this class by passing desired format string.
Parse the date string using the parse() method.
Live Demo
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
public class Sample {
public static void main(String args[]) throws ParseException {
String date_string = "2007-25-06";
//Instantiating the SimpleDateFormat class
SimpleDateFormat formatter = new SimpleDateFormat("yyyy-dd-MM");
//Parsing the given String to Date object
Date date = formatter.parse(date_string);
System.out.println("Date value: "+date);
}
}
Date value: Mon Jun 25 00:00:00 IST 2007
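Conversely, to format a Date object into a string, you can use the format() method of the same class. The following is a minimal sketch (the class name and pattern here are just illustrative choices):

import java.text.SimpleDateFormat;
import java.util.Date;
public class FormatDemo {
   public static void main(String args[]) {
      //Instantiating the SimpleDateFormat class
      SimpleDateFormat formatter = new SimpleDateFormat("yyyy-dd-MM");
      //Formatting the current date into a string
      String dateString = formatter.format(new Date());
      System.out.println("Formatted date: "+dateString);
   }
}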
The toPattern() method of this class returns the pattern string representing the format of the current object.
Live Demo
import java.text.ParseException;
import java.text.SimpleDateFormat;
public class Demo {
public static void main(String args[]) throws ParseException {
SimpleDateFormat obj = new SimpleDateFormat();
String pattern = obj.toPattern();
System.out.println(pattern);
}
}
M/d/yy h:mm a
The parse() method of this class also accepts a ParsePosition as a parameter along with the date string and parses a date from within a text.
Live Demo
import java.text.ParseException;
import java.text.ParsePosition;
import java.text.SimpleDateFormat;
import java.util.Date;
public class Sample {
public static void main(String args[]) throws ParseException {
String text = "Marriage date of Samrat is 2007-25-06";
//Instantiating the SimpleDateFormat class
SimpleDateFormat formatter = new SimpleDateFormat("yyyy-dd-MM");
//Parsing date from the given text
ParsePosition pos = new ParsePosition(27);
Date date = formatter.parse(text, pos);
System.out.println("Date value: "+date);
}
}
Date value: Mon Jun 25 00:00:00 IST 2007 | [
{
"code": null,
"e": 1164,
"s": 1062,
"text": "The java.text.SimpleDateFormat class is used to format and parse a string to date and date to string."
},
{
"code": null,
"e": 1342,
"s": 1164,
"text": "One of the constructors of this class accepts a String value representing the desired date format and creates SimpleDateFormat object. To parse/convert a string as a Date object"
},
{
"code": null,
"e": 1399,
"s": 1342,
"text": "Instantiate this class by passing desired format string."
},
{
"code": null,
"e": 1447,
"s": 1399,
"text": "Parse the date string using the parse() method."
},
{
"code": null,
"e": 1457,
"s": 1447,
"text": "Live Demo"
},
{
"code": null,
"e": 1961,
"s": 1457,
"text": "import java.text.ParseException;\nimport java.text.SimpleDateFormat;\nimport java.util.Date;\npublic class Sample {\n public static void main(String args[]) throws ParseException { \n String date_string = \"2007-25-06\";\n //Instantiating the SimpleDateFormat class\n SimpleDateFormat formatter = new SimpleDateFormat(\"yyyy-dd-MM\"); \n //Parsing the given String to Date object\n Date date = formatter.parse(date_string); \n System.out.println(\"Date value: \"+date);\n }\n}"
},
{
"code": null,
"e": 2002,
"s": 1961,
"text": "Date value: Mon Jun 25 00:00:00 IST 2007"
},
{
"code": null,
"e": 2113,
"s": 2002,
"text": "The toPattern() method of this class returns the pattern string representing the format of the current object."
},
{
"code": null,
"e": 2123,
"s": 2113,
"text": "Live Demo"
},
{
"code": null,
"e": 2414,
"s": 2123,
"text": "import java.text.ParseException;\nimport java.text.SimpleDateFormat;\npublic class Demo {\n public static void main(String args[]) throws ParseException { \n SimpleDateFormat obj = new SimpleDateFormat();\n String pattern = obj.toPattern();\n System.out.println(pattern);\n }\n}"
},
{
"code": null,
"e": 2428,
"s": 2414,
"text": "M/d/yy h:mm a"
},
{
"code": null,
"e": 2551,
"s": 2428,
"text": "The parse() method of this class accepts ParsePosition as a parameter along with date string and, parses date from a text."
},
{
"code": null,
"e": 2561,
"s": 2551,
"text": "Live Demo"
},
{
"code": null,
"e": 3151,
"s": 2561,
"text": "import java.text.ParseException;\nimport java.text.ParsePosition;\nimport java.text.SimpleDateFormat;\nimport java.util.Date;\npublic class Sample {\n public static void main(String args[]) throws ParseException { \n String text = \"Marriage date of Samrat is 2007-25-06\";\n //Instantiating the SimpleDateFormat class\n SimpleDateFormat formatter = new SimpleDateFormat(\"yyyy-dd-MM\"); \n //Parsing date from the given text\n ParsePosition pos = new ParsePosition(27);\n Date date = formatter.parse(text, pos);\n System.out.println(\"Date value: \"+date);\n }\n}"
},
{
"code": null,
"e": 3192,
"s": 3151,
"text": "Date value: Mon Jun 25 00:00:00 IST 2007"
}
]
|
Program to find Perimeter / Circumference of Square and Rectangle in C++ | In this problem, we are given the side of a square (A) and the length and breadth of a rectangle (L and B). Our task is to create a Program to find Perimeter / Circumference of Square and Rectangle in C++.
To find the circumference of a square, we need the side of the square (a). For that, we will use the formula for the perimeter of the square which is 4a.
To find the circumference of a square, we need the side of the square (a). For that, we will use the formula for the perimeter of the square which is 4a.
To find the circumference of a rectangle, we need the Length (L) and breadth (B) of the rectangle. For that, we will use the formula for the perimeter of the rectangle which is 2(L+B).
To find the circumference of a rectangle, we need the Length (L) and breadth (B) of the rectangle. For that, we will use the formula for the perimeter of the rectangle which is 2(L+B).
Square is a four-sided closed figure that has all four sides equal and all angle 90 degrees.
Formula for perimeter / circumference of square = (4 * a)
Let’s take an example to understand the problem,
Input − 5
Output − 20
Input − 12
Output − 48
#include <iostream>
using namespace std;
int calcCircumference(int a){
int perimeter = (4 * a);
return perimeter;
}
int main() {
int a = 6;
cout<<"The Perimeter / Circumference of Square is
"<<calcCircumference(a);
}
The Perimeter / Circumference of Square is 24
Square is a four-sided closed figure that has opposite sides equal and all angle 90 degrees.
Formula for perimeter/circumference of square = 2 * (l + b)
Let’s take an example to understand the problem,
Input − l = 7; b = 3
Output − 20
Input − l = 13; b = 6
Output − 38
#include <iostream>
using namespace std;
int calcCircumference(int l, int b){
   int perimeter = (2 * (l + b));
return perimeter;
}
int main() {
int l = 8, b = 5;
cout<<"The Perimeter / Circumference of Rectangle is
"<<calcCircumference(l, b);
}
The Perimeter / Circumference of Rectangle is 26 | [
{
"code": null,
"e": 1268,
"s": 1062,
"text": "In this problem, we are given the side of a square (A) and the length and breadth of a rectangle (L and B). Our task is to create a Program to find Perimeter / Circumference of Square and Rectangle in C++."
},
{
"code": null,
"e": 1422,
"s": 1268,
"text": "To find the circumference of a square, we need the side of the square (a). For that, we will use the formula for the perimeter of the square which is 4a."
},
{
"code": null,
"e": 1576,
"s": 1422,
"text": "To find the circumference of a square, we need the side of the square (a). For that, we will use the formula for the perimeter of the square which is 4a."
},
{
"code": null,
"e": 1761,
"s": 1576,
"text": "To find the circumference of a rectangle, we need the Length (L) and breadth (B) of the rectangle. For that, we will use the formula for the perimeter of the rectangle which is 2(L+B)."
},
{
"code": null,
"e": 1946,
"s": 1761,
"text": "To find the circumference of a rectangle, we need the Length (L) and breadth (B) of the rectangle. For that, we will use the formula for the perimeter of the rectangle which is 2(L+B)."
},
{
"code": null,
"e": 2039,
"s": 1946,
"text": "Square is a four-sided closed figure that has all four sides equal and all angle 90 degrees."
},
{
"code": null,
"e": 2097,
"s": 2039,
"text": "Formula for perimeter / circumference of square = (4 * a)"
},
{
"code": null,
"e": 2146,
"s": 2097,
"text": "Let’s take an example to understand the problem,"
},
{
"code": null,
"e": 2156,
"s": 2146,
"text": "Input − 5"
},
{
"code": null,
"e": 2168,
"s": 2156,
"text": "Output − 20"
},
{
"code": null,
"e": 2179,
"s": 2168,
"text": "Input − 12"
},
{
"code": null,
"e": 2191,
"s": 2179,
"text": "Output − 48"
},
{
"code": null,
"e": 2423,
"s": 2191,
"text": "#include <iostream>\nusing namespace std;\nint calcCircumference(int a){\n int perimeter = (4 * a);\n return perimeter;\n}\nint main() {\n int a = 6;\n cout<<\"The Perimeter / Circumference of Square is\n \"<<calcCircumference(a);\n}"
},
{
"code": null,
"e": 2469,
"s": 2423,
"text": "The Perimeter / Circumference of Square is 24"
},
{
"code": null,
"e": 2562,
"s": 2469,
"text": "Square is a four-sided closed figure that has opposite sides equal and all angle 90 degrees."
},
{
"code": null,
"e": 2622,
"s": 2562,
"text": "Formula for perimeter/circumference of square = 2 * (l + b)"
},
{
"code": null,
"e": 2671,
"s": 2622,
"text": "Let’s take an example to understand the problem,"
},
{
"code": null,
"e": 2692,
"s": 2671,
"text": "Input − l = 7; b = 3"
},
{
"code": null,
"e": 2704,
"s": 2692,
"text": "Output − 20"
},
{
"code": null,
"e": 2726,
"s": 2704,
"text": "Input − l = 13; b = 6"
},
{
"code": null,
"e": 2738,
"s": 2726,
"text": "Output − 38"
},
{
"code": null,
"e": 3000,
"s": 2738,
"text": "#include <iostream>\nusing namespace std;\nint calcCircumference(int l, int b){\n int perimeter = (2 * (l + b)); w =\n return perimeter;\n}\nint main() {\n int l = 8, b = 5;\n cout<<\"The Perimeter / Circumference of Rectangle is\n \"<<calcCircumference(l, b);\n}"
},
{
"code": null,
"e": 3049,
"s": 3000,
"text": "The Perimeter / Circumference of Rectangle is 26"
}
]
|
Logic and implementation of a spam filter machine learning algorithm | by Lily Chen | Towards Data Science | Below is an email caught by Gmail’s spam filter. How did the spam filter decide this was a spam?
In this article, I talk about Naive Bayes algorithm for spam classification.
We know all of the words that this email contains:
[want, to, maximize, your, time, ..., save, on, the, breakers, bonus]
At the high level, the way spam classification works is this:
What’s the probability this email is spam given the list of words in the email?
The mathematical way of representing this question is:
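P(spam | word_1, word_2, ..., word_n)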
We need to find this probability.
Let’s break it down.
One of the words in the email is “bonus”.
If bonus was the only word in the email, what would be the probability this email is spam?
Let’s solve for P(spam|bonus).
In order to find P(spam|bonus), you need an existing dataset of emails. This is your training data. The probability that an email is spam is based on information from this training data.
Let’s say you have an existing dataset of 100 emails. 20 are spam and 80 are not spam. The spam emails combined contain 2000 words. The non spam emails combined contain 8000 words. Of the spam emails, the word “bonus” appears 1000 times. Of the non spam emails, the word bonus appears 800 times. Below is what your data looks like:
You can think of area of the rectangle as number of words.
We need to find the probability of “spam” given “bonus”, or P(spam | bonus).
To answer this question, we use Bayes theorem.
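P(spam|bonus) = P(bonus|spam) * P(spam) / P(bonus)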
P(spam):
The probability of spam is number of spam divided by total number of emails.
20/100 = 0.2
P(bonus|spam):
The probability of seeing “bonus” in spam is number of “bonus” in spam divided by total number of words in spam.
1000/2000 = 0.5
P(bonus):
Probability of bonus is total number of “bonus” (1000 in spam and 800 in non spam) divided by total number of words:
(1000 + 800)/10000 = 0.18.
This is the same as the area taken by “bonus” in the entire rectangle.
P(spam|bonus):
Plug the numbers into Bayes theorem to find the probability an email is spam if it contains “bonus”.
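P(spam|bonus) = (0.5 * 0.2) / 0.18 ≈ 0.56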
If “bonus” was the only word in the email, the probability that the email is spam is 0.56.
Let’s look at 2 words now. What if the email only contains: “save bonus”?
Instead of P(spam|bonus), we need P(spam|save, bonus).
Using Bayes theorem, this becomes:
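P(spam|save, bonus) = P(save, bonus|spam) * P(spam) / P(save, bonus)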
P(spam):
The probability of spam is still number of spam divided by total number of emails.
20/100 = 0.2
P(save, bonus|spam):
By chain rule of probability
We already calculated P(bonus|spam) to be 0.5 because there are 1000 “bonus” words out of 2000 total words in spam emails.
Let’s say there are 100 “save” in spam emails and 160 “save” in non spam emails. Our rectangle of words now looks like this:
P(save|spam) is going to be number of “save” in spam divided by total number of words in spam.
P(save|spam) = 100/2000 = 0.05
Taken together,
P(save, bonus|spam) = P(save|spam)*P(bonus|spam) = 0.5 * 0.05 = 0.025
Note: this formula assumes the words in the email are conditionally independent of each other. In other words, having “save” in the email does not change the probability of having “bonus” in the same email, and vice versa. This assumption is why we call the algorithm Naive Bayes.
P(save, bonus):
To find what is P(save, bonus), we need to do some math.
We know that P(spam|save, bonus) + P(not spam|save, bonus) must be 1.
Substitute in Bayes equation:
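P(save, bonus|spam) * P(spam) / P(save, bonus) + P(save, bonus|not spam) * P(not spam) / P(save, bonus) = 1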
Multiply both sides by P(save, bonus) to get:
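P(save, bonus) = P(spam) * P(save, bonus|spam) + P(not spam) * P(save, bonus|not spam)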
P(not spam) is just the number of non spam emails divided by the total number of emails. P(not spam) = 80/100, or 0.8.
P(save, bonus | not spam) = P(save |not spam) * P(bonus|not spam) = 160/8000 * 800/8000 = 0.002
P(save, bonus) = 0.2*0.025 + 0.8*0.002 = 0.0066
P(spam|save, bonus):
Plug the numbers into Bayes theorem to find the probability an email is spam if it contains “save bonus”.
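P(spam|save, bonus) = (0.025 * 0.2) / 0.0066 ≈ 0.76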
Now, let’s analyze the real Bay to Breakers of 130+ words.
Applying the same logic for more words, the formula becomes
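P(spam|word_1, word_2, ..., word_n) = P(word_1, word_2, ..., word_n|spam) * P(spam) / P(word_1, word_2, ..., word_n)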
P(spam):
Probability of spam is still 20/100, or 0.2.
P(word_1, word_2,...,word_n| spam):
We find P(word|spam) for every word, and take the product of all the probabilities.
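P(word_1, word_2, ..., word_n|spam) = P(word_1|spam) * P(word_2|spam) * ... * P(word_n|spam)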
P(word_1, word_2,...,word_n):
The denominator of the equation, i.e. P(word_1, word_2,...,word_n), can also be written as
P(spam) * P(word_1, word_2,...,word_n| spam) + P(not spam) * P(word_1, word_2,...,word_n | not spam)
The first half of the equation is basically the numerator of Bayes theorem, and was solved in the steps above.
P(not spam) is 80/100. To find P(word_1, word_2, ..., word_n | not spam), we find P(word|not spam) for every word, and take the product of all the probabilities.
Mathematically, it is equivalent to
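P(word_1, word_2, ..., word_n|not spam) = P(word_1|not spam) * P(word_2|not spam) * ... * P(word_n|not spam)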
Taken everything together, the probability that the email from Bay to Breakers is spam is:
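P(spam|word_1, ..., word_n) = [P(spam) * P(word_1|spam) * ... * P(word_n|spam)] / [P(spam) * P(word_1|spam) * ... * P(word_n|spam) + P(not spam) * P(word_1|not spam) * ... * P(word_n|not spam)]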
Everything works great... except if you encounter a new word previously unseen in your training data. Let’s say an email contains the single word “GetYourFreeCookieNow” and your algorithm needs to decide whether it’s spam or not.
Most likely your training data does not include this word. In our example, “GetYourFreeCookieNow” was present in neither our 20 spam emails nor our 80 non spam emails. If that’s the case, P(GetYourFreeCookieNow | spam) is zero. P(GetYourFreeCookieNow | not spam) is also zero. You can’t divide 0 by 0, so the algorithm will fail.
Similarly, if a word (let’s say “calendar”) never appeared in spam emails of your training set, you always get probability of 0 for spam if an email contains “calendar”.
So let’s tweak our Naive Bayes algorithm to fix this problem.
We solve this problem by adding a count of 1 to each word. For example, even though “GetYourFreeCookieNow” was not in our training data, the numerator of P(GetYourFreeCookieNow | spam) will now be 1 instead of 0. Therefore, you will not get a zero for P(GetYourFreeCookieNow | spam).
Let’s look at our P(bonus|spam) example.
Before adding 1 to each word:
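P(bonus|spam) = 1000 / 2000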
After adding 1 to each word:
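P(bonus|spam) = (1000 + 1) / (2000 + W_unique)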
W_unique is the number of unique words in all of the emails.
Below is a video illustration of spam classification based on Naive Bayes.
Now that we talked about the theory behind email spam classification. Let’s implement it.
First, you need a training set. This is the data I used from Kaggle. It’s in a csv format and looks like this:
The label “ham” means “not spam”.
From this data, we need to compute several things:
Number of spam emails
Number of non spam emails (i.e. “ham” emails).
Number of unique words in the vocabulary.
Total number of words in spam emails.
Total number of words in ham emails.
Number of occurrences of each word in spam emails.
Number of occurrences of each word in ham emails.
I’m going to iterate through the csv file email by email, and increment the value for the 7 variables we need. You can find the code in my Github repo.
github.com
This is the result I got:
number of spam emails: 1451number of non spam emails: 3512total unique words: 45819total spam words: 282332total non spam words: 487196spam words with count: {'subject': 1741, 'photoshop': 80, 'windows': 207, 'office': 204, 'cheap': 79, 'main': 17, 'trending': 2, ... }non spam words with count: {'subject': 6111, 'enron': 6209, 'methanol': 114, 'meter': 2381, 'this': 4680, 'is': 4730, 'a': 5765, 'follow': 93, ...}
Now, we’re ready to classify whether the Bay to Breakers email is spam.
There are a few things about my algorithm that differ from the formula presented above.
1. Calculate LOG of probability
I calculated the log of the probability instead of the probability itself. This is because the product of P(word | spam) for all the words is going to be very close to 0, and the computer would just round it down to zero.
Just a reminder, Bayes theorem is this:
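P(spam|word_1, ..., word_n) = P(word_1, ..., word_n|spam) * P(spam) / P(word_1, ..., word_n)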
We do a transformation by applying the log. This is the new numerator:
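log(P(spam)) + log(P(word_1|spam)) + log(P(word_2|spam)) + ... + log(P(word_n|spam))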
To calculate the numerator for spam, my algorithm iterates through each word in the email and sums up the log of P(word | spam).
The complete code is in this Github repo.
2. Compare the numerator of Bayes theorem for the probability of spam and the probability of not spam.
We don’t need to compute the denominator of Naive Bayes theorem at all. We just need to compare which numerator is greater: the probability of spam given the words in the email, or the probability of not spam given those words.
Whichever is greater (spam or not spam), we’ll categorize the email in that category.
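Here is a minimal sketch of these two steps in Python (the function and variable names are illustrative, not the actual code from the repo):

import math

def is_spam(words, spam_counts, ham_counts, total_spam_words,
            total_ham_words, n_spam, n_ham, n_unique):
    total_emails = n_spam + n_ham
    # start with the log of the priors P(spam) and P(not spam)
    log_p_spam = math.log(n_spam / total_emails)
    log_p_ham = math.log(n_ham / total_emails)
    for word in words:
        # add 1 to every count (Laplace smoothing) so unseen words
        # never produce a zero probability
        log_p_spam += math.log((spam_counts.get(word, 0) + 1) /
                               (total_spam_words + n_unique))
        log_p_ham += math.log((ham_counts.get(word, 0) + 1) /
                              (total_ham_words + n_unique))
    # compare the two log numerators; the denominator is identical
    # for both classes and can be skipped
    return log_p_spam > log_p_ham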
Using these 2 tweaks, my algorithm indeed found that the Bay to Breakers email is SPAM!
These are the print statements when running my algorithm:
log_p_spam: -1.229757422703253log_p_ham: -0.3458247208324336log_p_words_given_spam: -1028.860861014028log_p_words_given_ham: -1051.386635753655Is Spam? True
Note: This may not be how Gmail spam filter actually works. This is a Naive Bayes algorithm, and works relatively well according to research. You can also use logistic regression for spam classification. I will talk about comparison of different algorithms and their tradeoffs in later articles. Stay tuned! | [
{
"code": null,
"e": 268,
"s": 171,
"text": "Below is an email caught by Gmail’s spam filter. How did the spam filter decide this was a spam?"
},
{
"code": null,
"e": 345,
"s": 268,
"text": "In this article, I talk about Naive Bayes algorithm for spam classification."
},
{
"code": null,
"e": 396,
"s": 345,
"text": "We know all of the words that this email contains:"
},
{
"code": null,
"e": 466,
"s": 396,
"text": "[want, to, maximize, your, time, ..., save, on, the, breakers, bonus]"
},
{
"code": null,
"e": 528,
"s": 466,
"text": "At the high level, the way spam classification works is this:"
},
{
"code": null,
"e": 608,
"s": 528,
"text": "What’s the probability this email is spam given the list of words in the email?"
},
{
"code": null,
"e": 663,
"s": 608,
"text": "The mathematical way of representing this question is:"
},
{
"code": null,
"e": 697,
"s": 663,
"text": "We need to find this probability."
},
{
"code": null,
"e": 718,
"s": 697,
"text": "Let’s break it down."
},
{
"code": null,
"e": 760,
"s": 718,
"text": "One of the words in the email is “bonus”."
},
{
"code": null,
"e": 851,
"s": 760,
"text": "If bonus was the only word in the email, what would be the probability this email is spam?"
},
{
"code": null,
"e": 883,
"s": 851,
"text": "Let’s solve for P(spam|bonus) ."
},
{
"code": null,
"e": 1070,
"s": 883,
"text": "In order to find P(spam|bonus), you need an existing dataset of emails. This is your training data. The probability that an email is spam is based on information from this training data."
},
{
"code": null,
"e": 1402,
"s": 1070,
"text": "Let’s say you have an existing dataset of 100 emails. 20 are spam and 80 are not spam. The spam emails combined contain 2000 words. The non spam emails combined contain 8000 words. Of the spam emails, the word “bonus” appears 1000 times. Of the non spam emails, the word bonus appears 800 times. Below is what your data looks like:"
},
{
"code": null,
"e": 1461,
"s": 1402,
"text": "You can think of area of the rectangle as number of words."
},
{
"code": null,
"e": 1535,
"s": 1461,
"text": "We need to find probability of “spam” given “bonus”, or P(spam | bonus) ."
},
{
"code": null,
"e": 1582,
"s": 1535,
"text": "To answer this question, we use Bayes theorem."
},
{
"code": null,
"e": 1591,
"s": 1582,
"text": "P(spam):"
},
{
"code": null,
"e": 1668,
"s": 1591,
"text": "The probability of spam is number of spam divided by total number of emails."
},
{
"code": null,
"e": 1681,
"s": 1668,
"text": "20/100 = 0.2"
},
{
"code": null,
"e": 1696,
"s": 1681,
"text": "P(bonus|spam):"
},
{
"code": null,
"e": 1809,
"s": 1696,
"text": "The probability of seeing “bonus” in spam is number of “bonus” in spam divided by total number of words in spam."
},
{
"code": null,
"e": 1825,
"s": 1809,
"text": "1000/2000 = 0.5"
},
{
"code": null,
"e": 1835,
"s": 1825,
"text": "P(bonus):"
},
{
"code": null,
"e": 1952,
"s": 1835,
"text": "Probability of bonus is total number of “bonus” (1000 in spam and 800 in non spam) divided by total number of words:"
},
{
"code": null,
"e": 1979,
"s": 1952,
"text": "(1000 + 800)/10000 = 0.18."
},
{
"code": null,
"e": 2050,
"s": 1979,
"text": "This is the same as the area taken by “bonus” in the entire rectangle."
},
{
"code": null,
"e": 2065,
"s": 2050,
"text": "P(spam|bonus):"
},
{
"code": null,
"e": 2165,
"s": 2065,
"text": "Plug in the numbers into Bayes theorem to find probability an email is spam if it contains “bonus”."
},
{
"code": null,
"e": 2256,
"s": 2165,
"text": "If “bonus” was the only word in the email, the probability that the email is spam is 0.56."
},
{
"code": null,
"e": 2330,
"s": 2256,
"text": "Let’s look at 2 words now. What if the email only contains: “save bonus”?"
},
{
"code": null,
"e": 2385,
"s": 2330,
"text": "Instead of P(spam|bonus), we need P(spam|save, bonus)."
},
{
"code": null,
"e": 2420,
"s": 2385,
"text": "Using Bayes theorem, this becomes:"
},
{
"code": null,
"e": 2429,
"s": 2420,
"text": "P(spam):"
},
{
"code": null,
"e": 2512,
"s": 2429,
"text": "The probability of spam is still number of spam divided by total number of emails."
},
{
"code": null,
"e": 2525,
"s": 2512,
"text": "20/100 = 0.2"
},
{
"code": null,
"e": 2546,
"s": 2525,
"text": "P(save, bonus|spam):"
},
{
"code": null,
"e": 2575,
"s": 2546,
"text": "By chain rule of probability"
},
{
"code": null,
"e": 2698,
"s": 2575,
"text": "We already calculated P(bonus|spam) to be 0.5 because there are 1000 “bonus” words out of 2000 total words in spam emails."
},
{
"code": null,
"e": 2822,
"s": 2698,
"text": "Let’s say there are 100 “save” in spam emails and 160 “save” in non spam emails. Our rectangle of words now look like this:"
},
{
"code": null,
"e": 2917,
"s": 2822,
"text": "P(save|spam) is going to be number of “save” in spam divided by total number of words in spam."
},
{
"code": null,
"e": 2948,
"s": 2917,
"text": "P(save|spam) = 100/2000 = 0.05"
},
{
"code": null,
"e": 2964,
"s": 2948,
"text": "Taken together,"
},
{
"code": null,
"e": 3034,
"s": 2964,
"text": "P(save, bonus|spam) = P(save|spam)*P(bonus|spam) = 0.5 * 0.05 = 0.025"
},
{
"code": null,
"e": 3315,
"s": 3034,
"text": "Note: this formula assumes the words in the email are conditionally independent of each other. In other words, having “save” in the email does not change the probability of having “bonus” in the same email, and vice versa. This assumption is why we call the algorithm Naive Bayes."
},
{
"code": null,
"e": 3331,
"s": 3315,
"text": "P(save, bonus):"
},
{
"code": null,
"e": 3388,
"s": 3331,
"text": "To find what is P(save, bonus), we need to do some math."
},
{
"code": null,
"e": 3458,
"s": 3388,
"text": "We know that P(spam|save, bonus) + P(not spam|save, bonus) must be 1."
},
{
"code": null,
"e": 3488,
"s": 3458,
"text": "Substitute in Bayes equation:"
},
{
"code": null,
"e": 3534,
"s": 3488,
"text": "Multiply both sides by P(save, bonus) to get:"
},
{
"code": null,
"e": 3647,
"s": 3534,
"text": "P(not spam) is just number of non spam emails divided by total number of emails. P(not spam) = 800/1000, or 0.8."
},
{
"code": null,
"e": 3743,
"s": 3647,
"text": "P(save, bonus | not spam) = P(save |not spam) * P(bonus|not spam) = 160/8000 * 800/8000 = 0.002"
},
{
"code": null,
"e": 3791,
"s": 3743,
"text": "P(save, bonus) = 0.2*0.025 + 0.8*0.002 = 0.0066"
},
{
"code": null,
"e": 3812,
"s": 3791,
"text": "P(spam|save, bonus):"
},
{
"code": null,
"e": 3912,
"s": 3812,
"text": "Plug in the numbers into Bayes theorem to find probability an email is spam if it contains “bonus”."
},
{
"code": null,
"e": 3971,
"s": 3912,
"text": "Now, let’s analyze the real Bay to Breakers of 130+ words."
},
{
"code": null,
"e": 4031,
"s": 3971,
"text": "Applying the same logic for more words, the formula becomes"
},
{
"code": null,
"e": 4040,
"s": 4031,
"text": "P(spam):"
},
{
"code": null,
"e": 4085,
"s": 4040,
"text": "Probability of spam is still 20/100, or 0.2."
},
{
"code": null,
"e": 4121,
"s": 4085,
"text": "P(word_1, word_2,...,word_n| spam):"
},
{
"code": null,
"e": 4205,
"s": 4121,
"text": "We find P(word|spam) for every word, and take the product of all the probabilities."
},
{
"code": null,
"e": 4235,
"s": 4205,
"text": "P(word_1, word_2,...,word_n):"
},
{
"code": null,
"e": 4326,
"s": 4235,
"text": "The denominator of the equation, i.e. P(word_1, word_2,...,word_n), can also be written as"
},
{
"code": null,
"e": 4427,
"s": 4326,
"text": "P(spam) * P(word_1, word_2,...,word_n| spam) + P(not spam) * P(word_1, word_2,...,word_n | not spam)"
},
{
"code": null,
"e": 4534,
"s": 4427,
"text": "The first half of the equation is basically the nominator of Bayes theorem, and solved in the steps above."
},
{
"code": null,
"e": 4696,
"s": 4534,
"text": "P(not spam) is 80/100. To find P(word_1, word_2, ..., word_n | not spam), we find P(word|not spam) for every word, and take the product of all the probabilities."
},
{
"code": null,
"e": 4732,
"s": 4696,
"text": "Mathematically, it is equivalent to"
},
{
"code": null,
"e": 4823,
"s": 4732,
"text": "Taken everything together, the probability that the email from Bay to Breakers is spam is:"
},
{
"code": null,
"e": 5053,
"s": 4823,
"text": "Everything works great....except if you encounter a new word previously unseen in your training data. Let’s say an email contains the single word “GetYourFreeCookieNow” and your algorithm needs to decide whether it’s spam or not."
},
{
"code": null,
"e": 5377,
"s": 5053,
"text": "Most likely your training data does not include this word. In our example, \"GetYourFreeCookieNow” was neither present in our 20 spam email nor 80 non spam email. If that’s the case, P(GetYourFreeCookieNow | spam) is zero. P(GetYourFreeCookieNow | not spam) is also zero. You can’t divide 0 by 0, so the algorithm will fail."
},
{
"code": null,
"e": 5547,
"s": 5377,
"text": "Similarly, if a word (let’s say “calendar”) never appeared in spam emails of your training set, you always get probability of 0 for spam if an email contains “calendar”."
},
{
"code": null,
"e": 5609,
"s": 5547,
"text": "So let’s tweak our Naive Bayes algorithm to fix this problem."
},
{
"code": null,
"e": 5893,
"s": 5609,
"text": "We solve this problem by adding a count of 1 to each word. For example, even though “GetYourFreeCookieNow” was not in our training data, the nominator of P(GetYourFreeCookieNow | spam) will now be 1 instead of 0. Therefore, you will not get a zero for P(GetYourFreeCookieNow | spam)."
},
{
"code": null,
"e": 5934,
"s": 5893,
"text": "Let’s look at our P(bonus|spam) example."
},
{
"code": null,
"e": 5964,
"s": 5934,
"text": "Before adding 1 to each word:"
},
{
"code": null,
"e": 5993,
"s": 5964,
"text": "After adding 1 to each word:"
},
{
"code": null,
"e": 6054,
"s": 5993,
"text": "W_unique is the number of unique words in all of the emails."
},
{
"code": null,
"e": 6129,
"s": 6054,
"text": "Below is a video illustration of spam classification based on Naive Bayes."
},
{
"code": null,
"e": 6219,
"s": 6129,
"text": "Now that we talked about the theory behind email spam classification. Let’s implement it."
},
{
"code": null,
"e": 6330,
"s": 6219,
"text": "First, you need a training set. This is the data I used from Kaggle. It’s in a csv format and looks like this:"
},
{
"code": null,
"e": 6364,
"s": 6330,
"text": "The label “ham” means “not spam”."
},
{
"code": null,
"e": 6415,
"s": 6364,
"text": "From this data, we need to compute several things:"
},
{
"code": null,
"e": 6696,
"s": 6415,
"text": "Number of spam emailsNumber of non spam emails (i.e. “ham” emails).Number of unique words in the vocabulary.Total number of words in spam emails.Total number of words in ham emails.Number of occurrences of each word in spam emails.Number of occurrences of each word in ham emails."
},
{
"code": null,
"e": 6718,
"s": 6696,
"text": "Number of spam emails"
},
{
"code": null,
"e": 6765,
"s": 6718,
"text": "Number of non spam emails (i.e. “ham” emails)."
},
{
"code": null,
"e": 6807,
"s": 6765,
"text": "Number of unique words in the vocabulary."
},
{
"code": null,
"e": 6845,
"s": 6807,
"text": "Total number of words in spam emails."
},
{
"code": null,
"e": 6882,
"s": 6845,
"text": "Total number of words in ham emails."
},
{
"code": null,
"e": 6933,
"s": 6882,
"text": "Number of occurrences of each word in spam emails."
},
{
"code": null,
"e": 6983,
"s": 6933,
"text": "Number of occurrences of each word in ham emails."
},
{
"code": null,
"e": 7135,
"s": 6983,
"text": "I’m going to iterate through the csv file email by email, and increment the value for the 7 variables we need. You can find the code in my Github repo."
},
{
"code": null,
"e": 7146,
"s": 7135,
"text": "github.com"
},
{
"code": null,
"e": 7172,
"s": 7146,
"text": "This is the result I got:"
},
{
"code": null,
"e": 7589,
"s": 7172,
"text": "number of spam emails: 1451number of non spam emails: 3512total unique words: 45819total spam words: 282332total non spam words: 487196spam words with count: {'subject': 1741, 'photoshop': 80, 'windows': 207, 'office': 204, 'cheap': 79, 'main': 17, 'trending': 2, ... }non spam words with count: {'subject': 6111, 'enron': 6209, 'methanol': 114, 'meter': 2381, 'this': 4680, 'is': 4730, 'a': 5765, 'follow': 93, ...}"
},
{
"code": null,
"e": 7661,
"s": 7589,
"text": "Now, we’re ready to classify whether the Bay to Breakers email is spam."
},
{
"code": null,
"e": 7749,
"s": 7661,
"text": "There are a few things about my algorithm that differ from the formula presented above."
},
{
"code": null,
"e": 7778,
"s": 7749,
"text": "Calculate LOG of probability"
},
{
"code": null,
"e": 7807,
"s": 7778,
"text": "Calculate LOG of probability"
},
{
"code": null,
"e": 8023,
"s": 7807,
"text": "I calculated the log of the probability instead of the probability itself. This is because the product of P(word | spam) for all the words is going to be very close to 0, and the computer just round it down to zero."
},
{
"code": null,
"e": 8063,
"s": 8023,
"text": "Just a reminder, Bayes theorem is this:"
},
{
"code": null,
"e": 8134,
"s": 8063,
"text": "We do a transformation by applying the log. This is the new nominator:"
},
{
"code": null,
"e": 8261,
"s": 8134,
"text": "To calculate the nominator of spam, my algorithm iterate through each word in the email, and sum up the log of P(word | spam)."
},
{
"code": null,
"e": 8303,
"s": 8261,
"text": "The complete code is in this Github repo."
},
{
"code": null,
"e": 8398,
"s": 8303,
"text": "2. Compare the nominator of Bayes theorem for probability of spam and probability of not spam."
},
{
"code": null,
"e": 8626,
"s": 8398,
"text": "We don’t need to compute the denominator of Naive Bayes theorem at all. We just need to compare which nominator is greater: the probability of spam given the words in the email, or the probability of not spam given those words."
},
{
"code": null,
"e": 8712,
"s": 8626,
"text": "Whichever is greater (spam or not spam), we’ll categorize the email in that category."
},
{
"code": null,
"e": 8800,
"s": 8712,
"text": "Using these 2 tweaks, my algorithm indeed found that the Bay to Breakers email is SPAM!"
},
{
"code": null,
"e": 8858,
"s": 8800,
"text": "These are the print statements when running my algorithm:"
},
{
"code": null,
"e": 9019,
"s": 8858,
"text": "log_p_spam: -1.229757422703253log_p_ham: -0.3458247208324336log_p_words_given_spam: -1028.860861014028log_p_words_given_ham: -1051.386635753655Is Spam? True"
}
]
|
Assign other value to a variable from two possible values in C++ | We have to assign a variable the other of its two possible values without using any conditional operator.
In this problem, we are given a variable, say a, which holds the value of one of the two variables x and y. Now, we have to create a program that assigns a the value other than its current value, without using any conditional operator, i.e. we can’t check the current value of a.
Let’s take an example to understand the problem better −
Input : a = 43 ; x = 43 and y = 21
Output : 21
Explanation − the initial value of a is 43 so we need to return the other value i.e. 21 as the final value of a.
Since we are not allowed to check the value of a, i.e. the use of any sort of conditional statement is not valid in the code, we need to look at alternate ways to swap the value of the variable. There can be multiple solutions, but we are discussing the most feasible and easy ones here −
One of the easy ways to swap the value is to use addition/multiplication of the two values and subtraction/division as the respective opposite operation, i.e. subtract if we have done addition and divide if we have used multiplication.
So, the formula becomes −
a = x + y - a or a = x * y / a
But here the multiply and divide operations are more costly and may sometimes throw errors (division, for instance, fails when the current value is 0). So, we have used the addition-subtraction combo for this.
Live Demo
#include <iostream>
using namespace std;
int main(){
   int x = 45;
   int y = 5;
   int a = x; // a currently holds one of the two values
   cout<<"Initial value of a is : "<<a;
   a = x + y - a; // the sum minus the current value always yields the other value
   cout<<"\nAfter changing the value of a is : "<<a;
}
Initial value of a is : 45
After changing the value of a is : 5
A more effective way could be using the bitwise XOR operator, which also avoids any risk of integer overflow that the addition could cause.
So, the value would be changed in the following manner −
a = x^y^a;
Live Demo
#include <iostream>
using namespace std;
int main(){
   int x = 56;
   int y = 78;
   int a = x; // a currently holds one of the two values
   cout<<"Initial value of a is : "<< a;
   a = x^y^a; // XOR-ing with both values cancels the current one, leaving the other
   cout<<"\nAfter changing the value of a is "<<a;
   return 0;
}
Initial value of a is : 56
After changing the value of a is 78 | [
{
"code": null,
"e": 1185,
"s": 1062,
"text": "we have to assign a variable the value of other variables from two possible values without using any conditional operator."
},
{
"code": null,
"e": 1460,
"s": 1185,
"text": "In this problem, we are given a variable let's say a which can have a value of any of the two variables x and y. Now, we have to create a program to assign the value of another than its current value without using any conditional operator i.e. we can’t check the value of x."
},
{
"code": null,
"e": 1517,
"s": 1460,
"text": "Let’s take an example to understand the problem better −"
},
{
"code": null,
"e": 1564,
"s": 1517,
"text": "Input : a = 43 ; x = 43 and y = 21\nOutput : 21"
},
{
"code": null,
"e": 1677,
"s": 1564,
"text": "Explanation − the initial value of a is 43 so we need to return the other value i.e. 21 as the final value of a."
},
{
"code": null,
"e": 1972,
"s": 1677,
"text": "Since we are not allowed to check the value of an i.e. use of any sort of conditional statement is not valid in the code. So, we need to see alternate solutions to the swap value of the variable. For this, there can be multiple solutions but we are discussing most feasible and easy once here −"
},
{
"code": null,
"e": 2187,
"s": 1972,
"text": "One of the easy ways to swap value is to use addition/multiplication of the two values and subtract/divide for respective opposite operations i.e subtract if we have done addition and divide if used multiplication."
},
{
"code": null,
"e": 2213,
"s": 2187,
"text": "So, the formula becomes −"
},
{
"code": null,
"e": 2244,
"s": 2213,
"text": "a = x + y - a or a = x * y / a"
},
{
"code": null,
"e": 2392,
"s": 2244,
"text": "But here the multiply and divide operations are more costly and may sometimes throw errors. So, we have used addition - subtraction combo for this."
},
{
"code": null,
"e": 2403,
"s": 2392,
"text": " Live Demo"
},
{
"code": null,
"e": 2610,
"s": 2403,
"text": "#include <iostream>\nusing namespace std;\nint main(){\n int x = 45;\n int y = 5;\n int a = x;\n cout<<\"Initial value of a is : \"<<a;\n a = x+y - a;\n cout<<\"\\nAfter changing the value of a is : \"<<a;\n}"
},
{
"code": null,
"e": 2674,
"s": 2610,
"text": "Initial value of a is : 45\nAfter changing the value of a is : 5"
},
{
"code": null,
"e": 2736,
"s": 2674,
"text": "A more effective way could be using the bitwise XOR operator."
},
{
"code": null,
"e": 2793,
"s": 2736,
"text": "So, the value would be changed in the following manner −"
},
{
"code": null,
"e": 2804,
"s": 2793,
"text": "a = x^y^a;"
},
{
"code": null,
"e": 2815,
"s": 2804,
"text": " Live Demo"
},
{
"code": null,
"e": 3033,
"s": 2815,
"text": "#include <iostream>\nusing namespace std;\nint main(){\n int x = 56;\n int y = 78;\n int a = x;\n cout<<\"Initial value of a is : \"<< a;\n a = x^y^a;\n cout<<\"\\nAfter changing the value of a is \"<<a;\n return 0;\n}"
},
{
"code": null,
"e": 3096,
"s": 3033,
"text": "Initial value of a is : 56\nAfter changing the value of a is 78"
}
]
|
Dealing with Imbalanced dataset. Techniques to handle imbalanced data | by Vaibhav Jayaswal | Towards Data Science | An imbalanced dataset in real-world problems is not so rare. In layman’s terms, an imbalanced dataset is a dataset where classes are distributed unequally. Imbalanced data can create problems in the classification task. Before delving into the handling of imbalanced data, we should know the issues that an imbalanced dataset can create.
We will take an example of a credit card fraud detection problem to understand an imbalanced dataset and how to handle it in a better way.
EXAMPLE — CREDIT CARD FRAUD DETECTION
The dataset of credit card fraud detection is taken from Kaggle. The dataset contains transactions that occurred in two days, where
Total transactions in the data = 284,807
The data has two classes: 0 and 1, which represent legal transactions and fraud transactions, respectively. Plotting the distribution of the classes will give us insights about the imbalance if present.
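As a minimal sketch (assuming the labels are available in a pandas Series or array y), the class distribution can be plotted like this:

import pandas as pd
import matplotlib.pyplot as plt

# Count how many transactions fall into each class (0 = legal, 1 = fraud)
class_counts = pd.Series(y).value_counts()
print(class_counts)

# A bar plot makes the imbalance visually obvious
class_counts.plot(kind='bar')
plt.title("Class Distribution")
plt.xlabel("Class")
plt.ylabel("Number of transactions")
plt.show()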
The total legit transactions are 284315 out of 284807, which is 99.83%. The fraud transactions are only 492 in the whole dataset (0.17%). An imbalanced dataset can occur in other scenarios such as cancer detection, where large numbers of tested people are negative and only a few people have cancer.
If we are using accuracy as a performance metric, it can create a huge problem. Let’s say our model predicts each transaction as legal (dumb model). Using an accuracy metric on the credit card dataset will give 99.83% accuracy, which is excellent. IS IT A GOOD RESULT? NO
For an imbalanced dataset, other performance metrics should be used, such as the Precision-Recall AUC score, F1 score, etc. Moreover, the model will be biased towards the majority class. Since most machine learning techniques are designed to work well with a balanced dataset, we must create balanced data out of an imbalanced dataset. HOW TO DO THAT?
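As a rough sketch (assuming y_test holds the true labels, y_pred the predicted labels, and y_scores the predicted probabilities of the positive class from a fitted classifier), these metrics can be computed with scikit-learn:

from sklearn.metrics import f1_score, average_precision_score

# F1 balances precision and recall, which matters on the minority class
print("F1 score:", f1_score(y_test, y_pred))

# average_precision_score summarizes the Precision-Recall curve (PR AUC)
print("Precision-Recall AUC:", average_precision_score(y_test, y_scores))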
NOTE: Before changing the dataset, we must split the dataset into training and testing sets, because the change is only for training purposes.
>>>from sklearn.model_selection import train_test_split
>>>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 2, shuffle = True, stratify = y)
The shape of the training dataset:
0: 190490 transactions
1: 330 transactions
We will discuss three methods in this article for creating a balanced dataset from imbalanced data:
Undersampling
Oversampling
Creating synthetic data
Undersampling resamples the majority class points in the data to make them equal to the minority class points. We will be creating a new dataset out of the original dataset using undersampling. We will randomly sample the legal transactions (190490 in number) in the credit card fraud detection data to make the number of class 0 data points equal to 330 (the minority class).
>>>from imblearn.under_sampling import RandomUnderSampler
>>>rus = RandomUnderSampler(random_state=0)
>>>X_resampled_under, y_resampled_under = rus.fit_resample(X_train, y_train)
The major disadvantage of undersampling is that we do not use a significant chunk of the data, which contains some information. In our example, we have removed 190160 (190490–330) data points. Therefore, we are losing information, and as a result, we will not get significant results.
We can avoid that by using a technique called oversampling instead of undersampling.
Oversampling refers to the resampling of the minority class points to equal the total number of majority points. Repetition of the minority class points is one such type of oversampling technique.
Apart from repetition, we can provide class weights to both classes. Providing a large weight to the minority class will give the same result as repetition.
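As a minimal sketch (the estimator choice and the exact weights below are illustrative assumptions, not from the original post), class weights can be passed directly to many scikit-learn estimators, and oversampling by repetition is available through imblearn:

from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import RandomOverSampler

# Penalize mistakes on the minority class (1) much more heavily
clf = LogisticRegression(class_weight={0: 1, 1: 100})

# Alternatively, resample the minority class by random repetition
ros = RandomOverSampler(random_state=0)
X_resampled_over, y_resampled_over = ros.fit_resample(X_train, y_train)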
The problem with repeating the data is that it does not provide any extra information. One way to increase the information about the data is by creating synthetic data points. One such technique is SMOTE (Synthetic Minority Oversampling Technique). As the name suggests, SMOTE is an oversampling technique. In layman’s terms, SMOTE will create synthetic data points for the minority class. It creates new instances between the points of the minority class.
We can apply SMOTE on the training dataset using the imblearn library.
>>>from imblearn.over_sampling import SMOTE
>>>sm = SMOTE(random_state = 42)
>>>X_train_new, y_train_new = sm.fit_resample(X_train, y_train)
Imbalanced datasets are extremely common in real-world scenarios. A machine learning model is not robust if it uses an imbalanced dataset for training purposes. Therefore, a balanced dataset is preferred for training machine learning models. Techniques such as undersampling, oversampling, and SMOTE can be used to create balanced data.
Thanks for reading! | [
{
"code": null,
"e": 512,
"s": 172,
"text": "The imbalanced dataset in real-world problems is not so rare. In layman terms, an imbalanced dataset is a dataset where classes are distributed unequally. An imbalanced data can create problems in the classification task. Before delving into the handling of imbalanced data, we should know the issues that an imbalanced dataset can create."
},
{
"code": null,
"e": 651,
"s": 512,
"text": "We will take an example of a credit card fraud detection problem to understand an imbalanced dataset and how to handle it in a better way."
},
{
"code": null,
"e": 689,
"s": 651,
"text": "EXAMPLE — CREDIT CARD FRAUD DETECTION"
},
{
"code": null,
"e": 821,
"s": 689,
"text": "The dataset of credit card fraud detection is taken from Kaggle. The dataset contains transactions that occurred in two days, where"
},
{
"code": null,
"e": 862,
"s": 821,
"text": "Total transactions in the data = 284,807"
},
{
"code": null,
"e": 1059,
"s": 862,
"text": "The data has two classes: 0 and 1 represents legal transactions and fraud transactions, respectively. Plotting the distribution of the classes will give us insights about the imbalance if present."
},
{
"code": null,
"e": 1359,
"s": 1059,
"text": "The total legit transactions are 284315 out of 284807, which is 99.83%. The fraud transactions are only 492 in the whole dataset (0.17%). An imbalanced dataset can occur in other scenarios such as cancer detection where large amounts of tested people are negative, and only a few people have cancer."
},
{
"code": null,
"e": 1631,
"s": 1359,
"text": "If we are using accuracy as a performance metric, it can create a huge problem. Let’s say our model predicts each transaction as legal (dumb model). Using an accuracy metric on the credit card dataset will give 99.83% accuracy, which is excellent. IS IT A GOOD RESULT? NO"
},
{
"code": null,
"e": 1984,
"s": 1631,
"text": "For an imbalanced dataset, other performance metrics should be used, such as the Precision-Recall AUC score, F1 score, etc.. Moreover, the model will be biased towards the majority class. Since most machine learning techniques are designed to work well with a balanced dataset, we must create balanced data out of an imbalanced dataset. HOW TO DO THAT?"
},
{
"code": null,
"e": 2124,
"s": 1984,
"text": "NOTE: Before changing the dataset, we must split the dataset into training and testing because the change is only for the training purpose."
},
{
"code": null,
"e": 2306,
"s": 2124,
"text": ">>>from sklearn.model_selection import train_test_split>>>X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 2, shuffle = True, stratify = y)"
},
{
"code": null,
"e": 2382,
"s": 2306,
"text": "The shape of the training dataset:0: 190430 transactions1: 330 transactions"
},
{
"code": null,
"e": 2482,
"s": 2382,
"text": "We will discuss three methods in this article for creating a balanced dataset from imbalanced data:"
},
{
"code": null,
"e": 2496,
"s": 2482,
"text": "Undersampling"
},
{
"code": null,
"e": 2509,
"s": 2496,
"text": "Oversampling"
},
{
"code": null,
"e": 2533,
"s": 2509,
"text": "Creating synthetic data"
},
{
"code": null,
"e": 2890,
"s": 2533,
"text": "Undersampling resamples the majority class points in the data to make them equal to the minority class points. We will be creating a new dataset out of the original dataset using undersampling. We will randomly sample the legal transactions (190490 in number) in the credit card fraud detection to make data points of class 0 equal to 330 (minority class)."
},
{
"code": null,
"e": 3066,
"s": 2890,
"text": ">>>from imblearn.under_sampling import RandomUnderSampler>>>rus = RandomUnderSampler(random_state=0)>>>X_resampled_under, y_resampled_under =rus.fit_resample(X_train, y_train)"
},
{
"code": null,
"e": 3351,
"s": 3066,
"text": "The major disadvantage of undersampling is that we do not use a significant chunk of the data, which contains some information. In our example, we have removed 190160 (190490–330) data points. Therefore, we are losing information, and as a result, we will not get significant results."
},
{
"code": null,
"e": 3436,
"s": 3351,
"text": "We can avoid that by using a technique called oversampling instead of undersampling."
},
{
"code": null,
"e": 3633,
"s": 3436,
"text": "Oversampling refers to the resampling of the minority class points to equal the total number of majority points. Repetition of the minority class points is one such type of oversampling technique."
},
{
"code": null,
"e": 3812,
"s": 3633,
"text": "Apart from repetition, we can provide the class weights to both the class. Providing the large weights to the minority class will give the same result as from that of repetition."
},
{
"code": null,
"e": 4271,
"s": 3812,
"text": "The problem with repeating the data is that it does not provide any extra information. One way to increase the information about the data is by creating synthetic data points. One such technique is the SMOTE (Synthetic Minority Oversampling technique). As the name suggests, SMOTE is an oversampling technique. In layman terms, SMOTE will create synthetic data points for the minority class. It creates new instances between the points of the minority class."
},
{
"code": null,
"e": 4342,
"s": 4271,
"text": "We can apply SMOTE on the training dataset using the imblearn library."
},
{
"code": null,
"e": 4479,
"s": 4342,
"text": ">>>from imblearn.over_sampling import SMOTE>>>sm = SMOTE(random_state = 42)>>>X_train_new, y_train_new = sm.fit_sample(X_train, y_train)"
},
{
"code": null,
"e": 4829,
"s": 4479,
"text": "The imbalanced dataset is extremely common when handling real-world scenarios. A machine learning model is not robust if it uses an imbalanced dataset for training purposes. Therefore, a balanced dataset is preferred for training machine learning models. Techniques such as undersampling, oversampling, and SMOTE can be used to create balanced data."
}
]
|
How to use search functionality in custom list view in Android? | This example demonstrates how to use the search functionality in a custom ListView in Android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="8dp"
tools:context=".MainActivity">
<EditText
android:id="@+id/etSearch"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:hint="Search here" />
<ListView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:id="@+id/listView"
android:layout_below="@id/etSearch"/>
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.java
import androidx.appcompat.app.AppCompatActivity;
import android.os.Bundle;
import android.text.Editable;
import android.text.TextWatcher;
import android.widget.ArrayAdapter;
import android.widget.EditText;
import android.widget.ListView;
import java.util.ArrayList;
public class MainActivity extends AppCompatActivity {
ListView listView;
ArrayList<String> months = new ArrayList<>();
ArrayAdapter<String> arrayAdapter;
EditText etSearch;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
listView = findViewById(R.id.listView);
etSearch = findViewById(R.id.etSearch);
months.add("January");
months.add("February");
months.add("March");
months.add("April");
months.add("May");
months.add("June");
months.add("July");
months.add("August");
months.add("September");
months.add("October");
months.add("November");
months.add("December");
// The built-in ArrayAdapter provides a filter over its string items
arrayAdapter = new ArrayAdapter<>(this, android.R.layout.simple_list_item_1, android.R.id.text1, months);
listView.setAdapter(arrayAdapter);
etSearch.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence s, int start, int count, int after) {
}
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
// Forward the current query to the adapter's built-in filter
arrayAdapter.getFilter().filter(s);
}
@Override
public void afterTextChanged(Editable s) {
}
});
}
}
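The built-in ArrayAdapter filter used above is sufficient for a plain list of strings. For a fully custom adapter, you would implement Filterable yourself; below is a minimal sketch (the class name, view handling, and substring-matching rule are illustrative assumptions, not part of the original example).

import android.content.Context;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.Filter;
import android.widget.Filterable;
import android.widget.TextView;
import java.util.ArrayList;
import java.util.List;

public class CustomAdapter extends BaseAdapter implements Filterable {
    private final Context context;
    private final List<String> originalItems; // the full data set
    private List<String> filteredItems;       // what the ListView currently shows

    public CustomAdapter(Context context, List<String> items) {
        this.context = context;
        this.originalItems = items;
        this.filteredItems = items;
    }

    @Override public int getCount() { return filteredItems.size(); }
    @Override public Object getItem(int position) { return filteredItems.get(position); }
    @Override public long getItemId(int position) { return position; }

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        // Reuse the recycled view when possible, otherwise create a simple TextView
        TextView view = (convertView instanceof TextView) ? (TextView) convertView : new TextView(context);
        view.setText(filteredItems.get(position));
        return view;
    }

    @Override
    public Filter getFilter() {
        return new Filter() {
            @Override
            protected FilterResults performFiltering(CharSequence query) {
                // Collect the items whose text contains the query (case-insensitive)
                String q = (query == null) ? "" : query.toString().toLowerCase();
                List<String> matches = new ArrayList<>();
                for (String item : originalItems) {
                    if (item.toLowerCase().contains(q)) {
                        matches.add(item);
                    }
                }
                FilterResults results = new FilterResults();
                results.values = matches;
                results.count = matches.size();
                return results;
            }

            @SuppressWarnings("unchecked")
            @Override
            protected void publishResults(CharSequence query, FilterResults results) {
                filteredItems = (List<String>) results.values;
                notifyDataSetChanged(); // refresh the ListView with the filtered items
            }
        };
    }
}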
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen −
Click here to download the project code. | [
{
"code": null,
"e": 1157,
"s": 1062,
"text": "This example demonstrates how do I use the search functionality in custom listview in android."
},
{
"code": null,
"e": 1286,
"s": 1157,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1351,
"s": 1286,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 2013,
"s": 1351,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:padding=\"8dp\"\n tools:context=\".MainActivity\">\n <EditText\n android:id=\"@+id/etSearch\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:hint=\"Search here\" />\n <ListView\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:id=\"@+id/listView\"\n android:layout_below=\"@id/etSearch\"/>\n</RelativeLayout>"
},
{
"code": null,
"e": 2070,
"s": 2013,
"text": "Step 3 − Add the following code to src/MainActivity.java"
},
{
"code": null,
"e": 3703,
"s": 2070,
"text": "import androidx.appcompat.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.text.Editable;\nimport android.text.TextWatcher;\nimport android.widget.ArrayAdapter;\nimport android.widget.EditText;\nimport android.widget.ListView;\nimport java.util.ArrayList;\npublic class MainActivity extends AppCompatActivity {\n ListView listView;\n ArrayList<String> months = new ArrayList<>();\n ArrayAdapter<String> arrayAdapter;\n EditText etSearch;\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n listView = findViewById(R.id.listView);\n etSearch = findViewById(R.id.etSearch);\n months.add(\"January\");\n months.add(\"February\");\n months.add(\"March\");\n months.add(\"April\");\n months.add(\"May\");\n months.add(\"June\");\n months.add(\"July\");\n months.add(\"August\");\n months.add(\"September\");\n months.add(\"October\");\n months.add(\"November\");\n months.add(\"December\");\n arrayAdapter = new ArrayAdapter<>(this, android.R.layout.simple_list_item_1, android.R.id.text1, months);\n listView.setAdapter(arrayAdapter);\n etSearch.addTextChangedListener(new TextWatcher() {\n @Override\n public void beforeTextChanged(CharSequence s, int start, int count, int after) {\n }\n @Override\n public void onTextChanged(CharSequence s, int start, int before, int count) {\n arrayAdapter.getFilter().filter(s);\n }\n @Override\n public void afterTextChanged(Editable s) {\n }\n });\n }\n}"
},
{
"code": null,
"e": 3758,
"s": 3703,
"text": "Step 4 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 4428,
"s": 3758,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.sample\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 4779,
"s": 4428,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from the android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −"
},
{
"code": null,
"e": 4820,
"s": 4779,
"text": "Click here to download the project code."
}
]
|
SVM Hyperparameters Explained with Visualizations | by Soner Yıldırım | Towards Data Science | Support Vector Machine (SVM) is a widely-used supervised machine learning algorithm. It is mostly used in classification tasks but suitable for regression tasks as well.
In this post, we dive deep into two important hyperparameters of SVMs, C and gamma, and explain their effects with visualizations. So I will assume you have a basic understanding of the algorithm and focus on these hyperparameters.
SVM separates data points that belong to different classes with a decision boundary. When determining the decision boundary, a soft margin SVM (soft margin means allowing some data points to be misclassified) tries to solve an optimization problem with the following goals:
Increase the distance of decision boundary to classes (or support vectors)
Maximize the number of points that are correctly classified in the training set
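Formally, the soft margin optimization problem is usually written as follows (this is the standard textbook formulation, stated here for reference rather than taken from the original post):

\min_{w,\,b,\,\xi} \; \frac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{n} \xi_i \quad \text{subject to} \quad y_i \, (w^\top x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0

Here each slack variable \xi_i measures how far data point i violates the margin, and C weights the total violation against the width of the margin.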
There is obviously a trade-off between these two goals, and it is controlled by C, which adds a penalty for each misclassified data point.
If C is small, the penalty for misclassified points is low so a decision boundary with a large margin is chosen at the expense of a greater number of misclassification.
If C is large, SVM tries to minimize the number of misclassified examples due to the high penalty which results in a decision boundary with a smaller margin. The penalty is not the same for all misclassified examples. It is directly proportional to the distance to the decision boundary.
It will become clearer with the examples. Let’s first import the libraries and create a synthetic dataset.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=2, n_informative=2, n_redundant=0, n_repeated=0, n_classes=2, random_state=42)

plt.figure(figsize=(10,6))
plt.title("Synthetic Binary Classification Dataset", fontsize=18)
plt.scatter(X[:,0], X[:,1], c=y, cmap='cool')
We will first train a linear SVM, which only requires tuning C. Then we will implement an SVM with an RBF kernel and also tune the gamma parameter.
To plot the decision boundaries, we will be using the function from the SVM chapter of the Python Data Science Handbook by Jake VanderPlas.
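That helper is not reproduced in the post itself; a minimal sketch of an equivalent function (adapted from the idea in that handbook chapter, so treat it as an approximation rather than the exact original, and assuming numpy and matplotlib are imported as above) is:

def plot_svc_decision_function(model, ax=None):
    """Plot the decision boundary and margins of a fitted 2D SVC."""
    if ax is None:
        ax = plt.gca()
    xlim = ax.get_xlim()
    ylim = ax.get_ylim()
    # Evaluate the decision function on a 30x30 grid covering the current axes
    xx = np.linspace(xlim[0], xlim[1], 30)
    yy = np.linspace(ylim[0], ylim[1], 30)
    YY, XX = np.meshgrid(yy, xx)
    xy = np.vstack([XX.ravel(), YY.ravel()]).T
    P = model.decision_function(xy).reshape(XX.shape)
    # Draw the decision boundary (level 0) and the margins (levels -1 and 1)
    ax.contour(XX, YY, P, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--'])
    ax.set_xlim(xlim)
    ax.set_ylim(ylim)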
We can now create two linear SVM classifiers with different C values.
clf = SVC(C=0.1, kernel='linear').fit(X, y)

plt.figure(figsize=(10,6))
plt.title("Linear kernel with C=0.1", fontsize=18)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='cool')
plot_svc_decision_function(clf)
Just change the C value to 100 to produce the following plot.
When we increase the C value, the margin gets smaller. Thus, the models with low C values tend to be more generalized. The difference becomes more clear with larger datasets.
The effects of hyperparameters only reach to a certain extent with linear kernels. The influence of hyperparameters becomes more visible with non-linear kernels.
Gamma is a hyperparameter used with non-linear SVM. One of the most commonly used non-linear kernels is the radial basis function (RBF). Gamma parameter of RBF controls the distance of the influence of a single training point.
Low values of gamma indicate a large similarity radius which results in more points being grouped together. For high values of gamma, the points need to be very close to each other in order to be considered in the same group (or class). Therefore, models with very large gamma values tend to overfit.
Let’s plot the predictions of three SVMs with different gamma values.
clf = SVC(C=1, kernel='rbf', gamma=0.01).fit(X, y)
y_pred = clf.predict(X)

plt.figure(figsize=(10,6))
plt.title("Predictions of RBF kernel with C=1 and Gamma=0.01", fontsize=18)
plt.scatter(X[:, 0], X[:, 1], c=y_pred, s=50, cmap='cool')
plot_svc_decision_function(clf)
Just change the gamma values to produce the following plots.
As the gamma value increases, the model becomes overfit. The data points need to be very close in order to be grouped together because the similarity radius decreases with an increasing gamma value.
The accuracies of RBF kernels on this dataset with gamma values 0.01, 1, and 5 are 0.89, 0.92, and 0.93, respectively. These values indicate that models are overfitting to the training set as the gamma values increase.
For a linear kernel, we just need to optimize the c parameter. However, if we want to use an RBF kernel, both c and gamma parameters need to be optimized simultaneously. If gamma is large, the effect of c becomes negligible. If gamma is small, c affects the model just like how it affects a linear model. Typical values for c and gamma are as follows. However, specific optimal values may exist depending on the application:
0.0001 < gamma < 10
0.1 < c < 100
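A rough sketch of a joint search over both parameters with scikit-learn (the grid values are illustrative choices drawn from the typical ranges above, not from the original post):

from sklearn.model_selection import GridSearchCV

param_grid = {
    'C': [0.1, 1, 10, 100],
    'gamma': [0.0001, 0.001, 0.01, 0.1, 1, 10],
}

# 5-fold cross-validated search over all C/gamma combinations
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_)
| [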
{
"code": null,
"e": 341,
"s": 171,
"text": "Support Vector Machine (SVM) is a widely-used supervised machine learning algorithm. It is mostly used in classification tasks but suitable for regression tasks as well."
},
{
"code": null,
"e": 573,
"s": 341,
"text": "In this post, we dive deep into two important hyperparameters of SVMs, C and gamma, and explain their effects with visualizations. So I will assume you have a basic understanding of the algorithm and focus on these hyperparameters."
},
{
"code": null,
"e": 847,
"s": 573,
"text": "SVM separates data points that belong to different classes with a decision boundary. When determining the decision boundary, a soft margin SVM (soft margin means allowing some data points to be misclassified) tries to solve an optimization problem with the following goals:"
},
{
"code": null,
"e": 922,
"s": 847,
"text": "Increase the distance of decision boundary to classes (or support vectors)"
},
{
"code": null,
"e": 1002,
"s": 922,
"text": "Maximize the number of points that are correctly classified in the training set"
},
{
"code": null,
"e": 1145,
"s": 1002,
"text": "There is obviously a trade-off between these two goals which and it is controlled by C which adds a penalty for each misclassified data point."
},
{
"code": null,
"e": 1314,
"s": 1145,
"text": "If C is small, the penalty for misclassified points is low so a decision boundary with a large margin is chosen at the expense of a greater number of misclassification."
},
{
"code": null,
"e": 1602,
"s": 1314,
"text": "If C is large, SVM tries to minimize the number of misclassified examples due to the high penalty which results in a decision boundary with a smaller margin. The penalty is not the same for all misclassified examples. It is directly proportional to the distance to the decision boundary."
},
{
"code": null,
"e": 1709,
"s": 1602,
"text": "It will be more clear after the examples. Let’s first import the libraries and create a synthetic dataset."
},
{
"code": null,
"e": 2136,
"s": 1709,
"text": "import numpy as npimport pandas as pdimport matplotlib.pyplot as plt%matplotlib inlinefrom sklearn.svm import SVCfrom sklearn.datasets import make_classificationX, y = make_classification(n_samples=200, n_features=2,n_informative=2, n_redundant=0, n_repeated=0, n_classes=2,random_state=42)plt.figure(figsize=(10,6))plt.title(\"Synthetic Binary Classification Dataset\", fontsize=18)plt.scatter(X[:,0], X[:,1], c=y, cmap='cool')"
},
{
"code": null,
"e": 2281,
"s": 2136,
"text": "We will first train a linear SVM which only requires to tune C. Then we will implement an SVM with RBF kernel and also tune the gamma parameter."
},
{
"code": null,
"e": 2421,
"s": 2281,
"text": "To plot the decision boundaries, we will be using the function from the SVM chapter of the Python Data Science Handbook by Jake VanderPlas."
},
{
"code": null,
"e": 2491,
"s": 2421,
"text": "We can now create two linear SVM classifiers with different C values."
},
{
"code": null,
"e": 2695,
"s": 2491,
"text": "clf = SVC(C=0.1, kernel='linear').fit(X, y)plt.figure(figsize=(10,6))plt.title(\"Linear kernel with C=0.1\", fontsize=18)plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='cool')plot_svc_decision_function(clf)"
},
{
"code": null,
"e": 2757,
"s": 2695,
"text": "Just change the C value as 100 to produce the following plot."
},
{
"code": null,
"e": 2932,
"s": 2757,
"text": "When we increase the C value, the margin gets smaller. Thus, the models with low C values tend to be more generalized. The difference becomes more clear with larger datasets."
},
{
"code": null,
"e": 3094,
"s": 2932,
"text": "The effects of hyperparameters only reach to a certain extent with linear kernels. The influence of hyperparameters becomes more visible with non-linear kernels."
},
{
"code": null,
"e": 3321,
"s": 3094,
"text": "Gamma is a hyperparameter used with non-linear SVM. One of the most commonly used non-linear kernels is the radial basis function (RBF). Gamma parameter of RBF controls the distance of the influence of a single training point."
},
{
"code": null,
"e": 3622,
"s": 3321,
"text": "Low values of gamma indicate a large similarity radius which results in more points being grouped together. For high values of gamma, the points need to be very close to each other in order to be considered in the same group (or class). Therefore, models with very large gamma values tend to overfit."
},
{
"code": null,
"e": 3692,
"s": 3622,
"text": "Let’s plot the predictions of three SVMs with different gamma values."
},
{
"code": null,
"e": 3956,
"s": 3692,
"text": "clf = SVC(C=1, kernel='rbf', gamma=0.01).fit(X, y)y_pred = clf.predict(X)plt.figure(figsize=(10,6))plt.title(\"Predictions of RBF kernel with C=1 and Gamma=0.01\", fontsize=18)plt.scatter(X[:, 0], X[:, 1], c=y_pred, s=50, cmap='cool')plot_svc_decision_function(clf)"
},
{
"code": null,
"e": 4017,
"s": 3956,
"text": "Just change the gamma values to produce the following plots."
},
{
"code": null,
"e": 4220,
"s": 4017,
"text": "As the gamma values increase, the model is becoming overfit. The data points need to be very close in order to be grouped together because the similarity radius decreases with an increasing gamma value."
},
{
"code": null,
"e": 4439,
"s": 4220,
"text": "The accuracies of RBF kernels on this dataset with gamma values 0.01, 1, and 5 are 0.89, 0.92, and 0.93, respectively. These values indicate that models are overfitting to the training set as the gamma values increase."
},
{
"code": null,
"e": 4861,
"s": 4439,
"text": "For a linear kernel, we just need to optimize the c parameter. However, if we want to use an RBF kernel, both c and gamma parameters need to optimized simultaneously. If gamma is large, the effect of c becomes negligible. If gamma is small, c affects the model just like how it affects a linear model. Typical values for c and gamma are as follows. However, specific optimal values may exist depending on the application:"
},
{
"code": null,
"e": 4881,
"s": 4861,
"text": "0.0001 < gamma < 10"
}
]
|
BabylonJS - Mesh AssetsManager | With the AssetsManager class, you can load meshes, images and binary files in the scene.
var assetsManager = new BABYLON.AssetsManager(scene);
<!doctype html>
<html>
<head>
<meta charset = "utf-8">
<title>BabylonJs - Basic Element-Creating Scene</title>
<script src = "babylon.js"></script>
<style>
canvas {width: 100%; height: 100%;}
</style>
</head>
<body>
<canvas id = "renderCanvas"></canvas>
<script type = "text/javascript">
var canvas = document.getElementById("renderCanvas");
var engine = new BABYLON.Engine(canvas, true);
var createScene = function() {
var scene = new BABYLON.Scene(engine);
//Adding a light
var light = new BABYLON.PointLight("Omni", new BABYLON.Vector3(20, 20, 100), scene);
//Adding an Arc Rotate Camera
var camera = new BABYLON.ArcRotateCamera("Camera", 0, 0.8, 100, BABYLON.Vector3.Zero(), scene);
camera.attachControl(canvas, false);
var assetsManager = new BABYLON.AssetsManager(scene);
var meshTask = assetsManager.addMeshTask("skull task", "", "scenes/", "skull.babylon");
meshTask.onSuccess = function (task) {
task.loadedMeshes[0].position = BABYLON.Vector3.Zero();
}
// Move the light with the camera
scene.registerBeforeRender(function () {
light.position = camera.position;
});
assetsManager.onFinish = function (tasks) {
engine.runRenderLoop(function () {
scene.render();
});
};
assetsManager.load();
return scene;
};
var scene = createScene();
engine.runRenderLoop(function() {
scene.render();
});
</script>
</body>
</html>
In the above demo link, we have used the skull.babylon mesh. You can download the JSON file for skull.babylon from −
skull.babylon
Save the file in the scenes folder to get the output as shown below. skull.babylon is a JSON file with all the details of the positions to be plotted for the mesh.
For the AssetsManager, the first thing that you need to do is create an object of it, as shown above.
Now, you can add a task to the AssetsManager object as follows −
var meshTask = assetsManager.addMeshTask("skull task", "", "scenes/", "skull.babylon");
The meshTask created above gives access to 2 callbacks called onSuccess and onError.
Following is the syntax for the onSuccess callback −
meshTask.onSuccess = function (task) {
task.loadedMeshes[0].position = BABYLON.Vector3.Zero();
}
In the above case, the position of the mesh is changed to 0 on success callback.
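A minimal sketch of the corresponding onError callback (the log message here is illustrative) could look like this −

meshTask.onError = function (task, message, exception) {
   console.log("Failed to load " + task.name + ": " + message);
};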
<!doctype html>
<html>
<head>
<meta charset = "utf-8">
<title>BabylonJs - Basic Element-Creating Scene</title>
<script src = "babylon.js"></script>
<style>
canvas {width: 100%; height: 100%;}
</style>
</head>
<body>
<canvas id = "renderCanvas"></canvas>
<script type = "text/javascript">
var canvas = document.getElementById("renderCanvas");
var engine = new BABYLON.Engine(canvas, true);
var createScene = function() {
var scene = new BABYLON.Scene(engine);
//Adding a light
var light = new BABYLON.PointLight("Omni", new BABYLON.Vector3(20, 20, 100), scene);
//Adding an Arc Rotate Camera
var camera = new BABYLON.ArcRotateCamera("Camera", 0, 0.8, 100, BABYLON.Vector3.Zero(), scene);
camera.attachControl(canvas, false);
var assetsManager = new BABYLON.AssetsManager(scene);
var imageTask = assetsManager.addImageTask("image_task", "images/balloon.png");
imageTask.onSuccess = function(task) {
console.log(task.image.width);
}
var textTask = assetsManager.addTextFileTask("text task", "mesh.txt");
textTask.onSuccess = function(task) {
console.log(task.text);
}
assetsManager.load();
return scene;
};
var scene = createScene();
engine.runRenderLoop(function() {
scene.render();
});
</script>
</body>
</html>
Create a file called mesh.txt and add the text “This is babylonjs test” to it. The above demo, when executed in a browser, will show in the console the width of the image and the text present in mesh.txt.
Attaching mesh.txt content – mesh.txt −
This is babylonjs test
To show the loading indicator for the scene, use the following −
If set to true, the loading indicator will be shown. Set it to false to disable.
assetsManager.useDefaultLoadingScreen = true;
There are also other ways to show the loading indicator −
BABYLON.SceneLoader.ShowLoadingScreen = true; //false to disable it
To manually hide and show the loading screen, execute the following.
engine.displayLoadingUI();
engine.hideLoadingUI();
The loading text is controlled using loadingUIText −

engine.loadingUIText = "text";

The loading background color is controlled using −

engine.loadingUIBackgroundColor = "red";
| [
{
"code": null,
"e": 2268,
"s": 2183,
"text": "With assestsmanager class, you can load meshes, images and binaryfiles in the scene."
},
{
"code": null,
"e": 2323,
"s": 2268,
"text": "var assetsManager = new BABYLON.AssetsManager(scene);\n"
},
{
"code": null,
"e": 4105,
"s": 2323,
"text": "<!doctype html>\n<html>\n <head>\n <meta charset = \"utf-8\">\n <title>BabylonJs - Basic Element-Creating Scene</title>\n <script src = \"babylon.js\"></script>\n <style>\n canvas {width: 100%; height: 100%;}\n </style>\n </head>\n\n <body>\n <canvas id = \"renderCanvas\"></canvas>\n <script type = \"text/javascript\">\n var canvas = document.getElementById(\"renderCanvas\");\n var engine = new BABYLON.Engine(canvas, true);\n \n var createScene = function() {\n var scene = new BABYLON.Scene(engine);\n\n //Adding a light\n var light = new BABYLON.PointLight(\"Omni\", new BABYLON.Vector3(20, 20, 100), scene);\n\n //Adding an Arc Rotate Camera\n var camera = new BABYLON.ArcRotateCamera(\"Camera\", 0, 0.8, 100, BABYLON.Vector3.Zero(), scene);\n camera.attachControl(canvas, false);\n\n var assetsManager = new BABYLON.AssetsManager(scene);\n \n var meshTask = assetsManager.addMeshTask(\"skull task\", \"\", \"scenes/\", \"skull.babylon\");\n\n meshTask.onSuccess = function (task) {\n task.loadedMeshes[0].position = BABYLON.Vector3.Zero();\n }\t\n\n // Move the light with the camera\n scene.registerBeforeRender(function () {\n light.position = camera.position;\n });\n\n assetsManager.onFinish = function (tasks) {\n engine.runRenderLoop(function () {\n scene.render();\n });\n };\n assetsManager.load();\n return scene;\n };\n var scene = createScene();\n engine.runRenderLoop(function() {\n scene.render();\n });\n </script>\n </body>\n</html>"
},
{
"code": null,
"e": 4218,
"s": 4105,
"text": "In the above demo link, we have used skull.babylon mesh. You can download the json file for skull.babylon from −"
},
{
"code": null,
"e": 4232,
"s": 4218,
"text": "skull.babylon"
},
{
"code": null,
"e": 4392,
"s": 4232,
"text": "Save the file in the scenes folder to get the output as shown below. skull.babylon is a json file with all the details of positions to be plotted for the mesh."
},
{
"code": null,
"e": 4495,
"s": 4392,
"text": "For assetsmanager, the first thing that you need to do is create an object of the same as shown above."
},
{
"code": null,
"e": 4558,
"s": 4495,
"text": "Now, you can add task to the assetsmanager object as follows −"
},
{
"code": null,
"e": 4647,
"s": 4558,
"text": "var meshTask = assetsManager.addMeshTask(\"skull task\", \"\", \"scenes/\", \"skull.babylon\");\n"
},
{
"code": null,
"e": 4730,
"s": 4647,
"text": "ThemeshTask created above gives access to 2 callbacks called onSucess and onError."
},
{
"code": null,
"e": 4783,
"s": 4730,
"text": "Following is the syntax for the onSuccess callback −"
},
{
"code": null,
"e": 4883,
"s": 4783,
"text": "meshTask.onSuccess = function (task) {\n task.loadedMeshes[0].position = BABYLON.Vector3.Zero();\n}"
},
{
"code": null,
"e": 4964,
"s": 4883,
"text": "In the above case, the position of the mesh is changed to 0 on success callback."
},
{
"code": null,
"e": 6560,
"s": 4964,
"text": "<!doctype html>\n<html>\n <head>\n <meta charset = \"utf-8\">\n <title>BabylonJs - Basic Element-Creating Scene</title>\n <script src = \"babylon.js\"></script>\n <style>\n canvas {width: 100%; height: 100%;}\n </style>\n </head>\n\n <body>\n <canvas id = \"renderCanvas\"></canvas>\n <script type = \"text/javascript\">\n var canvas = document.getElementById(\"renderCanvas\");\n var engine = new BABYLON.Engine(canvas, true);\n \n var createScene = function() {\n var scene = new BABYLON.Scene(engine);\n\n //Adding a light\n var light = new BABYLON.PointLight(\"Omni\", new BABYLON.Vector3(20, 20, 100), scene);\n\n //Adding an Arc Rotate Camera\n var camera = new BABYLON.ArcRotateCamera(\"Camera\", 0, 0.8, 100, BABYLON.Vector3.Zero(), scene);\n camera.attachControl(canvas, false);\n\n var assetsManager = new BABYLON.AssetsManager(scene);\n \n var imageTask = assetsManager.addImageTask(\"image_task\", \"images/balloon.png\");\n imageTask.onSuccess = function(task) {\n console.log(task.image.width);\n }\t\n\n var textTask = assetsManager.addTextFileTask(\"text task\", \"mesh.txt\");\n textTask.onSuccess = function(task) {\n console.log(task.text);\n }\n\n assetsManager.load();\n return scene;\n };\n var scene = createScene();\n engine.runRenderLoop(function() {\n scene.render();\n });\n </script>\n </body>\n</html>"
},
{
"code": null,
"e": 6758,
"s": 6560,
"text": "Create a file called mesh.txt and add the text to it “This is babylonjs test”. The above demo when executed in browser will show in console, the width of the image and the text present in mesh.txt."
},
{
"code": null,
"e": 6798,
"s": 6758,
"text": "Attaching mesh.txt content – mesh.txt −"
},
{
"code": null,
"e": 6822,
"s": 6798,
"text": "This is babylonjs test\n"
},
{
"code": null,
"e": 6883,
"s": 6822,
"text": "To show loading indicator for the scene, use the following −"
},
{
"code": null,
"e": 6964,
"s": 6883,
"text": "If set to true, the loading indicator will be shown. Set it to false to disable."
},
{
"code": null,
"e": 7011,
"s": 6964,
"text": "assetsManager.useDefaultLoadingScreen = true;\n"
},
{
"code": null,
"e": 7069,
"s": 7011,
"text": "There are other ways also to show the loading indicator −"
},
{
"code": null,
"e": 7138,
"s": 7069,
"text": "BABYLON.SceneLoader.ShowLoadingScreen = true; //false to disable it\n"
},
{
"code": null,
"e": 7207,
"s": 7138,
"text": "To manually hide and show the loading screen, execute the following."
},
{
"code": null,
"e": 7259,
"s": 7207,
"text": "engine.displayLoadingUI();\nengine.hideLoadingUI();\n"
},
{
"code": null,
"e": 7399,
"s": 7259,
"text": "Loading text is controlled using loadingUIText:engine.loadingUIText = \"text\"; and background using engine.loadingUIBackgroundColor = \"red\";"
}
]
|
How to create windows loading effect using HTML and CSS ? - GeeksforGeeks | 09 Mar, 2021
In this article, we are going to create a window loading effect before the lock screen appears using HTML and CSS.
Glimpse of the Windows Loading Effect:
Approach:
Create an HTML file that contains an HTML div to which we are giving the loader effect.
Then we create 5 span elements which serve as the moving dots of the loader.
Then we have to use @keyframes to create the animation.
Then we have to use the nth-child() selector for selecting the different children.
HTML Code:
First, we create an HTML file (index.html).
Now after the creation of our HTML file, we are going to give a title to our webpage using the <title> tag. It should be placed inside the <head> tag.
We link the CSS file that provides all the animation effects to our HTML. This is also placed inside the <head> tag.
Now we add a link from Google Fonts to use different font families in our project.
Coming to the body section of our HTML code.
Then, we have to create a div in which we store the heading and the span tags.
index.html
<!DOCTYPE html>
<html lang="en">

<head>
    <link rel="stylesheet" href="style.css">
    <link rel="preconnect" href="https://fonts.gstatic.com">
    <link href="https://fonts.googleapis.com/css2?family=Dosis:wght@300&family=Hanalei&display=swap" rel="stylesheet">
</head>

<body>
    <h1>Windows-Loading-Effect</h1>
    <div class="container">
        <span></span>
        <span></span>
        <span></span>
        <span></span>
        <span></span>
    </div>
</body>

</html>
CSS code: CSS is used to give different types of animations and effects to our HTML page so that it looks interactive to all users.
Reset all the default browser styles.
Use classes and ids to give effects to HTML elements.
Use @keyframes for providing the animation/transition effects in the browser.
Use the nth-child() selector for selecting the child elements.
All the features of CSS are covered in the following code.
style.css
* {
    margin: 0;
    padding: 0;
    box-sizing: border-box;
}

/* Common styles of the project, applied to the body */
body {
    background-color: rgb(0, 21, 138);
    overflow: hidden;
    font-family: 'Dosis', sans-serif;
    color: #fff;
}

/* Style for our heading */
h1 {
    display: flex;
    margin-top: 3em;
    justify-content: center;
}

.container {
    position: absolute;
    top: 50%;
    left: 50%;
    transform: translate(-50%, -50%);
}

span {
    display: inline-block;
    width: 0.6em;
    height: 0.6em;
    border-radius: 50%;
    margin: 0 0.125em;
    background-color: rgb(235, 217, 217);
    opacity: 0;
}

/* Selecting the children using the nth-child() selector */
span:last-child {
    animation: move-right 3s infinite;
    animation-delay: 100ms;
    background-color: #000;
}

span:nth-child(5) {
    animation: move 3s infinite;
    animation-delay: 200ms;
    background-color: rgb(41, 133, 22);
}

span:nth-child(4) {
    animation: move-right 3s infinite;
    animation-delay: 300ms;
    background-color: #000;
}

span:nth-child(3) {
    animation: move 3s infinite;
    animation-delay: 400ms;
    background-color: rgb(41, 133, 22);
}

span:nth-child(2) {
    animation: move-right 3s infinite;
    animation-delay: 500ms;
    background-color: #000;
}

span:first-child {
    animation: move 3s infinite;
    animation-delay: 600ms;
    background-color: rgb(41, 133, 22);
}

/* Animation effects */
@keyframes move {
    0% {
        transform: translateX(-31em);
        opacity: 0;
    }
    30%, 60% {
        transform: translateX(0);
        opacity: 1;
    }
    100% {
        transform: translateX(31em);
        opacity: 0;
    }
}

@keyframes move-right {
    0% {
        transform: translateX(31em);
        opacity: 0;
    }
    30%, 60% {
        transform: translateX(0);
        opacity: 1;
    }
    100% {
        transform: translateX(-31em);
        opacity: 0;
    }
}
Complete Code: Here we will combine the above two sections of code into one.
index.html
<!DOCTYPE html>
<html lang="en">

<head>
    <link rel="stylesheet" href="style.css" />
    <link rel="preconnect" href="https://fonts.gstatic.com" />
    <link href="https://fonts.googleapis.com/css2?family=Dosis:wght@300&family=Hanalei&display=swap" rel="stylesheet" />

    <style>
        * {
            margin: 0;
            padding: 0;
            box-sizing: border-box;
        }

        /* Common styles of the project, applied to the body */
        body {
            background-color: rgb(0, 21, 138);
            overflow: hidden;
            font-family: "Dosis", sans-serif;
            color: #fff;
        }

        /* Style for our heading */
        h1 {
            display: flex;
            margin-top: 3em;
            justify-content: center;
        }

        .container {
            position: absolute;
            top: 50%;
            left: 50%;
            transform: translate(-50%, -50%);
        }

        span {
            display: inline-block;
            width: 0.6em;
            height: 0.6em;
            border-radius: 50%;
            margin: 0 0.125em;
            background-color: rgb(235, 217, 217);
            opacity: 0;
        }

        /* Selecting the children using the nth-child() selector */
        span:last-child {
            animation: move-right 3s infinite;
            animation-delay: 100ms;
            background-color: #000;
        }

        span:nth-child(5) {
            animation: move 3s infinite;
            animation-delay: 200ms;
            background-color: rgb(41, 133, 22);
        }

        span:nth-child(4) {
            animation: move-right 3s infinite;
            animation-delay: 300ms;
            background-color: #000;
        }

        span:nth-child(3) {
            animation: move 3s infinite;
            animation-delay: 400ms;
            background-color: rgb(41, 133, 22);
        }

        span:nth-child(2) {
            animation: move-right 3s infinite;
            animation-delay: 500ms;
            background-color: #000;
        }

        span:first-child {
            animation: move 3s infinite;
            animation-delay: 600ms;
            background-color: rgb(41, 133, 22);
        }

        /* Animation effects */
        @keyframes move {
            0% {
                transform: translateX(-31em);
                opacity: 0;
            }
            30%, 60% {
                transform: translateX(0);
                opacity: 1;
            }
            100% {
                transform: translateX(31em);
                opacity: 0;
            }
        }

        @keyframes move-right {
            0% {
                transform: translateX(31em);
                opacity: 0;
            }
            30%, 60% {
                transform: translateX(0);
                opacity: 1;
            }
            100% {
                transform: translateX(-31em);
                opacity: 0;
            }
        }
    </style>
</head>

<body>
    <h1>Windows-Loading-Effect</h1>
    <div class="container">
        <span></span>
        <span></span>
        <span></span>
        <span></span>
        <span></span>
    </div>
</body>

</html>
Output:
Windows loading effect
| [
{
"code": null,
"e": 25376,
"s": 25348,
"text": "\n09 Mar, 2021"
},
{
"code": null,
"e": 25492,
"s": 25376,
"text": "In this article, we are going to create a window loading effect before the lock screen appears using HTML and CSS. "
},
{
"code": null,
"e": 25531,
"s": 25492,
"text": "Glipmse of the Windows Loading Effect:"
},
{
"code": null,
"e": 25541,
"s": 25531,
"text": "Approach:"
},
{
"code": null,
"e": 25626,
"s": 25541,
"text": "Create an HTML file that contains HTML div in which we are giving the loader effect."
},
{
"code": null,
"e": 25702,
"s": 25626,
"text": "Then we create 5 span elements which are used for creating inline elements."
},
{
"code": null,
"e": 25762,
"s": 25702,
"text": "Then we have to use @keyframe to create animation features."
},
{
"code": null,
"e": 25837,
"s": 25762,
"text": "Then we have to use nth-child() property for selecting different children."
},
{
"code": null,
"e": 25848,
"s": 25837,
"text": "HTML Code:"
},
{
"code": null,
"e": 25892,
"s": 25848,
"text": "First, we create an HTML file (index.html)."
},
{
"code": null,
"e": 26044,
"s": 25892,
"text": "Now after the creation of our HTML file, we are going to give a title to our webpage using the <title> tag. It should be placed between the <head> tag."
},
{
"code": null,
"e": 26166,
"s": 26044,
"text": "We link the CSS file that provides all the animation’s effect to our HTML. This is also placed in between the <head> tag."
},
{
"code": null,
"e": 26258,
"s": 26166,
"text": "Now we add a link from Google Fonts to use a different types of font-family in our project."
},
{
"code": null,
"e": 26303,
"s": 26258,
"text": "Coming to the body section of our HTML code."
},
{
"code": null,
"e": 26395,
"s": 26303,
"text": "Then, we have to create a div in which we can store all the heading part and the span tags."
},
{
"code": null,
"e": 26406,
"s": 26395,
"text": "index.html"
},
{
"code": "<!DOCTYPE html><html lang=\"en\"> <head> <link rel=\"stylesheet\" href=\"style.css\"> <link rel=\"preconnect\" href=\"https://fonts.gstatic.com\"> <link href=\"https://fonts.googleapis.com/css2?family=Dosis:wght@300&family=Hanalei&display=swap\" rel=\"stylesheet\"></head> <body> <h1>Windows-Loading-Effect</h1> <div class=\"container\"> <span></span> <span></span> <span></span> <span></span> <span></span> </div></body> </html>",
"e": 26883,
"s": 26406,
"text": null
},
{
"code": null,
"e": 27016,
"s": 26883,
"text": "CSS code: CSS is used to give different types of animations and effects to our HTML page so that it looks interactive to all users."
},
{
"code": null,
"e": 27049,
"s": 27016,
"text": "Restore all the browser effects."
},
{
"code": null,
"e": 27103,
"s": 27049,
"text": "Use classes and ids to give effects to HTML elements."
},
{
"code": null,
"e": 27184,
"s": 27103,
"text": "Use of @keyframes for providing the animation/transition effects to the browser."
},
{
"code": null,
"e": 27245,
"s": 27184,
"text": "Use of n-th child() property for calling the child elements."
},
{
"code": null,
"e": 27304,
"s": 27245,
"text": "All the features of CSS are covered in the following code."
},
{
"code": null,
"e": 27314,
"s": 27304,
"text": "style.css"
},
{
"code": "*{ margin: 0; padding: 0; box-sizing: border-box;} /* Common styles of project which are applied to body */body{ background-color: rgb(0, 21, 138); overflow: hidden; font-family: 'Dosis', sans-serif; color: #fff;} /* Style to our heading */h1{ display: flex; margin-top: 3em; justify-content: center;} .container{ position: absolute; top: 50%; left: 50%; transform: translate(-50%,-50%);} span{ display: inline-block; width: 0.6em; height: 0.6em; border-radius: 50%; margin: 0 0.125em; background-color: rgb(235, 217, 217); opacity: 0;} /* Calling childs using nth-child() property */span:last-child{ animation: move-right 3s infinite; animation-delay: 100ms; background-color: #000;}span:nth-child(5){ animation: move 3s infinite; animation-delay: 200ms; background-color: rgb(41, 133, 22);}span:nth-child(4){ animation: move-right 3s infinite; animation-delay: 300ms; background-color: #000;}span:nth-child(3){ animation: move 3s infinite; animation-delay: 400ms; background-color: rgb(41, 133, 22);}span:nth-child(2){ animation: move-right 3s infinite; animation-delay: 500ms; background-color: #000;}span:first-child{ animation: move 3s infinite; animation-delay: 600ms; background-color: rgb(41, 133, 22);} /* Animations effect*/@keyframes move{ 0%{ transform: translateX(-31em); opacity: 0; } 30%,60%{ transform: translateX(0); opacity: 1; } 100%{ transform: translateX(31em); opacity: 0; }}@keyframes move-right{ 0%{ transform: translateX(31em); opacity: 0; } 30%,60%{ transform: translateX(0); opacity: 1; } 100%{ transform: translateX(-31em); opacity: 0; }}",
"e": 29130,
"s": 27314,
"text": null
},
{
"code": null,
"e": 29202,
"s": 29130,
"text": "Complete Code: Here we will combine above two section of code into one."
},
{
"code": null,
"e": 29213,
"s": 29202,
"text": "index.html"
},
{
"code": "<!DOCTYPE html><html lang=\"en\"> <head> <link rel=\"stylesheet\" href=\"style.css\" /> <link rel=\"preconnect\" href=\"https://fonts.gstatic.com\" /> <link href=\"https://fonts.googleapis.com/css2?family=Dosis:wght@300&family=Hanalei&display=swap\" rel=\"stylesheet\" /> <style> * { margin: 0; padding: 0; box-sizing: border-box; } /* Common styles of project which are applied to body */ body { background-color: rgb(0, 21, 138); overflow: hidden; font-family: \"Dosis\", sans-serif; color: #fff; } /* Style to our heading */ h1 { display: flex; margin-top: 3em; justify-content: center; } .container { position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); } span { display: inline-block; width: 0.6em; height: 0.6em; border-radius: 50%; margin: 0 0.125em; background-color: rgb(235, 217, 217); opacity: 0; } /* Calling childs using nth-child() property */ span:last-child { animation: move-right 3s infinite; animation-delay: 100ms; background-color: #000; } span:nth-child(5) { animation: move 3s infinite; animation-delay: 200ms; background-color: rgb(41, 133, 22); } span:nth-child(4) { animation: move-right 3s infinite; animation-delay: 300ms; background-color: #000; } span:nth-child(3) { animation: move 3s infinite; animation-delay: 400ms; background-color: rgb(41, 133, 22); } span:nth-child(2) { animation: move-right 3s infinite; animation-delay: 500ms; background-color: #000; } span:first-child { animation: move 3s infinite; animation-delay: 600ms; background-color: rgb(41, 133, 22); } /* Animations effect */ @keyframes move { 0% { transform: translateX(-31em); opacity: 0; } 30%, 60% { transform: translateX(0); opacity: 1; } 100% { transform: translateX(31em); opacity: 0; } } @keyframes move-right { 0% { transform: translateX(31em); opacity: 0; } 30%, 60% { transform: translateX(0); opacity: 1; } 100% { transform: translateX(-31em); opacity: 0; } } </style> </head> <body> <h1>Windows-Loading-Effect</h1> <div class=\"container\"> <span></span> <span></span> <span></span> <span></span> <span></span> </div> </body></html>",
"e": 32779,
"s": 29213,
"text": null
},
{
"code": null,
"e": 32787,
"s": 32779,
"text": "Output:"
},
{
"code": null,
"e": 32810,
"s": 32787,
"text": "Windows loading effect"
},
{
"code": null,
"e": 32947,
"s": 32810,
"text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course."
},
{
"code": null,
"e": 32962,
"s": 32947,
"text": "CSS-Properties"
},
{
"code": null,
"e": 32976,
"s": 32962,
"text": "CSS-Questions"
},
{
"code": null,
"e": 32991,
"s": 32976,
"text": "HTML-Questions"
},
{
"code": null,
"e": 33015,
"s": 32991,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 33019,
"s": 33015,
"text": "CSS"
},
{
"code": null,
"e": 33024,
"s": 33019,
"text": "HTML"
},
{
"code": null,
"e": 33043,
"s": 33024,
"text": "Technical Scripter"
},
{
"code": null,
"e": 33060,
"s": 33043,
"text": "Web Technologies"
},
{
"code": null,
"e": 33065,
"s": 33060,
"text": "HTML"
},
{
"code": null,
"e": 33163,
"s": 33065,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 33200,
"s": 33163,
"text": "Design a web page using HTML and CSS"
},
{
"code": null,
"e": 33229,
"s": 33200,
"text": "Form validation using jQuery"
},
{
"code": null,
"e": 33268,
"s": 33229,
"text": "How to set space between the flexbox ?"
},
{
"code": null,
"e": 33310,
"s": 33268,
"text": "Search Bar using HTML, CSS and JavaScript"
},
{
"code": null,
"e": 33345,
"s": 33310,
"text": "How to style a checkbox using CSS?"
},
{
"code": null,
"e": 33405,
"s": 33345,
"text": "How to set the default value for an HTML <select> element ?"
},
{
"code": null,
"e": 33466,
"s": 33405,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
},
{
"code": null,
"e": 33519,
"s": 33466,
"text": "Hide or show elements in HTML using display property"
},
{
"code": null,
"e": 33569,
"s": 33519,
"text": "How to Insert Form Data into Database using PHP ?"
}
]
|
Smallest distinct window | Practice | GeeksforGeeks | Given a string 's'. The task is to find the smallest window length that contains all the characters of the given string at least one time.
For example, if A = aabcbcdbca, then the result would be 4, as the smallest such window is "dbca".
Example 1:
Input : "AABBBCBBAC"
Output : 3
Explanation : Sub-string -> "BAC"
Input : "aaab"
Output : 2
Explanation : Sub-string -> "ab"
Input : "GEEKSGEEKSFOR"
Output : 8
Explanation : Sub-string -> "GEEKSFOR"
Your Task:
You don't need to read input or print anything. Your task is to complete the function findSubString() which takes the string S as input and returns the length of the smallest such window of the string.
Expected Time Complexity: O(256.N)
Expected Auxiliary Space: O(256)
Constraints:
1 ≤ |S| ≤ 10^5
The string may contain both uppercase and lowercase English alphabets.
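Before the community solutions below, here is a compact Python sketch of the standard two-pointer (sliding window) approach. It is an illustration only: the function name and the fact that it returns the window itself rather than its length are my own choices, not the judge's required signature.

def find_sub_string(s: str) -> str:
    # Smallest window containing every distinct character of s,
    # using the classic expand/shrink two-pointer technique.
    need = len(set(s))                  # number of distinct characters to cover
    freq = {}                           # character counts inside the window
    best = s                            # the whole string is always a valid window
    left = 0
    for right, ch in enumerate(s):
        freq[ch] = freq.get(ch, 0) + 1
        while len(freq) == need:        # window currently covers all characters
            if right - left + 1 < len(best):
                best = s[left:right + 1]
            freq[s[left]] -= 1          # shrink from the left
            if freq[s[left]] == 0:
                del freq[s[left]]
            left += 1
    return best

print(find_sub_string("AABBBCBBAC"))     # BAC
print(find_sub_string("GEEKSGEEKSFOR"))  # GEEKSFOR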
+1
bhatshripad18 · 1 month ago
Can someone please explain why we aren't decreasing the counter when shrinking the window, as there might be a possibility that we move to a window where a certain character is not present?
+1
aloksinghbais02 · 2 months ago
C++ solution with time complexity O(N*127) and space complexity O(127) is as follows:
Execution Time :- 0.2 / 3.2 sec
string findSubString(string str){
    int n = str.length();
    int freq[127] = {0};
    int tdc = 0; // total distinct characters
    for(int i = 0; i < n; i++){
        if(freq[str[i]] == 0) tdc++;
        freq[str[i]]++;
    }
    memset(freq, 0, sizeof(freq));
    int cnt = 0;
    int st = 0;
    int size = n;
    int i = 0, j = 0;
    while(j < n){
        if(freq[str[j]] == 0) cnt++;
        freq[str[j]]++;
        while(cnt == tdc){
            if(j-i+1 < size){
                st = i;
                size = j-i+1;
            }
            freq[str[i]]--;
            if(freq[str[i++]] == 0){
                cnt--;
            }
        }
        j++;
    }
    return (str.substr(st, size));
}
0
parthprajapati97239 · 2 months ago
/* Understandable code in c++ */
string findSubString(string str) {
    int n = str.size();
    vector<int> hash(260, 0);
    for(int i = 0; i < n; i++) {
        hash[str[i]]++;
    }
    int uniq = 0;
    for(int i = 0; i < 260; i++) {
        if(hash[i] != 0) uniq++;
        hash[i] = 0;
    }
    int start = 0; // to store latest index of occurrence of any character
    unordered_map<char,int> mp;
    string ans = "";
    int len = n;
    for(int i = 0; i < n; i++) {
        mp[str[i]] = i;
        int mn_start = i;
        for(auto it : mp) {
            mn_start = min(mn_start, it.second);
        }
        if(mp.size() == uniq) {
            if(len >= (i - mn_start + 1)) {
                len = i - mn_start + 1;
                ans = str.substr(mn_start, len);
            }
        }
    }
    return ans;
}
0
manishkumar25031999 · 2 months ago
string findSubString(string str) {
    unordered_map<char,int> mp;
    for(int i = 0; i < str.length(); i++)
        mp[str[i]]++;
    int size = mp.size();
    mp.clear();
    int count = 0, start = 0, sindex = -1, len = INT_MAX;
    for(int i = 0; i < str.length(); i++) {
        mp[str[i]]++;
        int count = mp.size();
        if(count == size) {
            while(mp[str[start]] > 1) {
                if(mp[str[start]] > 1)
                    mp[str[start]]--;
                start++;
            }
            int len_window = i - start + 1;
            if(len_window < len) {
                len = len_window;
                sindex = start;
            }
        }
    }
    return str.substr(sindex, len);
}
0
sankarshansiddhanti678 · 2 months ago
Readable C++ code using the sliding window technique:
string findSubString(string str) {
    unordered_set<int> distinctChars;
    for(char c : str) {
        distinctChars.insert(c);
    }
    int totalDistinctCount = distinctChars.size();
    int ans_start, ans_end, win_start, win_end;
    ans_start = ans_end = win_start = 0;
    win_end = 1;
    unordered_map<char,int> frequency;
    frequency[str[0]]++;
    int curr_dist_count = 1;
    int minSizeSoFar = INT_MAX;
    while(win_start <= win_end && win_end < str.length()) {
        if(curr_dist_count < totalDistinctCount) {
            if(frequency[str[win_end]] == 0) {
                curr_dist_count++;
            }
            frequency[str[win_end]]++;
            win_end++;
        }
        else if(curr_dist_count == totalDistinctCount) {
            int curr_win_size = win_end - win_start;
            if(curr_win_size < minSizeSoFar) {
                ans_start = win_start;
                ans_end = win_end;
                minSizeSoFar = curr_win_size;
            }
            if(frequency[str[win_start]] == 1) curr_dist_count--;
            frequency[str[win_start]]--;
            win_start++;
        }
        while(curr_dist_count == totalDistinctCount) {
            int curr_win_size = win_end - win_start;
            if(curr_win_size < minSizeSoFar) {
                ans_start = win_start;
                ans_end = win_end;
                minSizeSoFar = curr_win_size;
            }
            if(frequency[str[win_start]] == 1) curr_dist_count--;
            frequency[str[win_start]]--;
            win_start++;
        }
    }
    string smallest_distinct_window = str.substr(ans_start, minSizeSoFar);
    return smallest_distinct_window;
}
+2
202001043 · 2 months ago
//Easy to understand
string findSubString(string str) {
    int n = str.length();
    unordered_map<char,int> m;
    for(auto i : str) m[i]++;
    int c = m.size();
    int ans = -1, len = n;
    m.clear();
    for(int i = 0; i < n; i++) {
        m[str[i]] = i;
        if(m.size() == c) {
            int mi = n;
            for(auto i : m) {
                mi = min(mi, i.second);
            }
            if(ans == -1 || len > (i - mi + 1)) {
                ans = mi;
                len = i - mi + 1;
            }
        }
    }
    return str.substr(ans, len);
}
0
codecrackerp · 3 months ago
class Solution{
public:
    string findme(string str, map<int,int> m){
        int start = -1, end = -1;
        string ans;
        for(int i = 0; i < str.length(); i++){
            m[str[i]]--;
            if(m[str[i]] == 0){
                start = i;
                break;
            }
        }
        for(int i = str.length() - 1; i >= 0; i--){
            if(m[str[i]] > 0){
                m[str[i]]--;
            }
            if(m[str[i]] == 0){
                end = i;
                break;
            }
        }
        for(int i = start; i <= end; i++){
            ans.push_back(str[i]);
        }
        return ans;
    }

    string findSubString(string str) {
        map<int,int> m;
        for(int i = 0; i < str.length(); i++){
            m[str[i]]++;
        }
        string ans1 = findme(str, m);
        reverse(str.begin(), str.end());
        string ans2 = findme(str, m);
        if(ans1.length() <= ans2.length()){
            return ans1;
        }
        return ans2;
    }
};
I am not getting why this code is giving TLE; if anyone is able to find the glitch, please help.
0
brajmohanst9661 · 3 months ago
public String findSubString(String str) {
    // Your code goes here
    int pattern[] = new int[256];
    String res = "";
    int n = str.length(), n1 = 0, count = 0, minlen = Integer.MAX_VALUE, start = 0;
    int org[] = new int[256];
    for(int i = 0; i < n; i++)
        if(pattern[str.charAt(i)] == 0) {
            n1++;
            pattern[str.charAt(i)]++;
        }
    for(int i = 0; i < n; i++) {
        org[str.charAt(i)]++;
        if(org[str.charAt(i)] == pattern[str.charAt(i)]) count++;
        if(count == n1) {
            while(org[str.charAt(start)] > pattern[str.charAt(start)]
                    || pattern[str.charAt(start)] == 0) {
                if(org[str.charAt(start)] > pattern[str.charAt(start)])
                    org[str.charAt(start)]--;
                start++;
            }
            if(minlen > i - start + 1) {
                minlen = i - start + 1;
                res = str.substring(start, i + 1);
            }
        }
    }
    return res;
}
+4
gaurabhkumarjha27102001 · 3 months ago
int n= str.length();
unordered_map < char, int > m;
int i=0, j=0, maxx= INT_MAX;
string res;
for (int i=0; i< n; i++){
// only those elements appear only one time.
m[str[i]]= 0;
}
int cnt=0;
while (i< n){
if (m[str[i]] == 0)
cnt ++;
// expand the window the increase by 1.
m[str[i]]+= 1;
if (cnt == m.size()){
while (j < n and m[str[j]] > 1){
// shrink the window because duplicate elements
m[str[j]]--;
j++;
}
if (maxx > (i-j+1)){
maxx= i-j+1;
res= str.substr (j, i-j+1);
}
}
i++;
}
return res;
0
krushimonpara24 · 3 months ago
Can anyone help me? My code does not work when the string contains only one distinct character.
class Solution {
    public String findSubString(String str) {
        HashSet<Character> set = new HashSet<Character>();
        for(int i = 0; i < str.length(); i++) {
            set.add(str.charAt(i));
        }
        int distCounter = set.size();
        int start = 0;
        int startIndex = 0;
        int counter = 0;
        int min_length = Integer.MAX_VALUE;
        int[] visited = new int[256];
        for(int i = 0; i < str.length(); i++) {
            visited[str.charAt(i) - 65]++;
            if(visited[str.charAt(i) - 65] == 1) counter++;
            if(counter == distCounter) {
                while(visited[str.charAt(start) - 65] > 1) {
                    if(visited[str.charAt(start) - 65] > 1) {
                        visited[str.charAt(start) - 65]--;
                        start++;
                    }
                    int cur_length = i - start + 1;
                    if(cur_length < min_length) {
                        min_length = cur_length;
                        startIndex = start;
                    }
                }
            }
        }
        return str.substring(startIndex, startIndex + min_length);
    }
}
| [
You can view the solutions submitted by other users from the submission tab. | [
{
"code": null,
"e": 468,
"s": 238,
"text": "Given a string 's'. The task is to find the smallest window length that contains all the characters of the given string at least one time.\nFor eg. A = aabcbcdbca, then the result would be 4 as of the smallest window will be dbca."
},
{
"code": null,
"e": 481,
"s": 470,
"text": "Example 1:"
},
{
"code": null,
"e": 548,
"s": 481,
"text": "Input : \"AABBBCBBAC\"\nOutput : 3\nExplanation : Sub-string -> \"BAC\"\n"
},
{
"code": null,
"e": 607,
"s": 548,
"text": "Input : \"aaab\"\nOutput : 2\nExplanation : Sub-string -> \"ab\""
},
{
"code": null,
"e": 681,
"s": 607,
"text": "Input : \"GEEKSGEEKSFOR\"\nOutput : 8\nExplanation : Sub-string -> \"GEEKSFOR\""
},
{
"code": null,
"e": 900,
"s": 683,
"text": "\nYour Task: \nYou don't need to read input or print anything. Your task is to complete the function findSubString() which takes the string S as input and returns the length of the smallest such window of the string."
},
{
"code": null,
"e": 969,
"s": 900,
"text": "\nExpected Time Complexity: O(256.N)\nExpected Auxiliary Space: O(256)"
},
{
"code": null,
"e": 1049,
"s": 971,
"text": "Constraints:\n1 ≤ |S| ≤ 105\nString may contain both type of English Alphabets."
},
{
"code": null,
"e": 1052,
"s": 1049,
"text": "+1"
},
{
"code": null,
"e": 1077,
"s": 1052,
"text": "bhatshripad181 month ago"
},
{
"code": null,
"e": 1273,
"s": 1077,
"text": "Can someone please explain me, why aren't we decreasing counter when shrinking the window,as there might be possibility that we may move to window where certain character is not present in window"
},
{
"code": null,
"e": 1276,
"s": 1273,
"text": "+1"
},
{
"code": null,
"e": 1304,
"s": 1276,
"text": "aloksinghbais022 months ago"
},
{
"code": null,
"e": 1401,
"s": 1304,
"text": "C++ solution having time complexity as O(N*127) and space complexity as O(127) is as follows :- "
},
{
"code": null,
"e": 1435,
"s": 1403,
"text": "Execution Time :- 0.2 / 3.2 sec"
},
{
"code": null,
"e": 2240,
"s": 1437,
"text": "string findSubString(string str){ int n = str.length(); int freq[127] = {0}; int tdc = 0; // total distinct character for(int i = 0; i < n; i++){ if(freq[str[i]] == 0) tdc++; freq[str[i]]++; } memset(freq,0,sizeof(freq)); int cnt = 0; int st = 0; int size = n; int i = 0, j = 0; while(j < n){ if(freq[str[j]] == 0) cnt++; freq[str[j]]++; while(cnt == tdc){ if(j-i+1 < size){ st = i; size = j-i+1; } freq[str[i]]--; if(freq[str[i++]] == 0){ cnt--; } } j++; } return (str.substr(st,size)); }"
},
{
"code": null,
"e": 2242,
"s": 2240,
"text": "0"
},
{
"code": null,
"e": 2274,
"s": 2242,
"text": "parthprajapati972392 months ago"
},
{
"code": null,
"e": 2307,
"s": 2274,
"text": "/* Understandable code in c++ */"
},
{
"code": null,
"e": 3265,
"s": 2309,
"text": "string findSubString(string str) { int n = str.size(); vector<int>hash(260,0); for(int i=0;i<n;i++) { hash[str[i]]++; } int uniq = 0; for(int i=0;i<260;i++) { if(hash[i] != 0) uniq++; hash[i] = 0; } int start = 0; //to store latest index of occurance of any character. unordered_map<char,int>mp; string ans = \"\"; int len = n; for(int i=0;i<n;i++) { mp[str[i]] = i; int mn_start = i; for(auto it : mp) { mn_start = min(mn_start , it.second); } if(mp.size() == uniq) { if(len >= (i-mn_start + 1)) { len = i-mn_start+1; ans = \"\"; ans = str.substr(mn_start,len); } } } return ans; }"
},
{
"code": null,
"e": 3267,
"s": 3265,
"text": "0"
},
{
"code": null,
"e": 3299,
"s": 3267,
"text": "manishkumar250319992 months ago"
},
{
"code": null,
"e": 4126,
"s": 3299,
"text": " string findSubString(string str) { unordered_map<char,int>mp; for(int i=0;i<str.length();i++) mp[str[i]]++; int size=mp.size(); mp.clear(); int count=0,start=0,sindex=-1,len=INT_MAX; for(int i=0;i<str.length();i++) { mp[str[i]]++; int count=mp.size(); if(count==size) { while(mp[str[start]]>1) { if(mp[str[start]]>1) mp[str[start]]--; start++; } int len_window=i-start+1; if(len_window<len) { len=len_window; sindex=start; } } } return str.substr(sindex,len);"
},
{
"code": null,
"e": 4128,
"s": 4126,
"text": "0"
},
{
"code": null,
"e": 4163,
"s": 4128,
"text": "sankarshansiddhanti6782 months ago"
},
{
"code": null,
"e": 4213,
"s": 4163,
"text": "readable c++ code using sliding window technique:"
},
{
"code": null,
"e": 6209,
"s": 4215,
"text": "string findSubString(string str) { unordered_set<int> distinctChars; for(char c: str) { distinctChars.insert(c); } int totalDistinctCount = distinctChars.size(); //cout<<totalDistinctCount; int ans_start,ans_end, win_start, win_end; ans_start = ans_end = win_start = 0; win_end = 1; unordered_map<char,int> frequency; frequency[str[0]]++; int curr_dist_count = 1; int minSizeSoFar = INT_MAX; while( win_start <= win_end && win_end < str.length() ) { if( curr_dist_count < totalDistinctCount ) { if(frequency[str[win_end]] == 0) { curr_dist_count++; } frequency[str[win_end]]++; win_end++; } else if( curr_dist_count == totalDistinctCount ) { int curr_win_size = win_end - win_start; if( curr_win_size < minSizeSoFar ) { ans_start = win_start; ans_end = win_end; minSizeSoFar = curr_win_size; } if( frequency[str[win_start]] == 1 ) curr_dist_count--; frequency[str[win_start]]--; win_start++; } while( curr_dist_count == totalDistinctCount ) { int curr_win_size = win_end - win_start; if( curr_win_size < minSizeSoFar ) { ans_start = win_start; ans_end = win_end; minSizeSoFar = curr_win_size; } if( frequency[str[win_start]] == 1 ) curr_dist_count--; frequency[str[win_start]]--; win_start++; } } string smallest_distinct_window = str.substr(ans_start,minSizeSoFar); return smallest_distinct_window; }"
},
{
"code": null,
"e": 6214,
"s": 6211,
"text": "+2"
},
{
"code": null,
"e": 6236,
"s": 6214,
"text": "2020010432 months ago"
},
{
"code": null,
"e": 6257,
"s": 6236,
"text": "//Easy to understand"
},
{
"code": null,
"e": 6804,
"s": 6257,
"text": "string findSubString(string str) { int n=str.length(); unordered_map<char,int> m; for(auto i:str)m[i]++; int c=m.size(); int ans=-1,len=n; m.clear(); for(int i=0;i<n;i++){ m[str[i]]=i; if(m.size()==c){ int mi=n; for(auto i:m){ mi=min(mi,i.second); } if(ans==-1 || len>(i-mi+1)){ ans=mi; len=i-mi+1; } } } return str.substr(ans,len); }"
},
{
"code": null,
"e": 6806,
"s": 6804,
"text": "0"
},
{
"code": null,
"e": 6831,
"s": 6806,
"text": "codecrackerp3 months ago"
},
{
"code": null,
"e": 7721,
"s": 6831,
"text": "class Solution{ public: string findme(string str,map<int,int>m){ int start=-1,end=-1; string ans; for(int i=0;i<str.length();i++){ m[str[i]]--; if(m[str[i]]==0){ start=i; break; } } for(int i=str.length()-1;i>=0;i--){ if(m[str[i]]>0){ m[str[i]]--; } if(m[str[i]]==0){ end=i; break; } } for(int i=start;i<=end;i++){ ans.push_back(str[i]); } return ans; } string findSubString(string str) { map<int,int>m; for(int i=0;i<str.length();i++){ m[str[i]]++; } string ans1=findme(str,m); reverse(str.begin(),str.end()); string ans2=findme(str,m); if(ans1.length()<=ans2.length()){ return ans1; } return ans2; }};"
},
{
"code": null,
"e": 7826,
"s": 7723,
"text": "I am not getting why this code is giving TLE if any one of u are able to find the glitch then plz help"
},
{
"code": null,
"e": 7828,
"s": 7826,
"text": "0"
},
{
"code": null,
"e": 7856,
"s": 7828,
"text": "brajmohanst96613 months ago"
},
{
"code": null,
"e": 9231,
"s": 7856,
"text": " public String findSubString( String str) { // Your code goes here int pattern [] = new int[256]; String res = \"\"; int n = str.length() ,n1 = 0, count = 0 , minlen = Integer.MAX_VALUE , start = 0; int org[] =new int[256]; for(int i = 0 ; i < n ; i++ ) if(pattern[str.charAt(i)] == 0) { n1++; pattern[str.charAt(i)]++; } for(int i = 0 ; i < n ; i++ ){ org[str.charAt(i)]++; if( org[str.charAt(i)] == pattern[str.charAt(i)]) count++; if(count == n1){ while(org[str.charAt(start)] > pattern[str.charAt(start)] || pattern[str.charAt(start)] == 0){ if(org[str.charAt(start)] > pattern[str.charAt(start)]) org[str.charAt(start)]--; start++; } if( minlen > i - start + 1){ minlen = i - start + 1; res = str.substring(start , i + 1); } } } return res; } "
},
{
"code": null,
"e": 9234,
"s": 9231,
"text": "+4"
},
{
"code": null,
"e": 9270,
"s": 9234,
"text": "gaurabhkumarjha271020013 months ago"
},
{
"code": null,
"e": 10290,
"s": 9270,
"text": " int n= str.length();\n \n unordered_map < char, int > m;\n int i=0, j=0, maxx= INT_MAX;\n string res;\n \n for (int i=0; i< n; i++){\n // only those elements appear only one time. \n m[str[i]]= 0; \n \n }\n int cnt=0;\n while (i< n){\n \n if (m[str[i]] == 0)\n cnt ++;\n // expand the window the increase by 1. \n m[str[i]]+= 1; \n \n \n if (cnt == m.size()){\n \n while (j < n and m[str[j]] > 1){\n \n// shrink the window because duplicate elements \n m[str[j]]--; \n \n j++;\n }\n \n if (maxx > (i-j+1)){\n \n maxx= i-j+1;\n res= str.substr (j, i-j+1);\n }\n \n \n }\n i++;\n }\n \n return res;"
},
{
"code": null,
"e": 10292,
"s": 10290,
"text": "0"
},
{
"code": null,
"e": 10320,
"s": 10292,
"text": "krushimonpara243 months ago"
},
{
"code": null,
"e": 10387,
"s": 10320,
"text": "Anyone Help me this code only one character time not work my code."
},
{
"code": null,
"e": 11607,
"s": 10391,
"text": "class Solution { public String findSubString( String str) { HashSet<Character> set = new HashSet<Character>(); for(int i=0;i<str.length();i++){ set.add(str.charAt(i)); } int distCounter=set.size(); int start=0; int startIndex=0; int counter=0; int min_length =Integer.MAX_VALUE; int[] visited = new int[256]; for(int i=0;i<str.length();i++){ visited[str.charAt(i)-65]++; if(visited[str.charAt(i)-65]==1) counter++; if(counter==distCounter){ while(visited[str.charAt(start)-65]>1){ if(visited[str.charAt(start)-65]>1){ visited[str.charAt(start)-65]--; start++; } int cur_length = i-start+1; if(cur_length<min_length){ min_length=cur_length; startIndex = start; } } } } return str.substring(startIndex,startIndex+min_length); }}"
},
{
"code": null,
"e": 11753,
"s": 11607,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 11789,
"s": 11753,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 11799,
"s": 11789,
"text": "\nProblem\n"
},
{
"code": null,
"e": 11809,
"s": 11799,
"text": "\nContest\n"
},
{
"code": null,
"e": 11872,
"s": 11809,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 12020,
"s": 11872,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 12228,
"s": 12020,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 12334,
"s": 12228,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
]
|
Difference Between predict and predict_proba in scikit-learn | Towards Data Science | When training models (and more precisely supervised estimators) with sklearn, we sometimes need to predict the actual class while in some other occasions we may want to predict the class probabilities.
In today’s article we will discuss how to use predict and predict_proba methods over a dataset in order to perform predictions. Additionally, we’ll explore the differences between these methods and discuss when to use one over the other.
First, let’s create an example model that we’ll reference throughout this article in order to demonstrate a few concepts. In our examples, we will be using the Iris dataset which is also included in sklearn.datasets module of scikit-learn. This will be a classification task in which we need to identify and correctly predict three distinct types of irises, namely Setosa, Versicolour, and Virginica from the petal and sepal dimensions (length and width).
import numpy as np
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier

# Load the Iris dataset
iris_X, iris_y = datasets.load_iris(return_X_y=True)

# Split Iris dataset into train/test sets randomly
np.random.seed(0)
indices = np.random.permutation(len(iris_X))
iris_X_train = iris_X[indices[:-10]]
iris_y_train = iris_y[indices[:-10]]
iris_X_test = iris_X[indices[-10:]]
iris_y_test = iris_y[indices[-10:]]

# Instantiate and fit a KNeighbors classifier
knn = KNeighborsClassifier()
knn.fit(iris_X_train, iris_y_train)
All supervised estimators in scikit-learn implement the predict() method that can be executed on a trained model in order to predict the actual label (or class) over a new set of data.
The method accepts a single argument that corresponds to the data over which the predictions will be made and it returns an array containing the predicted label for each data point.
predictions = knn.predict(iris_X_test)
print(predictions)

array([1, 2, 1, 0, 0, 0, 2, 1, 2, 0])
In the context of classification tasks, some sklearn estimators also implement the predict_proba method that returns the class probabilities for each data point.
The method accepts a single argument that corresponds to the data over which the probabilities will be computed and returns an array of lists containing the class probabilities for the input data points.
predictions = knn.predict_proba(iris_X_test)
print(predictions)

array([[0. , 1. , 0. ],
       [0. , 0.4, 0.6],
       [0. , 1. , 0. ],
       [1. , 0. , 0. ],
       [1. , 0. , 0. ],
       [1. , 0. , 0. ],
       [0. , 0. , 1. ],
       [0. , 1. , 0. ],
       [0. , 0. , 1. ],
       [1. , 0. , 0. ]])
In today’s article we discussed how to perform predictions over data using a pre-trained scikit-learn model. Additionally, we explored the main differences between the methods predict and predict_proba which are implemented by estimators of scikit-learn.
The predict method is used to predict the actual class while predict_proba method can be used to infer the class probabilities (i.e. the probability that a particular data point falls into the underlying classes).
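As a quick sanity check (continuing the KNN example above, so knn and iris_X_test are assumed to be already defined), the two methods are consistent: mapping the most probable column back through the estimator's classes_ attribute recovers the output of predict.

import numpy as np

proba = knn.predict_proba(iris_X_test)
labels = knn.predict(iris_X_test)
# Each row of proba sums to 1; the column with the highest probability
# corresponds to the class that predict returns for that data point.
assert (knn.classes_[np.argmax(proba, axis=1)] == labels).all()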
| [
{
"code": null,
"e": 374,
"s": 172,
"text": "When training models (and more precisely supervised estimators) with sklearn, we sometimes need to predict the actual class while in some other occasions we may want to predict the class probabilities."
},
{
"code": null,
"e": 612,
"s": 374,
"text": "In today’s article we will discuss how to use predict and predict_proba methods over a dataset in order to perform predictions. Additionally, we’ll explore the differences between these methods and discuss when to use one over the other."
},
{
"code": null,
"e": 1068,
"s": 612,
"text": "First, let’s create an example model that we’ll reference throughout this article in order to demonstrate a few concepts. In our examples, we will be using the Iris dataset which is also included in sklearn.datasets module of scikit-learn. This will be a classification task in which we need to identify and correctly predict three distinct types of irises, namely Setosa, Versicolour, and Virginica from the petal and sepal dimensions (length and width)."
},
{
"code": null,
"e": 1601,
"s": 1068,
"text": "import numpy as npfrom sklearn import datasetsfrom sklearn.neighbors import KNeighborsClassifier# Load the Iris datasetiris_X, iris_y = datasets.load_iris(return_X_y=True)# Split Iris dataset into train/test sets randomlynp.random.seed(0)indices = np.random.permutation(len(iris_X))iris_X_train = iris_X[indices[:-10]]iris_y_train = iris_y[indices[:-10]]iris_X_test = iris_X[indices[-10:]]iris_y_test = iris_y[indices[-10:]]# Instantiate and fit a KNeighbors classifierknn = KNeighborsClassifier()knn.fit(iris_X_train, iris_y_train)"
},
{
"code": null,
"e": 1786,
"s": 1601,
"text": "All supervised estimators in scikit-learn implement the predict() method that can be executed on a trained model in order to predict the actual label (or class) over a new set of data."
},
{
"code": null,
"e": 1968,
"s": 1786,
"text": "The method accepts a single argument that corresponds to the data over which the predictions will be made and it returns an array containing the predicted label for each data point."
},
{
"code": null,
"e": 2062,
"s": 1968,
"text": "predictions = knn.predict(iris_X_test)print(predictions)array([1, 2, 1, 0, 0, 0, 2, 1, 2, 0])"
},
{
"code": null,
"e": 2224,
"s": 2062,
"text": "In the context of classification tasks, some sklearn estimators also implement the predict_proba method that returns the class probabilities for each data point."
},
{
"code": null,
"e": 2428,
"s": 2224,
"text": "The method accepts a single argument that corresponds to the data over which the probabilities will be computed and returns an array of lists containing the class probabilities for the input data points."
},
{
"code": null,
"e": 2722,
"s": 2428,
"text": "predictions = knn.predict_proba(iris_X_test)print(predictions)array([[0. , 1. , 0. ], [0. , 0.4, 0.6], [0. , 1. , 0. ], [1. , 0. , 0. ], [1. , 0. , 0. ], [1. , 0. , 0. ], [0. , 0. , 1. ], [0. , 1. , 0. ], [0. , 0. , 1. ], [1. , 0. , 0. ]])"
},
{
"code": null,
"e": 2977,
"s": 2722,
"text": "In today’s article we discussed how to perform predictions over data using a pre-trained scikit-learn model. Additionally, we explored the main differences between the methods predict and predict_proba which are implemented by estimators of scikit-learn."
},
{
"code": null,
"e": 3191,
"s": 2977,
"text": "The predict method is used to predict the actual class while predict_proba method can be used to infer the class probabilities (i.e. the probability that a particular data point falls into the underlying classes)."
},
{
"code": null,
"e": 3308,
"s": 3191,
"text": "Become a member and read every story on Medium. Your membership fee directly supports me and other writers you read."
}
]
|
Extra brackets with function names in C/C++ - GeeksforGeeks | 21 Jun, 2018
Consider the below C program. The program has extra brackets around the function name.
// C program to show that extra brackets with function
// name work
#include <stdio.h>

void (foo)(int n)
{
    printf("Function : %d ", n);
}

int main()
{
    (foo)(4);
    return 0;
}
Output:
Function : 4
So putting extra brackets around a function name works in C/C++.
What can be the use of it? One use could be: if we have a macro with the same name as the function, then the extra brackets avoid macro expansion wherever we want the actual function to be called.
// C program to show that extra brackets with function
// name can be useful if we have a macro with same name
#include <stdio.h>
#define foo(n) printf("\nMacro : %d ", n);

void (foo)(int n)
{
    printf("Function : %d ", n);
}

int main()
{
    (foo)(4);
    foo(4);
    return 0;
}
Output:
Function : 4
Macro : 4
| [
{
"code": null,
"e": 24232,
"s": 24204,
"text": "\n21 Jun, 2018"
},
{
"code": null,
"e": 24310,
"s": 24232,
"text": "Consider below C program. The program has extra bracket around function name."
},
{
"code": "// C program to show that extra brackets with function// name work#include <stdio.h> void (foo)(int n){ printf(\"Function : %d \", n);} int main(){ (foo)(4); return 0; }",
"e": 24487,
"s": 24310,
"text": null
},
{
"code": null,
"e": 24495,
"s": 24487,
"text": "Output:"
},
{
"code": null,
"e": 24506,
"s": 24495,
"text": "Function 4"
},
{
"code": null,
"e": 24566,
"s": 24506,
"text": "So putting extra bracket with function name works in C/C++."
},
{
"code": null,
"e": 24739,
"s": 24566,
"text": "What can be use of it?One use could be, if we have a macro with same name as function, then extra brackets avoid macro expansion wherever we want the function to be called."
},
{
"code": "// C program to show that extra brackets with function// name can be useful if we have a macro with same name#include <stdio.h>#define foo(n) printf(\"\\nMacro : %d \", n); void (foo)(int n){ printf(\"Function : %d \", n);} int main(){ (foo)(4); foo(4); return 0;}",
"e": 25010,
"s": 24739,
"text": null
},
{
"code": null,
"e": 25018,
"s": 25010,
"text": "Output:"
},
{
"code": null,
"e": 25039,
"s": 25018,
"text": "Function 4\nMacro : 4"
},
{
"code": null,
"e": 25260,
"s": 25039,
"text": "If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks."
},
{
"code": null,
"e": 25384,
"s": 25260,
"text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above"
},
{
"code": null,
"e": 25398,
"s": 25384,
"text": "CPP-Functions"
},
{
"code": null,
"e": 25409,
"s": 25398,
"text": "C Language"
},
{
"code": null,
"e": 25413,
"s": 25409,
"text": "C++"
},
{
"code": null,
"e": 25417,
"s": 25413,
"text": "CPP"
},
{
"code": null,
"e": 25515,
"s": 25417,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25553,
"s": 25515,
"text": "TCP Server-Client implementation in C"
},
{
"code": null,
"e": 25579,
"s": 25553,
"text": "Exception Handling in C++"
},
{
"code": null,
"e": 25599,
"s": 25579,
"text": "Multithreading in C"
},
{
"code": null,
"e": 25621,
"s": 25599,
"text": "'this' pointer in C++"
},
{
"code": null,
"e": 25662,
"s": 25621,
"text": "Arrow operator -> in C/C++ with Examples"
},
{
"code": null,
"e": 25680,
"s": 25662,
"text": "Vector in C++ STL"
},
{
"code": null,
"e": 25726,
"s": 25680,
"text": "Initialize a vector in C++ (6 different ways)"
},
{
"code": null,
"e": 25745,
"s": 25726,
"text": "Inheritance in C++"
},
{
"code": null,
"e": 25788,
"s": 25745,
"text": "Map in C++ Standard Template Library (STL)"
}
]
|
Fetch records in MongoDB on querying its subset | You can use the $all operator. Let us first create a collection with documents −
> db.subsetOfAnArrayDemo.insertOne({"StudentProgrammingSkills":
["Java","MongoDB","MySQL","C++","Data Structure","Algorithm","Python","Oracle","SQL Server"]});
{
"acknowledged" : true,
"insertedId" : ObjectId("5cb9d1e1895c4fd159f80804")
}
Following is the query to display all documents from the collection with the help of the find() method −
> db.subsetOfAnArrayDemo.find().pretty();
This will produce the following output −
{
"_id" : ObjectId("5cb9d1e1895c4fd159f80804"),
"StudentProgrammingSkills" : [
"Java",
"MongoDB",
"MySQL",
"C++",
"Data Structure",
"Algorithm",
"Python",
"Oracle",
"SQL Server"
]
}
Following is the query to fetch documents whose array contains the given subset −
> db.subsetOfAnArrayDemo.find({ StudentProgrammingSkills:
{ $all: [ 'MongoDB', 'MySQL' ] } } ).pretty();
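For reference, the same conditional fetch could be issued from Python. This is a sketch only: it assumes the pymongo driver is installed, a MongoDB server runs on the default localhost:27017, and the collection lives in a database named test (the database name is an assumption, since the shell session above never names it).

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
coll = client["test"]["subsetOfAnArrayDemo"]

# $all matches documents whose array field contains every listed value
for doc in coll.find({"StudentProgrammingSkills": {"$all": ["MongoDB", "MySQL"]}}):
    print(doc)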
The above shell query will produce the following output −
{
"_id" : ObjectId("5cb9d1e1895c4fd159f80804"),
"StudentProgrammingSkills" : [
"Java",
"MongoDB",
"MySQL",
"C++",
"Data Structure",
"Algorithm",
"Python",
"Oracle",
"SQL Server"
]
} | [
{
"code": null,
"e": 1139,
"s": 1062,
"text": "You can use $all operator. Let us first create a collection with documents −"
},
{
"code": null,
"e": 1387,
"s": 1139,
"text": "> db.subsetOfAnArrayDemo.insertOne({\"StudentProgrammingSkills\":\n [\"Java\",\"MongoDB\",\"MySQL\",\"C++\",\"Data Structure\",\"Algorithm\",\"Python\",\"Oracle\",\"SQL Server\"]});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5cb9d1e1895c4fd159f80804\")\n}"
},
{
"code": null,
"e": 1488,
"s": 1387,
"text": "Following is the query to display all documents from the collection with the help of find() method −"
},
{
"code": null,
"e": 1530,
"s": 1488,
"text": "> db.subsetOfAnArrayDemo.find().pretty();"
},
{
"code": null,
"e": 1571,
"s": 1530,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1816,
"s": 1571,
"text": "{\n \"_id\" : ObjectId(\"5cb9d1e1895c4fd159f80804\"),\n \"StudentProgrammingSkills\" : [\n \"Java\",\n \"MongoDB\",\n \"MySQL\",\n \"C++\",\n \"Data Structure\",\n \"Algorithm\",\n \"Python\",\n \"Oracle\",\n \"SQL Server\"\n ]\n}"
},
{
"code": null,
"e": 1871,
"s": 1816,
"text": "Following is the query to get the subset of an array −"
},
{
"code": null,
"e": 1979,
"s": 1871,
"text": "> db.subsetOfAnArrayDemo.find({ StudentProgrammingSkills:\n { $all: [ 'MongoDB', 'MySQL' ] } } ).pretty();"
},
{
"code": null,
"e": 2020,
"s": 1979,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2265,
"s": 2020,
"text": "{\n \"_id\" : ObjectId(\"5cb9d1e1895c4fd159f80804\"),\n \"StudentProgrammingSkills\" : [\n \"Java\",\n \"MongoDB\",\n \"MySQL\",\n \"C++\",\n \"Data Structure\",\n \"Algorithm\",\n \"Python\",\n \"Oracle\",\n \"SQL Server\"\n ]\n}"
}
]
|
Expressive words problem case in JavaScript | Sometimes people repeat letters to represent extra feeling, such as "hello" −> "heeellooo", "hi" −> "hiiii". In these strings like "heeellooo", we have groups of adjacent letters that are all the same: "h", "eee", "ll", "ooo".
For some given string S, a query word is stretchy if it can be made to be equal to S by any number of applications of the following extension operation: choose a group consisting of characters c, and add some number of characters c to the group so that the size of the group is 3 or more.
For example, starting with "hello", we could do an extension on the group "o" to get "hellooo", but we cannot get "helloo" since the group "oo" has size less than 3. Also, we could do another extension like "ll" −> "lllll" to get "helllllooo". If S = "helllllooo", then the query word "hello" would be stretchy because of these two extension operations: query = "hello" −> "hellooo" −> "helllllooo" = S.
Given a list of query words, we are required to return the number of words that are stretchy.
For example −
If the input string is −
const str = 'heeellooo';
And the list of words is −
const words = ["hello", "hi", "helo"];
And the output should be −
const output = 1
The code for this will be −
const str = 'heeellooo';
const words = ["hello", "hi", "helo"];
const extraWords = (str, words) => {
let count = 0;
for (let w of words) {
let i = 0;
let j = 0;
for (; i < str.length && j < w.length && w[j] === str[i];) {
let lenS = 1;
let lenW = 1;
for (; i+lenS < str.length && str[i+lenS] === str[i]; lenS++);
for (; j+lenW < w.length && w[j+lenW] === w[j]; lenW++);
if (lenS < lenW || lenS > lenW && lenS < 3) break;
i += lenS;
j += lenW;
}
if (i === str.length && j === w.length) {
count++;
}
}
return count;
}
console.log(extraWords(str, words));
And the output in the console will be −
1 | [
{
"code": null,
"e": 1289,
"s": 1062,
"text": "Sometimes people repeat letters to represent extra feeling, such as \"hello\" −> \"heeellooo\", \"hi\" −> \"hiiii\". In these strings like \"heeellooo\", we have groups of adjacent letters that are all the same: \"h\", \"eee\", \"ll\", \"ooo\"."
},
{
"code": null,
"e": 1578,
"s": 1289,
"text": "For some given string S, a query word is stretchy if it can be made to be equal to S by any number of applications of the following extension operation: choose a group consisting of characters c, and add some number of characters c to the group so that the size of the group is\n3 or more."
},
{
"code": null,
"e": 1982,
"s": 1578,
"text": "For example, starting with \"hello\", we could do an extension on the group \"o\" to get \"hellooo\", but we cannot get \"helloo\" since the group \"oo\" has size less than 3. Also, we could do another extension like \"ll\" −> \"lllll\" to get \"helllllooo\". If S = \"helllllooo\", then the query word \"hello\" would be stretchy because of these two extension operations: query = \"hello\" −> \"hellooo\" −> \"helllllooo\" = S."
},
{
"code": null,
"e": 2076,
"s": 1982,
"text": "Given a list of query words, we are required to return the number of words that are stretchy."
},
{
"code": null,
"e": 2090,
"s": 2076,
"text": "For example −"
},
{
"code": null,
"e": 2115,
"s": 2090,
"text": "If the input string is −"
},
{
"code": null,
"e": 2140,
"s": 2115,
"text": "const str = 'heeellooo';"
},
{
"code": null,
"e": 2167,
"s": 2140,
"text": "And the list of words is −"
},
{
"code": null,
"e": 2206,
"s": 2167,
"text": "const words = [\"hello\", \"hi\", \"helo\"];"
},
{
"code": null,
"e": 2233,
"s": 2206,
"text": "And the output should be −"
},
{
"code": null,
"e": 2250,
"s": 2233,
"text": "const output = 1"
},
{
"code": null,
"e": 2278,
"s": 2250,
"text": "The code for this will be −"
},
{
"code": null,
"e": 2951,
"s": 2278,
"text": "const str = 'heeellooo';\nconst words = [\"hello\", \"hi\", \"helo\"];\nconst extraWords = (str, words) => {\n let count = 0;\n for (let w of words) {\n let i = 0;\n let j = 0;\n for (; i < str.length && j < w.length && w[j] === str[i];) {\n let lenS = 1;\n let lenW = 1;\n for (; i+lenS < str.length && str[i+lenS] === str[i]; lenS++);\n for (; j+lenW < w.length && w[j+lenW] === w[j]; lenW++);\n if (lenS < lenW || lenS > lenW && lenS < 3) break;\n i += lenS;\n j += lenW;\n }\n if (i === str.length && j === w.length) {\n count++;\n }\n }\n return count;\n}\nconsole.log(extraWords(str, words));"
},
{
"code": null,
"e": 2991,
"s": 2951,
"text": "And the output in the console will be −"
},
{
"code": null,
"e": 2993,
"s": 2991,
"text": "1"
}
]
|
Set the background color of an element with CSS | To set the background color of an element, use the background-color property.
You can try to run the following code to learn how to work with the background-color property:
<html>
   <head>
   </head>
   <body>
      <p style = "background-color:blue;">
         This text has a blue background color.</p>
   </body>
</html> | [
{
"code": null,
"e": 1140,
"s": 1062,
"text": "To set the background color of an element, use the background-color property."
},
{
"code": null,
"e": 1235,
"s": 1140,
"text": "You can try to run the following code to learn how to work with the background-color property:"
},
{
"code": null,
"e": 1395,
"s": 1235,
"text": "<html>\n <head>\n <body>\n <p style = \"background-color:blue;\">\n This text has a blue background color.</p>\n </body>\n </head>\n<html>"
}
]
|
Perceptrons, Logical Functions, and the XOR problem | by Francesco Cicala | Towards Data Science | Today we will explore what a Perceptron can do, what are its limitations, and we will prepare the ground to overreach these limits! Everything supported by graphs and code.
In Part 1 of this series, we introduced the Perceptron as a model that implements the following function:
For a particular choice of the parameters w and b, the output ŷ only depends on the input vector x. I’m using ŷ (“y hat”) to indicate that this number has been produced/predicted by the model. Soon, you will appreciate the ease of this notation.
To visualize the architecture of a model, we use what is called a computational graph: a directed graph which is used to represent a math function. Both variables and operations are nodes; variables are fed into operations and operations produce variables.
The computational graph of our perceptron is:
The Σ symbol represents the linear combination of the inputs x by means of the weights w and the bias b. Since this notation is quite heavy, from now on I will simplify the computational graph in the following way:
I am introducing some examples of what a perceptron can implement with its capacity (I will talk about this term in the following parts of this series!). Logical functions are a great starting point since they will bring us to a natural development of the theory behind the perceptron and, as a consequence, neural networks.
Let’s start with a very simple problem:
Can a perceptron implement the NOT logical function?
NOT(x) is a 1-variable function, that means that we will have one input at a time: N=1. Also, it is a logical function, and so both the input and the output have only two possible states: 0 and 1 (i.e., False and True): the Heaviside step function seems to fit our case since it produces a binary output.
With these considerations in mind, we can tell that, if there exists a perceptron which can implement the NOT(x) function, it would be like the one shown at left. Given two parameters, w and b, it will perform the following computation:
ŷ = Θ(wx + b)
The fundamental question is: do exist two values that, if picked as parameters, allow the perceptron to implement the NOT logical function? When I say that a perceptron implements a function, I mean that for each input in the function’s domain the perceptron returns the same number (or vector) the function would return for the same input. Back to our question: those values exist since we can easily find them: let’s pick w = -1 and b = 0.5.
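Here is a minimal NumPy sketch of this perceptron (the helper names and the use of np.heaviside as the step function Θ are choices of this sketch, not fixed by the article):

import numpy as np

def perceptron(x, w, b):
    # Heaviside step activation applied to the linear combination w.x + b
    return np.heaviside(np.dot(w, x) + b, 0)

def NOT_perceptron(x):
    # w = -1, b = 0.5 implements the logical NOT
    return int(perceptron(np.array([x]), w=np.array([-1.0]), b=0.5))

print("NOT(0) =", NOT_perceptron(0))
print("NOT(1) =", NOT_perceptron(1))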
And we get:
NOT(0) = 1
NOT(1) = 0
We conclude that the answer to the initial question is: yes, a perceptron can implement the NOT logical function; we just need to properly set its parameters. Notice that my solution isn’t unique; in fact, solutions, intended as (w, b) points, are infinite for this particular problem! You can use your favorite one ;)
The next question is:
Can a perceptron implement the AND logical function?
The AND logical function is a 2-variables function, AND(x1, x2), with binary inputs and output.
This graph is associated with the following computation:
ŷ = Θ(w1*x1 + w2*x2 + b)
This time, we have three parameters: w1, w2, and b. Can you guess three values for these parameters which would allow the perceptron to solve the AND problem?
SOLUTION: w1 = 1, w2 = 1, b = -1.5
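Continuing the sketch above (it reuses the perceptron helper defined earlier), the AND perceptron only differs in its parameters:

def AND_perceptron(x1, x2):
    # w1 = w2 = 1, b = -1.5: the output fires only when both inputs are 1
    return int(perceptron(np.array([x1, x2]), w=np.array([1.0, 1.0]), b=-1.5))

for a, b in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(f"AND({a}, {b}) =", AND_perceptron(a, b))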
And it prints:
AND(1, 1) = 1
AND(1, 0) = 0
AND(0, 1) = 0
AND(0, 0) = 0
OR(x1, x2) is a 2-variables function too, and its output is 1-dimensional (i.e., one number) and has two possible states (0 or 1). Therefore, we will use a perceptron with the same architecture as the one before. Which are the three parameters which solve the OR problem?
SOLUTION: w1 = 1, w2 = 1, b = -0.5
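Again continuing the earlier sketch, only the bias changes with respect to AND:

def OR_perceptron(x1, x2):
    # w1 = w2 = 1, b = -0.5: the output fires when at least one input is 1
    return int(perceptron(np.array([x1, x2]), w=np.array([1.0, 1.0]), b=-0.5))

for a, b in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(f"OR({a}, {b}) =", OR_perceptron(a, b))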
OR(1, 1) = 1
OR(1, 0) = 1
OR(0, 1) = 1
OR(0, 0) = 0
We conclude that a single perceptron with an Heaviside activation function can implement each one of the fundamental logical functions: NOT, AND and OR.They are called fundamental because any logical function, no matter how complex, can be obtained by a combination of those three. We can infer that, if we appropriately connect the three perceptrons we just built, we can implement any logical function! Let’s see how:
How can we build a network of fundamental logical perceptrons so that it implements the XOR function?
SOLUTION:
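One possible wiring, sketched by reusing the three perceptrons defined above, follows from the identity XOR(x1, x2) = AND(OR(x1, x2), NOT(AND(x1, x2))) (the identity is standard; treating it as the intended wiring here is an assumption of this sketch):

def XOR_network(x1, x2):
    # XOR as a two-layer composition of the fundamental gates
    return AND_perceptron(OR_perceptron(x1, x2),
                          NOT_perceptron(AND_perceptron(x1, x2)))

for a, b in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    print(f"XOR({a}, {b}) =", XOR_network(a, b))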
And the output is:
XOR(1, 1) = 0
XOR(1, 0) = 1
XOR(0, 1) = 1
XOR(0, 0) = 0
These are the predictions we were looking for! We just combined the three perceptrons above to get a more complex logical function.
Some of you may be wondering if, as we did for the previous functions, it is possible to find parameters’ values for a single perceptron so that it solves the XOR problem all by itself.
I won’t make you struggle too much looking for those three numbers, because it would be useless: the answer is that they do not exist. Why? The answer is that the XOR problem is not linearly separable, and we will discuss it in depth in the next chapter of this series!
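For the curious, a quick brute-force scan over a coarse parameter grid is consistent with this claim: no single Heaviside perceptron in the grid reproduces XOR on all four inputs (a sketch, not a proof; the grid bounds and resolution are arbitrary choices).

import itertools

cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
grid = np.linspace(-2, 2, 41)
solvable = any(
    all(np.heaviside(w1 * x1 + w2 * x2 + b, 0) == y for (x1, x2), y in cases)
    for w1, w2, b in itertools.product(grid, repeat=3)
)
print(solvable)  # False: no (w1, w2, b) in the grid implements XOR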
I will publish it in a few days, and we will go through the linear separability property I just mentioned. I will reshape the topics I introduced today within a geometrical perspective. In this way, every result we obtained today will get its natural and intuitive explanation.
If you liked this article, I hope you’ll consider giving it some claps! Every clap is a great encouragement to me :) Also, feel free to get in touch with me on LinkedIn!
See you very soon,
Frank
| [
{
"code": null,
"e": 345,
"s": 172,
"text": "Today we will explore what a Perceptron can do, what are its limitations, and we will prepare the ground to overreach these limits! Everything supported by graphs and code."
},
{
"code": null,
"e": 451,
"s": 345,
"text": "In Part 1 of this series, we introduced the Perceptron as a model that implements the following function:"
},
{
"code": null,
"e": 699,
"s": 451,
"text": "For a particular choice of the parameters w and b, the output ŷ only depends on the input vector x. I’m using ŷ (“y hat”) to indicate that this number has been produced/predicted by the model. Soon, you will appreciate the ease of this notation."
},
{
"code": null,
"e": 954,
"s": 699,
"text": "To visualize the architecture of a model, we use what is called computational graph: a directed graph which is used to represent a math function. Both variables and operations are nodes; variables are fed into operations and operations produce variables."
},
{
"code": null,
"e": 1000,
"s": 954,
"text": "The computational graph of our perceptron is:"
},
{
"code": null,
"e": 1215,
"s": 1000,
"text": "The Σ symbol represents the linear combination of the inputs x by means of the weights w and the bias b. Since this notation is quite heavy, from now on I will simplify the computational graph in the following way:"
},
{
"code": null,
"e": 1540,
"s": 1215,
"text": "I am introducing some examples of what a perceptron can implement with its capacity (I will talk about this term in the following parts of this series!). Logical functions are a great starting point since they will bring us to a natural development of the theory behind the perceptron and, as a consequence, neural networks."
},
{
"code": null,
"e": 1580,
"s": 1540,
"text": "Let’s start with a very simple problem:"
},
{
"code": null,
"e": 1633,
"s": 1580,
"text": "Can a perceptron implement the NOT logical function?"
},
{
"code": null,
"e": 1938,
"s": 1633,
"text": "NOT(x) is a 1-variable function, that means that we will have one input at a time: N=1. Also, it is a logical function, and so both the input and the output have only two possible states: 0 and 1 (i.e., False and True): the Heaviside step function seems to fit our case since it produces a binary output."
},
{
"code": null,
"e": 2189,
"s": 1938,
"text": "With these considerations in mind, we can tell that, if there exists a perceptron which can implement the NOT(x) function, it would be like the one shown at left. Given two parameters, w and b, it will perform the following computation:ŷ = Θ(wx + b)"
},
{
"code": null,
"e": 2633,
"s": 2189,
"text": "The fundamental question is: do exist two values that, if picked as parameters, allow the perceptron to implement the NOT logical function? When I say that a perceptron implements a function, I mean that for each input in the function’s domain the perceptron returns the same number (or vector) the function would return for the same input. Back to our question: those values exist since we can easily find them: let’s pick w = -1 and b = 0.5."
},
{
"code": null,
"e": 2645,
"s": 2633,
"text": "And we get:"
},
{
"code": null,
"e": 2666,
"s": 2645,
"text": "NOT(0) = 1NOT(1) = 0"
},
{
"code": null,
"e": 2985,
"s": 2666,
"text": "We conclude that the answer to the initial question is: yes, a perceptron can implement the NOT logical function; we just need to properly set its parameters. Notice that my solution isn’t unique; in fact, solutions, intended as (w, b) points, are infinite for this particular problem! You can use your favorite one ;)"
},
{
"code": null,
"e": 3007,
"s": 2985,
"text": "The next question is:"
},
{
"code": null,
"e": 3060,
"s": 3007,
"text": "Can a perceptron implement the AND logical function?"
},
{
"code": null,
"e": 3156,
"s": 3060,
"text": "The AND logical function is a 2-variables function, AND(x1, x2), with binary inputs and output."
},
{
"code": null,
"e": 3238,
"s": 3156,
"text": "This graph is associated with the following computation:ŷ = Θ(w1*x1 + w2*x2 + b)"
},
{
"code": null,
"e": 3406,
"s": 3238,
"text": "This time, we have three parameters: w1, w2, and b.Can you guess which are three values for these parameters which would allow the perceptron to solve the AND problem?"
},
{
"code": null,
"e": 3440,
"s": 3406,
"text": "SOLUTION:w1 = 1, w2 = 1, b = -1.5"
},
{
"code": null,
"e": 3455,
"s": 3440,
"text": "And it prints:"
},
{
"code": null,
"e": 3508,
"s": 3455,
"text": "AND(1, 1) = 1AND(1, 0) = 0AND(0, 1) = 0AND(0, 0) = 0"
},
{
"code": null,
"e": 3780,
"s": 3508,
"text": "OR(x1, x2) is a 2-variables function too, and its output is 1-dimensional (i.e., one number) and has two possible states (0 or 1). Therefore, we will use a perceptron with the same architecture as the one before. Which are the three parameters which solve the OR problem?"
},
{
"code": null,
"e": 3814,
"s": 3780,
"text": "SOLUTION:w1 = 1, w2 = 1, b = -0.5"
},
{
"code": null,
"e": 3863,
"s": 3814,
"text": "OR(1, 1) = 1OR(1, 0) = 1OR(0, 1) = 1OR(0, 0) = 0"
},
{
"code": null,
"e": 4283,
"s": 3863,
"text": "We conclude that a single perceptron with an Heaviside activation function can implement each one of the fundamental logical functions: NOT, AND and OR.They are called fundamental because any logical function, no matter how complex, can be obtained by a combination of those three. We can infer that, if we appropriately connect the three perceptrons we just built, we can implement any logical function! Let’s see how:"
},
{
"code": null,
"e": 4385,
"s": 4283,
"text": "How can we build a network of fundamental logical perceptrons so that it implements the XOR function?"
},
{
"code": null,
"e": 4395,
"s": 4385,
"text": "SOLUTION:"
},
{
"code": null,
"e": 4414,
"s": 4395,
"text": "And the output is:"
},
{
"code": null,
"e": 4467,
"s": 4414,
"text": "XOR(1, 1) = 0XOR(1, 0) = 1XOR(0, 1) = 1XOR(0, 0) = 0"
},
{
"code": null,
"e": 4599,
"s": 4467,
"text": "These are the predictions we were looking for! We just combined the three perceptrons above to get a more complex logical function."
},
{
"code": null,
"e": 4785,
"s": 4599,
"text": "Some of you may be wondering if, as we did for the previous functions, it is possible to find parameters’ values for a single perceptron so that it solves the XOR problem all by itself."
},
{
"code": null,
"e": 5055,
"s": 4785,
"text": "I won’t make you struggle too much looking for those three numbers, because it would be useless: the answer is that they do not exist. Why? The answer is that the XOR problem is not linearly separable, and we will discuss it in depth in the next chapter of this series!"
},
{
"code": null,
"e": 5333,
"s": 5055,
"text": "I will publish it in a few days, and we will go through the linear separability property I just mentioned. I will reshape the topics I introduced today within a geometrical perspective. In this way, every result we obtained today will get its natural and intuitive explanation."
},
{
"code": null,
"e": 5504,
"s": 5333,
"text": "If you liked this article, I hope you’ll consider to give it some claps! Every clap is a great encouragement to me :) Also, feel free to get in touch with me on Linkedin!"
},
{
"code": null,
"e": 5528,
"s": 5504,
"text": "See you very soon,Frank"
}
]
|
How to make loops run faster using Python? | This is a language-agnostic question. Loops exist in almost every language and the same principles apply everywhere. You need to realize that compilers do most of the heavy lifting when it comes to loop optimization, but you as a programmer also need to keep your loops optimized.
It is important to realize that everything you put in a loop gets executed for every loop iteration. The key to optimizing loops is to minimize what they do. Even operations that appear to be very fast will take a long time if they are repeated many times. Executing an operation that takes 1 microsecond a million times will take 1 second to complete.
Don't execute things like len(list) inside a loop or even in its starting condition.
a = [i for i in range(1000000)]
length = len(a)
for i in a:
    print(i - length)
is much, much faster than
a = [i for i in range(1000000)]
for i in a:
    print(i - len(a))
You can also use techniques like loop unrolling (https://en.wikipedia.org/wiki/Loop_unrolling), which is a loop transformation technique that attempts to optimize a program's execution speed at the expense of its binary size, an approach known as a space-time tradeoff.
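To make the idea concrete, here is a toy sketch of manual unrolling in Python, summing a list four elements per iteration instead of one. In CPython the gains are usually modest, so treat it as an illustration of the technique rather than a recipe:

a = [i for i in range(1000000)]
total = 0
i = 0
n = len(a) - len(a) % 4  # largest multiple of 4 not exceeding len(a)
while i < n:
    # four additions per iteration instead of one
    total += a[i] + a[i + 1] + a[i + 2] + a[i + 3]
    i += 4
for j in range(n, len(a)):  # pick up any leftover elements
    total += a[j]
print(total)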
Using functions like map, filter, etc. instead of explicit for loops can also provide some performance improvements, because the looping itself then happens in C rather than in Python bytecode.
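A rough timing sketch (results will vary by machine and Python version):

import timeit

data = list(range(1000000))

def with_loop():
    out = []
    for x in data:
        out.append(x * 2)
    return out

def with_map():
    # the per-element work and the loop both run in C here
    return list(map((2).__mul__, data))

print('loop:', timeit.timeit(with_loop, number=10))
print('map :', timeit.timeit(with_map, number=10))
| [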
{
"code": null,
"e": 1340,
"s": 1062,
"text": "This is a language agnostic question. Loops are there in almost every language and the same principles apply everywhere. You need to realize that compilers do most heavy lifting when it comes to loop optimization, but you as a programmer also need to keep your loops optimized."
},
{
"code": null,
"e": 1689,
"s": 1340,
"text": "It is important to realize that everything you put in a loop gets executed for every loop iteration. They key to optimizing loops is to minimize what they do. Even operations that appear to be very fast will take a long time if the repeated many times. Executing an operation that takes 1 microsecond a million times will take 1 second to complete."
},
{
"code": null,
"e": 1774,
"s": 1689,
"text": "Don't execute things like len(list) inside a loop or even in its starting condition."
},
{
"code": null,
"e": 1855,
"s": 1774,
"text": "a = [i for i in range(1000000)]\nlength = len(a)\nfor i in a:\n print(i - length)"
},
{
"code": null,
"e": 1880,
"s": 1855,
"text": "is much much faster than"
},
{
"code": null,
"e": 1945,
"s": 1880,
"text": "a = [i for i in range(1000000)]\nfor i in a:\n print(i - len(a))"
},
{
"code": null,
"e": 2218,
"s": 1945,
"text": "You can also use techniques like Loop Unrolling(https://en.wikipedia.org/wiki/Loop_unrolling) which is loop transformation technique that attempts to optimize a program's execution speed at the expense of its binary size, which is an approach known as space-time tradeoff."
},
{
"code": null,
"e": 2335,
"s": 2218,
"text": "Using functions like map, filter, etc. instead of explicit for loops can also provide some performance improvements."
}
]
|
What is Modulus Operator (%) in JavaScript? | The modulus operator (%) returns the remainder left over when one operand is divided by another.
You can try to run the following code to learn how to work with the modulus (%) operator.
<html>
<body>
<script>
var a = 33;
var b = 10;
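         // 33 % 10 = 3, since 33 = 3 * 10 + 3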
document.write("a % b = ");
result = a % b;
document.write(result);
</script>
</body>
</html> | [
{
"code": null,
"e": 1133,
"s": 1062,
"text": "The Modulus operator (%) outputs the remainder of an integer division."
},
{
"code": null,
"e": 1219,
"s": 1133,
"text": "You can try to run the following code to learn how to work with Modulus (%) operator."
},
{
"code": null,
"e": 1424,
"s": 1219,
"text": "<html>\n <body>\n <script>\n var a = 33;\n var b = 10;\n\n document.write(\"a % b = \");\n result = a % b;\n document.write(result);\n </script>\n </body>\n</html>"
}
]
|
Geospatial adventures. Step 2: Pandas vs. GeoPandas | by Dmitry Selemir | Towards Data Science | Having introduced shapely in my first post, it’s time to look at some interesting geo datasets, and to do that we cannot possibly do without Pandas and, more specifically, GeoPandas. I am going to assume you have come across Pandas before and will try to highlight some of the differences between the two and, more importantly, how you can convert one to the other.
We start from the very beginning, if you have never worked with GeoPandas before — go ahead and install it, especially if you would like to follow the steps from this post yourself and play with the data a little bit. Good old trusted pip is all you need.
!pip install geopandas
If you are so inclined (and you absolutely should be) — you can read the docs here.
Let’s import both Pandas and GeoPandas to start.
import pandas as pd
import geopandas as gpd
Now, before we can do anything with these two, we need a dataset to play with. There are some links to interesting geo-specific datasets at the end of this article (do check them out), but let’s start with something relatively short and yet realistic enough: UK local authority boundaries. The reason I like this dataset is, on the one hand, it has just 380 records, on the other — the boundaries and shapes it is using are anything but simple, so it is a good illustration of a dataset you are likely going to work with in real life. I also think this is a really good starting point for any geospatial analysis of the UK data as it allows you to break everything into manageable chunks. Trust me — it really helps at times.
So, without further ado, the dataset lives here. Just hit the download button on your right. You’ll get a zip file with a short and sweet name Local_Authority_Districts__December_2017__Boundaries_in_Great_Britain-shp.zip weighing in at approximately 35mb (at least at the time of writing; they do get updates occasionally). Go ahead and unzip it and you’ll get a folder looking like this:
What we are after is the shape file, the one with *.shp extension.
To the best of my knowledge, regular Pandas can not deal with that format, so here comes our first difference between Pandas and Geopandas as, of course, the latter can. Let’s go ahead and open it:
la = gpd.read_file('Downloads/Local_Authority_Districts__December_2017__Boundaries_in_Great_Britain-shp/Local_Authority_Districts__December_2017__Boundaries_in_Great_Britain.shp')
Make sure you are using the path relevant to your system. If you are unsure which folder your notebook is using by default just run <!ls> in one of the cells — you’ll get the current directory contents, so you should be able to work out where to go from here. If you need to go up a folder — use <../>.
Let’s take a peek at what the data looks like. All the standard Pandas commands work, so we can just do:
la.head()
To get
Nice! OK, what the hell does it actually mean?
Let’s go through it column by column:
objectid is fairly self-explanatory and, frankly, not very useful to us.
lad17cd is the local authority code. This is actually quite handy as it is used alongside things like super output areas by ONS, so this gives us a good reference system to go back and forth between different geographical subdivisions and corresponding datasets. Non-UK people — I do apologise for being so partisan here; the truth is, most countries will have a somewhat similar way of looking at things, so even if you find this is not directly of interest, this can still be transferable and useful.
lad17nm: local authority names. Some of these are weird and wonderful, so it is absolutely worth exploring. No? Just me then...
lad17nmw, which seems to be populated by friendly looking <None>s (sorry, couldn’t resist). This column also has names, and “w” kind of gives it away — it is local authority names in Welsh, hence None values in the first five records as these are all English local authorities. We don’t get out of England until number 317, in case you were wondering.
bng_e and bng_n — the evil twins. Seriously, these are Eastings/Northings — I’ll talk about these a little bit more later. Basically, coordinates showing where this area is on the map.
long and lat — some more coordinates — longitude and latitude this time.
st_areasha — surface area in square meters; that “sha” at the end is somewhat confusing, but it means shape.
st_lengths — boundary length (also in meters).
And finally, the one we are really here for! Drum roll please.... geometry — this is the actual polygon or, in some cases, multiPolygon shape. Don’t get me started on the Scottish isles.... Geometry is the key attribute of a GeoPandas table and many applications working with them essentially assume that this column is there and that it is called exactly “geometry”. So if, for whatever reason, you wanted to give it another, friendlier name — think again.
Before we dive in any further, since we are comparing Pandas and GeoPandas among other things, we can create a copy of this table in regular Pandas format.
la_pd = pd.DataFrame(la)
la_pd.head()
You’ll notice that this new table looks absolutely identical. Not only that, the objects in the geometry column are still very much intact:
So far so good. We can also look at all the stats for the DataFrame in the same way we do it in Pandas and the two tables will look identical, so I’m going to show the results for GeoPandas only here (you will just have to believe me... No? OK, run it in your own notebook for both just to check).
Note, this one gives you the stats on numerical data columns only.
In[9]:
la.info()
Out[9]:
<class 'geopandas.geodataframe.GeoDataFrame'>
RangeIndex: 380 entries, 0 to 379
Data columns (total 11 columns):
objectid      380 non-null int64
lad17cd       380 non-null object
lad17nm       380 non-null object
lad17nmw      22 non-null object
bng_e         380 non-null int64
bng_n         380 non-null int64
long          380 non-null float64
lat           380 non-null float64
st_areasha    380 non-null float64
st_lengths    380 non-null float64
geometry      380 non-null geometry
dtypes: float64(4), geometry(1), int64(3), object(3)
memory usage: 32.7+ KB
And finally:
In[10]:
la.memory_usage()
Out[10]:
Index           80
objectid      3040
lad17cd       3040
lad17nm       3040
lad17nmw      3040
bng_e         3040
bng_n         3040
long          3040
lat           3040
st_areasha    3040
st_lengths    3040
geometry      3040
dtype: int64
So far, so good. Time to spot another difference. This command would not work on your Pandas DataFrame:
In[11]:
la.crs
Out[11]:
<Projected CRS: EPSG:27700>
Name: OSGB 1936 / British National Grid
Axis Info [cartesian]:
- E[east]: Easting (metre)
- N[north]: Northing (metre)
Area of Use:
- name: UK - Britain and UKCS 49°46'N to 61°01'N, 7°33'W to 3°33'E
- bounds: (-9.2, 49.75, 2.88, 61.14)
Coordinate Operation:
- name: British National Grid
- method: Transverse Mercator
Datum: OSGB 1936
- Ellipsoid: Airy 1830
- Prime Meridian: Greenwich
Woah!!! What does that mean?! Remember I promised to expand on Northings/Eastings? Time to look at the coordinate reference systems, aka CRS. You can read a more technical GeoPandas-specific explanation here. In a nutshell, defining coordinate systems is not as straightforward as it may sound at first. The main issue here is that the Earth has the audacity of not being flat. Not only that — it’s not even spherical, so defining an exact point on the surface can be tricky (if you don’t believe me — try wrapping a square-lined paper around a rugby ball). To make matters even more interesting, because it is so large, it sometimes makes sense to pretend it is flat: on a scale even large enough to cover an entire country like the UK, this doesn’t introduce enough error to really worry about. Hence two main reference systems you are likely going to come across — longitude and latitude (pseudo 3D) and Northings/Eastings (2D). I have to confess that, in most cases, I prefer to work with the latter. The reason being — it’s easy and I am lazy. Seriously though, both Northings and Eastings are expressed in meters and measure a straightforward distance. So, if you want to know how far things are from each other, all you have to do is take the differences between Xs (eastings) and Ys (northings), apply Pythagoras and you are done. Note that the Shapely library, which we talked about in my previous post, also has the concept of CRS built in, however, we don’t really need to use it, as GeoPandas provides us with all the ammunition we could possibly need, with high precision.
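As a quick illustration of why that is handy, here is the straight-line distance between two points given as eastings/northings (the coordinates below are made up purely for illustration):

import numpy as np

# two points in British National Grid coordinates (easting, northing), in metres
e1, n1 = 447213.9, 537036.1
e2, n2 = 451000.0, 540000.0

# plain Pythagoras, because both axes are already in metres
distance_m = np.hypot(e2 - e1, n2 - n1)
print(f'{distance_m:.1f} m')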
There are a number of CRS systems available in GeoPandas. This particular one, used in the table above, is “epsg:27700” — if you are planning to work with the UK data, this is something you are going to use a lot, so you’ll get to remember it, be warned. An alternative in the UK, using lat/long, is “epsg:4326”. GeoPandas provides us with an easy way of converting one to the other. All we have to do is run:
la_4326 = la.to_crs("epsg:4326")
la_4326.head()
To get:
The eagle-eyed among you would have spotted that the only thing that’s changed in this table is our polygon/multiPolygon object column, with each object now composed of points referencing latitudes and longitudes. You can still look at the same graphical representation as we did before and it should look identical. I will leave that for you to verify.
Before we wrap up with some calculations on top of these — let’s have a look at saving to csv and converting from Pandas to GeoPandas. In more recent versions of Pandas, you should be able to save the DataFrame directly to a csv file, despite the geometric objects in the last column, i.e.
la.to_csv('Downloads/la.csv', compression='gzip')
will do the trick. What’s more, the resulting file is only 39.9mb vs. 63.7mb for the shapefile (of course, we already applied compression here though). It comes at a price, however. To the best of my knowledge, you cannot read a csv file directly from GeoPandas, so you have to load it back as a normal DataFrame. Simply by running:
la_new = pd.read_csv('Downloads/la.csv', compression='gzip')
At first glance, nothing much has changed:
la_new.head()
Essentially, we have a single extra column, which we can easily get rid of by running
la_new = la_new[la_new.columns[1:]]
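A small aside: that extra column is just the index pandas wrote out when saving, so, assuming the default index was saved, you could instead skip it at read time:

la_new = pd.read_csv('Downloads/la.csv', compression='gzip', index_col=0)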
However, this is not all of it, if we try to have a look at one of our geometry objects as we did before, we get something very different:
In[17]
la_new['geometry'].iloc[0]
Out[17]:
'MULTIPOLYGON (((447213.8995000003 537036.1042999998, 447228.7982999999 537033.3949999996, 447233.6958999997 537035.1045999993, 447243.2024999997 537047.6009999998, 447246.0965 537052.5995000005, 447255.9988000002 537102.1953999996, 447259.0988999996 537108.8035000004, 447263.6007000003 537113.8019999992, 447266.1979 537115.6015000008, 447273.1979999999 537118.6007000003, 447280.7010000004 537120.1001999993, 447289.3005999997 537119.6004000008, 447332.2986000003 537111.5026999991, 447359.5980000002 537102.3953000009, 447378.0998999998 537095.0974000003, 447391.0033999998 537082.9009000007, 447434.6032999996 537034.5046999995, 447438.7011000002 537030.9956999999, 447443.7965000002 537027.6966999993...
OK, I’m cheating here; in reality, the output is a lot longer than that. Essentially, what happened is all our geometry objects got converted to strings. All is not lost, however, as shapely provides us with a way of converting them back. Apart from loading shapely here, I will also need the swifter library. You can do without it by using the straight apply method, but among other things — swifter provides you with a nice progress bar and time estimate (it can also improve performance, but that is beyond the scope of this post).
import swifter
from shapely import wkt

la_new['geometry1'] = la_new['geometry'].swifter.apply(lambda x: wkt.loads(x))
What’s happening here is I am applying the conversion function to each element of the geometry column and storing the output in a new column. On my laptop this takes 12 seconds on our 380-record-long dataset. I have done this on much, much larger datasets and to be fair it’s not too bad.
We can then spot check if the new elements are indeed geometry objects by running
la_new['geometry1'].iloc[0]
We are not done yet though. In fact, if we run la_new.info() we get the following:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 380 entries, 0 to 379
Data columns (total 12 columns):
 #   Column      Non-Null Count  Dtype
---  ------      --------------  -----
 0   objectid    380 non-null    int64
 1   lad17cd     380 non-null    object
 2   lad17nm     380 non-null    object
 3   lad17nmw    22 non-null     object
 4   bng_e       380 non-null    int64
 5   bng_n       380 non-null    int64
 6   long        380 non-null    float64
 7   lat         380 non-null    float64
 8   st_areasha  380 non-null    float64
 9   st_lengths  380 non-null    float64
 10  geometry    380 non-null    object
 11  geometry1   380 non-null    object
dtypes: float64(4), int64(3), object(5)
memory usage: 35.8+ KB
The new column appears as an object type, not a geometry type. This won’t stop us from converting it to a GeoDataFrame, however. All we have to do is get rid of the extra geometry column we no longer need and rename geometry1 to geometry (remember — it has to be named that), and then (!), and this is important, after converting to a GeoDataFrame we have to set the crs for it:
la_new_geo = la_new.drop(columns=['geometry']).rename(columns={'geometry1': 'geometry'})
la_new_geo = gpd.GeoDataFrame(la_new_geo)
la_new_geo.crs = 'epsg:27700'
And voila! la_new_geo.info() gives us:
<class 'geopandas.geodataframe.GeoDataFrame'>
RangeIndex: 380 entries, 0 to 379
Data columns (total 11 columns):
 #   Column      Non-Null Count  Dtype
---  ------      --------------  -----
 0   objectid    380 non-null    int64
 1   lad17cd     380 non-null    object
 2   lad17nm     380 non-null    object
 3   lad17nmw    22 non-null     object
 4   bng_e       380 non-null    int64
 5   bng_n       380 non-null    int64
 6   long        380 non-null    float64
 7   lat         380 non-null    float64
 8   st_areasha  380 non-null    float64
 9   st_lengths  380 non-null    float64
 10  geometry    380 non-null    geometry
dtypes: float64(4), geometry(1), int64(3), object(3)
memory usage: 32.8+ KB
Finally, some quick calculations on our new shiny GeoDataFrame:
Verifying length and area:
la_new_geo['area'] = la_new_geo['geometry'].swifter.apply(lambda x: x.area)
la_new_geo['length'] = la_new_geo['geometry'].swifter.apply(lambda x: x.length)

la_new_geo[['lad17cd', 'lad17nm', 'st_areasha', 'st_lengths', 'area', 'length']].head()
Or if we wanted to verify the entire column, we could do:
(la_new_geo.st_lengths == la_new_geo.length).all()
You’ll notice that this would actually return False, however, due to rounding errors. For instance, on the first record, the st_lengths value is 71707.4075227013 and the length value is 71707.40752270141
So to check we can just run it with set precision:
(la_new_geo.st_lengths.round(4)==la_new_geo.length.round(4)).all()
In any case, when dealing with geographical objects, moving beyond 25cm precision is probably unjustified given the quality of existing data and for most applications you are probably OK limiting it to the nearest meter anyway.
Let’s also have a look at which region has the most complicated polygon/multiPolygon. We are going to judge that by the number of points needed to describe it. Essentially, this means counting points in exterior and interior boundaries:
import numpy as np

la_new_geo['point_count'] = la_new_geo['geometry'].swifter.apply(
    lambda x: np.sum(
        [len(np.array(a.exterior)) + np.sum([len(np.array(b)) for b in a.interiors])
         for a in x]
    )
    if x.type == 'MultiPolygon'
    else len(np.array(x.exterior)) + np.sum([len(np.array(a)) for a in x.interiors])
)
Ok, this is a bit complicated, so let’s look at what is going on. First, we are looking at MultiPolygon objects, and if we have one — we have to iterate through each polygon, converting its exterior boundary to a numpy array to get an array of coordinates. We take the length of that array. We also get the list of internal boundaries, so we have to iterate through them too, convert them to arrays and take their lengths. Similarly, if we are dealing with a simple polygon, we drop one level of iteration, so there is just one exterior boundary, but we still need to iterate through potentially multiple holes.
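If you want to sanity-check where those numbers come from, here is a toy example (note that shapely repeats the first point to close the ring, so a square contributes five points):

from shapely.geometry import Polygon
import numpy as np

square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
# prints 5: four corners plus the repeated closing point
print(len(np.array(square.exterior)))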
We then sort the values and voila, we have our top ten:
la_new_geo[['lad17cd', 'lad17nm', 'point_count']].sort_values('point_count', ascending=False).head(10)
And this is why everyone loves Scotland so much.
No? Just me then...
Next post is all about matching polygons with each other, the coolest invention of mankind — R-Trees (Erm.. sort of). Well... OK, maybe one or two pictures too.
And finally the promised...
OSM datasets for all of the World: http://download.geofabrik.de/
OSM datasets for Britain: http://download.geofabrik.de/europe/great-britain.html
Ordnance Survey (OS) Open Data: https://www.ordnancesurvey.co.uk/opendatadownload/products.html
NOMIS (labour market/census data): https://www.nomisweb.co.uk/ — these usually have data for a specific area, which you can then relate to the map by using the boundary shape files (matching them by ID)
I’m sure there’s many, many more, please feel free to add any interesting geospatial datasets (preferably free to use) in the comments.
See you around...
Also in this series:
Geospatial adventures. Step 1: Shapely.
Geospatial adventures. Step 3. Polygons grow on R-Trees
Geospatial adventures. Step 4. The Colour of Magic or If I Don’t See It — It Doesn’t Exist
Geospatial adventures. Step 5. Leaving the flatlands or flying over the sea of polygons | [
{
"code": null,
"e": 543,
"s": 171,
"text": "Having introduced shapely in my first post, it’s time to look at some interesting geo datasets and in order to do that we can not possibly do without Pandas and more specifically GeoPandas. I am going to assume you have come across Pandas before and will try and highlight some of the differences between the two and more importantly how you can convert one to the other."
},
{
"code": null,
"e": 799,
"s": 543,
"text": "We start from the very beginning, if you have never worked with GeoPandas before — go ahead and install it, especially if you would like to follow the steps from this post yourself and play with the data a little bit. Good old trusted pip is all you need."
},
{
"code": null,
"e": 822,
"s": 799,
"text": "!pip install geopandas"
},
{
"code": null,
"e": 906,
"s": 822,
"text": "If you are so inclined (and you absolutely should be) — you can read the docs here."
},
{
"code": null,
"e": 955,
"s": 906,
"text": "Let’s import both Pandas and GeoPandas to start."
},
{
"code": null,
"e": 998,
"s": 955,
"text": "import pandas as pdimport geopandas as gpd"
},
{
"code": null,
"e": 1724,
"s": 998,
"text": "Now, before we can do anything with these two, we need a dataset to play with. There are some links to interesting geo-specific datasets at the end of this article (do check them out), but let’s start with something relatively short and yet realistic enough: UK local authority boundaries. The reason I like this dataset is, on the one hand, it has just 380 records, on the other — the boundaries and shapes it is using are anything but simple, so it is a good illustration of a dataset you are likely going to work with in real life. I also think this is a really good starting point for any geospatial analysis of the UK data as it allows you to break everything into manageable chunks. Trust me — it really helps at times."
},
{
"code": null,
"e": 2110,
"s": 1724,
"text": "So, without further ado, the dataset lives here. Just hit the download button on your right. You’ll get a zip file with a short and sweet name Local_Authority_Districts__December_2017__Boundaries_in_Great_Britain-shp.zip weighing at approximately 35mb (at list at the time of writing, they do get updates occasionally ). Go ahead and unzip it and you’ll get a folder looking like this:"
},
{
"code": null,
"e": 2177,
"s": 2110,
"text": "What we are after is the shape file, the one with *.shp extension."
},
{
"code": null,
"e": 2375,
"s": 2177,
"text": "To the best of my knowledge, regular Pandas can not deal with that format, so here comes our first difference between Pandas and Geopandas as, of course, the latter can. Let’s go ahead and open it:"
},
{
"code": null,
"e": 2555,
"s": 2375,
"text": "la = gpd.read_file('Downloads/Local_Authority_Districts__December_2017__Boundaries_in_Great_Britain-shp/Local_Authority_Districts__December_2017__Boundaries_in_Great_Britain.shp')"
},
{
"code": null,
"e": 2858,
"s": 2555,
"text": "Make sure you are using the path relevant to your system. If you are unsure which folder your notebook is using by default just run <!ls> in one of the cells — you’ll get the current directory contents, so you should be able to work out where to go from here. If you need to go up a folder — use <../>."
},
{
"code": null,
"e": 2963,
"s": 2858,
"text": "Let’s take a peek at what the data looks like. All the standard Pandas commands work, so we can just do:"
},
{
"code": null,
"e": 2973,
"s": 2963,
"text": "la.head()"
},
{
"code": null,
"e": 2980,
"s": 2973,
"text": "To get"
},
{
"code": null,
"e": 3027,
"s": 2980,
"text": "Nice! OK, what the hell does it actually mean?"
},
{
"code": null,
"e": 4983,
"s": 3027,
"text": "Let’s go through it column by column: objectid is fairly self explanatory and, frankly, not very useful to us. lad17cd is the local authority code. This is actually quite handy as it is used alongside things like super output areas by ONS, so this gives us a good reference system to go back and forth between different geographical subdivisions and corresponding datasets. Non-UK people — I do apologise for being so partisan here, the truth is, most countries will have a somewhat similar way of looking at things, so even if you find this is not directly of interest, this can still be transferable and useful. lad17nm: local authority names. Some of these are weird and wonderful, so it is absolutely worth exploring. No? Just me then...lad17nmw, which seems to be populated by friendly looking <None>s (sorry, couldn’t resist). This column also has names, and “w” kind of gives it away — it is local authority names in Welsh, hence None values in the first five records as these are all English local authorities. We don’t get out of England until number 317, in case you were wondering.bng_e and bng_n — the evil twins. Seriously, these are Eastings/Northings— I’ll talk about these a little bit more later. Basically, coordinates showing where this area is on the map.long and lat — some more coordinates — longitude and latitude this time.st_sreasha — surface area in square meters, that “sha” at the end is somewhat confusing, but it means shape.st_lengths — boundary length (also in meters).And finally, the one we are really here for! Drum roll please....geometry — this is the actual polygon or, in some cases, multiPolygon shape. Don’t get me started on the Scottish isles.... Geometry is the key attribute of GeoPandas table and many applications working with them essentially assume that this column is there and that it is called exactly “geometry”. So if, for whatever reason, you wanted to give it another, friendlier name — think again."
},
{
"code": null,
"e": 5139,
"s": 4983,
"text": "Before we dive in any further, since we are comparing Pandas and GeoPandas among other things, we can create a copy of this table in regular Pandas format."
},
{
"code": null,
"e": 5176,
"s": 5139,
"text": "la_pd = pd.DataFrame(la)la_pd.head()"
},
{
"code": null,
"e": 5311,
"s": 5176,
"text": "You’ll notice that this new table looks absolutely identical. Not only that, the objects in geometry table are still very much intact:"
},
{
"code": null,
"e": 5609,
"s": 5311,
"text": "So far so good. We can also look at all the stats for the DataFrame in the same way we do it in Pandas and the two tables will look identical, so I’m going to show the results for GeoPandas only here (you will just have to believe me... No? OK, run it in your own notebook for both just to check)."
},
{
"code": null,
"e": 5676,
"s": 5609,
"text": "Note, this one gives you the stats on numerical data columns only."
},
{
"code": null,
"e": 5683,
"s": 5676,
"text": "In[9]:"
},
{
"code": null,
"e": 5693,
"s": 5683,
"text": "la.info()"
},
{
"code": null,
"e": 5701,
"s": 5693,
"text": "Out[9]:"
},
{
"code": null,
"e": 6251,
"s": 5701,
"text": "<class 'geopandas.geodataframe.GeoDataFrame'>RangeIndex: 380 entries, 0 to 379Data columns (total 11 columns):objectid 380 non-null int64lad17cd 380 non-null objectlad17nm 380 non-null objectlad17nmw 22 non-null objectbng_e 380 non-null int64bng_n 380 non-null int64long 380 non-null float64lat 380 non-null float64st_areasha 380 non-null float64st_lengths 380 non-null float64geometry 380 non-null geometrydtypes: float64(4), geometry(1), int64(3), object(3)memory usage: 32.7+ KB"
},
{
"code": null,
"e": 6264,
"s": 6251,
"text": "And finally:"
},
{
"code": null,
"e": 6272,
"s": 6264,
"text": "In[10]:"
},
{
"code": null,
"e": 6290,
"s": 6272,
"text": "la.memory_usage()"
},
{
"code": null,
"e": 6299,
"s": 6290,
"text": "Out[10]:"
},
{
"code": null,
"e": 6528,
"s": 6299,
"text": "Index 80objectid 3040lad17cd 3040lad17nm 3040lad17nmw 3040bng_e 3040bng_n 3040long 3040lat 3040st_areasha 3040st_lengths 3040geometry 3040dtype: int64"
},
{
"code": null,
"e": 6632,
"s": 6528,
"text": "So far, so good. Time to spot another difference. This command would not work on your Pandas DataFrame:"
},
{
"code": null,
"e": 6640,
"s": 6632,
"text": "In[11]:"
},
{
"code": null,
"e": 6647,
"s": 6640,
"text": "la.crs"
},
{
"code": null,
"e": 6656,
"s": 6647,
"text": "Out[11]:"
},
{
"code": null,
"e": 7057,
"s": 6656,
"text": "<Projected CRS: EPSG:27700>Name: OSGB 1936 / British National GridAxis Info [cartesian]:- E[east]: Easting (metre)- N[north]: Northing (metre)Area of Use:- name: UK - Britain and UKCS 49°46'N to 61°01'N, 7°33'W to 3°33'E- bounds: (-9.2, 49.75, 2.88, 61.14)Coordinate Operation:- name: British National Grid- method: Transverse MercatorDatum: OSGB 1936- Ellipsoid: Airy 1830- Prime Meridian: Greenwich"
},
{
"code": null,
"e": 8649,
"s": 7057,
"text": "Woah!!! What does that mean?! Remember I promised to expand on Northings/Eastings? Time to look at the coordinate reference systems, aka CRS. You can read a more technical GeoPandas specific explanation here. In a nutshell, defining coordinate systems is not as straightforward as it may sound at first. The main issue here is that the Earth has the audacity of not being flat. Not only that — it’s not even spherical, so defining an exact point on the surface can be tricky (if you don’t believe me — try wrapping a square-lined paper around a rugby ball). To make matters even more interesting, because it is so large — it sometimes makes sense to pretend it is flat because, on the scale even large enough to cover an entire country like the UK, this doesn’t introduce enough error to really worry about. Hence two main reference systems you are likely going to come across — longitude and latitude (pseudo 3D) and Northings/Eastings (2D). I have to confess that, in most cases, I prefer to work with the latter. The reason being — it’s easy and I am lazy. Seriously though, both Northings and Eastings are expressed in meters and measure a straightforward distance. So, if you want to know how far things are from each other, all you have to do is take the differences between Xs (eastings) and Ys(northings), apply Pythagorus and you are done. Note that Shapely library, which we talked about in my previous post, also has the concept of CRS built in, however, we don’t really need to use it, as GeoPandas provides us with all the ammunition we could possibly need, with high precision."
},
{
"code": null,
"e": 9055,
"s": 8649,
"text": "There is a number of CRS systems available in GeoPandas. This particular one, used in the table above is “epsg:27700” — if you are planning to work with the UK data this is something you are going to use a lot, so you’ll get to remember it, be warned. An alternative in the UK, using lat/long is “epsg:4326”. GeoPandas provides us with an easy way of converting one to the other. All we have to do is run:"
},
{
"code": null,
"e": 9102,
"s": 9055,
"text": "la_4326 = la.to_crs(\"epsg:4326\")la_4326.head()"
},
{
"code": null,
"e": 9110,
"s": 9102,
"text": "To get:"
},
{
"code": null,
"e": 9464,
"s": 9110,
"text": "The eagle-eyed among you would have spotted that the only thing that’s changed in this table is our polygon/multiPolygon object column, with each object now composed of points referencing latitudes and longitudes. You can still look at the same graphical representation as we did before and it should look identical. I will leave that for you to verify."
},
{
"code": null,
"e": 9750,
"s": 9464,
"text": "Before we wrap with some calculations on top of these — let’s have a look at saving to csv and converting from Pandas to GeoPandas. In more recent versions of Pandas you should be able to save the DataFrame directly to a csv file, despite the geometric objects in the last column, i.e."
},
{
"code": null,
"e": 9800,
"s": 9750,
"text": "la.to_csv('Downloads/la.csv', compression='gzip')"
},
{
"code": null,
"e": 10134,
"s": 9800,
"text": "will do the trick. What’s more, the resulting file is only 39.9mb vs. 63.7mb for the shapefile (of course, we already applied compression here though). It comes at a price, however. To the best of my knowledge, you can not read a csv file directly from GeoPandas, so you have to load it back as a normal DataFrame. Simply by running:"
},
{
"code": null,
"e": 10197,
"s": 10134,
"text": "la_new = pd.read_csv('Downloads/test.csv', compression='gzip')"
},
{
"code": null,
"e": 10240,
"s": 10197,
"text": "At first glance, nothing much has changed:"
},
{
"code": null,
"e": 10254,
"s": 10240,
"text": "la_new.head()"
},
{
"code": null,
"e": 10340,
"s": 10254,
"text": "Essentially, we have a single extra column, which we can easily get rid of by running"
},
{
"code": null,
"e": 10376,
"s": 10340,
"text": "la_new = la_new[la_new.columns[1:]]"
},
{
"code": null,
"e": 10515,
"s": 10376,
"text": "However, this is not all of it, if we try to have a look at one of our geometry objects as we did before, we get something very different:"
},
{
"code": null,
"e": 10522,
"s": 10515,
"text": "In[17]"
},
{
"code": null,
"e": 10549,
"s": 10522,
"text": "la_new['geometry'].iloc[0]"
},
{
"code": null,
"e": 10558,
"s": 10549,
"text": "Out[17]:"
},
{
"code": null,
"e": 11268,
"s": 10558,
"text": "'MULTIPOLYGON (((447213.8995000003 537036.1042999998, 447228.7982999999 537033.3949999996, 447233.6958999997 537035.1045999993, 447243.2024999997 537047.6009999998, 447246.0965 537052.5995000005, 447255.9988000002 537102.1953999996, 447259.0988999996 537108.8035000004, 447263.6007000003 537113.8019999992, 447266.1979 537115.6015000008, 447273.1979999999 537118.6007000003, 447280.7010000004 537120.1001999993, 447289.3005999997 537119.6004000008, 447332.2986000003 537111.5026999991, 447359.5980000002 537102.3953000009, 447378.0998999998 537095.0974000003, 447391.0033999998 537082.9009000007, 447434.6032999996 537034.5046999995, 447438.7011000002 537030.9956999999, 447443.7965000002 537027.6966999993..."
},
{
"code": null,
"e": 11794,
"s": 11268,
"text": "OK, I’m cheating here, in reality, the output is a lot longer than that. Essentially what happened is all our geometry objects got converted to strings. All is not lost, however, as shapely provides us with a way of converting them back. Apart from loading shapely here, I will also need swifter library. You can do without it by using straight apply method, but among other things — swifter provides you with a nice progress bar and time estimate (it can also improve performance, but that is beyond the scope of this post)."
},
{
"code": null,
"e": 11910,
"s": 11794,
"text": "import swifterfrom shapely import wktla_new['geometry1'] = la_new['geometry'].swifter.apply(lambda x: wkt.loads(x))"
},
{
"code": null,
"e": 12199,
"s": 11910,
"text": "What’s happening here is I am applying the conversion function to each element of the geometry column and store the output in the new column. On my laptop this takes 12 seconds on our 380 record long dataset. I have done this on much, much larger datasets and to be fair it’s not too bad."
},
{
"code": null,
"e": 12281,
"s": 12199,
"text": "We can then spot check if the new elements are indeed geometry objects by running"
},
{
"code": null,
"e": 12309,
"s": 12281,
"text": "la_new['geometry1'].iloc[0]"
},
{
"code": null,
"e": 12392,
"s": 12309,
"text": "We are not done yet though. In fact, if we run la_new.info() we get the following:"
},
{
"code": null,
"e": 13116,
"s": 12392,
"text": "<class 'pandas.core.frame.DataFrame'>RangeIndex: 380 entries, 0 to 379Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 objectid 380 non-null int64 1 lad17cd 380 non-null object 2 lad17nm 380 non-null object 3 lad17nmw 22 non-null object 4 bng_e 380 non-null int64 5 bng_n 380 non-null int64 6 long 380 non-null float64 7 lat 380 non-null float64 8 st_areasha 380 non-null float64 9 st_lengths 380 non-null float64 10 geometry 380 non-null object 11 geometry1 380 non-null object dtypes: float64(4), int64(3), object(5)memory usage: 35.8+ KB"
},
{
"code": null,
"e": 13478,
"s": 13116,
"text": "The new column appears as an object type, not geometry type. This won’t stop us from converting it to GeoDataFrame however. All we have to do is get rid of extra geometry column we no longer need and rename geometry1 to geometry (remember — it has to be named that) and then (!) and this is important, after converting to GeoDataFrame we have to set crs for it,"
},
{
"code": null,
"e": 13637,
"s": 13478,
"text": "la_new_geo = la_new.drop(columns=['geometry']).rename(columns={'geometry1': 'geometry'})la_new_geo = gpd.GeoDataFrame(la_new_geo)la_new_geo.crs = 'epsg:27700'"
},
{
"code": null,
"e": 13676,
"s": 13637,
"text": "And voila! la_new_geo.info() gives us:"
},
{
"code": null,
"e": 14394,
"s": 13676,
"text": "<class 'geopandas.geodataframe.GeoDataFrame'>RangeIndex: 380 entries, 0 to 379Data columns (total 11 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 objectid 380 non-null int64 1 lad17cd 380 non-null object 2 lad17nm 380 non-null object 3 lad17nmw 22 non-null object 4 bng_e 380 non-null int64 5 bng_n 380 non-null int64 6 long 380 non-null float64 7 lat 380 non-null float64 8 st_areasha 380 non-null float64 9 st_lengths 380 non-null float64 10 geometry 380 non-null geometrydtypes: float64(4), geometry(1), int64(3), object(3)memory usage: 32.8+ KB"
},
{
"code": null,
"e": 14458,
"s": 14394,
"text": "Finally, some quick calculations on our new shiny GeoDataFrame:"
},
{
"code": null,
"e": 14485,
"s": 14458,
"text": "Verifying length and area:"
},
{
"code": null,
"e": 14725,
"s": 14485,
"text": "la_new_geo['area']=la_new_geo['geometry'].swifter.apply(lambda x: x.area)la_new_geo['length'] = la_new_geo['geometry'].swifter.apply(lambda x: x.length)la_new_geo[['lad17cd', 'lad17nm', 'st_areasha', 'st_lengths', 'area', 'length']].head()"
},
{
"code": null,
"e": 14783,
"s": 14725,
"text": "Or if we wanted to verify the entire column, we could do:"
},
{
"code": null,
"e": 14834,
"s": 14783,
"text": "(la_new_geo.st_lengths == la_new_geo.length).all()"
},
{
"code": null,
"e": 15032,
"s": 14834,
"text": "You’ll notice that this would actually return False however, due to rounding error. For instance, on the first record, the st_lengths value is 71707.4075227013 and length value is 71707.40752270141"
},
{
"code": null,
"e": 15083,
"s": 15032,
"text": "So to check we can just run it with set precision:"
},
{
"code": null,
"e": 15150,
"s": 15083,
"text": "(la_new_geo.st_lengths.round(4)==la_new_geo.length.round(4)).all()"
},
{
"code": null,
"e": 15378,
"s": 15150,
"text": "In any case, when dealing with geographical objects, moving beyond 25cm precision is probably unjustified given the quality of existing data and for most applications you are probably OK limiting it to the nearest meter anyway."
},
{
"code": null,
"e": 15615,
"s": 15378,
"text": "Let’s also have a look at which region has the most complicated polygon/multiPolygon. We are going to judge that by the number of points needed to describe it. Essentially, this means counting points in exterior and interior boundaries:"
},
{
"code": null,
"e": 15905,
"s": 15615,
"text": "la_new_geo['point_count'] = la_new_geo['geometry'].swifter.apply( lambda x: np.sum([len(np.array(a.exterior)) + np.sum([len(np.array(b)) for b in a.interiors]) for a in x]) if x.type == 'MultiPolygon' else len(np.array(x.exterior)) + np.sum([len(np.array(a)) for a in x.interiors]))"
},
{
"code": null,
"e": 16505,
"s": 15905,
"text": "Ok, this is a bit complicated, so let’s look at what is going on. First we are looking at MultiPolygon objects and if we have one — we have to iterate through each polygon, converting it to exterior boundary and then numpy array to get an array of coordinates. We take the length of that array. We also get the list of internal boundaries, so we have to also iterate through them, convert them to arrays and take length. Similarly, if we are dealing with a simple polygon we drop one level of iteration, so just one exterior boundary, but we still need to iterate through potentially multiple holes."
},
{
"code": null,
"e": 16561,
"s": 16505,
"text": "We then sort the values and voila, we have our top ten:"
},
{
"code": null,
"e": 16668,
"s": 16561,
"text": "la_new_geo[ ['lad17cd', 'lad17nm', 'point_count']].sort_values('point_count', ascending=False).head(10)"
},
{
"code": null,
"e": 16717,
"s": 16668,
"text": "And this is why everyone loves Scotland so much."
},
{
"code": null,
"e": 16737,
"s": 16717,
"text": "No? Just me then..."
},
{
"code": null,
"e": 16898,
"s": 16737,
"text": "Next post is all about matching polygons with each other, the coolest invention of mankind — R-Trees (Erm.. sort of). Well... OK, maybe one or two pictures too."
},
{
"code": null,
"e": 16926,
"s": 16898,
"text": "And finally the promised..."
},
{
"code": null,
"e": 16991,
"s": 16926,
"text": "OSM datasets for all of the World: http://download.geofabrik.de/"
},
{
"code": null,
"e": 17072,
"s": 16991,
"text": "OSM datasets for Britain: http://download.geofabrik.de/europe/great-britain.html"
},
{
"code": null,
"e": 17151,
"s": 17072,
"text": "ONS Open Data: https://www.ordnancesurvey.co.uk/opendatadownload/products.html"
},
{
"code": null,
"e": 17359,
"s": 17151,
"text": "NOMIS (labour market /census data):https://www.nomisweb.co.uk/ — these usually will have data for specific area, which you can then relate to the map by using their boundary shape files (matching them by ID)"
},
{
"code": null,
"e": 17495,
"s": 17359,
"text": "I’m sure there’s many, many more, please feel free to add any interesting geospatial datasets (preferably free to use) in the comments."
},
{
"code": null,
"e": 17513,
"s": 17495,
"text": "See you around..."
},
{
"code": null,
"e": 17534,
"s": 17513,
"text": "Also in this series:"
},
{
"code": null,
"e": 17574,
"s": 17534,
"text": "Geospatial adventures. Step 1: Shapely."
},
{
"code": null,
"e": 17630,
"s": 17574,
"text": "Geospatial adventures. Step 3. Polygons grow on R-Trees"
},
{
"code": null,
"e": 17721,
"s": 17630,
"text": "Geospatial adventures. Step 4. The Colour of Magic or If I Don’t See It — It Doesn’t Exist"
}
]
|
Predicting Hotel Reservation Cancellations with Machine Learning | by Egemen Zeytinci | Towards Data Science | As you can imagine, the cancellation rate in the online booking industry is quite high. Once a reservation has been cancelled, there is almost nothing to be done. This creates discomfort for many institutions and a desire to take precautions. Therefore, predicting reservations that may be cancelled, and preventing these cancellations, will create surplus value for these institutions.
In this article, I will try to explain how cancellations can be predicted in advance using machine learning methods. Let’s start with preprocessing!
First of all, I should say that you can access the data used here in my repository, which I will share at the end of my article. I would also like to mention that this data has been the subject of a thesis. [1]
We have two separate data sets, and since we’re going to do preprocessing for both, it makes sense to combine them. But during the modeling phase, we’re going to want to get to these two sets of data separately. So, to distinguish the two, I created the id field.
import pandas as pd

h1 = pd.read_csv('data/H1.csv')
h2 = pd.read_csv('data/H2.csv')

h1.loc[:, 'id'] = range(1, len(h1) + 1)

start = h1['id'].max() + 1
stop = start + len(h2)
h2.loc[:, 'id'] = range(start, stop)

df = pd.concat([h1, h2], ignore_index=True, sort=False)
Here are the preprocessing steps for this project:
Converting string NULL or Undefined values to np.nan
Deletion of missing observations from columns with a small number of NULL values
Filling in missing values according to rules
Deletion of incorrect values
Outlier detection
import numpy as np

for col in df.columns:
    if df[col].dtype == 'object' and col != 'country':
        df.loc[df[col].str.contains('NULL'), col] = np.nan
        df.loc[df[col].str.contains('Undefined', na=False), col] = np.nan

null_series = df.isnull().sum()
print(null_series[null_series > 0])
With the code above, we convert string NULL and Undefined values to np.nan values. Then, we print the count of NULL values for each column. Here’s what the result looks like,
We can delete NULL values in country, children, market_segment, and distribution_channel, because there are few NULL values in these fields.
subset = [
    'country',
    'children',
    'market_segment',
    'distribution_channel',
]

df = df.dropna(subset=subset)
There are a number of rules specified for the data. [2] For example, values of Undefined/SC both mean no meal type. Since we have previously replaced Undefined values with NULL, we can fill the NULL values in the meal field with SC.
The fact that the agent field is NULL means the reservation didn’t come from any agency. Therefore, these reservations can be considered as purchased directly by the customers, without any intermediary organizations such as agencies. That’s why we’re not deleting these NULL values; we’re assigning an arbitrary value like 999 instead. The same goes for the company field.
More detailed information can be found in the document in the second link in the references.
df.loc[df.agent.isnull(), 'agent'] = 999
df.loc[df.company.isnull(), 'company'] = 999
df.loc[df.meal.isnull(), 'meal'] = 'SC'
The ADR field refers to the average price per night of the reservation. Therefore, it is not normal for it to take a value of zero or below. You can use df.describe().T to spot such situations. We keep only observations with a positive ADR value.
df = df[df.adr > 0]
For integer and float fields, we determine the lower and upper points using the code below. If the lower point equals the upper point, we do not do any filtering. If not, we remove observations larger than the upper point and observations smaller than the lower point from the data set.
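The original post embeds that code as a gist; as a plausible sketch of the described filtering, assuming the usual Tukey IQR rule for the lower and upper points:

import numpy as np

for col in df.select_dtypes(include=[np.number]).columns:
    q1, q3 = df[col].quantile([.25, .75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    # if the lower and upper points coincide, skip the column
    if lower != upper:
        df = df[(df[col] >= lower) & (df[col] <= upper)]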
The lower and upper points of the fields seem to be below,
Finally, we’re going to talk about multivariate outlier detection. [3] This is a more involved step, in which we work a little harder, and it does not apply to every business. Charges such as $5 or $10 for a 1-night stay can be considered normal, but that’s not normal for a 10-night stay. Therefore, removing these anomalous values from the data set will help our model to learn. I’ve tried LocalOutlierFactor and EllipticEnvelope; I’m only going over EllipticEnvelope because it yielded better results, but if you want to check out both, you can look at my repository.
from sklearn.covariance import EllipticEnvelope
import matplotlib.pyplot as plt
import numpy as np

# 'cleaned' is assumed to be the preprocessed DataFrame from the steps above
# create new features: total price and total nights
cleaned.loc[:, 'total_nights'] = \
    cleaned['stays_in_week_nights'] + cleaned['stays_in_weekend_nights']
cleaned.loc[:, 'price'] = cleaned['adr'] * cleaned['total_nights']

# create numpy array
X = np.array(cleaned[['total_nights', 'price']])

# create model
ee = EllipticEnvelope(contamination=.01, random_state=0)

# predictions
y_pred_ee = ee.fit_predict(X)

# predictions (-1: outlier, 1: normal)
anomalies = X[y_pred_ee == -1]

# plot data and outliers
plt.figure(figsize=(15, 8))
plt.scatter(X[:, 0], X[:, 1], c='white', s=20, edgecolor='k')
plt.scatter(anomalies[:, 0], anomalies[:, 1], c='red');
The chart is as follows. The red dots show outlier values.
As you can see, it would make sense to leave small values outside the data set, especially after 6 nights. By applying this process, we can save the data set.
df_cleaned = cleaned[y_pred_ee != -1].copy()

h1_cleaned = df_cleaned[df_cleaned.id.isin(h1.id.tolist())]
h2_cleaned = df_cleaned[df_cleaned.id.isin(h2.id.tolist())]

h1_cleaned = h1_cleaned.drop('id', axis=1)
h2_cleaned = h2_cleaned.drop('id', axis=1)

h1_cleaned.to_csv('data/H1_cleaned.csv', index=False)
h2_cleaned.to_csv('data/H2_cleaned.csv', index=False)
Another important step before building a model is feature engineering. Adding or removing features may make our model more effective.
First, I’m going to convert categorical data to integers using LabelEncoder, and then I’m going to look at the correlations. [4] The following code does this,
from sklearn.preprocessing import LabelEncoder
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

train = pd.read_csv('./data/H1_cleaned.csv')
test = pd.read_csv('./data/H2_cleaned.csv')

df_le = train.copy()
le = LabelEncoder()

categoricals = [
    'arrival_date_month',
    'meal',
    'country',
    'market_segment',
    'distribution_channel',
    'reserved_room_type',
    'assigned_room_type',
    'deposit_type',
    'agent',
    'company',
    'customer_type',
    'reservation_status',
]

for col in categoricals:
    df_le[col] = le.fit_transform(df_le[col])

plt.figure(figsize=(20, 15))
sns.heatmap(df_le.corr(), annot=True, fmt='.2f');
This code gives us a correlation matrix like the one below,
In this matrix, there appears to be a high negative correlation between the reservation_status and is_canceled features. There is also a high correlation between total_nights and the stays_in_week_nights and stays_in_weekend_nights fields. So, we remove the reservation_status and total_nights features from our data set. Since reservation_status_date is related to reservation_status, we will remove this feature as well.
columns = [
    'reservation_status_date',
    'total_nights',
    'reservation_status',
]

train = train.drop(columns, axis=1)
test = test.drop(columns, axis=1)
df_le = df_le.drop(columns, axis=1)
Machine learning models require numerical data to operate. So before we can model, we need to convert categorical variables into numerical variables. There are two methods we can use to do this: dummy variables and LabelEncoder. With the code you see below, we create features both using LabelEncoder and using dummy variables.
import pandas as pd

new_categoricals = [col for col in categoricals if col in train.columns]

df_hot = pd.get_dummies(data=train, columns=new_categoricals)
test_hot = pd.get_dummies(data=test, columns=new_categoricals)

X_hot = df_hot.drop('is_canceled', axis=1)
X_le = df_le.drop('is_canceled', axis=1)
y = train['is_canceled']
Then we build a logistic regression model with dummy variables and examine the classification report as a first look at the data.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X_hot, y, test_size=.2, random_state=42)

log = LogisticRegression().fit(X_train, y_train)
y_pred = log.predict(X_test)

print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
The accuracy score appears to be 0.8584, but looking at the classification report, the recall for reservations that have been cancelled is very low. This is because our data contains 23720 successful cases and only 8697 cancelled cases. In such cases, it is preferable either to downsample the majority class or to increase the number of samples for the minority class. We will first select features with a feature selection algorithm and then compare the dummy variables and the label encoder using the downsampled data.
Feature selection is one of the most important parts of feature engineering. Here we will use SelectKBest, a popular feature selection algorithm for classification problems. Our scoring function will be chi2. [5]
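The helper used below was embedded as a gist in the original post; here is a plausible sketch that matches how it is called, with the choice of k being my assumption:

from sklearn.feature_selection import SelectKBest, chi2

def select(X, k=20):
    # chi2 needs non-negative features, which holds for both the
    # label-encoded and the dummy-variable versions used here
    selector = SelectKBest(score_func=chi2, k=k)
    selector.fit(X, y)
    return X.columns[selector.get_support()].tolist()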
With the above function, we select the best features for both LabelEncoder and dummy variables.
selects_hot = select(X_hot)
selects_le = select(X_le)
Then we compare these features in a simple way.
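The comparison can be kept simple; here is a sketch that fits a plain logistic regression on each selected feature set and compares cross-validated accuracy (the metric and 5-fold setup are assumptions):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def compare(X, selects):
    # score a plain logistic regression on the selected columns only
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[selects], y, cv=5).mean()

print('dummy variables:', compare(X_hot, selects_hot))
print('label encoder  :', compare(X_le, selects_le))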
The comparison results are as follows,
We select these fields because the features we create with dummy variables give better results.
from sklearn.model_selection import train_test_split
from sklearn.utils import resample
import pandas as pd

# 'selected' is not defined in the original snippet; it is inferred here
# as the training set restricted to the selected features
selected = df_hot[selects_hot + ['is_canceled']]

last = test_hot[selects_hot + ['is_canceled']]
X_last = last.drop('is_canceled', axis=1)
y_last = last['is_canceled']

# separate majority and minority classes
major = selected[selected['is_canceled'] == 0]
minor = selected[selected['is_canceled'] == 1]

# downsample majority class
downsampled = resample(major,
                       replace=False,
                       n_samples=len(minor),
                       random_state=123)

# combine minority class with downsampled majority class
df_new = pd.concat([downsampled, minor])

# display new class counts
print(df_new['is_canceled'].value_counts())

X = df_new.drop('is_canceled', axis=1)
y = df_new['is_canceled']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2,
                                                    random_state=42)
With the code above, we have equalized the number of successful and canceled reservations at 8,697 each and split our data set into train and test sets. We will then measure the performance of our models by creating the following class.
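A minimal sketch of what the Report class might look like, consistent with how it is called below (the exact metrics and plot styling are assumptions):

from sklearn.metrics import (accuracy_score, classification_report,
                             roc_auc_score, roc_curve)
import matplotlib.pyplot as plt

class Report:
    def __init__(self, X_test, y_test):
        self.X_test = X_test
        self.y_test = y_test

    def metrics(self, model):
        # print accuracy plus per-class precision / recall / f1
        y_pred = model.predict(self.X_test)
        print('accuracy:', accuracy_score(self.y_test, y_pred))
        print(classification_report(self.y_test, y_pred))

    def plot_roc_curve(self, model, save=False):
        # plot the ROC curve from predicted probabilities
        y_proba = model.predict_proba(self.X_test)[:, 1]
        fpr, tpr, _ = roc_curve(self.y_test, y_proba)
        plt.plot(fpr, tpr, label='AUC = %.3f' % roc_auc_score(self.y_test, y_proba))
        plt.plot([0, 1], [0, 1], linestyle='--')
        plt.xlabel('False Positive Rate')
        plt.ylabel('True Positive Rate')
        plt.legend()
        if save:
            plt.savefig('roc_curve.png')
        plt.show()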
Let’s go to the last step and compare our models!
Many models were tried here; you can see them in my repository. But here I’m going to share the results of the top 2 models and some code that shows how we do hyperparameter tuning. Here’s how it goes:
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

report = Report(X_test, y_test)

xgb = XGBClassifier().fit(X_train, y_train)

xgb_params = {
    'n_estimators': [100, 500, 1000],
    'max_depth': [3, 5, 10],
    'min_samples_split': [2, 5, 10]
}

params = {
    'estimator': xgb,
    'param_grid': xgb_params,
    'cv': 5,
    'refit': False,
    'n_jobs': -1,
    'verbose': 2,
    'scoring': 'recall',
}

xgb_cv = GridSearchCV(**params)
_ = xgb_cv.fit(X_train, y_train)
print(xgb_cv.best_params_)

xgb = XGBClassifier(**xgb_cv.best_params_).fit(X_train, y_train)

report.metrics(xgb)
report.plot_roc_curve(xgb, save=True)
XGBoost results are as follows:
If we replace the XGBoost with the GBM using the codes above, the results are as follows:
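The swap itself is minimal: a sketch assuming scikit-learn's GradientBoostingClassifier as the GBM, reusing the same grid-search settings (min_samples_split is a native parameter for this estimator):

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

gbm = GradientBoostingClassifier().fit(X_train, y_train)
params['estimator'] = gbm  # reuse the same grid-search settings as above

gbm_cv = GridSearchCV(**params)
_ = gbm_cv.fit(X_train, y_train)

gbm = GradientBoostingClassifier(**gbm_cv.best_params_).fit(X_train, y_train)
report.metrics(gbm)
report.plot_roc_curve(gbm, save=True)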
In this article, I first wanted to emphasize the importance of the preprocessing and feature selection steps in the model building process. The way to create a successful model is to start from clean data.
When optimizing the model afterwards, and especially in classification problems, the importance of recall values should not be overlooked. Accuracy by class is one of the most critical points of classification problems.
Hopefully, it has been a useful article!
Thank you for reading! If you’re curious about more and want to see the results for the H2 file, please visit my repository!
[1] Nuno Antonio, Ana de Almeida and Luis Nunes, Predicting hotel booking cancellations to decrease uncertainty and increase revenue (2017)
[2] Nuno Antonio, Ana de Almeida and Luis Nunes, Hotel booking demand datasets (2019)
[3] Christopher Jose, Anomaly Detection Techniques in Python (2019)
[4] Vishal R, Feature selection — Correlation and P-value (2018)
[5] Feature selection using SelectKBest (2018)
Different possible marks for n questions and negative marking - GeeksforGeeks | 09 Mar, 2022
Given the number of questions as n, the marks for a correct answer as p, and the marks for an incorrect answer as q. One can attempt to solve a question in the examination and get either p marks if the answer is right or q marks if the answer is wrong, or leave the question unattempted and get 0 marks. The task is to find the count of all the different possible marks that one can score in the examination.
Examples:
Input: n = 2, p = 1, q = -1
Output: 5
The different possible marks are: -2, -1, 0, 1, 2
Input: n = 4, p = 2, q = -1
Output: 12
Approach: Iterate through all possible numbers of correctly solved and unsolved problems. Store the scores in a set containing distinct elements, keeping in mind that the number of incorrectly solved problems must be non-negative. Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// CPP program to find the count of
// all the different possible marks
// that one can score in the examination
#include <bits/stdc++.h>
using namespace std;

// Function to return
// the count of distinct scores
int scores(int n, int p, int q)
{
    // Set to store distinct values
    set<int> hset;

    // iterate through all
    // possible pairs of (p, q)
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= n; j++) {
            int correct = i;
            int not_solved = j;
            int incorrect = n - i - j;

            // if there is a non-negative number
            // of incorrectly solved problems
            if (incorrect >= 0)
                hset.insert(p * correct + q * incorrect);
            else
                break;
        }
    }

    // return the size of the set
    // containing distinct elements
    return hset.size();
}

// Driver code
int main()
{
    // Get the number of questions
    int n = 4;

    // Get the marks for correct answer
    int p = 2;

    // Get the marks for incorrect answer
    int q = -1;

    // Get the count and print it
    cout << (scores(n, p, q));
}

// This code is contributed by
// Surendra_Gangwar
// Java program to find the count of
// all the different possible marks
// that one can score in the examination
import java.util.*;

class GFG {

    // Function to return
    // the count of distinct scores
    static int scores(int n, int p, int q)
    {
        // Set to store distinct values
        HashSet<Integer> hset = new HashSet<Integer>();

        // iterate through all
        // possible pairs of (p, q)
        for (int i = 0; i <= n; i++) {
            for (int j = 0; j <= n; j++) {
                int correct = i;
                int not_solved = j;
                int incorrect = n - i - j;

                // if there is a non-negative number
                // of incorrectly solved problems
                if (incorrect >= 0)
                    hset.add(p * correct + q * incorrect);
                else
                    break;
            }
        }

        // return the size of the set
        // containing distinct elements
        return hset.size();
    }

    // Driver code
    public static void main(String[] args)
    {
        // Get the number of questions
        int n = 4;

        // Get the marks for correct answer
        int p = 2;

        // Get the marks for incorrect answer
        int q = -1;

        // Get the count and print it
        System.out.println(scores(n, p, q));
    }
}
# Python3 program to find the count of
# all the different possible marks
# that one can score in the examination

# Function to return the count of
# distinct scores
def scores(n, p, q):

    # Set to store distinct values
    hset = set()

    # Iterate through all possible
    # pairs of (p, q)
    for i in range(0, n + 1):
        for j in range(0, n + 1):
            correct = i
            not_solved = j
            incorrect = n - i - j

            # If there is a non-negative number
            # of incorrectly solved problems
            if incorrect >= 0:
                hset.add(p * correct + q * incorrect)
            else:
                break

    # return the size of the set
    # containing distinct elements
    return len(hset)

# Driver code
if __name__ == "__main__":

    # Get the number of questions
    n = 4

    # Get the marks for correct answer
    p = 2

    # Get the marks for incorrect answer
    q = -1

    # Get the count and print it
    print(scores(n, p, q))

# This code is contributed by Rituraj Jain
// C# program to find the count of
// all the different possible marks
// that one can score in the examination
using System;
using System.Collections.Generic;

class GFG
{
    // Function to return
    // the count of distinct scores
    static int scores(int n, int p, int q)
    {
        // Set to store distinct values
        HashSet<int> hset = new HashSet<int>();

        // iterate through all
        // possible pairs of (p, q)
        for (int i = 0; i <= n; i++)
        {
            for (int j = 0; j <= n; j++)
            {
                int correct = i;
                int not_solved = j;
                int incorrect = n - i - j;

                // if there is a non-negative number
                // of incorrectly solved problems
                if (incorrect >= 0)
                    hset.Add(p * correct + q * incorrect);
                else
                    break;
            }
        }

        // return the size of the set
        // containing distinct elements
        return hset.Count;
    }

    // Driver code
    public static void Main()
    {
        // Get the number of questions
        int n = 4;

        // Get the marks for correct answer
        int p = 2;

        // Get the marks for incorrect answer
        int q = -1;

        // Get the count and print it
        Console.WriteLine(scores(n, p, q));
    }
}

/* This code contributed by PrinciRaj1992 */
<script>

// JavaScript program to find the count of
// all the different possible marks
// that one can score in the examination

// Function to return
// the count of distinct scores
function scores(n, p, q)
{
    // Set to store distinct values
    let hset = new Set();

    // iterate through all
    // possible pairs of (p, q)
    for (let i = 0; i <= n; i++) {
        for (let j = 0; j <= n; j++) {
            let correct = i;
            let not_solved = j;
            let incorrect = n - i - j;

            // if there is a non-negative number
            // of incorrectly solved problems
            if (incorrect >= 0)
                hset.add(p * correct + q * incorrect);
            else
                break;
        }
    }

    // return the size of the set
    // containing distinct elements
    return hset.size;
}

// Driver Code

// Get the number of questions
let n = 4;

// Get the marks for correct answer
let p = 2;

// Get the marks for incorrect answer
let q = -1;

// Get the count and print it
document.write(scores(n, p, q));

</script>
12
Hardware Protection and Type of Hardware Protection - GeeksforGeeks | 31 May, 2021
In this article, we are going to learn about hardware protection and its types. First, let’s see the types of hardware used in a computer system. We know that a computer system contains hardware like the processor, monitor, RAM, and many more, and the operating system ensures that these devices cannot be accessed directly by the user.
Basically, hardware protection is divided into 3 categories: CPU protection, memory protection, and I/O protection. These are explained below.
1. CPU Protection: CPU protection means that the CPU cannot be given to a process forever; it should be allotted only for a limited time, otherwise other processes will not get a chance to execute. A timer is used to handle this: it gives a process a certain amount of CPU time, and when the timer expires, a signal is sent telling the process to leave the CPU. Hence, a process cannot hold the CPU indefinitely.
2. Memory Protection: Memory protection addresses the situation where two or more processes are in memory and one process may access another process’s memory. To prevent this, we use two registers:
1. Base register
2. Limit register
The base register stores the starting address of the process’s memory, and the limit register stores the size of the process. So when a process wants to access memory, the requested address is checked against these two registers to decide whether the access is allowed.
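Conceptually, the check reduces to a simple bounds test; a minimal sketch of the idea (not how any particular OS implements it):

def is_legal_access(address, base, limit):
    # an access is legal only if the address falls inside
    # the process's range [base, base + limit)
    return base <= address < base + limit

# hypothetical process loaded at address 3000 with size 500
print(is_legal_access(3100, base=3000, limit=500))  # True
print(is_legal_access(3600, base=3000, limit=500))  # False -> trap to the OS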
3. I/O Protection: I/O protection ensures that the following situations never occur in the system:
Terminating the I/O of another process
Viewing the I/O of another process
Giving priority to a particular process’s I/O
SequenceMatcher in Python for Longest Common Substring - GeeksforGeeks | 11 Dec, 2017
Given two strings ‘X’ and ‘Y’, print the longest common sub-string.
Examples:
Input : X = "GeeksforGeeks",
Y = "GeeksQuiz"
Output : Geeks
Input : X = "zxabcdezy",
Y = "yzabcdezx"
Output : abcdez
We have an existing solution for this problem; please refer to the Print the longest common substring link. Here we will solve the problem in Python using the SequenceMatcher.find_longest_match() method.
First, we initialize a SequenceMatcher object with the two input strings str1 and str2. find_longest_match(aLow, aHigh, bLow, bHigh) takes 4 parameters: aLow and bLow are the start indices of the first and second strings respectively, and aHigh and bHigh are the lengths of the first and second strings respectively. find_longest_match() returns a named tuple (i, j, k) such that a[i:i+k] is equal to b[j:j+k]; if no blocks match, it returns (aLow, bLow, 0).
# Function to find Longest Common Sub-string

from difflib import SequenceMatcher

def longestSubstring(str1, str2):

    # initialize SequenceMatcher object with
    # input string
    seqMatch = SequenceMatcher(None, str1, str2)

    # find match of longest sub-string
    # output will be like Match(a=0, b=0, size=5)
    match = seqMatch.find_longest_match(0, len(str1), 0, len(str2))

    # print longest substring
    if (match.size != 0):
        print(str1[match.a: match.a + match.size])
    else:
        print('No longest common sub-string found')

# Driver program
if __name__ == "__main__":
    str1 = 'GeeksforGeeks'
    str2 = 'GeeksQuiz'
    longestSubstring(str1, str2)
Output:
Geeks
The Amazon Data Analyst Interview | by Jay Feng | Towards Data Science

Amazon is one of the largest online markets in the world, and unlike a traditional marketplace, Amazon is gigantic: it is filled with millions and millions of products on display. In the USA alone, Amazon controls more than half of the online market, and since its inception in 1994, it has worked toward its ultimate goal of being “the one-stop shop” by learning from data.
In this age of data, Amazon collects data on every customer click and interaction on its website; this includes what items customers are looking at, what items they put in their cart, what quality they want, what their preferences are, etc.
Amazon compiles this data and feeds it to its recommendation system to better serve customers’ needs and improve the shopping experience by recommending products that best fit their preferences. Amazon also leverages data to make business decisions and drive growth. Data analysts at Amazon work with both technical and non-technical internal teams to build the right analyses to answer key business questions.
To better familiarize yourself with Amazon’s interview process, check these articles out on the Amazon Business Intelligence, Business Analyst, Machine Learning Interviews!
Data analysts at Amazon help bridge the gap between data and the decision-making process. Typical data analyst roles at Amazon include data analysis, dashboard/report building, and metric definitions and reviews. Data analysts at Amazon also design systems for data collection, compiling, analysis, and reporting.
Data analyst roles differ based on the type of data they are working with (e.g., Twitch data, sales data, Alexa data, etc.), the type of project they are on, the product they’re working with, and the team they're assigned to. Data analysts at Amazon also collaborate cross-functionally with various teams including engineering, data science, and marketing to provide data-driven insights to research and business areas. Depending on the team, the role may range from basic business intelligence analytics such as data processing, analysis, and reporting to a more technical role like data collection.
The data analyst position at Amazon requires specialization in knowledge and experience. Therefore, Amazon only hires highly qualified candidates with at least 3 years of industry experience working with data analysis, data modeling, advanced business analytics, and other related fields.
Other basic qualifications include:
Bachelor’s or Master’s (PhD preferred) in Finance, Business, Economics, Engineering, Math, Statistics, Computer Science, Operations Research, or related fields.
Experience with scripting, querying, and data warehouse tools, such as Linux, R, SAS, and/or SQL
Extensive experience in programming languages like Python, R, or Java. (Check out the Python data science interview guide here)
Experience with querying relational databases (SQL) and hands-on experience with processing, optimization, and analysis of large data sets.
Proficiency with Microsoft Excel, Macros and Access.
Experience in identifying metrics and KPIs, gathering data, experimentation, and presenting decks, dashboards, and scorecards.
Experience with business intelligence and automated self-service reporting tools such as Tableau, Quicksight, Microsoft Power BI, or Cognos.
Experience with AWS services such as RDS, SQS, or Lambda.
Amazon is a large conglomerate technology company offering many products and services. As a result of this, Amazon has over 100 teams working on various areas. Data analysts work with these teams to help bridge the gap between data and the decision making process. Generally, data analysts at Amazon help streamline the decision making process through the analysis of data.
Depending on the team at Amazon, data analysts’ responsibilities may include:
Alliance (Twitch): Leveraging advanced analytics in shaping the way deals performance is measured, defining what questions should be asked, and scaling analytics methods and tools to support Twitch’s growing business. Also, define and track KPIs, support strategic initiatives, evaluate new business opportunities, and improve/enhance decision making through data.
Finance Operation: Develop standard and ad hoc analysis and report for decision making. Structure high-level business problems within the framework of analyzing, defining, creating and sourcing the data, producing metrics, and providing recommendations. Automate standard reporting and drive data governance and standardization.
Search Capacity: Leverage advance analytics and predictive algorithms to create powerful, customer-focused search solutions and technologies. Collaborate with engineering and operation teams to scale Amazon search service by identifying and tracking KPIs regarding efficiency and cost.
Textbook team: Build robust data analytics solutions to improve the customer experience. Employ advanced data mining concepts, data modeling, and analytics to define and measure metrics for evaluating business growth. Extract, integrate and work on critical data to build data pipelines, automate reports and dashboards, and leverage self-service tools to internal stakeholders.
Fraud and Abuse prevention: Leverage sophisticated machine learning concepts to mitigate and prevent fraud. Develop and manage scalable solutions for new and existing metrics, reports, analysis, and dashboards to support business needs. Implement customized ETL pipelines from diverse sources for higher data quality and availability
Buying: Use advanced analytics concepts to determine how much inventory to carry on all of Amazon’s websites worldwide. Develop and maintain metrics, expose and measure the current performance of Amazon’s buying system, identify and quantify opportunities for improvement, and leverage Amazon’s massive data to identify and prevent unexpected performance. Collaborate cross-functionally with other teams, especially engineering, research, data science and business teams for future innovation.
Engineering Success (Twitch): Collaborate with the engineering team at Twitch to provide data analysis towards improving and shaping success measurement metrics, defining business-impact questions, and scaling analytics methods and tools to bolster Amazon’s growing business.
The Amazon data analyst interview process follows the standard Amazon “STAR” (Situation, Task, Action, and Result) process with slight variations. The interview process starts with an initial phone screen with HR. After this, a technical interview will be scheduled. Once you get through the technical interview, a final onsite interview with 5 to 6 one-on-ones with the hiring manager, team members, and HR will be scheduled.
This is a standard introductory interview with HR after the submission of an application. The interview is exploratory and lasts about 45 minutes; it focuses on showcasing your background, skillsets, and work experience related to the position. You also get to know about Amazon’s work culture and the position.
Note: Amazon emphasizes its leadership principles. It will be really helpful to tailor your responses to follow the “STAR” format based on Amazon’s leadership principles.
Sample Questions:
What is the biggest challenge you have overcome?
What is your previous experience with SQL?
Tell me about a time when you disagreed with a manager. How did you handle the situation? What was the result?
How would you go about making improvements (performance, safety, process) in your workplace?
Describe a long term goal and how you plan to achieve it.
This is a technical interview with a member of HR or a manager. Amazon uses a collaboration service platform called “Collabedit” for all its technical interviews.
The questions in this interview round revolve around a SQL coding challenge, Excel, and questions regarding Amazon’s Leadership Principles (LP).
Check out our ultimate guide to SQL interview questions
Here’s another example Amazon SQL interview question. Consider the following tables:

'users' table
+-----------------+----------+
| columns         | type     |
+-----------------+----------+
| id              | integer  |
| name            | string   |
| created_at      | datetime |
| neighborhood_id | integer  |
| mail            | string   |
+-----------------+----------+

'comments' table
+------------+----------+
| columns    | type     |
+------------+----------+
| user_id    | integer  |
| body       | text     |
| created_at | datetime |
+------------+----------+
Write a SQL query to create a histogram of the number of comments per user in the month of January 2020. Assume bin buckets class intervals of one.
Here’s a hint:
What does a histogram represent? In this case we’re interested in using a histogram to represent the distribution of comments each user has made in January 2020.
A histogram with bin buckets of size one means that we can avoid the logical overhead of grouping frequencies into specific intervals.
For example, if we wanted a histogram of size five, we would have to run a SELECT statement like so:
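A sketch of what such a width-five statement might look like against the tables above; the inner query counts each user's January 2020 comments, and FLOOR does the bucketing (exact date handling is an assumption):

SELECT FLOOR(comment_count / 5) * 5 AS bucket,
       COUNT(*)                     AS frequency
FROM (
    SELECT u.id, COUNT(c.user_id) AS comment_count
    FROM users u
    LEFT JOIN comments c
        ON u.id = c.user_id
       AND c.created_at >= '2020-01-01'
       AND c.created_at <  '2020-02-01'
    GROUP BY u.id
) AS counts
GROUP BY 1
ORDER BY 1;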
Try solving this question in our interactive SQL editor
The onsite interview for a data analyst at Amazon is very similar to other onsite interviews at the company. Candidates who progress to this stage of the interview process go through 5 or 6 one-on-one interviews with a hiring manager, a team manager, data analysts, data engineers, and statisticians. There is a lunch break in between interview rounds. The Amazon data analyst onsite interview rounds are comprised of data science concepts, SQL coding, and the famous Amazon Leadership Principles.
The Amazon data analyst interview primarily consists of data science concepts. It is uniquely structured to assess a candidate’s ability to analyze Amazon’s data to provide new insights that will shape business decisions. Leveraging Amazon’s “STAR” format in answering questions can give you an advantage. To better understand Amazon’s STAR process, check out Amazon’s BI engineer interview process on Interview Query.
Interviewers at Amazon are looking for you to support your answers with your previous work experience. Attempt to answer each question with examples from past work experience; this may include the challenges you faced, what method or approach you used, and how you overcame those challenges.
Which functions in SQL do you like the most?
Explain OLAP cubes and a use case explaining business analytics application.
What are data normalization and denormalization?
What happens to the data of a table with foreign keys when the associated table with primary keys has been updated?
What do you understand by cascading referential integrity?
Explain the difference between the linear and logistic regression and use examples.
What is an independent variable and what if I have three independent variables in my model and no dependent variable?
Write an equation for the multivariate or multiple regression model.
Given a sample with n observations, how could you test a hypothesis?
What are the assumptions of ANOVA?
What test would you use for a small sample?
What is the null hypothesis?
What are type 1 and type 2 errors?
Use the following tables to write a query to retrieve data for customers who registered in the past ten days and spent over $100. Write another query to retrieve data for customers that spent over $100 in the past seven days. The first table is a customer purchase table with five columns: customer id, purchase date, product id, unit price, and units purchased. The second table is a customer details table with two columns: customer id and registration date.
What is the probability of generating ten consecutive numbers in ascending order out of 100 numbers?
How would you merge two tables in SQL? (A brief example also follows this list.)
Write a function to calculate the Fibonacci sequence in any of these languages (VBA, Python, Java).
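For the customer-spend question above, here is a minimal sketch, assuming MySQL; the table and column names (customer_purchases, customer_details, unit_price, units_purchased, purchase_date, registration_date) are guesses based on the prompt’s description:

-- Customers who registered in the past ten days and spent over $100.
SELECT d.customer_id,
       SUM(p.unit_price * p.units_purchased) AS total_spent
FROM customer_details d
JOIN customer_purchases p ON p.customer_id = d.customer_id
WHERE d.registration_date >= CURDATE() - INTERVAL 10 DAY
GROUP BY d.customer_id
HAVING total_spent > 100;

-- Customers who spent over $100 in the past seven days.
SELECT p.customer_id,
       SUM(p.unit_price * p.units_purchased) AS total_spent
FROM customer_purchases p
WHERE p.purchase_date >= CURDATE() - INTERVAL 7 DAY
GROUP BY p.customer_id
HAVING total_spent > 100;

In an interview, it is worth clarifying whether “spent over $100” means lifetime spend or spend within the stated window; handling that ambiguity well is part of what is being tested.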
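For the table-merge question, “merge” usually means one of two things: combining columns with a JOIN, or stacking rows with a UNION. A brief sketch using hypothetical tables table_a and table_b:

-- Column-wise merge: match rows on a shared key.
SELECT a.id, a.name, b.order_total
FROM table_a a
JOIN table_b b ON b.id = a.id;

-- Row-wise merge: stack compatible result sets.
SELECT id, name FROM table_a
UNION
SELECT id, name FROM table_b;

Note that UNION drops duplicate rows, while UNION ALL keeps them.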
If you’d like more exclusive interview explanations, check out Interview Query!
Check out my YouTube channel for more interviewing guides and tips & tricks for solving problems.
Find more Amazon interview guides like the Amazon BI engineer interview and the Amazon business analyst interview on the Interview Query blog.
Originally published at https://www.interviewquery.com on July 7, 2020.

[
{
"code": null,
"e": 562,
"s": 171,
"text": "Amazon is one of the largest online markets in the world, and unlike the traditional market place, Amazon is gigantic! and it is filled with millions and millions of products on display. In the USA only, Amazon controls more than half of the online market, and since its inception in 1994, it has worked to achieve its ultimate goal which is being “the one-stop-shop” by learning from data."
},
{
"code": null,
"e": 805,
"s": 562,
"text": "In this age of data, Amazon collects data on every customer clicks and interactions on its website; this includes what items customers are looking at, what items they put in their cart, what quality they want, what their preferences are, etc."
},
{
"code": null,
"e": 1213,
"s": 805,
"text": "Amazon compiles this data and feeds it to its recommendation system to better serve customers needs and improve the shopping experience by recommending products that best fit their preferences. Amazon also leverages data to make business decisions and drive growth. Data analyst at Amazon work with both technical and non-technical internal teams to build the right analysis to answer key business questions"
},
{
"code": null,
"e": 1386,
"s": 1213,
"text": "To better familiarize yourself with Amazon’s interview process, check these articles out on the Amazon Business Intelligence, Business Analyst, Machine Learning Interviews!"
},
{
"code": null,
"e": 1700,
"s": 1386,
"text": "Data analysts at Amazon help bridge the gap between data and the decision-making process. Typical data analyst roles at Amazon include data analysis, dashboard/report building, and metric definitions and reviews. Data analysts at Amazon also design systems for data collection, compiling, analysis, and reporting."
},
{
"code": null,
"e": 2300,
"s": 1700,
"text": "Data analyst roles differ based on the type of data they are working with (e.g Twitch data, Sales data, Alexia data, etc.), the type of project they are on, the product they’re working with, and the team they're assigned to. Data analysts at Amazon also collaborate cross-functionally with various teams including engineering, data science, and marketing to provide data-driven insights to research and business areas. Depending on the team, the role may range from basic business intelligence analytics such as data processing, analysis, and reporting to a more technical role like data collection."
},
{
"code": null,
"e": 2589,
"s": 2300,
"text": "The data analyst position at Amazon requires specialization in knowledge and experience. Therefore, Amazon only hires highly qualified candidates with at least 3 years of industry experience working with data analysis, data modeling, advanced business analytics, and other related fields."
},
{
"code": null,
"e": 2625,
"s": 2589,
"text": "Other basic qualifications include:"
},
{
"code": null,
"e": 2784,
"s": 2625,
"text": "Bachelor’s or Masters (PhD preferred) in Finance, Business, Economics, Engineering, math, statistics, computer science, Operation Research, or related fields."
},
{
"code": null,
"e": 2881,
"s": 2784,
"text": "Experience with scripting, querying, and data warehouse tools, such as Linux, R, SAS, and/or SQL"
},
{
"code": null,
"e": 3009,
"s": 2881,
"text": "Extensive experience in programming languages like Python, R, or Java. (Check out the Python data science interview guide here)"
},
{
"code": null,
"e": 3148,
"s": 3009,
"text": "Experience with querying relational databases (SQL) and hands-on experience with processing, optimization, and analysis of large data set."
},
{
"code": null,
"e": 3201,
"s": 3148,
"text": "Proficiency with Microsoft Excel, Macros and Access."
},
{
"code": null,
"e": 3328,
"s": 3201,
"text": "Experience in identifying metrics and KPIs, gathering data, experimentation, and presenting decks, dashboards, and scorecards."
},
{
"code": null,
"e": 3469,
"s": 3328,
"text": "Experience with business intelligence and automated self-service reporting tools such as Tableau, Quicksight, Microsoft Power BI, or Cognos."
},
{
"code": null,
"e": 3527,
"s": 3469,
"text": "Experience with AWS services such as RDS, SQS, or Lambda."
},
{
"code": null,
"e": 3901,
"s": 3527,
"text": "Amazon is a large conglomerate technology company offering many products and services. As a result of this, Amazon has over 100 teams working on various areas. Data analysts work with these teams to help bridge the gap between data and the decision making process. Generally, data analysts at Amazon help streamline the decision making process through the analysis of data."
},
{
"code": null,
"e": 3979,
"s": 3901,
"text": "Depending on the team at Amazon, data analysts’ responsibilities may include:"
},
{
"code": null,
"e": 4344,
"s": 3979,
"text": "Alliance (Twitch): Leveraging advanced analytics in shaping the way deals performance is measured, defining what questions should be asked, and scaling analytics methods and tools to support Twitch’s growing business. Also, define and track KPIs, support strategic initiatives, evaluate new business opportunities, and improve/enhance decision making through data."
},
{
"code": null,
"e": 4673,
"s": 4344,
"text": "Finance Operation: Develop standard and ad hoc analysis and report for decision making. Structure high-level business problems within the framework of analyzing, defining, creating and sourcing the data, producing metrics, and providing recommendations. Automate standard reporting and drive data governance and standardization."
},
{
"code": null,
"e": 4959,
"s": 4673,
"text": "Search Capacity: Leverage advance analytics and predictive algorithms to create powerful, customer-focused search solutions and technologies. Collaborate with engineering and operation teams to scale Amazon search service by identifying and tracking KPIs regarding efficiency and cost."
},
{
"code": null,
"e": 5338,
"s": 4959,
"text": "Textbook team: Build robust data analytics solutions to improve the customer experience. Employ advanced data mining concepts, data modeling, and analytics to define and measure metrics for evaluating business growth. Extract, integrate and work on critical data to build data pipelines, automate reports and dashboards, and leverage self-service tools to internal stakeholders."
},
{
"code": null,
"e": 5672,
"s": 5338,
"text": "Fraud and Abuse prevention: Leverage sophisticated machine learning concepts to mitigate and prevent fraud. Develop and manage scalable solutions for new and existing metrics, reports, analysis, and dashboards to support business needs. Implement customized ETL pipelines from diverse sources for higher data quality and availability"
},
{
"code": null,
"e": 6166,
"s": 5672,
"text": "Buying: Use advanced analytics concepts to determine how much inventory to carry on all of Amazon’s websites worldwide. Develop and maintain metrics, expose and measure the current performance of Amazon’s buying system, identify and quantify opportunities for improvement, and leverage Amazon’s massive data to identify and prevent unexpected performance. Collaborate cross-functionally with other teams, especially engineering, research, data science and business teams for future innovation."
},
{
"code": null,
"e": 6442,
"s": 6166,
"text": "Engineering Success (Twitch): Collaborate with the engineering team at Twitch to provide data analysis towards improving and shaping success measurement metrics, defining business-impact questions, and scaling analytics methods and tools to bolster Amazon’s growing business."
},
{
"code": null,
"e": 6869,
"s": 6442,
"text": "The Amazon data analyst interview process follows the standard Amazon “STAR” (Situation, Task, Action, and Result) process with slight variations. The interview process starts with an initial phone screen with HR. After this, a technical interview will be scheduled. Once you get through the technical interview, a final onsite interview with 5 to 6 one-on-ones with the hiring manager, team members, and HR will be scheduled."
},
{
"code": null,
"e": 7181,
"s": 6869,
"text": "This is a standard introductory interview with HR after the submission of an application. The interview is exploratory and lasts about 45 minutes; it focuses on showcasing your background, skillsets, and work experience related to the position. You also get to know about Amazon’s work culture and the position."
},
{
"code": null,
"e": 7352,
"s": 7181,
"text": "Note: Amazon emphasizes its leadership principles. It will be really helpful to tailor your responses to follow the “STAR” format based on Amazon’s leadership principles."
},
{
"code": null,
"e": 7370,
"s": 7352,
"text": "Sample Questions:"
},
{
"code": null,
"e": 7419,
"s": 7370,
"text": "What is the biggest challenge you have overcome?"
},
{
"code": null,
"e": 7462,
"s": 7419,
"text": "What is your previous experience with SQL?"
},
{
"code": null,
"e": 7573,
"s": 7462,
"text": "Tell me about a time when you disagreed with a manager. How did you handle the situation? What was the result?"
},
{
"code": null,
"e": 7666,
"s": 7573,
"text": "How would you go about making improvements (performance, safety, process) in your workplace?"
},
{
"code": null,
"e": 7724,
"s": 7666,
"text": "Describe a long term goal and how you plan to achieve it."
},
{
"code": null,
"e": 7887,
"s": 7724,
"text": "This is a technical interview with a member of HR or a manager. Amazon uses a collaboration service platform called “Collabedit” for all its technical interviews."
},
{
"code": null,
"e": 8033,
"s": 7887,
"text": "The questions in this interview round revolves around a SQL coding challenge, Excel, and questions regarding Amazon’s Leadership Principles (LP)."
},
{
"code": null,
"e": 8089,
"s": 8033,
"text": "Check out our ultimate guide to SQL interview questions"
},
{
"code": null,
"e": 8142,
"s": 8089,
"text": "Here’s another example Amazon SQL interview question"
},
{
"code": null,
"e": 8617,
"s": 8142,
"text": "'users' table+-----------------+----------+| columns | type |+-----------------+----------+| id | integer || name | string || created_at | datetime || neighborhood_id | integer || mail | string |+-----------------+----------+'comments' table+------------+----------+| columns | type |+------------+----------+| user_id | integer || body | text || created_at | datetime |+------------+----------+"
},
{
"code": null,
"e": 8648,
"s": 8617,
"text": "Consider the following tables:"
},
{
"code": null,
"e": 8796,
"s": 8648,
"text": "Write a SQL query to create a histogram of the number of comments per user in the month of January 2020. Assume bin buckets class intervals of one."
},
{
"code": null,
"e": 8811,
"s": 8796,
"text": "Here’s a hint:"
},
{
"code": null,
"e": 8973,
"s": 8811,
"text": "What does a histogram represent? In this case we’re interested in using a histogram to represent the distribution of comments each user has made in January 2020."
},
{
"code": null,
"e": 9108,
"s": 8973,
"text": "A histogram with bin buckets of size one means that we can avoid the logical overhead of grouping frequencies into specific intervals."
},
{
"code": null,
"e": 9209,
"s": 9108,
"text": "For example, if we wanted a histogram of size five, we would have to run a SELECT statement like so:"
},
{
"code": null,
"e": 9265,
"s": 9209,
"text": "Try solving this question in our interactive SQL editor"
},
{
"code": null,
"e": 9763,
"s": 9265,
"text": "The onsite interview for a data analyst at Amazon is very similar to other onsite interviews at the company. Candidates who progress to this stage of the interview process go through 5 or 6 one-on-one interviews with a hiring manager, a team manager, data analysts, data engineers, and statisticians. There is a lunch break in between interview rounds. The Amazon data analyst onsite interview rounds are comprised of data science concepts, SQL coding, and the famous Amazon Leadership Principles."
},
{
"code": null,
"e": 10181,
"s": 9763,
"text": "The Amazon data analyst interview primarily consists of data science concepts. It is uniquely structured to asses a candidate’s ability to analyze Amazon’s data to provide new insights that will shape business decisions. Leveraging Amazon’s “STAR” format in answering questions can give you an advantage. To better understand Amazon’s STAR process, check out Amazon’s BI engineer interview process on Interview Query."
},
{
"code": null,
"e": 10473,
"s": 10181,
"text": "Interviewers at Amazon are looking for you to support your answers with your previous work experience. Attempt to answer each question with examples from past work experience; this may include the challenges you faced, what method or approach you used, and how you overcame those challenges."
},
{
"code": null,
"e": 10518,
"s": 10473,
"text": "Which functions in SQL do you like the most?"
},
{
"code": null,
"e": 10595,
"s": 10518,
"text": "Explain OLAP cubes and a use case explaining business analytics application."
},
{
"code": null,
"e": 10646,
"s": 10595,
"text": "What are data normalization and non-normalization?"
},
{
"code": null,
"e": 10762,
"s": 10646,
"text": "What happens to the data of a table with foreign keys when the associated table with primary keys has been updated?"
},
{
"code": null,
"e": 10821,
"s": 10762,
"text": "What do you understand by cascading referential integrity?"
},
{
"code": null,
"e": 10905,
"s": 10821,
"text": "Explain the difference between the linear and logistic regression and use examples."
},
{
"code": null,
"e": 11023,
"s": 10905,
"text": "What is an independent variable and what if I have three independent variables in my model and no dependent variable?"
},
{
"code": null,
"e": 11093,
"s": 11023,
"text": "Write an equation for the multivariance or multiple regression model."
},
{
"code": null,
"e": 11162,
"s": 11093,
"text": "Given a sample with n observations, how could you test a hypothesis?"
},
{
"code": null,
"e": 11197,
"s": 11162,
"text": "What are the Assumptions of ANOVA."
},
{
"code": null,
"e": 11241,
"s": 11197,
"text": "What test would you use for a small sample?"
},
{
"code": null,
"e": 11270,
"s": 11241,
"text": "What is the null hypothesis?"
},
{
"code": null,
"e": 11305,
"s": 11270,
"text": "What are type 1 and type 2 errors?"
},
{
"code": null,
"e": 11766,
"s": 11305,
"text": "Use the following tables to write a query to retrieve data for customers who registered in the past ten days and spent over $100. Write another query to retrieve data for customers that spent over $100 in the past seven days. The first table is a customer purchase table with five columns: customer id, purchase date, product id, unit price, and units purchased. The second table is a customer details table with two columns: customer id and registration date."
},
{
"code": null,
"e": 11867,
"s": 11766,
"text": "What is the probability of generating ten consecutive numbers in ascending order out of 100 numbers?"
},
{
"code": null,
"e": 11906,
"s": 11867,
"text": "How would you merge two tables in SQL?"
},
{
"code": null,
"e": 12002,
"s": 11906,
"text": "Write a function to calculate the Fibonacci code in any of these languages (VBA, Python, Java)."
},
{
"code": null,
"e": 12082,
"s": 12002,
"text": "If you’d like more exclusive interview explanations, check out Interview Query!"
},
{
"code": null,
"e": 12181,
"s": 12082,
"text": "Check out my Youtube channel for more interviewing guides, and tips & tricks for solving problems."
},
{
"code": null,
"e": 12324,
"s": 12181,
"text": "Find more Amazon interview guides like the Amazon BI engineer interview and the Amazon business analyst interview on the Interview Query blog."
}
]