Optimal Binary Search Tree | DP-24
19 Aug, 2021

Given a sorted array keys[0..n-1] of search keys and an array freq[0..n-1] of frequency counts, where freq[i] is the number of searches for keys[i], construct a binary search tree of all the keys such that the total cost of all the searches is as small as possible. Let us first define the cost of a BST: the cost of a BST node is the level of that node multiplied by its frequency, where the level of the root is 1.

Examples:

Input: keys[] = {10, 12}, freq[] = {34, 50}
There are two possible BSTs:

    10           12
      \          /
       12      10

     I           II

The frequencies of searches for 10 and 12 are 34 and 50 respectively.
The cost of tree I  is 34*1 + 50*2 = 134
The cost of tree II is 50*1 + 34*2 = 118

Input: keys[] = {10, 12, 20}, freq[] = {34, 8, 50}
There are the following possible BSTs:

    10        12         20       10         20
      \      /  \       /           \        /
      12   10    20   12            20     10
        \            /             /          \
        20         10            12            12

     I       II       III        IV          V

Among all possible BSTs, the cost of the fifth BST is minimum.
Cost of the fifth BST is 1*50 + 2*34 + 3*8 = 142

1) Optimal Substructure: The optimal cost for freq[i..j] can be calculated recursively using the following formula:

    optCost(i, j) = sum(freq[i..j]) +
                    min { optCost(i, r-1) + optCost(r+1, j) : i <= r <= j }

We need to calculate optCost(0, n-1) to find the result. The idea behind the formula is simple: we try each node in turn as the root (r varies from i to j in the second term). When we make the r-th node the root, we recursively calculate the optimal cost from i to r-1 and from r+1 to j, and we add the sum of frequencies from i to j (the first term in the formula).

The reason for adding the sum of frequencies from i to j: it can be divided into two parts, freq[r] plus the sum of the frequencies of all elements from i to j except r. The term freq[r] is added because r is going to be the root, which means a level of 1, so freq[r]*1 = freq[r]. The frequencies of the remaining elements are added because, once we take r as the root, every other element ends up one level deeper than the level assumed when its cost was calculated in the subproblem.
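As a quick sanity check on the example costs above, the weighted cost of a fixed tree can be computed directly from each key's level. This is a minimal illustrative sketch; the bst_cost helper and the explicit level lists are not part of the original article:

```python
# Illustrative helper (not from the article): total search cost of a BST,
# given the level of each key (root = level 1) and its search frequency.
def bst_cost(levels, freq):
    return sum(level * f for level, f in zip(levels, freq))

# keys = [10, 12], freq = [34, 50]
print(bst_cost([1, 2], [34, 50]))      # tree I:  34*1 + 50*2 = 134
print(bst_cost([2, 1], [34, 50]))      # tree II: 50*1 + 34*2 = 118

# keys = [10, 12, 20], freq = [34, 8, 50]
# tree V: 20 at level 1, 10 at level 2, 12 at level 3
print(bst_cost([2, 3, 1], [34, 8, 50]))  # 1*50 + 2*34 + 3*8 = 142
```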
Put more clearly: to calculate optCost(i, j) we assume that r is taken as the root and compute the minimum of optCost(i, r-1) + optCost(r+1, j) over all i <= r <= j. For every subproblem we choose one node as the root, but in reality the level of the subproblem's root and all its descendants is 1 greater than the level of the parent problem's root. Therefore the frequencies of all the nodes except r must be added, which accounts for the one-level descent relative to the level assumed in the subproblem.

2) Overlapping Subproblems: Following is a recursive implementation that simply follows the recursive structure mentioned above.

C++

// A naive recursive implementation of the
// optimal binary search tree problem
#include <bits/stdc++.h>
using namespace std;

// A utility function to get the sum of
// array elements freq[i] to freq[j]
int sum(int freq[], int i, int j);

// A recursive function to calculate the
// cost of the optimal binary search tree
int optCost(int freq[], int i, int j)
{
    // Base cases
    if (j < i)   // no elements in this subarray
        return 0;
    if (j == i)  // one element in this subarray
        return freq[i];

    // Get sum of freq[i], freq[i+1], ... freq[j]
    int fsum = sum(freq, i, j);

    // Initialize minimum value
    int min = INT_MAX;

    // One by one consider all elements as root and
    // recursively find the cost of the BST; compare
    // the cost with min and update min if needed
    for (int r = i; r <= j; ++r) {
        int cost = optCost(freq, i, r - 1) +
                   optCost(freq, r + 1, j);
        if (cost < min)
            min = cost;
    }

    // Return minimum value
    return min + fsum;
}

// The main function that calculates the minimum cost
// of a binary search tree. It mainly uses optCost()
// to find the optimal cost.
int optimalSearchTree(int keys[], int freq[], int n)
{
    // Here array keys[] is assumed to be sorted in
    // increasing order. If keys[] is not sorted, then
    // add code to sort keys, and rearrange freq[]
    // accordingly.
    return optCost(freq, 0, n - 1);
}

// A utility function to get the sum of
// array elements freq[i] to freq[j]
int sum(int freq[], int i, int j)
{
    int s = 0;
    for (int k = i; k <= j; k++)
        s += freq[k];
    return s;
}

// Driver code
int main()
{
    int keys[] = {10, 12, 20};
    int freq[] = {34, 8, 50};
    int n = sizeof(keys) / sizeof(keys[0]);
    cout << "Cost of Optimal BST is "
         << optimalSearchTree(keys, freq, n);
    return 0;
}

// This code is contributed by rathbhupendra

C

// A naive recursive implementation of the optimal
// binary search tree problem
#include <stdio.h>
#include <limits.h>

// A utility function to get the sum of array
// elements freq[i] to freq[j]
int sum(int freq[], int i, int j);

// A recursive function to calculate the cost of the
// optimal binary search tree
int optCost(int freq[], int i, int j)
{
    // Base cases
    if (j < i)   // no elements in this subarray
        return 0;
    if (j == i)  // one element in this subarray
        return freq[i];

    // Get sum of freq[i], freq[i+1], ... freq[j]
    int fsum = sum(freq, i, j);

    // Initialize minimum value
    int min = INT_MAX;

    // One by one consider all elements as root and
    // recursively find the cost of the BST; compare
    // the cost with min and update min if needed
    for (int r = i; r <= j; ++r) {
        int cost = optCost(freq, i, r - 1) +
                   optCost(freq, r + 1, j);
        if (cost < min)
            min = cost;
    }

    // Return minimum value
    return min + fsum;
}

// The main function that calculates the minimum cost
// of a binary search tree. It mainly uses optCost()
// to find the optimal cost.
int optimalSearchTree(int keys[], int freq[], int n)
{
    // Here array keys[] is assumed to be sorted in
    // increasing order. If keys[] is not sorted, then
    // add code to sort keys, and rearrange freq[]
    // accordingly.
    return optCost(freq, 0, n - 1);
}

// A utility function to get the sum of array
// elements freq[i] to freq[j]
int sum(int freq[], int i, int j)
{
    int s = 0;
    for (int k = i; k <= j; k++)
        s += freq[k];
    return s;
}

// Driver program to test the above functions
int main()
{
    int keys[] = {10, 12, 20};
    int freq[] = {34, 8, 50};
    int n = sizeof(keys) / sizeof(keys[0]);
    printf("Cost of Optimal BST is %d ",
           optimalSearchTree(keys, freq, n));
    return 0;
}

Java

// A naive recursive implementation of the optimal
// binary search tree problem
public class GFG {

    // A recursive function to calculate the cost of
    // the optimal binary search tree
    static int optCost(int freq[], int i, int j)
    {
        // Base cases
        if (j < i)   // no elements in this subarray
            return 0;
        if (j == i)  // one element in this subarray
            return freq[i];

        // Get sum of freq[i], freq[i+1], ... freq[j]
        int fsum = sum(freq, i, j);

        // Initialize minimum value
        int min = Integer.MAX_VALUE;

        // One by one consider all elements as root and
        // recursively find the cost of the BST; compare
        // the cost with min and update min if needed
        for (int r = i; r <= j; ++r) {
            int cost = optCost(freq, i, r - 1) +
                       optCost(freq, r + 1, j);
            if (cost < min)
                min = cost;
        }

        // Return minimum value
        return min + fsum;
    }

    // The main function that calculates the minimum cost
    // of a binary search tree. It mainly uses optCost()
    // to find the optimal cost.
    static int optimalSearchTree(int keys[], int freq[], int n)
    {
        // Here array keys[] is assumed to be sorted in
        // increasing order. If keys[] is not sorted, then
        // add code to sort keys, and rearrange freq[]
        // accordingly.
        return optCost(freq, 0, n - 1);
    }

    // A utility function to get the sum of array
    // elements freq[i] to freq[j]
    static int sum(int freq[], int i, int j)
    {
        int s = 0;
        for (int k = i; k <= j; k++)
            s += freq[k];
        return s;
    }

    // Driver code
    public static void main(String[] args)
    {
        int keys[] = {10, 12, 20};
        int freq[] = {34, 8, 50};
        int n = keys.length;
        System.out.println("Cost of Optimal BST is " +
                           optimalSearchTree(keys, freq, n));
    }
}
// This code is contributed by Sumit Ghosh

Python3

# A naive recursive implementation of
# the optimal binary search tree problem

# A recursive function to calculate the
# cost of the optimal binary search tree
def optCost(freq, i, j):

    # Base cases
    if j < i:    # no elements in this subarray
        return 0
    if j == i:   # one element in this subarray
        return freq[i]

    # Get sum of freq[i], freq[i+1], ... freq[j]
    fsum = Sum(freq, i, j)

    # Initialize minimum value
    Min = float('inf')

    # One by one consider all elements as root and
    # recursively find the cost of the BST; compare
    # the cost with Min and update Min if needed
    for r in range(i, j + 1):
        cost = (optCost(freq, i, r - 1) +
                optCost(freq, r + 1, j))
        if cost < Min:
            Min = cost

    # Return minimum value
    return Min + fsum

# The main function that calculates the minimum
# cost of a binary search tree. It mainly uses
# optCost() to find the optimal cost.
def optimalSearchTree(keys, freq, n):

    # Here array keys[] is assumed to be
    # sorted in increasing order. If keys[]
    # is not sorted, then add code to sort
    # keys, and rearrange freq[] accordingly.
    return optCost(freq, 0, n - 1)

# A utility function to get the sum of
# array elements freq[i] to freq[j]
def Sum(freq, i, j):
    s = 0
    for k in range(i, j + 1):
        s += freq[k]
    return s

# Driver code
if __name__ == '__main__':
    keys = [10, 12, 20]
    freq = [34, 8, 50]
    n = len(keys)
    print("Cost of Optimal BST is",
          optimalSearchTree(keys, freq, n))

# This code is contributed by PranchalK

C#

// A naive recursive implementation of the optimal
// binary search tree problem
using System;

class GFG {

    // A recursive function to calculate the cost of
    // the optimal binary search tree
    static int optCost(int[] freq, int i, int j)
    {
        // Base cases
        if (j < i)   // no elements in this subarray
            return 0;
        if (j == i)  // one element in this subarray
            return freq[i];

        // Get sum of freq[i], freq[i+1], ... freq[j]
        int fsum = sum(freq, i, j);

        // Initialize minimum value
        int min = int.MaxValue;

        // One by one consider all elements as root and
        // recursively find the cost of the BST; compare
        // the cost with min and update min if needed
        for (int r = i; r <= j; ++r) {
            int cost = optCost(freq, i, r - 1) +
                       optCost(freq, r + 1, j);
            if (cost < min)
                min = cost;
        }

        // Return minimum value
        return min + fsum;
    }

    // The main function that calculates the minimum cost
    // of a binary search tree. It mainly uses optCost()
    // to find the optimal cost.
    static int optimalSearchTree(int[] keys, int[] freq, int n)
    {
        // Here array keys[] is assumed to be sorted in
        // increasing order. If keys[] is not sorted, then
        // add code to sort keys, and rearrange freq[]
        // accordingly.
        return optCost(freq, 0, n - 1);
    }

    // A utility function to get the sum of array
    // elements freq[i] to freq[j]
    static int sum(int[] freq, int i, int j)
    {
        int s = 0;
        for (int k = i; k <= j; k++)
            s += freq[k];
        return s;
    }

    // Driver code
    public static void Main()
    {
        int[] keys = {10, 12, 20};
        int[] freq = {34, 8, 50};
        int n = keys.Length;
        Console.Write("Cost of Optimal BST is " +
                      optimalSearchTree(keys, freq, n));
    }
}
// This code is contributed by Sam007

Javascript

<script>
// Javascript implementation

// A recursive function to calculate the cost of
// the optimal binary search tree
function optCost(freq, i, j)
{
    // Base cases
    if (j < i)   // no elements in this subarray
        return 0;
    if (j == i)  // one element in this subarray
        return freq[i];

    // Get sum of freq[i], freq[i+1], ... freq[j]
    var fsum = sum(freq, i, j);

    // Initialize minimum value
    var min = Number.MAX_SAFE_INTEGER;

    // One by one consider all elements as root and
    // recursively find the cost of the BST; compare
    // the cost with min and update min if needed
    for (var r = i; r <= j; ++r) {
        var cost = optCost(freq, i, r - 1) +
                   optCost(freq, r + 1, j);
        if (cost < min)
            min = cost;
    }

    // Return minimum value
    return min + fsum;
}

// The main function that calculates the minimum cost
// of a binary search tree. It mainly uses optCost()
// to find the optimal cost.
function optimalSearchTree(keys, freq, n)
{
    // Here array keys[] is assumed to be sorted in
    // increasing order. If keys[] is not sorted, then
    // add code to sort keys, and rearrange freq[]
    // accordingly.
    return optCost(freq, 0, n - 1);
}

// A utility function to get the sum of array
// elements freq[i] to freq[j]
function sum(freq, i, j)
{
    var s = 0;
    for (var k = i; k <= j; k++)
        s += freq[k];
    return s;
}

// Driver code
var keys = [10, 12, 20];
var freq = [34, 8, 50];
var n = keys.length;
document.write("Cost of Optimal BST is " +
               optimalSearchTree(keys, freq, n));

// This code is contributed by shubhamsingh10
</script>

Output:

Cost of Optimal BST is 142

The time complexity of the above naive recursive approach is exponential. Note that the function computes the same subproblems again and again; many repeated subproblems can be seen in the recursion tree for freq[1..4]. Since the same subproblems are called again, this problem has the Overlapping Subproblems property. So the optimal BST problem has both properties (see this and this) of a dynamic programming problem. Like other typical Dynamic Programming (DP) problems, recomputation of the same subproblems can be avoided by constructing a temporary array cost[][] in a bottom-up manner.

Dynamic Programming Solution

Following is an implementation of the optimal BST problem using dynamic programming. We use an auxiliary array cost[n][n] to store the solutions of subproblems; cost[0][n-1] holds the final result. The challenge in the implementation is that all diagonal values must be filled first, then the values that lie on the line just above the diagonal. In other words, we must first fill all cost[i][i] values, then all cost[i][i+1] values, then all cost[i][i+2] values, and so on. So how do we fill the 2D array in this manner? The idea is the same as in the Matrix Chain Multiplication problem: we use a variable 'L' for the chain length and increment 'L' one by one, calculating the column number 'j' from the values of 'i' and 'L'.
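Before the bottom-up table, note that the same recurrence can also be memoized top-down. The sketch below is illustrative code, not from the article: it caches optCost(i, j) results and uses a prefix-sum array so that each frequency-range sum costs O(1):

```python
from functools import lru_cache

# Top-down memoized version of the recurrence (illustrative sketch).
def optimal_bst_cost(freq):
    # prefix[k] = freq[0] + ... + freq[k-1], so any range sum is O(1)
    prefix = [0]
    for f in freq:
        prefix.append(prefix[-1] + f)

    @lru_cache(maxsize=None)
    def opt(i, j):
        if j < i:                       # empty subarray
            return 0
        fsum = prefix[j + 1] - prefix[i]
        # try every r in [i..j] as the root of keys[i..j]
        return fsum + min(opt(i, r - 1) + opt(r + 1, j)
                          for r in range(i, j + 1))

    return opt(0, len(freq) - 1)

print(optimal_bst_cost((34, 8, 50)))    # 142
```

Memoization brings the running time down from exponential to polynomial while keeping the recursive structure of the formula visible.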
C++

// Dynamic Programming code for the Optimal Binary
// Search Tree problem
#include <bits/stdc++.h>
using namespace std;

// A utility function to get the sum of array
// elements freq[i] to freq[j]
int sum(int freq[], int i, int j);

/* A Dynamic Programming based function that calculates
   the minimum cost of a binary search tree. */
int optimalSearchTree(int keys[], int freq[], int n)
{
    /* Create an auxiliary 2D matrix to store results
       of subproblems.
       cost[i][j] = optimal cost of a binary search tree
       that can be formed from keys[i] to keys[j];
       cost[0][n-1] will store the final result */
    int cost[n][n];

    // For a single key, the cost is equal to the
    // frequency of the key
    for (int i = 0; i < n; i++)
        cost[i][i] = freq[i];

    // Now consider chains of length 2, 3, ... .
    // L is the chain length.
    for (int L = 2; L <= n; L++) {
        // i is the row number in cost[][]
        for (int i = 0; i <= n - L; i++) {
            // Get column number j from row number i
            // and chain length L
            int j = i + L - 1;
            cost[i][j] = INT_MAX;

            // Try making every key in keys[i..j] the root
            for (int r = i; r <= j; r++) {
                // c = cost when keys[r] becomes the
                // root of this subtree
                int c = ((r > i) ? cost[i][r - 1] : 0) +
                        ((r < j) ? cost[r + 1][j] : 0) +
                        sum(freq, i, j);
                if (c < cost[i][j])
                    cost[i][j] = c;
            }
        }
    }
    return cost[0][n - 1];
}

// A utility function to get the sum of array
// elements freq[i] to freq[j]
int sum(int freq[], int i, int j)
{
    int s = 0;
    for (int k = i; k <= j; k++)
        s += freq[k];
    return s;
}

// Driver code
int main()
{
    int keys[] = {10, 12, 20};
    int freq[] = {34, 8, 50};
    int n = sizeof(keys) / sizeof(keys[0]);
    cout << "Cost of Optimal BST is "
         << optimalSearchTree(keys, freq, n);
    return 0;
}

// This code is contributed by rathbhupendra

C

// Dynamic Programming code for the Optimal Binary
// Search Tree problem
#include <stdio.h>
#include <limits.h>

// A utility function to get the sum of array
// elements freq[i] to freq[j]
int sum(int freq[], int i, int j);

/* A Dynamic Programming based function that calculates
   the minimum cost of a binary search tree. */
int optimalSearchTree(int keys[], int freq[], int n)
{
    /* cost[i][j] = optimal cost of a binary search tree
       that can be formed from keys[i] to keys[j];
       cost[0][n-1] will store the final result */
    int cost[n][n];

    // For a single key, the cost is equal to the
    // frequency of the key
    for (int i = 0; i < n; i++)
        cost[i][i] = freq[i];

    // Now consider chains of length 2, 3, ... .
    // L is the chain length.
    for (int L = 2; L <= n; L++) {
        // i is the row number in cost[][]
        for (int i = 0; i <= n - L; i++) {
            // Get column number j from row number i
            // and chain length L
            int j = i + L - 1;
            cost[i][j] = INT_MAX;

            // Try making every key in keys[i..j] the root
            for (int r = i; r <= j; r++) {
                // c = cost when keys[r] becomes the
                // root of this subtree
                int c = ((r > i) ? cost[i][r - 1] : 0) +
                        ((r < j) ? cost[r + 1][j] : 0) +
                        sum(freq, i, j);
                if (c < cost[i][j])
                    cost[i][j] = c;
            }
        }
    }
    return cost[0][n - 1];
}

// A utility function to get the sum of array
// elements freq[i] to freq[j]
int sum(int freq[], int i, int j)
{
    int s = 0;
    for (int k = i; k <= j; k++)
        s += freq[k];
    return s;
}

// Driver program to test the above functions
int main()
{
    int keys[] = {10, 12, 20};
    int freq[] = {34, 8, 50};
    int n = sizeof(keys) / sizeof(keys[0]);
    printf("Cost of Optimal BST is %d ",
           optimalSearchTree(keys, freq, n));
    return 0;
}

Java

// Dynamic Programming Java code for the Optimal Binary
// Search Tree problem
public class Optimal_BST2 {

    /* A Dynamic Programming based function that calculates
       the minimum cost of a binary search tree. */
    static int optimalSearchTree(int keys[], int freq[], int n)
    {
        /* cost[i][j] = optimal cost of a binary search tree
           that can be formed from keys[i] to keys[j];
           cost[0][n-1] will store the final result */
        int cost[][] = new int[n][n];

        // For a single key, the cost is equal to the
        // frequency of the key
        for (int i = 0; i < n; i++)
            cost[i][i] = freq[i];

        // Now consider chains of length 2, 3, ... .
        // L is the chain length.
        for (int L = 2; L <= n; L++) {
            // i is the row number in cost[][]
            for (int i = 0; i <= n - L; i++) {
                // Get column number j from row number i
                // and chain length L
                int j = i + L - 1;
                cost[i][j] = Integer.MAX_VALUE;

                // Try making every key in keys[i..j] the root
                for (int r = i; r <= j; r++) {
                    // c = cost when keys[r] becomes the
                    // root of this subtree
                    int c = ((r > i) ? cost[i][r - 1] : 0) +
                            ((r < j) ? cost[r + 1][j] : 0) +
                            sum(freq, i, j);
                    if (c < cost[i][j])
                        cost[i][j] = c;
                }
            }
        }
        return cost[0][n - 1];
    }

    // A utility function to get the sum of array
    // elements freq[i] to freq[j]
    static int sum(int freq[], int i, int j)
    {
        int s = 0;
        for (int k = i; k <= j; k++)
            s += freq[k];
        return s;
    }

    // Driver code
    public static void main(String[] args)
    {
        int keys[] = { 10, 12, 20 };
        int freq[] = { 34, 8, 50 };
        int n = keys.length;
        System.out.println("Cost of Optimal BST is " +
                           optimalSearchTree(keys, freq, n));
    }
}
// This code is contributed by Sumit Ghosh

Python3

# Dynamic Programming code for the Optimal Binary
# Search Tree problem

INT_MAX = 2147483647

""" A Dynamic Programming based function that calculates
the minimum cost of a binary search tree. """
def optimalSearchTree(keys, freq, n):

    # cost[i][j] = optimal cost of a binary search tree
    # that can be formed from keys[i] to keys[j];
    # cost[0][n-1] will store the final result
    cost = [[0 for x in range(n)] for y in range(n)]

    # For a single key, the cost is equal to the
    # frequency of the key
    for i in range(n):
        cost[i][i] = freq[i]

    # Now consider chains of length 2, 3, ... .
    # L is the chain length.
    for L in range(2, n + 1):

        # i is the row number in cost
        for i in range(n - L + 1):

            # Get column number j from row number i
            # and chain length L
            j = i + L - 1
            cost[i][j] = INT_MAX

            # Try making every key in keys[i..j] the root
            for r in range(i, j + 1):

                # c = cost when keys[r] becomes the
                # root of this subtree
                c = 0
                if r > i:
                    c += cost[i][r - 1]
                if r < j:
                    c += cost[r + 1][j]
                c += Sum(freq, i, j)
                if c < cost[i][j]:
                    cost[i][j] = c

    return cost[0][n - 1]

# A utility function to get the sum of
# array elements freq[i] to freq[j]
def Sum(freq, i, j):
    s = 0
    for k in range(i, j + 1):
        s += freq[k]
    return s

# Driver code
if __name__ == '__main__':
    keys = [10, 12, 20]
    freq = [34, 8, 50]
    n = len(keys)
    print("Cost of Optimal BST is",
          optimalSearchTree(keys, freq, n))

# This code is contributed by SHUBHAMSINGH10

C#

// Dynamic Programming C# code for the Optimal Binary
// Search Tree problem
using System;

class GFG {

    /* A Dynamic Programming based function that calculates
       the minimum cost of a binary search tree. */
    static int optimalSearchTree(int[] keys, int[] freq, int n)
    {
        /* cost[i, j] = optimal cost of a binary search tree
           that can be formed from keys[i] to keys[j];
           cost[0, n-1] will store the final result */
        int[,] cost = new int[n, n];

        // For a single key, the cost is equal to the
        // frequency of the key
        for (int i = 0; i < n; i++)
            cost[i, i] = freq[i];

        // Now consider chains of length 2, 3, ... .
        // L is the chain length.
        for (int L = 2; L <= n; L++) {
            // i is the row number in cost[,]
            for (int i = 0; i <= n - L; i++) {
                // Get column number j from row number i
                // and chain length L
                int j = i + L - 1;
                cost[i, j] = int.MaxValue;

                // Try making every key in keys[i..j] the root
                for (int r = i; r <= j; r++) {
                    // c = cost when keys[r] becomes the
                    // root of this subtree
                    int c = ((r > i) ? cost[i, r - 1] : 0) +
                            ((r < j) ? cost[r + 1, j] : 0) +
                            sum(freq, i, j);
                    if (c < cost[i, j])
                        cost[i, j] = c;
                }
            }
        }
        return cost[0, n - 1];
    }

    // A utility function to get the sum of array
    // elements freq[i] to freq[j]
    static int sum(int[] freq, int i, int j)
    {
        int s = 0;
        for (int k = i; k <= j; k++)
            s += freq[k];
        return s;
    }

    // Driver code
    public static void Main()
    {
        int[] keys = { 10, 12, 20 };
        int[] freq = { 34, 8, 50 };
        int n = keys.Length;
        Console.Write("Cost of Optimal BST is " +
                      optimalSearchTree(keys, freq, n));
    }
}
// This code is contributed by Sam007

Javascript

<script>
// Dynamic Programming code for the Optimal Binary
// Search Tree problem

/* A Dynamic Programming based function that calculates
   the minimum cost of a binary search tree. */
function optimalSearchTree(keys, freq, n)
{
    /* cost[i][j] = optimal cost of a binary search tree
       that can be formed from keys[i] to keys[j];
       cost[0][n-1] will store the final result */
    var cost = new Array(n);
    for (var i = 0; i < n; i++)
        cost[i] = new Array(n);

    // For a single key, the cost is equal to the
    // frequency of the key
    for (var i = 0; i < n; i++)
        cost[i][i] = freq[i];

    // Now consider chains of length 2, 3, ... .
    // L is the chain length.
    for (var L = 2; L <= n; L++) {
        // i is the row number in cost[][]
        for (var i = 0; i <= n - L; i++) {
            // Get column number j from row number i
            // and chain length L
            var j = i + L - 1;
            cost[i][j] = Number.MAX_SAFE_INTEGER;

            // Try making every key in keys[i..j] the root
            for (var r = i; r <= j; r++) {
                // c = cost when keys[r] becomes the
                // root of this subtree
                var c = 0;
                if (r > i)
                    c += cost[i][r - 1];
                if (r < j)
                    c += cost[r + 1][j];
                c += sum(freq, i, j);
                if (c < cost[i][j])
                    cost[i][j] = c;
            }
        }
    }
    return cost[0][n - 1];
}

// A utility function to get the sum of array
// elements freq[i] to freq[j]
function sum(freq, i, j)
{
    var s = 0;
    for (var k = i; k <= j; k++)
        s += freq[k];
    return s;
}

// Driver code
var keys = [10, 12, 20];
var freq = [34, 8, 50];
var n = keys.length;
document.write("Cost of Optimal BST is " +
               optimalSearchTree(keys, freq, n));

// This code is contributed by shubhamsingh10
</script>

Output:

Cost of Optimal BST is 142

Notes:
1) The time complexity of the above solution is O(n^4). It can easily be reduced to O(n^3) by pre-calculating the sums of frequencies instead of calling sum() again and again.
2) The above solutions compute only the optimal cost. They can easily be modified to store the structure of the BST as well: create another auxiliary n x n array and, in the innermost loop, record the chosen 'r' for each range.

Contributors: aradhya95, PranchalKatiyar, SHUBHAMSINGH10, rathbhupendra, anikakapoor, codstrider
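Both notes above can be combined in a single bottom-up sketch. The code below is illustrative, not from the article: it precomputes prefix sums to get the O(n^3) running time and also records the chosen root 'r' for every range, which is enough to reconstruct the optimal tree:

```python
# Illustrative O(n^3) bottom-up version with root tracking (not from the article).
def optimal_bst(keys, freq):
    n = len(keys)
    INF = float('inf')

    # prefix[k] = freq[0] + ... + freq[k-1], so any range sum is O(1)
    prefix = [0]
    for f in freq:
        prefix.append(prefix[-1] + f)

    cost = [[0] * n for _ in range(n)]
    root = [[0] * n for _ in range(n)]  # root[i][j] = index of chosen root for keys[i..j]

    for i in range(n):
        cost[i][i] = freq[i]
        root[i][i] = i

    # L is the chain length, exactly as in the table-filling order above
    for L in range(2, n + 1):
        for i in range(n - L + 1):
            j = i + L - 1
            cost[i][j] = INF
            fsum = prefix[j + 1] - prefix[i]
            for r in range(i, j + 1):
                c = ((cost[i][r - 1] if r > i else 0) +
                     (cost[r + 1][j] if r < j else 0) + fsum)
                if c < cost[i][j]:
                    cost[i][j] = c
                    root[i][j] = r

    return cost[0][n - 1], root

cost, root = optimal_bst([10, 12, 20], [34, 8, 50])
print(cost)        # 142
print(root[0][2])  # 2 -> keys[2] = 20 is the root of the optimal tree
```

From root[][], the tree itself can be rebuilt recursively: keys[root[i][j]] is the root of the range, with the left subtree built from (i, r-1) and the right subtree from (r+1, j).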
return optCost(freq, 0, n - 1);} // A utility function to get sum of// array elements freq[i] to freq[j]function sum(freq, i, j){ var s = 0; for (var k = i; k <= j; k++) s += freq[k]; return s;} // Driver Code var keys = [10, 12, 20];var freq = [34, 8, 50];var n = keys.length;document.write(\"Cost of Optimal BST is \" + optimalSearchTree(keys, freq, n)); // This code is contributed by shubhamsingh10</script>", "e": 13951, "s": 12323, "text": null }, { "code": null, "e": 13960, "s": 13951, "text": "Output: " }, { "code": null, "e": 13987, "s": 13960, "text": "Cost of Optimal BST is 142" }, { "code": null, "e": 14240, "s": 13987, "text": "Time complexity of the above naive recursive approach is exponential. It should be noted that the above function computes the same subproblems again and again. We can see many subproblems being repeated in the following recursion tree for freq[1..4]. " }, { "code": null, "e": 15349, "s": 14240, "text": "Since same subproblems are called again, this problem has Overlapping Subproblems property. So optimal BST problem has both properties (see this and this) of a dynamic programming problem. Like other typical Dynamic Programming(DP) problems, recomputations of same subproblems can be avoided by constructing a temporary array cost[][] in bottom up manner.Dynamic Programming Solution Following is C/C++ implementation for optimal BST problem using Dynamic Programming. We use an auxiliary array cost[n][n] to store the solutions of subproblems. cost[0][n-1] will hold the final result. The challenge in implementation is, all diagonal values must be filled first, then the values which lie on the line just above the diagonal. In other words, we must first fill all cost[i][i] values, then all cost[i][i+1] values, then all cost[i][i+2] values. So how to fill the 2D array in such manner> The idea used in the implementation is same as Matrix Chain Multiplication problem, we use a variable β€˜L’ for chain length and increment β€˜L’, one by one. 
We calculate column number β€˜j’ using the values of β€˜i’ and β€˜L’. " }, { "code": null, "e": 15353, "s": 15349, "text": "C++" }, { "code": null, "e": 15355, "s": 15353, "text": "C" }, { "code": null, "e": 15360, "s": 15355, "text": "Java" }, { "code": null, "e": 15368, "s": 15360, "text": "Python3" }, { "code": null, "e": 15371, "s": 15368, "text": "C#" }, { "code": null, "e": 15382, "s": 15371, "text": "Javascript" }, { "code": "// Dynamic Programming code for Optimal Binary Search// Tree Problem#include <bits/stdc++.h>using namespace std; // A utility function to get sum of array elements// freq[i] to freq[j]int sum(int freq[], int i, int j); /* A Dynamic Programming based function that calculatesminimum cost of a Binary Search Tree. */int optimalSearchTree(int keys[], int freq[], int n){ /* Create an auxiliary 2D matrix to store results of subproblems */ int cost[n][n]; /* cost[i][j] = Optimal cost of binary search tree that can be formed from keys[i] to keys[j]. cost[0][n-1] will store the resultant cost */ // For a single key, cost is equal to frequency of the key for (int i = 0; i < n; i++) cost[i][i] = freq[i]; // Now we need to consider chains of length 2, 3, ... . // L is chain length. for (int L = 2; L <= n; L++) { // i is row number in cost[][] for (int i = 0; i <= n-L+1; i++) { // Get column number j from row number i and // chain length L int j = i+L-1; cost[i][j] = INT_MAX; // Try making all keys in interval keys[i..j] as root for (int r = i; r <= j; r++) { // c = cost when keys[r] becomes root of this subtree int c = ((r > i)? cost[i][r-1]:0) + ((r < j)? 
cost[r+1][j]:0) + sum(freq, i, j); if (c < cost[i][j]) cost[i][j] = c; } } } return cost[0][n-1];} // A utility function to get sum of array elements// freq[i] to freq[j]int sum(int freq[], int i, int j){ int s = 0; for (int k = i; k <= j; k++) s += freq[k]; return s;} // Driver codeint main(){ int keys[] = {10, 12, 20}; int freq[] = {34, 8, 50}; int n = sizeof(keys)/sizeof(keys[0]); cout << \"Cost of Optimal BST is \" << optimalSearchTree(keys, freq, n); return 0;} // This code is contributed by rathbhupendra", "e": 17340, "s": 15382, "text": null }, { "code": "// Dynamic Programming code for Optimal Binary Search// Tree Problem#include <stdio.h>#include <limits.h> // A utility function to get sum of array elements// freq[i] to freq[j]int sum(int freq[], int i, int j); /* A Dynamic Programming based function that calculates minimum cost of a Binary Search Tree. */int optimalSearchTree(int keys[], int freq[], int n){ /* Create an auxiliary 2D matrix to store results of subproblems */ int cost[n][n]; /* cost[i][j] = Optimal cost of binary search tree that can be formed from keys[i] to keys[j]. cost[0][n-1] will store the resultant cost */ // For a single key, cost is equal to frequency of the key for (int i = 0; i < n; i++) cost[i][i] = freq[i]; // Now we need to consider chains of length 2, 3, ... . // L is chain length. for (int L=2; L<=n; L++) { // i is row number in cost[][] for (int i=0; i<=n-L+1; i++) { // Get column number j from row number i and // chain length L int j = i+L-1; cost[i][j] = INT_MAX; // Try making all keys in interval keys[i..j] as root for (int r=i; r<=j; r++) { // c = cost when keys[r] becomes root of this subtree int c = ((r > i)? cost[i][r-1]:0) + ((r < j)? 
cost[r+1][j]:0) + sum(freq, i, j); if (c < cost[i][j]) cost[i][j] = c; } } } return cost[0][n-1];} // A utility function to get sum of array elements// freq[i] to freq[j]int sum(int freq[], int i, int j){ int s = 0; for (int k = i; k <=j; k++) s += freq[k]; return s;} // Driver program to test above functionsint main(){ int keys[] = {10, 12, 20}; int freq[] = {34, 8, 50}; int n = sizeof(keys)/sizeof(keys[0]); printf(\"Cost of Optimal BST is %d \", optimalSearchTree(keys, freq, n)); return 0;}", "e": 19308, "s": 17340, "text": null }, { "code": "// Dynamic Programming Java code for Optimal Binary Search// Tree Problempublic class Optimal_BST2 { /* A Dynamic Programming based function that calculates minimum cost of a Binary Search Tree. */ static int optimalSearchTree(int keys[], int freq[], int n) { /* Create an auxiliary 2D matrix to store results of subproblems */ int cost[][] = new int[n + 1][n + 1]; /* cost[i][j] = Optimal cost of binary search tree that can be formed from keys[i] to keys[j]. cost[0][n-1] will store the resultant cost */ // For a single key, cost is equal to frequency of the key for (int i = 0; i < n; i++) cost[i][i] = freq[i]; // Now we need to consider chains of length 2, 3, ... . // L is chain length. for (int L = 2; L <= n; L++) { // i is row number in cost[][] for (int i = 0; i <= n - L + 1; i++) { // Get column number j from row number i and // chain length L int j = i + L - 1; cost[i][j] = Integer.MAX_VALUE; // Try making all keys in interval keys[i..j] as root for (int r = i; r <= j; r++) { // c = cost when keys[r] becomes root of this subtree int c = ((r > i) ? cost[i][r - 1] : 0) + ((r < j) ? 
cost[r + 1][j] : 0) + sum(freq, i, j); if (c < cost[i][j]) cost[i][j] = c; } } } return cost[0][n - 1]; } // A utility function to get sum of array elements // freq[i] to freq[j] static int sum(int freq[], int i, int j) { int s = 0; for (int k = i; k <= j; k++) { if (k >= freq.length) continue; s += freq[k]; } return s; } public static void main(String[] args) { int keys[] = { 10, 12, 20 }; int freq[] = { 34, 8, 50 }; int n = keys.length; System.out.println(\"Cost of Optimal BST is \" + optimalSearchTree(keys, freq, n)); } }//This code is contributed by Sumit Ghosh", "e": 21501, "s": 19308, "text": null }, { "code": "# Dynamic Programming code for Optimal Binary Search# Tree Problem INT_MAX = 2147483647 \"\"\" A Dynamic Programming based function thatcalculates minimum cost of a Binary Search Tree. \"\"\"def optimalSearchTree(keys, freq, n): \"\"\" Create an auxiliary 2D matrix to store results of subproblems \"\"\" cost = [[0 for x in range(n)] for y in range(n)] \"\"\" cost[i][j] = Optimal cost of binary search tree that can be formed from keys[i] to keys[j]. cost[0][n-1] will store the resultant cost \"\"\" # For a single key, cost is equal to # frequency of the key for i in range(n): cost[i][i] = freq[i] # Now we need to consider chains of # length 2, 3, ... . L is chain length. 
for L in range(2, n + 1): # i is row number in cost for i in range(n - L + 2): # Get column number j from row number # i and chain length L j = i + L - 1 if i >= n or j >= n: break cost[i][j] = INT_MAX # Try making all keys in interval # keys[i..j] as root for r in range(i, j + 1): # c = cost when keys[r] becomes root # of this subtree c = 0 if (r > i): c += cost[i][r - 1] if (r < j): c += cost[r + 1][j] c += sum(freq, i, j) if (c < cost[i][j]): cost[i][j] = c return cost[0][n - 1] # A utility function to get sum of# array elements freq[i] to freq[j]def sum(freq, i, j): s = 0 for k in range(i, j + 1): s += freq[k] return s # Driver Codeif __name__ == '__main__': keys = [10, 12, 20] freq = [34, 8, 50] n = len(keys) print(\"Cost of Optimal BST is\", optimalSearchTree(keys, freq, n)) # This code is contributed by SHUBHAMSINGH10", "e": 23440, "s": 21501, "text": null }, { "code": "// Dynamic Programming C# code for Optimal Binary Search// Tree Problemusing System; class GFG{ /* A Dynamic Programming based function that calculates minimum cost of a Binary Search Tree. */ static int optimalSearchTree(int []keys, int []freq, int n) { /* Create an auxiliary 2D matrix to store results of subproblems */ int [,]cost = new int[n + 1,n + 1]; /* cost[i][j] = Optimal cost of binary search tree that can be formed from keys[i] to keys[j]. cost[0][n-1] will store the resultant cost */ // For a single key, cost is equal to frequency of the key for (int i = 0; i < n; i++) cost[i,i] = freq[i]; // Now we need to consider chains of length 2, 3, ... . // L is chain length. for (int L = 2; L <= n; L++) { // i is row number in cost[][] for (int i = 0; i <= n - L + 1; i++) { // Get column number j from row number i and // chain length L int j = i + L - 1; cost[i,j] = int.MaxValue; // Try making all keys in interval keys[i..j] as root for (int r = i; r <= j; r++) { // c = cost when keys[r] becomes root of this subtree int c = ((r > i) ? cost[i,r - 1] : 0) + ((r < j) ? 
cost[r + 1,j] : 0) + sum(freq, i, j); if (c < cost[i,j]) cost[i,j] = c; } } } return cost[0,n - 1]; } // A utility function to get sum of array elements // freq[i] to freq[j] static int sum(int []freq, int i, int j) { int s = 0; for (int k = i; k <= j; k++) { if (k >= freq.Length) continue; s += freq[k]; } return s; } public static void Main() { int []keys = { 10, 12, 20 }; int []freq = { 34, 8, 50 }; int n = keys.Length; Console.Write(\"Cost of Optimal BST is \" + optimalSearchTree(keys, freq, n)); }}// This code is contributed by Sam007", "e": 25572, "s": 23440, "text": null }, { "code": "<script>// Dynamic Programming code for Optimal Binary Search// Tree Problem /* A Dynamic Programming based function that calculatesminimum cost of a Binary Search Tree. */function optimalSearchTree(keys, freq, n){ /* Create an auxiliary 2D matrix to store results of subproblems */ var cost = new Array(n); for (var i = 0; i < n; i++) cost[i] = new Array(n); /* cost[i][j] = Optimal cost of binary search tree that can be formed from keys[i] to keys[j]. cost[0][n-1] will store the resultant cost */ // For a single key, cost is equal to frequency of the key for (var i = 0; i < n; i++) cost[i][i] = freq[i]; // Now we need to consider chains of length 2, 3, ... . // L is chain length. for (var L = 2; L <= n; L++) { // i is row number in cost[][] for (var i = 0; i <= n-L+1; i++) { // Get column number j from row number i and // chain length L var j = i+L-1; if ( i >= n || j >= n) break cost[i][j] = Number. 
MAX_SAFE_INTEGER; // Try making all keys in interval keys[i..j] as root for (var r = i; r <= j; r++) { // c = cost when keys[r] becomes root of this subtree var c = 0; if (r > i) c += cost[i][r-1] if (r < j) c += cost[r+1][j] c += sum(freq, i, j); if (c < cost[i][j]) cost[i][j] = c; } } } return cost[0][n-1];} // A utility function to get sum of array elements// freq[i] to freq[j]function sum(freq, i, j){ var s = 0; for (var k = i; k <= j; k++) s += freq[k]; return s;}var keys = [10, 12, 20];var freq = [34, 8, 50];var n = keys.length;document.write(\"Cost of Optimal BST is \" + optimalSearchTree(keys, freq, n)); // This code contributed by shubhamsingh10</script>", "e": 27496, "s": 25572, "text": null }, { "code": null, "e": 27506, "s": 27496, "text": "Output: " }, { "code": null, "e": 27533, "s": 27506, "text": "Cost of Optimal BST is 142" }, { "code": null, "e": 28134, "s": 27533, "text": "Notes 1) The time complexity of the above solution is O(n^4). The time complexity can be easily reduced to O(n^3) by pre-calculating sum of frequencies instead of calling sum() again and again.2) In the above solutions, we have computed optimal cost only. The solutions can be easily modified to store the structure of BSTs also. We can create another auxiliary array of size n to store the structure of tree. All we need to do is, store the chosen β€˜r’ in the innermost loop.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. 
" }, { "code": null, "e": 28144, "s": 28134, "text": "aradhya95" }, { "code": null, "e": 28160, "s": 28144, "text": "PranchalKatiyar" }, { "code": null, "e": 28175, "s": 28160, "text": "SHUBHAMSINGH10" }, { "code": null, "e": 28189, "s": 28175, "text": "rathbhupendra" }, { "code": null, "e": 28201, "s": 28189, "text": "anikakapoor" }, { "code": null, "e": 28212, "s": 28201, "text": "codstrider" }, { "code": null, "e": 28231, "s": 28212, "text": "Binary Search Tree" }, { "code": null, "e": 28251, "s": 28231, "text": "Dynamic Programming" }, { "code": null, "e": 28271, "s": 28251, "text": "Dynamic Programming" }, { "code": null, "e": 28290, "s": 28271, "text": "Binary Search Tree" }, { "code": null, "e": 28388, "s": 28290, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28420, "s": 28388, "text": "set vs unordered_set in C++ STL" }, { "code": null, "e": 28466, "s": 28420, "text": "Flatten BST to sorted list | Increasing order" }, { "code": null, "e": 28513, "s": 28466, "text": "Find median of BST in O(n) time and O(1) space" }, { "code": null, "e": 28555, "s": 28513, "text": "Count BST nodes that lie in a given range" }, { "code": null, "e": 28611, "s": 28555, "text": "Given n appointments, find all conflicting appointments" }, { "code": null, "e": 28643, "s": 28611, "text": "Largest Sum Contiguous Subarray" }, { "code": null, "e": 28673, "s": 28643, "text": "Program for Fibonacci numbers" }, { "code": null, "e": 28702, "s": 28673, "text": "0-1 Knapsack Problem | DP-10" }, { "code": null, "e": 28736, "s": 28702, "text": "Longest Common Subsequence | DP-4" } ]
Node.js vs Vue.js
21 Oct, 2020 Node.js: It is a JavaScript runtime environment, which is built on Chrome’s V8 JavaScript engine. It is developed by Ryan Dahl who is a Software Engineer working at Google Brain, he also developed Deno JavaScript and TypeScript runtime. Node.js is cross-platform and open-source which executes JavaScript code on the server-side, i.e. outside the web browser. Due to its single-threaded nature, it is mainly used for event-driven, non-blocking servers, a non-blocking I/O model makes it lightweight and efficient, hence it is best for data-intensive real-time applications. It is used by traditional web-sites and back-end API services. It is designed with a real-time, push-based architecture that runs across distributed devices. The HTTP (Hypertext Transfer Protocol) module provides a set of classes and functions for building an HTTP server. We use native Node like file-system, path, and URL for this basic HTTP server. Vue.js: It is an open-source progressive JavaScript framework that is mainly used for building UIs and single-page applications. It is created by Evan who was funded by the community on Patreon to develop VueJS. It is compatible with most modern technologies and because of the gentle learning curve and scalability, it gained a lot of popularity. VueJS follows the Model-View-ViewModel (MVVM) architectural pattern, where ViewModel has a β€˜Vue’ instance and View and Model are bound by two-way data binding. It utilizes a virtual DOM and in terms of API and design Vue is easy to learn as compared with AngularJS. As the concerns of routing and state were handled in ReactJS, in the same way Vue handles it by associate libraries. Difference between Node.js and Vue.js: if(gfg) { console.log("Geeks for Geeks"); } <h1 v-if="gfg">Geeks for Geeks</h1> bunnyram19 Node.js-Misc Vue.JS Difference Between JavaScript Node.js Web Technologies Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. 
Difference between var, let and const keywords in JavaScript Difference Between Method Overloading and Method Overriding in Java Differences between JDK, JRE and JVM Difference between Process and Thread Difference between Clustered and Non-clustered index Difference between var, let and const keywords in JavaScript Differences between Functional Components and Class Components in React Remove elements from a JavaScript Array Hide or show elements in HTML using display property How to append HTML code to a div using JavaScript ?
[ { "code": null, "e": 28, "s": 0, "text": "\n21 Oct, 2020" }, { "code": null, "e": 954, "s": 28, "text": "Node.js: It is a JavaScript runtime environment, which is built on Chrome’s V8 JavaScript engine. It is developed by Ryan Dahl who is a Software Engineer working at Google Brain, he also developed Deno JavaScript and TypeScript runtime. Node.js is cross-platform and open-source which executes JavaScript code on the server-side, i.e. outside the web browser. Due to its single-threaded nature, it is mainly used for event-driven, non-blocking servers, a non-blocking I/O model makes it lightweight and efficient, hence it is best for data-intensive real-time applications. It is used by traditional web-sites and back-end API services. It is designed with a real-time, push-based architecture that runs across distributed devices. The HTTP (Hypertext Transfer Protocol) module provides a set of classes and functions for building an HTTP server. We use native Node like file-system, path, and URL for this basic HTTP server." }, { "code": null, "e": 1685, "s": 954, "text": "Vue.js: It is an open-source progressive JavaScript framework that is mainly used for building UIs and single-page applications. It is created by Evan who was funded by the community on Patreon to develop VueJS. It is compatible with most modern technologies and because of the gentle learning curve and scalability, it gained a lot of popularity. VueJS follows the Model-View-ViewModel (MVVM) architectural pattern, where ViewModel has a β€˜Vue’ instance and View and Model are bound by two-way data binding. It utilizes a virtual DOM and in terms of API and design Vue is easy to learn as compared with AngularJS. As the concerns of routing and state were handled in ReactJS, in the same way Vue handles it by associate libraries." 
}, { "code": null, "e": 1724, "s": 1685, "text": "Difference between Node.js and Vue.js:" }, { "code": null, "e": 1769, "s": 1724, "text": "if(gfg) {\nconsole.log(\"Geeks for Geeks\"); }\n" }, { "code": null, "e": 1806, "s": 1769, "text": "<h1 v-if=\"gfg\">Geeks for Geeks</h1>\n" }, { "code": null, "e": 1817, "s": 1806, "text": "bunnyram19" }, { "code": null, "e": 1830, "s": 1817, "text": "Node.js-Misc" }, { "code": null, "e": 1837, "s": 1830, "text": "Vue.JS" }, { "code": null, "e": 1856, "s": 1837, "text": "Difference Between" }, { "code": null, "e": 1867, "s": 1856, "text": "JavaScript" }, { "code": null, "e": 1875, "s": 1867, "text": "Node.js" }, { "code": null, "e": 1892, "s": 1875, "text": "Web Technologies" }, { "code": null, "e": 1990, "s": 1892, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2051, "s": 1990, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 2119, "s": 2051, "text": "Difference Between Method Overloading and Method Overriding in Java" }, { "code": null, "e": 2156, "s": 2119, "text": "Differences between JDK, JRE and JVM" }, { "code": null, "e": 2194, "s": 2156, "text": "Difference between Process and Thread" }, { "code": null, "e": 2247, "s": 2194, "text": "Difference between Clustered and Non-clustered index" }, { "code": null, "e": 2308, "s": 2247, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 2380, "s": 2308, "text": "Differences between Functional Components and Class Components in React" }, { "code": null, "e": 2420, "s": 2380, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 2473, "s": 2420, "text": "Hide or show elements in HTML using display property" } ]
What is buffer in Node.js ?
16 Aug, 2021 Pure JavaScript is great with Unicode encoded strings, but it does not handle binary data very well. It is not problematic when we perform an operation on data at browser level but at the time of dealing with TCP stream and performing a read-write operation on the file system is required to deal with pure binary data. To satisfy this need Node.js use Buffer, So in this article, we are going to know about buffer in Node.js. Buffers in Node.js: The Buffer class in Node.js is used to perform operations on raw binary data. Generally, Buffer refers to the particular memory location in memory. Buffer and array have some similarities, but the difference is array can be any type, and it can be resizable. Buffers only deal with binary data, and it can not be resizable. Each integer in a buffer represents a byte. console.log() function is used to print the Buffer instance. Methods to perform the operations on Buffer: Filename: index.js Javascript // Different Method to create Buffervar buffer1 = Buffer.alloc(100);var buffer2 = new Buffer('GFG');var buffer3 = Buffer.from([1, 2, 3, 4]); // Writing data to Bufferbuffer1.write("Happy Learning"); // Reading data from Buffervar a = buffer1.toString('utf-8');console.log(a); // Check object is buffer or notconsole.log(Buffer.isBuffer(buffer1)); // Check length of Bufferconsole.log(buffer1.length); // Copy buffervar bufferSrc = new Buffer('ABC');var bufferDest = Buffer.alloc(3);bufferSrc.copy(bufferDest); var Data = bufferDest.toString('utf-8');console.log(Data); // Slicing datavar bufferOld = new Buffer('GeeksForGeeks');var bufferNew = bufferOld.slice(0, 4);console.log(bufferNew.toString()); // concatenate two buffervar bufferOne = new Buffer('Happy Learning ');var bufferTwo = new Buffer('With GFG');var bufferThree = Buffer.concat([bufferOne, bufferTwo]);console.log(bufferThree.toString()); Run the index.js file using the following command: node index.js Output: Happy Learning true 100 ABC Geek Happy Learning With GFG 
Reference: https://nodejs.org/api/buffer.html simmytarika5 akshaysingh98088 Node.js-Methods NodeJS-Questions Picked Node.js Web Technologies Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. JWT Authentication with Node.js Installation of Node.js on Windows Difference between dependencies, devDependencies and peerDependencies Mongoose Populate() Method Mongoose find() Function Top 10 Projects For Beginners To Practice HTML and CSS Skills Difference between var, let and const keywords in JavaScript How to insert spaces/tabs in text using HTML/CSS? How to fetch data from an API in ReactJS ? Differences between Functional Components and Class Components in React
[ { "code": null, "e": 28, "s": 0, "text": "\n16 Aug, 2021" }, { "code": null, "e": 455, "s": 28, "text": "Pure JavaScript is great with Unicode encoded strings, but it does not handle binary data very well. It is not problematic when we perform an operation on data at browser level but at the time of dealing with TCP stream and performing a read-write operation on the file system is required to deal with pure binary data. To satisfy this need Node.js use Buffer, So in this article, we are going to know about buffer in Node.js." }, { "code": null, "e": 904, "s": 455, "text": "Buffers in Node.js: The Buffer class in Node.js is used to perform operations on raw binary data. Generally, Buffer refers to the particular memory location in memory. Buffer and array have some similarities, but the difference is array can be any type, and it can be resizable. Buffers only deal with binary data, and it can not be resizable. Each integer in a buffer represents a byte. console.log() function is used to print the Buffer instance." 
}, { "code": null, "e": 949, "s": 904, "text": "Methods to perform the operations on Buffer:" }, { "code": null, "e": 968, "s": 949, "text": "Filename: index.js" }, { "code": null, "e": 979, "s": 968, "text": "Javascript" }, { "code": "// Different Method to create Buffervar buffer1 = Buffer.alloc(100);var buffer2 = new Buffer('GFG');var buffer3 = Buffer.from([1, 2, 3, 4]); // Writing data to Bufferbuffer1.write(\"Happy Learning\"); // Reading data from Buffervar a = buffer1.toString('utf-8');console.log(a); // Check object is buffer or notconsole.log(Buffer.isBuffer(buffer1)); // Check length of Bufferconsole.log(buffer1.length); // Copy buffervar bufferSrc = new Buffer('ABC');var bufferDest = Buffer.alloc(3);bufferSrc.copy(bufferDest); var Data = bufferDest.toString('utf-8');console.log(Data); // Slicing datavar bufferOld = new Buffer('GeeksForGeeks');var bufferNew = bufferOld.slice(0, 4);console.log(bufferNew.toString()); // concatenate two buffervar bufferOne = new Buffer('Happy Learning ');var bufferTwo = new Buffer('With GFG');var bufferThree = Buffer.concat([bufferOne, bufferTwo]);console.log(bufferThree.toString());", "e": 1883, "s": 979, "text": null }, { "code": null, "e": 1934, "s": 1883, "text": "Run the index.js file using the following command:" }, { "code": null, "e": 1948, "s": 1934, "text": "node index.js" }, { "code": null, "e": 1956, "s": 1948, "text": "Output:" }, { "code": null, "e": 2013, "s": 1956, "text": "Happy Learning\ntrue\n100\nABC\nGeek\nHappy Learning With GFG" }, { "code": null, "e": 2059, "s": 2013, "text": "Reference: https://nodejs.org/api/buffer.html" }, { "code": null, "e": 2072, "s": 2059, "text": "simmytarika5" }, { "code": null, "e": 2089, "s": 2072, "text": "akshaysingh98088" }, { "code": null, "e": 2105, "s": 2089, "text": "Node.js-Methods" }, { "code": null, "e": 2122, "s": 2105, "text": "NodeJS-Questions" }, { "code": null, "e": 2129, "s": 2122, "text": "Picked" }, { "code": null, "e": 2137, "s": 2129, "text": "Node.js" 
}, { "code": null, "e": 2154, "s": 2137, "text": "Web Technologies" }, { "code": null, "e": 2252, "s": 2154, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2284, "s": 2252, "text": "JWT Authentication with Node.js" }, { "code": null, "e": 2319, "s": 2284, "text": "Installation of Node.js on Windows" }, { "code": null, "e": 2389, "s": 2319, "text": "Difference between dependencies, devDependencies and peerDependencies" }, { "code": null, "e": 2416, "s": 2389, "text": "Mongoose Populate() Method" }, { "code": null, "e": 2441, "s": 2416, "text": "Mongoose find() Function" }, { "code": null, "e": 2503, "s": 2441, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 2564, "s": 2503, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 2614, "s": 2564, "text": "How to insert spaces/tabs in text using HTML/CSS?" }, { "code": null, "e": 2657, "s": 2614, "text": "How to fetch data from an API in ReactJS ?" } ]
Matplotlib.axes.Axes.twinx() in Python
21 Apr, 2020 Matplotlib is a library in Python and it is numerical – mathematical extension for NumPy library. The Axes Class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. And the instances of Axes supports callbacks through a callbacks attribute. The Axes.twinx() function in axes module of matplotlib library is used to create a twin Axes sharing the xaxis. Syntax: Axes.twinx(self) Parameters: This method does not accepts any parameters. Return value: This method is used to returns the following. ax_twin : This returns the newly created Axes instance. Below examples illustrate the matplotlib.axes.Axes.twinx() function in matplotlib.axes: Example 1: # Implementation of matplotlib functionimport matplotlib.pyplot as pltimport numpy as np def GFG1(temp): return (5. / 9.) * (temp - 32) def GFG2(ax1): y1, y2 = ax1.get_ylim() ax_twin .set_ylim(GFG1(y1), GFG1(y2)) ax_twin .figure.canvas.draw() fig, ax1 = plt.subplots()ax_twin = ax1.twinx() ax1.callbacks.connect("ylim_changed", GFG2)ax1.plot(np.linspace(-40, 120, 100))ax1.set_xlim(0, 100) ax1.set_ylabel('Fahrenheit')ax_twin .set_ylabel('Celsius') fig.suptitle('matplotlib.axes.Axes.twinx()\ function Example\n\n', fontweight ="bold") plt.show() Output: Example 2: # Implementation of matplotlib functionimport numpy as npimport matplotlib.pyplot as plt # Create some mock datat = np.arange(0.01, 10.0, 0.001)data1 = np.exp(t)data2 = np.sin(0.4 * np.pi * t) fig, ax1 = plt.subplots() color = 'tab:blue'ax1.set_xlabel('time (s)')ax1.set_ylabel('exp', color = color)ax1.plot(t, data1, color = color)ax1.tick_params(axis ='y', labelcolor = color) ax2 = ax1.twinx() color = 'tab:green'ax2.set_ylabel('sin', color = color)ax2.plot(t, data2, color = color)ax2.tick_params(axis ='y', labelcolor = color) fig.suptitle('matplotlib.axes.Axes.twinx() \function Example\n\n', fontweight ="bold") plt.show() Output: Python-matplotlib Python Writing code in comment? 
Shell Script to List Files that have Read, Write and Execute Permissions
20 Apr, 2021

In this article, we will learn how to list all files in the current directory that have Read, Write and Execute permissions.

Suppose we have the following files in our current directory:

Here, we have a total of 8 files in our current directory. Out of 8, we have Read, Write and Execute permission on 6 files, and 2 have only Read and Write permissions. Let's write the script to list the files that have Read, Write and Execute permissions.

Approach:

We have to check every file in the current directory and display the name of each one that has Read, Write and Execute permission.

To traverse through all files, we will use a for loop:

for file in *

Here, we are using *, which represents all files in the current working directory, and we store the current file name in the file variable.

Now we will check whether the chosen item is actually a file or not using an if statement:
- If it is a file, then we will check whether it has Read, Write and Execute permission.
- We will use an if statement to check all permissions.
- If the file has all permissions, then we will print the file name to the console.
- Close the if statement.
- If it is not a file, then we will close the if statement and move to the next file.

Before moving forward, we will see what these operators do:

-f $file -> returns true if the file exists.
-r $file -> returns true if the file has Read permission.
-w $file -> returns true if the file has Write permission.
-x $file -> returns true if the file has Execute permission.
-a -> used for checking multiple conditions, same as the && operator.
Below is the implementation:

# Shell script to display the list of file names
# having Read, Write and Execute permission

echo "The name of all files having all permissions :"

# loop through all files in the current directory
for file in *
do

# check if it is a file
if [ -f $file ]
then

# check if it has all permissions
if [ -r $file -a -w $file -a -x $file ]
then

# print the complete file name with the -l option
ls -l $file

# closing second if statement
fi

# closing first if statement
fi

done

Now our code-writing work is done, but we still can't run our program, because when we create a file in Linux it has only two permissions, i.e. Read and Write, for the user who created it. To execute our file, we must give the Execute permission to the file.

Assigning Execute permission to main.sh:

$ chmod 777 main.sh

Use the following command to run the script:

$ bash main.sh
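The same check can be written in Python for comparison. This is our own sketch, not part of the article: os.path.isfile mirrors the -f test, and os.access with R_OK | W_OK | X_OK mirrors the -r -a -w -a -x condition; the function name files_with_rwx is illustrative.

```python
# Python sketch of the shell script above: list regular files in a
# directory that have Read, Write and Execute permission for us.
import os

def files_with_rwx(directory="."):
    result = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path):                              # like [ -f $file ]
            if os.access(path, os.R_OK | os.W_OK | os.X_OK):  # like -r -a -w -a -x
                result.append(name)
    return result

if __name__ == "__main__":
    for name in files_with_rwx():
        print(name)
```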
Java Program to add leading zeros to a number
To add leading zeros to a number, you need to format the output. Let's say we need to add 4 leading zeros to the following number with 3 digits.

int val = 290;

For adding 4 leading zeros above, we will use %07d, i.e. 4 + 3 = 7. Here, 3 is the number of digits in the value shown above.

String.format("%07d", val);

The following is the final example.

import java.util.Formatter;

public class Demo {
   public static void main(String args[]) {
      int val = 290;
      System.out.println("Integer: " + val);
      String formattedStr = String.format("%07d", val);
      System.out.println("With leading zeros = " + formattedStr);
   }
}

Output:

Integer: 290
With leading zeros = 0000290
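The same zero-padding exists in other languages as well. As a hedged aside (our own example, not part of the article), Python's format mini-language uses the analogous 07d width specifier, and str.zfill does the same for strings:

```python
# Zero-padding a 3-digit number to width 7, mirroring Java's "%07d".
val = 290
print("%07d" % val)        # 0000290  (printf-style formatting)
print(format(val, "07d"))  # 0000290  (format mini-language)
print(str(val).zfill(7))   # 0000290  (string method)
```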
Sort 3 numbers
23 Jun, 2022

Given three numbers, how to sort them?

Examples:

Input : arr[] = {3, 2, 1}
Output : arr[] = {1, 2, 3}

Input : arr[] = {6, 5, 0}
Output : arr[] = {0, 5, 6}

One simple solution is to use the library sort function.

C++:

// C++ program to sort an array of size 3
#include <algorithm>
#include <iostream>
using namespace std;

int main()
{
    int a[] = {10, 12, 5};
    sort(a, a + 3);
    for (int i = 0; i < 3; i++)
        cout << a[i] << " ";
    return 0;
}

Java:

// Java program to sort an array of size 3
import java.io.*;
import java.util.*;

class GFG
{
    public static void main(String[] args)
    {
        int a[] = {10, 12, 5};
        Arrays.sort(a);
        for (int i = 0; i < 3; i++)
            System.out.print(a[i] + " ");
    }
}
// This code is contributed by inder_verma.

Python3:

# Python3 program to sort an array of size 3
a = [10, 12, 5]
a.sort()
for i in range(len(a)):
    print(a[i], end=' ')
# This code is contributed by Samyukta S Hegde

C#:

// C# program to sort an array of size 3
using System;
class GFG
{
    public static void Main()
    {
        int[] a = {10, 12, 5};
        Array.Sort(a);
        for (int i = 0; i < 3; i++)
            Console.Write(a[i] + " ");
    }
}
// This code is contributed by chandan_jnu.

PHP:

<?php
// PHP program to sort an array of size 3
$a = array(10, 12, 5);
sort($a);
for ($i = 0; $i < 3; $i++)
    echo $a[$i], " ";
// This code is contributed by chandan_jnu.
?>

Javascript:

<script>
// Javascript program to sort an array of size 3
let arr = [10, 12, 5];
arr.sort((a, b) => a - b);
for (let i = 0; i < 3; i++)
    document.write(arr[i] + " ");
// This code is contributed by Jana_Sayantan.
</script>

Output:

5 10 12

Time Complexity: O(1)
Auxiliary Space: O(1)

How to write our own sort function that does the minimum number of comparisons and does not use extra variables? The idea is to use insertion sort, as insertion sort works best for small arrays.
C++:

// C++ program to sort an array of size 3
#include <algorithm>
#include <iostream>
using namespace std;

void sort3(int arr[])
{
    // Insert arr[1]
    if (arr[1] < arr[0])
        swap(arr[0], arr[1]);

    // Insert arr[2]
    if (arr[2] < arr[1])
    {
        swap(arr[1], arr[2]);
        if (arr[1] < arr[0])
            swap(arr[1], arr[0]);
    }
}

int main()
{
    int a[] = {10, 12, 5};
    sort3(a);
    for (int i = 0; i < 3; i++)
        cout << a[i] << " ";
    return 0;
}

Java:

// Java program to sort an array of size 3
import java.io.*;
import java.util.*;

class GFG
{
    static void sort3(int arr[], int temp[])
    {
        // Insert arr[1]
        if (arr[1] < arr[0])
        {
            temp[0] = arr[0];
            arr[0] = arr[1];
            arr[1] = temp[0];
        }

        // Insert arr[2]
        if (arr[2] < arr[1])
        {
            temp[0] = arr[1];
            arr[1] = arr[2];
            arr[2] = temp[0];
            if (arr[1] < arr[0])
            {
                temp[0] = arr[0];
                arr[0] = arr[1];
                arr[1] = temp[0];
            }
        }
    }

    // Driver Code
    public static void main(String args[])
    {
        int a[] = new int[]{10, 12, 5};
        int temp1[] = new int[10];
        sort3(a, temp1);
        for (int i = 0; i < 3; i++)
            System.out.print(a[i] + " ");
    }
}
// This code is contributed by Akanksha Rai (Abby_akku)

Python3:

# Python3 program to sort an array of size 3
def sort3(arr):

    # Insert arr[1]
    if (arr[1] < arr[0]):
        arr[0], arr[1] = arr[1], arr[0]

    # Insert arr[2]
    if (arr[2] < arr[1]):
        arr[1], arr[2] = arr[2], arr[1]
        if (arr[1] < arr[0]):
            arr[1], arr[0] = arr[0], arr[1]

# Driver code
a = [10, 12, 5]
sort3(a)
for i in range(3):
    print(a[i], end=" ")
# This code is contributed by shubhamsingh10

C#:

// C# program to sort an array of size 3
using System;

class GFG
{
    static void sort3(int[] arr, int[] temp)
    {
        // Insert arr[1]
        if (arr[1] < arr[0])
        {
            temp[0] = arr[0];
            arr[0] = arr[1];
            arr[1] = temp[0];
        }

        // Insert arr[2]
        if (arr[2] < arr[1])
        {
            temp[0] = arr[1];
            arr[1] = arr[2];
            arr[2] = temp[0];
            if (arr[1] < arr[0])
            {
                temp[0] = arr[0];
                arr[0] = arr[1];
                arr[1] = temp[0];
            }
        }
    }

    // Driver Code
    public static void Main(String[] args)
    {
        int[] a = new int[]{10, 12, 5};
        int[] temp1 = new int[10];
        sort3(a, temp1);
        for (int i = 0; i < 3; i++)
            Console.Write(a[i] + " ");
    }
}
// This code is contributed by Akanksha Rai (Abby_akku)

PHP:

<?php
// PHP program to sort an array of size 3
function sort3(&$arr, $temp)
{
    // Insert arr[1]
    if ($arr[1] < $arr[0])
    {
        $temp[0] = $arr[0];
        $arr[0] = $arr[1];
        $arr[1] = $temp[0];
    }

    // Insert arr[2]
    if ($arr[2] < $arr[1])
    {
        $temp[0] = $arr[1];
        $arr[1] = $arr[2];
        $arr[2] = $temp[0];
    }

    if ($arr[1] < $arr[0])
    {
        $temp[0] = $arr[0];
        $arr[0] = $arr[1];
        $arr[1] = $temp[0];
    }
}

// Driver Code
$a = array(10, 12, 5);
$temp1 = array(10);
sort3($a, $temp1);

for ($i = 0; $i < 3; $i++)
    echo($a[$i] . " ");
// This code is contributed by Code_Mech.
?>

Javascript:

<script>
// Javascript program to sort an array of size 3
function sort3(arr, temp)
{
    // Insert arr[1]
    if (arr[1] < arr[0])
    {
        temp[0] = arr[0];
        arr[0] = arr[1];
        arr[1] = temp[0];
    }

    // Insert arr[2]
    if (arr[2] < arr[1])
    {
        temp[0] = arr[1];
        arr[1] = arr[2];
        arr[2] = temp[0];
        if (arr[1] < arr[0])
        {
            temp[0] = arr[0];
            arr[0] = arr[1];
            arr[1] = temp[0];
        }
    }
}

// Driver code
let a = [10, 12, 5];
let temp1 = [10];
sort3(a, temp1);

for (let i = 0; i < 3; i++)
    document.write(a[i] + " ");
// This code is contributed by decode2207
</script>

Output:

5 10 12

Time Complexity: O(1)
Auxiliary Space: O(1)
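Since the array has only three elements, the insertion-sort routine can be verified exhaustively. The sketch below is our own sanity check, mirroring the Python3 version above: it runs the function over all 3! = 6 orderings and confirms each one comes out sorted.

```python
# Exhaustive check of the 3-element insertion sort over all 6 permutations.
from itertools import permutations

def sort3(arr):
    # Insert arr[1]
    if arr[1] < arr[0]:
        arr[0], arr[1] = arr[1], arr[0]
    # Insert arr[2]
    if arr[2] < arr[1]:
        arr[1], arr[2] = arr[2], arr[1]
        if arr[1] < arr[0]:
            arr[1], arr[0] = arr[0], arr[1]

for p in permutations([5, 10, 12]):
    a = list(p)
    sort3(a)
    assert a == [5, 10, 12]
print("all 6 orderings sorted correctly")
```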
Use of fflush(stdin) in C
13 Sep, 2021

fflush() is typically used for output streams only. Its purpose is to clear (or flush) the output buffer and move the buffered data to the console (in the case of stdout) or disk (in the case of a file output stream). Below is its syntax.

fflush(FILE *ostream);

If ostream points to an output stream or an update stream in which the most recent operation was not input, the fflush function causes any unwritten data for that stream to be delivered to the host environment to be written to the file; otherwise, the behavior is undefined.

Can we use it for an input stream like stdin? As per the C standard, it is undefined behavior to use fflush(stdin). However, some compilers, like Microsoft Visual Studio, allow it. How is it used in these compilers? While taking an input string with spaces, the buffer does not get cleared for the next input and considers the previous input for the same. To solve this problem, fflush(stdin) is used to clear the stream/buffer.

// C program to illustrate a situation
// where fflush(stdin) is required only
// in certain compilers.
#include <stdio.h>
#include <stdlib.h>
int main()
{
    char str[20];
    int i;
    for (i = 0; i < 2; i++)
    {
        scanf("%[^\n]s", str);
        printf("%s\n", str);
        // fflush(stdin);
    }
    return 0;
}

Input:

geeks
geeksforgeeks

Output:

geeks
geeks

The code above takes only a single input and gives the same result for the second input. The reason is that the string is already stored in the buffer, i.e. the stream is not cleared yet, as it was expecting a string with spaces or a newline. So, to handle this situation, fflush(stdin) is used.

// C program to illustrate fflush(stdin).
// This program works as expected only
// in certain compilers like Microsoft
// Visual Studio.
#include <stdio.h>
#include <stdlib.h>
int main()
{
    char str[20];
    int i;
    for (i = 0; i < 2; i++)
    {
        scanf("%[^\n]s", str);
        printf("%s\n", str);

        // used to clear the buffer
        // and accept the next string
        fflush(stdin);
    }
    return 0;
}

Input:

geeks
geeksforgeeks

Output:

geeks
geeksforgeeks

Is it good to use fflush(stdin)?
Although using "fflush(stdin)" after a "scanf()" statement also clears the input buffer in certain compilers, it is not recommended, as it is undefined behavior by the language standards. In C and C++, we have different methods to clear the buffer, discussed in this post.

Reference: https://stackoverflow.com/questions/2979209/using-fflushstdin

This article is contributed by Sahil Chhabra.
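The mechanism behind the failing second read can be modeled outside of C. The sketch below is our own illustration, not from the article: using an in-memory stream as a stand-in for stdin, it shows that a %[^\n]-style read stops at the newline and leaves it buffered, so the next read sees an empty line, and that the portable fix is simply to consume the rest of the line yourself. The helper names are ours.

```python
# Illustrative model of the stdin buffer: scanf("%[^\n]") reads up to,
# but not including, the newline, so '\n' stays buffered and the next
# read is stuck. Discarding through end of line is the portable "flush".
import io

buf = io.StringIO("geeks\ngeeksforgeeks\n")  # stand-in for stdin

def scan_up_to_newline(stream):
    # like scanf("%[^\n]"): consume chars until '\n', leave '\n' buffered
    out = []
    while True:
        c = stream.read(1)
        if c == "" or c == "\n":
            if c == "\n":
                stream.seek(stream.tell() - 1)  # leave the '\n' in place
            return "".join(out)
        out.append(c)

first = scan_up_to_newline(buf)   # "geeks"
second = scan_up_to_newline(buf)  # ""  (stuck on the leftover '\n')

buf.read(1)                       # consume the pending '\n' (the "flush")
third = scan_up_to_newline(buf)   # "geeksforgeeks"
print(first, repr(second), third)
```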
Python | shutil.copyfileobj() method
12 Oct, 2021

The shutil module in Python provides many functions for high-level operations on files and collections of files. It comes under Python's standard utility modules and helps in automating the process of chowning and removal of files and directories. The shutil.copyfileobj() method in Python is used to copy the contents of a file-like object to another file-like object. By default this method copies data in chunks, and if we want we can also specify the buffer size through the length parameter. This method copies the content of the file from the current file position to the end of the file.

Syntax: shutil.copyfileobj(fsrc, fdst[, length])

Parameters:
fsrc: A file-like object representing the source file to be copied.
fdst: A file-like object representing the destination file, where fsrc will be copied.
length (optional): An integer value denoting buffer size.

File-like objects are mainly StringIO objects, connected sockets and actual file objects.

Return Type: This method does not return any value.

Code: Use of the shutil.copyfileobj() method to copy the contents of a source file-like object to a destination file-like object.

Python3

# Python program to explain shutil.copyfileobj() method

# importing shutil module
import shutil

# Source file
source = 'file.txt'

# Open the source file
# in read mode and
# get the file object
fsrc = open(source, 'r')

# destination file
dest = 'file_copy.txt'

# Open the destination file
# in write mode and
# get the file object
fdst = open(dest, 'w')

# Now, copy the contents of
# file object fsrc to fdst
# using shutil.copyfileobj() method
shutil.copyfileobj(fsrc, fdst)

# We can also specify
# the buffer size by passing
# an optional length parameter,
# like shutil.copyfileobj(fsrc, fdst, 1024)

print("Contents of file object copied successfully")

# Close file objects
fsrc.close()
fdst.close()

Output:

Contents of file object copied successfully

Reference: https://docs.python.org/3/library/shutil.html
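Because copyfileobj() accepts any file-like objects, the two behaviors noted above — copying starts at the source's current file position, and an optional buffer size can be passed — are easy to verify with in-memory io objects. The snippet below is a small illustrative sketch (not from the original article; the byte string is made up for the example):

```python
import io
import shutil

# Source file-like object; seek past the first 6 bytes so the copy
# starts from the current position rather than the beginning.
fsrc = io.BytesIO(b"hello world")
fsrc.seek(6)

fdst = io.BytesIO()

# Copy in 4-byte chunks via the optional length parameter.
shutil.copyfileobj(fsrc, fdst, 4)

print(fdst.getvalue())  # b'world' -- the bytes before position 6 are skipped
```

Using io.BytesIO here also avoids creating files on disk, which makes the behavior easy to test.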
Producer-Consumer solution using Semaphores in Java | Set 2
20 Nov, 2019

Prerequisites: Semaphore in Java, Inter Process Communication, Producer Consumer Problem using Semaphores | Set 1

In computing, the producer-consumer problem (also known as the bounded-buffer problem) is a classic example of a multi-process synchronization problem. The problem describes two processes, the producer and the consumer, which share a common, fixed-size buffer used as a queue.

The producer's job is to generate data, put it into the buffer, and start again.
At the same time, the consumer is consuming the data (i.e. removing it from the buffer), one piece at a time.

Problem: To make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.

Solution: The producer is to either go to sleep or discard data if the buffer is full. The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty. The next time the producer puts data into the buffer, it wakes up the sleeping consumer. An inadequate solution could result in a deadlock where both processes are waiting to be awakened.

In the post Producer-Consumer solution using threads in Java, we discussed the above solution using inter-thread communication (wait(), notify(), sleep()). In this post, we will use semaphores to implement the same.

The below solution consists of four classes:

Q : the queue that you're trying to synchronize
Producer : the threaded object that is producing queue entries
Consumer : the threaded object that is consuming queue entries
PC : the driver class that creates the single Q, Producer, and Consumer.

// Java implementation of a producer and consumer
// that use semaphores to control synchronization.

import java.util.concurrent.Semaphore;

class Q {
    // an item
    int item;

    // semCon initialized with 0 permits
    // to ensure put() executes first
    static Semaphore semCon = new Semaphore(0);
    static Semaphore semProd = new Semaphore(1);

    // to get an item from buffer
    void get()
    {
        try {
            // Before consumer can consume an item,
            // it must acquire a permit from semCon
            semCon.acquire();
        }
        catch (InterruptedException e) {
            System.out.println("InterruptedException caught");
        }

        // consumer consuming an item
        System.out.println("Consumer consumed item : " + item);

        // After consumer consumes the item,
        // it releases semProd to notify producer
        semProd.release();
    }

    // to put an item in buffer
    void put(int item)
    {
        try {
            // Before producer can produce an item,
            // it must acquire a permit from semProd
            semProd.acquire();
        }
        catch (InterruptedException e) {
            System.out.println("InterruptedException caught");
        }

        // producer producing an item
        this.item = item;
        System.out.println("Producer produced item : " + item);

        // After producer produces the item,
        // it releases semCon to notify consumer
        semCon.release();
    }
}

// Producer class
class Producer implements Runnable {
    Q q;
    Producer(Q q)
    {
        this.q = q;
        new Thread(this, "Producer").start();
    }

    public void run()
    {
        for (int i = 0; i < 5; i++)
            // producer put items
            q.put(i);
    }
}

// Consumer class
class Consumer implements Runnable {
    Q q;
    Consumer(Q q)
    {
        this.q = q;
        new Thread(this, "Consumer").start();
    }

    public void run()
    {
        for (int i = 0; i < 5; i++)
            // consumer get items
            q.get();
    }
}

// Driver class
class PC {
    public static void main(String args[])
    {
        // creating buffer queue
        Q q = new Q();

        // starting consumer thread
        new Consumer(q);

        // starting producer thread
        new Producer(q);
    }
}

Output:

Producer produced item : 0
Consumer consumed item : 0
Producer produced item : 1
Consumer consumed item : 1
Producer produced item : 2
Consumer consumed item : 2
Producer produced item : 3
Consumer consumed item : 3
Producer produced item : 4
Consumer consumed item : 4

Explanation: As you can see, the calls to put() and get() are synchronized, i.e. each call to put() is followed by a call to get() and no items are missed. Without the semaphores, multiple calls to put() would have occurred without matching calls to get(), resulting in items being missed. (To prove this, remove the semaphore code and observe the results.)

The sequencing of put() and get() calls is handled by two semaphores: semProd and semCon.
Before put() can produce an item, it must acquire a permit from semProd. After it has produced the item, it releases semCon.
Before get() can consume an item, it must acquire a permit from semCon. After it consumes the item, it releases semProd.
This "give and take" mechanism ensures that each call to put() must be followed by a call to get().
Also notice that semCon is initialized with no available permits. This ensures that put() executes first. The ability to set the initial synchronization state is one of the more powerful aspects of a semaphore.

This article is contributed by Gaurav Miglani.
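The same give-and-take mechanism translates directly to other languages. Below is an illustrative sketch (not part of the original article) of the identical two-semaphore protocol using Python's threading.Semaphore; the variable names mirror the Java version:

```python
import threading

# sem_con starts with 0 permits so put() must execute first;
# sem_prod starts with 1 permit so the producer can begin.
sem_con = threading.Semaphore(0)
sem_prod = threading.Semaphore(1)

buffer = []   # shared one-slot "queue"
log = []      # records the interleaving for inspection

def put(item):
    sem_prod.acquire()       # wait for the consumer's permit
    buffer.append(item)
    log.append(f"produced {item}")
    sem_con.release()        # notify the consumer

def get():
    sem_con.acquire()        # wait for the producer's permit
    item = buffer.pop()
    log.append(f"consumed {item}")
    sem_prod.release()       # notify the producer

producer = threading.Thread(target=lambda: [put(i) for i in range(5)])
consumer = threading.Thread(target=lambda: [get() for _ in range(5)])
consumer.start()
producer.start()
producer.join()
consumer.join()

print(log)
# ['produced 0', 'consumed 0', 'produced 1', 'consumed 1', ...]
```

Just as in the Java version, the strictly alternating log shows that no call to put() can run ahead of the matching get(), even though the two threads are scheduled independently.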
SQL Interview Questions
18 Apr, 2022

1. What is SQL?
SQL stands for Structured Query Language. It is a language used to interact with the database, i.e., to create a database, to create a table in the database, to retrieve data or update a table in the database, etc. SQL is an ANSI (American National Standards Institute) standard. Using SQL, we can do many things. For example, we can execute queries, we can insert records into a table, we can update records, we can create a database, we can create a table, we can delete a table, etc.

2. What is a database?
A database is defined as a structured form of data storage in a computer, or a collection of data organized in a manner that can be accessed in various ways. It is also the collection of schemas, tables, queries, views, etc. Databases help us with easily storing, accessing, and manipulating data held on a computer. The Database Management System allows a user to interact with the database.

3. Does SQL support programming language features?
It is true that SQL is a language, but it does not support programming as it is not a programming language; it is a command language. We do not have conditional statements in SQL like for loops or if..else; we only have commands which we can use to query, update, delete, etc. data in the database. SQL allows us to manipulate data in a database.

4. What are the differences between SQL and PL/SQL?
SQL is a declarative language used to write individual queries and statements (DDL and DML), while PL/SQL is Oracle's procedural extension of SQL: it adds programming constructs such as variables, loops, conditional statements, and exception handling, and is used to write complete blocks of code such as procedures, functions, and triggers.

5. What is the difference between BETWEEN and IN operators in SQL?
BETWEEN: The BETWEEN operator is used to fetch rows based on a range of values. For example,
SELECT * FROM Students WHERE ROLL_NO BETWEEN 20 AND 30;
This query will select all those rows from the table Students where the value of the field ROLL_NO lies between 20 and 30.
IN: The IN operator is used to check for values contained in specific sets.
For example,
SELECT * FROM Students WHERE ROLL_NO IN (20, 21, 23);
This query will select all those rows from the table Students where the value of the field ROLL_NO is either 20 or 21 or 23.

6. Write an SQL query to find names of employees that start with 'A'.
The LIKE operator of SQL is used for this purpose. It is used to fetch filtered data by searching for a particular pattern in the WHERE clause. The syntax for using LIKE is,
SELECT column1, column2 FROM table_name WHERE column_name LIKE pattern;
LIKE: operator name
pattern: exact value extracted from the pattern to get related data in the result set
The required query is:
SELECT * FROM Employees WHERE EmpName LIKE 'A%';
You may refer to the WHERE clause article for more details on the LIKE operator.

7. What is the difference between CHAR and VARCHAR2 datatype in SQL?
Both of these data types are used for characters, but varchar2 is used for character strings of variable length, whereas char is used for character strings of fixed length. For example, if we specify the type as char(5) then we will not be allowed to store a string of any other length in this variable, but if we specify the type of this variable as varchar2(5) then we will be allowed to store strings of variable length. We can store a string of length 3 or 4 or 2 in this variable.

8. Name different types of case manipulation functions available in SQL.
There are three types of case manipulation functions available in SQL. They are:
LOWER: The purpose of this function is to return the string in lowercase. It takes a string as an argument and returns the string converted into lower case. Syntax: LOWER('string')
UPPER: The purpose of this function is to return the string in uppercase. It takes a string as an argument and returns the string converted into uppercase. Syntax: UPPER('string')
INITCAP: The purpose of this function is to return the string with the first letter in uppercase and the rest of the letters in lowercase.
Syntax: INITCAP('string')

9. What do you mean by data definition language?
Data definition language, or DDL, allows the execution of queries like CREATE, DROP and ALTER — that is, those queries that define the data.

10. What do you mean by data manipulation language?
Data manipulation language, or DML, is used to access or manipulate data in the database. It allows us to perform the below-listed functions:
Insert data or rows in a database
Delete data from the database
Retrieve or fetch data
Update data in a database

11. What is the difference between primary key and unique constraints?
The primary key cannot have NULL values, while unique constraints can have NULL values. There is only one primary key in a table, but there can be multiple unique constraints. The primary key creates the clustered index automatically, but the unique key does not.

12. What is a view in SQL?
Views in SQL are a kind of virtual table. A view also has rows and columns as they are on a real table in the database. We can create a view by selecting fields from one or more tables present in the database. A view can either have all the rows of a table or specific rows based on certain conditions. The CREATE VIEW statement of SQL is used for creating views.
Basic Syntax:
CREATE VIEW view_name AS
SELECT column1, column2, ...
FROM table_name
WHERE condition;
view_name: name for the view
table_name: name of the table
condition: condition to select rows
For more details on how to create and use a view, please refer to this article.

13. What do you mean by foreign key?
A foreign key is a field that can uniquely identify each row in another table, and this constraint is used to specify a field as a foreign key. That is, this field points to the primary key of another table. This usually creates a kind of link between the two tables. Consider two tables, Orders and Customers. The field C_ID in the Orders table is the primary key in the Customers table, i.e.
it uniquely identifies each row in the Customers table. Therefore, it is a foreign key in the Orders table.
Syntax:
CREATE TABLE Orders (
    O_ID int NOT NULL,
    ORDER_NO int NOT NULL,
    C_ID int,
    PRIMARY KEY (O_ID),
    FOREIGN KEY (C_ID) REFERENCES Customers(C_ID)
)

14. What is a join in SQL? What are the types of joins?
An SQL Join statement is used to combine data or rows from two or more tables based on a common field between them. Different types of joins are:
INNER JOIN: The INNER JOIN keyword selects all rows from both the tables as long as the condition satisfies. This keyword will create the result-set by combining all rows from both the tables where the condition satisfies, i.e. the value of the common field is the same.
LEFT JOIN: This join returns all the rows of the table on the left side of the join and matching rows for the table on the right side of the join. For the rows for which there is no matching row on the right side, the result-set will be null. LEFT JOIN is also known as LEFT OUTER JOIN.
RIGHT JOIN: RIGHT JOIN is similar to LEFT JOIN. This join returns all the rows of the table on the right side of the join and matching rows for the table on the left side of the join. For the rows for which there is no matching row on the left side, the result-set will contain null. RIGHT JOIN is also known as RIGHT OUTER JOIN.
FULL JOIN: FULL JOIN creates the result-set by combining the results of both LEFT JOIN and RIGHT JOIN. The result-set will contain all the rows from both tables. For the rows for which there is no match, the result-set will contain NULL values.

15. What is an index?
A database index is a data structure that improves the speed of data retrieval operations on a database table, at the cost of additional writes and the use of more storage space to maintain the extra copy of data. Data can be stored only in one order on a disk.
To support faster access according to different values, a faster search like binary search for different values is desired. For this purpose, indexes are created on tables. These indexes need extra space on the disk, but they allow faster search according to different frequently searched values.

16. What are a table and a field?
Table: A table is a combination of rows and columns. Rows are called records and columns are called fields. In MS SQL Server, tables are designated within the database and schema names.
Field: In DBMS, a database field can be defined as a single piece of information from a record.

17. What is a primary key?
A primary key is one of the candidate keys. One of the candidate keys is selected as the most important and becomes the primary key. There cannot be more than one primary key in a table.

18. What is a Default constraint?
The DEFAULT constraint is used to fill a column with default, fixed values. The value will be added to all new records when no other value is provided. For more details please refer to the SQL | Default Constraint article.

19. What is the On Delete Cascade constraint?
An ON DELETE CASCADE constraint is used in MySQL to delete the rows from the child table automatically when the rows from the parent table are deleted. For more details, please read the MySQL – On Delete Cascade constraint article.

20. What is normalization?
It is a process of analyzing the given relation schemas based on their functional dependencies and primary keys to achieve the following desirable properties:
1) Minimizing redundancy
2) Minimizing the insertion, deletion, and update anomalies
Relation schemas that do not meet the properties are decomposed into smaller relation schemas that do meet the desirable properties.

21. What is denormalization?
Denormalization is a database optimization technique in which we add redundant data to one or more tables. This can help us avoid costly joins in a relational database.
Note that denormalization does not mean not doing normalization. It is an optimization technique that is applied after doing normalization. In a traditional normalized database, we store data in separate logical tables and attempt to minimize redundant data. We may strive to have only one copy of each piece of data in the database.

22. Explain the WITH clause in SQL.
The WITH clause provides a way of defining a temporary relation whose definition is available only to the query in which the WITH clause occurs. SQL applies predicates in the WITH clause after groups have been formed, so aggregate functions may be used.

23. What are all the different attributes of indexes?
Indexing has various attributes:
Access Types: This refers to the type of access, such as value-based search, range access, etc.
Access Time: It refers to the time needed to find a particular data element or set of elements.
Insertion Time: It refers to the time taken to find the appropriate space and insert new data.
Deletion Time: The time taken to find an item and delete it, as well as update the index structure.
Space Overhead: It refers to the additional space required by the index.

24. What is a cursor?
A cursor is a temporary memory area, or temporary work station, allocated by the database server at the time of performing DML operations on a table by the user. Cursors are used to store database tables.

25. Write down various types of relationships in SQL.
There are various relationships, namely:
One-to-One Relationship
One-to-Many Relationship
Many-to-One Relationship
Self-Referencing Relationship

26. What is a query?
An SQL query is used to retrieve the required data from the database. However, there may be multiple SQL queries that yield the same results but with different levels of efficiency. An inefficient query can drain the database resources, reduce the database speed or result in a loss of service for other users.
So it is very important to optimize a query to obtain the best database performance.

27. What is a subquery?
In SQL, a subquery can be simply defined as a query within another query. In other words, a subquery is a query that is embedded in the WHERE clause of another SQL query.

28. What are the different operators available in SQL?
There are three types of operators available in SQL, namely:
Arithmetic Operators
Logical Operators
Comparison Operators

29. What is a trigger?
A trigger is a statement that the system executes automatically when there is any modification to the database. In a trigger, we first specify when the trigger is to be executed and then the action to be performed when the trigger executes. Triggers are used to specify certain integrity constraints and referential constraints that cannot be specified using the constraint mechanism of SQL.

30. What is the difference between DELETE and TRUNCATE commands?
DELETE is a DML command that removes rows one at a time and can use a WHERE clause to remove only selected rows; the operation can be rolled back. TRUNCATE is a DDL command that removes all the rows from a table at once; it is faster than DELETE and cannot use a WHERE clause.

31. What are local and global variables and their differences?
Local Variable: Local variables are variables that are defined within functions. They have local scope, which means that they can only be used within the functions that define them.
Global Variable: In contrast, global variables are variables that are defined outside of functions. These variables have global scope, so they can be used by any function without being passed to the function as parameters.

32. What is a constraint?
Constraints are the rules that we can apply to the type of data in a table. That is, we can specify the limit on the type of data that can be stored in a particular column of a table using constraints. For more details please refer to the SQL | Constraints article.

33. What is Data Integrity?
Data integrity means that the data contained in the database is both correct and consistent. For this purpose, the data stored in the database must satisfy certain rules (constraints).
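For instance, such rules can be declared as constraints on a hypothetical table; every stored row must then satisfy them:

```sql
-- Hypothetical Employees table: each constraint encodes an
-- integrity rule the database will enforce on every row.
CREATE TABLE Employees (
    Emp_ID  INT PRIMARY KEY,                     -- uniquely identifies a row
    Email   VARCHAR(100) UNIQUE,                 -- no duplicate emails
    Salary  DECIMAL(10,2) CHECK (Salary >= 0),   -- no negative pay
    Dept    VARCHAR(50) NOT NULL                 -- department must be given
);
```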
DBMS provides different ways to implement such constraints (rules), and this improves data integrity in a database. For more details please refer to the difference between data security and data integrity article.

34. What is Auto Increment?
Sometimes, while creating a table, we do not have a unique identifier within the table, and hence we face difficulty in choosing a Primary Key. To resolve such an issue, we would have to manually provide unique keys to every record, but this is a tedious task. So we can use the Auto-Increment feature, which automatically generates a numerical primary key value for every new record inserted. The Auto-Increment feature is supported by most databases. For more details please refer to the SQL Auto Increment article.

35. What is the difference between a Clustered and a Non-Clustered Index?
A clustered index determines the physical order of the rows in a table, so a table can have only one clustered index. A non-clustered index is a separate structure that holds pointers to the rows, and a table can have many of them. For more details please refer to the Difference between Clustered Index and Non-Clustered Index article.

36. What is MySQL collation?
A MySQL collation is a well-defined set of rules used to compare characters of a particular character set by using their corresponding encoding. Each character set in MySQL may have more than one collation and has at least one default collation. Two character sets cannot have the same collation. For more details please refer to the What is collation and character set in MySQL? article.

37. What are user-defined functions?
We can use user-defined functions in PL/SQL or Java to provide functionality that is not available in SQL or SQL built-in functions. SQL functions and user-defined functions can appear anywhere an expression occurs. For example, they can be used in:
The select list of a SELECT statement.
The condition of a WHERE clause.
CONNECT BY, ORDER BY, START WITH, and GROUP BY clauses.
The VALUES clause of an INSERT statement.
The SET clause of an UPDATE statement.

38. What are all the types of user-defined functions?
User-defined functions allow people to define their own T-SQL functions that can accept zero or more parameters and return a single scalar data value or a table data type. The different kinds of user-defined functions are:

1. Scalar User-Defined Function
A scalar user-defined function returns one of the scalar data types. Text, ntext, image, and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages: you pass in zero to many parameters and you get a return value.

2. Inline Table-Valued User-Defined Function
An inline table-valued user-defined function returns a table data type and is an exceptional alternative to a view, as the user-defined function can pass parameters into a T-SQL SELECT command and, in essence, provide us with a parameterized, non-updateable view of the underlying tables.

3. Multi-Statement Table-Valued User-Defined Function
A multi-statement table-valued user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result, where a view is limited to a single SELECT statement. The ability to pass parameters into a T-SQL SELECT command, or a group of them, gives us the capability to, in essence, create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure that is being returned. After creating this type of user-defined function, it can be used in the FROM clause of a T-SQL command, unlike the behavior found when using a stored procedure, which can also return record sets.

39. Name the function which is used to remove spaces at the end of a string?
In SQL, spaces at the end of a string can be removed with the TRIM function (RTRIM removes trailing spaces only).
Syntax: TRIM(s), where s is any string.

40. What is a stored procedure?
Stored procedures are created to perform one or more DML operations on a database. A stored procedure is a group of SQL statements that accepts some input in the form of parameters, performs some task, and may or may not return a value. For more details please refer to the What is a Stored Procedure in SQL article.

41. What are the UNION, MINUS and INTERSECT commands?
Set operations in SQL eliminate duplicate tuples and can be applied only to relations which are union compatible. The set operations available in SQL are:
Set Union
Set Intersection
Set Difference

UNION operation: This operation includes all the tuples which are present in either of the relations. For example, to find all the customers who have a loan or an account (or both) in a bank:
SELECT CustomerName FROM Depositor
UNION
SELECT CustomerName FROM Borrower;
The UNION operation automatically eliminates duplicates. If all the duplicates are to be retained, UNION ALL is used in place of UNION.

INTERSECT operation: This operation includes the tuples which are present in both of the relations. For example, to find the customers who have a loan as well as an account in the bank:
SELECT CustomerName FROM Depositor
INTERSECT
SELECT CustomerName FROM Borrower;
The INTERSECT operation automatically eliminates duplicates. If all the duplicates are to be retained, INTERSECT ALL is used in place of INTERSECT.

EXCEPT operation: This operation includes the tuples that are present in one relation but not in the other. For example, to find the customers who have an account but no loan at the bank:
SELECT CustomerName FROM Depositor
EXCEPT
SELECT CustomerName FROM Borrower;
The EXCEPT operation automatically eliminates duplicates. If all the duplicates are to be retained, EXCEPT ALL is used in place of EXCEPT.

42. What is an ALIAS command?
Aliases are temporary names given to a table or column for the purpose of a particular SQL query.
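A minimal sketch, using hypothetical table and column names:

```sql
-- 'c' is a table alias and 'Name' is a column alias; both exist
-- only for the duration of this query.
SELECT c.CustomerName AS Name
FROM Customers AS c
WHERE c.C_ID = 1;
```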
An alias is used when a column or table is referred to by a name other than its original name; the modified name is only temporary. Aliases are created to make table or column names more readable. The renaming is just a temporary change, and the table name does not change in the original database. Aliases are useful when table or column names are long or not very readable, and they are preferred when more than one table is involved in a query. For more details, please read the SQL | Aliases article.

43. What is the difference between TRUNCATE and DROP statements?
TRUNCATE removes all the rows from a table but keeps the table structure, whereas DROP removes the table definition along with its data. For more details, please read the Difference between DROP and TRUNCATE in SQL article.

44. What are aggregate and scalar functions?
For doing operations on data, SQL has many built-in functions. They are divided into two categories, each with several functions under it. The categories are:
Aggregate functions: These functions perform operations on the values of a column, and a single value is returned.
Scalar functions: These functions are based on user input and also return a single value.
For more details, please read the SQL | Functions (Aggregate and Scalar Functions) article.

45. Which operator is used in queries for pattern matching?
The LIKE operator: It is used to fetch filtered data by searching for a particular pattern in the WHERE clause.
Syntax:
SELECT column1, column2 FROM table_name WHERE column_name LIKE pattern;
LIKE: operator name

46. Define the SQL ORDER BY statement?
The ORDER BY statement in SQL is used to sort the fetched data in either ascending or descending order according to one or more columns. By default, ORDER BY sorts the data in ascending order. We can use the keyword DESC to sort the data in descending order and the keyword ASC to sort it in ascending order. For more details please read the SQL | ORDER BY article.

47. Explain the SQL HAVING statement?
HAVING is used to specify a condition for a group or an aggregate function used in a SELECT statement. The WHERE clause selects rows before grouping, while the HAVING clause selects rows after grouping. Unlike the HAVING clause, the WHERE clause cannot contain aggregate functions. See Having vs Where Clause for examples.

48. Explain the SQL AND OR statement with an example?
In SQL, the AND and OR operators are used for filtering data and getting precise results based on conditions. The AND and OR operators are used with the WHERE clause. These two operators are called conjunctive operators.
AND operator: This operator displays only those records where both condition1 and condition2 evaluate to True.
OR operator: This operator displays the records where either one of condition1 and condition2 evaluates to True; that is, either condition1 is True or condition2 is True.
For more details please read the SQL | AND and OR operators article.

49. Define BETWEEN statements in SQL?
The SQL BETWEEN condition allows you to easily test whether an expression is within a range of values (inclusive). The values can be text, dates, or numbers. It can be used in a SELECT, INSERT, UPDATE, or DELETE statement. The SQL BETWEEN condition returns the records where the expression is within the range of value1 and value2. For more details please read the SQL | BETWEEN & IN operator article.

50. Why do we use the COMMIT and ROLLBACK commands?
COMMIT is used to permanently save the changes made by the current transaction, while ROLLBACK is used to undo the changes made since the last COMMIT. For more details please read the Difference between Commit and Rollback in SQL article.

51. What are ACID properties?
A transaction is a single logical unit of work that accesses and possibly modifies the contents of a database. Transactions access data using read and write operations. In order to maintain consistency in a database before and after a transaction, certain properties are followed. These are called ACID properties.
ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties that guarantee that database transactions are processed reliably. For more details please read the ACID properties in DBMS article.

52. What is T-SQL?
T-SQL is an abbreviation for Transact-SQL (Transact Structured Query Language). It is a product by Microsoft and is an extension of the SQL language used to interact with relational databases. It is considered to perform best with Microsoft SQL Server. T-SQL statements are used to perform transactions on databases. T-SQL has huge importance, since all communication with an instance of SQL Server is done by sending Transact-SQL statements to the server. Users can also define functions using T-SQL. The types of T-SQL functions are:
Aggregate functions
Ranking functions (there are different types of ranking functions)
Rowset functions
Scalar functions

53. Are NULL values the same as zero or a blank space?
No. In SQL, a zero or blank space can be compared with another zero or blank space, whereas one NULL may not be equal to another NULL. NULL means that data might not be provided or that there is no data.

54. What is the need for group functions in SQL?
In database management, a group function, also known as an aggregate function, is a function where the values of multiple rows are grouped together as input on certain criteria to form a single value of more significant meaning. Various group functions are:
1) COUNT()
2) SUM()
3) AVG()
4) MIN()
5) MAX()
For more details please read the Aggregate functions in SQL article.

55. What is the need for a MERGE statement?
The MERGE command in SQL is actually a combination of three SQL statements: INSERT, UPDATE and DELETE. In simple words, the MERGE statement in SQL provides a convenient way to perform all three operations together, which can be very helpful when it comes to handling large databases.
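A sketch of the standard-SQL form of MERGE, using hypothetical Target and Source tables (exact syntax varies between database systems):

```sql
MERGE INTO Target t
USING Source s
    ON (t.ID = s.ID)
WHEN MATCHED THEN
    UPDATE SET t.Qty = s.Qty                 -- row already exists: update it
WHEN NOT MATCHED THEN
    INSERT (ID, Qty) VALUES (s.ID, s.Qty);   -- row missing: insert it
```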
But unlike the INSERT, UPDATE and DELETE statements, the MERGE statement requires a source table to perform these operations on the required table, which is called the target table. For more details please read the SQL | MERGE Statement article.

56. How can you fetch common records from two tables?
To get data from multiple tables, we need to use a join. The statement below could be used:
Syntax:
SELECT tablename1.columnname, tablename2.columnname
FROM tablename1
JOIN tablename2
ON tablename1.columnname = tablename2.columnname
ORDER BY columnname;
For more details and examples, please read the SQL | SELECT data from Multiple tables article.

57. What are the advantages of PL/SQL functions?
The advantages of PL/SQL functions are as follows:
We can make a single call to the database to run a block of statements. This improves performance compared with running SQL multiple times and reduces the number of calls between the database and the application.
We can divide the overall work into small modules which become quite manageable, also enhancing the readability of the code.
It promotes reusability.
It is secure, since the code stays inside the database, thus hiding internal database details from the application (user). The user only makes a call to the PL/SQL functions. Hence, security and data hiding are ensured.

58. What is the SQL query to display the current date?
CURRENT_DATE returns the current date. This function returns the same value if it is executed more than once in a single statement, which means that the value is fixed, even if there is a long delay between fetching rows in a cursor.
CURRENT_DATE or CURRENT DATE

59. What is ETL in SQL?
ETL is a process in data warehousing, and it stands for Extract, Transform and Load. It is a process in which an ETL tool extracts the data from various data source systems, transforms it in the staging area, and then finally loads it into the data warehouse system.
These are three database functions that are incorporated into one tool to pull data out of one database and put it into another database.

60. What are Nested Triggers?
A trigger can also contain INSERT, UPDATE, and DELETE logic within itself, so when the trigger is fired because of a data modification, it can cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger.

61. How to find the available constraint information in a table?
In SQL Server, the data dictionary is a set of database tables used to store information about a database's definition. One can use these data dictionaries to check the constraints on an already existing table and to change them (if possible). For more details please read the SQL | Checking Existing Constraint on a Table article.

62. What is SQL injection?
SQL injection is a code injection technique used to exploit user data through web page inputs by injecting SQL commands as statements. Malicious users can use these statements to manipulate the application's web server, and the injected code might destroy your database. SQL injection is one of the most common web hacking techniques. For more details, please read the SQL | Injection article.

63. How to copy tables in SQL?
Sometimes, in SQL, we need to create an exact copy of an already defined (or created) table. MySQL enables you to perform this operation, because we may need such duplicate tables for testing the data without having any impact on the original table and the data stored in it.
CREATE TABLE Clone_1 LIKE Original_table;
For more details, please read the Cloning Table in MySQL article.

64. Can we disable a trigger? If yes, how?
Yes, we can disable a trigger in PL/SQL.
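In Oracle, for example, this is done with ALTER TRIGGER and ALTER TABLE (the trigger and table names below are hypothetical):

```sql
-- Disable one trigger
ALTER TRIGGER trg_audit_orders DISABLE;

-- Disable every trigger on a table at once
ALTER TABLE Orders DISABLE ALL TRIGGERS;

-- Re-enable afterwards
ALTER TRIGGER trg_audit_orders ENABLE;
```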
Consider temporarily disabling a trigger when one of the following conditions is true:
An object that the trigger references is not available.
We must perform a large data load and want it to proceed quickly without firing triggers.
We are loading data into the table to which the trigger applies.
We disable a trigger using the ALTER TRIGGER statement with the DISABLE option. We can disable all triggers associated with a table at the same time using the ALTER TABLE statement with the DISABLE ALL TRIGGERS option.

65. What is a Live Lock?
Livelock occurs when two or more processes continually repeat the same interaction in response to changes in the other processes without doing any useful work. These processes are not in the waiting state; they are running concurrently. This is different from a deadlock, because in a deadlock all processes are in the waiting state.

66. How do we avoid getting duplicate entries in a query without using the DISTINCT keyword?
DISTINCT is useful in certain circumstances, but it has the drawback that it can increase the load on the query engine to perform the sort (since it needs to compare the result set to itself to remove duplicates). We can remove duplicate entries using the following options:
Remove duplicates using row numbers.
Remove duplicates using self-join.
Remove duplicates using GROUP BY.
For more details, please read the SQL | Remove duplicates without distinct article.

67. What is the difference between the NVL and NVL2 functions?
These functions work with any data type and pertain to the use of null values in the expression list. They are all single-row functions, i.e., they provide one result per row.
NVL(expr1, expr2): In SQL, NVL() converts a null value to an actual value. Data types that can be used are date, character, and number. The data types must match each other, i.e., expr1 and expr2 must be of the same data type.
Syntax: NVL(expr1, expr2)
NVL2(expr1, expr2, expr3): The NVL2 function examines the first expression.
If the first expression is not null, then the NVL2 function returns the second expression; if the first expression is null, then the third expression is returned. That is, if expr1 is not null, NVL2 returns expr2; if expr1 is null, NVL2 returns expr3. The argument expr1 can have any data type.
Syntax: NVL2(expr1, expr2, expr3)
For more details please read the SQL general functions | NVL, NVL2, DECODE, COALESCE, NULLIF, LNNVL, and NANVL article.

68. What is CASE WHEN in SQL?
Control statements form an important part of most languages, since they control the execution of other sets of statements. These are found in SQL too and should be exploited for uses such as query filtering and query optimization through careful selection of tuples that match our requirements. The CASE statement is SQL's way of handling if/then logic.
Syntax 1:
CASE case_value
    WHEN when_value THEN statement_list
    [WHEN when_value THEN statement_list] ...
    [ELSE statement_list]
END CASE
Syntax 2:
CASE
    WHEN search_condition THEN statement_list
    [WHEN search_condition THEN statement_list] ...
    [ELSE statement_list]
END CASE
For more details, please read the SQL | Case Statement article.

69. What is the difference between COALESCE() and ISNULL()?
COALESCE(): The COALESCE function in SQL returns the first non-NULL expression among its arguments. If all the expressions evaluate to null, then the COALESCE function returns null.
Syntax: SELECT column(s), COALESCE(expression_1, ..., expression_n) FROM table_name;
ISNULL(): The ISNULL function has different uses in SQL Server and MySQL. In SQL Server, the ISNULL() function is used to replace NULL values.
Syntax: SELECT column(s), ISNULL(column_name, value_to_replace) FROM table_name;
For more details, please read the SQL | Null functions article.

70. Name the operator which is used in the query for appending two strings?
In SQL, the concatenation operator is used for appending two strings, and its symbol is "||".
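For example (Oracle-style syntax; MySQL uses the CONCAT() function instead, and SQL Server uses +):

```sql
SELECT 'Geeks' || 'forGeeks' AS Full_Name FROM dual;
-- Full_Name: GeeksforGeeks
```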
[ { "code": null, "e": 52, "s": 24, "text": "\n18 Apr, 2022" }, { "code": null, "e": 553, "s": 52, "text": "1. What is SQL? SQL stands for Structured Query Language. It is a language used to interact with the database, i.e to create a database, to create a table in the database, to retrieve data or update a table in the database, etc. SQL is an ANSI(American National Standards Institute) standard. Using SQL, we can do many things. For example – we can execute queries, we can insert records into a table, we can update records, we can create a database, we can create a table, we can delete a table, etc." }, { "code": null, "e": 966, "s": 553, "text": "2. What is a database? A Database is defined as a structured form of data storage in a computer or collection of data in an organized manner and can be accessed in various ways. It is also the collection of schemas, tables, queries, views, etc. Databases help us with easily storing, accessing, and manipulating data held on a computer. The Database Management System allows a user to interact with the database." }, { "code": null, "e": 1364, "s": 966, "text": "3. Does SQL support programming language features? It is true that SQL is a language, but it does not support programming as it is not a programming language, it is a command language. We do not have conditional statements in SQL like for loops or if..else, we only have commands which we can use to query, update, delete, etc. data in the database. SQL allows us to manipulate data in a database." }, { "code": null, "e": 1490, "s": 1364, "text": "4. What are the differences between SQL and PL/SQL? Ans: Some common differences between SQL and PL/SQL are as shown below: " }, { "code": null, "e": 1650, "s": 1490, "text": "5. What is the difference between BETWEEN and IN operators in SQL? BETWEEN The BETWEEN operator is used to fetch rows based on a range of values. 
For example, " }, { "code": null, "e": 1707, "s": 1650, "text": "SELECT * FROM Students \nWHERE ROLL_NO BETWEEN 20 AND 30;" }, { "code": null, "e": 1920, "s": 1707, "text": "This query will select all those rows from the table. Students where the value of the field ROLL_NO lies between 20 and 30. IN The IN operator is used to check for values contained in specific sets. For example, " }, { "code": null, "e": 1973, "s": 1920, "text": "SELECT * FROM Students \nWHERE ROLL_NO IN (20,21,23);" }, { "code": null, "e": 2098, "s": 1973, "text": "This query will select all those rows from the table Students where the value of the field ROLL_NO is either 20 or 21 or 23." }, { "code": null, "e": 2339, "s": 2098, "text": "6. Write an SQL query to find names of employees start with β€˜A’? The LIKE operator of SQL is used for this purpose. It is used to fetch filtered data by searching for a particular pattern in the where clause. The Syntax for using LIKE is, " }, { "code": null, "e": 2515, "s": 2339, "text": "SELECT column1,column2 FROM table_name WHERE column_name LIKE pattern; \n\nLIKE: operator name\npattern: exact value extracted from the pattern to get related data in\nresult set." }, { "code": null, "e": 2539, "s": 2515, "text": "The required query is: " }, { "code": null, "e": 2589, "s": 2539, "text": "SELECT * FROM Employees WHERE EmpName like 'A%' ;" }, { "code": null, "e": 2671, "s": 2589, "text": "You may refer to this article WHERE clause for more details on the LIKE operator." }, { "code": null, "e": 3226, "s": 2671, "text": "7. What is the difference between CHAR and VARCHAR2 datatype in SQL? Both of these data types are used for characters, but varchar2 is used for character strings of variable length, whereas char is used for character strings of fixed length. 
For example, if we specify the type as char(5) then we will not be allowed to store a string of any other length in this variable, but if we specify the type of this variable as varchar2(5) then we will be allowed to store strings of variable length. We can store a string of length 3 or 4 or 2 in this variable." }, { "code": null, "e": 3381, "s": 3226, "text": "8. Name different types of case manipulation functions available in SQL. There are three types of case manipulation functions available in SQL. They are, " }, { "code": null, "e": 3555, "s": 3381, "text": "LOWER: The purpose of this function is to return the string in lowercase. It takes a string as an argument and returns the string by converting it into lower case. Syntax: " }, { "code": null, "e": 3571, "s": 3555, "text": "LOWER('string')" }, { "code": null, "e": 3742, "s": 3571, "text": "UPPER: The purpose of this function is to return the string in uppercase. It takes a string as an argument and returns the string by converting it into uppercase. Syntax:" }, { "code": null, "e": 3758, "s": 3742, "text": "UPPER('string')" }, { "code": null, "e": 3907, "s": 3758, "text": "INITCAP: The purpose of this function is to return the string with the first letter in uppercase and the rest of the letters in lowercase. Syntax: " }, { "code": null, "e": 3925, "s": 3907, "text": "INITCAP('string')" }, { "code": null, "e": 4110, "s": 3925, "text": "9. What do you mean by data definition language? Data definition language or DDL allows to execution of queries like CREATE, DROP and ALTER. That is those queries that define the data." }, { "code": null, "e": 4303, "s": 4110, "text": "10. What do you mean by data manipulation language? Data manipulation Language or DML is used to access or manipulate data in the database. 
It allows us to perform the below-listed functions: " }, { "code": null, "e": 4337, "s": 4303, "text": "Insert data or rows in a database" }, { "code": null, "e": 4367, "s": 4337, "text": "Delete data from the database" }, { "code": null, "e": 4390, "s": 4367, "text": "Retrieve or fetch data" }, { "code": null, "e": 4417, "s": 4390, "text": "Update data in a database." }, { "code": null, "e": 4749, "s": 4417, "text": "11. What is the difference between primary key and unique constraints? The primary key cannot have NULL values, the unique constraints can have NULL values. There is only one primary key in a table, but there can be multiple unique constraints. The primary key creates the clustered index automatically but the unique key does not." }, { "code": null, "e": 4779, "s": 4749, "text": "12. What is the view in SQL? " }, { "code": null, "e": 5083, "s": 4779, "text": "Views in SQL are a kind of virtual table. A view also has rows and columns as they are on a real table in the database. We can create a view by selecting fields from one or more tables present in the database. A View can either have all the rows of a table or specific rows based on certain conditions. " }, { "code": null, "e": 5145, "s": 5083, "text": "The CREATE VIEW statement of SQL is used for creating views. " }, { "code": null, "e": 5160, "s": 5145, "text": "Basic Syntax: " }, { "code": null, "e": 5343, "s": 5160, "text": "CREATE VIEW view_name AS\nSELECT column1, column2.....\nFROM table_name\nWHERE condition;\n\nview_name: Name for the View\ntable_name: Name of the table\ncondition: Condition to select rows" }, { "code": null, "e": 5421, "s": 5343, "text": "For more details on how to create and use view, please refer to this article." }, { "code": null, "e": 5773, "s": 5421, "text": "13. What do you mean by foreign key? A Foreign key is a field that can uniquely identify each row in another table. And this constraint is used to specify a field as a Foreign key. 
That is this field points to the primary key of another table. This usually creates a kind of link between the two tables. Consider the two tables as shown below: Orders" }, { "code": null, "e": 5783, "s": 5773, "text": "Customers" }, { "code": null, "e": 6012, "s": 5783, "text": "As we can see clearly, that the field C_ID in the Orders table is the primary key in the Customers’ table, i.e. it uniquely identifies each row in the Customers table. Therefore, it is a Foreign Key in the Orders table. Syntax: " }, { "code": null, "e": 6154, "s": 6012, "text": "CREATE TABLE Orders\n(\nO_ID int NOT NULL,\nORDER_NO int NOT NULL,\nC_ID int,\nPRIMARY KEY (O_ID),\nFOREIGN KEY (C_ID) REFERENCES Customers(C_ID)\n)" }, { "code": null, "e": 6358, "s": 6154, "text": "14. What is a join in SQL? What are the types of joins? An SQL Join statement is used to combine data or rows from two or more tables based on a common field between them. Different types of Joins are: " }, { "code": null, "e": 6632, "s": 6358, "text": "INNER JOIN: The INNER JOIN keyword selects all rows from both the tables as long as the condition satisfies. This keyword will create the result-set by combining all rows from both the tables where the condition satisfies i.e the value of the common field will be the same." }, { "code": null, "e": 6918, "s": 6632, "text": "LEFT JOIN: This join returns all the rows of the table on the left side of the join and matching rows for the table on the right side of the join. For the rows for which there is no matching row on the right side, the result-set will be null. LEFT JOIN is also known as LEFT OUTER JOIN" }, { "code": null, "e": 7248, "s": 6918, "text": "RIGHT JOIN: RIGHT JOIN is similar to LEFT JOIN. This join returns all the rows of the table on the right side of the join and matching rows for the table on the left side of the join. For the rows for which there is no matching row on the left side, the result-set will contain null. 
RIGHT JOIN is also known as RIGHT OUTER JOIN." }, { "code": null, "e": 7492, "s": 7248, "text": "FULL JOIN: FULL JOIN creates the result-set by combining results of both LEFT JOIN and RIGHT JOIN. The result-set will contain all the rows from both tables. For the rows for which there is no matching, the result-set will contain NULL values." }, { "code": null, "e": 8072, "s": 7492, "text": "15. What is an index? A database index is a data structure that improves the speed of data retrieval operations on a database table at the cost of additional writes and the use of more storage space to maintain the extra copy of data. Data can be stored only in one order on a disk. To support faster access according to different values, a faster search like binary search for different values is desired. For this purpose, indexes are created on tables. These indexes need extra space on the disk, but they allow faster search according to different frequently searched values." }, { "code": null, "e": 8101, "s": 8072, "text": "16. What is table and Field?" }, { "code": null, "e": 8299, "s": 8101, "text": "Table: A table has a combination of rows and columns. Rows are called records and columns are called fields. In MS SQL Server, the tables are being designated within the database and schema names. " }, { "code": null, "e": 8397, "s": 8299, "text": "Field: In DBMS, a database field can be defined as – a single piece of information from a record." }, { "code": null, "e": 8424, "s": 8397, "text": "17. What is a primary key?" }, { "code": null, "e": 8608, "s": 8424, "text": "A Primary Key is one of the candidate keys. One of the candidate keys is selected as most important and becomes the primary key. There cannot be more than one primary key in a table. " }, { "code": null, "e": 8642, "s": 8608, "text": "18. What is a Default constraint?" }, { "code": null, "e": 8868, "s": 8642, "text": "The DEFAULT constraint is used to fill a column with default and fixed values. 
The value will be added to all new records when no other value is provided. For more details please refer to the SQL | Default Constraint article." }, { "code": null, "e": 8910, "s": 8868, "text": "19. What is On Delete cascade constraint?" }, { "code": null, "e": 9140, "s": 8910, "text": "An ‘ON DELETE CASCADE’ constraint is used in MySQL to delete the rows from the child table automatically when the rows from the parent table are deleted. For more details, please read MySQL – On Delete Cascade constraint article." }, { "code": null, "e": 9167, "s": 9140, "text": "20. What is normalization?" }, { "code": null, "e": 9544, "s": 9167, "text": "It is a process of analyzing the given relation schemas based on their functional dependencies and primary keys to achieve the following desirable properties: 1) Minimizing Redundancy 2) Minimizing the Insertion, Deletion, and Update Anomalies. Relation schemas that do not meet the properties are decomposed into smaller relation schemas that could meet desirable properties. " }, { "code": null, "e": 9573, "s": 9544, "text": "21. What is Denormalization?" }, { "code": null, "e": 9883, "s": 9573, "text": "Denormalization is a database optimization technique in which we add redundant data to one or more tables. This can help us avoid costly joins in a relational database. Note that denormalization does not mean not doing normalization. It is an optimization technique that is applied after doing normalization. " }, { "code": null, "e": 10078, "s": 9883, "text": "In a traditional normalized database, we store data in separate logical tables and attempt to minimize redundant data. We may strive to have only one copy of each piece of data in the database. " }, { "code": null, "e": 10111, "s": 10078, "text": "22. Explain WITH clause in SQL?"
}, { "code": null, "e": 10383, "s": 10111, "text": "The WITH clause provides a way relationship of defining a temporary relationship whose definition is available only to the query in which the with clause occurs. SQL applies predicates in the WITH clause after groups have been formed, so aggregate functions may be used. " }, { "code": null, "e": 10437, "s": 10383, "text": "23. What are all the different attributes of indexes?" }, { "code": null, "e": 10474, "s": 10437, "text": "The indexing has various attributes:" }, { "code": null, "e": 10569, "s": 10474, "text": "Access Types: This refers to the type of access such as value-based search, range access, etc." }, { "code": null, "e": 10665, "s": 10569, "text": "Access Time: It refers to the time needed to find a particular data element or set of elements." }, { "code": null, "e": 10760, "s": 10665, "text": "Insertion Time: It refers to the time taken to find the appropriate space and insert new data." }, { "code": null, "e": 10858, "s": 10760, "text": "Deletion Time: Time is taken to find an item and delete it as well as update the index structure." }, { "code": null, "e": 10931, "s": 10858, "text": "Space Overhead: It refers to the additional space required by the index." }, { "code": null, "e": 10953, "s": 10931, "text": "24. What is a Cursor?" }, { "code": null, "e": 11151, "s": 10953, "text": "The cursor is a Temporary Memory or Temporary Work Station. It is Allocated by Database Server at the Time of Performing DML operations on Table by User. Cursors are used to store Database Tables. " }, { "code": null, "e": 11205, "s": 11151, "text": "25. Write down various types of relationships in SQL?" }, { "code": null, "e": 11246, "s": 11205, "text": "There are various relationships, namely:" }, { "code": null, "e": 11271, "s": 11246, "text": "One-to-One Relationship." }, { "code": null, "e": 11298, "s": 11271, "text": "One to Many Relationships." 
}, { "code": null, "e": 11324, "s": 11298, "text": "Many to One Relationship." }, { "code": null, "e": 11355, "s": 11324, "text": "Self-Referencing Relationship." }, { "code": null, "e": 11376, "s": 11355, "text": "26. What is a query?" }, { "code": null, "e": 11774, "s": 11376, "text": "An SQL query is used to retrieve the required data from the database. However, there may be multiple SQL queries that yield the same results but with different levels of efficiency. An inefficient query can drain the database resources, reduce the database speed or result in a loss of service for other users. So it is very important to optimize the query to obtain the best database performance." }, { "code": null, "e": 11796, "s": 11774, "text": "27. What is subquery?" }, { "code": null, "e": 11982, "s": 11796, "text": "In SQL a Subquery can be simply defined as a query within another query. In other words, we can say that a Subquery is a query that is embedded in the WHERE clause of another SQL query." }, { "code": null, "e": 12037, "s": 11982, "text": "28. What are the different operators available in SQL?" }, { "code": null, "e": 12088, "s": 12037, "text": "There are three operators available in SQL namely:" }, { "code": null, "e": 12146, "s": 12088, "text": "Arithmetic OperatorsLogical OperatorsComparison Operators" }, { "code": null, "e": 12167, "s": 12146, "text": "Arithmetic Operators" }, { "code": null, "e": 12185, "s": 12167, "text": "Logical Operators" }, { "code": null, "e": 12206, "s": 12185, "text": "Comparison Operators" }, { "code": null, "e": 12229, "s": 12206, "text": "29. What is a trigger?" }, { "code": null, "e": 12617, "s": 12229, "text": "Trigger is a statement that a system executes automatically when there is any modification to the database. In a trigger, we first specify when the trigger is to be executed and then the action to be performed when the trigger executes. 
Triggers are used to specify certain integrity constraints and referential constraints that cannot be specified using the constraint mechanism of SQL." }, { "code": null, "e": 12682, "s": 12617, "text": "30. What is the difference between DELETE and TRUNCATE commands?" }, { "code": null, "e": 12745, "s": 12682, "text": "31. What are local and global variables and their differences?" }, { "code": null, "e": 12762, "s": 12745, "text": "Global Variable:" }, { "code": null, "e": 12968, "s": 12762, "text": "Global variables are variables that are defined outside of functions. These variables have global scope, so they can be used by any function without passing them to the function as parameters." }, { "code": null, "e": 12984, "s": 12968, "text": "Local Variable:" }, { "code": null, "e": 13150, "s": 12984, "text": "In contrast, local variables are variables that are defined within functions. They have local scope, which means that they can only be used within the functions that define them." }, { "code": null, "e": 13176, "s": 13150, "text": "32. What is a constraint?" }, { "code": null, "e": 13436, "s": 13176, "text": "Constraints are the rules that we can apply to the type of data in a table. That is, we can specify the limit on the type of data that can be stored in a particular column in a table using constraints. For more details please refer to SQL | Constraints article." }, { "code": null, "e": 13464, "s": 13436, "text": "33. What is Data Integrity?" }, { "code": null, "e": 14009, "s": 13464, "text": "Data integrity means that the data contained in the database is both correct and consistent. For this purpose, the data stored in the database must satisfy certain types of procedures (rules). DBMS provides different ways to implement such types of constraints (rules). This improves data integrity in a database. 
For more details please refer to the Difference between data security and data integrity article." }, { "code": null, "e": 14037, "s": 14009, "text": "34. What is Auto Increment?" }, { "code": null, "e": 14560, "s": 14037, "text": "Sometimes, while creating a table, we do not have a unique identifier within the table, hence we face difficulty in choosing Primary Key. To resolve such an issue, we would have to manually provide unique keys to every record, but this is often a tedious task. So we can use the Auto-Increment feature that automatically generates a numerical Primary key value for every new record inserted. The Auto Increment feature is supported by many databases. For more details please refer SQL Auto Increment article." }, { "code": null, "e": 14626, "s": 14560, "text": "35. What is the difference between Cluster and Non-Cluster Index?" }, { "code": null, "e": 14723, "s": 14626, "text": "For more details please refer Difference between Clustered index and Non-Clustered index article." }, { "code": null, "e": 14752, "s": 14723, "text": "36. What is MySQL collation?" }, { "code": null, "e": 15149, "s": 14752, "text": "A MySQL collation is a well-defined set of rules which are used to compare characters of a particular character set by using their corresponding encoding. Each character set in MySQL might have more than one collation, and has, at least, one default collation. Two character sets cannot have the same collation. For more details please refer What is collation and character set in MySQL? article." }, { "code": null, "e": 15186, "s": 15149, "text": "37. What are user-defined functions?" }, { "code": null, "e": 15421, "s": 15186, "text": "We can use User-defined functions in PL/SQL or Java to provide functionality that is not available in SQL or SQL built-in functions. SQL functions and User-defined functions can appear anywhere, that is, wherever an expression occurs."
}, { "code": null, "e": 15453, "s": 15421, "text": "For example, it can be used in:" }, { "code": null, "e": 15489, "s": 15453, "text": "Select a list of SELECT statements." }, { "code": null, "e": 15516, "s": 15489, "text": "Condition of WHERE clause." }, { "code": null, "e": 15563, "s": 15516, "text": "CONNECT BY, ORDER BY, START WITH, and GROUP BY" }, { "code": null, "e": 15606, "s": 15563, "text": "The VALUES clause of the INSERT statement." }, { "code": null, "e": 15646, "s": 15606, "text": "The SET clause of the UPDATE statement." }, { "code": null, "e": 15696, "s": 15646, "text": "38. What are all types of user-defined functions?" }, { "code": null, "e": 15919, "s": 15696, "text": "User-Defined Functions allow people to define their own T-SQL functions that can accept 0 or more parameters and return a single scalar data value or a table data type.Different Kinds of User-Defined Functions created are:" }, { "code": null, "e": 16254, "s": 15919, "text": "1. Scalar User-Defined Function A Scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages. You pass in 0 to many parameters and you get a return value." }, { "code": null, "e": 16585, "s": 16254, "text": "2. Inline Table-Value User-Defined Function An Inline Table-Value user-defined function returns a table data type and is an exceptional alternative to a view as the user-defined function can pass parameters into a T-SQL select command and, in essence, provide us with a parameterized, non-updateable view of the underlying tables." }, { "code": null, "e": 17388, "s": 16585, "text": "3. 
Multi-statement Table-Value User-Defined Function A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result where the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL select command or a group of them gives us the capability to, in essence, create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command, you must define the table structure that is being returned. After creating this type of user-defined function, it can be used in the FROM clause of a T-SQL command, unlike the behavior found when using a stored procedure which can also return record sets." }, { "code": null, "e": 17465, "s": 17388, "text": "39. Name the function which is used to remove spaces at the end of a string?" }, { "code": null, "e": 17540, "s": 17465, "text": "In SQL, the spaces at the end of a string are removed by the TRIM function (RTRIM removes only the trailing spaces)." }, { "code": null, "e": 17595, "s": 17540, "text": "Syntax:\n Trim(s)\n Where s is any string." }, { "code": null, "e": 17628, "s": 17595, "text": "40. What is a stored procedure?" }, { "code": null, "e": 17934, "s": 17628, "text": "Stored Procedures are created to perform one or more DML operations on databases. It is nothing but a group of SQL statements that accepts some input in the form of parameters, performs some task and may or may not return a value. For more details please refer what is Stored procedures in SQL article." }, { "code": null, "e": 17983, "s": 17934, "text": "41. What are Union, Minus and Intersect commands?" }, { "code": null, "e": 18139, "s": 17983, "text": "Set Operations in SQL eliminate duplicate tuples and can be applied only to the relations which are union compatible. 
Set Operations available in SQL are:" }, { "code": null, "e": 18149, "s": 18139, "text": "Set Union" }, { "code": null, "e": 18166, "s": 18149, "text": "Set Intersection" }, { "code": null, "e": 18181, "s": 18166, "text": "Set Difference" }, { "code": null, "e": 18371, "s": 18181, "text": "UNION Operation: This operation includes all the tuples which are present in either of the relations. For example: To find all the customers who have a loan or an account or both in a bank." }, { "code": null, "e": 18453, "s": 18371, "text": " SELECT CustomerName FROM Depositor \n UNION \n SELECT CustomerName FROM Borrower ;" }, { "code": null, "e": 18603, "s": 18453, "text": "The union operation automatically eliminates duplicates. If all the duplicates are supposed to be retained, UNION ALL is used in the place of UNION. " }, { "code": null, "e": 18789, "s": 18603, "text": "INTERSECT Operation: This operation includes the tuples which are present in both of the relations. For example: To find the customers who have a loan as well as an account in the bank:" }, { "code": null, "e": 18874, "s": 18789, "text": " SELECT CustomerName FROM Depositor \n INTERSECT\n SELECT CustomerName FROM Borrower ;" }, { "code": null, "e": 19036, "s": 18874, "text": "The Intersect operation automatically eliminates duplicates. If all the duplicates are supposed to be retained, INTERSECT ALL is used in the place of INTERSECT. " }, { "code": null, "e": 19249, "s": 19036, "text": "EXCEPT Operation: This operation includes tuples that are present in one relation but not present in the other relation. For example: To find the customers who have an account but no loan at the bank:" }, { "code": null, "e": 19331, "s": 19249, "text": " SELECT CustomerName FROM Depositor \n EXCEPT\n SELECT CustomerName FROM Borrower ;" }, { "code": null, "e": 19487, "s": 19331, "text": "The Except operation automatically eliminates the duplicates. 
If all the duplicates are supposed to be retained, EXCEPT ALL is used in the place of EXCEPT." }, { "code": null, "e": 19517, "s": 19487, "text": "42. What is an ALIAS command?" }, { "code": null, "e": 19746, "s": 19517, "text": "Aliases are the temporary names given to a table or column for the purpose of a particular SQL query. An alias is used when the name of a column or table is used as something other than its original name, but the modified name is only temporary." }, { "code": null, "e": 19811, "s": 19746, "text": "Aliases are created to make table or column names more readable." }, { "code": null, "e": 19912, "s": 19811, "text": "The renaming is just a temporary change and the table name does not change in the original database." }, { "code": null, "e": 19988, "s": 19912, "text": "Aliases are useful when table or column names are big or not very readable." }, { "code": null, "e": 20067, "s": 19988, "text": "These are preferred when there is more than one table involved in a query." }, { "code": null, "e": 20119, "s": 20067, "text": "For more details, please read SQL | Aliases article." }, { "code": null, "e": 20184, "s": 20119, "text": "43. What is the difference between TRUNCATE and DROP statements?" }, { "code": null, "e": 20276, "s": 20184, "text": "For more details, please read the Difference between DROP and TRUNCATE in the SQL article." }, { "code": null, "e": 20321, "s": 20276, "text": "44. What are aggregate and scalar functions?" }, { "code": null, "e": 20524, "s": 20321, "text": "For doing operations on data, SQL has many built-in functions. They are categorized into two categories and further sub-categorized into seven different functions under each category. The categories are:" }, { "code": null, "e": 20648, "s": 20524, "text": "Aggregate functions: These functions operate on the values of a column, and a single value is returned."
}, { "code": null, "e": 20739, "s": 20648, "text": "Scalar functions:These functions are based on user input, these too return a single value." }, { "code": null, "e": 20831, "s": 20739, "text": "For more details, please read the SQL | Functions (Aggregate and Scalar Functions) article." }, { "code": null, "e": 20891, "s": 20831, "text": "45. Which operator is used in queries for pattern matching?" }, { "code": null, "e": 20999, "s": 20891, "text": "LIKE operator: It is used to fetch filtered data by searching for a particular pattern in the where clause." }, { "code": null, "e": 21101, "s": 20999, "text": "Syntax:\n\nSELECT column1,column2 FROM table_name WHERE column_name LIKE pattern;\n\nLIKE: operator name " }, { "code": null, "e": 21140, "s": 21101, "text": "46. Define SQL Order by the statement?" }, { "code": null, "e": 21271, "s": 21140, "text": "The ORDER BY statement in SQL is used to sort the fetched data in either ascending or descending according to one or more columns." }, { "code": null, "e": 21326, "s": 21271, "text": "By default ORDER BY sorts the data in ascending order." }, { "code": null, "e": 21439, "s": 21326, "text": "We can use the keyword DESC to sort the data in descending order and the keyword ASC to sort in ascending order." }, { "code": null, "e": 21492, "s": 21439, "text": "For more details please read SQL | ORDER BY article." }, { "code": null, "e": 21526, "s": 21492, "text": "47. Explain SQL Having statement?" }, { "code": null, "e": 21853, "s": 21526, "text": "HAVING is used to specify a condition for a group or an aggregate function used in the select statement. The WHERE clause selects before grouping. The HAVING clause selects rows after grouping. Unlike the HAVING clause, the WHERE clause cannot contain aggregate functions. (See this for examples). See Having vs Where Clause? " }, { "code": null, "e": 21900, "s": 21853, "text": "48. Explain SQL AND OR statement with example?" 
}, { "code": null, "e": 22012, "s": 21900, "text": "In SQL, the AND & OR operators are used for filtering the data and getting precise results based on conditions." }, { "code": null, "e": 22069, "s": 22012, "text": "The AND and OR operators are used with the WHERE clause." }, { "code": null, "e": 22123, "s": 22069, "text": "These two operators are called conjunctive operators." }, { "code": null, "e": 22250, "s": 22123, "text": "AND Operator: This operator displays only those records where both the conditions condition1 and condition2 evaluates to True." }, { "code": null, "e": 22436, "s": 22250, "text": "OR Operator: This operator displays the records where either one of the conditions condition1 and condition2 evaluates to True. That is, either condition1 is True or condition2 is True." }, { "code": null, "e": 22505, "s": 22436, "text": "For more details please read the SQL | AND and OR operators article." }, { "code": null, "e": 22543, "s": 22505, "text": "49. Define BETWEEN statements in SQL?" }, { "code": null, "e": 22869, "s": 22543, "text": "The SQL BETWEEN condition allows you to easily test if an expression is within a range of values (inclusive). The values can be text, date, or numbers. It can be used in a SELECT, INSERT, UPDATE, or DELETE statement. The SQL BETWEEN Condition will return the records where expression is within the range of value1 and value2." }, { "code": null, "e": 22934, "s": 22869, "text": "For more details please read SQL | Between & I operator article." }, { "code": null, "e": 22982, "s": 22934, "text": "50. Why do we use Commit and Rollback command?" }, { "code": null, "e": 23070, "s": 22982, "text": "For more details please read the Difference between Commit and Rollback in SQL article." }, { "code": null, "e": 23100, "s": 23070, "text": "51. What are ACID properties?" }, { "code": null, "e": 23626, "s": 23100, "text": "A transaction is a single logical unit of work that accesses and possibly modifies the contents of a database. 
Transactions access data using read and write operations. In order to maintain consistency in a database, before and after the transaction, certain properties are followed. These are called ACID properties. ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties that guarantee that database transactions are processed reliably. For more details please read ACID properties in the DBMS article." }, { "code": null, "e": 23647, "s": 23626, "text": "52. What is a T-SQL?" }, { "code": null, "e": 24155, "s": 23647, "text": "T-SQL is an abbreviation for Transact Structured Query Language. It is a product by Microsoft and is an extension of the SQL language which is used to interact with relational databases. It is considered to perform best with Microsoft SQL servers. T-SQL statements are used to perform transactions on the databases. T-SQL has huge importance since all the communications with an instance of an SQL server are done by sending Transact-SQL statements to the server. Users can also define functions using T-SQL." }, { "code": null, "e": 24186, "s": 24155, "text": "Types of T-SQL functions are:" }, { "code": null, "e": 24207, "s": 24186, "text": "Aggregate functions." }, { "code": null, "e": 24274, "s": 24207, "text": "Ranking functions. There are different types of ranking functions." }, { "code": null, "e": 24291, "s": 24274, "text": "Rowset functions." }, { "code": null, "e": 24309, "s": 24291, "text": "Scalar functions." }, { "code": null, "e": 24365, "s": 24309, "text": "53. Are NULL values the same as zero or a blank space? " }, { "code": null, "e": 24553, "s": 24365, "text": "In SQL, a zero or blank space can be compared with another zero or blank space, whereas one NULL may not be equal to another NULL. NULL means data might not be provided or there is no data." }, { "code": null, "e": 24603, "s": 24553, "text": "54. What is the need for group functions in SQL? 
" }, { "code": null, "e": 24832, "s": 24603, "text": "In database management, group functions, also known as an aggregate function, is a function where the values of multiple rows are grouped together as input on certain criteria to form a single value of more significant meaning." }, { "code": null, "e": 24856, "s": 24832, "text": "Various Group Functions" }, { "code": null, "e": 24903, "s": 24856, "text": "1) Count()\n2) Sum()\n3) Avg()\n4) Min()\n5) Max()" }, { "code": null, "e": 24968, "s": 24903, "text": "For more details please read Aggregate functions in SQL article." }, { "code": null, "e": 25012, "s": 24968, "text": "55. What is the need for a MERGE statement?" }, { "code": null, "e": 25543, "s": 25012, "text": "The MERGE command in SQL is actually a combination of three SQL statements: INSERT, UPDATE and DELETE. In simple words, the MERGE statement in SQL provides a convenient way to perform all these three operations together which can be very helpful when it comes to handling large running databases. But unlike INSERT, UPDATE and DELETE statements MERGE statement requires a source table to perform these operations on the required table which is called a target table. For more details please read the SQL | MERGE Statement article." }, { "code": null, "e": 25597, "s": 25543, "text": "56. How can you fetch common records from two tables?" }, { "code": null, "e": 25723, "s": 25597, "text": "The below statement could be used to get data from multiple tables, so, we need to use join to get data from multiple tables." }, { "code": null, "e": 25732, "s": 25723, "text": "Syntax :" }, { "code": null, "e": 25895, "s": 25732, "text": "SELECT tablenmae1.colunmname, tablename2.columnnmae \nFROM tablenmae1 \nJOIN tablename2 \nON tablenmae1.colunmnam = tablename2.columnnmae\nORDER BY columnname; " }, { "code": null, "e": 25986, "s": 25895, "text": "For more details and examples, please read SQL | SELECT data from Multiple tables article." 
}, { "code": null, "e": 26035, "s": 25986, "text": "57. What are the advantages of PL/SQL functions?" }, { "code": null, "e": 26081, "s": 26035, "text": "Advantages of PL / SQL functions as follows: " }, { "code": null, "e": 26302, "s": 26081, "text": "We can make a single call to the database to run a block of statements. Thus, it improves the performance against running SQL multiple times. This will reduce the number of calls between the database and the application." }, { "code": null, "e": 26428, "s": 26302, "text": "We can divide the overall work into small modules which becomes quite manageable, also enhancing the readability of the code." }, { "code": null, "e": 26453, "s": 26428, "text": "It promotes reusability." }, { "code": null, "e": 26670, "s": 26453, "text": "It is secure since the code stays inside the database, thus hiding internal database details from the application(user). The user only makes a call to the PL/SQL functions. Hence, security and data hiding is ensured." }, { "code": null, "e": 26726, "s": 26670, "text": "58. What is the SQL query to display the current date?" }, { "code": null, "e": 26963, "s": 26726, "text": "CURRENT_DATE returns to the current date. This function returns the same value if it is executed more than once in a single statement, which means that the value is fixed, even if there is a long delay between fetching rows in a cursor." }, { "code": null, "e": 26997, "s": 26963, "text": "CURRENT_DATE\n or\nCURRENT DATE" }, { "code": null, "e": 27021, "s": 26997, "text": "59. What is ETL in SQL?" }, { "code": null, "e": 27433, "s": 27021, "text": "ETL is a process in Data Warehousing and it stands for Extract, Transform and Load. It is a process in which an ETL tool extracts the data from various data source systems, transforms it in the staging area, and then finally, loads it into the Data Warehouse system. 
These are three database functions that are incorporated into one tool to pull data out from one database and to put data into another database." }, { "code": null, "e": 27464, "s": 27433, "text": "60. What are Nested Triggers?" }, { "code": null, "e": 27764, "s": 27464, "text": "A trigger can also contain INSERT, UPDATE, and DELETE logic within itself, so when the trigger is fired because of data modification it can also cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger." }, { "code": null, "e": 27831, "s": 27764, "text": "61. How to find the available constraint information in the table?" }, { "code": null, "e": 28157, "s": 27831, "text": "In SQL Server the data dictionary is a set of database tables used to store information about a database’s definition. One can use these data dictionaries to check the constraints on an already existing table and to change them(if possible). For more details please read SQL | Checking Existing Constraint on a table article." }, { "code": null, "e": 28185, "s": 28157, "text": "62. What is SQL injection?" }, { "code": null, "e": 28408, "s": 28185, "text": "SQL injection is a technique used to exploit user data through web page inputs by injecting SQL commands as statements. Basically, these statements can be used to manipulate the application’s web server by malicious users." }, { "code": null, "e": 28486, "s": 28408, "text": "SQL injection is a code injection technique that might destroy your database." }, { "code": null, "e": 28550, "s": 28486, "text": "SQL injection is one of the most common web hacking techniques." }, { "code": null, "e": 28638, "s": 28550, "text": "SQL injection is the placement of malicious code in SQL statements, via web page input." }, { "code": null, "e": 28697, "s": 28638, "text": "For more details, please read the SQL | Injection article." }, { "code": null, "e": 28728, "s": 28697, "text": "63. 
How to copy tables in SQL?" }, { "code": null, "e": 29005, "s": 28728, "text": "Sometimes, in SQL, we need to create an exact copy of an already defined (or created) table. MySQL enables you to perform this operation. Because we may need such duplicate tables for testing the data without having any impact on the original table and the data stored in it. " }, { "code": null, "e": 29061, "s": 29005, "text": "CREATE TABLE Contact List(Clone_1) LIKE Original_table;" }, { "code": null, "e": 29127, "s": 29061, "text": "For more details, Please read Cloning Table in the MySQL article." }, { "code": null, "e": 29170, "s": 29127, "text": "64. Can we disable a trigger? If yes, how?" }, { "code": null, "e": 29301, "s": 29170, "text": " Yes, we can disable a trigger in PL/SQL. If consider temporarily disabling a trigger and one of the following conditions is true:" }, { "code": null, "e": 29357, "s": 29301, "text": "An object that the trigger references is not available." }, { "code": null, "e": 29447, "s": 29357, "text": "We must perform a large data load and want it to proceed quickly without firing triggers." }, { "code": null, "e": 29512, "s": 29447, "text": "We are loading data into the table to which the trigger applies." }, { "code": null, "e": 29592, "s": 29512, "text": "We disable a trigger using the ALTER TRIGGER statement with the DISABLE option." }, { "code": null, "e": 29731, "s": 29592, "text": "We can disable all triggers associated with a table at the same time using the ALTER TABLE statement with the DISABLE ALL TRIGGERS option." }, { "code": null, "e": 29757, "s": 29731, "text": "65. What is a Live Lock?" }, { "code": null, "e": 30094, "s": 29757, "text": "Livelock occurs when two or more processes continually repeat the same interaction in response to changes in the other processes without doing any useful work. These processes are not in the waiting state, and they are running concurrently. 
This is different from a deadlock because in a deadlock all processes are in the waiting state." }, { "code": null, "e": 30188, "s": 30094, "text": "66. How do we avoid getting duplicate entries in a query without using the distinct keyword?" }, { "code": null, "e": 30460, "s": 30188, "text": "DISTINCT is useful in certain circumstances, but it has drawbacks that it can increase the load on the query engine to perform the sort (since it needs to compare the result set to itself to remove duplicates). We can remove duplicate entries using the following options:" }, { "code": null, "e": 30497, "s": 30460, "text": "Remove duplicates using row numbers." }, { "code": null, "e": 30532, "s": 30497, "text": "Remove duplicates using self-Join." }, { "code": null, "e": 30652, "s": 30532, "text": "Remove duplicates using group by. For more details, please read SQL | Remove duplicates without distinct articles. " }, { "code": null, "e": 30703, "s": 30652, "text": "67. The difference between NVL and NVL2 functions?" }, { "code": null, "e": 30873, "s": 30703, "text": "These functions work with any data type and pertain to the use of null values in the expression list. These are all single-row functions i.e. provide one result per row." }, { "code": null, "e": 31101, "s": 30873, "text": "NVL(expr1, expr2): In SQL, NVL() converts a null value to an actual value. Data types that can be used are date, character, and number. Data types must match with each other. i.e. expr1 and expr2 must be of the same data type. " }, { "code": null, "e": 31129, "s": 31101, "text": "Syntax:\n\nNVL (expr1, expr2)" }, { "code": null, "e": 31496, "s": 31129, "text": "NVL2(expr1, expr2, expr3): The NVL2 function examines the first expression. If the first expression is not null, then the NVL2 function returns the second expression. If the first expression is null, then the third expression is returned i.e. If expr1 is not null, NVL2 returns expr2. If expr1 is null, NVL2 returns expr3. 
The argument expr1 can have any data type." }, { "code": null, "e": 31532, "s": 31496, "text": "Syntax:\n\nNVL2 (expr1, expr2, expr3)" }, { "code": null, "e": 31648, "s": 31532, "text": "For more details, please read the SQL general functions | NVL, NVL2, DECODE, COALESCE, NULLIF, LNNVL, and NANVL article." }, { "code": null, "e": 31678, "s": 31648, "text": "68. What is Case WHEN in SQL?" }, { "code": null, "e": 32090, "s": 31678, "text": "Control statements form an important part of most languages since they control the execution of other sets of statements. These are found in SQL too and should be exploited for uses such as query filtering and query optimization through careful selection of tuples that match our requirements. In this post, we explore the Case-Switch statement in SQL. The CASE statement is SQL’s way of handling if/then logic." }, { "code": null, "e": 32237, "s": 32090, "text": "Syntax 1:\nCASE case_value\n    WHEN when_value THEN statement_list\n    [WHEN when_value THEN statement_list] ...\n    [ELSE statement_list]\nEND CASE" }, { "code": null, "e": 32386, "s": 32237, "text": "Syntax 2:\nCASE\n    WHEN search_condition THEN statement_list\n\n    [WHEN search_condition THEN statement_list] ...\n    [ELSE statement_list]\nEND CASE" }, { "code": null, "e": 32450, "s": 32386, "text": "For more details, please read the SQL | Case Statement article." }, { "code": null, "e": 32510, "s": 32450, "text": "69. What is the difference between COALESCE() & ISNULL()? " }, { "code": null, "e": 32699, "s": 32510, "text": "COALESCE(): The COALESCE function in SQL returns the first non-NULL expression among its arguments. If all the expressions evaluate to null, then the COALESCE function will return null. Syntax:" }, { "code": null, "e": 32775, "s": 32699, "text": "SELECT column(s), COALESCE(expression_1,....,expression_n)\nFROM table_name;" }, { "code": null, "e": 32921, "s": 32775, "text": "ISNULL(): The ISNULL function has different uses in SQL Server and MySQL. 
In SQL Server, the ISNULL() function is used to replace NULL values. Syntax:" }, { "code": null, "e": 32994, "s": 32921, "text": "SELECT column(s), ISNULL(column_name, value_to_replace)\nFROM table_name;" }, { "code": null, "e": 33058, "s": 32994, "text": "For more details, please read the SQL | Null functions article." }, { "code": null, "e": 33134, "s": 33058, "text": "70. Name the operator which is used in the query for appending two strings?" }, { "code": null, "e": 33232, "s": 33134, "text": "In SQL, for appending two strings, the \"Concatenation operator\" is used and its symbol is \"||\". " }, { "code": null, "e": 33250, "s": 33234, "text": "varshachoudhary" }, { "code": null, "e": 33264, "s": 33250, "text": "serenesushant" }, { "code": null, "e": 33286, "s": 33264, "text": "placement preparation" }, { "code": null, "e": 33295, "s": 33286, "text": "Articles" }, { "code": null, "e": 33300, "s": 33295, "text": "DBMS" }, { "code": null, "e": 33304, "s": 33300, "text": "SQL" }, { "code": null, "e": 33309, "s": 33304, "text": "DBMS" }, { "code": null, "e": 33313, "s": 33309, "text": "SQL" } ]
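The answers above can be exercised end to end in one short script. This is an illustrative sketch only: it uses Python's sqlite3 with an invented contacts table, and SQLite stands in for the engines the article mixes, so portable equivalents replace Oracle's NVL/NVL2, SQL Server's ISNULL and MySQL's CREATE TABLE ... LIKE.

```python
import sqlite3

# SQLite used as a stand-in engine; table name and data are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")
con.executemany("INSERT INTO contacts VALUES (?, ?)",
                [("amit", None), ("amit", None), ("riya", "555-01")])

# Q63 analogue: clone a table's structure and rows
# (SQLite's counterpart of MySQL's CREATE TABLE ... LIKE)
con.execute("CREATE TABLE contacts_clone AS SELECT * FROM contacts")

# Q66: remove duplicate rows without DISTINCT, using GROUP BY
dedup = con.execute("""SELECT name, phone FROM contacts
                       GROUP BY name, phone ORDER BY name""").fetchall()
print(dedup)            # [('amit', None), ('riya', '555-01')]

# Q67/Q69 analogue: COALESCE returns its first non-NULL argument
# (portable cousin of Oracle's NVL and SQL Server's ISNULL)
filled = con.execute("""SELECT name, COALESCE(phone, 'n/a') FROM contacts
                        GROUP BY name, phone ORDER BY name""").fetchall()
print(filled)           # [('amit', 'n/a'), ('riya', '555-01')]

# Q68: CASE WHEN, SQL's way of handling if/then logic
flags = con.execute("""SELECT name,
                              CASE WHEN phone IS NULL THEN 'missing'
                                   ELSE 'present' END
                       FROM contacts
                       GROUP BY name, phone ORDER BY name""").fetchall()
print(flags)            # [('amit', 'missing'), ('riya', 'present')]

# Q70: the concatenation operator || appends two strings
joined = con.execute("SELECT 'Geeks' || 'forGeeks'").fetchone()[0]
print(joined)           # GeeksforGeeks
```

Note that MySQL itself treats || as logical OR unless the PIPES_AS_CONCAT SQL mode is set; in standard SQL, Oracle and SQLite it concatenates.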
Multiplication Algorithm in Signed Magnitude Representation
21 Aug, 2019 Multiplication of two fixed point binary numbers in signed magnitude representation is done with a process of successive shift and add operations. In the multiplication process we consider successive bits of the multiplier, least significant bit first. If the multiplier bit is 1, the multiplicand is copied down, else 0’s are copied down. The numbers copied down in successive lines are shifted one position to the left from the previous number. Finally the numbers are added and their sum forms the product. The sign of the product is determined from the signs of the multiplicand and multiplier. If they are alike, the sign of the product is positive, else negative. Hardware Implementation : Following components are required for the hardware implementation of the multiplication algorithm : Registers: Two registers B and Q are used to store the multiplicand and multiplier respectively. Register A is used to store the partial product during multiplication. The Sequence Counter register (SC) is used to store the number of bits in the multiplier. Flip Flop: To store the sign bits of the registers we require three flip flops (A sign, B sign and Q sign). Flip flop E is used to store the carry bit generated during partial product addition. Complement and Parallel adder: This hardware unit is used in calculating the partial product, i.e., it performs the additions required.
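The copy-down-and-shift procedure just described can be sketched in a few lines of Python (illustrative only; the article's subject is the hardware version that follows):

```python
# Pencil-and-paper shift-and-add: for each multiplier bit (LSB first),
# copy the multiplicand down shifted left by the bit position if the
# bit is 1, otherwise copy down 0, then add all the partial products.
multiplicand, multiplier = 0b10111, 0b10011          # 23 and 19

partials = [(multiplicand << i) if (multiplier >> i) & 1 else 0
            for i in range(multiplier.bit_length())]
product = sum(partials)

print([bin(p) for p in partials])
print(bin(product))            # 0b110110101, i.e. 23 * 19 = 437
```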
Flowchart of Multiplication: Initially the multiplicand is stored in the B register and the multiplier is stored in the Q register. The signs of registers B (Bs) and Q (Qs) are compared using XOR functionality (i.e., if both the signs are alike, the output of the XOR operation is 0, otherwise 1) and the output is stored in As (the sign of the A register). Note: Initially 0 is assigned to register A and the E flip flop. The sequence counter is initialized with value n, where n is the number of bits in the multiplier. Now the least significant bit of the multiplier is checked. If it is 1, add the content of register A to the multiplicand (register B) and assign the result to register A, with the carry bit in flip flop E. The content of E A Q is shifted to the right by one position, i.e., the content of E is shifted to the most significant bit (MSB) of A and the least significant bit of A is shifted to the most significant bit of Q. If Qn = 0, only the shift right operation on the content of E A Q is performed in a similar fashion. The content of the sequence counter is decremented by 1. Check the content of the sequence counter (SC); if it is 0, end the process and the final product is present in registers A and Q, else repeat the process. Example: Multiplicand = 10111 Multiplier = 10011 Computer Organization & Architecture Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
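The register-level steps above can be simulated directly. This sketch handles magnitudes only (the sign bit would be obtained separately as Bs XOR Qs) and assumes both operands are bit strings of the same length n:

```python
def signed_magnitude_multiply(multiplicand_bits, multiplier_bits):
    """Simulate the shift-and-add flowchart: B holds the multiplicand,
    Q the multiplier, A the partial product, E the carry flip flop and
    SC the sequence counter. Both bit strings must have equal length n."""
    n = len(multiplier_bits)              # SC initialised to n
    B = int(multiplicand_bits, 2)
    Q = int(multiplier_bits, 2)
    A = E = 0
    mask = (1 << n) - 1                   # registers are n bits wide
    for _ in range(n):                    # repeat until SC reaches 0
        if Q & 1:                         # Qn == 1: A <- A + B, carry into E
            total = A + B
            E = (total >> n) & 1
            A = total & mask
        # shift E A Q right one position:
        # E -> MSB of A, LSB of A -> MSB of Q
        Q = ((A & 1) << (n - 1)) | (Q >> 1)
        A = (E << (n - 1)) | (A >> 1)
        E = 0
    return (A << n) | Q                   # final product sits in A and Q

print(bin(signed_magnitude_multiply("10111", "10011")))  # 0b110110101 (= 437 = 23 * 19)
```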
[ { "code": null, "e": 54, "s": 26, "text": "\n21 Aug, 2019" }, { "code": null, "e": 197, "s": 54, "text": "Multiplication of two fixed point binary number in signed magnitude representation is done with process of successive shift and add operation." }, { "code": null, "e": 395, "s": 197, "text": "In the multiplication process we are considering successive bits of the multiplier, least significant bit first.If the multiplier bit is 1, the multiplicand is copied down else 0’s are copied down." }, { "code": null, "e": 559, "s": 395, "text": "The numbers copied down in successive lines are shifted one position to the left from the previous number.Finally numbers are added and their sum form the product." }, { "code": null, "e": 713, "s": 559, "text": "The sign of the product is determined from the sign of the multiplicand and multiplier. If they are alike, sign of the product is positive else negative." }, { "code": null, "e": 834, "s": 713, "text": "Hardware Implementation :Following components are required for the Hardware Implementation of multiplication algorithm :" }, { "code": null, "e": 1370, "s": 834, "text": "Registers:Two Registers B and Q are used to store multiplicand and multiplier respectively.Register A is used to store partial product during multiplication.Sequence Counter register (SC) is used to store number of bits in the multiplier.Flip Flop:To store sign bit of registers we require three flip flops (A sign, B sign and Q sign).Flip flop E is used to store carry bit generated during partial product addition.Complement and Parallel adder:This hardware unit is used in calculating partial product i.e, perform addition required." }, { "code": null, "e": 1609, "s": 1370, "text": "Registers:Two Registers B and Q are used to store multiplicand and multiplier respectively.Register A is used to store partial product during multiplication.Sequence Counter register (SC) is used to store number of bits in the multiplier." 
}, { "code": null, "e": 1788, "s": 1609, "text": "Flip Flop:To store sign bit of registers we require three flip flops (A sign, B sign and Q sign).Flip flop E is used to store carry bit generated during partial product addition." }, { "code": null, "e": 1908, "s": 1788, "text": "Complement and Parallel adder:This hardware unit is used in calculating partial product i.e, perform addition required." }, { "code": null, "e": 1937, "s": 1908, "text": "Flowchart of Multiplication:" }, { "code": null, "e": 3038, "s": 1937, "text": "Initially multiplicand is stored in B register and multiplier is stored in Q register.Sign of registers B (Bs) and Q (Qs) are compared using XOR functionality (i.e., if both the signs are alike, output of XOR operation is 0 unless 1) and output stored in As (sign of A register).Note: Initially 0 is assigned to register A and E flip flop. Sequence counter is initialized with value n, n is the number of bits in the Multiplier.Now least significant bit of multiplier is checked. If it is 1 add the content of register A with Multiplicand (register B) and result is assigned in A register with carry bit in flip flop E. Content of E A Q is shifted to right by one position, i.e., content of E is shifted to most significant bit (MSB) of A and least significant bit of A is shifted to most significant bit of Q.If Qn = 0, only shift right operation on content of E A Q is performed in a similar fashion.Content of Sequence counter is decremented by 1.Check the content of Sequence counter (SC), if it is 0, end the process and the final product is present in register A and Q, else repeat the process." }, { "code": null, "e": 3125, "s": 3038, "text": "Initially multiplicand is stored in B register and multiplier is stored in Q register." 
}, { "code": null, "e": 3468, "s": 3125, "text": "Sign of registers B (Bs) and Q (Qs) are compared using XOR functionality (i.e., if both the signs are alike, output of XOR operation is 0 unless 1) and output stored in As (sign of A register).Note: Initially 0 is assigned to register A and E flip flop. Sequence counter is initialized with value n, n is the number of bits in the Multiplier." }, { "code": null, "e": 3618, "s": 3468, "text": "Note: Initially 0 is assigned to register A and E flip flop. Sequence counter is initialized with value n, n is the number of bits in the Multiplier." }, { "code": null, "e": 4001, "s": 3618, "text": "Now least significant bit of multiplier is checked. If it is 1 add the content of register A with Multiplicand (register B) and result is assigned in A register with carry bit in flip flop E. Content of E A Q is shifted to right by one position, i.e., content of E is shifted to most significant bit (MSB) of A and least significant bit of A is shifted to most significant bit of Q." }, { "code": null, "e": 4094, "s": 4001, "text": "If Qn = 0, only shift right operation on content of E A Q is performed in a similar fashion." }, { "code": null, "e": 4143, "s": 4094, "text": "Content of Sequence counter is decremented by 1." }, { "code": null, "e": 4294, "s": 4143, "text": "Check the content of Sequence counter (SC), if it is 0, end the process and the final product is present in register A and Q, else repeat the process." }, { "code": null, "e": 4303, "s": 4294, "text": "Example:" }, { "code": null, "e": 4344, "s": 4303, "text": "Multiplicand = 10111\nMultiplier = 10011 " }, { "code": null, "e": 4381, "s": 4344, "text": "Computer Organization & Architecture" } ]
How to count values per level in a factor in R
26 May, 2021 In this article, we will discuss how to count the values per level in a given factor in R Programming Language. The summary() method in base R is a generic function used to produce result summaries based on the class of the argument passed. The summary() function produces an output of the frequencies of the values per level of the given factor column of the data frame in R. Summary statistics for each of the variables of this column are returned in a tabular format as the output. The output is concise, clear and easily understood. Example: R set.seed(1) # creating data data_frame <- data.frame(col1 = sample(letters,50,rep=TRUE)) # count of variables summary(data_frame$col1) Output: a b c d e f g h i j k l n o r s t u v w y z 3 2 1 1 3 3 2 1 2 5 1 3 3 3 1 1 3 3 1 2 5 1 The plyr package in R is used for data enhancements and manipulations and can be installed into the working space. The lapply() method in R returns a list of the same length as that of the input vector, where each element is the result of applying the specified function to the corresponding element. This method takes as input the dataframe or lists, and returns a list as the output. Syntax: lapply(vec, FUN) Parameters : vec – The atomic factor type vector to apply the function on FUN – The function to be applied, in this case equivalent to count, to return the frequency of factor levels. The output returns a list where the first component is the factor level and the second column is the frequency of that level. A row number is appended to the beginning of each output row. 
R # importing required libraries library ("plyr") set.seed(1) # creating data data_frame <- data.frame(col1 = sample( letters,50,rep=TRUE)) # counting frequencies of factor # levels lapply(data_frame, count) Output $col1 x freq 1 a 3 2 b 2 3 c 1 4 d 1 5 e 3 6 f 3 7 g 2 8 h 1 9 i 2 10 j 5 11 k 1 12 l 3 13 n 3 14 o 3 15 r 1 16 s 1 17 t 3 18 u 3 19 v 1 20 w 2 21 y 5 22 z 1 The data.table package in R is used to work with tables to access, manipulate and store data. Initially, the data frame is converted to data.table by reference using the setDT() command. This method is very useful while working with large data sets and more observations. Syntax: setDT(df) The keyby attribute is applied over the required column name in order to group the data contained in it. As an index, the .N parameter is used in place of columns to access the number of instances of each particular factor level. The output is a frequency table. The output is returned in the form of a data.table where each row begins with a row number followed by a colon. Example: R # importing required libraries library(data.table) set.seed(1) # creating data data_frame <- data.frame(col1 = sample( letters,50,rep=TRUE)) # counting frequencies of factor # levels setDT(data_frame)[, .N, keyby=col1] Output col1 N 1: a 3 2: b 2 3: c 1 4: d 1 5: e 3 6: f 3 7: g 2 8: h 1 9: i 2 10: j 5 11: k 1 12: l 3 13: n 3 14: o 3 15: r 1 16: s 1 17: t 3 18: u 3 19: v 1 20: w 2 21: y 5 22: z 1 The "dplyr" package is an enhancement of the plyr package which provides a wide range of selection and filter operations to be performed over the data elements. It can be loaded and installed into the working space. The group_by() method in the package is first used to group the data into different categories depending on the different values encountered. The rows belonging to a single value are stacked together. The tally() function behaves similarly to the summarise() function and is used to generate summaries according to the groups made. 
Syntax: df %>% group_by() %>% tally() The output returned is in the form of a tibble, which contains one row per factor level. The columns contain information about the frequencies of the factor levels encountered. This method gives a clear insight into the column types and dimensions of the returned output. However, only ten rows are displayed by default, which can be expanded further to view the others. Example: R # importing required libraries library ("dplyr") set.seed(1) # creating data data_frame <- data.frame(col1 = sample( letters,50,rep=TRUE)) # counting frequencies of factor # levels data_frame %>% group_by(col1) %>% tally() Output # A tibble: 22 x 2 col1 n <fct> <int> 1 a 3 2 b 2 3 c 1 4 d 1 5 e 3 6 f 3 7 g 2 8 h 1 9 i 2 10 j 5 # ... with 12 more rows Picked R Factor-Programs R-Factors R Language R Programs
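As a cross-check outside R, the same per-level counting can be sketched with Python's collections.Counter. This is an illustrative analogue only: Python's random.seed(1) does not reproduce the draws of R's set.seed(1), so the individual counts differ from the outputs above.

```python
import random
from collections import Counter

random.seed(1)                    # NOT equivalent to R's set.seed(1)
letters = "abcdefghijklmnopqrstuvwxyz"
# like sample(letters, 50, rep=TRUE)
col1 = [random.choice(letters) for _ in range(50)]

counts = Counter(col1)            # plays the role of summary() / plyr::count / tally()
for level in sorted(counts):      # one line per factor level, like the outputs above
    print(level, counts[level])
```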
[ { "code": null, "e": 28, "s": 0, "text": "\n26 May, 2021" }, { "code": null, "e": 140, "s": 28, "text": "In this article, we will discuss how to count the values per level in a given factor in R Programming Language." }, { "code": null, "e": 607, "s": 140, "text": "summary() method in base R is a generic function used to produce result summaries of the results of the functions computed based on the class of the argument passed. The summary() function produces an output of the frequencies of the values per level of the given factor column of the data frame in R. A summary statistics for each of the variables of this column is result in a tabular format, as an output. The output is concise and clear to be easily understood. " }, { "code": null, "e": 616, "s": 607, "text": "Example:" }, { "code": null, "e": 618, "s": 616, "text": "R" }, { "code": "set.seed(1) # creating data data_frame <- data.frame(col1 = sample(letters,50,rep=TRUE)) # count of variablessummary(data_frame$col1)", "e": 754, "s": 618, "text": null }, { "code": null, "e": 762, "s": 754, "text": "Output:" }, { "code": null, "e": 808, "s": 762, "text": "a b c d e f g h i j k l n o r s t u v w y z " }, { "code": null, "e": 853, "s": 808, "text": "3 2 1 1 3 3 2 1 2 5 1 3 3 3 1 1 3 3 1 2 5 1 " }, { "code": null, "e": 976, "s": 853, "text": "The plyr package in R is used to simulate data enhancements and manipulations and can be installed into the working space." }, { "code": null, "e": 1248, "s": 976, "text": "lapply() method in R returns a list of the same length as that of the input vector where each element is the result of application of the function specified to that corresponding element. This method takes as input the dataframe or lists, and returns list as the output. 
" }, { "code": null, "e": 1256, "s": 1248, "text": "Syntax:" }, { "code": null, "e": 1273, "s": 1256, "text": "lapply(vec, FUN)" }, { "code": null, "e": 1287, "s": 1273, "text": "Parameters : " }, { "code": null, "e": 1348, "s": 1287, "text": "vec – The atomic factor type vector to apply the function on" }, { "code": null, "e": 1468, "s": 1348, "text": "FUN – The function to be applied, in this case, which is equivalent to count, to return the frequency of factor levels." }, { "code": null, "e": 1649, "s": 1468, "text": "The output returns a list where first component is the factor level and second column is the frequency of that level. A row number is appended to the beginning of each output row. " }, { "code": null, "e": 1651, "s": 1649, "text": "R" }, { "code": "# importing required librarieslibrary (\"plyr\") set.seed(1) # creating data data_frame <- data.frame(col1 = sample( letters,50,rep=TRUE)) # counting frequencies of factor# levelslapply(data_frame, count)", "e": 1858, "s": 1651, "text": null }, { "code": null, "e": 1865, "s": 1858, "text": "Output" }, { "code": null, "e": 1875, "s": 1865, "text": "$col1 " }, { "code": null, "e": 1886, "s": 1875, "text": " x freq " }, { "code": null, "e": 1897, "s": 1886, "text": "1 a 3 " }, { "code": null, "e": 1908, "s": 1897, "text": "2 b 2 " }, { "code": null, "e": 1919, "s": 1908, "text": "3 c 1 " }, { "code": null, "e": 1930, "s": 1919, "text": "4 d 1 " }, { "code": null, "e": 1941, "s": 1930, "text": "5 e 3 " }, { "code": null, "e": 1952, "s": 1941, "text": "6 f 3 " }, { "code": null, "e": 1963, "s": 1952, "text": "7 g 2 " }, { "code": null, "e": 1974, "s": 1963, "text": "8 h 1 " }, { "code": null, "e": 1985, "s": 1974, "text": "9 i 2 " }, { "code": null, "e": 1996, "s": 1985, "text": "10 j 5 " }, { "code": null, "e": 2007, "s": 1996, "text": "11 k 1 " }, { "code": null, "e": 2018, "s": 2007, "text": "12 l 3 " }, { "code": null, "e": 2028, "s": 2018, "text": "13 n 3" }, { "code": null, "e": 2039, "s": 2028, "text": 
"14 o 3 " }, { "code": null, "e": 2050, "s": 2039, "text": "15 r 1 " }, { "code": null, "e": 2061, "s": 2050, "text": "16 s 1 " }, { "code": null, "e": 2072, "s": 2061, "text": "17 t 3 " }, { "code": null, "e": 2083, "s": 2072, "text": "18 u 3 " }, { "code": null, "e": 2094, "s": 2083, "text": "19 v 1 " }, { "code": null, "e": 2105, "s": 2094, "text": "20 w 2 " }, { "code": null, "e": 2116, "s": 2105, "text": "21 y 5 " }, { "code": null, "e": 2126, "s": 2116, "text": "22 z 1" }, { "code": null, "e": 2221, "s": 2126, "text": "The data.table package in R is used to work with tables to access, manipulate and store data. " }, { "code": null, "e": 2400, "s": 2221, "text": "Initially, the data frame is converted to data.table by reference using the setDT() command. This method is very useful while working with large data sets and more observations. " }, { "code": null, "e": 2408, "s": 2400, "text": "Syntax:" }, { "code": null, "e": 2418, "s": 2408, "text": "setDT(df)" }, { "code": null, "e": 2787, "s": 2418, "text": "The keyby attribute is applied over the required column name in order to group the data contained in it. As an index, the .N parameter is used in place of columns to access the number of instances of each particular factor level. The output is a frequency table. The output is returned in the form of a data.table where row begins with a row number followed by colon. 
" }, { "code": null, "e": 2796, "s": 2787, "text": "Example:" }, { "code": null, "e": 2798, "s": 2796, "text": "R" }, { "code": "# importing required librarieslibrary(data.table) set.seed(1) # creating data data_frame <- data.frame(col1 = sample( letters,50,rep=TRUE)) # counting frequencies of factor# levelssetDT(data_frame)[, .N, keyby=col1]", "e": 3018, "s": 2798, "text": null }, { "code": null, "e": 3025, "s": 3018, "text": "Output" }, { "code": null, "e": 3036, "s": 3025, "text": " col1 N " }, { "code": null, "e": 3048, "s": 3036, "text": "1: a 3 " }, { "code": null, "e": 3060, "s": 3048, "text": "2: b 2 " }, { "code": null, "e": 3072, "s": 3060, "text": "3: c 1 " }, { "code": null, "e": 3084, "s": 3072, "text": "4: d 1 " }, { "code": null, "e": 3096, "s": 3084, "text": "5: e 3 " }, { "code": null, "e": 3108, "s": 3096, "text": "6: f 3 " }, { "code": null, "e": 3120, "s": 3108, "text": "7: g 2 " }, { "code": null, "e": 3132, "s": 3120, "text": "8: h 1 " }, { "code": null, "e": 3143, "s": 3132, "text": "9: i 2 " }, { "code": null, "e": 3155, "s": 3143, "text": "10: j 5 " }, { "code": null, "e": 3167, "s": 3155, "text": "11: k 1 " }, { "code": null, "e": 3179, "s": 3167, "text": "12: l 3 " }, { "code": null, "e": 3191, "s": 3179, "text": "13: n 3 " }, { "code": null, "e": 3203, "s": 3191, "text": "14: o 3 " }, { "code": null, "e": 3215, "s": 3203, "text": "15: r 1 " }, { "code": null, "e": 3227, "s": 3215, "text": "16: s 1 " }, { "code": null, "e": 3239, "s": 3227, "text": "17: t 3 " }, { "code": null, "e": 3251, "s": 3239, "text": "18: u 3 " }, { "code": null, "e": 3263, "s": 3251, "text": "19: v 1 " }, { "code": null, "e": 3275, "s": 3263, "text": "20: w 2 " }, { "code": null, "e": 3287, "s": 3275, "text": "21: y 5 " }, { "code": null, "e": 3303, "s": 3287, "text": "22: z 1 " }, { "code": null, "e": 3519, "s": 3303, "text": "The β€œdplyr” package is an enhancement of the plyr package which provides a wide range of selection and filter operations to be performed 
over the data elements. It can be loaded and installed into the working space." }, { "code": null, "e": 3852, "s": 3519, "text": "The group_by() method in the package is first used to group the data into different categories depending on the different values encountered. The rows belonging to a single value are stacked together. The tally() function behaves similarly to the summarise() function and is used to generate summaries according to the groups made. " }, { "code": null, "e": 3860, "s": 3852, "text": "Syntax:" }, { "code": null, "e": 3890, "s": 3860, "text": "df %>% group_by() %>% tally()" }, { "code": null, "e": 4281, "s": 3890, "text": "The output returned is in the form of a tibble, which contains rows equivalent to the length of the input vector. The columns contain information about the frequencies of the factor level encountered. This method gives a clear insight into the column types and dimensions of the returned output. However, only ten rows are displayed by default, which can be expanded further to view others." 
}, { "code": null, "e": 4290, "s": 4281, "text": "Example:" }, { "code": null, "e": 4292, "s": 4290, "text": "R" }, { "code": "# importing required librarieslibrary (\"dplyr\") set.seed(1) # creating data data_frame <- data.frame(col1 = sample( letters,50,rep=TRUE)) # counting frequencies of factor# levelsdata_frame %>% group_by(col1) %>% tally()", "e": 4520, "s": 4292, "text": null }, { "code": null, "e": 4527, "s": 4520, "text": "Output" }, { "code": null, "e": 4550, "s": 4527, "text": "# A tibble: 22 x 2 " }, { "code": null, "e": 4566, "s": 4550, "text": "col1 n " }, { "code": null, "e": 4580, "s": 4566, "text": "<fct> <int> " }, { "code": null, "e": 4595, "s": 4580, "text": "1 a 3 " }, { "code": null, "e": 4611, "s": 4595, "text": "2 b 2 " }, { "code": null, "e": 4627, "s": 4611, "text": "3 c 1 " }, { "code": null, "e": 4643, "s": 4627, "text": "4 d 1 " }, { "code": null, "e": 4659, "s": 4643, "text": "5 e 3 " }, { "code": null, "e": 4675, "s": 4659, "text": "6 f 3 " }, { "code": null, "e": 4691, "s": 4675, "text": "7 g 2 " }, { "code": null, "e": 4707, "s": 4691, "text": "8 h 1 " }, { "code": null, "e": 4722, "s": 4707, "text": "9 i 2 " }, { "code": null, "e": 4738, "s": 4722, "text": "10 j 5 " }, { "code": null, "e": 4762, "s": 4738, "text": "# ... with 12 more rows" }, { "code": null, "e": 4769, "s": 4762, "text": "Picked" }, { "code": null, "e": 4787, "s": 4769, "text": "R Factor-Programs" }, { "code": null, "e": 4797, "s": 4787, "text": "R-Factors" }, { "code": null, "e": 4808, "s": 4797, "text": "R Language" }, { "code": null, "e": 4819, "s": 4808, "text": "R Programs" } ]
MONTH() function in MySQL
02 Dec, 2020 The MONTH() function in MySQL is used to find the month from a given date. It returns 0 when the month part of the date is 0, otherwise it returns a month value between 1 and 12. Syntax : MONTH(date) Parameter : This function accepts one parameter date : The date or DateTime from which we want to extract the month. Returns : It returns a value in the range from 1 to 12. Example-1 : Finding the Current Month Using the MONTH() Function. SELECT MONTH(NOW()) AS Current_Month; Output : Example-2 : Finding the Month from a given DateTime Using the MONTH() Function. SELECT MONTH('2015-09-26 08:09:22') AS MONTH; Output : Example-3 : Finding the Month from a given DateTime Using the MONTH() Function when the date is NULL. SELECT MONTH(NULL) AS Month ; Output : Example-4 : The MONTH function can also be used to find the total products sold for every month. To demonstrate, create a table named Product : CREATE TABLE Product( Product_id INT AUTO_INCREMENT, Product_name VARCHAR(100) NOT NULL, Buying_price DECIMAL(13, 2) NOT NULL, Selling_price DECIMAL(13, 2) NOT NULL, Selling_Date Date NOT NULL, PRIMARY KEY(Product_id) ); Now insert some data into the Product table : INSERT INTO Product(Product_name, Buying_price, Selling_price, Selling_Date) VALUES ('Audi Q8', 10000000.00, 15000000.00, '2018-01-26' ), ('Volvo XC40', 2000000.00, 3000000.00, '2018-04-20' ), ('Audi A6', 4000000.00, 5000000.00, '2018-07-25' ), ('BMW X5', 5000500.00, 7006500.00, '2018-10-18' ), ('Jaguar XF', 5000000.00, 7507000.00, '2019-01-27' ), ('Mercedes-Benz C-Class', 4000000.00, 6000000.00, '2019-04-01' ), ('Jaguar F-PACE', 5000000.00, 7000000.00, '2019-12-26' ), ('Porsche Macan', 6500000.00, 8000000.00, '2020-04-16' ) ; So, our table looks like : Now, we are going to find the number of products sold per month by using the MONTH() function.
SELECT MONTH (Selling_Date) month, COUNT(Product_id) Product_Sold FROM Product GROUP BY MONTH (Selling_Date) ORDER BY MONTH (Selling_Date); Output : DBMS-SQL mysql SQL SQL
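The month-wise grouping can be tried without a MySQL server using Python's sqlite3 as a stand-in. SQLite has no MONTH() function, so strftime('%m', ...) plays its role here, and the three sample rows are a small subset of the Product data above:

```python
import sqlite3

# SQLite stand-in: strftime('%m', d) substitutes for MySQL's MONTH(d).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Product (Product_name TEXT, Selling_Date TEXT)")
con.executemany("INSERT INTO Product VALUES (?, ?)", [
    ("Audi Q8",    "2018-01-26"),
    ("Jaguar XF",  "2019-01-27"),
    ("Volvo XC40", "2018-04-20"),
])

# count products sold per month, like the MONTH(Selling_Date) query above
rows = con.execute("""
    SELECT strftime('%m', Selling_Date) AS month, COUNT(*) AS product_sold
    FROM Product
    GROUP BY month
    ORDER BY month
""").fetchall()
print(rows)   # [('01', 2), ('04', 1)]
```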
[ { "code": null, "e": 28, "s": 0, "text": "\n02 Dec, 2020" }, { "code": null, "e": 201, "s": 28, "text": "MONTH() function in MySQL is used to find a month from the given date. It returns 0 when the month part for the date is 0 otherwise it returns month value between 1 and 12." }, { "code": null, "e": 210, "s": 201, "text": "Syntax :" }, { "code": null, "e": 222, "s": 210, "text": "MONTH(date)" }, { "code": null, "e": 235, "s": 222, "text": "Parameter : " }, { "code": null, "e": 272, "s": 235, "text": "This function accepts one parameter " }, { "code": null, "e": 341, "s": 272, "text": "date : The date or DateTime from which we want to extract the month." }, { "code": null, "e": 392, "s": 341, "text": "Returns : It returns the value range from 1 to 12." }, { "code": null, "e": 405, "s": 392, "text": "Example-1 : " }, { "code": null, "e": 455, "s": 405, "text": "Finding the Current Month Using MONTH() Function." }, { "code": null, "e": 493, "s": 455, "text": "SELECT MONTH(NOW()) AS Current_Month;" }, { "code": null, "e": 502, "s": 493, "text": "Output :" }, { "code": null, "e": 515, "s": 502, "text": "Example-2 : " }, { "code": null, "e": 577, "s": 515, "text": "Finding the Month from given DateTime Using Month() Function." }, { "code": null, "e": 623, "s": 577, "text": "SELECT MONTH('2015-09-26 08:09:22') AS MONTH;" }, { "code": null, "e": 632, "s": 623, "text": "Output :" }, { "code": null, "e": 645, "s": 632, "text": "Example-3 : " }, { "code": null, "e": 730, "s": 645, "text": "Finding the Month from given DateTime Using Month () Function when the date is NULL." }, { "code": null, "e": 760, "s": 730, "text": "SELECT MONTH(NULL) AS Month ;" }, { "code": null, "e": 769, "s": 760, "text": "Output :" }, { "code": null, "e": 782, "s": 769, "text": "Example-4 : " }, { "code": null, "e": 903, "s": 782, "text": "The MONTH function can also be used to find the total product sold for every month. To demonstrate create a table named." 
}, { "code": null, "e": 913, "s": 903, "text": "Product :" }, { "code": null, "e": 1154, "s": 913, "text": "CREATE TABLE Product(\n Product_id INT AUTO_INCREMENT, \n Product_name VARCHAR(100) NOT NULL,\n Buying_price DECIMAL(13, 2) NOT NULL,\n Selling_price DECIMAL(13, 2) NOT NULL,\n Selling_Date Date NOT NULL,\n PRIMARY KEY(Product_id)\n);" }, { "code": null, "e": 1198, "s": 1154, "text": "Now insert some data to the Product table :" }, { "code": null, "e": 1761, "s": 1198, "text": "INSERT INTO \n Product(Product_name, Buying_price, Selling_price, Selling_Date)\nVALUES\n ('Audi Q8', 10000000.00, 15000000.00, '2018-01-26' ),\n ('Volvo XC40', 2000000.00, 3000000.00, '2018-04-20' ),\n ('Audi A6', 4000000.00, 5000000.00, '2018-07-25' ),\n ('BMW X5', 5000500.00, 7006500.00, '2018-10-18' ),\n ('Jaguar XF', 5000000, 7507000.00, '2019-01-27' ),\n ('Mercedes-Benz C-Class', 4000000.00, 6000000.00, '2019-04-01' ),\n ('Jaguar F-PACE', 5000000.00, 7000000.00, '2019-12-26' ),\n ('Porsche Macan', 6500000.00, 8000000.00, '2020-04-16' ) ;" }, { "code": null, "e": 1788, "s": 1761, "text": "So, Our table looks like :" }, { "code": null, "e": 1884, "s": 1788, "text": "Now, we are going to find the number of products sold per month by using the MONTH () function." }, { "code": null, "e": 2034, "s": 1884, "text": "SELECT \n MONTH (Selling_Date) month, \n COUNT(Product_id) Product_Sold\nFROM Product\nGROUP BY MONTH (Selling_Date)\nORDER BY MONTH (Selling_Date);" }, { "code": null, "e": 2043, "s": 2034, "text": "Output :" }, { "code": null, "e": 2052, "s": 2043, "text": "DBMS-SQL" }, { "code": null, "e": 2058, "s": 2052, "text": "mysql" }, { "code": null, "e": 2062, "s": 2058, "text": "SQL" }, { "code": null, "e": 2066, "s": 2062, "text": "SQL" }, { "code": null, "e": 2164, "s": 2066, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 2230, "s": 2164, "text": "How to Update Multiple Columns in Single Update Statement in SQL?" }, { "code": null, "e": 2263, "s": 2230, "text": "SQL | Sub queries in From Clause" }, { "code": null, "e": 2287, "s": 2263, "text": "Window functions in SQL" }, { "code": null, "e": 2365, "s": 2287, "text": "SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter" }, { "code": null, "e": 2397, "s": 2365, "text": "What is Temporary Table in SQL?" }, { "code": null, "e": 2414, "s": 2397, "text": "SQL using Python" }, { "code": null, "e": 2444, "s": 2414, "text": "RANK() Function in SQL Server" }, { "code": null, "e": 2480, "s": 2444, "text": "SQL Query to Convert VARCHAR to INT" }, { "code": null, "e": 2531, "s": 2480, "text": "SQL Query to Convert Rows to Columns in SQL Server" } ]
PHP $argv
When a PHP script is run from the command line, the $argv superglobal array contains the arguments passed to it. The first element of the array, $argv[0], is always the name of the script. This variable is not available if the register_argc_argv directive in php.ini is disabled. The following script is executed from the command line. <?php var_dump($argv); ?> array(1) { [0]=> string(8) "main.php" } In the next example, two command-line arguments are added together: <?php $add=$argv[1]+$argv[2]; echo "addition = " . $add; ?> Output C:\xampp\php>php test1.php 10 20 addition = 30
[ { "code": null, "e": 1312, "s": 1062, "text": "When a PHP script is run from command line, $argv superglobal array contains arguments passed to it. First element in array $argv[0] is always the name of script. This variable is not available if register_argc_argv directive in php.ini is disabled." }, { "code": null, "e": 1360, "s": 1312, "text": "Following script is executed from command line." }, { "code": null, "e": 1371, "s": 1360, "text": " Live Demo" }, { "code": null, "e": 1397, "s": 1371, "text": "<?php\nvar_dump($argv);\n?>" }, { "code": null, "e": 1443, "s": 1397, "text": "array(1) {\n [0]=>\n string(8) \"main.php\"\n}" }, { "code": null, "e": 1522, "s": 1443, "text": "In another example as follows, addition of command line arguments is performed" }, { "code": null, "e": 1582, "s": 1522, "text": "<?php\n$add=$argv[1]+$argv[2];\necho \"addition = \" . $add;\n?>" }, { "code": null, "e": 1589, "s": 1582, "text": "Output" }, { "code": null, "e": 1636, "s": 1589, "text": "C:\\xampp\\php>php test1.php 10 20\naddition = 30" } ]
Copy all elements of ArrayList to an Object Array in Java
All the elements of an ArrayList can be copied into an Object Array using the method java.util.ArrayList.toArray(). This method does not have any parameters and it returns an Object Array that contains all the elements of the ArrayList copied in the correct order. A program that demonstrates this is given as follows βˆ’ Live Demo import java.util.ArrayList; import java.util.List; public class Demo { public static void main(String[] args) { List<String> aList = new ArrayList<String>(); aList.add("Nathan"); aList.add("John"); aList.add("Susan"); aList.add("Betty"); aList.add("Peter"); Object[] objArr = aList.toArray(); System.out.println("The array elements are: "); for (Object i : objArr) { System.out.println(i); } } } The array elements are: Nathan John Susan Betty Peter Now let us understand the above program. The ArrayList aList is created. Then ArrayList.add() is used to add the elements to this ArrayList. A code snippet which demonstrates this is as follows βˆ’ List<String> aList = new ArrayList<String>(); aList.add("Nathan"); aList.add("John"); aList.add("Susan"); aList.add("Betty"); aList.add("Peter"); The method ArrayList.toArray() is used to copy all the elements of the ArrayList into an Object Array. Then the Object Array elements are displayed using a for loop. A code snippet which demonstrates this is as follows βˆ’ Object[] objArr = aList.toArray(); System.out.println("The array elements are: "); for (Object i : objArr) { System.out.println(i); }
[ { "code": null, "e": 1327, "s": 1062, "text": "All the elements of an ArrayList can be copied into an Object Array using the method java.util.ArrayList.toArray(). This method does not have any parameters and it returns an Object Array that contains all the elements of the ArrayList copied in the correct order." }, { "code": null, "e": 1382, "s": 1327, "text": "A program that demonstrates this is given as follows βˆ’" }, { "code": null, "e": 1393, "s": 1382, "text": " Live Demo" }, { "code": null, "e": 1864, "s": 1393, "text": "import java.util.ArrayList;\nimport java.util.List;\npublic class Demo {\n public static void main(String[] args) {\n List<String> aList = new ArrayList<String>();\n aList.add(\"Nathan\");\n aList.add(\"John\");\n aList.add(\"Susan\");\n aList.add(\"Betty\");\n aList.add(\"Peter\");\n Object[] objArr = aList.toArray();\n System.out.println(\"The array elements are: \");\n for (Object i : objArr) {\n System.out.println(i);\n }\n }\n}" }, { "code": null, "e": 1918, "s": 1864, "text": "The array elements are:\nNathan\nJohn\nSusan\nBetty\nPeter" }, { "code": null, "e": 1959, "s": 1918, "text": "Now let us understand the above program." }, { "code": null, "e": 2114, "s": 1959, "text": "The ArrayList aList is created. Then ArrayList.add() is used to add the elements to this ArrayList. A code snippet which demonstrates this is as follows βˆ’" }, { "code": null, "e": 2260, "s": 2114, "text": "List<String> aList = new ArrayList<String>();\naList.add(\"Nathan\");\naList.add(\"John\");\naList.add(\"Susan\");\naList.add(\"Betty\");\naList.add(\"Peter\");" }, { "code": null, "e": 2481, "s": 2260, "text": "The method ArrayList.toArray() is used to copy all the elements of the ArrayList into an Object Array. Then the Object Array elements are displayed using a for loop. 
A code snippet which demonstrates this is as follows βˆ’" }, { "code": null, "e": 2618, "s": 2481, "text": "Object[] objArr = aList.toArray();\nSystem.out.println(\"The array elements are: \");\nfor (Object i : objArr) {\n System.out.println(i);\n}" } ]
How to remove Y-axis labels in R?
When we create a plot in R, the Y-axis labels are generated automatically, and the plot function can be used to remove them. For this purpose, set the ylab argument of the plot function to an empty string (ylab="") to remove the axis title, and set yaxt="n" to suppress the axis ticks and labels. This is a method of base R only; it does not work with the ggplot2 package. x<-rnorm(10) y<-rnorm(10) plot(x,y) plot(x,y,ylab="",yaxt="n")
[ { "code": null, "e": 1388, "s": 1062, "text": "When we create a plot in R, the Y-axis labels are automatically generated and if we want to remove those labels, the plot function can help us. For this purpose, we need to set ylab argument of plot function to blank as ylab=\"\" and yaxt=\"n\" to remove the axis title. This is a method of base R only, not with ggplot2 package." }, { "code": null, "e": 1424, "s": 1388, "text": "x<-rnorm(10)\ny<-rnorm(10)\nplot(x,y)" }, { "code": null, "e": 1451, "s": 1424, "text": "plot(x,y,ylab=\"\",yaxt=\"n\")" } ]
Implementing Recurrent Neural Network using Numpy | by Rishit Dholakia | Towards Data Science
Recurrent neural network (RNN) is one of the earliest neural networks that was able to provide a break through in the field of NLP. The beauty of this network is its capacity to store memory of previous sequences due to which they are widely used for time series tasks as well. High level frameworks like Tensorflow and PyTorch abstract the mathematics behind these neural networks making it difficult for any AI enthusiast to code a deep learning architecture with right knowledge of parameters and layers. In order to resolve these type of inefficiencies the mathematical knowledge behind these networks is necessary. Coding these algorithms from scratch gives an extra edge by helping AI enthusiast understand the different notations in research papers and implement them in practicality. If you are new to the concept of RNN please refer to MIT 6.S191 course, which is one of the best lectures giving a good intuitive understanding on how RNN work. This knowledge will help you understand the different notations and concept implementations explained in this tutorial. The end goal of this blog is to make AI enthusiasts comfortable with coding the theoretical knowledge they gain from research papers in the field of deep learning. Unlike traditional neural networks, RNN possess 3 weight parameters, namely input weights, internal state weights (used to store the memory) and output weights. We start by initializing these parameters with random values. We initialize the word_embedding dimension and output dimension as 100 and 80 respectively. The output dimension is the total count of unique words present in the vocabulary. The variable prev_memory refers to the internal_state (these are the memory of the previous sequences).Other parameters like the gradients for updating the weights have been initialized as well. The input_weight gradients, internal_state_weight gradient and output_weight gradients have been named as dU, dW and dV respectively. 
Variable bptt_truncate refers to the number of timestamps the network has to look back while back-propagating, this is done to overcome the vanishing gradient problem. Input and output vectors: Consider we have a sentence β€œI like to play.” . In the vocabulary list lets assume that I is mapped to index 2 , like to index 45, to at index 10 and play at index 64 and the punctuation β€œ.” at index 1. To get a real life scenario working from input to output, lets randomly initialize the word_embedding for each word. Note: You could also try it with a one hot encoded vector for each word and pass that as an input. Now that we are done with the input, we need to consider the output for each word input. The RNN cell should output the next most probable word for the current input. For training the RNN we provide the t+1'th word as the output for the t’th input value, for example: the RNN cell should output the word like for the given input word I. Note: The output for each individual timestamp is not exclusively determined by the current input, but by the previous set of inputs along with it, which is determined by the internal state parameter. Now that the input is in the form of embedding vectors, the format of output required to calculate the loss should be one-hot encoded vectors. This is done for each word that is present in the input string except the first word, because we are considering just one example sentence for this neural network to learn and the initial input is the the first word of the sentence. Why do we one-hot encode the output words ? The reason being, raw output would just be scores for each unique word and they are not important to us. Instead we need the probabilities of each word with respect to the previous word. How do we find the probabilities from raw output values ? In order to solve for this problem a softmax activation function is used on the vector of scores such that all those probabilities would add up to one. 
Img 1 shows the input-output pipeline at a single time-stamp. The top row is the ground _truth output and the second line represent the predicted output. Note: During the implementation we need to take care of the key value of the output_mapper. We need to reset the key values with its timestamp values so that the algorithm knows which ground-truth word needs to be used at that particular time-stamp in-order to calculate the loss . Before reset:45: array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])After reset:{0: array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) Now that we have the weights and we also know how we pass our input and we know what is expected as output, we will start with the forward propagation calculation. Below calculations are required for training the neural network. Here the U represent the input_weights, W represent the internal_state_weights and the V represent the output weights. The input weights are multiplied with the input(x) , the internal_state_weights are multiplied with the previous activation which in our notation is prev_memory. The activation function used between the layers is Tanh. It provides non-linearity with eventually helps in learning. Note: Bias term for the RNN calculations is not used as it would lead to more complex understanding in this tutorial. 
Since the above code would just calculate the output for one particular time-stamp, we would now have to code the forward propagation for the entire sequence of words. In the code below, the output string contains a list of output vectors for each time-stamp. Memory is a dictionary that contains parameters for each timestamp that will be essential during back propagation. Loss Calculations: We also defined our loss or error, to be the cross entropy loss, given by: Most importantly, what we need to look at in the above code is line 5. As we know that the ground_truth output(y) is of the form [0,0,....,1,..0] and predicted_output(y^hat) is of the form [0.34,0.03,......,0.45], we need the loss to be a single value to infer the total loss from it. For this reason we use the sum function to get the sum of the differences/error for each value in the y and y^hat vectors for that particular time-stamp. The total_loss is the loss for the entire model inclusive of all time stamps. If you have heard of back propagation, then you must have heard of the chain rule, the vital aspect behind calculating the gradients. Based on Img 4 above, cost C represents the error which is the change required for y^hat to reach y. Since the cost is a function output of activation a, the change reflected by the activation is represented by dCost/da. Practically, it means the change (error) value seen from the point of view of the activation node. Similarly the change of activation with respect to z is represented by da/dz and z with respect to w is given by dw/dz. We are concerned with how much the change (error) is with respect to weights. Since there is no direct relation between weights and cost, the intermediate change values from cost all the way to the weights are multiplied (as can be seen in the equation above). Since there are three weights in the RNN, we require three gradients: the gradient of input_weights (dLoss/dU), internal_state_weights (dLoss/dW) and output_weights (dLoss/dV).
The chain of these three gradients can be represented as follows: Note: Here the T represents the transpose. The dLoss/dy_unactivated is coded as the following: In order to know more about the loss derivatives, please refer this blog. There are two gradient functions that we will be calculating, one is the multiplication_backward and the other is addition_backward. In case of multiplication _backward we return 2 parameters, one is the gradient with respect to the weights (dLoss/dV) and the other is a chain gradient which will be a part of the chain to calculate another weight gradient. In the case of addition backward while calculating the derivative we find out that the derivative of the individual components in the add function(ht_unactivated) are 1. For example: dh_unactivated/dU_frd= 1 as (h_unactivated = U_frd + W_frd_) and the derivative of dU_frd/dU_frd= 1. But the number of ones are with respect to the dimension of U_frd. To know more about the gradients you can refer to this source. That’s it, these are the only two functions required to calculate the gradients. The multiplication_backward function is used on equations that contain a dot product of the vectors and addition_backward on equations that contain addition of two vectors. Now that you have analyzed and understood back-propagation for RNN, its time to implement it for one single time-stamp, which will be later used for calculating the gradients across all the time-stamps . As seen in the code below , forward_params_t is a list that contains the forward parameters of the network at a particular time-step. Variable ds is a vital part as this line of code considers the hidden state of previous timestamps, which will help extract adequate useful information required while back-propagating. For RNN, instead of using vanilla back propagation, we will be using truncated back propagation because of the vanishing gradient problem. 
In this technique instead of looking at just one time-stamp back, the current cell will look at k time-stamps back , where k represents the number of previous cells to look back so that more knowledge is retrieved. Once we have calculated the gradients using back-propagation, we have to update the weights which is done using the batch gradient descent approach. Once we have all our functions in place, we can approach our climax i.e. training the neural network. The learning rate considered for training is static, you could even use a dynamic approach of changing the learning rate based on using step decay. Now that you implemented a recurrent neural network, its time to take a step forward with advanced architectures like LSTM and GRU that utilize the hidden states in a much efficient manner to retain the meaning of longer sequences. There is still a long way to go. With a lot of advancements in the field of NLP there are highly sophisticated algorithms like Elmo and Bert. Understand them and try to implement it yourself. It follows the same concept of memory but brings in an element of weighted words. Since these models are highly complex, using Numpy would not suffice, rather inculcate the skills of PyTorch or TensorFlow to implement them and build amazing AI systems that can serve the community. Inspiration to create this tutorial was from this github blog. You can access the notebook for this tutorial here. Hope you all enjoyed the tutorial! [1] Sargur Srihari, RNN-Gradients, https://cedar.buffalo.edu/~srihari/CSE676/10.2.2%20RNN-Gradients.pdf [2] Yu Gong, RNN-from-scratch, https://github.com/pangolulu/rnn-from-scratch
[ { "code": null, "e": 964, "s": 172, "text": "Recurrent neural network (RNN) is one of the earliest neural networks that was able to provide a break through in the field of NLP. The beauty of this network is its capacity to store memory of previous sequences due to which they are widely used for time series tasks as well. High level frameworks like Tensorflow and PyTorch abstract the mathematics behind these neural networks making it difficult for any AI enthusiast to code a deep learning architecture with right knowledge of parameters and layers. In order to resolve these type of inefficiencies the mathematical knowledge behind these networks is necessary. Coding these algorithms from scratch gives an extra edge by helping AI enthusiast understand the different notations in research papers and implement them in practicality." }, { "code": null, "e": 1245, "s": 964, "text": "If you are new to the concept of RNN please refer to MIT 6.S191 course, which is one of the best lectures giving a good intuitive understanding on how RNN work. This knowledge will help you understand the different notations and concept implementations explained in this tutorial." }, { "code": null, "e": 1409, "s": 1245, "text": "The end goal of this blog is to make AI enthusiasts comfortable with coding the theoretical knowledge they gain from research papers in the field of deep learning." }, { "code": null, "e": 1807, "s": 1409, "text": "Unlike traditional neural networks, RNN possess 3 weight parameters, namely input weights, internal state weights (used to store the memory) and output weights. We start by initializing these parameters with random values. We initialize the word_embedding dimension and output dimension as 100 and 80 respectively. The output dimension is the total count of unique words present in the vocabulary." 
}, { "code": null, "e": 2304, "s": 1807, "text": "The variable prev_memory refers to the internal_state (these are the memory of the previous sequences).Other parameters like the gradients for updating the weights have been initialized as well. The input_weight gradients, internal_state_weight gradient and output_weight gradients have been named as dU, dW and dV respectively. Variable bptt_truncate refers to the number of timestamps the network has to look back while back-propagating, this is done to overcome the vanishing gradient problem." }, { "code": null, "e": 2330, "s": 2304, "text": "Input and output vectors:" }, { "code": null, "e": 2650, "s": 2330, "text": "Consider we have a sentence β€œI like to play.” . In the vocabulary list lets assume that I is mapped to index 2 , like to index 45, to at index 10 and play at index 64 and the punctuation β€œ.” at index 1. To get a real life scenario working from input to output, lets randomly initialize the word_embedding for each word." }, { "code": null, "e": 2749, "s": 2650, "text": "Note: You could also try it with a one hot encoded vector for each word and pass that as an input." }, { "code": null, "e": 3086, "s": 2749, "text": "Now that we are done with the input, we need to consider the output for each word input. The RNN cell should output the next most probable word for the current input. For training the RNN we provide the t+1'th word as the output for the t’th input value, for example: the RNN cell should output the word like for the given input word I." }, { "code": null, "e": 3287, "s": 3086, "text": "Note: The output for each individual timestamp is not exclusively determined by the current input, but by the previous set of inputs along with it, which is determined by the internal state parameter." }, { "code": null, "e": 3663, "s": 3287, "text": "Now that the input is in the form of embedding vectors, the format of output required to calculate the loss should be one-hot encoded vectors. 
This is done for each word that is present in the input string except the first word, because we are considering just one example sentence for this neural network to learn and the initial input is the the first word of the sentence." }, { "code": null, "e": 3707, "s": 3663, "text": "Why do we one-hot encode the output words ?" }, { "code": null, "e": 3894, "s": 3707, "text": "The reason being, raw output would just be scores for each unique word and they are not important to us. Instead we need the probabilities of each word with respect to the previous word." }, { "code": null, "e": 3952, "s": 3894, "text": "How do we find the probabilities from raw output values ?" }, { "code": null, "e": 4258, "s": 3952, "text": "In order to solve for this problem a softmax activation function is used on the vector of scores such that all those probabilities would add up to one. Img 1 shows the input-output pipeline at a single time-stamp. The top row is the ground _truth output and the second line represent the predicted output." }, { "code": null, "e": 4540, "s": 4258, "text": "Note: During the implementation we need to take care of the key value of the output_mapper. We need to reset the key values with its timestamp values so that the algorithm knows which ground-truth word needs to be used at that particular time-stamp in-order to calculate the loss ." 
}, { "code": null, "e": 5284, "s": 4540, "text": "Before reset:45: array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])After reset:{0: array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])" }, { "code": null, "e": 5513, "s": 5284, "text": "Now that we have the weights and we also know how we pass our input and we know what is expected as output, we will start with the forward propagation calculation. Below calculations are required for training the neural network." }, { "code": null, "e": 5912, "s": 5513, "text": "Here the U represent the input_weights, W represent the internal_state_weights and the V represent the output weights. The input weights are multiplied with the input(x) , the internal_state_weights are multiplied with the previous activation which in our notation is prev_memory. The activation function used between the layers is Tanh. It provides non-linearity with eventually helps in learning." }, { "code": null, "e": 6030, "s": 5912, "text": "Note: Bias term for the RNN calculations is not used as it would lead to more complex understanding in this tutorial." }, { "code": null, "e": 6198, "s": 6030, "text": "Since the above code would just calculate the output for one particular time-stamp, we would now have to code the forward propagation for the entire sequence of words." }, { "code": null, "e": 6405, "s": 6198, "text": "In the code below, the output string contains a list of output vectors for each time-stamp. 
Memory is a dictionary that contains parameters for each timestamp that will be essential during back propagation." }, { "code": null, "e": 6424, "s": 6405, "text": "Loss Calculations:" }, { "code": null, "e": 6499, "s": 6424, "text": "We also defined our loss or error, to be the cross entropy loss, given by:" }, { "code": null, "e": 7012, "s": 6499, "text": "Most importantly what we need to look in the above code is line 5. As we know that the ground_truth output(y) is of the form [0,0,....,1,..0] and predicted_output(y^hat) is of the form [0.34,0.03,......,0.45], we need the loss to be a single value to infer the total loss from it. For this reason we use the sum function to get the sum of the differences/error for each value in the y and y^hat vectors for that particular time-stamp. The total_loss is the loss for the entire model inclusive of all time stamps." }, { "code": null, "e": 7148, "s": 7012, "text": "If you heard of back propagation, then you must have heard of chain rule and it being the vital aspect behind calculating the gradient." }, { "code": null, "e": 7848, "s": 7148, "text": "Based on Img 4 above, cost C represents the error which is the change required for y^hat to reach y. Since the cost is a function output of activation a, the change reflected by the activation is represented by dCost/da. Practically, it means the change (error) value seen from the point of view of the activation node. Similarly the change of activation with respect to z is represented by da/dz and z with respect to w is given by dw/dz. We are concerned with how much the change (error) is with respect to weights. Since there is no direct relation between weights and cost, the intermediate change values from cost all the way to the weights are multiplied(as can be seen in the equation above)." }, { "code": null, "e": 8013, "s": 7848, "text": "Since there are three weights in RNN we require three gradients. 
Gradient of input_weights(dLoss/dU), internal_state_weights(dLoss/dW) and output_weights(dLoss/dV)." }, { "code": null, "e": 8079, "s": 8013, "text": "The chain of these three gradients can be represented as follows:" }, { "code": null, "e": 8122, "s": 8079, "text": "Note: Here the T represents the transpose." }, { "code": null, "e": 8174, "s": 8122, "text": "The dLoss/dy_unactivated is coded as the following:" }, { "code": null, "e": 9274, "s": 8174, "text": "In order to know more about the loss derivatives, please refer this blog. There are two gradient functions that we will be calculating, one is the multiplication_backward and the other is addition_backward. In case of multiplication _backward we return 2 parameters, one is the gradient with respect to the weights (dLoss/dV) and the other is a chain gradient which will be a part of the chain to calculate another weight gradient. In the case of addition backward while calculating the derivative we find out that the derivative of the individual components in the add function(ht_unactivated) are 1. For example: dh_unactivated/dU_frd= 1 as (h_unactivated = U_frd + W_frd_) and the derivative of dU_frd/dU_frd= 1. But the number of ones are with respect to the dimension of U_frd. To know more about the gradients you can refer to this source. That’s it, these are the only two functions required to calculate the gradients. The multiplication_backward function is used on equations that contain a dot product of the vectors and addition_backward on equations that contain addition of two vectors." }, { "code": null, "e": 9797, "s": 9274, "text": "Now that you have analyzed and understood back-propagation for RNN, its time to implement it for one single time-stamp, which will be later used for calculating the gradients across all the time-stamps . As seen in the code below , forward_params_t is a list that contains the forward parameters of the network at a particular time-step. 
Variable ds is a vital part as this line of code considers the hidden state of previous timestamps, which will help extract adequate useful information required while back-propagating." }, { "code": null, "e": 10151, "s": 9797, "text": "For RNN, instead of using vanilla back propagation, we will be using truncated back propagation because of the vanishing gradient problem. In this technique instead of looking at just one time-stamp back, the current cell will look at k time-stamps back , where k represents the number of previous cells to look back so that more knowledge is retrieved." }, { "code": null, "e": 10300, "s": 10151, "text": "Once we have calculated the gradients using back-propagation, we have to update the weights which is done using the batch gradient descent approach." }, { "code": null, "e": 10550, "s": 10300, "text": "Once we have all our functions in place, we can approach our climax i.e. training the neural network. The learning rate considered for training is static, you could even use a dynamic approach of changing the learning rate based on using step decay." }, { "code": null, "e": 11256, "s": 10550, "text": "Now that you implemented a recurrent neural network, its time to take a step forward with advanced architectures like LSTM and GRU that utilize the hidden states in a much efficient manner to retain the meaning of longer sequences. There is still a long way to go. With a lot of advancements in the field of NLP there are highly sophisticated algorithms like Elmo and Bert. Understand them and try to implement it yourself. It follows the same concept of memory but brings in an element of weighted words. Since these models are highly complex, using Numpy would not suffice, rather inculcate the skills of PyTorch or TensorFlow to implement them and build amazing AI systems that can serve the community." }, { "code": null, "e": 11319, "s": 11256, "text": "Inspiration to create this tutorial was from this github blog." 
}, { "code": null, "e": 11371, "s": 11319, "text": "You can access the notebook for this tutorial here." }, { "code": null, "e": 11406, "s": 11371, "text": "Hope you all enjoyed the tutorial!" }, { "code": null, "e": 11510, "s": 11406, "text": "[1] Sargur Srihari, RNN-Gradients, https://cedar.buffalo.edu/~srihari/CSE676/10.2.2%20RNN-Gradients.pdf" } ]
Creating a Pop Music Generator with the Transformer | by Andrew Shaw | Towards Data Science
TLDR; Train a Deep Learning model to generate pop music. You can compose music with our pre-trained model here — http://musicautobot.com. Source code is available here — https://github.com/bearpelican/musicautobot. In this post, I'm going to explain how to train a deep learning model to generate pop music. This is Part I of the "Building An A.I. Music Generator" series. Quick Note: There are a couple of ways to generate music. The non-trivial way is to generate the actual sound waves (WaveNet, MelNet). The other is to generate music notation for an instrument to play (similar to sheet music). I'll be explaining how to do the latter. OK! Here are some cool generated music examples for you. I've built the website: MusicAutobot. It's powered by the music model we'll be building in this post. Song #1 (Inspired by Ritchie Valens — La Bamba) Song #2 (Inspired by Pachelbel — Canon in D) Note: MusicAutobot is best viewed on desktop. For those of you on mobile, you'll just have to listen below. The Transformer architecture is a recent advance in NLP, which has produced amazing results in generating text. Transformers train faster and have much better long-term memory than previous language models. Give it a few words, and it can continue generating whole paragraphs that make even more sense than this one. Naturally, this seems like a great fit for generating music. Why not give it a few notes instead, and have it generate a melody? That's exactly what we'll be doing. We're going to be using a transformer to predict music notes! Here's a high-level diagram of what we're trying to do: What we are doing is building a sequence model for music. Take an input sequence and predict a target sequence. Whether it's time series forecasting, music or text generation, building these models can be boiled down into two steps: Step 1. Convert data into a sequence of tokens Step 2. 
Build and train the model to predict the next token With the help of two Python libraries — music21 and fastai — we've built a simple library musicautobot that makes these two steps relatively easy. Convert data (music files) into a sequence of tokens (music notes) Take a piano sheet that looks like this: And tokenize it to something like this: xxbos xxpad n72 d2 n52 d16 n48 d16 n45 d16 xxsep d2 n71 d2 xxsep d2 n71 d2 xxsep d2 n69 d2 xxsep d2 n69 d2 xxsep d2 n67 d4 xxsep d4 n64 d1 xxsep d1 n62 d1 With musicautobot, you do it like so: Note: musicautobot uses music21 behind the scenes to load music files and tokenize. Details of this conversion will be covered in the next post. Build and train the model to predict the next token. fastai has some amazingly simple training code for training language models. If we modify the data loading to handle music files instead of text, we can reuse most of the same code. Training our music model is now as easy as this: Go ahead and try out this notebook to train your own! Now let's see if it actually works. For the next part, I'll be using the model I've pre-trained for a few days on a large MIDI database. You can directly play with the pre-trained model here. Step 1: Create a short snippet of notes: Here's what that looks like: Step 2: Feed it to our model: Hyperparameters: Temperature adjusts how "creative" the predictions will be. You can control the amount of variation in the pitch and/or rhythm. TopK/TopP filters out the lowest-probability tokens. It makes sure outliers never get chosen, even if they have the tiniest bit of probability. Here's what it sounds like: This is actually the first result I got, but side effects may vary. You can create your own variations here. According to the awesome people at HookTheory — the most popular chord progression in modern music is the I — V — vi — IV chord. You may have heard it before. It's in a lot of pop songs. Like every. single. pop. song. 
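To make the token format above concrete, here is a tiny, hypothetical Python sketch — not musicautobot's real implementation — showing how (MIDI pitch, duration) pairs map to that style of token string, where "n72" is MIDI note 72 and "d2" is a duration of 2 time units:

```python
# Hypothetical sketch of note tokenization (not musicautobot's actual code):
# each note becomes a pitch token "n<midi>" followed by a duration token "d<len>".

def tokenize_notes(notes):
    """notes: list of (midi_pitch, duration) tuples."""
    tokens = ["xxbos", "xxpad"]          # special beginning-of-sequence / padding tokens
    for pitch, duration in notes:
        tokens.append(f"n{pitch}")       # pitch token
        tokens.append(f"d{duration}")    # duration token
    return " ".join(tokens)

print(tokenize_notes([(72, 2), (52, 16), (48, 16)]))
# -> xxbos xxpad n72 d2 n52 d16 n48 d16
```

Once music is flattened into a token stream like this, the transformer can treat it exactly like text and learn to predict the next token.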
Let’s test our model and see if it recognizes this chord progression. We’re going to feed it the first 3 chords "I — V — vi", and see if it predicts the "IV" chord. Here’s how you create the first three chords with music21: It looks like this: [C-E-G] — [G-B-D] — [A-C-E] Now, we take those chords and feed them to our model to predict the next one: Here’s what we get back (3 input chords are included): Huzzah! The model predicted notes [F-A-C] — which is the "IV" chord. This is exactly what we were hoping it’d predict. Looks like our music model is able to follow basic music theory and make every single pop song! Well, the chords at least. Don’t just take my word for it. Try it out on the musicautobot: All you have to do is press the red button. Note: 8 times out of 10, you’ll get an "IV" chord or one of its inversions. I can’t guarantee deterministic results. Only probabilities. Now you know the basic steps to training a music model. All code shown in this post is available here: https://github.com/bearpelican/musicautobot Play and generate songs with the model we just built: http://musicautobot.com ^ These are real-time predictions, so please be patient! I may have glossed over a few details. There’s more! Part II. Practical Tips for Training a Music Model — Deep dive into music encoding and training — it’ll cover all the details I just glossed over. Part III. Building a Multitask Music Model — Train a super cool music model that can harmonize, generate melodies, and remix songs. Next-token prediction is just so... basic. Part IV. Using a Music Bot to Remix The Chainsmokers — We’ll remix an EDM drop in Ableton with musicautobot. For pure entertainment purposes only. Special Thanks to Jeroen Kerstens and Jeremy Howard for guidance, South Park Commons and PalapaVC for support.
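As a sanity check on the music theory here, the I — V — vi — IV triads in C major can be derived in plain Python by stacking thirds on the scale. This is an illustrative sketch of the theory only (pitch names are spelled without octaves), not part of the article's code:

```python
# Build the I, V, vi and IV triads of C major by stacking scale thirds.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def triad(degree):
    """degree: 1-based scale degree (1 = I, 5 = V, ...); returns root, third, fifth."""
    root = degree - 1
    return [C_MAJOR[(root + step) % 7] for step in (0, 2, 4)]

progression = {"I": triad(1), "V": triad(5), "vi": triad(6), "IV": triad(4)}
print(progression)
# -> {'I': ['C', 'E', 'G'], 'V': ['G', 'B', 'D'], 'vi': ['A', 'C', 'E'], 'IV': ['F', 'A', 'C']}
```

So given [C-E-G] — [G-B-D] — [A-C-E] as input, the theoretically expected continuation is exactly [F-A-C], which is what the model produced.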
Error connecting SAP while sapjco3.jar file is in my library path
You need to copy sapjco3.dll into a folder in your Java library path, as the native library is not sapjco3.jar; it is the sapjco3.dll file.

You can check the library path in your application using the following:

System.getProperty("java.library.path")

The following approaches can be used:

The first is to copy sapjco3.dll into one of the folders which are already in your library path, like: C:\WINNT\system32

The second is to add the folder containing the DLL to the Java library path using either of the following options:

By calling System.setProperty("java.library.path","C:\path\to\folder\with\dll\") before accessing the SAPJCo

By setting it on the Java command line like this: -Djava.library.path=C:\path\to\folder\with\dll\
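A small diagnostic sketch (the class name is my own) for listing the folders the JVM will actually search for native libraries such as sapjco3.dll — useful for verifying where the DLL needs to go:

```java
public class LibraryPathCheck {
    public static void main(String[] args) {
        // Folders the JVM searches when loading native libraries like sapjco3.dll;
        // entries are separated by the platform path separator (';' on Windows).
        String libPath = System.getProperty("java.library.path");
        for (String folder : libPath.split(System.getProperty("path.separator"))) {
            System.out.println(folder);
        }
    }
}
```

If sapjco3.dll is not in one of the printed folders, loading SAPJCo will fail with an UnsatisfiedLinkError.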
Set insert() in C++ STL
In this article we are going to discuss the set::insert() function in C++ STL: its syntax, how it works, and its return values.

Sets in C++ STL are containers which must have unique elements in a general order. Sets must have unique elements because the value of an element identifies it. Once a value is added to a set container it can't be modified later, although we can still remove values from or add values to the set. Sets are implemented as binary search trees.

insert() is a built-in function in C++ STL, defined in the <set> header file. This function is used to insert elements into the set container; when we insert elements, the size of the container increases by the number of elements inserted. As a set contains only unique values, insert() does not blindly insert an element: it first checks whether the element to be inserted is already present in the set container. Also, all elements in a set are stored in sorted order, so the element we insert is placed at its sorted position.

Set1.insert(const type_t &value); ----(1)
Or
Set1.insert(iterator position, const type_t &value); -----(2)
Or
Set1.insert(iterator position_1, iterator position_2); -----(3)

value − It is the value which is to be inserted in the set container.

position − It is a hint to the position; the function starts searching from this position and inserts the element where it is suited to be inserted.

position_1, position_2 − These are iterators which specify the range that is to be inserted into the set associated with insert(): position_1 for the start of the range and position_2 for the end of the range.

The function returns different types of values according to the arguments passed to it.

When we pass only the value, the function returns a pair: an iterator pointing to the element in the set container (the newly inserted element, or the already existing equivalent element) and a bool that is true only if the insertion took place.

When we pass a position hint with the value, the function returns the iterator pointing to the element which is inserted in (or already present in) the set container.

When we pass position_1 and position_2, the function inserts the elements in the range starting from position_1 and ending at position_2, and returns nothing.

Input: set<int> myset;
       myset.insert(10);
Output: values in the set = 10
Input: set <int> myset = {11, 12, 13, 14};
       myset.insert(myset.begin(), 10);
Output: values in the set = 10 11 12 13 14

Inserting elements into a set one after another

#include <bits/stdc++.h>
using namespace std;
int main(){
   set<int> mySet;
   mySet.insert(10);
   mySet.insert(20);
   mySet.insert(30);
   mySet.insert(40);
   mySet.insert(50);
   cout<<"Elements are: ";
   for (auto i = mySet.begin(); i != mySet.end(); i++)
      cout << *i << " ";
   return 0;
}

If we run the above code then it will generate the following output −

Elements are: 10 20 30 40 50

Inserting elements into the set based upon position

#include <bits/stdc++.h>
using namespace std;
int main(){
   set<int> mySet;
   auto i = mySet.insert(mySet.begin(), 10);
   i = mySet.insert(i, 20);
   i = mySet.insert(i, 40);
   i = mySet.insert(i, 30);
   i = mySet.insert(i, 80);
   i = mySet.insert(mySet.end(), 90);
   cout<<"Elements are: ";
   for (auto i = mySet.begin(); i != mySet.end(); i++)
      cout << *i << " ";
   return 0;
}

If we run the above code then it will generate the following output −

Elements are: 10 20 30 40 80 90
Java Program to generate random number with restrictions
To generate a random number with restrictions, here we are taking the example of a phone number from India with the country code 91.

First, we have set the first 2 numbers, i.e. the country code. Variables are declared for each digit. We have also fixed the first digit of the number as 9 after the country code 91:

Random num = new Random();
int num0, num1, num2, num3, num4, num5, num6, num7, num8, num9;
num0 = 9;
num1 = 1;
num2 = 9;

Now, for the rest of the numbers, use nextInt() of Random. The parameter is the bound set for the random numbers:

num3 = num.nextInt(9) + 10;
num4 = num.nextInt(10);
num5 = num.nextInt(5) + 11;
num6 = num.nextInt(10);
num7 = num.nextInt(3);
num8 = num.nextInt(5);
num9 = num.nextInt(10);

import java.util.Random;
public class Main {
   public static void main(String[] args) {
      Random num = new Random();
      int num0, num1, num2, num3, num4, num5, num6, num7, num8, num9;
      num0 = 9;
      num1 = 1;
      num2 = 9;
      num3 = num.nextInt(9) + 10;
      num4 = num.nextInt(10);
      num5 = num.nextInt(5) + 11;
      num6 = num.nextInt(10);
      num7 = num.nextInt(3);
      num8 = num.nextInt(5);
      num9 = num.nextInt(10);
      System.out.print("Random (Country code 91 for India) = ");
      System.out.print(num0);
      System.out.print(num1);
      System.out.print("-" + num2);
      System.out.print(num3);
      System.out.print(num4);
      System.out.print(num5);
      System.out.print(num6);
      System.out.print(num7);
      System.out.print(num8);
      System.out.print(num9);
   }
}

Random (Country code 91 for India) = 91-9114158010
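The nextInt(bound) + offset pattern used above generalizes to any inclusive range [min, max]. A small illustrative helper (the class and method names are my own, not part of the original program):

```java
import java.util.Random;

public class RangeRandom {
    // Returns a random int in the inclusive range [min, max]:
    // nextInt(max - min + 1) yields 0..(max - min), then min shifts it up.
    static int nextInRange(Random rnd, int min, int max) {
        return rnd.nextInt(max - min + 1) + min;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        // e.g. restrict a value to 10..18, as num3 does in the program above
        int v = nextInRange(rnd, 10, 18);
        System.out.println("Restricted value: " + v);
    }
}
```

Writing the restriction this way makes the intended range explicit instead of leaving the reader to work out what nextInt(9) + 10 produces.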
Angular PrimeNG Slider Component - GeeksforGeeks
22 Aug, 2021

Angular PrimeNG is an open-source framework with a rich set of native Angular UI components that are used for great styling, and this framework makes it easy to build responsive websites. In this article, we will learn how to use the Slider component in Angular PrimeNG.

Properties:

animate: It is used to display an animation when the slider bar is clicked. It is of boolean data type; the default value is false.
disabled: It specifies that the element should be disabled. It is of boolean data type; the default value is false.
min: It is used to set the minimum boundary value. It is of number data type; the default value is 0.
max: It is used to set the maximum boundary value. It is of number data type; the default value is 100.
orientation: It is used to set the orientation of the slider; valid values are horizontal and vertical. It is of string data type; the default value is horizontal.
step: It is used to set the factor to increment/decrement the value. It is of number data type; the default value is 1.
range: It is used to specify that two boundary values can be picked. It is of boolean data type; the default value is false.
style: It is used to set the inline style of the element. It is of string data type; the default value is null.
styleClass: It is used to set the style class of the element. It is of string data type; the default value is null.
ariaLabelledBy: It establishes relationships between the component and label(s), where its value should be one or more element IDs. It is of string data type; the default value is null.
tabindex: It is used to set the index of the element in tabbing order. It is of number data type; the default value is 0.

Events:

onChange: It is a callback that is fired on value change via slide.
onSlideEnd: It is a callback that is fired when the slide stops.
Styling:

p-slider: Styling class for the container element.
p-slider-handle: Styling class for the handle element.

Creating Angular Application & module installation:

Step 1: Create an Angular application using the following command.

ng new appname

Step 2: After creating your project folder, i.e. appname, move to it using the following command.

cd appname

Step 3: Install PrimeNG in your given directory.

npm install primeng --save
npm install primeicons --save

Project Structure: It will look like the following.

Example 1: This is a basic example that shows how to use the Slider component.

app.component.html

<h5>PrimeNG Slider component</h5>
<p-slider [(ngModel)]="val1"></p-slider>
<p-slider [(ngModel)]="val2" orientation="vertical"></p-slider>

app.module.ts

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { HttpClientModule } from '@angular/common/http';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { AppComponent } from './app.component';
import { SliderModule } from 'primeng/slider';
import { InputTextModule } from 'primeng/inputtext';

@NgModule({
   imports: [
      BrowserModule,
      BrowserAnimationsModule,
      SliderModule,
      InputTextModule,
      FormsModule
   ],
   declarations: [AppComponent],
   bootstrap: [AppComponent]
})
export class AppModule {}

app.component.ts

import { Component } from '@angular/core';

@Component({
   selector: 'my-app',
   templateUrl: './app.component.html'
})
export class AppComponent {
   val1: number;
   val2: number;
   rangeValues: number[] = [20, 80];
}

Output:

Example 2: In this example, we will see how to use the step and range properties of the Slider component.
app.component.html

<h5>Basic Slider: </h5>
<p-slider [(ngModel)]="basic"></p-slider>

<h5>Input Slider:</h5>
<input type="text" pInputText [(ngModel)]="inp" />
<p-slider [(ngModel)]="inp"></p-slider>

<h5>Step Slider:</h5>
<p-slider [(ngModel)]="step" [step]="15"></p-slider>

<h5>Range Slider:</h5>
<p-slider [(ngModel)]="range" [range]="true"></p-slider>

app.component.ts

import { Component } from '@angular/core';

@Component({
   selector: 'my-app',
   templateUrl: './app.component.html'
})
export class AppComponent {
   basic: number;
   inp: number = 50;
   step: number;
   range: number[] = [20, 80];
}

app.module.ts

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { AppComponent } from './app.component';
import { SliderModule } from 'primeng/slider';

@NgModule({
   imports: [BrowserModule, BrowserAnimationsModule, SliderModule, FormsModule],
   declarations: [AppComponent],
   bootstrap: [AppComponent]
})
export class AppModule {}

Output:

Reference: https://primefaces.org/primeng/showcase/#/slider
[ { "code": null, "e": 24664, "s": 24636, "text": "\n22 Aug, 2021" }, { "code": null, "e": 24946, "s": 24664, "text": "Angular PrimeNG is an open-source framework with a rich set of native Angular UI components that are used for great styling and this framework is used to make responsive websites with very much ease. In this article, we will know how to use the slider component in Angular PrimeNG." }, { "code": null, "e": 24958, "s": 24946, "text": "Properties:" }, { "code": null, "e": 25088, "s": 24958, "text": "animate: It is used to display an animation on click of the slider bar, It is of a boolean data type, the default value is false." }, { "code": null, "e": 25206, "s": 25088, "text": "disabled: It specifies that the element should be disabled, It is of a boolean data type, the default value is false." }, { "code": null, "e": 25308, "s": 25206, "text": "min: It is used to set the Minimum boundary value, It is of number data type, the default value is 0." }, { "code": null, "e": 25412, "s": 25308, "text": "max: It is used to set the Maximum boundary value, It is of number data type, the default value is 100." }, { "code": null, "e": 25577, "s": 25412, "text": "orientation: It is used to set the orientation of the slider, valid values are horizontal and vertical, It is of string data type, the default value is horizontal." }, { "code": null, "e": 25697, "s": 25577, "text": "step: It is used to set the factor to increment/decrement the value, It is of number data type, the default value is 1." }, { "code": null, "e": 25820, "s": 25697, "text": "range: It is used to specify two boundary values to be picked, It is of the boolean data type, the default value is false." }, { "code": null, "e": 25932, "s": 25820, "text": "style: It is used to set the Inline style of the element, It is of string data type, the default value is null." 
}, { "code": null, "e": 26048, "s": 25932, "text": "styleClass: It is used to set the Style class of the element, It is of string data type, the default value is null." }, { "code": null, "e": 26269, "s": 26048, "text": "ariaLabelledBy: It is the ariaLabelledBy property that Establishes relationships between the component and label(s) where its value should be one or more element IDs, It is of string data type, the default value is null." }, { "code": null, "e": 26391, "s": 26269, "text": "tabindex: It is used to set the Index of the element in tabbing order, It is of number data type, the default value is 0." }, { "code": null, "e": 26399, "s": 26391, "text": "Event: " }, { "code": null, "e": 26467, "s": 26399, "text": "onChange: it is a callback that is fired on value change via slide." }, { "code": null, "e": 26532, "s": 26467, "text": "onSlideEnd: it is a callback that is fired when the slide stops." }, { "code": null, "e": 26541, "s": 26532, "text": "Styling:" }, { "code": null, "e": 26585, "s": 26541, "text": "p-slider: It is a styling Container element" }, { "code": null, "e": 26633, "s": 26585, "text": "p-slider-handle: It is a styling Handle element" }, { "code": null, "e": 26687, "s": 26635, "text": "Creating Angular Application & module installation:" }, { "code": null, "e": 26754, "s": 26687, "text": "Step 1: Create an Angular application using the following command." }, { "code": null, "e": 26769, "s": 26754, "text": "ng new appname" }, { "code": null, "e": 26866, "s": 26769, "text": "Step 2: After creating your project folder i.e. appname, move to it using the following command." }, { "code": null, "e": 26877, "s": 26866, "text": "cd appname" }, { "code": null, "e": 26926, "s": 26877, "text": "Step 3: Install PrimeNG in your given directory." 
}, { "code": null, "e": 26983, "s": 26926, "text": "npm install primeng --save\nnpm install primeicons --save" }, { "code": null, "e": 27035, "s": 26983, "text": "Project Structure: It will look like the following." }, { "code": null, "e": 27117, "s": 27035, "text": "Example 1: This is the basic example that shows how to use the Slider component. " }, { "code": null, "e": 27136, "s": 27117, "text": "app.component.html" }, { "code": "<h5>PrimeNG Slider component</h5><p-slider [(ngModel)]=\"val1\"></p-slider><p-slider [(ngModel)]=\"val2\" orientation=\"vertical\"></p-slider>", "e": 27273, "s": 27136, "text": null }, { "code": null, "e": 27287, "s": 27273, "text": "app.module.ts" }, { "code": "import { NgModule } from '@angular/core';import { BrowserModule } from '@angular/platform-browser';import { FormsModule } from '@angular/forms';import { HttpClientModule } from '@angular/common/http';import { BrowserAnimationsModule } from '@angular/platform-browser/animations';import { AppComponent } from './app.component';import { SliderModule } from 'primeng/slider';import { InputTextModule } from 'primeng/inputtext'; @NgModule({ imports: [ BrowserModule, BrowserAnimationsModule, SliderModule, InputTextModule, FormsModule ], declarations: [AppComponent], bootstrap: [AppComponent]})export class AppModule {}", "e": 27928, "s": 27287, "text": null }, { "code": null, "e": 27945, "s": 27928, "text": "app.component.ts" }, { "code": "import { Component } from '@angular/core'; @Component({ selector: 'my-app', templateUrl: './app.component.html'})export class AppComponent { val1: number; val2: number; rangeValues: number[] = [20, 80];}", "e": 28155, "s": 27945, "text": null }, { "code": null, "e": 28165, "s": 28157, "text": "Output:" }, { "code": null, "e": 28260, "s": 28165, "text": "Example 2: In this example, we will know how to use the max property in the slider component. 
" }, { "code": null, "e": 28279, "s": 28260, "text": "app.component.html" }, { "code": "<h5>Basic Slider: </h5><p-slider [(ngModel)]=\"basic\"></p-slider> <h5>Input Slider:</h5><input type=\"text\" pInputText [(ngModel)]=\"inp\" /><p-slider [(ngModel)]=\"inp\"></p-slider> <h5>Step Slider:</h5><p-slider [(ngModel)]=\"step\" [step]=\"15\"></p-slider> <h5>Range Slider:</h5><p-slider [(ngModel)]=\"range\" [range]=\"true\"></p-slider>", "e": 28612, "s": 28279, "text": null }, { "code": null, "e": 28629, "s": 28612, "text": "app.component.ts" }, { "code": "import { Component } from '@angular/core'; @Component({ selector: 'my-app', templateUrl: './app.component.html'})export class AppComponent { basic: number; inp: number = 50; step: number; range: number[] = [20, 80];}", "e": 28859, "s": 28629, "text": null }, { "code": null, "e": 28873, "s": 28859, "text": "app.module.ts" }, { "code": "import { NgModule } from '@angular/core';import { BrowserModule } from '@angular/platform-browser';import { FormsModule } from '@angular/forms';import { BrowserAnimationsModule } from '@angular/platform-browser/animations';import { AppComponent } from './app.component';import { SliderModule } from 'primeng/slider'; @NgModule({ imports: [BrowserModule, BrowserAnimationsModule, SliderModule, FormsModule], declarations: [AppComponent], bootstrap: [AppComponent]})export class AppModule {}", "e": 29397, "s": 28873, "text": null }, { "code": null, "e": 29405, "s": 29397, "text": "Output:" }, { "code": null, "e": 29465, "s": 29405, "text": "Reference: https://primefaces.org/primeng/showcase/#/slider" }, { "code": null, "e": 29481, "s": 29465, "text": "Angular-PrimeNG" }, { "code": null, "e": 29491, "s": 29481, "text": "AngularJS" }, { "code": null, "e": 29508, "s": 29491, "text": "Web Technologies" }, { "code": null, "e": 29606, "s": 29508, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 29615, "s": 29606, "text": "Comments" }, { "code": null, "e": 29628, "s": 29615, "text": "Old Comments" }, { "code": null, "e": 29663, "s": 29628, "text": "Angular PrimeNG Dropdown Component" }, { "code": null, "e": 29716, "s": 29663, "text": "How to make a Bootstrap Modal Popup in Angular 9/8 ?" }, { "code": null, "e": 29740, "s": 29716, "text": "Angular 10 (blur) Event" }, { "code": null, "e": 29783, "s": 29740, "text": "How to setup 404 page in angular routing ?" }, { "code": null, "e": 29832, "s": 29783, "text": "How to create module with Routing in Angular 9 ?" }, { "code": null, "e": 29888, "s": 29832, "text": "Top 10 Front End Developer Skills That You Need in 2022" }, { "code": null, "e": 29921, "s": 29888, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 29983, "s": 29921, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 30026, "s": 29983, "text": "How to fetch data from an API in ReactJS ?" } ]
Hazelcast - IMap
The java.util.Map interface supports storing key-value pairs in a single JVM, while java.util.concurrent.ConcurrentMap extends it to add thread safety within a single JVM with multiple threads.

Similarly, IMap extends ConcurrentMap and provides an interface which makes the map thread safe across JVMs. It provides similar functions: put, get, etc.

The IMap supports synchronous backup as well as asynchronous backup. Synchronous backup ensures that even if the JVM holding the map goes down, all entries are preserved and available from the backup.

Let's look at an example of the useful functions: adding elements and reading elements. We will execute the following code on two JVMs - the producer code on one and the consumer code on the other.

The first piece is the producer code, which creates a map and adds items to it.

public static void main(String... args) throws IOException, InterruptedException {
   // initialize hazelcast instance
   HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
   // create a map
   IMap<String, String> hzStock = hazelcast.getMap("stock");
   hzStock.put("Mango", "4");
   hzStock.put("Apple", "1");
   hzStock.put("Banana", "7");
   hzStock.put("Watermelon", "10");
   Thread.sleep(5000);
   System.exit(0);
}

The second piece is the consumer code, which reads the elements.

public static void main(String... args) throws IOException, InterruptedException {
   // initialize hazelcast instance
   HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
   // create a map
   IMap<String, String> hzStock = hazelcast.getMap("stock");
   for (Map.Entry<String, String> entry : hzStock.entrySet()) {
      System.out.println(entry.getKey() + ":" + entry.getValue());
   }
   Thread.sleep(5000);
   System.exit(0);
}

The output of the consumer code -

Mango:4
Apple:1
Banana:7
Watermelon:10

Here are some of the useful IMap functions:

put(K key, V value) - Add an element to the map.
remove(K key) - Remove an element from the map.
keySet() - Return a copy of all the keys in the map.
localKeySet() - Return a copy of all keys which are present in the local partition.
values() - Return a copy of all the values in the map.
size() - Return the count of elements in the map.
containsKey(K key) - Return true if the key is present.
executeOnEntries(EntryProcessor processor) - Applies the processor to all of the map's entries and returns the output of this application. We will look at an example of this in an upcoming section.
addEntryListener(EntryListener listener, value) - Notifies the subscriber of an element being removed/added/modified in the map.
addLocalEntryListener(EntryListener listener, value) - Notifies the subscriber of an element being removed/added/modified in the local partitions.

By default, keys in Hazelcast stay in the IMap indefinitely. If we have a very large set of keys, then we need to ensure that heavily used keys are kept in the IMap while rarely used ones are not, in order to get better performance and efficient memory usage.

For this purpose, we can manually delete rarely used keys via the remove()/evict() functions. However, Hazelcast also provides automatic eviction of keys based on various eviction algorithms. The eviction policy can be set by XML or programmatically.
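Before the configuration itself, it may help to build intuition for what an LRU (least recently used) eviction policy does. The sketch below is plain Java, not the Hazelcast API: a LinkedHashMap in access order that drops the least recently used entry once a size limit is crossed. The class name and the capacity of 3 are made up for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruEvictionSketch {
    public static void main(String[] args) {
        final int maxSize = 3;
        // accessOrder=true: iteration order is least-recently-used first
        Map<String, String> stock = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                // evict the LRU entry as soon as the map grows past maxSize
                return size() > maxSize;
            }
        };
        stock.put("Mango", "4");
        stock.put("Apple", "1");
        stock.put("Banana", "7");
        stock.get("Mango");             // touch Mango so it becomes recently used
        stock.put("Watermelon", "10");  // over capacity -> "Apple" is evicted
        System.out.println(stock.keySet()); // [Banana, Mango, Watermelon]
    }
}
```

Hazelcast's LRU policy works on the same principle, but it tracks access metadata per entry across the cluster rather than relying on a single JVM's map ordering.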
Let's look at an example of the XML configuration -

<map name="stock">
   <max-size policy="FREE_HEAP_PERCENTAGE">30</max-size>
   <eviction-policy>LFU</eviction-policy>
</map>

There are two attributes in the above configuration.

Max-size - Communicates to Hazelcast the limit at which the map "stock" is considered to have reached its maximum size.
Eviction-policy - Once the above max-size limit is hit, which algorithm to use to remove/evict a key.

Here are some of the useful max-size policies:

PER_NODE - Max number of entries per JVM for the map; this is the default policy.
FREE_HEAP - Minimum free heap memory to be kept aside (in MBytes) in the JVM.
FREE_HEAP_PERCENTAGE - Minimum free heap memory to be kept aside (in percent) in the JVM.
USED_HEAP - Maximum allowed heap memory used in the JVM (in MBytes).
USED_HEAP_PERCENTAGE - Maximum allowed heap memory used in the JVM (in percent).

Here are some of the useful eviction policies:

NONE - No eviction will be made; this is the default policy.
LFU - The least frequently used key would be evicted.
LRU - The least recently used key would be evicted.

Another useful parameter for eviction is time-to-live-seconds, i.e., TTL. With this, we can ask Hazelcast to remove any key which is older than X seconds. This ensures that we are proactive in removing older keys before the max-size limit is hit.

One important point to note about IMap is that, unlike the other collections, the data is partitioned across JVMs. Not all the data needs to be stored on a single JVM, yet the complete data is still accessible to all JVMs. This gives Hazelcast a way to scale linearly across the available JVMs and not be constrained by the memory of a single JVM.
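The partitioning just mentioned can be sketched in plain Java. Hazelcast assigns each entry to a partition by hashing the serialized key (internally it uses a Murmur-style hash; the simple byte-mixing hash below is a stand-in chosen purely for illustration) and taking the result modulo the partition count, which defaults to 271:

```java
public class PartitionSketch {
    // Toy stand-in for Hazelcast's real hash over the serialized key bytes
    static int partitionId(byte[] serializedKey, int partitionCount) {
        int h = 0;
        for (byte b : serializedKey) {
            h = 31 * h + b;
        }
        // keep the id non-negative for this sketch
        return Math.abs(h % partitionCount);
    }

    public static void main(String[] args) {
        for (String key : new String[] {"Mango", "Apple", "Banana", "Watermelon"}) {
            System.out.println(key + " -> partition " + partitionId(key.getBytes(), 271));
        }
    }
}
```

Because the partition id is a pure function of the key bytes, every member computes the same owner for a given key, which is what lets any JVM route get()/put() calls without a central lookup.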
The IMap instances are divided into multiple partitions. By default, the map is divided into 271 partitions, and these partitions are distributed across the available Hazelcast members. Each entry which is added to the map is stored in a single partition.

Let's execute this code on 2 JVMs.

public static void main(String... args) throws IOException, InterruptedException {
   // initialize hazelcast instance
   HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
   // create a map
   IMap<String, String> hzStock = hazelcast.getMap("stock");
   hzStock.put("Mango", "4");
   hzStock.put("Apple", "1");
   hzStock.put("Banana", "7");
   hzStock.put("Watermelon", "10");
   Thread.sleep(5000);
   // print the keys which are local to this instance
   hzStock.localKeySet().forEach(System.out::println);
   System.exit(0);
}

As seen in the following output, consumer 1 prints its own partition, which contains 2 keys -

Mango
Watermelon

Consumer 2 owns the partition which has the other 2 keys -

Banana
Apple

By default, IMap has one synchronous backup, which means that even if one node/member goes down, the data would not get lost. There are two types of backup:

Synchronous - The map.put(key, value) call does not succeed until the key has also been backed up on another node/member. Sync backups are blocking and thus impact the performance of the put call.
Async - The backup of the stored key is performed eventually. Async backups are non-blocking and fast, but they do not guarantee existence of the data if a member were to go down.

The backup counts can be configured using XML configuration.
For example, let's do it for our stock map -

<map name="stock">
   <backup-count>1</backup-count>
   <async-backup-count>1</async-backup-count>
</map>

In a Java-based HashMap, key comparison happens by checking equality via the hashCode() and equals() methods. For example, to keep it simple, a vehicle may have a serialId and a model.

public class Vehicle implements Serializable {
   private static final long serialVersionUID = 1L;
   private int serialId;
   private String model;

   public Vehicle(int serialId, String model) {
      super();
      this.serialId = serialId;
      this.model = model;
   }

   public int getId() {
      return serialId;
   }

   public String getModel() {
      return model;
   }

   @Override
   public int hashCode() {
      final int prime = 31;
      int result = 1;
      result = prime * result + serialId;
      return result;
   }

   @Override
   public boolean equals(Object obj) {
      if (this == obj)
         return true;
      if (obj == null)
         return false;
      if (getClass() != obj.getClass())
         return false;
      Vehicle other = (Vehicle) obj;
      if (serialId != other.serialId)
         return false;
      return true;
   }
}

When we try using the above class as the key for a HashMap and an IMap, we see the difference in comparison.

public static void main(String... args) throws IOException, InterruptedException {
   // create a Java based hash map
   Map<Vehicle, String> vehicleOwner = new HashMap<>();
   Vehicle v1 = new Vehicle(123, "Honda");
   vehicleOwner.put(v1, "John");
   Vehicle v2 = new Vehicle(123, null);
   System.out.println(vehicleOwner.containsKey(v2));

   // create a hazelcast map
   HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
   IMap<Vehicle, String> hzVehicleOwner = hazelcast.getMap("owner");
   hzVehicleOwner.put(v1, "John");
   System.out.println(hzVehicleOwner.containsKey(v2));
   System.exit(0);
}

Hazelcast serializes the key and stores it as a byte array in binary format. As the keys are serialized, the comparison cannot be made based on equals() and hashCode(). Serializing and deserializing are required in Hazelcast because functions like get(), containsKey(), etc. may be invoked on a node which does not own the key, so a remote call is required. Since serializing and deserializing are expensive operations, instead of using the equals() method, Hazelcast compares the byte arrays. What this means is that all the attributes of the Vehicle class should match, not just the id. So, let's execute the following code -

public static void main(String... args) throws IOException, InterruptedException {
   Vehicle v1 = new Vehicle(123, "Honda");
   // create a hazelcast map
   HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
   IMap<Vehicle, String> hzVehicleOwner = hazelcast.getMap("owner");
   // store v1 so that the lookup below has an entry to match
   hzVehicleOwner.put(v1, "John");
   Vehicle v3 = new Vehicle(123, "Honda");
   System.out.println(hzVehicleOwner.containsKey(v3));
   System.exit(0);
}

The output of the above code is -

true

This output means that all the attributes of Vehicle must match for equality.

EntryProcessor is a construct which supports sending the code to the data instead of bringing the data to the code. It supports serializing, transferring, and executing a function on the nodes which own the IMap keys, instead of bringing the data to the node which initiates the execution of the function.

Let's understand this with an example. Let's say we create an IMap of Vehicle -> Owner, and now we want to store the owner names in lowercase. So, how do we do that?

public static void main(String... args) throws IOException, InterruptedException {
   // create a hazelcast map
   HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
   IMap<Vehicle, String> hzVehicleOwner = hazelcast.getMap("owner");
   hzVehicleOwner.put(new Vehicle(123, "Honda"), "John");
   hzVehicleOwner.put(new Vehicle(23, "Hyundai"), "Betty");
   hzVehicleOwner.put(new Vehicle(103, "Mercedes"), "Jane");

   for (Map.Entry<Vehicle, String> entry : hzVehicleOwner.entrySet())
      hzVehicleOwner.put(entry.getKey(), entry.getValue().toLowerCase());

   for (Map.Entry<Vehicle, String> entry : hzVehicleOwner.entrySet())
      System.out.println(entry.getValue());
   System.exit(0);
}

The output of the above code is -

john
jane
betty

While this code seems simple, it has a major drawback in terms of scale if there is a high number of keys -

Processing happens on the single caller node instead of being distributed across nodes.
More time as well as memory is needed to get the key information onto the caller node.

That is where the EntryProcessor helps. We send the function of converting to lowercase to each node which holds the keys. This makes the processing parallel and keeps the memory requirements in check.

public static void main(String... args) throws IOException, InterruptedException {
   // create a hazelcast map
   HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
   IMap<Vehicle, String> hzVehicleOwner = hazelcast.getMap("owner");
   hzVehicleOwner.put(new Vehicle(123, "Honda"), "John");
   hzVehicleOwner.put(new Vehicle(23, "Hyundai"), "Betty");
   hzVehicleOwner.put(new Vehicle(103, "Mercedes"), "Jane");

   hzVehicleOwner.executeOnEntries(new OwnerToLowerCaseEntryProcessor());

   for (Map.Entry<Vehicle, String> entry : hzVehicleOwner.entrySet())
      System.out.println(entry.getValue());
   System.exit(0);
}

static class OwnerToLowerCaseEntryProcessor extends AbstractEntryProcessor<Vehicle, String> {
   @Override
   public Object process(Map.Entry<Vehicle, String> entry) {
      String ownerName = entry.getValue();
      entry.setValue(ownerName.toLowerCase());
      return null;
   }
}

The output of the above code is -

john
jane
betty
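The binary key comparison discussed earlier can be demonstrated without a Hazelcast cluster at all. The self-contained sketch below mirrors the Vehicle key (equality considers only serialId), but the serialization shown is plain java.io, not Hazelcast's wire format; it is only meant to show why two keys that are equal by equals() can still differ byte-for-byte once serialized:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Arrays;

public class BinaryKeyDemo {
    // Same shape as the Vehicle key above: equality considers only serialId
    static class Vehicle implements Serializable {
        private static final long serialVersionUID = 1L;
        final int serialId;
        final String model;

        Vehicle(int serialId, String model) {
            this.serialId = serialId;
            this.model = model;
        }

        @Override
        public int hashCode() {
            return 31 + serialId;
        }

        @Override
        public boolean equals(Object obj) {
            return obj instanceof Vehicle && ((Vehicle) obj).serialId == serialId;
        }
    }

    static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        Vehicle honda = new Vehicle(123, "Honda");
        Vehicle noModel = new Vehicle(123, null);
        // equals() says the keys match...
        System.out.println("equals(): " + honda.equals(noModel));
        // ...but the serialized bytes, which IMap compares, do not
        System.out.println("same bytes: " + Arrays.equals(serialize(honda), serialize(noModel)));
    }
}
```

This prints true for equals() and false for the byte comparison, which is exactly why IMap treats the two objects as different keys.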
[ { "code": null, "e": 2187, "s": 1963, "text": "The java.util.concurrent.Map provides an interface which supports storing key value pair in a single JVM. While java.util.concurrent.ConcurrentMap extends this to support thread safety in a single JVM with multiple threads." }, { "code": null, "e": 2349, "s": 2187, "text": "Similarly, IMap extends the ConcurrentHashMap and provides an interface which makes the map thread safe across JVMs. It provides similar functions: put, get etc." }, { "code": null, "e": 2558, "s": 2349, "text": "The IMap supports synchronous backup as well as asynchronous backup. Synchronous backup ensures that even if the JVM holding the queue goes down, all elements would be preserved and available from the backup." }, { "code": null, "e": 2608, "s": 2558, "text": "Let's look at an example of the useful functions." }, { "code": null, "e": 2753, "s": 2608, "text": "Adding elements and reading elements. Let’s execute the following code on two JVMs. The producer code on one and one consumer code on the other." }, { "code": null, "e": 2831, "s": 2753, "text": "The first piece is the producer code which creates a map and adds item to it." }, { "code": null, "e": 3267, "s": 2831, "text": "public static void main(String... args) throws IOException, InterruptedException {\n //initialize hazelcast instance\n HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();\n // create a map\n IMap<String, String> hzStock = hazelcast.getMap(\"stock\");\n hzStock.put(\"Mango\", \"4\");\n hzStock.put(\"Apple\", \"1\");\n hzStock.put(\"Banana\", \"7\");\n hzStock.put(\"Watermelon\", \"10\");\n Thread.sleep(5000);\n System.exit(0);\n}" }, { "code": null, "e": 3330, "s": 3267, "text": "The second piece is of consumer code which reads the elements." }, { "code": null, "e": 3772, "s": 3330, "text": "public static void main(String... 
args) throws IOException, InterruptedException {\n //initialize hazelcast instance\n HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();\n // create a map\n IMap<String, String> hzStock = hazelcast.getMap(\"stock\");\n for(Map.Entry<String, String> entry: hzStock.entrySet()){\n System.out.println(entry.getKey() + \":\" + entry.getValue());\n }\n Thread.sleep(5000);\n System.exit(0);\n}" }, { "code": null, "e": 3815, "s": 3772, "text": "The output for the code for the consumer βˆ’" }, { "code": null, "e": 3855, "s": 3815, "text": "Mango:4\nApple:1\nBanana:7\nWatermelon:10\n" }, { "code": null, "e": 3875, "s": 3855, "text": "put(K key, V value)" }, { "code": null, "e": 3901, "s": 3875, "text": "Add an element to the map" }, { "code": null, "e": 3915, "s": 3901, "text": "remove(K key)" }, { "code": null, "e": 3946, "s": 3915, "text": "Remove an element from the map" }, { "code": null, "e": 3955, "s": 3946, "text": "keySet()" }, { "code": null, "e": 3996, "s": 3955, "text": "Return a copy of all the keys in the map" }, { "code": null, "e": 4010, "s": 3996, "text": "localKeySet()" }, { "code": null, "e": 4077, "s": 4010, "text": "Return a copy of all keys which are present in the local partition" }, { "code": null, "e": 4086, "s": 4077, "text": "values()" }, { "code": null, "e": 4129, "s": 4086, "text": "Return a copy of all the values in the map" }, { "code": null, "e": 4136, "s": 4129, "text": "size()" }, { "code": null, "e": 4176, "s": 4136, "text": "Return the count of elements in the map" }, { "code": null, "e": 4195, "s": 4176, "text": "containsKey(K key)" }, { "code": null, "e": 4229, "s": 4195, "text": "Return true if the key is present" }, { "code": null, "e": 4273, "s": 4229, "text": "executeOnEnteries(EntryProcessor processor)" }, { "code": null, "e": 4426, "s": 4273, "text": "Applies the processor on all the map’s keys and returns the output of this application. We will look at an example for the same in the upcoming section." 
}, { "code": null, "e": 4474, "s": 4426, "text": "addEntryListener(EntryListener listener, value)" }, { "code": null, "e": 4553, "s": 4474, "text": "Notifies the subscriber of an element being removed/added/modified in the map." }, { "code": null, "e": 4606, "s": 4553, "text": "addLocalEntryListener(EntryListener listener, value)" }, { "code": null, "e": 4697, "s": 4606, "text": "Notifies the subscriber of an element being removed/added/modified in the local partitions" }, { "code": null, "e": 4993, "s": 4697, "text": "By default, keys in Hazelcast stay indefinitely in the IMap. If we have a very large set of keys, then we need to ensure that the keys which are heavily used are stored in the IMap as compared to the ones which are used less often, in order to have better performance and efficient memory usage." }, { "code": null, "e": 5213, "s": 4993, "text": "For this purpose, one can manually delete keys via remove()/evict() functions for the keys which are not used that often. However, Hazelcast also provides automatic eviction of keys based on various eviction algorithms." }, { "code": null, "e": 5304, "s": 5213, "text": "This policy can be set by XML or programmatically. Let’s look at an example for the same βˆ’" }, { "code": null, "e": 5430, "s": 5304, "text": "<map name=\"stock\">\n <max-size policy=\"FREE_HEAP_PERCENTAGE\">30</max-size>\n <eviction-policy>LFU</eviction-policy>\n</map>\n" }, { "code": null, "e": 5483, "s": 5430, "text": "There are two attributes in the above configuration." }, { "code": null, "e": 5617, "s": 5483, "text": "Max-size βˆ’ Policy which is used to communicate to Hazelcast the limit at which we claim that max size of the map β€œstock” has reached." }, { "code": null, "e": 5751, "s": 5617, "text": "Max-size βˆ’ Policy which is used to communicate to Hazelcast the limit at which we claim that max size of the map β€œstock” has reached." 
}, { "code": null, "e": 5855, "s": 5751, "text": "Eviction-policy βˆ’ Once the above max-size policy is hit, what algorithm to use to remove/evict the key." }, { "code": null, "e": 5959, "s": 5855, "text": "Eviction-policy βˆ’ Once the above max-size policy is hit, what algorithm to use to remove/evict the key." }, { "code": null, "e": 6004, "s": 5959, "text": "Here are some of the useful max_size policy." }, { "code": null, "e": 6013, "s": 6004, "text": "PER_NODE" }, { "code": null, "e": 6084, "s": 6013, "text": "Max number of entries per JVM for the map which is the default policy." }, { "code": null, "e": 6094, "s": 6084, "text": "FREE_HEAP" }, { "code": null, "e": 6159, "s": 6094, "text": "Minimum free heap memory to be kept aside (in MBytes) in the JVM" }, { "code": null, "e": 6180, "s": 6159, "text": "FREE_HEAP_PERCENTAGE" }, { "code": null, "e": 6246, "s": 6180, "text": "Minimum free heap memory to be kept aside (in percent) in the JVM" }, { "code": null, "e": 6253, "s": 6246, "text": "take()" }, { "code": null, "e": 6325, "s": 6253, "text": "Return the head of the queue or wait till the element becomes available" }, { "code": null, "e": 6335, "s": 6325, "text": "USED_HEAP" }, { "code": null, "e": 6391, "s": 6335, "text": "Maximum allowed heap memory used in the JVM (in MBytes)" }, { "code": null, "e": 6412, "s": 6391, "text": "USED_HEAP_PERCENTAGE" }, { "code": null, "e": 6469, "s": 6412, "text": "Maximum allowed heap memory used in the JVM (in percent)" }, { "code": null, "e": 6515, "s": 6469, "text": "Here are some of the useful eviction policy βˆ’" }, { "code": null, "e": 6520, "s": 6515, "text": "NONE" }, { "code": null, "e": 6573, "s": 6520, "text": "No eviction will be made which is the default policy" }, { "code": null, "e": 6577, "s": 6573, "text": "LFU" }, { "code": null, "e": 6616, "s": 6577, "text": "Least frequently used would be evicted" }, { "code": null, "e": 6620, "s": 6616, "text": "LRU" }, { "code": null, "e": 6661, "s": 6620, "text": 
"Least recently used key would be evicted" }, { "code": null, "e": 6910, "s": 6661, "text": "Another useful parameter for eviction is also time-to-live-seconds, i.e., TTL. With this, we can ask Hazelcast to remove any key which is older than X seconds. This ensures that we are proactive in removing older keys before max-size policy is hit." }, { "code": null, "e": 7248, "s": 6910, "text": "One important point to note about IMap is that unlike other collections, the data is partitioned across JVMs. All the data doesn't need to be stored/present on a single JVM. Complete data is still accessible to all JVMs. This gives Hazelcast a way to scale linearly across available JVMs and not be constrained by memory of a single JVM." }, { "code": null, "e": 7503, "s": 7248, "text": "The IMap instances are divided into multiple partitions. By default, the map is divided into 271 partitions. And these partitions are distributed across Hazelcast members available. Each entry in which is added to the map is stored in a single partition." }, { "code": null, "e": 7538, "s": 7503, "text": "Let’s execute this code on 2 JVMs." }, { "code": null, "e": 8084, "s": 7538, "text": "public static void main(String... 
args) throws IOException, InterruptedException {\n //initialize hazelcast instance\n HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();\n // create a map\n IMap<String, String> hzStock = hazelcast.getMap(\"stock\");\n hzStock.put(\"Mango\", \"4\");\n hzStock.put(\"Apple\", \"1\");\n hzStock.put(\"Banana\", \"7\");\n hzStock.put(\"Watermelon\", \"10\");\n Thread.sleep(5000);\n // print the keys which are local to these instance\n hzStock.localKeySet().forEach(System.out::println);\n System.exit(0);\n}" }, { "code": null, "e": 8181, "s": 8084, "text": "As seen in the following output, the consumer 1 prints its own partition which contains 2 keys βˆ’" }, { "code": null, "e": 8199, "s": 8181, "text": "Mango\nWatermelon\n" }, { "code": null, "e": 8258, "s": 8199, "text": "Consumer 2 owns the partition which has the other 2 keys βˆ’" }, { "code": null, "e": 8272, "s": 8258, "text": "Banana\nApple\n" }, { "code": null, "e": 8430, "s": 8272, "text": "By default, IMap has one synchronous backup, which means that even if one node/member goes down, the data would not get lost. There are two types of back up." }, { "code": null, "e": 8616, "s": 8430, "text": "Synchronous βˆ’ The map.put(key, value) would not succeed till the key is also backed up on another node/member. Sync backups are blocking and thus impact the performance of the put call." }, { "code": null, "e": 8802, "s": 8616, "text": "Synchronous βˆ’ The map.put(key, value) would not succeed till the key is also backed up on another node/member. Sync backups are blocking and thus impact the performance of the put call." }, { "code": null, "e": 8981, "s": 8802, "text": "Async βˆ’ The backup of the stored key is performed eventually. Async backups are non-blocking and fast but they do not guarantee existence of the data if a member were to go down." }, { "code": null, "e": 9160, "s": 8981, "text": "Async βˆ’ The backup of the stored key is performed eventually. 
Async backups are non-blocking and fast but they do not guarantee existence of the data if a member were to go down." }, { "code": null, "e": 9258, "s": 9160, "text": "The value can be configured using XML configuration. For example, let's do it for our stock map βˆ’" }, { "code": null, "e": 9364, "s": 9258, "text": "<map name=\"stock\">\n <backup-count>1</backup-count>\n <async-backup-count>1</async-backup-count>\n</map>\n" }, { "code": null, "e": 9544, "s": 9364, "text": "In Java-based HashMap, key comparison happens by checking equality of the hashCode() and equals() method. For example, a vehicle may have serialId and the model to keep it simple." }, { "code": null, "e": 10417, "s": 9544, "text": "public class Vehicle implements Serializable{\n private static final long serialVersionUID = 1L;\n private int serialId;\n private String model;\n\n public Vehicle(int serialId, String model) {\n super();\n this.serialId = serialId;\n this.model = model;\n }\n public int getId() {\n return serialId;\n }\n public String getModel() {\n return model;\n }\n @Override\n public int hashCode() {\n final int prime = 31;\n int result = 1;\n result = prime * result + serialId;\n return result;\n }\n @Override\n public boolean equals(Object obj) {\n if (this == obj)\n return true;\n if (obj == null)\n return false;\n if (getClass() != obj.getClass())\n return false;\n Vehicle other = (Vehicle) obj;\n if (serialId != other.serialId)\n return false;\n return true;\n }\n}" }, { "code": null, "e": 10521, "s": 10417, "text": "When we try using the above class as the key for HashMap and IMap, we see the difference in comparison." }, { "code": null, "e": 11153, "s": 10521, "text": "public static void main(String...
args) throws IOException, InterruptedException {\n // create a Java based hash map\n Map<Vehicle, String> vehicleOwner = new HashMap<>();\n Vehicle v1 = new Vehicle(123, \"Honda\");\n vehicleOwner.put(v1, \"John\");\n \n Vehicle v2 = new Vehicle(123, null);\n System.out.println(vehicleOwner.containsKey(v2));\n \n // create a hazelcast map\n HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();\n IMap<Vehicle, String> hzVehicleOwner = hazelcast.getMap(\"owner\");\n hzVehicleOwner.put(v1, \"John\");\n System.out.println(hzVehicleOwner.containsKey(v2));\n System.exit(0);\n}" }, { "code": null, "e": 11324, "s": 11153, "text": "Hazelcast serializes the key and stores it as a byte array in binary format. As these keys are serialized, the comparison cannot be made based on equals() and hashcode()." }, { "code": null, "e": 11519, "s": 11324, "text": "Serializing and Deserializing are required in case of Hazelcast because the function get(), containsKey(), etc. may be invoked on the node which does not own the key, so remote call is required." }, { "code": null, "e": 11647, "s": 11519, "text": "Serializing and Deserializng are expensive operations and so, instead of using equals() method, Hazelcast compares byte arrays." }, { "code": null, "e": 11776, "s": 11647, "text": "What this means is that all the attributes of the Vehicle class should match not just id. So, let’s execute the following code βˆ’" }, { "code": null, "e": 12186, "s": 11776, "text": "public static void main(String... 
args) throws IOException, InterruptedException {\n Vehicle v1 = new Vehicle(123, \"Honda\");\n // create a hazelcast map\n HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();\n IMap<Vehicle, String> hzVehicleOwner = hazelcast.getMap(\"owner\");\n Vehicle v3 = new Vehicle(123, \"Honda\");\n System.out.println(hzVehicleOwner.containsKey(v3));\n System.exit(0);\n}" }, { "code": null, "e": 12220, "s": 12186, "text": "The output of the above code is βˆ’" }, { "code": null, "e": 12226, "s": 12220, "text": "true\n" }, { "code": null, "e": 12301, "s": 12226, "text": "This output means all the attributes of Vehicle should match for equality." }, { "code": null, "e": 12609, "s": 12301, "text": "EntryProcessor is a construct which supports sending of code to the data instead of bringing data to the code. It supports serializing, transferring, and the execution of function on the node which owns the IMap keys instead of bringing in the data to the node which initiates the execution of the function." }, { "code": null, "e": 12771, "s": 12609, "text": "Let’s understand this with an example. Let’s say we create an IMap of Vehicle -> Owner. And now, we want to store lowercase for the owner. So, how do we do that?" }, { "code": null, "e": 13468, "s": 12771, "text": "public static void main(String... 
args) throws IOException, InterruptedException {\n // create a hazelcast map\n HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();\n IMap<Vehicle, String> hzVehicleOwner = hazelcast.getMap(\"owner\");\n hzVehicleOwner.put(new Vehicle(123, \"Honda\"), \"John\");\n hzVehicleOwner.put(new Vehicle(23, \"Hyundai\"), \"Betty\");\n hzVehicleOwner.put(new Vehicle(103, \"Mercedes\"), \"Jane\");\n for(Map.Entry<Vehicle, String> entry: hzVehicleOwner.entrySet())\n hzVehicleOwner.put(entry.getKey(), entry.getValue().toLowerCase());\n for(Map.Entry<Vehicle, String> entry: hzVehicleOwner.entrySet())\n System.out.println(entry.getValue());\n System.exit(0);\n}" }, { "code": null, "e": 13502, "s": 13468, "text": "The output of the above code is βˆ’" }, { "code": null, "e": 13519, "s": 13502, "text": "john\njane\nbetty\n" }, { "code": null, "e": 13626, "s": 13519, "text": "While this code seems simple, it has a major drawback in terms of scale if there are high number of keys βˆ’" }, { "code": null, "e": 13719, "s": 13626, "text": "Processing would happen on the single/caller node instead of being distributed across nodes." }, { "code": null, "e": 13812, "s": 13719, "text": "Processing would happen on the single/caller node instead of being distributed across nodes." }, { "code": null, "e": 13903, "s": 13812, "text": "More time as well as memory would be needed to get the key information on the caller node." }, { "code": null, "e": 13994, "s": 13903, "text": "More time as well as memory would be needed to get the key information on the caller node." }, { "code": null, "e": 14195, "s": 13994, "text": "That is where the EntryProcessor helps. We send the function of converting to lowercase to each node which holds the key. This makes the processing parallel and keeps the memory requirements in check." }, { "code": null, "e": 15110, "s": 14195, "text": "public static void main(String... 
args) throws IOException, InterruptedException {\n // create a hazelcast map\n HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();\n IMap<Vehicle, String> hzVehicleOwner = hazelcast.getMap(\"owner\");\n hzVehicleOwner.put(new Vehicle(123, \"Honda\"), \"John\");\n hzVehicleOwner.put(new Vehicle(23, \"Hyundai\"), \"Betty\");\n hzVehicleOwner.put(new Vehicle(103, \"Mercedes\"), \"Jane\");\n hzVehicleOwner.executeOnEntries(new OwnerToLowerCaseEntryProcessor());\n for(Map.Entry<Vehicle, String> entry: hzVehicleOwner.entrySet())\n System.out.println(entry.getValue());\n System.exit(0);\n}\nstatic class OwnerToLowerCaseEntryProcessor extends\nAbstractEntryProcessor<Vehicle, String> {\n @Override\n public Object process(Map.Entry<Vehicle, String> entry) {\n String ownerName = entry.getValue();\n entry.setValue(ownerName.toLowerCase());\n return null;\n }\n}" }, { "code": null, "e": 15144, "s": 15110, "text": "The output of the above code is βˆ’" }, { "code": null, "e": 15161, "s": 15144, "text": "john\njane\nbetty\n" }, { "code": null, "e": 15168, "s": 15161, "text": " Print" }, { "code": null, "e": 15179, "s": 15168, "text": " Add Notes" } ]
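The partition routing described above (271 partitions by default, each entry living in exactly one of them, ownership spread across members) can be sketched in a few lines of Python. This is an illustration only: Hazelcast derives partition IDs from the serialized key with its own hash function, so real assignments will differ, and `partition_id` / `distribute` are made-up names.

```python
PARTITION_COUNT = 271  # Hazelcast's default number of partitions

def partition_id(key):
    # Illustrative routing: every key deterministically maps to one of
    # the fixed partitions. Hazelcast actually hashes the *serialized*
    # key with its own algorithm, so real ids will differ.
    return hash(key) % PARTITION_COUNT

def distribute(partitions, members):
    # Spread partition ids over cluster members round-robin,
    # mimicking how ownership is balanced across JVMs.
    owners = {m: [] for m in range(members)}
    for pid in range(partitions):
        owners[pid % members].append(pid)
    return owners

# Two members, as in the article's two-JVM demo: each ends up owning
# roughly half of the 271 partitions.
owners = distribute(PARTITION_COUNT, 2)
```

This is why each JVM in the demo prints only part of the key set: each member serves reads and writes for the partitions it owns, while the backups described above cover it if a member goes down.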
Longest Harmonious Subsequence in C++
Suppose we have an integer array; we have to find the length of its longest harmonious subsequence among all its possible subsequences. As we know, a harmonious sequence is an array where the difference between its maximum value and its minimum value is exactly 1.

So, if the input is like [1,3,2,2,5,2,3,7], then the output will be 5, as the longest harmonious subsequence is [3,2,2,2,3].

To solve this, we will follow these steps −

- Define one map m
- for n in nums − increase m[n] by 1
- for each key-value pair (k,v) in m −
   - it := position of (k+1) in m
   - if 'it' is in m, then − max_ := maximum of max_ and (v + value of it)
- return max_

Let us see the following implementation to get a better understanding −

Live Demo

#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
   int findLHS(vector<int>& nums) {
      unordered_map<int, int> m;
      for (const int n : nums)
         ++m[n];
      int max_{ 0 };
      for (const auto & [ k, v ] : m) {
         auto it = m.find(k + 1);
         if (it != m.end())
            max_ = max(max_, v + it->second);
      }
      return max_;
   }
};
int main(){
   Solution ob;
   vector<int> v = {2,4,3,3,6,3,4,8};
   cout << (ob.findLHS(v));
}

{2,4,3,3,6,3,4,8}
5
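For cross-checking the examples in another language, here is a minimal Python sketch of the identical frequency-map algorithm; the function name `find_lhs` is ours, mirroring the C++ `findLHS`.

```python
from collections import Counter

def find_lhs(nums):
    # Count occurrences of each value; for every value k whose
    # neighbour k+1 is also present, the two counts together form a
    # harmonious subsequence (max - min == 1).
    counts = Counter(nums)
    best = 0
    for k, v in counts.items():
        if k + 1 in counts:
            best = max(best, v + counts[k + 1])
    return best
```

Both sample inputs, [1,3,2,2,5,2,3,7] and {2,4,3,3,6,3,4,8}, yield 5 under this sketch, matching the C++ program's output.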
[ { "code": null, "e": 1332, "s": 1062, "text": "Suppose we have an integer array; we have to find the length of its longest harmonious subsequence among all its possible subsequences. As we know a harmonious sequence array is an array where the difference between its maximum value and its minimum value is exactly 1." }, { "code": null, "e": 1457, "s": 1332, "text": "So, if the input is like [1,3,2,2,5,2,3,7], then the output will be 5, as the longest harmonious subsequence is [3,2,2,2,3]." }, { "code": null, "e": 1501, "s": 1457, "text": "To solve this, we will follow these steps βˆ’" }, { "code": null, "e": 1518, "s": 1501, "text": "Define one map m" }, { "code": null, "e": 1535, "s": 1518, "text": "Define one map m" }, { "code": null, "e": 1571, "s": 1535, "text": "for n in nums βˆ’(increase m[n] by 1)" }, { "code": null, "e": 1587, "s": 1571, "text": "for n in nums βˆ’" }, { "code": null, "e": 1608, "s": 1587, "text": "(increase m[n] by 1)" }, { "code": null, "e": 1629, "s": 1608, "text": "(increase m[n] by 1)" }, { "code": null, "e": 1750, "s": 1629, "text": "for key-value pair (k,v) in m βˆ’it := position of (k+1) in mif 'it' is in m, then βˆ’max_:= maximum of max_ and value of it" }, { "code": null, "e": 1782, "s": 1750, "text": "for key-value pair (k,v) in m βˆ’" }, { "code": null, "e": 1811, "s": 1782, "text": "it := position of (k+1) in m" }, { "code": null, "e": 1840, "s": 1811, "text": "it := position of (k+1) in m" }, { "code": null, "e": 1902, "s": 1840, "text": "if 'it' is in m, then βˆ’max_:= maximum of max_ and value of it" }, { "code": null, "e": 1926, "s": 1902, "text": "if 'it' is in m, then βˆ’" }, { "code": null, "e": 1965, "s": 1926, "text": "max_:= maximum of max_ and value of it" }, { "code": null, "e": 2004, "s": 1965, "text": "max_:= maximum of max_ and value of it" }, { "code": null, "e": 2016, "s": 2004, "text": "return max_" }, { "code": null, "e": 2028, "s": 2016, "text": "return max_" }, { "code": null, "e": 2100, "s": 2028, "text":
"Let us see the following implementation to get a better understanding βˆ’" }, { "code": null, "e": 2111, "s": 2100, "text": " Live Demo" }, { "code": null, "e": 2595, "s": 2111, "text": "#include <bits/stdc++.h>\nusing namespace std;\nclass Solution {\npublic:\n int findLHS(vector<int>& nums) {\n unordered_map<int, int> m;\n for (const int n : nums)\n ++m[n];\n int max_{ 0 };\n for (const auto & [ k, v ] : m) {\n auto it = m.find(k + 1);\n if (it != m.end())\n max_ = max(max_, v + it->second);\n }\n return max_;\n }\n};\nmain(){\n Solution ob;\n vector<int> v = {2,4,3,3,6,3,4,8};\n cout << (ob.findLHS(v));\n}" }, { "code": null, "e": 2613, "s": 2595, "text": "{2,4,3,3,6,3,4,8}" }, { "code": null, "e": 2615, "s": 2613, "text": "5" } ]
Facial Landmark Detection for Occluded Angled Faces | Towards Data Science
Facial landmarks detection or facial keypoints detection has a lot of uses in computer vision like face alignment, drowsiness detection, Snapchat filters to name a few. The most widely known model for this task is Dlib's 68 keypoints landmark predictor which gives very good results in real-time. But the problem starts when the face is occluded or at an angle to the camera. Getting accurate results at angles is crucial for tasks like head pose estimation. So in this article, I will introduce a lesser-known Tensorflow model that can achieve this mission.

Now, this is finding drawbacks just because I want to, but in essence, installing Dlib in Windows is a little tough and one needs to install Cmake and some other applications to do so. If you are using Anaconda and try using conda install, then the version for Windows is outdated as well and does not support Python>3.5. If you are using any other operating system then you will be fine. Instead of writing what's wrong, I'll just show it to you.

All right, enough of maligning Dlib, let's move on to the real stuff.

We will require Tensorflow 2 and OpenCV for this task.

# pip install
pip install tensorflow
pip install opencv-python

# conda install
conda install -c conda-forge tensorflow
conda install -c conda-forge opencv

Our first step is to find the faces in the images on which we can find facial landmarks. For this task, we will be using a Caffe model of OpenCV's DNN module. If you are wondering how it fares against other models like Haar Cascades or Dlib's frontal face detector or you want to know more about it in-depth then you can refer to this article:

towardsdatascience.com

You can download the required models from my GitHub repository.
import cv2
import numpy as np

modelFile = "models/res10_300x300_ssd_iter_140000.caffemodel"
configFile = "models/deploy.prototxt.txt"
net = cv2.dnn.readNetFromCaffe(configFile, modelFile)
img = cv2.imread('test.jpg')
h, w = img.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 1.0, (300, 300), (104.0, 117.0, 123.0))
net.setInput(blob)
faces = net.forward()
# to draw faces on image
for i in range(faces.shape[2]):
    confidence = faces[0, 0, i, 2]
    if confidence > 0.5:
        box = faces[0, 0, i, 3:7] * np.array([w, h, w, h])
        (x, y, x1, y1) = box.astype("int")
        cv2.rectangle(img, (x, y), (x1, y1), (0, 0, 255), 2)

Load the network using cv2.dnn.readNetFromCaffe and pass the model's layers and weights as its arguments. It performs best on images resized to 300x300.

We will be using a facial landmark detector provided by Yin Guobing in this Github repo. It also gives 68 landmarks and it is a Tensorflow CNN trained on 5 datasets! The pre-trained model can be found here. The author has also written a series of posts explaining the background, dataset, preprocessing, model architecture, training, and deployment that can be found here. I have provided their outlines here, but I would strongly encourage you to read them.

In the first of those series, he describes the problem of stability of facial landmarks in videos, followed by laying out the existing solutions like OpenFace and Dlib's facial landmark detection along with the datasets available. The third article is all about data preprocessing and making it ready to use. In the next two articles, the work is to extract the faces and apply facial landmarks on them to make them ready to train a CNN and store them as TFRecord files. In the sixth article, a model is trained using Tensorflow. In the final article, the model is exported as an API and shown how to use it in Python.

We need the coordinates of the faces that act as the region of interest and are extracted from the image.
Then it is converted to a square shape of size 128x128 and passed to the model, which returns the 68 key points, which can then be normalized to the dimensions of the original image.

The complete code used for this piece of work:

Now that you know it can perform well on side-faces, it can be utilized to make a head-pose estimator. If you want to do so, you can refer to this article:

towardsdatascience.com

One of the major selling points of Dlib was its speed. Let's see how they compare on my i5 processor (yeah 😔).

Dlib gives ~11.5 FPS and the landmark prediction step takes around 0.005 seconds.

The Tensorflow model gives ~7.2 FPS and the landmark prediction step takes around 0.05 seconds.

So if speed is the main concern and occluded or angled faces not so much, then Dlib might be better suited for you; otherwise I feel that the Tensorflow model reigns supreme without compromising a lot on speed.

You find the properly documented code for both face detection and landmark detection here on GitHub, where I have tried to make an AI for online proctoring.
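The crop-to-square and coordinate-denormalization steps described in the article reduce to simple arithmetic. The sketch below is illustrative only; `square_box` and `denormalize` are hypothetical helper names, not the repository's actual functions.

```python
def square_box(x, y, x1, y1):
    # Hypothetical helper: expand the shorter side of a face box so it
    # becomes a square, keeping it centred. Landmark CNNs such as the
    # one described expect a square crop (later resized to 128x128).
    w, h = x1 - x, y1 - y
    diff = abs(w - h)
    if w < h:
        x -= diff // 2
        x1 += diff - diff // 2
    else:
        y -= diff // 2
        y1 += diff - diff // 2
    return (x, y, x1, y1)

def denormalize(marks, box):
    # Hypothetical helper: map landmarks predicted in [0, 1] over the
    # square crop back to pixel positions in the original image.
    x, y, x1, y1 = box
    size = x1 - x
    return [(x + mx * size, y + my * size) for mx, my in marks]

# A 100x140 face box becomes a centred 140x140 square.
box = square_box(10, 20, 110, 160)
```

Applying a mapping like denormalize to the 68 predicted points then yields coordinates directly drawable on the original frame.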
[ { "code": null, "e": 605, "s": 46, "text": "Facial landmarks detection or facial keypoints detection has a lot of uses in computer vision like face alignment, drowsiness detection, Snapchat filters to name a few. The most widely known model for this task is Dlib’s 68 keypoints landmark predictor which gives very good results in real-time. But the problem starts when the face is occluded or at an angle to the camera. Getting accurate results at angles is crucial for tasks like head pose estimation. So in this article, I will introduce a lesser-known Tensorflow model that can achieve this mission." }, { "code": null, "e": 993, "s": 605, "text": "Now, this is finding drawbacks just because I want to but in essence, installing Dlib in Windows is a little tough and one needs to install Cmake and some other applications to do so. If you are using Anaconda and try using conda install, then the version for Windows is outdated as well and does not support Python>3.5. If you are using any other operating system then you will be fine." }, { "code": null, "e": 1052, "s": 993, "text": "Instead of writing what’s wrong, I’ll just show it to you." }, { "code": null, "e": 1120, "s": 1052, "text": "All right enough of maligning Dlib let’s move on to the real stuff." }, { "code": null, "e": 1175, "s": 1120, "text": "We will require Tensorflow 2 and OpenCV for this task." }, { "code": null, "e": 1318, "s": 1175, "text": "# pip installpip install tensorflowpip install opencv# conda installconda install -c conda-forge tensorflowconda install -c conda-forge opencv" }, { "code": null, "e": 1662, "s": 1318, "text": "Our first step is to find the faces in the images on which we can find facial landmarks. For this task, we will be using a Caffe model of OpenCV’s DNN module. 
If you are wondering how it fares against other models like Haar Cascades or Dlib’s frontal face detector or you want to know more about it in-depth then you can refer to this article:" }, { "code": null, "e": 1685, "s": 1662, "text": "towardsdatascience.com" }, { "code": null, "e": 1749, "s": 1685, "text": "You can download the required models from my GitHub repository." }, { "code": null, "e": 2408, "s": 1749, "text": "import cv2import numpy as npmodelFile = \"models/res10_300x300_ssd_iter_140000.caffemodel\"configFile = \"models/deploy.prototxt.txt\"net = cv2.dnn.readNetFromCaffe(configFile, modelFile)img = cv2.imread('test.jpg')h, w = img.shape[:2]blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 1.0,(300, 300), (104.0, 117.0, 123.0))net.setInput(blob)faces = net.forward()#to draw faces on imagefor i in range(faces.shape[2]): confidence = faces[0, 0, i, 2] if confidence > 0.5: box = faces[0, 0, i, 3:7] * np.array([w, h, w, h]) (x, y, x1, y1) = box.astype(\"int\") cv2.rectangle(img, (x, y), (x1, y1), (0, 0, 255), 2)" }, { "code": null, "e": 2561, "s": 2408, "text": "Load the network using cv2.dnn.readNetFromCaffe and pass the model's layers and weights as its arguments. It performs best on images resized to 300x300." }, { "code": null, "e": 3020, "s": 2561, "text": "We will be using a facial landmark detector provided by Yin Guobing in this Github repo. It also gives 68 landmarks and it is a Tensorflow CNN trained on 5 datasets! The pre-trained model can be found here. The author has also written a series of posts explaining the background, dataset, preprocessing, model architecture, training, and deployment that can be found here. I have provided their outlines here, but I would strongly encourage you to read them." 
}, { "code": null, "e": 3636, "s": 3020, "text": "In the first of those series, he describes the problem of stability of facial landmarks in videos followed by labeling out the existing solutions like OpenFace and Dlib’s facial landmark detection along with the datasets available. The third article is all about data preprocessing and making it ready to use. In the next two articles, the work is to extract the faces and apply facial landmarks on it to make it ready to train a CNN and store them as TFRecord files. In the sixth article, a model is trained using Tensorflow. In the final article, the model is exported as an API and shown how to use it in Python." }, { "code": null, "e": 3923, "s": 3636, "text": "We need the coordinates of the faces that acts as the region of interest and is extracted from the image. Then it is converted to a square shape of size 128x128 and passed to the model which returns the 68 key points which can then be normalized to the dimensions of the original image." }, { "code": null, "e": 3970, "s": 3923, "text": "The complete code used for this piece of work:" }, { "code": null, "e": 4122, "s": 3970, "text": "Now, that you know it can perform well on side-faces it can be utilized to make a head-pose estimator. If you want to do you can refer to this article:" }, { "code": null, "e": 4145, "s": 4122, "text": "towardsdatascience.com" }, { "code": null, "e": 4256, "s": 4145, "text": "One of the major selling points of Dlib was its speed. Let’s see how they compare on my i5 processor (yeah πŸ˜”)." }, { "code": null, "e": 4338, "s": 4256, "text": "Dlib gives ~11.5 FPS and the landmark prediction step takes around 0.005 seconds." }, { "code": null, "e": 4434, "s": 4338, "text": "The Tensorflow model gives ~7.2 FPS and the landmark prediction step takes around 0.05 seconds." 
}, { "code": null, "e": 4643, "s": 4434, "text": "So if speed is the main concern and occluded or angled faces not so much then Dlib might be better suited for you otherwise I feel that the Tensorflow model reigns supreme without compromising a lot on speed." } ]
How do we check if a String contains a substring (ignoring case) in Java?
The contains() method of the String class accepts a String value as a parameter, verifies whether the current String object contains the specified String, and returns true if it does (else false).

The toLowerCase() method of the String class converts all the characters in the current String into lower case and returns the result.

To find whether a String contains a particular sub string irrespective of case −

- Get the String.
- Get the sub string.
- Convert the String value into lower case letters using the toLowerCase() method, store it as fileContents.
- Convert the sub string value into lower case letters using the toLowerCase() method, store it as subString.
- Invoke the contains() method on fileContents by passing subString as a parameter to it.

Assume we have a file named sample.txt in the D directory with the following content −

Tutorials point originated from the idea that there exists a class of readers who respond better to on-line content and prefer to learn new skills at their own pace from the comforts of their drawing rooms. At Tutorials point we provide high quality learning-aids for free of cost.

The following Java example reads a sub string from the user and verifies whether the file contains the given sub string irrespective of case.
Live Demo

import java.io.File;
import java.util.Scanner;
public class SubStringExample {
   public static String fileToString(String filePath) throws Exception{
      String input = null;
      Scanner sc = new Scanner(new File(filePath));
      StringBuffer sb = new StringBuffer();
      while (sc.hasNextLine()) {
         input = sc.nextLine();
         sb.append(input);
      }
      return sb.toString();
   }
   public static void main(String args[]) throws Exception {
      Scanner sc = new Scanner(System.in);
      System.out.println("Enter the sub string to be verified: ");
      String subString = sc.nextLine();
      String fileContents = fileToString("D:\\sample.txt");
      //Converting the contents of the file to lower case
      fileContents = fileContents.toLowerCase();
      //Converting the sub string to lower case
      subString = subString.toLowerCase();
      //Verify whether the file contains the given sub String
      boolean result = fileContents.contains(subString);
      if(result) {
         System.out.println("File contains the given sub string.");
      } else {
         System.out.println("File does not contain the given sub string.");
      }
   }
}

Enter the sub string to be verified:
comforts of their drawing rooms.
File contains the given sub string.
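The lowercase-then-contains idea above is essentially two lines in most languages; for comparison, here is a Python sketch (`casefold()` handles a few more case-mapping corner cases than plain `lower()`).

```python
def contains_ignore_case(content: str, sub: str) -> bool:
    # Fold case on both sides, then use a plain substring test,
    # mirroring the toLowerCase() + contains() approach above.
    return sub.casefold() in content.casefold()
```

The same caveat applies as in the Java version: both strings must be folded with the same rules, or mixed-case input will slip through the comparison.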
[ { "code": null, "e": 1255, "s": 1062, "text": "The contains() method of the String class accepts Sting value as a parameter, verifies whether the current String object contains the specified String and returns true if it does (else false)." }, { "code": null, "e": 1378, "s": 1255, "text": "The toLoweCase() method of the String class converts all the characters in the current String into lower case and returns." }, { "code": null, "e": 1459, "s": 1378, "text": "To find whether a String contains a particular sub string irrespective of case βˆ’" }, { "code": null, "e": 1475, "s": 1459, "text": "Get the String." }, { "code": null, "e": 1491, "s": 1475, "text": "Get the String." }, { "code": null, "e": 1511, "s": 1491, "text": "Get the Sub String." }, { "code": null, "e": 1531, "s": 1511, "text": "Get the Sub String." }, { "code": null, "e": 1638, "s": 1531, "text": "Convert the string value into lower case letters using the toLowerCase() method, store it as fileContents." }, { "code": null, "e": 1745, "s": 1638, "text": "Convert the string value into lower case letters using the toLowerCase() method, store it as fileContents." }, { "code": null, "e": 1849, "s": 1745, "text": "Convert the string value into lower case letters using the toLowerCase() method, store it as subString." }, { "code": null, "e": 1953, "s": 1849, "text": "Convert the string value into lower case letters using the toLowerCase() method, store it as subString." }, { "code": null, "e": 2045, "s": 1953, "text": "Invoke the contains() method on the fileContents by passing subString as a parameter to it." }, { "code": null, "e": 2137, "s": 2045, "text": "Invoke the contains() method on the fileContents by passing subString as a parameter to it." 
Predict IT Support Tickets with Machine Learning and NLP | by Garrett Eichhorn | Towards Data Science
Dealing with high volume IT service requests? Interested in reducing operational costs? Looking to elevate the user experience? Look no further.

This article serves as a guide for data science aficionados to deploy production-level Machine Learning solutions in the IT Service Management (ITSM) environment. The prescribed ML solution will help us peer into the black-box of IT service requests and coalesce ITSM strategy across business management silos.

We'll be using a supervised, classification algorithm to categorize new tickets based on input text. I employ Python, RESTful API framework, Scikit-Learn and SpaCy to accomplish this task; however, there are many solutions that could more efficiently fit your organization. I'll do my best to address opportunities for divergence as well as provide dedicated reasoning for why I chose specific methodology.

Susan Li provides an excellent overview on Machine Learning for Text Classification using SpaCy. My process, code and demonstration (as outlined in this article) are influenced by her contributions. I strongly recommend subscribing to her channel if you find any of my content helpful/interesting.

The final model is more than 85% accurate in making predictions for all tickets flowing into the production environment. SLA response times have been cut in half and annual cost savings rack in nearly $600,000.

IT Service Management (ITSM) is an important corporate function responsible for leveraging innovation to increase value, maximizing user productivity, providing end-to-end tech services, and so much more. Despite such resonant enterprise responsibility, front-end IT interactions are often defined by long, arduous conversations (via web or phone) with support specialists. Whether you're requesting a new password, submitting configuration changes for an application, or simply asking for help, you'll be enduring a frustrating road ahead.
The stigma persists because IT leaders struggle to staff and support a comprehensive help-desk management solution that serves the entire enterprise. Despite good intentions, support organizations often miss the mark for efficient ITSM. While on a project for a recent global medical device client of mine, I was tasked with remedying the frustrating outcomes that accompany enterprise incident ticket triage. Leadership made the decision to pour resources into a high-cost legacy solution using ServiceNow (a popular ITSM workflow platform) and an outside vendor to improve response times for incoming tickets, with little success. Generalized use and stringent SLA restrictions led to inaccuracy across the board, where business groups played hopscotch with tickets that landed in their queue. Users were exasperated, support professionals were callous, and leadership was stumped. Time for a fresh perspective!

When a user submits a support ticket, it flows into the platform via email, phone or an embedded portal. Each ticket contains a bit of text about the problem or request, which is quickly reviewed by a support professional and sent on its way. Once the correct assignment group picks up the ticket, some amount of work gets completed and the incident state reverts to closed.

This opportunity seems ripe for Multinomial Classification via Supervised Machine Learning to categorize support tickets based on a fixed number of business groups. We can easily scrape text and category from each ticket and train a model to associate certain words and phrases with a particular category. My hypothesis is simple: machine learning can provide immediate cost savings, better SLA outcomes, and more accurate predictions than the human counterpart. Let's get started!

Before selecting and training machine learning models, we need to take a look at the data to better understand trends within incident tickets.
ServiceNow provides a robust Table API framework for us to grab the data directly from the platform.

# Import Statements
import requests
import json
import pandas as pd

# Initialize url
url = "https://{instance.service-now.com}/api/now/table/incident"

# Set simple authorization
user = "{username}"
pwd = "{password}"

# Set proper headers
headers = {"Content-Type" : "application/json", "Accept" : "application/json", "accept-encoding" : "gzip, deflate, br"}

# Initialize GET response
response = requests.get(url, auth=(user, pwd), headers=headers)
data = json.loads(response.text)
dataframe = pd.DataFrame(data['result'])

ServiceNow provides you with a fantastic opportunity to explore the nuances of RESTful API tuning with their embedded API Explorer. This tool helps the user build custom API requests from scratch, whittling down query parameters, fields, etc. into easy-to-understand increments. Furthermore, you can hit popular tables (incident, task, etc.) or create complex queries. It's an awesome tool for any data professional!

Let's take a look at our dataframe:

Since we're interested in associating text with a relevant classifier, we can use a categorical variable like "u_portfolio" to label each row in our dataframe. Despite a pretty serious class imbalance ("Global Support Services" with almost 65% of all records) and more than 2,000 missing values, we want to eliminate those specific categories with fewer than 100 tickets to reduce noise and ensure we're only using relevant categories. Let's create a pure text column called "text" by concatenating together "short_description" and "description". We definitely want to visualize the updated dataframe!
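As a quick illustration of that concatenation step, here is a hedged sketch on a toy frame. The column names come from the article, but the sample rows are invented and filling missing descriptions with an empty string is my assumption, not something the article specifies:

```python
import pandas as pd

# Toy stand-in for the ServiceNow dataframe; real data comes from the Table API
df = pd.DataFrame({
    "short_description": ["Password reset", "Laptop broken"],
    "description": ["User locked out of SSO", None],
    "u_portfolio": ["Global Support Services", "Global Support Services"],
})

# Build the pure text column by concatenating the two fields;
# fillna("") guards against missing descriptions (an assumption)
df["text"] = (
    df["short_description"].fillna("") + " " + df["description"].fillna("")
).str.strip()

print(df["text"].tolist())
```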
import matplotlib.pyplot as plt
import seaborn as sns

# Eliminate categories with fewer than 100 tickets
classifier = "u_portfolio"
ticket_threshold = 100
df_classifiers = df[df.groupby(classifier)[classifier].transform(len) > ticket_threshold]

# Print number of relevant categories & shape
print("Categories: " + str(df_classifiers[classifier].nunique()))

# Plot the classifiers
fig = plt.figure(figsize=(10, 6))
sns.barplot(df_classifiers[classifier].value_counts().index, df_classifiers[classifier].value_counts())
plt.xticks(rotation=20)
plt.show()

It looks like we dropped more than 5 categories after setting the threshold to 100 tickets, returning only those categories with relevant business value. After digging into the data and asking around a bit, I confirmed the dropped categories hadn't been used in more than a year and could be comfortably eliminated.

A note on class imbalance and the wonderful world of enterprise consulting: Global Support Services (GSS) accounts for more than 60% of total support tickets. That means we could write a simple program to assign GSS to every incoming ticket, and we'd be right more than half the time!

Without doing any advanced analysis, we've identified a major issue. The 3rd party vendor, which charges $13 for every ticket interaction and averages 40% accuracy, is performing worse than if my client took no action at all... Imagine breaking that news to the CIO!

The remaining categories will be used as the labels to train/test the model. Let's save them as a list:

category_labels = list(df_classifiers[classifier].value_counts().index)

Now that we have our category_labels, we need to better understand the text patterns for each type of ticket.
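One simple way to probe those text patterns is a frequency count of the most common terms per category. The sketch below is illustrative only: the toy tickets, the STOPLIST contents and the choice of N are all my assumptions, since the article's actual code isn't reproduced here:

```python
import re
from collections import Counter

# Toy ticket text grouped by category (hypothetical examples)
tickets_by_category = {
    "Global Support Services": ["please reset my password", "password expired again"],
    "Business Intelligence": ["monthly sales report is broken", "report refresh failed"],
}

STOPLIST = {"please", "my", "is", "again"}  # assumed stop words
N = 2  # number of top terms to keep per category

top_terms = {}
for category, texts in tickets_by_category.items():
    # Tokenize, lowercase, and drop stop words before counting
    words = [
        w
        for text in texts
        for w in re.findall(r"[a-z']+", text.lower())
        if w not in STOPLIST
    ]
    top_terms[category] = [word for word, _ in Counter(words).most_common(N)]

print(top_terms)
```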
By peeking into the ServiceNow platform, I can quickly gather a few themes by category: GSS handles a lot of password resets and hardware issues; Business Intelligence covers reporting functions and data questions; Customer deals with SalesForce and other customer apps; SAP S/4 Security manages all ERP related access/configuration. If you've worked in this corporate arena before, these motifs sound familiar. It's easy for a human to identify a few keywords for each category by studying the data — let's see if a computer can do it too!

Once we run the code, we can inspect the output:

Unfortunately, there's not much here, as the most common words barely differ by category. I investigated and found that emails account for more than 75% of ticket submissions; all internal employees have some version of a confidentiality notice below their signature that distorts significant differences between the categories. We can try and change N to see if other patterns emerge "down the line", or hard code the email signature into the STOPLIST variable to prevent it from showing up, but this wouldn't fix the root cause. Instead, we want to find words/phrases that correlate with each label in our list of categories. This is called Term Selection, and can help us identify the most relevant terms by label for our dataset.

Let's explore some ML solutions for measuring and evaluating correlation!

Natural Language Processing (NLP) sits at the nexus of computer science and linguistics, defining the solutions for how machine and human languages can interact with one another. Functionally, NLP consumes human language by analyzing and manipulating data (often in the form of text) to derive meaning. To do this, we need to convert data that's passed between humans into a numeric format that is machine readable.
This process of encoding text is called Vectorization, and it catalyzes computational processes like applying mathematical rules and performing matrix operations that can produce valuable insights.

Although there are some super cool, burgeoning methods for vectorizing text data for NLP like Transfer Learning and advanced Neural Networks, we're going to use a more simplistic technique called Term Frequency — Inverse Document Frequency. A tf-idf value increases proportionally to the number of times a word / phrase (n-gram) appears in a document, offset by the number of documents in total. Although it sounds complex, it basically reflects how important an n-gram is to the document without favoring words that appear more frequently. This is especially powerful for processing text documents where class imbalance exists, like ours! You might use a Count Vectorizer if your text is well-balanced.

Now that we understand how computers consume text data, we can experiment using different models! Here's some starter code to test out a few options:

Let's use Logistic Regression as our model of best fit. As a data scientist, you should be able to apply multiple techniques to a project and choose one favorably suited to the opportunity. In my experience, human psychology plays a major part in the success of a use-case; coaching an enterprise to accept emerging, disruptive technology takes time and energy! It's just as important to market, brand and sell your solution as it is to build an elegant algorithm. Let's build our model!

For this project, I had ample opportunity to socialize concepts methodically to address questions in real time. A major part of my success criterion for this use-case was the uptake by leadership to both understand and spread these data science concepts in context. Because there are many other algorithms/models that can optimize model performance based on the data, I encourage you to experiment.
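The starter code itself isn't reproduced in this copy of the article, so here is a hedged sketch of what a tf-idf plus Logistic Regression pipeline could look like in Scikit-Learn. The toy training rows are invented, and the variable name pipe is chosen to match the pipe object that appears in the prediction snippet later:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy training data; the real X/y would be the ticket text and category labels
X_train = [
    "reset my password please",
    "password locked out",
    "sales report broken",
    "report refresh failed",
]
y_train = [
    "Global Support Services",
    "Global Support Services",
    "Business Intelligence",
    "Business Intelligence",
]

# Vectorize with tf-idf (unigrams and bigrams), then classify
pipe = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipe.fit(X_train, y_train)

print(pipe.predict(["cannot reset password"]))
```

Swapping LogisticRegression for LinearSVC or MultinomialNB is a one-line change, which makes it easy to compare a few candidates before committing to one.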
When a user submits a ticket, it's easy to grab the text and pump it through the model. In doing so, we can determine...

a) if the model finds the text relevant
b) which category best fits the text

# Save the model to variable 'model'
model = pipe.fit(X_train, y_train)

# Save array of predictions for given TEXT
predict_category = model.predict(TEXT)

# Save array of prediction probabilities for given TEXT
predict_probability = model.predict_proba(TEXT)

Both of our prediction variables will return an array of numbers proportional to the length of the categories list. If we print predict_category, we expect an array of 8 numbers that correspond to our 8 categories with either 0 or 1 to represent relevance. If the text string was "I need a new corporate laptop", then we should expect an array of 0's except for a 1 in the nth position that corresponds to "Global Support Services". We can use 'predict_probability' to see how strong the prediction result for GSS is in context; at 98% for this particular text string, it's safe to say we trust the model 😃.

We can use the same Table API that we employed to scrape the data, replacing our GET response with a PUT request, to update the ticket in ServiceNow. In real-time, a user submits a ticket and the model updates ServiceNow with the predicted category in less than a minute. Let's pat ourselves on the back for implementing an effective machine learning solution!

Deploying a model in production depends on what technology stack your particular enterprise subscribes to. My client is an AWS shop and manages a great relationship with access to the full suite of AWS tools.

I played around with Lambda and SageMaker to automate support ticket assignment in a serverless AWS environment. However, it was considerably easier to spin up an EC2 instance to host the model and interact with ServiceNow directly. ServiceNow has built-in 'Business Rules' that can be configured to trigger API calls on the model and perform updates.
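The write-back step mentioned above (swapping the earlier GET for a PUT against the same Table API) might look like this hedged sketch. The field name "u_portfolio" matches the article's label column, but the instance name, sys_id and credential handling are placeholders, not values from the article:

```python
import json
import requests

def build_update(instance, sys_id, predicted_category):
    """Return the URL and JSON payload for updating one incident record."""
    # Single-record endpoint of the Table API: /api/now/table/incident/{sys_id}
    url = "https://{0}/api/now/table/incident/{1}".format(instance, sys_id)
    payload = json.dumps({"u_portfolio": predicted_category})
    return url, payload

def push_prediction(instance, user, pwd, sys_id, predicted_category):
    """PUT the predicted category back onto the ticket in ServiceNow."""
    url, payload = build_update(instance, sys_id, predicted_category)
    headers = {"Content-Type": "application/json", "Accept": "application/json"}
    return requests.put(url, auth=(user, pwd), headers=headers, data=payload)

# No network call here; just show what would be sent for a hypothetical ticket
url, payload = build_update("example.service-now.com", "abc123", "Global Support Services")
print(url)
print(payload)
```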
The final deployment was slightly cheaper and much more easily updated in the EC2 server, and relies on AWS and ServiceNow communicating effectively. AWS documentation is legendary for its depth and breadth; I strongly recommend consulting appropriate resources before diving in.

If these terms mean nothing to you — don't fret! Basically, the machine learning pipeline needs to be hosted in an environment agnostic of the people and technology involved. If new developers come aboard, ticket volume triples overnight, or leadership elects to use KNN in R instead of LogReg in Python, the environment needs to accommodate variable scale. The entire pipeline was developed on my local machine, but production deployment can't rely on my computing resources and/or availability in the long term. Keeping it on a server (hosted in the cloud) ensures sustainability and efficacy. This is a critical shift between the build phase and actual deployment.

After all this work, what did we accomplish? To start, we have an awesome ML solution that uses Natural Language Processing to categorize all incident tickets for a global company. We save the enterprise nearly $600,000 annually by automating ticket assignment and circumventing the 3rd party vendor. We improved average accuracy from 40% to 85% and cut SLA response times in half! Anecdotally, the user experience is significantly more positive and trust in the ITSM environment has skyrocketed.

Unexpectedly, my intense focus on data and process improvement in ServiceNow helped to coalesce department strategy and inform better decision making. IT Leadership, comprised of VPs, Directors and Senior Managers, were so excited about saving money, improving response times, and bolstering the user experience. Despite initial trepidation around sustaining new technology, operationalizing the model in production, refactoring platform workflows to the vendor, etc., leadership finally accepted the change.
Deployment offered the opportunity to democratize decision making and evangelize complex data topics. I believe leadership is better equipped to strategize and staff data science projects in the future, displaying this use-case as a springboard to convey the success of predictive analytics and data science.

If you have any questions or comments about the methods outlined in the article, please drop a line below or message me directly! I'd love to hear from you 😄.
How to make the existing bootstrap 2 table responsive? - GeeksforGeeks
29 Jul, 2020

Method 1: To make the table responsive on all viewport sizes, wrap the table within opening and closing <div> tags, with the class "table-responsive" on the opening <div> tag.

Syntax:

<div class="table-responsive">
    <table class="table">
        ...
    </table>
</div>

Example: This example demonstrates the "table-responsive" class.

<!DOCTYPE html>
<html lang="en">
<head>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">

    <!-- Bootstrap CSS -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css"
          integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS"
          crossorigin="anonymous">

    <title>Bootstrap | Tables</title>

    <style>
        h1 {
            color: blue;
            text-align: center;
        }
        div {
            margin-top: 20px;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>Bootstrap Table Responsive</h1>

        <!-- table-responsive -->
        <div class="table-responsive">

            <!-- table -->
            <table class="table">
                <thead>
                    <tr>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                    </tr>
                </thead>
                <tbody>
                    <tr>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                    </tr>
                    <tr>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                    </tr>
                    <tr>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                    </tr>
                </tbody>
            </table>
        </div>
    </div>
</body>
</html>

Output:

When window size is normal: the full table is visible.
When window size is small, i.e. tablet mode: a scroll bar will appear in tablet as well as mobile mode.

Method 2: Breakpoint specific

Use .table-responsive{-sm|-md|-lg|-xl} as needed to create responsive tables up to a particular breakpoint. From that breakpoint and up, the table will behave normally and not scroll horizontally.

Class                   Screen Width
.table-responsive-sm    < 576px
.table-responsive-md    < 768px
.table-responsive-lg    < 992px
.table-responsive-xl    < 1200px

Syntax:

<div class="table-responsive-sm">
    <table class="table">
        ...
    </table>
</div>

Example: This example demonstrates the "table-responsive-sm" class.

<!DOCTYPE html>
<html lang="en">
<head>
    <!-- Required meta tags -->
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">

    <!-- Bootstrap CSS -->
    <link rel="stylesheet"
          href="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css"
          integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS"
          crossorigin="anonymous">

    <title>Bootstrap | Tables</title>

    <style>
        h1 {
            color: blue;
            text-align: center;
        }
        div {
            margin-top: 20px;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>Bootstrap Table Responsive-sm</h1>

        <!-- table-responsive-sm -->
        <div class="table-responsive-sm">

            <!-- table -->
            <table class="table">
                <thead>
                    <tr>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                        <th scope="col">Head</th> <th scope="col">Head</th>
                    </tr>
                </thead>
                <tbody>
                    <tr>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                    </tr>
                    <tr>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                    </tr>
                    <tr>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                        <td>Data</td> <td>Data</td> <td>Data</td> <td>Data</td>
                    </tr>
                </tbody>
            </table>
        </div>
    </div>
</body>
</html>

Output:

When window size is > 576px: the table behaves normally with no horizontal scroll bar.
When window size is < 576px: a scroll bar appears.
K-means, DBSCAN, GMM, Agglomerative clustering — Mastering the popular models in a segmentation problem | by Indraneel Dutta Baruah | Towards Data Science
In the current age, the availability of granular data for a large pool of customers/products and the technological capability to handle petabytes of data efficiently are growing rapidly. Due to this, it's now possible to come up with very strategic and meaningful clusters for effective targeting. And identifying the target segments requires a robust segmentation exercise. In this blog, we will be discussing the most popular unsupervised clustering algorithms and how to implement them in python.

In this blog, we will be working with clickstream data from an online store offering clothing for pregnant women. It includes variables like product category, location of the photo on the webpage, country of origin of the IP address and product price in US dollars. It has data from April 2008 to August 2008.

The first step is to prepare the data for segmentation. I encourage you to check out the article below for an in-depth explanation of the different steps for preparing data for segmentation before proceeding further:

One Hot Encoding, Standardization, PCA: Data preparation for segmentation in python

Selecting the optimal number of clusters is another key concept one should be aware of while dealing with a segmentation problem. It will be helpful if you read the article below for understanding a comprehensive list of popular metrics for selecting clusters:

Cheatsheet for implementing 7 methods for selecting the optimal number of clusters in Python

We will be talking about 4 categories of models in this blog:

1. K-means
2. Agglomerative clustering
3. Density-based spatial clustering (DBSCAN)
4. Gaussian Mixture Modelling (GMM)

K-means

The K-means algorithm is an iterative process with three critical stages:

1. Pick initial cluster centroids

The algorithm starts by picking initial k cluster centers which are known as centroids.
Determining the optimal number of clusters, i.e. k, as well as proper selection of the initial clusters is extremely important for the performance of the model. The number of clusters should always depend on the nature of the dataset, while poor selection of the initial clusters can lead to the problem of local convergence. Thankfully, we have solutions for both. For further details on selecting the optimal number of clusters please refer to this detailed blog. For selection of initial clusters, we can either run multiple iterations of the model with various initializations to pick the most stable one or use the "k-means++" algorithm, which has the following steps:

- Randomly select the first centroid from the dataset
- Compute the distance of all points in the dataset from the selected centroid
- Pick a point as the new centroid that has maximum probability proportional to this distance
- Repeat steps 2 and 3 until k centroids have been sampled

The algorithm initializes the centroids to be distant from each other, leading to more stable results than random initialization.

2. Cluster assignment

K-means then assigns the data points to the closest cluster centroids based on the euclidean distance between the point and all centroids.

3. Move centroid

The model finally calculates the average of all the points in a cluster and moves the centroid to that average location. Steps 2 and 3 are repeated until there is no change in the clusters or possibly some other stopping condition is met (like a maximum number of iterations).

For implementing the model in python we need to specify the number of clusters first.
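To make the assignment and move-centroid steps concrete, here is a minimal NumPy sketch of that iteration. It uses a naive random initialization rather than k-means++, and all names are illustrative, not from the original article:

```python
import numpy as np

def kmeans_lloyd(X, k, n_iter=100, seed=10):
    """Minimal K-means: assign each point to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    # naive initialization: sample k distinct data points as centroids
    # (k-means++ would instead pick seeds that are spread far apart)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # step 2: cluster assignment by euclidean distance to each centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 3: move each centroid to the mean of its members
        new_centroids = centroids.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members):
                new_centroids[j] = members.mean(axis=0)
        # stopping condition: centroids no longer move
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```

On well-separated data this converges in a handful of iterations; scikit-learn's KMeans adds k-means++ initialization and multiple restarts on top of this same loop.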
We have used the elbow method, Gap Statistic, Silhouette score, Calinski Harabasz score and Davies Bouldin score. For each of these methods the optimal number of clusters is as follows:

- Elbow method: 8
- Gap statistic: 29
- Silhouette score: 4
- Calinski Harabasz score: 2
- Davies Bouldin score: 4

As seen above, 2 out of 5 methods suggest that we should use 4 clusters. If each method suggests a different number of clusters we can either take the average or median. The codes for finding the optimal k can be found here and further details on each method can be found in this blog.

Once we have the optimal number of clusters, we can fit the model and evaluate its performance using the Silhouette score, Calinski Harabasz score and Davies Bouldin score.

# K means
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.metrics import calinski_harabasz_score
from sklearn.metrics import davies_bouldin_score

# Fit K-Means
kmeans_1 = KMeans(n_clusters=4, random_state=10)

# Use fit_predict to cluster the dataset
predictions = kmeans_1.fit_predict(cluster_df)

# Calculate cluster validation metrics
score_kmeans_s = silhouette_score(cluster_df, kmeans_1.labels_, metric='euclidean')
score_kmeans_c = calinski_harabasz_score(cluster_df, kmeans_1.labels_)
score_kmeans_d = davies_bouldin_score(cluster_df, predictions)
print('Silhouette Score: %.4f' % score_kmeans_s)
print('Calinski Harabasz Score: %.4f' % score_kmeans_c)
print('Davies Bouldin Score: %.4f' % score_kmeans_d)

We can also check the relative size and distribution of the clusters using an inter-cluster distance map.
# Inter cluster distance map
from yellowbrick.cluster import InterclusterDistance

# Instantiate the clustering model and visualizer
visualizer = InterclusterDistance(kmeans_1)
visualizer.fit(cluster_df)  # Fit the data to the visualizer
visualizer.show()           # Finalize and render the figure

As seen in the figure above, two clusters are quite large compared to the others and they seem to have decent separation between them. However, if two clusters overlap in the 2D space, it does not imply that they overlap in the original feature space. Further details on the model can be found here. Finally, other variants of K-Means like Mini Batch K-means and K-Medoids will be discussed in a separate blog.

Agglomerative clustering is a general family of clustering algorithms that build nested clusters by merging data points successively. This hierarchy of clusters can be represented as a tree diagram known as a dendrogram. The top of the tree is a single cluster with all data points while the bottom contains individual points. There are multiple options for linking data points in a successive manner:

- Single linkage: minimizes the distance between the closest observations of pairs of clusters
- Complete or Maximum linkage: tries to minimize the maximum distance between observations of pairs of clusters
- Average linkage: minimizes the average of the distances between all observations of pairs of clusters
- Ward: similar to k-means as it minimizes the sum of squared differences within all clusters, but with a hierarchical approach. We will be using this option in our exercise.

The ideal option can be picked by checking which linkage method performs best based on cluster validation metrics (Silhouette score, Calinski Harabasz score and Davies Bouldin score). And similar to K-means, we will have to specify the number of clusters in this model, and the dendrogram can help us do that.
# Dendrogram for Hierarchical Clustering
import scipy.cluster.hierarchy as shc
from matplotlib import pyplot

pyplot.figure(figsize=(10, 7))
pyplot.title("Dendrograms")
dend = shc.dendrogram(shc.linkage(cluster_df, method='ward'))

From figure 3, we can see that we can choose either 4 or 8 clusters. We also use the elbow method, Davies Bouldin score, Silhouette score and Calinski Harabasz score to find the optimal number of clusters and get the following results:

- Elbow method: 10
- Davies Bouldin score: 8
- Silhouette score: 3
- Calinski Harabasz score: 2

We will go ahead with 8, as both the Davies Bouldin score and the dendrogram suggest so. If the metrics give us different numbers of clusters we can either go ahead with the one suggested by the dendrogram (as it is based on this specific model) or take the average/median of all the metrics. The codes for finding the optimal number of clusters can be found here and further details on each method can be found in this blog.

Similar to k-means, we can fit the model with the optimal number of clusters and linkage type and test its performance using the three metrics used in K-means.
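Since the linkage type also has to be chosen, one quick way to pick it is to score every option on the same data. A sketch using synthetic blobs as a stand-in for the article's cluster_df:

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# synthetic stand-in for the real feature matrix
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.6, random_state=10)

# score each linkage option with the same cluster count
for linkage in ("ward", "complete", "average", "single"):
    labels = AgglomerativeClustering(n_clusters=4, linkage=linkage).fit_predict(X)
    print(linkage, round(silhouette_score(X, labels), 4))
```

The linkage with the best validation score on the real data is the one to carry into the final fit.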
# Agglomerative clustering
from numpy import unique
from sklearn.cluster import AgglomerativeClustering

# define the model
model = AgglomerativeClustering(n_clusters=4)

# fit model and predict clusters
yhat = model.fit(cluster_df)
yhat_2 = model.fit_predict(cluster_df)

# retrieve unique clusters
clusters = unique(yhat_2)

# Calculate cluster validation metrics
score_AGclustering_s = silhouette_score(cluster_df, yhat.labels_, metric='euclidean')
score_AGclustering_c = calinski_harabasz_score(cluster_df, yhat.labels_)
score_AGclustering_d = davies_bouldin_score(cluster_df, yhat_2)
print('Silhouette Score: %.4f' % score_AGclustering_s)
print('Calinski Harabasz Score: %.4f' % score_AGclustering_c)
print('Davies Bouldin Score: %.4f' % score_AGclustering_d)

Comparing figures 1 and 4, we can see that K-means outperforms agglomerative clustering on all cluster validation metrics.

DBSCAN groups together points that are closely packed while marking others, which lie alone in low-density regions, as outliers. There are two key parameters needed to define 'density': the minimum number of points required to form a dense region, min_samples, and the distance used to define a neighborhood, eps. Higher min_samples or lower eps demands greater density to form a cluster.

Based on these parameters, DBSCAN starts with an arbitrary point x, identifies the points within the neighbourhood of x based on eps, and classifies x as one of the following:

1. Core point: If the number of points in the neighbourhood is at least equal to the min_samples parameter then it is called a core point and a cluster is formed around x.
2. Border point: x is considered a border point if it is part of a cluster with a different core point but the number of points in its neighbourhood is less than the min_samples parameter. Intuitively, these points are on the fringes of a cluster.
3. Outlier or noise: If x is not a core point and its distance from any core sample is equal to or greater than eps, it is considered an outlier or noise.

For tuning the parameters of the model, we first identify the optimal eps value by finding the distance among a point's neighbors and plotting the minimum distance. This gives us the elbow curve to find the density of the data points, and the optimal eps value can be found at the inflection point. We use the NearestNeighbors function to get the minimum distance and the KneeLocator function to identify the inflection point.

# parameter tuning for eps
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors

nearest_neighbors = NearestNeighbors(n_neighbors=11)
neighbors = nearest_neighbors.fit(cluster_df)
distances, indices = neighbors.kneighbors(cluster_df)
distances = np.sort(distances[:, 10], axis=0)

from kneed import KneeLocator
i = np.arange(len(distances))
knee = KneeLocator(i, distances, S=1, curve='convex',
                   direction='increasing', interp_method='polynomial')
fig = plt.figure(figsize=(5, 5))
knee.plot_knee()
plt.xlabel("Points")
plt.ylabel("Distance")
print(distances[knee.knee])

As seen above, the optimal value for eps is 1.9335816413107338. We use this value for the parameter going forward and try to find the optimal value of the min_samples parameter based on the Silhouette score, Calinski Harabasz score and Davies Bouldin score.
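That search over min_samples can be sketched as a simple loop over candidate values, scoring each clustering. Shown here on synthetic blobs as a stand-in for cluster_df, with an illustrative eps (on the real data, the knee value found above would be used):

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# synthetic stand-in for the real feature matrix
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.4, random_state=10)

best = None
for min_samples in range(2, 31):
    labels = DBSCAN(eps=0.5, min_samples=min_samples).fit_predict(X)
    # silhouette is undefined with fewer than 2 labels
    # (DBSCAN labels noise points as -1)
    if len(set(labels)) < 2:
        continue
    score = silhouette_score(X, labels)
    if best is None or score > best[1]:
        best = (min_samples, score)

print("best (min_samples, silhouette):", best)
```

The same loop can track the Calinski Harabasz and Davies Bouldin scores alongside the Silhouette score to compare their suggestions.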
For each of these methods, the suggested value of min_samples is as follows:

Silhouette score: 18

Calinski Harabasz score: 29

Davies Bouldin score: 2

The codes for finding the optimal value of min_samples can be found here and further details on each method can be found in this blog. We go ahead with the median suggestion, which is 18, given by the Silhouette score. In case we don't have time to run a grid search over these metrics, one quick rule of thumb is to set the min_samples parameter to twice the number of features.

# dbscan clustering
from numpy import unique
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score
from sklearn.metrics import calinski_harabasz_score
from sklearn.metrics import davies_bouldin_score

# define the model
# rule of thumb for min_samples: 2*len(cluster_df.columns)
model = DBSCAN(eps=1.9335816413107338, min_samples=18)
# fit model and predict clusters
yhat = model.fit_predict(cluster_df)
# retrieve unique clusters
clusters = unique(yhat)
# Calculate cluster validation metrics
score_dbscan_s = silhouette_score(cluster_df, yhat, metric='euclidean')
score_dbscan_c = calinski_harabasz_score(cluster_df, yhat)
score_dbscan_d = davies_bouldin_score(cluster_df, yhat)
print('Silhouette Score: %.4f' % score_dbscan_s)
print('Calinski Harabasz Score: %.4f' % score_dbscan_c)
print('Davies Bouldin Score: %.4f' % score_dbscan_d)

Comparing figures 1 and 6, we can see that DBSCAN performs better than K-means on the Silhouette score. The model is described in the paper:

A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise, 1996.

In a separate blog, we will be discussing a more advanced version of DBSCAN called Hierarchical Density-Based Spatial Clustering (HDBSCAN).

A Gaussian mixture model is a distance-based probabilistic model that assumes all the data points are generated from a linear combination of multivariate Gaussian distributions with unknown parameters.
Like K-means, it takes into account the centers of the latent Gaussian distributions, but unlike K-means, the covariance structure of the distributions is also taken into account. It uses the expectation-maximization (EM) algorithm to iteratively find the distribution parameters that maximize a model quality measure called log-likelihood. The key steps performed in this model are:

1. Initialize k Gaussian distributions

2. Calculate probabilities of each point's association with each of the distributions

3. Recalculate distribution parameters based on each point's probabilities associated with the distributions

4. Repeat the process till the log-likelihood is maximized

There are 4 options for calculating covariances in GMM:

Full: Each distribution has its own general covariance matrix

Tied: All distributions share a general covariance matrix

Diag: Each distribution has its own diagonal covariance matrix

Spherical: Each distribution has its own single variance

Apart from selecting the covariance type, we need to select the optimal number of clusters in the model as well. We use the BIC score, Silhouette score, Calinski Harabasz score and Davies Bouldin score for selecting both parameters using grid search.
For each of these methods, the optimal configuration is as follows:

BIC score: Covariance 'full' and cluster number 26

Silhouette score: Covariance 'tied' and cluster number 2

Calinski Harabasz score: Covariance 'spherical' and cluster number 4

Davies Bouldin score: Covariance 'full' and cluster number 8

The codes for finding the optimal parameter values can be found here and further details on each method can be found in this blog. We chose the covariance as "full" and the number of clusters as 26 based on the BIC score, as it is based on this specific model. If we have similar configurations from multiple metrics, we can take the average/median/mode of all the metrics. We can now fit the model and check model performance.

# gaussian mixture clustering
from numpy import unique
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
from sklearn.metrics import calinski_harabasz_score
from sklearn.metrics import davies_bouldin_score

# define the model
model = GaussianMixture(n_components=26, covariance_type="full", random_state=10)
# fit the model
model.fit(cluster_df)
# assign a cluster to each example
yhat = model.predict(cluster_df)
# retrieve unique clusters
clusters = unique(yhat)
# Calculate cluster validation scores
score_gmm_s = silhouette_score(cluster_df, yhat, metric='euclidean')
score_gmm_c = calinski_harabasz_score(cluster_df, yhat)
score_gmm_d = davies_bouldin_score(cluster_df, yhat)
print('Silhouette Score: %.4f' % score_gmm_s)
print('Calinski Harabasz Score: %.4f' % score_gmm_c)
print('Davies Bouldin Score: %.4f' % score_gmm_d)

Comparing figures 1 and 7, we can see that K-means outperforms GMM based on all cluster validation metrics.
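The grid search over covariance type and component count described above can be sketched roughly as follows. This is a hypothetical illustration rather than the article's code, with synthetic blobs standing in for cluster_df:

```python
# Hypothetical sketch: select covariance_type and n_components by BIC.
# X is a synthetic stand-in for the article's cluster_df.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=500, centers=4, random_state=10)

best = None
for cov in ["full", "tied", "diag", "spherical"]:
    for k in range(2, 11):
        gmm = GaussianMixture(n_components=k, covariance_type=cov,
                              random_state=10).fit(X)
        bic = gmm.bic(X)  # lower BIC means a better fit/parsimony trade-off
        if best is None or bic < best[0]:
            best = (bic, cov, k)

print("Best BIC %.2f with covariance=%s, n_components=%d" % best)
```

The search ranges and the synthetic data here are assumptions for illustration; in practice the loop would run over cluster_df with the metric of interest (BIC, Silhouette, and so on) recorded for each configuration.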
In a separate blog, we will be discussing a more advanced version of GMM called the Variational Bayesian Gaussian Mixture.

The aim of this blog is to help the readers understand how 4 popular clustering models work as well as their detailed implementation in python. As shown below, each model has its own pros and cons:

Finally, it is important to understand that these models are just a means to find logical and easily understandable customer/product segments which can be targeted effectively. So in most practical cases, we will end up trying multiple models and creating customer/product profiles from each iteration till we find segments that make the most business sense. Thus, segmentation is both an art and a science.

Do you have any questions or suggestions about this blog? Please feel free to drop in a note.

If you, like me, are passionate about AI, Data Science, or Economics, please feel free to add/follow me on LinkedIn, Github and Medium.

Ester, M., Kriegel, H. P., Sander, J., and Xiaowei, Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. United States: N. p., 1996. Web.

MacQueen, J. Some methods for classification and analysis of multivariate observations. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics, 281–297, University of California Press, Berkeley, Calif., 1967.
https://projecteuclid.org/euclid.bsmsp/1200512992

Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825–2830, 2011.
Hands-On Jupyter Notebook Hacks. Hacks, tips and shortcuts you should be... | by Shinichi Okada | Towards Data Science
Table of Contents
Introduction
1. Gist
2. display() over print() for dataFrame
3. Output without print()
4. Bash commands without !
5. Other useful shortcuts
6. Nbextensions
7. Move Selected Cells
8. Tabnine
9. Autopep8

In this short article, I am going to share Jupyter Notebook tips, extensions and hacks that I found useful when working with Jupyter Notebook. For writing tips with Jupyter Notebook, please read this article.

A gist is a simple way to share snippets, hosted by GitHub. This hack allows you to create a gist from the code in a Jupyter notebook cell.

Assuming you have a GitHub account, first you need to install gist. If you are using brew, please run the following in your terminal.

brew update
brew outdated
brew upgrade
brew cleanup
brew install gist

Now you need to log in to GitHub.

gist --login
Obtaining OAuth2 access_token from GitHub.
GitHub username: your-github-name
GitHub password: your-github-pw
2-factor auth code:
Success! https://github.com/settings/tokens

gist will allow you to upload the content from a file or the clipboard, adding a filename and description.

Put the following code in one of the cells in your Jupyter Notebook. You need to change myfile.py and my description.

%%bash
gist -P -f myfile.py -d "my description"

Copy the code you want to send to Gist from a cell to your clipboard with CTRL+C/⌘C; running the above cell will then send your clipboard to Gist. It returns your gist URL.

https://gist.github.com/b9e4b509cb6ed80631b617b53a65f0b9

When you want to update your gist, you need to use your local file name. You can add the following in a cell and run it to update the gist.

%%bash
gist -u your-gist-id your-filename-with-extension

IPython.display is already installed in Jupyter Notebook. If you are printing one item, you don't need the print() method. If you are printing out more than one item, display() will display nicer outputs.
import pandas as pd
df = pd.DataFrame(
    [
        [48, 22, 33, 47],
        [35, 36, 42, 27]
    ],
    index=["Male", "Female"],
    columns=["Black", "White", "Red", "Blue"])
print(df)
display(df)
df

If you feel too lazy to type print() for all outputs, then this is for you.

For Conda users, you need to add the following code in a cell.

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

Let's test it.

testvar = 2
testvar
testvar

This will return:

2
2

If you installed Jupyter Notebook with PIP, you can make it the default without the above code. Please open ~/.ipython/profile_default/ipython_config.py with an editor and paste the following.
The keyboard shortcuts for indentation are `CMD+]` for right indentation and `CMD+[` for left indentation. The Nbextensions Autopep8 formats code in a code cell with one click. You can install it using pip: pip install autopep8 Then enable it in Nbextsions. What is your favorite? Do you have anything else to share? Get full access to every story on Medium by becoming a member.
[ { "code": null, "e": 382, "s": 172, "text": "Table of ContentsIntroduction1. Gist2. display() over print() for dataFrame3. Output without print()4. Bash commands without !5. Other useful shortcuts6. Nbextensions7. Move Selected Cells8. Tabnine9. Autopep8" }, { "code": null, "e": 591, "s": 382, "text": "In this short article, I am going to write Jupyter Notebook tips, extensions and hacks that I found useful when working with Jupyter Notebook. For writing tips with Jupyter Notebook, please read this article." }, { "code": null, "e": 734, "s": 591, "text": "The gist is a simple way to share snippets hosted by Github. This hack allows you to create a gist from your codes in a Juptyer notebook cell." }, { "code": null, "e": 869, "s": 734, "text": "Assuming you have a Github account, first, you need to install gist. If you are using brew, please run the following in your terminal." }, { "code": null, "e": 935, "s": 869, "text": "brew updatebrew outdatedbrew upgradebrew cleanupbrew install gist" }, { "code": null, "e": 968, "s": 935, "text": "Now you need to login to Github." }, { "code": null, "e": 1149, "s": 968, "text": "gist --loginObtaining OAuth2 access_token from GitHub.GitHub username: your-github-nameGitHub password: your-github-pw2-factor auth code:Success! https://github.com/settings/tokens" }, { "code": null, "e": 1256, "s": 1149, "text": "gist will allow you to upload the content from a file or the clipboard, adding a filename and description." }, { "code": null, "e": 1375, "s": 1256, "text": "Put the following code in one of the cells in your Jupyter Notebook. You need to change myfile.py and my description ." }, { "code": null, "e": 1422, "s": 1375, "text": "%%bashgist -P -f myfile.py -d \"my description\"" }, { "code": null, "e": 1562, "s": 1422, "text": "Copy codes from a cell you want to send to Gist to your clipboard, CTRL+C/⌘C, then running the above cell will send your clipboard to Gist." 
}, { "code": null, "e": 1588, "s": 1562, "text": "It returns your gist URL." }, { "code": null, "e": 1645, "s": 1588, "text": "https://gist.github.com/b9e4b509cb6ed80631b617b53a65f0b9" }, { "code": null, "e": 1785, "s": 1645, "text": "When you want to update your gist, you need to use your local file name. You can add the following in a cell and run it to update the gist." }, { "code": null, "e": 1841, "s": 1785, "text": "%%bashgist -u your-gist-id your-filename-with-extension" }, { "code": null, "e": 1864, "s": 1841, "text": "towardsdatascience.com" }, { "code": null, "e": 1887, "s": 1864, "text": "towardsdatascience.com" }, { "code": null, "e": 2088, "s": 1887, "text": "IPython.display is already installed in Jupyter Notebook. If you are printing one item, you don't need print() method. If you are printing out more than one item, display() will display nicer outputs." }, { "code": null, "e": 2273, "s": 2088, "text": "import pandas as pddf = pd.DataFrame( [ [48,22,33,47], [35,36,42,27] ], index=[\"Male\",\"Female\"], columns=[\"Black\",\"White\",\"Red\",\"Blue\"])print(df)display(df)df" }, { "code": null, "e": 2349, "s": 2273, "text": "If you feel too lazy to type print() for all outputs, then this is for you." }, { "code": null, "e": 2412, "s": 2349, "text": "For Conda users, you need to add the following code in a cell." }, { "code": null, "e": 2518, "s": 2412, "text": "from IPython.core.interactiveshell import InteractiveShellInteractiveShell.ast_node_interactivity = \"all\"" }, { "code": null, "e": 2533, "s": 2518, "text": "Let’s test it." }, { "code": null, "e": 2557, "s": 2533, "text": "testvar=2testvartestvar" }, { "code": null, "e": 2575, "s": 2557, "text": "This will return," }, { "code": null, "e": 2578, "s": 2575, "text": "22" }, { "code": null, "e": 2774, "s": 2578, "text": "If you installed Jupyter Notebook with PIP, you can make it as the default without the above code. 
Please open ~/.ipython/profile_default/ipython_config.py with an editor and paste the following." }, { "code": null, "e": 2840, "s": 2774, "text": "c = get_config()c.InteractiveShell.ast_node_interactivity = \"all\"" }, { "code": null, "e": 2894, "s": 2840, "text": "You need to restart Jupyter Notebook to make it work." }, { "code": null, "e": 2917, "s": 2894, "text": "towardsdatascience.com" }, { "code": null, "e": 2999, "s": 2917, "text": "You can use bash commands without !. Try them out. Some examples you can use are:" }, { "code": null, "e": 3043, "s": 2999, "text": "ls -alpwdcat Readme.mdman lsmkdir newfolder" }, { "code": null, "e": 3268, "s": 3043, "text": "Please install Nbextensions before using the following extensions. The jupyter_contrib_nbextensions package contains a collection of community-contributed unofficial extensions that add functionality to the Jupyter notebook." }, { "code": null, "e": 3312, "s": 3268, "text": "How to install jupyter_contrib_nbextensions" }, { "code": null, "e": 3498, "s": 3312, "text": "I use quite a few Nbextensions. Equation Auto Numbering, Select CodeMirror Keymap, Table of Contents (2), RISE. I’d like to show you three other extensions that are very useful to have." }, { "code": null, "e": 3521, "s": 3498, "text": "towardsdatascience.com" }, { "code": null, "e": 3598, "s": 3521, "text": "Please install Move Selected Cells by enabling it from the Nbextensions tab." }, { "code": null, "e": 3714, "s": 3598, "text": "This is super useful when you organize your cells. You can move a selected cell or cells with Option/Alt + Up/Down." }, { "code": null, "e": 3796, "s": 3714, "text": "jupyter-tabnine enables the use of coding auto-completion based on Deep Learning." }, { "code": null, "e": 3814, "s": 3796, "text": "Let’s install it." 
}, { "code": null, "e": 3988, "s": 3814, "text": "pip install jupyter-tabninejupyter nbextension install --py jupyter_tabninejupyter nbextension enable --py jupyter_tabninejupyter serverextension enable --py jupyter_tabnine" }, { "code": null, "e": 4114, "s": 3988, "text": "You might restart your Jupyter Notebook a couple of times to work. You can see Tabnine in action in the following gif images." }, { "code": null, "e": 4221, "s": 4114, "text": "The keyboard shortcuts for indentation are `CMD+]` for right indentation and `CMD+[` for left indentation." }, { "code": null, "e": 4291, "s": 4221, "text": "The Nbextensions Autopep8 formats code in a code cell with one click." }, { "code": null, "e": 4321, "s": 4291, "text": "You can install it using pip:" }, { "code": null, "e": 4342, "s": 4321, "text": "pip install autopep8" }, { "code": null, "e": 4372, "s": 4342, "text": "Then enable it in Nbextsions." }, { "code": null, "e": 4431, "s": 4372, "text": "What is your favorite? Do you have anything else to share?" } ]
Binary Tree to Binary Search Tree Conversion using STL set - GeeksforGeeks
24 Jun, 2021

Given a Binary Tree, convert it to a Binary Search Tree. The conversion must be done in such a way that it keeps the original structure of the Binary Tree. This solution will use sets from the C++ STL instead of an array-based solution.

Examples:

Example 1
Input:
      10
     /  \
    2    7
   / \
  8   4
Output:
      8
     /  \
    4    10
   / \
  2   7

Example 2
Input:
      10
     /  \
    30   15
   /  \
  20   5
Output:
      15
     /  \
    10   20
   /  \
  5    30

Solution

1) Copy the items of the binary tree into a set while doing an inorder traversal. This takes O(n log n) time. Note that a set in C++ STL is implemented using a self-balancing binary search tree such as a Red-Black Tree or an AVL Tree.
2) There is no need to sort the set, as sets in C++ are implemented using self-balancing binary search trees, due to which each operation such as insertion, searching and deletion takes O(log n) time.
3) Now simply copy the items of the set one by one from its beginning to the tree while doing an inorder traversal of the tree. Care should be taken: when copying each item from the beginning of the set, we first copy it to the tree during the inorder traversal, then remove it from the set as well.
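The steps above can be sketched compactly in Python. Since Python's built-in set is unordered, a sorted sequence stands in for the ordered C++ std::set in this sketch:

```python
class Node:
    def __init__(self, data):
        self.data, self.left, self.right = data, None, None

def binary_tree_to_bst(root):
    # step 1: collect keys with an inorder traversal
    keys = []
    def collect(node):
        if node:
            collect(node.left)
            keys.append(node.data)
            collect(node.right)
    collect(root)

    # step 2: sort them (a C++ std::set keeps them sorted automatically)
    ordered = iter(sorted(keys))

    # step 3: write the sorted keys back during another inorder traversal
    def write(node):
        if node:
            write(node.left)
            node.data = next(ordered)
            write(node.right)
    write(root)

def inorder(node):
    return inorder(node.left) + [node.data] + inorder(node.right) if node else []

# Example 1 from above: 10 with children 2 and 7; 2 has children 8 and 4
root = Node(10)
root.left, root.right = Node(2), Node(7)
root.left.left, root.left.right = Node(8), Node(4)
binary_tree_to_bst(root)
print(inorder(root))  # [2, 4, 7, 8, 10] — sorted, so the result is a BST
```

After conversion the root holds 8, matching the output tree of Example 1; the shape of the tree is untouched, only the keys are rearranged.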
Now the above solution is simpler and easier to implement than the array-based conversion of a binary tree to a binary search tree explained here: Conversion of Binary Tree to Binary Search tree (Set-1), where we had to separately write a function to sort the items of the array after copying them from the tree. Program to convert a binary tree to a binary search tree using the set:

C++

/* CPP program to convert a Binary tree to BST
   using sets as containers. */
#include <bits/stdc++.h>
using namespace std;

struct Node {
    int data;
    struct Node *left, *right;
};

// function to store the nodes in set while
// doing inorder traversal.
void storeinorderInSet(Node* root, set<int>& s)
{
    if (!root)
        return;

    // visit the left subtree first
    storeinorderInSet(root->left, s);

    // insertion takes order of O(logn) for sets
    s.insert(root->data);

    // visit the right subtree
    storeinorderInSet(root->right, s);
}
// Time complexity = O(nlogn)

// function to copy items of set one by one
// to the tree while doing inorder traversal
void setToBST(set<int>& s, Node* root)
{
    // base condition
    if (!root)
        return;

    // first move to the left subtree and
    // update items
    setToBST(s, root->left);

    // iterator initially pointing to the
    // beginning of set
    auto it = s.begin();

    // copying the item at beginning of
    // set(sorted) to the tree.
    root->data = *it;

    // now erasing the beginning item from set.
    s.erase(it);

    // now move to right subtree and update items
    setToBST(s, root->right);
}
// T(n) = O(nlogn) time

// Converts Binary tree to BST.
void binaryTreeToBST(Node* root)
{
    set<int> s;

    // populating the set with the tree's
    // inorder traversal data
    storeinorderInSet(root, s);

    // now sets are by default sorted as
    // they are implemented using self-
    // balancing BST

    // copying items from set to the tree
    // while inorder traversal which makes a BST
    setToBST(s, root);
}
// Time complexity = O(nlogn),
// Auxiliary Space = O(n) for set.
// helper function to create a node
Node* newNode(int data)
{
    // dynamically allocating memory
    Node* temp = new Node();
    temp->data = data;
    temp->left = temp->right = NULL;
    return temp;
}

// function to do inorder traversal
void inorder(Node* root)
{
    if (!root)
        return;
    inorder(root->left);
    cout << root->data << " ";
    inorder(root->right);
}

int main()
{
    Node* root = newNode(5);
    root->left = newNode(7);
    root->right = newNode(9);
    root->right->left = newNode(10);
    root->left->left = newNode(1);
    root->left->right = newNode(6);
    root->right->right = newNode(11);

    /* Constructing tree given in the above figure
           5
          / \
         7   9
        / \ / \
       1  6 10 11 */

    // converting the above Binary tree to BST
    binaryTreeToBST(root);

    cout << "Inorder traversal of BST is: " << endl;
    inorder(root);

    return 0;
}

Java

/* Java program to convert a Binary tree to BST
   using sets as containers. */
import java.util.*;

class Solution
{
static class Node
{
    int data;
    Node left, right;
}

// set (TreeSet keeps its elements in sorted
// order, which this conversion relies on)
static Set<Integer> s = new TreeSet<Integer>();

// function to store the nodes in set while
// doing inorder traversal.
static void storeinorderInSet(Node root)
{
    if (root == null)
        return;

    // visit the left subtree first
    storeinorderInSet(root.left);

    // insertion takes order of O(logn) for sets
    s.add(root.data);

    // visit the right subtree
    storeinorderInSet(root.right);
}
// Time complexity = O(nlogn)

// function to copy items of set one by one
// to the tree while doing inorder traversal
static void setToBST(Node root)
{
    // base condition
    if (root == null)
        return;

    // first move to the left subtree and
    // update items
    setToBST(root.left);

    // copying the item at the beginning of
    // the set (the smallest one) to the tree.
    root.data = s.iterator().next();

    // now erasing the beginning item from set.
    s.remove(root.data);

    // now move to right subtree and update items
    setToBST(root.right);
}
// T(n) = O(nlogn) time

// Converts Binary tree to BST.
static void binaryTreeToBST(Node root)
{
    s.clear();

    // populating the set with the tree's
    // inorder traversal data
    storeinorderInSet(root);

    // now sets are by default sorted as
    // they are implemented using self-
    // balancing BST

    // copying items from set to the tree
    // while inorder traversal which makes a BST
    setToBST(root);
}
// Time complexity = O(nlogn),
// Auxiliary Space = O(n) for set.

// helper function to create a node
static Node newNode(int data)
{
    // dynamically allocating memory
    Node temp = new Node();
    temp.data = data;
    temp.left = temp.right = null;
    return temp;
}

// function to do inorder traversal
static void inorder(Node root)
{
    if (root == null)
        return;
    inorder(root.left);
    System.out.print(root.data + " ");
    inorder(root.right);
}

// Driver code
public static void main(String args[])
{
    Node root = newNode(5);
    root.left = newNode(7);
    root.right = newNode(9);
    root.right.left = newNode(10);
    root.left.left = newNode(1);
    root.left.right = newNode(6);
    root.right.right = newNode(11);

    /* Constructing tree given in the above figure
           5
          / \
         7   9
        / \ / \
       1  6 10 11 */

    // converting the above Binary tree to BST
    binaryTreeToBST(root);

    System.out.println("Inorder traversal of BST is: ");
    inorder(root);
}
}
// This code is contributed by Arnab Kundu

Python3

# Python3 program to convert a Binary tree
# to BST using sets as containers.
# Binary Tree Node
""" A utility function to create a
new BST node """
class newNode:

    # Construct to create a newNode
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

# function to store the nodes in set
# while doing inorder traversal.
def storeinorderInSet(root, s):
    if (not root):
        return

    # visit the left subtree first
    storeinorderInSet(root.left, s)

    # insertion takes order of O(logn)
    # for sets
    s.add(root.data)

    # visit the right subtree
    storeinorderInSet(root.right, s)

# Time complexity = O(nlogn)

# function to copy items one by one to the
# tree while doing inorder traversal; here
# "s" is an iterator over the keys in
# sorted order
def setToBST(s, root):

    # base condition
    if (not root):
        return

    # first move to the left subtree and
    # update items
    setToBST(s, root.left)

    # take the next-smallest key and copy
    # it to the current node
    root.data = next(s)

    # now move to right subtree
    # and update items
    setToBST(s, root.right)

# T(n) = O(nlogn) time

# Converts Binary tree to BST.
def binaryTreeToBST(root):
    s = set()

    # populating the set with the tree's
    # inorder traversal data
    storeinorderInSet(root, s)

    # Python's built-in set is unordered, so
    # sort its contents once to emulate the
    # ordered C++ std::set

    # copying items from the sorted sequence
    # to the tree while doing an inorder
    # traversal, which makes a BST
    setToBST(iter(sorted(s)), root)

# Time complexity = O(nlogn),
# Auxiliary Space = O(n) for set.
# function to do inorder traversal
def inorder(root):
    if (not root):
        return
    inorder(root.left)
    print(root.data, end = " ")
    inorder(root.right)

# Driver Code
if __name__ == '__main__':

    root = newNode(5)
    root.left = newNode(7)
    root.right = newNode(9)
    root.right.left = newNode(10)
    root.left.left = newNode(1)
    root.left.right = newNode(6)
    root.right.right = newNode(11)

    """ Constructing tree given in
        the above figure
           5
          / \
         7   9
        / \ / \
       1  6 10 11 """

    # converting the above Binary tree to BST
    binaryTreeToBST(root)

    print("Inorder traversal of BST is: ")
    inorder(root)

# This code is contributed by
# Shubham Singh(SHUBHAMSINGH10)

C#

// C# program to convert
// a Binary tree to BST
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

class Solution
{

class Node
{
    public int data;
    public Node left, right;
}

// set
static SortedSet<int> s = new SortedSet<int>();

// function to store the nodes
// in set while doing inorder
// traversal.
static void storeinorderInSet(Node root)
{
    if (root == null)
        return;

    // visit the left subtree
    // first
    storeinorderInSet(root.left);

    // insertion takes order of
    // O(logn) for sets
    s.Add(root.data);

    // visit the right subtree
    storeinorderInSet(root.right);
}
// Time complexity = O(nlogn)

// function to copy items of
// set one by one to the tree
// while doing inorder traversal
static void setToBST(Node root)
{
    // base condition
    if (root == null)
        return;

    // first move to the left
    // subtree and update items
    setToBST(root.left);

    // copying the item at the beginning
    // of the set (sorted) to the tree.
    root.data = s.First();

    // now erasing the beginning item
    // from set.
    s.Remove(s.First());

    // now move to right subtree and
    // update items
    setToBST(root.right);
}
// T(n) = O(nlogn) time

// Converts Binary tree to BST.
static void binaryTreeToBST(Node root)
{
    s.Clear();

    // populating the set with
    // the tree's inorder traversal
    // data
    storeinorderInSet(root);

    // now sets are by default sorted
    // as they are implemented using
    // self-balancing BST

    // copying items from set to the
    // tree while inorder traversal
    // which makes a BST
    setToBST(root);
}
// Time complexity = O(nlogn),
// Auxiliary Space = O(n) for set.

// helper function to create a node
static Node newNode(int data)
{
    // dynamically allocating
    // memory
    Node temp = new Node();
    temp.data = data;
    temp.left = temp.right = null;
    return temp;
}

// function to do inorder traversal
static void inorder(Node root)
{
    if (root == null)
        return;
    inorder(root.left);
    Console.Write(root.data + " ");
    inorder(root.right);
}

// Driver code
public static void Main(string []args)
{
    Node root = newNode(5);
    root.left = newNode(7);
    root.right = newNode(9);
    root.right.left = newNode(10);
    root.left.left = newNode(1);
    root.left.right = newNode(6);
    root.right.right = newNode(11);

    /* Constructing tree given in
       the above figure
           5
          / \
         7   9
        / \ / \
       1  6 10 11 */

    // converting the above Binary
    // tree to BST
    binaryTreeToBST(root);

    Console.Write("Inorder traversal of " +
                  "BST is: \n");
    inorder(root);
}
}
// This code is contributed by Rutvik_56

Javascript

<script>

// JavaScript program to convert
// a Binary tree to BST

class Node
{
    constructor()
    {
        this.data = 0;
        this.right = null;
        this.left = null;
    }
}

// set
var s = new Set();

// function to store the nodes
// in set while doing inorder
// traversal.
function storeinorderInSet(root)
{
    if (root == null)
        return;

    // visit the left subtree
    // first
    storeinorderInSet(root.left);

    // insertion takes order of
    // O(logn) for sets
    s.add(root.data);

    // visit the right subtree
    storeinorderInSet(root.right);
}
// Time complexity = O(nlogn)

// function to copy items of
// set one by one to the tree
// while doing inorder traversal
function setToBST(root)
{
    // base condition
    if (root == null)
        return;

    // first move to the left
    // subtree and update items
    setToBST(root.left);

    // JavaScript's Set is insertion-ordered,
    // not sorted, so sort its contents to
    // find the smallest remaining item
    var tmp = [...s].sort((a, b) => a - b);

    // copying the smallest remaining
    // item of the set to the tree.
    root.data = tmp[0];

    // now erasing that item
    // from set.
    s.delete(tmp[0]);

    // now move to right subtree and
    // update items
    setToBST(root.right);
}

// Converts Binary tree to BST.
function binaryTreeToBST(root)
{
    s = new Set();

    // populating the set with
    // the tree's inorder traversal
    // data
    storeinorderInSet(root);

    // copying items from set to the
    // tree while inorder traversal
    // which makes a BST
    setToBST(root);
}

// helper function to create a node
function newNode(data)
{
    // dynamically allocating
    // memory
    var temp = new Node();
    temp.data = data;
    temp.left = temp.right = null;
    return temp;
}

// function to do inorder traversal
function inorder(root)
{
    if (root == null)
        return;
    inorder(root.left);
    document.write(root.data + " ");
    inorder(root.right);
}

// Driver code
var root = newNode(5);
root.left = newNode(7);
root.right = newNode(9);
root.right.left = newNode(10);
root.left.left = newNode(1);
root.left.right = newNode(6);
root.right.right = newNode(11);

/* Constructing tree given in
   the above figure
       5
      / \
     7   9
    / \ / \
   1  6 10 11 */

// converting the above Binary
// tree to BST
binaryTreeToBST(root);
document.write("Inorder traversal of " +
               "BST is: <br>");
inorder(root);

</script>

Output:

Inorder traversal of BST is: 
1 5 6 7 9 10 11

Time Complexity: O(n log n)
Auxiliary Space: O(n)
[ { "code": null, "e": 25166, "s": 25138, "text": "\n24 Jun, 2021" }, { "code": null, "e": 25400, "s": 25166, "text": "Given a Binary Tree, convert it to a Binary Search Tree. The conversion must be done in such a way that keeps the original structure of the Binary Tree.This solution will use Sets of C++ STL instead of array-based solution.Examples: " }, { "code": null, "e": 25735, "s": 25400, "text": "Example 1\nInput:\n 10\n / \\\n 2 7\n / \\\n 8 4\nOutput:\n 8\n / \\\n 4 10\n / \\\n 2 7\n\n\nExample 2\nInput:\n 10\n / \\\n 30 15\n / \\\n 20 5\nOutput:\n 15\n / \\\n 10 20\n / \\\n 5 30" }, { "code": null, "e": 25745, "s": 25735, "text": "Solution " }, { "code": null, "e": 26441, "s": 25745, "text": "Copy the items of binary tree in a set while doing inorder traversal. This takes O(n log n) time. Note that set in C++ STL is implemented using a Self Balancing Binary Search Tree like Red Black Tree, AVL Tree, etcThere is no need to sort the set as sets in C++ are implemented using Self-balancing binary search trees due to which each operation such as insertion, searching, deletion, etc takes O(log n) time.Now simply copy the items of set one by one from beginning to the tree while doing inorder traversal of the tree. Care should be taken as when copying each item of set from its beginning, we first copy it to the tree while doing inorder traversal, then remove it from the set as well." }, { "code": null, "e": 26656, "s": 26441, "text": "Copy the items of binary tree in a set while doing inorder traversal. This takes O(n log n) time. Note that set in C++ STL is implemented using a Self Balancing Binary Search Tree like Red Black Tree, AVL Tree, etc" }, { "code": null, "e": 26854, "s": 26656, "text": "There is no need to sort the set as sets in C++ are implemented using Self-balancing binary search trees due to which each operation such as insertion, searching, deletion, etc takes O(log n) time." 
}, { "code": null, "e": 27139, "s": 26854, "text": "Now simply copy the items of set one by one from beginning to the tree while doing inorder traversal of the tree. Care should be taken as when copying each item of set from its beginning, we first copy it to the tree while doing inorder traversal, then remove it from the set as well." }, { "code": null, "e": 27526, "s": 27139, "text": "Now the above solution is simpler and easier to implement than the array-based conversion of Binary tree to Binary search tree explained here- Conversion of Binary Tree to Binary Search tree (Set-1), where we had to separately make a function to sort the items of the array after copying the items from tree to it.Program to convert a binary tree to a binary search tree using the set. " }, { "code": null, "e": 27530, "s": 27526, "text": "C++" }, { "code": null, "e": 27535, "s": 27530, "text": "Java" }, { "code": null, "e": 27543, "s": 27535, "text": "Python3" }, { "code": null, "e": 27546, "s": 27543, "text": "C#" }, { "code": null, "e": 27557, "s": 27546, "text": "Javascript" }, { "code": "/* CPP program to convert a Binary tree to BST using sets as containers. 
C++

/* C++ program to convert a Binary tree to BST
   using sets as containers. */
#include <bits/stdc++.h>
using namespace std;

struct Node {
    int data;
    struct Node *left, *right;
};

// function to store the nodes in set while
// doing inorder traversal.
void storeinorderInSet(Node* root, set<int>& s)
{
    if (!root)
        return;

    // visit the left subtree first
    storeinorderInSet(root->left, s);

    // insertion takes order of O(logn) for sets
    s.insert(root->data);

    // visit the right subtree
    storeinorderInSet(root->right, s);
}
// Time complexity = O(nlogn)

// function to copy items of set one by one
// to the tree while doing inorder traversal
void setToBST(set<int>& s, Node* root)
{
    // base condition
    if (!root)
        return;

    // first move to the left subtree and update items
    setToBST(s, root->left);

    // iterator initially pointing to the beginning of set
    auto it = s.begin();

    // copying the item at beginning of set (sorted) to the tree
    root->data = *it;

    // now erasing the beginning item from set
    s.erase(it);

    // now move to right subtree and update items
    setToBST(s, root->right);
}
// T(n) = O(nlogn) time

// Converts Binary tree to BST.
void binaryTreeToBST(Node* root)
{
    set<int> s;

    // populating the set with the tree's inorder traversal data
    storeinorderInSet(root, s);

    // sets are sorted by default as they are implemented using
    // self-balancing BSTs; copying items from the set to the tree
    // during inorder traversal makes a BST
    setToBST(s, root);
}
// Time complexity = O(nlogn), Auxiliary Space = O(n) for set

// helper function to create a node
Node* newNode(int data)
{
    // dynamically allocating memory
    Node* temp = new Node();
    temp->data = data;
    temp->left = temp->right = NULL;
    return temp;
}

// function to do inorder traversal
void inorder(Node* root)
{
    if (!root)
        return;
    inorder(root->left);
    cout << root->data << " ";
    inorder(root->right);
}

int main()
{
    Node* root = newNode(5);
    root->left = newNode(7);
    root->right = newNode(9);
    root->right->left = newNode(10);
    root->left->left = newNode(1);
    root->left->right = newNode(6);
    root->right->right = newNode(11);

    /* Constructing tree given in the above figure
            5
           / \
          7   9
         / \  / \
        1   6 10 11  */

    // converting the above Binary tree to BST
    binaryTreeToBST(root);

    cout << "Inorder traversal of BST is: " << endl;
    inorder(root);

    return 0;
}

Java

/* Java program to convert a Binary tree to BST
   using sets as containers. */
import java.util.*;

class Solution {

    static class Node {
        int data;
        Node left, right;
    }

    // TreeSet keeps its elements sorted (the original code used a
    // HashSet, which does not guarantee the sorted order this
    // algorithm relies on)
    static Set<Integer> s = new TreeSet<Integer>();

    // function to store the nodes in set while
    // doing inorder traversal.
    static void storeinorderInSet(Node root)
    {
        if (root == null)
            return;

        // visit the left subtree first
        storeinorderInSet(root.left);

        // insertion takes order of O(logn) for sets
        s.add(root.data);

        // visit the right subtree
        storeinorderInSet(root.right);
    }
    // Time complexity = O(nlogn)

    // function to copy items of set one by one
    // to the tree while doing inorder traversal
    static void setToBST(Node root)
    {
        // base condition
        if (root == null)
            return;

        // first move to the left subtree and update items
        setToBST(root.left);

        // copying the smallest remaining item of the sorted set
        // to the tree
        root.data = s.iterator().next();

        // now erasing that item from the set
        s.remove(root.data);

        // now move to right subtree and update items
        setToBST(root.right);
    }
    // T(n) = O(nlogn) time

    // Converts Binary tree to BST.
    static void binaryTreeToBST(Node root)
    {
        s.clear();

        // populating the set with the tree's inorder traversal data
        storeinorderInSet(root);

        // copying items from the sorted set to the tree
        // during inorder traversal makes a BST
        setToBST(root);
    }
    // Time complexity = O(nlogn), Auxiliary Space = O(n) for set

    // helper function to create a node
    static Node newNode(int data)
    {
        Node temp = new Node();
        temp.data = data;
        temp.left = temp.right = null;
        return temp;
    }

    // function to do inorder traversal
    static void inorder(Node root)
    {
        if (root == null)
            return;
        inorder(root.left);
        System.out.print(root.data + " ");
        inorder(root.right);
    }

    // Driver code
    public static void main(String args[])
    {
        Node root = newNode(5);
        root.left = newNode(7);
        root.right = newNode(9);
        root.right.left = newNode(10);
        root.left.left = newNode(1);
        root.left.right = newNode(6);
        root.right.right = newNode(11);

        // converting the above Binary tree to BST
        binaryTreeToBST(root);

        System.out.println("Inorder traversal of BST is: ");
        inorder(root);
    }
}

Python3

# Python3 program to convert a Binary tree
# to BST using sets as containers.

# A utility function to create a new tree node
class newNode:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

# function to store the nodes in a set
# while doing inorder traversal.
def storeinorderInSet(root, s):
    if not root:
        return

    # visit the left subtree first
    storeinorderInSet(root.left, s)

    s.add(root.data)

    # visit the right subtree
    storeinorderInSet(root.right, s)

# function to copy items of the set one by one
# to the tree while doing inorder traversal
def setToBST(s, root):
    # base condition
    if not root:
        return

    # first move to the left subtree and update items
    setToBST(s, root.left)

    # Python's built-in set is unordered, so take the smallest
    # remaining value (the original code used next(iter(s)),
    # which does not guarantee sorted order)
    it = min(s)

    # copying the smallest item of the set to the tree
    root.data = it

    # now erasing that item from the set
    s.remove(it)

    # now move to right subtree and update items
    setToBST(s, root.right)

# Converts Binary tree to BST.
def binaryTreeToBST(root):
    s = set()

    # populating the set with the tree's inorder traversal data
    storeinorderInSet(root, s)

    # copying items from the set to the tree
    # during inorder traversal makes a BST
    setToBST(s, root)

# function to do inorder traversal
def inorder(root):
    if not root:
        return
    inorder(root.left)
    print(root.data, end=" ")
    inorder(root.right)

# Driver Code
if __name__ == '__main__':
    root = newNode(5)
    root.left = newNode(7)
    root.right = newNode(9)
    root.right.left = newNode(10)
    root.left.left = newNode(1)
    root.left.right = newNode(6)
    root.right.right = newNode(11)

    # converting the above Binary tree to BST
    binaryTreeToBST(root)

    print("Inorder traversal of BST is: ")
    inorder(root)

C#

// C# program to convert a Binary tree to BST
// using sets as containers.
using System;
using System.Collections.Generic;
using System.Linq;

class Solution {

    class Node {
        public int data;
        public Node left, right;
    }

    // SortedSet keeps its elements sorted
    static SortedSet<int> s = new SortedSet<int>();

    // function to store the nodes in set while
    // doing inorder traversal.
    static void storeinorderInSet(Node root)
    {
        if (root == null)
            return;

        // visit the left subtree first
        storeinorderInSet(root.left);

        // insertion takes order of O(logn) for sets
        s.Add(root.data);

        // visit the right subtree
        storeinorderInSet(root.right);
    }

    // function to copy items of set one by one
    // to the tree while doing inorder traversal
    static void setToBST(Node root)
    {
        // base condition
        if (root == null)
            return;

        // first move to the left subtree and update items
        setToBST(root.left);

        // copying the item at the beginning of the
        // sorted set to the tree
        root.data = s.First();

        // now erasing the beginning item from the set
        s.Remove(s.First());

        // now move to right subtree and update items
        setToBST(root.right);
    }

    // Converts Binary tree to BST.
    static void binaryTreeToBST(Node root)
    {
        s.Clear();

        // populating the set with the tree's inorder traversal data
        storeinorderInSet(root);

        // copying items from the sorted set to the tree
        // during inorder traversal makes a BST
        setToBST(root);
    }

    // helper function to create a node
    static Node newNode(int data)
    {
        Node temp = new Node();
        temp.data = data;
        temp.left = temp.right = null;
        return temp;
    }

    // function to do inorder traversal
    static void inorder(Node root)
    {
        if (root == null)
            return;
        inorder(root.left);
        Console.Write(root.data + " ");
        inorder(root.right);
    }

    // Driver code
    public static void Main(string[] args)
    {
        Node root = newNode(5);
        root.left = newNode(7);
        root.right = newNode(9);
        root.right.left = newNode(10);
        root.left.left = newNode(1);
        root.left.right = newNode(6);
        root.right.right = newNode(11);

        // converting the above Binary tree to BST
        binaryTreeToBST(root);

        Console.Write("Inorder traversal of BST is: \n");
        inorder(root);
    }
}

JavaScript

<script>
// JavaScript program to convert a Binary tree to BST

class Node {
    constructor() {
        this.data = 0;
        this.left = null;
        this.right = null;
    }
}

// set
var s = new Set();

// function to store the nodes in set while
// doing inorder traversal.
function storeinorderInSet(root)
{
    if (root == null)
        return;

    // visit the left subtree first
    storeinorderInSet(root.left);

    s.add(root.data);

    // visit the right subtree
    storeinorderInSet(root.right);
}

// function to copy items of set one by one
// to the tree while doing inorder traversal
function setToBST(root)
{
    // base condition
    if (root == null)
        return;

    // first move to the left subtree and update items
    setToBST(root.left);

    // JavaScript Sets iterate in insertion order,
    // so sort a copy to find the smallest remaining value
    var tmp = [...s].sort((a, b) => a - b);

    // copying the smallest item to the tree
    root.data = tmp[0];

    // now erasing that item from the set
    s.delete(tmp[0]);

    // now move to right subtree and update items
    setToBST(root.right);
}

// Converts Binary tree to BST.
function binaryTreeToBST(root)
{
    s = new Set();

    // populating the set with the tree's inorder traversal data
    storeinorderInSet(root);

    // copying items from the set to the tree
    // during inorder traversal makes a BST
    setToBST(root);
}

// helper function to create a node
function newNode(data)
{
    var temp = new Node();
    temp.data = data;
    temp.left = temp.right = null;
    return temp;
}

// function to do inorder traversal
function inorder(root)
{
    if (root == null)
        return;
    inorder(root.left);
    document.write(root.data + " ");
    inorder(root.right);
}

// Driver code
var root = newNode(5);
root.left = newNode(7);
root.right = newNode(9);
root.right.left = newNode(10);
root.left.left = newNode(1);
root.left.right = newNode(6);
root.right.right = newNode(11);

// converting the above Binary tree to BST
binaryTreeToBST(root);

document.write("Inorder traversal of BST is: <br>");
inorder(root);
</script>

Output:

Inorder traversal of BST is:
1 5 6 7 9 10 11

Time Complexity: O(n log n)
Auxiliary Space: O(n)
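The set-based conversion above can be sanity-checked with a small, self-contained Python sketch. Note that, as an assumption of this sketch, it sorts a plain list instead of relying on an ordered set (Python's built-in set is unordered), but the effect is the same: the sorted inorder sequence is written back into the tree in inorder, which turns the tree into a BST with its original shape.

```python
# Minimal re-implementation of the conversion described above,
# for quick verification on the example tree.

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def store_inorder(root, items):
    # collect values with an inorder walk
    if root is None:
        return
    store_inorder(root.left, items)
    items.append(root.data)
    store_inorder(root.right, items)

def fill_inorder(root, it):
    # rewrite node values in inorder with the sorted values
    if root is None:
        return
    fill_inorder(root.left, it)
    root.data = next(it)
    fill_inorder(root.right, it)

def binary_tree_to_bst(root):
    items = []
    store_inorder(root, items)
    fill_inorder(root, iter(sorted(items)))

# the example tree used in the listings above
root = Node(5)
root.left, root.right = Node(7), Node(9)
root.left.left, root.left.right = Node(1), Node(6)
root.right.left, root.right.right = Node(10), Node(11)

binary_tree_to_bst(root)

out = []
store_inorder(root, out)
print(out)  # [1, 5, 6, 7, 9, 10, 11]
```

After the conversion the inorder traversal yields the sorted values, confirming the result is a BST.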
Extraction of Objects In Images and Videos Using 5 Lines of Code. | by Ayoola Olafenwa (she/her) | Towards Data Science
Computer vision is the medium through which computers see and identify objects. The goal of computer vision is to make it possible for computers to analyze objects in images and videos, solving different vision problems. Object segmentation has paved the way for convenient analysis of objects in images and videos, contributing immensely to different fields, such as medicine, vision in self-driving cars and background editing in images and videos. PixelLib is a library created for easy integration of image and video segmentation into real-life applications. PixelLib employs powerful object-segmentation techniques to make computer vision accessible to everyone. I am excited to announce that the new version of PixelLib makes analysis of objects in computer vision easier than ever: PixelLib uses segmentation techniques to implement object extraction in images and videos in five lines of code. Install PixelLib and its dependencies: Install TensorFlow (PixelLib supports TensorFlow 2.0 and above): pip3 install tensorflow Install PixelLib with: pip3 install pixellib If already installed, upgrade to the latest version using: pip3 install pixellib --upgrade Object Extraction in Images Using the Mask R-CNN COCO Model import pixellib from pixellib.instance import instance_segmentation segment_image = instance_segmentation() segment_image.load_model("mask_rcnn_coco.h5") Lines 1–4: We imported the PixelLib package, created an instance of the instance segmentation class and loaded the pretrained COCO model. Download the model from here. segment_image.segmentImage("image_path", extract_segmented_objects=True, save_extracted_objects=True, show_bboxes=True, output_image_name="output.jpg") This is the final line of the code, where we called the function segmentImage with the following parameters: image_path: This is the path to the image to be segmented.
extract_segmented_objects: This is the parameter that tells the function to extract the objects segmented in the image, and it is set to true. save_extracted_objects: This is an optional parameter for saving the extracted segmented objects. show_bboxes: This is the parameter that shows segmented objects with bounding boxes. If it is set to false, it shows only the segmentation masks. output_image_name: This is the path to save the output image. Sample image Full code for object extraction Output Image Extracted Objects from the image Note: All the objects in the image are extracted and saved individually as an image. I displayed just a few of them. We make use of a pretrained Mask R-CNN COCO model to perform image segmentation. The COCO model supports 80 classes of objects, but in some applications we may not want to segment all the objects it supports. Therefore, PixelLib has made it possible to filter unused detections and segment specific classes. Modified code for Segmenting Specific Classes target_classes = segment_image.select_target_classes(person=True) It is still the same code, except we called a new function, select_target_classes, to filter unused detections and segment only our target class, which is person. segment_image.segmentImage("sample.jpg", segment_target_classes=target_classes, extract_segmented_objects=True, save_extracted_objects=True, show_bboxes=True, output_image_name="output.jpg") In the segmentImage function we introduced a new parameter called segment_target_classes, to perform segmentation on the target class called from the select_target_classes function. Wow! We were able to detect only the people in this image. What if we are only interested in detecting the means of transport of the people available in this picture? target_classes = segment_image.select_target_classes(car=True, bicycle=True) We changed the target class from person to car and bicycle. Beautiful result!
We detected only the bicycles and the cars available in this picture. Note: If you filter COCO model detections, only the objects of the target classes segmented in the image are extracted. PixelLib supports the extraction of segmented objects in videos and camera feeds. sample video segment_video.process_video("sample.mp4", show_bboxes=True, extract_segmented_objects=True, save_extracted_objects=True, frames_per_second=5, output_video_name="output.mp4") It is still the same code, except we changed the function from segmentImage to process_video. It takes the following parameters: show_bboxes: This is the parameter that shows segmented objects with bounding boxes. If it is set to false, it shows only the segmentation masks. frames_per_second: This is the parameter that sets the number of frames per second for the saved video file. In this case it is set to 5, i.e. the saved video file would have 5 frames per second. extract_segmented_objects: This is the parameter that tells the function to extract the objects segmented in the video, and it is set to true. save_extracted_objects: This is an optional parameter for saving the extracted segmented objects. output_video_name: This is the name of the saved segmented video. Output Video Extracted objects from the video Note: All the objects in the video are extracted and saved individually as an image. I displayed just some of them. Segmentation of Specific Classes in Videos PixelLib makes it possible to filter unused detections and segment specific classes in videos and camera feeds. target_classes = segment_video.select_target_classes(person=True) segment_video.process_video("sample.mp4", show_bboxes=True, segment_target_classes=target_classes, extract_segmented_objects=True, save_extracted_objects=True, frames_per_second=5, output_video_name="output.mp4") The target class for detection is set to person and we were able to segment only the people in the video.
target_classes = segment_video.select_target_classes(car=True) segment_video.process_video("sample.mp4", show_bboxes=True, segment_target_classes=target_classes, extract_segmented_objects=True, save_extracted_objects=True, frames_per_second=5, output_video_name="output.mp4") The target class for segmentation is set to car and we were able to segment only the cars in the video. Full code for Segmenting Specific Classes and Object Extraction in Videos Full code for Segmenting Specific Classes and Object Extraction in Camera Feeds import cv2 capture = cv2.VideoCapture(0) We imported cv2 and included the code to capture the camera's frames. segment_camera.process_camera(capture, show_bboxes=True, show_frames=True, extract_segmented_objects=True, save_extracted_objects=True, frame_name="frame", frames_per_second=5, output_video_name="output.mp4") In the code for performing segmentation, we replaced the video's file path with capture, i.e. we are processing a stream of frames captured by the camera. We added extra parameters for the purpose of showing the camera's frames: show_frames: This is the parameter that handles the showing of segmented camera frames. frame_name: This is the name given to the shown camera frame. PixelLib supports training of a custom segmentation model, and it is possible to extract objects segmented with a custom model. import pixellib from pixellib.instance import custom_segmentation segment_image = custom_segmentation() segment_image.inferConfig(num_classes=2, class_names=['BG', 'butterfly', 'squirrel']) segment_image.load_model("Nature_model_resnet101.h5") Lines 1–4: We imported the PixelLib package, created an instance of the custom segmentation class, called the inference configuration function (inferConfig) and loaded the custom model. Download the custom model from here.
The custom model supports two classes, which are as follows: Butterfly Squirrel segment_image.segmentImage("image_path", extract_segmented_objects=True, save_extracted_objects=True, show_bboxes=True, output_image_name="output.jpg") We called the same function segmentImage used for COCO model detection. Full Code for Object Extraction with a Custom Model sample image Output Extracted Object from the image sample video Full Code for Object Extraction in Videos Using a Custom Model Output Extracted Objects Full Code for Object Extraction in Camera Feeds Using a Custom Model Read this article to learn how to train a custom model with PixelLib. towardsdatascience.com Visit PixelLib's official GitHub repository Visit PixelLib's official documentation Reach me via: Email: [email protected] Twitter: @AyoolaOlafenwa Facebook: Ayoola Olafenwa Linkedin: Ayoola Olafenwa Check out these articles written on how to make use of PixelLib for semantic segmentation, instance segmentation and background editing in images and videos.
Train BERT-Large in your own language | by Hajdu Róbert | Towards Data Science
BERT-Large has been a real “game changer” technology in the field of Natural Language Processing in recent years. Extending the basic model with transfer learning, we get state-of-the-art solutions for tasks such as Question Answering, Named Entity Recognition or Text Summarization. The model currently exists in around 10 languages, and in this article we will share our experiences of training so that a new model can be trained in your own language relatively easily and effectively.

In terms of training, Microsoft’s ONNX Runtime library with DeepSpeed optimizations offers the fastest (and cheapest!) solution for training a model, so we used it in our experiments on the Azure Machine Learning platform (however, feel free to check the ONNX Runtime link above for local/different environment runs). Note that the training can be completed in roughly 200 hours on a 4x Tesla V100 node.

The guide covers the training process, which consists of 2 major parts:

Data preparation
Training

A 3.4 billion word text corpus was used for the original BERT-Large, so it is worth training with a dataset of this size. An obvious solution might be to use the Wikipedia corpus, which can be downloaded in the target language from here. The wiki corpus alone most probably won’t contain enough data, but it is definitely worth adding to the existing corpus, as it is a good quality corpus that increases the efficiency of training and use. Data preprocessing can be a computationally intensive operation; depending on the size of the training files, a lot of RAM may be required. For this, we used a STANDARD_D14_V2 (16 Cores, 112 GB RAM, 800 GB Disk) VM in AzureML.

ONNX Runtime uses NVIDIA’s BERT-Large solution. First of all, the raw dataset needs to be cleaned, if necessary, and the desired format must meet two criteria:

Each sentence is in a separate line.
The related entries (articles, paragraphs) are separated by a blank line.
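To make this target layout concrete, here is a minimal sketch (my own illustration, not part of the original pipeline) that converts a list of documents into it. It uses a naive regex sentence splitter purely for demonstration — for real training data you should use a proper sentence tokenizer for your language, as discussed below:

```python
import re

def to_pretrain_format(documents):
    """Naively convert documents to the required layout: one sentence
    per line, documents separated by a blank line.

    The split on '.', '!' or '?' followed by whitespace is a crude
    heuristic; real pipelines should use a trained sentence tokenizer
    (NLTK punkt, UDPipe, spaCy, ...).
    """
    blocks = []
    for doc in documents:
        sentences = re.split(r"(?<=[.!?])\s+", doc.strip())
        blocks.append("\n".join(s for s in sentences if s))
    return "\n\n".join(blocks)

docs = ["BERT is a language model. It was released in 2018.",
        "Wikipedia is a useful corpus. It exists in many languages."]
print(to_pretrain_format(docs))
```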
For the wiki dataset, I created and uploaded to my own repository a customized solution from the NVIDIA BERT data preparation scripts. So let’s take a look at the process:

Wikipedia dumps are available at this link. You can even download the dump with wget. The downloaded file is in .bz2 format, which can be extracted with the bunzip2 (or bzip2 -d) command on Linux-based machines.

The extracted file is in .xml format. The WikiExtractor Python package extracts articles from the input wiki file and provides useful and essential help for dump cleaning.

When we’re done with that, the script has created a folder structure that contains the extracted text. In this case, we need to take the following steps to bring the dataset into a form compatible with the training script:

Formatting
Tokenizing
Filtering
Creating vocab
Sharding
Creating binaries

For formatting, let’s use the following script:

python3 formatting.py --input_folder=INPUT_FOLDER --output_file=OUTPUT_FILE

which sorts the extracted text into one file, with one article per line.

Thereafter, we need to tokenize our articles into sentence-per-line (spl) format, because it’s needed for our training script and for further actions like filtering and sharding. It’s quite an easy job; however, the quality of the tokenizing is absolutely important, so if you don’t already know a good sentence tokenizer for your language, it’s time to do a bit of research now. We’re going to give a few examples, and have also implemented a few methods in our tokenization script:

NLTK is a very common library; it uses a package called punkt for tokenization. Check supported languages here.
UDPipe (v1) has many supported languages for sentence tokenization. Check them here.
StanfordNLP is also a common library in this case. Check pretrained (language) models here. Note that the library also has a spaCy wrapper, if you are more comfortable with that.
Spacy (last but not least) is a very simple and powerful library for NLP.
Available languages can be found here.

Note that at this point, I strongly recommend doing a bit of research about the available sentence tokenizers in your language, since it can be a matter of your model’s performance. So if your text is not in spl format, please test a correct tokenizer and use it on your text. Make sure that every paragraph of your text is separated by an empty line.

As we’re great fans of the Finnish TurkuNLP group (who have made wikiBERT-base models in many languages), we would like to share a customized filtering method from their pipeline. This filtering method has default parameter values based on the TurkuNLP team’s experience; however, feel free to adjust those values (if you want) based on your dataset(s). To run (with the default working directory):

python filtering.py INPUT_SPL_FILE \
  --word-chars abcdefghijklmnopqrstuvwxyz (replace with your alphabet) \
  --language en (replace with your lang) \
  --langdetect en (replace with your lang) \
  > OUTPUT_FILE

At this point, make sure your text is clean: it shouldn’t have unnecessary or heavily repeated lines (more likely in crawled data), and every paragraph should be meaningful, with the text flowing continuously. For premade alphabets, please refer to the TurkuNLP pipeline repo.

Also, one way to clean the text is converting it to a smaller character set like Latin1 (then back to UTF-8) to remove unnecessary symbols.

For creating vocab files, we forked the solution of the TurkuNLP group (which is derived from Google). Now you can run the vocabulary training method on the whole dataset (note that it’ll take lines randomly from a large corpus file, according to our setup). Feel free to increase your vocab size according to your experience and language, but take note that a larger vocab size will require more VRAM, which means you’ll have to train your model with a lower microbatch size, which can decrease the effectiveness of your training on 16 GB GPUs. It doesn’t matter much on 32 GB GPUs.
python3 spmtrain.py INPUT_FILE \
  --model_prefix=bert \
  --vocab_size=32000 \
  --input_sentence_size=100000000 \
  --shuffle_input_sentence=true \
  --character_coverage=0.9999 \
  --model_type=bpe

After this one, we need to convert our SentencePiece vocab to a BERT-compatible WordPiece vocab by issuing this script:

python3 sent2wordpiece.py bert.vocab > vocab.txt

Tadaa! You’re done creating a BERT-compatible vocab based on your text corpus.

Sharding is a recommended part, since a 500 MB raw text file can take up to 50 GB of RAM during 512 seq len binary creation, so it’s recommended to create shards sized between 50–100 MB.

python3 sharding.py \
  --input_file=INPUT_FILE \
  --num_shards=NUMBER_OF_SHARDS_TO_MAKE

For binary (.hdf5) creation, we need to have a BERT-compatible vocabulary file. If you have one, that’s nice; if you don’t, then please check the following article for creating your own vocab file. You can process the files in parallel; however, it can take up to 10 GB of RAM for a 100 MB file. You can try more shards in the 128 seq len run, since it eats less RAM.

To create .hdf5 files, run:

python3 create_hdf5_files.py --max_seq_length 128 --max_predictions_per_seq 20 --vocab_file=vocab.txt --n_processes=NUMBER_OF_PROCESSES

for 128 sequence length training, and

python3 create_hdf5_files.py --max_seq_length 512 --max_predictions_per_seq 80 --vocab_file=vocab.txt --n_processes=NUMBER_OF_PROCESSES

for 512 sequence length preprocessing. Both of them are required for the training process, which will be explained later on.

Having finished the .hdf5 creation, you should have 2 folders, named something like this:

hdf5_lower_case_0_seq_len_128_max_pred_20_masked_lm_prob_0.15_random_seed_12345_dupe_factor_5/

and

hdf5_lower_case_0_seq_len_512_max_pred_80_masked_lm_prob_0.15_random_seed_12345_dupe_factor_5

In order to use the converted files for training, you’ll need to upload them to a BLOB container. We recommend using Azure Storage Explorer and the AzCopy command line tool.
In Azure Storage Explorer you can easily create a BLOB container (use the guide above), and with AzCopy you can copy the converted files to the BLOB container in the following format:

azcopy.exe cp --recursive "src" "dest"

Note: you can create a BLOB under your Azure storage account (left side in Explorer), and you can get the source by navigating to and right-clicking on your .hdf5 folder, using the “Get Shared Access Signature” option. Same with the BLOB container as destination.

There are two parts to the training process: a phase of sequence length 128 and one of 512. This significantly speeds up training, as it first takes approx. 7,000 steps at 128 and then 1,500 at 512, which allows for much faster training. This is based on the fact that training is much faster at 128 sequence length, but we want our model to be able to work with texts of 512 token length.

For the training, we used the ONNX Runtime-based solution, which now contains the DeepSpeed training optimizations. This provides the fastest and of course the cheapest solution currently available (SOTA). The repository is available here. The ONNX Runtime team also prepared a docker image for training with the necessary components like OpenMPI, CUDA, cuDNN, NCCL and the required Python packages.

As mentioned, we ran the training in AzureML, so the guide also follows this method. It is not strictly required to run in AzureML if the required resources (GPU) are available; the Microsoft repository above also includes a recipe for local execution.

First we need to create a compute instance to get the code from the GitHub repositories. This instance (VM) does not have to be a pricey one; we used STANDARD_D1_V2 (1 Core, 3.5 GB RAM, 50 GB Disk), for example. To create a compute instance, open any file on the Notebooks tab and click on the + button:

Now open a terminal, which you can do on the same tab.
In the VM’s terminal, you need to use the following commands (according to the ONNX Runtime training examples repository above) to get the training code(s).

To get the ONNX Runtime code for enhanced BERT training:

git clone https://github.com/microsoft/onnxruntime-training-examples.git
cd onnxruntime-training-examples

To get NVIDIA’s BERT-Large training solution:

git clone --no-checkout https://github.com/NVIDIA/DeepLearningExamples.git
cd DeepLearningExamples/
git checkout 4733603577080dbd1bdcd51864f31e45d5196704
cd ..

And getting them together a bit:

mkdir -p workspace
mv DeepLearningExamples/PyTorch/LanguageModeling/BERT/ workspace
rm -rf DeepLearningExamples
cp -r ./nvidia-bert/ort_addon/* workspace/BERT

You will need to perform a few steps before you can start the training:

Copy your vocab.txt file to the workspace/BERT/vocab directory
Modify the vocab size in nvidia_bert/ort_addon/ort_supplement/ort_supplement.py, line 55
Download and copy bert_config.json to workspace/BERT
Modify the vocab size in workspace/BERT/bert_config.json

Now you can access the training notebook in nvidia-bert/azureml-notebooks/. Open it.

First you’ll need to provide your

workspace name
subscription ID
resource group

in the notebook.

Next, it’s mandatory to give the notebook access to your BLOB container, where we previously uploaded the converted dataset. This consists of the

datastore name
account name
account key
container name

parameters.

In the next step, we need to create a compute target for training. In the training process we were using Standard_NC24rs_v3, which contains 4x Tesla V100 16GB GPUs. This setup takes approx. 200–220 hours to train. It’s really up to you which VMs you want to use. You can mainly choose between

Standard_NC24rs_v3 (4x Tesla V100 16GB)
Standard_ND40rs_v2 (8x Tesla V100 32GB)

VMs.

Perhaps this is the most exciting step in your training. Here you need to configure the training script, which the notebook will launch as an AzureML experiment.
Note that we will run the experiment (and script) twice, for the 128 and 512 phases. You’ll need to set up the following essential parameters:

process_count_per_node (2x)
node_count
input_dir
output_dir
train_batch_size
gradient_accumulation_steps
gpu_memory_limit

process_count_per_node: number of GPUs per VM (4/8).
node_count: overall number of VMs you use.
input_dir: the location of the 128 pretraining data within the BLOB.
output_dir: an arbitrary directory for the checkpoints. Note: the 512 train will use this directory to load the last phase 1 checkpoint.
train_batch_size: train batch size. Set this according to the table above in the notebook.
gradient_accumulation_steps: set this one as well according to the table above in the notebook. The micro batch size will be calculated from the train batch size and this parameter.
gpu_memory_limit: set this one depending on whether you use 16 or 32 GB GPUs.

And finally, don’t forget to add the

'--deepspeed_zero_stage': ''

parameter to accelerate your training with the DeepSpeed ZeRO optimizer.

Note: you may also want to disable the progress bar during your pretraining with this parameter:

'--disable_progress_bar': ''

Note that the micro batch size should be maximized given the batch size and gradient accumulation steps.

Submit.

As you have started pretraining as an AzureML experiment, you should find something like this in your Experiments tab:

I also included a small script in my repo, named convert_checkpoint.py, to make your checkpoint compatible for fine-tuning with the transformers library.

After your cluster has finished phase 1, you can set up the phase 2 script with the same method as above. It’s important to use the same output dir, so that the phase 2 run will find your phase 1 checkpoint in that directory. The run will always keep the last 3 checkpoints, and you can set the number of training steps between checkpoints. We recommend around 100 steps, so you can download a checkpoint every ~100 steps and benchmark it with a fine-tuning script.
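Pulling the parameter flags above together, the experiment’s script parameters presumably end up in a dictionary along these lines. This is purely illustrative — the authoritative parameter names and the batch size table live in the nvidia-bert AzureML notebook, and the values below are invented for a single 4x V100 16 GB node:

```python
# Illustrative only -- check the actual notebook for authoritative
# names and the recommended batch size / accumulation combinations.
script_params = {
    '--process_count_per_node': 4,     # GPUs per VM
    '--node_count': 1,                 # number of VMs
    '--input_dir': 'path/to/seq128/hdf5/shards',   # hypothetical BLOB path
    '--output_dir': 'path/to/checkpoints',         # hypothetical path
    '--train_batch_size': 8192,        # example value; see the notebook table
    '--gradient_accumulation_steps': 512,
    '--gpu_memory_limit': 16,          # 16 or 32, depending on your GPUs
    '--deepspeed_zero_stage': '',      # enable DeepSpeed ZeRO
    '--disable_progress_bar': '',      # optional: quieter experiment logs
}
```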
Congratulations, you have a BERT-Large model in your own language! Please share your experiences here or contact me by email, since we are eager to hear about your experiences here in the Applied Data Science and Artificial Intelligence Group, University of Pécs, Hungary.
Text Extraction using Regular Expression (Python) | by KahEm Chu | Towards Data Science
"Regular Expression (RegEx) is one of the unsung successes in standardization in computer science," [1].

In the example of my previous article, the regular expression is used to clean up the noise and perform tokenization of the text. Well, what we can do with RegEx in Text Analytics is far more than that. In this article, I am sharing how to use RegEx to extract the sentences which contain any keyword in a defined list from the text data or corpus. For example, you may want to extract the reviews made on features of a particular product, or you may wish to extract all emails discussing urgent or critical subjects.

For me, the text documents I used in my text analytics project are huge, which would take a long time if I were going to train the model on my device. Also, I am not interested in every sentence in the datasets, hence text extraction acts as a part of the data cleaning process for my project. Extracting only the sentences containing the defined keywords not only reduces the vocabulary size but also makes the model more precise according to my need.

However, this method might not be suitable for every text analytics project. For example, if you are studying the overall sentiment of people toward a particular subject, object or event, extracting sentences might affect the accuracy of the overall sentiment. On the other hand, this method is suitable if you are sure what key matters you wish to study. For example, if you run an eCommerce store selling household products and you are curious about people's feelings toward your new product, you may directly extract all the reviews containing the name or model of the new product.

As an example, the eCommerce store introduces a bar stool with a different height from the past product and would like to know how their customers react. The following highlighted sentence will be the output of text extraction with keywords 'bar stool' and 'height'.

Well, this introduction is longer than I expected. 
Let's waste no more time and get started!

## read sentences and extract only lines which contain the keywords
import pandas as pd
import re

# open files
keyword = open('keyword.txt', 'r', encoding='utf-8').readlines()
texts = open('sent_token.txt', 'r', encoding='utf-8').readlines()

# define function to read a file and remove the newline symbol
def read_file(file):
    texts = []
    for word in file:
        text = word.rstrip('\n')
        texts.append(text)
    return texts

# save to variables
key = read_file(keyword)
corpus = read_file(texts)

In the script above, the inputs are sentence tokens and the list of keywords stored in a text file. You may tokenize your dataset from documents into paragraphs or sentences, and then extract the paragraphs or sentences which contain the keywords. Sentence tokenization can be done easily with sent_tokenize from nltk.tokenize as below.

from nltk.tokenize import sent_tokenize

text = open('Input/data.txt', 'r', encoding='utf-8')
text_file = open("Output/sent_token.txt", "w", encoding='utf-8')

### This part removes the end-of-line breaks
string_without_line_breaks = ""
for line in text:
    stripped_line = line.rstrip() + " "
    string_without_line_breaks += stripped_line

sent_token = sent_tokenize(string_without_line_breaks)
for word in sent_token:
    text_file.write(word)
    text_file.write("\n")
text_file.close()

As the text data I used is extracted from a PDF file, there are a lot of line breaks, hence I will remove the line breaks before sentence tokenization. 
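As a side note, the same line-break flattening can be sketched with only the standard library when nltk is not available. The naive period-based split below is just an illustrative stand-in for sent_tokenize (it has no abbreviation handling), and the sample lines are made up:

```python
import re

def flatten_and_split(lines):
    # Join physical lines into one string, replacing line breaks with
    # spaces, mirroring the rstrip() loop used with sent_tokenize above.
    flat = " ".join(line.rstrip() for line in lines)
    # Naive sentence split on ., ! or ? followed by whitespace --
    # NOT equivalent to nltk's sent_tokenize.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', flat) if s.strip()]

lines = ["The data came\n", "from a PDF file.\n", "It has line breaks!\n"]
print(flatten_and_split(lines))
# -> ['The data came from a PDF file.', 'It has line breaks!']
```

This only demonstrates the mechanics; for real text, sent_tokenize remains the better choice.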
# open file to write lines which contain keywords
file = open('Output/keyline.txt', 'w', encoding='utf-8')

def write_file(file, keyword, corpus):
    keyline = []
    for line in corpus:
        line = line.lower()
        for key in keyword:
            result = re.search(r"(^|[^a-z])" + key + r"([^a-z]|$)", line)
            if result is not None:
                keypair = [key, line]
                keyline.append(keypair)
                file.write(line + " ")
                break
    return keyline

output = write_file(file, key, corpus)

The function above is the one I used to extract all the sentences which contain the keywords. A break is added to prevent copying the same line for multiple keywords, which lowers the file size. The key script for doing so is just one line of code.

result = re.search(r"(^|[^a-z])" + key + r"([^a-z]|$)", line)

The "(^|[^a-z])" and "([^a-z]|$)" surrounding the key make sure the matched word is identical to the keyword. For example, without these two parts of the script, when we are searching for a keyword like "act", the result returned might be the word "act" with a prefix or suffix, such as "react" or "actor".

# create DataFrame using the data
df = pd.DataFrame(output, columns=['Key', 'Line'])

After extracting the lines, you may create a DataFrame from the keywords and the respective lines extracted for further analysis. Note that in my example, I do not extract the same line again if it contains multiple keywords. If you need to do so, you may remove the break command from the script above.

That's all on how to extract the sentences containing keywords. Simple, right?

Last but not least, I am sharing a script I have been using lately with str.startswith() and str.endswith() applied to os.listdir() results, which helps you to collect all the files you need efficiently. This is sort of a similar concept to Regular Expression, where we define a particular pattern, and then we search for it. 
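The boundary pattern discussed above can be exercised in isolation. A small sketch (the helper name and sample sentences are made up for illustration) confirms that a keyword such as "act" does not match inside "react" or "actor":

```python
import re

def contains_keyword(key, line):
    # Same pattern as in write_file above: the keyword must not be
    # preceded or followed by another lowercase letter.
    return re.search(r"(^|[^a-z])" + key + r"([^a-z]|$)", line.lower()) is not None

print(contains_keyword("act", "they act quickly"))    # True
print(contains_keyword("act", "An act."))             # True  (punctuation is fine)
print(contains_keyword("act", "they react quickly"))  # False (prefix)
print(contains_keyword("act", "the actor left"))      # False (suffix)
```

Note that a keyword containing regex metacharacters would need re.escape(key) before being spliced into the pattern; the keywords used in this article are plain words, so the original code does not do this.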
# collect txt files whose names start with 'data'
import os

path = 'C:/Users/Dataset txt'
folder = os.fsencode(path)
filenames_list = []
for file in os.listdir(folder):
    filename = os.fsdecode(file)
    if filename.startswith('data') and filename.endswith('.txt'):
        filenames_list.append(filename)
filenames_list.sort()

If you are interested in text processing with NLTK and SpaCy: Text Processing in Python. If you are interested in exploring gigantic text with Word Cloud: Text Exploration with Word Cloud.

[1] D. Jurafsky and J. H. Martin, "Speech and Language Processing," 3 December 2020. [Online]. Available: https://web.stanford.edu/~jurafsky/slp3/.

Congrats and thanks for reading to the end. Hope you enjoy this article. ☺️
[ { "code": null, "e": 277, "s": 172, "text": "β€œRegular Expression (RegEx) is one of the unsung successes in standardization in computer science,” [1]." }, { "code": null, "e": 796, "s": 277, "text": "In the example of my previous article, the regular expression is used to clean up the noise and perform tokenization to the text. Well, what we can do with RegEx in Text Analytics is far more than that. In this article, I am sharing how to use RegEx to extract the sentences which contain any keyword in a defined list from the text data or corpus. For example, you may want to extract the reviews made on features of a particular product, or you may which to extract all emails discussing urgent or critical subjects." }, { "code": null, "e": 1240, "s": 796, "text": "For me, the text documents I used in my text analytics project is huge, which take a long time if I going to train the model on my device. Also, I am not interested in every sentence in the datasets, hence text extraction is acting as a part of the data cleaning process for my project. Extracting only the sentences containing keywords defined not only able to reduce the vocab size, but also make the model more precise according to my need." }, { "code": null, "e": 1832, "s": 1240, "text": "However, this method might not be suitable for every text analytics project. For example, if you are studying the overall sentiment of people toward a particular subject, object or event, extracting sentences might affect the accuracy of the overall sentiment. On another way round, this method is suitable if you are sure on what are the key matter you wish to study on. For example, you run an eCommerce that selling household products, you are curious about people feelings toward your new product, you may directly extract all the reviews containing the name or model of the new product." 
}, { "code": null, "e": 2098, "s": 1832, "text": "As an example, the eCommerce store introduces a bar stool with a different height from the past product and would like to know how their customers react. The following highlighted sentence will be the output of text extraction with keywords β€˜bar stool’ and β€˜height’" }, { "code": null, "e": 2191, "s": 2098, "text": "Well, this introduction is longer than I expected. Let’s waste no more time and get started!" }, { "code": null, "e": 2692, "s": 2191, "text": "## read sentences and extract only line which contain the keywordsimport pandas as pdimport re# open filekeyword = open('keyword.txt', 'r', encoding = 'utf-8').readlines()texts = open('sent_token.txt', 'r', encoding = 'utf-8').readlines()# define function to read file and remove next line symboldef read_file(file): texts = [] for word in file: text = word.rstrip('\\n') texts.append(text) return texts# save to variable key = read_file(keyword)corpus = read_file(texts)" }, { "code": null, "e": 3029, "s": 2692, "text": "In the script above, the inputs are sentence tokens and the list of keywords stored in a text file. You may tokenize your dataset from documents into paragraphs or sentences, and then extract the paragraphs or sentences which contain the keywords. Sentence tokenization can be done easily with sent_tokenize from nltk.tokenize as below." 
}, { "code": null, "e": 3503, "s": 3029, "text": "from nltk.tokenize import sent_tokenizetext = open('Input/data.txt', 'r', encoding = 'utf-8')text_file = open(\"Output/sent_token.txt\", \"w\", encoding='utf-8')### This part to remove end line breakstring_without_line_breaks = \"\"for line in text: stripped_line = line.rstrip() + \" \" string_without_line_breaks += stripped_linesent_token = sent_tokenize(string_without_line_breaks)for word in sent_token: text_file.write(word) text_file.write(\"\\n\")text_file.close()" }, { "code": null, "e": 3655, "s": 3503, "text": "As the text data I used is extracted from a PDF file, there are a lot of line breaks, hence I will remove the line breaks before sentence tokenization." }, { "code": null, "e": 4338, "s": 3655, "text": "# open file to write line which contain keywordsfile = open('Output/keyline.txt', 'w', encoding = 'utf-8') def write_file(file, keyword, corpus): keyline = [] for line in corpus: line = line.lower() for key in keyword: result = re.search(r\"(^|[^a-z])\" + key + r\"([^a-z]|$)\", line) if result != None: keypair = [key, line] keyline.append(keypair) file.write(line + \" \") break else: pass return(keyline)output = write_file(file,key,corpus)" }, { "code": null, "e": 4580, "s": 4338, "text": "The function above is the function I used to extract all the sentences which contain the keywords. A break is added to prevent copy the same line with multiple keywords to lower file size. The key script of doing so is just one line of code." }, { "code": null, "e": 4642, "s": 4580, "text": "result = re.search(r”(^|[^a-z])” + key + r”([^a-z]|$)”, line)" }, { "code": null, "e": 4940, "s": 4642, "text": "The ”(^|[^a-z])” and ”([^a-z]|$)” surrounding the key is to make sure the word is identical with the keyword. For example, without these two part of the script, when we searching for a keyword like β€œact”, the result return might be the word β€œact” with prefix or suffix, such as β€œreact” or β€œactor”." 
}, { "code": null, "e": 5023, "s": 4940, "text": "# create DataFrame using data df = pd.DataFrame(output, columns =['Key', 'Line']) " }, { "code": null, "e": 5321, "s": 5023, "text": "After extract the lines, you may create DataFrame from the keywords and the respective line extracted for further analysis. Note that in my example, I do not extract the same line again if it contains multiple keywords. If you need to do so, you may remove the break command from the script above." }, { "code": null, "e": 5401, "s": 5321, "text": "That’s all on how to extract the line of sentences with keywords. Simple right?" }, { "code": null, "e": 5695, "s": 5401, "text": "Last but not least, sharing a script I have been using lately, os.listdir.startswith() and os.listdir.endswith() which help you to get all the file you need efficiently. This is sort of a similar concept with Regular Expression, where we define a particular pattern, and then we search for it." }, { "code": null, "e": 6020, "s": 5695, "text": "# collect txt file with name start with 'data' import ospath = 'C:/Users/Dataset txt'folder = os.fsencode(path)filenames_list = []for file in os.listdir(folder): filename = os.fsdecode(file) if filename.startswith( ('data') ) and filename.endswith( ('.txt') ): filenames1.append(filename)filenames_list.sort() " }, { "code": null, "e": 6109, "s": 6020, "text": "If you are interested in text processing with NLTK and SpaCy: Text Processing in Python." }, { "code": null, "e": 6211, "s": 6109, "text": "If you are interested in explore the gigantic text with Word Cloud: Text Exploration with Word Cloud." }, { "code": null, "e": 6232, "s": 6211, "text": "Subscribe on YouTube" }, { "code": null, "e": 6380, "s": 6232, "text": "[1] D. Jurafsky and J. H. Martin, β€œSpeech and Language Processing,” 3 December 2020. [Online]. Available: https://web.stanford.edu/~jurafsky/slp3/." } ]
Python program to find the most occurring character and its count
In this article, we will learn about the solution and approach to solve the given problem statement. Given an input string, we need to find the most occurring character and its count.

Create a dictionary using the Counter method, having characters as keys and their frequencies as values.

Find the character with the maximum occurrence count.

Now let's see the implementation below:

from collections import Counter

def find(input_):
    # dictionary of character frequencies
    # (lowercased so that 'T' and 't' are counted together,
    # matching the expected output below)
    wc = Counter(input_.lower())
    # most_common(1) returns the (character, count) pair
    # with the highest count
    print(wc.most_common(1)[0])

# Driver program
if __name__ == "__main__":
    input_ = 'Tutorialspoint'
    find(input_)

('t', 3)

All the variables and functions are declared in the global scope as shown above.

In this article, we learnt about the approach to find the most occurring character and its count.
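For reference, the same count can be sketched without Counter, using a plain dictionary. This makes explicit what Counter does under the hood; the helper name is just for illustration:

```python
def most_occurring(text):
    # Build the character -> frequency mapping by hand.
    freq = {}
    for ch in text.lower():  # lower() so 'T' and 't' count together
        freq[ch] = freq.get(ch, 0) + 1
    # max() over the keys, ordered by their counts; ties resolve to the
    # first maximal character in the dict's insertion order.
    best = max(freq, key=freq.get)
    return (best, freq[best])

print(most_occurring('Tutorialspoint'))  # -> ('t', 3)
```

The Counter-based version above is preferable in practice, but the two are equivalent for this problem.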
[ { "code": null, "e": 1163, "s": 1062, "text": "In this article, we will learn about the solution and approach to solve the given problem statement." }, { "code": null, "e": 1245, "s": 1163, "text": "Given an input string we need to find the most occurring character and its count." }, { "code": null, "e": 1342, "s": 1245, "text": "Create a dictionary using Counter method having strings as keys and their frequencies as values." }, { "code": null, "e": 1439, "s": 1342, "text": "Create a dictionary using Counter method having strings as keys and their frequencies as values." }, { "code": null, "e": 1518, "s": 1439, "text": "Find the maximum occurrence of a character i.e. value and get the index of it." }, { "code": null, "e": 1597, "s": 1518, "text": "Find the maximum occurrence of a character i.e. value and get the index of it." }, { "code": null, "e": 1638, "s": 1597, "text": "Now let’s see the implementation below βˆ’" }, { "code": null, "e": 1929, "s": 1638, "text": "from collections import Counter\n def find(input_):\n # dictionary\n wc = Counter(input_)\n # Finding maximum occurrence\n s = max(wc.values())\n i = wc.values().index(s)\n print (wc.items()[i])\n# Driver program\nif __name__ == \"__main__\":\n input_ = 'Tutorialspoint'\n find(input_)" }, { "code": null, "e": 1937, "s": 1929, "text": "(β€˜t’,3)" }, { "code": null, "e": 2019, "s": 1937, "text": "All the variables and functions are declared in the global scope as shown below βˆ’" }, { "code": null, "e": 2117, "s": 2019, "text": "In this article, we learnt about the approach to find the most occurring character and its count." } ]
C# | Copy() Method - GeeksforGeeks
21 Jun, 2019

In C#, Copy() is a string method. It is used to create a new instance of String with the same value for a specified String. The Copy() method returns a String object, which is the same as the original string but represents a different object reference. To check its reference, use an assignment operation, which assigns an existing string reference to an additional object variable.

Syntax:

public static string Copy(string str)

Explanation: This method accepts a single parameter str, which is the original string to be copied. It returns a new string with the same value as str. The return type of the Copy() method is System.String.

Example Program to illustrate the Copy() Method

// C# program to demonstrate the
// use of Copy() method
using System;
class Program {

    static void cpymethod()
    {
        string str1 = "GeeksforGeeks";
        string str2 = "GFG";

        Console.WriteLine("Original Strings are str1 = " +
                          "'{0}' and str2='{1}'", str1, str2);
        Console.WriteLine("");
        Console.WriteLine("After Copy method");
        Console.WriteLine("");

        // using the Copy method
        // to copy the value of str1
        // into str2
        str2 = String.Copy(str1);

        Console.WriteLine("Strings are str1 = " +
                          "'{0}' and str2='{1}'", str1, str2);

        // check whether the object references are equal
        Console.WriteLine("ReferenceEquals: {0}",
                          Object.ReferenceEquals(str1, str2));

        // check whether the objects are equal
        Console.WriteLine("Equals: {0}",
                          Object.Equals(str1, str2));

        Console.WriteLine("");
        Console.WriteLine("After Assignment");
        Console.WriteLine("");

        // the str1 object reference is assigned to str2
        str2 = str1;

        Console.WriteLine("Strings are str1 = '{0}' " +
                          "and str2 = '{1}'", str1, str2);

        // check whether the object references are equal
        Console.WriteLine("ReferenceEquals: {0}",
                          Object.ReferenceEquals(str1, str2));

        // check whether the objects are equal
        Console.WriteLine("Equals: {0}",
                          Object.Equals(str1, str2));
    }

    // Main Method
    public static void Main()
    {
        // calling method
        cpymethod();
    }
}

Original Strings are str1 = 
'GeeksforGeeks' and str2='GFG'

After Copy method

Strings are str1 = 'GeeksforGeeks' and str2='GeeksforGeeks'
ReferenceEquals: False
Equals: True

After Assignment

Strings are str1 = 'GeeksforGeeks' and str2 = 'GeeksforGeeks'
ReferenceEquals: True
Equals: True

Reference : https://msdn.microsoft.com/en-us/library/system.string.copy
[ { "code": null, "e": 24326, "s": 24298, "text": "\n21 Jun, 2019" }, { "code": null, "e": 24706, "s": 24326, "text": "In C#, Copy() is a string method. It is used to create a new instance of String with the same value for a specified String. The Copy() method returns a String object, which is the same as the original string but represents a different object reference. To check its reference, use assignment operation, which assigns an existing string reference to an additional object variable." }, { "code": null, "e": 24714, "s": 24706, "text": "Syntax:" }, { "code": null, "e": 24753, "s": 24714, "text": "public static string Copy(string str)\n" }, { "code": null, "e": 24980, "s": 24753, "text": "Explanation : This method accepts single parameter str which is the original string to be copied. And it returns the string value, which is the new string with the same value as str. The type of Copy() method is System.String." }, { "code": null, "e": 25028, "s": 24980, "text": "Example Program to illustrate the Copy() Method" }, { "code": "// C# program to demonstrate the // use of Copy() methodusing System;class Program { static void cpymethod() { string str1 = \"GeeksforGeeks\"; string str2 = \"GFG\"; Console.WriteLine(\"Original Strings are str1 = \" + \"'{0}' and str2='{1}'\", str1, str2); Console.WriteLine(\"\"); Console.WriteLine(\"After Copy method\"); Console.WriteLine(\"\"); // using the Copy method // to copy the value of str1 // into str2 str2 = String.Copy(str1); Console.WriteLine(\"Strings are str1 = \" +\"'{0}' and str2='{1}'\", str1, str2); // check the objects reference equal or not Console.WriteLine(\"ReferenceEquals: {0}\", Object.ReferenceEquals(str1, str2)); // check the objects are equal or not Console.WriteLine(\"Equals: {0}\", Object.Equals(str1, str2)); Console.WriteLine(\"\"); Console.WriteLine(\"After Assignment\"); Console.WriteLine(\"\"); // to str1 object reference assign to str2 str2 = str1; Console.WriteLine(\"Strings are str1 = '{0}' \" 
+\"and str2 = '{1}'\", str1, str2); // check the objects reference equal or not Console.WriteLine(\"ReferenceEquals: {0}\", Object.ReferenceEquals(str1, str2)); // check the objects are equal or not Console.WriteLine(\"Equals: {0}\", Object.Equals(str1, str2)); } // Main Method public static void Main() { // calling method cpymethod(); }}", "e": 26613, "s": 25028, "text": null }, { "code": null, "e": 26905, "s": 26613, "text": "Original Strings are str1 = 'GeeksforGeeks' and str2='GFG'\n\nAfter Copy method\n\nStrings are str1 = 'GeeksforGeeks' and str2='GeeksforGeeks'\nReferenceEquals: False\nEquals: True\n\nAfter Assignment\n\nStrings are str1 = 'GeeksforGeeks' and str2 = 'GeeksforGeeks'\nReferenceEquals: True\nEquals: True\n" }, { "code": null, "e": 26977, "s": 26905, "text": "Reference : https://msdn.microsoft.com/en-us/library/system.string.copy" }, { "code": null, "e": 26988, "s": 26977, "text": "nidhi_biet" }, { "code": null, "e": 27002, "s": 26988, "text": "CSharp-method" }, { "code": null, "e": 27005, "s": 27002, "text": "C#" }, { "code": null, "e": 27103, "s": 27005, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27131, "s": 27103, "text": "C# Dictionary with examples" }, { "code": null, "e": 27149, "s": 27131, "text": "Destructors in C#" }, { "code": null, "e": 27164, "s": 27149, "text": "C# | Delegates" }, { "code": null, "e": 27204, "s": 27164, "text": "C# | String.IndexOf( ) Method | Set - 1" }, { "code": null, "e": 27227, "s": 27204, "text": "Extension Method in C#" }, { "code": null, "e": 27258, "s": 27227, "text": "Introduction to .NET Framework" }, { "code": null, "e": 27280, "s": 27258, "text": "C# | Abstract Classes" }, { "code": null, "e": 27296, "s": 27280, "text": "C# | Data Types" }, { "code": null, "e": 27324, "s": 27296, "text": "HashSet in C# with Examples" } ]
Should I repartition?. About Data Distribution in Spark SQL. | by David Vrba | Towards Data Science
In a distributed environment, having proper data distribution becomes a key tool for boosting performance. In the DataFrame API of Spark SQL, there is a function repartition() that allows controlling the data distribution on the Spark cluster. The efficient usage of the function is however not straightforward because changing the distribution is related to a cost for physical data movement on the cluster nodes (a so-called shuffle).

A general rule of thumb is that using repartition is costly because it will induce shuffle. In this article, we will go further and see that there are situations in which adding one shuffle at the right place will remove two other shuffles so it will make the overall execution more efficient. We will first cover a bit of theory to understand how the information about data distribution is internally utilized in Spark SQL and then we will go over some practical examples where using repartition becomes useful.

The theory described in this article is based on the Spark source code, the version being the current snapshot 3.1 (written in June 2020) and most of it is valid also in previous versions 2.x. Also, the theory and the internal behavior are language-agnostic, so it doesn't matter whether we use it with Scala, Java, or Python API.
The phase of the logical planning is responsible for multiple steps related to a logical plan where the logical plan itself is just an abstract representation of the query, it has a form of a tree where each node in the tree is a relational operator. The logical plan itself does not carry any specific information about the execution or about algorithms used to compute transformations such as joins or aggregations. It just represents the information from the query in a way that is convenient for optimizations. During logical planning, the query plan is optimized by a Spark optimizer, which applies a set of rules that transform the plan. These rules are based mostly on heuristics, for instance, that it is better to first filter the data and then do other processing and so on. Once the logical plan is optimized, physical planning begins. The purpose of this phase is to take the logical plan and turn it into a physical plan which can be then executed. Unlike the logical plan which is very abstract, the physical plan is much more specific regarding details about the execution, because it contains a concrete choice of algorithms that will be used during the execution. The physical planning is also composed of two steps because there are two versions of the physical plan: spark plan and executed plan. The spark plan is created using so-called strategies where each node in a logical plan is converted into one or more operators in the spark plan. One example of a strategy is JoinSelection, where Spark decides what algorithm will be used to join the data. The string representation of the spark plan can be displayed using the Scala API: df.queryExecution.sparkPlan // in Scala After the spark plan is generated, there is a set of additional rules that are applied to it to create the final version of the physical plan which is the executed plan. This executed plan will then be executed to generate RDD code. 
To see this executed plan we can simply call explain on the DataFrame because it is actually the final version of the physical plan. Alternatively, we can go to the Spark UI to see its graphical representation.

One of these additional rules that are used to transform the spark plan into the executed plan is called EnsureRequirements (in the next we will refer to it as the ER rule) and this rule is going to make sure that the data is distributed correctly as is required by some transformations (for example joins and aggregations).

Each operator in the physical plan has two important properties, outputPartitioning and outputOrdering (in the next we will call them oP and oO respectively), which carry the information about the data distribution: how the data is partitioned and sorted at the given moment. Besides that, each operator also has two other properties, requiredChildDistribution and requiredChildOrdering, by which it puts requirements on the values of oP and oO of its child nodes. Some operators don't have any requirements but some do. Let's see this on a simple example with SortMergeJoin, which is an operator that has strong requirements on its child nodes: it requires that the data must be partitioned and sorted by the joining key so it can be merged correctly. Let's consider this simple query in which we join two tables (both of them are based on a file datasource such as parquet):

# Using Python API:
spark.table("tableA") \
    .join(spark.table("tableB"), "id") \
    .write...

The spark plan for this query will look like this (and I added there also the information about the oP, oO and the requirements of the SortMergeJoin):

From the spark plan we can see that the child nodes of the SortMergeJoin (two Project operators) have no oP or oO (they are Unknown and None) and this is a general situation where the data has not been repartitioned in advance and the tables are not bucketed. 
When the ER rule is applied on the plan it can see that the requirements of the SortMergeJoin are not satisfied so it will add Exchange and Sort operators to the plan to meet the requirements. The Exchange operator will be responsible for repartitioning the data to meet the requiredChildDistribution requirement and the Sort will order the data to meet the requiredChildOrdering, so the final executed plan will look like this (and this is what you can find also in the SparkUI in the SQL tab; you won't find the spark plan there, though, because it is not available there):

The situation will be different if both tables are bucketed by the joining key. Bucketing is a technique for storing the data in a pre-shuffled and possibly pre-sorted state where the information about bucketing is stored in the metastore. In such a case the FileScan operator will have the oP set according to the information from the metastore and if there is exactly one file per bucket, the oO will be also set and it will all be passed downstream to the Project. If both tables were bucketed by the joining key to the same number of buckets, the requirements for the oP will be satisfied and the ER rule will add no Exchange to the plan. The same number of partitions on both sides of the join is crucial here and if these numbers are different, Exchange will still have to be used for each branch where the number of partitions differs from the spark.sql.shuffle.partitions configuration setting (default value is 200). So with a correct bucketing in place, the join can be shuffle-free.

The important thing to understand is that Spark needs to be aware of the distribution to make use of it, so even if your data is pre-shuffled with bucketing, unless you read the data as a table to pick the information from the metastore, Spark will not know about it and so it will not set the oP on the FileScan. 
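The effect of hash-based partitioning can be sketched outside Spark with plain Python. This is only the modulo idea (Spark's HashPartitioning is based on a Murmur3 hash of the expression values, not Python's hash()), but it shows why rows with the same key always land in the same partition, which is exactly what a partition-by-partition merge join requires:

```python
def hash_partition(rows, key, num_partitions):
    # Assign each row to partition hash(key) % num_partitions.
    # Toy stand-in for Spark's HashPartitioning.
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        partitions[hash(row[key]) % num_partitions].append(row)
    return partitions

rows = [{"id": i % 3, "value": i} for i in range(9)]
parts = hash_partition(rows, "id", 4)

# Every row with a given id lands in exactly one partition, so two
# tables partitioned the same way can be joined partition by partition.
for p in parts:
    print(sorted({r["id"] for r in p}))
```

If two tables are partitioned with the same function and the same number of partitions, partition i of one side only ever needs to meet partition i of the other; that is the invariant bucketing preserves on disk.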
As mentioned at the beginning, there is a function repartition that can be used to change the distribution of the data on the Spark cluster. The function takes as argument columns by which the data should be distributed (optionally the first argument can be the number of partitions that should be created). What happens under the hood is that it adds a RepartitionByExpression node to the logical plan which is then converted to Exchange in the spark plan using a strategy and it sets the oP to HashPartitioning with the key being the column name used as the argument.

Another usage of the repartition function is that it can be called with only one argument being the number of partitions that should be created (repartition(n)), which will distribute the data randomly. The use case for this random distribution is however not discussed in this article.

Let's now go over some practical examples where adjusting the distribution using repartition by some specific field will bring some benefits.

Let's see what happens if one of the tables in the above join is bucketed and the other is not. In such a case the requirements are not satisfied because the oP is different on both sides (on one side it is defined by the bucketing and on the other side it is Unknown). In this case, the ER rule will add Exchange to both branches of the join so each side of the join will have to be shuffled! Spark will simply neglect that one side is already pre-shuffled and will waste this opportunity to avoid the shuffle. Here we can simply use repartition on the other side of the join to make sure that oP is set before the ER rule checks it and adds Exchanges:

# Using Python API:
# tableA is not bucketed
# tableB is bucketed by id into 50 buckets
spark.table("tableA") \
    .repartition(50, "id") \
    .join(spark.table("tableB"), "id") \
    .write \
    ...
Calling repartition will add one Exchange to the left branch of the plan, but the right branch will stay shuffle-free because the requirements will now be satisfied and the ER rule will add no more Exchanges. So we will have only one shuffle instead of two in the final plan.

Alternatively, we could change the number of shuffle partitions to match the number of buckets in tableB; in such a case the repartition is not needed (it would bring no additional benefit), because the ER rule will leave the right branch shuffle-free and will adjust only the left branch (in the same way as repartition does):

# match the number of buckets in the right branch of the join with the number of shuffle partitions:
spark.conf.set("spark.sql.shuffle.partitions", 50)

Another example where repartition becomes useful is related to queries where we aggregate a table by two keys and then join an additional table by one of these two keys (neither of these tables is bucketed in this case). Let’s see a simple example which is based on transactional data of this kind:

{"id": 1, "user_id": 100, "price": 50, "date": "2020-06-01"}
{"id": 2, "user_id": 100, "price": 200, "date": "2020-06-02"}
{"id": 3, "user_id": 101, "price": 120, "date": "2020-06-01"}

Each user can have many rows in the dataset because he/she could have made many transactions. These transactions are stored in tableA. On the other hand, tableB will contain information about each user (name, address, and so on). tableB has no duplicates; each record belongs to a different user.
In our query we want to count the number of transactions for each user and date and then join the user information:

# Using Python API:
dfA = spark.table("tableA") # transactions (not bucketed)
dfB = spark.table("tableB") # user information (not bucketed)
dfA \
.groupBy("user_id", "date") \
.agg(count("*")) \
.join(dfB, "user_id")

The spark plan of this query looks like this:

In the spark plan, you can see a pair of HashAggregate operators: the first one (on the top) is responsible for a partial aggregation and the second one does the final merge. The requirements of the SortMergeJoin are the same as previously. The interesting part of this example is the pair of HashAggregates. The first one has no requirements from its child; however, the second one requires the oP to be HashPartitioning by user_id and date, or any subset of these columns, and this is what we will take advantage of shortly. In the general case, these requirements are not fulfilled, so the ER rule will add Exchanges (and Sorts). This will lead to this executed plan:

As you can see, we end up with a plan that has three Exchange operators, so three shuffles will happen during the execution. Let’s now see how using repartition can change the situation:

dfA = spark.table("tableA").repartition("user_id")
dfB = spark.table("tableB")
dfA \
.groupBy("user_id", "date") \
.agg(count("*")) \
.join(dfB, "user_id")

The spark plan will now look different: it will contain an Exchange that is generated by a strategy that converts the RepartitionByExpression node from the logical plan. This Exchange will be a child of the first HashAggregate operator and it will set the oP to HashPartitioning(user_id), which will be passed downstream:

The requirements for the oP of all operators in the left branch are now satisfied, so the ER rule will add no additional Exchanges (it will still add a Sort to satisfy the oO).
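The division of labor between the two HashAggregate operators can be sketched in plain Python. The data and structure below are illustrative only, not Spark APIs: a partial count is computed locally per partition before any data moves, and the final merge combines the partials after the shuffle has grouped equal keys together.

```python
from collections import Counter

# Two input partitions of (user_id, date) rows, mimicking the example data.
partitions = [
    [(100, "2020-06-01"), (100, "2020-06-02")],   # partition 0
    [(100, "2020-06-01"), (101, "2020-06-01")],   # partition 1
]

# Partial aggregation (first HashAggregate): local counts, before the shuffle.
partial = [Counter(rows) for rows in partitions]

# Final merge (second HashAggregate): combine the partial counts per key.
final = Counter()
for c in partial:
    final.update(c)

print(final[(100, "2020-06-01")])  # 2
```

Note that what gets shuffled in the real plan is the partial counts, not the raw rows, which is exactly why this kind of shuffle is "reduced".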
The essential concept in this example is that we are grouping by two columns and the requirements of the HashAggregate operator are more flexible, so if the data is distributed by either of these two fields, the requirements will be met. The final executed plan will have only one Exchange in the left branch (and one in the right branch), so using repartition we reduced the number of shuffles by one:

It is true that using repartition we now have only one shuffle in the left branch instead of two; it is, however, important to understand that these shuffles are not of the same kind! In the original plan, both of the Exchanges came after a HashAggregate that was responsible for a partial aggregation, so the data was reduced (aggregated locally on each node) before shuffling. In the new plan, the Exchange comes before the HashAggregate, so the full data will be shuffled.

So what is better: one full shuffle or two reduced shuffles? Well, that ultimately depends on the properties of the data. If there are only a few records per user_id and date, the aggregation will not reduce the data much, so the total shuffle will be comparable with the reduced one, and having only one shuffle will be better. On the other hand, if there are lots of records per user_id and date, the aggregation will make the data much smaller, so going with the original plan might be better, because the two small shuffles can be faster than one big shuffle. This can also be expressed in terms of the cardinality of all distinct combinations of the two fields user_id and date: if this cardinality is comparable with the total row count, the groupBy transformation will not reduce the data much.

Let’s consider one more example where repartition will bring optimization to our query. The problem is based on the same data as the previous example.
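The cardinality argument can be turned into a rough heuristic with a few lines of Python. This is a sketch, not a Spark API: it just compares the number of distinct (user_id, date) groups with the total row count to estimate how much a partial aggregation would shrink the data before the shuffle.

```python
# Sample (user_id, date) rows, mimicking the transactional data above.
rows = [
    (100, "2020-06-01"), (100, "2020-06-01"), (100, "2020-06-02"),
    (101, "2020-06-01"), (101, "2020-06-01"), (101, "2020-06-01"),
]

distinct_groups = len(set(rows))
reduction = distinct_groups / len(rows)

# Close to 1.0 -> partial aggregation barely reduces the data, so one full
# shuffle (repartition first) tends to win; close to 0.0 -> the two reduced
# shuffles of the original plan may be cheaper.
print(distinct_groups, reduction)  # 3 0.5
```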
Now in our query we want to make a union of two different aggregations: in the first one we will count the rows for each user, and in the second we will sum the price column:

# Using Python API:
countDF = df.groupBy("user_id") \
.agg(count("*").alias("metricValue")) \
.withColumn("metricName", lit("count"))
sumDF = df.groupBy("user_id") \
.agg(sum("price").alias("metricValue")) \
.withColumn("metricName", lit("sum"))
countDF.union(sumDF)

Here is the final executed plan for this query:

It is a typical plan for a union-like query: one branch for each DataFrame in the union. We can see that there are two shuffles, one for each aggregation. Besides that, it also follows from the plan that the dataset will be scanned twice. Here the repartition function, together with a small trick, can help us change the shape of the plan:

df = spark.read.parquet(...).repartition("user_id")
countDF = df.groupBy("user_id") \
.agg(count("price").alias("metricValue")) \
.withColumn("metricName", lit("count"))
sumDF = df.groupBy("user_id") \
.agg(sum("price").alias("metricValue")) \
.withColumn("metricName", lit("sum"))
countDF.union(sumDF)

The repartition function will move the Exchange operator before the HashAggregate and will make the Exchange sub-branches identical, so one of them will be reused thanks to another rule called ReuseExchange. In the count function, changing the star to the price column becomes important here because it makes sure that the projection is the same in both DataFrames (we need to project the price column in the left branch as well, to make it the same as the second branch). It will, however, produce the same result as the original query only if there are no null values in the price column. To understand the logic behind this rule, see my other article where I explain the ReuseExchange rule in more detail on a similar example.

As before, we reduced the number of shuffles by one, but again we now have a total shuffle as opposed to the reduced shuffles in the original query.
The additional benefit here is that after this optimization the dataset will be scanned only once because of the reused computation.

As we already mentioned, it is not only important to have the data distributed in an optimal way, it is also important for Spark to know about it. The information about the oP is propagated through the plan from a node to its parent; however, there are some operators that will stop propagating the information even if the actual distribution doesn’t change. One of these operators is BatchEvalPython, an operator that represents a user-defined function in the Python API. So if you repartition the data, then call a Python UDF, and then do a join (or some aggregation), the ER rule will add a new Exchange because the BatchEvalPython will not pass the oP information downstream. We can simply say that after calling a Python UDF, Spark will forget how the data was distributed.

Let me just mention very briefly yet another use case for the repartition function, which is to steer the number of produced files when partitioning and/or bucketing the data to the storage system. If you are partitioning data to a file system like this:

df \
.write \
.partitionBy(key) \
.save(path)

it can produce lots of small files if the data is distributed randomly in the last stage of the Spark job. Each task in the final stage can possibly contain values for all the keys, so it will create a file in each storage partition and thus produce lots of small files. Calling a custom repartition just before the write allows us to imitate the distribution required for the file system and thus control the number of produced files. In some future post we will describe in more detail how this works and how it can be used efficiently also for bucketing.

The repartition function allows us to change the distribution of the data on the Spark cluster. This distribution change will induce a shuffle (physical data movement) under the hood, which is quite an expensive operation.
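The small-files effect described above comes down to simple counting, sketched here with assumed example numbers (200 tasks, 50 distinct partition keys): without repartition, every task may hold rows for every key and so writes a file into every storage partition; after repartition by the key, each key's rows sit in a single task.

```python
# Purely illustrative upper-bound count of files for df.write.partitionBy(key).
num_tasks = 200   # e.g. the default spark.sql.shuffle.partitions
num_keys = 50     # distinct values of the partition column

# Random distribution: each task can write one file per key it holds data for.
files_random = num_tasks * num_keys

# After repartition(key): all rows for a key are in one task -> one file per key.
files_repartitioned = num_keys

print(files_random, files_repartitioned)  # 10000 50
```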
In this article, we have seen some examples in which this additional shuffle can, however, remove some other shuffles at the same time and thus make the overall execution more efficient. We have also seen that it is important to distinguish between two kinds of shuffles: a total shuffle (which moves all the data) and a reduced shuffle (which moves the data after partial aggregation). Sometimes deciding what is ultimately more efficient requires understanding the properties of the actual data.
[ { "code": null, "e": 609, "s": 172, "text": "In a distributed environment, having proper data distribution becomes a key tool for boosting performance. In the DataFrame API of Spark SQL, there is a function repartition() that allows controlling the data distribution on the Spark cluster. The efficient usage of the function is however not straightforward because changing the distribution is related to a cost for physical data movement on the cluster nodes (a so-called shuffle)." }, { "code": null, "e": 1122, "s": 609, "text": "A general rule of thumb is that using repartition is costly because it will induce shuffle. In this article, we will go further and see that there are situations in which adding one shuffle at the right place will remove two other shuffles so it will make the overall execution more efficient. We will first cover a bit of theory to understand how the information about data distribution is internally utilized in Spark SQL and then we will go over some practical examples where using repartition becomes useful." }, { "code": null, "e": 1453, "s": 1122, "text": "The theory described in this article is based on the Spark source code, the version being the current snapshot 3.1 (written in June 2020) and most of it is valid also in previous versions 2.x. Also, the theory and the internal behavior are language-agnostic, so it doesn’t matter whether we use it with Scala, Java, or Python API." }, { "code": null, "e": 2082, "s": 1453, "text": "The DataFrame API in Spark SQL allows the users to write high-level transformations. These transformations are lazy, which means that they are not executed eagerly but instead under the hood they are converted to a query plan. The query plan will be materialized when the user calls an action which is a function where we ask for some output, for example when we are saving the result of the transformations to some storage system. The query plan itself can be of two major types: a logical plan and a physical plan. 
And the steps of the query plan processing can be called accordingly as logical planning and physical planning." }, { "code": null, "e": 2597, "s": 2082, "text": "The phase of the logical planning is responsible for multiple steps related to a logical plan where the logical plan itself is just an abstract representation of the query, it has a form of a tree where each node in the tree is a relational operator. The logical plan itself does not carry any specific information about the execution or about algorithms used to compute transformations such as joins or aggregations. It just represents the information from the query in a way that is convenient for optimizations." }, { "code": null, "e": 2867, "s": 2597, "text": "During logical planning, the query plan is optimized by a Spark optimizer, which applies a set of rules that transform the plan. These rules are based mostly on heuristics, for instance, that it is better to first filter the data and then do other processing and so on." }, { "code": null, "e": 3263, "s": 2867, "text": "Once the logical plan is optimized, physical planning begins. The purpose of this phase is to take the logical plan and turn it into a physical plan which can be then executed. Unlike the logical plan which is very abstract, the physical plan is much more specific regarding details about the execution, because it contains a concrete choice of algorithms that will be used during the execution." }, { "code": null, "e": 3736, "s": 3263, "text": "The physical planning is also composed of two steps because there are two versions of the physical plan: spark plan and executed plan. The spark plan is created using so-called strategies where each node in a logical plan is converted into one or more operators in the spark plan. One example of a strategy is JoinSelection, where Spark decides what algorithm will be used to join the data. 
The string representation of the spark plan can be displayed using the Scala API:" }, { "code": null, "e": 3776, "s": 3736, "text": "df.queryExecution.sparkPlan // in Scala" }, { "code": null, "e": 4220, "s": 3776, "text": "After the spark plan is generated, there is a set of additional rules that are applied to it to create the final version of the physical plan which is the executed plan. This executed plan will then be executed to generate RDD code. To see this executed plan we can simply call explain on the DataFrame because it is actually the final version of the physical plan. Alternatively, we can go to the Spark UI to see its graphical representation." }, { "code": null, "e": 5420, "s": 4220, "text": "One of these additional rules that are used to transform the spark plan into the executed plan is called EnsureRequirements (in the next we will refer to it as ER rule) and this rule is going to make sure that the data is distributed correctly as is required by some transformations (for example joins and aggregations). Each operator in the physical plan is having two important properties outputPartitioning and outputOrdering (in the next we will call them oP and oO respectively) which carry the information about the data distribution, how the data is partitioned and sorted at the given moment. Besides that, each operator also has two other properties requiredChildDistribution and requiredChildOrdering by which it puts requirements on the values of oP and oO of its child nodes. Some operators don’t have any requirements but some do. Let’s see this on a simple example with SortMergeJoin, which is an operator that has strong requirements on its child nodes, it requires that the data must be partitioned and sorted by the joining key so it can be merged correctly. 
Let’s consider this simple query in which we join two tables (both of them are based on a file datasource such as parquet):" }, { "code": null, "e": 5508, "s": 5420, "text": "# Using Python API:spark.table(\"tableA\") \\.join(spark.table(\"tableB\"), \"id\") \\.write..." }, { "code": null, "e": 5659, "s": 5508, "text": "The spark plan for this query will look like this (and I added there also the information about the oP, oO and the requirements of the SortMergeJoin):" }, { "code": null, "e": 6495, "s": 5659, "text": "From the spark plan we can see that the child nodes of the SortMergeJoin (two Project operators) have no oP or oO (they are Unknown and None) and this is a general situation where the data has not been repartitioned in advance and the tables are not bucketed. When the ER rule is applied on the plan it can see that the requirements of the SortMergeJoin are not satisfied so it will fill Exchange and Sort operators to the plan to meet the requirements. The Exchange operator will be responsible for repartitioning the data to meet the requiredChildDistribution requirement and the Sort will order the data to meet the requiredChildOrdering, so the final executed plan will look like this (and this is what you can find also in the SparkUI in the SQL tab, you won’t find there the spark plan though, because it is not available there):" }, { "code": null, "e": 7485, "s": 6495, "text": "The situation will be different if both tables are bucketed by the joining key. Bucketing is a technique for storing the data in a pre-shuffled and possibly pre-sorted state where the information about bucketing is stored in the metastore. In such a case the FileScan operator will have the oP set according to the information from the metastore and if there is exactly one file per bucket, the oO will be also set and it will all be passed downstream to the Project. 
If both tables were bucketed by the joining key to the same number of buckets, the requirements for the oP will be satisfied and the ER rule will add no Exchange to the plan. The same number of partitions on both sides of the join is crucial here and if these numbers are different, Exchange will still have to be used for each branch where the number of partitions differs from spark.sql.shuffle.partitions configuration setting (default value is 200). So with a correct bucketing in place, the join can be shuffle-free." }, { "code": null, "e": 7799, "s": 7485, "text": "The important thing to understand is that Spark needs to be aware of the distribution to make use of it, so even if your data is pre-shuffled with bucketing, unless you read the data as a table to pick the information from the metastore, Spark will not know about it and so it will not set the oP on the FileScan." }, { "code": null, "e": 8369, "s": 7799, "text": "As mentioned at the beginning, there is a function repartition that can be used to change the distribution of the data on the Spark cluster. The function takes as argument columns by which the data should be distributed (optionally the first argument can be the number of partitions that should be created). What happens under the hood is that it adds a RepartitionByExpression node to the logical plan which is then converted to Exchange in the spark plan using a strategy and it sets the oP to HashPartitioning with the key being the column name used as the argument." }, { "code": null, "e": 8656, "s": 8369, "text": "Another usage of the repartition function is that it can be called with only one argument being the number of partitions that should be created (repartition(n)), which will distribute the data randomly. The use case for this random distribution is however not discussed in this article." 
}, { "code": null, "e": 8798, "s": 8656, "text": "Let’s now go over some practical examples where adjusting the distribution using repartition by some specific field will bring some benefits." }, { "code": null, "e": 9452, "s": 8798, "text": "Let’s see what happens if one of the tables in the above join is bucketed and the other is not. In such a case the requirements are not satisfied because the oP is different on both sides (on one side it is defined by the bucketing and on the other side it is Unknown). In this case, the ER rule will add Exchange to both branches of the join so each side of the join will have to be shuffled! Spark will simply neglect that one side is already pre-shuffled and will waste this opportunity to avoid the shuffle. Here we can simply use repartition on the other side of the join to make sure that oP is set before the ER rule checks it and adds Exchanges:" }, { "code": null, "e": 9633, "s": 9452, "text": "# Using Python API:# tableA is not bucketed # tableB is bucketed by id into 50 bucketsspark.table(\"tableA\") \\.repartition(50, \"id\") \\.join(spark.table(\"tableB\"), \"id\") \\.write \\..." }, { "code": null, "e": 10229, "s": 9633, "text": "Calling repartition will add one Exchange to the left branch of the plan but the right branch will stay shuffle-free because requirements will now be satisfied and ER rule will add no more Exchanges. So we will have only one shuffle instead of two in the final plan. 
Alternatively, we could change the number of shuffle partitions to match the number of buckets in tableB, in such case the repartition is not needed (it would bring no additional benefit), because the ER rule will leave the right branch shuffle-free and it will adjust only the left branch (in the same way as repartition does):" }, { "code": null, "e": 10376, "s": 10229, "text": "# match number of buckets in the right branch of the join with the number of shuffle partitions:spark.conf.set(\"spark.sql.shuffle.partitions\", 50)" }, { "code": null, "e": 10675, "s": 10376, "text": "Another example where repartition becomes useful is related to queries where we aggregate a table by two keys and then join an additional table by one of these two keys (neither of these tables is bucketed in this case). Let’s see a simple example which is based on transactional data of this kind:" }, { "code": null, "e": 10858, "s": 10675, "text": "{\"id\": 1, \"user_id\": 100, \"price\": 50, \"date\": \"2020-06-01\"}{\"id\": 2, \"user_id\": 100, \"price\": 200, \"date\": \"2020-06-02\"}{\"id\": 3, \"user_id\": 101, \"price\": 120, \"date\": \"2020-06-01\"}" }, { "code": null, "e": 11276, "s": 10858, "text": "Each user can have many rows in the dataset because he/she could have made many transactions. These transactions are stored in tableA. On the other hand, tableB will contain information about each user (name, address, and so on). The tableB has no duplicities, each record belongs to a different user. 
In our query we want to count the number of transactions for each user and date and then join the user information:" }, { "code": null, "e": 11487, "s": 11276, "text": "# Using Python API:dfA = spark.table(\"tableA\") # transactions (not bucketed)dfB = spark.table(\"tableB\") # user information (not bucketed)dfA \\.groupBy(\"user_id\", \"date\") \\.agg(count(\"*\")) \\.join(dfB, \"user_id\")" }, { "code": null, "e": 11532, "s": 11487, "text": "The spark plan of this query looks like this" }, { "code": null, "e": 12197, "s": 11532, "text": "In the spark plan, you can see a pair of HashAggregate operators, the first one (on the top) is responsible for a partial aggregation and the second one does the final merge. The requirements of the SortMergeJoin are the same as previously. The interesting part of this example are the HashAggregates. The first one has no requirements from its child, however, the second one requires for the oP to be HashPartitioning by user_id and date or any subset of these columns and this is what we will take advantage of shortly. In the general case, these requirements are not fulfilled so the ER rule will add Exchanges (and Sorts). This will lead to this executed plan:" }, { "code": null, "e": 12383, "s": 12197, "text": "As you can see we end up with a plan that has three Exchange operators, so three shuffles will happen during the execution. Let’s now see how using repartition can change the situation:" }, { "code": null, "e": 12534, "s": 12383, "text": "dfA = spark.table(\"tableA\").repartition(\"user_id\")dfB = spark.table(\"tableB\")dfA \\.groupBy(\"user_id\", \"date\") \\.agg(count(\"*\")) \\.join(dfB, \"user_id\")" }, { "code": null, "e": 12849, "s": 12534, "text": "The spark plan will now look different, it will contain Exchange that is generated by a strategy that converts RepartitionByExpression node from the logical plan. 
This Exchange will be a child of the first HashAggregate operator and it will set the oP to HashPartitioning (user_id) which will be passed downstream:" }, { "code": null, "e": 13414, "s": 12849, "text": "The requirements for oP of all operators in the left branch are now satisfied so ER rule will add no additional Exchanges (it will still add Sort to satisfy oO). The essential concept in this example is that we are grouping by two columns and the requirements of the HashAggregate operator are more flexible so if the data will be distributed by any of these two fields, the requirements will be met. The final executed plan will have only one Exchange in the left branch (and one in the right branch) so using repartition we reduced the number of shuffles by one:" }, { "code": null, "e": 13883, "s": 13414, "text": "It is true that using repartition we have now only one shuffle in the left branch instead of two, it is however important to understand that these shuffles are not of the same kind! In the original plan, both of the Exchanges came after HashAggregate which was responsible for a partial aggregation, so the data were reduced (aggregated locally on each node) before shuffling. In the new plan, the Exchange comes before HashAggregate so the full data will be shuffled." }, { "code": null, "e": 14718, "s": 13883, "text": "So what is better? One full shuffle or two reduced shuffles? Well, that ultimately depends on the properties of the data. If there are only a few records per user_id and date, it means that the aggregation will not reduce the data much, so the total shuffle will be comparable with the reduced one, and having only one shuffle will be better. On the other hand, if there are lots of records per user_id and date, the aggregation will make the data much smaller and so going with the original plan might be better because these two small shuffles can be faster than one big shuffle. 
This can be expressed also in terms of the cardinality of all distinct combinations of these two fields user_id and date. If this cardinality is comparable with the total row count it means that the groupBy transformation will not reduce the data much." }, { "code": null, "e": 15043, "s": 14718, "text": "Let’s consider one more example where repartition will bring optimization to our query. The problem is based on the same data as the previous example. Now in our query we want to make a union of two different aggregations, in the first one we will count the rows for each user and in the second we will sum the price column:" }, { "code": null, "e": 15303, "s": 15043, "text": "# Using Python API:countDF = df.groupBy(\"user_id\") \\.agg(count(\"*\").alias(\"metricValue\")) \\.withColumn(\"metricName\", lit(\"count\"))sumDF = df.groupBy(\"user_id\") \\.agg(sum(\"price\").alias(\"metricValue\")) \\.withColumn(\"metricName\", lit(\"sum\"))countDF.union(sumDF)" }, { "code": null, "e": 15351, "s": 15303, "text": "Here is the final executed plan for this query:" }, { "code": null, "e": 15692, "s": 15351, "text": "It is a typical plan for a union-like query, one branch for each DataFrame in the union. We can see that there are two shuffles, one for each aggregation. Besides that, it also follows from the plan that the dataset will be scanned twice. 
Here the repartition function together with a small trick can help us to change the shape of the plan" }, { "code": null, "e": 15988, "s": 15692, "text": "df = spark.read.parquet(...).repartition(\"user_id\")countDF = df.groupBy(\"user_id\") \\.agg(count(\"price\").alias(\"metricValue\")) \\.withColumn(\"metricName\", lit(\"count\"))sumDF = df.groupBy(\"user_id\") \\.agg(sum(\"price\").alias(\"metricValue\")) \\.withColumn(\"metricName\", lit(\"sum\"))countDF.union(sumDF)" }, { "code": null, "e": 16706, "s": 15988, "text": "The repartition function will move the Exchange operator before the HashAggregate and it will make the Exchange sub-branches identical so it will be reused by another rule called ReuseExchange. In the count function, changing the star to the price column becomes important here because it will make sure that the projection will be the same in both DataFrames (we need to project the price column also in the left branch to make it the same as the second branch). It will however produce the same result as the original query only if there are no null values in the price column. To understand the logic behind this rule see my other article where I explain the ReuseExchange rule more in detail on a similar example." }, { "code": null, "e": 16999, "s": 16706, "text": "Similarly as before, we reduced here the number of shuffles by one, but again we have now a total shuffle as opposed to reduced shuffles in the original query. The additional benefit here is that after this optimization the dataset will be scanned only once because of the reused computation." }, { "code": null, "e": 17783, "s": 16999, "text": "As we already mentioned, it is not only important to have the data distributed in an optimal way, but it is also important to have Spark know about it. 
The information about the oP is propagated through the plan from a node to its parent, however, there are some operators that will stop propagating the information even if the actual distribution doesn’t change. One of these operators is BatchEvalPython β€” an operator that represents a user-defined function in the Python API. So if you repartition the data, then call a Python UDF and then do a join (or some aggregation), the ER rule will add a new Exchange because the BatchEvalPython will not pass the oP information downstream. We can simply say that after calling a Python UDF, Spark will forget how the data was distributed." }, { "code": null, "e": 18037, "s": 17783, "text": "Let me just mention very briefly yet another use case for the repartition function which is to steer the number of produced files when partitioning and/or bucketing the data to the storage system. If you are partitioning data to a file system like this:" }, { "code": null, "e": 18080, "s": 18037, "text": "df \\.write \\.partitionBy(key) \\.save(path)" }, { "code": null, "e": 18635, "s": 18080, "text": "it can produce lots of small files if the data is distributed randomly in the last stage of the Spark job. Each task in the final stage can possibly contain values for all the keys so it will create a file in each storage partition and thus produce lots of small files. Calling custom repartition just before the write allows us to imitate the distribution required for the file system and thus control the number of produced files. In some future post we will describe more in detail how this works and how it can be used efficiently also for bucketing." } ]
Node.js fsPromises.stat() Method - GeeksforGeeks
08 Oct, 2021

The fsPromises.stat() method is used to return information about the given file or directory. The Promise is resolved with the fs.Stats object for the given path.

Syntax:

fsPromises.stat( path, options )

Parameters: This method accepts two parameters as mentioned above and described below:

path: It holds the path of the file or directory that has to be checked. It can be a String, Buffer, or URL.
options: It is an object that can be used to specify optional parameters that will affect the output. It has one optional parameter:
bigint: It is a boolean value which specifies whether the numeric values returned in the fs.Stats object are bigint. The default value is false.

Below examples illustrate the fsPromises.stat() method in Node.js:

Example 1: This example uses the fsPromises.stat() method to get the details of the path.

// Node.js program to demonstrate the
// fsPromises.stat() method

// Import the filesystem module
const fsPromises = require("fs").promises;

(async () => {
  try {
    await fsPromises.rename("GFG.txt", "GeeksforGeeks.txt");

    // Using the fsPromises.stat() method
    const stats = await fsPromises.stat("GeeksforGeeks.txt");
    console.log(stats);
  } catch (error) {
    console.log(error);
  }
})();

Output:

Stats {
  dev: 654202934,
  mode: 85416,
  nlink: 1,
  uid: 0,
  gid: 0,
  rdev: 0,
  blksize: undefined,
  ino: 6192449489177455,
  size: 0,
  blocks: undefined,
  atimeMs: 5126587454188,
  mtimeMs: 8845632838067,
  ctimeMs: 5214789541254.1998,
  birthtimeMs: 1572568634187.734,
  atime: 2020-06-10T00:25:14.198Z,
  mtime: 2020-06-10T00:38:38.068Z,
  ctime: 2020-06-10T00:38:47.450Z,
  birthtime: 2020-06-10T00:25:14.198Z
}

Example 2: This example uses the fsPromises.stat() method to get the details of files with the bigint option:

// Node.js program to demonstrate the
// fsPromises.stat() method

// Import the filesystem module
const fsPromises = require("fs").promises;

(async () => {
  try {
    await fsPromises.rename("GFG.txt", "GeeksforGeeks.txt");

    // Using the fsPromises.stat() method
    const stats = await fsPromises.stat("GeeksforGeeks.txt", { bigint: true });
    console.log(stats);
  } catch (error) {
    console.log(error);
  }
})();

Output:

Stats {
  dev: 2269,
  mode: 33188,
  nlink: 1,
  uid: 1000,
  gid: 1000,
  rdev: 0,
  blksize: 4096,
  ino: 271,
  size: 0,
  blocks: 0,
  atimeMs: 1582871562365.894,
  mtimeMs: 1582871556897.5554,
  ctimeMs: 1582871556897.5554,
  birthtimeMs: 1582871556897.5554,
  atime: 2020-02-28T06:32:42.366Z,
  mtime: 2020-02-28T06:32:36.898Z,
  ctime: 2020-02-28T06:32:36.898Z,
  birthtime: 2020-02-28T06:32:36.898Z
}

Reference: https://nodejs.org/api/fs.html#fs_fspromises_stat_path_options
4 Simple Ways to Import Word and PDF Data into Python when Pandas Fails | by Kat Li | Towards Data Science
Being a part of the data science/analytics team, you'll probably encounter many file types to import and analyze in Python. In the ideal world, all our data reside in cloud-based databases (e.g., SQL, NoSQL) that are easy to query and extract. In the real world, however, we rarely get neat tabular data. Also, if we need additional data (structured or unstructured) to augment the analysis, we will inevitably be working with raw data files that come in different formats.

Recently, my team started a project that, as the first step, involves integrating raw data files in formats .csv, .xlsx, .pdf, .docx, and .doc. My first reaction: the mighty pandas! which certainly handles the .csv and .xlsx, but regarding the .pdf and .docx, we will have to explore possibilities beyond pandas.

In this blog, I will be sharing my tips and tricks to help you easily import PDF and Word documents (into Python) in case it comes up in your own work, especially in your NLP (Natural Language Processing) projects. All sample data files are publicly accessible, and file copies along with the corresponding download links are available in my Github repo.

1. Python-docx → work with MS Word .docx files

As one of the most commonly used documentation tools, MS Word oftentimes is people's top choice for writing and sharing text. For word documents with the .docx extension, the Python module docx is a handy tool, and the following shows how to import .docx paragraphs with just 2 lines of code.

Now let's print out the output information. As we can see, the return is a list of strings/sentences, and thus we can leverage string processing techniques and regular expressions to get the text data ready for further analysis (e.g., NLP).

2. Win32com → work with MS Word .doc files

Despite the ease of use, the python-docx module cannot take in the aging .doc extension, and believe it or not, the .doc file is still the go-to word processor for lots of stakeholders (despite the .docx being around for over a decade). If under this circumstance converting file types is not an option, we can turn to the win32com.client package with a couple of tricks.

The basic technique is first to launch a Word application as an active document and then to read the content/paragraphs in Python. The function docReader( ) defined below showcases how (and the fully-baked code snippet is linked here).

After running this function, we should see the same output as in section 1. Two tips: (1) we set word.Visible = False to hide the physical file so that all the processing work is done in the background; (2) the argument doc_file_name requires a full file path, not just the file name. Otherwise, the function Documents.Open( ) wouldn't recognize the file, even with setting the working directory to the current folder.

Now, let's move onto the PDF files.

3. Pdfminer (in lieu of PyPDF2) → work with PDF text

When it comes to processing PDF files in Python, the well-known module PyPDF2 will probably be the initial attempt of most analysts, including myself. Hence, I coded it up using PyPDF2 (full code available in my Github repo), which gave the text output shown below.

Hmmm, apparently this doesn't seem right, because all the white spaces are missing! Without the proper spaces, there's no way for us to parse the strings correctly.

Indeed, this example reveals one caveat of the function extractText( ) in PyPDF2: it doesn't perform well for PDFs containing complicated text or non-printable space characters. Therefore, let's switch to pdfminer and explore how to get this pdf text imported. Now, the output looks way better, and can be easily cleaned up with text mining techniques.

4. Pdf2image + Pytesseract → work with PDF scanned-in images

To make things even more complicated for data scientists (of course), PDFs can (and often do) get created from scanned images in lieu of a text document; hence, they cannot be rendered as plain text by pdf readers, regardless of how neatly they are organized.

In this case, the best technique I found is first explicitly to extract out the images, and then to read and parse those images in Python. We will implement this idea with the modules pdf2image and pytesseract. If the latter sounds unfamiliar, pytesseract is an OCR (optical character recognition) tool for Python, which can recognize and read the text embedded in images. Now, here's the basic function.

In the output, you should see the text of the scanned image shown below:

PREFACE In 1939 the Yorkshire Parish Register Society, of which the Parish Register Section of the Yorkshire Archaeological Society is the successor (the publications having been issued in numerical sequence without any break) published as its Volume No. 108 the entries in the Register of Wensley Parish Church from 1538 to 1700 inclusive. These entries comprised the first 110 pages (and a few lines of p. 111) of the oldest register at Wensley.

Mission accomplished! As a bonus, you now know how to extract data from images as well, i.e., the image_to_string( ) in the pytesseract module!

Note: for the pytesseract module to run successfully, you may need to perform additional configuration steps, including installing the poppler and tesseract packages. Again, please feel free to grab a more robust implementation and a detailed list of configs in my Github here.

To conclude, there's a data science joke mentioned by Greg Horton in his piece talking about the 80–20 rule in data wrangling:

Data Scientists spent 80% of their time dealing with data preparation problems and the other 20% complaining about how long it takes to deal with data preparation problems.

By going through different ways of scraping text from Word and PDF files, I hope this blog will make your 80% a bit easier/less boring so that you don't pull your hair out, and also reduce the other 20% so that you can spend more time reading interesting Medium articles.

Final tip: Once done working with a file, it is always good coding practice to close the connection so that other applications can access the file. This is why you saw the close() method at the end of each function above. 😃

Want more data science and programming tips? Use my link to sign up to Medium and gain full access to all my content. Also subscribe to my newly created YouTube channel "Data Talk with Kat".
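The python-docx snippets referenced in section 1 were embedded as images and did not survive extraction. As a rough, hedged illustration of what that section describes: a .docx file is just a zip archive whose main text lives in word/document.xml, so the idea can be sketched with the standard library alone. The helper name read_docx_paragraphs and the sample content are invented for illustration, not taken from the original post.

```python
# Stdlib-only sketch of what python-docx does under the hood: a .docx file
# is a zip archive; paragraph text sits in word/document.xml as w:p elements
# containing runs (w:r) that hold text nodes (w:t).
import io
import zipfile
import xml.etree.ElementTree as ET

W_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def read_docx_paragraphs(docx_bytes):
    """Return the paragraph texts of a .docx given as raw bytes."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        xml_data = z.read("word/document.xml")
    root = ET.fromstring(xml_data)
    paragraphs = []
    for p in root.iter(W_NS + "p"):
        # join all text nodes inside this paragraph
        text = "".join(t.text or "" for t in p.iter(W_NS + "t"))
        paragraphs.append(text)
    return paragraphs

# Build a minimal in-memory .docx so the example is self-contained.
doc_xml = (
    '<w:document xmlns:w='
    '"http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
    "<w:body>"
    "<w:p><w:r><w:t>Hello from paragraph one.</w:t></w:r></w:p>"
    "<w:p><w:r><w:t>And paragraph two.</w:t></w:r></w:p>"
    "</w:body></w:document>"
)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", doc_xml)

print(read_docx_paragraphs(buf.getvalue()))
# → ['Hello from paragraph one.', 'And paragraph two.']
```

In practice you would simply use python-docx itself, which handles styles, tables and edge cases; the sketch only shows why the "list of strings/sentences" return shape is natural.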
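Section 3 notes that the extracted PDF text "can be easily cleaned up with text mining techniques" without showing how. A minimal, hedged sketch of such a cleanup pass follows; the helper name clean_extracted_text and the sample string are invented for illustration.

```python
# A small, illustrative cleanup pass for text scraped out of PDFs:
# drop soft hyphens, rejoin words broken across line endings, and
# collapse runs of whitespace into single spaces.
import re

def clean_extracted_text(raw):
    text = raw.replace("\u00ad", "")          # drop soft hyphens
    text = re.sub(r"-\n(\w)", r"\1", text)    # rejoin hyphenated line breaks
    text = re.sub(r"\s+", " ", text)          # collapse whitespace/newlines
    return text.strip()

raw = "Data prepa-\nration takes   most of\nthe time."
print(clean_extracted_text(raw))
# → Data preparation takes most of the time.
```

Real PDF cleanup usually needs more rules (headers/footers, ligatures, page numbers), but a pipeline of small regex substitutions like this one is the usual starting point.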
Apex - DML
In this chapter, we will discuss how to perform the different database modification functionalities in Salesforce. There are two ways in which we can perform these functionalities.

DML statements are the actions which are performed in order to carry out insert, update, delete, upsert, restoring records, merging records, or converting leads operations. DML is one of the most important parts of Apex, as almost every business case involves changes and modifications to the database.

All operations which you can perform using DML statements can be performed using Database methods as well. Database methods are the system methods which you can use to perform DML operations. Database methods provide more flexibility as compared to DML statements.

In this chapter, we will be looking at the first approach, using DML statements. We will look at the Database methods in a subsequent chapter.

Let us now consider the instance of the Chemical supplier company again. Our Invoice records have fields such as Status, Amount Paid, Amount Remaining, Next Pay Date and Invoice Number. Invoices which have been created today and have their status as 'Pending' should be updated to 'Paid'.

Insert operation is used to create new records in the Database. You can create records of any Standard or Custom object using the Insert DML statement.

Example

We can create new records in the APEX_Invoice__c object as new invoices are being generated for new customer orders every day. We will create a Customer record first and then we can create an Invoice record for that new Customer record.
// fetch the invoices created today, Note, you must have at least one invoice
// created today
List<apex_invoice__c> invoiceList = [SELECT id, Name, APEX_Status__c,
    createdDate FROM APEX_Invoice__c WHERE createdDate = today];

// create List to hold the updated invoice records
List<apex_invoice__c> updatedInvoiceList = new List<apex_invoice__c>();
APEX_Customer__c objCust = new APEX_Customer__c();
objCust.Name = 'Test ABC';

// DML for Inserting the new Customer Records
insert objCust;

for (APEX_Invoice__c objInvoice: invoiceList) {
    if (objInvoice.APEX_Status__c == 'Pending') {
        objInvoice.APEX_Status__c = 'Paid';
        updatedInvoiceList.add(objInvoice);
    }
}

// DML Statement to update the invoice status
update updatedInvoiceList;

// Prints the value of updated invoices
System.debug('List has been updated and updated values are' + updatedInvoiceList);

// Inserting the New Records using insert DML statement
APEX_Invoice__c objNewInvoice = new APEX_Invoice__c();
objNewInvoice.APEX_Status__c = 'Pending';
objNewInvoice.APEX_Amount_Paid__c = 1000;
objNewInvoice.APEX_Customer__c = objCust.id;

// DML which is creating the new Invoice record which will be linked with newly
// created Customer record
insert objNewInvoice;
System.debug('New Invoice Id is ' + objNewInvoice.id + ' and the Invoice Number is '
    + objNewInvoice.Name);

Update operation is used to perform updates on existing records. In this example, we will be updating the Status field of an existing Invoice record to 'Paid'.

Example

// Update Statement Example for updating the invoice status. You have to create
// an Invoice record before executing this code. This program is updating the
// record which is at the 0th index position of the List.
// First, fetch the invoice created today
List<apex_invoice__c> invoiceList = [SELECT id, Name, APEX_Status__c,
    createdDate FROM APEX_Invoice__c];
List<apex_invoice__c> updatedInvoiceList = new List<apex_invoice__c>();

// Update the first record in the List
invoiceList[0].APEX_Status__c = 'Pending';
updatedInvoiceList.add(invoiceList[0]);

// DML Statement to update the invoice status
update updatedInvoiceList;

// Prints the value of updated invoices
System.debug('List has been updated and updated values of records are '
    + updatedInvoiceList[0]);

Upsert operation is used to perform an update operation, and, if the records to be updated are not present in the database, to create new records as well.

Example

Suppose the customer records in the Customer object need to be updated. We will update the existing Customer record if it is already present, else create a new one. This will be based on the value of the field APEX_External_Id__c. This field will be our field to identify if the records are already present or not.

Note − Before executing this code, please create a record in the Customer object with the external Id field value as '12341' and then execute the code given below −

// Example for upserting the Customer records
List<apex_customer__c> CustomerList = new List<apex_customer__c>();
for (Integer i = 0; i < 10; i++) {
    apex_customer__c objCust = new apex_customer__c(name = 'Test' + i,
        apex_external_id__c = '1234' + i);
    CustomerList.add(objCust);
}

// Upserting the Customer Records
upsert CustomerList;
System.debug('Code iterated for 10 times and created 9 records as one record with External Id 12341 is already present');

for (APEX_Customer__c objCustomer: CustomerList) {
    if (objCustomer.APEX_External_Id__c == '12341') {
        System.debug('The Record which is already present is ' + objCustomer);
    }
}

You can perform the delete operation using the Delete DML.
Example

In this case, we will delete the invoices which have been created for the testing purpose, that is, the ones which contain the name 'Test'. You can execute this snippet from the Developer Console as well, without creating the class.

// fetch the invoice created today
List<apex_invoice__c> invoiceList = [SELECT id, Name, APEX_Status__c,
    createdDate FROM APEX_Invoice__c WHERE createdDate = today];
List<apex_invoice__c> updatedInvoiceList = new List<apex_invoice__c>();
APEX_Customer__c objCust = new APEX_Customer__c();
objCust.Name = 'Test';

// Inserting the Customer Records
insert objCust;

for (APEX_Invoice__c objInvoice: invoiceList) {
    if (objInvoice.APEX_Status__c == 'Pending') {
        objInvoice.APEX_Status__c = 'Paid';
        updatedInvoiceList.add(objInvoice);
    }
}

// DML Statement to update the invoice status
update updatedInvoiceList;

// Prints the value of updated invoices
System.debug('List has been updated and updated values are' + updatedInvoiceList);

// Inserting the New Records using insert DML statement
APEX_Invoice__c objNewInvoice = new APEX_Invoice__c();
objNewInvoice.APEX_Status__c = 'Pending';
objNewInvoice.APEX_Amount_Paid__c = 1000;
objNewInvoice.APEX_Customer__c = objCust.id;

// DML which is creating the new record
insert objNewInvoice;
System.debug('New Invoice Id is ' + objNewInvoice.id);

// Deleting the Test invoices from Database
// fetch the invoices which are created for Testing, i.e. those whose Customer
// Name is 'Test'
List<apex_invoice__c> invoiceListToDelete = [SELECT id FROM APEX_Invoice__c
    WHERE APEX_Customer__r.Name = 'Test'];

// DML Statement to delete the Invoices
delete invoiceListToDelete;
System.debug('Success, ' + invoiceListToDelete.size() + ' Records has been deleted');

You can undelete a record which has been deleted and is present in the Recycle Bin. All the relationships which the deleted record has will also be restored.

Example

Suppose the records deleted in the previous example need to be restored.
This can be achieved using the following example. The code in the previous example has been modified for this example.

// fetch the invoice created today
List<apex_invoice__c> invoiceList = [SELECT id, Name, APEX_Status__c,
    createdDate FROM APEX_Invoice__c WHERE createdDate = today];
List<apex_invoice__c> updatedInvoiceList = new List<apex_invoice__c>();
APEX_Customer__c objCust = new APEX_Customer__c();
objCust.Name = 'Test';

// Inserting the Customer Records
insert objCust;

for (APEX_Invoice__c objInvoice: invoiceList) {
    if (objInvoice.APEX_Status__c == 'Pending') {
        objInvoice.APEX_Status__c = 'Paid';
        updatedInvoiceList.add(objInvoice);
    }
}

// DML Statement to update the invoice status
update updatedInvoiceList;

// Prints the value of updated invoices
System.debug('List has been updated and updated values are' + updatedInvoiceList);

// Inserting the New Records using insert DML statement
APEX_Invoice__c objNewInvoice = new APEX_Invoice__c();
objNewInvoice.APEX_Status__c = 'Pending';
objNewInvoice.APEX_Amount_Paid__c = 1000;
objNewInvoice.APEX_Customer__c = objCust.id;

// DML which is creating the new record
insert objNewInvoice;
System.debug('New Invoice Id is ' + objNewInvoice.id);

// Deleting the Test invoices from Database
// fetch the invoices which are created for Testing, i.e. those whose Customer
// Name is 'Test'
List<apex_invoice__c> invoiceListToDelete = [SELECT id FROM APEX_Invoice__c
    WHERE APEX_Customer__r.Name = 'Test'];

// DML Statement to delete the Invoices
delete invoiceListToDelete;
System.debug('Deleted Record Count is ' + invoiceListToDelete.size());
System.debug('Success, ' + invoiceListToDelete.size() + ' Records has been deleted');

// Restore the deleted records using undelete statement
undelete invoiceListToDelete;
System.debug('Undeleted Record count is ' + invoiceListToDelete.size()
    + '. This should be same as Deleted Record count');
}, { "code": null, "e": 8986, "s": 7468, "text": "// fetch the invoice created today\nList<apex_invoice__c> invoiceList = [SELECT id, Name, APEX_Status__c,\ncreatedDate FROM APEX_Invoice__c WHERE createdDate = today];\nList<apex_invoice__c> updatedInvoiceList = new List<apex_invoice__c>();\nAPEX_Customer__c objCust = new APEX_Customer__C();\nobjCust.Name = 'Test';\n\n// Inserting the Customer Records\ninsert objCust;\nfor (APEX_Invoice__c objInvoice: invoiceList) {\n if (objInvoice.APEX_Status__c == 'Pending') {\n objInvoice.APEX_Status__c = 'Paid';\n updatedInvoiceList.add(objInvoice);\n }\n}\n\n// DML Statement to update the invoice status\nupdate updatedInvoiceList;\n\n// Prints the value of updated invoices\nSystem.debug('List has been updated and updated values are' + updatedInvoiceList);\n\n// Inserting the New Records using insert DML statement\nAPEX_Invoice__c objNewInvoice = new APEX_Invoice__c();\nobjNewInvoice.APEX_Status__c = 'Pending';\nobjNewInvoice.APEX_Amount_Paid__c = 1000;\nobjNewInvoice.APEX_Customer__c = objCust.id;\n\n// DML which is creating the new record\ninsert objNewInvoice;\nSystem.debug('New Invoice Id is' + objNewInvoice.id);\n\n// Deleting the Test invoices from Database\n// fetch the invoices which are created for Testing, Select name which Customer Name\n// is Test.\nList<apex_invoice__c> invoiceListToDelete = [SELECT id FROM APEX_Invoice__c\n WHERE APEX_Customer__r.Name = 'Test'];\n\n// DML Statement to delete the Invoices\ndelete invoiceListToDelete;\nSystem.debug('Success, '+invoiceListToDelete.size()+' Records has been deleted');" }, { "code": null, "e": 9143, "s": 8986, "text": "You can undelete the record which has been deleted and is present in Recycle bin. All the relationships which the deleted record has, will also be restored." }, { "code": null, "e": 9151, "s": 9143, "text": "Example" }, { "code": null, "e": 9344, "s": 9151, "text": "Suppose, the Records deleted in the previous example need to be restored. 
This can be achieved using the following example. The code in the previous example has been modified for this example." }, { "code": null, "e": 11143, "s": 9344, "text": "// fetch the invoice created today\nList<apex_invoice__c> invoiceList = [SELECT id, Name, APEX_Status__c,\ncreatedDate FROM APEX_Invoice__c WHERE createdDate = today];\nList<apex_invoice__c> updatedInvoiceList = new List<apex_invoice__c>();\nAPEX_Customer__c objCust = new APEX_Customer__C();\nobjCust.Name = 'Test';\n\n// Inserting the Customer Records\ninsert objCust;\nfor (APEX_Invoice__c objInvoice: invoiceList) {\n if (objInvoice.APEX_Status__c == 'Pending') {\n objInvoice.APEX_Status__c = 'Paid';\n updatedInvoiceList.add(objInvoice);\n }\n}\n\n// DML Statement to update the invoice status\nupdate updatedInvoiceList;\n\n// Prints the value of updated invoices\nSystem.debug('List has been updated and updated values are' + updatedInvoiceList);\n\n// Inserting the New Records using insert DML statement\nAPEX_Invoice__c objNewInvoice = new APEX_Invoice__c();\nobjNewInvoice.APEX_Status__c = 'Pending';\nobjNewInvoice.APEX_Amount_Paid__c = 1000;\nobjNewInvoice.APEX_Customer__c = objCust.id;\n\n// DML which is creating the new record\ninsert objNewInvoice;\nSystem.debug('New Invoice Id is '+objNewInvoice.id);\n\n// Deleting the Test invoices from Database\n// fetch the invoices which are created for Testing, Select name which Customer Name\n// is Test.\nList<apex_invoice__c> invoiceListToDelete = [SELECT id FROM APEX_Invoice__c\n WHERE APEX_Customer__r.Name = 'Test'];\n\n// DML Statement to delete the Invoices\ndelete invoiceListToDelete;\nsystem.debug('Deleted Record Count is ' + invoiceListToDelete.size());\nSystem.debug('Success, '+invoiceListToDelete.size() + 'Records has been deleted');\n\n// Restore the deleted records using undelete statement\nundelete invoiceListToDelete;\nSystem.debug('Undeleted Record count is '+invoiceListToDelete.size()+'. 
This should \n be same as Deleted Record count');" }, { "code": null, "e": 11176, "s": 11143, "text": "\n 14 Lectures \n 2 hours \n" }, { "code": null, "e": 11189, "s": 11176, "text": " Vijay Thapa" }, { "code": null, "e": 11221, "s": 11189, "text": "\n 7 Lectures \n 2 hours \n" }, { "code": null, "e": 11229, "s": 11221, "text": " Uplatz" }, { "code": null, "e": 11262, "s": 11229, "text": "\n 29 Lectures \n 6 hours \n" }, { "code": null, "e": 11287, "s": 11262, "text": " Ramnarayan Ramakrishnan" }, { "code": null, "e": 11320, "s": 11287, "text": "\n 49 Lectures \n 3 hours \n" }, { "code": null, "e": 11335, "s": 11320, "text": " Ali Saleh Ali" }, { "code": null, "e": 11368, "s": 11335, "text": "\n 10 Lectures \n 4 hours \n" }, { "code": null, "e": 11381, "s": 11368, "text": " Soham Ghosh" }, { "code": null, "e": 11416, "s": 11381, "text": "\n 48 Lectures \n 4.5 hours \n" }, { "code": null, "e": 11428, "s": 11416, "text": " GUHARAJANM" }, { "code": null, "e": 11435, "s": 11428, "text": " Print" }, { "code": null, "e": 11446, "s": 11435, "text": " Add Notes" } ]
How to plot a high resolution graph in Matplotlib?
We can use the resolution value, i.e., dots per inch, and the image format to plot a high-resolution graph in Matplotlib.

Create a dictionary with Column 1 and Column 2 as the keys, whose values are i and i*i for i from 0 to 9, respectively.

Create a data frame using pd.DataFrame(d); d was created in step 1.

Plot the data frame with the 'o' and 'rx' styles.

To save the file in PDF format, use the savefig() method with an image name such as myImagePDF.pdf and format="pdf".

We can set the dpi value to get a high-quality image.

Using the savefig() method, we can save the image with format="png" and dpi=1200.

To show the image, use the plt.show() method.

import pandas as pd
from matplotlib import pyplot as plt

d = {'Column 1': [i for i in range(10)], 'Column 2': [i * i for i in range(10)]}
df = pd.DataFrame(d)

df.plot(style=['o', 'rx'])
resolution_value = 1200
plt.savefig("myImage.png", format="png", dpi=resolution_value)
plt.show()
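The steps above mention saving in PDF format as well, but the code shown only saves a PNG. Below is a minimal sketch (assuming matplotlib is installed) that saves both; the Agg backend is selected so it also runs without a display, and dpi=300 is used here just to keep the file small (raise it to 1200 for print quality):

```python
# Sketch: save the same figure as a vector PDF and a high-dpi PNG.
# "Agg" is a non-interactive backend, so this also runs headless.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

x = list(range(10))
y = [i * i for i in x]

fig, ax = plt.subplots()
ax.plot(x, y, "rx")
fig.savefig("myImagePDF.pdf", format="pdf")        # vector output, resolution-independent
fig.savefig("myImage.png", format="png", dpi=300)  # raster output; use dpi=1200 for print
```

PDF output is vector-based, so it stays sharp at any zoom level; dpi only matters for raster formats such as PNG.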
[ { "code": null, "e": 1184, "s": 1062, "text": "We can use the resolution value, i.e., dots per inch, and the image format to plot a high-resolution graph in Matplotlib." }, { "code": null, "e": 1313, "s": 1184, "text": "Create a dictionary with Column 1 and Column 2 as the keys and Values are like i and i*i, where i is from 0 to 10, respectively." }, { "code": null, "e": 1442, "s": 1313, "text": "Create a dictionary with Column 1 and Column 2 as the keys and Values are like i and i*i, where i is from 0 to 10, respectively." }, { "code": null, "e": 1506, "s": 1442, "text": "Create a data frame using pd.DataFrame(d); d created in step 1." }, { "code": null, "e": 1570, "s": 1506, "text": "Create a data frame using pd.DataFrame(d); d created in step 1." }, { "code": null, "e": 1615, "s": 1570, "text": "Plot the data frame with β€˜o’ and β€˜rx’ style." }, { "code": null, "e": 1660, "s": 1615, "text": "Plot the data frame with β€˜o’ and β€˜rx’ style." }, { "code": null, "e": 1767, "s": 1660, "text": "To save the file in pdf format, use savefig() method where the image name is myImagePDF.pdf, format=\"pdf\"." }, { "code": null, "e": 1874, "s": 1767, "text": "To save the file in pdf format, use savefig() method where the image name is myImagePDF.pdf, format=\"pdf\"." }, { "code": null, "e": 1928, "s": 1874, "text": "We can set the dpi value to get a high-quality image." }, { "code": null, "e": 1982, "s": 1928, "text": "We can set the dpi value to get a high-quality image." }, { "code": null, "e": 2063, "s": 1982, "text": "Using the saving() method, we can save the image with format=”png” and dpi=1200." }, { "code": null, "e": 2144, "s": 2063, "text": "Using the saving() method, we can save the image with format=”png” and dpi=1200." }, { "code": null, "e": 2190, "s": 2144, "text": "To show the image, use the plt.show() method." }, { "code": null, "e": 2236, "s": 2190, "text": "To show the image, use the plt.show() method." 
}, { "code": null, "e": 2523, "s": 2236, "text": "import pandas as pd\nfrom matplotlib import pyplot as plt\n\nd = {'Column 1': [i for i in range(10)], 'Column 2': [i * i for i in range(10)]}\n\ndf = pd.DataFrame(d)\n\ndf.plot(style=['o', 'rx'])\nresolution_value = 1200\nplt.savefig(\"myImage.png\", format=\"png\", dpi=resolution_value)\nplt.show()" } ]
Ten Tricks To Speed Up Your Python Codes | by Jun | Towards Data Science
Python is slow.

I bet you might have encountered this counterargument many times about using Python, especially from people who come from the C, C++, or Java world. This is true in many cases; for instance, looping over or sorting Python arrays, lists, or dictionaries can sometimes be slow. After all, Python is developed to make programming fun and easy. Thus, the improvements of Python code in succinctness and readability have to come at a cost in performance.

Having said that, many efforts have been made in recent years to improve Python's performance. We can now process large datasets efficiently by using numpy, scipy, pandas, and numba, as all these libraries implement their critical code paths in C/C++. There is another exciting project, the PyPy project, which speeds up Python code by 4.4 times compared to CPython (the original Python implementation).

The downside of PyPy is that its coverage of some popular scientific modules (e.g., Matplotlib, Scipy) is limited or nonexistent, which means that you cannot use those modules in code meant for PyPy.

Other than these external resources, what can we do to speed up Python code in our daily coding practice? Today, I will share with you 10 tricks that I used a lot during my Python learning process.

As usual, if you want to rerun the code in this post yourself, all required data and the notebook can be accessed from my Github.

1. Be familiar with built-in functions

Python comes with many built-in functions implemented in C, which are very fast and well maintained (Figure 1). We should at least be familiar with these function names and know where to find them (some commonly used computation-related functions are abs(), len(), max(), min(), set(), sum()). Therefore, whenever we need to carry out a simple computation, we can take the right shortcut instead of writing our own version in a clumsy way.

Let's use the built-in functions set() and sum() as examples.
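A rough, runnable sketch of that comparison using timeit (the hand-written manual_sum below stands in for our own clumsy version; exact timings and ratios will vary by machine):

```python
# Sketch: time a hand-written summation against the built-in sum().
# The built-in runs in C, so it is typically several times faster.
import timeit

a_long_list = list(range(100_000))

def manual_sum(numbers):
    total = 0
    for n in numbers:
        total += n
    return total

t_manual = timeit.timeit(lambda: manual_sum(a_long_list), number=20)
t_builtin = timeit.timeit(lambda: sum(a_long_list), number=20)
print(f"manual: {t_manual:.4f}s, built-in: {t_builtin:.4f}s")
```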
As you can see in Figure 2, it is 36.1 and 20.9 times faster to use set() and sum() than the functions written by ourselves, respectively.

2. sort() vs. sorted()

Both functions can sort a list.

If we just want to obtain a sorted list and do not care about the original list, sort() is a bit faster than sorted(), both for basic sorting and when using key parameters (the key parameter specifies a function to be called on each list element prior to making comparisons), as shown in Figure 3.

This is because the sort() method modifies the list in place while sorted() builds a new sorted list and keeps the original list intact. In other words, the order of the values inside a_long_list itself has actually already changed.

However, sorted() is more versatile compared to sort(). This is because sorted() accepts any iterable while sort() is only defined for lists. Therefore, if we want to sort something other than a list, sorted() is the right function to use. For example, we can quickly sort a dictionary either by its keys or its values (Figure 4).

3. Use symbols instead of their names

As shown in Figure 5, when we need an empty dictionary or list object, instead of using dict() or list(), we can directly call {} and [] (as for an empty set, we need to use set() itself). This trick may not necessarily speed up the code, but it does make the code more Pythonic.

4. List comprehension

Normally, when we need to create a new list from an old list based on certain rules, we use a for loop to iterate through the old list, transform its values based on the rule, and save them in a new list. For example, let's say we want to find all even numbers in another_long_list; we can use the following code:

even_num = []
for number in another_long_list:
    if number % 2 == 0:
        even_num.append(number)

However, there is a more concise and elegant way to achieve this. As shown in Figure 6, we put the original for loop in just a single line of code. Moreover, the speed improved by almost 2 times.
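The single line of code referred to above is a plain list comprehension; a minimal runnable version (using a small stand-in list):

```python
# Rule 4: the four-line loop collapses to a single list comprehension.
another_long_list = list(range(20))  # small stand-in for the article's list

even_num = [number for number in another_long_list if number % 2 == 0]
print(even_num)  # → [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```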
Combined with rule 3, we can turn the list into dictionaries or sets as well; just change [] to {}. Let's rewrite the code from Figure 5: we can omit the assignment step and complete the iteration inside the symbols, like this: sorted_dict3 = {key: value for key, value in sorted(a_dict.items(), key=lambda item: item[1])}.

To break this down, start at the end. The function sorted(a_dict.items(), key=lambda item: item[1]) returns a list of tuples (Figure 4). Here, we use multiple assignment to unpack the tuples: for each tuple inside the list, we assign key to its first item and value to its second item (as we know there are two items inside each tuple in this case). Finally, each pair of key and value is kept inside a dictionary.

5. Use enumerate() for value and index

Sometimes, when we iterate through a list, we want to use both its values and its indices in expressions. As shown in Figure 7, we should use enumerate(), which turns the values of a list into pairs of index and value. This also speeds up our code by about 2 times.

6. Use zip() for packing and unpacking multiple iterables

In some cases, we will need to iterate through two or more lists. We can then use the zip() function, which transforms multiple lists into a single list of tuples (Figure 8). Note that the lists should preferably have the same length; otherwise, zip() stops as soon as the shorter list ends.

Conversely, to access the items in each tuple within the list, we can also unzip a list of tuples by adding an asterisk (*) and using multiple assignment, like this: letters1, numbers1 = zip(*pairs_list).

7. Combine set() and in

When we want to check whether a value exists inside a list or not, a clumsy way is to construct a function like this:

# Construct a function for membership test
def check_membership(n):
    for element in another_long_list:
        if element == n:
            return True
    return False

Then call check_membership(value) to see if the value is inside another_long_list.
However, a Pythonic way to do this is just to use in by calling value in another_long_list, as shown in Figure 9. It is just like literally asking Python, “hey Python, could you please tell me if value is inside another_long_list?”

To be more efficient, we should first remove duplicates from the list by using set() and then test membership in the set object. By doing so, we reduce the number of elements that need to be checked. In addition, in is a very fast operation on sets by design.

As you can see from Figure 9, even though it took 20ms to construct the set object, this is just a one-time investment, and the checking step itself only used 5.2μs. That is a 1962-times improvement.

8. Check if a variable is true

Inevitably, we will use a lot of if statements to check for empty variables, empty lists, empty dictionaries, and so on. We can save a bit of time here as well.

As shown in Figure 10, we do not need to explicitly state == True or is True in the if statement; instead, we just use the variable name. This saves the resources used by the magic function __eq__ for comparing the values on both sides.

Likewise, if we need to check whether a variable is empty, we just need to say if not string_returned_from_function:.

9. Count unique values using Counter()

Let's say we are trying to count the unique values in the list we generated in Rule 1, a_long_list. One way is to create a dictionary in which the keys are numbers and the values are counts. As we iterate through the list, we increment a number's count if it is already in the dictionary and add it to the dictionary if it is not.

num_counts = {}
for num in a_long_list:
    if num in num_counts:
        num_counts[num] += 1
    else:
        num_counts[num] = 1

However, a more efficient way to do this is just to use Counter() from collections in one line of code: num_counts2 = Counter(a_long_list). Yes, it is that simple. As shown in Figure 11, it is about 10 times faster than the function we wrote.
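The counting comparison above can be sketched end to end like this (a small stand-in list is used instead of a_long_list):

```python
# Rule 9: count unique values with collections.Counter
# instead of a hand-rolled dictionary loop.
from collections import Counter

a_long_list = [1, 2, 2, 3, 3, 3]  # small stand-in for the article's list

# hand-rolled counting with a dictionary
num_counts = {}
for num in a_long_list:
    if num in num_counts:
        num_counts[num] += 1
    else:
        num_counts[num] = 1

# the one-line Counter version
num_counts2 = Counter(a_long_list)
print(num_counts2)
```

Counter is a dict subclass, so the two results hold the same key/count pairs.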
If we want to know the 10 most common numbers, the Counter() instance also has a most_common method that is very handy.

In a word, collections is an amazing module; we should keep it in our daily toolbox and use it whenever we can.

10. Put the for loop inside the function

There might be times when we build a function and need to run it a given number of times. An obvious way is to build the function and then put it into a for loop.

However, as shown in Figure 12, instead of repeatedly executing the function 1 million times (the length of a_long_list is 1,000,000), we integrated the for loop inside the function. This saved us about 22% of the running time. This is because function calls are expensive; it is a better choice to avoid repeated calls by writing the loop (for example, as a list comprehension) inside the function.

That's all! Thanks for reading this post. I hope that some of these tricks can be useful for you. Also, what are some other approaches that you have used to speed up your Python code? I will really appreciate it if you share them by leaving a comment.

Here are links you may be interested in:

How to sort a list using sort() and sorted()
When to use a list comprehension in Python
Transforming Code into Beautiful, Idiomatic Python
Advantages and Disadvantages of Python Programming Language

As always, I welcome feedback, constructive criticism, and hearing about your data science projects. I can be reached on LinkedIn, and also on my website.
[ { "code": null, "e": 188, "s": 172, "text": "Python is slow." }, { "code": null, "e": 630, "s": 188, "text": "I bet you might encounter this counterargument many times about using Python, especially from people who come from C or C++ or Java world. This is true in many cases, for instance, looping over or sorting Python arrays, lists, or dictionaries can be sometimes slow. After all, Python is developed to make programming fun and easy. Thus, the improvements of Python code in succinctness and readability have to come with a cost of performance." }, { "code": null, "e": 1039, "s": 630, "text": "Having said that, many efforts have been done in recent years to improve Python’s performance. We now can process large datasets in an efficient way by using numpy, scipy, pandas, and numba, as all these libraries implemented their critical code paths in C/C++. There is another exciting project, the Pypy project, which speed up Python code by 4.4 times compared to Cpython (original Python implementation)." }, { "code": null, "e": 1238, "s": 1039, "text": "The downside of Pypy is that its coverage of some popular scientific modules (e.g., Matplotlib, Scipy) is limited or nonexistent which means that you cannot use those modules in code meant for Pypy." }, { "code": null, "e": 1436, "s": 1238, "text": "Other than these external resources, what can we do to speed up Python code in our daily coding practice? Today, I will share with you 10 tricks that I used a lot during my Python learning process." }, { "code": null, "e": 1559, "s": 1436, "text": "As usual, if you want to rerun codes in this post yourself, all required data and notebook can be accessed from my Github." }, { "code": null, "e": 1595, "s": 1559, "text": "1. Familiar with built-in functions" }, { "code": null, "e": 2032, "s": 1595, "text": "Python comes with many built-in functions implemented in C, which are very fast and well maintained (Figure 1). 
We should at least familiar with these function names and know where to find them (some commonly used computation-related functions are abs(), len(), max(), min(), set(), sum()). Therefore, whenever we need to carry out a simple computation, we can take the right shortcut instead of writing our own version in a clumsy way." }, { "code": null, "e": 2232, "s": 2032, "text": "Let’s use the built-in functions set() and sum() as examples. As you can see in Figure 2, it is 36.1 and 20.9 times faster using set() and sum() than the functions written by ourselves, respectively." }, { "code": null, "e": 2255, "s": 2232, "text": "2. sort() vs. sorted()" }, { "code": null, "e": 2285, "s": 2255, "text": "Both functions can sort list." }, { "code": null, "e": 2582, "s": 2285, "text": "If we just want to obtain a sorted list and do not care about the original list, sort() is a bit faster than sorted() both for basic sorting and when using key parameters (the key parameter specifies a function to be called on each list element prior to making comparisons), as shown in Figure 3." }, { "code": null, "e": 2806, "s": 2582, "text": "This is because the sort() method modifies the list in-place while sorted() builds a new sorted list and keep the original list intact. In other words, the order of values inside a_long_list itself actually already changed." }, { "code": null, "e": 3133, "s": 2806, "text": "However, sorted() is more versatile compared to sort(). This is because sorted() accepts any iterable while sort() is only defined for lists. Therefore, if we want to sort something other than a list, sorted() is the right function to use. For example, we can quickly sort a dictionary either by its keys or values (Figure 4)." }, { "code": null, "e": 3171, "s": 3133, "text": "3. 
Use symbols instead of their names" }, { "code": null, "e": 3448, "s": 3171, "text": "As shown in Figure 5, when we need an empty dictionary or list object, instead of using dict() or list(), we can directly call {} (as for an empty set, we need to use set() itself) and []. This trick may not necessarily speed-up the codes, but do make the codes more pythonic." }, { "code": null, "e": 3470, "s": 3448, "text": "4. List comprehension" }, { "code": null, "e": 3783, "s": 3470, "text": "Normally when we need to create a new list from an old list based on certain rules, we use a for loop to iterate through the old list and transform its values based on the rule and save in a new list. For example, let’s say we want to find all even numbers from another_long_list, we can use the following codes:" }, { "code": null, "e": 3883, "s": 3783, "text": "even_num = []for number in another_long_list: if number % 2 == 0: even_num.append(number)" }, { "code": null, "e": 4079, "s": 3883, "text": "However, there is a more concise and elegant way to achieve this. As shown in Figure 6, we put the original for loop in just a single line of code. Moreover, the speed improved by almost 2 times." }, { "code": null, "e": 4399, "s": 4079, "text": "Combined with rule 3, we can turn the list into dictionaries or sets as well, just change [] as {}. Let’s rewrite codes in Figure 5, we can omit the step of assignment and complete the iteration inside the symbol, like this sorted_dict3 = {key: value for key, value in sorted(a_dict.items(), key=lambda item: item[1])}." }, { "code": null, "e": 4827, "s": 4399, "text": "To break this down, start at the end. The function β€œsorted(a_dict.items(), key=lambda item: item[1])” returned us a list of tuples (Figure 4). Here, we used multiple assignments to unpack the tuple, as for each tuple inside the list, we assigned key to its first item and value to its second item (as we know there are two items inside each tuple in this case). 
Finally, each pair of key and value was kept inside a dictionary." }, { "code": null, "e": 4866, "s": 4827, "text": "5. Use enumerate() for value and index" }, { "code": null, "e": 5123, "s": 4866, "text": "Sometimes, when we iterate through a list, we want to use both its values and indices in expressions. As shown in Figure 7, we should use enumerate(), which turns values of a list into pairs of index and value. This also speed-up our code by about 2 times." }, { "code": null, "e": 5181, "s": 5123, "text": "6. Use zip() for packing and unpacking multiple iterables" }, { "code": null, "e": 5466, "s": 5181, "text": "In some cases, we will need to iterate through two or more lists. We then can use zip() function, which transforms multiple lists into a single list of tuples (Figure 8). Note that the lists are better to be in the same length, otherwise, zip() stops as soon as the shorter list ends." }, { "code": null, "e": 5665, "s": 5466, "text": "Reversely, to access items in each tuple within the list, we can also unzip a list of tuple by adding an asterisk(*) and using multiple assignments, like this, letters1, numbers1 = zip(*pairs_list)." }, { "code": null, "e": 5689, "s": 5665, "text": "7. Combine set() and in" }, { "code": null, "e": 5802, "s": 5689, "text": "When we want to check if a value exists inside a list or not, a clumsy way is to construct a function like this:" }, { "code": null, "e": 5969, "s": 5802, "text": "# Construct a function for membership testdef check_membership(n): for element in another_long_list: if element == n: return True return False" }, { "code": null, "e": 6286, "s": 5969, "text": "Then call check_membership(value) to see if the value inside another_long_list. However, a pythonic way to do this is just to use in by calling value in another_long_list as shown in Figure 9. It just like you are asking Python literally that β€œhey python, could you please tell me if value inside another_long_list”." 
}, { "code": null, "e": 6549, "s": 6286, "text": "To be more efficient, we should first remove duplicates from the list by using set() and then test the membership in the set object. By doing so, we reduced the number of elements that need to be check. In addition, in is a very fast operation on sets by design." }, { "code": null, "e": 6742, "s": 6549, "text": "As you can see from Figure 9, even though it took 20ms to construct the set object, this is just a one-time invest and the checking step itself only used 5.2ΞΌs. That is 1962 times improvement." }, { "code": null, "e": 6773, "s": 6742, "text": "8. Check if a variable is true" }, { "code": null, "e": 6936, "s": 6773, "text": "Inevitably, we will use a lot of if statements to check for empty variables, empty lists, empty dictionaries, and so on. We can save a bit time from here as well." }, { "code": null, "e": 7163, "s": 6936, "text": "As shown in Figure 10, we do not need to explicitly state == True or is True in the if statement, instead we just use the variable name. This saves the resource used by magic function __eq__ for comparing values in both sides." }, { "code": null, "e": 7278, "s": 7163, "text": "Likewise, if we need to check if the variable is empty, we just need to say if not string_returned_from_function:." }, { "code": null, "e": 7316, "s": 7278, "text": "9. Count unique values use Counters()" }, { "code": null, "e": 7632, "s": 7316, "text": "Let’s say we are trying to count unique values in the list we generated in Rule 1, a_long_list. One way is to create a dictionary in which the keys are numbers and the values are counts. As we iterate the list, we can increment its count if it is already in the dictionary and add it to the dictionary if it is not." 
}, { "code": null, "e": 7760, "s": 7632, "text": "num_counts = {}for num in a_long_list: if num in num_counts: num_counts[num] += 1 else: num_counts[num] = 1" }, { "code": null, "e": 8001, "s": 7760, "text": "However, a more efficient way to do this is just using Counter() from collections in one line of code, num_counts2 = Counter(a_long_list). Yes, it is that simple. As show in Figure 11, it is about 10 times faster than the function we wrote." }, { "code": null, "e": 8121, "s": 8001, "text": "If we want to know the 10 most common numbers, the Counter() instance also has a most_common method that is very handy." }, { "code": null, "e": 8235, "s": 8121, "text": "In a word, collections is an amazing module, we should save it into our daily toolbox and use it whenever we can." }, { "code": null, "e": 8272, "s": 8235, "text": "10. Put for loop inside the function" }, { "code": null, "e": 8467, "s": 8272, "text": "There might be a time that we built a function and need to reiterate this function a given number of times. An obvious way is that we build a function and then put this function into a for loop." }, { "code": null, "e": 8813, "s": 8467, "text": "However, as shown in Figure 12, instead of repeatedly executing the function 1 million time (the length of a_long_list is 1,000,000), we integrated the for loop inside the function. This saved us about 22% of running time. This is because function calls are expensive, avoid it by writing function into the list comprehension is a better choice." }, { "code": null, "e": 9051, "s": 8813, "text": "That’s all! Thanks for reading this post. I hope that some tricks can be useful for you. Also, what are some other approaches that you used to speed up your Python code? I will really appreciate it if you share them by leaving a comment." 
}, { "code": null, "e": 9092, "s": 9051, "text": "Here are links you may be interested in:" }, { "code": null, "e": 9137, "s": 9092, "text": "How to sort a list using sort() and sorted()" }, { "code": null, "e": 9180, "s": 9137, "text": "When to use a list comprehension in Python" }, { "code": null, "e": 9231, "s": 9180, "text": "Transforming Code into Beautiful, Idiomatic Python" }, { "code": null, "e": 9291, "s": 9231, "text": "Advantages and Disadvantages of Python Programming Language" } ]
Mahalanobis Distance and Multivariate Outlier Detection in R | Towards Data Science
Mahalanobis Distance (MD) is an effective distance metric that finds the distance between a point and a distribution. It is quite effective on multivariate data. MD is effective on multivariate data because it uses the covariance between variables in order to find the distance between two points. In other words, Mahalanobis calculates the distance between point "P1" and point "P2" by considering the standard deviation (how many standard deviations P1 is from P2). MD also gives reliable results when outliers are considered as multivariate. In order to find outliers by MD, the distance between every point and the center of the n-dimensional data is calculated, and outliers are found by considering these distances.

Mahalanobis and Euclidean distance
Finding distance between two points with MD
Finding outliers with Mahalanobis distance in R
Conclusions

If you are interested in how to detect outliers using Mahalanobis distance in Python you can check my other article below. towardsdatascience.com Euclidean distance is also commonly used to find the distance between two points in 2 or more dimensions. But MD uses a covariance matrix, unlike Euclidean. Because of that, MD works well when two or more variables are highly correlated, even if their scales are not the same. But when two or more variables are not on the same scale, Euclidean distance results might misdirect. Therefore, Z-scores of the variables have to be calculated before finding the distance between these points. Moreover, Euclidean won't work well enough if the variables are highly correlated. Let's check out the Euclidean and MD formulas. As you can see from the formulas, MD uses a covariance matrix (the C^(-1) term in the middle), unlike Euclidean. In the Euclidean formula, p and q represent the points whose distance will be calculated. "n" represents the number of variables in the multivariate data. Finding Distance Between Two Points by MD Suppose that we have data with 5 rows and 2 columns.
As you can guess, every row in this data represents a point in 2-dimensional space.

      V1    V2
    ----- -----
P1    5     7
P2    6     8
P3    5     6
P4    3     2
P5    9    11

Let's draw a scatter plot of V1 and V2. The orange point shows the center of these two variables (by mean) and the black points represent each row in the data frame. Now, let's try to find the Mahalanobis Distance between P2 and P5. According to the calculations above, the M. Distance between P2 and P5 is found to be 4.08. As mentioned before, MD is quite effective at finding outliers in multivariate data. Especially, if there are linear relationships between variables, MD can figure out which observations break down the linearity. Unlike the other example, in order to find the outliers we need to find the distance between each point and the center. The center point can be represented as the mean value of every variable in the multivariate data. In this example we can use predefined data in R which is called "airquality". We will take the "Temp" and "Ozone" values as our variables. Here is the list of steps that we need to follow:

Finding the center point of "Ozone" and "Temp".
Calculating the covariance matrix of "Ozone" and "Temp".
Finding the Mahalanobis Distance of each point to the center.
Finding the Cut-Off value from the Chi-Square distribution.
Selecting the distances which are less than the Cut-Off (these are the values which aren't outliers).

Here is the code to calculate the center and covariance matrix. Before calculating the distances, let's plot our data and draw an ellipse by considering the center point and covariance matrix. We can find the ellipse coordinates by using the ellipse function that comes in the "car" package. The "ellipse" function takes 3 important arguments: center, shape and radius. Center represents the mean values of the variables, shape represents the covariance matrix and radius should be the square root of the Chi-Square value with 2 degrees of freedom and 0.95 probability.
We take the probability value 0.95 because anything outside the 0.95 boundary will be considered an outlier, and the degrees of freedom is 2, because we have two variables, "Ozone" and "Temp". After our ellipse coordinates are found, we can create our scatter plot with the "ggplot2" package. The code snippet above will return the scatter plot below. The blue point on the plot shows the center point. The black points are the observations for the Ozone and Temp variables. As you can see, the points 30, 62, 117, 99 are outside the orange ellipse. It means that these points might be the outliers. If we consider that this ellipse has been drawn using the covariance, center and radius, we can say we might have found the same points as outliers for the Mahalanobis Distance. In MD, we don't draw an ellipse; instead we calculate the distance between each point and the center. After we find the distances, we use the Chi-Square value as the Cut-Off in order to identify outliers (the same as the radius of the ellipse in the above example). The "mahalanobis" function that comes with R in the stats package returns the distance between each point and a given center point. This function also takes 3 arguments: "x", "center" and "cov". As you can guess, "x" is the multivariate data (matrix or data frame), "center" is the vector of center points of the variables and "cov" is the covariance matrix of the data. This time, while obtaining the Chi-Square Cut-Off value, we shouldn't take the square root, because mahalanobis already returns D2 (squared) distances (you can see it from the MD formula). Finally! We have identified the outliers in our multivariate data. The outliers found are observations (rows) 30, 62, 99 and 117, the same as the points outside of the ellipse in the scatter plot. In this post, we covered "Mahalanobis Distance" from theory to practice. Besides calculating the distance between two points from the formula, we also learned how to use it in order to find outliers in R. Although MD is not used much in machine learning, it is very useful in defining multivariate outliers.
If you have any questions please feel free to leave a comment. You may also be interested in my other article on detecting outliers using Mahalanobis distance in Python from scratch.
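As a rough companion to the R walkthrough above, here is a pure-Python sketch of the same P2-P5 calculation for the two-variable case. The helper names are mine, the 2x2 covariance inverse is written out by hand, and, like R's mahalanobis(), the function returns the squared distance, which is why it reproduces the 4.08 value from the example.

```python
import math

# the 5-point, 2-variable example data from the article
points = {"P1": (5, 7), "P2": (6, 8), "P3": (5, 6), "P4": (3, 2), "P5": (9, 11)}

def mean(values):
    return sum(values) / len(values)

def sample_cov(xs, ys):
    # sample covariance with n - 1 in the denominator, as R's cov() uses
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

xs = [p[0] for p in points.values()]
ys = [p[1] for p in points.values()]

# 2x2 covariance matrix and its inverse, written out by hand
c11, c22 = sample_cov(xs, xs), sample_cov(ys, ys)
c12 = sample_cov(xs, ys)
det = c11 * c22 - c12 * c12
inv = ((c22 / det, -c12 / det), (-c12 / det, c11 / det))

def mahalanobis_sq(p, q):
    # squared distance (p - q)^T C^-1 (p - q), matching what R's mahalanobis() returns
    d0, d1 = p[0] - q[0], p[1] - q[1]
    return d0 * (inv[0][0] * d0 + inv[0][1] * d1) + d1 * (inv[1][0] * d0 + inv[1][1] * d1)

d2 = mahalanobis_sq(points["P2"], points["P5"])
print(round(d2, 2))  # -> 4.08, the value computed in the article

# outlier step: compare each point's squared distance to the center against
# the chi-square 0.95 quantile; for df = 2 it has the closed form -2 ln(0.05)
center = (mean(xs), mean(ys))
cutoff = -2 * math.log(0.05)  # about 5.99
outliers = [name for name, p in points.items() if mahalanobis_sq(p, center) > cutoff]
print(outliers)  # this tiny data set has no outliers
```

The closed-form cutoff is a convenience specific to 2 degrees of freedom; for other dimensions you would use a chi-square quantile function such as scipy.stats.chi2.ppf.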
[ { "code": null, "e": 891, "s": 172, "text": "Mahalanobis Distance (MD) is an effective distance metric that finds the distance between point and a distribution (see also). It is quite effective on multivariate data. The reason why MD is effective on multivariate data is because it uses covariance between variables in order to find the distance of two points. In other words, Mahalanobis calculates the distance between point β€œP1” and point β€œP2” by considering standard deviation (how many standard deviations P1 far from P2). MD also gives reliable results when outliers are considered as multivariate. In order to find outliers by MD, distance between every point and center in n-dimension data are calculated and outliers found by considering these distances." }, { "code": null, "e": 926, "s": 891, "text": "Mahalanobis and Euclidean distance" }, { "code": null, "e": 970, "s": 926, "text": "Finding distance between two points with MD" }, { "code": null, "e": 1018, "s": 970, "text": "Finding outliers with Mahalanobis distance in R" }, { "code": null, "e": 1030, "s": 1018, "text": "Conclusions" }, { "code": null, "e": 1149, "s": 1030, "text": "If you are interested in how to detect outliers using Mahalanobis distance in Python you check my other article below." }, { "code": null, "e": 1172, "s": 1149, "text": "towardsdatascience.com" }, { "code": null, "e": 1748, "s": 1172, "text": "Euclidean distance is also commonly used to find distance between two points in 2 or more than 2 dimensional space. But, MD uses a covariance matrix unlike Euclidean. Because of that, MD works well when two or more variables are highly correlated and even if their scales are not the same. But, when two or more variables are not on the same scale, Euclidean distance results might misdirect. Therefore, Z-scores of variables has to be calculated before finding distance between these points. Moreover, Euclidean won’t work good enough if the variables are highly correlated." 
}, { "code": null, "e": 1790, "s": 1748, "text": "Let’s checkout Euclidean and MD formulas," }, { "code": null, "e": 2050, "s": 1790, "text": "As you can see from the formulas, MD uses a covariance matrix (which is at the middle C ^(-1) ) unlike Euclidean. In Euclidean formula p and q represent the points whose distance will be calculated. β€œn” represents the number of variables in multivariate data." }, { "code": null, "e": 2092, "s": 2050, "text": "Finding Distance Between Two Points by MD" }, { "code": null, "e": 2224, "s": 2092, "text": "Suppose that we have 5 rows and 2 columns data. As you can guess, every row in this data represents a point in 2-dimensional space." }, { "code": null, "e": 2324, "s": 2224, "text": " V1 V2 ----- ----- P1 5 7 P2 6 8 P3 5 6 P4 3 2 P5 9 11" }, { "code": null, "e": 2364, "s": 2324, "text": "Let’s draw a scatter plot of V1 and V2," }, { "code": null, "e": 2549, "s": 2364, "text": "The orange point shows the center of these two variables (by mean) and black points represent each row in the data frame. Now, let’s try to find Mahalanobis Distance between P2 and P5;" }, { "code": null, "e": 2627, "s": 2549, "text": "According to the calculations above M. Distance between P2 and P5 found 4.08." }, { "code": null, "e": 3047, "s": 2627, "text": "As mentioned before MD is quite effective to find outliers for multivariate data. Especially, if there are linear relationships between variables, MD can figure out which observations break down the linearity. Unlike the other example, in order to find the outliers we need to find distance between each point and the center. The center point can be represented as the mean value of every variable in multivariate data." }, { "code": null, "e": 3231, "s": 3047, "text": "In this example we can use predefined data in R which is called β€œairquality”. We will take β€œTemp” and β€œOzone” values as our variable. 
Here is the list of steps that we need to follow;" }, { "code": null, "e": 3279, "s": 3231, "text": "Finding the center point of β€œOzone” and β€œTemp”." }, { "code": null, "e": 3336, "s": 3279, "text": "Calculating the covariance matrix of β€œOzone” and β€œTemp”." }, { "code": null, "e": 3394, "s": 3336, "text": "Finding the Mahalanobis Distance of each point to center." }, { "code": null, "e": 3449, "s": 3394, "text": "Finding the Cut-Off value from Chi-Square distribution" }, { "code": null, "e": 3547, "s": 3449, "text": "Selecting the distances which is less than Cut-Off (These are the values which isn’t an outlier)." }, { "code": null, "e": 3608, "s": 3547, "text": "Here is the codes to calculate center and covariance matrix;" }, { "code": null, "e": 4265, "s": 3608, "text": "Before calculating the distances let’s plot our data and draw an ellipse by considering center point and covariance matrix. We can find the ellipse coordinates by using the ellipse function that comes in the β€œcar” package. β€œellipse” function takes 3 important arguments; center, shape and radius. Center represents the mean values of variables, shape represents the covariance matrix and radius should be the square root of Chi-Square value with 2 degrees of freedom and 0.95 probability. We take probability values 0.95 because outside the 0.95 will be considered as an outlier and degree of freedom is 2, because we have two variables β€œOzone” and β€œTemp”." }, { "code": null, "e": 4361, "s": 4265, "text": "After our ellipse coordinates are found, we can create our scatter plot with β€œggplot2” package;" }, { "code": null, "e": 4413, "s": 4361, "text": "Above, code snippet will return below scatter plot;" }, { "code": null, "e": 5044, "s": 4413, "text": "Blue point on the plot shows the center point. Black points are the observations for Ozone β€” Wind variables. As you can see, the points 30, 62, 117, 99 are outside the orange ellipse. It means that these points might be the outliers. 
If we consider that this ellipse has been drawn over covariance, center and radius, we can say we might have found the same points as the outlier for Mahalanobis Distance. In MD, we don’t draw an ellipse but we calculate distance between each point and center. After we find distances, we use Chi-Square value as Cut-Off in order to identify outliers (same as radius of ellipse in above example)." }, { "code": null, "e": 5557, "s": 5044, "text": "β€œmahalanobis” function that comes with R in stats package returns distances between each point and given center point. This function also takes 3 arguments β€œx”, β€œcenter” and β€œcov”. As you can guess, β€œx” is multivariate data (matrix or data frame), β€œcenter” is the vector of center points of variables and β€œcov” is covariance matrix of the data. This time, while obtaining Chi-Sqaure Cut-Off value we shouldn’t take square root. Because, MD already returns D2 (squared) distances (you can see it from MD formula)." }, { "code": null, "e": 5735, "s": 5557, "text": "Finally! We have identified the outliers in our multivariate data. Outliers found 30. 62. 99. 117. observations (rows) same as the points outside of the ellipse in scatter plot." }, { "code": null, "e": 6035, "s": 5735, "text": "In this post, we covered β€œMahalanobis Distance” from theory to practice. Besides calculating distance between two points from formula, we also learned how to use it in order to find outliers in R. Although MD is not used much in machine learning, it is very useful in defining multivariate outliers." } ]
Pylance: The best Python extension for VS Code | by Dimitris Poulopoulos | Towards Data Science
Visual Studio Code is arguably the best open-source code editor, and, of course, Python is treated as a first-class citizen inside its ecosystem. The corresponding Microsoft extension comes with "rich support for the Python Language, including features like IntelliSense, linting, debugging, code navigation, code formatting, code refactoring, test explorer, snippets, Jupyter Notebook support, and much more." However, the enemy of good is better; thus, Microsoft created Pylance, a new, improved, faster, and feature-packed Python language server. Pylance, a reference to Monty Python's Sir Lancelot the Brave, depends on the core Python extension and builds upon that experience. What is more, with the VS Code December update, Pylance can perform long-awaited actions that take the Python developing experience to a new level. This story presents Pylance, what it is, and how you can leverage its full functionality today. Finally, we examine the new version that was released as part of the latest December update. Learning Rate is my weekly newsletter for those who are curious about the world of AI and MLOps. You'll hear from me every Friday with updates and thoughts on the latest AI news, research, repos and books. Subscribe here! Pylance is the new Python language server from Microsoft. It offers fast, feature-rich language support for Python in Visual Studio Code; it is dependent on the core Python extension but brings a hell of a lot more to the table. Let's see what the Microsoft VS Code Python team has been building for months. Pylance leverages type stub files (.pyi files) and lazy type inferencing to provide an efficient development experience. But what are stub files, you may ask? As a quick sidenote, stub files are used to provide type-hinting information for Python modules. The full official documentation can be found in the section about stub-files in PEP-484.
For example, consider the following Python function that resides in a my_function.py module:

def add(a, b):
    return a + b

You can create a new stub file, my_function.pyi, to provide type-hinting:

def add(a: int, b: int) -> int: ...

NOTE: The ... at the end of the function definition in the stub file is part of the syntax

However, we know that we can insert type-hints inside the Python module like this:

def add(a: int, b: int) -> int:
    return a + b

So, why should we use stub files? There are a couple of reasons. For example, if you want to keep your .py files backward compatible, or if you want to provide type-hints for an existing code-base and want to minimize the changes in the source-code itself. Coming back to Pylance, the use of stub files can supercharge your Python IntelliSense experience with rich type information, helping you write code faster. What is more, it already comes with a collection of stubs for popular modules out of the box. The built-in stub library provides accurate type checking as well as fast auto-completion. But is that all? Of course not; we are just scratching the surface. Pylance's main characteristic is its performance, but many other features make this extension a must-have.

Type information: You can now get the type definition of a function when you hover over it.

Auto-imports: You now get auto-import suggestions for installed and standard library modules. This is probably the most requested feature by Python developers, and it's finally available.

Type-checking: You can now validate that the arguments you pass to a function are of the right type prior to its execution. You have to enable it through settings; set python.analysis.typeCheckingMode to basic or strict.

The December 2020 Visual Studio Code update introduced several new Pylance features, with code extraction and the Pylance Insiders program being the most important.

Code extraction: You are now able to extract methods and variables with one click.
Pylance Insiders: Pylance now has its own club of dedicated users, the Pylance Insiders program, which offers early access to new language server features and improvements. To enable insiders, set "pylance.insidersChannel": "daily". Pylance seems to be the future of the Python language server for VS Code. It is still in preview, but I would recommend turning it on immediately. The new features and improvements coming almost every day will make you more productive and elevate your overall developing experience. The new VS Code update features many improvements and features outside Pylance, like Ipywidget Support in Native Notebooks. You can read the changelog for more details. Learning Rate is my weekly newsletter for those who are curious about the world of AI and MLOps. You’ll hear from me every Friday with updates and thoughts on the latest AI news, research, repos and books. Subscribe here! My name is Dimitris Poulopoulos, and I’m a machine learning engineer working for Arrikto. I have designed and implemented AI and software solutions for major clients such as the European Commission, Eurostat, IMF, the European Central Bank, OECD, and IKEA. If you are interested in reading more posts about Machine Learning, Deep Learning, Data Science, and DataOps, follow me on Medium, LinkedIn, or @james2pl on Twitter. Also, visit the resources page on my website, a place for great books and top-rated courses, to start building your own Data Science curriculum! Opinions expressed are solely my own and do not express the views or opinions of my employer.
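To make the type-checking feature concrete, here is a toy module of my own (not taken from the Pylance docs). The annotations promise ints, but Python itself ignores them at runtime; catching the mismatched call before the code runs is exactly what setting python.analysis.typeCheckingMode to basic is for.

```python
# Hypothetical example: type hints document intent but are not enforced at runtime.
def add(a: int, b: int) -> int:
    return a + b

# A static checker (e.g. Pylance with typeCheckingMode set to "basic")
# would flag this call, because "2" and "3" are str, not int...
result = add("2", "3")

# ...yet at runtime Python happily concatenates the strings instead.
print(result)  # -> "23", not 5
```

This silent success is the kind of bug a static type checker surfaces in the editor, long before the wrong value propagates through a program.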
[ { "code": null, "e": 583, "s": 172, "text": "Visual Studio Code is arguably the best open-source code editor, and, of course, Python is treated as a first-class citizen inside its ecosystem. The corresponding Microsoft extension comes with β€œrich support for the Python Language, including features like IntelliSense, linting, debugging, code navigation, code formatting, code refactoring, test explorer, snippets, Jupyter Notebook support, and much more.”" }, { "code": null, "e": 1003, "s": 583, "text": "However, the enemy of good is better; thus, Microsoft created Pylance, a new, improved, faster, and feature-packed Python language server. Pylance, a reference to Monty Python's Sir Lancelot the Brave, depends on the core Python extension and builds upon that experience. What is more, with the VS Code December update, Pylance can perform long-awaited actions that take the Python developing experience to a new level." }, { "code": null, "e": 1192, "s": 1003, "text": "This story presents Pylance, what it is, and how you can leverage its full functionality today. Finally, we examine the new version that was released as part of the latest December update." }, { "code": null, "e": 1414, "s": 1192, "text": "Learning Rate is my weekly newsletter for those who are curious about the world of AI and MLOps. You’ll hear from me every Friday with updates and thoughts on the latest AI news, research, repos and books. Subscribe here!" }, { "code": null, "e": 1720, "s": 1414, "text": "Pylance is the new Python language server from Microsoft. It offers fast, feature-rich language support for Python in Visual Studio Code, is dependent on the core Python extensions but brings a hell of a lot more to the table. Let’s see what the Microsoft VS Code Python team has been building for months." }, { "code": null, "e": 1879, "s": 1720, "text": "Pylance leverages type stub files (.pyi files) and lazy type inferencing to provide an efficient development experience. 
But what are stub files, you may ask?" }, { "code": null, "e": 2156, "s": 1879, "text": "As a quick sidenote, Stub files are used to provide type-hinting information for Python modules. The full official documentation can be found in the section about stub-files in PEP-484. For example, consider the following Python function that resides in my_function.py module:" }, { "code": null, "e": 2186, "s": 2156, "text": "def add(a, b): return a + b" }, { "code": null, "e": 2260, "s": 2186, "text": "You can create a new stub file, my_function.pyi, to provide type-hinting:" }, { "code": null, "e": 2296, "s": 2260, "text": "def add(a: int, b: int) -> int: ..." }, { "code": null, "e": 2387, "s": 2296, "text": "NOTE: The ... at the end of the function definition in the stub file is part of the syntax" }, { "code": null, "e": 2470, "s": 2387, "text": "However, we know that we can insert type-hints inside the Python module like that:" }, { "code": null, "e": 2517, "s": 2470, "text": "def add(a: int, b: int) -> int: return a + b" }, { "code": null, "e": 2774, "s": 2517, "text": "So, why should we use stub files? There are a couple of reasons. For example, if you want to keep your .py files backward compatible or if you want to provide type-hints into an existing code-base and want to minimize the changes in the source-code itself." }, { "code": null, "e": 3116, "s": 2774, "text": "Coming back to Pylance, the use of stub files can supercharge your Python IntelliSense experience with rich type information, helping you write code faster. What is more, it already comes with a collection of stubs for popular modules out of the box. The built-in stub library provides accurate type checking as well as fast auto-completion." }, { "code": null, "e": 3184, "s": 3116, "text": "But is that all? Of course not; we are just scratching the surface." 
}, { "code": null, "e": 3289, "s": 3184, "text": "Pylance main characteristic is its performance, but many other features make this extension a must-have." }, { "code": null, "e": 3381, "s": 3289, "text": "Type information: You can now get the type definition of a function when you hover over it." }, { "code": null, "e": 3577, "s": 3381, "text": "Auto-imports: You are now able to auto-import suggestions for installed and standard library modules. This is probably the most requested feature by Python developers, and it’s finally available." }, { "code": null, "e": 3798, "s": 3577, "text": "Type-checking: You can now validate that the arguments you pass to a function are of the right type prior to its execution. You have to enable it through settings; set python.analysis.typeCheckingMode to basic or strict." }, { "code": null, "e": 3963, "s": 3798, "text": "The December 2020 Visual Studio Code update introduced several new Pylance features, with code extraction and the Pylance Insiders program being the most important." }, { "code": null, "e": 4046, "s": 3963, "text": "Code extraction: You are now able to extract methods and variables with one click." }, { "code": null, "e": 4279, "s": 4046, "text": "Pylance Insiders: Pylance now has its own club of dedicated users, the Pylance Insiders program, which offers early access to new language server features and improvements. To enable insiders, set \"pylance.insidersChannel\": \"daily\"." }, { "code": null, "e": 4562, "s": 4279, "text": "Pylance seems to be the future of the Python language server for VS Code. It is still in preview, but I would recommend turning it on immediately. The new features and improvements coming almost every day will make you more productive and elevate your overall developing experience." }, { "code": null, "e": 4731, "s": 4562, "text": "The new VS Code update features many improvements and features outside Pylance, like Ipywidget Support in Native Notebooks. 
You can read the changelog for more details." }, { "code": null, "e": 4953, "s": 4731, "text": "Learning Rate is my weekly newsletter for those who are curious about the world of AI and MLOps. You’ll hear from me every Friday with updates and thoughts on the latest AI news, research, repos and books. Subscribe here!" }, { "code": null, "e": 5210, "s": 4953, "text": "My name is Dimitris Poulopoulos, and I’m a machine learning engineer working for Arrikto. I have designed and implemented AI and software solutions for major clients such as the European Commission, Eurostat, IMF, the European Central Bank, OECD, and IKEA." }, { "code": null, "e": 5521, "s": 5210, "text": "If you are interested in reading more posts about Machine Learning, Deep Learning, Data Science, and DataOps, follow me on Medium, LinkedIn, or @james2pl on Twitter. Also, visit the resources page on my website, a place for great books and top-rated courses, to start building your own Data Science curriculum!" } ]
C++ Program to Implement Disjoint Set Data Structure
Disjoint set is basically a group of sets where no item can be in more than one set. It supports union and find operations on subsets.

Find(): It is used to find which subset a particular element is in and returns the representative of that particular set.

Union(): It merges two different subsets into a single subset, and the representative of one set becomes the representative of the other.

Begin
   Assume k is the element
   makeset(k):
      k.parent = k
   Find(k):
      If k.parent == k
         return k
      else
         return Find(k.parent)
   Union(a, b):
      Take two sets a and b as input.
      aroot = Find(a)
      broot = Find(b)
      aroot.parent = broot
End

#include <iostream>
#include <vector>
#include <unordered_map>
using namespace std;
class DisjointSet { // to represent a disjoint set
   unordered_map<int, int> parent;
   public:
   void makeSet(vector<int> const &wholeset) { // perform makeset operation
      for (int i : wholeset) // create n disjoint sets (one for each item)
         parent[i] = i;
   }
   int Find(int l) { // find the root of the set in which element l belongs
      if (parent[l] == l) // if l is root
         return l;
      return Find(parent[l]); // recurse on parent until we find the root
   }
   void Union(int m, int n) { // perform Union of two subsets m and n
      int x = Find(m);
      int y = Find(n);
      parent[x] = y;
   }
};
void print(vector<int> const &universe, DisjointSet &dis) {
   for (int i : universe)
      cout << dis.Find(i) << " ";
   cout << '\n';
}
int main() {
   vector<int> wholeset = { 6,7,1,2,3 }; // items of the whole set
   DisjointSet dis; // initialize DisjointSet class
   dis.makeSet(wholeset); // create an individual set for each item of wholeset
   dis.Union(7, 6); // 7 and 6 are now in the same set
   print(wholeset, dis);
   if (dis.Find(7) == dis.Find(6)) // check whether they belong to the same set
      cout<<"Yes"<<endl;
   else
      cout<<"No";
   if (dis.Find(3) == dis.Find(4)) // 4 was never added to any set
      cout<<"Yes"<<endl;
   else
      cout<<"No";
   return 0;
}

6 6 1 2 3
Yes
No
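For comparison, here is a sketch of the same structure in Python; the method names are mine, and find() adds iterative path compression, a common optimization that the recursive Find() above does not perform. It mirrors the article's {6, 7, 1, 2, 3} universe and union of 7 and 6.

```python
class DisjointSet:
    def __init__(self, items):
        # one singleton set per item, as in makeSet
        self.parent = {i: i for i in items}

    def find(self, k):
        # walk up to the root first
        root = k
        while self.parent[root] != root:
            root = self.parent[root]
        # then flatten the path so later finds are near-constant time
        while self.parent[k] != root:
            self.parent[k], k = root, self.parent[k]
        return root

    def union(self, m, n):
        # attach the root of m's set under the root of n's set
        self.parent[self.find(m)] = self.find(n)

ds = DisjointSet([6, 7, 1, 2, 3])
ds.union(7, 6)  # 7 and 6 now share the representative 6
print([ds.find(i) for i in [6, 7, 1, 2, 3]])  # -> [6, 6, 1, 2, 3]
```

Path compression (often paired with union by rank or size) is what makes disjoint-set operations effectively constant time in practice.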
[ { "code": null, "e": 1197, "s": 1062, "text": "Disjoint set is basically as group of sets where no item can be in more than one set. It supports union and find operation on subsets." }, { "code": null, "e": 1322, "s": 1197, "text": "Find(): It is used to find in which subset a particular element is in and returns the representative of that particular set." }, { "code": null, "e": 1447, "s": 1322, "text": "Union(): It merges two different subsets into a single subset and representative of one set becomes representative of other." }, { "code": null, "e": 1724, "s": 1447, "text": "Begin\n Assume k is the element\n makeset(k):\n k.parent = k.\n Find(x):\n If k.parent == k\n return k.\n else\n return Find(k.parent)\n Union (a,b):\n Take two set a and b as input.\n aroot = Find(a)\n broot = Find(b)\n aroot.parent = broot\nEnd" }, { "code": null, "e": 3080, "s": 1724, "text": "#include <iostream>\n#include <vector>\n#include <unordered_map>\nusing namespace std;\nclass DisjointSet { //to represent disjoint set\n unordered_map<int, int> parent;\n public:\n void makeSet(vector<int> const &wholeset){\n //perform makeset operation\n for (int i : wholeset) // create n disjoint sets\n (one for each item)\n parent[i] = i;\n }\n int Find(int l) { // Find the root of the set in which element l belongs\n if (parent[l] == l) // if l is root\n return l;\n return Find(parent[l]); // recurs for parent till we find root\n }\n void Union(int m, int n) { // perform Union of two subsets m and n \n int x = Find(m);\n int y = Find(n);\n parent[x] = y;\n }\n};\nvoid print(vector<int> const &universe, DisjointSet &dis) {\n for (int i : universe)\n cout << dis.Find(i) << \" \";\n cout << '\\n';\n}\nint main() {\n vector<int> wholeset = { 6,7,1,2,3 }; // items of whole set\n DisjointSet dis; //initialize DisjointSet class\n dis.makeSet(wholeset); // create individual set of the items of wholeset\n dis.Union(7, 6); // 7,6 are in same set\n print(wholeset, dis);\n if (dis.Find(7) == dis.Find(6)) 
// if they are belong to same set or not.\n cout<<\"Yes\"<<endl;\n else\n cout<<\"No\";\n if (dis.Find(3) == dis.Find(4))\n cout<<\"Yes\"<<endl;\n else\n cout<<\"No\";\n return 0;\n}" }, { "code": null, "e": 3097, "s": 3080, "text": "6 6 1 2 3\nYes\nNo" } ]
Mahotas - RGB to LAB Conversion - GeeksforGeeks
08 Jul, 2021

In this article we will see how we can convert an RGB image to CIE L*a*b* in mahotas. An RGB image, sometimes referred to as a truecolor image, is stored in MATLAB as an m-by-n-by-3 data array that defines red, green, and blue color components for each individual pixel. The CIELAB color space (also known as CIE L*a*b* or sometimes abbreviated as simply "Lab" color space) is a color space defined by the International Commission on Illumination (CIE) in 1976.

In this tutorial we will use the "lena" image; below is the command to load it.

mahotas.demos.load('lena')

Below is the lena image.

In order to do this we will use the mahotas.colors.rgb2lab method.

Syntax : mahotas.colors.rgb2lab(img)
Argument : It takes an image object as argument
Return : It returns an image object

Below is the implementation

Python3

# importing required libraries
import mahotas
import mahotas.demos
from pylab import gray, imshow, show
import numpy as np

# loading image
img = mahotas.demos.load('lena')

# showing image
print("Image")
imshow(img)
show()

# rgb to lab
new_img = mahotas.colors.rgb2lab(img)

# showing new image
print("New Image")
imshow(new_img)
show()

Output :

Image
New Image

Another example

Python3

# importing required libraries
import mahotas
import numpy as np
import matplotlib.pyplot as plt
from pylab import imshow, show  # imshow/show come from pylab
import os

# loading image
img = mahotas.imread('dog_image.png')

# filtering image
img = img[:, :, :3]

# showing image
print("Image")
imshow(img)
show()

# rgb to lab
new_img = mahotas.colors.rgb2lab(img)

# showing new image
print("New Image")
imshow(new_img)
show()

Output :

Image
New Image
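To build intuition for what an RGB-to-L*a*b* conversion computes internally, below is a single-pixel, pure-Python sketch of the standard sRGB -> XYZ -> L*a*b* pipeline with a D65 white point. The constants are the commonly published ones and are not guaranteed to match the internal implementation of mahotas.colors.rgb2lab exactly.

```python
# Illustrative single-pixel sRGB -> CIE L*a*b* conversion (D65 white point).
# This mirrors the idea behind an rgb2lab routine, not mahotas' exact code.

def srgb_to_linear(c):
    # undo the sRGB gamma curve; c is in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r, g, b):
    rl, gl, bl = (srgb_to_linear(v / 255.0) for v in (r, g, b))
    # linear RGB -> XYZ with the standard sRGB/D65 matrix
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # normalize by the D65 reference white
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = rgb_to_lab(255, 255, 255)
print(round(L, 2), round(a, 2), round(b, 2))  # white: L near 100, a and b near 0
```

An image-level rgb2lab simply applies this per-pixel mapping over the whole m-by-n-by-3 array.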
[ { "code": null, "e": 24039, "s": 24011, "text": "\n08 Jul, 2021" }, { "code": null, "e": 24497, "s": 24039, "text": "In this article we will see how we can covert rgb image to CIE L*a*b* in mahotas. An RGB image, sometimes referred to as a truecolor image, is stored in MATLAB as an m-by-n-by-3 data array that defines red, green, and blue color components for each individual pixel. The CIELAB color space (also known as CIE L*a*b* or sometimes abbreviated as simply β€œLab” color space) is a color space defined by the International Commission on Illumination (CIE) in 1976." }, { "code": null, "e": 24573, "s": 24497, "text": "In this tutorial we will use β€œlena” image, below is the command to load it." }, { "code": null, "e": 24600, "s": 24573, "text": "mahotas.demos.load('lena')" }, { "code": null, "e": 24625, "s": 24600, "text": "Below is the lena image " }, { "code": null, "e": 24690, "s": 24625, "text": "In order to do this we will use mahotas.colors.rgb2sepiamethod " }, { "code": null, "e": 24804, "s": 24690, "text": "Syntax : mahotas.colors.rgb2lab(img)Argument :It takes image object as argumentReturn : It returns image object " }, { "code": null, "e": 24834, "s": 24804, "text": "Below is the implementation " }, { "code": null, "e": 24842, "s": 24834, "text": "Python3" }, { "code": "# importing required librariesimport mahotasimport mahotas.demosfrom pylab import gray, imshow, showimport numpy as np # loading imageimg = mahotas.demos.load('lena') # showing imageprint(\"Image\")imshow(img)show() # rgb to labnew_img = mahotas.colors.rgb2lab(img) # showing new imageprint(\"New Image\")imshow(new_img)show()", "e": 25168, "s": 24842, "text": null }, { "code": null, "e": 25178, "s": 25168, "text": "Strong : " }, { "code": null, "e": 25184, "s": 25178, "text": "Image" }, { "code": null, "e": 25194, "s": 25184, "text": "New Image" }, { "code": null, "e": 25212, "s": 25194, "text": "Another example " }, { "code": null, "e": 25220, "s": 25212, "text": "Python3" }, { 
"code": "# importing required librariesimport mahotasimport numpy as npimport matplotlib.pyplot as pltimport os # loading imageimg = mahotas.imread('dog_image.png') # filtering imageimg = img[:, :, :3] # showing imageprint(\"Image\")imshow(img)show() # rgb to labnew_img = mahotas.colors.rgb2lab(img) # showing new imageprint(\"New Image\")imshow(new_img)show()", "e": 25576, "s": 25220, "text": null }, { "code": null, "e": 25586, "s": 25576, "text": "Strong : " }, { "code": null, "e": 25594, "s": 25586, "text": "Image\n " }, { "code": null, "e": 25604, "s": 25594, "text": "New Image" }, { "code": null, "e": 25622, "s": 25606, "text": "simranarora5sos" }, { "code": null, "e": 25637, "s": 25622, "text": "sagartomar9927" }, { "code": null, "e": 25652, "s": 25637, "text": "Python-Mahotas" }, { "code": null, "e": 25659, "s": 25652, "text": "Python" }, { "code": null, "e": 25757, "s": 25659, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25766, "s": 25757, "text": "Comments" }, { "code": null, "e": 25779, "s": 25766, "text": "Old Comments" }, { "code": null, "e": 25797, "s": 25779, "text": "Python Dictionary" }, { "code": null, "e": 25819, "s": 25797, "text": "Enumerate() in Python" }, { "code": null, "e": 25840, "s": 25819, "text": "Python OOPs Concepts" }, { "code": null, "e": 25875, "s": 25840, "text": "Read a file line by line in Python" }, { "code": null, "e": 25907, "s": 25875, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 25937, "s": 25907, "text": "Iterate over a list in Python" }, { "code": null, "e": 25979, "s": 25937, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 25995, "s": 25979, "text": "Stack in Python" }, { "code": null, "e": 26039, "s": 25995, "text": "Reading and Writing to text files in Python" } ]
Chaining comparison operators in Python
Sometimes we need to check more than one condition in a single statement. The basic syntax for this kind of check is x < y < z, which is shorthand for x < y and y < z.

Like other languages, Python has the basic comparison operators <, <=, >, >=, ==, !=, is, is not, in, not in.

All of these operators have the same precedence, which is lower than that of the arithmetic, bitwise, and shifting operators.

These operators can be chained together arbitrarily. For example, the expression x < y < z is equivalent to x < y and y < z. In general, if the operands are p1, p2, ..., pn and the operators are OP1, OP2, ..., OPn-1, then the chain is equivalent to p1 OP1 p2 and p2 OP2 p3 and ... and pn-1 OPn-1 pn.

Here are some examples of the chaining feature of comparison operators.

a = 10
b = 20
c = 5

# c < a < b is same as c < a and a < b
print(c < a)
print(a < b)
print(c < a < b)

# check whether b lies between 40 and 60 (it does not)
print(40 <= b <= 60)

# a is 10, which is greater than c
print(a == 10 > c)

True
True
True
False
True

u = 5
v = 10
w = 15
x = 0
y = 7
z = 15

# w is the same as z but not the same as v; v is greater than x, which is less than y
print(z is w is not v > x < y)

# check whether w and z are the same and x < z > y or not
print(x < w == z > y)

True
True
[ { "code": null, "e": 1239, "s": 1062, "text": "Sometimes we need to use more than one condition checking in a single statement. There are some basic syntax for these kind of checking is x < y < z, or if x < y and x < z etc." }, { "code": null, "e": 1391, "s": 1239, "text": "Like other languages, there are some basic comparison operators in Python. These comparison operators are <, <=, >, >=, ==, !=, is, is not, in, not in." }, { "code": null, "e": 1513, "s": 1391, "text": "The precedence of these operators are same, and the precedence is lesser than arithmetic, bitwise and shifting operators." }, { "code": null, "e": 1858, "s": 1513, "text": "These operators can be arranged arbitrarily. They will be used as a chain. So for an example, if the expression is x < y < z, then it is similar to the x < y and y < z. So from this example, we can see if the operands are p1, p2,..., pn, and the operators are OP1, OP2,..., OPn-1, then it will be same as p1 OP1 p2 and p2 OP2 p3, , pn-1 OPn-1pn" }, { "code": null, "e": 1935, "s": 1858, "text": "So there are some examples on the chaining features of comparison operators." }, { "code": null, "e": 1946, "s": 1935, "text": " Live Demo" }, { "code": null, "e": 2154, "s": 1946, "text": "a = 10\nb = 20\nc = 5\n# c < a < b is same as c <a and a < b\nprint(c < a)\nprint(a < b)\nprint(c < a < b)\n# b is not in between 40 and 60\nprint(40 <= b <= 60)\n# a is 10, which is greater than c\nprint(a == 10 > c)" }, { "code": null, "e": 2181, "s": 2154, "text": "True\nTrue\nTrue\nFalse\nTrue\n" }, { "code": null, "e": 2192, "s": 2181, "text": " Live Demo" }, { "code": null, "e": 2420, "s": 2192, "text": "u = 5\nv = 10\nw = 15\nx = 0\ny = 7\nz = 15\n# The w is same as z but not same as v, v is greater than x, which is less than y\nprint(z is w is not v > x < y)\n# Check whether w and z are same and x < z > y or not\nprint(x < w == z > y)" }, { "code": null, "e": 2431, "s": 2420, "text": "True\nTrue\n" } ]
Vertical Traversal of Binary Tree | Practice | GeeksforGeeks
Given a Binary Tree, find the vertical traversal of it starting from the leftmost level to the rightmost level. If there are multiple nodes passing through a vertical line, then they should be printed as they appear in the level order traversal of the tree.

Example 1:
Input:
           1
         /   \
        2     3
       / \   / \
      4   5 6   7
           \     \
            8     9
Output: 4 2 1 5 6 3 8 7 9
Explanation: Nodes on the same vertical line, such as 1, 5 and 6, appear in level order.

Example 2:
Input:
        1
      /   \
     2     3
    / \     \
   4   5     6
Output: 4 2 1 5 3 6

Your Task:
You don't need to read input or print anything. Your task is to complete the function verticalOrder() which takes the root node as input parameter and returns an array containing the vertical order traversal of the tree from the leftmost to the rightmost level. If 2 nodes lie in the same vertical level, they should be printed in the order they appear in the level order traversal of the tree.

Expected Time Complexity: O(N)
Expected Auxiliary Space: O(N)

Constraints:
1 <= Number of nodes <= 3*10^4

0 tirtha1902568 2 days ago

class Solution { static class pair{ int dist; Node node; pair(int dist,Node node){ this.dist = dist; this.node = node; } } static ArrayList <Integer> verticalOrder(Node
root) { ArrayList<Integer>ans=new ArrayList<>(); if(root==null) return ans; Queue<Pair>q=new LinkedList<>(); Map<Integer,ArrayList<Integer>>mp= new TreeMap<>(); q.add(new Pair(root,0)); while(!q.isEmpty()){ Pair p=q.poll(); Node curr= p.node; int hd=p.hd; if(mp.containsKey(hd)){ mp.get(hd).add(curr.data); } else{ ArrayList<Integer>al= new ArrayList<>(); al.add(curr.data); mp.put(hd,al); } if(curr.left!=null) q.add(new Pair(curr.left,hd-1)); if(curr.right!=null) q.add(new Pair(curr.right,hd+1)); } for(Map.Entry<Integer,ArrayList<Integer>>e:mp.entrySet()){ ans.addAll(e.getValue()); } return ans; }} 0 akshu199bhardwaj1 week ago What is the problem with this code? public: //Function to find the vertical order traversal of Binary Tree. void helper(Node* root, int hd, map<int,vector<int>> &m){ if(!root) return; m[hd].push_back(root->data); helper(root->left,hd-1,m); helper(root->right,hd+1,m); } vector<int> verticalOrder(Node *root) { vector<int> ans; map<int,vector<int>> m; helper(root,0,m); for(auto e: m){ for(int i=0; i<e.second.size(); i++) ans.push_back(e.second[i]); } return ans; } 0 saicharanthammi2 weeks ago vector<int> verticalOrder(Node *root) { map<int,vector<int>>mp; pair<Node*,int>p;p.first=root; p.second=0;vector<int>ans; queue<pair<Node*,int>> q; q.push(p); while(!q.empty()) { p=q.front(); mp[p.second].push_back(p.first->data); if(p.first->left!=NULL) {q.push(make_pair(p.first->left,p.second-1)); } if(p.first->right!=NULL){ q.push(make_pair(p.first->right,p.second+1));} q.pop(); } for(auto it: mp) for(auto iter : it.second) ans.push_back(iter); return ans; } 0 aishwaryadwani97992 weeks ago class Solution{ //Function to find the vertical order traversal of Binary Tree. 
static class Pair { Node node; int horizontalDistance; Pair(Node node,int horizontalDistance) { this.node=node; this.horizontalDistance=horizontalDistance; } } static ArrayList <Integer> verticalOrder(Node root) { Queue<Pair> bag = new LinkedList<>(); Pair pair = new Pair(root,0); bag.add(pair); Map<Integer,ArrayList<Integer>> mp = new HashMap<>(); int minimumHD = Integer.MAX_VALUE; int maximumHD = Integer.MIN_VALUE; while(!bag.isEmpty()) { pair = bag.poll(); int key = pair.horizontalDistance; int value = pair.node.data; if(mp.containsKey(key)) { ArrayList<Integer> valueList = mp.get(key); valueList.add(value); mp.put(key,valueList); } else { ArrayList<Integer> valueList = new ArrayList<>(); valueList.add(value); mp.put(key,valueList); } if(pair.node.left!=null) { Pair leftPair = new Pair(pair.node.left,key-1); bag.add(leftPair); } if(pair.node.right!=null) { Pair rightPair = new Pair(pair.node.right,key+1); bag.add(rightPair); } minimumHD = Math.min(minimumHD,key); maximumHD = Math.max(maximumHD,key); } ArrayList<Integer> result = new ArrayList<>(); for(int i=0;i<mp.size();i++) { ArrayList<Integer> values = mp.get(minimumHD); for(Integer value : values) result.add(value); minimumHD += 1; } return result; }} 0 aishwaryadwani9799 This comment was deleted. 
+3 yoginpcs202 weeks ago vector<int> verticalOrder(Node *root) { //Your code here queue<pair<Node*,int>> q; map<int,vector<int>>mp; q.push(make_pair(root,0)); while(!q.empty()) { auto p=q.front(); Node* curr=p.first; int hd=p.second; mp[hd].push_back(curr->data); q.pop(); if(curr->left!=NULL) q.push(make_pair(curr->left,hd-1)); if(curr->right!=NULL) q.push(make_pair(curr->right,hd+1)); } vector<int> ans; for(auto i:mp) { for(auto j:i.second) { ans.push_back(j); } } return ans; } 0 moaslam8263 weeks ago c++ easy code vector<int> verticalOrder(Node *root) { map<int,vector<int>> um; vector<int> v; queue<pair<Node*,int>> q; q.push({root,0}); while(!q.empty()){ Node* temp=q.front().first; int h=q.front().second; um[q.front().second].push_back(temp->data); q.pop(); if(temp->left) q.push(make_pair(temp->left,h-1)); if(temp->right) q.push(make_pair(temp->right,h+1)); } for(auto &it: um){ for(auto &el : it.second) v.push_back(el); } return v; } +1 parasnirban90503 weeks ago vector<int> verticalOrder(Node *root) { vector<int> v; if(root==nullptr) return v; map<int,vector<int>> mp; queue<pair<Node*,int>> q; q.push({root,0}); while(!q.empty()){ vector<int> temp; auto it=q.front(); q.pop(); Node* node=it.first; int hd=it.second; mp[hd].push_back(node->data); if(node->left!=nullptr){ q.push({node->left,hd-1}); } if(node->right!=nullptr){ q.push({node->right,hd+1}); } } for(auto it: mp){ for(auto j : it.second){ v.push_back(j); } } return v; } 0 sachinsinghss193 weeks ago Python solution1. Create vertical levels for left and right sides of the tree 2. 
Iterate through the levels in sorted order

def verticalOrder(self, root):
    # Your code here
    levels = {}
    q = [(root, 0)]
    while q:
        nd, ind = q.pop(0)
        if ind not in levels:
            levels[ind] = [nd.data]
        else:
            levels[ind].append(nd.data)
        if nd.left:
            q.append((nd.left, ind + 1))
        if nd.right:
            q.append((nd.right, ind - 1))
    op = []
    for v in sorted(levels, reverse=True):
        op += levels[v]
    return op
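The BFS-with-horizontal-distance idea used in the solutions above can be written as a short self-contained sketch; the Node class here is only a stand-in for the judge's own node type:

```python
from collections import deque, defaultdict

class Node:
    def __init__(self, data):
        self.data, self.left, self.right = data, None, None

def vertical_order(root):
    # BFS, tagging each node with its horizontal distance from the root:
    # left child is hd - 1, right child is hd + 1
    if root is None:
        return []
    cols = defaultdict(list)
    q = deque([(root, 0)])
    while q:
        node, hd = q.popleft()
        cols[hd].append(node.data)   # level order within a column
        if node.left:
            q.append((node.left, hd - 1))
        if node.right:
            q.append((node.right, hd + 1))
    out = []
    for hd in sorted(cols):          # leftmost column first
        out.extend(cols[hd])
    return out

# tree from Example 2 of the problem statement
root = Node(1)
root.left, root.right = Node(2), Node(3)
root.left.left, root.left.right = Node(4), Node(5)
root.right.right = Node(6)
print(vertical_order(root))   # [4, 2, 1, 5, 3, 6]
```

Using BFS (rather than DFS) is what guarantees that nodes sharing a column come out in level order, which is exactly what the problem asks for.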
[ { "code": null, "e": 492, "s": 238, "text": "Given a Binary Tree, find the vertical traversal of it starting from the leftmost level to the rightmost level.\nIf there are multiple nodes passing through a vertical line, then they should be printed as they appear in level order traversal of the tree." }, { "code": null, "e": 503, "s": 492, "text": "Example 1:" }, { "code": null, "e": 696, "s": 503, "text": "Input:\n 1\n / \\\n 2 3\n / \\ / \\\n 4 5 6 7\n \\ \\\n 8 9 \nOutput: \n4 2 1 5 6 3 8 7 9 \nExplanation:\n\n" }, { "code": null, "e": 707, "s": 696, "text": "Example 2:" }, { "code": null, "e": 799, "s": 707, "text": "Input:\n 1\n / \\\n 2 3\n / \\ \\\n4 5 6\nOutput: 4 2 1 5 3 6\n" }, { "code": null, "e": 1205, "s": 799, "text": "Your Task:\nYou don't need to read input or print anything. Your task is to complete the function verticalOrder() which takes the root node as input parameter and returns an array containing the vertical order traversal of the tree from the leftmost to the rightmost level. If 2 nodes lie in the same vertical level, they should be printed in the order they appear in the level order traversal of the tree." 
}, { "code": null, "e": 1267, "s": 1205, "text": "Expected Time Complexity: O(N)\nExpected Auxiliary Space: O(N)" }, { "code": null, "e": 1310, "s": 1267, "text": "Constraints:\n1 <= Number of nodes <= 3*104" }, { "code": null, "e": 1312, "s": 1310, "text": "0" }, { "code": null, "e": 1336, "s": 1312, "text": "tirtha19025682 days ago" }, { "code": null, "e": 2634, "s": 1336, "text": "class Solution\n{\n static class pair{\n int dist;\n Node node;\n pair(int dist,Node node){\n this.dist = dist;\n this.node = node;\n }\n }\n \n static ArrayList <Integer> verticalOrder(Node root)\n {\n if(root == null) return null;\n ArrayList<Integer>ans = new ArrayList<>();\n Queue<pair>q = new LinkedList<>();\n Map<Integer,ArrayList<Integer>> map = new TreeMap<>();\n q.add(new pair(0,root));\n while(!q.isEmpty()){\n pair curr = q.poll();\n \n if(map.containsKey(curr.dist)){\n map.get(curr.dist).add(curr.node.data);\n }else{\n ArrayList<Integer>temp = new ArrayList<>();\n temp.add(curr.node.data);\n map.put(curr.dist,temp);\n }\n \n if(curr.node.left != null){\n q.add(new pair(curr.dist-1,curr.node.left));\n }\n if(curr.node.right != null){\n q.add(new pair(curr.dist+1,curr.node.right));\n }\n }\n \n for(Map.Entry<Integer,ArrayList<Integer>>m:map.entrySet()){\n for(int i=0;i<m.getValue().size();i++){\n ans.add(m.getValue().get(i));\n }\n }\n \n \n \n return ans;\n }\n}" }, { "code": null, "e": 2636, "s": 2634, "text": "0" }, { "code": null, "e": 2661, "s": 2636, "text": "akashkhurana282 days ago" }, { "code": null, "e": 2675, "s": 2661, "text": "JAVA SOLUTION" }, { "code": null, "e": 2735, "s": 2675, "text": " static class Pair { Node node; int hd;" }, { "code": null, "e": 3803, "s": 2735, "text": " Pair(Node n, int h) { node = n; hd = h; } } static ArrayList<Integer> verticalOrder(Node root) { ArrayList<Integer>ans=new ArrayList<>(); if(root==null) return ans; Queue<Pair>q=new LinkedList<>(); Map<Integer,ArrayList<Integer>>mp= new TreeMap<>(); q.add(new Pair(root,0)); 
while(!q.isEmpty()){ Pair p=q.poll(); Node curr= p.node; int hd=p.hd; if(mp.containsKey(hd)){ mp.get(hd).add(curr.data); } else{ ArrayList<Integer>al= new ArrayList<>(); al.add(curr.data); mp.put(hd,al); } if(curr.left!=null) q.add(new Pair(curr.left,hd-1)); if(curr.right!=null) q.add(new Pair(curr.right,hd+1)); } for(Map.Entry<Integer,ArrayList<Integer>>e:mp.entrySet()){ ans.addAll(e.getValue()); } return ans; }} " }, { "code": null, "e": 3805, "s": 3803, "text": "0" }, { "code": null, "e": 3832, "s": 3805, "text": "akshu199bhardwaj1 week ago" }, { "code": null, "e": 3868, "s": 3832, "text": "What is the problem with this code?" }, { "code": null, "e": 4392, "s": 3868, "text": " public: //Function to find the vertical order traversal of Binary Tree. void helper(Node* root, int hd, map<int,vector<int>> &m){ if(!root) return; m[hd].push_back(root->data); helper(root->left,hd-1,m); helper(root->right,hd+1,m); } vector<int> verticalOrder(Node *root) { vector<int> ans; map<int,vector<int>> m; helper(root,0,m); for(auto e: m){ for(int i=0; i<e.second.size(); i++) ans.push_back(e.second[i]); } return ans; }" }, { "code": null, "e": 4394, "s": 4392, "text": "0" }, { "code": null, "e": 4421, "s": 4394, "text": "saicharanthammi2 weeks ago" }, { "code": null, "e": 4976, "s": 4421, "text": "vector<int> verticalOrder(Node *root) { map<int,vector<int>>mp; pair<Node*,int>p;p.first=root; p.second=0;vector<int>ans; queue<pair<Node*,int>> q; q.push(p); while(!q.empty()) { p=q.front(); mp[p.second].push_back(p.first->data); if(p.first->left!=NULL) {q.push(make_pair(p.first->left,p.second-1)); } if(p.first->right!=NULL){ q.push(make_pair(p.first->right,p.second+1));} q.pop(); } for(auto it: mp) for(auto iter : it.second) ans.push_back(iter);" }, { "code": null, "e": 4998, "s": 4976, "text": " return ans; }" }, { "code": null, "e": 5000, "s": 4998, "text": "0" }, { "code": null, "e": 5030, "s": 5000, "text": "aishwaryadwani97992 weeks ago" }, { "code": null, "e": 6897, "s": 5030, 
"text": "class Solution{ //Function to find the vertical order traversal of Binary Tree. static class Pair { Node node; int horizontalDistance; Pair(Node node,int horizontalDistance) { this.node=node; this.horizontalDistance=horizontalDistance; } } static ArrayList <Integer> verticalOrder(Node root) { Queue<Pair> bag = new LinkedList<>(); Pair pair = new Pair(root,0); bag.add(pair); Map<Integer,ArrayList<Integer>> mp = new HashMap<>(); int minimumHD = Integer.MAX_VALUE; int maximumHD = Integer.MIN_VALUE; while(!bag.isEmpty()) { pair = bag.poll(); int key = pair.horizontalDistance; int value = pair.node.data; if(mp.containsKey(key)) { ArrayList<Integer> valueList = mp.get(key); valueList.add(value); mp.put(key,valueList); } else { ArrayList<Integer> valueList = new ArrayList<>(); valueList.add(value); mp.put(key,valueList); } if(pair.node.left!=null) { Pair leftPair = new Pair(pair.node.left,key-1); bag.add(leftPair); } if(pair.node.right!=null) { Pair rightPair = new Pair(pair.node.right,key+1); bag.add(rightPair); } minimumHD = Math.min(minimumHD,key); maximumHD = Math.max(maximumHD,key); } ArrayList<Integer> result = new ArrayList<>(); for(int i=0;i<mp.size();i++) { ArrayList<Integer> values = mp.get(minimumHD); for(Integer value : values) result.add(value); minimumHD += 1; } return result; }}" }, { "code": null, "e": 6899, "s": 6897, "text": "0" }, { "code": null, "e": 6918, "s": 6899, "text": "aishwaryadwani9799" }, { "code": null, "e": 6944, "s": 6918, "text": "This comment was deleted." 
}, { "code": null, "e": 6947, "s": 6944, "text": "+3" }, { "code": null, "e": 6969, "s": 6947, "text": "yoginpcs202 weeks ago" }, { "code": null, "e": 7701, "s": 6969, "text": "vector<int> verticalOrder(Node *root)\n {\n //Your code here\n queue<pair<Node*,int>> q;\n map<int,vector<int>>mp;\n q.push(make_pair(root,0));\n while(!q.empty())\n {\n auto p=q.front();\n Node* curr=p.first;\n int hd=p.second;\n mp[hd].push_back(curr->data);\n q.pop();\n if(curr->left!=NULL)\n q.push(make_pair(curr->left,hd-1));\n if(curr->right!=NULL)\n q.push(make_pair(curr->right,hd+1));\n }\n vector<int> ans;\n for(auto i:mp)\n {\n for(auto j:i.second)\n {\n ans.push_back(j);\n }\n }\n return ans;\n \n }" }, { "code": null, "e": 7703, "s": 7701, "text": "0" }, { "code": null, "e": 7725, "s": 7703, "text": "moaslam8263 weeks ago" }, { "code": null, "e": 7739, "s": 7725, "text": "c++ easy code" }, { "code": null, "e": 8345, "s": 7739, "text": " vector<int> verticalOrder(Node *root)\n {\n map<int,vector<int>> um;\n vector<int> v;\n queue<pair<Node*,int>> q;\n q.push({root,0});\n while(!q.empty()){\n Node* temp=q.front().first;\n int h=q.front().second;\n um[q.front().second].push_back(temp->data);\n q.pop();\n if(temp->left) q.push(make_pair(temp->left,h-1));\n if(temp->right) q.push(make_pair(temp->right,h+1));\n }\n for(auto &it: um){\n for(auto &el : it.second) v.push_back(el);\n }\n return v;\n }" }, { "code": null, "e": 8348, "s": 8345, "text": "+1" }, { "code": null, "e": 8375, "s": 8348, "text": "parasnirban90503 weeks ago" }, { "code": null, "e": 9080, "s": 8375, "text": "vector<int> verticalOrder(Node *root) { vector<int> v; if(root==nullptr) return v; map<int,vector<int>> mp; queue<pair<Node*,int>> q; q.push({root,0}); while(!q.empty()){ vector<int> temp; auto it=q.front(); q.pop(); Node* node=it.first; int hd=it.second; mp[hd].push_back(node->data); if(node->left!=nullptr){ q.push({node->left,hd-1}); } if(node->right!=nullptr){ q.push({node->right,hd+1}); } } for(auto it: mp){ 
for(auto j : it.second){ v.push_back(j); } } return v; }" }, { "code": null, "e": 9082, "s": 9080, "text": "0" }, { "code": null, "e": 9109, "s": 9082, "text": "sachinsinghss193 weeks ago" }, { "code": null, "e": 9187, "s": 9109, "text": "Python solution1. Create vertical levels for left and right sides of the tree" }, { "code": null, "e": 9233, "s": 9187, "text": "2. Iterate through the levels in sorted order" }, { "code": null, "e": 9760, "s": 9233, "text": "def verticalOrder(self, root): #Your code here levels = {} q = [(root, 0)] while q: nd, ind = q.pop(0) if ind not in levels: levels[ind] = [nd.data] else: levels[ind].append(nd.data) if nd.left: q.append((nd.left, ind + 1)) if nd.right: q.append((nd.right, ind - 1)) op = [] for v in sorted(levels, reverse=True): op += levels[v] return op" }, { "code": null, "e": 9906, "s": 9760, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 9942, "s": 9906, "text": " Login to access your submissions. " }, { "code": null, "e": 9952, "s": 9942, "text": "\nProblem\n" }, { "code": null, "e": 9962, "s": 9952, "text": "\nContest\n" }, { "code": null, "e": 10025, "s": 9962, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 10173, "s": 10025, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 10381, "s": 10173, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 10487, "s": 10381, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
sum() function in Python
18 May, 2022

The sum of numbers in a list is required everywhere. Python provides an inbuilt function sum() which sums up the numbers in the list.

Syntax:
sum(iterable, start)
iterable : the iterable can be a list, tuple, or dictionary, but most importantly it should contain numbers.
start : this value is added to the sum of numbers in the iterable. If start is not given in the syntax, it is assumed to be 0.

Two possible syntaxes:
sum(a)
a is the list; it adds up all the numbers in the list a and takes start to be 0, so it returns only the sum of the numbers in the list.
sum(a, start)
this returns the sum of the list + start

Below is the Python implementation of sum():

# Python code to demonstrate the working of
# sum()

numbers = [1, 2, 3, 4, 5, 1, 4, 5]

# start parameter is not provided
Sum = sum(numbers)
print(Sum)

# start = 10
Sum = sum(numbers, 10)
print(Sum)

Output:
25
35

Error and Exceptions
TypeError : This error is raised in the case when there is anything other than numbers in the list.

# Python code to demonstrate the exception of
# sum()

arr = ["a"]

# start parameter is not provided
Sum = sum(arr)
print(Sum)

# start = 10
Sum = sum(arr, 10)
print(Sum)

Runtime Error :
Traceback (most recent call last):
  File "/home/23f0f6c9e022aa96d6c560a7eb4cf387.py", line 6, in <module>
    Sum = sum(arr)
TypeError: unsupported operand type(s) for +: 'int' and 'str'

So the list should contain only numbers.

Practical Application: Problems where we require the sum to be calculated to do further operations, such as finding out the average of numbers.

# Python code to demonstrate the practical application
# of sum()

numbers = [1, 2, 3, 4, 5, 1, 4, 5]

Sum = sum(numbers)
average = Sum / len(numbers)

print(average)

Output:
3.125

Python-Built-in-functions
Python
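sum() is not limited to lists: it accepts any iterable of numbers, and the start value is simply added on top. A few further examples; the math.fsum comparison at the end is a standard-library aside for when exact float summation matters:

```python
import math

nums = [1, 2, 3, 4, 5]

print(sum(nums))                  # 15
print(sum(nums, 10))              # 25: start=10 is added to the total
print(sum(x * x for x in nums))   # 55: any iterable of numbers works
print(sum({1: 'a', 2: 'b'}))      # 3: iterating a dict yields its keys

# floats accumulate rounding error; math.fsum tracks partial sums exactly
print(sum([0.1] * 10))            # 0.9999999999999999
print(math.fsum([0.1] * 10))      # 1.0
```

Note that sum() deliberately refuses strings (use str.join for those), which is why the TypeError example above fails.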
[ { "code": null, "e": 52, "s": 24, "text": "\n18 May, 2022" }, { "code": null, "e": 184, "s": 52, "text": "Sum of numbers in the list is required everywhere. Python provides an inbuilt function sum() which sums up the numbers in the list." }, { "code": null, "e": 192, "s": 184, "text": "Syntax:" }, { "code": null, "e": 457, "s": 192, "text": "sum(iterable, start) \niterable : iterable can be anything list , tuples or dictionaries ,\n but most importantly it should be numbers.\nstart : this start is added to the sum of \nnumbers in the iterable. \nIf start is not given in the syntax , it is assumed to be 0." }, { "code": null, "e": 480, "s": 457, "text": "Possible two syntaxes:" }, { "code": null, "e": 682, "s": 480, "text": "sum(a)\na is the list , it adds up all the numbers in the \nlist a and takes start to be 0, so returning \nonly the sum of the numbers in the list.\nsum(a, start)\nthis returns the sum of the list + start \n" }, { "code": null, "e": 730, "s": 682, "text": "Below is the Python implementation of the sum()" }, { "code": "# Python code to demonstrate the working of # sum() numbers = [1,2,3,4,5,1,4,5] # start parameter is not providedSum = sum(numbers)print(Sum) # start = 10Sum = sum(numbers, 10)print(Sum)", "e": 921, "s": 730, "text": null }, { "code": null, "e": 929, "s": 921, "text": "Output:" }, { "code": null, "e": 936, "s": 929, "text": "25\n35\n" }, { "code": null, "e": 957, "s": 936, "text": "Error and Exceptions" }, { "code": null, "e": 1057, "s": 957, "text": "TypeError : This error is raised in the case when there is anything other then numbers in the list." 
}, { "code": "# Python code to demonstrate the exception of # sum()arr = [\"a\"] # start parameter is not providedSum = sum(arr)print(Sum) # start = 10Sum = sum(arr, 10)print(Sum)", "e": 1223, "s": 1057, "text": null }, { "code": null, "e": 1239, "s": 1223, "text": "Runtime Error :" }, { "code": null, "e": 1420, "s": 1239, "text": "Traceback (most recent call last):\n File \"/home/23f0f6c9e022aa96d6c560a7eb4cf387.py\", line 6, in \n Sum = sum(arr)\nTypeError: unsupported operand type(s) for +: 'int' and 'str'\n" }, { "code": null, "e": 1455, "s": 1420, "text": "So the list should contain numbers" }, { "code": null, "e": 1594, "s": 1455, "text": "Practical Application: Problems where we require sum to be calculated to do further operations such as finding out the average of numbers." }, { "code": "# Python code to demonstrate the practical application# of sum() numbers = [1,2,3,4,5,1,4,5] # start = 10Sum = sum(numbers)average= Sum/len(numbers) print (average)", "e": 1761, "s": 1594, "text": null }, { "code": null, "e": 1769, "s": 1761, "text": "Output:" }, { "code": null, "e": 1772, "s": 1769, "text": "3\n" }, { "code": null, "e": 1798, "s": 1772, "text": "Python-Built-in-functions" }, { "code": null, "e": 1805, "s": 1798, "text": "Python" }, { "code": null, "e": 1903, "s": 1805, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 1945, "s": 1903, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 1967, "s": 1945, "text": "Enumerate() in Python" }, { "code": null, "e": 1993, "s": 1967, "text": "Python String | replace()" }, { "code": null, "e": 2025, "s": 1993, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 2054, "s": 2025, "text": "*args and **kwargs in Python" }, { "code": null, "e": 2081, "s": 2054, "text": "Python Classes and Objects" }, { "code": null, "e": 2117, "s": 2081, "text": "Convert integer to string in Python" }, { "code": null, "e": 2148, "s": 2117, "text": "Python | os.path.join() method" }, { "code": null, "e": 2169, "s": 2148, "text": "Python OOPs Concepts" } ]
Find the maximum GCD of the siblings of a Binary Tree
06 Jun, 2022

Given a 2d-array arr[][] which represents the edges (parent, child pairs) of a Binary tree, the task is to find the maximum GCD of the siblings of this tree without actually constructing it.

Examples:

Input: arr[][] = {{4, 5}, {4, 2}, {2, 3}, {2, 1}, {3, 6}, {3, 12}}
Output: 6
Explanation: For the above tree, the maximum GCD for the siblings is formed by the nodes 6 and 12, the children of node 3.

Input: arr[][] = {{5, 4}, {5, 8}, {4, 6}, {4, 9}, {8, 10}, {10, 20}, {10, 30}}
Output: 10

Approach: The idea is to form a vector and store the tree in the form of the vector. After storing the tree in the form of a vector, the following cases occur:

If the vector size is 0 or 1, then print 0 as a GCD cannot be found.
For all other cases, since we store the tree in the form of pairs, we consider the first values of two pairs and compare them. But first, you need to sort the pairs of edges. For example, let's assume there are two pairs in the vector, A and B. We check if:

A.first == B.first

If both of them match, then both of them belong to the same parent. Therefore, we compute the GCD of the second values of the pairs and finally print the maximum of all such GCDs.
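The approach can be sketched in a few lines of Python before the full implementations; the helper name max_sibling_gcd is illustrative, not part of the article's code:

```python
from math import gcd

def max_sibling_gcd(edges):
    # group children by parent; in a binary tree each parent
    # has at most two children, i.e. at most one sibling pair
    children = {}
    for parent, child in edges:
        children.setdefault(parent, []).append(child)
    best = 0
    for kids in children.values():
        if len(kids) == 2:
            best = max(best, gcd(kids[0], kids[1]))
    return best

edges = [(5, 4), (5, 8), (4, 6), (4, 9), (8, 10), (10, 20), (10, 30)]
print(max_sibling_gcd(edges))   # 10
```

Grouping by parent is equivalent to the sort-then-compare-adjacent trick in the implementations below: after sorting by the first element, two children of the same parent always end up next to each other.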
Below is the implementation of the above approach:

C++ Java Python3 C# Javascript

// C++ program to find the maximum
// GCD of the siblings of a binary tree
#include <bits/stdc++.h>
using namespace std;

// Function to find maximum GCD
int max_gcd(vector<pair<int, int> >& v)
{
    // No child or single child
    if (v.size() == 1 || v.size() == 0)
        return 0;

    sort(v.begin(), v.end());

    // To get the first pair
    pair<int, int> a = v[0];
    pair<int, int> b;

    int ans = INT_MIN;

    for (int i = 1; i < v.size(); i++) {
        b = v[i];

        // If both the pairs belong to
        // the same parent
        if (b.first == a.first)

            // Update ans with the max
            // of current gcd and
            // gcd of both children
            ans = max(ans, __gcd(a.second, b.second));

        // Update previous
        // for next iteration
        a = b;
    }
    return ans;
}

// Driver function
int main()
{
    vector<pair<int, int> > v;

    v.push_back(make_pair(5, 4));
    v.push_back(make_pair(5, 8));
    v.push_back(make_pair(4, 6));
    v.push_back(make_pair(4, 9));
    v.push_back(make_pair(8, 10));
    v.push_back(make_pair(10, 20));
    v.push_back(make_pair(10, 30));

    cout << max_gcd(v);

    return 0;
}

// Java program to find the maximum
// GCD of the siblings of a binary tree
import java.util.*;
import java.lang.*;

class GFG{

// Function to find maximum GCD
static int max_gcd(ArrayList<int[]> v)
{
    // No child or single child
    if (v.size() == 1 || v.size() == 0)
        return 0;

    Collections.sort(v, new Comparator<int[]>()
    {
        public int compare(int[] a, int[] b)
        {
            return a[0] - b[0];
        }
    });

    // To get the first pair
    int[] a = v.get(0);
    int[] b = new int[2];

    int ans = Integer.MIN_VALUE;

    for(int i = 1; i < v.size(); i++)
    {
        b = v.get(i);

        // If both the pairs belong to
        // the same parent
        if (b[0] == a[0])

            // Update ans with the max
            // of current gcd and
            // gcd of both children
            ans = Math.max(ans, gcd(a[1], b[1]));

        // Update previous
        // for next iteration
        a = b;
    }
    return ans;
}

static int gcd(int a, int b)
{
    if (b == 0)
        return a;

    return gcd(b, a % b);
}

// Driver code
public static void main(String[] args)
{
    ArrayList<int[]> v = new ArrayList<>();
    v.add(new int[]{5, 4});
    v.add(new int[]{5, 8});
    v.add(new int[]{4, 6});
    v.add(new int[]{4, 9});
    v.add(new int[]{8, 10});
    v.add(new int[]{10, 20});
    v.add(new int[]{10, 30});

    System.out.println(max_gcd(v));
}
}

// This code is contributed by offbeat

# Python3 program to find the maximum
# GCD of the siblings of a binary tree
from math import gcd

# Function to find maximum GCD
def max_gcd(v):

    # No child or single child
    if (len(v) == 1 or len(v) == 0):
        return 0

    v.sort()

    # To get the first pair
    a = v[0]
    ans = -10**9

    for i in range(1, len(v)):
        b = v[i]

        # If both the pairs belong to
        # the same parent
        if (b[0] == a[0]):

            # Update ans with the max
            # of current gcd and
            # gcd of both children
            ans = max(ans, gcd(a[1], b[1]))

        # Update previous
        # for next iteration
        a = b

    return ans

# Driver function
if __name__ == '__main__':

    v = []
    v.append([5, 4])
    v.append([5, 8])
    v.append([4, 6])
    v.append([4, 9])
    v.append([8, 10])
    v.append([10, 20])
    v.append([10, 30])

    print(max_gcd(v))

# This code is contributed by mohit kumar 29

// C# program to find the maximum
// GCD of the siblings of a binary tree
using System.Collections;
using System;

// Comparer so ArrayList.Sort can order int[] edges by parent;
// sorting int[] elements without one throws at runtime
class EdgeComparer : IComparer
{
    public int Compare(object x, object y)
    {
        return ((int[])x)[0] - ((int[])y)[0];
    }
}

class GFG{

// Function to find maximum GCD
static int max_gcd(ArrayList v)
{
    // No child or single child
    if (v.Count == 1 || v.Count == 0)
        return 0;

    v.Sort(new EdgeComparer());

    // To get the first pair
    int[] a = (int[])v[0];
    int[] b = new int[2];

    int ans = -10000000;

    for(int i = 1; i < v.Count; i++)
    {
        b = (int[])v[i];

        // If both the pairs belong to
        // the same parent
        if (b[0] == a[0])

            // Update ans with the max
            // of current gcd and
            // gcd of both children
            ans = Math.Max(ans, gcd(a[1], b[1]));

        // Update previous
        // for next iteration
        a = b;
    }
    return ans;
}

static int gcd(int a, int b)
{
    if (b == 0)
        return a;

    return gcd(b, a % b);
}

// Driver code
public static void Main(string[] args)
{
    ArrayList v = new ArrayList();

    v.Add(new int[]{5, 4});
    v.Add(new int[]{5, 8});
    v.Add(new int[]{4, 6});
    v.Add(new int[]{4, 9});
    v.Add(new int[]{8, 10});
    v.Add(new int[]{10, 20});
    v.Add(new int[]{10, 30});
    Console.Write(max_gcd(v));
}
}

// This code is contributed by rutvik_56

<script>
// JavaScript program to find the maximum
// GCD of the siblings of a binary tree

// Function to find maximum GCD
function max_gcd(v)
{
    // No child or single child
    if (v.length == 1 || v.length == 0)
        return 0;

    // Sort the edge pairs by parent; the comparator must
    // compare the first elements, since (a, b) => a - b
    // does not order arrays of pairs
    v.sort((a, b) => a[0] - b[0]);

    // To get the first pair
    let a = v[0];
    let b;

    let ans = Number.MIN_SAFE_INTEGER;

    for (let i = 1; i < v.length; i++) {
        b = v[i];

        // If both the pairs belong to
        // the same parent
        if (b[0] == a[0])

            // Update ans with the max
            // of current gcd and
            // gcd of both children
            ans = Math.max(ans, gcd(a[1], b[1]));

        // Update previous
        // for next iteration
        a = b;
    }
    return ans;
}

function gcd(a, b)
{
    if (b == 0)
        return a;

    return gcd(b, a % b);
}

// Driver function
let v = new Array();

v.push([5, 4]);
v.push([5, 8]);
v.push([4, 6]);
v.push([4, 9]);
v.push([8, 10]);
v.push([10, 20]);
v.push([10, 30]);

document.write(max_gcd(v));

// This code is contributed by gfgking
</script>

Output:

10
[ { "code": null, "e": 54, "s": 26, "text": "\n06 Jun, 2022" }, { "code": null, "e": 223, "s": 54, "text": "Given a 2d-array arr[][] which represents the nodes of a Binary tree, the task is to find the maximum GCD of the siblings of this tree without actually constructing it." }, { "code": null, "e": 234, "s": 223, "text": "Example: " }, { "code": null, "e": 326, "s": 234, "text": "Input: arr[][] = {{4, 5}, {4, 2}, {2, 3}, {2, 1}, {3, 6}, {3, 12}} Output: 6 Explanation: " }, { "code": null, "e": 440, "s": 326, "text": "For the above tree, the maximum GCD for the siblings is formed for the nodes 6 and 12 for the children of node 3." }, { "code": null, "e": 532, "s": 440, "text": "Input: arr[][] = {{5, 4}, {5, 8}, {4, 6}, {4, 9}, {8, 10}, {10, 20}, {10, 30}} Output: 10 " }, { "code": null, "e": 693, "s": 532, "text": "Approach: The idea is to form a vector and store the tree in the form of the vector. After storing the tree in the form of a vector, the following cases occur: " }, { "code": null, "e": 763, "s": 693, "text": "If the vector size is 0 or 1, then print 0 as GCD could not be found." }, { "code": null, "e": 1019, "s": 763, "text": "For all other cases, since we store the tree in the form of a pair, we consider the first values of two pairs and compare them. But first, you need to Sort the pair of edges.For example, let’s assume there are two pairs in the vector A and B. We check if:" }, { "code": null, "e": 1038, "s": 1019, "text": "A.first == B.first" }, { "code": null, "e": 1220, "s": 1038, "text": "If both of them match, then both of them belongs to the same parent. Therefore, we compute the GCD of the second values in the pairs and finally print the maximum of all such GCD’s." 
}, { "code": null, "e": 1273, "s": 1220, "text": "Below is the implementation of the above approach: " }, { "code": null, "e": 1277, "s": 1273, "text": "C++" }, { "code": null, "e": 1282, "s": 1277, "text": "Java" }, { "code": null, "e": 1290, "s": 1282, "text": "Python3" }, { "code": null, "e": 1293, "s": 1290, "text": "C#" }, { "code": null, "e": 1304, "s": 1293, "text": "Javascript" }, { "code": "// C++ program to find the maximum// GCD of the siblings of a binary tree #include <bits/stdc++.h>using namespace std; // Function to find maximum GCDint max_gcd(vector<pair<int, int> >& v){ // No child or Single child if (v.size() == 1 || v.size() == 0) return 0; sort(v.begin(), v.end()); // To get the first pair pair<int, int> a = v[0]; pair<int, int> b; int ans = INT_MIN; for (int i = 1; i < v.size(); i++) { b = v[i]; // If both the pairs belongs to // the same parent if (b.first == a.first) // Update ans with the max // of current gcd and // gcd of both children ans = max(ans, __gcd(a.second, b.second)); // Update previous // for next iteration a = b; } return ans;} // Driver functionint main(){ vector<pair<int, int> > v; v.push_back(make_pair(5, 4)); v.push_back(make_pair(5, 8)); v.push_back(make_pair(4, 6)); v.push_back(make_pair(4, 9)); v.push_back(make_pair(8, 10)); v.push_back(make_pair(10, 20)); v.push_back(make_pair(10, 30)); cout << max_gcd(v); return 0;}", "e": 2517, "s": 1304, "text": null }, { "code": "// Java program to find the maximum// GCD of the siblings of a binary treeimport java.util.*;import java.lang.*; class GFG{ // Function to find maximum GCDstatic int max_gcd(ArrayList<int[]> v){ // No child or Single child if (v.size() == 1 || v.size() == 0) return 0; Collections.sort(v, new Comparator<int[]>() { public int compare(int[] a, int[] b) { return a[0]-b[0]; } }); // To get the first pair int[] a = v.get(0); int[] b = new int[2]; int ans = Integer.MIN_VALUE; for(int i = 1; i < v.size(); i++) { b = v.get(i); // If both the pairs belongs to // the 
same parent if (b[0] == a[0]) // Update ans with the max // of current gcd and // gcd of both children ans = Math.max(ans, gcd(a[1], b[1])); // Update previous // for next iteration a = b; } return ans;} static int gcd(int a, int b){ if (b == 0) return a; return gcd(b, a % b);} // Driver codepublic static void main(String[] args){ ArrayList<int[]> v = new ArrayList<>(); v.add(new int[]{5, 4}); v.add(new int[]{5, 8}); v.add(new int[]{4, 6}); v.add(new int[]{4, 9}); v.add(new int[]{8, 10}); v.add(new int[]{10, 20}); v.add(new int[]{10, 30}); System.out.println(max_gcd(v));}} // This code is contributed by offbeat", "e": 4017, "s": 2517, "text": null }, { "code": "# Python3 program to find the maximum# GCD of the siblings of a binary treefrom math import gcd # Function to find maximum GCDdef max_gcd(v): # No child or Single child if (len(v) == 1 or len(v) == 0): return 0 v.sort() # To get the first pair a = v[0] ans = -10**9 for i in range(1, len(v)): b = v[i] # If both the pairs belongs to # the same parent if (b[0] == a[0]): # Update ans with the max # of current gcd and # gcd of both children ans = max(ans, gcd(a[1], b[1])) # Update previous # for next iteration a = b return ans # Driver functionif __name__ == '__main__': v=[] v.append([5, 4]) v.append([5, 8]) v.append([4, 6]) v.append([4, 9]) v.append([8, 10]) v.append([10, 20]) v.append([10, 30]) print(max_gcd(v)) # This code is contributed by mohit kumar 29 ", "e": 4948, "s": 4017, "text": null }, { "code": "// C# program to find the maximum// GCD of the siblings of a binary treeusing System.Collections;using System; class GFG{ // Function to find maximum GCDstatic int max_gcd(ArrayList v){ // No child or Single child if (v.Count == 1 || v.Count == 0) return 0; v.Sort(); // To get the first pair int[] a = (int [])v[0]; int[] b = new int[2]; int ans = -10000000; for(int i = 1; i < v.Count; i++) { b = (int[])v[i]; // If both the pairs belongs to // the same parent if (b[0] == a[0]) // Update ans with the max // of 
current gcd and // gcd of both children ans = Math.Max(ans, gcd(a[1], b[1])); // Update previous // for next iteration a = b; } return ans;} static int gcd(int a, int b){ if (b == 0) return a; return gcd(b, a % b);} // Driver codepublic static void Main(string[] args){ ArrayList v = new ArrayList(); v.Add(new int[]{5, 4}); v.Add(new int[]{5, 8}); v.Add(new int[]{4, 6}); v.Add(new int[]{4, 9}); v.Add(new int[]{8, 10}); v.Add(new int[]{10, 20}); v.Add(new int[]{10, 30}); Console.Write(max_gcd(v));}} // This code is contributed by rutvik_56", "e": 6221, "s": 4948, "text": null }, { "code": "<script> // JavaScript program to find the maximum// GCD of the siblings of a binary tree // Function to find maximum GCDfunction max_gcd(v){ // No child or Single child if (v.length == 1 || v.length == 0) return 0; v.sort((a, b) => a - b); // To get the first pair let a = v[0]; let b; let ans = Number.MIN_SAFE_INTEGER; for (let i = 1; i < v.length; i++) { b = v[i]; // If both the pairs belongs to // the same parent if (b[0] == a[0]) // Update ans with the max // of current gcd and // gcd of both children ans = Math.max(ans, gcd(a[1], b[1])); // Update previous // for next iteration a = b; } return ans;} function gcd(a, b){ if (b == 0) return a; return gcd(b, a % b);} // Driver function let v = new Array(); v.push([5, 4]); v.push([5, 8]); v.push([4, 6]); v.push([4, 9]); v.push([8, 10]); v.push([10, 20]); v.push([10, 30]); document.write(max_gcd(v)); // This code is contributed by gfgking </script>", "e": 7343, "s": 6221, "text": null }, { "code": null, "e": 7346, "s": 7343, "text": "10" }, { "code": null, "e": 7363, "s": 7348, "text": "mohit kumar 29" }, { "code": null, "e": 7371, "s": 7363, "text": "offbeat" }, { "code": null, "e": 7381, "s": 7371, "text": "rutvik_56" }, { "code": null, "e": 7392, "s": 7381, "text": "snape_here" }, { "code": null, "e": 7400, "s": 7392, "text": "gfgking" }, { "code": null, "e": 7411, "s": 7400, "text": "vinayedula" }, { "code": null, "e": 7423, 
"s": 7411, "text": "Binary Tree" }, { "code": null, "e": 7434, "s": 7423, "text": "cpp-vector" }, { "code": null, "e": 7442, "s": 7434, "text": "GCD-LCM" }, { "code": null, "e": 7446, "s": 7442, "text": "HCF" }, { "code": null, "e": 7453, "s": 7446, "text": "Arrays" }, { "code": null, "e": 7458, "s": 7453, "text": "Tree" }, { "code": null, "e": 7465, "s": 7458, "text": "Arrays" }, { "code": null, "e": 7470, "s": 7465, "text": "Tree" } ]
How to Calculate SMAPE in Excel?
28 Jul, 2021

In statistics, we often use Forecasting Accuracy, which denotes the closeness of a quantity to the actual value of that quantity. The actual value is also known as the true value. It denotes the degree of closeness, a verification process widely used by business professionals to keep track of their sales and exchanges so that demand and supply can be mapped every year. There are various methods to calculate Forecasting Accuracy.

One of the most common methods used to calculate Forecasting Accuracy is MAPE, abbreviated from Mean Absolute Percentage Error. It is an effective and convenient method because it becomes easy to interpret the accuracy just by looking at the MAPE value.

Symmetric Mean Absolute Percentage Error, abbreviated as SMAPE, is also used to measure accuracy on the basis of relative errors. It is a prediction accuracy technique and an alternative to MAPE. It is generally represented as a percentage (%).

In this article, we are going to discuss how to calculate SMAPE in Excel using a suitable example.

The formula to calculate SMAPE in Excel is:

SMAPE = (1/n) * Σ ( |Forecast - Actual| / ((|Actual| + |Forecast|) / 2) )

The above formula gives a lower bound of 0% and an upper bound of 200%.

Note:

The above formula becomes invalid when both the Actual value and the Forecast value tend to zero.
Due to the presence of upper and lower bounds, it provides more symmetric results, which is not the case with MAPE.
Generally, we prefer the percentage error to lie between 0% and 100%. So, the division-by-2 term is eliminated and the above formula is modified to calculate SMAPE.
Example: Consider the dataset shown below:

The functions needed for the formulas in Excel are:

ABS: To calculate the absolute value.
AVERAGE: To calculate the mean.

The steps are:

1. Insert the dataset in the Excel sheet.

2. Calculate the subpart of the formula inside the summation, which is also known as the SMAPE difference.

=2*ABS(Cell_No_Act-Cell_No_Fore)/(ABS(Cell_No_Act)+ABS(Cell_No_Fore))

where
ABS : Used to calculate the absolute value
Cell_No_Act : Cell number where the Actual value is present
Cell_No_Fore : Cell number where the Forecast value is present

The above formula will calculate the SMAPE difference for the first entry in the dataset. Similarly, you can drag the AutoFill Options button to get the SMAPE difference for the entire dataset.

SMAPE Difference

3. Now, simply take the mean or the average value of all the data obtained in step 2 using the Excel AVERAGE formula. The syntax is:

=AVERAGE(Cell_Range)

Therefore, the value of SMAPE for the given dataset is 0.0916 or 9.16%.
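The same calculation can be cross-checked outside Excel. The Python sketch below mirrors the two Excel steps: it computes the per-row SMAPE difference 2*ABS(Actual-Forecast)/(ABS(Actual)+ABS(Forecast)) and then averages the results. The sample values are made-up placeholders, since the article's dataset appears only as an image.

```python
# Cross-check of the Excel SMAPE steps in plain Python.
# NOTE: the actual/forecast numbers below are illustrative
# placeholders, not the article's (image-only) dataset.

def smape(actual, forecast):
    # Step 2: per-row SMAPE difference, same as the Excel formula
    diffs = [2 * abs(a - f) / (abs(a) + abs(f))
             for a, f in zip(actual, forecast)]
    # Step 3: AVERAGE over all rows
    return sum(diffs) / len(diffs)

actual = [100, 110, 120, 130]
forecast = [102, 107, 125, 133]
print(round(smape(actual, forecast), 4))
```

A perfect forecast gives 0, and the result never exceeds 2 (i.e. 200%), matching the bounds stated above.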
[ { "code": null, "e": 28, "s": 0, "text": "\n28 Jul, 2021" }, { "code": null, "e": 501, "s": 28, "text": "In statistics, we often use Forecasting Accuracy which denotes the closeness of a quantity to the actual value of that particular quantity. The actual value is also known as the true value. It basically denotes the degree of closeness or a verification process that is highly used by business professionals to keep track records of their sales and exchanges to maintain the demand and supply mapping every year. There are various methods to calculate Forecasting Accuracy." }, { "code": null, "e": 772, "s": 501, "text": "So, one of the most common methods used to calculate the Forecasting Accuracy is MAPE which is abbreviated as Mean Absolute Percentage Error. It is an effective and more convenient method because it becomes easier to interpret the accuracy just by seeing the MAPE value." }, { "code": null, "e": 1028, "s": 772, "text": "Symmetric Mean Absolute Percentage Error abbreviated as SMAPE is also used to measure accuracy on the basis of relative errors. It is basically a prediction accuracy technique an alternative to MAPE. It is generally represented in terms of percentage (%)." }, { "code": null, "e": 1126, "s": 1028, "text": "In this article we are going to discuss how to calculate SMAPE in Excel using a suitable example." }, { "code": null, "e": 1171, "s": 1126, "text": "The formula to calculate SMAPE in Excel is :" }, { "code": null, "e": 1253, "s": 1171, "text": "The above formula gives a lower bound value of 0% and upper bound value of 200%. " }, { "code": null, "e": 1261, "s": 1253, "text": "Note : " }, { "code": null, "e": 1626, "s": 1261, "text": "The above formula becomes invalid when both the Actual value and the Forecast value tend to zero. Due to the presence of upper and lower bounds it provides more symmetric results which is not present in MAPE.Generally, we prefer the percentage error between 0% to 100%. 
So, the term division by 2 is eliminated and the above formula is modified to calculate SMAPE." }, { "code": null, "e": 1725, "s": 1626, "text": "The above formula becomes invalid when both the Actual value and the Forecast value tend to zero. " }, { "code": null, "e": 1836, "s": 1725, "text": "Due to the presence of upper and lower bounds it provides more symmetric results which is not present in MAPE." }, { "code": null, "e": 1993, "s": 1836, "text": "Generally, we prefer the percentage error between 0% to 100%. So, the term division by 2 is eliminated and the above formula is modified to calculate SMAPE." }, { "code": null, "e": 2037, "s": 1993, "text": "Example: Consider the dataset shown below :" }, { "code": null, "e": 2085, "s": 2037, "text": "The functions needed for formulas in Excel are-" }, { "code": null, "e": 2124, "s": 2085, "text": "ABS : To calculate the absolute value." }, { "code": null, "e": 2157, "s": 2124, "text": "AVERAGE : To calculate the mean." }, { "code": null, "e": 2173, "s": 2157, "text": "The steps are :" }, { "code": null, "e": 2216, "s": 2173, "text": "1. Insert the data set in the Excel sheet." }, { "code": null, "e": 2318, "s": 2216, "text": "2. Calculate the subpart of the formula inside the summation which is also known as SMAPE difference." }, { "code": null, "e": 2553, "s": 2318, "text": "=2*ABS(Cell_No_Act-Cell_No_Fore)/(ABS(Cell_No_Act)+ABS(Cell_No_Fore))\n\nwhere\nABS : Used to calculate the absolute value\nCell_No_Act : Cell number where Actual value is present\nCell_No_Fore : Cell number where Forecast value is present" }, { "code": null, "e": 2748, "s": 2553, "text": "The above formula will calculate the SMAPE difference for the first entry in the data set. Similarly, you can drag the AutoFill Options button to get the SMAPE difference for the entire dataset." }, { "code": null, "e": 2765, "s": 2748, "text": "SMAPE Difference" }, { "code": null, "e": 2899, "s": 2765, "text": "3. 
Now, simply take the mean or the average value of all the data obtained in step 2 using the Excel AVERAGE formula. The syntax is :" }, { "code": null, "e": 2920, "s": 2899, "text": "=AVERAGE(Cell_Range)" }, { "code": null, "e": 2992, "s": 2920, "text": "Therefore, the value of SMAPE for the given dataset is 0.0916 or 9.16%." }, { "code": null, "e": 2999, "s": 2992, "text": "Picked" }, { "code": null, "e": 3005, "s": 2999, "text": "Excel" }, { "code": null, "e": 3103, "s": 3005, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 3141, "s": 3103, "text": "How to Delete Blank Columns in Excel?" }, { "code": null, "e": 3173, "s": 3141, "text": "How to Normalize Data in Excel?" }, { "code": null, "e": 3214, "s": 3173, "text": "How to Get Length of Array in Excel VBA?" }, { "code": null, "e": 3269, "s": 3214, "text": "How to Find the Last Used Row and Column in Excel VBA?" }, { "code": null, "e": 3297, "s": 3269, "text": "How to Use Solver in Excel?" }, { "code": null, "e": 3337, "s": 3297, "text": "How to make a 3 Axis Graph using Excel?" }, { "code": null, "e": 3371, "s": 3337, "text": "Introduction to Excel Spreadsheet" }, { "code": null, "e": 3429, "s": 3371, "text": "How to Show Percentages in Stacked Column Chart in Excel?" }, { "code": null, "e": 3445, "s": 3429, "text": "Macros in Excel" } ]
SQL Query to Find Shortest & Longest String From a Column of a Table
13 Apr, 2021

Here, we are going to see how to find the shortest and longest string from a column of a table in a database with the help of SQL queries. We will first create a database "geeks", then create a table "friends" with "firstName", "lastName", and "age" columns. Then we will run our SQL queries on this table to retrieve the shortest and longest string in a column.

For this article, we will be using MS SQL Server as our database.

Use the below SQL statement to create a database called geeks:

CREATE DATABASE geeks;

USE geeks;

We have the following friends table in our geeks database:

CREATE TABLE friends(
firstName VARCHAR(30) not NULL,
lastName VARCHAR(30) not NULL,
age INT NOT NULL);

You can use the below statement to query the description of the created table:

EXEC SP_COLUMNS friends;

Use the below statement to add data to the friends table:

INSERT INTO friends
values
('Ajit','Yadav', 20),
('Ajay', 'More', 21),
('Amir', 'Jawadwala', 21),
('Zara', 'Khan', 20),
('Yogesh', 'Vaishnav', 21),
('Ashish', 'Yadav', 21),
('Govind', 'Vaishnav', 22),
('Vishal', 'Vishwakarma', 21);

To verify the contents of the table use the below statement:

SELECT * FROM friends;

Now let's find the shortest and longest firstName from the table we have just created, using the len(), min(), and max() functions and the TOP clause.

Use the below syntax to find the shortest firstName in the friends table:

SYNTAX :

SELECT TOP 1 * FROM <table_name> --here we want only one row, that's why TOP 1 *
WHERE
len(<string_column>) =
(SELECT min(len(<string_column>)) FROM <table_name>);

Example :

SELECT TOP 1 * FROM friends
WHERE
len(firstName) =
(SELECT min(len(firstName)) FROM friends);

Output :

If there is more than one string of the same minimum length, and we want to retrieve, lexicographically, the shortest string, then we can do the following:

SYNTAX :

SELECT TOP 1 * FROM <table_name> --here we want only one row, that's why TOP 1 *
WHERE
len(<string_column>) =
(SELECT min(len(<string_column>)) FROM <table_name>)
ORDER BY <column_name>; --in this case column_name would be firstName

Example :

SELECT TOP 1 * FROM friends
WHERE
len(firstName) =
(SELECT min(len(firstName)) FROM friends)
ORDER BY firstName;

Output :

SYNTAX :

SELECT TOP 1 * FROM <table_name>
WHERE
len(<string_column>) =
(SELECT max(len(<string_column>)) FROM <table_name>);

Example :

SELECT TOP 1 * FROM friends
WHERE
len(firstName) =
(SELECT max(len(firstName)) FROM friends);

Output :

SYNTAX :

SELECT TOP 1 * FROM <table_name>
WHERE
len(<string_column>) =
(SELECT max(len(<string_column>)) FROM <table_name>)
ORDER BY <string_column>; --here we order the rows by firstName

Example :

SELECT TOP 1 * FROM friends
WHERE
len(firstName) =
(SELECT max(len(firstName)) FROM friends)
ORDER BY firstName;

Output :
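For quick experimentation outside SQL Server, the same queries can be tried with Python's built-in sqlite3 module. Note the dialect differences: SQLite uses LENGTH() and LIMIT 1 where SQL Server uses len() and TOP 1. The sketch below rebuilds the friends table in memory and runs both queries with the tie-breaking ORDER BY:

```python
import sqlite3

# In-memory copy of the friends table (SQLite dialect:
# LENGTH() and LIMIT replace SQL Server's len() and TOP).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE friends(firstName TEXT, lastName TEXT, age INT)")
conn.executemany(
    "INSERT INTO friends VALUES (?, ?, ?)",
    [('Ajit', 'Yadav', 20), ('Ajay', 'More', 21),
     ('Amir', 'Jawadwala', 21), ('Zara', 'Khan', 20),
     ('Yogesh', 'Vaishnav', 21), ('Ashish', 'Yadav', 21),
     ('Govind', 'Vaishnav', 22), ('Vishal', 'Vishwakarma', 21)])

# Lexicographically first among the shortest first names
shortest = conn.execute(
    "SELECT firstName FROM friends "
    "WHERE LENGTH(firstName) = (SELECT MIN(LENGTH(firstName)) FROM friends) "
    "ORDER BY firstName LIMIT 1").fetchone()[0]

# Lexicographically first among the longest first names
longest = conn.execute(
    "SELECT firstName FROM friends "
    "WHERE LENGTH(firstName) = (SELECT MAX(LENGTH(firstName)) FROM friends) "
    "ORDER BY firstName LIMIT 1").fetchone()[0]

print(shortest, longest)  # prints: Ajay Ashish
```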
[ { "code": null, "e": 28, "s": 0, "text": "\n13 Apr, 2021" }, { "code": null, "e": 389, "s": 28, "text": "Here, we are going to see how to find the shortest and longest string from a column of a table in a database with the help of SQL queries. We will first create a database β€œgeeksβ€œ, then we create a table β€œfriends” with β€œfirstNameβ€œ, β€œlastNameβ€œ, β€œage” columns. Then will perform our SQL query on this table to retrieve the shortest and longest string in a column." }, { "code": null, "e": 459, "s": 389, "text": "For this article, we will be using the MS SQL Server as our database." }, { "code": null, "e": 522, "s": 459, "text": "Use the below SQL statement to create a database called geeks:" }, { "code": null, "e": 545, "s": 522, "text": "CREATE DATABASE geeks;" }, { "code": null, "e": 556, "s": 545, "text": "USE geeks;" }, { "code": null, "e": 617, "s": 556, "text": "We have the following Employee table in our geeks database :" }, { "code": null, "e": 721, "s": 617, "text": "CREATE TABLE friends(\nfirstName VARCHAR(30) not NULL,\nlastName VARCHAR(30) not NULL,\nage INT NOT NULL);" }, { "code": null, "e": 800, "s": 721, "text": "You can use the below statement to query the description of the created table:" }, { "code": null, "e": 825, "s": 800, "text": "EXEC SP_COLUMNS friends;" }, { "code": null, "e": 883, "s": 825, "text": "Use the below statement to add data to the friends table:" }, { "code": null, "e": 1115, "s": 883, "text": "INSERT INTO friends\nvalues\n('Ajit','Yadav', 20),\n('Ajay', 'More', 21),\n('Amir', 'Jawadwala', 21),\n('Zara', 'Khan', 20),\n('Yogesh', 'Vaishnav', 21),\n('Ashish', 'Yadav', 21),\n('Govind', 'Vaishnav', 22),\n('Vishal', 'Vishwakarma', 21);" }, { "code": null, "e": 1176, "s": 1115, "text": "To verify the contents of the table use the below statement:" }, { "code": null, "e": 1199, "s": 1176, "text": "SELECT * FROM friends;" }, { "code": null, "e": 1351, "s": 1199, "text": "Now let’s find the shortest and longest firstName 
from the table we have just created using char_length(), min(), and max() functions and LIMIT clause." }, { "code": null, "e": 1422, "s": 1351, "text": "Use the ow syntax to find the shortest firstName in the friends table:" }, { "code": null, "e": 1432, "s": 1422, "text": "SYNTAX : " }, { "code": null, "e": 1512, "s": 1432, "text": "SELECT TOP 1 * FROM<table_name> –Here we want only one row that’s why TOP 1 *" }, { "code": null, "e": 1518, "s": 1512, "text": "WHERE" }, { "code": null, "e": 1542, "s": 1518, "text": "len(<string_column>) = " }, { "code": null, "e": 1598, "s": 1542, "text": "(SELECT min(len(<string_column>)) FROM<table_name> ) ;" }, { "code": null, "e": 1609, "s": 1598, "text": "Example : " }, { "code": null, "e": 1704, "s": 1609, "text": "SELECT TOP 1 * FROM friends\nWHERE\nlen(firstName) = \n(SELECT min(len(firstName)) FROM friends);" }, { "code": null, "e": 1713, "s": 1704, "text": "Output :" }, { "code": null, "e": 1866, "s": 1713, "text": "If there are more than one strings of the same minimum length, and we want to retrieve, lexicographically, the shortest string then can do as following:" }, { "code": null, "e": 1875, "s": 1866, "text": "SYNTAX :" }, { "code": null, "e": 1955, "s": 1875, "text": "SELECT TOP 1 * FROM<table_name> –Here we want only one row that’s why TOP 1*." }, { "code": null, "e": 1961, "s": 1955, "text": "WHERE" }, { "code": null, "e": 1984, "s": 1961, "text": "len(<string_column>) =" }, { "code": null, "e": 2038, "s": 1984, "text": "(SELECTmin(len(<string_column>)) FROM <table_name> ) " }, { "code": null, "e": 2110, "s": 2038, "text": "ORDER BY <column_name>; –In this case column_name would be firstName." 
}, { "code": null, "e": 2120, "s": 2110, "text": "Example :" }, { "code": null, "e": 2235, "s": 2120, "text": "SELECT TOP 1 * FROM friends\nWHERE\nlen(firstName) = \n(SELECT min(len(firstName)) FROM friends)\nORDER BY firstname;" }, { "code": null, "e": 2244, "s": 2235, "text": "Output :" }, { "code": null, "e": 2253, "s": 2244, "text": "SYNTAX :" }, { "code": null, "e": 2284, "s": 2253, "text": "SELECT TOP 1* FROM<table_name>" }, { "code": null, "e": 2290, "s": 2284, "text": "WHERE" }, { "code": null, "e": 2314, "s": 2290, "text": "len(<string_column>) = " }, { "code": null, "e": 2369, "s": 2314, "text": "(SELECT max(len(<string_column>)) FROM <table_name> );" }, { "code": null, "e": 2379, "s": 2369, "text": "Example :" }, { "code": null, "e": 2472, "s": 2379, "text": "SELECT TOP 1 * FROMfriends\nWHERE\nlen(firstName) = \n(SELECT max(len(firstName)) FROMfriends);" }, { "code": null, "e": 2481, "s": 2472, "text": "Output :" }, { "code": null, "e": 2490, "s": 2481, "text": "SYNTAX :" }, { "code": null, "e": 2521, "s": 2490, "text": "SELECT TOP 1* FROM<table_name>" }, { "code": null, "e": 2527, "s": 2521, "text": "WHERE" }, { "code": null, "e": 2550, "s": 2527, "text": "len(<string_column>) =" }, { "code": null, "e": 2604, "s": 2550, "text": "(SELECT max(len(<string_column>)) from <table_name> )" }, { "code": null, "e": 2671, "s": 2604, "text": "ORDER BY <string_column>; –here we would order data by firstName." 
}, { "code": null, "e": 2681, "s": 2671, "text": "Example :" }, { "code": null, "e": 2795, "s": 2681, "text": "SELECT TOP 1* FROM friends\nWHERE\nlen(firstName) = \n(SELECT max(len(firstName)) FROM friends) \nORDER BY firstName;" }, { "code": null, "e": 2804, "s": 2795, "text": "Output :" }, { "code": null, "e": 2811, "s": 2804, "text": "Picked" }, { "code": null, "e": 2821, "s": 2811, "text": "SQL-Query" }, { "code": null, "e": 2825, "s": 2821, "text": "SQL" }, { "code": null, "e": 2829, "s": 2825, "text": "SQL" }, { "code": null, "e": 2927, "s": 2829, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2993, "s": 2927, "text": "How to Update Multiple Columns in Single Update Statement in SQL?" }, { "code": null, "e": 3017, "s": 2993, "text": "Window functions in SQL" }, { "code": null, "e": 3050, "s": 3017, "text": "SQL | Sub queries in From Clause" }, { "code": null, "e": 3082, "s": 3050, "text": "What is Temporary Table in SQL?" }, { "code": null, "e": 3099, "s": 3082, "text": "SQL using Python" }, { "code": null, "e": 3177, "s": 3099, "text": "SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter" }, { "code": null, "e": 3213, "s": 3177, "text": "SQL Query to Convert VARCHAR to INT" }, { "code": null, "e": 3243, "s": 3213, "text": "RANK() Function in SQL Server" }, { "code": null, "e": 3274, "s": 3243, "text": "SQL Query to Compare Two Dates" } ]
How to Create and Configure Wi-Fi Hotspot in Windows 10?
15 Sep, 2021

In layman's terms, a hotspot is a Wi-Fi network created from a device other than a router. In our case, it will be created from a Windows device connected to the internet using a broadband or dongle connection. A hotspot can be used to share the internet connection of our Windows device with our smartphones or other devices by creating a Wi-Fi network. Hotspots can be created from a device even without any internet access, but that would create a local network with no internet. Such a network can be used to create an FTP server or to transfer files locally between 2 devices connected to the same network.

Step 1: Go to the Settings app in Windows, or use the combination Windows key + i.
Step 2: Now click on Network & Internet. After clicking, you should see something similar to the following:
Step 3: Click on Mobile Hotspot. Now you should see something like:
Step 4: Configure the Wi-Fi Hotspot's Name (ssid) and Password by clicking Edit. After clicking you can see the following: Now edit/configure your Hotspot according to your choice as shown in the below image:
Step 5: Now click on Save.
Step 6: Now switch the "Share my internet connection with other devices" option to ON.

Thus, we have successfully created a Wi-Fi hotspot from a Windows device using the Windows Settings app.

Step 1: Open Command Prompt with administrator privilege. Follow these steps to do it:

Press Windows key + R.
Type cmd in the text field present in the run dialog box.
Press Ctrl + Shift + Enter to run Command Prompt as Administrator, and then select Yes when prompted to let Command Prompt run as administrator.

Another way to open cmd:

Search Command Prompt or cmd in the start menu search bar. [Hit the Windows key to open the start menu]
Click Run as Administrator.
Select Yes when prompted to let Command Prompt run as administrator.
Step 2: Now open netsh (a command-line scripting utility that will help us to create or change network settings) by typing the below command:

netsh

Step 3: Now, for specifically changing the network settings related to WLAN, type:

wlan

Step 4: Configure the hotspot's name (ssid) by using the below command:

set hostednetwork ssid=GeeksForGeeks

[Put the name of your choice after "ssid="; "GeeksForGeeks" is taken as an example.]

Step 5: Configure the hotspot's password by using the below command:

set hostednetwork key=GeeksAreAlwaysExploring

[Put whatever password you want after "key="; "GeeksAreAlwaysExploring" is just an example.]

Step 6: Now start this configured hotspot by typing:

start hostednetwork

To stop the hotspot just type:

stop hostednetwork

Note:

Make sure your Windows device has a working Wi-Fi module, as it is a must for creating a Wi-Fi hotspot.
Once the hotspot has been set up and configured by using the Command Prompt method, it can easily be run next time just by typing:

netsh wlan start hostednetwork

directly after opening a Command Prompt with administrator privilege, and can be stopped directly by:

netsh wlan stop hostednetwork

after opening a Command Prompt with administrator privilege.

sagartomar9927
TechTips
Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
Docker - COPY Instruction
How to Run a Python Script using Docker?
Top Programming Languages for Android App Development
How to setup cron jobs in Ubuntu
How to Add External JAR File to an IntelliJ IDEA Project?
How to Delete Temporary Files in Windows 10?
How to set up Command Prompt for Python in Windows 10?
How to Install Z Shell(zsh) on Linux?
How to overcome Time Limit Exceed(TLE)?
Whatsapp using Python!
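Steps 2 to 6 run inside an interactive netsh session. As a sketch, the same configuration can also be issued as one-line commands from an elevated Command Prompt (Windows-only configuration, so it cannot be tested here; the ssid and key values are the article's examples, and the documented mode=allow flag is typically needed once to enable the hosted network):

```
:: Windows only; run from an elevated Command Prompt.
:: One-line equivalent of the interactive netsh session above.
netsh wlan set hostednetwork mode=allow ssid=GeeksForGeeks key=GeeksAreAlwaysExploring
netsh wlan start hostednetwork

:: ...and later, to stop sharing:
netsh wlan stop hostednetwork
```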
[ { "code": null, "e": 52, "s": 24, "text": "\n15 Sep, 2021" }, { "code": null, "e": 645, "s": 52, "text": "In layman’s term, Hotspot is a Wi-Fi network created from a device other than a router. In our case, it will be created from a Windows device connected to the internet using broadband or a dongle connection. Hotspot can be used to share the internet of our Windows device to our smartphones or other devices by creating a Wi-Fi network. Hotspots can be created from device even without any internet access, but that would create a local network with no internet. Such a network can be used to create an FTP server or to transfer files locally between 2 devices connected to the same network. " }, { "code": null, "e": 725, "s": 645, "text": "Step 1: Goto settings app in Windows or use the combination of Window Key + i. " }, { "code": null, "e": 767, "s": 725, "text": "Step 2: Now click on Network & Internet " }, { "code": null, "e": 836, "s": 767, "text": "After clicking, you should see something similar to the following: " }, { "code": null, "e": 870, "s": 836, "text": "Step 3: Click on Mobile Hotspot " }, { "code": null, "e": 907, "s": 870, "text": "Now you should see something like: " }, { "code": null, "e": 989, "s": 907, "text": "Step 4: Configure the Wi-Fi Hotspot’s Name (ssid) and Password by clicking Edit. " }, { "code": null, "e": 1033, "s": 989, "text": "After clicking you can see the following: " }, { "code": null, "e": 1117, "s": 1033, "text": "Now edit/configure your Hotspot according to your choice as shown in below image: " }, { "code": null, "e": 1145, "s": 1117, "text": "Step 5: Now click on Save " }, { "code": null, "e": 1232, "s": 1145, "text": "Step 6: Now switch the Share my internet connection with other devices option to ON. " }, { "code": null, "e": 1337, "s": 1232, "text": "Thus, we have successfully created a Wi-Fi hotspot from a Windows Device from the Windows Settings App. 
" }, { "code": null, "e": 1434, "s": 1337, "text": "Step 1: Open Command prompt with administrator privilege. Follow the following steps to do it: " }, { "code": null, "e": 1457, "s": 1434, "text": "Press Window key + R. " }, { "code": null, "e": 1516, "s": 1457, "text": "Type cmd in the text field present in the run dialog box: " }, { "code": null, "e": 1664, "s": 1516, "text": "Press Ctrl + Shift + Enter key to run Command Prompt as Administrator and then Select yes when prompted to let command prompt run as administrator." }, { "code": null, "e": 1691, "s": 1664, "text": "Another way to open Cmd: " }, { "code": null, "e": 1784, "s": 1691, "text": "Search Command Prompt or cmd in start menu search bar. [Hit windows key to open start menu] " }, { "code": null, "e": 1812, "s": 1784, "text": "Click Run as Administrator " }, { "code": null, "e": 1881, "s": 1812, "text": "Select yes when prompted to let command prompt run as administrator." }, { "code": null, "e": 2029, "s": 1881, "text": "Step 2: Now open the netsh (a command-line scripting utility that will help us to create or change network settings) by typing the below command: " }, { "code": null, "e": 2035, "s": 2029, "text": "netsh" }, { "code": null, "e": 2118, "s": 2035, "text": "Step 3: Now for specifically changing the network settings related to WLAN type: " }, { "code": null, "e": 2124, "s": 2118, "text": "wlan " }, { "code": null, "e": 2198, "s": 2124, "text": "Step 4: Configure the Hotspot’s name (ssid) by using the below command: " }, { "code": null, "e": 2235, "s": 2198, "text": "set hostednetwork ssid=GeeksForGeeks" }, { "code": null, "e": 2321, "s": 2235, "text": "[Put the name of your choice after β€œname=” ;”GeeksForGeeks” is taken as an example.] 
" }, { "code": null, "e": 2392, "s": 2321, "text": "Step 5: Configure the Hotspot’s password by using the below command: " }, { "code": null, "e": 2438, "s": 2392, "text": "set hostednetwork key=GeeksAreAlwaysExploring" }, { "code": null, "e": 2533, "s": 2438, "text": "[Put whatever password you want after β€œkey=” ;”GeeksAreAlwaysExploring” is just an example.] " }, { "code": null, "e": 2588, "s": 2533, "text": "Step 6: Now start this configured hotspot by typing: " }, { "code": null, "e": 2608, "s": 2588, "text": "start hostednetwork" }, { "code": null, "e": 2641, "s": 2608, "text": "To stop the hotspot just type: " }, { "code": null, "e": 2660, "s": 2641, "text": "stop hostednetwork" }, { "code": null, "e": 2668, "s": 2660, "text": "Note: " }, { "code": null, "e": 2771, "s": 2668, "text": "Make sure your Windows device has a working Wi-Fi module as it is a must for creating a Wi-Fi hotspot." }, { "code": null, "e": 2903, "s": 2771, "text": "Once the hotspot has been set up and configured by using the Command Prompt method, it can be easily run next time just by typing: " }, { "code": null, "e": 2934, "s": 2903, "text": "netsh wlan start hostednetwork" }, { "code": null, "e": 3036, "s": 2934, "text": "Directly after opening a command prompt with administrator privilege and can be stopped directly by: " }, { "code": null, "e": 3066, "s": 3036, "text": "netsh wlan stop hostednetwork" }, { "code": null, "e": 3127, "s": 3066, "text": "After opening a command prompt with administrator privilege." }, { "code": null, "e": 3144, "s": 3129, "text": "sagartomar9927" }, { "code": null, "e": 3153, "s": 3144, "text": "TechTips" }, { "code": null, "e": 3251, "s": 3153, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 3277, "s": 3251, "text": "Docker - COPY Instruction" }, { "code": null, "e": 3318, "s": 3277, "text": "How to Run a Python Script using Docker?" 
}, { "code": null, "e": 3372, "s": 3318, "text": "Top Programming Languages for Android App Development" }, { "code": null, "e": 3405, "s": 3372, "text": "How to setup cron jobs in Ubuntu" }, { "code": null, "e": 3463, "s": 3405, "text": "How to Add External JAR File to an IntelliJ IDEA Project?" }, { "code": null, "e": 3508, "s": 3463, "text": "How to Delete Temporary Files in Windows 10?" }, { "code": null, "e": 3563, "s": 3508, "text": "How to set up Command Prompt for Python in Windows10 ?" }, { "code": null, "e": 3601, "s": 3563, "text": "How to Install Z Shell(zsh) on Linux?" }, { "code": null, "e": 3641, "s": 3601, "text": "How to overcome Time Limit Exceed(TLE)?" } ]
unordered_map end() function in C++ STL
19 Sep, 2018

The unordered_map::end() is a built-in function in C++ STL which returns an iterator pointing to the position past the last element in the unordered_map container. In an unordered_map object, there is no guarantee as to which specific element is considered its first element. But all the elements in the container are covered, since the range goes from its begin to its end until invalidated.

Syntax:

iterator unordered_map_name.end(n)

Parameters: This function accepts one parameter n, which is an optional parameter that specifies the bucket number. If it is set, the returned iterator points to the past-the-end element of that bucket; otherwise, it points to the past-the-end element of the container.

Return value: The function returns an iterator to the element past the end of the container.

The programs below illustrate the above-mentioned function:

Program 1:

// CPP program to demonstrate the
// unordered_map::end() function
// returning all the elements of the unordered_map
#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;

int main()
{
    unordered_map<string, int> marks;

    // Declaring the elements of the unordered_map
    marks = { { "Rohit", 64 }, { "Aman", 37 }, { "Ayush", 96 } };

    // Printing all the elements of the unordered_map
    cout << "marks bucket contains : " << endl;
    for (int i = 0; i < marks.bucket_count(); ++i) {
        cout << "bucket #" << i << " contains:";
        for (auto iter = marks.begin(i); iter != marks.end(i); ++iter) {
            cout << "(" << iter->first << ", " << iter->second << "), ";
        }
        cout << endl;
    }
    return 0;
}

marks bucket contains : 
bucket #0 contains:
bucket #1 contains:
bucket #2 contains:
bucket #3 contains:(Aman, 37), (Rohit, 64), 
bucket #4 contains:(Ayush, 96), 

Program 2:

// CPP program to demonstrate the
// unordered_map::end() function
// returning the elements along
// with their bucket number
#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;

int main()
{
    unordered_map<string, int> marks;

    // Declaring the elements of the unordered_map
    marks = { { "Rohit", 64 }, { "Aman", 37 }, { "Ayush", 96 } };

    // Printing all the elements of the unordered_map
    for (auto iter = marks.begin(); iter != marks.end(); ++iter) {
        cout << "Marks of " << iter->first << " is " << iter->second
             << " and his bucket number is "
             << marks.bucket(iter->first) << endl;
    }
    return 0;
}

Marks of Ayush is 96 and his bucket number is 4
Marks of Aman is 37 and his bucket number is 3
Marks of Rohit is 64 and his bucket number is 3

CPP-Functions
cpp-unordered_map
cpp-unordered_map-functions
STL
C++
STL
CPP
Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
Bitwise Operators in C/C++
Inheritance in C++
vector erase() and clear() in C++
Substring in C++
The C++ Standard Template Library (STL)
C++ Classes and Objects
Object Oriented Programming in C++
Priority Queue in C++ Standard Template Library (STL)
Sorting a vector in C++
2D Vector In C++ With User Defined Size
[ { "code": null, "e": 52, "s": 24, "text": "\n19 Sep, 2018" }, { "code": null, "e": 460, "s": 52, "text": "The unordered_map::end() is a built-in function in C++ STL which returns an iterator pointing to the position past the last element in the container in the unordered_map container. In an unordered_map object, there is no guarantee that which specific element is considered its first element. But all the elements in the container are covered since the range goes from its begin to its end until invalidated." }, { "code": null, "e": 468, "s": 460, "text": "Syntax:" }, { "code": null, "e": 503, "s": 468, "text": "iterator unordered_map_name.end(n)" }, { "code": null, "e": 771, "s": 503, "text": "Parameters : This function accepts one parameter n which is an optional parameter that specifies the bucket number. If it is set, the iterator retrieves points to the past-the-end element of a bucket, otherwise, it points to the past-the-end element of the container." }, { "code": null, "e": 864, "s": 771, "text": "Return value: The function returns an iterator to the element past the end of the container." 
}, { "code": null, "e": 921, "s": 864, "text": "Below programs illustrates the above-mentioned function:" }, { "code": null, "e": 932, "s": 921, "text": "Program 1:" }, { "code": "// CPP program to demonstrate the// unordered_map::end() function// returning all the elements of the multimap#include <iostream>#include <string>#include <unordered_map>using namespace std; int main(){ unordered_map<string, int> marks; // Declaring the elements of the multimap marks = { { \"Rohit\", 64 }, { \"Aman\", 37 }, { \"Ayush\", 96 } }; // Printing all the elements of the multimap cout << \"marks bucket contains : \" << endl; for (int i = 0; i < marks.bucket_count(); ++i) { cout << \"bucket #\" << i << \" contains:\"; for (auto iter = marks.begin(i); iter != marks.end(i); ++iter) { cout << \"(\" << iter->first << \", \" << iter->second << \"), \"; } cout << endl; } return 0;}", "e": 1681, "s": 932, "text": null }, { "code": null, "e": 1844, "s": 1681, "text": "marks bucket contains : \nbucket #0 contains:\nbucket #1 contains:\nbucket #2 contains:\nbucket #3 contains:(Aman, 37), (Rohit, 64), \nbucket #4 contains:(Ayush, 96),\n" }, { "code": null, "e": 1855, "s": 1844, "text": "Program 2:" }, { "code": "// CPP program to demonstrate the// unordered_map::end() function// returning the elements along// with their bucket number#include <iostream>#include <string>#include <unordered_map> using namespace std; int main(){ unordered_map<string, int> marks; // Declaring the elements of the multimap marks = { { \"Rohit\", 64 }, { \"Aman\", 37 }, { \"Ayush\", 96 } }; // Printing all the elements of the multimap for (auto iter = marks.begin(); iter != marks.end(); ++iter) { cout << \"Marks of \" << iter->first << \" is \" << iter->second << \" and his bucket number is \" << marks.bucket(iter->first) << endl; } return 0;}", "e": 2524, "s": 1855, "text": null }, { "code": null, "e": 2668, "s": 2524, "text": "Marks of Ayush is 96 and his bucket number is 4\nMarks of Aman is 37 and his bucket 
number is 3\nMarks of Rohit is 64 and his bucket number is 3\n" }, { "code": null, "e": 2682, "s": 2668, "text": "CPP-Functions" }, { "code": null, "e": 2700, "s": 2682, "text": "cpp-unordered_map" }, { "code": null, "e": 2728, "s": 2700, "text": "cpp-unordered_map-functions" }, { "code": null, "e": 2732, "s": 2728, "text": "STL" }, { "code": null, "e": 2736, "s": 2732, "text": "C++" }, { "code": null, "e": 2740, "s": 2736, "text": "STL" }, { "code": null, "e": 2744, "s": 2740, "text": "CPP" }, { "code": null, "e": 2842, "s": 2744, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2869, "s": 2842, "text": "Bitwise Operators in C/C++" }, { "code": null, "e": 2888, "s": 2869, "text": "Inheritance in C++" }, { "code": null, "e": 2922, "s": 2888, "text": "vector erase() and clear() in C++" }, { "code": null, "e": 2939, "s": 2922, "text": "Substring in C++" }, { "code": null, "e": 2979, "s": 2939, "text": "The C++ Standard Template Library (STL)" }, { "code": null, "e": 3003, "s": 2979, "text": "C++ Classes and Objects" }, { "code": null, "e": 3038, "s": 3003, "text": "Object Oriented Programming in C++" }, { "code": null, "e": 3092, "s": 3038, "text": "Priority Queue in C++ Standard Template Library (STL)" }, { "code": null, "e": 3116, "s": 3092, "text": "Sorting a vector in C++" } ]
Python | N largest values in dictionary
26 Aug, 2019

Many times while working with a Python dictionary, we face the problem of finding the N largest values among its keys. This problem is quite common in the web development domain. Let's discuss several ways in which this task can be performed.

Method #1 : Using sorted() + itemgetter() + items()

The combination of the above methods is used to perform this particular task. Here, we reverse-sort the dictionary items, accessed using items(), by their values using itemgetter().

# Python3 code to demonstrate working of
# N largest values in dictionary
# Using sorted() + itemgetter() + items()
from operator import itemgetter

# Initialize dictionary
test_dict = {'gfg' : 1, 'is' : 4, 'best' : 6, 'for' : 7, 'geeks' : 3 }

# Initialize N
N = 3

# printing original dictionary
print("The original dictionary is : " + str(test_dict))

# N largest values in dictionary
# Using sorted() + itemgetter() + items()
res = dict(sorted(test_dict.items(), key = itemgetter(1), reverse = True)[:N])

# printing result
print("The top N value pairs are " + str(res))

The original dictionary is : {'best': 6, 'gfg': 1, 'geeks': 3, 'for': 7, 'is': 4}
The top N value pairs are {'for': 7, 'is': 4, 'best': 6}

Method #2 : Using nlargest()

This task can be performed using the nlargest() function. This is an inbuilt function in the heapq library which performs this task internally and can be used to do it externally. It has the drawback of printing just the keys, not the values.
# Python3 code to demonstrate working of
# N largest values in dictionary
# Using nlargest
from heapq import nlargest

# Initialize dictionary
test_dict = {'gfg' : 1, 'is' : 4, 'best' : 6, 'for' : 7, 'geeks' : 3 }

# Initialize N
N = 3

# printing original dictionary
print("The original dictionary is : " + str(test_dict))

# N largest values in dictionary
# Using nlargest
res = nlargest(N, test_dict, key = test_dict.get)

# printing result
print("The top N value pairs are " + str(res))

The original dictionary is : {'gfg': 1, 'best': 6, 'geeks': 3, 'for': 7, 'is': 4}
The top N value pairs are ['for', 'best', 'is']

Python dictionary-programs
Python
Python Programs
Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
Python Dictionary
Different ways to create Pandas Dataframe
Enumerate() in Python
Read a file line by line in Python
Python String | replace()
Python program to convert a list to string
Defaultdict in Python
Python | Convert a list to dictionary
Python | Convert string dictionary to dictionary
Python Program for Fibonacci numbers
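Method #2 returns only the keys. A small sketch of a variant that avoids that drawback: rank the (key, value) pairs from items() by value, so the result keeps both the keys and their counts.

```python
from heapq import nlargest

# Same example dictionary as above
test_dict = {'gfg': 1, 'is': 4, 'best': 6, 'for': 7, 'geeks': 3}
N = 3

# Rank (key, value) pairs by their value and rebuild a dict,
# so the result keeps the values as well as the keys.
res = dict(nlargest(N, test_dict.items(), key=lambda kv: kv[1]))

print("The top N value pairs are " + str(res))
# The top N value pairs are {'for': 7, 'best': 6, 'is': 4}
```

Because nlargest() yields the pairs in descending order of value and dicts preserve insertion order, the result is already sorted from largest to smallest.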
[ { "code": null, "e": 28, "s": 0, "text": "\n26 Aug, 2019" }, { "code": null, "e": 295, "s": 28, "text": "Many times while working with Python dictionary, we can have a particular problem to find the N maxima of values in numerous keys. This problem is quite common while working with web development domain. Let’s discuss several ways in which this task can be performed." }, { "code": null, "e": 341, "s": 295, "text": "Method #1 : itemgetter() + items() + sorted()" }, { "code": null, "e": 523, "s": 341, "text": "The combination of above method is used to perform this particular task. In this, we just reverse sort the dictionary values expressed using itemgetter() and accessed using items()." }, { "code": "# Python3 code to demonstrate working of# N largest values in dictionary# Using sorted() + itemgetter() + items()from operator import itemgetter # Initialize dictionarytest_dict = {'gfg' : 1, 'is' : 4, 'best' : 6, 'for' : 7, 'geeks' : 3 } # Initialize N N = 3 # printing original dictionaryprint(\"The original dictionary is : \" + str(test_dict)) # N largest values in dictionary# Using sorted() + itemgetter() + items()res = dict(sorted(test_dict.items(), key = itemgetter(1), reverse = True)[:N]) # printing resultprint(\"The top N value pairs are \" + str(res))", "e": 1091, "s": 523, "text": null }, { "code": null, "e": 1232, "s": 1091, "text": "The original dictionary is : {'best': 6, 'gfg': 1, 'geeks': 3, 'for': 7, 'is': 4}\nThe top N value pairs are {'for': 7, 'is': 4, 'best': 6}\n" }, { "code": null, "e": 1484, "s": 1234, "text": "Method #2 : Using nlargest()This task can be performed using the nlargest function. This is inbuilt function in heapq library which internally performs this task and can be used to do it externally. Has the drawback of printing just keys not values." 
}, { "code": "# Python3 code to demonstrate working of# N largest values in dictionary# Using nlargestfrom heapq import nlargest # Initialize dictionarytest_dict = {'gfg' : 1, 'is' : 4, 'best' : 6, 'for' : 7, 'geeks' : 3 } # Initialize N N = 3 # printing original dictionaryprint(\"The original dictionary is : \" + str(test_dict)) # N largest values in dictionary# Using nlargestres = nlargest(N, test_dict, key = test_dict.get) # printing resultprint(\"The top N value pairs are \" + str(res))", "e": 1968, "s": 1484, "text": null }, { "code": null, "e": 2100, "s": 1968, "text": "The original dictionary is : {'gfg': 1, 'best': 6, 'geeks': 3, 'for': 7, 'is': 4}\nThe top N value pairs are ['for', 'best', 'is']\n" }, { "code": null, "e": 2127, "s": 2100, "text": "Python dictionary-programs" }, { "code": null, "e": 2134, "s": 2127, "text": "Python" }, { "code": null, "e": 2150, "s": 2134, "text": "Python Programs" }, { "code": null, "e": 2248, "s": 2150, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2266, "s": 2248, "text": "Python Dictionary" }, { "code": null, "e": 2308, "s": 2266, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 2330, "s": 2308, "text": "Enumerate() in Python" }, { "code": null, "e": 2365, "s": 2330, "text": "Read a file line by line in Python" }, { "code": null, "e": 2391, "s": 2365, "text": "Python String | replace()" }, { "code": null, "e": 2434, "s": 2391, "text": "Python program to convert a list to string" }, { "code": null, "e": 2456, "s": 2434, "text": "Defaultdict in Python" }, { "code": null, "e": 2494, "s": 2456, "text": "Python | Convert a list to dictionary" }, { "code": null, "e": 2543, "s": 2494, "text": "Python | Convert string dictionary to dictionary" } ]
Java program to calculate mode in Java
In statistics, the mode is the value that occurs the highest number of times. For example, in the set of values 3, 5, 2, 7, 3 the mode is 3, as it appears more often than any other number.

1. Take an integer set A of n values.
2. Count the occurrences of each integer value in A.
3. Display the value with the highest occurrence.

Live Demo

public class Mode {
   static int mode(int a[], int n) {
      int maxValue = 0, maxCount = 0, i, j;

      for (i = 0; i < n; ++i) {
         int count = 0;
         for (j = 0; j < n; ++j) {
            if (a[j] == a[i])
               ++count;
         }

         if (count > maxCount) {
            maxCount = count;
            maxValue = a[i];
         }
      }
      return maxValue;
   }
   public static void main(String args[]) {
      int n = 5;
      int a[] = {0, 6, 7, 2, 7};
      System.out.println("Mode ::" + mode(a, n));
   }
}

Mode ::7
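The nested loops above cost O(n^2). As a sketch (the ModeFinder class name is invented here), the same mode can be found in O(n) by counting occurrences in a HashMap and tracking the value with the largest running count:

```java
import java.util.HashMap;
import java.util.Map;

public class ModeFinder {
   // O(n) mode: increment each value's count in a HashMap and keep
   // the value whose count is the largest seen so far.
   static int mode(int[] a) {
      Map<Integer, Integer> counts = new HashMap<>();
      int maxValue = a[0], maxCount = 0;
      for (int x : a) {
         int c = counts.merge(x, 1, Integer::sum);
         if (c > maxCount) {
            maxCount = c;
            maxValue = x;
         }
      }
      return maxValue;
   }

   public static void main(String[] args) {
      int[] a = {0, 6, 7, 2, 7};
      System.out.println("Mode ::" + mode(a));  // Mode ::7
   }
}
```

On the article's input {0, 6, 7, 2, 7} this prints the same result as the nested-loop version.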
[ { "code": null, "e": 1391, "s": 1187, "text": "In statistics math, a mode is a value that occurs the highest numbers of time. For Example, assume a set of values 3, 5, 2, 7, 3. The mode of this value set is 3 as it appears more than any other number." }, { "code": null, "e": 1528, "s": 1391, "text": "1.Take an integer set A of n values.\n2.Count the occurrence of each integer value in A.\n3.Display the value with the highest occurrence." }, { "code": null, "e": 1538, "s": 1528, "text": "Live Demo" }, { "code": null, "e": 2075, "s": 1538, "text": "public class Mode {\n static int mode(int a[],int n) {\n int maxValue = 0, maxCount = 0, i, j;\n\n for (i = 0; i < n; ++i) {\n int count = 0;\n for (j = 0; j < n; ++j) {\n if (a[j] == a[i])\n ++count;\n }\n\n if (count > maxCount) {\n maxCount = count;\n maxValue = a[i];\n }\n }\n return maxValue;\n }\n public static void main(String args[]){\n int n = 5;\n int a[] = {0,6,7,2,7};\n System.out.println(\"Mode ::\"+mode(a,n));\n }\n}" }, { "code": null, "e": 2084, "s": 2075, "text": "Mode ::7" } ]
Underscore.js _.isEqual() Function
25 Nov, 2021 The _.isEqual() function: is used to find that whether the 2 given arrays are same or not. Two arrays are the same if they have the same number of elements, both the property and the values need to be the same. It may be beneficial in situations where the elements of the array are not known and we want to check whether they are the same or not. Syntax: _.isEqual(object, other) Parameters:It takes two arguments: object: The object can be an array. other: The other array holds. Return value:It returns true if the arrays passed are same otherwise it returns false. Examples: Passing 2 simple arrays to the _.isEqual() function:The _.isEqual() function takes the element from the list of one array and searches it in the other array. If that property is found with the same value in the other array then it just goes on to checks the other property otherwise it just returns false. In this it checks for both the character values and the number values in the property.<!-- Write HTML code here --><html> <head> <script src = "https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"> </script></head> <body> <script type="text/javascript"> var arr1 = {name: 'akash', numbers: [3, 7, 14]}; var arr2 = {name: 'akash', numbers: [3, 7, 14]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:Passing an array with more properties to the _.isEqual() function:An array can have as many properties ad it has to act as an parameter to this function. Like here both the arrays contain 3 properties each of type character and date. The _.isEqual() function will work in the same way as for the above example. 
Since, both the arrays have same properties and the same values so, the output will be β€˜true’.<!-- Write HTML code here --><html> <head> <script src = "https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"> </script></head> <body> <script type="text/javascript"> var arr1 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; var arr2 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:Passing 2 empty arrays to the _.isEqual() function:The _.isEqual() function will try to check all the array properties along with their values. Since, both the array does not have any property so, there is nothing to match. And hence, both the arrays are equal. Therefore, the answer will be true.<html> <head> <script src = "https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"> </script></head> <body> <script type="text/javascript"> var arr1 = {}; var arr2 = {}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:Passing arrays with different properties to the _.isEqual() function:If we pass arrays containing different properties then this function will work the same way. It will take the property of first parameter array ( here, β€˜name’) and try to find it in the next array. But since, the other array does not have this property so the output will be β€˜false’.<!-- Write HTML code here --> <html> <head> <script src = "https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"> </script></head> <body> <script type="text/javascript"> var arr1 = {phoneNo: 4345656543}; var arr2 = {name: 'ashok'}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:` Passing 2 simple arrays to the _.isEqual() function:The _.isEqual() function takes the element from the list of one array and searches it in the other array. 
If that property is found with the same value in the other array then it just goes on to checks the other property otherwise it just returns false. In this it checks for both the character values and the number values in the property.<!-- Write HTML code here --><html> <head> <script src = "https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"> </script></head> <body> <script type="text/javascript"> var arr1 = {name: 'akash', numbers: [3, 7, 14]}; var arr2 = {name: 'akash', numbers: [3, 7, 14]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output: <!-- Write HTML code here --><html> <head> <script src = "https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"> </script></head> <body> <script type="text/javascript"> var arr1 = {name: 'akash', numbers: [3, 7, 14]}; var arr2 = {name: 'akash', numbers: [3, 7, 14]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html> Output: Passing an array with more properties to the _.isEqual() function:An array can have as many properties ad it has to act as an parameter to this function. Like here both the arrays contain 3 properties each of type character and date. The _.isEqual() function will work in the same way as for the above example. 
Since, both the arrays have same properties and the same values so, the output will be β€˜true’.<!-- Write HTML code here --><html> <head> <script src = "https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"> </script></head> <body> <script type="text/javascript"> var arr1 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; var arr2 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output: <!-- Write HTML code here --><html> <head> <script src = "https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"> </script></head> <body> <script type="text/javascript"> var arr1 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; var arr2 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html> Output: Passing 2 empty arrays to the _.isEqual() function:The _.isEqual() function will try to check all the array properties along with their values. Since, both the array does not have any property so, there is nothing to match. And hence, both the arrays are equal. Therefore, the answer will be true.<html> <head> <script src = "https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"> </script></head> <body> <script type="text/javascript"> var arr1 = {}; var arr2 = {}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output: <html> <head> <script src = "https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"> </script></head> <body> <script type="text/javascript"> var arr1 = {}; var arr2 = {}; console.log(_.isEqual(arr1, arr2)); </script></body> </html> Output: Passing arrays with different properties to the _.isEqual() function:If we pass arrays containing different properties then this function will work the same way. It will take the property of first parameter array ( here, β€˜name’) and try to find it in the next array. 
But since the other array does not have this property, the output will be 'false'.

<!-- Write HTML code here -->
<html>
<head>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js">
    </script>
</head>
<body>
    <script type="text/javascript">
        var arr1 = {phoneNo: 4345656543};
        var arr2 = {name: 'ashok'};
        console.log(_.isEqual(arr1, arr2));
    </script>
</body>
</html>

Output:

false

Note: These commands will not work on their own in the Google Chrome console or in Firefox, because the additional Underscore.js file they depend on is not loaded there. So, add the given link to your HTML file and then run them. The link is as follows:

<!-- Write HTML code here -->
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js"></script>

shubham_singh
JavaScript - Underscore.js
JavaScript
Web Technologies
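The property-by-property deep comparison described above can be sketched in plain JavaScript. This is a minimal sketch, not Underscore's actual implementation — it deliberately omits edge cases that the real _.isEqual handles, such as NaN, Date objects, RegExps, and cyclic structures:

```javascript
// A simplified sketch of the kind of deep comparison _.isEqual performs.
// NOT Underscore's real implementation: it ignores NaN, Dates, RegExps,
// and cyclic structures, and treats arrays as plain objects with index keys.
function deepEqual(a, b) {
  if (a === b) return true;                        // identical primitives/references
  if (typeof a !== 'object' || typeof b !== 'object' ||
      a === null || b === null) return false;      // non-objects must match with ===
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false; // must have same number of properties
  // every property in a must exist in b with a deeply equal value
  return keysA.every(k => k in b && deepEqual(a[k], b[k]));
}

console.log(deepEqual({name: 'akash', numbers: [3, 7, 14]},
                      {name: 'akash', numbers: [3, 7, 14]}));   // true
console.log(deepEqual({}, {}));                                 // true
console.log(deepEqual({phoneNo: 4345656543}, {name: 'ashok'})); // false
```

As in the examples above, two empty objects compare equal (nothing to mismatch), while objects with different property names fail as soon as a property of the first is missing from the second.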
[ { "code": null, "e": 28, "s": 0, "text": "\n25 Nov, 2021" }, { "code": null, "e": 375, "s": 28, "text": "The _.isEqual() function: is used to find that whether the 2 given arrays are same or not. Two arrays are the same if they have the same number of elements, both the property and the values need to be the same. It may be beneficial in situations where the elements of the array are not known and we want to check whether they are the same or not." }, { "code": null, "e": 383, "s": 375, "text": "Syntax:" }, { "code": null, "e": 408, "s": 383, "text": "_.isEqual(object, other)" }, { "code": null, "e": 443, "s": 408, "text": "Parameters:It takes two arguments:" }, { "code": null, "e": 479, "s": 443, "text": "object: The object can be an array." }, { "code": null, "e": 509, "s": 479, "text": "other: The other array holds." }, { "code": null, "e": 596, "s": 509, "text": "Return value:It returns true if the arrays passed are same otherwise it returns false." }, { "code": null, "e": 606, "s": 596, "text": "Examples:" }, { "code": null, "e": 3564, "s": 606, "text": "Passing 2 simple arrays to the _.isEqual() function:The _.isEqual() function takes the element from the list of one array and searches it in the other array. If that property is found with the same value in the other array then it just goes on to checks the other property otherwise it just returns false. In this it checks for both the character values and the number values in the property.<!-- Write HTML code here --><html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {name: 'akash', numbers: [3, 7, 14]}; var arr2 = {name: 'akash', numbers: [3, 7, 14]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:Passing an array with more properties to the _.isEqual() function:An array can have as many properties ad it has to act as an parameter to this function. 
Like here both the arrays contain 3 properties each of type character and date. The _.isEqual() function will work in the same way as for the above example. Since, both the arrays have same properties and the same values so, the output will be β€˜true’.<!-- Write HTML code here --><html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; var arr2 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:Passing 2 empty arrays to the _.isEqual() function:The _.isEqual() function will try to check all the array properties along with their values. Since, both the array does not have any property so, there is nothing to match. And hence, both the arrays are equal. Therefore, the answer will be true.<html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {}; var arr2 = {}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:Passing arrays with different properties to the _.isEqual() function:If we pass arrays containing different properties then this function will work the same way. It will take the property of first parameter array ( here, β€˜name’) and try to find it in the next array. 
But since, the other array does not have this property so the output will be β€˜false’.<!-- Write HTML code here --> <html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {phoneNo: 4345656543}; var arr2 = {name: 'ashok'}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:`" }, { "code": null, "e": 4351, "s": 3564, "text": "Passing 2 simple arrays to the _.isEqual() function:The _.isEqual() function takes the element from the list of one array and searches it in the other array. If that property is found with the same value in the other array then it just goes on to checks the other property otherwise it just returns false. In this it checks for both the character values and the number values in the property.<!-- Write HTML code here --><html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {name: 'akash', numbers: [3, 7, 14]}; var arr2 = {name: 'akash', numbers: [3, 7, 14]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:" }, { "code": "<!-- Write HTML code here --><html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {name: 'akash', numbers: [3, 7, 14]}; var arr2 = {name: 'akash', numbers: [3, 7, 14]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>", "e": 4739, "s": 4351, "text": null }, { "code": null, "e": 4747, "s": 4739, "text": "Output:" }, { "code": null, "e": 5595, "s": 4747, "text": "Passing an array with more properties to the _.isEqual() function:An array can have as many properties ad it has to act as an parameter to this function. Like here both the arrays contain 3 properties each of type character and date. 
The _.isEqual() function will work in the same way as for the above example. Since, both the arrays have same properties and the same values so, the output will be β€˜true’.<!-- Write HTML code here --><html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; var arr2 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:" }, { "code": "<!-- Write HTML code here --><html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; var arr2 = {name: 'akash', gender: ['male'], birthDate: [03/22/99]}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>", "e": 6031, "s": 5595, "text": null }, { "code": null, "e": 6039, "s": 6031, "text": "Output:" }, { "code": null, "e": 6643, "s": 6039, "text": "Passing 2 empty arrays to the _.isEqual() function:The _.isEqual() function will try to check all the array properties along with their values. Since, both the array does not have any property so, there is nothing to match. And hence, both the arrays are equal. 
Therefore, the answer will be true.<html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {}; var arr2 = {}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:" }, { "code": "<html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {}; var arr2 = {}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>", "e": 6943, "s": 6643, "text": null }, { "code": null, "e": 6951, "s": 6943, "text": "Output:" }, { "code": null, "e": 7672, "s": 6951, "text": "Passing arrays with different properties to the _.isEqual() function:If we pass arrays containing different properties then this function will work the same way. It will take the property of first parameter array ( here, β€˜name’) and try to find it in the next array. But since, the other array does not have this property so the output will be β€˜false’.<!-- Write HTML code here --> <html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {phoneNo: 4345656543}; var arr2 = {name: 'ashok'}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>Output:" }, { "code": "<!-- Write HTML code here --> <html> <head> <script src = \"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"> </script></head> <body> <script type=\"text/javascript\"> var arr1 = {phoneNo: 4345656543}; var arr2 = {name: 'ashok'}; console.log(_.isEqual(arr1, arr2)); </script></body> </html>", "e": 8034, "s": 7672, "text": null }, { "code": null, "e": 8042, "s": 8034, "text": "Output:" }, { "code": null, "e": 8044, "s": 8042, "text": "`" }, { "code": null, "e": 8272, "s": 8044, "text": "NOTE:These commands will not work in Google console or in 
firefox as for these additional files need to be added which they didn’t have added.So, add the given links to your HTML file and then run them.The links are as follows:" }, { "code": "<!-- Write HTML code here --><script type=\"text/javascript\" src =\"https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.9.1/underscore-min.js\"></script>", "e": 8426, "s": 8272, "text": null }, { "code": null, "e": 8453, "s": 8426, "text": "An example is shown below:" }, { "code": null, "e": 8467, "s": 8453, "text": "shubham_singh" }, { "code": null, "e": 8494, "s": 8467, "text": "JavaScript - Underscore.js" }, { "code": null, "e": 8505, "s": 8494, "text": "JavaScript" }, { "code": null, "e": 8522, "s": 8505, "text": "Web Technologies" } ]
HTTP headers | Access-Control-Request-Method
10 May, 2020

The HTTP header Access-Control-Request-Method is a request-type header used to inform the server which HTTP method will be used when the actual request is made.

Syntax:

Access-Control-Request-Method: <method>

Directives: This header accepts a single directive, which is mentioned above and described below:

<method>: This directive holds the method that will be used when the actual request is made.

Note: Multiple methods can be used when the actual request will be made.

Example:

Access-Control-Request-Method: POST
Access-Control-Request-Method: GET, PUT

To see Access-Control-Request-Method in action, go to Inspect Element -> Network and check the request headers for Access-Control-Request-Method.

Supported Browsers: The browsers compatible with the HTTP header Access-Control-Request-Method are listed below:

Google Chrome
Internet Explorer
Firefox
Safari
Opera

HTTP-headers
Picked
Web Technologies
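Browsers send this header on a CORS preflight (OPTIONS) request, and the server decides from it whether to allow the upcoming request. As a sketch of how a server might consume the header — the function name `preflightResponse` and the allowed-methods list are hypothetical, not part of any framework:

```javascript
// Hypothetical sketch of server-side preflight handling. The names here
// (preflightResponse, ALLOWED_METHODS) are illustrative; real servers
// typically use CORS middleware from their web framework.
const ALLOWED_METHODS = ['GET', 'POST', 'PUT'];

function preflightResponse(requestHeaders) {
  const requested = requestHeaders['access-control-request-method'];
  if (!requested) {
    return { status: 400, headers: {} };   // not a valid preflight request
  }
  // The article notes multiple methods may be listed, e.g. "GET, PUT"
  const methods = requested.split(',').map(m => m.trim().toUpperCase());
  const allAllowed = methods.every(m => ALLOWED_METHODS.includes(m));
  if (!allAllowed) {
    return { status: 403, headers: {} };   // reject the preflight
  }
  return {
    status: 204,                           // approve, no response body needed
    headers: { 'Access-Control-Allow-Methods': ALLOWED_METHODS.join(', ') },
  };
}

console.log(preflightResponse({ 'access-control-request-method': 'POST' }).status);   // 204
console.log(preflightResponse({ 'access-control-request-method': 'DELETE' }).status); // 403
```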
[ { "code": null, "e": 28, "s": 0, "text": "\n10 May, 2020" }, { "code": null, "e": 210, "s": 28, "text": "The HTTP headers Access-Control-Request-Method is a request type header which has been used to inform the server that which HTTP method will be used when the actual request is made." }, { "code": null, "e": 218, "s": 210, "text": "Syntax:" }, { "code": null, "e": 258, "s": 218, "text": "Access-Control-Request-Method: <method>" }, { "code": null, "e": 354, "s": 258, "text": "Directives: This header accept a single directive which is mentioned above and described below:" }, { "code": null, "e": 448, "s": 354, "text": "<method>: This directive holds the method which will be used when the actual request is made." }, { "code": null, "e": 521, "s": 448, "text": "Note: Multiple methods can be used when the actual request will be made." }, { "code": null, "e": 530, "s": 521, "text": "Example:" }, { "code": null, "e": 566, "s": 530, "text": "Access-Control-Request-Method: POST" }, { "code": null, "e": 602, "s": 566, "text": "Access-Control-Request-Method: POST" }, { "code": null, "e": 642, "s": 602, "text": "Access-Control-Request-Method: GET, PUT" }, { "code": null, "e": 682, "s": 642, "text": "Access-Control-Request-Method: GET, PUT" }, { "code": null, "e": 898, "s": 682, "text": "To check this Access-Control-Request-Method in action go to Inspect Element -> Network check the request header for Access-Control-Request-Method like below, Access-Control-Request-Method is highlighted you can see." 
}, { "code": null, "e": 1008, "s": 898, "text": "Supported Browsers: The browsers compatible with HTTP headers Access-Control-Request-Method are listed below:" }, { "code": null, "e": 1022, "s": 1008, "text": "Google Chrome" }, { "code": null, "e": 1040, "s": 1022, "text": "Internet Explorer" }, { "code": null, "e": 1048, "s": 1040, "text": "Firefox" }, { "code": null, "e": 1055, "s": 1048, "text": "Safari" }, { "code": null, "e": 1061, "s": 1055, "text": "Opera" }, { "code": null, "e": 1074, "s": 1061, "text": "HTTP-headers" }, { "code": null, "e": 1081, "s": 1074, "text": "Picked" }, { "code": null, "e": 1098, "s": 1081, "text": "Web Technologies" } ]
Challenges of Data Mining
27 Feb, 2020

Nowadays, Data Mining and knowledge discovery are evolving into a crucial technology for business and researchers in many domains. Although Data Mining is developing into an established and trusted discipline, many pending challenges still have to be solved.

Some of these challenges are given below.

Security and Social Challenges: Decision-making strategies are built on data collection and sharing, so they require considerable security. Private information about individuals and other sensitive information are collected for customer profiles and for understanding user behaviour patterns. Illegal access to information and the confidential nature of information are becoming important issues.

User Interface: The knowledge discovered using data mining tools is useful only if it is interesting and, above all, understandable by the user. Good visualization eases the interpretation of data mining results and helps users better understand their requirements. To obtain good visualization, much research is carried out for big data sets that display and manipulate mined knowledge.
(i) Mining based on Level of Abstraction: The data mining process needs to be collaborative, because that allows users to concentrate on finding patterns, and on presenting and optimizing requests for data mining based on the returned results.
(ii) Integration of Background Knowledge: Previous information may be used to direct the exploration process and to express the discovered patterns.

Mining Methodology Challenges: These challenges are related to data mining approaches and their limitations. The mining aspects that cause problems are:
(i) versatility of the mining approaches,
(ii) diversity of the data available,
(iii) dimensionality of the domain,
(iv) control and handling of noise in data, etc.
Different approaches may perform differently depending on the data under consideration. Some algorithms require noise-free data, yet most data sets contain exceptions and invalid or incomplete information, which complicates the analysis process and in some cases compromises the precision of the results.

Complex Data: Real-world data is heterogeneous: it can be multimedia data containing images, audio and video, complex data, temporal data, spatial data, time series, natural-language text, etc. It is difficult to handle these various kinds of data and extract the required information. New tools and methodologies are being developed to extract relevant information.
(i) Complex data types: A database can include complex data elements, objects with graphical data, spatial data, and temporal data. Mining all these kinds of data is not practical on one device.
(ii) Mining from Varied Sources: The data is gathered from different sources on a network. The data sources may be of different kinds depending on how the data is stored, such as structured, semi-structured, or unstructured.

Performance: The performance of a data mining system depends on the efficiency of the algorithms and techniques used. Algorithms and techniques that are not up to the mark affect the performance of the data mining process.
(i) Efficiency and Scalability of the Algorithms: A data mining algorithm must be efficient and scalable to extract information from huge amounts of data in the database.
(ii) Improvement of Mining Algorithms: Factors such as the enormous size of the database, the entire data flow, and the difficulty of data mining approaches inspire the creation of parallel and distributed data mining algorithms.

pcp21599
data mining
DBMS
DBMS
[ { "code": null, "e": 53, "s": 25, "text": "\n27 Feb, 2020" }, { "code": null, "e": 293, "s": 53, "text": "Nowadays Data Mining and knowledge discovery are evolving a crucial technology for business and researchers in many domains.Data Mining is developing into established and trusted discipline, many still pending challenges have to be solved." }, { "code": null, "e": 335, "s": 293, "text": "Some of these challenges are given below." }, { "code": null, "e": 3539, "s": 335, "text": "Security and Social Challenges:Decision-Making strategies are done through data collection-sharing, so it requires considerable security. Private information about individuals and sensitive information are collected for customers profiles, user behaviour pattern understanding. Illegal access to information and the confidential nature of information becoming an important issue.User Interface:The knowledge discovered is discovered using data mining tools is useful only if it is interesting and above all understandable by the user. From good visualization interpretation of data, mining results can be eased and helps better understand their requirements. To obtain good visualization many research is carried out for big data sets that display and manipulate mined knowledge.(i) Mining based on Level of Abstraction: Data Mining process needs to be collaborative because it allows users to concentrate on pattern finding, presenting and optimizing requests for data mining based on returned results.(ii) Integration of Background Knowledge: Previous information may be used to express discovered patterns to direct the exploration processes and to express discovered patterns.Mining Methodology Challenges:These challenges are related to data mining approaches and their limitations. Mining approaches that cause the problem are:(i) Versatility of the mining approaches,\n(ii) Diversity of data available,\n(iii) Dimensionality of the domain,\n(iv) Control and handling of noise in data, etc. 
Different approaches may implement differently based upon data consideration. Some algorithms require noise-free data. Most data sets contain exceptions, invalid or incomplete information lead to complication in the analysis process and some cases compromise the precision of the results.Complex Data:Real-world data is heterogeneous and it could be multimedia data containing images, audio and video, complex data, temporal data, spatial data, time series, natural language text etc. It is difficult to handle these various kinds of data and extract the required information. New tools and methodologies are developing to extract relevant information.(i) Complex data types: The database can include complex data elements, objects with graphical data, spatial data, and temporal data. Mining all these kinds of data is not practical to be done one device.(ii) Mining from Varied Sources:The data is gathered from different sources on Network. The data source may be of different kinds depending on how they are stored such as structured, semi-structured or unstructured.Performance:The performance of the data mining system depends on the efficiency of algorithms and techniques are using. The algorithms and techniques designed are not up to the mark lead to affect the performance of the data mining process.(i) Efficiency and Scalability of the Algorithms: The data mining algorithm must be efficient and scalable to extract information from huge amounts of data in the database.(ii) Improvement of Mining Algorithms: Factors such as the enormous size of the database, the entire data flow and the difficulty of data mining approaches inspire the creation of parallel & distributed data mining algorithms." }, { "code": null, "e": 3919, "s": 3539, "text": "Security and Social Challenges:Decision-Making strategies are done through data collection-sharing, so it requires considerable security. 
Private information about individuals and sensitive information are collected for customers profiles, user behaviour pattern understanding. Illegal access to information and the confidential nature of information becoming an important issue." }, { "code": null, "e": 4721, "s": 3919, "text": "User Interface:The knowledge discovered is discovered using data mining tools is useful only if it is interesting and above all understandable by the user. From good visualization interpretation of data, mining results can be eased and helps better understand their requirements. To obtain good visualization many research is carried out for big data sets that display and manipulate mined knowledge.(i) Mining based on Level of Abstraction: Data Mining process needs to be collaborative because it allows users to concentrate on pattern finding, presenting and optimizing requests for data mining based on returned results.(ii) Integration of Background Knowledge: Previous information may be used to express discovered patterns to direct the exploration processes and to express discovered patterns." }, { "code": null, "e": 5324, "s": 4721, "text": "Mining Methodology Challenges:These challenges are related to data mining approaches and their limitations. Mining approaches that cause the problem are:(i) Versatility of the mining approaches,\n(ii) Diversity of data available,\n(iii) Dimensionality of the domain,\n(iv) Control and handling of noise in data, etc. Different approaches may implement differently based upon data consideration. Some algorithms require noise-free data. Most data sets contain exceptions, invalid or incomplete information lead to complication in the analysis process and some cases compromise the precision of the results." }, { "code": null, "e": 5486, "s": 5324, "text": "(i) Versatility of the mining approaches,\n(ii) Diversity of data available,\n(iii) Dimensionality of the domain,\n(iv) Control and handling of noise in data, etc. 
" }, { "code": null, "e": 5775, "s": 5486, "text": "Different approaches may implement differently based upon data consideration. Some algorithms require noise-free data. Most data sets contain exceptions, invalid or incomplete information lead to complication in the analysis process and some cases compromise the precision of the results." }, { "code": null, "e": 6559, "s": 5775, "text": "Complex Data:Real-world data is heterogeneous and it could be multimedia data containing images, audio and video, complex data, temporal data, spatial data, time series, natural language text etc. It is difficult to handle these various kinds of data and extract the required information. New tools and methodologies are developing to extract relevant information.(i) Complex data types: The database can include complex data elements, objects with graphical data, spatial data, and temporal data. Mining all these kinds of data is not practical to be done one device.(ii) Mining from Varied Sources:The data is gathered from different sources on Network. The data source may be of different kinds depending on how they are stored such as structured, semi-structured or unstructured." }, { "code": null, "e": 7198, "s": 6559, "text": "Performance:The performance of the data mining system depends on the efficiency of algorithms and techniques are using. The algorithms and techniques designed are not up to the mark lead to affect the performance of the data mining process.(i) Efficiency and Scalability of the Algorithms: The data mining algorithm must be efficient and scalable to extract information from huge amounts of data in the database.(ii) Improvement of Mining Algorithms: Factors such as the enormous size of the database, the entire data flow and the difficulty of data mining approaches inspire the creation of parallel & distributed data mining algorithms." 
}, { "code": null, "e": 7207, "s": 7198, "text": "pcp21599" }, { "code": null, "e": 7219, "s": 7207, "text": "data mining" }, { "code": null, "e": 7224, "s": 7219, "text": "DBMS" }, { "code": null, "e": 7229, "s": 7224, "text": "DBMS" } ]
Java Thread Priority in Multithreading
25 Jun, 2022

As we already know, Java, being completely object-oriented, works within a multithreading environment in which the thread scheduler assigns the processor to a thread based on the thread's priority. Whenever we create a thread in Java, it always has some priority assigned to it. The priority can either be given by the JVM while creating the thread, or it can be given explicitly by the programmer.

Thread priority is a concept in which each thread carries a priority — in layman's language, every thread object has a priority — represented by a number ranging from 1 to 10.

The default priority is set to 5, as expected.
Minimum priority is set to 1.
Maximum priority is set to 10.

Here 3 constants are defined for this purpose, namely:

public static int NORM_PRIORITY
public static int MIN_PRIORITY
public static int MAX_PRIORITY

Let us discuss an example to see how the work gets executed internally. Here we will use the knowledge gathered above, as follows:

We will use the currentThread() method to get the name of the current thread. The user can also use the setName() method to give threads names of his or her choice, for understanding purposes. The getName() method will be used to get the name of a thread.

The accepted value of priority for a thread is in the range of 1 to 10.

Let us discuss how to get and set the priority of a thread in Java.

public final int getPriority(): The java.lang.Thread.getPriority() method returns the priority of the given thread.
public final void setPriority(int newPriority): The java.lang.Thread.setPriority() method changes the priority of the thread to the value newPriority.
This method throws an IllegalArgumentException if the value of the parameter newPriority goes beyond the minimum (1) or maximum (10) limit.

Example

Java

// Java Program to Illustrate Priorities in Multithreading
// via help of getPriority() and setPriority() method

// Importing required classes
import java.lang.*;

// Main class
class ThreadDemo extends Thread {

    // Method 1
    // run() method for the thread that is called
    // as soon as start() is invoked for thread in main()
    public void run()
    {
        // Print statement
        System.out.println("Inside run method");
    }

    // Main driver method
    public static void main(String[] args)
    {
        // Creating random threads
        // with the help of above class
        ThreadDemo t1 = new ThreadDemo();
        ThreadDemo t2 = new ThreadDemo();
        ThreadDemo t3 = new ThreadDemo();

        // Thread 1
        // Display the priority of above thread
        // using getPriority() method
        System.out.println("t1 thread priority : " + t1.getPriority());

        // Thread 2
        // Display the priority of above thread
        System.out.println("t2 thread priority : " + t2.getPriority());

        // Thread 3
        System.out.println("t3 thread priority : " + t3.getPriority());

        // Setting priorities of above threads by
        // passing integer arguments
        t1.setPriority(2);
        t2.setPriority(5);
        t3.setPriority(8);

        // t3.setPriority(21); would throw
        // IllegalArgumentException

        // 2
        System.out.println("t1 thread priority : " + t1.getPriority());

        // 5
        System.out.println("t2 thread priority : " + t2.getPriority());

        // 8
        System.out.println("t3 thread priority : " + t3.getPriority());

        // Main thread
        // Displays the name of
        // currently executing Thread
        System.out.println("Currently Executing Thread : "
                           + Thread.currentThread().getName());

        System.out.println("Main thread priority : "
                           + Thread.currentThread().getPriority());

        // Main thread priority is set to 10
        Thread.currentThread().setPriority(10);

        System.out.println("Main thread priority : "
                           + Thread.currentThread().getPriority());
    }
}

Output:

t1 thread priority : 5
t2 thread priority : 5
t3 thread priority : 5
t1 thread priority : 2
t2 thread priority : 5
t3 thread priority : 8
Currently Executing Thread : main
Main thread priority : 5
Main thread priority : 10

Output explanation:

The thread with the highest priority will get an execution chance prior to other threads. Suppose there are 3 threads t1, t2, and t3 with priorities 4, 6, and 1. Then thread t2 will execute first based on the maximum priority 6, after that t1 will execute, and then t3.
The default priority for the main thread is always 5; it can be changed later. The default priority for all other threads depends on the priority of the parent thread.

Now you must be wondering what will happen if we assign the same priority to threads. All the processing needed to look after threads is carried out by the thread scheduler. One can refer to the example below to see what happens if the priorities are set to the same value; afterwards we will discuss its output to get a better understanding, conceptually and practically.
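Before moving on to the next example: the priority constants and the 1 to 10 range discussed above can be checked with a small standalone snippet. This is an illustrative sketch written for this article (the class name PriorityConstantsDemo is not part of the original examples); it prints the three Thread constants and shows that setPriority() rejects an out-of-range value:

```java
// Demonstrates Thread.NORM_PRIORITY / MIN_PRIORITY / MAX_PRIORITY
// and the IllegalArgumentException thrown for an out-of-range value.
public class PriorityConstantsDemo {
    public static void main(String[] args) {
        // The three constants defined in java.lang.Thread
        System.out.println("NORM_PRIORITY = " + Thread.NORM_PRIORITY); // 5
        System.out.println("MIN_PRIORITY  = " + Thread.MIN_PRIORITY);  // 1
        System.out.println("MAX_PRIORITY  = " + Thread.MAX_PRIORITY);  // 10

        Thread t = new Thread();
        try {
            // 21 is outside the accepted 1..10 range, so this throws
            t.setPriority(21);
        } catch (IllegalArgumentException e) {
            System.out.println("Rejected out-of-range priority: " + e);
        }
    }
}
```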
Example

Java

// Java program to demonstrate that a child thread
// gets the same priority as its parent thread

// Importing all classes from java.lang package
import java.lang.*;

// Main class
// Extending Thread class
class GFG extends Thread {

    // Method 1
    // run() method for the thread that is
    // invoked as threads are started
    public void run()
    {
        // Print statement
        System.out.println("Inside run method");
    }

    // Method 2
    // Main driver method
    public static void main(String[] args)
    {
        // main thread priority is set to 6 now
        Thread.currentThread().setPriority(6);

        // The current thread is accessed using the
        // currentThread() method; print and display the
        // main thread priority using getPriority()
        System.out.println("main thread priority : "
                           + Thread.currentThread().getPriority());

        // Creating a thread by creating an object inside main()
        GFG t1 = new GFG();

        // t1 is a child of the main thread,
        // so the t1 thread will also have priority 6

        // Print and display the priority of that thread
        System.out.println("t1 thread priority : "
                           + t1.getPriority());
    }
}

Output:

main thread priority : 6
t1 thread priority : 6

Output explanation:

If two threads have the same priority, then we cannot predict which thread will execute first. It depends on the thread scheduler's algorithm (Round-Robin, First Come First Serve, etc.).
If we are using thread priority for thread scheduling, then we should always keep in mind that the underlying platform should provide support for scheduling based on thread priority.

All this processing is carried out with the help of the thread scheduler.

This article is contributed by Dharmesh Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected].
See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.

agarwaltushar35
1tejeswararaonuka
saibabaaj

Java-Multithreading
Java
Implementing interactive Online Shopping in C++
07 Jul, 2021

Online shopping is all about calculating the total amount for the items selected by the customer. In this article, we will discuss a menu-driven C++ program for Online Shopping.

Users will be able to purchase laptops, mobiles, and computer courses.
Users will be able to add items.
Users will be able to reduce the quantity or delete an item.
Users will be able to print the bill.

Firstly, a menu will be displayed to the customer. After the selection, all the products with their prices will be displayed. Then the customer selects the products and chooses the quantity (number of products). This process continues until the shopping is completed. Whenever the customer completes his shopping, the items, quantity, cost, and finally the total amount to be paid are displayed.

Below is the implementation of the above functionality:

C++

// C++ program to implement the program
// that illustrates Online shopping
#include <bits/stdc++.h>
#include <cstring>
#include <iostream>
#include <map>
using namespace std;

char c1, confirm_quantity;
float quantity;
int selectedNum;
double total_amount = 0;
int flag = 0;

// Stores items with their corresponding
// price
map<string, double> items = {
    { "Samsung", 15000 }, { "Redmi", 12000 },
    { "Apple", 100000 },  { "Macbook", 250000 },
    { "HP", 40000 },      { "Lenovo", 35000 },
    { "C", 1000 },        { "C++", 3000 },
    { "Java", 4000 },     { "Python", 3500 }
};

// Stores the selected items with
// their quantity
map<string, int> selected_items;

// Function to print the bill after shopping is
// completed: prints the items, quantity and their
// cost along with the total amount
void printBill(map<string, double> items,
               map<string, int> selected_items,
               float total_amount)
{
    cout << "Item Quantity Cost\n";
    for (auto j = selected_items.begin();
         j != selected_items.end(); j++) {
        cout << j->first << " ";
        cout << j->second << " ";
        cout << (selected_items[j->first])
                    * (items[j->first])
             << endl;
    }
    cout << "------------------------------------\n";
    cout << "Total amount: " << total_amount << endl;
    cout << "------------------------------------\n";
    cout << "*****THANK YOU && HAPPY ONLINE SHOPPING*****";
}

// Function to ask the basic details of
// any customer
void customerDetails()
{
    cout << "Enter your name: ";
    string customer_name;
    getline(cin, customer_name);
    cout << "WELCOME ";
    for (int i = 0; i < (int)customer_name.length(); i++) {
        cout << char(toupper(customer_name[i]));
    }
    cout << "\n";
}

// showMenu() prints the menu to the user
void showMenu()
{
    cout << "Menu\n";
    cout << "= = = = = = = = = = = = =\n";
    cout << "1.Mobile\n2.Laptop\n3.Computer courses\n";
    cout << "= = = = = = = = = = = = =\n";
}

// Function to display the mobile products
void showMobileMenu()
{
    cout << "- - - - - - - - - - - - -\nItem Cost\n";
    cout << "1.Samsung Rs.15, 000/-\n";
    cout << "2.Redmi Rs.12, 000/-\n";
    cout << "3.Apple Rs.1, 00, 000/-\n";
    cout << "- - - - - - - - - - - - -\n";
}

// Function to display laptop products
void showLaptopMenu()
{
    cout << "- - - - - - - - - - - - -\nItem Cost\n";
    cout << "1.Macbook Rs.2, 50, 000/-\n";
    cout << "2.HP Rs.40, 000/-\n";
    cout << "3.Lenovo Rs.35, 000/-\n";
    cout << "- - - - - - - - - - - - -\n";
}

// If the user selects computer courses,
// the course list will be displayed
void showComputerCourseMenu()
{
    cout << "- - - - - - - - - - - - -\nItem Cost\n";
    cout << "1.C Rs.1, 000/-\n";
    cout << "2.C++ Rs.3, 000/-\n";
    cout << "3.Java Rs.4, 000/-\n";
    cout << "4.Python Rs.3, 500/-\n";
    cout << "- - - - - - - - - - - - -\n";
}

// Function to handle the mobile category
void selectedMobile()
{
    cout << "Do you wish to continue?"
            "(for yes press (Y/y), if no press other letter): ";
    cin >> c1;
    if (c1 == 'Y' || c1 == 'y') {
        cout << "Enter respective number: ";
        cin >> selectedNum;
        if (selectedNum == 1 || selectedNum == 2
            || selectedNum == 3) {

            // Selected Samsung
            if (selectedNum == 1) {
                cout << "selected Samsung\n";
                do {
                    cout << "Quantity: ";
                    cin >> quantity;
                    cout << "You have selected Samsung - "
                         << quantity << endl;
                    cout << "Are you sure?"
                            "(for yes press (Y/y), if no press other letter): ";
                    cin >> confirm_quantity;
                } while ((confirm_quantity != 'y'
                          && confirm_quantity != 'Y')
                         || (quantity < 0)
                         || (ceil(quantity) != floor(quantity)));

                if (confirm_quantity == 'y'
                    || confirm_quantity == 'Y') {
                    total_amount += quantity * items["Samsung"];
                    selected_items["Samsung"] = quantity;
                    cout << "amount = " << total_amount << endl;
                }
            }

            // Selected Redmi
            if (selectedNum == 2) {
                cout << "selected Redmi\n";
                do {
                    cout << "Quantity: ";
                    cin >> quantity;
                    cout << "You have selected Redmi - "
                         << quantity << endl;
                    cout << "Are you sure?"
                            "(for yes press (Y/y), if no press other letter): ";
                    cin >> confirm_quantity;
                } while ((confirm_quantity != 'y'
                          && confirm_quantity != 'Y')
                         || (quantity < 0)
                         || (ceil(quantity) != floor(quantity)));

                if (confirm_quantity == 'y'
                    || confirm_quantity == 'Y') {
                    total_amount += quantity * items["Redmi"];
                    selected_items["Redmi"] = quantity;
                    cout << "amount = " << total_amount << endl;
                }
            }

            // Selected Apple
            if (selectedNum == 3) {
                cout << "You have selected Apple\n";
                do {
                    cout << "Quantity: ";
                    cin >> quantity;
                    cout << "You have selected Apple - "
                         << quantity << endl;
                    cout << "Are you sure?"
                            "(for yes press (Y/y), if no press other letter): ";
                    cin >> confirm_quantity;
                } while ((confirm_quantity != 'y'
                          && confirm_quantity != 'Y')
                         || (quantity < 0)
                         || (ceil(quantity) != floor(quantity)));

                if (confirm_quantity == 'y'
                    || confirm_quantity == 'Y') {
                    total_amount += quantity * items["Apple"];
                    selected_items["Apple"] = quantity;
                    cout << "amount = " << total_amount << endl;
                }
            }
        }
        else {
            flag = 1;
        }
    }
    else {
        flag = 1;
    }
}

// If the laptop category is selected
void selectedLaptop()
{
    cout << "Do you wish to continue?"
            "(for yes press (Y/y), if no press other letter): ";
    cin >> c1;
    if (c1 == 'Y' || c1 == 'y') {
        cout << "Enter respective number: ";
        cin >> selectedNum;
        if (selectedNum == 1 || selectedNum == 2
            || selectedNum == 3) {

            // Selected Macbook
            if (selectedNum == 1) {
                cout << "selected Macbook\n";
                do {
                    cout << "Quantity: ";
                    cin >> quantity;
                    cout << "You have selected Macbook - "
                         << quantity << endl;
                    cout << "Are you sure?"
                            "(for yes press (Y/y), if no press other letter): ";
                    cin >> confirm_quantity;
                } while ((confirm_quantity != 'y'
                          && confirm_quantity != 'Y')
                         || (quantity < 0)
                         || (ceil(quantity) != floor(quantity)));

                if (confirm_quantity == 'y'
                    || confirm_quantity == 'Y') {
                    total_amount += quantity * items["Macbook"];
                    selected_items["Macbook"] = quantity;
                    cout << "amount = " << total_amount << endl;
                }
            }

            // Selected HP
            if (selectedNum == 2) {
                cout << "selected HP\n";
                do {
                    cout << "Quantity: ";
                    cin >> quantity;
                    cout << "You have selected HP - "
                         << quantity << endl;
                    cout << "Are you sure?"
                            "(for yes press (Y/y), if no press other letter): ";
                    cin >> confirm_quantity;
                } while ((confirm_quantity != 'y'
                          && confirm_quantity != 'Y')
                         || (quantity < 0)
                         || (ceil(quantity) != floor(quantity)));

                if (confirm_quantity == 'y'
                    || confirm_quantity == 'Y') {
                    total_amount += quantity * items["HP"];
                    selected_items["HP"] = quantity;
                    cout << "amount = " << total_amount << endl;
                }
            }

            // Selected Lenovo
            if (selectedNum == 3) {
                cout << "selected Lenovo\n";
                do {
                    cout << "Quantity: ";
                    cin >> quantity;
                    cout << "You have selected Lenovo - "
                         << quantity << endl;
                    cout << "Are you sure?"
                            "(for yes press (Y/y), if no press other letter): ";
                    cin >> confirm_quantity;
                } while ((confirm_quantity != 'y'
                          && confirm_quantity != 'Y')
                         || (quantity < 0)
                         || (ceil(quantity) != floor(quantity)));

                if (confirm_quantity == 'y'
                    || confirm_quantity == 'Y') {
                    total_amount += quantity * items["Lenovo"];
                    selected_items["Lenovo"] = quantity;
                    cout << "amount = " << total_amount << endl;
                }
            }
        }
        else {
            flag = 1;
        }
    }
    else {
        flag = 1;
    }
}

// If the computer course category is selected
void selectedCourses()
{
    cout << "Do you wish to continue?"
            "(for yes press (Y/y), if no press other letter): ";
    cin >> c1;
    if (c1 == 'Y' || c1 == 'y') {
        cout << "Enter the respective number: ";
        cin >> selectedNum;
        if (selectedNum == 1 || selectedNum == 2
            || selectedNum == 3 || selectedNum == 4) {

            // selected C
            if (selectedNum == 1) {
                cout << "selected C Language course\n";
                total_amount += items["C"];
                selected_items["C"]++;
                cout << "amount = " << total_amount << endl;
            }

            // selected C++
            if (selectedNum == 2) {
                cout << "selected C++ Language course\n";
                total_amount += items["C++"];
                selected_items["C++"]++;
                cout << "amount = " << total_amount << endl;
            }

            // selected Java
            if (selectedNum == 3) {
                cout << "selected Java Language course\n";
                total_amount += items["Java"];
                selected_items["Java"]++;
                cout << "amount = " << total_amount << endl;
            }

            // selected Python
            if (selectedNum == 4) {
                cout << "selected Python Language course\n";
                total_amount += items["Python"];
                selected_items["Python"]++;
                cout << "amount = " << total_amount << endl;
            }
        }
        else {
            flag = 1;
        }
    }
    else {
        flag = 1;
    }
}

// Driver code
int main()
{
    // function call
    customerDetails();
    do {
        showMenu();
        cout << "Do you wish to continue?"
                "(for yes press (Y/y), if no press other letter): ";
        char c;
        cin >> c;
        if (c == 'Y' || c == 'y') {
            cout << "Enter respective number: ";
            int num;
            cin >> num;
            if (num == 1 || num == 2 || num == 3) {
                switch (num) {
                case 1:
                    // For Mobile
                    showMobileMenu();
                    selectedMobile();
                    break;
                case 2:
                    // For Laptop
                    showLaptopMenu();
                    selectedLaptop();
                    break;
                case 3:
                    // For computer courses
                    showComputerCourseMenu();
                    selectedCourses();
                    break;
                }
            }
            else {
                flag = 1;
            }
        }
        else {
            flag = 1;
        }
    } while (flag == 0);

    // print bill
    printBill(items, selected_items, total_amount);
}

Output: Let us suppose someone needs to buy 2 Redmi mobiles, 1 HP laptop, and a Java course.

Demonstration:

Step 1: Firstly, a map (say map<string, double> items) is constructed, which stores products with their costs.
Construct another map (say map<string, int> selected_items), which is used to push the selected items with their quantity. Then initialize total_amount (which stores the total amount) to 0, and initialize flag to 0. If wrong input is given by the customer, the flag changes to 1 and the program exits directly by printing the selected items with their prices and then the total amount.

Step 2: Ask for details, for example the name of the customer. In our code the customerDetails() function is constructed for this purpose. toupper() is used for converting all the characters of a string into uppercase.

Step 3: Display the menu to the user. The showMenu() function is created for this purpose.

Step 4: Ask the user whether he would like to continue. A do-while loop is used here; this loop continues until flag changes to 1. Whenever the flag changes to 1, the bill is printed directly.

If yes, he needs to enter Y/y; then ask the user to input the respective number from the menu. If the wrong number is entered, the flag changes to 1.
If the input is valid, show the products of the selected type and ask the user to input the respective number. If it is not valid, the flag changes to 1.
As there are many products, a switch case is used, where the parameter is the number (respective number of the item) entered by the user. Now the respective case gets executed.
Firstly, ask the quantity and then ask the user if he is sure about the quantity entered. If he is not sure, or if the quantity is not an integer, he will be asked again until both conditions are satisfied.
If he is sure about the quantity of the product selected, then that product along with its quantity gets pushed into the selected_items map.
This process goes on till the flag gets changed to 1.
Else, he can type any other letter except Y/y; then the flag changes to 1.
Below are some screenshots showing what the screen will look like when a particular selection is made by the user:

If mobile is selected, the Mobile Menu shown above is displayed.
If laptop is selected, the Laptop Menu is displayed.
If computer course is selected, the Computer Course Menu is displayed.

If the flag changed to 1, we print the bill using the printBill() function.

abhishek0719kadiyan

Project-Ideas
Technical Scripter 2020
C++
C++ Programs
Project
Technical Scripter
CPP
cout << \"selected Redmi\\n\"; do { cout << \"Quantity: \"; cin >> quantity; cout << \"You have selec\" << \"ted Redmi - \" << quantity << endl; cout << \"Are you sure?(f\" << \"or yes press (Y/y ), \" << \" if no press other letter ): \"; cin >> confirm_quantity; } while ((confirm_quantity != 'y' && confirm_quantity != 'Y') || (quantity < 0) || (ceil(quantity) != floor(quantity))); if (confirm_quantity == 'y' || confirm_quantity == 'Y') { total_amount += quantity * items[\"Redmi\"]; selected_items[\"Redmi\"] = quantity; cout << \"amount = \" << total_amount << endl; } } // Selected Apple if (selectedNum == 3) { cout << \"You have selected Apple\\n\"; do { cout << \"Quantity: \"; cin >> quantity; cout << \"You have selected\" << \" Apple - \" << quantity << endl; cout << \"Are you sure?\" << \"(for yes press (Y/y )\" << \", if no press other letter ): \"; cin >> confirm_quantity; } while ((confirm_quantity != 'y' && confirm_quantity != 'Y') || (quantity < 0) || (ceil(quantity) != floor(quantity))); if (confirm_quantity == 'y' || confirm_quantity == 'Y') { total_amount += quantity * items[\"Apple\"]; selected_items[\"Apple\"] = quantity; cout << \"amount = \" << total_amount << endl; } } } else { flag = 1; } } else { flag = 1; }} // If Laptop category is selectedvoid selectedLaptop(){ cout << \"Do you wish to continue?\" << \"(for yes press (Y/y ), \" << \"if no press other letter): \"; cin >> c1; if (c1 == 'Y' || c1 == 'y') { cout << \"Enter respective number: \"; cin >> selectedNum; if (selectedNum == 1 || selectedNum == 2 || selectedNum == 3) { // selected Macbook if (selectedNum == 1) { cout << \"selected Macbook\\n\"; do { cout << \"Quantity: \"; cin >> quantity; cout << \"You have selected\" << \" Macbook - \" << quantity << endl; cout << \"Are you sure?\" << \"(for yes press (Y/y ), \" << \" if no press other letter ): \"; cin >> confirm_quantity; } while ((confirm_quantity != 'y' && confirm_quantity != 'Y') || (quantity < 0) || (ceil(quantity) != 
floor(quantity))); if (confirm_quantity == 'y' || confirm_quantity == 'Y') { total_amount += quantity * items[\"Macbook\"]; selected_items[\"Macbook\"] = quantity; cout << \"amount = \" << total_amount << endl; } } // selected HP if (selectedNum == 2) { cout << \"selected HP\\n\"; do { cout << \"Quantity: \"; cin >> quantity; cout << \"You have selected\" << \" HP - \" << quantity << endl; cout << \"Are you sure?\" << \"(for yes press (Y/y ), \" << \" if no press other letter ): \"; cin >> confirm_quantity; } while ((confirm_quantity != 'y' && confirm_quantity != 'Y') || (quantity < 0) || (ceil(quantity) != floor(quantity))); if (confirm_quantity == 'y' || confirm_quantity == 'Y') { total_amount += quantity * items[\"HP\"]; selected_items[\"HP\"] = quantity; cout << \"amount = \" << total_amount << endl; } } // selected Lenovo if (selectedNum == 3) { cout << \"selected Lenovo\\n\"; do { cout << \"Quantity: \"; cin >> quantity; cout << \"You have selected\" \" Lenovo - \" << quantity << endl; cout << \"Are you sure?\" << \"(for yes press (Y/y ), \" << \"if no press other letter ): \"; cin >> confirm_quantity; } while ((confirm_quantity != 'y' && confirm_quantity != 'Y') || (quantity < 0) || (ceil(quantity) != floor(quantity))); if (confirm_quantity == 'y' || confirm_quantity == 'Y') { total_amount += quantity * items[\"Lenovo\"]; selected_items[\"Lenovo\"] = quantity; cout << \"amount = \" << total_amount << endl; } } } else { flag = 1; } } else { flag = 1; }} // If computer course// category is selectedvoid selectedCourses(){ cout << \"Do you wish to continue?\" << \"(for yes press (Y/y ), \" << \" if no press other letter ): \"; cin >> c1; if (c1 == 'Y' || c1 == 'y') { cout << \"Enter the respective number: \"; cin >> selectedNum; if (selectedNum == 1 || selectedNum == 2 || selectedNum == 3 || selectedNum == 4) { // selected C if (selectedNum == 1) { cout << \"selected C Language\" << \" course\\n\"; total_amount += items[\"C\"]; selected_items[\"C\"]++; cout << 
\"amount = \" << total_amount << endl; } // selected C++ if (selectedNum == 2) { cout << \"selected C++ Language course\\n\"; total_amount += items[\"C++\"]; selected_items[\"C++\"]++; cout << \"amount = \" << total_amount << endl; } // selected Java if (selectedNum == 3) { cout << \"selected Java Language course\\n\"; total_amount += items[\"Java\"]; selected_items[\"Java\"]++; cout << \"amount = \" << total_amount << endl; } // selected python if (selectedNum == 4) { cout << \"selected Python\" << \" Language course\\n\"; total_amount += items[\"Python\"]; selected_items[\"Python\"]++; cout << \"amount = \" << total_amount << endl; } } else { flag = 1; } } else { flag = 1; }} // Driver codeint main(){ // function call customerDetails(); do { showMenu(); cout << \"Do you wish to continue?\" << \"(for yes press (Y/y ), \" << \" if no press other letter ): \"; char c; cin >> c; if (c == 'Y' || c == 'y') { cout << \"Enter respective number: \"; int num; cin >> num; if (num == 1 || num == 2 || num == 3) { switch (num) { case 1: // For Mobile showMobileMenu(); selectedMobile(); break; case 2: // For Laptop showLaptopMenu(); selectedLaptop(); break; case 3: // For computer course showComputerCourseMenu(); selectedCourses(); break; } } else { flag = 1; } } else { flag = 1; } } while (flag == 0); // print bill printBill(items, selected_items, total_amount);}", "e": 15154, "s": 1084, "text": null }, { "code": null, "e": 15163, "s": 15154, "text": "Output: " }, { "code": null, "e": 15248, "s": 15163, "text": "Let us suppose, someone need to buy 2 Redmi mobiles, 1 HP Laptop and a Java course. " }, { "code": null, "e": 15263, "s": 15248, "text": "Video Output: " }, { "code": null, "e": 15278, "s": 15263, "text": "Demonstration:" }, { "code": null, "e": 15788, "s": 15278, "text": "Step 1: Firstly, a map(say map<string, long double> items ) is constructed, which stores products with their costs. 
Construct another map (say map<string, long double>selected_items ), which is used to push the selected items with their quantity. Then initiate total_amount(which stores the total amount) to 0. Use flag and initiate to 0. In case if wrong input is given by the customer, then the flag changes to 1 and gets exited directly by printing items with their prices and then print the total amount. " }, { "code": null, "e": 16004, "s": 15788, "text": "Step 2: Ask for details For example- the name of the customer. In our code customerDetails() function is constructed for this purpose. toupper() is used for converting all the characters of a string into uppercase. " }, { "code": null, "e": 16092, "s": 16004, "text": "Step 3: Display the menu to the user. showMenu() function is created for this purpose. " }, { "code": null, "e": 16278, "s": 16092, "text": "Step 4: Ask the user whether he likes to continue. Here do-while loop is used, this loop continues till flag changes to 1. Whenever the flag changes to 1, it directly prints the bill. " }, { "code": null, "e": 17231, "s": 16278, "text": "If yes, he needs to enter Y/y then ask the user to input the respective number from the menu. If the wrong number is entered then the flag changes to 1.If the input is valid, show the products of the selected type. Ask the user to input the respective number. If it is not valid, then the flag changes to 1.As there are many products, a switch case is used, where the parameter is the number(respective number of the item) entered by user.Now respective case gets executed. Firstly, ask the quantity and then ask the user if he is sure about the quantity entered. 
If he is not sure (or) if the quantity is not an integer, then he will be asked again till both conditions are satisfied.If he is sure about the quantity of the product selected, then that product along with its quantity gets pushed into the selected_items map.This process goes on till the flag gets changed to 1.Else, he can type any other letter except Y/y. Then the flag changes to 1." }, { "code": null, "e": 18110, "s": 17231, "text": "If yes, he needs to enter Y/y then ask the user to input the respective number from the menu. If the wrong number is entered then the flag changes to 1.If the input is valid, show the products of the selected type. Ask the user to input the respective number. If it is not valid, then the flag changes to 1.As there are many products, a switch case is used, where the parameter is the number(respective number of the item) entered by user.Now respective case gets executed. Firstly, ask the quantity and then ask the user if he is sure about the quantity entered. If he is not sure (or) if the quantity is not an integer, then he will be asked again till both conditions are satisfied.If he is sure about the quantity of the product selected, then that product along with its quantity gets pushed into the selected_items map.This process goes on till the flag gets changed to 1." }, { "code": null, "e": 18837, "s": 18110, "text": "If the input is valid, show the products of the selected type. Ask the user to input the respective number. If it is not valid, then the flag changes to 1.As there are many products, a switch case is used, where the parameter is the number(respective number of the item) entered by user.Now respective case gets executed. Firstly, ask the quantity and then ask the user if he is sure about the quantity entered. 
If he is not sure (or) if the quantity is not an integer, then he will be asked again till both conditions are satisfied.If he is sure about the quantity of the product selected, then that product along with its quantity gets pushed into the selected_items map.This process goes on till the flag gets changed to 1." }, { "code": null, "e": 18993, "s": 18837, "text": "If the input is valid, show the products of the selected type. Ask the user to input the respective number. If it is not valid, then the flag changes to 1." }, { "code": null, "e": 19126, "s": 18993, "text": "As there are many products, a switch case is used, where the parameter is the number(respective number of the item) entered by user." }, { "code": null, "e": 19373, "s": 19126, "text": "Now respective case gets executed. Firstly, ask the quantity and then ask the user if he is sure about the quantity entered. If he is not sure (or) if the quantity is not an integer, then he will be asked again till both conditions are satisfied." }, { "code": null, "e": 19514, "s": 19373, "text": "If he is sure about the quantity of the product selected, then that product along with its quantity gets pushed into the selected_items map." }, { "code": null, "e": 19568, "s": 19514, "text": "This process goes on till the flag gets changed to 1." }, { "code": null, "e": 19643, "s": 19568, "text": "Else, he can type any other letter except Y/y. Then the flag changes to 1." 
}, { "code": null, "e": 19760, "s": 19643, "text": "Below are some screenshots showing what will be the screen like, when the particular selection is done by the user- " }, { "code": null, "e": 19828, "s": 19760, "text": "If mobile is selected: The below screenshot shows the Mobile Menu: " }, { "code": null, "e": 19839, "s": 19828, "text": "<img src=\"" }, { "code": null, "e": 19908, "s": 19839, "text": "If laptop is selected: The below screenshot shows the Laptop Menu: " }, { "code": null, "e": 19990, "s": 19908, "text": "If computer course is selected: The below screen shows the Computer Course Menu: " }, { "code": null, "e": 20062, "s": 19990, "text": "If the flag changed to 1, we print the bill using printBill() function." }, { "code": null, "e": 20084, "s": 20064, "text": "abhishek0719kadiyan" }, { "code": null, "e": 20098, "s": 20084, "text": "Project-Ideas" }, { "code": null, "e": 20122, "s": 20098, "text": "Technical Scripter 2020" }, { "code": null, "e": 20126, "s": 20122, "text": "C++" }, { "code": null, "e": 20139, "s": 20126, "text": "C++ Programs" }, { "code": null, "e": 20147, "s": 20139, "text": "Project" }, { "code": null, "e": 20166, "s": 20147, "text": "Technical Scripter" }, { "code": null, "e": 20170, "s": 20166, "text": "CPP" }, { "code": null, "e": 20268, "s": 20170, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 20295, "s": 20268, "text": "Bitwise Operators in C/C++" }, { "code": null, "e": 20329, "s": 20295, "text": "vector erase() and clear() in C++" }, { "code": null, "e": 20346, "s": 20329, "text": "Substring in C++" }, { "code": null, "e": 20400, "s": 20346, "text": "Priority Queue in C++ Standard Template Library (STL)" }, { "code": null, "e": 20419, "s": 20400, "text": "Inheritance in C++" }, { "code": null, "e": 20454, "s": 20419, "text": "Header files in C/C++ and its uses" }, { "code": null, "e": 20488, "s": 20454, "text": "Sorting a Map by value in C++ STL" }, { "code": null, "e": 20532, "s": 20488, "text": "Program to print ASCII Value of a character" }, { "code": null, "e": 20591, "s": 20532, "text": "How to return multiple values from a function in C or C++?" } ]
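Stripped of the menu-driven I/O, the billing logic in the C++ shopping program above reduces to a price catalog plus a quantity map whose products are summed into the total. A minimal Python sketch of that core follows (the prices mirror the program's `items` map; the `bill_total` name and the sample order of 2 Redmi phones, 1 HP laptop and a Java course, taken from the article's demonstration, are ours for illustration):

```python
# Price catalog, mirroring the items map in the C++ program above
catalog = {
    "Samsung": 15000, "Redmi": 12000, "Apple": 100000,
    "Macbook": 250000, "HP": 40000, "Lenovo": 35000,
    "C": 1000, "C++": 3000, "Java": 4000, "Python": 3500,
}

def bill_total(selected):
    """Sum price * quantity over the selected items, as printBill() does."""
    return sum(catalog[item] * qty for item, qty in selected.items())

# The article's demo order: 2 Redmi mobiles, 1 HP laptop, 1 Java course
order = {"Redmi": 2, "HP": 1, "Java": 1}
total = bill_total(order)
```

Keeping the catalog in a single map, as the program does, means the bill printer never hard-codes a price; adding a product is one map entry.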
How to make Array.indexOf() case insensitive in JavaScript ?
31 Dec, 2019 The task is to make the Array.indexOf() method case insensitive. A few of the most common techniques are discussed below with the help of JavaScript. .toLowerCase() method .toUpperCase() method The .toLowerCase() method: Transform the search string and the elements of the array to lower case using the .toLowerCase() method, then perform a simple search. Note that findIndex() is used rather than indexOf() because it accepts a comparison callback. The example below illustrates the method. Example 1: <!DOCTYPE HTML><html> <head> <title> How to use Array.indexOf() case insensitive in JavaScript ? </title></head> <body style="text-align:center;"> <h1 style="color: green"> Geeksforgeeks </h1> <p id="GFG_UP" style= "font-size: 20px;font-weight: bold;"> </p> <button onclick="gfg_Run()"> Click Here </button> <p id="GFG_DOWN" style="color:green; font-size: 26px; font-weight: bold;"> </p> <script> var el_up = document.getElementById("GFG_UP"); var el_down = document.getElementById("GFG_DOWN"); var arr = ['GFG_1', 'geeks', 'Geeksforgeeks', 'GFG_2', 'gfg']; var el = 'gfg_1'; el_up.innerHTML = "Click on the button for " + "case-insensitive search. <br>Array = '" + arr + "'<br>Element = '" + el + "'"; function gfg_Run() { res = arr.findIndex(item => el.toLowerCase() === item.toLowerCase()); el_down.innerHTML = "The index of '" + el + "' is '" + res + "'."; } </script></body> </html> Output: The .toUpperCase() method: Transform the search string and the elements of the array to upper case using the .toUpperCase() method, then perform a simple search. The example below illustrates this method. Example 2: <!DOCTYPE HTML><html> <head> <title> How to use Array.indexOf() case insensitive in JavaScript ?
</title></head> <body style="text-align:center;"> <h1 style="color: green"> GeeksforGeeks </h1> <p id="GFG_UP" style="font-size:20px; font-weight: bold;"> </p> <button onclick="gfg_Run()"> Click Here </button> <p id="GFG_DOWN" style="color:green; font-size: 26px; font-weight: bold;"> </p> <script> var el_up = document.getElementById("GFG_UP"); var el_down = document.getElementById("GFG_DOWN"); var arr = ['GFG_1', 'geeks', 'GeeksforGeeks', 'GFG_2', 'gfg']; var el = 'gfg_1'; el_up.innerHTML = "Click on the button for " + "case-insensitive search. <br>Array = '" + arr + "'<br>Element = '" + el + "'"; function gfg_Run() { var res = arr.find(key => key.toUpperCase() === el.toUpperCase()) != undefined; if (res) { res = 'present'; } else { res = 'absent'; } el_down.innerHTML = "The element '" + el + "' is '" + res + "' in the array."; } </script></body> </html> Output: javascript-array JavaScript Web Technologies Web technologies Questions
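Both snippets rely on the same language-agnostic pattern: normalize both sides of the comparison before searching. As a cross-check, here is the equivalent case-insensitive index lookup as a small Python sketch (the `index_of_ci` name is ours, for illustration; it returns -1 when the element is absent, like indexOf()):

```python
def index_of_ci(arr, target):
    """Return the first index of target in arr, ignoring case; -1 if absent."""
    t = target.lower()  # normalize the search string once
    for i, item in enumerate(arr):
        if item.lower() == t:  # normalize each element before comparing
            return i
    return -1

# Same data as the examples above
arr = ['GFG_1', 'geeks', 'Geeksforgeeks', 'GFG_2', 'gfg']
result = index_of_ci(arr, 'gfg_1')
```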
[ { "code": null, "e": 28, "s": 0, "text": "\n31 Dec, 2019" }, { "code": null, "e": 170, "s": 28, "text": "The task is to make the Array.indexOf() method case insensitive. Here are a few of the most techniques discussed with the help of JavaScript." }, { "code": null, "e": 192, "s": 170, "text": ".toLowerCase() method" }, { "code": null, "e": 214, "s": 192, "text": ".toUpperCase() method" }, { "code": null, "e": 411, "s": 214, "text": "The .toLowerCase() method: Transform the search string and elements of the array to the lower case using .toLowerCase() method, then perform the simple search. Below example illustrate the method." }, { "code": null, "e": 422, "s": 411, "text": "Example 1:" }, { "code": "<!DOCTYPE HTML><html> <head> <title> How to use Array.indexOf() case insensitive in JavaScript ? </title></head> <body style=\"text-align:center;\"> <h1 style=\"color: green\"> Geeksforgeeks </h1> <p id=\"GFG_UP\" style= \"font-size: 20px;font-weight: bold;\"> </p> <button onclick=\"gfg_Run()\"> Click Here </button> <p id=\"GFG_DOWN\" style=\"color:green; font-size: 26px; font-weight: bold;\"> </p> <script> var el_up = document.getElementById(\"GFG_UP\"); var el_down = document.getElementById(\"GFG_DOWN\"); var arr = ['GFG_1', 'geeks', 'Geeksforgeeks', 'GFG_2', 'gfg']; var el = 'gfg_1'; el_up.innerHTML = \"Click on the button for \" + \"case-insensitive search. <br>Array = '\" + arr + \"'<br>Element = '\" + el + \"'\"; function gfg_Run() { res = arr.findIndex(item => el.toLowerCase() === item.toLowerCase()); el_down.innerHTML = \"The index of '\" + el + \"' is '\" + res + \"'.\"; } </script></body> </html>", "e": 1663, "s": 422, "text": null }, { "code": null, "e": 1671, "s": 1663, "text": "Output:" }, { "code": null, "e": 1870, "s": 1671, "text": "The .toUpperCase() method: Transform the search string and elements of the array to the upper case using .toUpperCase() method, then perform the simple search. Below example illustrates this method." 
}, { "code": null, "e": 1881, "s": 1870, "text": "Example 2:" }, { "code": "<!DOCTYPE HTML><html> <head> <title> How to use Array.indexOf() case insensitive in JavaScript ? </title></head> <body style=\"text-align:center;\"> <h1 style=\"color: green\"> GeeksforGeeks </h1> <p id=\"GFG_UP\" style=\"font-size:20px; font-weight: bold;\"> </p> <button onclick=\"gfg_Run()\"> Click Here </button> <p id=\"GFG_DOWN\" style=\"color:green; font-size: 26px; font-weight: bold;\"> </p> <script> var el_up = document.getElementById(\"GFG_UP\"); var el_down = document.getElementById(\"GFG_DOWN\"); var arr = ['GFG_1', 'geeks', 'GeeksforGeeks', 'GFG_2', 'gfg']; var el = 'gfg_1'; el_up.innerHTML = \"Click on the button for \" + \"case-insensitive search. <br>Array = '\" + arr + \"'<br>Element = '\" + el + \"'\"; function gfg_Run() { var res = arr.find(key => key.toUpperCase() === el.toUpperCase()) != undefined; if (res) { res = 'present'; } else { res = 'absent'; } el_down.innerHTML = \"The element '\" + el + \"' is '\" + res + \"' in the array.\"; } </script></body> </html>", "e": 3222, "s": 1881, "text": null }, { "code": null, "e": 3230, "s": 3222, "text": "Output:" }, { "code": null, "e": 3247, "s": 3230, "text": "javascript-array" }, { "code": null, "e": 3258, "s": 3247, "text": "JavaScript" }, { "code": null, "e": 3275, "s": 3258, "text": "Web Technologies" }, { "code": null, "e": 3302, "s": 3275, "text": "Web technologies Questions" }, { "code": null, "e": 3400, "s": 3302, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 3461, "s": 3400, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 3533, "s": 3461, "text": "Differences between Functional Components and Class Components in React" }, { "code": null, "e": 3573, "s": 3533, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 3614, "s": 3573, "text": "Difference Between PUT and PATCH Request" }, { "code": null, "e": 3666, "s": 3614, "text": "How to append HTML code to a div using JavaScript ?" }, { "code": null, "e": 3699, "s": 3666, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 3761, "s": 3699, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 3822, "s": 3761, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 3872, "s": 3822, "text": "How to insert spaces/tabs in text using HTML/CSS?" } ]
How to import JSON File in MongoDB using Python?
10 Jun, 2020 Prerequisites: MongoDB and Python, Working With JSON Data in Python MongoDB is a cross-platform, document-oriented, non-relational (i.e. NoSQL) database program. It is an open-source document database that stores data in the form of key-value pairs. JSON stands for JavaScript Object Notation. It is an open standard file format and data interchange format, with the extension ".json", that uses human-readable text to store and transmit data objects consisting of attribute-value pairs and array data types. To import a JSON file into MongoDB we first load or open the JSON file; after that we can easily insert it into the database or the collection. To load a JSON file we first import json in our code, after which we can open the file. Once the file is loaded we can insert it into the collection and operate on it. Let's see the example for a better understanding. Example: Sample JSON used:

import json
from pymongo import MongoClient

# Making Connection
myclient = MongoClient("mongodb://localhost:27017/")

# database
db = myclient["GFG"]

# Created or Switched to collection
# names: GeeksForGeeks
Collection = db["data"]

# Loading or Opening the json file
with open('data.json') as file:
    file_data = json.load(file)

# Inserting the loaded data in the Collection
# if the JSON contains more than one entry
# insert_many is used, else insert_one is used
if isinstance(file_data, list):
    Collection.insert_many(file_data)
else:
    Collection.insert_one(file_data)

Output: Python-mongoDB Python
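The only branching in the script above is the list-versus-single-document check on the result of json.load(). That branch can be exercised without a running MongoDB server by swapping in a stand-in collection; `FakeCollection` and `load_into` below are our illustrative names, while insert_one()/insert_many() match the call shape of PyMongo's real collection methods:

```python
import json

class FakeCollection:
    """Stand-in with PyMongo-style insert methods, for testing the branch."""
    def __init__(self):
        self.docs = []

    def insert_one(self, doc):
        self.docs.append(doc)

    def insert_many(self, docs):
        self.docs.extend(docs)

def load_into(collection, json_text):
    """Parse JSON text and insert it, choosing insert_many for a list."""
    data = json.loads(json_text)
    if isinstance(data, list):
        collection.insert_many(data)
    else:
        collection.insert_one(data)
    return len(collection.docs)
```

With a JSON array such as '[{"a": 1}, {"a": 2}]' the list branch fires and two documents land in the collection; a single JSON object takes the insert_one branch instead.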
[ { "code": null, "e": 52, "s": 24, "text": "\n10 Jun, 2020" }, { "code": null, "e": 121, "s": 52, "text": "Prerequisistes: MongoDB and Python, Working With JSON Data in Python" }, { "code": null, "e": 311, "s": 121, "text": "MongoDB is a cross-platform document-oriented and a non relational (i.e NoSQL) database program. It is an open-source document database, that stores the data in the form of key-value pairs." }, { "code": null, "e": 569, "s": 311, "text": "JSON stands for JavaScript Object Notation. It is an open standard file format, and data interchange format with an extension β€œ.json”, that uses human-readable text to store and transmit data objects consisting of attribute-value pairs and array data types." }, { "code": null, "e": 978, "s": 569, "text": "To import a JSON file in MongoDB we have to first load or open the JSON file after that we can easily insert that file into the database or the collection. To load a JSON file we have to first import json in our code after that we can open the JSON file. When our file gets loaded or opened we can easily insert it into the collection and operate on that file. Let’s see the example for better understanding." 
}, { "code": null, "e": 988, "s": 978, "text": "Example :" }, { "code": null, "e": 1006, "s": 988, "text": "Sample JSON used:" }, { "code": "import jsonfrom pymongo import MongoClient # Making Connectionmyclient = MongoClient(\"mongodb://localhost:27017/\") # database db = myclient[\"GFG\"] # Created or Switched to collection # names: GeeksForGeeksCollection = db[\"data\"] # Loading or Opening the json filewith open('data.json') as file: file_data = json.load(file) # Inserting the loaded data in the Collection# if JSON contains data more than one entry# insert_many is used else inser_one is usedif isinstance(file_data, list): Collection.insert_many(file_data) else: Collection.insert_one(file_data)", "e": 1591, "s": 1006, "text": null }, { "code": null, "e": 1599, "s": 1591, "text": "Output:" }, { "code": null, "e": 1614, "s": 1599, "text": "Python-mongoDB" }, { "code": null, "e": 1621, "s": 1614, "text": "Python" }, { "code": null, "e": 1719, "s": 1621, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 1751, "s": 1719, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 1778, "s": 1751, "text": "Python Classes and Objects" }, { "code": null, "e": 1809, "s": 1778, "text": "Python | os.path.join() method" }, { "code": null, "e": 1830, "s": 1809, "text": "Python OOPs Concepts" }, { "code": null, "e": 1886, "s": 1830, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 1909, "s": 1886, "text": "Introduction To PYTHON" }, { "code": null, "e": 1951, "s": 1909, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 1993, "s": 1951, "text": "Check if element exists in list in Python" }, { "code": null, "e": 2032, "s": 1993, "text": "Python | datetime.timedelta() function" } ]
Python | Pandas Series.duplicated()
13 Feb, 2019 Pandas series is a one-dimensional ndarray with axis labels. The labels need not be unique but must be a hashable type. The object supports both integer and label-based indexing and provides a host of methods for performing operations involving the index. The Pandas Series.duplicated() function indicates duplicate Series values. The duplicated values are indicated as True values in the resulting Series. Either all duplicates, all except the first, or all except the last occurrence of duplicates can be indicated. Syntax: Series.duplicated(keep='first') Parameter: keep : {'first', 'last', False}, default 'first' Returns : pandas.core.series.Series Example #1: Use the Series.duplicated() function to find the duplicate values in the given series object.

# importing pandas as pd
import pandas as pd

# Creating the Series
sr = pd.Series([80, 25, 3, 25, 24, 6])

# Create the Index
index_ = ['Coca Cola', 'Sprite', 'Coke', 'Fanta', 'Dew', 'ThumbsUp']

# set the index
sr.index = index_

# Print the series
print(sr)

Output : Now we will use the Series.duplicated() function to find the duplicate values in the underlying data of the given series object.

# detect duplicates
result = sr.duplicated()

# Print the result
print(result)

Output : As we can see in the output, the Series.duplicated() function has successfully detected the duplicated values in the given series object. False indicates that the corresponding value is unique, whereas True indicates that the corresponding value was a duplicated value in the given series object. Example #2: Use the Series.duplicated() function to find the duplicate values in the given series object.

# importing pandas as pd
import pandas as pd

# Creating the Series
sr = pd.Series([11, 11, 8, 18, 65, 18, 32, 10, 5, 32, 32])

# Create the Index
index_ = pd.date_range('2010-10-09', periods = 11, freq ='M')

# set the index
sr.index = index_

# Print the series
print(sr)

Output : Now we will use the Series.duplicated() function to find the duplicate values in the underlying data of the given series object.

# detect duplicates
result = sr.duplicated()

# Print the result
print(result)

Output : As we can see in the output, the Series.duplicated() function has successfully detected the duplicated values in the given series object. False indicates that the corresponding value is unique, whereas True indicates that the corresponding value was a duplicated value in the given series object. Python pandas-series Python pandas-series-methods Python-pandas Python
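The three keep options mentioned above ('first', 'last', and False) can be illustrated without pandas. The pure-Python sketch below is not pandas' implementation, just an illustration of the same flagging rules, checked against the data of Example #1:

```python
from collections import Counter

def duplicated(values, keep='first'):
    """Flag duplicates like Series.duplicated(): True marks a duplicate."""
    if keep == 'first':          # every occurrence after the first is True
        seen, out = set(), []
        for v in values:
            out.append(v in seen)
            seen.add(v)
        return out
    if keep == 'last':           # every occurrence before the last is True
        seen, out = set(), []
        for v in reversed(values):
            out.append(v in seen)
            seen.add(v)
        return out[::-1]
    counts = Counter(values)     # keep=False: flag all members of a group
    return [counts[v] > 1 for v in values]

# Data of Example #1: only the second 25 is flagged under keep='first'
flags = duplicated([80, 25, 3, 25, 24, 6])
```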
[ { "code": null, "e": 28, "s": 0, "text": "\n13 Feb, 2019" }, { "code": null, "e": 284, "s": 28, "text": "Pandas series is a One-dimensional ndarray with axis labels. The labels need not be unique but must be a hashable type. The object supports both integer and label-based indexing and provides a host of methods for performing operations involving the index." }, { "code": null, "e": 540, "s": 284, "text": "Pandas Series.duplicated() function indicate duplicate Series values. The duplicated values are indicated as True values in the resulting Series. Either all duplicates, all except the first or all except the last occurrence of duplicates can be indicated." }, { "code": null, "e": 580, "s": 540, "text": "Syntax: Series.duplicated(keep=’first’)" }, { "code": null, "e": 640, "s": 580, "text": "Parameter :keep : {β€˜first’, β€˜last’, False}, default β€˜first’" }, { "code": null, "e": 676, "s": 640, "text": "Returns : pandas.core.series.Series" }, { "code": null, "e": 778, "s": 676, "text": "Example #1: Use Series.duplicated() function to find the duplicate values in the given series object." }, { "code": "# importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series([80, 25, 3, 25, 24, 6]) # Create the Indexindex_ = ['Coca Cola', 'Sprite', 'Coke', 'Fanta', 'Dew', 'ThumbsUp'] # set the indexsr.index = index_ # Print the seriesprint(sr)", "e": 1034, "s": 778, "text": null }, { "code": null, "e": 1043, "s": 1034, "text": "Output :" }, { "code": null, "e": 1168, "s": 1043, "text": "Now we will use Series.duplicated() function to find the duplicate values in the underlying data of the given series object." 
}, { "code": "# detect duplicatesresult = sr.duplicated() # Print the resultprint(result)", "e": 1245, "s": 1168, "text": null }, { "code": null, "e": 1254, "s": 1245, "text": "Output :" }, { "code": null, "e": 1654, "s": 1254, "text": "As we can see in the output, the Series.duplicated() function has successfully detected the duplicated values in the given series object. False indicates that the corresponding value is unique whereas, True indicates that the corresponding value was a duplicated value in the given series object. Example #2 : Use Series.duplicated() function to find the duplicate values in the given series object." }, { "code": "# importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series([11, 11, 8, 18, 65, 18, 32, 10, 5, 32, 32]) # Create the Indexindex_ = pd.date_range('2010-10-09', periods = 11, freq ='M') # set the indexsr.index = index_ # Print the seriesprint(sr)", "e": 1923, "s": 1654, "text": null }, { "code": null, "e": 1932, "s": 1923, "text": "Output :" }, { "code": null, "e": 2057, "s": 1932, "text": "Now we will use Series.duplicated() function to find the duplicate values in the underlying data of the given series object." }, { "code": "# detect duplicatesresult = sr.duplicated() # Print the resultprint(result)", "e": 2134, "s": 2057, "text": null }, { "code": null, "e": 2439, "s": 2134, "text": "Output :As we can see in the output, the Series.duplicated() function has successfully detected the duplicated values in the given series object. False indicates that the corresponding value is unique whereas, True indicates that the corresponding value was a duplicated value in the given series object." }, { "code": null, "e": 2460, "s": 2439, "text": "Python pandas-series" }, { "code": null, "e": 2489, "s": 2460, "text": "Python pandas-series-methods" }, { "code": null, "e": 2503, "s": 2489, "text": "Python-pandas" }, { "code": null, "e": 2510, "s": 2503, "text": "Python" } ]
How to Generate a PDF file in Android App?
20 Jun, 2022 

There are many apps in which data from the app is provided to users in a downloadable PDF file format. In this case we have to create a PDF file from the data present inside our app and present that data properly. Using this technique we can easily create a new PDF according to our requirements. In this article, we will take a look at creating a new PDF file from the data present inside your Android app and saving that PDF file in the external storage of the users’ device. For generating the PDF we will use Canvas, a predefined class in Android used to make 2D drawings of different objects on the screen. We will draw our data on a canvas and then store that canvas in the form of a PDF. Now we will move towards the implementation of our project. 

Below is a sample GIF that shows what we are going to build in this article. Note that this application is built using the Java language. In this project we display a simple button; after clicking the button our PDF file is generated, and we can see this PDF file saved in our files. 

Step 1: Create a New Project

To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio. Note to select Java as the programming language.

Step 2: Working with the activity_main.xml file

Go to the activity_main.xml file and refer to the following code. Below is the code for the activity_main.xml file. 
XML

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <!--Button for generating the PDF file-->
    <Button
        android:id="@+id/idBtnGeneratePDF"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:text="Generate PDF" />

</RelativeLayout>

Step 3: Add permission for reading and writing in the External Storage

Navigate to the app > AndroidManifest.xml file and add the below permissions to it.

XML

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>

Step 4: Working with the MainActivity.java file

Go to the MainActivity.java file and refer to the following code. Below is the code for the MainActivity.java file. Comments are added inside the code to understand it in more detail.

Java

import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Typeface;
import android.graphics.pdf.PdfDocument;
import android.os.Bundle;
import android.os.Environment;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

import static android.Manifest.permission.READ_EXTERNAL_STORAGE;
import static android.Manifest.permission.WRITE_EXTERNAL_STORAGE;

public class MainActivity extends AppCompatActivity {

    // variables for our buttons.
    Button generatePDFbtn;

    // declaring width and height
    // for our PDF file.
    int pageHeight = 1120;
    int pagewidth = 792;

    // creating a bitmap variable
    // for storing our images
    Bitmap bmp, scaledbmp;

    // constant code for runtime permissions
    private static final int PERMISSION_REQUEST_CODE = 200;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // initializing our variables.
        generatePDFbtn = findViewById(R.id.idBtnGeneratePDF);
        bmp = BitmapFactory.decodeResource(getResources(), R.drawable.gfgimage);
        scaledbmp = Bitmap.createScaledBitmap(bmp, 140, 140, false);

        // below code is used for
        // checking our permissions.
        if (checkPermission()) {
            Toast.makeText(this, "Permission Granted", Toast.LENGTH_SHORT).show();
        } else {
            requestPermission();
        }

        generatePDFbtn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // calling method to
                // generate our PDF file.
                generatePDF();
            }
        });
    }

    private void generatePDF() {
        // creating an object variable
        // for our PDF document.
        PdfDocument pdfDocument = new PdfDocument();

        // two variables for paint "paint" is used
        // for drawing shapes and we will use "title"
        // for adding text in our PDF file.
        Paint paint = new Paint();
        Paint title = new Paint();

        // we are adding page info to our PDF file
        // in which we will be passing our pageWidth,
        // pageHeight and number of pages and after that
        // we are calling it to create our PDF.
        PdfDocument.PageInfo mypageInfo = new PdfDocument.PageInfo.Builder(pagewidth, pageHeight, 1).create();

        // below line is used for setting
        // start page for our PDF file.
        PdfDocument.Page myPage = pdfDocument.startPage(mypageInfo);

        // creating a variable for canvas
        // from our page of PDF.
        Canvas canvas = myPage.getCanvas();

        // below line is used to draw our image on our PDF file.
        // the first parameter of our drawbitmap method is
        // our bitmap
        // second parameter is position from left
        // third parameter is position from top and last
        // one is our variable for paint.
        canvas.drawBitmap(scaledbmp, 56, 40, paint);

        // below line is used for adding typeface for
        // our text which we will be adding in our PDF file.
        title.setTypeface(Typeface.create(Typeface.DEFAULT, Typeface.NORMAL));

        // below line is used for setting text size
        // which we will be displaying in our PDF file.
        title.setTextSize(15);

        // below line is used for setting color
        // of our text inside our PDF file.
        title.setColor(ContextCompat.getColor(this, R.color.purple_200));

        // below line is used to draw text in our PDF file.
        // the first parameter is our text, second parameter
        // is position from start, third parameter is position from top
        // and then we are passing our variable of paint which is title.
        canvas.drawText("A portal for IT professionals.", 209, 100, title);
        canvas.drawText("Geeks for Geeks", 209, 80, title);

        // similarly we are creating another text and in this
        // we are aligning this text to center of our PDF file.
        title.setTypeface(Typeface.defaultFromStyle(Typeface.NORMAL));
        title.setColor(ContextCompat.getColor(this, R.color.purple_200));
        title.setTextSize(15);

        // below line is used for setting
        // our text to center of PDF.
        title.setTextAlign(Paint.Align.CENTER);
        canvas.drawText("This is sample document which we have created.", 396, 560, title);

        // after adding all attributes to our
        // PDF file we will be finishing our page.
        pdfDocument.finishPage(myPage);

        // below line is used to set the name of
        // our PDF file and its path.
        File file = new File(Environment.getExternalStorageDirectory(), "GFG.pdf");

        try {
            // after creating a file name we will
            // write our PDF file to that location.
            pdfDocument.writeTo(new FileOutputStream(file));

            // below line is to print toast message
            // on completion of PDF generation.
            Toast.makeText(MainActivity.this, "PDF file generated successfully.", Toast.LENGTH_SHORT).show();
        } catch (IOException e) {
            // below line is used
            // to handle error
            e.printStackTrace();
        }

        // after storing our pdf to that
        // location we are closing our PDF file.
        pdfDocument.close();
    }

    private boolean checkPermission() {
        // checking of permissions.
        int permission1 = ContextCompat.checkSelfPermission(getApplicationContext(), WRITE_EXTERNAL_STORAGE);
        int permission2 = ContextCompat.checkSelfPermission(getApplicationContext(), READ_EXTERNAL_STORAGE);
        return permission1 == PackageManager.PERMISSION_GRANTED && permission2 == PackageManager.PERMISSION_GRANTED;
    }

    private void requestPermission() {
        // requesting permissions if not provided.
        ActivityCompat.requestPermissions(this, new String[]{WRITE_EXTERNAL_STORAGE, READ_EXTERNAL_STORAGE}, PERMISSION_REQUEST_CODE);
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        if (requestCode == PERMISSION_REQUEST_CODE) {
            if (grantResults.length > 0) {
                // after requesting permissions we are showing
                // users a toast message of permission granted.
                boolean writeStorage = grantResults[0] == PackageManager.PERMISSION_GRANTED;
                boolean readStorage = grantResults[1] == PackageManager.PERMISSION_GRANTED;
                if (writeStorage && readStorage) {
                    Toast.makeText(this, "Permission Granted..", Toast.LENGTH_SHORT).show();
                } else {
                    Toast.makeText(this, "Permission Denied.", Toast.LENGTH_SHORT).show();
                    finish();
                }
            }
        }
    }
}

Output:

gabaa406 germanshephered48 android Technical Scripter 2020 Android Java Technical Scripter Java Android
[ { "code": null, "e": 28, "s": 0, "text": "\n20 Jun, 2022" }, { "code": null, "e": 947, "s": 28, "text": "There are many apps in which data from the app is provided to users in the downloadable PDF file format. So in this case we have to create a PDF file from the data present inside our app and represent that data properly inside our app. So by using this technique we can easily create a new PDF according to our requirement. In this article, we will take a look at creating a new PDF file from the data present inside your Android app and saving that PDF file in the external storage of the users’ device. So for generating a new PDF file from the data present inside our Android app we will be using Canvas. Canvas is a predefined class in Android which is used to make 2D drawings of the different object on our screen. So in this article, we will be using canvas to draw our data inside our canvas, and then we will store that canvas in the form of a PDF. Now we will move towards the implementation of our project. " }, { "code": null, "e": 1266, "s": 947, "text": "Below is the sample GIF in which we will get to know what we are going to build in this article. Note that this application is built using Java language. In this project, we are going to display a simple button. After clicking the button our PDF file will be generated and we can see this PDF file saved in our files. " }, { "code": null, "e": 1295, "s": 1266, "text": "Step 1: Create a New Project" }, { "code": null, "e": 1457, "s": 1295, "text": "To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio. Note that select Java as the programming language." }, { "code": null, "e": 1505, "s": 1457, "text": "Step 2: Working with the activity_main.xml file" }, { "code": null, "e": 1621, "s": 1505, "text": "Go to the activity_main.xml file and refer to the following code. Below is the code for the activity_main.xml file." 
}, { "code": null, "e": 1625, "s": 1621, "text": "XML" }, { "code": "<?xml version=\"1.0\" encoding=\"utf-8\"?><RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\" android:layout_height=\"match_parent\" tools:context=\".MainActivity\"> <!--Button for generating the PDF file--> <Button android:id=\"@+id/idBtnGeneratePDF\" android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:layout_centerInParent=\"true\" android:text=\"Generate PDF\" /> </RelativeLayout>", "e": 2194, "s": 1625, "text": null }, { "code": null, "e": 2266, "s": 2194, "text": " Step 3: Add permission for reading and writing in the External Storage" }, { "code": null, "e": 2352, "s": 2266, "text": "Navigate to the app > AndroifManifest.xml file and add the below permissions to it. " }, { "code": null, "e": 2356, "s": 2352, "text": "XML" }, { "code": "<uses-permission android:name=\"android.permission.WRITE_EXTERNAL_STORAGE\"/><uses-permission android:name=\"android.permission.READ_EXTERNAL_STORAGE\"/>", "e": 2506, "s": 2356, "text": null }, { "code": null, "e": 2555, "s": 2506, "text": " Step 4: Working with the MainActivity.java file" }, { "code": null, "e": 2746, "s": 2555, "text": "Go to the MainActivity.java file and refer to the following code. Below is the code for the MainActivity.java file. Comments are added inside the code to understand the code in more detail. 
" }, { "code": null, "e": 2751, "s": 2746, "text": "Java" }, { "code": "import android.content.pm.PackageManager;import android.graphics.Bitmap;import android.graphics.BitmapFactory;import android.graphics.Canvas;import android.graphics.Paint;import android.graphics.Typeface;import android.graphics.pdf.PdfDocument;import android.os.Bundle;import android.os.Environment;import android.view.View;import android.widget.Button;import android.widget.Toast; import androidx.annotation.NonNull;import androidx.appcompat.app.AppCompatActivity;import androidx.core.app.ActivityCompat;import androidx.core.content.ContextCompat; import java.io.File;import java.io.FileOutputStream;import java.io.IOException; import static android.Manifest.permission.READ_EXTERNAL_STORAGE;import static android.Manifest.permission.WRITE_EXTERNAL_STORAGE; public class MainActivity extends AppCompatActivity { // variables for our buttons. Button generatePDFbtn; // declaring width and height // for our PDF file. int pageHeight = 1120; int pagewidth = 792; // creating a bitmap variable // for storing our images Bitmap bmp, scaledbmp; // constant code for runtime permissions private static final int PERMISSION_REQUEST_CODE = 200; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // initializing our variables. generatePDFbtn = findViewById(R.id.idBtnGeneratePDF); bmp = BitmapFactory.decodeResource(getResources(), R.drawable.gfgimage); scaledbmp = Bitmap.createScaledBitmap(bmp, 140, 140, false); // below code is used for // checking our permissions. if (checkPermission()) { Toast.makeText(this, \"Permission Granted\", Toast.LENGTH_SHORT).show(); } else { requestPermission(); } generatePDFbtn.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // calling method to // generate our PDF file. 
generatePDF(); } }); } private void generatePDF() { // creating an object variable // for our PDF document. PdfDocument pdfDocument = new PdfDocument(); // two variables for paint \"paint\" is used // for drawing shapes and we will use \"title\" // for adding text in our PDF file. Paint paint = new Paint(); Paint title = new Paint(); // we are adding page info to our PDF file // in which we will be passing our pageWidth, // pageHeight and number of pages and after that // we are calling it to create our PDF. PdfDocument.PageInfo mypageInfo = new PdfDocument.PageInfo.Builder(pagewidth, pageHeight, 1).create(); // below line is used for setting // start page for our PDF file. PdfDocument.Page myPage = pdfDocument.startPage(mypageInfo); // creating a variable for canvas // from our page of PDF. Canvas canvas = myPage.getCanvas(); // below line is used to draw our image on our PDF file. // the first parameter of our drawbitmap method is // our bitmap // second parameter is position from left // third parameter is position from top and last // one is our variable for paint. canvas.drawBitmap(scaledbmp, 56, 40, paint); // below line is used for adding typeface for // our text which we will be adding in our PDF file. title.setTypeface(Typeface.create(Typeface.DEFAULT, Typeface.NORMAL)); // below line is used for setting text size // which we will be displaying in our PDF file. title.setTextSize(15); // below line is sued for setting color // of our text inside our PDF file. title.setColor(ContextCompat.getColor(this, R.color.purple_200)); // below line is used to draw text in our PDF file. // the first parameter is our text, second parameter // is position from start, third parameter is position from top // and then we are passing our variable of paint which is title. 
canvas.drawText(\"A portal for IT professionals.\", 209, 100, title); canvas.drawText(\"Geeks for Geeks\", 209, 80, title); // similarly we are creating another text and in this // we are aligning this text to center of our PDF file. title.setTypeface(Typeface.defaultFromStyle(Typeface.NORMAL)); title.setColor(ContextCompat.getColor(this, R.color.purple_200)); title.setTextSize(15); // below line is used for setting // our text to center of PDF. title.setTextAlign(Paint.Align.CENTER); canvas.drawText(\"This is sample document which we have created.\", 396, 560, title); // after adding all attributes to our // PDF file we will be finishing our page. pdfDocument.finishPage(myPage); // below line is used to set the name of // our PDF file and its path. File file = new File(Environment.getExternalStorageDirectory(), \"GFG.pdf\"); try { // after creating a file name we will // write our PDF file to that location. pdfDocument.writeTo(new FileOutputStream(file)); // below line is to print toast message // on completion of PDF generation. Toast.makeText(MainActivity.this, \"PDF file generated successfully.\", Toast.LENGTH_SHORT).show(); } catch (IOException e) { // below line is used // to handle error e.printStackTrace(); } // after storing our pdf to that // location we are closing our PDF file. pdfDocument.close(); } private boolean checkPermission() { // checking of permissions. int permission1 = ContextCompat.checkSelfPermission(getApplicationContext(), WRITE_EXTERNAL_STORAGE); int permission2 = ContextCompat.checkSelfPermission(getApplicationContext(), READ_EXTERNAL_STORAGE); return permission1 == PackageManager.PERMISSION_GRANTED && permission2 == PackageManager.PERMISSION_GRANTED; } private void requestPermission() { // requesting permissions if not provided. 
ActivityCompat.requestPermissions(this, new String[]{WRITE_EXTERNAL_STORAGE, READ_EXTERNAL_STORAGE}, PERMISSION_REQUEST_CODE); } @Override public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) { if (requestCode == PERMISSION_REQUEST_CODE) { if (grantResults.length > 0) { // after requesting permissions we are showing // users a toast message of permission granted. boolean writeStorage = grantResults[0] == PackageManager.PERMISSION_GRANTED; boolean readStorage = grantResults[1] == PackageManager.PERMISSION_GRANTED; if (writeStorage && readStorage) { Toast.makeText(this, \"Permission Granted..\", Toast.LENGTH_SHORT).show(); } else { Toast.makeText(this, \"Permission Denied.\", Toast.LENGTH_SHORT).show(); finish(); } } } }}", "e": 10134, "s": 2751, "text": null }, { "code": null, "e": 10142, "s": 10134, "text": "Output:" }, { "code": null, "e": 10153, "s": 10144, "text": "gabaa406" }, { "code": null, "e": 10171, "s": 10153, "text": "germanshephered48" }, { "code": null, "e": 10179, "s": 10171, "text": "android" }, { "code": null, "e": 10203, "s": 10179, "text": "Technical Scripter 2020" }, { "code": null, "e": 10211, "s": 10203, "text": "Android" }, { "code": null, "e": 10216, "s": 10211, "text": "Java" }, { "code": null, "e": 10235, "s": 10216, "text": "Technical Scripter" }, { "code": null, "e": 10240, "s": 10235, "text": "Java" }, { "code": null, "e": 10248, "s": 10240, "text": "Android" } ]
Word Embeddings in NLP
26 May, 2022 

What are Word Embeddings? 

It is an approach for representing words and documents. A Word Embedding or Word Vector is a numeric vector input that represents a word in a lower-dimensional space. It allows words with similar meaning to have a similar representation. They can also approximate meaning. A word vector with 50 values can represent 50 unique features. 

Features: Anything that relates words to one another. Eg: Age, Sports, Fitness, Employed etc. Each word vector has values corresponding to these features. 

Goal of Word Embeddings 

To reduce dimensionality
To use a word to predict the words around it
Inter-word semantics must be captured

How are Word Embeddings used? 

They are used as input to machine learning models. Take the words -> give their numeric representation -> use in training or inference. 

To represent or visualize any underlying patterns of usage in the corpus that was used to train them. 

Implementations of Word Embeddings: 

Word Embeddings are a method of extracting features out of text so that we can input those features into a machine learning model to work with text data. They try to preserve syntactical and semantic information. Methods such as Bag of Words (BOW), CountVectorizer and TFIDF rely on the word count in a sentence but do not save any syntactical or semantic information. In these algorithms, the size of the vector is the number of elements in the vocabulary. We can get a sparse matrix if most of the elements are zero. Large input vectors will mean a huge number of weights, which results in high computation required for training. Word Embeddings give a solution to these problems. 

Let’s take an example to understand how a word vector is generated by taking emoticons which are most frequently used in certain conditions and transforming each emoji into a vector, where the conditions will be our features. 

The emoji vectors for the emojis will be:
          [happy,sad,excited,sick]
???? =[1,0,1,0]
???? =[0,1,0,1]
???? =[0,0,1,1]
.....
In a similar way, we can create word vectors for different words as well on the basis of the given features. The words with similar vectors are most likely to have the same meaning or are used to convey the same sentiment. 

In this article we will be discussing two different approaches to get Word Embeddings: Word2Vec and GloVe. 

Word2Vec: In Word2Vec every word is assigned a vector. We start with either a random vector or a one-hot vector. 

One-Hot vector: A representation where only one bit in a vector is 1. If there are 500 words in the corpus then the vector length will be 500. After assigning vectors to each word we take a window size and iterate through the entire corpus. While we do this, there are two neural embedding methods which are used: 

CBOW (Continuous Bag of Words): In this model we try to fit the neighboring words in the window to the central word. 

Skip-gram: In this model, we try to make the central word closer to the neighboring words. It is the complete opposite of the CBOW model, and it is shown that this method produces more meaningful embeddings. 

After applying the above neural embedding methods we get trained vectors for each word after many iterations through the corpus. These trained vectors preserve syntactical or semantic information and have a lower dimension. The vectors with similar meaning or semantic information are placed close to each other in space. 

GloVe (Global Vectors): This is another method for creating word embeddings. In this method, we take the corpus, iterate through it and get the co-occurrence of each word with the other words in the corpus. We get a co-occurrence matrix through this. The words which occur next to each other get a value of 1, if they are one word apart then 1/2, if two words apart then 1/3 and so on. 

Let us take an example to understand how the matrix is created. We have a small corpus: 

Corpus:
It is a nice evening.
Good Evening!
Is it a nice evening? 

The upper half of the matrix will be a reflection of the lower half. 
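A minimal sketch of building such a weighted co-occurrence matrix from the corpus above. The window size of 3 is an assumption made here for illustration; words at distance d within the window contribute 1/d, matching the 1, 1/2, 1/3 weighting described.

```python
from collections import defaultdict

def cooccurrence(sentences, window=3):
    # Words at distance d (up to the window size) add 1/d to the pair's count.
    counts = defaultdict(float)
    for sent in sentences:
        tokens = [t.strip(".!?") for t in sent.lower().split()]
        for i, w in enumerate(tokens):
            for d in range(1, window + 1):
                if i + d < len(tokens):
                    # store each unordered pair once (matrix is symmetric)
                    pair = tuple(sorted((w, tokens[i + d])))
                    counts[pair] += 1.0 / d
    return dict(counts)

corpus = ["It is a nice evening.", "Good Evening!", "Is it a nice evening?"]
cooc = cooccurrence(corpus)
print(cooc[("evening", "nice")])   # adjacent in two sentences -> 2.0
```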
We can consider a window frame as well to calculate the co-occurrences by shifting the frame till the end of the corpus. This helps gather information about the context in which the word is used. 

Initially, the vectors for each word are assigned randomly. Then we take pairs of vectors and see how close they are to each other in space. If they occur together more often, i.e. have a higher value in the co-occurrence matrix, but are far apart in space, then they are brought close to each other. If they are close to each other but are rarely or not frequently used together, then they are moved further apart in space. 

After many iterations of the above process, we’ll get a vector space representation that approximates the information from the co-occurrence matrix. The performance of GloVe is better than Word2Vec in terms of both semantic and syntactic capturing. 

Pre-trained Word Embedding Models: 

People generally use pre-trained models for word embeddings. A few of them are: 

SpaCy
fastText
Flair etc. 

Common Errors made: 

You need to use the exact same pipeline when deploying your model as was used to create the training data for the word embedding. If you use a different tokenizer or a different method of handling white space, punctuation etc. you might end up with incompatible inputs. 

Words in your input that don’t have a pre-trained vector are known as Out of Vocabulary Words (OOV). What you can do is replace those words with "UNK", which means unknown, and then handle them separately. 

Dimension mismatch: Vectors can be of many lengths. If you train a model with vectors of length say 400 and then try to apply vectors of length 1000 at inference time, you will run into errors. So make sure to use the same dimensions throughout. 
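A minimal sketch of the "UNK" fallback just described; the toy vocabulary and the 2-dimensional vector values here are invented purely for illustration.

```python
# Toy embedding table; the words and vectors are made-up examples.
embeddings = {
    "nice":    [0.2, 0.7],
    "evening": [0.9, 0.1],
    "UNK":     [0.0, 0.0],   # shared vector for out-of-vocabulary words
}

def lookup(tokens, table):
    # Any token missing from the table falls back to the "UNK" vector.
    return [table.get(t, table["UNK"]) for t in tokens]

print(lookup(["nice", "sunset"], embeddings))
# "sunset" is out of vocabulary, so it maps to the UNK vector [0.0, 0.0]
```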
Benefits of using Word Embeddings: 

It is much faster to train than hand-built models like WordNet (which uses graph embeddings)
Almost all modern NLP applications start with an embedding layer
It stores an approximation of meaning

Drawbacks of Word Embeddings: 

It can be memory intensive
It is corpus dependent. Any underlying bias will have an effect on your model
It cannot distinguish between homophones. Eg: brake/break, cell/sell, weather/whether etc. 

nikhatkhan11 Natural-language-processing Machine Learning Machine Learning
[ { "code": null, "e": 52, "s": 24, "text": "\n26 May, 2022" }, { "code": null, "e": 78, "s": 52, "text": "What are Word Embeddings?" }, { "code": null, "e": 412, "s": 78, "text": "It is an approach for representing words and documents. Word Embedding or Word Vector is a numeric vector input that represents a word in a lower-dimensional space. It allows words with similar meaning to have a similar representation. They can also approximate meaning. A word vector with 50 values can represent 50 unique features." }, { "code": null, "e": 567, "s": 412, "text": "Features: Anything that relates words to one another. Eg: Age, Sports, Fitness, Employed etc. Each word vector has values corresponding to these features." }, { "code": null, "e": 591, "s": 567, "text": "Goal of Word Embeddings" }, { "code": null, "e": 616, "s": 591, "text": "To reduce dimensionality" }, { "code": null, "e": 661, "s": 616, "text": "To use a word to predict the words around it" }, { "code": null, "e": 699, "s": 661, "text": "Inter word semantics must be captured" }, { "code": null, "e": 729, "s": 699, "text": "How are Word Embeddings used?" }, { "code": null, "e": 865, "s": 729, "text": "They are used as input to machine learning models.Take the words β€”-> Give their numeric representation β€”-> Use in training or inference" }, { "code": null, "e": 967, "s": 865, "text": "To represent or visualize any underlying patterns of usage in the corpus that was used to train them." }, { "code": null, "e": 1003, "s": 967, "text": "Implementations of Word Embeddings:" }, { "code": null, "e": 1692, "s": 1003, "text": "Word Embeddings are a method of extracting features out of text so that we can input those features into a machine learning model to work with text data. They try to preserve syntactical and semantic information. The methods such as Bag of Words(BOW), CountVectorizer and TFIDF rely on the word count in a sentence but do not save any syntactical or semantic information. 
In these algorithms, the size of the vector is the number of elements in the vocabulary. We can get a sparse matrix if most of the elements are zero. Large input vectors will mean a huge number of weights which will result in high computation required for training. Word Embeddings give a solution to these problems." }, { "code": null, "e": 1910, "s": 1692, "text": "Let’s take an example to understand how word vector is generated by taking emoticons which are most frequently used in certain conditions and transform each emoji into a vector and the conditions will be our features." }, { "code": null, "e": 2036, "s": 1910, "text": "The emoji vectors for the emojis will be:\n [happy,sad,excited,sick]\n???? =[1,0,1,0]\n???? =[0,1,0,1]\n???? =[0,0,1,1]\n....." }, { "code": null, "e": 2255, "s": 2036, "text": "In a similar way, we can create word vectors for different words as well on the basis of given features. The words with similar vectors are most likely to have the same meaning or are used to convey the same sentiment." }, { "code": null, "e": 2342, "s": 2255, "text": "In this article we will be discussing two different approaches to get Word Embeddings:" }, { "code": null, "e": 2443, "s": 2342, "text": "In Word2Vec every word is assigned a vector. We start with either a random vector or one-hot vector." }, { "code": null, "e": 2755, "s": 2443, "text": "One-Hot vector: A representation where only one bit in a vector is 1.If there are 500 words in the corpus then the vector length will be 500. After assigning vectors to each word we take a window size and iterate through the entire corpus. While we do this there are two neural embedding methods which are used:" }, { "code": null, "e": 2854, "s": 2755, "text": "In this model what we do is we try to fit the neighboring words in the window to the central word." }, { "code": null, "e": 3047, "s": 2854, "text": "In this model, we try to make the central word closer to the neighboring words. 
It is the complete opposite of the CBOW model. It is shown that this method produces more meaningful embeddings." }, { "code": null, "e": 3379, "s": 3047, "text": "After applying the above neural embedding methods we get trained vectors of each word after many iterations through the corpus. These trained vectors preserve syntactical or semantic information and are converted to lower dimensions. The vectors with similar meaning or semantic information are placed close to each other in space." }, { "code": null, "e": 3740, "s": 3379, "text": "This is another method for creating word embeddings. In this method, we take the corpus and iterate through it and get the co-occurrence of each word with other words in the corpus. We get a co-occurrence matrix through this. The words which occur next to each other get a value of 1, if they are one word apart then 1/2, if two words apart then 1/3 and so on." }, { "code": null, "e": 3828, "s": 3740, "text": "Let us take an example to understand how the matrix is created. We have a small corpus:" }, { "code": null, "e": 3894, "s": 3828, "text": "Corpus:\nIt is a nice evening.\nGood Evening!\nIs it a nice evening?" }, { "code": null, "e": 4159, "s": 3894, "text": "The upper half of the matrix will be a reflection of the lower half. We can consider a window frame as well to calculate the co-occurrences by shifting the frame till the end of the corpus. This helps gather information about the context in which the word is used." }, { "code": null, "e": 4581, "s": 4159, "text": "Initially, the vectors for each word is assigned randomly. Then we take two pairs of vectors and see how close they are to each other in space. If they occur together more often or have a higher value in the co-occurrence matrix and are far apart in space then they are brought close to each other. If they are close to each other but are rarely or not frequently used together then they are moved further apart in space." 
}, { "code": null, "e": 4830, "s": 4581, "text": "After many iterations of the above process, we’ll get a vector space representation that approximates the information from the co-occurrence matrix. The performance of GloVe is better than Word2Vec in terms of both semantic and syntactic capturing." }, { "code": null, "e": 4865, "s": 4830, "text": "Pre-trained Word Embedding Models:" }, { "code": null, "e": 4943, "s": 4865, "text": "People generally use pre-trained models for word embeddings. Few of them are:" }, { "code": null, "e": 4949, "s": 4943, "text": "SpaCy" }, { "code": null, "e": 4958, "s": 4949, "text": "fastText" }, { "code": null, "e": 4969, "s": 4958, "text": "Flair etc." }, { "code": null, "e": 4989, "s": 4969, "text": "Common Errors made:" }, { "code": null, "e": 5260, "s": 4989, "text": "You need to use the exact same pipeline during deploying your model as were used to create the training data for the word embedding. If you use a different tokenizer or different method of handling white space, punctuation etc. you might end up with incompatible inputs." }, { "code": null, "e": 5476, "s": 5260, "text": "Words in your input that doesn’t have a pre-trained vector. Such words are known as Out of Vocabulary Word(oov). What you can do is replace those words with β€œUNK” which means unknown and then handle them separately." }, { "code": null, "e": 5723, "s": 5476, "text": "Dimension mis-match: Vectors can be of many lengths. If you train a model with vectors of length say 400 and then try to apply vectors of length 1000 at inference time, you will run into errors. So make sure to use the same dimensions throughout." 
}, { "code": null, "e": 5758, "s": 5723, "text": "Benefits of using Word Embeddings:" }, { "code": null, "e": 5850, "s": 5758, "text": "It is much faster to train than hand build models like WordNet(which uses graph embeddings)" }, { "code": null, "e": 5915, "s": 5850, "text": "Almost all modern NLP applications start with an embedding layer" }, { "code": null, "e": 5953, "s": 5915, "text": "It Stores an approximation of meaning" }, { "code": null, "e": 5983, "s": 5953, "text": "Drawbacks of Word Embeddings:" }, { "code": null, "e": 6010, "s": 5983, "text": "It can be memory intensive" }, { "code": null, "e": 6088, "s": 6010, "text": "It is corpus dependent. Any underlying bias will have an effect on your model" }, { "code": null, "e": 6179, "s": 6088, "text": "It cannot distinguish between homophones. Eg: brake/break, cell/sell, weather/whether etc." }, { "code": null, "e": 6192, "s": 6179, "text": "nikhatkhan11" }, { "code": null, "e": 6220, "s": 6192, "text": "Natural-language-processing" }, { "code": null, "e": 6237, "s": 6220, "text": "Machine Learning" }, { "code": null, "e": 6254, "s": 6237, "text": "Machine Learning" }, { "code": null, "e": 6352, "s": 6254, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 6385, "s": 6352, "text": "Support Vector Machine Algorithm" }, { "code": null, "e": 6426, "s": 6385, "text": "Introduction to Recurrent Neural Network" }, { "code": null, "e": 6462, "s": 6426, "text": "ML | Monte Carlo Tree Search (MCTS)" }, { "code": null, "e": 6486, "s": 6462, "text": "Markov Decision Process" }, { "code": null, "e": 6537, "s": 6486, "text": "DBSCAN Clustering in ML | Density based clustering" }, { "code": null, "e": 6570, "s": 6537, "text": "Normalization vs Standardization" }, { "code": null, "e": 6610, "s": 6570, "text": "Bagging vs Boosting in Machine Learning" }, { "code": null, "e": 6651, "s": 6610, "text": "Principal Component Analysis with Python" }, { "code": null, "e": 6679, "s": 6651, "text": "Types of Environments in AI" } ]
How to get current CPU and RAM usage in Python?
24 Jan, 2021

Prerequisites: psutil, os

CPU usage or utilization refers to the time taken by a computer to process some information. RAM usage or main memory utilization, on the other hand, refers to the amount of RAM used by a certain system at a particular time. Both of these can be retrieved using Python.

Getting CPU usage

Method 1: Using psutil

The function psutil.cpu_percent() provides the current system-wide CPU utilization in the form of a percentage. It takes a parameter which is the time interval in seconds. Since CPU utilization is calculated over a period of time, it is recommended to provide a time interval.

Syntax: cpu_percent(time_interval)

Example:

# Importing the library
import psutil

# Calling psutil.cpu_percent() for 4 seconds
print('The CPU usage is: ', psutil.cpu_percent(4))

Output:

The CPU usage is: 2.4

Method 2: Using the os module

psutil.getloadavg() provides the load information of the CPU in the form of a tuple. It runs in the background and the results get updated every 5 seconds. os.cpu_count() returns the number of CPUs in the system.

Example:

import os
import psutil

# Getting the load average over the last 15 minutes
load1, load5, load15 = psutil.getloadavg()

cpu_usage = (load15 / os.cpu_count()) * 100

print("The CPU usage is : ", cpu_usage)

Output:

The CPU usage is: 13.4

Getting RAM usage

Method 1: Using psutil

The function psutil.virtual_memory() returns a named tuple about system memory usage. The third field in the tuple represents the percentage use of the memory (RAM). It is calculated by (total - available) / total * 100.
The fields in the output of the function are:

total: total memory excluding swap
available: available memory for processes
percent: memory usage in percent
used: the memory used
free: memory not used and readily available

Example:

# Importing the library
import psutil

# Getting % usage of virtual_memory (3rd field)
print('RAM memory % used:', psutil.virtual_memory()[2])

Output:

RAM memory % used: 76.9

Method 2: Using the os module

The os module is also useful for calculating RAM usage. The os.popen() method with flags as input can provide the total, available and used memory. This method opens a pipe to or from a command. The return value can be read or written depending on whether the mode is 'r' or 'w'.

Syntax: os.popen(command[, mode[, bufsize]])

Example:

import os

# Getting total, used and free memory (in MB) using os.popen()
total_memory, used_memory, free_memory = map(
    int, os.popen('free -t -m').readlines()[-1].split()[1:])

# Memory usage
print("RAM memory % used:", round((used_memory / total_memory) * 100, 2))

Output:

RAM memory % used: 17.55

Note: The os module method works on Linux systems only, because the free command is specific to Linux.
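The percent figure can be reproduced from the total and available fields with the same arithmetic shown above. A minimal sketch; the function name ram_percent and the one-decimal rounding are our own choices:

```python
def ram_percent(total, available):
    # (total - available) / total * 100, rounded to one decimal place
    return round((total - available) / total * 100, 1)

# e.g. 16000 MB total with 4000 MB still available -> 75.0% used
print(ram_percent(16000, 4000))  # -> 75.0
```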
[ { "code": null, "e": 53, "s": 25, "text": "\n24 Jan, 2021" }, { "code": null, "e": 68, "s": 53, "text": "Prerequisites:" }, { "code": null, "e": 76, "s": 68, "text": "psutil " }, { "code": null, "e": 79, "s": 76, "text": "os" }, { "code": null, "e": 355, "s": 79, "text": "CPU usage or utilization refers to the time taken by a computer to process some information. RAM usage or MAIN MEMORY UTILIZATION on the other hand refers to the amount of time RAM is used by a certain system at a particular time. Both of these can be retrieved using python." }, { "code": null, "e": 378, "s": 355, "text": "Method 1: Using psutil" }, { "code": null, "e": 653, "s": 378, "text": "The function psutil.cpu_percent() provides the current system-wide CPU utilization in the form of a percentage. It takes a parameter which is the time interval (seconds). Since CPU utilization is calculated over a period of time it is recommended to provide a time interval." }, { "code": null, "e": 661, "s": 653, "text": "Syntax:" }, { "code": null, "e": 688, "s": 661, "text": "cpu_percent(time_interval)" }, { "code": null, "e": 697, "s": 688, "text": "Example:" }, { "code": null, "e": 704, "s": 697, "text": "Python" }, { "code": "# Importing the libraryimport psutil # Calling psutil.cpu_precent() for 4 secondsprint('The CPU usage is: ', psutil.cpu_percent(4))", "e": 837, "s": 704, "text": null }, { "code": null, "e": 845, "s": 837, "text": "Output:" }, { "code": null, "e": 868, "s": 845, "text": "The CPU usage is: 2.4" }, { "code": null, "e": 894, "s": 868, "text": "Method 2: Using OS module" }, { "code": null, "e": 1132, "s": 894, "text": "The psutil.getloadavg() provides the load information of the CPU in the form of a tuple. The psutil.getloadavg() runs in the background and results gets updated every 5 seconds. The os.cpu_count() returns the number of CPU in the system." 
}, { "code": null, "e": 1141, "s": 1132, "text": "Example:" }, { "code": null, "e": 1149, "s": 1141, "text": "Python3" }, { "code": "import osimport psutil # Getting loadover15 minutesload1, load5, load15 = psutil.getloadavg() cpu_usage = (load15/os.cpu_count()) * 100 print(\"The CPU usage is : \", cpu_usage)", "e": 1328, "s": 1149, "text": null }, { "code": null, "e": 1336, "s": 1328, "text": "Output:" }, { "code": null, "e": 1359, "s": 1336, "text": "The CPU usage is: 13.4" }, { "code": null, "e": 1382, "s": 1359, "text": "Method 1: Using psutil" }, { "code": null, "e": 1600, "s": 1382, "text": "The function psutil.virutal_memory() returns a named tuple about system memory usage. The third field in tuple represents the percentage use of the memory(RAM). It is calculated by (total – available)/total * 100 . " }, { "code": null, "e": 1648, "s": 1600, "text": "The total fields in the output of function are:" }, { "code": null, "e": 1683, "s": 1648, "text": "total: total memory excluding swap" }, { "code": null, "e": 1725, "s": 1683, "text": "available: available memory for processes" }, { "code": null, "e": 1759, "s": 1725, "text": "percent: memory usage in per cent" }, { "code": null, "e": 1781, "s": 1759, "text": "used: the memory used" }, { "code": null, "e": 1831, "s": 1781, "text": "free: memory not used at and is readily available" }, { "code": null, "e": 1840, "s": 1831, "text": "Example:" }, { "code": null, "e": 1847, "s": 1840, "text": "Python" }, { "code": "# Importing the libraryimport psutil # Getting % usage of virtual_memory ( 3rd field)print('RAM memory % used:', psutil.virtual_memory()[2])", "e": 1989, "s": 1847, "text": null }, { "code": null, "e": 1997, "s": 1989, "text": "Output:" }, { "code": null, "e": 2021, "s": 1997, "text": "RAM memory % used: 76.9" }, { "code": null, "e": 2047, "s": 2021, "text": "Method 2: Using OS module" }, { "code": null, "e": 2336, "s": 2047, "text": "The os module is also useful for calculating the ram usage in the 
CPU. The os.popen() method with flags as input can provide the total, available and used memory. This method opens a pipe to or from command. The return value can be read or written depending on whether mode is β€˜r’ or β€˜w’." }, { "code": null, "e": 2344, "s": 2336, "text": "Syntax:" }, { "code": null, "e": 2381, "s": 2344, "text": "os.popen(command[, mode[, bufsize]])" }, { "code": null, "e": 2390, "s": 2381, "text": "Example:" }, { "code": null, "e": 2398, "s": 2390, "text": "Python3" }, { "code": "import os # Getting all memory using os.popen()total_memory, used_memory, free_memory = map( int, os.popen('free -t -m').readlines()[-1].split()[1:]) # Memory usageprint(\"RAM memory % used:\", round((used_memory/total_memory) * 100, 2))", "e": 2639, "s": 2398, "text": null }, { "code": null, "e": 2647, "s": 2639, "text": "Output:" }, { "code": null, "e": 2672, "s": 2647, "text": "RAM memory % used: 17.55" }, { "code": null, "e": 2792, "s": 2672, "text": "Note: The os module method works with Linux system only due to free flag and system command specified for linux system." }, { "code": null, "e": 2799, "s": 2792, "text": "Picked" }, { "code": null, "e": 2814, "s": 2799, "text": "python-utility" }, { "code": null, "e": 2821, "s": 2814, "text": "Python" }, { "code": null, "e": 2919, "s": 2821, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2937, "s": 2919, "text": "Python Dictionary" }, { "code": null, "e": 2979, "s": 2937, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 3001, "s": 2979, "text": "Enumerate() in Python" }, { "code": null, "e": 3036, "s": 3001, "text": "Read a file line by line in Python" }, { "code": null, "e": 3062, "s": 3036, "text": "Python String | replace()" }, { "code": null, "e": 3094, "s": 3062, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 3123, "s": 3094, "text": "*args and **kwargs in Python" }, { "code": null, "e": 3150, "s": 3123, "text": "Python Classes and Objects" }, { "code": null, "e": 3180, "s": 3150, "text": "Iterate over a list in Python" } ]
Robot Framework - Working With Variables
In this chapter, we will discuss how to create and use variables in Robot Framework. Variables are used to hold a value, which can be used in test cases, user-defined keywords, etc.

We are going to discuss the following variables available in Robot Framework:

Scalar Variable
List Variable
Dictionary Variable

We will understand the working of each of these variables with the help of test cases in RIDE.

Scalar variables will be replaced with the value they are assigned. The syntax for a scalar variable is as follows:

${variablename}

We can use scalar variables to store strings, objects, lists, etc. We will first create a simple test case and make use of a scalar variable in it.

Open RIDE using ride.py in the command line and create a new project. Click New Project. Now, give a name to your project. The name given is variables. Click OK to save the project.

Right-click on the name of the project created and click on New Test Case. Give a name to the test case and click OK.

We are done with the project setup and now will write test cases for the scalar variables to be used in our test case. Since we need the Selenium library, we need to import it into our project.

Click on your project on the left side and use Library from Add Import. Upon clicking Library, a screen will appear where you need to enter the library name. Click OK and the library will get displayed in the settings. The name given has to match the name of the folder installed in site-packages. If the name does not match, the library name will be shown in red.

In the above test cases we hardcoded values like the URL, email and password, which we are giving to the test case. The values used can be stored in a variable and, instead of hardcoding, we can use the variable in those places.
To create a scalar variable, right-click on your project and click on New Scalar.

Clicking on New Scalar will open a screen to create the variable and the value that the variable will be replaced with when it is used inside test cases. We get ${} for the Name field. Here we need to enter the name of the variable inside the curly braces.

The name of the variable is ${url}. The value is http://localhost/robotframework/login.html. We added the comment as shown above. Click OK to save the scalar variable. The details of the variable are added and the variable name is shown under the project created.

Let us now use the scalar variable created inside our test case. In the above test case, we have to replace the URL with the variable we just created. Now, we will run the test case to see if it is taking the URL from the variable. The URL http://localhost/robotframework/login.html is picked up from the scalar variable we created.

The advantage of using variables is that you can change the value of a variable and it will be reflected in all test cases. You can use the variables in many test cases that you create under that project. Hardcoding of values can be a serious problem: when you want to change something, you have to go to each individual test case and change the values in it. Having variables in one place gives us the flexibility to test the way we want, with different values for the variables.

Now, we will look into the next type of variable, called the List variable.

A list variable holds an array of values. To get a value, the list index is passed as an argument to the list variable.

@{variablename}

Suppose we have values A, B. To refer to the values, we need to pass the list index as follows:

@{variablename}[0] // A
@{variablename}[1] // B

To add a list variable, right-click on the project and click New List Variable.
Upon clicking New List Variable, a screen appears where we can enter the values. The Name is given as @{} followed by the Value. It also has 4 Columns selected. Right now, we will use just Column 1 and create the list variable, which will have the email id and password as values.

The name of the list variable is @{LOGIN_DETAILS} and the values given are [email protected] and admin, which are the email id and password for the login page.

Click OK to save the list variable. The variable is listed below the project and the details of the variables used are listed in the settings tab.

Now, we will add the list variable inside the test cases. Here, we have hardcoded values for the Input Text and Password. We will change them to use the list variable.

Using List Variable

Now, we will execute the test case to see if it is taking the values from the list variable. It has taken the email id and password from the list variable, as seen in the test screen and the execution details.

In our next section, we will learn about the Dictionary Variable.

A Dictionary Variable is similar to a list variable wherein we pass the index as an argument; however, in the case of a dictionary variable, we can store the details in key-value form. It becomes easier to refer to them in the test case instead of using indexes like 0, 1, etc.

&{Variablename}

Suppose we are storing the values as key1=A, key2=B. They will be referred to in the test case as:

&{Variablename}[key1] // A
&{Variablename}[key2] // B

Let us create a dictionary variable in RIDE. Right-click on the Project and click on New Dictionary Variable. Upon clicking New Dictionary Variable, a screen will appear. The Name by default in the screen is &{} and it has Value and Columns options. We will enter the Name and the Values to be used in the test case. Click OK to save the variable.
The variable will be listed under the project and also in the settings. We will change the test case to take the dictionary values. Upon clicking run, the test case executes and the execution details are shown.

We have seen the Edit and Run tabs so far. In the case of TextEdit, we have the details of the test case written as text. We can also add the variables required in TextEdit.

We have used a scalar variable and a dictionary variable in the above test case. The code so far in TextEdit is based on the test case written; the variables used are highlighted in red. We can also create the variables we want directly in TextEdit.

We have added a scalar variable called ${new_url} and the value given is https://www.tutorialspoint.com/. Click the Apply Changes button in the top left corner and the variable will be seen under the project.

Similarly, the other variables (list and dictionary variables) can be created directly inside the TextEdit tab whenever required.

We have seen how to create and use variables. There are three types of variables supported in Robot Framework: scalar, list and dictionary. We discussed in detail the working of all these variables.
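For reference, the same variables can be written directly in Robot Framework's plain-text format, which is what the TextEdit tab displays. The layout below is a sketch: the names and values are the ones used in this chapter, except the dictionary variable name &{LOGIN_INFO}, which is our placeholder since the chapter leaves it unnamed, and the exact spacing RIDE generates may differ:

```robotframework
*** Variables ***
${url}              http://localhost/robotframework/login.html
@{LOGIN_DETAILS}    admin@mymail.com    admin
&{LOGIN_INFO}       email=admin@mymail.com    password=admin
```

A scalar is then referenced as ${url}, a list item as @{LOGIN_DETAILS}[0], and a dictionary entry as &{LOGIN_INFO}[email].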
[ { "code": null, "e": 2485, "s": 2303, "text": "In this chapter, we will discuss how to create and use variables in Robot Framework. Variables are used to hold a value, which can be used in test cases, user-defined keywords, etc." }, { "code": null, "e": 2558, "s": 2485, "text": "We are going to discuss following variables available in Robot Framework" }, { "code": null, "e": 2574, "s": 2558, "text": "Scalar Variable" }, { "code": null, "e": 2588, "s": 2574, "text": "List Variable" }, { "code": null, "e": 2608, "s": 2588, "text": "Dictionary Variable" }, { "code": null, "e": 2701, "s": 2608, "text": "We will understand the working of each of this variable with the help of test cases in Ride." }, { "code": null, "e": 2816, "s": 2701, "text": "Scalar variables will be replaced with the value they are assigned. The syntax for scalar variable is as follows βˆ’" }, { "code": null, "e": 2833, "s": 2816, "text": "${variablename}\n" }, { "code": null, "e": 2978, "s": 2833, "text": "We can use scalar variable to store strings, objects, lists, etc. We will first create a simple test case and make use of scalar variable in it." }, { "code": null, "e": 3048, "s": 2978, "text": "Open RIDE using ride.py in the command line and create a new project." }, { "code": null, "e": 3067, "s": 3048, "text": "Click New Project." }, { "code": null, "e": 3101, "s": 3067, "text": "Now, give a name to your project." }, { "code": null, "e": 3160, "s": 3101, "text": "The name given is variables. Click OK to save the project." }, { "code": null, "e": 3236, "s": 3160, "text": "Right-click on the name of the project created and click on New Test Case βˆ’" }, { "code": null, "e": 3279, "s": 3236, "text": "Give a name to the test case and click OK." }, { "code": null, "e": 3473, "s": 3279, "text": "We are done with the project setup and now will write test cases for the scalar variables to be used in our test case. Since we need Selenium library, we need to import the same in our project." 
}, { "code": null, "e": 3546, "s": 3473, "text": "Click on your project on the left side and use Library from Add Import βˆ’" }, { "code": null, "e": 3633, "s": 3546, "text": "Upon clicking Library, a screen will appear where you need to enter the library name βˆ’" }, { "code": null, "e": 3694, "s": 3633, "text": "Click OK and the library will get displayed in the settings." }, { "code": null, "e": 3778, "s": 3694, "text": "The name given has to match with the name of the folder installed in site-packages." }, { "code": null, "e": 3846, "s": 3778, "text": "If the name does not match, the library name will be shown in red βˆ’" }, { "code": null, "e": 4075, "s": 3846, "text": "In the above test cases we hardcoded the values like the URL, email, password, which we are giving to the test case. The values used can be stored in a variable and instead of hardcoding, we can use the variable in those places." }, { "code": null, "e": 4171, "s": 4075, "text": "To create scalar variable, right-click on your project and click on New Scalar as shown below βˆ’" }, { "code": null, "e": 4331, "s": 4171, "text": "Clicking on New Scalar will open the following screen to create the variable and the value we need to replace with when the variable in used inside test cases." }, { "code": null, "e": 4362, "s": 4331, "text": "We get ${} for the Name field." }, { "code": null, "e": 4464, "s": 4362, "text": "Here we need to enter the name of the variable inside the curly braces as shown in the screen below βˆ’" }, { "code": null, "e": 4559, "s": 4464, "text": "The name of the variable is ${url}. The value is βˆ’ http://localhost/robotframework/login.html." }, { "code": null, "e": 4689, "s": 4559, "text": "We added the comment as shown above. Click OK to save the scalar variable. 
The details of the variable are added as shown below βˆ’" }, { "code": null, "e": 4755, "s": 4689, "text": "The variable name is shown under the project created as follows βˆ’" }, { "code": null, "e": 4820, "s": 4755, "text": "Let us now use the scalar variable created inside our test case." }, { "code": null, "e": 4912, "s": 4820, "text": "In the above test case, we have to replace the URL with the variable we just created above." }, { "code": null, "e": 5142, "s": 4912, "text": "Now, we will run the test case to see if it is taking the URL from the variable. Below is the output that we get when we run it. The URL http://localhost/robotframework/login.html is picked up from the scalar variable we created." }, { "code": null, "e": 5625, "s": 5142, "text": "The advantage of using variables is that you can change the value for that variable and it will be reflected in all test cases. You can use the variables in many test cases that you create under that project. Hardcoding of values can be a serious problem when you want to change something, you will have to go to individual test case and change the values for it. Having variables in one place gives us the flexibility to test the way we want with different values to the variables." }, { "code": null, "e": 5700, "s": 5625, "text": "Now, we will look into the next type of variable called the List variable." }, { "code": null, "e": 5824, "s": 5700, "text": "List variable will have an array of values. To get the value, the list item is passed as the argument to the list variable." }, { "code": null, "e": 5841, "s": 5824, "text": "@{variablename}\n" }, { "code": null, "e": 5934, "s": 5841, "text": "Suppose we have values A, B. To refer the values, we need to pass the list item as follows βˆ’" }, { "code": null, "e": 5983, "s": 5934, "text": "@{variablename}[0] // A\n@{variablename}[1] // B\n" }, { "code": null, "e": 6061, "s": 5983, "text": "To add list variable, right-click on the project and click New List Variable." 
}, { "code": null, "e": 6143, "s": 6061, "text": "Upon clicking New List Variable, a screen appears where we can enter the values βˆ’" }, { "code": null, "e": 6345, "s": 6143, "text": "The Name is given as @{} followed by Value. It also has 4 Columns selected. Right now, we will use just Column 1 and create the list variable, which will have values, email id and password as follows βˆ’" }, { "code": null, "e": 6499, "s": 6345, "text": "The name of the list variable is @{LOGIN_DETAILS} and the values given are [email protected] and admin, which has email id and password for the login page." }, { "code": null, "e": 6592, "s": 6499, "text": "Click OK to save the list variable. The variable is listed below the project as shown here βˆ’" }, { "code": null, "e": 6655, "s": 6592, "text": "The details of variables used are listed in the settings tab βˆ’" }, { "code": null, "e": 6728, "s": 6655, "text": "Now, we will add the list variable inside the test cases as shown below." }, { "code": null, "e": 6841, "s": 6728, "text": "Here, we have hardcoded values for the Input Text and Password. Now, we will change it to use the list variable." }, { "code": null, "e": 6861, "s": 6841, "text": "Using List Variable" }, { "code": null, "e": 6955, "s": 6861, "text": "Now, we will execute the test case to see if it is taking the values from the list variable βˆ’" }, { "code": null, "e": 7052, "s": 6955, "text": "It has taken the email id and password from the list variable as shown above in the test screen." }, { "code": null, "e": 7120, "s": 7052, "text": "The following screenshot shows the execution details for the same βˆ’" }, { "code": null, "e": 7186, "s": 7120, "text": "In our next section, we will learn about the Dictionary Variable." }, { "code": null, "e": 7455, "s": 7186, "text": "Dictionary Variable is similar to list variable wherein we pass the index as an argument; however, in case of dictionary variable, we can store the details – key value form. 
It becomes easier to refer when used in the test case instead of using the index as 0, 1, etc." }, { "code": null, "e": 7472, "s": 7455, "text": "&{Variablename}\n" }, { "code": null, "e": 7567, "s": 7472, "text": "Suppose we are storing the values as key1=A, key2=B. It will be referred in the test case as βˆ’" }, { "code": null, "e": 7622, "s": 7567, "text": "&{Variablename}[key1] // A\n&{Variablename}[key2] // B\n" }, { "code": null, "e": 7665, "s": 7622, "text": "Let us create dictionary variable in Ride." }, { "code": null, "e": 7726, "s": 7665, "text": "Right-click on Project and click on New Dictionary Variable." }, { "code": null, "e": 7803, "s": 7726, "text": "Upon clicking New Dictionary Variable, a screen will appear as shown below βˆ’" }, { "code": null, "e": 7881, "s": 7803, "text": "The Name by default in the screen is &{} and it has Value and Columns option." }, { "code": null, "e": 7948, "s": 7881, "text": "We will enter the Name and the Values to be used in the test case." }, { "code": null, "e": 8063, "s": 7948, "text": "Click OK to save the variable. The variable will be listed under the project and also in the settings as follows βˆ’" }, { "code": null, "e": 8123, "s": 8063, "text": "We will change the test case to take the dictionary values." }, { "code": null, "e": 8177, "s": 8123, "text": "We will change to dictionary variable as shown below." }, { "code": null, "e": 8219, "s": 8177, "text": "Upon clicking run, we get the following βˆ’" }, { "code": null, "e": 8258, "s": 8219, "text": "The execution details are as follows βˆ’" }, { "code": null, "e": 8415, "s": 8258, "text": "We have seen the Edit and Run Tab so far. In case of TextEdit, we have the details of the test case written. We can also add variables required in TextEdit." }, { "code": null, "e": 8570, "s": 8415, "text": "We have used scalar variable and dictionary variable in the above test case. 
Here is the code so far in TextEdit; this is based on the test case written βˆ’" }, { "code": null, "e": 8688, "s": 8570, "text": "The variables used are highlighted in Red. We can also create variables we want directly in TextEdit as shown below βˆ’" }, { "code": null, "e": 8794, "s": 8688, "text": "We have added a scalar variable called ${new_url} and the value given is https://www.tutorialspoint.com/." }, { "code": null, "e": 8909, "s": 8794, "text": "Click Apply Changes button on the top left corner and the variable will be seen under the project as shown below βˆ’" }, { "code": null, "e": 9031, "s": 8909, "text": "Similarly, other variables βˆ’ list and dictionary variables can be created directly inside TextEdit tab whenever required." } ]
How to Change the Line Width of a Graph Plot in Matplotlib with Python?
12 Nov, 2020

Prerequisite: Matplotlib

In this article, we will learn how to change the line width of a graph plot in Matplotlib with Python. For that, one must be familiar with the following concepts:

Matplotlib: Matplotlib is a tremendous visualization library in Python for 2D plots of arrays. It is a multi-platform data visualization library built on NumPy arrays and designed to work with the broader SciPy stack. It was introduced by John Hunter in 2002.

Graph Plot: A plot is a graphical technique for representing a data set, usually as a graph showing the relationship between two or more variables.

Line Width: The width of a line is known as line width. One can change the line width of a graph in Matplotlib using the linewidth feature.

Import packages
Import or create the data
Draw a graph plot with a line
Set the line width by using the linewidth feature (lw can also be used as a short form)

Example 1:

Python3

# importing packages
import matplotlib.pyplot as plt
import numpy as np

# create data
x_values = np.arange(0, 10)
y_values = np.arange(0, 10)

# Adjust the line widths
plt.plot(x_values, y_values - 2, linewidth=5)
plt.plot(x_values, y_values)
plt.plot(x_values, y_values + 2, lw=5)

# add legends and show
plt.legend(['Lw = 5', 'Lw = auto', 'Lw = 5'])
plt.show()

Output:

Example 2:

Python3

# importing packages
import matplotlib.pyplot as plt
import numpy as np

# create data
x_values = np.linspace(0, 10, 1000)
y_values = np.sin(x_values)

# Adjust the line widths
for i in range(20):
    plt.plot(x_values, y_values + i*0.5, lw=i*0.5)

plt.show()

Output:

Example 3:

Python3

# importing packages
import matplotlib.pyplot as plt
import numpy as np

# create data
x_values = np.linspace(0, 10, 1000)

# Adjust the line widths
for i in range(20):
    plt.plot(x_values, np.sin(x_values) + i*0.5, lw=i*0.4)
    plt.plot(x_values, np.cos(x_values) + i*0.5, lw=i*0.4)

plt.show()

Output:
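Besides passing linewidth or lw to each plot call, the default width for every subsequent line can be set once through rcParams. Below is a minimal sketch of that approach (the width value 4 is an arbitrary choice for illustration, and a non-interactive backend is selected so it runs without a display):

```python
# Sketch: changing the default line width once via rcParams instead of
# passing linewidth/lw to every plt.plot call. The value 4 is arbitrary.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, runs without a display
import matplotlib.pyplot as plt
import numpy as np

plt.rcParams['lines.linewidth'] = 4  # default width for all new lines

x = np.arange(0, 10)
line_default, = plt.plot(x, x)             # picks up the rcParams default
line_explicit, = plt.plot(x, x + 2, lw=1)  # explicit lw overrides the default

print(line_default.get_linewidth())   # width taken from rcParams
print(line_explicit.get_linewidth())  # explicitly set width
```

An explicit linewidth/lw argument always wins over the rcParams default.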
[ { "code": null, "e": 28, "s": 0, "text": "\n12 Nov, 2020" }, { "code": null, "e": 55, "s": 28, "text": "Prerequisite : Matplotlib " }, { "code": null, "e": 212, "s": 55, "text": "In this article we will learn how to Change the Line Width of a Graph Plot in Matplotlib with Python. For that one must be familiar with the given concepts:" }, { "code": null, "e": 500, "s": 212, "text": "Matplotlib : Matplotlib is a tremendous visualization library in Python for 2D plots of arrays. Matplotlib may be a multi-platform data visualization library built on NumPy arrays and designed to figure with the broader SciPy stack. It was introduced by John Hunter within the year 2002." }, { "code": null, "e": 649, "s": 500, "text": "Graph Plot : A plot is a graphical technique for representing a data set, usually as a graph showing the relationship between two or more variables." }, { "code": null, "e": 778, "s": 649, "text": "Line Width : The width of a line is known as line width. One can change the line width of a graph in matplotlib using a feature." }, { "code": null, "e": 794, "s": 778, "text": "Import packages" }, { "code": null, "e": 820, "s": 794, "text": "Import or create the data" }, { "code": null, "e": 850, "s": 820, "text": "Draw a graph plot with a line" }, { "code": null, "e": 936, "s": 850, "text": "Set the line width by using line-width feature ( lw can also be used as short form )." 
}, { "code": null, "e": 947, "s": 936, "text": "Example 1:" }, { "code": null, "e": 955, "s": 947, "text": "Python3" }, { "code": "# importing packagesimport matplotlib.pyplot as pltimport numpy as np # create datax_values = np.arange(0, 10)y_values = np.arange(0, 10) # Adjust the line widthsplt.plot(x_values, y_values - 2, linewidth=5)plt.plot(x_values, y_values)plt.plot(x_values, y_values + 2, lw=5) # add legends and showplt.legend(['Lw = 5', 'Lw = auto', 'Lw = 5'])plt.show()", "e": 1310, "s": 955, "text": null }, { "code": null, "e": 1319, "s": 1310, "text": "Output :" }, { "code": null, "e": 1331, "s": 1319, "text": "Example 2 :" }, { "code": null, "e": 1339, "s": 1331, "text": "Python3" }, { "code": "# importing packagesimport matplotlib.pyplot as pltimport numpy as np # create datax_values = np.linspace(0, 10, 1000)y_values = np.sin(x_values) # Adjust the line widthsfor i in range(20): plt.plot(x_values, y_values + i*0.5, lw=i*0.5) plt.show()", "e": 1597, "s": 1339, "text": null }, { "code": null, "e": 1606, "s": 1597, "text": "Output :" }, { "code": null, "e": 1618, "s": 1606, "text": "Example 3 :" }, { "code": null, "e": 1626, "s": 1618, "text": "Python3" }, { "code": "# importing packagesimport matplotlib.pyplot as pltimport numpy as np # create datax_values = np.linspace(0, 10, 1000) # Adjust the line widthsfor i in range(20): plt.plot(x_values, np.sin(x_values) + i*0.5, lw=i*0.4) plt.plot(x_values, np.cos(x_values) + i*0.5, lw=i*0.4) plt.show()", "e": 1923, "s": 1626, "text": null }, { "code": null, "e": 1932, "s": 1923, "text": "Output :" }, { "code": null, "e": 1950, "s": 1932, "text": "Python-matplotlib" }, { "code": null, "e": 1957, "s": 1950, "text": "Python" } ]
MySQL | Operator precedence
04 Feb, 2019

Operator precedence specifies the order in which operators are evaluated when two or more operators with different precedence are adjacent in an expression. For example, 1+2/3 gives a different result than (1+2)/3. Just like other programming languages such as C, C++, and Java, MySQL has precedence rules.

The following table describes operator precedence in MySQL, from highest to lowest. Operators in the same group have equal precedence.

These operator precedence rules greatly affect our MySQL queries. Without knowledge of operator precedence, we can get unexpected results. To understand this, consider the following table Student.

From the above table we want the result of those students having marks greater than 10 and whose name starts with either 'p' or 's'. So, the query can be written as:

mysql> select *
       from student
       where marks > 10 and name like 'p%'
          or name like 's%';

Result:

This result set is not as expected from the query. Due to the higher precedence of the operator AND as compared to OR, the condition is evaluated as (marks > 10 and name like 'p%') or name like 's%'. So, besides the students having marks greater than 10 and a name starting with 'p', the query also returns every student whose name starts with 's', even those having marks less than 10, which is not required. So, here comes the role of parentheses.

The above-stated query can be written as:

mysql> select *
       from student
       where marks > 10 and (name like 'p%'
          or name like 's%');

Result: It will produce the desired result.
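The same AND-over-OR pitfall can be reproduced with Python's built-in sqlite3 module, since SQLite follows the same precedence rule (AND binds tighter than OR). The student rows below are made up for illustration:

```python
# Demonstrating that AND binds tighter than OR in SQL, using Python's
# built-in sqlite3 module. The table contents are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (name TEXT, marks INTEGER)")
con.executemany("INSERT INTO student VALUES (?, ?)",
                [("punit", 25), ("sam", 5), ("priya", 20), ("amit", 30)])

# Without parentheses: parsed as (marks > 10 AND name LIKE 'p%') OR name LIKE 's%'
unexpected = con.execute(
    "SELECT name FROM student "
    "WHERE marks > 10 AND name LIKE 'p%' OR name LIKE 's%' "
    "ORDER BY name").fetchall()

# With parentheses: marks > 10 AND (name LIKE 'p%' OR name LIKE 's%')
expected = con.execute(
    "SELECT name FROM student "
    "WHERE marks > 10 AND (name LIKE 'p%' OR name LIKE 's%') "
    "ORDER BY name").fetchall()

print(unexpected)  # [('priya',), ('punit',), ('sam',)] -- sam sneaks in with marks 5
print(expected)    # [('priya',), ('punit',)]
```

Without the parentheses, 'sam' appears in the result despite having marks less than 10, exactly the kind of surprise described above.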
[ { "code": null, "e": 28, "s": 0, "text": "\n04 Feb, 2019" }, { "code": null, "e": 185, "s": 28, "text": "Operator precedence specifies the order in which operators are evaluated when two or more operators with different precedence are adjacent in an expression." }, { "code": null, "e": 349, "s": 185, "text": "For example, 1+2/3 gives the different result as compared to (1+2)/3. Just like all other programming languages C, C++, Java etc. MySQL also has a precedence rule." }, { "code": null, "e": 498, "s": 349, "text": "The following table describes operator precedence in MySQL, from highest to lowest. Operators which are in the same group have the equal precedence." }, { "code": null, "e": 692, "s": 498, "text": "These operator precedence rule greatly affects our MySQL queries.Without knowledge of operator precedence we can get unexpected result. To, understand this consider the following table Student." }, { "code": null, "e": 859, "s": 692, "text": "From the above table we want the result of those students having marks greater than 10 and whose name starts with either β€˜p’ or β€˜s’. So, it’s query can be written as-" }, { "code": null, "e": 963, "s": 859, "text": "mysql>select * \nfrom student \nwhere marks>10 and name like 'p%' \n or name like 's%'; " }, { "code": null, "e": 1006, "s": 963, "text": "Result:It will produce the desired result:" }, { "code": null, "e": 1501, "s": 1006, "text": "This result set is not as expected from the query. As it is giving the result of a student β€˜Punit’ having marks less than 10 which is not required. Due to higher precedence of operator AND as compare to OR, the above query give result of all those students having marks greater than 10 and name starting with β€˜s’, in addition to those the result of those student having name starting with β€˜p’ is also given in output. So, here come role of parentheses. 
The above-stated query can be written as," }, { "code": null, "e": 1610, "s": 1501, "text": "mysql>select * \nfrom student \nwhere marks>10 and (name like 'p%' \n or name like 's%');" }, { "code": null, "e": 1653, "s": 1610, "text": "Result:It will produce the desired result:" }, { "code": null, "e": 1659, "s": 1653, "text": "mysql" }, { "code": null, "e": 1668, "s": 1659, "text": "SQLmysql" }, { "code": null, "e": 1672, "s": 1668, "text": "SQL" }, { "code": null, "e": 1676, "s": 1672, "text": "SQL" }, { "code": null, "e": 1774, "s": 1676, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 1785, "s": 1774, "text": "CTE in SQL" }, { "code": null, "e": 1809, "s": 1785, "text": "SQL Interview Questions" }, { "code": null, "e": 1875, "s": 1809, "text": "How to Update Multiple Columns in Single Update Statement in SQL?" }, { "code": null, "e": 1920, "s": 1875, "text": "Difference between DELETE, DROP and TRUNCATE" }, { "code": null, "e": 1944, "s": 1920, "text": "Window functions in SQL" }, { "code": null, "e": 1983, "s": 1944, "text": "Difference between DELETE and TRUNCATE" }, { "code": null, "e": 2015, "s": 1983, "text": "MySQL | Group_CONCAT() Function" }, { "code": null, "e": 2041, "s": 2015, "text": "SQL Correlated Subqueries" }, { "code": null, "e": 2078, "s": 2041, "text": "MySQL | Regular expressions (Regexp)" } ]
Difference between Transparent Bridge and Source Routing Bridge
08 Nov, 2020

A bridge is a device used to connect two or more LANs to form a single extended LAN. A bridge works at the data link layer of the OSI reference model.

Transparent Bridge: A transparent bridge automatically maintains its forwarding table and updates the table in response to changes in the topology. The transparent bridge mechanism consists of three mechanisms:

1. Frame forwarding
2. Address learning
3. Loop resolution

A transparent bridge is easy to use: install the bridge, and no software changes are needed in the hosts. In all cases, a transparent bridge floods broadcast and multicast frames.

Source Routing Bridge: A source routing bridge decides the route between two hosts. It uses the MAC destination address of a frame to direct it by the source routing algorithm. In source routing, the route over which the frame is to be sent is known to every station on the extended LAN, and the routing information is stored in the frames themselves.

The differences between a transparent bridge and a source routing bridge are as follows:
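The first two mechanisms, address learning and frame forwarding, can be illustrated with a small toy simulation. The class, port numbers, and MAC strings below are invented for illustration; a real bridge performs this per Ethernet frame, typically in hardware:

```python
# Toy sketch of a transparent bridge's "Address learning" and
# "Frame forwarding" mechanisms. Names and frame format are invented.
class TransparentBridge:
    def __init__(self, ports):
        self.ports = ports
        self.table = {}  # learned: source MAC -> port it was seen on

    def receive(self, frame_src, frame_dst, in_port):
        # Address learning: remember which port the source lives behind.
        self.table[frame_src] = in_port
        # Frame forwarding: if the destination is known, forward out of
        # that one port; otherwise flood every port except the incoming one.
        if frame_dst in self.table:
            return [self.table[frame_dst]]
        return [p for p in self.ports if p != in_port]

bridge = TransparentBridge(ports=[1, 2, 3])
print(bridge.receive("AA", "BB", in_port=1))  # BB unknown -> flood: [2, 3]
print(bridge.receive("BB", "AA", in_port=2))  # AA already learned -> [1]
print(bridge.table)  # {'AA': 1, 'BB': 2}
```

Note how the hosts need no changes at all: the bridge learns addresses purely by observing the source field of frames it receives, which is what makes it "transparent".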
[ { "code": null, "e": 28, "s": 0, "text": "\n08 Nov, 2020" }, { "code": null, "e": 180, "s": 28, "text": "Bridge is a device that is attached to connect two or more LANs to create continued LAN. A bridge works at data link layer of the OSI reference model. " }, { "code": null, "e": 379, "s": 180, "text": "Transparent Bridge: Transparent bridge automatically maintains a routing table and update table in response to maintain changing topology. Transparent bridge mechanism consists of three mechanisms: " }, { "code": null, "e": 441, "s": 379, "text": "1. Frame forwarding\n2. Address Learning\n3. Loop Resolution \n\n" }, { "code": null, "e": 622, "s": 441, "text": "Transparent bridge is easy to use, install the bridge and no software changes are needed in hosts. In all the cases, transparent bridge flooded the broadcast and multicast frames. " }, { "code": null, "e": 974, "s": 622, "text": "Source Routing Bridge: Source routing bridge decides the route between two hosts. Source routing bridge uses the MAC destination address of a frame to direct it by the source routing algorithm. In source routing, the route over which the frame is to send is Known to every station on the extended LAN. The routing information is stored in the frames. " }, { "code": null, "e": 1061, "s": 974, "text": "The difference between Transparent Bridge and Source Routing Bridge are as following: " }, { "code": null, "e": 1076, "s": 1063, "text": "itskawal2000" }, { "code": null, "e": 1094, "s": 1076, "text": "Computer Networks" }, { "code": null, "e": 1113, "s": 1094, "text": "Difference Between" }, { "code": null, "e": 1131, "s": 1113, "text": "Computer Networks" }, { "code": null, "e": 1229, "s": 1131, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 1259, "s": 1229, "text": "GSM in Wireless Communication" }, { "code": null, "e": 1285, "s": 1259, "text": "Secure Socket Layer (SSL)" }, { "code": null, "e": 1315, "s": 1285, "text": "Wireless Application Protocol" }, { "code": null, "e": 1355, "s": 1315, "text": "Mobile Internet Protocol (or Mobile IP)" }, { "code": null, "e": 1390, "s": 1355, "text": "Advanced Encryption Standard (AES)" }, { "code": null, "e": 1430, "s": 1390, "text": "Class method vs Static method in Python" }, { "code": null, "e": 1461, "s": 1430, "text": "Difference between BFS and DFS" }, { "code": null, "e": 1522, "s": 1461, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 1590, "s": 1522, "text": "Difference Between Method Overloading and Method Overriding in Java" } ]
Python | Pandas Series.from_csv()
13 Feb, 2019

A pandas Series is a one-dimensional ndarray with axis labels. The labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index.

The pandas Series.from_csv() function is used to read a CSV file into a Series. It is preferable to use the more powerful pandas.read_csv() for most general purposes.

Syntax: Series.from_csv(path, sep=', ', parse_dates=True, header=None, index_col=0, encoding=None, infer_datetime_format=False)

Parameters:
path : string file path or file handle / StringIO
sep : field delimiter
parse_dates : parse dates. Different default from read_table
header : row to use as header (skip prior rows)
index_col : column to use for the index
encoding : a string representing the encoding to use if the contents are non-ASCII
infer_datetime_format : if True and parse_dates is True for a column, try to infer the datetime format based on the first datetime string

Returns: Series

For these examples we have used a CSV file (nba.csv).

Example #1: Use the Series.from_csv() function to read the data from the given CSV file into a pandas Series.

# importing pandas as pd
import pandas as pd

# Read the data into a series
sr = pd.Series.from_csv('nba.csv')

# Print the first 10 rows of series
print(sr.head(10))

Output:

As we can see in the output, the Series.from_csv() function has successfully read the CSV file into a pandas Series.

Example #2: Use the Series.from_csv() function to read the data from the given CSV file into a pandas Series. Use the 1st column as the index of the Series object.

# importing pandas as pd
import pandas as pd

# Read the data into a series
sr = pd.Series.from_csv('nba.csv', index_col = 1)

# Print the first 10 rows of series
print(sr.head(10))

Output:

As we can see in the output, the Series.from_csv() function has successfully read the CSV file into a pandas Series.
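Note that Series.from_csv() was later deprecated and removed from pandas, so on current versions the equivalent is pandas.read_csv() followed by a squeeze to a Series. A minimal self-contained sketch with made-up CSV data:

```python
# Sketch of the modern replacement for Series.from_csv: read the file with
# pandas.read_csv and squeeze the one-column result into a Series.
# The CSV content here is invented for illustration.
import io
import pandas as pd

csv_data = io.StringIO("name,team\nAvery,Celtics\nJae,Celtics\nJohn,Lakers\n")

# index_col=0 mirrors from_csv's default of using the first column as index;
# squeeze("columns") turns the single remaining column into a Series.
sr = pd.read_csv(csv_data, index_col=0).squeeze("columns")

print(type(sr).__name__)  # Series
print(sr["John"])         # Lakers
```

Unlike from_csv, read_csv treats the first row as a header by default, so pass header=None when the file has no header row.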
[ { "code": null, "e": 28, "s": 0, "text": "\n13 Feb, 2019" }, { "code": null, "e": 285, "s": 28, "text": "Pandas series is a One-dimensional ndarray with axis labels. The labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index." }, { "code": null, "e": 448, "s": 285, "text": "Pandas Series.from_csv() function is used to read a csv file into a series. It is preferable to use the more powerful pandas.read_csv() for most general purposes." }, { "code": null, "e": 576, "s": 448, "text": "Syntax: Series.from_csv(path, sep=’, β€˜, parse_dates=True, header=None, index_col=0, encoding=None, infer_datetime_format=False)" }, { "code": null, "e": 1019, "s": 576, "text": "Parameter :path : string file path or file handle / StringIOsep : Field delimiterparse_dates : Parse dates. Different default from read_tableheader : Row to use as header (skip prior rows)index_col : Column to use for indexencoding : a string representing the encoding to use if the contents are non-asciiinfer_datetime_format : If True and parse_dates is True for a column, try to infer the datetime format based on the first datetime string" }, { "code": null, "e": 1036, "s": 1019, "text": "Returns : Series" }, { "code": null, "e": 1101, "s": 1036, "text": "For this example we have used a CSV file. To download click here" }, { "code": null, "e": 1207, "s": 1101, "text": "Example #1: Use Series.from_csv() function to read the data from the given CSV file into a pandas series." 
}, { "code": "# importing pandas as pdimport pandas as pd # Read the data into a seriessr = pd.Series.from_csv('nba.csv') # Print the first 10 rows of seriesprint(sr.head(10))", "e": 1371, "s": 1207, "text": null }, { "code": null, "e": 1380, "s": 1371, "text": "Output :" }, { "code": null, "e": 1657, "s": 1380, "text": "As we can see in the output, the Series.from_csv() function has successfully read the csv file into a pandas series. Example #2 : Use Series.from_csv() function to read the data from the given CSV file into a pandas series. Use the 1st column as an index of the series object." }, { "code": "# importing pandas as pdimport pandas as pd # Read the data into a seriessr = pd.Series.from_csv('nba.csv', index_col = 1) # Print the first 10 rows of seriesprint(sr.head(10))", "e": 1836, "s": 1657, "text": null }, { "code": null, "e": 1845, "s": 1836, "text": "Output :" }, { "code": null, "e": 1962, "s": 1845, "text": "As we can see in the output, the Series.from_csv() function has successfully read the csv file into a pandas series." }, { "code": null, "e": 1983, "s": 1962, "text": "Python pandas-series" }, { "code": null, "e": 2012, "s": 1983, "text": "Python pandas-series-methods" }, { "code": null, "e": 2026, "s": 2012, "text": "Python-pandas" }, { "code": null, "e": 2033, "s": 2026, "text": "Python" } ]
JavaScript Array concat() Method
04 Oct, 2021

Below is an example of the Array concat() method used to join three arrays.

Example:

<script>
// JavaScript code for concat() method
function func() {
    var num1 = [11, 12, 13],
        num2 = [14, 15, 16],
        num3 = [17, 18, 19];
    document.write(num1.concat(num2, num3));
}
func();
</script>

Output:

[11,12,13,14,15,16,17,18,19]

The arr.concat() method is used to merge two or more arrays together. This method does not alter the original arrays passed as arguments.

Syntax:

var new_array = old_array.concat(value1[, value2[, ...[, valueN]]])

Parameters: The parameters to this method are the arrays or the values that need to be added to the given array. The number of arguments to this method depends upon the number of arrays or values to be merged.

Return value: This method returns a newly created array that is created by merging all the arrays passed to the method as arguments.

Below examples illustrate the JavaScript Array concat() method:

Example 1: In this example, the concat() method concatenates all three arrays into one array, which it returns as the answer.

var num1 = [11, 12, 13],
    num2 = [14, 15, 16],
    num3 = [17, 18, 19];
print(num1.concat(num2, num3));

Output:

[11,12,13,14,15,16,17,18,19]

Example 2: In this example, the concat() method concatenates all the arguments passed to it with the given array into one array, which it returns as the answer.

var alpha = ['a', 'b', 'c'];
print(alpha.concat(1, [2, 3]));

Output:

[a,b,c,1,2,3]

Example 3: In this example, the concat() method concatenates both arrays into one array, which it returns as the answer.

var num1 = [[23]];
var num2 = [89, [67]];
print(num1.concat(num2));

Output:

[23,89,67]

More example codes for the above method are as follows:

Program 1:

<script>
    // JavaScript code for concat() method
    function func() {
        var alpha = ["a", "b", "c"];
        document.write(alpha.concat(1, [2, 3]));
    }
    func();
</script>

Output:

[a,b,c,1,2,3]

Program 2:

<script>
    // JavaScript code for concat() method
    function func() {
        var num1 = [[23]];
        var num2 = [89, [67]];
        document.write(num1.concat(num2));
    }
    func();
</script>

Output:

[23,89,67]

Supported Browsers: The browsers supported by the JavaScript Array concat() method are listed below:

Google Chrome 1 and above
Edge 12 and above
Firefox 1 and above
Internet Explorer 5.5 and above
Opera 4 and above
Safari 1 and above
[ { "code": null, "e": 52, "s": 24, "text": "\n04 Oct, 2021" }, { "code": null, "e": 120, "s": 52, "text": "Below is the example of Array concat() method to join three arrays." }, { "code": null, "e": 337, "s": 120, "text": "Example:<script>// JavaScript code for concat() methodfunction func() { var num1 = [11, 12, 13], num2 = [14, 15, 16], num3 = [17, 18, 19]; document.write(num1.concat(num2, num3));}func();</script>" }, { "code": "<script>// JavaScript code for concat() methodfunction func() { var num1 = [11, 12, 13], num2 = [14, 15, 16], num3 = [17, 18, 19]; document.write(num1.concat(num2, num3));}func();</script>", "e": 546, "s": 337, "text": null }, { "code": null, "e": 583, "s": 546, "text": "Output:[11,12,13,14,15,16,17,18,19]\n" }, { "code": null, "e": 613, "s": 583, "text": "[11,12,13,14,15,16,17,18,19]\n" }, { "code": null, "e": 751, "s": 613, "text": "The arr.concat() method is used to merge two or more arrays together. This method does not alter the original arrays passed as arguments." }, { "code": null, "e": 759, "s": 751, "text": "Syntax:" }, { "code": null, "e": 827, "s": 759, "text": "var new_array = old_array.concat(value1[, value2[, ...[, valueN]]])" }, { "code": null, "e": 1037, "s": 827, "text": "Parameters: The parameters to this method are the arrays or the values that need to be added to the given array. The number of arguments to this method depends upon the number of arrays or values to be merged." }, { "code": null, "e": 1173, "s": 1037, "text": "Return value: This method returns a newly created array that is created after merging all the arrays passed to the method as arguments." 
}, { "code": null, "e": 1237, "s": 1173, "text": "Below examples illustrate the JavaScript Array concat() method:" }, { "code": null, "e": 1508, "s": 1237, "text": "Example 1: In this example, the method concat() concatenates all the three arrays into one array which it return as the answer.var num1 = [11, 12, 13],\n num2 = [14, 15, 16],\n num3 = [17, 18, 19];\nprint(num1.concat(num2, num3));\nOutput:[11,12,13,14,15,16,17,18,19]\n" }, { "code": null, "e": 1616, "s": 1508, "text": "var num1 = [11, 12, 13],\n num2 = [14, 15, 16],\n num3 = [17, 18, 19];\nprint(num1.concat(num2, num3));\n" }, { "code": null, "e": 1624, "s": 1616, "text": "Output:" }, { "code": null, "e": 1654, "s": 1624, "text": "[11,12,13,14,15,16,17,18,19]\n" }, { "code": null, "e": 1903, "s": 1654, "text": "Example 2: In this example, the method concat() concatenates all the arguments passed to the method with the given array into one array which it return as the answer.var alpha = ['a', 'b', 'c'];\nprint(alpha.concat(1, [2, 3]));\nOutput:[a,b,c,1,2,3]\n" }, { "code": null, "e": 1965, "s": 1903, "text": "var alpha = ['a', 'b', 'c'];\nprint(alpha.concat(1, [2, 3]));\n" }, { "code": null, "e": 1973, "s": 1965, "text": "Output:" }, { "code": null, "e": 1988, "s": 1973, "text": "[a,b,c,1,2,3]\n" }, { "code": null, "e": 2198, "s": 1988, "text": "Example 3: In this example, the method concat() concatenates both the arrays into one array which it return as the answer.var num1 = [[23]];\nvar num2 = [89, [67]];\nprint(num1.concat(num2));\nOutput:[23,89,67] \n" }, { "code": null, "e": 2267, "s": 2198, "text": "var num1 = [[23]];\nvar num2 = [89, [67]];\nprint(num1.concat(num2));\n" }, { "code": null, "e": 2275, "s": 2267, "text": "Output:" }, { "code": null, "e": 2288, "s": 2275, "text": "[23,89,67] \n" }, { "code": null, "e": 2354, "s": 2288, "text": "More example codes for the above method are as follows:Program 1:" }, { "code": "<script> // JavaScript code for concat() method function func() { var alpha = 
[\"a\", \"b\", \"c\"]; document.write(alpha.concat(1, [2, 3])); } func();</script>", "e": 2535, "s": 2354, "text": null }, { "code": null, "e": 2543, "s": 2535, "text": "Output:" }, { "code": null, "e": 2557, "s": 2543, "text": "[a,b,c,1,2,3]" }, { "code": null, "e": 2568, "s": 2557, "text": "Program 2:" }, { "code": "<script> // JavaScript code for concat() method function func() { var num1 = [[23]]; var num2 = [89, [67]]; document.write(num1.concat(num2)); } func();</script>", "e": 2763, "s": 2568, "text": null }, { "code": null, "e": 2771, "s": 2763, "text": "Output:" }, { "code": null, "e": 2783, "s": 2771, "text": "[23,89,67] " }, { "code": null, "e": 2880, "s": 2783, "text": "Supported Browsers: The browsers supported by JavaScript Array concat() method are listed below:" }, { "code": null, "e": 2906, "s": 2880, "text": "Google Chrome 1 and above" }, { "code": null, "e": 2924, "s": 2906, "text": "Edge 12 and above" }, { "code": null, "e": 2944, "s": 2924, "text": "Firefox 1 and above" }, { "code": null, "e": 2976, "s": 2944, "text": "Internet Explorer 5.5 and above" }, { "code": null, "e": 2994, "s": 2976, "text": "Opera 4 and above" }, { "code": null, "e": 3013, "s": 2994, "text": "Safari 1 and above" }, { "code": null, "e": 3030, "s": 3013, "text": "arorakashish0911" }, { "code": null, "e": 3042, "s": 3030, "text": "ysachin2314" }, { "code": null, "e": 3059, "s": 3042, "text": "javascript-array" }, { "code": null, "e": 3078, "s": 3059, "text": "JavaScript-Methods" }, { "code": null, "e": 3089, "s": 3078, "text": "JavaScript" }, { "code": null, "e": 3106, "s": 3089, "text": "Web Technologies" } ]
JavaScript Array push() Method
01 Nov, 2021

Below is an example of the Array push() method.

Example:

<script>
    function func() {
        var arr = ['GFG', 'gfg', 'g4g'];

        // Pushing the element into the array
        arr.push('GeeksforGeeks');

        document.write(arr);
    }
    func();
</script>

Output:

GFG,gfg,g4g,GeeksforGeeks

The arr.push() method is used to push one or more values into the array. This method changes the length of the array by the number of elements added to it.

Syntax:

arr.push(element1, element2, ..., elementN)

Parameters: This method accepts as many parameters as the number of elements to be inserted into the array.

Return value: This method returns the new length of the array after inserting the arguments into the array.

Below examples illustrate the JavaScript Array push() method:

Example 1: In this example, the push() method adds the numbers to the end of the array.

var arr = [34, 234, 567, 4];
print(arr.push(23, 45, 56));
print(arr);

Output:

7
34,234,567,4,23,45,56

Example 2: In this example, the push() method adds the objects to the end of the array.

var arr = [34, 234, 567, 4];
print(arr.push('jacob', true, 23.45));
print(arr);

Output:

7
34,234,567,4,jacob,true,23.45

More example codes for the above method are as follows:

Program 1:

<script>
    function func() {
        // Original array
        var arr = [34, 234, 567, 4];

        // Pushing the elements
        document.write(arr.push(23, 45, 56));
        document.write("<br>");
        document.write(arr);
    }
    func();
</script>

Output:

7
34,234,567,4,23,45,56

Program 2:

<script>
    function func() {
        var arr = [34, 234, 567, 4];
        document.write(arr.push('jacob', true, 23.45));
        document.write("<br>");
        document.write(arr);
    }
    func();
</script>

Output:

7
34,234,567,4,jacob,true,23.45

Supported Browsers: The browsers supported by the JavaScript Array push() method are listed below:

Google Chrome 1.0 and above
Microsoft Edge 12 and above
Mozilla Firefox 1.0 and above
Safari 1 and above
Opera 4 and above
Text Similarity with TensorFlow.js Universal Sentence Encoder | Towards Data Science
Have you wondered how search engines understand your queries and retrieve relevant results? How chatbots extract your intent from your questions and provide the most appropriate response?

In this story, I will detail each of the parts needed to build a textual similarity analysis web-app, including:

word embeddings
sentence embeddings
cosine similarity
build a textual similarity analysis web-app
analysis of results

Try the textual similarity analysis web-app, and let me know how it works for you in the comments below!

Word embeddings enable knowledge representation where a vector represents a word. This improves the ability of neural networks to learn from a textual dataset.

Before word embeddings became the de facto standard for natural language processing, a common approach to dealing with words was to use one-hot vectorisation. Each word represents a column in the vector space, and each sentence is a vector of ones and zeros. Ones denote the presence of the word in the sentence.

As a result, this leads to a huge and sparse representation, because there are many more zeros than ones. When there are many words in the vocabulary, it creates a large word vector. This might become a problem for machine learning algorithms.

One-hot vectorisation also fails to capture the meaning of words. For example, "drink" and "beverage" are two different words, yet they have a similar definition.

With word embeddings, semantically similar words have similar vector representations. As a result, when presented with phrases like "I would like to order a drink" or "a beverage", an ordering system can interpret that request the same way.

Back in 2003, Yoshua Bengio et al. introduced a language model concept. The focus of that paper was to learn representations for words, which allows the model to predict the next word.

This paper is crucial and led to the development and discovery of word embeddings.
Yoshua received the Turing Award alongside Geoffrey Hinton and Yann LeCun.

In 2008, Ronan and Jason worked on a neural network that could learn to identify similar words. Their discovery has opened up many possibilities for natural language processing. The table below shows a list of words and the respective ten most similar words.

In 2013, Tomas Mikolov et al. introduced learning high-quality word vectors from datasets with billions of words. They named it Word2Vec, and it contains millions of words in the vocabulary.

Word2Vec has become popular since then. Nowadays, a word embeddings layer is in every popular deep learning framework.

Google's pretrained Word2Vec model was trained on roughly 100 billion words from the Google News dataset. The word "cat" shares the closest meanings with "cats", "dog", "mouse" and "pet".

Word embeddings also manage to recognise relationships between words. A classic example is the gender-role relationship between words. For example, "man" is to "woman" as "king" is to "queen".

Galina Olejnik did an excellent job describing the motivation for word embeddings, from one-hot encoding and TF-IDF to GloVe and Poincaré.

towardsdatascience.com

Here's a comprehensive 29-minute article about various language models by Dipanjan (DJ) Sarkar. He covers Word2Vec, GloVe and FastText; do check this out if you are planning to work on word embeddings.

towardsdatascience.com

TensorFlow has provided a tutorial on word embeddings and codes in this Colab notebook. You can get your hands dirty with the codes and use them to train your word embeddings on your dataset. This can definitely help you get started.

For those who enjoy animation, there is a cool embeddings visualisation on Embedding Projector. Every dot represents a word, and you can visualise semantically similar words in a 3D space.

We have word vectors to represent meanings for words; how about sentences?
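As a rough sketch of the idea that similar words get similar vectors, here is a toy example with made-up three-dimensional vectors (real embeddings such as Word2Vec have hundreds of dimensions and are learned from data, not hand-written):

```javascript
// Made-up 3-d "embeddings" for illustration only
const embeddings = {
  drink:    [0.9, 0.1, 0.2],
  beverage: [0.8, 0.2, 0.3],
  keyboard: [0.1, 0.9, 0.7],
};

// Cosine similarity: 1 means same direction, 0 means unrelated
function cosine(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

console.log(cosine(embeddings.drink, embeddings.beverage)); // close to 1
console.log(cosine(embeddings.drink, embeddings.keyboard)); // much lower
```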
Like word embeddings, the universal sentence encoder is a versatile sentence embedding model that converts text into semantically-meaningful fixed-length vector representations.

The vectors produced by the universal sentence encoder capture rich semantic information. We can use them for various natural language processing tasks, such as training classifiers for classification and textual similarity analysis.

There are two universal sentence encoder models by Google. One of them is based on a Transformer architecture and the other one is based on a Deep Averaging Network.

In the Transformer model, the encoder creates context-aware representations for every word to produce sentence embeddings. It is designed for higher accuracy, but the encoding requires more memory and computational time. This is useful for sentiment classification, where words like "not" can change the meaning, and it is able to handle double negation like "not bad".

In the Deep Averaging Network model, the embeddings of words are first averaged together and then passed through a feedforward deep neural network to produce sentence embeddings. Unfortunately, by averaging the vectors, we lose the context of the sentence and the sequence of words in the process. It is designed for speed and efficiency, and some accuracy is sacrificed (especially on sarcasm and double negation). It is a great model for topic classification, classifying long articles into categories.

Yinfei Yang et al. introduced a way to learn sentence representations using conversational data.

For example, "How old are you?" and "What is your age?" are semantically similar questions, so a chatbot can reply with the same answer, "I am 20 years old".

In contrast, while "How are you?" and "How old are you?" contain identical words, both sentences have different meanings. A chatbot has to understand the question and provide the appropriate response.
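The averaging step of the Deep Averaging Network can be sketched as follows. The word vectors below are made up for illustration, and the real model also passes the average through a feedforward network, which this sketch omits:

```javascript
// Made-up 3-d word vectors for illustration only
const wordVectors = {
  how: [0.1, 0.3, 0.5],
  old: [0.2, 0.1, 0.4],
  are: [0.0, 0.2, 0.1],
  you: [0.3, 0.4, 0.2],
};

// Average the word vectors to form a crude sentence vector
function averageEmbedding(words) {
  const dim = wordVectors[words[0]].length;
  const sum = new Array(dim).fill(0);
  for (const w of words) {
    wordVectors[w].forEach((v, i) => (sum[i] += v));
  }
  return sum.map(v => v / words.length);
}

// Word order is lost: both orderings average to the same vector
// (up to floating-point rounding), which is why this variant
// struggles with negation and sarcasm.
const a = averageEmbedding(['how', 'old', 'are', 'you']);
const b = averageEmbedding(['are', 'you', 'how', 'old']);
console.log(a, b);
```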
This is a heatmap showing the similarity between three sentences: "How old are you?", "What is your age?" and "How are you?".

"How are you?" and "How old are you?" have a low similarity score even though they contain identical words.

Logeswaran et al. introduced a framework to learn sentence representations from unlabelled data. In this paper, the decoder (orange box) used in prior methods is replaced with a classifier that chooses the target sentence from a set of candidate sentences (green boxes); it improves the performance of question-and-answer systems.

Dipanjan (DJ) Sarkar explained the development of each embedding model. If you are keen to build a text classifier, his article details each step to perform sentiment analysis on a movie reviews dataset.

towardsdatascience.com

If you are curious to explore other language models, Pratik Bhavsar compared the performance of various language models such as BERT, ELMo, USE, Siamese and InferSent. Learning to choose the correct one will improve the outcome of your results.

medium.com

TensorFlow has provided a tutorial, a pretrained model and a notebook on the universal sentence encoder. Definitely check these out if you are thinking about building your own text classifier.

With semantically-meaningful vectors for each sentence, how can we measure the similarity between sentences?

Cosine similarity is a measure of similarity calculated from the cosine of the angle between two vectors. If two vectors are similar, the angle between them is small, and the cosine similarity value is closer to 1.

Here we input sentences into the universal sentence encoder, and it returns us sentence embedding vectors.

With the vectors, we can take the cosine similarities between vectors. For every sentence pair, A and B, we can calculate the cosine similarity of the A and B vectors.

We can determine a minimum threshold to group sentences together. As the similarity score falls between 0 and 1, perhaps we can choose 0.5, at the halfway mark.
That means any sentence pair with a similarity greater than 0.5 will be clustered together.

Euge Inzaugarat introduced six methods to measure the similarity between vectors. Each method is suitable for a particular context, so knowing them is like knowing your data science toolbox well.

towardsdatascience.com

In this project, I will be using these libraries:

TensorFlow.js
Universal sentence encoder
Angular

TensorFlow.js is a framework built by Google which enables machine learning in JavaScript. We can develop machine learning models and deploy them in the web browser and Node.js.

As I enjoy developing web applications, I was so happy when TensorFlow.js was released in 2018.

It is easy to get started, and we can install TensorFlow.js with npm.

$ npm install @tensorflow/tfjs

An example of a simple linear regression model would look like this.

import * as tf from '@tensorflow/tfjs';

const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});

const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);

model.fit(xs, ys, {epochs: 10}).then(() => {
  model.predict(tf.tensor2d([5], [1, 1])).print();
});

I will be using the universal sentence encoder package from TensorFlow.js. We can install the universal sentence encoder using npm.

$ npm install @tensorflow-models/universal-sentence-encoder

This is an example to show how we can extract embeddings from each sentence using the universal sentence encoder.

import * as use from '@tensorflow-models/universal-sentence-encoder';

use.load().then(model => {
  const sentences = [
    'Hello.',
    'How are you?'
  ];
  model.embed(sentences).then(embeddings => {
    embeddings.print(true /* verbose */);
  });
});

Angular is a web application framework built by Google for creating dynamic single-page apps.

For this project, I am using Angular 8.0. I enjoy building on Angular for its model-view-controller design pattern.
I have used Angular since its first version and for most of my web development. But since they roll out major releases every half a year, I feel that my work may become obsolete (maybe? I don't know). React is a popular UI framework, so perhaps I might switch to React one day. Who knows?

Create a function to calculate the similarity of two vectors using the cosine similarity formula.

// A dot-product helper is assumed by similarity(); a minimal version:
dot(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    sum += a[i] * b[i];
  }
  return sum;
}

similarity(a, b) {
  var magnitudeA = Math.sqrt(this.dot(a, a));
  var magnitudeB = Math.sqrt(this.dot(b, b));
  if (magnitudeA && magnitudeB)
    return this.dot(a, b) / (magnitudeA * magnitudeB);
  else
    return false;
}

Another function to calculate the similarity scores for every sentence pair is as follows.

cosine_similarity_matrix(matrix) {
  let cosine_similarity_matrix = [];
  for (let i = 0; i < matrix.length; i++) {
    let row = [];
    // Reuse scores already computed for earlier rows (the matrix is symmetric)
    for (let j = 0; j < i; j++) {
      row.push(cosine_similarity_matrix[j][i]);
    }
    // A sentence is always identical to itself
    row.push(1);
    for (let j = (i + 1); j < matrix.length; j++) {
      row.push(this.similarity(matrix[i], matrix[j]));
    }
    cosine_similarity_matrix.push(row);
  }
  return cosine_similarity_matrix;
}

I have introduced all the major components needed for this project. Now we just have to stack them up like Legos, package it and deploy to Github.

Voilà! We get a web application for a live demo.

We have a list of sentences, and these will be input into the universal sentence encoder. It will output the embeddings of each sentence. Then we calculate the similarity between each pair of sentences.

These are the sentences we will be testing our universal sentence encoder on. The objective is to group sentences with similar meaning together. I have picked a few difficult cases, so let us see how it performs.

Will it snow tomorrow?
Recently a lot of hurricanes have hit the US
Global warming is real

An apple a day, keeps the doctors away
Eating strawberries is healthy

what is your age?
How old are you?
How are you?

The dog bit Johnny
Johnny bit the dog

The cat ate the mouse
The mouse ate the cat

This heatmap shows how similar each sentence is to the other sentences.
The brighter the green, the closer the similarity is to 1, which means the sentences are more identical to each other.

We can adjust the value to determine a minimum similarity threshold to group sentences together. These are the sentences grouped together with a similarity value greater than 0.5.

Group 1
Recently a lot of hurricanes have hit the US
Global warming is real

Group 2
An apple a day, keeps the doctors away
Eating strawberries is healthy

Group 3
what is your age?
How old are you?

Group 4
The dog bit Johnny
Johnny bit the dog

Group 5
The cat ate the mouse
The mouse ate the cat

Our web application did an excellent job recognising "Group 1" as weather-related issues, even though the two sentences do not have any overlapping words.

It managed to identify that "hurricanes" and "global warming" are weather-related, but somehow didn't manage to group the "snow" sentence into this category.

Unfortunately, "Johnny bit the dog" and "The dog bit Johnny" have an 87% similarity. Poor Johnny, I don't know which is better.

Likewise for "The cat ate the mouse" and "The mouse ate the cat", I would expect the two vectors to have an opposing similarity.

Thanks for reading thus far!

Once again, do try the textual similarity analysis web-app, and let me know how it works for you in the comments below!

Check out the codes for the web application if you would like to build something similar.

As I enjoy building web applications, I have developed these web-apps to showcase machine learning capabilities on the web. Do follow me on Medium (Jingles) because I will be building more of such.

A time-series prediction with TensorFlow.js.

towardsdatascience.com

A reinforcement agent learning to play tic-tac-toe.

towardsdatascience.com

[1] Bengio, Yoshua, et al. "A neural probabilistic language model." (2003)

[2] Collobert, Ronan, and Jason Weston.
"A unified architecture for natural language processing: Deep neural networks with multitask learning." (2008)

[3] Mikolov, Tomas, et al. "Efficient estimation of word representations in vector space." (2013)

[4] Cer, Daniel, et al. "Universal sentence encoder." (2018)

[5] Yang, Yinfei, et al. "Learning semantic textual similarity from conversations." (2018)

[6] Logeswaran, Lajanugen, and Honglak Lee. "An efficient framework for learning sentence representations." (2018)
[ { "code": null, "e": 360, "s": 172, "text": "Have you wondered how search engines understand your queries and retrieve relevant results? How chatbots extract your intent from your questions and provide the most appropriate response?" }, { "code": null, "e": 473, "s": 360, "text": "In this story, I will detail each of the parts needed to build a textual similarity analysis web-app, including:" }, { "code": null, "e": 489, "s": 473, "text": "word embeddings" }, { "code": null, "e": 509, "s": 489, "text": "sentence embeddings" }, { "code": null, "e": 527, "s": 509, "text": "cosine similarity" }, { "code": null, "e": 571, "s": 527, "text": "build a textual similarity analysis web-app" }, { "code": null, "e": 591, "s": 571, "text": "analysis of results" }, { "code": null, "e": 696, "s": 591, "text": "Try the textual similarity analysis web-app, and let me know how it works for you in the comments below!" }, { "code": null, "e": 857, "s": 696, "text": "Word embeddings enable knowledge representation where a vector represents a word. This improves the ability for neural networks to learn from a textual dataset." }, { "code": null, "e": 1163, "s": 857, "text": "Before word embeddings were de facto standard for natural language processing, a common approach to deal with words was to use a one-hot vectorisation. Each word represents a column in the vector space, and each sentence is a vector of ones and zeros. Ones denote the presence of the word in the sentence." }, { "code": null, "e": 1405, "s": 1163, "text": "As a result, this leads to a huge and sparse representation, because there are many more zeros than ones. When there are many words in a vocabulary, it creates a large word vector. This might become a problem for machine learning algorithms." }, { "code": null, "e": 1583, "s": 1405, "text": "One-hot vectorisation also fails to capture the meaning of words. 
For example, β€œdrink” and β€œbeverage”, even though these are two different words, they have a similar definition." }, { "code": null, "e": 1824, "s": 1583, "text": "With word embeddings, semantically similar words have similar vectors representation. As a result, when presented with phrases like β€œI would like to order a drink” or β€œa beverage”, an ordering system can interpret that request the same way." }, { "code": null, "e": 2009, "s": 1824, "text": "Back in 2003, Yoshua Bengio et al. introduced a language model concept. The focus of that paper was to learn representations for words, which allows the model to predict the next word." }, { "code": null, "e": 2173, "s": 2009, "text": "This paper is crucial and led to the development and discovery of word embeddings. Yoshua received the Turing Award alongside with Geoffrey Hinton, and Yann LeCun." }, { "code": null, "e": 2432, "s": 2173, "text": "In 2008, Ronan and Jason worked on a neural network that could learn to identify similar words. Their discovery has opened up many possibilities for natural language processing. The table below shows a list of words and the respective ten most similar words." }, { "code": null, "e": 2623, "s": 2432, "text": "In 2013, Tomas Mikolov et al. introduced learning high-quality word vectors from datasets with billions of words. They named it Word2Vec, and it contains millions of words in the vocabulary." }, { "code": null, "e": 2742, "s": 2623, "text": "Word2Vec has become popular since then. Nowadays, the word embeddings layer is in all popular deep learning framework." }, { "code": null, "e": 2926, "s": 2742, "text": "On Google’s pretrained Word2Vec model, they trained on roughly 100 billion words from Google News dataset. The word β€œcat” shares the closest meanings to β€œcats”, β€œdog”, β€œmouse”, β€œpet”." }, { "code": null, "e": 3125, "s": 2926, "text": "Word embedding also manages to recognise relationships between words. 
A classic example is the gender-role relationships between words. For example, β€œman” is to β€œwoman” is like β€œking” is to β€œqueen”." }, { "code": null, "e": 3264, "s": 3125, "text": "Galina Olejnik did an excellent job describing the motivation of word embeddings. From one-hot encoding and TF-IDF to GloVe and Poincaré." }, { "code": null, "e": 3287, "s": 3264, "text": "towardsdatascience.com" }, { "code": null, "e": 3490, "s": 3287, "text": "Here’s a 29-minute comprehensive article about various language models by Dipanjan (DJ) Sarkar. He covers Word2Vec, GloVe and FastText; do check this out, if you are planning to work on word embeddings." }, { "code": null, "e": 3513, "s": 3490, "text": "towardsdatascience.com" }, { "code": null, "e": 3745, "s": 3513, "text": "TensorFlow has provided a tutorial on word embeddings and codes in this Colab notebook. You can get your hands dirty with the codes and use it to train your word embeddings on your dataset. This can definitely help you get started." }, { "code": null, "e": 3929, "s": 3745, "text": "For who enjoys animation, there is a cool embeddings visualisation on Embedding Projector. Every dot represents a word, and you can visualise semantically similar words in a 3D space." }, { "code": null, "e": 4004, "s": 3929, "text": "We have word vectors to represent meanings for words; how about sentences?" }, { "code": null, "e": 4178, "s": 4004, "text": "Like word embeddings, universal sentence encoder is a versatile sentence embedding model that converts text into semantically-meaningful fixed-length vector representations." }, { "code": null, "e": 4408, "s": 4178, "text": "These vectors produced by the universal sentence encoder capture rich semantic information. We can use it for various natural language processing tasks, to train classifiers such as classification and textual similarity analysis." }, { "code": null, "e": 4572, "s": 4408, "text": "There are two universal sentence encoder models by Google. 
One of them is based on a Transformer architecture and the other one is based on Deep Averaging Network." }, { "code": null, "e": 4932, "s": 4572, "text": "Transformer, the sentence embedding creates context-aware representations for every word to produce sentence embeddings. It is designed for higher accuracy, but the encoding requires more memory and computational time. This is useful for sentiment classification where words like β€˜not’ can change the meaning and able to handle double negation like β€œnot bad”." }, { "code": null, "e": 5429, "s": 4932, "text": "Deep Averaging Network, the embedding of words are first averaged together and then passed through a feedforward deep neural network to produce sentence embeddings. Unfortunately, by averaging the vectors, we lose the context of the sentence and sequence of words in the sentence in the process. It is designed for speed and efficiency, and some accuracy is sacrificed (especially on sarcasm and double negation). A great model for topic classification, classifying long articles into categories." }, { "code": null, "e": 5525, "s": 5429, "text": "Yinfei Yang et al. introduce a way to learn sentence representations using conversational data." }, { "code": null, "e": 5680, "s": 5525, "text": "For example, β€œHow old are you?” and β€œWhat is your age?”, both questions are semantically similar, a chatbot can reply the same answer β€œI am 20 years old”." }, { "code": null, "e": 5881, "s": 5680, "text": "In contrast, while β€œHow are you?” and β€œHow old are you?” contain identical words, both sentences have different meanings. A chatbot has to understand the question and provide the appropriate response." }, { "code": null, "e": 6006, "s": 5881, "text": "This is a heatmap showing the similarity between three sentences β€œHow old are you?”, β€œWhat is your age?” and β€œHow are you?”." 
}, { "code": null, "e": 6106, "s": 6006, "text": "β€œHow are you?” and β€œHow old are you?” have low similarity score even though having identical words." }, { "code": null, "e": 6436, "s": 6106, "text": "Logeswaran et al. introduced a framework to learn sentence representations from unlabelled data. In this paper, the decoder (orange box) used in prior methods is replaced with a classifier that chooses the target sentence from a set of candidate sentences (green boxes); it improves the performance of question and answer system." }, { "code": null, "e": 6640, "s": 6436, "text": "Dipanjan (DJ) Sarkar explained the development of each embedding models. If you are keen to build a text classifier, his article detailed each step to perform sentiment analysis on movie reviews dataset." }, { "code": null, "e": 6663, "s": 6640, "text": "towardsdatascience.com" }, { "code": null, "e": 6908, "s": 6663, "text": "If you are curious to explore other language models, Pratik Bhavsar compared the performance of various language models such as BERT, ELMo, USE, Siamese and InferSent. Learning to choose the correct one will improve the outcome of your results." }, { "code": null, "e": 6919, "s": 6908, "text": "medium.com" }, { "code": null, "e": 7107, "s": 6919, "text": "TensorFlow has provided a tutorial, a pretrained model and a notebook on universal sentence encoder. Definitely check this out if you are thinking about building your own text classifier." }, { "code": null, "e": 7216, "s": 7107, "text": "With semantically-meaningful vectors for each sentence, how can we measure the similarity between sentences?" }, { "code": null, "e": 7423, "s": 7216, "text": "Cosine similarity is a measure of similarity by calculating the cosine angle between two vectors. If two vectors are similar, the angle between them is small, and the cosine similarity value is closer to 1." 
}, { "code": null, "e": 7531, "s": 7423, "text": "Here we input sentences into the universal sentence encoder, and it returns us sentence embeddings vectors." }, { "code": null, "e": 7695, "s": 7531, "text": "With the vectors, we can take the cosine similarities between vectors. For every sentence pair, A and B, we can calculate the cosine similarity of A and B vectors." }, { "code": null, "e": 7938, "s": 7695, "text": "We can determine a minimum threshold to group sentence together. As similarity score falls between 0 to 1, perhaps we can choose 0.5, at the halfway mark. That means any sentence that is greater than 0.5 similarities will be cluster together." }, { "code": null, "e": 8136, "s": 7938, "text": "Euge Inzaugarat introduced six methods to measure the similarity between vectors. Each method is suitable for a particular context, so knowing them it’s like knowing your data science toolbox well." }, { "code": null, "e": 8159, "s": 8136, "text": "towardsdatascience.com" }, { "code": null, "e": 8209, "s": 8159, "text": "In this project, I will be using these libraries:" }, { "code": null, "e": 8223, "s": 8209, "text": "TensorFlow.js" }, { "code": null, "e": 8250, "s": 8223, "text": "Universal sentence encoder" }, { "code": null, "e": 8258, "s": 8250, "text": "Angular" }, { "code": null, "e": 8436, "s": 8258, "text": "TensorFlow.js is a framework built by Google which enables machine learning in JavaScript. We can develop machine learning models and deploy them in the web browser and Node.js." }, { "code": null, "e": 8532, "s": 8436, "text": "As I enjoy developing web applications, I was so happy when TensorFlow.js was released in 2018." }, { "code": null, "e": 8602, "s": 8532, "text": "It is easy to get started, and we can install TensorFlow.js with npm." }, { "code": null, "e": 8633, "s": 8602, "text": "$ npm install @tensorflow/tfjs" }, { "code": null, "e": 8702, "s": 8633, "text": "An example of a simple linear regression model would look like this." 
}, { "code": null, "e": 9075, "s": 8702, "text": "import * as tf from '@tensorflow/tfjs';const model = tf.sequential();model.add(tf.layers.dense({units: 1, inputShape: [1]}));model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);model.fit(xs, ys, {epochs: 10}).then(() => { model.predict(tf.tensor2d([5], [1, 1])).print();});" }, { "code": null, "e": 9206, "s": 9075, "text": "I will be using is the universal sentence encoder package from TensorFlow.js. We can install universal sentence encoder using npm." }, { "code": null, "e": 9266, "s": 9206, "text": "$ npm install @tensorflow-models/universal-sentence-encoder" }, { "code": null, "e": 9376, "s": 9266, "text": "This is an example to show how we can extract embeddings from each sentence using universal sentence encoder." }, { "code": null, "e": 9622, "s": 9376, "text": "import * as use from '@tensorflow-models/universal-sentence-encoder';use.load().then(model => { const sentences = [ 'Hello.', 'How are you?' ]; model.embed(sentences).then(embeddings => { embeddings.print(true /* verbose */); });});" }, { "code": null, "e": 9716, "s": 9622, "text": "Angular is a web application framework built by Google for creating dynamic single-page apps." }, { "code": null, "e": 10123, "s": 9716, "text": "For this project, I am using Angular 8.0. I enjoy building on Angular for its model–view–controller design pattern. I have used Angular since its first version and for most of my web development. But since they roll out major releases every half a year, feeling that my work will become obsolete (maybe? I don’t know). React is a popular UI framework, so perhaps I might switch to React one day. Who knows?" }, { "code": null, "e": 10221, "s": 10123, "text": "Create a function to calculate the similarity of two vectors using the cosine similarity formula." 
}, { "code": null, "e": 10435, "s": 10221, "text": "similarity(a, b) { var magnitudeA = Math.sqrt(this.dot(a, a)); var magnitudeB = Math.sqrt(this.dot(b, b)); if (magnitudeA && magnitudeB) return this.dot(a, b) / (magnitudeA * magnitudeB); else return false}" }, { "code": null, "e": 10523, "s": 10435, "text": "Another function to calculate the similarity scores for every sentence pair as follows." }, { "code": null, "e": 10914, "s": 10523, "text": "cosine_similarity_matrix(matrix){ let cosine_similarity_matrix = []; for(let i=0;i<matrix.length;i++){ let row = []; for(let j=0;j<i;j++){ row.push(cosine_similarity_matrix[j][i]); } row.push(1); for(let j=(i+1);j<matrix.length;j++){ row.push(this.similarity(matrix[i],matrix[j])); } cosine_similarity_matrix.push(row); } return cosine_similarity_matrix;}" }, { "code": null, "e": 11061, "s": 10914, "text": "I have introduced all the major components needed for this project. Now we just have to stack them up like Legos, package it and deploy to Github." }, { "code": null, "e": 11111, "s": 11061, "text": "VoilaΜ€! We get a web application for a live demo." }, { "code": null, "e": 11310, "s": 11111, "text": "We have a list of sentences, and these will be input into the universal sentence encoder. It will output will the embeddings of each sentence. Then we calculate the similarity between each sentence." }, { "code": null, "e": 11520, "s": 11310, "text": "These are the sentences we will be testing our universal sentence encoder. The objective is to group sentences with similar meaning together. I have picked a few difficult cases, so let us see how it performs." }, { "code": null, "e": 11609, "s": 11520, "text": "Will it snow tomorrow?Recently a lot of hurricanes have hit the USGlobal warming is real" }, { "code": null, "e": 11678, "s": 11609, "text": "An apple a day, keeps the doctors awayEating strawberries is healthy" }, { "code": null, "e": 11724, "s": 11678, "text": "what is your age?How old are you?How are you?" 
}, { "code": null, "e": 11761, "s": 11724, "text": "The dog bit JohnnyJohnny bit the dog" }, { "code": null, "e": 11804, "s": 11761, "text": "The cat ate the mouseThe mouse ate the cat" }, { "code": null, "e": 11991, "s": 11804, "text": "This heatmap shows how similar each sentence is to the other sentences. The brighter the green, the closer the similarity is to 1, which means the sentences are more identical to each other." }, { "code": null, "e": 12169, "s": 11991, "text": "We can adjust the value to determine a minimum similarity threshold to group sentences together. These are the sentences grouped together with a similarity value greater than 0.5." }, { "code": null, "e": 12243, "s": 12169, "text": "Group 1Recently a lot of hurricanes have hit the USGlobal warming is real" }, { "code": null, "e": 12319, "s": 12243, "text": "Group 2An apple a day, keeps the doctors awayEating strawberries is healthy" }, { "code": null, "e": 12360, "s": 12319, "text": "Group 3what is your age?How old are you?" }, { "code": null, "e": 12404, "s": 12360, "text": "Group 4The dog bit JohnnyJohnny bit the dog" }, { "code": null, "e": 12454, "s": 12404, "text": "Group 5The cat ate the mouseThe mouse ate the cat" }, { "code": null, "e": 12609, "s": 12454, "text": "Our web application did an excellent job recognising “Group 1” as weather-related issues, even though both sentences do not have any overlapping words." }, { "code": null, "e": 12758, "s": 12609, "text": "It managed to identify that “hurricanes” and “global warming” are weather-related, but somehow didn't manage to group the “snow” sentence into this category." }, { "code": null, "e": 12885, "s": 12758, "text": "Unfortunately, “Johnny bit the dog” and “The dog bit Johnny” have an 87% similarity. Poor Johnny, I don't know which is better." 
}, { "code": null, "e": 13025, "s": 12885, "text": "Likewise for “The cat ate the mouse” and “The mouse ate the cat”, I would expect the two vectors to have an opposing similarity." }, { "code": null, "e": 13054, "s": 13025, "text": "Thanks for reading thus far!" }, { "code": null, "e": 13174, "s": 13054, "text": "Once again, do try the textual similarity analysis web-app, and let me know how it works for you in the comments below!" }, { "code": null, "e": 13264, "s": 13174, "text": "Check out the codes for the web application if you would like to build something similar." }, { "code": null, "e": 13462, "s": 13264, "text": "As I enjoy building web applications, I have developed these web-apps to showcase machine learning capabilities on the web. Do follow me on Medium (Jingles) because I will be building more of such." }, { "code": null, "e": 13507, "s": 13462, "text": "A time-series prediction with TensorFlow.js." }, { "code": null, "e": 13530, "s": 13507, "text": "towardsdatascience.com" }, { "code": null, "e": 13582, "s": 13530, "text": "A reinforcement agent learning to play tic-tac-toe." }, { "code": null, "e": 13605, "s": 13582, "text": "towardsdatascience.com" }, { "code": null, "e": 13680, "s": 13605, "text": "[1] Bengio, Yoshua, et al. “A neural probabilistic language model.” (2003)" }, { "code": null, "e": 13831, "s": 13680, "text": "[2] Collobert, Ronan, and Jason Weston. “A unified architecture for natural language processing: Deep neural networks with multitask learning.” (2008)" }, { "code": null, "e": 13929, "s": 13831, "text": "[3] Mikolov, Tomas, et al. “Efficient estimation of word representations in vector space.” (2013)" }, { "code": null, "e": 13990, "s": 13929, "text": "[4] Cer, Daniel, et al. “Universal sentence encoder.” (2018)" }, { "code": null, "e": 14081, "s": 13990, "text": "[5] Yang, Yinfei, et al. “Learning semantic textual similarity from conversations.” (2018)" } ]
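The similarity and cosine_similarity_matrix snippets quoted in the records above call a this.dot helper that the article never shows. For readers who want to check the math outside the browser, here is a minimal, self-contained re-implementation of the same logic in plain Python; the dot helper and the 0.0 returned for zero-magnitude vectors are my assumptions (the original JavaScript returns false in that case):

```python
import math

def dot(a, b):
    # sum of pairwise products of two equal-length vectors
    return sum(x * y for x, y in zip(a, b))

def similarity(a, b):
    # cosine similarity: dot(a, b) / (|a| * |b|)
    magnitude_a = math.sqrt(dot(a, a))
    magnitude_b = math.sqrt(dot(b, b))
    if magnitude_a and magnitude_b:
        return dot(a, b) / (magnitude_a * magnitude_b)
    return 0.0  # assumed fallback; the original returns false here

def cosine_similarity_matrix(matrix):
    # symmetric pairwise-similarity matrix; the diagonal is 1 and the
    # lower triangle is copied from rows computed earlier, as in the JS
    result = []
    for i in range(len(matrix)):
        row = [result[j][i] for j in range(i)]
        row.append(1.0)
        for j in range(i + 1, len(matrix)):
            row.append(similarity(matrix[i], matrix[j]))
        result.append(row)
    return result

embeddings = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(cosine_similarity_matrix(embeddings))
```

Orthogonal vectors score 0 and identical directions score 1; real sentence embeddings from the encoder are simply much longer vectors fed through the same two functions.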
CSS Transitions
CSS transitions allow you to change property values smoothly, over a given duration.

Mouse over the element below to see a CSS transition effect:

In this chapter you will learn about the following properties:

transition
transition-delay
transition-duration
transition-property
transition-timing-function

The numbers in the table specify the first browser version that fully supports the property.

To create a transition effect, you must specify two things:

the CSS property you want to add an effect to
the duration of the effect

Note: If the duration part is not specified, the transition will have no effect, because the default value is 0.

The following example shows a 100px * 100px red <div> element. The <div> element has also specified a transition effect for the width property, with a duration of 2 seconds:

The transition effect will start when the specified CSS property (width) changes value.

Now, let us specify a new value for the width property when a user mouses over the <div> element:

Notice that when the cursor mouses out of the element, it will gradually change back to its original style.

The following example adds a transition effect for both the width and height property, with a duration of 2 seconds for the width and 4 seconds for the height:

The transition-timing-function property specifies the speed curve of the transition effect.
The transition-timing-function property can have the following values:

ease - specifies a transition effect with a slow start, then fast, then end slowly (this is default)
linear - specifies a transition effect with the same speed from start to end
ease-in - specifies a transition effect with a slow start
ease-out - specifies a transition effect with a slow end
ease-in-out - specifies a transition effect with a slow start and end
cubic-bezier(n,n,n,n) - lets you define your own values in a cubic-bezier function

The following example shows some of the different speed curves that can be used:

The transition-delay property specifies a delay (in seconds) for the transition effect.

The following example has a 1 second delay before starting:

The following example adds a transition effect to the transformation:

The CSS transition properties can be specified one by one, like this:

or by using the shorthand property transition:

Exercise: Add a 2 second transition effect for width changes of the <div> element by filling in the blank declaration.

<style>
div {
  width: 100px;
  height: 100px;
  background: red;
  : ;
}

div:hover {
  width: 300px;
}
</style>

<body>
  <div>This is a div</div>
</body>

The following table lists all the CSS transition properties:
[ { "code": null, "e": 86, "s": 0, "text": "CSS transitions allows you to change property values smoothly, over a given duration." }, { "code": null, "e": 147, "s": 86, "text": "Mouse over the element below to see a CSS transition effect:" }, { "code": null, "e": 210, "s": 147, "text": "In this chapter you will learn about the following properties:" }, { "code": null, "e": 221, "s": 210, "text": "transition" }, { "code": null, "e": 238, "s": 221, "text": "transition-delay" }, { "code": null, "e": 258, "s": 238, "text": "transition-duration" }, { "code": null, "e": 278, "s": 258, "text": "transition-property" }, { "code": null, "e": 305, "s": 278, "text": "transition-timing-function" }, { "code": null, "e": 398, "s": 305, "text": "The numbers in the table specify the first browser version that fully supports the property." }, { "code": null, "e": 458, "s": 398, "text": "To create a transition effect, you must specify two things:" }, { "code": null, "e": 504, "s": 458, "text": "the CSS property you want to add an effect to" }, { "code": null, "e": 531, "s": 504, "text": "the duration of the effect" }, { "code": null, "e": 644, "s": 531, "text": "Note: If the duration part is not specified, the transition will have no effect, because the default value is 0." }, { "code": null, "e": 819, "s": 644, "text": "The following example shows a 100px * 100px red <div> element. The <div> \nelement has also specified a transition effect for the width property, with a duration of 2 seconds:" }, { "code": null, "e": 907, "s": 819, "text": "The transition effect will start when the specified CSS property (width) changes value." }, { "code": null, "e": 1005, "s": 907, "text": "Now, let us specify a new value for the width property when a user mouses over the <div> element:" }, { "code": null, "e": 1113, "s": 1005, "text": "Notice that when the cursor mouses out of the element, it will gradually change back to its original style." 
}, { "code": null, "e": 1274, "s": 1113, "text": "The following example adds a transition effect for both the width and height property, with a duration \nof 2 seconds for the width and 4 seconds for the height:" }, { "code": null, "e": 1366, "s": 1274, "text": "The transition-timing-function property specifies the speed curve of the transition effect." }, { "code": null, "e": 1437, "s": 1366, "text": "The transition-timing-function property can have the following values:" }, { "code": null, "e": 1538, "s": 1437, "text": "ease - specifies a transition effect with a slow start, then fast, then end slowly (this is default)" }, { "code": null, "e": 1615, "s": 1538, "text": "linear - specifies a transition effect with the same speed from start to end" }, { "code": null, "e": 1673, "s": 1615, "text": "ease-in - specifies a transition effect with a slow start" }, { "code": null, "e": 1730, "s": 1673, "text": "ease-out - specifies a transition effect with a slow end" }, { "code": null, "e": 1800, "s": 1730, "text": "ease-in-out - specifies a transition effect with a slow start and end" }, { "code": null, "e": 1883, "s": 1800, "text": "cubic-bezier(n,n,n,n) - lets you define your own values in a cubic-bezier function" }, { "code": null, "e": 1964, "s": 1883, "text": "The following example shows some of the different speed curves that can be used:" }, { "code": null, "e": 2052, "s": 1964, "text": "The transition-delay property specifies a delay (in seconds) for the transition effect." 
}, { "code": null, "e": 2112, "s": 2052, "text": "The following example has a 1 second delay before starting:" }, { "code": null, "e": 2182, "s": 2112, "text": "The following example adds a transition effect to the transformation:" }, { "code": null, "e": 2252, "s": 2182, "text": "The CSS transition properties can be specified one by one, like this:" }, { "code": null, "e": 2299, "s": 2252, "text": "or by using the shorthand property transition:" }, { "code": null, "e": 2372, "s": 2299, "text": "Add a 2 second transition effect for width changes of the <div> element." }, { "code": null, "e": 2530, "s": 2372, "text": "<style>\ndiv {\n width: 100px;\n height: 100px;\n background: red;\n : ;\n}\n\ndiv:hover {\n width: 300px;\n}\n</style>\n\n<body>\n <div>This is a div</div>\n</body>\n" }, { "code": null, "e": 2549, "s": 2530, "text": "Start the Exercise" }, { "code": null, "e": 2610, "s": 2549, "text": "The following table lists all the CSS transition properties:" } ]
Hibernate - Mapping Files
Object/relational mappings are usually defined in an XML document. This mapping file instructs Hibernate how to map the defined class or classes to the database tables.

Though many Hibernate users choose to write the XML by hand, a number of tools exist to generate the mapping document. These include XDoclet, Middlegen and AndroMDA for the advanced Hibernate users.

Let us consider our previously defined POJO class whose objects will persist in the table defined in the next section.

public class Employee {
   private int id;
   private String firstName; 
   private String lastName; 
   private int salary; 

   public Employee() {}
   
   public Employee(String fname, String lname, int salary) {
      this.firstName = fname;
      this.lastName = lname;
      this.salary = salary;
   }
   
   public int getId() {
      return id;
   }
   
   public void setId( int id ) {
      this.id = id;
   }
   
   public String getFirstName() {
      return firstName;
   }
   
   public void setFirstName( String first_name ) {
      this.firstName = first_name;
   }
   
   public String getLastName() {
      return lastName;
   }
   
   public void setLastName( String last_name ) {
      this.lastName = last_name;
   }
   
   public int getSalary() {
      return salary;
   }
   
   public void setSalary( int salary ) {
      this.salary = salary;
   }
}

There would be one table corresponding to each object you are willing to provide persistence for. Consider that the above objects need to be stored and retrieved into the following RDBMS table:

create table EMPLOYEE (
   id INT NOT NULL auto_increment,
   first_name VARCHAR(20) default NULL,
   last_name VARCHAR(20) default NULL,
   salary INT default NULL,
   PRIMARY KEY (id)
);

Based on the two entities above, we can define the following mapping file, which instructs Hibernate how to map the defined class or classes to the database tables.
<?xml version = "1.0" encoding = "utf-8"?>
<!DOCTYPE hibernate-mapping PUBLIC 
"-//Hibernate/Hibernate Mapping DTD//EN"
"http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd"> 

<hibernate-mapping>
   <class name = "Employee" table = "EMPLOYEE">
      
      <meta attribute = "class-description">
         This class contains the employee detail. 
      </meta>
      
      <id name = "id" type = "int" column = "id">
         <generator class="native"/>
      </id>
      
      <property name = "firstName" column = "first_name" type = "string"/>
      <property name = "lastName" column = "last_name" type = "string"/>
      <property name = "salary" column = "salary" type = "int"/>
      
   </class>
</hibernate-mapping>

You should save the mapping document in a file with the format <classname>.hbm.xml. We saved our mapping document in the file Employee.hbm.xml.

Let us understand a little detail about the mapping elements used in the mapping file:

The mapping document is an XML document having <hibernate-mapping> as the root element, which contains all the <class> elements.

The <class> elements are used to define specific mappings from Java classes to the database tables. The Java class name is specified using the name attribute of the class element and the database table name is specified using the table attribute.

The <meta> element is an optional element and can be used to create the class description.

The <id> element maps the unique ID attribute in the class to the primary key of the database table. The name attribute of the id element refers to the property in the class and the column attribute refers to the column in the database table. The type attribute holds the hibernate mapping type; these mapping types convert from Java to SQL data types.

The <generator> element within the id element is used to generate the primary key values automatically. The class attribute of the generator element is set to native to let hibernate pick up either identity, sequence, or hilo algorithm to create the primary key depending upon the capabilities of the underlying database.

The <property> element is used to map a Java class property to a column in the database table. The name attribute of the element refers to the property in the class and the column attribute refers to the column in the database table. The type attribute holds the hibernate mapping type; these mapping types convert from Java to SQL data types.
There are other attributes and elements available, which will be used in a mapping document and I would try to cover as many as possible while discussing other Hibernate related topics.
[ { "code": null, "e": 2237, "s": 2063, "text": "An Object/relational mappings are usually defined in an XML document. This mapping file instructs Hibernate β€” how to map the defined class or classes to the database tables?" }, { "code": null, "e": 2440, "s": 2237, "text": "Though many Hibernate users choose to write the XML by hand, but a number of tools exist to generate the mapping document. These include XDoclet, Middlegen and AndroMDA for the advanced Hibernate users." }, { "code": null, "e": 2555, "s": 2440, "text": "Let us consider our previously defined POJO class whose objects will persist in the table defined in next section." }, { "code": null, "e": 3434, "s": 2555, "text": "public class Employee {\n private int id;\n private String firstName; \n private String lastName; \n private int salary; \n\n public Employee() {}\n \n public Employee(String fname, String lname, int salary) {\n this.firstName = fname;\n this.lastName = lname;\n this.salary = salary;\n }\n \n public int getId() {\n return id;\n }\n \n public void setId( int id ) {\n this.id = id;\n }\n \n public String getFirstName() {\n return firstName;\n }\n \n public void setFirstName( String first_name ) {\n this.firstName = first_name;\n }\n \n public String getLastName() {\n return lastName;\n }\n \n public void setLastName( String last_name ) {\n this.lastName = last_name;\n }\n \n public int getSalary() {\n return salary;\n }\n \n public void setSalary( int salary ) {\n this.salary = salary;\n }\n}" }, { "code": null, "e": 3616, "s": 3434, "text": "There would be one table corresponding to each object you are willing to provide persistence. 
Consider above objects need to be stored and retrieved into the following RDBMS table βˆ’" }, { "code": null, "e": 3811, "s": 3616, "text": "create table EMPLOYEE (\n id INT NOT NULL auto_increment,\n first_name VARCHAR(20) default NULL,\n last_name VARCHAR(20) default NULL,\n salary INT default NULL,\n PRIMARY KEY (id)\n);" }, { "code": null, "e": 3972, "s": 3811, "text": "Based on the two above entities, we can define following mapping file, which instructs Hibernate how to map the defined class or classes to the database tables." }, { "code": null, "e": 4703, "s": 3972, "text": "<?xml version = \"1.0\" encoding = \"utf-8\"?>\n<!DOCTYPE hibernate-mapping PUBLIC \n\"-//Hibernate/Hibernate Mapping DTD//EN\"\n\"http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd\"> \n\n<hibernate-mapping>\n <class name = \"Employee\" table = \"EMPLOYEE\">\n \n <meta attribute = \"class-description\">\n This class contains the employee detail. \n </meta>\n \n <id name = \"id\" type = \"int\" column = \"id\">\n <generator class=\"native\"/>\n </id>\n \n <property name = \"firstName\" column = \"first_name\" type = \"string\"/>\n <property name = \"lastName\" column = \"last_name\" type = \"string\"/>\n <property name = \"salary\" column = \"salary\" type = \"int\"/>\n \n </class>\n</hibernate-mapping>" }, { "code": null, "e": 4847, "s": 4703, "text": "You should save the mapping document in a file with the format <classname>.hbm.xml. We saved our mapping document in the file Employee.hbm.xml." }, { "code": null, "e": 4939, "s": 4847, "text": "Let us see understand a little detail about the mapping elements used in the mapping file βˆ’" }, { "code": null, "e": 5068, "s": 4939, "text": "The mapping document is an XML document having <hibernate-mapping> as the root element, which contains all the <class> elements." 
}, { "code": null, "e": 5197, "s": 5068, "text": "The mapping document is an XML document having <hibernate-mapping> as the root element, which contains all the <class> elements." }, { "code": null, "e": 5446, "s": 5197, "text": "The <class> elements are used to define specific mappings from a Java classes to the database tables. The Java class name is specified using the name attribute of the class element and the database table name is specified using the table attribute." }, { "code": null, "e": 5695, "s": 5446, "text": "The <class> elements are used to define specific mappings from a Java classes to the database tables. The Java class name is specified using the name attribute of the class element and the database table name is specified using the table attribute." }, { "code": null, "e": 5783, "s": 5695, "text": "The <meta> element is optional element and can be used to create the class description." }, { "code": null, "e": 5871, "s": 5783, "text": "The <meta> element is optional element and can be used to create the class description." }, { "code": null, "e": 6223, "s": 5871, "text": "The <id> element maps the unique ID attribute in class to the primary key of the database table. The name attribute of the id element refers to the property in the class and the column attribute refers to the column in the database table. The type attribute holds the hibernate mapping type, this mapping types will convert from Java to SQL data type." }, { "code": null, "e": 6575, "s": 6223, "text": "The <id> element maps the unique ID attribute in class to the primary key of the database table. The name attribute of the id element refers to the property in the class and the column attribute refers to the column in the database table. The type attribute holds the hibernate mapping type, this mapping types will convert from Java to SQL data type." 
}, { "code": null, "e": 6893, "s": 6575, "text": "The <generator> element within the id element is used to generate the primary key values automatically. The class attribute of the generator element is set to native to let hibernate pick up either identity, sequence, or hilo algorithm to create primary key depending upon the capabilities of the underlying database." }, { "code": null, "e": 7211, "s": 6893, "text": "The <generator> element within the id element is used to generate the primary key values automatically. The class attribute of the generator element is set to native to let hibernate pick up either identity, sequence, or hilo algorithm to create primary key depending upon the capabilities of the underlying database." }, { "code": null, "e": 7558, "s": 7211, "text": "The <property> element is used to map a Java class property to a column in the database table. The name attribute of the element refers to the property in the class and the column attribute refers to the column in the database table. The type attribute holds the hibernate mapping type, this mapping types will convert from Java to SQL data type." }, { "code": null, "e": 7905, "s": 7558, "text": "The <property> element is used to map a Java class property to a column in the database table. The name attribute of the element refers to the property in the class and the column attribute refers to the column in the database table. The type attribute holds the hibernate mapping type, this mapping types will convert from Java to SQL data type." }, { "code": null, "e": 8091, "s": 7905, "text": "There are other attributes and elements available, which will be used in a mapping document and I would try to cover as many as possible while discussing other Hibernate related topics." 
} ]
How to use JavaScript to hide a DIV when the user clicks outside of it?
To hide a div when the user clicks outside of it, try to run the following code:

<!DOCTYPE html>
<html>
   <body>
      <script>
         window.onload = function(){
            var hideMe = document.getElementById('hideMe');
            // hide the div whenever the click target is not the div itself
            document.onclick = function(e){
               if(e.target.id !== 'hideMe'){
                  hideMe.style.display = 'none';
               }
            };
         };
      </script>
      <div id="hideMe">Click outside this div and hide it.</div>
   </body>
</html>
[ { "code": null, "e": 1142, "s": 1062, "text": "To hide a div when the user clicks outside of it, try to run the following code" }, { "code": null, "e": 1579, "s": 1152, "text": "<!DOCTYPE html>\n<html>\n <body>\n <script>\n window.onload = function(){\n var hideMe = document.getElementById('hideMe');\n document.onclick = function(e){\n if(e.target.id !== 'hideMe'){\n hideMe.style.display = 'none';\n }\n };\n };\n </script>\n <div id=\"hideMe\">Click outside this div and hide it.</div>\n </body>\n</html>" } ]
Boolean Indexing in Python
Boolean values like True/False and 1/0 can be used as indexes in a pandas dataframe. They can help us filter out the required records. In the below examples we will see different methods that can be used to carry out the Boolean indexing operations.

Let's consider a data frame describing the data from a game. The various points scored on different days are mentioned in a dictionary. Then we can create an index on the dataframe using True and False as the indexing values. Then we can print the final dataframe.

import pandas as pd
# dictionary
game = {'Day':["Monday","Tuesday","Wednesday","Thursday","Friday"], 'points':[31,24,16,11,22]}
df = pd.DataFrame(game,index=[True,False,True,False,True])
print(df)

Running the above code gives us the following result

             Day  points
True      Monday      31
False    Tuesday      24
True   Wednesday      16
False   Thursday      11
True      Friday      22

The loc function can be used to filter out records that have a specific Boolean value. In the below example we fetch only the records where the Boolean value is True.

import pandas as pd
# dictionary
game = {'Day':["Monday","Tuesday","Wednesday","Thursday","Friday"], 'points':[31,24,16,11,22]}
df = pd.DataFrame(game,index=[True,False,True,False,True])
print(df.loc[True])

Running the above code gives us the following result

            Day  points
True     Monday      31
True  Wednesday      16
True     Friday      22

In this method we also use integers as Boolean values. So we change the True and False values in the index to 1 and 0, then use them to filter out the records. Note that the older df.ix accessor has been removed from recent pandas versions; df.loc performs the same label-based lookup here.

import pandas as pd
# dictionary
game = {'Day':["Monday","Tuesday","Wednesday","Thursday","Friday"], 'points':[31,24,16,11,22]}
df = pd.DataFrame(game,index=[1,1,0,0,1])
print(df.loc[0])

Running the above code gives us the following result:

         Day  points
0  Wednesday      16
0   Thursday      11
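Besides using Boolean labels in the index as above, pandas also supports Boolean masks: a comparison on a column produces a True/False Series that selects rows directly. A short sketch using the same game data (the 20-point threshold is just an illustrative choice):

```python
import pandas as pd

game = {'Day': ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
        'points': [31, 24, 16, 11, 22]}
df = pd.DataFrame(game)

# the comparison yields a Boolean Series aligned with the rows
mask = df['points'] > 20

# indexing the dataframe with the mask keeps only the True rows
high_scores = df[mask]
print(high_scores)
```

This prints only the Monday, Tuesday and Friday rows; df.loc[mask] behaves the same way.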
[ { "code": null, "e": 1315, "s": 1062, "text": "The Boolean values like True & false and 1&0 can be used as indexes in panda dataframe. They can help us filter out the required records. In the below exampels we will see different methods that can be used to carry out the Boolean indexing operations." }, { "code": null, "e": 1581, "s": 1315, "text": "Let’s consider a data frame desciribing the data from a game. The various points scored on different days are mentioned in a dictionary. Then we can create an index on the dataframe using True and False as the indexing values. Then we can print the final dataframe." }, { "code": null, "e": 1592, "s": 1581, "text": " Live Demo" }, { "code": null, "e": 1789, "s": 1592, "text": "import pandas as pd\n# dictionary\ngame = {'Day':[\"Monday\",\"Tuesday\",\"Wednesday\",\"Thursday\",\"Friday\"], 'points':[31,24,16,11,22]}\ndf = pd.DataFrame(game,index=[True,False,True,False,True])\nprint(df)" }, { "code": null, "e": 1842, "s": 1789, "text": "Running the above code gives us the following result" }, { "code": null, "e": 2034, "s": 1842, "text": " Day points\nTrue Monday 31\nFalse Tuesday 24\nTrue Wednesday 16\nFalse Thursday 11\nTrue Friday 22" }, { "code": null, "e": 2205, "s": 2034, "text": "This function can be used to filter out records that has a specific Boolean value. In the below example we can see fetch only the records where the Boolean value is True." 
}, { "code": null, "e": 2216, "s": 2205, "text": " Live Demo" }, { "code": null, "e": 2434, "s": 2216, "text": "import pandas as pd\n# dictionary\ngame = {'Day':[\"Monday\",\"Tuesday\",\"Wednesday\",\"Thursday\",\"Friday\"], 'points':[31,24,16,11,22]}\ndf = pd.DataFrame(game,index=[True,False,True,False,True])\n#print(df)\nprint(df.loc[True])" }, { "code": null, "e": 2487, "s": 2434, "text": "Running the above code gives us the following result" }, { "code": null, "e": 2603, "s": 2487, "text": " Day points\nTrue Monday 31\nTrue Wednesday 16\nTrue Friday 22" }, { "code": null, "e": 2767, "s": 2603, "text": "In this method we also use integers as Boolean values. So we change the True and False values in the dataframe to 1 and 0. Then use them to filter out the records." }, { "code": null, "e": 2778, "s": 2767, "text": " Live Demo" }, { "code": null, "e": 2975, "s": 2778, "text": "import pandas as pd\n# dictionary\ngame = {'Day':[\"Monday\",\"Tuesday\",\"Wednesday\",\"Thursday\",\"Friday\"], 'points':[31,24,16,11,22]}\ndf = pd.DataFrame(game,index=[1,1,0,0,1])\n#print(df)\nprint(df.ix[0])" }, { "code": null, "e": 3029, "s": 2975, "text": "Running the above code gives us the following result:" }, { "code": null, "e": 3086, "s": 3029, "text": " Day points\n0 Wednesday 16\n0 T hursday 11" } ]
HTTP headers | Content-Disposition - GeeksforGeeks
22 Jun, 2020

The HTTP Content-Disposition is a response-type header field that gives information on how to process the response payload, and additional information such as the filename when the user saves it locally. This response header field holds a number of values and parameters in the larger context of MIME (Multipurpose Internet Mail Extensions). However, it reduces to a fixed set of parameters and values under HTTP forms and POST requests.

The Content-Disposition header field takes different values while working as a response header for data enclosed in the main body, forms, or multiple parts of the content. It has an option to make the data available locally or display it in the browser while treating content present in the main body. It can give information about the specified field of data which is stored as sub-parts in multipart/form-data.

Syntax:

Syntax for treating data in the main body:

Content-Disposition: inline
Content-Disposition: attachment
Content-Disposition: attachment; filename="file_name.html"

Syntax for treating Multipart/Form Data:

Content-Disposition: form-data
Content-Disposition: form-data; name="field_value"
Content-Disposition: form-data; name="field_value"; filename="file_name.html"

Directives:

1. Content Disposition Type:

inline: This indicates that the data should be displayed automatically in the browser.
attachment: This indicates that the user should receive a prompt (usually a Save As dialog box) to save the file locally on the disk to access it.
filename: It is an optional parameter that contains the original name of the file sent to the recipient. The receiver has complete authority to change the name or the suggested directory to save the file. This parameter can also be used with the inline type of disposition. RFC 5987 provided a variant filename* with new encoding that performs a similar function. This parameter is now preferred over the conventional filename when both are used by a header.

2.
Content Disposition Parameters : name: It contains the name or value of HTML field which is referred by subpart of the form. form-data: This indicates that data is divided into various parts and each part is separated by a boundary. 3.Working of Content Disposition and Multipart : When Content-Disposition Header is used on multipart, it is applied to the complete set as whole and disposition type of sub-parts do not need to be consulted. However, while displaying the content under multipart, the disposition of each subpart should be respected. On using inline disposition, the multipart should be displayed normally and if any attachment sub-part is present, it requires user action. User action is required when attachment disposition is used as whole on multipart. Examples : The following examples have been taken from RFC 6266 and RFC 7578. content-disposition: form-data; name="field1" content-disposition: form-data; name="_charset_" Content-Disposition: attachment; filename="EURO rates"; filename*=utf-8''%e2%82%ac%20rates Content-Disposition: inline ; filename=example.html Supported Browsers : The browsers supported by HTTP headers | Content-Disposition are listed below Google Chrome Safari Mozilla Firefox Microsoft Edge Internet Explorer Opera HTTP-headers Picked Web Technologies Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Top 10 Front End Developer Skills That You Need in 2022 How to fetch data from an API in ReactJS ? Difference between var, let and const keywords in JavaScript Convert a string to an integer in JavaScript Differences between Functional Components and Class Components in React How to redirect to another page in ReactJS ? How to Insert Form Data into Database using PHP ? How to pass data from child component to its parent in ReactJS ? How to execute PHP code using command line ? REST API (Introduction)
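The attachment and filename directives described above can be emitted from any server-side code. Below is a minimal sketch in Python using only the standard library; the helper name and the ASCII/UTF-8 fallback logic are my own simplification (a production implementation would also handle quoting of special characters and emit both filename and filename* together for older browsers, per RFC 6266):

```python
# Sketch: building a Content-Disposition response header value that asks the
# browser to save the payload locally under a suggested name.
from urllib.parse import quote


def attachment_header(filename):
    """Return an "attachment" Content-Disposition value for the given name.

    For plain ASCII names, the conventional quoted filename parameter is
    enough. For non-ASCII names, RFC 5987/6266 use the filename* parameter
    with percent-encoded UTF-8 instead.
    """
    try:
        filename.encode("ascii")
        return 'attachment; filename="%s"' % filename
    except UnicodeEncodeError:
        return "attachment; filename*=utf-8''%s" % quote(filename)


print(attachment_header("report.html"))
# -> attachment; filename="report.html"
print(attachment_header("\u20ac rates"))
# -> attachment; filename*=utf-8''%E2%82%AC%20rates
```

A server would place this value in the response, e.g. `self.send_header("Content-Disposition", attachment_header(name))` in an `http.server` handler, to trigger the Save As prompt described above.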
Euler tour of Binary Tree - GeeksforGeeks
30 Mar, 2022 Given a binary tree, where each node can have at most two child nodes, the task is to find the Euler tour of the binary tree. The tree is represented by a pointer to its topmost node; if the tree is empty, the value of root is NULL.

Examples:

Input: (the binary tree shown in the figure)
Output: 1 5 4 2 4 3 4 5 1

Approach:

(1) First, start with root node 1, Euler[0]=1
(2) Go to left node, i.e., node 5, Euler[1]=5
(3) Go to left node, i.e., node 4, Euler[2]=4
(4) Go to left node, i.e., node 2, Euler[3]=2
(5) Go to left node, i.e., NULL, so go back to parent node 4, Euler[4]=4
(6) Go to right node, i.e., node 3, Euler[5]=3
(7) No child, go to parent, node 4, Euler[6]=4
(8) All children discovered, go to parent node 5, Euler[7]=5
(9) All children discovered, go to parent node 1, Euler[8]=1

The Euler tour of a tree has already been discussed in a setting where it is applied to an N-ary tree represented by an adjacency list. If a binary tree is represented in the classical structured way, by links and nodes, then we would first need to convert the tree into an adjacency-list representation before applying the method discussed in the original post, which increases the space complexity of the program. In this post, a generalized space-optimized version is discussed that can be applied directly to binary trees represented by structure nodes. This method:

(1) Works without the use of visited arrays.
(2) Requires exactly 2*N-1 vertices to store the Euler tour.
C++

// C++ program to find euler tour of binary tree
#include <bits/stdc++.h>
using namespace std;

/* A tree node structure */
struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};

/* Utility function to create a new Binary Tree node */
struct Node* newNode(int data)
{
    struct Node* temp = new struct Node;
    temp->data = data;
    temp->left = temp->right = NULL;
    return temp;
}

// Find Euler Tour
void eulerTree(struct Node* root, vector<int> &Euler)
{
    // store current node's data
    Euler.push_back(root->data);

    // If left node exists
    if (root->left) {
        // traverse left subtree
        eulerTree(root->left, Euler);

        // store parent node's data
        Euler.push_back(root->data);
    }

    // If right node exists
    if (root->right) {
        // traverse right subtree
        eulerTree(root->right, Euler);

        // store parent node's data
        Euler.push_back(root->data);
    }
}

// Function to print Euler Tour of tree
void printEulerTour(Node *root)
{
    // Stores Euler Tour
    vector<int> Euler;
    eulerTree(root, Euler);
    for (int i = 0; i < Euler.size(); i++)
        cout << Euler[i] << " ";
}

/* Driver function to test above functions */
int main()
{
    // Constructing tree given in the above figure
    Node* root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);
    root->left->right = newNode(5);
    root->right->left = newNode(6);
    root->right->right = newNode(7);
    root->right->left->right = newNode(8);

    // print Euler Tour
    printEulerTour(root);

    return 0;
}

Java

// Java program to find euler tour of binary tree
import java.util.*;

class GFG
{
    /* A tree node structure */
    static class Node
    {
        int data;
        Node left;
        Node right;
    };

    /* Utility function to create a new Binary Tree node */
    static Node newNode(int data)
    {
        Node temp = new Node();
        temp.data = data;
        temp.left = temp.right = null;
        return temp;
    }

    // Find Euler Tour
    static Vector<Integer> eulerTree(Node root, Vector<Integer> Euler)
    {
        // store current node's data
        Euler.add(root.data);

        // If left node exists
        if (root.left != null)
        {
            // traverse left subtree
            Euler = eulerTree(root.left, Euler);

            // store parent node's data
            Euler.add(root.data);
        }

        // If right node exists
        if (root.right != null)
        {
            // traverse right subtree
            Euler = eulerTree(root.right, Euler);

            // store parent node's data
            Euler.add(root.data);
        }
        return Euler;
    }

    // Function to print Euler Tour of tree
    static void printEulerTour(Node root)
    {
        // Stores Euler Tour
        Vector<Integer> Euler = new Vector<Integer>();
        Euler = eulerTree(root, Euler);
        for (int i = 0; i < Euler.size(); i++)
            System.out.print(Euler.get(i) + " ");
    }

    /* Driver function to test above functions */
    public static void main(String[] args)
    {
        // Constructing tree given in the above figure
        Node root = newNode(1);
        root.left = newNode(2);
        root.right = newNode(3);
        root.left.left = newNode(4);
        root.left.right = newNode(5);
        root.right.left = newNode(6);
        root.right.right = newNode(7);
        root.right.left.right = newNode(8);

        // print Euler Tour
        printEulerTour(root);
    }
}

// This code is contributed by Rajput-Ji

Python3

# Python3 program to find euler tour of binary tree

# A node of binary tree
class Node:
    def __init__(self, key):
        self.data = key
        self.left = None
        self.right = None

# Find Euler Tour
def eulerTree(root, euler):

    # store current node's data
    euler.append(root.data)

    # If left node exists
    if root.left:

        # traverse left subtree
        euler = eulerTree(root.left, euler)

        # store parent node's data
        euler.append(root.data)

    # If right node exists
    if root.right:

        # traverse right subtree
        euler = eulerTree(root.right, euler)

        # store parent node's data
        euler.append(root.data)

    return euler

# Function to print Euler Tour of tree
def printEulerTour(root):

    # Stores Euler Tour
    euler = []
    euler = eulerTree(root, euler)
    for i in range(len(euler)):
        print(euler[i], end=" ")

# Driver function to test above functions
# Constructing tree given in the above figure
root = Node(1)
root.left = Node(2)
root.right = Node(3)
root.left.left = Node(4)
root.left.right = Node(5)
root.right.left = Node(6)
root.right.right = Node(7)
root.right.left.right = Node(8)

# print Euler Tour
printEulerTour(root)

# This code is contributed by RAJAT KUMAR (GLA UNIVERSITY)

C#

// C# program to find euler tour of binary tree
using System;
using System.Collections.Generic;

class GFG
{
    /* A tree node structure */
    public class Node
    {
        public int data;
        public Node left;
        public Node right;
    };

    /* Utility function to create a new Binary Tree node */
    static Node newNode(int data)
    {
        Node temp = new Node();
        temp.data = data;
        temp.left = temp.right = null;
        return temp;
    }

    // Find Euler Tour
    static List<int> eulerTree(Node root, List<int> Euler)
    {
        // store current node's data
        Euler.Add(root.data);

        // If left node exists
        if (root.left != null)
        {
            // traverse left subtree
            Euler = eulerTree(root.left, Euler);

            // store parent node's data
            Euler.Add(root.data);
        }

        // If right node exists
        if (root.right != null)
        {
            // traverse right subtree
            Euler = eulerTree(root.right, Euler);

            // store parent node's data
            Euler.Add(root.data);
        }
        return Euler;
    }

    // Function to print Euler Tour of tree
    static void printEulerTour(Node root)
    {
        // Stores Euler Tour
        List<int> Euler = new List<int>();
        Euler = eulerTree(root, Euler);
        for (int i = 0; i < Euler.Count; i++)
            Console.Write(Euler[i] + " ");
    }

    /* Driver function to test above functions */
    public static void Main(String[] args)
    {
        // Constructing tree given in the above figure
        Node root = newNode(1);
        root.left = newNode(2);
        root.right = newNode(3);
        root.left.left = newNode(4);
        root.left.right = newNode(5);
        root.right.left = newNode(6);
        root.right.right = newNode(7);
        root.right.left.right = newNode(8);

        // print Euler Tour
        printEulerTour(root);
    }
}

// This code is contributed by 29AjayKumar

Javascript

<script>

// Javascript program to find euler tour of binary tree

/* A tree node structure */
class Node
{
    constructor()
    {
        this.data = 0;
        this.left = null;
        this.right = null;
    }
};

/* Utility function to create a new Binary Tree node */
function newNode(data)
{
    var temp = new Node();
    temp.data = data;
    temp.left = temp.right = null;
    return temp;
}

// Find Euler Tour
function eulerTree(root, Euler)
{
    // store current node's data
    Euler.push(root.data);

    // If left node exists
    if (root.left != null)
    {
        // traverse left subtree
        Euler = eulerTree(root.left, Euler);

        // store parent node's data
        Euler.push(root.data);
    }

    // If right node exists
    if (root.right != null)
    {
        // traverse right subtree
        Euler = eulerTree(root.right, Euler);

        // store parent node's data
        Euler.push(root.data);
    }
    return Euler;
}

// Function to print Euler Tour of tree
function printEulerTour(root)
{
    // Stores Euler Tour
    var Euler = [];
    Euler = eulerTree(root, Euler);
    for (var i = 0; i < Euler.length; i++)
        document.write(Euler[i] + " ");
}

// Driver code
// Constructing tree given in the above figure
var root = newNode(1);
root.left = newNode(2);
root.right = newNode(3);
root.left.left = newNode(4);
root.left.right = newNode(5);
root.right.left = newNode(6);
root.right.right = newNode(7);
root.right.left.right = newNode(8);

// print Euler Tour
printEulerTour(root);

// This code is contributed by itsok.
</script>

Output:
1 2 4 2 5 2 1 3 6 8 6 3 7 3 1

Time Complexity: O(N), where N is the number of nodes in the tree (the tour contains exactly 2*N-1 entries).
Auxiliary Space: O(N) for the 2*N-1 entries of the tour, plus the recursion stack.
LaTeX for Data Scientists, in Under 6 Minutes | by Andre Ye | Towards Data Science
As a data scientist, you work with data. Data is inherently mathematical, and it is imperative that you are able to communicate those ideas clearly. Even if you're not developing algorithms, the ability to express common data science techniques, perhaps the result of a polynomial regression or the performance of a Box-Cox transformation, is imperative. That being said, LaTeX, the most popular mathematical typesetting language, comes with lots and lots and lots of frills. This tutorial will only show the most important parts of LaTeX as they pertain to data science, and by the end, you will know enough LaTeX to incorporate it in your projects. Here's LaTeX, in 6 minutes or less. At the end are 5 practice LaTeX problems.

To display math inline with text, the expression must be placed between dollar signs.

In physics, the mass-energy equivalence is stated by the equation $E=mc^2$, discovered in 1905 by Albert Einstein.

Alternatively, the opening and closing \[ and \] can be used.

The mass-energy equivalence is described by the famous equation \[ E = mc^2 \]

This will automatically center the math and display it on a new line.

A subscript is indicated using _, and a superscript using ^. $a^2$ renders as a². Superscripts can also be combined with subscripts by being called consecutively:

\[ a_1^2 + a_2^2 = a_3^2 \]

For longer superscripts and subscripts, putting them within curly brackets keeps the code clean:

\[ x^{2 \alpha} - 1 = y_{ij} + y_{ij} \]

Note: LaTeX has many symbols that can be called using \name. In the example above, \alpha was used. Others include \infty, the infinity symbol.

Superscripts and subscripts can also be nested and combined in many ways, as long as the scope is clearly specified with brackets:

\[ (a^n)^{r+s} = a^{nr+ns} \]

Many mathematical operators require subscripts and/or superscripts. In these cases, the operator is treated as an object with normal superscript and subscript properties.
Take, for example, the sigma/summation operator, which is called by \sum.

\[ \sum_{i=1}^{\infty} \frac{1}{n^s} = \prod_p \frac{1}{1 - p^{-s}} \]

Even though superscripts and subscripts are called on the sigma operator as on a regular object, the result is automatically aligned.

Something else to note: when using the \[ and \] opening and closing indicators, everything will be put on one line, so spreading the code over several lines will not impact the end result.

Other operators that take a superscript and subscript include:

\int for integrals
\cup for union (upward-facing u)
\cap for intersection (downward-facing u)
\oint for curve integrals
\coprod for coproducts

Trigonometric functions, logarithms, and other mathematical functions can be un-italicized and formatted by putting a \ before them.

\[ \sin(a + b) = \sin(a)\cos(b) + \cos(a)\sin(b) \]

Some operators can take parameters via subscript, such as the limit operator.

\[ \lim_{h \rightarrow 0} \frac{f(x+h)-f(x)}{h} \]

Note how a limit declaration includes a subscript and the calling of a symbol, \rightarrow. \frac{a}{b} is the method of creating a fraction a/b.

For some of the commands in this section to work, you must first include the amsmath package at the beginning of the file.

\usepackage{amsmath}

Fractions and binomial coefficients are very straightforward. They are called with \name{parameter1}{parameter2}.

\[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \]

How fractions are displayed differs when they are used inline. Fractions can be used alongside text, for example $\frac{1}{2}$, and in display style like the one below: \[\frac{1}{2}\]

Fractions can be easily nested by using a fraction as a parameter.

\[ a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cdots}}} \]

The above command uses \cfrac{}{} from the amsmath package, which keeps the size of the fractions the same even when they are nested. This can be substituted with \frac{}{}, which will make nested fractions smaller.
LaTeX supports many types of brackets:

(x+y) renders (x+y)
[x+y] renders [x+y]
\{ x+y \} renders {x+y}
\langle x+y \rangle renders ⟨x+y⟩
|x+y| renders |x+y|
\| x+y \| renders ∥x+y∥

Sometimes, brackets and parentheses will be too small for an expression, for example, if it contains fractions. Dynamic parentheses and brackets operate via \left[object] and \right[object] commands to signify the start and end.

\[ F = G \left( \frac{m_1 m_2}{r^2} \right) \]

In this case, the [object] was a parenthesis, so the expression was enclosed in \left( and \right). [object] can also be substituted with brackets (\left[ and \right]).

The amsmath package includes many commands to typeset matrices. A matrix is defined by several values, separated within each row by & and with rows separated by \\. It is enclosed in the standard \begin{object} and \end{object}.

For a plain matrix with no side brackets:

\begin{matrix}1 & 2 & 3\\a & b & c\end{matrix}

Substituting {matrix} in \begin{matrix} and \end{matrix} with {pmatrix} puts parentheses around the matrix, {bmatrix} puts square brackets around it, and {Bmatrix} puts curly brackets around it.

Now you have the basic skills of LaTeX. Find the code that will create the given output.

#1 (Loss):
\[Loss = Bias^2 + Variance^2 + Noise\]

#2 (Chi-Squared):
\[Chi = \frac{(\hat{y}-y)^2}{\sqrt{y}} = \frac{\delta^2}{\sqrt{y}}\]

#3 (KNN):
\[\hat{f}(x)\leftarrow \frac{\sum f(x)}{k}\]
\[DE(x_i,x_j) = \sqrt{(x_i-x_j)^2 + (y_{x_i}-y_{x_j})^2}\]

#4 (Sigmoid):
\[\frac{1}{1+e^{-(wx+b)}}\]

#5 (R2):
\[R^2 = \frac{n \sum xy - \sum x \cdot \sum y}{\sqrt{(n \sum x^2 - (\sum x)^2) \cdot (n \sum y^2 - (\sum y)^2)}}\]

In this tutorial, you learned...

How to format math inline and by itself
How to use superscripts and subscripts, as well as with operators
How to call certain operators
How to use parentheses and brackets, including dynamic ones
How to create matrices with a variety of side bracket styles

Now you can format math beautifully!
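As a recap of the matrix section above, the variants can be combined in one minimal compilable document. This is a sketch, not from the original tutorial; the document class and layout choices are illustrative assumptions:

```latex
% Minimal document showing the matrix variants described above.
% The \documentclass choice and \quad spacing are assumptions.
\documentclass{article}
\usepackage{amsmath} % required for pmatrix/bmatrix/Bmatrix

\begin{document}
\[
\begin{matrix} 1 & 2 \\ a & b \end{matrix}   % no side brackets
\quad
\begin{pmatrix} 1 & 2 \\ a & b \end{pmatrix} % parentheses
\quad
\begin{bmatrix} 1 & 2 \\ a & b \end{bmatrix} % square brackets
\quad
\begin{Bmatrix} 1 & 2 \\ a & b \end{Bmatrix} % curly brackets
\]
\end{document}
```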
Most of learning LaTeX from here on is memorizing the code for certain objects (\object), since you now know the foundational structure of LaTeX.

If you enjoyed this article, you may also be interested in other articles in this series: SQL for Data Scientists, in 6 Minutes or Less
Count number of occurrences (or frequency) in a sorted array - GeeksforGeeks
02 Dec, 2021

Given a sorted array arr[] and a number x, write a function that counts the occurrences of x in arr[]. The expected time complexity is O(log n).

Examples:

Input: arr[] = {1, 1, 2, 2, 2, 2, 3}, x = 2
Output: 4 // x (or 2) occurs 4 times in arr[]

Input: arr[] = {1, 1, 2, 2, 2, 2, 3}, x = 3
Output: 1

Input: arr[] = {1, 1, 2, 2, 2, 2, 3}, x = 1
Output: 2

Input: arr[] = {1, 1, 2, 2, 2, 2, 3}, x = 4
Output: -1 // 4 doesn't occur in arr[]

Method 1 (Linear Search)

Linearly search for x, count the occurrences of x, and return the count.

C++

// C++ program to count occurrences of an element
#include <bits/stdc++.h>
using namespace std;

// Returns number of times x occurs in arr[0..n-1]
int countOccurrences(int arr[], int n, int x)
{
    int res = 0;
    for (int i = 0; i < n; i++)
        if (x == arr[i])
            res++;
    return res;
}

// Driver code
int main()
{
    int arr[] = { 1, 2, 2, 2, 2, 3, 4, 7, 8, 8 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 2;
    cout << countOccurrences(arr, n, x);
    return 0;
}

Java

// Java program to count occurrences of an element
class Main
{
    // Returns number of times x occurs in arr[0..n-1]
    static int countOccurrences(int arr[], int n, int x)
    {
        int res = 0;
        for (int i = 0; i < n; i++)
            if (x == arr[i])
                res++;
        return res;
    }

    public static void main(String args[])
    {
        int arr[] = { 1, 2, 2, 2, 2, 3, 4, 7, 8, 8 };
        int n = arr.length;
        int x = 2;
        System.out.println(countOccurrences(arr, n, x));
    }
}

Python3

# Python3 program to count occurrences of an element

# Returns number of times x occurs in arr[0..n-1]
def countOccurrences(arr, n, x):
    res = 0
    for i in range(n):
        if x == arr[i]:
            res += 1
    return res

# Driver code
arr = [1, 2, 2, 2, 2, 3, 4, 7, 8, 8]
n = len(arr)
x = 2
print(countOccurrences(arr, n, x))

C#

// C# program to count occurrences of an element
using System;

class GFG
{
    // Returns number of times x occurs in arr[0..n-1]
    static int countOccurrences(int[] arr, int n, int x)
    {
        int res = 0;
        for (int i = 0; i < n; i++)
            if (x == arr[i])
                res++;
        return res;
    }

    // Driver code
    public static void Main()
    {
        int[] arr = { 1, 2, 2, 2, 2, 3, 4, 7, 8, 8 };
        int n = arr.Length;
        int x = 2;
        Console.Write(countOccurrences(arr, n, x));
    }
}
// This code is contributed by Sam007

PHP

<?php
// PHP program to count occurrences of an element

// Returns number of times x occurs in arr[0..n-1]
function countOccurrences($arr, $n, $x)
{
    $res = 0;
    for ($i = 0; $i < $n; $i++)
        if ($x == $arr[$i])
            $res++;
    return $res;
}

// Driver code
$arr = array(1, 2, 2, 2, 2, 3, 4, 7, 8, 8);
$n = count($arr);
$x = 2;
echo countOccurrences($arr, $n, $x);

// This code is contributed by Sam007
?>

Javascript

<script>
// Javascript program to count occurrences of an element

// Returns number of times x occurs in arr[0..n-1]
function countOccurrences(arr, n, x)
{
    let res = 0;
    for (let i = 0; i < n; i++)
    {
        if (x == arr[i])
            res++;
    }
    return res;
}

let arr = [1, 2, 2, 2, 2, 3, 4, 7, 8, 8];
let n = arr.length;
let x = 2;
document.write(countOccurrences(arr, n, x));

// This code is contributed by avanitrachhadiya2155
</script>

Output:
4

Time Complexity: O(n)

Method 2 (Better, using Binary Search)

We first find one occurrence of x using binary search, then count the matching elements to the left and right of the found index.

C++

// C++ program to count occurrences of an element
#include <bits/stdc++.h>
using namespace std;

// A recursive binary search function. It returns the
// location of x in the given array arr[l..r] if present,
// otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
    if (r < l)
        return -1;

    int mid = l + (r - l) / 2;

    // If the element is present at the middle itself
    if (arr[mid] == x)
        return mid;

    // If element is smaller than mid, then it can only
    // be present in left subarray
    if (arr[mid] > x)
        return binarySearch(arr, l, mid - 1, x);

    // Else the element can only be present in right subarray
    return binarySearch(arr, mid + 1, r, x);
}

// Returns number of times x occurs in arr[0..n-1]
int countOccurrences(int arr[], int n, int x)
{
    int ind = binarySearch(arr, 0, n - 1, x);

    // If element is not present
    if (ind == -1)
        return 0;

    // Count elements on left side.
    int count = 1;
    int left = ind - 1;
    while (left >= 0 && arr[left] == x)
        count++, left--;

    // Count elements on right side.
    int right = ind + 1;
    while (right < n && arr[right] == x)
        count++, right++;

    return count;
}

// Driver code
int main()
{
    int arr[] = { 1, 2, 2, 2, 2, 3, 4, 7, 8, 8 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 2;
    cout << countOccurrences(arr, n, x);
    return 0;
}

Java

// Java program to count occurrences of an element
class GFG
{
    // A recursive binary search function. It returns the
    // location of x in the given array arr[l..r] if present,
    // otherwise -1
    static int binarySearch(int arr[], int l, int r, int x)
    {
        if (r < l)
            return -1;

        int mid = l + (r - l) / 2;

        // If the element is present at the middle itself
        if (arr[mid] == x)
            return mid;

        // If element is smaller than mid, then it can only
        // be present in left subarray
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);

        // Else the element can only be present in right subarray
        return binarySearch(arr, mid + 1, r, x);
    }

    // Returns number of times x occurs in arr[0..n-1]
    static int countOccurrences(int arr[], int n, int x)
    {
        int ind = binarySearch(arr, 0, n - 1, x);

        // If element is not present
        if (ind == -1)
            return 0;

        // Count elements on left side.
        int count = 1;
        int left = ind - 1;
        while (left >= 0 && arr[left] == x)
        {
            count++;
            left--;
        }

        // Count elements on right side.
        int right = ind + 1;
        while (right < n && arr[right] == x)
        {
            count++;
            right++;
        }

        return count;
    }

    // Driver code
    public static void main(String[] args)
    {
        int arr[] = { 1, 2, 2, 2, 2, 3, 4, 7, 8, 8 };
        int n = arr.length;
        int x = 2;
        System.out.print(countOccurrences(arr, n, x));
    }
}
// This code is contributed by ChitraNayal

Python 3

# Python 3 program to count occurrences of an element

# A recursive binary search function. It returns the
# location of x in the given array arr[l..r] if present,
# otherwise -1
def binarySearch(arr, l, r, x):
    if (r < l):
        return -1

    mid = int(l + (r - l) / 2)

    # If the element is present at the middle itself
    if arr[mid] == x:
        return mid

    # If element is smaller than mid, then it can only
    # be present in left subarray
    if arr[mid] > x:
        return binarySearch(arr, l, mid - 1, x)

    # Else the element can only be present in right subarray
    return binarySearch(arr, mid + 1, r, x)

# Returns number of times x occurs in arr[0..n-1]
def countOccurrences(arr, n, x):
    ind = binarySearch(arr, 0, n - 1, x)

    # If element is not present
    if ind == -1:
        return 0

    # Count elements on left side.
    count = 1
    left = ind - 1
    while (left >= 0 and arr[left] == x):
        count += 1
        left -= 1

    # Count elements on right side.
    right = ind + 1
    while (right < n and arr[right] == x):
        count += 1
        right += 1

    return count

# Driver code
arr = [1, 2, 2, 2, 2, 3, 4, 7, 8, 8]
n = len(arr)
x = 2
print(countOccurrences(arr, n, x))

# This code is contributed by ChitraNayal

C#

// C# program to count occurrences of an element
using System;

class GFG
{
    // A recursive binary search function. It returns the
    // location of x in the given array arr[l..r] if present,
    // otherwise -1
    static int binarySearch(int[] arr, int l, int r, int x)
    {
        if (r < l)
            return -1;

        int mid = l + (r - l) / 2;

        // If the element is present at the middle itself
        if (arr[mid] == x)
            return mid;

        // If element is smaller than mid, then it can only
        // be present in left subarray
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);

        // Else the element can only be present in right subarray
        return binarySearch(arr, mid + 1, r, x);
    }

    // Returns number of times x occurs in arr[0..n-1]
    static int countOccurrences(int[] arr, int n, int x)
    {
        int ind = binarySearch(arr, 0, n - 1, x);

        // If element is not present
        if (ind == -1)
            return 0;

        // Count elements on left side.
        int count = 1;
        int left = ind - 1;
        while (left >= 0 && arr[left] == x)
        {
            count++;
            left--;
        }

        // Count elements on right side.
        int right = ind + 1;
        while (right < n && arr[right] == x)
        {
            count++;
            right++;
        }

        return count;
    }

    // Driver code
    public static void Main()
    {
        int[] arr = { 1, 2, 2, 2, 2, 3, 4, 7, 8, 8 };
        int n = arr.Length;
        int x = 2;
        Console.Write(countOccurrences(arr, n, x));
    }
}
// This code is contributed by ChitraNayal

PHP

<?php
// PHP program to count occurrences of an element

// A recursive binary search function. It returns the
// location of x in the given array arr[l..r] if present,
// otherwise -1
function binarySearch(&$arr, $l, $r, $x)
{
    if ($r < $l)
        return -1;

    $mid = $l + ($r - $l) / 2;

    // If the element is present at the middle itself
    if ($arr[$mid] == $x)
        return $mid;

    // If element is smaller than mid, then it can only
    // be present in left subarray
    if ($arr[$mid] > $x)
        return binarySearch($arr, $l, $mid - 1, $x);

    // Else the element can only be present in right subarray
    return binarySearch($arr, $mid + 1, $r, $x);
}

// Returns number of times x occurs in arr[0..n-1]
function countOccurrences($arr, $n, $x)
{
    $ind = binarySearch($arr, 0, $n - 1, $x);

    // If element is not present
    if ($ind == -1)
        return 0;

    // Count elements on left side.
    $count = 1;
    $left = $ind - 1;
    while ($left >= 0 && $arr[$left] == $x)
    {
        $count++;
        $left--;
    }

    // Count elements on right side.
    $right = $ind + 1;
    while ($right < $n && $arr[$right] == $x)
    {
        $count++;
        $right++;
    }

    return $count;
}

// Driver code
$arr = array(1, 2, 2, 2, 2, 3, 4, 7, 8, 8);
$n = sizeof($arr);
$x = 2;
echo countOccurrences($arr, $n, $x);

// This code is contributed by ChitraNayal
?>

Javascript

<script>
// Javascript program to count occurrences of an element

// A recursive binary search function. It returns the
// location of x in the given array arr[l..r] if present,
// otherwise -1
function binarySearch(arr, l, r, x)
{
    if (r < l)
        return -1;

    var mid = l + parseInt((r - l) / 2);

    // If the element is present at the middle itself
    if (arr[mid] == x)
        return mid;

    // If element is smaller than mid, then it can only
    // be present in left subarray
    if (arr[mid] > x)
        return binarySearch(arr, l, mid - 1, x);

    // Else the element can only be present in right subarray
    return binarySearch(arr, mid + 1, r, x);
}

// Returns number of times x occurs in arr[0..n-1]
function countOccurrences(arr, n, x)
{
    var ind = binarySearch(arr, 0, n - 1, x);

    // If element is not present
    if (ind == -1)
        return 0;

    // Count elements on left side.
    var count = 1;
    var left = ind - 1;
    while (left >= 0 && arr[left] == x)
        count++, left--;

    // Count elements on right side.
    var right = ind + 1;
    while (right < n && arr[right] == x)
        count++, right++;

    return count;
}

// Driver code
var arr = [ 1, 2, 2, 2, 2, 3, 4, 7, 8, 8 ];
var n = arr.length;
var x = 2;
document.write(countOccurrences(arr, n, x));

// This code is contributed by noob2000.
</script>

Output:
4

Time Complexity: O(log n + count), where count is the number of occurrences.

Method 3 (Best, using Improved Binary Search)

1) Use binary search to get the index of the first occurrence of x in arr[]. Let the index of the first occurrence be i.
2) Use binary search to get the index of the last occurrence of x in arr[]. Let the index of the last occurrence be j.
3) Return (j - i + 1).

C++

// C++ program to count occurrences of an element
// in a sorted array.
#include <bits/stdc++.h>
using namespace std;

/* if x is present in arr[] then returns the count
   of occurrences of x, otherwise returns 0. */
int count(int arr[], int x, int n)
{
    /* get the index of first occurrence of x */
    int *low = lower_bound(arr, arr + n, x);

    // If element is not present, return 0
    if (low == (arr + n) || *low != x)
        return 0;

    /* Else get the index of last occurrence of x.
       Note that we are only looking in the subarray
       after first occurrence */
    int *high = upper_bound(low, arr + n, x);

    /* return count */
    return high - low;
}

/* driver program to test above functions */
int main()
{
    int arr[] = { 1, 2, 2, 3, 3, 3, 3 };
    int x = 3; // Element to be counted in arr[]
    int n = sizeof(arr) / sizeof(arr[0]);
    int c = count(arr, x, n);
    printf(" %d occurs %d times ", x, c);
    return 0;
}

C

#include <stdio.h>

/* if x is present in arr[] then returns the index of
   FIRST occurrence of x in arr[0..n-1], otherwise
   returns -1 */
int first(int arr[], int low, int high, int x, int n)
{
    if (high >= low)
    {
        int mid = (low + high) / 2; /* low + (high - low)/2; */
        if ((mid == 0 || x > arr[mid - 1]) && arr[mid] == x)
            return mid;
        else if (x > arr[mid])
            return first(arr, (mid + 1), high, x, n);
        else
            return first(arr, low, (mid - 1), x, n);
    }
    return -1;
}

/* if x is present in arr[] then returns the index of
   LAST occurrence of x in arr[0..n-1], otherwise
   returns -1 */
int last(int arr[], int low, int high, int x, int n)
{
    if (high >= low)
    {
        int mid = (low + high) / 2; /* low + (high - low)/2; */
        if ((mid == n - 1 || x < arr[mid + 1]) && arr[mid] == x)
            return mid;
        else if (x < arr[mid])
            return last(arr, low, (mid - 1), x, n);
        else
            return last(arr, (mid + 1), high, x, n);
    }
    return -1;
}

/* if x is present in arr[] then returns the count
   of occurrences of x, otherwise returns -1. */
int count(int arr[], int x, int n)
{
    int i; // index of first occurrence of x in arr[0..n-1]
    int j; // index of last occurrence of x in arr[0..n-1]

    /* get the index of first occurrence of x */
    i = first(arr, 0, n - 1, x, n);

    /* If x doesn't exist in arr[] then return -1 */
    if (i == -1)
        return i;

    /* Else get the index of last occurrence of x.
       Note that we are only looking in the subarray
       after first occurrence */
    j = last(arr, i, n - 1, x, n);

    /* return count */
    return j - i + 1;
}

/* driver program to test above functions */
int main()
{
    int arr[] = { 1, 2, 2, 3, 3, 3, 3 };
    int x = 3; // Element to be counted in arr[]
    int n = sizeof(arr) / sizeof(arr[0]);
    int c = count(arr, x, n);
    printf(" %d occurs %d times ", x, c);
    getchar();
    return 0;
}

Java

// Java program to count occurrences of an element
class Main
{
    /* if x is present in arr[] then returns the count
       of occurrences of x, otherwise returns -1. */
    static int count(int arr[], int x, int n)
    {
        // index of first occurrence of x in arr[0..n-1]
        int i;

        // index of last occurrence of x in arr[0..n-1]
        int j;

        /* get the index of first occurrence of x */
        i = first(arr, 0, n - 1, x, n);

        /* If x doesn't exist in arr[] then return -1 */
        if (i == -1)
            return i;

        /* Else get the index of last occurrence of x.
           Note that we are only looking in the subarray
           after first occurrence */
        j = last(arr, i, n - 1, x, n);

        /* return count */
        return j - i + 1;
    }

    /* if x is present in arr[] then returns the index of
       FIRST occurrence of x in arr[0..n-1], otherwise
       returns -1 */
    static int first(int arr[], int low, int high, int x, int n)
    {
        if (high >= low)
        {
            /* low + (high - low)/2; */
            int mid = (low + high) / 2;
            if ((mid == 0 || x > arr[mid - 1]) && arr[mid] == x)
                return mid;
            else if (x > arr[mid])
                return first(arr, (mid + 1), high, x, n);
            else
                return first(arr, low, (mid - 1), x, n);
        }
        return -1;
    }

    /* if x is present in arr[] then returns the index of
       LAST occurrence of x in arr[0..n-1], otherwise
       returns -1 */
    static int last(int arr[], int low, int high, int x, int n)
    {
        if (high >= low)
        {
            /* low + (high - low)/2; */
            int mid = (low + high) / 2;
            if ((mid == n - 1 || x < arr[mid + 1]) && arr[mid] == x)
                return mid;
            else if (x < arr[mid])
                return last(arr, low, (mid - 1), x, n);
            else
                return last(arr, (mid + 1), high, x, n);
        }
        return -1;
    }

    public static void main(String args[])
    {
        int arr[] = { 1, 2, 2, 3, 3, 3, 3 };
        // Element to be counted in arr[]
        int x = 3;
        int n = arr.length;
        int c = count(arr, x, n);
        System.out.println(x + " occurs " + c + " times");
    }
}

Python3

# Python3 program to count occurrences of an element

# if x is present in arr[] then returns the count of
# occurrences of x, otherwise returns -1.
def count(arr, x, n):

    # get the index of first occurrence of x
    i = first(arr, 0, n - 1, x, n)

    # If x doesn't exist in arr[] then return -1
    if i == -1:
        return i

    # Else get the index of last occurrence of x.
    # Note that we are only looking in the subarray
    # after first occurrence
    j = last(arr, i, n - 1, x, n)

    # return count
    return j - i + 1

# if x is present in arr[] then returns the index of
# FIRST occurrence of x in arr[0..n-1], otherwise
# returns -1
def first(arr, low, high, x, n):
    if high >= low:

        # low + (high - low)/2
        mid = (low + high) // 2
        if (mid == 0 or x > arr[mid - 1]) and arr[mid] == x:
            return mid
        elif x > arr[mid]:
            return first(arr, (mid + 1), high, x, n)
        else:
            return first(arr, low, (mid - 1), x, n)
    return -1

# if x is present in arr[] then returns the index of
# LAST occurrence of x in arr[0..n-1], otherwise
# returns -1
def last(arr, low, high, x, n):
    if high >= low:

        # low + (high - low)/2
        mid = (low + high) // 2
        if (mid == n - 1 or x < arr[mid + 1]) and arr[mid] == x:
            return mid
        elif x < arr[mid]:
            return last(arr, low, (mid - 1), x, n)
        else:
            return last(arr, (mid + 1), high, x, n)
    return -1

# driver program to test above functions
arr = [1, 2, 2, 3, 3, 3, 3]
x = 3  # Element to be counted in arr[]
n = len(arr)
c = count(arr, x, n)
print("%d occurs %d times " % (x, c))

C#

// C# program to count occurrences of an element
using System;

class GFG
{
    /* if x is present in arr[] then returns the count
       of occurrences of x, otherwise returns -1.
    */
    static int count(int[] arr, int x, int n)
    {
        // index of first occurrence of x in arr[0..n-1]
        int i;

        // index of last occurrence of x in arr[0..n-1]
        int j;

        /* get the index of first occurrence of x */
        i = first(arr, 0, n - 1, x, n);

        /* If x doesn't exist in arr[] then return -1 */
        if (i == -1)
            return i;

        /* Else get the index of last occurrence of x.
           Note that we are only looking in the subarray
           after first occurrence */
        j = last(arr, i, n - 1, x, n);

        /* return count */
        return j - i + 1;
    }

    /* if x is present in arr[] then returns the index of
       FIRST occurrence of x in arr[0..n-1], otherwise
       returns -1 */
    static int first(int[] arr, int low, int high, int x, int n)
    {
        if (high >= low)
        {
            /* low + (high - low)/2; */
            int mid = (low + high) / 2;
            if ((mid == 0 || x > arr[mid - 1]) && arr[mid] == x)
                return mid;
            else if (x > arr[mid])
                return first(arr, (mid + 1), high, x, n);
            else
                return first(arr, low, (mid - 1), x, n);
        }
        return -1;
    }

    /* if x is present in arr[] then returns the index of
       LAST occurrence of x in arr[0..n-1], otherwise
       returns -1 */
    static int last(int[] arr, int low, int high, int x, int n)
    {
        if (high >= low)
        {
            /* low + (high - low)/2; */
            int mid = (low + high) / 2;
            if ((mid == n - 1 || x < arr[mid + 1]) && arr[mid] == x)
                return mid;
            else if (x < arr[mid])
                return last(arr, low, (mid - 1), x, n);
            else
                return last(arr, (mid + 1), high, x, n);
        }
        return -1;
    }

    public static void Main()
    {
        int[] arr = { 1, 2, 2, 3, 3, 3, 3 };
        // Element to be counted in arr[]
        int x = 3;
        int n = arr.Length;
        int c = count(arr, x, n);
        Console.Write(x + " occurs " + c + " times");
    }
}
// This code is contributed by Sam007

Javascript

<script>
// Javascript program to count occurrences of an element

/* if x is present in arr[] then returns the count
   of occurrences of x, otherwise returns -1.
*/
function count(arr, x, n)
{
    // Index of first occurrence of x in arr[0..n-1]
    let i;

    // Index of last occurrence of x in arr[0..n-1]
    let j;

    // Get the index of first occurrence of x
    i = first(arr, 0, n - 1, x, n);

    // If x doesn't exist in arr[] then return -1
    if (i == -1)
        return i;

    // Else get the index of last occurrence of x.
    // Note that we are only looking in the
    // subarray after first occurrence
    j = last(arr, i, n - 1, x, n);

    // return count
    return j - i + 1;
}

// If x is present in arr[] then returns the
// index of FIRST occurrence of x in arr[0..n-1],
// otherwise returns -1
function first(arr, low, high, x, n)
{
    if (high >= low)
    {
        // low + (high - low)/2, floored so mid is an integer index
        let mid = Math.floor((low + high) / 2);

        if ((mid == 0 || x > arr[mid - 1]) && arr[mid] == x)
            return mid;
        else if (x > arr[mid])
            return first(arr, mid + 1, high, x, n);
        else
            return first(arr, low, mid - 1, x, n);
    }
    return -1;
}

// If x is present in arr[] then returns the
// index of LAST occurrence of x in arr[0..n-1],
// otherwise returns -1
function last(arr, low, high, x, n)
{
    if (high >= low)
    {
        // low + (high - low)/2
        let mid = Math.floor((low + high) / 2);

        if ((mid == n - 1 || x < arr[mid + 1]) && arr[mid] == x)
            return mid;
        else if (x < arr[mid])
            return last(arr, low, mid - 1, x, n);
        else
            return last(arr, mid + 1, high, x, n);
    }
    return -1;
}

// Driver code
let arr = [ 1, 2, 2, 3, 3, 3, 3 ];

// Element to be counted in arr[]
let x = 3;
let n = arr.length;
let c = count(arr, x, n);

document.write(x + " occurs " + c + " times");

// This code is contributed by target_2
</script>

Output:

3 occurs 4 times

Time Complexity: O(Log n)
Programming Paradigm: Divide & Conquer

Using the Collections.frequency() method of Java

/* package whatever // do not write package name here */
import java.util.ArrayList;
import java.util.Collections;

public class GFG {

    // Function to count occurrences
    static int countOccurrences(ArrayList<Integer> clist, int x)
    {
        // returning the frequency of element x in the
        // ArrayList using Collections.frequency() method
        return Collections.frequency(clist, x);
    }

    // Driver Code
    public static void main(String args[])
    {
        int arr[] = { 1, 2, 2, 2, 2, 3, 4, 7, 8, 8 };
        int x = 2;

        ArrayList<Integer> clist = new ArrayList<>();

        // adding elements of array to ArrayList
        for (int i : arr)
            clist.add(i);

        // displaying the frequency of x in ArrayList
        System.out.println(x + " occurs "
                           + countOccurrences(clist, x)
                           + " times");
    }
}

// C# program for the above approach
using System;

public class GFG {

    // Function to count occurrences
    static int countOccurrences(int[] arr, int x)
    {
        int count = 0;
        int n = arr.Length;
        for (int i = 0; i < n; i++)
            if (arr[i] == x)
                count++;
        return count;
    }

    // Driver Code
    public static void Main(string[] args)
    {
        int[] arr = { 1, 2, 2, 2, 2, 3, 4, 7, 8, 8 };
        int x = 2;

        // displaying the frequency of x
        Console.WriteLine(x + " occurs "
                          + countOccurrences(arr, x)
                          + " times");
    }
}

// This code is contributed by avijitmondal1998.

<script>
// Javascript program for above approach

// Function to count occurrences
function countOccurrences(arr, x)
{
    let count = 0;
    let n = arr.length;
    for (let i = 0; i < n; i++)
        if (arr[i] == x)
            count++;
    return count;
}

// Driver Code
let arr = [ 1, 2, 2, 2, 2, 3, 4, 7, 8, 8 ];
let x = 2;

// displaying the frequency of x
document.write(x + " occurs "
               + countOccurrences(arr, x)
               + " times");

// This code is contributed by splevel62.
</script>

Output:

2 occurs 4 times

Please write comments if you find the above codes/algorithms incorrect, or find other ways to solve the same problem.
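For reference, the first/last-occurrence idea behind Method 3 is also available directly from Python's standard-library bisect module: bisect_left returns the index of the first element >= x and bisect_right the index just past the last occurrence, so their difference is the count. A minimal sketch (the function name count_occurrences is our own, not from the code above):

```python
from bisect import bisect_left, bisect_right

def count_occurrences(arr, x):
    # bisect_left : index of the first element >= x
    # bisect_right: index of the first element >  x
    # Their difference is the number of copies of x in the sorted list.
    return bisect_right(arr, x) - bisect_left(arr, x)

arr = [1, 2, 2, 3, 3, 3, 3]
print(count_occurrences(arr, 3))  # -> 4
```

Both bisect calls are O(Log n), so this matches the complexity of the hand-written first/last binary searches.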
Create a stacked bar plot in Matplotlib - GeeksforGeeks
17 Dec, 2020

In this article, we will learn how to create a stacked bar plot in Matplotlib. Let's discuss some concepts:

Matplotlib is a comprehensive visualization library in Python for 2D plots of arrays. It is a multi-platform data visualization library built on NumPy arrays and designed to work with the broader SciPy stack.

A bar plot or bar graph is a chart that represents categorical data with rectangular bars whose lengths or heights are proportional to the values they represent. Bar plots can be drawn horizontally or vertically.

Stacked bar plots draw the different groups on top of one another. The height of each bar is the combined height of the groups it represents: every segment starts at the top of the segment below it rather than at zero.

Approach:

Import the library (Matplotlib)
Import / create data.
Plot the bars in a stacked manner.
Example 1: (Simple stacked bar plot)

Python3

# importing package
import matplotlib.pyplot as plt

# create data
x = ['A', 'B', 'C', 'D']
y1 = [10, 20, 10, 30]
y2 = [20, 25, 15, 25]

# plot bars in stack manner
plt.bar(x, y1, color='r')
plt.bar(x, y2, bottom=y1, color='b')
plt.show()

Output :

Example 2: (Stacked bar chart with more than 2 data)

Python3

# importing package
import matplotlib.pyplot as plt
import numpy as np

# create data
x = ['A', 'B', 'C', 'D']
y1 = np.array([10, 20, 10, 30])
y2 = np.array([20, 25, 15, 25])
y3 = np.array([12, 15, 19, 6])
y4 = np.array([10, 29, 13, 19])

# plot bars in stack manner
plt.bar(x, y1, color='r')
plt.bar(x, y2, bottom=y1, color='b')
plt.bar(x, y3, bottom=y1+y2, color='y')
plt.bar(x, y4, bottom=y1+y2+y3, color='g')
plt.xlabel("Teams")
plt.ylabel("Score")
plt.legend(["Round 1", "Round 2", "Round 3", "Round 4"])
plt.title("Scores by Teams in 4 Rounds")
plt.show()

Output :

Example 3: (Stacked Bar chart using dataframe plot)

Python3

# importing package
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# create data
df = pd.DataFrame([['A', 10, 20, 10, 26],
                   ['B', 20, 25, 15, 21],
                   ['C', 12, 15, 19, 6],
                   ['D', 10, 18, 11, 19]],
                  columns=['Team', 'Round 1', 'Round 2',
                           'Round 3', 'Round 4'])

# view data
print(df)

# plot data in stack manner of bar type
df.plot(x='Team', kind='bar', stacked=True,
        title='Stacked Bar Graph by dataframe')

Output :

  Team  Round 1  Round 2  Round 3  Round 4
0    A       10       20       10       26
1    B       20       25       15       21
2    C       12       15       19        6
3    D       10       18       11       19
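Under the hood, stacking is nothing more than passing a running sum of the previous series as the bottom argument. The helper below is a plain-Python sketch (the name stacked_bottoms is our own, and no plotting is done so the arithmetic is easy to check); it computes the bottom list each series would be drawn on, matching the manual bottom=y1 and bottom=y1+y2 sums used in Example 2, for any number of series.

```python
# Compute the "bottom" offsets that plt.bar(x, series, bottom=...) would
# need for each series in a stack. This generalizes the manual
# bottom=y1, bottom=y1+y2, bottom=y1+y2+y3 sums from Example 2.
def stacked_bottoms(series):
    n = len(series[0])        # number of bars (categories)
    running = [0] * n         # cumulative height below the next series
    bottoms = []
    for row in series:
        bottoms.append(list(running))
        running = [r + v for r, v in zip(running, row)]
    return bottoms

y1 = [10, 20, 10, 30]
y2 = [20, 25, 15, 25]
y3 = [12, 15, 19, 6]

print(stacked_bottoms([y1, y2, y3]))
# the second series sits on y1, the third on the element-wise sum y1 + y2
```

Each returned list could then be passed as the bottom argument of the corresponding plt.bar call.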
Prime String in C++
In this problem, we are given a string. Our task is to print YES / NO based on whether the sum of the ASCII values of the characters of the string is prime or not.

ASCII values are character encodings.

A prime number is a number that is divisible only by itself and 1.

Let's take an example to understand the problem,

Input: string = "Hello"
Output: No

To solve this problem, we will have to find the sum of the ASCII values of all characters of the string, store the sum in a variable, and then check whether the sum is a prime number or not.

The code to show the implementation of our solution

#include <iostream>
using namespace std;

bool CheckPrimeString(string str) {
   int len = str.length(), sum = 0;
   // add up the ASCII values of all characters
   for (int i = 0; i < len; i++)
      sum += (int)str[i];
   // primality test on the sum: handle small cases, then
   // check divisors of the form 6k - 1 and 6k + 1
   if (sum <= 1)
      return false;
   if (sum <= 3)
      return true;
   if (sum % 2 == 0 || sum % 3 == 0)
      return false;
   for (int i = 5; i * i <= sum; i = i + 6)
      if (sum % i == 0 || sum % (i + 2) == 0)
         return false;
   return true;
}

int main() {
   string str = "Hello!";
   cout << "The string '" << str << "' is ";
   if (CheckPrimeString(str))
      cout << "a prime String\n";
   else
      cout << "not a prime String\n";
}

The string 'Hello!' is not a prime String
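The same logic (sum the character codes, then run the 6k ± 1 trial-division test on the sum) can be cross-checked with a short Python sketch; the function names here are our own, not part of the original program:

```python
def is_prime(n):
    # trial division: handle small cases, then test divisors
    # of the form 6k - 1 and 6k + 1, mirroring the C++ loop
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

def is_prime_string(s):
    # sum of the ASCII (code point) values of all characters
    return is_prime(sum(ord(c) for c in s))

print(is_prime_string("Hello!"))  # sum is 533 = 13 * 41, so not prime
```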
Puppeteer - Capture Screenshot
We can capture screenshots while working on automation tests developed in Puppeteer using the screenshot method. A screenshot is generally captured if we encounter an application error, a failure in a test case, and so on.

The syntax to capture a screenshot in Puppeteer is as follows −

await page.screenshot({
   path: 'tutorialspoint.png'
})

Here, the path where the screenshot is to be saved is passed as a parameter to the method. With this, only the viewable part of the web page shall be captured. To capture a full-page screenshot, we have to pass another parameter called fullPage and set its value to true.

The syntax is as follows −

await page.screenshot({
   path: 'tutorialspoint.png', fullPage: true
})

Let us capture the screenshot of the below page −

To begin, follow Steps 1 to 2 from the Chapter of Basic Test on Puppeteer, which are as follows −

Step 1 − Create a new file within the directory where the node_modules folder is created (the location where Puppeteer and Puppeteer core have been installed).

The details on Puppeteer installation are discussed in the Chapter of Puppeteer Installation.

Right-click on the folder where the node_modules folder is created, then click on the New file button.

Step 2 − Enter a filename, say testcase1.js.

Step 3 − Add the below code within the testcase1.js file created.
//adding Puppeteer library
const pt = require('puppeteer');
pt.launch().then(async browser => {
   //browser new page
   const p = await browser.newPage();
   //set viewport of browser page
   await p.setViewport({ width: 1000, height: 500 })
   //launch URL
   await p.goto('https://www.tutorialspoint.com/index.htm')
   //capture screenshot
   await p.screenshot({
      path: 'tutorialspoint.png'
   });
   //browser close
   await browser.close()
})

Step 4 − Execute the code with the command given below −

node <filename>

So in our example, we shall run the following command −

node testcase1.js

After the command has been successfully executed, a new file called tutorialspoint.png gets created within the project directory. It contains the captured screenshot of the page launched in the browser.
Replace substring with another substring C++
Here we will see how to replace a substring with another substring. The replace function replaces the portion of the string that begins at character pos and spans len characters.

The structure of the replace function is like below:

string& replace (size_t pos, size_t len, const string& str, size_t subpos, size_t sublen);

The parameters are pos (the insertion point), str (the string object to copy characters from), and len (the number of characters to erase).

Step 1: Get the main string, the replacement string, and the match string
Step 2: While the match string is present in the main string:
Step 2.1: Replace it with the given string.
Step 3: Return the modified string

#include <iostream>
#include <string>
using namespace std;

int main () {
   string base = "this is a test string.";
   string str2 = "n example";
   string str3 = "sample phrase";
   string str4 = "useful.";

   string str = base;                                 // "this is a test string."
   str.replace(9,5,str2);                             // "this is an example string."
   str.replace(19,6,str3,7,6);                        // "this is an example phrase."
   str.replace(8,10,"just a");                        // "this is just a phrase."
   str.replace(8,6,"a shorty",7);                     // "this is a short phrase."
   str.replace(22,1,3,'!');                           // "this is a short phrase!!!"
   str.replace(str.begin(),str.end()-3,str3);         // "sample phrase!!!"
   str.replace(str.begin(),str.begin()+6,"replace");  // "replace phrase!!!"
   str.replace(str.begin()+8,str.begin()+14,"is coolness",7);     // "replace is cool!!!"
   str.replace(str.begin()+12,str.end()-4,4,'o');                 // "replace is cooool!!!"
   str.replace(str.begin()+11,str.end(),str4.begin(),str4.end()); // "replace is useful."
   cout << str << '\n';
   return 0;
}

replace is useful.
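The positional overload str.replace(pos, len, str2) used first in the program splices str2 in place of the len characters starting at index pos. Since Python strings are immutable, the equivalent sketch below rebuilds the string with slicing; the helper name replace_at is ours:

```python
def replace_at(s, pos, length, repl):
    # mimic C++ string::replace(pos, len, repl):
    # drop `length` characters starting at `pos`, splice in `repl`
    return s[:pos] + repl + s[pos + length:]

base = "this is a test string."
print(replace_at(base, 9, 5, "n example"))  # "this is an example string."
```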
Priority Queue in C++ Standard Template Library (STL)
Priority queue is an abstract data type for storing a collection of prioritized elements that supports insertion and deletion of an element based upon their priorities, that is, the element with the first priority can be removed at any time. The priority queue doesn't store elements in a linear fashion with respect to their locations like Stacks, Queues, Lists, etc. The priority queue ADT (abstract data type) stores elements based upon their priorities.

Priority Queue supports the following functions −

Size() − it is used to calculate the size of the priority queue as it returns the number of elements in it.

Empty() − it returns true if the Priority Queue is empty and false otherwise

Insert(element) − used to insert a new element into a Priority Queue

Min() − it returns the element with the smallest associated key value and displays an error message if the Priority Queue is empty.

removeMin() − it removes the element referenced by the min() function.

Given below is a table that shows the effect of operations on a priority queue

Start
Step 1-> Declare function to display the elements in a Priority Queue
   void display(priority_queue <int> Pq)
      declare and set priority_queue <int> que = Pq
      Loop While (!que.empty())
         call que.top()
         call que.pop()
      End
Step 2-> In main()
   Create object of priority_queue <int> Pq
   Call push() to insert element in a priority queue as Pq.push(1)
   Call display(Pq)
   Call to check the size of a priority queue Pq.size()
   Call to display the top element of a priority queue Pq.top()
   Call to remove the elements of a priority queue Pq.pop()
   Call display(Pq)
Stop

#include <iostream>
#include <queue>
using namespace std;

void display(priority_queue <int> Pq) {
   priority_queue <int> que = Pq;
   while (!que.empty()) {
      cout << '\t' << que.top();
      que.pop();
   }
}

int main () {
   priority_queue <int> Pq;
   Pq.push(1);
   Pq.push(3);
   Pq.push(5);
   Pq.push(7);
   Pq.push(9);
   cout << "The priority queue is : ";
   display(Pq);
   cout << "\nPriority queue size using size() : " << Pq.size();
   cout << "\nFirst element of priority queue using top(): " << Pq.top();
   cout << "\nremoving element using pop() : ";
   Pq.pop();
   display(Pq);
   return 0;
}

The priority queue is : 9 7 5 3 1
Priority queue size using size() : 5
First element of priority queue using top(): 9
removing element using pop() : 7 5 3 1
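For comparison, the same ADT is available in Python's standard library through heapq, which implements a min-heap; the C++ priority_queue defaults to max-first, which can be sketched by negating keys. The wrapper class below is our own illustration, not a standard API:

```python
import heapq

class MaxPriorityQueue:
    # max-priority queue on top of heapq's min-heap, by negating keys
    def __init__(self):
        self._heap = []

    def push(self, item):
        heapq.heappush(self._heap, -item)

    def top(self):
        return -self._heap[0]       # largest element, like Pq.top()

    def pop(self):
        return -heapq.heappop(self._heap)

    def __len__(self):
        return len(self._heap)

pq = MaxPriorityQueue()
for v in (1, 3, 5, 7, 9):
    pq.push(v)
print(len(pq), pq.top())            # size 5, first element 9
pq.pop()                            # removes 9
print([pq.pop() for _ in range(len(pq))])   # 7, 5, 3, 1
```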
Best practices for Professional Developer - Django Framework - GeeksforGeeks
08 Jun, 2020

Django is an open-source, Python-based framework for building web applications. To make our Django code more readable and efficient, we should follow a certain set of rules/practices. These should not be seen as the right way or the only way to work with Django, but instead as best practices for working with the Django framework.

In general, code should be clean, concise, readable, testable and DRY (Don't repeat yourself). Try to follow PEP 8 guidelines as closely as reasonable.

Avoid using the global environment for project dependencies, since it can produce dependency conflicts. Python can't use multiple package versions at the same time. This can be a problem if different projects require different incompatible versions of the same package. Always isolate your project requirements and dependencies in a virtual environment. The most common way to do it is by using virtualenv.

Requirements are the list of Python packages (dependencies) your project is using while it runs, including the version for each package. It is important to update your requirements.txt file for collaborating properly with other developers. This file, when included in your code repository, enables you to update all the packages installed in your virtual environment by executing a single line in the terminal.

In order to generate a new requirements.txt file or update the existing one, use this command. Make sure that you are in the correct directory.

(virtualenv) $ pip freeze > requirements.txt

It is a good practice to update the requirements.txt file before pushing code to the repository and to install from the requirements.txt file after pulling code from the repository.

You should write fat models, skinny views, which means you should try to write most of your logic in the model itself.

For example: Suppose we are implementing a functionality of sending an email to a user; it is better to extend the model with an email function instead of writing this logic in your controller/view.
This makes your code easier to unit test because you can test the email logic in one place, rather than repeatedly in every controller/view where it takes place.

Generally models represent a single object or entity, so model names should be a singular noun.

# Bad practice
class Users(models.Model):
    pass

# Good practice
class User(models.Model):
    # use 'User' instead of 'Users'
    pass

Related name specifies the reverse relation from the parent model back to the child model. It is reasonable to indicate related_name in the plural, as it returns a queryset.

# parent model
class Owner(models.Model):
    pass

# child model
class Item(models.Model):
    # use "items" instead of "item"
    owner = models.ForeignKey(Owner,
                              on_delete=models.CASCADE,
                              related_name='items')

Location: Templates can be placed at two places, either in the app directory itself or at the root of the project.
Iterate over a list in Python Convert integer to string in Python Convert string to integer in Python Python infinity How to set input type date in dd-mm-yyyy format using HTML ? Matplotlib.pyplot.title() in Python
08 Jun, 2020

Django is an open-source, Python-based framework for building web applications. To make our Django code more readable and efficient, we should follow a certain set of rules/practices. These should not be seen as the right way or the only way to work with Django, but instead as best practices for working with the Django framework.

In general, code should be clean, concise, readable, testable and DRY (Don't repeat yourself). Try to follow the PEP 8 guidelines as closely as reasonable.

Avoid using the global environment for project dependencies, since it can produce dependency conflicts. Python can't use multiple package versions at the same time. This can be a problem if different projects require different incompatible versions of the same package. Always isolate your project requirements and dependencies in a virtual environment. The most common way to do this is by using virtualenv.

Requirements are the list of Python packages (dependencies) your project uses while it runs, including the version for each package. It is important to update your requirements.txt file to collaborate properly with other developers. This file, when included in your code repository, enables you to update all the packages installed in your virtual environment by executing a single line in the terminal.

In order to generate a new requirements.txt file or update the existing one, use this command. Make sure that you are in the correct directory.

(virtualenv) $ pip freeze > requirements.txt

It is a good practice to update the requirements.txt file before pushing code to the repository and to install from requirements.txt after pulling code from the repository.

You should write fat models, skinny views, which means you should try to write most of your logic in the model itself.

For example: suppose we are implementing the functionality of sending an email to a user. It is better to extend the model with an email function instead of writing this logic in your controller/view. This makes your code easier to unit test, because you can test the email logic in one place rather than repeatedly in every controller/view where this takes place.

Generally models represent a single object or entity, so model names should be a singular noun.

# Bad practice
class Users(models.Model):
    pass

# Good practice
class User(models.Model):  # use 'User' instead of 'Users'
    pass

Related name specifies the reverse relation from the parent model back to the child model. It is reasonable to indicate related_name in plural, as it returns a queryset.

# parent model
class Owner(models.Model):
    pass

# child model
class Item(models.Model):
    # use "items" instead of "item"
    owner = models.ForeignKey(Owner, related_name='items')

Location: Templates can be placed in two places, either in the app directory itself or at the root of the project. It is recommended to put templates in the root directory, but if you want to make your app reusable (use it at multiple places) then you should place them in the app directory.

# Good practice
root_folder/
    my_app1/
    my_app2/
    my_app3/
    templates/

# If you want to make your app reusable
root_folder/
    my_app1/
        templates/
    my_app2/
        templates/
    my_app3/
        templates/

Naming: Correctly naming your templates helps any new developer immediately pick up your Django code. A good template name looks like this:

[application]/[model]_[function].html

For example, creating a template to list all of the contacts (Contact model) in my address book (address_book application), I would use the following template:

address_book/contact_list.html

Similarly, a detail view of a contact would use:

address_book/contact_detail.html
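The "fat models, skinny views" idea above can be sketched without Django at all. The names below (User, email_user, welcome_view, and the outbox list) are illustrative assumptions, not Django APIs; a real Django model would subclass models.Model and use django.core.mail.

```python
# Framework-free sketch of "fat models, skinny views".
# User, email_user, welcome_view and outbox are illustrative names.

class User:
    def __init__(self, name, email):
        self.name = name
        self.email = email
        self.outbox = []  # stands in for a real email backend

    def email_user(self, subject, body):
        # The model owns the email logic, so it is testable in one place.
        message = {"to": self.email, "subject": subject, "body": body}
        self.outbox.append(message)
        return message


def welcome_view(user):
    # "Skinny view": it just delegates to the model method.
    return user.email_user("Welcome!", "Hi %s, thanks for joining." % user.name)


if __name__ == "__main__":
    u = User("Alice", "alice@example.com")
    sent = welcome_view(u)
    print(sent["to"])     # alice@example.com
    print(len(u.outbox))  # 1
```

Because the logic lives on the model, any view (or management command, or Celery task) that needs to email a user calls the same tested method instead of duplicating it.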
Learning Reinforcement Learning: REINFORCE with PyTorch! | by Christian Hubbs | Towards Data Science
The REINFORCE algorithm is one of the first policy gradient algorithms in reinforcement learning and a great jumping off point to get into more advanced approaches. Policy gradients are different than Q-value algorithms because PG's try to learn a parameterized policy instead of estimating Q-values of state-action pairs. So the policy output is represented as a probability distribution over actions rather than a set of Q-value estimates. If any of this is confusing or unclear, don't worry, we'll break it down step-by-step!

In this post, we'll look at the REINFORCE algorithm and test it using OpenAI's CartPole environment with PyTorch. We assume a basic understanding of reinforcement learning, so if you don't know what states, actions, environments and the like mean, check out some of the links to other articles here or the simple primer on the topic here.

We can distinguish policy gradient algorithms from Q-value approaches (e.g. Deep Q-Networks) in that policy gradients make action selection without reference to the action values. Some policy gradients learn an estimate of values to help find a better policy, but this value estimate isn't required to select an action. The output of a DQN is going to be a vector of value estimates while the output of the policy gradient is going to be a probability distribution over actions.

For example, consider we have two networks, a policy network and a DQN network, that have learned the CartPole task with two actions (left and right). If we pass a state s to each, we might get the following from the DQN:

And this from the policy gradient:

The DQN gives us estimates of the discounted future rewards of the state, and we make our selection based on these values (typically taking the maximum value according to some ε-greedy rule). The policy gradient, on the other hand, gives us probabilities of our actions. The way we make our selection, in this case, is by choosing action 0 28% of the time and action 1 72% of the time.
These probabilities will change as the network gains more experience.

To get these probabilities, we use a simple function called softmax at the output layer. The function is given below:

This squashes all of our values to be between 0 and 1, and ensures that all of the outputs sum to 1 (Σ σ(x) = 1). Because we're using the exp(x) function to scale our values, the largest ones tend to dominate and get more of the probability assigned to them.

Now for the algorithm itself.

If you've followed along with some previous posts, this shouldn't look too daunting. However, we'll walk through it anyway for clarity.

The requirements are rather straightforward: we need a differentiable policy, for which we can use a neural network, and a few hyperparameters like our step size (α), discount rate (γ), batch size (K) and max episodes (N). From there, we initialize our network and run our episodes. After each episode, we discount our rewards, which is the sum of all of the discounted rewards from that reward onward. We'll update our policy after each batch (e.g. K episodes) is complete. This tends to help provide stability for training.

The policy loss (L(θ)) looks a bit complicated at first but isn't that difficult to understand if you look at it closely. Recall that the output of the policy network is a probability distribution. What we're doing with the π(a | s, θ) is just getting the probability estimate of our network at each state. We then multiply that by the sum of the discounted rewards (G) to get the network's expected value.

For example, say we're at a state s where the network is split between two actions, so the probability of choosing a=0 is 50% and a=1 is also 50%. The network randomly selects a=0, we get a reward of 1 and the episode ends (let's assume the discount factor is 1). When we go back and update our network, this state-action pair gives us (1)(0.5)=0.5, which translates into the network's expected value of that action taken at that state.
From here, we take the log of the probability and sum over all of the steps in our batch of episodes. Finally, we average this out and take the gradient of this value to make our updates.

Just for a quick refresher, the goal of Cart-Pole is to keep the pole in the air for as long as possible. Your agent needs to determine whether to push the cart to the left or the right to keep it balanced while not going over the edges on the left and right. If you don't have OpenAI's library installed yet, just run pip install gym and you should be set. Also, grab the latest off of pytorch.org if you haven't already.

Go ahead and import some packages:

import numpy as np
import matplotlib.pyplot as plt
import gym
import sys
import torch
from torch import nn
from torch import optim

print("PyTorch:\t{}".format(torch.__version__))
PyTorch:	1.1.0

With our packages imported, we're going to set up a simple class called policy_estimator that will contain our neural network. It's going to have one hidden layer with a ReLU activation function and softmax output. We'll also give it a method called predict that enables us to do a forward pass through the network.

class policy_estimator():
    def __init__(self, env):
        self.n_inputs = env.observation_space.shape[0]
        self.n_outputs = env.action_space.n

        # Define network
        self.network = nn.Sequential(
            nn.Linear(self.n_inputs, 16),
            nn.ReLU(),
            nn.Linear(16, self.n_outputs),
            nn.Softmax(dim=-1))

    def predict(self, state):
        action_probs = self.network(torch.FloatTensor(state))
        return action_probs

Note that calling the predict method requires us to convert our state into a FloatTensor for PyTorch to work with it. Actually, the predict method itself is somewhat superfluous in PyTorch as a tensor could be passed directly to our network to get the results, but I include it here just for clarity.

The other thing we need is our discounting function to discount future rewards based on the discount factor γ we use.
def discount_rewards(rewards, gamma=0.99):
    r = np.array([gamma**i * rewards[i]
                  for i in range(len(rewards))])
    # Reverse the array direction for cumsum and then
    # revert back to the original order
    r = r[::-1].cumsum()[::-1]
    return r - r.mean()

One thing I've done here that's a bit non-standard is subtract the mean of the rewards at the end. This helps to stabilize the learning, particularly in cases such as this one where all the rewards are positive, because the gradients change more with negative or below-average rewards than they would if the rewards weren't normalized like this.

Now for the REINFORCE algorithm itself.

def reinforce(env, policy_estimator, num_episodes=2000,
              batch_size=10, gamma=0.99):
    # Set up lists to hold results
    total_rewards = []
    batch_rewards = []
    batch_actions = []
    batch_states = []
    batch_counter = 1

    # Define optimizer
    optimizer = optim.Adam(policy_estimator.network.parameters(),
                           lr=0.01)

    action_space = np.arange(env.action_space.n)
    ep = 0
    while ep < num_episodes:
        s_0 = env.reset()
        states = []
        rewards = []
        actions = []
        done = False
        while done == False:
            # Get actions and convert to numpy array
            action_probs = policy_estimator.predict(
                s_0).detach().numpy()
            action = np.random.choice(action_space,
                                      p=action_probs)
            s_1, r, done, _ = env.step(action)

            states.append(s_0)
            rewards.append(r)
            actions.append(action)
            s_0 = s_1

            # If done, batch data
            if done:
                batch_rewards.extend(discount_rewards(
                    rewards, gamma))
                batch_states.extend(states)
                batch_actions.extend(actions)
                batch_counter += 1
                total_rewards.append(sum(rewards))

                # If batch is complete, update network
                if batch_counter == batch_size:
                    optimizer.zero_grad()
                    state_tensor = torch.FloatTensor(batch_states)
                    reward_tensor = torch.FloatTensor(
                        batch_rewards)
                    # Actions are used as indices, must be
                    # LongTensor
                    action_tensor = torch.LongTensor(
                        batch_actions)

                    # Calculate loss; gather needs the index
                    # tensor to match logprob's dimensions,
                    # hence the unsqueeze(1)
                    logprob = torch.log(
                        policy_estimator.predict(state_tensor))
                    selected_logprobs = reward_tensor * \
                        torch.gather(logprob, 1,
                            action_tensor.unsqueeze(1)).squeeze()
                    loss = -selected_logprobs.mean()

                    # Calculate gradients
                    loss.backward()
                    # Apply gradients
                    optimizer.step()

                    batch_rewards = []
                    batch_actions = []
                    batch_states = []
                    batch_counter = 1

        avg_rewards = np.mean(total_rewards[-100:])
        # Print running average
        print("\rEp: {} Average of last 100: {:.2f}".format(
            ep + 1, avg_rewards), end="")
        ep += 1

    return total_rewards

For the algorithm, we pass our policy_estimator and env objects, set a few hyperparameters and we're off.

A few points on the implementation: always be certain to ensure your outputs from PyTorch are converted back to NumPy arrays before you pass the values to env.step() or functions like np.random.choice() to avoid errors. Also, we use torch.gather() to separate the actual actions taken from the action probabilities to ensure we're calculating the loss function properly as discussed above. Finally, you can change the ending so that the algorithm stops running once the environment is "solved" instead of running for a preset number of steps (CartPole is solved after an average score of 195 or more for 100 consecutive episodes).

To run this, we just need a few lines of code to put it all together.

env = gym.make('CartPole-v0')
policy_est = policy_estimator(env)
rewards = reinforce(env, policy_est)

Plotting the results, we can see that it works quite well!

Even simple policy gradient algorithms can work quite nicely, and they have less baggage than DQN's, which often employ additional features like memory replay to learn effectively. See what you can do with this algorithm on more challenging environments!
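As a quick sanity check of the two pieces of math above (the softmax output layer and the reverse-cumsum discounting), here is a small standalone NumPy sketch. It is an independent reimplementation for illustration, not the article's exact code:

```python
import numpy as np

def softmax(x):
    # Subtracting the max is a standard numerical-stability trick;
    # it does not change the resulting probabilities.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def discounted_returns(rewards, gamma=0.99):
    # Same reverse-cumsum idea as the article's discount_rewards,
    # including the mean subtraction used to stabilize learning.
    r = np.array([gamma**i * rewards[i] for i in range(len(rewards))])
    r = r[::-1].cumsum()[::-1]
    return r - r.mean()

probs = softmax(np.array([1.0, 2.0]))
print(probs.sum())  # 1.0 (up to floating point error)

rets = discounted_returns([1.0, 1.0, 1.0], gamma=1.0)
# Raw returns-to-go are [3, 2, 1]; their mean (2) is subtracted.
print(rets)  # [ 1.  0. -1.]
```

With gamma=1 and three unit rewards, the returns-to-go are 3, 2, 1 and the mean-subtracted values straddle zero, which is exactly the normalization effect described above.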
Python - Remove duplicate values from a Pandas DataFrame
To remove duplicate values from a Pandas DataFrame, use the drop_duplicates() method. At first, create a DataFrame with 3 columns −

dataFrame = pd.DataFrame({'Car': ['BMW', 'Mercedes', 'Lamborghini', 'BMW', 'Mercedes', 'Porsche'],
   'Place': ['Delhi', 'Hyderabad', 'Chandigarh', 'Delhi', 'Hyderabad', 'Mumbai'],
   'UnitsSold': [95, 70, 80, 95, 70, 90]})

Remove duplicate values −

dataFrame = dataFrame.drop_duplicates()

Following is the complete code −

import pandas as pd

# Create DataFrame
dataFrame = pd.DataFrame({'Car': ['BMW', 'Mercedes', 'Lamborghini', 'BMW', 'Mercedes', 'Porsche'],
   'Place': ['Delhi', 'Hyderabad', 'Chandigarh', 'Delhi', 'Hyderabad', 'Mumbai'],
   'UnitsSold': [95, 70, 80, 95, 70, 90]})

print("Dataframe...\n", dataFrame)

# counting frequency of column Car
count = dataFrame['Car'].value_counts()
print("\nCount in column Car")
print(count)

# removing duplicates
dataFrame = dataFrame.drop_duplicates()
print("\nUpdated DataFrame after removing duplicates...\n", dataFrame)

# counting frequency of column Car after removing duplicates
count = dataFrame['Car'].value_counts()
print("\nCount in column Car")
print(count)

This will produce the following output −

Dataframe...
           Car       Place  UnitsSold
0          BMW       Delhi         95
1     Mercedes   Hyderabad         70
2  Lamborghini  Chandigarh         80
3          BMW       Delhi         95
4     Mercedes   Hyderabad         70
5      Porsche      Mumbai         90

Count in column Car
BMW            2
Mercedes       2
Porsche        1
Lamborghini    1
Name: Car, dtype: int64

Updated DataFrame after removing duplicates...
           Car       Place  UnitsSold
0          BMW       Delhi         95
1     Mercedes   Hyderabad         70
2  Lamborghini  Chandigarh         80
5      Porsche      Mumbai         90

Count in column Car
BMW            1
Porsche        1
Lamborghini    1
Mercedes       1
Name: Car, dtype: int64
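The example above calls drop_duplicates() with its defaults; the method also accepts the subset and keep parameters (both part of the pandas API) to deduplicate on chosen columns only and to control which duplicate row survives. The sketch below uses made-up data to illustrate them:

```python
import pandas as pd

# Illustrative data: two 'BMW' rows that differ in 'Place'.
df = pd.DataFrame({
    'Car': ['BMW', 'BMW', 'Mercedes'],
    'Place': ['Delhi', 'Pune', 'Hyderabad'],
    'UnitsSold': [95, 95, 70],
})

# subset=['Car']: rows 0 and 1 count as duplicates even though
# 'Place' differs; by default the first occurrence is kept.
first_only = df.drop_duplicates(subset=['Car'])
print(first_only['Place'].tolist())  # ['Delhi', 'Hyderabad']

# keep='last' retains the final occurrence instead of the first.
last_only = df.drop_duplicates(subset=['Car'], keep='last')
print(last_only['Place'].tolist())  # ['Pune', 'Hyderabad']
```

keep=False is also available, which drops every row that has a duplicate rather than keeping one representative.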
iwconfig command in Linux with Examples - GeeksforGeeks
22 May, 2019

iwconfig command in Linux is like the ifconfig command, in the sense that it works with kernel-resident network interfaces, but it is dedicated to wireless networking interfaces only. It is used to set the parameters of the network interface that are particular to the wireless operation, like SSID, frequency etc. iwconfig may also be used to display the parameters and the wireless statistics, which are extracted from /proc/net/wireless.

Syntax:

iwconfig [INTERFACE] [OPTIONS]

Example: Below command is used to display all the wireless interfaces.

Options:

essid : Set the ESSID (or Network Name; in some products it may also be called Domain ID).

iwconfig [Interface] essid "Network name"

--help : Displays help regarding the iwconfig command, such as the different modes in the options.

iwconfig --help

nwid : This option sets the network ID; you may disable or enable the Network ID. Example:

iwconfig [Interface] nwid on/off

nick : This option sets the nickname or the station name. Example:

iwconfig [Interface] nickname "My Node"

mode : Set the operating mode of the device, which depends on the network topology.
The modes can be Ad-Hoc, Managed, Master, Repeater, Secondary, and Monitor. Example:

iwconfig [Interface] mode Managed

freq/channel : This option sets the operating frequency or channel in the device. Example:

iwconfig [Interface] freq 2.46000000
iwconfig [Interface] channel 3

ap : This option forces the card to register to the Access Point given by the address, if it is possible. Example:

iwconfig [Interface] ap 00:60:1D:01:23:45

rate : This option sets the bitrate in b/s in supporting cards. Example:

iwconfig [Interface] rate 11M

txpower : This option sets the transmit power in dBm for cards supporting multiple transmit powers. Example:

iwconfig [Interface] txpower 15

sens : This option sets the sensitivity threshold. This defines how sensitive the card is to poor operating conditions (low signal, interference). Example:

iwconfig [Interface] sens -80

retry : This option sets the maximum number of times the MAC can retry transmission. Example:

iwconfig [Interface] retry 16

rts : This option adds a handshake before each packet transmission to make sure that the channel is clear. Example:

iwconfig [Interface] rts 250

frag : This option sets the maximum fragment size, which is always lower than the maximum packet size. Example:

iwconfig [Interface] frag 512

key/enc : This option is used to manipulate encryption or scrambling keys and the security mode. Example:

iwconfig [Interface] key 0123-4567-89

power : This option is used to manipulate power management scheme parameters and mode. Example:

iwconfig [Interface] power off

modu : This option is used to force the card to use a specific set of modulations. Example:

iwconfig [Interface] modu auto

commit : This option forces the card to apply all pending changes. Example:

iwconfig [Interface] commit
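Since the statistics iwconfig reports come from /proc/net/wireless, they can also be read directly. Below is a hedged Python sketch of parsing that file; the sample string is made up for illustration, and the exact column layout can vary between kernel versions, so treat the field positions as an assumption rather than a guarantee.

```python
# Hedged sketch: parse per-interface stats like those in /proc/net/wireless.
# On a real system you would read open("/proc/net/wireless").read()
# instead of the illustrative SAMPLE string below.
SAMPLE = """\
Inter-| sta-|   Quality        |   Discarded packets               | Missed | WE
 face | tus | link level noise |  nwid  crypt   frag  retry   misc | beacon | 22
 wlan0: 0000   54.  -56.  -256        0      0      0      0      0        0
"""

def parse_wireless(text):
    stats = {}
    # The first two lines are column headers; the rest are interfaces.
    for line in text.splitlines()[2:]:
        if ':' not in line:
            continue
        name, rest = line.split(':', 1)
        fields = rest.split()
        stats[name.strip()] = {
            'status': fields[0],
            # Quality fields may carry a trailing '.' in this file format.
            'link_quality': float(fields[1].rstrip('.')),
            'signal_level': float(fields[2].rstrip('.')),
        }
    return stats

print(parse_wireless(SAMPLE))
```

This is the same data iwconfig shows in its "Link Quality" and "Signal level" output, just in raw form.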
"modu : This option is used to force the card to use a specific set of modulations.Example:iwconfig [Interface] modu auto" }, { "code": null, "e": 27404, "s": 27395, "text": "Example:" }, { "code": null, "e": 27435, "s": 27404, "text": "iwconfig [Interface] modu auto" }, { "code": null, "e": 27537, "s": 27435, "text": "commit : This option forces the card to apply all pending changes.Example:iwconfig [Interface] commit" }, { "code": null, "e": 27546, "s": 27537, "text": "Example:" }, { "code": null, "e": 27574, "s": 27546, "text": "iwconfig [Interface] commit" }, { "code": null, "e": 27588, "s": 27574, "text": "linux-command" }, { "code": null, "e": 27614, "s": 27588, "text": "Linux-networking-commands" }, { "code": null, "e": 27625, "s": 27614, "text": "Linux-Unix" }, { "code": null, "e": 27723, "s": 27625, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27732, "s": 27723, "text": "Comments" }, { "code": null, "e": 27745, "s": 27732, "text": "Old Comments" }, { "code": null, "e": 27780, "s": 27745, "text": "scp command in Linux with Examples" }, { "code": null, "e": 27817, "s": 27780, "text": "nohup Command in Linux with Examples" }, { "code": null, "e": 27851, "s": 27817, "text": "mv command in Linux with examples" }, { "code": null, "e": 27877, "s": 27851, "text": "Thread functions in C/C++" }, { "code": null, "e": 27903, "s": 27877, "text": "Docker - COPY Instruction" }, { "code": null, "e": 27940, "s": 27903, "text": "chown command in Linux with Examples" }, { "code": null, "e": 27980, "s": 27940, "text": "nslookup command in Linux with Examples" }, { "code": null, "e": 28009, "s": 27980, "text": "SED command in Linux | Set 2" }, { "code": null, "e": 28051, "s": 28009, "text": "Named Pipe or FIFO with example C program" } ]
RSA Cipher Decryption
This chapter is a continuation of the previous chapter, where we followed a step-wise implementation of encryption using the RSA algorithm; here we discuss decryption in detail.

The function used to decrypt cipher text is as follows −

def decrypt(ciphertext, priv_key):
   cipher = PKCS1_OAEP.new(priv_key)
   return cipher.decrypt(ciphertext)

For public key cryptography or asymmetric key cryptography, it is important to maintain two important features, namely Authentication and Authorization.

Authorization is the process of confirming that the sender is the only one who could have transmitted the message. The following code explains this −

def sign(message, priv_key, hashAlg="SHA-256"):
   global hash
   hash = hashAlg
   signer = PKCS1_v1_5.new(priv_key)
   if (hash == "SHA-512"):
      digest = SHA512.new()
   elif (hash == "SHA-384"):
      digest = SHA384.new()
   elif (hash == "SHA-256"):
      digest = SHA256.new()
   elif (hash == "SHA-1"):
      digest = SHA.new()
   else:
      digest = MD5.new()
   digest.update(message)
   return signer.sign(digest)

Authentication is possible by the verification method, which is explained below −

def verify(message, signature, pub_key):
   signer = PKCS1_v1_5.new(pub_key)
   if (hash == "SHA-512"):
      digest = SHA512.new()
   elif (hash == "SHA-384"):
      digest = SHA384.new()
   elif (hash == "SHA-256"):
      digest = SHA256.new()
   elif (hash == "SHA-1"):
      digest = SHA.new()
   else:
      digest = MD5.new()
   digest.update(message)
   return signer.verify(digest, signature)

The digital signature is verified along with the details of the sender and recipient. This adds more weight for security purposes.
You can use the following code for RSA cipher decryption −

from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP
from Crypto.Signature import PKCS1_v1_5
from Crypto.Hash import SHA512, SHA384, SHA256, SHA, MD5
from Crypto import Random
from base64 import b64encode, b64decode
hash = "SHA-256"

def newkeys(keysize):
   random_generator = Random.new().read
   key = RSA.generate(keysize, random_generator)
   private, public = key, key.publickey()
   return public, private

def importKey(externKey):
   return RSA.importKey(externKey)

def getpublickey(priv_key):
   return priv_key.publickey()

def encrypt(message, pub_key):
   cipher = PKCS1_OAEP.new(pub_key)
   return cipher.encrypt(message)

def decrypt(ciphertext, priv_key):
   cipher = PKCS1_OAEP.new(priv_key)
   return cipher.decrypt(ciphertext)

def sign(message, priv_key, hashAlg = "SHA-256"):
   global hash
   hash = hashAlg
   signer = PKCS1_v1_5.new(priv_key)
   if (hash == "SHA-512"):
      digest = SHA512.new()
   elif (hash == "SHA-384"):
      digest = SHA384.new()
   elif (hash == "SHA-256"):
      digest = SHA256.new()
   elif (hash == "SHA-1"):
      digest = SHA.new()
   else:
      digest = MD5.new()
   digest.update(message)
   return signer.sign(digest)

def verify(message, signature, pub_key):
   signer = PKCS1_v1_5.new(pub_key)
   if (hash == "SHA-512"):
      digest = SHA512.new()
   elif (hash == "SHA-384"):
      digest = SHA384.new()
   elif (hash == "SHA-256"):
      digest = SHA256.new()
   elif (hash == "SHA-1"):
      digest = SHA.new()
   else:
      digest = MD5.new()
   digest.update(message)
   return signer.verify(digest, signature)
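The code above relies on the PyCryptodome API, which handles padding and key generation for you. As a complement, the arithmetic that underlies RSA encryption, decryption, signing and verification can be sketched with the standard library alone, using deliberately tiny primes. This is an illustration of the math only and is completely insecure; the prime pair 61/53 and exponent 17 are textbook values chosen for readability, and the function names are invented for this sketch.

```python
# Toy RSA with tiny primes -- for intuition only, never for real security.
# Requires Python 3.8+ for pow(e, -1, phi) (modular inverse).

def toy_keygen(p, q, e=17):
    """Build a toy RSA key pair from two small primes p and q."""
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)        # Euler's totient of n
    d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi
    return (e, n), (d, n)          # (public key, private key)

def toy_encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)            # c = m^e mod n

def toy_decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)            # m = c^d mod n

# Signing is the same exponentiation with the private key;
# verification undoes it with the public key.
def toy_sign(m, priv):
    d, n = priv
    return pow(m, d, n)

def toy_verify(m, sig, pub):
    e, n = pub
    return pow(sig, e, n) == m

if __name__ == "__main__":
    pub, priv = toy_keygen(61, 53)      # n = 3233, the classic textbook example
    c = toy_encrypt(65, pub)
    print(toy_decrypt(c, priv))         # recovers 65
    sig = toy_sign(42, priv)
    print(toy_verify(42, sig, pub))     # True
```

Real code must use a vetted library scheme such as the PKCS1_OAEP and PKCS1_v1_5 classes shown above: raw exponentiation without padding, as in this toy, is broken against several well-known attacks.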
[ { "code": null, "e": 2458, "s": 2292, "text": "This chapter is a continuation of the previous chapter where we followed step wise implementation of encryption using RSA algorithm and discusses in detail about it." }, { "code": null, "e": 2515, "s": 2458, "text": "The function used to decrypt cipher text is as follows βˆ’" }, { "code": null, "e": 2625, "s": 2515, "text": "def decrypt(ciphertext, priv_key):\n cipher = PKCS1_OAEP.new(priv_key)\n return cipher.decrypt(ciphertext)\n" }, { "code": null, "e": 2777, "s": 2625, "text": "For public key cryptography or asymmetric key cryptography, it is important to maintain two important features namely Authentication and Authorization." }, { "code": null, "e": 2918, "s": 2777, "text": "Authorization is the process to confirm that the sender is the only one who have transmitted the message. The following code explains this βˆ’" }, { "code": null, "e": 3351, "s": 2918, "text": "def sign(message, priv_key, hashAlg=\"SHA-256\"):\n global hash\n hash = hashAlg\n signer = PKCS1_v1_5.new(priv_key)\n \n if (hash == \"SHA-512\"):\n digest = SHA512.new()\n elif (hash == \"SHA-384\"):\n digest = SHA384.new()\n elif (hash == \"SHA-256\"):\n digest = SHA256.new()\n elif (hash == \"SHA-1\"):\n digest = SHA.new()\n else:\n digest = MD5.new()\n digest.update(message)\n return signer.sign(digest)" }, { "code": null, "e": 3431, "s": 3351, "text": "Authentication is possible by verification method which is explained as below βˆ’" }, { "code": null, "e": 3832, "s": 3431, "text": "def verify(message, signature, pub_key):\n signer = PKCS1_v1_5.new(pub_key)\n if (hash == \"SHA-512\"):\n digest = SHA512.new()\n elif (hash == \"SHA-384\"):\n digest = SHA384.new()\n elif (hash == \"SHA-256\"):\n digest = SHA256.new()\n elif (hash == \"SHA-1\"):\n digest = SHA.new()\n else:\n digest = MD5.new()\n digest.update(message)\n return signer.verify(digest, signature)" }, { "code": null, "e": 3963, "s": 3832, "text": "The digital signature is verified 
along with the details of sender and recipient. This adds more weight age for security purposes." }, { "code": null, "e": 4022, "s": 3963, "text": "You can use the following code for RSA cipher decryption βˆ’" }, { "code": null, "e": 5624, "s": 4022, "text": "from Crypto.PublicKey import RSA\nfrom Crypto.Cipher import PKCS1_OAEP\nfrom Crypto.Signature import PKCS1_v1_5\nfrom Crypto.Hash import SHA512, SHA384, SHA256, SHA, MD5\nfrom Crypto import Random\nfrom base64 import b64encode, b64decode\nhash = \"SHA-256\"\n\ndef newkeys(keysize):\n random_generator = Random.new().read\n key = RSA.generate(keysize, random_generator)\n private, public = key, key.publickey()\n return public, private\n\ndef importKey(externKey):\n return RSA.importKey(externKey)\n\ndef getpublickey(priv_key):\n return priv_key.publickey()\n\ndef encrypt(message, pub_key):\n cipher = PKCS1_OAEP.new(pub_key)\n return cipher.encrypt(message)\n\ndef decrypt(ciphertext, priv_key):\n cipher = PKCS1_OAEP.new(priv_key)\n return cipher.decrypt(ciphertext)\n\ndef sign(message, priv_key, hashAlg = \"SHA-256\"):\n global hash\n hash = hashAlg\n signer = PKCS1_v1_5.new(priv_key)\n \n if (hash == \"SHA-512\"):\n digest = SHA512.new()\n elif (hash == \"SHA-384\"):\n digest = SHA384.new()\n elif (hash == \"SHA-256\"):\n digest = SHA256.new()\n elif (hash == \"SHA-1\"):\n digest = SHA.new()\n else:\n digest = MD5.new()\n digest.update(message)\n return signer.sign(digest)\n\ndef verify(message, signature, pub_key):\n signer = PKCS1_v1_5.new(pub_key)\n if (hash == \"SHA-512\"):\n digest = SHA512.new()\n elif (hash == \"SHA-384\"):\n digest = SHA384.new()\n elif (hash == \"SHA-256\"):\n digest = SHA256.new()\n elif (hash == \"SHA-1\"):\n digest = SHA.new()\n else:\n digest = MD5.new()\n digest.update(message)\n return signer.verify(digest, signature)" }, { "code": null, "e": 5657, "s": 5624, "text": "\n 10 Lectures \n 2 hours \n" }, { "code": null, "e": 5673, "s": 5657, "text": " Total Seminars" }, { "code": null, 
"e": 5706, "s": 5673, "text": "\n 10 Lectures \n 2 hours \n" }, { "code": null, "e": 5729, "s": 5706, "text": " Stone River ELearning" }, { "code": null, "e": 5736, "s": 5729, "text": " Print" }, { "code": null, "e": 5747, "s": 5736, "text": " Add Notes" } ]
Executing C# code in Linux
Until now, .NET-centric applications have been meant for the Windows operating system, but Microsoft has introduced a cross-platform project called Mono that enables applications developed on the .NET platform to run in a Linux environment, giving the impression that we are running a native Linux package rather than executing an .exe file.

Mono is an open-source utility that allows the developer to execute .NET-centric applications on other platforms such as Mac or Linux, and it also provides an installation package for the Windows platform to compile and execute .NET assemblies without ever installing the Visual Studio IDE or the .NET Framework SDK. Hence, we can build real-time, production-ready assemblies that use Windows Forms, LINQ, XML web services, ADO.NET and ASP.NET by taking advantage of the existing core CLR namespaces under Mono.

First, download the Mono binaries using the wget utility and execute this series of commands to configure it properly:

wget --no-check-certificate https://raw.github.com/nathanb/iws-snippets/master/mono-install-scripts/ubuntu/install_mono-3.0.sh
chmod 755 install_mono-3.0.sh
./install_mono-3.0.sh

Apart from that, install the mcs package too, to compile the .NET binary, as follows:

root/kali:~/ sudo apt-get install mcs

The infrastructure of a Mono console application is almost identical to a traditional C#.NET console application. To develop a first Mono-based console application (test.cs), open any code editor such as vim and type the following code.

using System;

namespace test {
   class test {
      public static void Main(string[] args) {
         System.Console.WriteLine("C# app Compiled on Kali Linux");
      }
   }
}

Then, open the terminal and run the following commands to compile the code.

root/kali:~/ mcs test.cs
root/kali:~/ ls
test.cs  test.exe

The aforesaid command will generate an executable file, just as on Windows. Now run ./test.exe or mono test.exe to execute the C# binary. The screenshot summarizes everything we have done so far.
[ { "code": null, "e": 1434, "s": 1062, "text": "The .NET centric applications are meant to windows operating system up till now, but now Microsoft has introduced a new cross-platform application called Mono which enables the execution of the application developed under the .NET platform in Linux environment by giving an impression in such a way that as if we are running Linux package rather than executing .exe file." }, { "code": null, "e": 2064, "s": 1434, "text": "Mono is an open-source utility that allows the developer to execute .NET centric applications on other platforms such as Mac or Linux as it provides an installation package for Windows platform to compile and execute .NET assemblies on Windows OS without ever installing the Visual Studio IDE or .NET Framework SDK. Hence, we can build real-time, production-ready assemblies that use Windows Forms, LINQ, XML web services, ADO.NET and ASP.NET by taking advantage of existing core CLR namespaces under Mono. First, download the Mono binaries using the wget utility and execute these series of commands to configure it properly as;" }, { "code": null, "e": 2244, "s": 2064, "text": "wget --no-check-certificate https://raw.github.com/nathanb/iws- snippets/master/mono-install-scripts/ubuntu/install_mono-3.0.sh\nchmod 755 install_mono-3.0.sh\n./install_mono-3.0.sh" }, { "code": null, "e": 2345, "s": 2244, "text": "Apart from that, install the MCS package too alternatively, to compile the .NET binary as following;" }, { "code": null, "e": 2383, "s": 2345, "text": "root/kali:~/ sudo apt-get install mcs" }, { "code": null, "e": 2621, "s": 2383, "text": "The infrastructure of the Mono console application is almost similar to the traditional C#.NET console application. To develop the first Mono based console application (test.cs), open any code editor like VIM and type the following code." 
}, { "code": null, "e": 2803, "s": 2621, "text": "using System;\nnamespace test {\n class test{\n public static void Main(string[] args) {\n System.Console.WriteLine(\"C# app Compiled on Kali Linux\");\n }\n } \n}" }, { "code": null, "e": 2879, "s": 2803, "text": "Then, open the Terminal and hit the following commands to compile the code." }, { "code": null, "e": 2937, "s": 2879, "text": "root/kali:~/ mcs test.cs\nroot/kali:~/ ls\ntest.cs test.exe" }, { "code": null, "e": 3140, "s": 2937, "text": "The aforesaid command will generate an executable file like windows. Now hit the ./test.exe or mono test.exe command to run the C# binary; Here, the screenshot summarized everything we have done so far." } ]
How to handle EOFException in java?
While reading the contents of a file, in certain scenarios the end of the file will be reached; in such scenarios an EOFException is thrown.

In particular, this exception is thrown while reading data using the InputStream objects. In other scenarios, a specific value (for example, -1) is returned when the end of the file is reached.

Let us consider the DataInputStream class; it provides various methods such as readBoolean(), readByte(), readChar() etc. to read primitive values. While reading data from a file using these methods, an EOFException is thrown when the end of the file is reached.

import java.io.DataInputStream;
import java.io.FileInputStream;

public class EOFExample {
   public static void main(String[] args) throws Exception {
      //Reading from the above created file using readChar() method
      DataInputStream dis = new DataInputStream(new FileInputStream("D:\\data.txt"));
      while(true) {
         char ch;
         ch = dis.readChar();
         System.out.print(ch);
      }
   }
}

Hello how are youException in thread "main" java.io.EOFException
   at java.io.DataInputStream.readChar(Unknown Source)
   at SEPTEMBER.remaining.EOFExample.main(EOFExample.java:11)

You cannot read the contents of a file with the DataInputStream class without eventually reaching the end of the file. If you want, you can use other subclasses of the InputStream interface instead.

The following example first writes the sample data to the file using DataOutputStream; a subsequent example then shows how to handle the EOFException explicitly.
import java.io.DataOutputStream;
import java.io.FileOutputStream;

public class AIOBSample {
   public static void main(String[] args) throws Exception {
      //Reading data from user
      byte[] buf = " Hello how are you".getBytes();
      //Writing it to a file using the DataOutputStream
      DataOutputStream dos = new DataOutputStream(new FileOutputStream("D:\\data.txt"));
      for (byte b:buf) {
         dos.writeChar(b);
      }
      dos.flush();
      System.out.println("Data written successfully");
   }
}

Data written successfully

Following is another way to handle EOFException in Java −

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;

public class HandlingEOF {
   public static void main(String[] args) throws Exception {
      DataInputStream dis = new DataInputStream(new FileInputStream("D:\\data.txt"));
      while(true) {
         char ch;
         try {
            ch = dis.readChar();
            System.out.print(ch);
         } catch (EOFException e) {
            System.out.println("");
            System.out.println("End of file reached");
            break;
         } catch (IOException e) {
         }
      }
   }
}

Hello how are you
End of file reached
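The examples above depend on a file at D:\data.txt. The same catch-EOFException pattern can be sketched against an in-memory byte array, which runs anywhere without a file path; the class and method names below are invented for this sketch and are not part of the original article.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

// Catch-EOFException pattern backed by an in-memory byte array
// instead of a file on disk.
public class InMemoryEOFDemo {

   // Reads chars (2 bytes each, as written by writeChar) until EOF.
   public static String readAllChars(byte[] data) throws IOException {
      StringBuilder sb = new StringBuilder();
      try (DataInputStream dis = new DataInputStream(new ByteArrayInputStream(data))) {
         while (true) {
            try {
               sb.append(dis.readChar());
            } catch (EOFException e) {
               break;   // normal termination: no more data to read
            }
         }
      }
      return sb.toString();
   }

   public static void main(String[] args) throws IOException {
      // 'H' = 0x0048, 'i' = 0x0069 in the big-endian encoding used by writeChar
      byte[] data = {0, 'H', 0, 'i'};
      System.out.println(readAllChars(data));   // prints "Hi"
   }
}
```

Treating EOFException as the loop's termination signal, as here, mirrors the HandlingEOF program above; for readers that return -1 at end of stream (such as read()), no exception handling is needed.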
[ { "code": null, "e": 1200, "s": 1062, "text": "While reading the contents of a file in certain scenarios the end of the file will be reached in such scenarios a EOFException is thrown." }, { "code": null, "e": 1369, "s": 1200, "text": "Especially, this exception is thrown while reading data using the Input stream objects. In other scenarios a specific value will be thrown when the end of file reached." }, { "code": null, "e": 1628, "s": 1369, "text": "Let us consider the DataInputStream class, it provides various methods such as readboolean(), readByte(), readChar() etc.. to read primitive values. While reading data from a file using these methods when the end of file is reached an EOFException is thrown." }, { "code": null, "e": 2047, "s": 1628, "text": "import java.io.DataInputStream;\nimport java.io.FileInputStream;\npublic class EOFExample {\n public static void main(String[] args) throws Exception {\n //Reading from the above created file using readChar() method\n DataInputStream dis = new DataInputStream(new FileInputStream(\"D:\\\\data.txt\"));\n while(true) {\n char ch;\n ch = dis.readChar();\n System.out.print(ch);\n }\n }\n}" }, { "code": null, "e": 2229, "s": 2047, "text": "Hello how are youException in thread \"main\" java.io.EOFException\n at java.io.DataInputStream.readChar(Unknown Source)\n at SEPTEMBER.remaining.EOFExample.main(EOFExample.java:11)" }, { "code": null, "e": 2403, "s": 2229, "text": "You cannot read contents of a file without reaching end of the file using the DataInputStream class. If you want, you can use other sub classes of the InputStream interface." }, { "code": null, "e": 2545, "s": 2403, "text": "In the following example we have rewritten the above program using FileInputStream class instead of DataInputStream to read data from a file." 
}, { "code": null, "e": 2556, "s": 2545, "text": " Live Demo" }, { "code": null, "e": 3157, "s": 2556, "text": "import java.io.DataOutputStream;\nimport java.io.File;\nimport java.io.FileInputStream;\nimport java.io.FileOutputStream;\nimport java.util.Scanner;\npublic class AIOBSample {\n public static void main(String[] args) throws Exception {\n //Reading data from user\n byte[] buf = \" Hello how are you\".getBytes();\n //Writing it to a file using the DataOutputStream\n DataOutputStream dos = new DataOutputStream(new FileOutputStream(\"D:\\\\data.txt\"));\n for (byte b:buf) {\n dos.writeChar(b);\n }\n dos.flush();\n System.out.println(\"Data written successfully\");\n }\n}" }, { "code": null, "e": 3183, "s": 3157, "text": "Data written successfully" }, { "code": null, "e": 3241, "s": 3183, "text": "Following is another way to handle EOFException in Java βˆ’" }, { "code": null, "e": 3863, "s": 3241, "text": "import java.io.DataInputStream;\nimport java.io.EOFException;\nimport java.io.FileInputStream;\nimport java.io.IOException;\npublic class HandlingEOF {\n public static void main(String[] args) throws Exception {\n DataInputStream dis = new DataInputStream(new FileInputStream(\"D:\\\\data.txt\"));\n while(true) {\n char ch;\n try {\n ch = dis.readChar();\n System.out.print(ch);\n } catch (EOFException e) {\n System.out.println(\"\");\n System.out.println(\"End of file reached\");\n break;\n } catch (IOException e) {\n }\n }\n }\n}" }, { "code": null, "e": 3901, "s": 3863, "text": "Hello how are you\nEnd of file reached" } ]
TIKA - Extracting HTML Document
Given below is the program to extract content and metadata from an HTML document.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

import org.apache.tika.exception.TikaException;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.parser.html.HtmlParser;
import org.apache.tika.sax.BodyContentHandler;

import org.xml.sax.SAXException;

public class HtmlParse {

   public static void main(final String[] args) throws IOException, SAXException, TikaException {

      //detecting the file type
      BodyContentHandler handler = new BodyContentHandler();
      Metadata metadata = new Metadata();
      FileInputStream inputstream = new FileInputStream(new File("example.html"));
      ParseContext pcontext = new ParseContext();

      //Html parser
      HtmlParser htmlparser = new HtmlParser();
      htmlparser.parse(inputstream, handler, metadata, pcontext);
      System.out.println("Contents of the document:" + handler.toString());
      System.out.println("Metadata of the document:");
      String[] metadataNames = metadata.names();

      for(String name : metadataNames) {
         System.out.println(name + ": " + metadata.get(name));
      }
   }
}

Save the above code as HtmlParse.java, and compile it from the command prompt by using the following commands −

javac HtmlParse.java
java HtmlParse

Given below is the snapshot of the example.html file. The HTML document has the following properties −

If you execute the above program, it will give you the following output.

Output −

Contents of the document:

   Name             Salary   age
   Ramesh Raman     50000    20
   Shabbir Hussein  70000    25
   Umesh Raman      50000    30
   Somesh           50000    35

Metadata of the document:
title: HTML Table Header
Content-Encoding: windows-1252
Content-Type: text/html; charset = windows-1252
dc:title: HTML Table Header
[ { "code": null, "e": 2192, "s": 2110, "text": "Given below is the program to extract content and metadata from an HTML document." }, { "code": null, "e": 3380, "s": 2192, "text": "import java.io.File;\nimport java.io.FileInputStream;\nimport java.io.IOException;\n\nimport org.apache.tika.exception.TikaException;\nimport org.apache.tika.metadata.Metadata;\nimport org.apache.tika.parser.ParseContext;\nimport org.apache.tika.parser.html.HtmlParser;\nimport org.apache.tika.sax.BodyContentHandler;\n\nimport org.xml.sax.SAXException;\n\npublic class HtmlParse {\n\n public static void main(final String[] args) throws IOException,SAXException, TikaException {\n\n //detecting the file type\n BodyContentHandler handler = new BodyContentHandler();\n Metadata metadata = new Metadata();\n FileInputStream inputstream = new FileInputStream(new File(\"example.html\"));\n ParseContext pcontext = new ParseContext();\n \n //Html parser \n HtmlParser htmlparser = new HtmlParser();\n htmlparser.parse(inputstream, handler, metadata,pcontext);\n System.out.println(\"Contents of the document:\" + handler.toString());\n System.out.println(\"Metadata of the document:\");\n String[] metadataNames = metadata.names();\n \n for(String name : metadataNames) {\n System.out.println(name + \": \" + metadata.get(name)); \n }\n }\n}" }, { "code": null, "e": 3492, "s": 3380, "text": "Save the above code as HtmlParse.java, and compile it from the command prompt by using the following commands βˆ’" }, { "code": null, "e": 3530, "s": 3492, "text": "javac HtmlParse.java\njava HtmlParse \n" }, { "code": null, "e": 3579, "s": 3530, "text": "Given below is the snapshot of example.txt file." }, { "code": null, "e": 3627, "s": 3579, "text": "The HTML document has the following propertiesβˆ’" }, { "code": null, "e": 3699, "s": 3627, "text": "If you execute the above program it will give you the following output." 
}, { "code": null, "e": 3708, "s": 3699, "text": "Output βˆ’" }, { "code": null, "e": 4115, "s": 3708, "text": "Contents of the document:\n\n\tName\t Salary\t age\n\tRamesh Raman\t 50000\t 20\n\tShabbir Hussein\t 70000 25\n\tUmesh Raman\t 50000\t 30\n\tSomesh\t 50000\t 35\n\nMetadata of the document:\n\ntitle: HTML Table Header\nContent-Encoding: windows-1252\nContent-Type: text/html; charset = windows-1252\ndc:title: HTML Table Header\n" }, { "code": null, "e": 4122, "s": 4115, "text": " Print" }, { "code": null, "e": 4133, "s": 4122, "text": " Add Notes" } ]
Java - DataInputStream
The DataInputStream is used in the context of a DataOutputStream and can be used to read primitives.

Following is the constructor to create an InputStream −

InputStream in = new DataInputStream(InputStream in);

Once you have a DataInputStream object in hand, there is a list of helper methods which can be used to read the stream or to do other operations on it.

public final int read(byte[] r, int off, int len) throws IOException

Reads up to len bytes of data from the input stream into an array of bytes. Returns the total number of bytes read into the buffer, or -1 at the end of the file.

public final int read(byte[] b) throws IOException

Reads some bytes from the input stream and stores them into the byte array. Returns the total number of bytes read into the buffer, or -1 at the end of the file.

(a) public final boolean readBoolean() throws IOException
(b) public final byte readByte() throws IOException
(c) public final short readShort() throws IOException
(d) public final int readInt() throws IOException

These methods read bytes from the contained InputStream and return the next bytes of the InputStream as the specific primitive type.

public String readLine() throws IOException

Reads the next line of text from the input stream. It reads successive bytes, converting each byte separately into a character, until it encounters a line terminator or end of file; the characters read are then returned as a String.

Following is an example to demonstrate DataInputStream and DataOutputStream. The example writes a string encoded as modified UTF-8 to a file and then reads it back from the same file.
import java.io.*;

public class DataInput_Stream {

   public static void main(String args[]) throws IOException {

      // writing string to a file encoded as modified UTF-8
      DataOutputStream dataOut = new DataOutputStream(new FileOutputStream("E:\\file.txt"));
      dataOut.writeUTF("hello");

      // Reading data from the same file
      DataInputStream dataIn = new DataInputStream(new FileInputStream("E:\\file.txt"));

      while(dataIn.available() > 0) {
         String k = dataIn.readUTF();
         System.out.print(k + " ");
      }
   }
}

Following is the sample run of the above program −

hello
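The method list above describes several primitive readers (readBoolean(), readShort(), readInt()) that the file example does not exercise. The sketch below pairs each writer with its reader in a write-then-read round trip over in-memory streams, so no file path such as E:\file.txt is needed; the class name PrimitiveRoundTrip is invented for this example.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Round trip of primitive values through DataOutputStream/DataInputStream,
// backed by in-memory streams instead of a file.
public class PrimitiveRoundTrip {

   public static String demo() throws IOException {
      ByteArrayOutputStream buf = new ByteArrayOutputStream();
      try (DataOutputStream out = new DataOutputStream(buf)) {
         out.writeBoolean(true);   // 1 byte
         out.writeShort(7);        // 2 bytes, big-endian
         out.writeInt(123456);     // 4 bytes, big-endian
         out.writeUTF("hello");    // length-prefixed modified UTF-8
      }
      try (DataInputStream in = new DataInputStream(
            new ByteArrayInputStream(buf.toByteArray()))) {
         // Values must be read back in exactly the order they were written.
         boolean b = in.readBoolean();
         short s = in.readShort();
         int i = in.readInt();
         String str = in.readUTF();
         return b + " " + s + " " + i + " " + str;
      }
   }

   public static void main(String[] args) throws IOException {
      System.out.println(demo());   // true 7 123456 hello
   }
}
```

Because the stream is untyped bytes, reading in a different order than the writes would silently produce garbage values; keeping writer and reader in lockstep is the essential discipline with this pair of classes.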
[ { "code": null, "e": 2476, "s": 2377, "text": "The DataInputStream is used in the context of DataOutputStream and can be used to read primitives." }, { "code": null, "e": 2532, "s": 2476, "text": "Following is the constructor to create an InputStream βˆ’" }, { "code": null, "e": 2587, "s": 2532, "text": "InputStream in = new DataInputStream(InputStream in);\n" }, { "code": null, "e": 2751, "s": 2587, "text": "Once you have DataInputStream object in hand, then there is a list of helper methods, which can be used to read the stream or to do other operations on the stream." }, { "code": null, "e": 2819, "s": 2751, "text": "public final int read(byte[] r, int off, int len)throws IOException" }, { "code": null, "e": 2985, "s": 2819, "text": "Reads up to len bytes of data from the input stream into an array of bytes. Returns the total number of bytes read into the buffer otherwise -1 if it is end of file." }, { "code": null, "e": 3036, "s": 2985, "text": "Public final int read(byte [] b)throws IOException" }, { "code": null, "e": 3196, "s": 3036, "text": "Reads some bytes from the inputstream an stores in to the byte array. Returns the total number of bytes read into the buffer otherwise -1 if it is end of file." }, { "code": null, "e": 3254, "s": 3196, "text": "(a) public final Boolean readBooolean()throws IOException" }, { "code": null, "e": 3305, "s": 3254, "text": "(b) public final byte readByte()throws IOException" }, { "code": null, "e": 3358, "s": 3305, "text": "(c) public final short readShort()throws IOException" }, { "code": null, "e": 3407, "s": 3358, "text": "(d) public final Int readInt()throws IOException" }, { "code": null, "e": 3551, "s": 3407, "text": "These methods will read the bytes from the contained InputStream. Returns the next two bytes of the InputStream as the specific primitive type." 
}, { "code": null, "e": 3595, "s": 3551, "text": "public String readLine() throws IOException" }, { "code": null, "e": 3828, "s": 3595, "text": "Reads the next line of text from the input stream. It reads successive bytes, converting each byte separately into a character, until it encounters a line terminator or end of file; the characters read are then returned as a String." }, { "code": null, "e": 4056, "s": 3828, "text": "Following is an example to demonstrate DataInputStream and DataOutputStream. This example reads 5 lines given in a file test.txt and converts those lines into capital letters and finally copies them into another file test1.txt." }, { "code": null, "e": 4611, "s": 4056, "text": "import java.io.*;\npublic class DataInput_Stream {\n\n public static void main(String args[])throws IOException {\n\n // writing string to a file encoded as modified UTF-8\n DataOutputStream dataOut = new DataOutputStream(new FileOutputStream(\"E:\\\\file.txt\"));\n dataOut.writeUTF(\"hello\");\n\n // Reading data from the same file\n DataInputStream dataIn = new DataInputStream(new FileInputStream(\"E:\\\\file.txt\"));\n\n while(dataIn.available()>0) {\n String k = dataIn.readUTF();\n System.out.print(k+\" \");\n }\n }\n}" }, { "code": null, "e": 4662, "s": 4611, "text": "Following is the sample run of the above program βˆ’" }, { "code": null, "e": 4669, "s": 4662, "text": "hello\n" }, { "code": null, "e": 4702, "s": 4669, "text": "\n 16 Lectures \n 2 hours \n" }, { "code": null, "e": 4718, "s": 4702, "text": " Malhar Lathkar" }, { "code": null, "e": 4751, "s": 4718, "text": "\n 19 Lectures \n 5 hours \n" }, { "code": null, "e": 4767, "s": 4751, "text": " Malhar Lathkar" }, { "code": null, "e": 4802, "s": 4767, "text": "\n 25 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4816, "s": 4802, "text": " Anadi Sharma" }, { "code": null, "e": 4850, "s": 4816, "text": "\n 126 Lectures \n 7 hours \n" }, { "code": null, "e": 4864, "s": 4850, "text": " Tushar Kale" }, { "code": 
null, "e": 4901, "s": 4864, "text": "\n 119 Lectures \n 17.5 hours \n" }, { "code": null, "e": 4916, "s": 4901, "text": " Monica Mittal" }, { "code": null, "e": 4949, "s": 4916, "text": "\n 76 Lectures \n 7 hours \n" }, { "code": null, "e": 4968, "s": 4949, "text": " Arnab Chakraborty" }, { "code": null, "e": 4975, "s": 4968, "text": " Print" }, { "code": null, "e": 4986, "s": 4975, "text": " Add Notes" } ]
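The primitive-reading methods can also be exercised without touching the file system by pairing DataOutputStream with an in-memory buffer. This is a minimal sketch (the class name PrimitiveRoundTrip is illustrative, not part of the tutorial above) that writes several primitives and reads them back in the same order they were written − the order must match, since the stream carries no type information:

```java
import java.io.*;

public class PrimitiveRoundTrip {
   public static void main(String[] args) throws IOException {
      // Write primitives to an in-memory buffer instead of a file
      ByteArrayOutputStream buffer = new ByteArrayOutputStream();
      try (DataOutputStream out = new DataOutputStream(buffer)) {
         out.writeInt(42);
         out.writeDouble(3.14);
         out.writeBoolean(true);
         out.writeUTF("hello");
      }

      // Read them back in exactly the same order they were written
      try (DataInputStream in = new DataInputStream(
            new ByteArrayInputStream(buffer.toByteArray()))) {
         System.out.println(in.readInt());      // 42
         System.out.println(in.readDouble());   // 3.14
         System.out.println(in.readBoolean());  // true
         System.out.println(in.readUTF());      // hello
      }
   }
}
```

The try-with-resources blocks close each stream automatically, which is a safer variant of the explicit close() calls used in the file example.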
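The return-value contract of read(byte[] r, int off, int len) − the count of bytes actually read, and -1 once the end of the stream is reached − can be seen with a short sketch (the class name ReadBytesDemo and the sample data are illustrative assumptions):

```java
import java.io.*;

public class ReadBytesDemo {
   public static void main(String[] args) throws IOException {
      byte[] data = "abcdef".getBytes("UTF-8");
      DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));

      byte[] buf = new byte[8];
      // Ask for up to 4 bytes, stored into buf starting at offset 2
      int n = in.read(buf, 2, 4);
      System.out.println(n);                           // 4
      System.out.println(new String(buf, 2, n, "UTF-8")); // abcd

      // Drain the remaining bytes; read() returns -1 at end of stream
      while (in.read(buf, 0, buf.length) != -1) { }
      System.out.println(in.read(buf, 0, buf.length)); // -1
      in.close();
   }
}
```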